---
abstract: 'In the course of conducting a deep ($14.5 \la r \la 23$), 20 night survey for transiting planets in the rich $\sim 550$ Myr old open cluster M37 we have measured the rotation periods of 575 stars which lie near the cluster main sequence, with masses $0.2 M_{\odot} \lesssim M \lesssim 1.3 M_{\odot}$. This is the largest sample of rotation periods for a cluster older than $500~{\rm Myr}$. Using this rich sample we investigate a number of relations between rotation period, color and the amplitude of photometric variability. Stars with $M \ga 0.8 M_{\odot}$ show a tight correlation between period and mass with heavier stars rotating more rapidly. There is a group of 4 stars with $P > 15~{\rm days}$ that fall well above this relation, which, if real, would present a significant challenge to theories of stellar angular momentum evolution. Below $0.8 M_{\odot}$ the stars continue to follow the period-mass correlation but with a broad tail of rapid rotators that expands to shorter periods with decreasing mass. We combine these results with observations of other open clusters to test the standard theory of lower-main sequence stellar angular momentum evolution. We find that the model reproduces the observations for solar mass stars, but discrepancies are apparent for stars with $0.6 \la M \la 1.0 M_{\odot}$. We also find that for late-K through early-M dwarf stars in this cluster rapid rotators tend to be bluer than slow rotators in $B-V$ but redder than slow rotators in $V-I_{C}$. This result supports the hypothesis that the significant discrepancy between the observed and predicted temperatures and radii of low-mass main sequence stars is due to stellar activity.'
author:
- 'J. D. Hartman, B. S. Gaudi, M. H. Pinsonneault, K. Z. Stanek, M. J. Holman, B. A. McLeod, S. Meibom, J. A. Barranco, and J. S. Kalirai'
title: 'Deep MMT[^1] Transit Survey of the Open Cluster M37 III: Stellar Rotation at 550 Myr'
---
Introduction
============
This paper is the third in a series of papers on a deep survey for transiting planets in the open cluster M37 (NGC 2099) using the MMT. In the first paper [@Hartman.07a Paper I] we introduced the survey, described the spectroscopic and photometric observations and the data reduction, determined the fundamental parameters (age, metallicity, distance and reddening) of the cluster, and obtained its mass and luminosity functions and radial density profile. In the second paper [@Hartman.07b Paper II] we identified 1445 variable stars in the field of the cluster, of which 99$\%$ were new discoveries. We found that $\ga 500$ of these variables lie near the cluster main sequence on photometric color-magnitude diagrams (CMDs) and show a correlation between period and magnitude. In this paper we investigate this population of variables arguing that the photometric variations are due to spotted star rotation. We use these variables to study the rotation of F-M main sequence stars at the age of the cluster ($550 \pm 30$ Myr, Paper I; note that for the rest of this paper we adopt this age which was derived by comparison to models with convective overshooting).
Rotation plays an important role in the life of a star. Empirically there is a strong relation between the rotation rate and the activity and age of low-mass stars [@Skumanich.72], with older stars being slower rotators and less active than younger stars. Rotation affects the structure of a star [see @Sills.00] and its evolution may have important consequences for mixing at the base of a star’s convection zone [see the review by @Pinsonneault.97]. Rotation also affects the spectral energy distribution of low-mass stars [@Stauffer.03]. It has even been suggested that rotation may be used as a tool to determine the ages of field stars older than a few hundred million years, and that ages determined in this fashion may be significantly more accurate than ages determined using other methods [@Barnes.07]. However, the usefulness of these “gyrochronology” ages is predicated on an accurate understanding and calibration of the age-rotation rate relation. As we discuss next, there are relatively few constraints on this relation for ages $\ga 200~{\rm Myr}$.
The surface rotation period of a star can be measured directly from photometric variations if it has substantial surface brightness inhomogeneities, from variations in the strength of spectroscopic features such as the cores of the Ca II H and K lines, or it can be constrained by measuring the projected equatorial rotation velocity $v \sin i$ from the Doppler broadening of spectral absorption lines. The latter method suffers from the inclination axis ambiguity and thus requires a substantial ensemble of stars, as well as an assumption on the inclination axis distribution, to determine the underlying velocity, and hence rotation period, distribution. While it is more desirable for the study of stellar rotation to directly measure the rotation period of a star than $v \sin i$, a consequence of the activity-age relation is that older stars show lower amplitude photometric variations. For example, at $100~{\rm Myr}$ a typical solar-like star will have a photometric amplitude of $\sim 7\%$ in $r$, at $500~{\rm Myr}$ its photometric amplitude will have fallen to $\sim 1\%$, and at $2~{\rm Gyr}$ the typical amplitude is only $\sim 0.1\%$. Moreover, longer rotation periods are more difficult to measure as they require observations spanning a longer baseline. As a result it becomes increasingly difficult to directly measure the rotation periods of stars at greater ages. Our understanding of the angular momentum evolution of stars older than a few hundred million years has been gleaned primarily from $v \sin i$ measurements (See for example the review by @Bouvier.97).
Most photometric studies of stellar rotation have focused on young ($< 100$ Myr) open clusters whose stars show relatively large amplitude photometric variations (see @Stassun.03 for a review). These studies have given great insight into the evolution of angular momentum for pre-main sequence (PMS) stars. As a star contracts onto the main sequence the expectation is that it will spin up. Observations of the youngest clusters, however, have revealed that not all PMS stars achieve the short rotation periods expected from this spin-up (see for example @Herbst.02 for observations of the $\sim 1$ Myr Orion Nebula Cluster, @Cieza.06 for observations of $\sim 2-3$ Myr IC 348, @Lamm.05 for observations of $\sim 2-4$ Myr NGC 2264, @Irwin.08b for observations of $\sim 5$ Myr NGC 2362, and @Irwin.08a for observations of $\sim 40$ Myr NGC 2547). Possible explanations for the presence of slow rotators in these clusters include magnetic locking of the star’s rotation to the inner accretion disk [@Konigl.91; @Shu.94] or the presence of an accretion-driven wind which carries angular momentum away from the star [@Shu.00].
Photometric determinations of rotation periods have been obtained for stars in a number of clusters in the age range $50-200$ Myr; at these ages solar-like stars have settled on to the main sequence. Clusters that have been studied include IC 2391 [$\sim 50~{\rm Myr}$; @Patten.96], IC 2602 [$\sim 50~{\rm Myr}$; @Barnes.99], $\alpha$ Persei [$\sim 80~{\rm Myr}$; @Stauffer.85; @Stauffer.89; @Prosser.91a; @Prosser.93a; @Prosser.93b; @ODell.93; @ODell.94; @Prosser.95; @ODell.96; @Allain.96; @Martin.97; @Prosser.98a; @Prosser.98b; @Barnes.98a], the Pleiades [$\sim 125~{\rm Myr}$; @Magnitskii.87; @Stauffer.87a; @VanLeeuwen.87; @Prosser.93a; @Prosser.93b; @Prosser.95; @Krishnamurthi.98; @Terndrup.99; @Scholz.04], NGC 2516 [$\sim 150~{\rm Myr}$; @Irwin.07], and M34 [$\sim 200~{\rm Myr}$; @Barnes.03; @Irwin.06].
The data for older clusters are scarcer. Periods have been determined for 87 stars in NGC 3532 [$\sim 300~{\rm Myr}$; @Barnes.98b], 4 stars in Coma [$\sim 600~{\rm Myr}$; @Radick.90], 35 stars in the Hyades [$\sim 625~{\rm Myr}$; @Radick.87; @Radick.95; @Prosser.95], and 5 stars in Praesepe [$\sim 625~{\rm Myr}$; @Scholz.07].
The observations discussed above have clearly demonstrated that once a low-mass star (G and later) reaches the main sequence it begins to lose angular momentum; this is understood to be the result of a magnetized stellar wind [@Webber.67]. By comparing the rotation velocities of stars in the Pleiades and the Hyades with the Sun, @Skumanich.72 found the scaling relation $v \propto t^{-1/2}$, where $v$ is the average equatorial velocity and $t$ is the age. This can be explained by a surface angular momentum loss rate that is proportional to $\omega^3$ [@Kawaler.88] and naturally leads to a convergence in the rotation rates of stars at a given mass as seen by @Radick.87 for stars in the Hyades. The presence of solar-mass rapid rotators in the Pleiades [@VanLeeuwen.82] appears to contradict the @Skumanich.72 law and suggests that the angular momentum loss rate is saturated above some critical rotation rate $\omega_{crit}$ so that it scales as $\omega_{crit}^2\omega$ for $\omega > \omega_{crit}$ [@Kawaler.88]. The critical rotation rate must be mass dependent to explain the spread in rotation rates for lower mass stars [@Krishnamurthi.97].
The rotation history of a star will depend not only on the rate of angular momentum loss, but also on the efficiency of internal angular momentum transport. Models in which the core and envelope of a star are decoupled lead to a rapid spin-down of the envelope [@Soderblom.93]. In these models once the core and envelope become re-coupled the spin-down is much less dramatic. Models in which the internal angular momentum transport is calculated by hydrodynamic mechanisms, and in which some degree of core-envelope decoupling is permitted have been produced (e.g. @Sills.00 show that models that incorporate differential rotation with depth in stars are needed to reproduce the angular momentum evolution of systems younger than the Pleiades). Alternatively, models in which the star is assumed to rotate as a solid body have also been developed (e.g. @Bouvier.97b).
This paper presents rotation periods for 575 stars in the open cluster M37 (NGC 2099). In Paper I we found that the cluster has an age of $550 \pm 30$ Myr, a reddening of $E(B-V) = 0.227 \pm 0.038$, a distance modulus of $(m-M)_{V} = 11.57 \pm 0.13$ and a metallicity of $[M/H] = +0.045 \pm 0.044$ which are in good agreement with previous measurements. The age of M37 is thus comparable to that of the Hyades [625 Myr with overshooting; @Perryman.98] which at present is the oldest cluster for which a significant number of stellar rotation periods have been measured. M37 is, however, substantially richer than the Hyades and thus has the potential to provide a much larger data-set of rotation periods for older stars.
As we were preparing to submit this manuscript, we became aware of a similar, independent study by @Messina.08, which presents rotation periods for 122 stars in this cluster. Our survey goes more than 2 magnitudes deeper than @Messina.08 with more than an order of magnitude more observations from more than twice as many nights. As a result we are able to study the rotation evolution for late K and early M dwarfs as well as the late F, G and early K dwarfs studied by @Messina.08. On the other hand, the @Messina.08 survey has measured periods for early F stars that are saturated in our survey. We compare the periods measured by both surveys in §3.6.
In the next section we provide a brief summary of the observations and data reduction. In §3 we compile the catalog of candidate cluster members with measured rotation periods. In §4 we fit analytic models to the observed period-color sequence in M37. In §5 we compare the photometric observations to spectroscopic $v \sin i$ measurements for a number of these stars. In §6 we study the amplitude distribution as a function of period and color. In §7 we study the Blue K dwarf phenomenon in M37. In §8 we compare these observations to theories of stellar angular momentum evolution. We discuss the results in §9.
Summary of Observations
=======================
The observations and data reduction procedure were described in detail in Papers I and II; we provide a brief overview here. The observations consist of both $gri$ photometry for $\sim 16000$ stars, and $r$ time-series photometry for $\sim 23000$ stars obtained with the Megacam mosaic imager [@McLeod.00] on the 6.5 m MMT. We also obtained high-resolution spectroscopy of 127 stars using the Hectochelle multi-fiber, high-dispersion spectrograph [@Szentgyorgyi.98] on the MMT.
The primary time-series photometric observations were done using the $r$ filter and consist of $\sim 4000$ high quality images obtained over twenty-four nights (including eight half nights) between December 21, 2005 and January 21, 2006. We obtained light curves for stars with $14.5 \la r \la 23$ using a reduction pipeline based on the image subtraction technique and software due to @Alard.98 and @Alard.00. We apply two cleaning routines to the data: clipping outlier points and removing individual bad images. We do not decorrelate against other systematic variations since doing so tends to distort the light curves of large amplitude variables.
The spectra were obtained on four separate nights between February 23, 2007 and March 12, 2007 and were used to measure $T_{eff}$, $[Fe/H]$, $v\sin i$ and the radial velocity (RV) via cross-correlation against a grid of model stellar spectra computed using ATLAS 9 and SYNTHE [@Kurucz.93]. The classification procedure was developed by Meibom et al. (2008, in preparation), and made use of the *xcsao* routine in the [<span style="font-variant:small-caps;">Iraf</span>]{}[^2] *rvsao* package [@Kurtz.98] to perform the cross-correlation. In performing the cross-correlation we found that it was necessary to fix $\log(g) = 4.5$; however, given that very few of the field stars in our sample are likely to be giants or sub-giants, fixing $\log(g)$ should not substantially bias the resulting parameters. We measured the parameters separately for each of the four nights, choosing the $v\sin i$ and $T_{eff}$ values that maximize the cross-correlation peak-height value at three $[Fe/H]$ grid points ($[Fe/H] = -0.5$, $[Fe/H] = 0.0$, and $[Fe/H] = +0.5$). For each $[Fe/H]$ grid point we determined the average $v\sin i$ and $T_{eff}$ values over the four nights together with uncertainties on the values for each night estimated using the standard deviation of the measurements. We then fit a quadratic relation between $[Fe/H]$ and the average peak-height values to estimate the value of $[Fe/H]$ that maximizes the cross-correlation and then determined $v\sin i$ and $T_{eff}$ by fitting a quadratic relation between each of these parameters and $[Fe/H]$. The final errors on each parameter are the standard errors from the fit. For a given star the error on $v\sin i$ can be substantially larger than the average value, particularly for fainter targets, when the value of $[Fe/H]$ is not strongly constrained or when $v\sin i$ is small; in these cases the error should be taken as an upper limit on the value. To determine the $RV$ for each star we used *rvsao* to cross-correlate the spectra on each night against the best matching template spectrum from the full grid.
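The interpolation step described above can be sketched as follows for a single star, assuming the nightly averages of the peak height, $T_{eff}$ and $v\sin i$ at the three $[Fe/H]$ grid points are already in hand (the numerical values below are purely illustrative, not measurements from our catalog):

```python
import numpy as np

# Hypothetical per-star inputs: values averaged over the four nights at the
# three [Fe/H] grid points used in the cross-correlation (-0.5, 0.0, +0.5).
feh_grid = np.array([-0.5, 0.0, +0.5])
peak_height = np.array([0.61, 0.68, 0.64])  # cross-correlation peak heights (illustrative)
teff = np.array([5650.0, 5720.0, 5780.0])   # K, best-fit T_eff at each grid point
vsini = np.array([9.8, 10.4, 11.1])         # km/s, best-fit v sin i at each grid point

# Fit a parabola to peak height vs. [Fe/H] and take its maximum as the adopted
# metallicity (clipped to the grid range for safety).
c2, c1, c0 = np.polyfit(feh_grid, peak_height, 2)
feh_best = np.clip(-c1 / (2.0 * c2), feh_grid.min(), feh_grid.max())

# Interpolate T_eff and v sin i to feh_best with quadratic fits in [Fe/H].
teff_best = np.polyval(np.polyfit(feh_grid, teff, 2), feh_best)
vsini_best = np.polyval(np.polyfit(feh_grid, vsini, 2), feh_best)

print(feh_best, teff_best, vsini_best)
```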
As described in Paper I we also take $BV$ photometry for stars in the field of this cluster from @Kalirai.01, $K_{S}$ photometry from 2MASS [@Skrutskie.06] and we transform our $ri$ photometry to $I_{C}$ using a transformation based on the $I_{C}$ photometry from @Nilakshi.02.
Catalog of Candidate Cluster Members with Measured Rotation Periods
===================================================================
Selection of Rotational Variables {#sec:rotsel}
---------------------------------
In Paper II we selected 1445 periodic variable stars using the Lomb-Scargle [L-S; @Lomb.76; @Scargle.82; @Press.89; @Press.92], phase-binning analysis of variance [AoV; @Schwarzenberg-Czerny.89; @Devor.05] and box-fitting least squares [BLS; @Kovacs.02] algorithms. From this catalog we now select a population of rotational variables that are probable cluster members. To do this we follow the procedure for selecting candidate photometric cluster members described in Paper I. For each star we determine the $g,r,i,B,V$ point within the fiducial cluster main sequences, generated by eye, that has a minimum $\chi^{2}$ deviation from the observed $g,r,i,B,V$ values for the star. We then select as candidate cluster members stars with $\chi^{2} < 150$ in $g,r,i$ and $\chi^{2} < 250$ in $B,V$ for $B-V < 1.38$ and $V < 20$ or $\chi^{2} < 150$ in $B,V$ for other stars. Figure \[fig:VarSelect\] shows the selected variables on $g-r$ and $g-i$ CMDs. After rejecting variables that were classified in Paper II as eclipsing binaries or short period pulsators and rejecting variables without period determinations we are left with 575 variables that are candidate cluster members. A catalog of these 575 variables is given in tables \[tab:M37\_rotation1\]-\[tab:M37\_rotation3\]; we show only the first ten rows of each table for illustration, while the full tables are available in the online edition of the journal. Note that only stars with spectroscopy are included in table \[tab:M37\_rotation3\].
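A schematic version of this $\chi^{2}$ membership cut, under one reading of the thresholds quoted above and with illustrative function and variable names of our own, is:

```python
import numpy as np

def min_chi2_to_sequence(obs, sig, fiducial):
    """Minimum chi^2 distance of one star from a fiducial main sequence.

    obs, sig : observed magnitudes and their uncertainties, shape (nband,)
    fiducial : fiducial sequence sampled at many points, shape (npts, nband)
    """
    chi2 = np.sum(((fiducial - obs) / sig) ** 2, axis=1)
    return chi2.min()

def is_candidate_member(gri, sig_gri, bv, sig_bv, fid_gri, fid_bv, b_minus_v, vmag):
    """Apply the photometric membership thresholds quoted in the text
    (one possible reading of how the g,r,i and B,V cuts are combined)."""
    chi2_gri = min_chi2_to_sequence(gri, sig_gri, fid_gri)
    chi2_bv = min_chi2_to_sequence(bv, sig_bv, fid_bv)
    if b_minus_v < 1.38 and vmag < 20.0:
        return chi2_gri < 150.0 and chi2_bv < 250.0
    return chi2_gri < 150.0 and chi2_bv < 150.0
```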
Revised Periods
---------------
In Paper II we calculated the periods of the variable stars using the L-S, phase-binning AoV, and BLS algorithms. We then selected, by eye, the most likely period for each star from the values returned by the three algorithms. Here we recalculate the periods using the multi-harmonic AoV algorithm due to @Schwarzenberg-Czerny.96. This method is equivalent to fitting to each light curve a harmonic series of the form:
$$\tilde{r}(t) = a_{0} + \sum_{i=1}^{N}a_{i}\cos(2\pi i t / P + \phi_{i})
\label{eqn:fourier}$$
where $P$ is the period, $a_{i}$ are the amplitudes, and $\phi_{i}$ are the phases. This method is more general than the L-S algorithm which is equivalent to fitting a $N=1$ series without a floating mean, and may give a more accurate period determination for light curves that have multiple minima in a single cycle. We calculate the period for each light curve for $N = 1, 2$ and $3$. Figure \[fig:BVPeriod\_multiharm\] shows the period-$(B-V)_{0}$ relation for each value of $N$ while figure \[fig:AverPeriod\_multiharm\] shows the period-$<\!r\!>$ relation. Here $<\!r\!>$ is the average $r$ magnitude of the light curve. The magnitudes and colors have been converted to masses using the mass-$r$ and mass-$(B-V)_{0}$ relations for this cluster that were determined in Paper I.
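Because equation \[eqn:fourier\] becomes linear in amplitude-like coefficients once each term is rewritten with separate cosine and sine components, the fit at a fixed trial period reduces to linear least squares. The sketch below illustrates this equivalent least-squares view (our own illustration, not the multi-harmonic AoV implementation actually used):

```python
import numpy as np

def fit_harmonic_series(t, r, period, nharm=2):
    """Least-squares fit of r(t) = a0 + sum_i [b_i cos(2 pi i t/P) + c_i sin(2 pi i t/P)],
    equivalent to the (a_i, phi_i) form of eq. [eqn:fourier] at a fixed trial period."""
    phase = 2.0 * np.pi * t / period
    cols = [np.ones_like(t)]
    for i in range(1, nharm + 1):
        cols.append(np.cos(i * phase))
        cols.append(np.sin(i * phase))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    model = A @ coeffs
    # Convert the (b_i, c_i) pairs back to the amplitudes and phases of eq. [eqn:fourier].
    amps = np.hypot(coeffs[1::2], coeffs[2::2])
    phases = np.arctan2(-coeffs[2::2], coeffs[1::2])
    return coeffs, amps, phases, model

# A period search would evaluate the residual variance of this fit on a grid of
# trial periods and keep the period that minimizes it.
```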
As seen in figures \[fig:BVPeriod\_multiharm\] and \[fig:AverPeriod\_multiharm\], there is a clear period-stellar mass sequence running from $P \sim 3~{\rm days}$, $M \sim 1.2 M_{\odot}$ to $P \sim 17~{\rm days}$, $M \sim 0.5 M_{\odot}$. There also appears to be a significant number of stars with periods falling below the sequence for $M < 0.8 M_{\odot}$ and a cluster of stars with $15~{\rm days} < P < 20~{\rm days}$ and $0.7 M_{\odot} < M < 1.1 M_{\odot}$. For $N = 1$ there appears to be a second sequence with periods that are half the main-band values, while $N = 2$ and $N = 3$ yield a sequence with periods that are twice the main-band values. Light curves that fall in the long period sequence for $N = 2$ and $N = 3$ show multiple minima in a cycle when phased at the long period. Changing the number of harmonics in the fit can select different harmonics of the true period. Light curves with multiple, unequal, minima in a cycle or light curves that are not perfectly periodic (e.g. due to spot evolution, or uncorrected systematic errors in the photometry) are particularly susceptible to choosing an incorrect harmonic of the true period. For the remainder of the paper it is important to have a sample of stars with unambiguous periods. We therefore select a clean sample of 372 stars for which periods determined with $N=1,2$ and $3$ do not differ by more than $10\%$. For this sample we adopt the $N=2$ periods. As seen in figures \[fig:BVPeriod\_multiharm\] and \[fig:AverPeriod\_multiharm\], the half-period harmonic sequence is removed from the clean sample. We note that a handful of long-period variables remain in the clean sample; we discuss these further in §9. Tables \[tab:M37\_rotation1\] and \[tab:M37\_rotation2\] include the full sample of stars; the $N = 1$, $2$ and $3$ periods are provided in table \[tab:M37\_rotation1\].
In figure \[fig:MainBandLC\] we show phased light curves for a random sample of stars that fall along the main period-color band (see §4). Figure \[fig:RapidRotLC\] shows phased light curves for a random sample of rapid rotators ($P < 2~{\rm days}$). Finally, in figure \[fig:LongPeriodRotsLCS\] we show the light curves of 4 of the 5 long-period stars in the clean sample with $P > 15~{\rm days}$ and $r < 18.5~{\rm mag}$. Note that the fifth star is rejected as a non-cluster member based on its RV.
Period and Color Uncertainties
------------------------------
As we will discuss in §8.2, the spread in rotation periods for stars of a given mass puts a powerful constraint on theories of stellar angular momentum evolution. To determine whether or not the observed period spread is real it is important to have accurate estimates for both the period and color uncertainties.
### Period Uncertainties {#sec:perunc}
Uncertainties in the period can be divided into two classes: errors due to aliasing or choosing the wrong harmonic of the true period and random errors. Aliasing generally yields a discrete set of peaks in the periodogram of a light curve resulting in an uncertainty in choosing the peak that corresponds to the physical period of the star. Random errors correspond to the uncertainty in the centroid of each periodogram peak; these errors are typically assumed to be Gaussian. There are at least four factors which contribute to random uncertainties in the stellar rotation period inferred from starspots. These factors include:
1. Noise in the photometry.
2. Inadequacies in the model used to determine the period (e.g. the light curve is periodic but not sinusoidal).
3. Spot evolution.
4. Differential rotation.
In Paper II we conducted bootstrap simulations of the period detection for each star. This effectively determines the contribution of photometric noise to the period uncertainty. We redo these bootstrap simulations here for the multi-harmonic AoV period determinations.
To assess the uncertainty due to inadequacies in the model and due to spot evolution we conduct Monte Carlo simulations. We simulate light curves for 1000 spotted stars using the spot model due to @Dorren.87. For each simulation we place three spots with random angular sizes between $0.05$ and $0.5$ radians, random latitudes and random longitudes on the surface of a star, assign the star a random rotation period between $0.2$ and $20~{\rm days}$, and allow the spots to vary sinusoidally in angular size. The amplitude, phase and period of the variation are chosen randomly for each spot. We limit the period for spot size variation to lie between 5 and 20 times the rotation period. We generate light curves for each simulated star using the same time sampling as our observations. We then measure the period of each simulated light curve using the multi-harmonic AoV algorithm, rejecting light curves for which $N=1,2$ and $3$ do not return the same period within $10\%$. Figure \[fig:spotsimulation\] shows an example of a simulated light curve and the detected period as a function of the injected period for all simulations. We find the RMS difference between the injected and recovered periods is $\sim 5\%$. We also find that the $N=2$ period has the lowest RMS deviation from the injected rotation period. Note that this estimate for the error is conservative since most of the stars that we observed do not show as much spot evolution as our models do.
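To illustrate the kind of signal being simulated, the sketch below generates a light curve for a star with three sinusoidally evolving dark spots using a deliberately simplified geometric model (equator-on star, small-spot approximation, no limb darkening); it is a toy stand-in for the full @Dorren.87 spot model used in the simulations described above, and the time sampling is likewise only a stand-in for the real observations.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_spot_lightcurve(t, period, n_spots=3, contrast=0.3):
    """Relative flux of a star with circular dark spots (small-spot limit,
    i = 90 deg, no limb darkening). Spot angular radii vary sinusoidally in time."""
    flux = np.ones_like(t)
    for _ in range(n_spots):
        rho0 = rng.uniform(0.05, 0.5)           # mean angular radius [rad]
        lat = np.arcsin(rng.uniform(-1, 1))     # latitude, uniform in sin(beta)
        lon0 = rng.uniform(0, 2 * np.pi)        # initial longitude
        p_evol = period * rng.uniform(5, 20)    # spot-evolution period, 5-20 P_rot
        amp = rng.uniform(0, 0.5) * rho0        # amplitude of the size variation
        phi_evol = rng.uniform(0, 2 * np.pi)
        rho = rho0 + amp * np.sin(2 * np.pi * t / p_evol + phi_evol)
        # Foreshortening factor of the spot centre for an equator-on star.
        mu = np.cos(lat) * np.cos(lon0 + 2 * np.pi * t / period)
        flux -= contrast * rho**2 * np.clip(mu, 0.0, None)
    return flux

t = np.sort(rng.uniform(0.0, 24.0, 4000))       # days, stand-in time sampling
lc = toy_spot_lightcurve(t, period=rng.uniform(0.2, 20.0))
```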
The rotation periods measured for an ensemble of stars with spots located at different latitudes may exhibit scatter even if all stars of a given mass have the same equatorial rotation period. The Sun exhibits differential rotation that can be modeled as $$P_{\beta} = P_{EQ}/(1 - k \sin^{2}\beta)$$ where $P_{\beta}$ is the rotation period at latitude $\beta$ and $P_{EQ}$ is the equatorial rotation period. Observations of spots on the Sun yield $k = 0.19$ while surface radial velocity measurements give $k = 0.12$ [@Kitchatinov.05]. Theoretical simulations of turbulent convection predict that $k$ should decrease with increasing rotation rate [@Brown.04], this has been confirmed by observations of starspots on $\kappa^{1}$ Ceti by the MOST satellite which yield $k = 0.09$ for this solar-like star rotating with $P = 8.77~{\rm days}$ [@Walker.07]. While sunspots are rarely seen with $|\beta| > 30^{\circ}$, there are indications that younger stars may have spots at any latitude. For example, the aforementioned observations of $\kappa^{1}$ Ceti found 7 spots over the range $10^{\circ} < |\beta| < 80^{\circ}$. Assuming spots may be uniformly distributed over the surface of a star (i.e. uniformly distributed in $\sin\beta$), a value of $k = 0.09$ will yield an RMS spread in detected periods of $\sim 3\%$. We adopt this as an estimate for the expected contribution of differential rotation to the period uncertainty for stars in M37.
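This estimate can be checked with a short Monte Carlo under the same assumptions (spots uniformly distributed in $\sin\beta$ and $k = 0.09$); the sketch below returns a fractional spread close to the $\sim 3\%$ adopted above.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 0.09                                    # differential rotation parameter
sin_beta = rng.uniform(-1.0, 1.0, 100000)   # spot latitudes uniform in sin(beta)
p_ratio = 1.0 / (1.0 - k * sin_beta**2)     # P_beta / P_EQ

# Fractional RMS spread of the detected periods about their mean.
print(np.std(p_ratio) / np.mean(p_ratio))
```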
### Color Uncertainties
There are several effects that contribute to the uncertainty in the color of a star including uncertainties in the photometric precision, photometric variability and binarity. All of these effects can cause the measured color to differ from the photosphere color of the star (or the star with the measured rotation period in the case of a multiple star system). The net uncertainty from all of these effects can be determined empirically from the spread of the main sequence on a CMD. Following the procedure described in §7, we draw a fiducial main sequence by eye through the rotational variables plotted on $B-V$, $V-I_{C}$, $g-r$ and $g-i$ CMDs. We then calculate for each variable the difference between its observed color and the color along the fiducial main sequence interpolated at the $V$ magnitude of the star. The data are binned in magnitude and we calculate the RMS of the color residuals for each bin. The resulting uncertainties in $(B-V)_{0}$, $(V-I_{C})_{0}$, $g-r$ and $g-i$ as a function of absolute magnitude are listed in table \[tab:coloruncertainties\]. The drop in the RMS of the color residuals at $M_{V} = 11$ for $(V-I_{C})_{0}$, $g-r$ and $g-i$ is due to incompleteness in the selection of probable members at this magnitude.
Completeness {#sec:completeness}
------------
Incompleteness in the selection of variable stars can bias the observed period-color and period-amplitude distributions. To assess the completeness of our sample of rotation periods we conduct Monte Carlo simulations of the variable star selection process. We inject sinusoidal variations, and spot signals (generated using the model described in § \[sec:perunc\]) into the light curves of 1081 stars that pass the cuts used to select candidate cluster members discussed in § \[sec:rotsel\] but were not selected as variable stars. For each star we conduct 1000 simulations. For the sinusoid models we inject signals with random phases, periods uniformly distributed in logarithm between $0.1$ and $20.0~{\rm days}$ and semi-amplitudes uniformly distributed in logarithm between $1~{\rm mmag}$ and $0.1~{\rm mag}$. We attempt to recover the injected signals using the L-S algorithm in combination with the multi-harmonic AoV algorithm. An injected signal is considered to be recovered if it does not have a period falling within one of the rejected period bins discussed in Paper II, has a formal L-S false alarm probability logarithm that is less than $-150$ and if the N=1, 2 and 3 periods returned by AoV agree with one another to within $10\%$. To make the problem computationally feasible we use a lower period resolution in the L-S algorithm than what we used to select the variables in Paper II (we sample at $0.1$ times the Nyquist frequency rather than $0.005$ times the Nyquist frequency).
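A stripped-down version of one injection-recovery trial is sketched below. It replaces the L-S false-alarm-probability cut and the multi-harmonic AoV agreement criterion with a plain least-squares periodogram and a simple 10% period-agreement check, and it uses simulated Gaussian noise in place of a real non-variable light curve; it is meant only to illustrate the structure of the simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def best_period(t, y, trial_periods, nharm=2):
    """Trial period that minimizes the residual variance of a harmonic-series fit
    (a stand-in for the L-S + multi-harmonic AoV search used in the text)."""
    best_p, best_var = trial_periods[0], np.inf
    for p in trial_periods:
        ph = 2.0 * np.pi * t / p
        cols = [np.ones_like(t)]
        for i in range(1, nharm + 1):
            cols += [np.cos(i * ph), np.sin(i * ph)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        var = np.var(y - A @ coef)
        if var < best_var:
            best_p, best_var = p, var
    return best_p

# One injection-recovery trial: a sinusoid with random phase, log-uniform period
# (0.1-20 d) and log-uniform semi-amplitude (1 mmag-0.1 mag) added to a
# stand-in non-variable light curve (pure Gaussian noise here).
t = np.sort(rng.uniform(0.0, 24.0, 1000))                  # days
y = rng.normal(0.0, 0.005, t.size)                         # mag
p_inj = 10.0 ** rng.uniform(np.log10(0.1), np.log10(20.0))
semi_amp = 10.0 ** rng.uniform(np.log10(0.001), np.log10(0.1))
y += semi_amp * np.sin(2.0 * np.pi * t / p_inj + rng.uniform(0.0, 2.0 * np.pi))

trial_periods = 1.0 / np.linspace(1.0 / 20.0, 1.0 / 0.1, 5000)
p_rec = best_period(t, y, trial_periods)
recovered = abs(p_rec - p_inj) / p_inj < 0.1
print(p_inj, p_rec, recovered)
```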
In figure \[fig:RotationRecovery\] we show the recovery fraction as a function of period and of amplitude for the sinusoid injections and the spot model injections while figure \[fig:RotationRecovery\_mag\] shows the recovery fraction as a function of magnitude. When we do not apply the multi-harmonic AoV selection, the recovery fraction is largely insensitive to period between $0.1$ and $20$ days for semi-amplitudes larger than $0.01$ mag. The recovery fraction for stars brighter than $r \sim 20$ is close to $100\%$ for the sine-curve model and is between $80-100\%$ for the spot model. The recovery fraction drops at a few periods ($\sim 1~{\rm day}$ and $2-4~{\rm days}$) due to the period rejection that we perform in selecting the variables. For the sinusoid models applying the multi-harmonic AoV selection reduces the recovery fraction dramatically to $10-20\%$. This is due to the degeneracy between harmonic number and period (i.e. doubling the period and setting the amplitude of the first harmonic to one and all other harmonics to zero yields the same signal as using the correct period and setting the amplitude of the fundamental to one and all harmonics to zero). Because we limit the period search to 20 days, periods longer than 10 days cannot be recovered at twice the period or three times the period, and periods longer than 7 days cannot be recovered at three times the period. As a result the recovery fraction for long periods using the multi-harmonic AoV selection is close to the recovery fraction when the AoV selection is not used. For more realistic spot models the degeneracy between choosing the fundamental mode with the true period or either of the harmonics with a longer period is broken, so applying the multi-harmonic AoV selection does not have as significant an effect on the recovery fraction. In this case the recovery fraction is reduced by $\sim 20\%$ independent of period. Based on the results for the spot model signals we conclude that the rotation period-color diagram may be biased toward brighter stars, but at fixed magnitude, it is not strongly biased in period due to incompleteness.
Field Contamination
-------------------
In addition to incompleteness, the presence of field stars may also bias the observed period-color and period-amplitude distributions. Based on spectroscopic observations of several of the candidate rotational variables that we discuss in § \[sec:speccomp\] we estimate that $\sim 20\%$, or $\sim 115$ of the stars in our catalog ($\sim 74$ after applying the multi-harmonic AoV selection) are field stars. While the field star variables located away from the cluster main sequence on a CMD do not show an obvious correlation between color and rotation period (figure \[fig:fieldrots\]), we cannot infer that the same is true for the field stars near the cluster main sequence since these stars generally have different masses and radii. To disentangle the period-color distribution of field stars from cluster members would require either a complete spectroscopic survey of all the rotational variables, or a time series study of a field off the cluster. Note that the estimated contamination fraction is small enough that the conclusions of the paper should not be strongly affected by the contamination.
Out of a total sample of $\sim 1450$ cluster members with $14.5 < r < 22.3$ that we have observed, we estimate that $\sim 460$ are detected as variables, and $\sim 220$ have semi-amplitudes greater than $0.01~{\rm mag}$. Note that we used a stricter $\chi^{2}$-based membership selection for choosing candidate variable cluster members than what was used to estimate the total number of surveyed cluster members in Paper I. The number of cluster members that could have potentially been selected as variable cluster members is thus likely to be slightly smaller than the estimate of $\sim 1450$. Based on the completeness estimates for spot-model injected light curves without multi-harmonic AoV selection, we estimate that we have detected $\sim 70\%$ of the large-amplitude cluster member variables in this sample, so we estimate that $\sim 20\%$ of cluster members are variable with semi-amplitudes greater than $0.01~{\rm mag}$.
Comparison with the RACE-OC Project
-----------------------------------
@Messina.08 have recently presented rotation periods for a number of stars in M37 as part of the RACE-OC project. We find that 91 of the 106 candidate cluster member stars listed in table 2 of @Messina.08 that are not classified as $\delta$-Scuti, RR Lyr or eclipsing binaries match to stars in our point source catalog. To do the matching we allow for a second-order polynomial transformation from the @Messina.08 coordinates to our coordinates, and match stars within a radius of $5\arcsec$. For sources that match to multiple stars in our catalog we choose the match with the smallest magnitude difference. Thirteen of the 15 unmatched sources lie on a Megacam chip gap, while stars 2835 and 3021 do not lie within $5\arcsec$ of any stars in our point source catalog. We have independently detected periodic variability for 58 of the 91 matched stars. We classify 38 of these stars as potential cluster members, 21 are not classified as potential cluster members, while 2 stars do not have $BV$ photometry from @Kalirai.01 and are excluded from the catalog of rotating candidate cluster members presented in this paper. Note that the membership classification by @Messina.08 is based on photometry through two filters while our classification utilizes five filters. Of the 32 stars that we do not classify as variables, 19 are saturated in more than two-thirds of our time-series observations, 11 have a light curve RMS that is higher than the median RMS as a function of magnitude, while two have an RMS that is lower than the median RMS as a function of magnitude.
In figure \[fig:MessinaPeriodComparison\] we compare our periods to those measured by @Messina.08 for the 22 matching stars which are included in our clean sample; we also compare the resulting period-magnitude diagrams. Note that the @Messina.08 periods are generally shorter than the periods that we measure. The period measurements disagree by more than 20% for the following 8 stars: V424 (M3208), V589 (M3245), V827 (M2257), V888 (M2395), V961 (M3866), V1008 (M2549), V1025 (M2895), V1135 (M4134), where the identification listed in parentheses for each star is taken from @Messina.08. In each case we find that our light curve does not phase at the period reported by @Messina.08, while inspection of the Scargle-periodograms displayed in figures 17-26 of @Messina.08 reveals additional peaks near the periods that we have measured. Note that our light curves contain an order of magnitude more observations obtained on more than twice as many nights as the light curves used by @Messina.08; as a result, the periods presented here are less susceptible to aliasing.
The Period-Color Sequence in M37
================================
The relation between period and color for the clean sample of M37 stars seen in figure \[fig:BVPeriod\_multiharm\] is similar to that seen for stars in the Hyades [@Radick.87]. As we will discuss in §8, this sequence provides a powerful test for theories of stellar angular momentum evolution. In this section we provide an analytic fit to the period-color relation, and also evaluate the spread in rotation periods about this fit as a function of mass.
The selection of stars in the $(B-V)_{0}$-period sequence is shown in figure \[fig:M37sequence\]. We take the period uncertainty for each star to be the quadrature sum of the bootstrap error for the star, $0.05P$ to account for errors in the model, and $0.03P$ to account for differential rotation. The color error for each star is interpolated from table \[tab:coloruncertainties\]. We fit two models of the form: $$P = a_{lin} (B-V)_{0} + b_{lin}
\label{eqn:M37sequence_linear}$$ and $$P = \frac{a_{bpl}}{\left( \frac{(B-V)_{0}}{0.5} \right) ^{b_{bpl}} + \left( \frac{(B-V)_{0}}{0.5} \right) ^{-1}}
\label{eqn:M37sequence_bpl}$$ by minimizing the total $\chi^{2}_{tot}$ $$\chi^{2}_{tot} = \sum_{i}\left( \left(\frac{x_{i} - x_{i,0}}{\sigma_{x,i}} \right) ^{2} + \left(\frac{y(x_{i}) - y_{i,0}}{\sigma_{y,i}} \right) ^{2} \right)$$ where $x_{i,0}$ is the observed $(B-V)_{0}$ value of star $i$, $x_{i}$ is the predicted $(B-V)_{0}$ value for star $i$ and is treated as a free parameter, $\sigma_{x,i}$ is the uncertainty in $(B-V)_{0}$ for star $i$, and the $y$ values correspond to periods with $y(x_{i})$ being given by equation \[eqn:M37sequence\_linear\] or \[eqn:M37sequence\_bpl\] for the free $a$ and $b$ parameters.
We use the downhill simplex algorithm [@Nelder.65; @Press.92] to fit each relation solving for the $a$ and $b$ parameters as well as the $x_{i}$ values for each star. Note that this is equivalent to minimizing the orthogonal $\chi^{2}$ distance between each point and the model. The resulting parameters with uncertainties are listed in table \[tab:M37sequencefit\]. We list the $1\sigma$ uncertainties from 1000 bootstrap simulations (i.e. simulating data sets by resampling with replacement from the original data set), and from 1000 Monte Carlo simulations (i.e. simulating data sets by adjusting each observed $x$ and $y$ value by random variables drawn from normal distributions with standard deviations $\sigma_{x}$ and $\sigma_{y}$). We find that $\chi^{2}$ per degree of freedom for the linear and broken power-law relations are given respectively by $\chi^{2}_{dof,lin} = 3.23$ and $\chi^{2}_{dof,bpl} = 2.48$, where there are 242 degrees of freedom. We note, therefore, that we do detect a significant spread in rotation period about the main period-color sequence beyond what is due to observational uncertainties in the period and color (at the $16\sigma$ level for the broken power-law). This can be seen both from the deviation of $\chi^{2}_{dof}$ from $1$ and from the fact that the parameter errors from the bootstrap simulations are consistently larger than the parameter errors from the Monte Carlo simulations. In table \[tab:M37sequence\] we list the residual $RMS$ and $\chi^{2}$ per degree of freedom in $(B-V)_{0}$ bins for both the linear and broken power-law relations. Note that the spread in period is significant at greater than the $3\sigma$ level for all color bins with $(B-V)_{0} < 1.5$.
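For the linear relation the minimization over the predicted colors $x_{i}$ can be carried out analytically, which reduces $\chi^{2}_{tot}$ to an effective-variance form; the sketch below fits equation \[eqn:M37sequence\_linear\] in this way with a Nelder-Mead minimizer. This is our own illustration of the procedure, not the code actually used, and the broken power-law fit requires keeping the $x_{i}$ as explicit free parameters as described above.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_linear(params, x, y, sx, sy):
    """Orthogonal chi^2 of the linear period-colour relation P = a*(B-V)_0 + b.

    For a straight line, minimizing over the per-star 'true' colours x_i
    analytically gives this effective-variance form."""
    a, b = params
    return np.sum((y - (a * x + b)) ** 2 / (sy ** 2 + (a * sx) ** 2))

def fit_sequence(bv0, period, sig_bv0, sig_period):
    res = minimize(chi2_linear, x0=[10.0, 0.0],
                   args=(bv0, period, sig_bv0, sig_period),
                   method="Nelder-Mead")
    a, b = res.x
    ndof = len(bv0) - 2
    return a, b, res.fun / ndof   # slope, intercept, chi^2 per degree of freedom

# Bootstrap uncertainties: refit data sets resampled with replacement and take
# the standard deviation of the fitted parameters.
```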
Comparison With Spectroscopy {#sec:speccomp}
============================
Of the 127 stars for which we obtained spectra with Hectochelle, 41 match to candidate rotational variables. Of these, 33 have an average radial velocity that is within $3\sigma$ of the cluster systemic radial velocity (see Paper I).
In figure \[fig:BVPer\_withspec\] we show the stars with spectroscopy on the $(B-V)_{0}$-period relation. Using the measured $v\sin i$ and rotation periods, together with the stellar radii inferred from the best fit YREC isochrone, we can estimate the inclination angle of the rotation axis via: $$\sin i = \frac{P v \sin i}{2 \pi R}.
\label{eqn:sini}$$ In figure \[fig:BVPer\_withspec\] we plot the sine of the angle as a function of $(B-V)_{0}$ for the 33 stars with RV consistent with being cluster members.
For all but three stars in the clean sample the measured values of $P$ and $v \sin i$ appear to be consistent with the inferred radii since we find values that are consistent with $\sin i < 1$. The errors on this determination are dominated by the errors on $v \sin i$.
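As a quick numerical illustration of the $\sin i$ relation above (with made-up values, not entries from our catalog), the calculation reduces to putting $P$, $v\sin i$ and $R$ into consistent units:

```python
import numpy as np

R_SUN_KM = 6.957e5   # solar radius in km
DAY_S = 86400.0      # seconds per day

def sin_i(period_days, vsini_kms, radius_rsun):
    """sin i = P * (v sin i) / (2 pi R), evaluated in consistent units."""
    v_eq = 2.0 * np.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)  # equatorial velocity, km/s
    return vsini_kms / v_eq

# Illustrative numbers: a 0.9 R_sun star with P = 8 d and v sin i = 5 km/s.
print(sin_i(8.0, 5.0, 0.9))   # ~0.88
```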
Amplitude Distribution
======================
Because photometric observations of spotted stars are only sensitive to changes in the flux integrated over the visible hemisphere of the star, much of the information on the actual surface brightness distribution is lost. Due to well known degeneracies between the latitude, area, and temperature of a spot, it is not generally possible to obtain a unique fit to a single-filter light curve using a simple single spot model [e.g. @Dorren.87]. When multiple spots are present their individual signals merge into a rather featureless spot-wave. Nonetheless, by studying an ensemble of stars it may be possible to gain insight on the activity-rotation relation from our data. Moreover, from an observational point of view, knowing the distribution of amplitudes is useful for planning purposes since the amplitude and period will determine if the rotation period of a star can be measured.
We calculate the amplitudes for the clean sample of 372 stars using equation \[eqn:fourier\] with $N = 2$. We take the amplitude ($A_{r}$) to be the peak-to-peak amplitude of the Fourier series.
As seen in figure \[fig:AmpPerBV\] the amplitude appears to be anti-correlated with the period and positively correlated with the $(B-V)_{0}$ color. There is a fairly steep selection effect, however, between the amplitude and $(B-V)_{0}$ which is caused by the drop in photometric precision for fainter stars. Due to the non-trivial relation between period and color, the selection in the period-amplitude plane is complicated.
To evaluate whether or not the observed correlations are due to selection effects we determine for each light curve the minimum amplitude that the signal could have had and still have been detected. To do this, we find $\alpha$ such that $$LS(r(t) - (1-\alpha)\tilde{r}(t),P) = -150
\label{eqn:minamp}$$ where $LS(x,P)$ is the logarithm of the Lomb-Scargle (L-S) formal false alarm probability for light curve $x$ with period $P$ [see @Press.92], $\tilde{r}(t)$ is the model signal in eq. \[eqn:fourier\], $r(t)$ is the observed light curve, and $-150$ is the selection threshold used to select variables in Paper II. For the purposes of this investigation we do not include stars that were not selected with L-S; we also reject stars for which no positive $\alpha$ can be found that satisfies eq. \[eqn:minamp\] as these are light curves for which the simple model in eq. \[eqn:fourier\] does not adequately describe the periodic signal. We then take the minimum observable amplitude to be: $$A_{min} = \alpha A_{r}$$ Note that we have ignored the by-eye selection that light curves are passed through following the selection on L-S. We also caution that stars with light curves that are not well modeled by the simple Fourier series will have minimum observable amplitudes that are underestimated. The minimum observable amplitudes for each point are also shown in figure \[fig:AmpPerBV\].
To evaluate the significance of the apparent correlation in the presence of the selection we use Kendall’s $\tau$ (a non-parametric correlation statistic) modified for the case of data suffering a one-sided truncation [@Tsai.90; @Efron.92; @Efron.99]. Letting the data set be represented by the set of points $\{(x_{i},y_{i},y_{min,i})\}$, where $y_{min,i}$ is the minimum value of $y_{i}$ that could have been measured for observation $i$, define the set of comparable pairs $$\mathcal{C} = \{(i,j)\,|\,y_{i} \geq y_{min,j}\ {\rm and}\ y_{j} \geq y_{min,i}\}.
\label{eq:defC}$$ Define the risk-set numbers by $$N_j = \#\{i\,|\,y_{min,i} \leq y_{j}\ {\rm and}\ y_{i} \geq y_{j}\}.
\label{eqn:risksetnumber}$$ The normalized correlation statistic is then $$\hat{T} = \left( \sum_{(i,j) \in \mathcal{C}} {\rm sign}((x_{i} - x_{j})(y_{i} - y_{j})) \right)/\sigma
\label{eqn:defT}$$ with the variance of the numerator given by $$\sigma^{2} \approx 4\sum_{i}\frac{N_{i}^{2} - 1}{12}.$$ The statistic is normalized such that a value of $\hat{T} = 1$ corresponds to a $1\sigma$ rejection of the null hypothesis of non-correlation.
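A direct transcription of these definitions into code might look like the sketch below; we assume the sum in equation \[eqn:defT\] runs over ordered pairs $(i,j)$ with $i \neq j$ (using unordered pairs instead would simply rescale the numerator by a factor of two).

```python
import numpy as np

def truncated_kendall_tau(x, y, ymin):
    """Normalized Kendall's tau for data with a one-sided truncation on y,
    following the definitions of C, N_j and T-hat given in the text."""
    x, y, ymin = map(np.asarray, (x, y, ymin))
    n = len(x)
    # Numerator: sum of sign terms over comparable ordered pairs (i, j).
    num = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and y[i] >= ymin[j] and y[j] >= ymin[i]:
                num += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    # Risk-set numbers N_j and the variance of the numerator.
    N = np.array([np.sum((ymin <= y[j]) & (y >= y[j])) for j in range(n)], dtype=float)
    sigma = np.sqrt(4.0 * np.sum((N ** 2 - 1.0) / 12.0))
    return num / sigma
```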
Applying eq \[eqn:defT\] to the $(P,A_{r})$ and $((B-V)_{0},A_{r})$ data we find $\hat{T} = -11091$ and $\hat{T} = 3900$ respectively. We conclude, therefore, that there is a significant anti-correlation between rotation period and amplitude and a positive correlation between $(B-V)_{0}$ and amplitude for the lower main sequence stars in this cluster.
As shown by @Noyes.84, there is a tight correlation between stellar activity, measured via the ratio of emission in the cores of the Ca II H and K lines to the total luminosity of the star, and the Rossby number ($R_{O}$, the ratio of the rotation period to the characteristic time scale of convection). @Messina.01a find that the light curve amplitude also appears to be anti-correlated with $R_{O}$.
To compute $R_{O}$ for each star we use the empirical expression for the convective time-scale from @Noyes.84 $$\log \tau_{c} = \left\{ \begin{array}{ll}
1.362 - 0.166x + 0.025x^{2} - 5.323x^{3}, & x > 0 \\
1.362 - 0.14x, & x < 0
\end{array}
\right.$$ where $\tau_{c}$ is the convective time-scale in days and $x = 1.0 - (B-V)_{0}$. We plot the amplitude of the variables against $R_{O}$ in figure \[fig:AmplitudeRossby\], showing stars with $(B-V)_{0} < 1.36$ and $(B-V)_{0} > 1.36$ separately. For comparison we also show the candidate field rotators from Paper II.
The anti-correlation between $R_{O}$ and $A_{r}$ appears to be steeper for stars with $(B-V)_{0} < 1.36$ than for stars with $(B-V)_{0} > 1.36$. We note that the empirical constraints on the convective time-scale are less stringent for redder stars, so the values of $R_{O}$ are more susceptible to systematic errors for $(B-V)_{0} > 1.36$. Focusing on cluster members with $(B-V)_{0} < 1.36$, the relation between $R_{O}$ and $A_{r}$ appears to flatten out, or saturate, for stars with $R_{O} < 0.3$. There appears to be a dearth of low-amplitude stars with $R_{O} < 0.2$; note that stars with peak-to-peak amplitudes $A_{r} > 0.02~{\rm mag}$ should have been detectable. There is also a hint that the relation between $R_{O}$ and $A_{r}$ is flat for $R_{O} > 0.6$. Note that as seen in figure \[fig:BVPer\_highRossby\] many of the stars with $R_{O} > 0.6$ fall above the main period-color sequence. As we discuss below, it is possible that the periods measured for these stars are not the rotation periods but rather the spot-evolution timescales.
Applying eq \[eqn:defT\] to the $(R_{O},A_{r})$ data we find $\hat{T} = -16462$ for all stars in the clean data set, $\hat{T} = -10520$ for 233 stars with $(B-V)_{0} < 1.36$ and $\hat{T} = -1150$ for 129 stars with $(B-V)_{0} > 1.36$. For comparison, the $((B-V)_{0},A_{r})$ data has $\hat{T} = 3846$ for $(B-V)_{0} < 1.36$ and $\hat{T} = -540$ for $(B-V)_{0} > 1.36$, while the $(P,A_{r})$ data has $\hat{T} = -5430$ for $(B-V)_{0} < 1.36$ and $\hat{T} = -1145$ for $(B-V)_{0} > 1.36$. For $(B-V)_{0} < 1.36$ the correlation for $(R_{O},A_{r})$ is more significant than for $((B-V)_{0},A_{r})$ or $(P,A_{r})$.
For $(B-V)_{0} < 1.36$ and $R_{O} < 0.6$ the selection on $A_{r}$ does not appear to bias the $(R_{O},A_{r})$ distribution. We find that the $(R_{O},A_{r})$ data for cluster members with $(B-V)_{0} < 1.36$ and $R_{O} < 0.6$ can be fit with the function: $$A_{r} = \frac{0.078 \pm 0.008}{1 + \left( \frac{R_{O}}{0.31 \pm 0.02}\right) ^{3.5 \pm 0.5}}
\label{eqn:arrofit}$$ where the errors listed are the $1\sigma$ errors from 1000 bootstrap simulations. Figure \[fig:AmplitudeRossbyFit\] shows this relation. We also show the approximate location of the Sun on this diagram, assuming a typical large sunspot with an area that is $500$ millionths that of the solar disk, a temperature ratio to the photosphere of $0.7$, the bolometric corrections for the $r$-band from @Girardi.04 and $(B-V)_{\odot} = 0.656$ [@Gray.92] to convert the bolometric amplitude to an $r$-band amplitude, and the solar equatorial rotation period of 24.79 days [@Howard.84]. The error bar shows the approximate range of values for the Sun, where the upper limit is for the largest sunspot group observed to date [the giant sunspot group of April, 1947 had an area of $\sim 6000$ millionths that of the solar disk and was large enough to be seen without optical aid, see @Taylor.89].
As noted above, there appears to be a minimum amplitude of $\sim 0.03~{\rm mag}$ for stars with $(B-V)_{0} < 1.36$, $R_{O} < 0.2$. The fact that there are stars with $R_{O} > 0.3$ that have amplitudes down to the minimum detectable limits below $0.01~{\rm mag}$ shows that this is not likely a result of underestimating the minimum detectable amplitude and is likely to be a real effect. This indicates that these rapidly rotating stars may have a distribution of spot-sizes that is peaked, or that they have several spots. To illustrate this we have conducted Monte Carlo simulations of spotted star light curves for four different models of the spot-size and spot-number distributions. For the spot-size distribution we either fixed the spot area to $1\%$ that of the stellar surface area or we allowed the spot-sizes to be uniformly distributed in logarithm between $10^{-5}$ times the stellar surface area and $10\%$ of the stellar surface area. We calculated models with two spots per star or with ten spots per star. For each model we simulate 1000 light curves using the latest version of the Wilson-Devinney program [@Wilson.71; @vanHamme.03]. The simulations are conducted assuming a spot to photosphere temperature ratio of 0.7, random orientations for the rotation axis, and that the spots are distributed randomly over the surface of the star. Figure \[fig:spotamplitudedistribution\] shows the distribution of $A_{r}$ for the model simulations together with the observed $A_{r}$ distribution for stars with $(B-V)_{0} < 1.36$, $R_{O} < 0.3$. While fitting a model to the observed distribution is beyond the scope of this paper, we note that the models with a fixed spot-size or ten spots per star have peaked $A_{r}$ distributions whereas the model with a broad spot-size distribution and only two spots per star results in a broad $A_{r}$ distribution that is inconsistent with our observations. This suggests that the logarithmic distribution of spot filling-factors (i.e. the fraction of the stellar surface that is covered by spots) is peaked.
The Blue K Dwarf Phenomenon in M37
==================================
It is well known that the K dwarfs in the Pleiades fall blueward of a main sequence isochrone when plotted on a $B-V$ CMD [see @Stauffer.03 hereafter S03]. S03 have argued that this is due to differences in the spectral energy distribution of the Pleiades stars and the field dwarfs which are used to define the main sequence. They argue that this difference is due to more significant cool spots and plage areas on the photospheres of the young, rapidly rotating, heavily spotted Pleiades stars than are present on the older, slowly rotating, less heavily spotted field dwarfs. The plage areas result in excess flux in the $B$ and $V$ bands, while spots cause excess flux in the near infrared. As evidence for this explanation they show that the discrepancy at fixed color between the magnitude of the Pleiades stars and the main sequence isochrone correlates with $v \sin i$. @An.07 [hereafter A07] have also shown that the K dwarfs in the young cluster NGC 2516 are too blue in $B-V$ and that the discrepancy correlates with $v \sin i$.
Using our rotation periods we can look for this phenomenon in M37. We first define two groups of stars: those which lie on the main color-rotation period band (with the selection performed as in §4), and rapid rotators with $P < 2$ days. In figure \[fig:BVrapidvsslow\] we show the locations of these two groups of stars on $B-V$ and $V-I_{C}$ CMDs and in figure \[fig:grirapidvsslow\] we show them on $g-r$ and $g-i$ CMDs. The results appear to be similar to those found by S03 and A07. In the $B-V$ and $g-r$ CMDs the rapid rotators appear to be bluer than the slower rotators at a fixed magnitude along the lower main sequence, while in the $V-I_{C}$ and $g-i$ CMDs the rapid rotators are slightly redder at fixed magnitude than the slower rotators.
To illustrate these effects quantitatively we bin the data by magnitude and plot in figures \[fig:BVcolorDiff\_vsPeriod\]-\[fig:gicolorDiff\_vsPeriod\] the rotation period of the variables against the color difference $c_{obs} - c_{fid}$ where $c_{fid}$ is the color interpolated within a fiducial main sequence, at the $V$ magnitude of the star; the fiducial sequence is drawn by eye. We use a fiducial sequence rather than a theoretical sequence because the discrepancy between the observed and theoretical sequences varies with magnitude, and correlations between period and magnitude could lead to spurious correlations between period and color difference. For each bin we calculate the Spearman rank-order correlation coefficient ($r_{s}$) as well as the two-sided significance level of its deviation from zero; we ignore uncertainties in the photometry in calculating this statistic.
As seen in figure \[fig:BVcolorDiff\_vsPeriod\], there is a significant correlation (greater than $98\%$ significance) between period and $(B-V)_{obs} - (B-V)_{fid}$ at fixed $V$ for stars with $6.5 < M_{V} < 9.5$. Fitting a linear relation of the form $(B-V)_{obs} - (B-V)_{fid} = mP + b$ gives a slope of $m \sim 0.005$ for the bin with the most significant correlation detection. When using $(V-I_{C})_{0}$ we find anti-correlations with greater than $80\%$ significance between period and $(V-I_{C})_{obs} - (V-I_{C})_{fid}$ for stars with $8.5 < M_{V} < 10.5$ (figure \[fig:VIcolorDiff\_vsPeriod\]). In $g-r$, the stars with $6.5 < M_{V} < 9.5$ or $10.5 < M_{V} < 11.5$ show a positive correlation between period and $(g - r)_{obs} - (g - r)_{fid}$ (figure \[fig:grcolorDiff\_vsPeriod\]). Finally in $g-i$, stars with $7.5 < M_{V} < 8.5$ have a positive correlation between period and $(g-i)_{obs} - (g-i)_{fid}$ (figure \[fig:gicolorDiff\_vsPeriod\]) while stars with $9.5 < M_{V} < 10.5$ show a negative correlation.
The positive correlations seen in $B-V$ and $g-r$, together with the anti-correlations in $V-I_{C}$ and $g-i$ for the red stars, are consistent with the blue K dwarf phenomenon in the Pleiades described by S03. This effect has been seen in the Pleiades and in NGC 2516 amongst stars with $0.8 < (B-V)_{0} < 1.2$ ($6 \la M_{V} \la 7.5$; see A07). We find that this phenomenon extends into the M dwarf regime where $(B-V)_{0}$ saturates at $\sim 1.5$ mag ($M_{V} \ga 9.5$).
These observations bear on a well-known problem in the theory of low-mass stars. In recent years studies of M-dwarf eclipsing binaries have revealed that these stars have radii that are $\sim 10\%$ larger than predicted by theory [@Torres.02; @Ribas.03; @Lopez-Morales.05; @Ribas.06; @Torres.07]. Note that the @Torres.07 finding is for an M-dwarf orbited by a short-period transiting Neptune-mass planet. The observed luminosities of these stars are in good agreement with theoretical predictions, but their effective temperatures are lower than predicted. The observed low-mass eclipsing binary systems generally have short periods, and it has been suggested that the discrepancy between theory and observations may be due to enhanced magnetic activity on these rapidly rotating stars [@Ribas.06; @Torres.06; @Lopez-Morales.07]. There is also evidence that the discrepancy may be correlated with metallicity [@Lopez-Morales.07]. Recently @Morales.07 have shown that for fixed luminosity, active stars tend to have lower temperatures than inactive stars. This result was based on a sample of isolated field stars.
Our observations show that, at fixed luminosity, rapidly rotating late-K and early-M dwarfs tend to be bluer in $(B-V)$ but redder in $(V-I_{C})$ than slowly rotating dwarfs. Since the bulk of the flux from these stars is emitted in the near infrared, it is reasonable to suppose that the correlation between $(V-I_{C})$ and period more closely represents the correlation between effective temperature and period for these stars than the correlation between $(B-V)$ and the period does. For stars with $8.5 < M_{V} < 9.5$ we find that the slope between $(V-I_{C})$ and the period is $\sim -0.005~{\rm mag}/{\rm day}$. Using the best fit YREC isochrone for M37 (see Paper I), a difference in $(V-I_{C})$ of $0.1~{\rm mag}$ (corresponding to a difference in period of 20 days, which is comparable to the difference between an old field star and a tidally synchronized binary) would result in a $\sim 3\%$ difference in effective temperature at fixed luminosity, or a $\sim 6\%$ difference in radius. This is comparable to, but still slightly less than, the radius discrepancy from eclipsing binaries. Since the flux through the $V$ filter may be slightly enhanced for rapidly rotating stars, it is likely that colors using only near-infrared filters will be more strongly anti-correlated with period than $(V-I_{C})$ is. Deep near-infrared observations of this cluster would confirm or refute this hypothesis. Note that because our present sample of stars are all members of the same cluster, we can rule out age effects as the source of the color discrepancy. This is a conclusion which is not possible to make using samples of field stars.
Angular Momentum Evolution
==========================
By comparing the distribution of stellar rotation periods between star clusters of different ages we can study the evolution of stellar angular momentum. Both changes in the moment of inertia of a star and changes in its angular momentum contribute to changes in the rotation period. After $\sim 100$ Myr stars have settled onto the main sequence and their moment of inertia changes very little until they evolve onto the sub-giant branch. During this time period the rotation evolution is thought to be dominated by angular momentum loss via a magnetized wind. In this section we compare our observations of M37 with observations of other open clusters to test a simple model of stellar angular momentum evolution.
Data for Other Clusters
-----------------------
Besides M37, there are four clusters older than $\sim 100$ Myr that have significant, publicly available, samples of rotation periods; these are the Pleiades [$100$ Myr; @Meynet.93], NGC 2516 [$140$ Myr; @Meynet.93], M34 [$200$ Myr; @Jones.96] and the Hyades [$625$ Myr; @Perryman.98].
### Pleiades
We used the WEBDA database[^3] to obtain the rotation periods for 50 Pleiads; the periods are compiled from a number of sources [@Stauffer.87a; @Prosser.93a; @Prosser.93b; @Prosser.95; @Marilli.97; @Krishnamurthi.98; @Terndrup.99; @Messina.01b; @Clarke.04; @Scholz.04]. For stars with multiple periods listed we took the average value. We do not include 11 low-mass stars with periods for which optical photometry is unavailable.
We also used WEBDA to obtain the photometry for this cluster. We took the mean photoelectric $BV$ measurements [@Johnson.53; @Johnson.58; @Iriarte.67; @Iriarte.74; @Mendoza.67; @Jones.73; @Robinson.74; @Landolt.79; @Stauffer.80; @Stauffer.82b; @Stauffer.84; @Stauffer.87b; @Prosser.91b; @Andruk.95; @Messina.01b], Kron $VI_{K}$ measurements [@Stauffer.82b; @Stauffer.84; @Prosser.91b; @Stauffer.87b], Johnson $VI_{J}$ measurements [@Mendoza.67; @Iriarte.69; @Landolt.79] and Johnson-Cousins $VI_{C}$ measurements [@Stauffer.98]. Following A07 we converted $VI_{K}$ to $VI_{C}$ using the transformation from @Bessell.87 and $VI_{J}$ to $VI_{C}$ using the transformation given in A07.
To obtain $(B-V)_{0}$, $(V-I_{C})_{0}$ and $M_{V}$ for each star we take $E(B-V) = 0.02$, and $(m-M)_{0} = 5.63$ (A07). We also assume $R_{V} = 3.1$ and $E(V-I_{C})/E(B-V) = 1.37$ (see A07).
### NGC 2516
The rotation periods for 362 stars in NGC 2516 come from @Irwin.07 [hereafter I07]. These authors also provide $VI_{C}$ photometry for all of their rotators. We find, however, that when adopting $E(B-V) = 0.125$, $(m-M)_{0} = 8.03$ for this cluster (A07), the $(V-I_{C})_{0}$ vs. $M_V$ lower main sequence falls $\sim 0.1$ mag to the red of the sequence for M37, despite the cluster having a metallicity of $[Fe/H] = -0.07 \pm 0.06$ (A07) or $0.01 \pm 0.07$ [@Terndrup.02] compared with $[Fe/H] = 0.045 \pm 0.044$ for M37 (Paper I). When taking the $VI_{C}$ photometry for NGC 2516 from @Jeffries.01 [hereafter J01] or @Sung.02 the sequence is in good agreement with the M37 sequence. We therefore transform the photometry from I07 to match the J01 photometry via: $$\begin{aligned}
V_{J01} & = & 1.014V_{I07} - 0.172(V-I_{C})_{I07} + 0.043 \\
I_{C,J01} & = & 0.972I_{C,I07} + 0.040(V-I_{C})_{I07} + 0.314.\end{aligned}$$ Finally there is $BV$ photometry from J01 for 73 of the I07 rotators.
### M34
The rotation periods for 105 stars in M34 were taken from @Irwin.06 [hereafter I06]. We adopt the $VI_{C}$ photometry from this paper as well. The lower main sequence in this case appears to be quite comparable to that for M37 when taking $E(B-V) = 0.07$ [@Canterna.79] and $(m-M)_{0} = 8.38$ [@Jones.96]. Note that this cluster appears to be slightly more metal rich than M37, with $[Fe/H] = +0.07 \pm 0.04$ [@Schuler.03]. There is $BV$ photometry from @Jones.96 for 25 of the I06 rotators.
### Hyades
As for the Pleiades we obtained the rotation periods for 25 Hyads from WEBDA; the periods are compiled from three sources [@Radick.87; @Radick.95; @Prosser.95] and we take the average value for stars with multiple measurements. We also take the average photoelectric $BV$ photometry [@Johnson.55; @Argue.66; @Eggen.68; @Eggen.74; @Mendoza.68; @vanAltena.69; @Sturch.72; @Robinson.74; @Upgren.77; @Upgren.85; @Stauffer.82a; @Herbig.86; @Weis.82; @Weis.88; @Andruk.95], Kron $VI_{K}$ photometry [@Upgren.77; @Upgren.85; @Weis.82; @Weis.88; @Stauffer.82a], Johnson $VI_{J}$ photometry [@Mendoza.67; @Johnson.68; @Mendoza.68; @Sturch.72; @Carney.79], and Johnson-Cousins $VI_{C}$ photometry [@Reid.93]. We transform the $VI_{K}$ and $VI_{J}$ photometry to the $VI_{C}$ system using the same relations that we used for the Pleiades. Finally, we adopt $E(B-V) = 0.003 \pm 0.002$ [@Crawford.75; @Taylor.80] and an average distance modulus of $(m-M)_{0} = 3.33 \pm 0.01$ [@Perryman.98].
Comparison of Period-Age Data with Models
-----------------------------------------
In figure \[fig:BVPeriod\] we compare the $(B-V)_{0}$ vs. Period relation for M37 to each of the four clusters discussed above. In figure \[fig:VIPeriod\] we show the $(V-I_{C})_{0}$ vs. Period relation.
The convergence of stellar rotation periods into a sequence for $(B-V)_{0} < 1.3$ is clearly seen in M37 and the Hyades. A sequence may also be present in the Pleiades and M34 (this is more clearly seen in the less complete $BV$ data for M34). The data for NGC 2516 is incomplete for stars hotter than $(B-V)_{0} \sim 1.3$ as the I07 survey was focused on very low mass stars.
The formation of such a sequence can be explained by an angular momentum loss law that is a steep function of the angular frequency. Typically modelers adopt a modified @Kawaler.88 $N = 1.5$ wind law [@Chaboyer.95], where $N$ is a parameter used in modeling the geometry of the coronal magnetic field of the star and can vary between 0 and 2, with $N = 3/7$ for a dipolar field and $N = 2$ for a purely radial field. The general law has the form: $$\frac{dJ}{dt} = \left\{ \begin{array}{ll}
f_{k}K_{w}\omega^{1 + 4N/3}\left(\frac{R}{R_{\odot}}\right)^{2-N}\left(\frac{M}{M_{\odot}}\right)^{-N/3}\left(\frac{\dot{M}}{10^{-14}M_{\odot}{\rm yr}^{-1}}\right)^{1 - 2N/3}, & \omega < \omega_{sat} \\
f_{k}K_{w}\omega\omega_{sat}^{4N/3}\left(\frac{R}{R_{\odot}}\right)^{2-N}\left(\frac{M}{M_{\odot}}\right)^{-N/3}\left(\frac{\dot{M}}{10^{-14}M_{\odot}{\rm yr}^{-1}}\right)^{1 - 2N/3}, & \omega \geq \omega_{sat}
\end{array}
\right.
\label{eqn:angmomlossgeneral}$$ Here $\omega_{sat}$ is a saturation angular frequency, which is needed to account for the presence of rapid rotators in the Pleiades. The leading constant $f_{k}K_{w}$ determines the overall angular momentum loss rate. @Kawaler.88 gives $f_{k}K_{w} = 2.035 \times 10^{33}(24.93 K_{V}^{-1/2})^{n}K_{B}^{4n/3}$, where $K_{V}$ is the ratio of the wind speed to the escape velocity at a radius of $r_{A}$, the radius out to which the stellar wind co-rotates with the star, and $K_{B}$ is a constant of proportionality between the surface magnetic field strength and the stellar rotation rate. @Chaboyer.95 let $K_{w} = 2.036 \times 10^{33} (1.452 \times 10^{9})^{N}$ in cgs units, for $K_{V} = 1.0$ and $K_{B}$ set to the value obtained from calibration to the solar global magnetic field strength, and introduce $f_{k}$ as a dimensionless parameter to account for our ignorance of $K_{V}$ and $K_{B}$. For $N = 1.5$ this reduces to $$\frac{dJ}{dt} = \left\{ \begin{array}{ll}
-K\omega^{3}\left(\frac{R}{R_{\odot}}\right)^{0.5}\left(\frac{M}{M_{\odot}}\right)^{-0.5}, & \omega < \omega_{sat} \\
-K\omega\omega_{sat}^{2}\left(\frac{R}{R_{\odot}}\right)^{0.5}\left(\frac{M}{M_{\odot}}\right)^{-0.5}, & \omega \geq \omega_{sat}
\end{array}
\right.
\label{eqn:angmomloss}$$ where $f_{k}K_{w}$ is combined into a single calibration constant $K$. @Bouvier.97 determined a value of $K = 2.7 \times 10^{47}$ g cm$^{2}$ s by requiring that the law reproduce the rotation frequency of the Sun at $4.5~{\rm Gyr}$ assuming $\omega_{sat} = 14 \omega_{\odot}$ for a $1 M_{\odot}$ star.
For rigid body rotation, the angular frequency obeys the differential equation: $$\frac{d\omega}{dt} = \frac{1}{I}\frac{dJ}{dt} - \frac{\omega}{I}\frac{dI}{dt}
\label{eqn:domegadt}$$ where $I$ is the moment of inertia of the star. Helioseismology suggests that the rigid body rotation approximation is reasonable for the Sun [@Gough.90], though it is uncertain whether this approximation is valid for younger stars. Previous investigations have found that the rigid body approximation reproduces the observed angular velocity distribution of stars older than the Pleiades, while models incorporating internal differential rotation are needed to reproduce the observations of younger clusters [e.g. @Sills.00]. In simple models where the core and envelope of the star are assumed to rotate as distinct rigid bodies, $I$ and $\omega$ in equation \[eqn:domegadt\] are replaced with $I_{conv}$ and $\omega_{conv}$ because the magnetic wind is tied to the convective envelope. Additional terms are then required to allow for coupling between the core and the envelope (see I07).
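As a concrete illustration of how equations \[eqn:angmomloss\] and \[eqn:domegadt\] drive the convergence of rotation periods, the following is a rough numerical sketch (not the code used for the fits in this paper) that integrates the $N = 1.5$ law for a star of fixed moment of inertia and radius. The solar-calibrated constants, the saturation threshold of $14\,\omega_{\odot}$, the time step, and the initial periods are illustrative assumptions; the models described below use mass- and time-dependent $I$ and $R$ from the YREC evolutionary tracks.

```python
import math

# Illustrative constants (cgs); holding I and R fixed is a simplification.
K = 2.7e47             # g cm^2 s, Bouvier et al. (1997) calibration constant
I_STAR = 7.0e53        # g cm^2, approximate solar moment of inertia
OMEGA_SUN = 2.9e-6     # rad/s, approximate solar angular frequency
SEC_PER_YR = 3.156e7

def djdt(omega, omega_sat, R=1.0, M=1.0):
    """N = 1.5 wind law of eq. [angmomloss]; R and M are in solar units."""
    factor = -K * math.sqrt(R / M)
    if omega < omega_sat:
        return factor * omega ** 3
    return factor * omega * omega_sat ** 2

def spin_down(period0_days, omega_sat=14 * OMEGA_SUN, t_end_yr=4.5e9, dt_yr=1e5):
    """Euler integration of d(omega)/dt = (1/I) dJ/dt at fixed I (rigid body)."""
    omega = 2.0 * math.pi / (period0_days * 86400.0)
    t = 0.0
    while t < t_end_yr:
        omega += djdt(omega, omega_sat) / I_STAR * dt_yr * SEC_PER_YR
        t += dt_yr
    return 2.0 * math.pi / (omega * 86400.0)   # final period in days

# Widely different initial periods converge to a similar period by the age of the Sun:
for p0 in (1.0, 10.0):
    print(p0, "->", round(spin_down(p0), 1), "days")
```

Because the unsaturated branch gives $d\omega/dt \propto -\omega^{3}$, the late-time behavior is the Skumanich-like $\omega \propto t^{-1/2}$, which is why the memory of the initial conditions is largely erased for solar-mass stars by the ages of M37 and the Hyades.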
In figures \[fig:periodvsageBV\] and \[fig:periodvsageVI\] we plot the rotation period as a function of age for stars in the clusters presented in figures \[fig:BVPeriod\] and \[fig:VIPeriod\]. Following I07, we show the 10th and 90th percentile rotation periods for each cluster within color bins. We conduct 1000 bootstrap simulations for each cluster to estimate the $1-\sigma$ uncertainties on these percentiles. For clusters with fewer than 10 points in a bin, we take the minimum and maximum observed periods in the bin as estimates of the 10th and 90th percentiles. We do not show clusters with fewer than 4 points in a bin. We include the Sun in the bluest color bins. Note that while the period of the Sun ($P_{\odot} = 24.79~{\rm days}$) is very well determined, we do expect stars at the age of the Sun to exhibit a range in rotation periods. Based on the models described above, we estimate this range to be $\sim 1.0~{\rm day}$ at $4.5~{\rm Gyr}$. We therefore adopt this as the uncertainty on the rotation period of a Sun-like star.
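The percentile bootstrap described above can be sketched as follows; the nearest-rank percentile convention and the example period list are illustrative assumptions and are not taken from our catalogs.

```python
import random

def nearest_rank(sorted_vals, frac):
    """Nearest-rank percentile of an already sorted list (0 < frac < 1)."""
    return sorted_vals[min(len(sorted_vals) - 1, int(frac * len(sorted_vals)))]

def bootstrap_percentile(periods, frac, n_boot=1000, seed=0):
    """Return a percentile of the sample and its 16th-84th percentile bootstrap interval."""
    rng = random.Random(seed)
    estimates = sorted(
        nearest_rank(sorted(rng.choice(periods) for _ in periods), frac)
        for _ in range(n_boot)
    )
    value = nearest_rank(sorted(periods), frac)
    return value, estimates[int(0.16 * n_boot)], estimates[int(0.84 * n_boot)]

# Hypothetical rotation periods (days) of stars falling in a single color bin:
periods = [1.2, 2.3, 3.1, 4.5, 5.0, 6.2, 7.7, 8.1, 9.4, 11.0, 12.3]
print(bootstrap_percentile(periods, 0.10))   # 10th percentile and its interval
print(bootstrap_percentile(periods, 0.90))   # 90th percentile and its interval
```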
For each color bin we fit a model given by eq. \[eqn:domegadt\] to the 10th and 90th percentile periods letting $\omega_{0,10}$, $\omega_{0,90}$, $K$, and $\omega_{sat}$ be the free parameters. Here $\omega_{0,10}$ and $\omega_{0,90}$ are the $\omega_{0}$ values at $t_{0} = 100~{\rm Myr}$ for the 10th and 90th percentiles respectively. Note that in solving eq. \[eqn:domegadt\] we use evolutionary tracks computed with the YREC isochrones (Terndrup et al. 2008, in preparation) to determine $I$ and $R$ as functions of $M$ and $t$. As seen in figure \[fig:periodvsageBV\], the model can reproduce the observed spin-down and convergence of rotation periods for stars with $0.5 < (B-V)_{0} < 0.7$ or $1.1 < (B-V)_{0} < 1.5$; however, we find that for $0.7 < (B-V)_{0} < 1.1$ ($0.76 M_{\odot} < M < 0.99 M_{\odot}$) the model fails to fit the Pleiades, M34, M37 and the Hyades simultaneously at greater than 95% confidence. In this color-range, the models predict a greater degree of convergence in the rotation periods at the age of M37 than is observed. The models also under-predict the periods of the slowest rotators in M34. When using $(V-I_{C})_{0}$, the models for stars with $1.0 < (V-I_{C})_{0} < 2.0$ ($0.56 M_{\odot} < M < 0.82 M_{\odot}$) fit the M37 observations, but fail to fit the younger clusters. The difference between the $(V-I_{C})_{0}$ and $(B-V)_{0}$ data is that the $(V-I_{C})_{0}$ data is more complete for M34 and NGC 2516 than the $(B-V)_{0}$ data, but is less complete for the Hyades.
Table \[tab:rotparams\] gives the parameters for the models displayed in figures \[fig:periodvsageBV\] and \[fig:periodvsageVI\]. To simplify the comparison with the data presented in this paper we list periods rather than angular frequencies so that $P_{sat}$ corresponds to $\omega_{sat}$, etc. A few points bear mentioning. The first is that the saturation period appears to increase with decreasing stellar mass, as found by other authors (e.g. I07). The second is that while there is no clear trend between $K$ and stellar mass, we find that to reproduce M37, the Hyades and the Sun $K$ must be a factor of $\sim 1.2$ larger than what was found by @Bouvier.97.
Throughout this subsection we have implicitly assumed that the samples of stars with rotation period measurements are unbiased in period. Given the relation between period and amplitude (figure \[fig:AmpPerBV\]), however, it is likely that the samples are biased towards short-period rotators (though as we saw in § \[sec:completeness\] any bias in period is relatively minor for the M37 data). Since we cannot tell how this bias may differ from sample to sample, it is unclear what effect this has on the above conclusions. Given the fairly sharp upper limit on the period as a function of mass for M37 and the Hyades, we do not expect the estimates of the 90th percentile period measurements for M37 and the Hyades to be substantially different from the actual values for $(B-V)_{0} < 1.1$. We also note that we have not included uncertainties in the fundamental cluster parameters in this analysis. Including these uncertainties will reduce the significance of the discrepancy between the models and the observed period distributions.
Discussion
==========
We have measured rotation periods for 575 candidate members of the open cluster M37. This is the largest sample of rotation periods for a cluster older than $100~{\rm Myr}$ and only the second cluster older than $500~{\rm Myr}$ with a large sample of rotation periods (the other cluster being the Hyades with 25 stars that have measured periods). As mentioned in the introduction, this is the second set of rotation periods published for this cluster. @Messina.08 have recently published periods for 122 cluster members. Using this data we have investigated the Rossby number-amplitude and period-color distributions.
We find that for stars with $(B-V)_{0} < 1.36$ the amplitude and Rossby number are anti-correlated, and are related via equation \[eqn:arrofit\]. Extrapolating this relation to higher Rossby number values, we expect $A_{r}$ to drop below $1~{\rm mmag}$ at $R_{O} \sim 1.1$. For a Skumanich spin-down, we expect stars to reach this Rossby number at $\sim 2~{\rm Gyr}$. We note that stellar activity will be a non-negligible effect to consider when conducting surveys for transiting planets with amplitudes of $\sim 1~{\rm mmag}$ (e.g. Neptune-sized planets orbiting sun-sized stars) that orbit stars younger than $\sim 2~{\rm Gyr}$. However, since the rotation and transit time-scales typically differ by $\sim 2$ orders of magnitude, it should be possible to find these planets nonetheless. We also find that for stars with $R_{O} < 0.2$, $(B-V)_{0} < 1.36$, the logarithmic distribution of $A_{r}$ appears to be peaked above $0.03~{\rm mag}$ which suggests that the distribution of spot filling-factors is peaked for these rapidly rotating stars.
We have also investigated the effect of rotation on the shape of the main sequence on CMDs. We find that at fixed $V$ magnitude rapid rotators tend to lie blueward of slow rotators on a $B-V$ CMD and redward of slow rotators on a $V-I_{C}$ CMD. This effect, seen previously for early K-dwarfs in the Pleiades and NGC 2516, extends down to early M-dwarf stars ($M_{V} < 10.5$). We note that the relation between $V-I_{C}$ and rotation period is consistent with observations by @Morales.07 who found that at fixed luminosity more active M-dwarfs tend to have lower effective temperatures. Our observations quantitatively and qualitatively support the hypothesis that the well-known discrepancy between the observed and predicted radii and effective temperatures of M-dwarfs is due to stellar activity/rotation and is not an age effect.
We have also investigated the rotation period-color distribution for this cluster. We find that, like the Hyades, the rotation periods of stars in M37 with $0.4 < (B-V)_{0} < 1.0$ follow a tight sequence. At the blue end of this color range stars have rotation periods of $\sim 3~{\rm days}$ while at the red end stars have rotation periods of $\sim 10~{\rm days}$. Redward of $(B-V)_{0} \sim 1.0$ the maximum rotation period as a function of color continues to follow this sequence; however, there is also a broad tail of stars with shorter rotation periods.
We have identified a group of 4 stars with $(B-V)_{0} < 1.2$ and $P > 15~{\rm days}$ that fall well above the main period-color sequence. The fact that at least one of these stars, V223, is a likely cluster member warrants discussion. If the measured period corresponds to the rotation period, this star would pose a significant challenge to the theory of stellar angular momentum evolution. While the periods of some of the stars may be shorter than the measured values, the light curve of V223 appears to be well-modeled by a sinusoid with a period of $18.0 \pm 2.9~{\rm days}$ (fig. \[fig:LongPeriodRotsLCS\]). The star also has a spectroscopic temperature consistent with its photometric colors (assuming that it is a cluster member) and $v\sin i = 4.6 \pm 4.6~{\rm km/s}$, which is consistent with a rotation period of $18.0~{\rm days}$. Note that the $r$-band amplitude of the star, $0.024~{\rm mag}$, is an order of magnitude higher than the characteristic amplitude extrapolated to a Rossby number of $0.82$.
Standard rotation evolution models predict a strong convergence of stellar surface rotation rates for a wide range of initial rotation rates [e.g. @Sills.00]. There is no plausible initial rotation rate that would yield a period of $18~{\rm days}$ for V223, which is more than twice as long as the periods of similar stars on the main period-color sequence, at $\sim 500~{\rm Myr}$. To explain such a long rotation period the surface of the star would have to spin-down very rapidly relative to the other stars in the cluster. This may indicate that the internal angular momentum transport varies from star to star as predicted by some theoretical models involving magnetic angular momentum transport [e.g. @Charbonneau.93]. Alternatively the slow rotation of these stars might be explained by a threshold effect. For example, stars rotating faster than some threshold may have sufficient mixing to erase gravitational settling, while slower rotators cannot prevent a $\mu$ gradient from being established, and as a result experience core-envelope decoupling. The latter explanation would predict a bimodal distribution of rotation periods, while the former class of models might produce a continuous spectrum.
It is also possible that the variation is due to the evolution of permanently visible spots (e.g. on the pole of an inclined star), rather than the rotational modulation of spots. In that case the measured period corresponds to a spot evolution time-scale rather than to the rotation period.
Finally, we have combined our observations of M37 with previous observations of the Pleiades, NGC 2516, M34, the Hyades, and the Sun to perform a test of a simple theory of stellar angular momentum evolution which assumes rigid body rotation and an $N = 1.5$ wind-model including saturation for short periods. We find that the model provides a good fit to the data for stars with $0.5 < (B-V)_{0}
< 0.7$ ($0.99 M_{\odot} < M < 1.21 M_{\odot}$) or $1.1 < (B-V)_{0} <
1.5$ ($0.53 M_{\odot} < M < 0.76 M_{\odot}$), but for stars with $0.7
< (B-V)_{0} < 1.1$ ($0.76 M_{\odot} < M < 0.99 M_{\odot}$) the model fails to fit the Pleiades, M34, M37 and the Hyades simultaneously at the $2-4\sigma$ level. In this color-range, the best-fit models predict a greater degree of convergence in the rotation periods at the age of M37 than is observed. The models also under-predict the periods of the slowest rotators in M34. When using $(V-I_{C})_{0}$, the models for stars with $1.0 < (V-I_{C})_{0} < 2.0$ ($0.56 M_{\odot} < M < 0.82
M_{\odot}$) fit the M37 observations, but fail to fit the younger clusters. Taking the parameters from the best fit models at face value, we find that the saturation period increases with decreasing stellar mass, which is consistent with the results from previous studies (see I07).
Comparing our results to the survey of M37 by @Messina.08, we note that both groups have found that there does not appear to be significant spin-down between M34 and M37 for stars with $(B-V)_{0} \la 1.0$, while there is significant spin-down between M37 and the Hyades. While the rotation periods and photometry are independent for each group, both groups adopt the same fundamental parameters for the clusters. The results from the two surveys are thus not independent with respect to errors in these parameters. Our survey goes more than 2 magnitudes deeper than @Messina.08 (the faintest star with a measured period in the @Messina.08 survey has $V \sim 20$, while the faintest star in our survey has $V \sim 22.8$); as a result, we are able to study the rotation evolution for late K and early M dwarfs as well as late F, G and early K dwarfs (note the @Messina.08 survey has measured periods for early F stars that are saturated in our survey). We find that for later spectral-type stars there is a noticeable spin-down between the slowest rotators in M34 and M37.
The survey presented in this paper complements previous surveys of younger clusters. The recent results from the Monitor project, in particular, have provided a wealth of data for testing the rotation evolution of low-mass stars at ages between $5$ and $200~{\rm Myr}$ [@Irwin.06; @Irwin.07; @Irwin.08a; @Irwin.08b]. Studying stellar evolution at these young ages yields insight into processes such as disk regulation, which may be important for stellar and planetary formation theory, and internal differential rotation. Other processes, such as non-saturated angular momentum wind-loss, are more significant for the older cluster studied in this paper.
The discrepancies between this simple theory of stellar angular momentum evolution and the observed rotation periods of subsolar mass stars in clusters older than $\sim 100~{\rm Myr}$ may suggest that one or more of the assumptions in the theory is wrong. In particular, it may be necessary to relax the assumption that these stars rotate as rigid bodies, or to revise the assumed wind model. A detailed test of more complicated models is beyond the scope of this paper. Additional observations of the rotation periods of stars in clusters with ages $\ga 100~{\rm Myr}$ and a thorough treatment of all sources of observational errors are needed to map the period-age evolution in detail and understand the physical mechanisms behind angular momentum evolution.
We are grateful to C. Alcock for providing partial support for this project through his NSF grant (AST-0501681). Funding for M. Holman came from NASA Origins grant NNG06GH69G. We would like to thank G. Fürész and A. Szentgyorgyi for help in preparing the Hectochelle observations, S. Barnes, S. Saar, N. Brickhouse and S. Baliunas for helpful discussions, and the staff of the MMT, without whom this work would not have been possible. We are grateful to the anonymous referee for providing a thoughtful critique which improved the quality of this paper. We would also like to thank the MMT TAC for awarding us a significant amount of telescope time for this project. This research has made use of the WEBDA database, operated at the Institute for Astronomy of the University of Vienna; it has also made use of the SIMBAD database, operated at CDS, Strasbourg, France.
Alard, C. & Lupton, R. H. 1998, ApJ, 503, 325
Alard, C. 2000, A&AS, 144, 363
Allain, S., Fernandez, M., Martín, E. L, & Bouvier, J. 1996, A&A, 314, 173
An, D., Terndrup, D. M., Pinsonneault, M. H., Paulson, D. B., Hanson, R. B., & Stauffer, J. R. 2007, ApJ, 655, 233 (A07)
Andruk, V., Kharchenko, N., Schilbach, E., & Scholz, R.-D. 1995, AN, 316, 225
Argue, A. N. 1966, MNRAS, 133, 475
Barnes, J. R., Collier Cameron, A., Unruh, Y. C., Donati, J. F., & Hussain, G. A. J. 1998, MNRAS, 299, 904
Barnes, S. A. 1998, Ph.D. thesis, Yale University
Barnes, S. A., Sofia, S., Prosser, C. F., Stauffer, J. R. 1999, ApJ, 516, 263
Barnes, S. A. 2003, ApJ, 586, 464
Barnes, S. A. 2007, ApJ, in press, arXiv:0704.3068v2 \[astro-ph\]
Bessell, M. S., & Weis, E. W. 1987, PASP, 99, 642
Bouvier, J. 1997, MmSAI, 68, 881
Bouvier, J., Forestini, M., & Allain, S. 1997, A&A, 326, 1023
Brown, B. P., Browning, M. K., Brun, A. S., & Toomre, J. 2004, in Proceedings of the SOHO 14 / GONG 2004 Workshop (ESA SP-559). “Helio- and Asteroseismology: Towards a Golden Future”. Editor: D. Danesy., p. 341
Canterna, R., Crawford, D. L., & Perry, C. L. 1979, PASP, 91, 263
Carney, B. W., & Aaronson, M. 1979, AJ, 84, 867
Chaboyer, B., Demarque, P., & Pinsonneault, M. H. 1995, ApJ, 441, 865
Charbonneau, P., & MacGregor, K. B. 1993, ApJ, 417, 762
Cieza, L., & Baliber, N. 2006, ApJ, 649, 862
Clarke, D., MacDonald, E. C., & Owens, S. 2004, A&A, 415, 677
Crawford, D. L. 1975, AJ, 80, 955
Devor, J. 2005, ApJ, 628, 411
Dorren, J. D. 1987, ApJ, 320, 756
Efron, B., & Petrosian, V. 1992, ApJ, 399, 345
Efron, B., & Petrosian, V. 1999, J. Am. Stat. Assoc., 94, 824
Eggen, O. J. 1968, ApJS, 16, 49
Eggen, O. J. 1974, PASP, 86, 697
Girardi, L., Grebel, E. K., Odenkirchen, M., & Chiosi, C. 2004, A&A, 422, 205
Gough, D. O. 1990, in: Angular Momentum Evolution of Young Stars, eds. S. Catalano, J. R. Stauffer, 271
Gray, D. F. 1992, PASP, 104, 1035
Hartman, J. D., et al. 2008a, ApJ, 675, 1233.
Hartman, J. D., et al. 2008b, ApJ, 675, 1254.
Herbig, G. H., Vrba, F. J., & Rydgren, A. E. 1986, AJ, 91, 575
Herbst, W., Bailer-Jones, C. A. L., Mundt, R., Meisenheimer, K., & Wackermann, R. 2002, A&A, 396, 513
Howard, R., Gilman, P. A., & Gilman, P. I. 1984, ApJ, 283, 373
Iriarte, B. 1967, BOTT, 4, 79
Iriarte, B. 1969, BOTT, 5, 89
Iriarte, B. 1974, BITon, 1, 73
Irwin, J., Aigrain, S., Hodgkin, S., Irwin, M., Bouvier, J., Clarke, C., Hebb, L., & Moraux, E. 2006, MNRAS, 370, 954
Irwin, J., Hodgkin, S., Aigrain, S., Hebb, L., Bouvier, J., Clarke, C., Moraux, E., & Bramich, D. M. 2007, MNRAS, 377, 741 (I07)
Irwin, J., Hodgkin, S., Aigrain, S., Bouvier, J., Hebb, L., & Moraux, E. 2008a, MNRAS, 383, 1588
Irwin, J., Hodgkin, S., Aigrain, S., Bouvier, J., Hebb, L., Irwin, M., & Moraux, E. 2008b, MNRAS, 384, 675
Jeffries, R. D., Thurston, M. R., & Hambly, N. C. 2001, A&A, 375, 863 (J01)
Johnson, H. L., & Morgan, W. W. 1953, ApJ, 117, 313
Johnson, H. L., & Knuckles, C. F. 1955, ApJ, 122, 209
Johnson, H. L., & Mitchell, R. I. 1958, ApJ, 128, 31
Johnson, H. L., MacArthur, J. W., & Mitchell, R. I. 1968, ApJ, 152, 465
Jones, B. F. 1973, A&AS, 9, 313
Jones, B. F., & Prosser, C. F. 1996, AJ, 111, 1193
Kalirai, J. S., Ventura, P., Richer, H. B., Fahlman, G. G., Durrell, P. R., D’Antona, F., & Marconi, G. 2001, AJ, 122, 3239
Kawaler, S. D. 1988, ApJ, 333, 236
Kiss, L. L., Szabó, Gy. M., Sziládi, K., Furész, G., Sárneczky, K., & Csák, B. 2001, A&A, 376, 561
Kitchatinov, L. L. 2005, Physics - Uspekhi 48 (5), 449
Königl, A. 1991, ApJ, 370, L39
Kovács, G., Zucker, S., & Mazeh, T. 2002, A&A, 391, 369
Kovács, G., Bakos, G., & Noyes, R. W. 2005, MNRAS, 356, 557
Krishnamurthi, A., et al. 1997, ApJ, 480, 303
Krishnamurthi, A., et al. 1998, ApJ, 493, 914
Kurtz, M. J., & Mink, D. J. 1998, PASP, 110, 934
Kurucz, R. L. 1993, ASPC, 44, 87
Lamm, M. H., Mundt, R., Bailer-Jones, C. A. L., & Herbst, W. 2005, A&A, 430, 1005
Landolt, A. U. 1979, ApJ, 231, 468
Lomb, N. R. 1976, Ap&SS, 39, 447
López-Morales, M., & Ribas, I. 2005, ApJ, 631, 1120L
López-Morales, M. 2007, ApJ, 660, 732L
Magnitskii, A. K. 1987, Soviet Astron. Lett., 13, 451
Marilli, E., Catalano, S., & Frasca, A. 1997, MmSAI, 68, 895
Martín, E. L., & Zapatero Osorio, M. R. 1997, MNRAS, 286, L17
McLeod, B. A., Conroy, M., Gauron, T. M., Geary, J. C., & Ordway, M. P. 2000, in Proc. International Conference on Scientific Optical Imaging, Further Developments in Scientific Optical Imaging, ed. M. Bonner Denton (Cambridge: Royal Soc. Chemistry), 11
Mendoza, E. E. 1967, BOTT, 4, 149
Mendoza, E. E. 1968, PDAUC, 1, 106
Mermilliod, J.-C., Huestamendia, G., del Rio, G., & Mayor, M. 1996, A&A, 307, 80
Messina, S., Rodonò, M., & Guinan, E. F. 2001, A&A, 366, 215
Messina, S. 2001, A&A, 371, 1024
Messina, S., Distefano, E., Parihar, P., Kang, Y. B., Kim, S.-L., Rey, S.-C., & Lee, C.-U. 2008, A&A, in press (arXiv:0803.1134)
Meynet, G., Mermilliod, J.-C., & Maeder, A. 1993, A&AS, 98, 477
Morales, J. C., Ribas, I., & Jordi, C. 2007, A&A, in press (arXiv:0711.3523v1)
Nelder, J. A., & Mead, R. 1965, Computer Journal, 7, 308
Nilakshi, & Sagar, R. 2002, A&A, 381, 65
Noyes, R. W., Hartmann, L., Baliunas, S. L., Duncan, D. K., & Vaughan, A. H. 1984, ApJ, 279, 763
O’Dell, M. A., & Collier Cameron, A. 1993, MNRAS, 262, 521
O’Dell, M. A., Hendry, M. A., & Collier Cameron, A. 1994, MNRAS, 268, 181
O’Dell, M. A., Hilditch, R. W., Collier Cameron, A., & Bell, S. A. 1997, MNRAS, 284, 874
Patten, B. M., & Simon, T. 1996, ApJS, 106, 489
Perryman, M. A. C., et al. 1998, A&A, 331, 81
Pinsonneault, M. H. 1997, ARA&A, 35, 557
Pinsonneault, M. H., Terndrup, D. M., Hanson, R. B., & Stauffer, J. R. 2004, ApJ, 600, 946
Press, W. H., & Rybicki, G. G. 1989, ApJ, 338, 277
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in C, second edition (Cambridge University Press, New York, NY) 575
Prosser, C. F. 1991, PhD Thesis, University of California, Santa Cruz
Prosser, C. F., Stauffer, J. R., & Kraft, R. P. 1991, AJ, 101, 1361
Prosser, C. F., Schild, R. E., Stauffer, J. R., & Jones, B. F. 1993a, PASP, 105, 269
Prosser, C. F., et al. 1993, PASP, 105, 1407
Prosser, C. F., et al. 1995, PASP, 107, 211
Prosser, C. F., & Randich, S. 1998, AN, 319, 201
Prosser, C. F., Randich, S., & Simon, T. 1998, AN, 319, 215
Radick, R. R., Thompson, D. T., Lockwood, G. W., Duncan, D. K., & Baggett, W. E. 1987, ApJ, 321, 459
Radick, R. R., Skiff, B. A., & Lockwood, G. W. 1990, ApJ, 353, 524
Radick, R. R., Lockwood, G. W., Skiff, B. A., & Thompson, D. T. 1995, ApJ, 452, 332
Reid, N. 1993, MNRAS, 265, 785
Ribas, I. 2003, A&A, 398, 239
Ribas, I. 2006, Ap&SS, 304, 89
Robinson, E. L., & Kraft, R. P. 1974, AJ, 79, 698
Scargle, J. D. 1982, ApJ, 263, 835
Scholz, A., & Eislöffel, J. 2004, A&A, 421, 259
Scholz, A., & Eislöffel, J. 2007, MNRAS, in publication (arXiv:0708.2274)
Schuler, S. C., King, J. R., Fischer, D. A., Soderblom, D. R., & Jones, B. F. 2003, AJ, 125, 2085
Schwarzenberg-Czerny, A. 1989, MNRAS, 241, 153
Schwarzenberg-Czerny, A. 1996, ApJ, 460, 107L
Shu, F., Najita, J., Ostriker, E., Wilkin, F., Ruden, S., & Lizano, S. 1994, ApJ, 429, 781
Shu, F. H., Najita, J. R., Shang, H., & Li, Z.-Y. 2000, in Protostars and Planets IV, ed. V. Mannings et al. (Tucson: Univ. of Arizona), 789
Sills, A., Pinsonneault, M. H., & Terndrup, D. M. 2000, ApJ, 534, 335
Skrutskie, M. F., et al. 2006, AJ, 131, 1163
Skumanich, A. 1972, ApJ, 171, 565
Soderblom, D. R., Stauffer, J. R., MacGregor, K. B., & Jones, B. F. 1993, ApJ, 409, 624
Stassun, K. G., & Terndrup, D. 2003, PASP, 115, 505
Stauffer, J. R. 1980, AJ, 85, 1341
Stauffer, J. R. 1982(a), AJ, 87, 899
Stauffer, J. R. 1982(b), AJ, 87, 1507
Stauffer, J. R. 1984, ApJ, 280, 189
Stauffer, J. R., Hartmann, L. W., Burnham, J. N., & Jones, B. F. 1985, ApJ, 289, 247
Stauffer, J. R., Schild, R. A., Baliunas, S. L., & Africano, J. L. 1987, PASP, 99, 471
Stauffer, J. R., & Hartmann, L. W. 1987, ApJ, 318, 337
Stauffer, J. R., Hartmann, L. W., & Jones, B. F. 1989, ApJ, 346, 160
Stauffer, J. R., Schild, R., Barrado y Navascues, D., Backman, D. E., Angelova, A. M., Kirkpatrik, J. D., Hambly, N., & Vanzi, L. 1998, ApJ, 504, 805
Stauffer, J. R., Jones, B. F., Backman, D., Hartmann, L. W., Barrado y Navascués, D., Pinsonneault, M. H., Terndrup, D. M., & Muench, A. A. 2003, AJ, 126, 833 (S03)
Sturch, C. 1972, PASP, 84, 666
Sung, H., Bessell, M. S., Lee, B.-W., & Lee, S.-G. 2002, AJ, 123, 290
Szentgyorgyi, A. H., Cheimets, P., Eng, R., Fabricant, D. G., Geary, J. C., Hartmann, L., Pieri, M. R., & Roll, J. B. 1998, in Proc. SPIE 3355, Optical Astronomical Instrumentation, ed. S. D’Odorico, 242-252
Taylor, B. J. 1980, AJ, 85, 242
Taylor, P. O. 1989, JAVSO, 18, 65
Terndrup, D. M., Krishnamurthi, A., Pinsonneault, M. H., & Stauffer, J. R., 1999, AJ, 118, 1814
Terndrup, D. M., Pinsonneault, M. H., Jeffries, R. D., Ford, A., Stauffer, J. R., & Sills, A. 2002, ApJ, 576, 950
Torres, G., & Ribas, I. 2002, ApJ, 567, 1140
Torres, G., Lacy, C. H., Marschall, L. A., Sheets, H. A., & Mader, J. A. 2006, ApJ, 640, 1018
Torres, G. 2007, ApJ, 671L, 65
Tsai, W. 1990, Biometrika, 77, 169
Upgren, A. R., & Weis, E. W. 1977, AJ, 82, 978
Upgren, A. R, Weis, E. W., & Hanson, R. B. 1985, AJ, 90, 2039
van Altena, W. F. 1969, AJ, 74, 2
van Hamme, W., & Wilson, R. E. 2003, ASPC, 298, 323
Van Leeuwen, F., & Alphenaar, P. 1982, ESO Messenger, 28, 15
Van Leeuwen, F., Alphenaar, P., & Meys, J. J. M. 1987, A&AS, 67, 483
Walker, G. A. H., et al. 2007, ApJ, 659, 1611
Webber, E. D., & Davis, L. 1967, ApJ, 148, 217
Weis, E. W., & Upgren, A. R. 1982, PASP, 94, 475
Weis, E. W., & Hanson, R. B. 1988, AJ, 96, 148
Wilson, R. E., & Devinney, E. J. 1971, ApJ, 166, 605
[rrrrr]{} 4.0 & 0.012 & 0.027 & 0.032 & 0.034\
5.0 & 0.017 & 0.046 & 0.035 & 0.049\
6.0 & 0.016 & 0.036 & 0.034 & 0.046\
7.0 & 0.022 & 0.045 & 0.046 & 0.062\
8.0 & 0.035 & 0.062 & 0.066 & 0.073\
9.0 & 0.039 & 0.107 & 0.069 & 0.094\
10.0 & 0.066 & 0.147 & 0.059 & 0.102\
11.0 & 0.075 & 0.069 & 0.061 & 0.060\
\[tab:coloruncertainties\]
[llll]{} $a_{lin}$ & 10.81 days/mag & 0.35 & 0.14\
$b_{lin}$ & -1.36 days & 0.35 & 0.12\
$a_{bpl}$ & 4.752 days & 0.031 & 0.021\
$b_{bpl}$ & -20.9 & 4.0 & 2.0\
\[tab:M37sequencefit\]
[rrrrrrrr]{} 0.60 & 33 & 1.07 & 10.77 & 39.51 & 9.28 & 5.11 & 16.62\
0.80 & 51 & 0.54 & 1.69 & 3.46 & 2.58 & 1.65 & 3.25\
1.00 & 61 & 0.77 & 2.22 & 6.73 & 4.34 & 2.34 & 7.38\
1.20 & 35 & 0.72 & 2.90 & 7.92 & 5.52 & 2.71 & 7.11\
1.40 & 57 & 1.36 & 1.57 & 3.03 & 3.03 & 1.80 & 4.24\
1.60 & 7 & 0.69 & 2.72 & 3.21 & 2.79 & 1.93 & 1.73\
\[tab:M37sequence\]
[rrrrrrrrrr]{} $[0.5, 0.7]$ & $[0.99, 1.21]$ & $0.64 (0.06)$ & $ 2.71 (0.50)$ & $ 4.26 (0.18)$ & 5.82 & $ 3.25 (0.18)$ & 2.39 & 4 & 0.43\
$[0.7, 0.9]$ & $[0.86, 0.99]$ & $0.13 (0.04)$ & $ 3.18 (0.84)$ & $ 2.28 (0.27)$ & 10.89 & $ 3.65 (0.18)$ & 12.21 & 4 & 2.41\
$[0.9, 1.1]$ & $[0.76, 0.86]$ & $0.16 (0.05)$ & $ 5.06 (1.01)$ & $ 2.85 (0.32)$ & 8.70 & $ 3.24 (0.27)$ & 21.75 & 4 & 3.69\
$[1.1, 1.3]$ & $[0.68, 0.76]$ & $0.24 (0.05)$ & $ 4.48 (0.74)$ & $ 5.06 (1.27)$ & 4.90 & $ 3.16 (0.36)$ & 1.19 & 2 & 0.59\
$[1.3, 1.5]$ & $[0.53, 0.68]$ & $0.32 (0.05)$ & $ 6.76 (0.74)$ & $12.15 (1.85)$ & 2.04 & $ 5.35 (1.09)$ & 0.31 & 2 & 0.18\
$[0.5, 0.7]$ & $[0.99, 1.21]$ & $0.23 (0.05)$ & $ 2.07 (0.55)$ & $ 3.41 (0.15)$ & 7.26 & $ 3.29 (0.18)$ & 1.07 & 2 & 0.54\
$[0.5, 1.0]$ & $[0.82, 1.35]$ & $0.63 (0.11)$ & $ 4.55 (0.99)$ & $ 3.56 (0.34)$ & 6.97 & $ 2.85 (0.16)$ & 2.96 & 6 & 0.24\
$[1.0, 1.5]$ & $[0.67, 0.82]$ & $0.26 (0.06)$ & $ 5.16 (1.24)$ & $ 4.25 (0.48)$ & 5.83 & $ 3.11 (0.44)$ & 6.59 & 2 & 2.08\
$[1.5, 2.0]$ & $[0.56, 0.67]$ & $0.58 (0.16)$ & $ 4.39 (0.47)$ & $12.02 (2.08)$ & 2.06 & $ 8.20 (2.37)$ & 68.21 & 4 & 7.52\
$[2.0, 2.5]$ & $[0.24, 0.56]$ & $0.29 (0.05)$ & $ 7.28 (1.06)$ & $15.67 (3.20)$ & 1.58 & $ 3.16 (1.59)$ & 3.99 & 2 & 1.49\
$[0.5, 1.0]$ & $[0.82, 1.35]$ & $0.42 (0.18)$ & $ 4.09 (1.32)$ & $ 3.22 (0.40)$ & 7.69 & $ 2.89 (0.17)$ & 2.35 & 2 & 1.02\
\[tab:rotparams\]
[^1]: Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.
[^2]: [<span style="font-variant:small-caps;">Iraf</span>]{} is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under agreement with the National Science Foundation.
[^3]: http://www.univie.ac.at/webda/webda.html
---
abstract: |
In this paper we give a detailed analysis of deterministic and randomized algorithms that enumerate any number of irreducible polynomials of degree $n$ over a finite field and their roots in the extension field in quasilinear[^1] time cost per element.
Our algorithm is based on an improved algorithm for enumerating all the Lyndon words of length $n$ in linear delay time and the known reduction of Lyndon words to irreducible polynomials.
author:
- |
[**Nader H. Bshouty**]{}\
Dept. of Computer Science\
Technion\
Haifa, 32000
- |
[**Nuha Diab**]{}\
Sisters of Nazareth High School\
Grade 12\
P.O.B. 9422, Haifa, 35661
- |
[**Shada R. Kawar**]{}\
Nazareth Baptist High School\
Grade 11\
P.O.B. 20, Nazareth, 16000
- |
[**Robert J. Shahla**]{}\
Sisters of Nazareth High School\
Grade 11\
P.O.B. 9422, Haifa, 35661
title: |
Enumerating all the Irreducible Polynomials\
over Finite Field
---
Introduction
============
The problem of enumerating the strings in a language $L$ is to list all the elements in $L$ in some order. Several papers study this problem. For example, enumerating all spanning trees, [@KR00], minimal transversals for some geometric hypergraphs, [@EMR09], maximal cliques, [@MU04], ordered trees, [@E85], certain cuts in graphs, [@VY92; @YWS10], paths in a graph, [@S95], bipartite perfect matchings, [@U01], maximum and maximal matchings in bipartite graphs, [@U97], and directed spanning trees in a directed graph [@U96]. See the list in [@FM] for other enumeration problems.
One of the challenges in enumeration problems is to find an order of the elements of $L$ such that finding the next element in that order can be done in quasilinear time in the length of the representation of the element. The time that the algorithm takes before giving the first element is called the [*preprocessing time*]{}. The time of finding the next element is called the [*delay time*]{}. In [@AS09], Ackerman and Shallit gave a linear preprocessing and delay time for enumerating the words of any regular language (expressed as a regular expression or NFA) in lexicographic order.
Enumeration is also of interest to mathematicians without addressing the time complexity. Calkin and Wilf,[@CW00], gave an enumeration of all the rational numbers such that the denominator of each fraction is the numerator of the next one.
Another problem that has received considerable attention is the problem of ranking the elements of $L$. In ranking the goal is to find some total order on the elements of $L$ where the problem of returning the $n$th element in that order can be solved in polynomial time. Obviously, polynomial time ranking implies polynomial time enumeration. In the literature, the problem of ranking is already solved for permutations [@MR01; @T08] and trees of special properties [@GLW82; @L86; @P86; @RW11; @WC11; @WCW11; @WCC11; @AKN11; @WCCL13; @WCCK13]. Those also give enumerating algorithms for such objects.
Let ${\mathbb{F}}_q$ be a finite field with $q$ elements. Let $P_{n,q}$ be the set of irreducible polynomials over ${\mathbb{F}}_q$ of degree $n$ and their roots in ${\mathbb{F}}_{q^n}$. Several algorithms in the literature use irreducible polynomials of degree $n$ over finite fields, especially algorithms in coding theory, cryptography and problems that use the Chinese Remainder Theorem for polynomials [@C71; @LSW; @B15; @DPS]. Some other algorithms use only the roots of those polynomials. See for example [@B15].
In this paper, we study the following problems
1.  Enumeration of any number of irreducible polynomials of degree $n$ over a finite field.
2. Enumeration of any number of irreducible polynomials of degree $n$ and their roots over the extended field.
3. Enumeration of any number of roots of irreducible polynomials of degree $n$ over the extended field. One root for each polynomial.
There are many papers in the literature that mention the result of enumerating all the irreducible polynomials of degree [*less than or equal*]{} to $n$ but do not give the exact algebraic complexity of this problem [@CR00; @D88; @RS92; @FK86; @FM78; @KRR15]. In this paper we give a detailed analysis of deterministic and randomized algorithms that enumerate any number of irreducible polynomials of degree $n$ over a finite field and/or their roots in the extension field in quasilinear[^2] time cost per element.
Our algorithm is based on an improved algorithm for enumerating all the Lyndon words of length $n$ in linear delay time and the well known reduction of Lyndon words to irreducible polynomials. In the next subsection we define the Lyndon word and present the result of the improved algorithm.
The Enumeration of Lyndon Words
-------------------------------
Let $<$ be any total order on ${\mathbb{F}}_q$. A [*Lyndon word*]{} (or string) over ${\mathbb{F}}_q$ of length $n$ is a word $w=w_1\cdots w_n\in {\mathbb{F}}_q^n$ where every rotation $w_i\cdots w_nw_1\cdots w_{i-1}$, $i\not=1$ of $w$ is lexicographically larger than $w$. Let $L_{n,q}$ be the set of all the Lyndon words over ${\mathbb{F}}_q$ of length $n$. In many papers in the literature, it is shown that there is a polynomial-time (in $n$) computable bijective function $\phi: L_{n,q}\to P_{n,q}$, where $P_{n,q}$ is the set of all irreducible polynomials of degree $n$ over ${\mathbb{F}}_q$. So the enumeration problem of the irreducible polynomials can be reduced to the problem of enumerating the elements of $L_{n,q}$.
Bshouty gave in [@B15] a large subset $L'\subseteq L_{n,q}$ where any number of words in $L'$ can be enumerated in a linear delay time. In fact, one can show that $L'$ has a small DFA and, therefore, this result follows from [@CW00]. It is easy to show that the set $L_{n,q}$ cannot be accepted by a small size NFA, i.e., size polynomial in $n$, so one cannot generalize the above result to all $L_{n,q}$. Duval [@D88] and Fredricksen et al. [@FK86; @FM78] gave enumeration algorithms of all the words in $\cup_{m\le n}L_{m,q}$ that run in linear delay time. Berstel and Pocchiola in [@BP94] and Cattell et al. in [@CR00; @RS92] show that, in Duval’s algorithm, in order to find the next Lyndon word in $\cup_{m\le n}L_{m,q}$, the amortized number of [*updates*]{} is constant. The number of updates is the number of symbols that the algorithm changes in a Lyndon word in order to get the next word. Such an algorithm is called a CAT algorithm. See the references in [@CR00] for other CAT algorithms. Kociumaka et al. gave an algorithm that finds the rank of a Lyndon word in $O(n^2\log q)$ time and does unranking in $O(n^3\log^2 q)$ time.
In this paper, we give an enumeration algorithm of $L_{n,q}$ with linear delay time. Our algorithm is the same as Duval’s algorithm with the addition of a simple data structure. We show that this data structure enables us to find the next Lyndon word of [*length $n$*]{} with a constant number of updates per symbol and therefore in linear time. We also show that our algorithm is a CAT algorithm and give an upper bound for the amortized update cost.
Another problem is testing whether a word of length $n$ is a Lyndon word. In [@D83], Duval gave a linear time algorithm for such a test. In this paper we give a simple algorithm that uses the suffix trie data structure and runs in linear time.
This paper is organized as follows. In Section 2 we give the exact arithmetic complexity of the preprocessing and delay time for enumerating any number of irreducible polynomials and/or their roots. In Section 3 we give a simple data structure that enables us to change Duval’s algorithm to an algorithm that enumerates all the Lyndon words of length $n$ in linear delay time. We then show in Section 4 that the algorithm is a CAT algorithm and give an upper bound for the amortized update cost. In Section 5 we give a simple linear time algorithm that tests whether a word is a Lyndon word.
Enumerating Irreducible Polynomials
===================================
In this section we give the analysis for the algebraic complexity of the preprocessing time and delay time of enumerating irreducible polynomials of degree $n$ over a finite field and/or their roots in the extended field.
Let $q$ be a power of a prime $p$ and ${\mathbb{F}}_{q}$ be the finite field with $q$ elements. Our goal is to enumerate all the irreducible polynomials of degree $n$ over ${\mathbb{F}}_q$ and/or their roots in the extension field ${\mathbb{F}}_{q^n}$.
The best deterministic algorithm for constructing an irreducible polynomial over ${\mathbb{F}}_q$ of degree $n$ has time complexity $T_D:=O(p^{1/2+\epsilon}n^{3+\epsilon}+(\log q)^{2+\epsilon}n^{4+\epsilon})$ for any $\epsilon>0$. The best randomized algorithm has time complexity $T_R:=O((\log n)^{2+\epsilon} n^{2}+(\log q) (\log n)^{1+\epsilon} n)$ for any $\epsilon>0$. For a comprehensive survey of this problem see [@S99] Chapter 3. Obviously, the preprocessing time for enumerating irreducible polynomials cannot be less than the time for constructing one. Therefore, the preprocessing time is at least $T_D$ for the deterministic algorithm and at least $T_R$ for the randomized algorithm.
The main idea of the enumeration algorithm is to enumerate the roots of the irreducible polynomials in the extension field and then construct the polynomials from their roots. Let ${\mathbb{F}}_{q^n}$ be the extension field of ${\mathbb{F}}_q$ of size $q^n$. One possible representation of the elements of the field ${\mathbb{F}}_{q^n}$ is by polynomials of degree at most $n-1$ in ${\mathbb{F}}_q[\beta]/(f(\beta))$ where $f(x)$ is an irreducible polynomial of degree $n$. A [*normal basis*]{} of ${\mathbb{F}}_{q^n}$ is a basis over ${\mathbb{F}}_q$ of the form $N(\alpha):=\{\alpha,\alpha^q,\alpha^{q^2},\ldots,\alpha^{q^{n-1}}\}$ for some $\alpha\in {\mathbb{F}}_{q^n}$ where $N(\alpha)$ is linearly independent. The [*normal basis theorem*]{} states that for every finite field ${\mathbb{F}}_{q^n}$ there is a normal basis $N(\alpha)$. That is, an $\alpha$ for which $N(\alpha)$ is linearly independent over ${\mathbb{F}}_q$. It is known that such an $\alpha$ can be constructed in deterministic time $O(n^3+(\log n)(\log\log n)(\log q) n)$ and randomized time $O((\log \log n)^2(\log n)^4 n^2 + (\log n)(\log \log n)(\log q) n )$ [@GS; @Nor01; @Nor02]. The enumeration algorithm will use the normal basis for representing the elements of ${\mathbb{F}}_{q^n}$. Notice that the time complexity to find such an element $\alpha$ is less than constructing one irreducible polynomial. If we use the normal basis $N(\alpha)$ for the representation of the elements of ${\mathbb{F}}_{q^n}$, then every element $\gamma\in {\mathbb{F}}_{q^n}$ has a unique representation $\gamma=\lambda_1\alpha+\lambda_2\alpha^{q}+\lambda_3\alpha^{q^2}+\cdots+\lambda_n\alpha^{q^{n-1}}$ where $\lambda_i\in {\mathbb{F}}_q$ for all $i$.
It is known that any irreducible polynomial $g$ of degree $n$ over ${\mathbb{F}}_q$ has $n$ distinct roots in ${\mathbb{F}}_{q^n}$. If one can find one root $\gamma\in {\mathbb{F}}_{q^n}$ of $g$ then the other roots are $\gamma^q,\gamma^{q^2},\ldots,\gamma^{q^{n-1}}$ and therefore $g_\gamma(x):=(x-\gamma)(x-\gamma^q)\cdots (x-\gamma^{q^{n-1}})=g(x)$. The coefficients of $g_\gamma(x)$ can be computed in quadratic time $O(n^2\log^3 n(\log\log n)^2)$. See Theorem A and B in [@S99] and references within. The element $\gamma=\lambda_1\alpha+\lambda_2\alpha^{q}+\lambda_3\alpha^{q^2}+\cdots+\lambda_n\alpha^{q^{n-1}}$ is a root of an irreducible polynomial of degree $n$ if and only if $\gamma,\gamma^{q},\gamma^{q^2},\ldots,\gamma^{q^{n-1}}$ are distinct. Now since $$\begin{aligned}
\gamma^{q^{n-k+1}}=\lambda_k\alpha+\lambda_{k+1}\alpha^q+\cdots+
\lambda_n\alpha^{q^{n-k}}+\lambda_1\alpha^{q^{n-k+1}}+\cdots+\lambda_{k-1}\alpha^{q^{n-1}},
\label{pk}\end{aligned}$$ $\gamma$ is a root of an irreducible polynomial of degree $n$ if and only if the following $n$ elements $$(\lambda_1,\lambda_2,\lambda_3,\cdots,\lambda_n), (\lambda_2,\lambda_3,\lambda_4,\cdots,\lambda_n,\lambda_1)
,(\lambda_3,\lambda_4,\lambda_5,\cdots,\lambda_n,\lambda_1,\lambda_2),\cdots,$$ $$\begin{aligned}
(\lambda_n,\lambda_1,\lambda_2,\cdots,\lambda_{n-1})\label{ddd}\end{aligned}$$ are distinct.
When (\[ddd\]) holds we call $\lambda=(\lambda_1,\lambda_2,\lambda_3,\cdots,\lambda_n)$ an [*aperiodic word*]{}. We will write $\lambda$ as a word $\lambda=\lambda_1\lambda_2\lambda_3\cdots\lambda_n$ and define $\gamma(\lambda):=\lambda_1\alpha+\lambda_2\alpha^{q}+\lambda_3\alpha^{q^2}+\cdots+\lambda_n\alpha^{q^{n-1}}$. Therefore
We have
1. For any word $\lambda=\lambda_1\cdots\lambda_n\in {\mathbb{F}}_q^n$ the element $\gamma(\lambda)$ is a root of an irreducible polynomial of degree $n$ if and only if $\lambda$ is an aperiodic word.
2. Given an aperiodic word $\lambda$, the irreducible polynomial $g_{\gamma(\lambda)}$ can be constructed in time[^3] $O((\log\log n)^2(\log n)^3 n^2)=\tilde O(n^2)$.
Obviously, the aperiodic word $\lambda=\lambda_1\lambda_2\lambda_3 \cdots\lambda_n$ and $R_k(\lambda):=\lambda_k\lambda_{k+1}$ $\cdots\lambda_n\lambda_1$ $\cdots\lambda_{k-1}$ correspond to the same irreducible polynomial. See (\[pk\]). That is, $g_{\gamma(\lambda)}=g_{\gamma(R_i(\lambda))}$ for any $1\le i\le n$. Therefore, to avoid enumerating the same polynomial more than once, the algorithm enumerates only the minimum element (in lexicographic order) among $\lambda,R_2(\lambda),\ldots,R_{n}(\lambda)$. Such an element is called a [*Lyndon word*]{}. Therefore
The word $\lambda=\lambda_1\lambda_2\lambda_3\cdots\lambda_n$ is called a Lyndon word if $\lambda<R_i(\lambda)$ for all $i=2,\ldots,n$.
To enumerate all the irreducible polynomials the algorithm enumerates all the Lyndon words of length $n$ and, for each one, it computes the corresponding irreducible polynomial.
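As a sanity check on this reduction, the following brute-force sketch (not the enumeration algorithm of the next section) verifies for small $n$ and $q$ that the number of Lyndon words of length $n$ equals the number of monic irreducible polynomials of degree $n$ over ${\mathbb{F}}_q$ given by Gauss’s Möbius-sum formula; the encoding of words as tuples over $\{0,\ldots,q-1\}$ is our own illustrative choice.

```python
from itertools import product

def is_lyndon(w):
    """Naive check of the definition: w is strictly smaller than all its rotations."""
    return all(w < w[i:] + w[:i] for i in range(1, len(w)))

def lyndon_words(n, q):
    """All Lyndon words of length n over {0,...,q-1} (brute force; small n and q only)."""
    return [w for w in product(range(q), repeat=n) if is_lyndon(w)]

def mobius(m):
    """Moebius function mu(m) by trial division."""
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def num_irreducible(n, q):
    """Number of monic irreducible polynomials of degree n over F_q (Gauss's formula)."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

for n, q in [(3, 2), (4, 2), (6, 2), (3, 3), (4, 5)]:
    assert len(lyndon_words(n, q)) == num_irreducible(n, q)
```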
In the next section, we show how to enumerate all the Lyndon words of length $n$ in linear delay time $O(n)$. Then from $\gamma(\lambda)$ (that corresponds to an irreducible polynomial) the algorithm constructs the irreducible polynomial $g_{\gamma(\lambda)}(x)$ and all the other $n-1$ roots in quadratic time $\tilde O(n^2)$. Since the size of all the roots is $O(n^2)$, this complexity is quasilinear in the output size. For the problem of enumerating only the roots (one root for each irreducible polynomial) the delay time is $O(n)$.
Let $L_{n,q}$ be the set of all Lyndon words over ${\mathbb{F}}_q$ of length $n$. We have shown how to reduce our problem to the problem of enumerating all the Lyndon words over ${\mathbb{F}}_q$ of length $n$ with linear delay time. Algorithm “Enumerate” in Figure \[Alg\] shows the reduction.
Putting all the above algebraic complexities together, we get the following
Let $\epsilon>0$ be any constant. There is a randomized enumeration algorithm for
1.  the irreducible polynomials of degree $n$ over ${\mathbb{F}}_q$ and their roots in ${\mathbb{F}}_{q^n}$ in preprocessing time $O((\log n)^4(\log\log n)^2 n^2+(\log q)(\log n)^{1+\epsilon}n)$ and delay time $O((\log\log n)^2(\log n)^3n^2)$.
2. the roots in ${\mathbb{F}}_{q^n}$ of irreducible polynomials of degree $n$ over ${\mathbb{F}}_q$ in preprocessing time $O((\log n)^4(\log\log n)^2 n^2+(\log q)(\log n)^{1+\epsilon}n)$ and delay time $O(n)$.
Let $\epsilon>0$ be any constant. There is a deterministic enumeration algorithm for
1.  the irreducible polynomials of degree $n$ over ${\mathbb{F}}_q$ and their roots in ${\mathbb{F}}_{q^n}$ in preprocessing time $O(n^{3+\epsilon}p^{1/2+\epsilon}+(\log q)^{2+\epsilon}n^{4+\epsilon})$ and delay time $O((\log\log n)^2$ $(\log n)^3n^2)$.
2. the roots in ${\mathbb{F}}_{q^n}$ of irreducible polynomials of degree $n$ over ${\mathbb{F}}_q$ in preprocessing time $O(n^{3+\epsilon}p^{1/2+\epsilon}+(\log q)^{2+\epsilon}n^{4+\epsilon})$ and delay time $O(n)$.
Linear Delay Time for Enumerating $L_{n,q}$
===========================================
In this section we give Duval’s algorithm, [@D88], that enumerates all the Lyndon words of length at most $n$, $\cup_{m\le n} L_{m,q}$, in linear delay time and change it to an algorithm that enumerates the Lyndon words of length $n$, $L_{n,q}$ in linear time. We will use a simple data structure that enable the algorithm to give the next Lyndon word of length $n$ in Duval’s algorithm in a constant update per symbol and therefore in linear time.
Let $\Sigma=\{0,1,\ldots , q-1\}$ be the alphabet with the order $0<1<\cdots <q-1$. We here identify ${\mathbb{F}}_q$ with $\Sigma$. We will sometimes write the symbols in brackets. For example for $q=5$ the word $[q-1]^2[q-3]$ is $442$. Let $w=\sigma_1\sigma_2\cdots\sigma_m$ be a Lyndon word for some $m\le n$. To find the next Lyndon word (of length $\le n$), Duval’s algorithm first defines the word $v=D(w)=w^hw'$ of length $n$ where $w'$ is a non-empty prefix of $w$ and $h\ge 0$ (and therefore $h|w|+|w'|=n$). That is, $v=D(w)=\sigma_1\cdots\sigma_m\sigma_1\cdots\sigma_m\cdots \sigma_1\cdots\sigma_m\sigma_1\cdots\sigma_{(n\mod m)}$. Then if $v$ is of the form $v=ub[q-1]^t$ where $t\ge 0$ and $b\not= [q-1]$ then the next Lyndon word in Duval’s algorithm is $P(v)=u[b+1]$. We denote the next Lyndon word of $w$ (in Duval’s algorithm) by $N(w):=P(D(w))$. For example, for $q=3$, $n=7$ and $w=0222$, $v=D(w)=0222022$ and $N(w)=P(D(w))=02221$. Then $N(N(w))=022211$.
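A minimal sketch of the maps $D$, $P$ and $N$ just described, operating on explicit words (encoded here as tuples of symbols, an illustrative choice) rather than on the compressed representation introduced below, and reproducing the example in the text:

```python
def D(w, n):
    """Extend the Lyndon word w periodically to length n."""
    return tuple(w[i % len(w)] for i in range(n))

def P(v, q):
    """Drop the trailing (q-1)-run of v and increment the last remaining symbol.
    Assumes v is not the all-(q-1) word, so such a symbol exists."""
    r = len(v)
    while v[r - 1] == q - 1:
        r -= 1
    return v[:r - 1] + (v[r - 1] + 1,)

def N(w, n, q):
    """The next Lyndon word (of length at most n) after w in Duval's order."""
    return P(D(w, n), q)

# The example from the text with q = 3 and n = 7:
w = (0, 2, 2, 2)
assert D(w, 7) == (0, 2, 2, 2, 0, 2, 2)
assert N(w, 7, 3) == (0, 2, 2, 2, 1)
assert N(N(w, 7, 3), 7, 3) == (0, 2, 2, 2, 1, 1)
```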
The following lemma is well known. We give the proof for completeness
\[increase\] If $w$ is a Lyndon word and $|w|<n$ then $|N(w)|>|w|$.
Let $w=ub[q-1]^t$ where $b\not= [q-1]$. Then $u_1\le b$ because otherwise we would have $R_{|u|+1}(w)=b[q-1]^tu<ub[q-1]^t=w$ and then $w$ is not a Lyndon word. Let $D(w)=w^hw'$ where $h\ge 0$ and $w'$ is a nonempty prefix of $w$. Since $|D(w)|=n>|w|$ we have $h\ge 1$. Since $w'_1=u_1\le b<q-1$, we have that $|N(w)|=|P(D(w))|\ge h|w|+1>|w|.$
The Algorithm
-------------
In this subsection we give the data structure and the algorithm that finds the next Lyndon word of length $n$ in linear time.
We note here that, in the literature, the data structure that is used for the Lyndon word is an array of symbols. All the analyses of the algorithms in the literature treat an access to an element in an $n$ element array and comparing it to another symbol as an operation of time complexity equal to $1$. The complexity of incrementing/decrementing an index $0\le i\le n$ of an array of length $n$ and comparing two such indices are not included in the complexity. In this paper, the Lyndon words are represented with symbols and [*numbers*]{} in the range $[1,n]$. Every access to an element in this data structure and every comparison between two elements are (as in the literature) counted as an operation of time complexity equal to $1$. Operations that are done on the indices of the array (as in the literature) are not counted but their time complexity is linear in the number of updates.
Let $v\in \Sigma^n$. We define the [*compressed representation*]{} of $v$ as $v=v^{(0)}[q-1]^{i_1}v^{(1)}[q-1]^{i_2}\cdots v^{(t-1)}[q-1]^{i_t}$ where $i_1,\ldots,i_{t-1}$ are not zero ($i_t$ may equal zero) and $v^{(0)},\ldots,v^{(t-1)}$ are nonempty words that do not contain the symbol $[q-1]$. If $v$ does not contain the symbol $[q-1]$ then $v=v^{(0)}[q-1]^0$ where $[q-1]^0$ is the empty word and $v^{(0)}=v$. The data structure will be an array (or doubly linked list) that contains $v^{(0)},i_1,v^{(1)},\cdots,v^{(t-1)},i_t$ if $i_t\not=0$ and $v^{(0)},i_1,v^{(1)},\cdots,v^{(t-1)}$ otherwise.
Define $\|v\|=\sum_{j=0}^{t-1} |v^{(j)}|+t$. This is the [*compressed length*]{} of the compressed representation of $v$. Notice that for a word $v=v_1\cdots v_r$ that ends with a symbol $v_r\not=[q-1]$ we have $P(v)=v_1\cdots v_{r-1}[v_r+1]$ and for $u=v\cdot [q-1]^i$ we have $P(u)=P(v)$. Therefore $\|v\|-1\le \|P(v)\|\le \|v\|$.
Let $v=v^{(0)}[q-1]^{i_1}v^{(1)}[q-1]^{i_2}\cdots v^{(t-1)}[q-1]^{i_t}$ be any Lyndon word of length $n$. The next Lyndon word in Duval’s algorithm is $$u^{(1)}:=N(v)=v^{(0)}[q-1]^{i_1}v^{(1)}[q-1]^{i_2}\cdots [q-1]^{i_{t-1}}\cdot P(v^{(t-1)})$$ To find the next Lyndon word $u^{(2)}$ after $u^{(1)}$ we take $\left(u^{(1)}\right)^hz^{(1)}$ of length $n$ where $z^{(1)}$ is a nonempty prefix of $u^{(1)}$ and then $u^{(2)}=\left(u^{(1)}\right)^h\cdot P(z^{(1)})$. This is because $z^{(1)}_1=u^{(1)}_1\not=[q-1]$. Since by Lemma \[increase\], $|u^{(1)}|<|u^{(2)}|<\cdots$ we will eventually get a Lyndon word of length $n$. We now show that using the compressed representation we have
\[kkk\]The time complexity of computing $u^{(i+1)}$ from $u^{(i)}$ is at most $|u^{(i+1)}|-|u^{(i)}|+1$.
Let $u^{(i)}=w^{(0)}[q-1]^{i_1}w^{(1)}[q-1]^{i_2}\cdots w^{(t-1)}[q-1]^{i_t}$ of length less than $n$. Then $u^{(i+1)}=(u^{(i)})^h\cdot P(z^{(i)})$ where $z^{(i)}$ is a nonempty prefix of $u^{(i)}$. So it is enough to show that $P(z^{(i)})$ can be computed in at most $|P(z^{(i)})|+1$ time. Notice that the length of $z^{(i)}$ is $(n\mod |u^{(i)}|)$ (here the mod is equal to $|u^{(i)}|$ if $|u^{(i)}|$ divides $n$). Since $z^{(i)}$ is a prefix of $u^{(i)}$ we have that, in the compressed representation, $z^{(i)}=w^{(0)}[q-1]^{i_1}w^{(1)}[q-1]^{i_2}\cdots w^{(t'-1)}[q-1]^{i_{t'}}$ for some $t'\le t$. Then $P(z^{(i)})=w^{(0)}[q-1]^{i_1}w^{(1)}[q-1]^{i_2}\cdots P(w^{(t'-1)})$. Therefore the complexity of computing $P(z^{(i)})$ is $\|z^{(i)}\|\le |P(z^{(i)})|+1.$
From the above lemma it follows that
\[Th3\] Let $v$ be a Lyndon word of length $n$. Using the compressed representation, the next Lyndon word of length $n$ can be computed in linear time.
To compress $v$ and find $u^{(1)}=N(v)$ we need linear time. By Lemma \[increase\] the Lyndon words after $v$ are $u^{(1)},\ldots, u^{(j)}$ where $|u^{(1)}|<|u^{(2)}|<\cdots<|u^{(j)}|=n$. By Lemma \[kkk\] the time complexity of computing the next Lyndon word $u^{(j)}$ of length $n$ is $\sum_{i=1}^{j-1} \left(|u^{(i+1)}|-|u^{(i)}|+1\right)\le |u^{(j)}|+n=O(n)$. Then decompressing the result takes linear time.
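As a concrete illustration of the successor iteration $u^{(i+1)}=\left(u^{(i)}\right)^h\cdot P(z^{(i)})$, the following sketch performs the same computation on the plain, uncompressed array; it is given for reference only and does not achieve the linear-time bound, which requires the compressed representation above.

```python
def next_lyndon_word_of_length_n(v, n, q):
    # Starting from a Lyndon word v over {0,...,q-1}, repeatedly apply the map
    # w -> P(D(w)): extend w periodically to length n (D), drop the trailing
    # q-1 symbols and increment the last remaining symbol (P).  By Lemma
    # [increase] the lengths strictly increase, so the loop stops at the next
    # Lyndon word of length exactly n.
    w = list(v)
    while True:
        w = [w[i % len(w)] for i in range(n)]   # D: periodic extension to length n
        while w and w[-1] == q - 1:             # P: remove trailing q-1 symbols ...
            w.pop()
        if not w:
            return None                         # v consisted of q-1 only: no successor
        w[-1] += 1                              # ... and increment the last symbol
        if len(w) == n:
            return w
```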
We now give a case where Duval’s algorithm fails to give the next Lyndon word of length $n$ in linear time. Consider the Lyndon word $01^{k}01^{k+1}$ of length $n=2k+3$. The next Lyndon word in Duval’s algorithm is $01^{k+1}$, followed by $01^{k+2}, 01^{k+3}, \ldots, 01^{2k+2}$. To get to the next Lyndon word of length $n$, namely $01^{2k+2}$, the algorithm does $\sum_{i=1}^{k+2}i=O(n^2)$ updates.
Constant Amortized Time for Enumerating $L_{n,q}$
=================================================
In this section, we show that our algorithm from the previous section is a CAT algorithm. That is, it has a constant amortized update cost.
We first give some notation and preliminary results. Let $\ell_n$ be the number of Lyndon words of length $n$, $L_i=\ell_1+\cdots +\ell_i$ for all $i=1,\ldots, n$ and $\Lambda_n=L_1+\cdots+L_n=n\ell_1+(n-1)\ell_{2}+\cdots+\ell_n$. It is known from [@D88] that for $n\ge 11$ and any $q$ $$\begin{aligned}
\label{one}
\frac{q^n}{n}\left(1-\frac{q}{(q-1)q^{n/2}}\right)\le \ell_n\le \frac{q^n}{n}\end{aligned}$$ and for any $n$ and $q$ $$\begin{aligned}
\label{two}
L_n\ge \frac{q}{q-1}\frac{q^n}{n}\end{aligned}$$ and $$\begin{aligned}
\label{three}
\Lambda_n= \frac{q^2}{(q-1)^2}\frac{q^n}{n}\left(1+\frac{2}{(q-1)(n-1)}+O\left(\frac{1}{(qn)^2}\right)\right).\end{aligned}$$ Denote by $\ell_{n,i}$ the number of Lyndon words of length $n$ of the form $w=ub[q-1]^i$ where $b\in \Sigma\backslash \{q-1\}$. Then $\ell_n=\ell_{n,0}+\ell_{n,1}+\cdots+\ell_{n,n-1}$. Let $\ell^*_n$ be the number of Lyndon words of length $n$ that end with the symbol $[q-2]$, that is, of the form $u[q-2]$.
For the analysis we will use the following.
Let $w=ub[q-1]^t\in \Sigma^n$ where $b\in \Sigma\backslash \{q-1\}$ and $t\ge 1$. If $w=ub[q-1]^t$ is a Lyndon word of length $n$ then $u[b+1]$ is a Lyndon word.
In particular, $$\ell_{n,t}\le \ell_{n-t}.$$
If $w=u[q-2]$ is a Lyndon word of length $n$ then $u[q-1]$ is a Lyndon word. In particular, $$\ell^*_n\le \ell_{n,1}+\cdots+\ell_{n,n-1}.$$
\[klkl\]
If $w=ub[q-1]^t$ is a Lyndon word of length $n$ then the next Lyndon word in Duval’s algorithm is $P(D(ub[q-1]^t))=P(ub[q-1]^t)=u[b+1]$.
If $w=u[q-2]$ is a Lyndon word of length $n$ then $P(D(w))=u[q-1]$ is the next Lyndon word in Duval’s algorithm.
The amortized number of updates of listing all the Lyndon words of length at most $n$ in Duval’s algorithm is [@D88] $$\gamma_n\le \frac{2\Lambda_n}{L_n}-1=1+\frac{2}{q-1}+O\left(\frac{1}{qn}\right)$$ We now show that
Using the compressed representation the amortized number of updates for enumerating all the Lyndon words of length exactly $n$ is at most $$\frac{3(\Lambda_n-L_n)+\ell_n}{\ell_n}= 1+\frac{3q}{(q-1)^2}+o(1)$$
The number of Lyndon words of length $n$ of the form $w=ub$ where $b\in \Sigma$, $b\not=[q-1]$ and $b\not= [q-2]$ is $\ell_n-(\ell_{n,1}+\cdots+\ell_{n,n-1})-\ell^*_n$. The next word of length $n$ is $u[b+1]$, so each such word takes one update to find the next word. For words that end with the symbol $[q-2]$ we need to change this symbol to $[q-1]$ and possibly merge it with the previous run in the compressed representation. This takes at most two updates: one for removing this symbol and one for merging it with the cell of the form $[q-1]^t$. Therefore for such words we need $2\ell_n^*$ updates. Thus, for Lyndon words that do not end with $[q-1]$ we need $\ell_n-(\ell_{n,1}+\cdots+\ell_{n,n-1})+\ell_n^*$ updates.
For strings of the form $w=ub[q-1]^t$ where $b\not=[q-1]$ and $t\ge 1$ we need at most $3t$ updates, and therefore at most $3t\ell_{n,t}$ updates for all such words; see the proof of Theorem \[Th3\]. Therefore, the total number of updates is at most $$\ell_n-(\ell_{n,1}+\cdots+\ell_{n,n-1})+\ell_n^*+3(\ell_{n,1}+2\ell_{n,2}+\cdots+(n-1)\ell_{n,n-1}).$$ By Lemma \[klkl\], this is at most $$\ell_n+3(\ell_{n-1}+2\ell_{n-2}+\cdots+(n-1)\ell_{1}).$$ Now, the amortized number of updates is $$\begin{aligned}
\frac{\ell_n+3(\ell_{n-1}+2\ell_{n-2}+\cdots+(n-1)\ell_{1})}{\ell_n}
&=&\frac{3(\Lambda_n-L_n)+\ell_n}{\ell_n}\\
&=&1+3\frac{\Lambda_n-L_n}{\ell_n}.\\\end{aligned}$$
By (\[one\]), (\[two\]) and (\[three\]) we get $$\begin{aligned}
1+3\frac{\Lambda_n-L_n}{\ell_n}&\le & 1+3\frac{\frac{q^2}{(q-1)^2}\left(1+\frac{2}{(q-1)(n-1)}+O\left(\frac{1}{(qn)^2}\right)\right)-\frac{q}{q-1}}
{1-\frac{q}{(q-1)q^{n/2}}}\\
&=& 1+3\frac{\frac{q}{(q-1)^2}+\frac{q^2}{(q-1)^2}\left(\frac{2}{(q-1)(n-1)}+O\left(\frac{1}{(qn)^2}\right)\right)}
{1-\frac{q}{(q-1)q^{n/2}}}\\
&=& 1+\frac{3q}{(q-1)^2}+O\left(\frac{1}{qn}\right).\end{aligned}$$
Membership in $L_{n,q}$
=======================
In this section, we study the complexity of deciding membership in $L_{n,q}$. That is, given a word $\sigma\in {\mathbb{F}}_q^n$, decide whether $\sigma$ is in $L_{n,q}$.
Since $\sigma\in L_{n,q}$ if and only if for all $1<i\le n$, $R_i(\sigma)>\sigma$, and each comparison of two words of length $n$ takes $O(n)$ operations, membership can be decided in time $O(n^2)$. Duval in [@D83] gave a linear time algorithm. In this section, we give a simple algorithm that decides membership in linear time. To this end, we need to introduce the suffix tree data structure.
The suffix tree of a word $s$ is a trie that contains all the suffixes of $s$. See for example the suffix tree of the word $s=1010110\$$ in Figure \[SuffixTree\]. A suffix tree of a word $s$ of length $n$ can be constructed in linear time in $n$ [@W73; @F97]. Using the suffix tree, one can check if a word $s'$ of length $|s'|=m$ is a suffix of $s$ in time $O(m)$.
Denote by $ST(s)$ the suffix tree of $s$. Define any order $<$ on the symbols of $s$. Define Min$(ST(s))$ as follows: Start from the root of the trie and follow, at each node, the edges with the minimal symbol. Then Min$(ST(s))$ is the word that corresponds to this path. One can find this word in $ST(s)$ in time that is linear in its length.
The function ${{\rm Min}}$ defines the following total order $\prec$ on the suffixes: Let $T=ST(s)$. Take ${{\rm Min}}(T)$ as the minimum element in that order. Now remove this word from $T$ and take ${{\rm Min}}(T)$ as the next one in that order. Repeat the above until the tree is empty. For example, if $0<1<\$$ then the order in the suffix tree in Figure \[SuffixTree\] is $$010110\$,0110\$,0\$,1010110\$,10110\$,10\$,110\$,\$.$$ Obviously, for two suffixes $s$ and $r$, $s\prec r$ if and only if for $j=\min(|r|,|s|)$ we have $s_1\cdots s_j<r_1\cdots r_j$ (in the lexicographic order).
We define $ST_m(s)$ to be the suffix tree of the suffixes of $s$ of length at least $m$. We can construct $ST_m(s)$ in linear time in $|s|$ by taking a walk in the suffix tree $ST(s)$ and removing all the words of length less than $m$. In the same way as above, we define ${{\rm Min}}(ST_m(s))$.
We now show
\[cond01\] Let $\$\not\in{\mathbb{F}}_q$ be a symbol. Define any total order $<$ on $\Sigma={\mathbb{F}}_q\cup\{\$\}$ such that $\$<\alpha$ for all $\alpha\in {\mathbb{F}}_q$. Let $\sigma\in {\mathbb{F}}_q^n$. Then $\sigma\in L_{n,q}$ if and only if $${{\rm Min}}(ST_{n+2}(\sigma\sigma\$))=\sigma\sigma\$.$$
First, notice that every word in $ST_{n+2}(\sigma\sigma\$)$ is of the form $\sigma_i\cdots\sigma_n\sigma\$$ for some $i=1,\ldots,n$. Let $T=ST_{n+2}(\sigma\sigma\$)$.
If $R_i(\sigma)<\sigma$ then $\sigma_{i}\cdots\sigma_n\sigma_1\cdots\sigma_{i-1} < \sigma$, and therefore $\sigma_{i}\cdots\sigma_n\sigma\$=\sigma_{i}\cdots\sigma_n\sigma_1\cdots\sigma_{i-1}\sigma_{i}\cdots \sigma_n\$
\prec \sigma\sigma\$$. Thus, ${{\rm Min}}(T)\not=\sigma\sigma\$$.
If $R_i(\sigma)=\sigma$ then $\sigma_{i}\cdots\sigma_n\sigma_1\cdots\sigma_{i-1} = \sigma$, and then $$\sigma_{i}\cdots\sigma_n\sigma=\sigma_{i}\cdots\sigma_n\sigma_1\cdots\sigma_{i-1} \sigma_{i}\cdots\sigma_n= \sigma\sigma_1\cdots\sigma_{n-i+1}.$$ Thus, $\sigma_{i}\cdots\sigma_n\sigma\$< \sigma\sigma_1\cdots\sigma_{n-i+2}$ which implies $\sigma_{i}\cdots\sigma_n\sigma\$\prec \sigma\sigma\$$. Therefore, we have ${{\rm Min}}(T)\not=\sigma\sigma\$$.
If $R_i(\sigma)>\sigma$ then $\sigma_{i}\cdots\sigma_n\sigma_1\cdots\sigma_{i-1} > \sigma$, and therefore $\sigma_{i}\cdots\sigma_n\sigma\$ $ $\succ \sigma\sigma\$$ and then ${{\rm Min}}(T)\not=\sigma_{i}\cdots\sigma_n\sigma\$$.
Now, if $\sigma\in L_{n,q}$ then $R_i(\sigma)>\sigma$ for all $1<i\le n$. Thus ${{\rm Min}}(T)\not=\sigma_{i}\cdots\sigma_n\sigma\$$ for all $i$. Therefore we have ${{\rm Min}}(T)=\sigma\sigma\$$. If $\sigma\not\in L_{n,q}$ then there is $i$ such that $R_i(\sigma)\le \sigma$, and then ${{\rm Min}}(T)\not=\sigma\sigma\$$.
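The condition of Lemma \[cond01\] can also be checked directly, as in the sketch below; this is a quadratic-time illustration that replaces the suffix tree by naive suffix comparisons, with the sentinel $\$$ encoded by $-1$ so that it is smaller than every symbol of ${\mathbb{F}}_q$.

```python
def is_in_Lnq(sigma):
    # sigma: list of integers in {0,...,q-1}.  By Lemma [cond01], sigma is in
    # L_{n,q} iff sigma sigma $ is strictly smaller than every suffix
    # sigma_i ... sigma_n sigma $ with 2 <= i <= n.  Python list comparison is
    # lexicographic, so the test below is exactly that condition, in O(n^2)
    # time instead of the linear time obtained with the suffix tree.
    n = len(sigma)
    s = list(sigma) + list(sigma) + [-1]    # the word  sigma sigma $
    return all(s < s[i:] for i in range(1, n))
```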
We now prove
\[linear\] There is a linear time algorithm that decides whether a word $\sigma$ is in $L_{n,q}$.
The algorithm is in Figure \[Alg02\]. We use Lemma \[cond01\]. The algorithm constructs the trie $ST_{n+2}(\sigma\sigma\$)$. The construction takes time linear in $|\sigma\sigma\$|$ and therefore linear in $n$. Finding ${{\rm Min}}(ST_{n+2}(\sigma\sigma\$))$ in a trie takes linear time.
F. Ashari-Ghomi, N. Khorasani, A. Nowzari-Dalini. Ranking and Unranking Algorithms for k-ary Trees in Gray Code Order. International Scholarly and Scientific Research & Innovation. 6(8). pp. 833–838. (2012)
M. Ackerman, E. Mäkinen: Three New Algorithms for Regular Language Enumeration. COCOON 2009. pp. 178–191. (2009).
M. Ackerman, J. Shallit. Efficient enumeration of words in regular languages. [*Theor. Comput. Sci.*]{} 410(37) pp. 3461–3470. (2009)
N. H. Bshouty. Dense Testers: Almost Linear Time and Locally Explicit Constructions. Electronic Colloquium on Computational Complexity (ECCC). 22: 6 (2015).
J. Berstel, M. Pocchiola. Average cost of Duval’s algorithm for generating Lyndon words. [*Theoretical Computer Science*]{}. 132(1). pp. 415-425. (1994).
G. E. Collins. The Calculation of Multivariate Polynomial Resultants. [*J. ACM*]{}. 18(4): pp. 515–532. (1971).
K. Cattell, F. Ruskey, J. Sawada, M. Serra, C. R. Miers. Fast Algorithms to Generate Necklaces, Unlabeled Necklaces, and Irreducible Polynomials over GF(2). [*J. Algorithms.*]{} 37(2). pp. 267-282. (2000).
N. J. Calkin, H. S. Wilf. Recounting the Rationals. [*The American Mathematical Monthly*]{}. 107(4). pp. 360–363. (2000).
P. Dömösi. Unusual Algorithms for Lexicographical Enumeration. [*Acta Cybern.*]{} 14(3). pp. 461–468. (2000)
J. P. Duval. Factorizing words over an ordered alphabet. [*Journal of Algorithms*]{}. 4(4). pp. 363-381. (1983).
J.-P. Duval. Génération d’une Section des Classes de Conjugaison et Arbre des Mots de Lyndon de Longueur Bornée. [*Theor. Comput. Sci.*]{} 60, pp. 255–283. (1988).
C. Ding, D. Pei, A. Salomaa. Chinese Remainder Theorem. Application in Computing, Coding, Cryptography. World Scientific Publication. (1996).
M. C. Er. Enumerating Ordered Trees Lexicographically. [*Comput. J.*]{} 28(5). pp. 538–542. (1985).
K. M. Elbassioni, K. Makino, I. Rauf. Output-Sensitive Algorithms for Enumerating Minimal Transversals for Some Geometric Hypergraphs. ESA 2009. pp. 143–154. (2009).
M. Farach. Optimal Suffix Tree Construction with Large Alphabets. 38th IEEE Symposium on Foundations of Computer Science (FOCS ’97), pp. 137–143. (1997)
H. Fredricksen, I. J. Kessler. An algorithm for generating necklaces of beads in two colors. [*Discrete Mathematics*]{}. 61(2-3). pp. 181–188. (1986).
H. Fredricksen, J. Maiorana. Necklaces of beads in $k$ colors and $k$-ary de Bruijn sequences. [*Discrete Mathematics*]{}, 23(3). pp. 207–210. (1978).
K. Fukuda and Y. Matsui. Enumeration of Enumeration Algorithms and Its Complexity. http://www-ikn.ist.hokudai.ac.jp/$\sim$wasa/enumerationcomplexity.html.
D. T. Huynh. The Complexity of Ranking Simple Languages. [*Mathematical Systems Theory.*]{} 23(1). pp. 1–19. (1990)
A. V. Goldberg, M. Sipser. Compression and Ranking. STOC 1985. pp. 440–448. (1985).
U. Gupta, D. T. Lee, C. K. Wong. Ranking and Unranking of 2-3 Trees. [*SIAM J. Comput.*]{} 11(3). pp. 582–590. (1982).
J. von zur Gathen, V. Shoup. Computing Frobenius Maps and Factoring Polynomials. [*Computational Complexity*]{}, 2. pp. 187-224. (1992).
L. A. Hemachandra. On ranking. Structure in Complexity Theory Conference. (1987).
L. A. Hemachandra, S. Rudich. On the Complexity of Ranking. [*J. Comput. Syst. Sci.*]{} 41(2). pp. 251–271. (1990)
S. Kapoor, H. Ramesh. An Algorithm for Enumerating All Spanning Trees of a Directed Graph. [*Algorithmica*]{}. 27(2). pp. 120–130. (2000).
T. Kociumaka, J. Radoszewski, W. Rytter. Efficient Ranking of Lyndon Words and Decoding Lexicographically Minimal de Bruijn Sequence. CoRR abs/1510.02637 (2015)
H. W. Lenstra. Finding isomorphisms between finite fields. Mathematics of Computation, 56, 193, pp. 329–347. (1991).
R. Lidl and H. Niederreiter. Finite Fields. Encyclopedia of Mathematics and its Applications. Addison-Wesley Publishing Company. (1984).
A. Poli. A deterministic construction of normal bases with complexity $O(n^3+n \log n \log \log n \log q)$. [*J. Symb. Comp.*]{}, 19, pp. 305–319. (1995).
L. Li. Ranking and Unranking of AVL-Trees. [*SIAM J. Comput.*]{} 15(4). pp. 1025-1035. (1986).
A. Lempel, G. Seroussi, S. Winograd: On the Complexity of Multiplication in Finite Fields. [*Theor. Comput. Sci.*]{} 22: pp. 285–296. (1983).
E. Mäkinen. Ranking and Unranking Left Szilard Languages. University of Tampere. Report A-1997-2.
K. Makino, T. Uno. New Algorithms for Enumerating All Maximal Cliques. SWAT 2004. pp. 260–272. (2004).
E. Mäkinen. On Lexicographic Enumeration of Regular and Context-Free Languages. [*Acta Cybern.*]{} 13(1). pp. 55–61. (1997)
W. J. Myrvold, F. Ruskey. Ranking and unranking permutations in linear time. [*Inf. Process. Lett.*]{} 79(6). pp. 281–284. (2001).
J. M. Pallo. Enumerating, Ranking and Unranking Binary Trees. [*Comput. J.*]{} 29(2). pp. 171–175. (1986).
F. Ruskey, C. D. Savage, T. M. Y. Wang. Generating Necklaces. [*J. Algorithms*]{}. 13(3). pp. 414–430. (1992).
J. B. Remmel, S. G. Williamson. Ranking and Unranking Trees with a Given Number or a Given Set of Leaves. arXiv:1009.2060.
Y. Shen. A new simple algorithm for enumerating all minimal paths and cuts of a graph. [*Microelectronics Reliability*]{}. 35(6). pp. 973–976. (1995).
I. Shparlinski. Finite fields: theory and computation. Mathematics and Its Applications, Vol. 477. (1999).
T. J. Savitsky. Enumeration of 2-Polymatroids on up to Seven Elements. [*SIAM J. Discrete Math.*]{} 28(4). pp. 1641–1650. (2014)
P. Tarau. Ranking and Unranking of Hereditarily Finite Functions and Permutations. arXiv:0808.0554. (2008).
T. Uno. An Algorithm for Enumerating all Directed Spanning Trees in a Directed Graph. ISAAC 1996. pp. 166–173. (1996).
T. Uno. Algorithms for Enumerating All Perfect, Maximum and Maximal Matchings in Bipartite Graphs. ISAAC 1997. pp. 92–101. (1997).
T. Uno. A Fast Algorithm for Enumerating Bipartite Perfect Matchings. ISAAC 2001. pp. 367–379. (2001)
R-Y. Wu, J-M. Chang. Ranking and unranking of well-formed parenthesis strings in diverse representations. ICCRD - International Conference on Computer Research and Development. (2011).
V. V. Vazirani, M. Yannakakis. Suboptimal Cuts: Their Enumeration, Weight and Number (Extended Abstract). ICALP 1992. pp. 366-377. (1992).
P. Weiner., Linear pattern matching algorithms. 14th Annual IEEE Symposium on Switching and Automata Theory. pp. 1–11. (1973).
R-Y. Wu, J-M. Chang, A-H. Chen, C-L. Liu. Ranking and Unranking $t$-ary Trees in a Gray-Code Order. [*Comput. J.*]{} 56(11). pp. 1388–1395. (2013).
R-Y. Wu, J-M. Chang, A-H. Chen, M-T. Ko. Ranking and Unranking of Non-regular Trees in Gray-Code Order. [*IEICE Transactions*]{}. 96-A(6), pp. 1059–1065. (2013).
R-Y. Wu, J-M. Chang, Y-L. Wang. Ranking and Unranking of $t$-Ary Trees Using RD-Sequences. [*IEICE Transactions*]{}. 94-D(2). pp. 226–232. (2011).
R-Y. Wu, J-M. Chang, C-H. Chang. Ranking and unranking of non-regular trees with a prescribed branching sequence. Mathematical and Computer Modelling. 53(5-6). pp. 1331–1335. (2011).
Li-Pu Yeh, B-F Wang, H-H Su. Efficient Algorithms for the Problems of Enumerating Cuts by Non-decreasing Weights. [*Algorithmica*]{}. 56(3). pp. 297-312. (2010).
[^1]: $O(N\cdot poly(\log N))$ where $N=n^2$ is the size of the output.
[^2]: $O(N\cdot poly(\log N))$ where $N=n^2$ is the size of the output.
[^3]: Here $\tilde O(N)=O(N\cdot poly(\log N))$
---
abstract: 'We classify all finite $p$-groups $G$ for which $|{\operatorname{Aut} }_{c}(G)|$ attains its maximum value, where ${\operatorname{Aut} }_{c}(G)$ denotes the group of all class preserving automorphisms of $G$.'
address: |
School of Mathematics, Harish-Chandra Research Institute\
Chhatnag Road, Jhunsi, Allahabad - 211 019, INDIA
author:
- 'Manoj K. Yadav'
title: 'Class preserving automorphisms of finite $p$-groups'
---
**J. London Math. Soc., Vol. 75 (2007), 755-772.**
[^1] [^2]
[^1]: 2000 Mathematics Subject Classification. 20D45, 20D15
[^2]: Research supported by DST (SERC Division), the Govt. of INDIA
---
abstract: 'Hypermaps were introduced as an algebraic tool for the representation of embeddings of graphs on an orientable surface. Recently a bijection was given between hypermaps and indecomposable permutations; this sheds new light on the subject by connecting a hypermap to a simpler object. In this paper, a bijection between indecomposable permutations and labelled Dyck paths is proposed, from which a few enumerative results concerning hypermaps and maps follow. We obtain for instance an inductive formula for the number of hypermaps with $n$ darts, $p$ vertices and $q$ hyper-edges; the latter is also the number of indecomposable permutations of ${\mathcal S}_n$ with $p$ cycles and $q$ left-to-right maxima. The distribution of these parameters among all permutations is also considered.'
bibliography:
- 'permutations.bib'
---
[**Indecomposable Permutations, Hypermaps and Labeled Dyck Paths**]{}\
[Robert Cori]{}\
[*Labri, Université Bordeaux 1\
351 cours de la Libération F33400 Talence (France)*]{}\
[robert.cori@labri.fr]{}
Introduction {#introduction .unnumbered}
============
Permutations and maps on surfaces have an old common history. Heffter [@heffter] was probably the first to mention the fact that any embedding of a graph on an orientable surface could be represented by a pair consisting of a permutation and a fixed point free involution; J. Edmonds [@edmonds] and J. Youngs [@youngs] gave in the early 60’s a more precise presentation of this idea by showing how to compute the faces of an embedding using the cycles of the product of the permutation and the fixed point free involution, giving a purely combinatorial definition of the genus. A. Jacques [@jacques] proved that this could be generalized to any pair of permutations (called a hypermap in [@coriT]), hence relaxing the condition that the second one should be a fixed point free involution. He defined the genus of a pair of permutations by a formula involving the number of their cycles and that of their product.
W. T. Tutte [@tutte2] generalized these constructions by introducing a combinatorial object consisting of three fixed point free involutions in order to represent embeddings in a nonorientable surface.
The combinatorial representation allows one to obtain results on automorphisms of maps and hypermaps; for instance, A. Machí [@machi] obtained a combinatorial version of the Riemann–Hurwitz formula for hypermaps. A coding theory of rooted maps by words [@coriT] also went some way toward explaining the very elegant formulas found by W. T. Tutte [@tutte1] for the enumeration of maps. In the same years Jones and Singerman [@jonesSingerman] settled some important algebraic properties of maps. Recently G. Gonthier (see [@gonthier]) used hypermaps in giving a formal proof of the four colour theorem. A survey of the combinatorial and algebraic properties of maps and hypermaps is given in [@coriMachi].
In 2004 P. Ossona de Mendez and P. Rosenstiehl proved an important combinatorial result: they constructed a bijection between (rooted) hypermaps and indecomposable permutations (also called connected or irreducible). Indecomposable permutations are a central object in combinatorics (see for instance [@stanley], Problem 5.13); they were considered in different contexts, probably for the first time by A. Lentin [@LentinT] while solving equations in the free monoid. They were also considered, for instance, as a basis of a Hopf algebra defined by Malvenuto and Reutenauer (see [@aguiarSottile] or [@duchampHivert]), and in the enumeration of a certain kind of Feynman diagrams [@cvitanovic].
In this paper we present the Ossona-Rosenstiehl result in simpler terms and focus on the main property of the bijection: the number of cycles and of left-to-right maxima of the indecomposable permutation are equal to the number of vertices and hyper-edges of the rooted hypermap associated with it.
This property has some nice consequences for the enumeration: it allows one to give a formula for the number of rooted hypermaps on $n$ darts, or with $n$ darts and $p$ vertices. The property shows that the number of indecomposable permutations of ${{\mathcal S}_n}$ with $p$ cycles and $q$ left-to-right maxima is symmetric in $p, q$. By a straightforward argument this result can be generalized to all permutations, giving a positive answer to a conjecture of Guo-Niu Han and D. Foata. We introduce a simple bijection between some labelled Dyck paths and permutations which allows us to obtain a formula for the polynomials enumerating indecomposable permutations by the number of cycles and left-to-right maxima (hence of hypermaps by vertices and hyper-edges).
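For small $n$ this symmetry can be verified by an exhaustive computation; the sketch below is only such a sanity check (practical up to roughly $n=9$) and plays no role in the bijective proof.

```python
from itertools import permutations
from collections import Counter

def cycles_lrmax_counts(n):
    # Count the indecomposable permutations of S_n by
    # (number of cycles, number of left-to-right maxima).
    cnt = Counter()
    for p in permutations(range(1, n + 1)):
        if any(max(p[:k]) == k for k in range(1, n)):   # decomposable: skip
            continue
        seen, cycles = set(), 0
        for i in range(1, n + 1):                       # count the cycles of p
            if i not in seen:
                cycles += 1
                j = i
                while j not in seen:
                    seen.add(j)
                    j = p[j - 1]
        lrmax = sum(1 for k in range(n) if p[k] == max(p[:k + 1]))
        cnt[(cycles, lrmax)] += 1
    return cnt

# Sanity check of the symmetry in (cycles, left-to-right maxima):
# c = cycles_lrmax_counts(6); assert all(c[(a, b)] == c[(b, a)] for (a, b) in c)
```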
The paper is organized as follows: in Section 1, we give a few elementary results on indecomposable permutations focusing mainly on the parameters left-to-right maxima and cycles. Section 2 is devoted to hypermaps and the bijection of P. Ossona de Mendez and P. Rosenstiehl. All the details of the proof of correctness are in [@ossonaRosenstiehl1]; we give here the key points and some examples in order to facilitate the reading of their paper. The main result of the present paper and consequences of this bijection are given at the end of this section.
In Section 3 we recall some notions about Dyck paths and their labelling. We describe a bijection between them and permutations and show the main properties of this bijection. In Section 4 we introduce a family of polynomials enumerating permutations and show a formula for the generating function of the permutations with respect to the number of left-to-right maxima and cycles. In the last section we restrict the hypermaps to be maps and permutations to be fixed point free involutions, obtaining in a simpler way some old enumeration formulas for them (see [@walshLehman], [@arquesBeraud]).
---
author:
- Samuel Tardif
- Soshi Takeshita
- Hiroyuki Ohsumi
- Junichi Yamaura
- Daisuke Okuyama
- Zenji Hiroi
- Masaki Takata
- 'Taka-hisa Arima'
title: |
All-in/all-out magnetic domains: X-ray diffraction imaging and magnetic field control\
SUPPLEMENTAL MATERIALS
---
Polarization dependence of the flipping ratio
=============================================
The typical polarization of an X-ray beam produced by an undulator at a synchrotron facility is linear horizontal. In order to tune the incident polarization state, one can use a crystal in the Laue diffraction condition (*i.e.* an X-ray crystal phase retarder, or XPR) and take advantage of dynamical diffraction effects.[@hirano_application_1997] Since the incident polarization is horizontal, the scattering plane of the XPR is rotated by 45 degrees to allow equivalent diffraction in the $\sigma_c$ and $\pi_c$ channels, a necessary condition to achieve circularly polarized light, as illustrated in figure \[fig:pol\_geo\].
![image](fig5.pdf){width="88mm"}
The XPR introduces a phase difference $\delta$ between the $\sigma_c$ and $\pi_c$ components of the downstream polarization of the beam (fig. \[fig:pol\_geo\]): $$\begin{aligned}
\label{eq:phase_shift}
\pi_c &=& \sigma_c e^{-\imath \delta} ,\end{aligned}$$ such that when $\delta=\pi/2$ or $\delta=-\pi/2$ the transmitted beam is right- or left-handed circularly polarized, respectively. The phase difference is proportional to the inverse of the offset angle $\Delta\theta_{XPR}$ from the Bragg angle: $$\begin{aligned}
\delta &=& A /\Delta\theta_{XPR} ,\end{aligned}$$ where $A$ is a parameter depending on the Bragg angle of the reflection in the XPR, the corresponding crystal structure factor and the incident wavelength.[@hirano_application_1997] Since the offset $\Delta\theta_{RHCP}$ for right-handed circularly polarized light, *i.e.* when $\delta = \pi /2$, can be experimentally measured, one can rewrite $$\begin{aligned}
\label{eq:phase_shift_angle}
\delta &=& \frac{\pi}{2} \frac{\Delta\theta_{RHCP}}{\Delta\theta_{XPR}} .\end{aligned}$$ The polarization state downstream of the XPR is better described in the $\left(\sigma_0,\pi_0\right)$ basis, such that: $$\begin{aligned}
\label{eq:XPR_basis}
\pi_0 &=& \frac{\pi_c - \sigma_c}{\sqrt{2}} = \frac{e^{-\imath \delta}+1}{\sqrt{2}}\sigma_c ,\\
\sigma_0 &=& \frac{\pi_c + \sigma_c}{\sqrt{2}} = \frac{e^{-\imath \delta}-1}{\sqrt{2}}\sigma_c .\end{aligned}$$ Substituting in equation \[M-eq:FR\] in the Letter gives an analytical expression of the FR $$\begin{aligned}
\label{eq:fr_analytic}
\mathrm{FR}(\Delta\theta_{XPR}) = \pm \frac{ \cos \phi \sin( \frac{\pi}{2} \frac{\Delta\theta_{RHCP}}{\Delta\theta_{XPR}}) \sin \theta \left( 1 - \sin ^2 \theta \right) r}{r ^2 \left( \cos ^2 (\frac{\pi}{4} \frac{\Delta\theta_{RHCP}}{\Delta\theta_{XPR}}) + \sin ^4 \theta \sin ^2 (\frac{\pi}{4} \frac{\Delta\theta_{RHCP}}{\Delta\theta_{XPR}}) \right) + \sin ^2 \theta } ,\end{aligned}$$ where the only unknown parameters are the amplitude ratio $r=\left\| F_{m} \right\| / \left\| F_{ATS} \right\|$ and the phase difference $\phi$ between $F_m$ and $F_{ATS}$. Note that $\mathrm{FR}(\Delta\theta_{XPR})$ is obtained from a measurement of the diffraction with the XPR at $+\Delta\theta_{XPR}$ and $-\Delta\theta_{XPR}$: due to the shape of the phase retarder crystal, the more precise absolute values of the offset angle for left-handed and right-handed circular polarization actually differ in the simulation of $\mathrm{P}_C$ by $1.4 \times 10^{-4}$ degree, which is lower than our experimental resolution ($2 \times 10^{-4}$ degree). It is worth mentioning however that small experimental misalignments can increase the difference between the absolute values of the negative and positive $\Delta\theta_{XPR}$. When mapping the flipping ratio over the sample, we used the calibrated values for circular polarization, such that the difference between $\left|+\Delta\theta_{XPR}\right|$ and $\left|-\Delta\theta_{XPR}\right|$ was between 0 and $4 \times 10^{-4}$ degree.
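For reference, Eq. (\[eq:fr\_analytic\]) can be evaluated numerically as sketched below; the parameter values in the usage lines are arbitrary placeholders, and the overall sign is passed explicitly since it is what distinguishes the two domain states.

```python
import numpy as np

def flipping_ratio(dtheta_xpr, dtheta_rhcp, theta, r, phi, sign=+1.0):
    # Direct evaluation of Eq. (fr_analytic); angles in radians,
    # r = |F_m|/|F_ATS| and phi the phase difference between F_m and F_ATS.
    x = dtheta_rhcp / dtheta_xpr
    num = np.cos(phi) * np.sin(0.5 * np.pi * x) * np.sin(theta) * (1.0 - np.sin(theta)**2) * r
    den = r**2 * (np.cos(0.25 * np.pi * x)**2
                  + np.sin(theta)**4 * np.sin(0.25 * np.pi * x)**2) + np.sin(theta)**2
    return sign * num / den

# Placeholder example: FR curve over a range of positive XPR offsets.
offsets = np.linspace(0.01, 0.20, 100) * np.pi / 180.0
fr = flipping_ratio(offsets, dtheta_rhcp=np.radians(0.05),
                    theta=np.radians(25.0), r=0.1, phi=0.3)
```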
![image](fig6.pdf){width="88mm"}
Resonant magnetic diffraction from an all-in/all-out antiferromagnet
====================================================================
The pyrochlore lattice of cadmium osmate belongs to the space group Fd$\bar{3}$m, where reflections $(hkl)$ are forbidden when $h,k,l$ are of mixed parity or when $h+k+l$ is not divisible by 4. In particular, $(0kl)$ reflections are forbidden when $k+l$ is not divisible by 4. Using the origin choice 2 in the International Tables of Crystallography, the osmium atoms are described by the 16*c* Wyckoff sites $\mathbf{d}_{i} = (0,0,0)+\mathbf{t}, (\frac{3}{4},\frac{1}{4},\frac{1}{2})+\mathbf{t},(\frac{1}{4},\frac{1}{2},\frac{3}{4})+\mathbf{t},(\frac{1}{2},\frac{3}{4},\frac{1}{4})+\mathbf{t}$, where $\mathbf{t} \in \lbrace(0,0,0),(0,\frac{1}{2},\frac{1}{2}),(\frac{1}{2},0,\frac{1}{2}),(\frac{1}{2},\frac{1}{2},0)\rbrace$.
Since the all-in/all-out magnetic structure shares the same lattice as the underlying pyrochlore crystal ($\mathbf{q} = 0$ type), it also shares the same reciprocal lattice, *i.e.* the magnetic diffraction can only occur at integer values of $(h,k,l)$. For the sake of simplicity, we neglect the corresponding Kronecker $\delta$ function in the following and restrict ourselves to the crystal unit cell.
The resonant scattering term that we are interested in is the one that is linear in the local magnetic moment $\mathbf{m}$, the sign of which is opposite for the AIAO and AOAI domains. It is expressed as [@lovesey_x-ray_1996] $$\begin{aligned}
\label{eq:fm}
f_{i,circ} &=& -\imath\left(\frac{3 \lambda}{8\pi ^{2}}\right)\left(\bm{\varepsilon '}\times \bm{\varepsilon }\right)\cdot \mathbf{m}_{i}\left[F_{-1}^{1}-F_{+1}^{1}\right]_{i}\end{aligned}$$ , where $\lambda$ is the wavelength of the incident x-ray beam, $\bm{\varepsilon '}$ and $\bm{\varepsilon}$ are the polarization vectors of the scattered and incident beam respectively, $\mathbf{m}_{i}$ is the unitary magnetic moment on the considered atom $i$ and $F_{M}^{L}$ are the resonant oscillator strength with $L=1$ and $M\in\left\lbrace-1,+1\right\rbrace$ in the dipole approximation. As a result, the scattered amplitude is proportional to the scalar product of the cross-polarization term $\left(\bm{\varepsilon '}\times \bm{\varepsilon }\right)$ and the Fourier transform $\mathbf{M}(\mathbf{Q})$ of the distribution of the magnetic moments $\mathbf{m}$.
Let us first evaluate $\mathbf{M}(\mathbf{Q})$: $$\begin{aligned}
\label{eq:FTM}
\mathbf{M}(\mathbf{Q})
&=& \sum\limits_{ i}\mathrm{e}^{\imath2\pi\mathbf{Q}\cdot\mathbf{d}_{i}}\mathbf{m}_{i} ,\end{aligned}$$ where $ \mathbf{d}_{i}$ is the position of atom $i$. We note that the magnetic moment direction distribution in a cubic all-in/all-out antiferromagnet can be described in the reduced basis $\left(\mathbf{\hat{a}}_{1},\mathbf{\hat{a}}_{2},\mathbf{\hat{a}}_{3} \right)$ of the unit cell as $$\begin{aligned}
\label{eq:maiao}
\mathbf{m}_{i,AIAO}
&=& \frac{\left| \mathbf{m} \right|}{\sqrt{3}} \sum\limits_{j=1,2,3}
\cos\left(4\pi\mathbf{r}_{i,j}\right)\mathbf{\hat{a}}_{j},\end{aligned}$$ where $\mathbf{r}_{i,j} = \mathbf{d}_{i} \cdot \mathbf{\hat{a}}_{j}$. Conversely, the magnetic moment direction distribution in the opposite domain (*i.e.* all-out/all-in) is simply described by the opposite orientation of the magnetic moments: $$\label{eq:maoai}
\mathbf{m}_{i,AOAI} = - \mathbf{m}_{i,AIAO} .$$ We can now rewrite equation \[eq:FTM\] for the all-in/all-out domain as $$\begin{aligned}
\label{eq:FTM2}
\mathbf{M}(\mathbf{Q})
&=& \frac{\left| \mathbf{m} \right|}{2\sqrt{3}}
\sum\limits_{j=1,2,3}
\sum\limits_{ i} \left( \mathrm{e}^{ \imath 2\pi \left(\mathbf{Q}\cdot\mathbf{d}_{i}+2 \mathbf{r}_{i,j}\right)} + \mathrm{e}^{ \imath 2\pi \left(\mathbf{Q}\cdot\mathbf{d}_{i}-2 \mathbf{r}_{i,j}\right)} \right) \mathbf{\hat{a}}_{j} .\end{aligned}$$ Specifically, for a reflection $h,k,l=4n,4n,4n+2$, $$\label{eq:FTM3}
\mathbf{M}_{AIAO}(h=4n,k=4n,l=4n+2) = \frac{16\left| \mathbf{m} \right|}{2\sqrt{3}}\mathbf{\hat{a}_{3}} .$$ The sign of $\mathbf{M}_{AOAI}(\mathbf{Q})$ is the opposite for all-out/all-in domains.
One can note that the amplitude of the Fourier transform of the magnetic moment orientation distribution is the same, even when the reflection is space-group forbidden. We can therefore choose the particular reflections $(0~0~4n+2)$. In this case $\mathbf{M}(\mathbf{Q})$ and the scattering vector $\mathbf{Q}$ are parallel to $\mathbf{\hat{a}_{3}}$ (*i.e.* the $\mathbf{c}$ axis) and it corresponds to the situation of specular reflection from the $\left( 0~0~1\right)$ facet. Such a situation is beneficial for the microscopy setup installed at BL19-LXU since the sample stage is moved parallel to the diffracting planes, *i.e.* there is no geometric distortion in imaging the $\left( 0~0~1\right)$ facet. Furthermore, the facet edges provide reliable landmarks to compare pictures acquired at different times.
We now turn to the cross-polarization term $\left(\bm{\varepsilon '}\times \bm{\varepsilon }\right)$. The diffraction geometry is sketched in figure \[fig:diff\_geo\]. The cross-polarization term can be written as a tensor linking the incoming polarization state $\bm{\varepsilon } = \left( \bm{\sigma }, \bm{\pi } \right)$ and the outgoing polarization state $\bm{\varepsilon '} = \left( \bm{\sigma '}, \bm{\pi '} \right)$. $$\begin{aligned}
\left(\bm{\varepsilon '}\times \bm{\varepsilon }\right)
&=& \label{eq:fcircMatrix}
\bordermatrix{\text{ } & \mathbf{\sigma} &\mathbf{\pi} \cr
\mathbf{\sigma '} & 0 &-\sin \theta\mathbf{x}+\cos \theta \mathbf{y} \cr
\mathbf{\pi '} & -\sin \theta \mathbf{x}-\cos \theta \mathbf{y} & \sin 2\theta \mathbf{z}}\end{aligned}$$
![image](fig7.pdf){width="88mm"}
Since $\mathbf{M}(\mathbf{Q})$, $\mathbf{c}$ and $\mathbf{x}$ are parallel, considering equations \[eq:fm\], \[eq:FTM3\] and \[eq:fcircMatrix\], the magnetic diffraction component for the all-in/all-out domain can be written as $$\begin{aligned}
\sum\limits_{ i}^{ }\mathrm{e}^{\imath2\pi\mathbf{Q}\cdot\mathbf{d}_{i}}f_{i,circ}
&=&
- \imath\left(\frac{3 \lambda}{8\pi ^{2}}\right) \mathbf{M}_{AIAO}(\mathbf{Q}) \cdot \mathbf{x}
\left(
\begin{array}{cc}
0 & -\sin \theta \\
-\sin \theta & 0
\end{array}
\right) \otimes
\mathrm{FT}\left[F_{-1}^{1}-F_{+1}^{1}\right]\\
&=&
F_m
\left(
\begin{array}{cc}
0 & \imath \sin \theta \\
\imath \sin \theta & 0
\end{array}
\right) ,\end{aligned}$$ where the sign of the magnetic structure factor $F_m$ is opposite for the AIAO and AOAI domains.
Magnetic domain walls orientations
==================================
In order to determine the orientation of the domain walls, we assigned the orientations of the intersections of the domain walls with the different facets to high symmetry directions, as shown in figure \[fig:orientations\].
![image](fig8.pdf){width="170mm"}
As stated in the Letter, the only planes that can result in such intersections simultaneously on both facets are the “113” group ($\{(113),(11\bar{3}),(1\bar{1}3),(\bar{1}13)\}$ and circular *hkl* permutations) and the “011” group ($\{(011),(01\bar{1})\}$ and circular *hkl* permutations). The $(001)$ facet shows an additional two possible orientations of the domain wall intersection, as seen in figure \[M-fig:map\_reversal\](b,c) in the Letter. The low datapoint count prevents a precise orientation determination on this facet. However one of the orientations may be $[\bar{1}10]$, which can come from either the 113 or the 011 group. The other orientation was more precisely measured (not shown here) to be $[100]$, which is only possible for the 011 group.
Both 113 and 011 types of domain walls are sketched in figure \[fig:DW\]. The presence of the domain wall results in frustrated spins at the interface of the magnetic domains. There are respectively one and four frustrated spins per unit cell at the interface for 113-type and 011-type domain walls. One can therefore expect the properties of each type of interface to be quite different: in the 113 case the frustrated spins are independent while in the 011 case they are connected, which suggests the possibility of long-range ordering within the interface.
![image](fig9.pdf){width="170mm"}
---
abstract: 'The single sensor *probability hypothesis density* (PHD) and *cardinalized probability hypothesis density* (CPHD) filters have been developed in the literature using the random finite set framework. The existing multisensor extensions of these filters have limitations such as sensor order dependence, numerical instability or high computational requirements. In this paper we derive update equations for the multisensor CPHD filter. The multisensor PHD filter is derived as a special case. Exact implementation of the multisensor CPHD involves sums over all partitions of the measurements from different sensors and is thus intractable. We propose a computationally tractable approximation which combines a greedy measurement partitioning algorithm with the Gaussian mixture representation of the PHD. Our greedy approximation method allows the user to control the tradeoff between computational overhead and approximation accuracy.'
author:
- 'Santosh Nannuru, Stephane Blouin, Mark Coates, and Michael Rabbat [^1]'
bibliography:
- 'gcphd\_bibtex.bib'
title: Multisensor CPHD filter
---
Random finite sets, multisensor CPHD filter, multisensor PHD filter, multisensor multitarget tracking.
Introduction {#sec:introduction}
============
In the multitarget tracking problem often the number of targets and the number of observations detected by sensors are unknown and time varying. Thus representing the targets and observations as vectors is inefficient. A finite set is a more suitable representation. This is the motivation for the random finite set framework [@goodman1997; @mahler2007B] which represents target states and observations as realizations of random finite sets. The implementation of the general multitarget Bayes filter for random finite sets is analytically and computationally infeasible [@mahler2003]. Several approximations have been proposed which make suitable assumptions to derive tractable filters [@mahler2003; @mahler2007; @vo2006; @vo2007; @vo2005].
The majority of research based on random finite set theory has focused on single sensor multitarget tracking. The *probability hypothesis density* (PHD) filter [@mahler2003] propagates, over time, the probability hypothesis density function which is defined over the single target state space. Improving on the PHD filter, the *cardinalized probability hypothesis density* (CPHD) filter [@mahler2007] propagates the distribution of the number of targets (the cardinality) in addition to the PHD function. Various implementations of the PHD and CPHD filter have been proposed, including the Gaussian mixture implementation [@vo2006; @vo2007] and the sequential Monte Carlo implementation [@vo2005]. These algorithms have been successfully applied to the problem of multitarget tracking in the presence of clutter.
A general multisensor extension of the PHD filter was first derived for the case of two sensors by Mahler [@mahler2009a; @mahler2009b]. The filter equations were further generalized to include an arbitrary number of sensors by Delande et al. [@delande2010]. Because of their combinatorial nature, the exact filter update equations of the general multisensor PHD filter are not computationally tractable except for a very few simple cases. Delande et al.[@delande2011a; @delande2011b] derive simplifications to the filter update equations for the case when the fields of view of different sensors have limited overlap. This reduces the computational complexity to some extent, and a particle filter based implementation is presented in [@delande2011b]. Jian et al. [@jian2013] suggest implementing the general multisensor PHD filter by repeated application of the two sensor PHD filter [@mahler2009a]. The implementation details for realizing the general multisensor PHD filter in this manner are not made explicit, and the reported numerical simulations are restricted to the case of two sensors.
To avoid the combinatorial computational complexity of the general multisensor PHD filter, some approximate multisensor filters have been proposed in the literature. The *iterated-corrector PHD filter* [@mahler2009b] processes the information from different sensors in a sequential manner. A single sensor PHD filter processes measurements from the first sensor. Using the output PHD function produced by this step as the predicted PHD function, another single sensor PHD filter processes measurements from the second sensor and so on. As a result, the final output strongly depends on the order in which sensors are processed [@nagappa2011]. This dependence on the sensor order can be mitigated by employing the *approximate product multisensor PHD and CPHD filters* proposed by Mahler [@mahler2010]. Although the final results are independent of sensor order, Ouyang and Ji [@ouyang2011] have reported that Monte Carlo implementation of the approximate product multisensor PHD filter is unstable and the problem worsens as the number of sensors increases. We have observed a similar instability in Gaussian mixture model-based implementations. Ouyang and Ji [@ouyang2011] have proposed a heuristic fix to stabilise the Monte Carlo implementation but it is not analytically verified. A comprehensive review of the different multisensor multitarget tracking algorithms based on random finite set theory can be found in [@mahler2014B Ch. 10].
In this paper we derive the update equations for the general multisensor CPHD filter. The derivation method is similar to that of the general multisensor PHD filter [@mahler2009a; @delande2010] with the additional propagation of the cardinality distribution. The multisensor CPHD filter we derive has combinatorial complexity and an exact implementation is computationally infeasible. To overcome this limitation we propose a two-step greedy approach based on a Gaussian mixture model implementation. Each step can be realized using a trellis structure constructed using the measurements from different sensors or measurement subsets for different Gaussian components. The algorithm is applicable to both the general multisensor CPHD and the general multisensor PHD filters.
Other trellis based algorithms have been developed for target tracking. For single-sensor single-target tracking, the Viterbi algorithm is applied over a trellis of measurements constructed over time in [@lascala1998]. Each column of the trellis is a measurement scan at a different time step. The Viterbi algorithm is used to find the best path in the trellis corresponding to data associations over time. This approach has been extended to multitarget tracking in [@pulford2006] for a fixed and known number of targets. The nodes of the trellis correspond to different data association hypotheses and the transition weights are based on measurement likelihoods. The Viterbi algorithm was also applied in [@wolf1989], in conjunction with energy based transition weights, to identify the $K$-best non-intersecting paths over the measurement trellis when $K$ targets are present.
The form of the update equations in the general multisensor PHD/CPHD filters is similar to that of the update equations of the single sensor PHD/CPHD filters for extended targets [@mahler2009c; @orguner2011]. The similarity is in the sense that for extended targets the update equation requires partitioning of the single sensor measurement set, which can be computationally demanding. Granstrom et al. [@granstrom2010] propose a Gaussian mixture model-based implementation of the PHD filter for extended targets with reduced partitioning complexity. This is done by calculating the Mahalanobis distance between the measurements and grouping together measurements which are close to each other within a certain threshold. Orguner et al. [@orguner2011] use a similar method to reduce computations in the Gaussian mixture model-based implementation of the CPHD filter for extended targets.
The rest of the paper is organized as follows: Section \[sec:background\] provides a brief overview of random finite sets. Section \[sec:problem\_formulation\] formally poses the problem of multisensor multitarget tracking. In Section \[sec:gcphd\_filter\] we summarize the prediction and update equations of the general multisensor CPHD filter. The derivation of the filter update equations is provided in the appendices. We present computationally tractable implementations of the general multisensor PHD and CPHD filters in Section \[sec:apprx\_implementation\]. A performance comparison of the proposed filter with existing multisensor filters is conducted using numerical simulations in Section \[sec:simulations\]. We provide conclusions in Section \[sec:conclusion\].
Portions of this work are presented in a conference paper [@nannuru2014]; the present manuscript contains detailed derivations and proofs which were omitted from the conference paper, and the present manuscript also includes a more detailed description and evaluation of the proposed approximation and implementation of the general multisensor PHD and CPHD filters.
Background on random finite sets {#sec:background}
================================
Random finite sets {#sec:rfs}
------------------
Random finite sets are set-valued random variables. The PHD and CPHD filters are derived using notions of random finite sets. This section provides a review of this background, introducing definitions and notation used in the derivations that follow. Detailed treatments of random finite sets and the related statistics in the context of multitarget tracking can be found in [@goodman1997; @mahler2007B; @mahler2014B].
A random finite set is completely specified using its probability density function if it exists. The probability density function of a random finite set modeling the multitarget state is also referred to as the multitarget density function in this paper. Let $Y$ be a realization of a random finite set $\Xi$ with elements from an underlying space $\Y$, i.e. $Y \subseteq \Y$. For the random finite set $\Xi$, denote its density function by $f_{\Xi}(Y)$. Let the cardinality distribution of the random finite set $\Xi$ be $p_{\Xi}(n)$ $$\begin{aligned}
p_{\Xi}(n) {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\textrm{Prob}(|\Xi| = n) \,, \; n = 0,1,2,\dots \,,\end{aligned}$$ where the notation $|\Xi|$ denotes the cardinality of set $\Xi$. The *probability generating function* (PGF) of the cardinality distribution $p_{\Xi}(n)$ is defined as $$\begin{aligned}
\pgf_{\Xi}(t) {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\sum_{n=0}^{\infty} t^n p_{\Xi}(n).\end{aligned}$$
A statistic of the random finite set, which is used by the PHD and CPHD filters, is the probability hypothesis density (PHD) function [@mahler2007B]. For the random finite set $\Xi$ defined over an underlying space $\Y$, we denote its PHD by $D_{\Xi}(\y)$, $\y \in \Y$. Unlike the probability density function $f_{\Xi}(Y)$ which is defined over the space of finite sets in $\Y$, the PHD function $D_{\Xi}(\y)$ is defined over the space $\Y$. Instead of propagating the complete density function, which can be computationally challenging, the PHD and CPHD filters propagate the low dimensional PHD function over time.
IIDC random finite set
----------------------
An *independent and identically distributed cluster* (IIDC) random finite set [@mahler2007] is completely specified by its cardinality distribution and its spatial density function. Let $\Xi$ be an IIDC random finite set with cardinality distribution $p_{\Xi}(n)$ and the spatial density function $\zeta(\y)$. The probability density function $f_{\Xi}(Y)$ and the PHD $D_{\Xi}(\y)$ of the random finite set $\Xi$ are given by the relations $$\begin{aligned}
f_{\Xi}(Y) &= |Y|! \; p_{\Xi}(|Y|) \, \prod_{\y \in Y}{\zeta(\y)} \\
D_{\Xi}(\y) &= \zeta(\y) \, \mu \\
\mu &= E(|\Xi|) = \sum_{n=0}^{\infty} n \, p_{\Xi}(n).\end{aligned}$$ Samples from an IIDC random finite set can be generated by first sampling a cardinality $m$ from its cardinality distribution $p_{\Xi}(n)$ and then independently sampling $m$ points from its spatial density function $\zeta(\y)$.
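A minimal sketch of this two-stage sampling procedure is given below; the cardinality distribution and the Gaussian spatial density in the example are arbitrary placeholders rather than quantities taken from any particular model.

```python
import numpy as np

def sample_iidc_rfs(card_pmf, spatial_sampler, rng):
    # One realization of an IIDC random finite set: draw a cardinality m from
    # the pmf card_pmf (card_pmf[n] = p(n)), then m i.i.d. points from the
    # spatial density via spatial_sampler.
    m = rng.choice(len(card_pmf), p=card_pmf)
    return [spatial_sampler(rng) for _ in range(m)]

# Placeholder example: cardinality pmf on {0,1,2,3} and a 2-D Gaussian density.
rng = np.random.default_rng(0)
card_pmf = np.array([0.1, 0.3, 0.4, 0.2])
gauss = lambda rng: rng.normal(loc=[0.0, 0.0], scale=5.0)
realization = sample_iidc_rfs(card_pmf, gauss, rng)
```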
The Poisson random finite set is an example of an IIDC random finite set where the cardinality distribution is assumed to be Poisson. The PHD filter [@mahler2003] models the multitarget state as a realization of a Poisson random finite set and propagates its PHD function over time. The Poisson random finite set assumption can be undesirable because the variance of the Poisson distribution is equal to its mean, which implies that as the number of targets increases, the error in its estimation becomes larger. To overcome this problem the CPHD filter [@mahler2007] models the multitarget state as a realization of an IIDC random finite set and propagates its PHD function and cardinality distribution over time. The additional cardinality information allows us to more accurately model the multitarget state.
Problem formulation {#sec:problem_formulation}
===================
We now specify the multisensor multitarget tracking problem. Let $\x_{k,i} \in \mathcal{X}$ be the state of the $i^{th}$ target at time $k$. In most of the tracking literature $\mathcal{X}$ is chosen to be the Euclidean space, $\mathcal{X} = \R^{n_{\x}}$, where $n_{\x}$ is the dimension of the single target state. If $n_k \geq 0$ targets are present at time $k$, the multitarget state can be represented by the finite set $X_k = \{\x_{k,1}, \ldots \x_{k,n_k} \}$, $X_k \subseteq \mathcal{X}$. We assume that each single target state evolves according to the Markovian transition function $f_{k+1|k}(\x_{k+1,i}|\x_{k,i})$. New targets can arrive and existing targets can disappear at each time step. Let the survival probability of an existing target with state $\x$ at time $k$ be given by the function $p_{sv,k}(\x)$.
Multiple sensors make observations about the multiple targets present within the monitored region. Assume that there are $s$ sensors, and conditional on the multitarget state, their observations are independent. Measurements $\z^{j}$ gathered by sensor $j$ lie in the space $\mathcal{Z}^{j}$, i.e., $\z^{j} \in \mathcal{Z}^{j}$. Let $Z^{j}_{k} = \{\z^{j}_{1,k}, \z^{j}_{2,k}, \dots, \z^{j}_{m_{j,k}}\}$, $Z^{j}_{k} \subseteq \mathcal{Z}^{j}$ be the set of measurements collected by the $j$-th sensor at time step $k$. The measurement set can be empty. We assume that each target generates at most one measurement per sensor at each time instant $k$. Each measurement is either associated with a target or is generated by the clutter process. Define $Z^{1:s}_{k} = Z^{1}_{k} \cup Z^{2}_{k} \cup \dots \cup Z^{s}_{k}$ to be the collection of measurement sets gathered by all sensors at time $k$. The probability of detection of sensor $j$ at time $k$ is given by $p^{j}_{d,k}(\x)$. The function $h_{j,k}(\z|\x)$ denotes the probability density (likelihood) that sensor $j$ makes a measurement $\z$ given that it detects a target with state $\x$. Denote the probability of a missed detection as $q^{j}_{d,k}(\x) = 1 - p^{j}_{d,k}(\x)$.
The objective of the multitarget tracking problem is to form an estimate $\widehat{X}_k$ of the multitarget state at each time step $k$. This estimate is formed using all the measurements up until time $k$ obtained from all the $s$ sensors which is denoted by $Z^{1:s}_{1:k} = \{Z^{1:s}_{1},Z^{1:s}_{2},\dots,Z^{1:s}_{k}\}$. More generally, we would like to estimate the posterior multitarget state distribution $f_{k|k}(X_k|Z^{1:s}_{1:k})$.
General multisensor CPHD filter {#sec:gcphd_filter}
===============================
In this section we develop the CPHD filter equations when multiple sensors are present. The derivation method is similar to the approach used to derive the general multisensor PHD filter equations by Mahler [@mahler2009a] and Delande et al. [@delande2010]. Since the CPHD filter explicitly accounts for the cardinality distribution of the multitarget state, the filter update equations are more involved. Specifically, the measurement set partitions are more explicitly listed when compared to the PHD filter update equations [@mahler2009a; @delande2010]. The CPHD filter also requires additional propagation of the cardinality distribution.
We make the following modeling assumptions while deriving the multisensor CPHD filter equations:
1. Target birth at time $k+1$ is modelled using an IIDC random finite set.
2. The predicted multitarget distribution at time $k+1$ is IIDC.
3. The sensor observation processes are independent conditional on the multitarget state $X_{k+1}$, and the sensor clutter processes are IIDC.
Before deriving the filter equations, we introduce some notation. Let $b_{k+1}(\x)$ be the PHD function and let $p_{b,k+1}(n)$ be the cardinality distribution of the birth process at time $k+1$. For the $j$-th sensor let $c_{k+1,j}(\z)$ be the clutter spatial distribution and let $C_{k+1,j}(t)$ be the PGF of the clutter cardinality distribution at time $k+1$. Let $D_{k+1|k}(\x)$ denote the predicted PHD function and let $r_{k+1|k}(\x)$ denote the normalized predicted PHD function at time $k+1$ (normalized so that it integrates to one). Let the PGF of the predicted cardinality distribution $p_{k+1|k}(n)$ be denoted by $\pgf_{k+1|k}(t)$. To keep the expressions and derivation compact we drop the time index and write $$\begin{aligned}
c_{k+1,j}(\z) &\equiv c_{j}(\z), \; C_{k+1,j}(t) \equiv C_{j}(t), \;
p^{j}_{d,k+1}(\x) \equiv p^{j}_{d}(\x) \nonumber \\
\pgf_{k+1|k}(t) &\equiv \pgf(t), \; q^{j}_{d,k+1}(\x) \equiv q^{j}_{d}(\x), \;
p_{sv,k+1}(\x) \equiv p_{sv}(\x) \nonumber \\
h_{j,k+1}(\z|\x) &\equiv h_{j}(\z|\x), \; r_{k+1|k}(\x) \equiv r(\x), \;
m_{j,k+1} \equiv m_{j} \nonumber \\
p_{b,k+1}(n) &\equiv p_{b}(n) \,,\end{aligned}$$ when the time is clear from the context. Note that abbreviated notation is used only for convenience and the above quantities are in general functions of time. For functions $a(\y)$ and $b(\y)$, the notation ${\langle a, b \rangle}$ is defined as ${\langle a, b \rangle} = \int{a(\y) \, b(\y) \, d\y}$. In the following subsections we discuss the prediction and update steps of the general multisensor CPHD filter.
CPHD prediction step
--------------------
Since sensor information is not required in the prediction step, the prediction step of the CPHD filter for the multisensor case is the same as that for the single sensor case. Denote the posterior probability hypothesis density at time $k$ as $D_{k|k}(\x)$ and the posterior cardinality distribution as $p_{k|k}(n)$. The predicted probability hypothesis density function at time $k+1$ is given by [@mahler2007; @vo2007] $$\begin{aligned}
D_{k+1|k}(\x) &= b_{k+1}(\x) + \int \! {p_{sv}(\w)f_{k+1|k}(\x|\w)} D_{k|k}(\w) d\w \,,\end{aligned}$$ where the integral is over the complete single target state space. The predicted cardinality distribution at time $k+1$ is given by [@mahler2007; @vo2007] $$\begin{aligned}
& p_{k+1|k}(n) = \nonumber\\
&\, \sum_{j=0}^{n}p_b(n-j)
\sum_{l=j}^{\infty}{{l \choose j} \frac{{{\langle D_{k|k}, p_{sv} \rangle}}^j
{{\langle D_{k|k}, 1-p_{sv} \rangle}}^{l-j}}{{{\langle D_{k|k}, 1 \rangle}}^l} p_{k|k}(l)} \,,\end{aligned}$$ where $n$, $j$ and $l$ are non-negative integers. The normalized predicted PHD function is given by $$\begin{aligned}
r(\x) \equiv r_{k+1|k}(\x) &= \frac{D_{k+1|k}(\x)}{\mu_{k+1|k}}, \\
\text{where } \quad \mu_{k+1|k} &= \sum_{n=1}^{\infty}{n \, p_{k+1|k}(n)}.\end{aligned}$$
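A truncated numerical evaluation of this prediction step might look as follows; for brevity the sketch assumes a state-independent survival probability, so that $\langle D_{k|k},p_{sv}\rangle / \langle D_{k|k},1\rangle$ collapses to a single scalar, whereas the recursion above allows $p_{sv}(\x)$ to depend on the state.

```python
import numpy as np
from math import comb

def predict_cardinality(p_post, p_birth, p_sv, n_max):
    # p_post[l] = p_{k|k}(l) and p_birth[n] = p_b(n), both truncated at n_max;
    # p_sv is a constant survival probability.  Returns p_{k+1|k}(n) for
    # n = 0,...,n_max: binomial thinning of the surviving targets followed by
    # convolution with the birth cardinality distribution.
    surviving = np.zeros(n_max + 1)
    for l in range(n_max + 1):
        for j in range(l + 1):
            surviving[j] += comb(l, j) * p_sv**j * (1.0 - p_sv)**(l - j) * p_post[l]
    predicted = np.convolve(p_birth[:n_max + 1], surviving)[:n_max + 1]
    return predicted / predicted.sum()
```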
CPHD update step
----------------
We use the notation $\ll 1 , s \rr$ to denote the set of integers from $1$ to $s$. Let $W \subseteq Z^{1:s}_{k+1}$ such that for all $j \in \ll 1,s \rr$, $|W|_j \leq 1$ where $|W|_j = |\{ \z \in W : \z \in Z^{j}_{k+1}\}|$. Thus the subset $W$ can have at most one measurement from each sensor. Let $\mathcal{W}$ be the set of all such $W$. For any measurement subset $W$ we can uniquely associate with it a set of pair of indices $T_{W}$ defined as $T_{W} = \{(j,l): \z^{j}_{l} \in W\}$. For disjoint subsets $W_1, W_2, \dots W_{n}$, let $V = Z^{1:s}_{k+1} \setminus (\cup_{i=1}^{n}{W_{i}})$, so that $W_1, W_2, \dots W_{n}$ and $V$ partition $Z^{1:s}_{k+1}$. Think of the set $W_{i}$ as a collection of measurements made by different sensors, all of which are generated by the same target and the set $V$ as the collection of clutter measurements made by all the sensors. Let $P$ be a partition of $Z^{1:s}_{k+1}$, constructed using elements from the set $\mathcal{W}$ and a set $V$, given by
$$\begin{aligned}
P &= \{W_1, W_2, \dots, W_{|P|-1}, V \}, \label{eq:partition_1}\\
& \bigcup_{i=1}^{|P|-1} W_{i} \cup V = Z^{1:s}_{k+1}, \label{eq:partition_2}\\
& W_i \cap W_j = \emptyset, \quad \forall \, W_i, W_j \in P, \; i \neq j \label{eq:partition_3}\\
& W_i \cap V = \emptyset, \quad \forall \, W_i \in P \label{eq:partition_4}\end{aligned}$$
where $|P|$ denotes the number of elements in the partition $P$.
The partition $P$ groups the measurements in $Z^{1:s}_{k+1}$ into disjoint subsets where each subset is either generated by a target (the $W$ subsets) or generated by the clutter process (the $V$ subset). Let $|P|_{j}$ be the number of measurements made by sensor $j$ which are generated by the targets. We have: $$\begin{aligned}
|P|_{j} &= \sum_{i=1}^{|P|-1}{|W_i|_j}.\end{aligned}$$ The number of measurements made by sensor $j$ which are classified as clutter in the partition $P$ is $(m_{j} - |P|_{j})$. Let $\S$ be the collection of all possible partitions $P$ of $Z^{1:s}_{k+1}$ constructed as above. A recursive expression for constructing the collection $\S$ is given in Appendix \[app:recursive\].
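To make the combinatorial object concrete, the brute-force sketch below enumerates the collection $\S$ for a toy measurement set by assigning each measurement either to the clutter set $V$, to a new subset $W$, or to an existing subset that does not yet contain a measurement from the same sensor. This is an illustration only (the recursion in Appendix \[app:recursive\] builds the same collection sensor by sensor), and the data layout is ours.

```python
def enumerate_partitions(measurements):
    """Enumerate all partitions P of a multisensor measurement set.
    `measurements` is a list of (sensor, label) pairs; each partition is
    returned as (subsets, clutter), every subset W holding at most one
    measurement per sensor."""
    def recurse(i, subsets, clutter):
        if i == len(measurements):
            yield [list(W) for W in subsets], list(clutter)
            return
        m = measurements[i]
        # (a) declare the measurement clutter,
        yield from recurse(i + 1, subsets, clutter + [m])
        # (b) open a new target-generated subset W = {m}, or
        yield from recurse(i + 1, subsets + [[m]], clutter)
        # (c) add it to an existing subset with no measurement from its sensor.
        for k, W in enumerate(subsets):
            if all(sensor != m[0] for sensor, _ in W):
                yield from recurse(i + 1,
                                   subsets[:k] + [W + [m]] + subsets[k + 1:],
                                   clutter)
    yield from recurse(0, [], [])

# Two measurements from sensor 1 and one from sensor 2 give 12 partitions:
# len(list(enumerate_partitions([(1, 'a'), (1, 'b'), (2, 'c')])))  # -> 12
```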
Denote the $v^{th}$-order derivatives of the PGFs of the clutter cardinality distribution and the predicted cardinality distribution as $$\begin{aligned}
C^{(v)}_{j}(t) &= \frac{d^v C_{j}}{d t^v}(t) \,, \quad
\pgf^{(v)}(t) = \frac{d^v \pgf}{d t^v}(t).\end{aligned}$$
We use $\gamma$ to denote the probability, under the predictive PHD, that a target is detected by no sensor, and we thus have: $$\gamma {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\int \! r(\x) \prod_{j=1}^{s}{q^{j}_{d}(\x)} d\x \,. \label{eq:gamma}$$
For concise specification of the update equations, it is useful to combine the terms associated with the PGF of the clutter cardinality distribution for a partition $P$. Let us define the quantity $$\begin{aligned}
\kappa_{P} {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\prod_{j=1}^{s} C^{(m_{j}-|P|_{j})}_{j}(0). \label{eq:kappa_P}\end{aligned}$$
For a set $W \in \mathcal{W}$ and the associated index set $T_{W}$ define the quantities $$\begin{aligned}
&d_{W} {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\displaystyle \int \!\! r(\x) \left( \prod_{(i,l) \in T_{W}}{\!\!\!\! p_{d}^{i}(\x)\,h_{i}(\z^{i}_{l}|\x)}\right) \,
\!\!\! \prod_{j:(j,*) \notin T_{W}}{\!\!\!\!\!\! q^{j}_{d}(\x)} d\x}
{\displaystyle \prod_{(i,l) \in T_{W}}{c_{i}(\z^{i}_{l})}}, \label{eq:dW_cphd} \\
&\rho_{W}(\x) {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\displaystyle \left( \prod_{(i,l) \in T_{W}}{\!\!\!\! p_{d}^{i}(\x)\,h_{i}(\z^{i}_{l}|\x)}\right) \,
\prod_{j:(j,*) \notin T_{W}}{\!\!\!\!\!\! q^{j}_{d}(\x)}}
{\displaystyle \int \!\! r(\x) \left( \prod_{(i,l) \in T_{W}}{\!\!\!\! p_{d}^{i}(\x)\,h_{i}(\z^{i}_{l}|\x)}\right) \,
\prod_{j:(j,*) \notin T_{W}}{\!\!\!\!\!\! q^{j}_{d}(\x)} d\x}, \label{eq:rhoW_cphd}\end{aligned}$$ where $(j,*)$ indicates any pair of indices of the form $(j,l)$. The quantity $d_W$ can be interpreted as the ratio of the likelihood that the measurement subset $W$ was generated by the target process to the likelihood that the measurement subset $W$ was generated by the clutter process. The quantity $\rho_W(\x)$ can be interpreted as the normalized pseudolikelihood contribution of the measurement subset $W$.
The updated probability hypothesis density function $D_{k+1|k+1}(\x)$ can be expressed as the product of the normalized predicted probability hypothesis density $r_{k+1|k}(\x)$ at time $k+1$ and a pseudolikelihood function. The pseudolikelihood function can be expressed as a linear combination of functions (one function for each partition $P$) with associated weights $\alpha_{P}$. The all-clutter partition $P = \{ V \}$ where $V = Z^{1:s}_{k+1}$ has an associated weight $\alpha_{0}$. Define $$\begin{aligned}
\alpha_{0} &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\displaystyle \sum_{P \in \S} \left(\kappa_{P} \pgf^{(|P|)}(\gamma) \prod_{W \in P}{d_{W}}\right)}
{\displaystyle \sum_{P \in \S} \left(\kappa_{P} \pgf^{(|P|-1)}(\gamma) \prod_{W \in
P}{d_{W}}\right)} \,, \label{eq:alpha_equation_0} \\
\alpha_{P} &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\displaystyle \kappa_{P} \pgf^{(|P|-1)}(\gamma) \prod_{W \in P}{d_{W}}}
{\displaystyle \sum_{Q \in \S} \left(\kappa_{Q} \pgf^{(|Q|-1)}(\gamma) \prod_{W \in
Q}{d_{W}}\right)} \,. \label{eq:alpha_equation_P}\end{aligned}$$ Note that the expression $W \in P$ only ranges over the subsets $W \in \mathcal{W}$ and does not include the clutter component $V$ of $P$. For the all-clutter partition $P = \{ V \}$ there are no elements of the type $W$ in the partition and we use the convention $\prod_{W \in P}() = 1$ whenever $P = \{ V \}$. Similarly we use the convention $\sum_{W \in P}() = 0$ whenever $P = \{ V \}$.
\[thm:phd\_update\] Under the conditions of Assumption 1, the general multisensor CPHD filter update equation for the probability hypothesis density is $$\begin{aligned}
\frac{D_{k+1|k+1}(\x)}{r_{k+1|k}(\x)} &= \alpha_{0} \, \prod_{j=1}^{s}{q^{j}_{d}(\x)}
+ \sum_{P \in \S} \alpha_{P} \, \left(\sum_{W \in P} \rho_{W}(\x) \right)\, \label{eq:phd_update_gcphd}\end{aligned}$$ and the update equation for the posterior cardinality distribution is $$\begin{aligned}
\frac{p_{k+1|k+1}(n)}{p_{k+1|k}(n)} &= \frac{\displaystyle \sum_{\substack{P \in \S \\ |P| \leq n+1}}
\left(\kappa_{P} \frac{n!}{(n-|P| + 1)!} \gamma^{n-|P| + 1} \prod_{W \in P}{d_{W}}\right)}
{\displaystyle \sum_{P \in \S} \left(\kappa_{P} \pgf^{(|P|-1)}(\gamma)
\prod_{W \in P}{d_{W}}\right)}\,, \label{eq:card_update}\end{aligned}$$ where the quantities $\alpha_{0}$, $\alpha_{P}$, $\rho_{W}(\x)$ and $d_{W}$ are given in , , and , respectively.
The proof of Theorem \[thm:phd\_update\] is provided in Appendix \[app:phd\_update\]. It requires the concepts of functional derivatives, probability generating functionals and the multitarget Bayes filter, which are reviewed in Appendices \[app:functional\_derivatives\], \[app:pgfl\] and \[app:multitarget\_bayes\] respectively. The proof depends on an intermediate result, Lemma \[lem:F\_derivative\], which is proved in Appendix \[app:F\_derivative\].
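To illustrate how the weights enter the update, the sketch below evaluates $\alpha_{0}$ and the $\alpha_{P}$ from precomputed ingredients: for every retained partition the value $\kappa_{P}$ and the list of its $d_{W}$ values, together with a callable returning $\pgf^{(v)}(\gamma)$. The data layout and names are ours, chosen only for illustration.

```python
import numpy as np

def partition_weights(partitions, G_deriv):
    """Compute alpha_0 and the per-partition weights alpha_P.

    Each element of `partitions` is a dict with
        'kappa': kappa_P = prod_j C_j^{(m_j - |P|_j)}(0),
        'd'    : list of d_W values for the subsets W in P (empty if P = {V}).
    `G_deriv(v)` must return the v-th derivative of the predicted-cardinality
    PGF evaluated at gamma."""
    terms, num0 = [], 0.0
    for P in partitions:
        size = len(P['d']) + 1              # |P| counts the clutter set V
        prod_d = float(np.prod(P['d'])) if P['d'] else 1.0
        terms.append(P['kappa'] * G_deriv(size - 1) * prod_d)
        num0 += P['kappa'] * G_deriv(size) * prod_d
    den = sum(terms)
    return num0 / den, [t / den for t in terms]
```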
General multisensor PHD filter as a special case
------------------------------------------------
In this section we show that the general multisensor PHD filter can be obtained as a special case of the general multisensor CPHD filter when the following assumptions are made.
1. Target birth at time $k+1$ is modelled using a Poisson random finite set.
2. The predicted multitarget distribution at time $k+1$ is Poisson.
3. The sensor observation processes are independent conditional on the multitarget state $X_{k+1}$, and the sensor clutter processes are Poisson.
Since the multitarget state distribution is modelled as Poisson it suffices to propagate the PHD function over time. Let the rate of the Poisson clutter process be $\lambda_{j}$ and let $c_{j}(\z)$ be the clutter spatial distribution for the $j^{th}$ sensor. Let $\mu_{k+1|k}$ be the mean predicted cardinality at time $k+1$. Using the Poisson assumptions for the predicted multitarget distribution and the sensor clutter processes we have $$\begin{aligned}
\pgf^{(v)}(\gamma) &= \mu_{k+1|k}^{v} \; e^{\mu_{k+1|k}(\gamma - 1)} \\
C^{(v)}_{j}(0) &= \lambda_{j}^{v} \; e^{-\lambda_{j}}.\end{aligned}$$ Using these in we have the simplification $\alpha_{0} = \mu_{k+1|k} $. We can also simplify the term $\kappa_{P} \pgf^{(|P|-1)}(\gamma)$ as $$\begin{aligned}
\kappa_{P} & \pgf^{(|P|-1)}(\gamma) = \left( \prod_{j=1}^{s} C^{(m_{j}-|P|_{j})}_{j}(0) \right)
\, \pgf^{(|P|-1)}(\gamma) \\
&= \left( \prod_{j=1}^{s} \lambda_{j}^{m_{j}-|P|_{j}} \, e^{-\lambda_{j}} \right) \,
\mu_{k+1|k}^{|P|-1} \; e^{\mu_{k+1|k}(\gamma - 1)} \\
&= (e^{\mu_{k+1|k}(\gamma - 1) - \sum_{j=1}^{s}\lambda_{j}})
\left( \prod_{j=1}^{s} \lambda_{j}^{m_{j}} \right)
\frac{\mu_{k+1|k}^{|P|-1}}{\left( \prod_{j=1}^{s} \lambda_{j}^{|P|_{j}} \right)}.\end{aligned}$$ Since the expression $\kappa_{P} \pgf^{(|P|-1)}(\gamma)$ appears in both the numerator and the denominator of the term $\alpha_{P}$ in the PHD update expression, we can ignore the portion that is independent of $P$. Hence we have $$\begin{aligned}
\kappa_{P} \pgf^{(|P|-1)}(\gamma) & \propto \mu_{k+1|k}^{|P|-1}
\displaystyle \prod_{j=1}^{s} \lambda_{j}^{-|P|_{j}}.
\label{eq:psi_P_phd}\end{aligned}$$ From and , we can write $$\begin{aligned}
\kappa_{P} \pgf^{(|P|-1)}(\gamma) \prod_{W \in P}{d_{W}} & \propto
\prod_{W \in P}{\tilde{d}_W}\end{aligned}$$ where $\tilde{d}_W$ is defined as $$\begin{aligned}
& \tilde{d}_W {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\nonumber \\
& \frac{\displaystyle \mu_{k+1|k} \int \!\! r(\x)
\left( \prod_{(i,l) \in T_{W}}{\!\!\!\! p_{d}^{i}(\x)\,h_{i}(\z^{i}_{l}|\x)}\right) \,
\prod_{j:(j,*) \notin T_{W}}{\!\!\!\!\!\! q^{j}_{d}(\x)} d\x}
{\displaystyle \prod_{(i,l) \in T_{W}}{\lambda_{i} \, c_{i}(\z^{i}_{l})}} \cdot\end{aligned}$$ The PHD update equation then reduces to $$\begin{aligned}
&\frac{D_{k+1|k+1}(\x)}{r_{k+1|k}(\x)} = \nonumber \\
& \qquad \mu_{k+1|k} \, \prod_{j=1}^{s}{q^{j}_{d}(\x)}
+ \sum_{P \in \S} \frac{\displaystyle \left( \prod_{W \in P}{\tilde{d}_{W}} \right) \,
\displaystyle \sum_{W \in P} \rho_{W}(\x)}
{\displaystyle \sum_{Q \in \S} \left(\prod_{W \in Q}{\tilde{d}_{W}}\right)} \cdot
\label{eq:phd_update_gphd}\end{aligned}$$ The above equation is equivalent to the general multisensor PHD update equation given in [@mahler2009a; @delande2010].
Implementations of the general multisensor CPHD and PHD filters {#sec:apprx_implementation}
===============================================================
In the previous section we derived update equations for the general multisensor CPHD filter which propagate the PHD function and cardinality distribution over time. Analytic propagation of these quantities is difficult in general without imposing further conditions. In the next subsection we develop a Gaussian mixture-based implementation of the filter update equations. Although the Gaussian mixture implementation is analytically tractable, its exact evaluation is computationally intractable. In Sections \[sec:measurement\_subsets\] and \[sec:grouping\] we propose greedy algorithms that drastically reduce the computations and yield computationally tractable approximate implementations of the general multisensor CPHD and PHD filters.
Gaussian mixture implementation {#sec:gm_implementation}
-------------------------------
We make the following assumptions to obtain closed form updates for equations and
1. The probability of detection for each sensor is constant throughout the single target state space; i.e., $p^{j}_{d}(\x) = p^{j}_{d}$, for all $\x$.
2. The predicted PHD is a mixture of weighted Gaussian densities.
3. The single sensor observations are linear functions of a single target state corrupted by zero-mean Gaussian noise.
4. The predicted cardinality distribution has finite support; i.e., there exists a positive integer $n_{0} < \infty$ such that $p_{k+1|k}(n) = 0$, for all $n > n_{0}$.
From the above assumptions we can express the normalized predicted PHD as a Gaussian mixture model $$\begin{aligned}
r(\x) &= \sum_{i=1}^{J_{k+1|k}}{w_{k+1|k}^{(i)} \, \N_{(i)}(\x)}\end{aligned}$$ where $w_{k+1|k}^{(i)}$ are non-negative weights satisfying $\sum_{i=1}^{J_{k+1|k}}{w_{k+1|k}^{(i)}} = 1$; and $\N_{(i)}(\x) {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\N(\x;m_{k+1|k}^{(i)},\Sigma_{k+1|k}^{(i)})$ is the Gaussian density function with mean $m_{k+1|k}^{(i)}$ and covariance matrix $\Sigma_{k+1|k}^{(i)}$. If $H_{j}$ is the observation matrix for sensor $j$ then its likelihood function can be expressed as $h_{j}(\z|\x) = \N(\z;H_{j}\x,\Sigma_{j})$. Then under the conditions of Assumption 3, the posterior PHD at time $k+1$ can be expressed as a weighted mixture of Gaussian densities and the posterior cardinality distribution has a finite support.
Since the probability of detection is constant we have $\gamma = \prod_{j=1}^{s}{q^{j}_{d}}$. For each partition $P$ the quantities $\pgf^{(|P|-1)}$ and $\pgf^{(|P|)}$ can be easily calculated since the predicted cardinality distribution has finite support. The integration in the numerator of is analytically solvable under Assumption 3 and using properties of Gaussian density functions [@vo2006]. Hence $d_{W}$ can be analytically evaluated. From these quantities we can calculate $\alpha_{0}$ and $\alpha_{P}$ from and . For each measurement set $W$ we can express the product $r(\x) \rho_{W}(\x)$ as a sum of weighted Gaussian densities using the properties of Gaussian density functions [@vo2006]. Thus from the update equation the posterior PHD can be expressed as a mixture of Gaussian densities. Since the predicted cardinality distribution has finite support, from , the posterior cardinality distribution also has finite support. Similarly, under appropriate linear Gaussian assumptions, the posterior PHD in can be expressed as a mixture of Gaussian densities.
The conditions of Assumption 3 allow us to analytically propagate the PHD and cardinality distribution but the propagation is still numerically infeasible. The combinatorial nature of the update step can be seen from , and . Specifically, the exact implementation of the general multisensor CPHD and PHD filters would require evaluation of all the permissible partitions (i.e. all $P \in \S$) that could be constructed from all possible measurement subsets. The number of such partitions is prohibitively large and a direct implementation is infeasible. We now discuss an approximation of the update step to overcome this limitation.
The key idea of the approximate implementation is to identify elements of the collection $\S$ which make a significant contribution to the update expressions. We propose the following two-step greedy approximation to achieve this within the Gaussian mixture framework. The first approximation step is to select a few measurement subsets $W$ for each Gaussian component. These subsets are identified by evaluating a score function which quantifies the likelihood that the subset was generated by that Gaussian component. The second approximation step is to greedily construct partitions of these subsets which are significant for the update step. The following subsections explain these two steps in detail.
Selecting the best measurement subsets {#sec:measurement_subsets}
--------------------------------------
A measurement subset is any subset of the measurement set $Z^{1:s}_{k+1}$ such that it contains at most one measurement per sensor. The total number of measurement subsets that can be constructed when the $j^{th}$ sensor records $m_{j}$ measurements is $\displaystyle \prod_{j=1}^{s}{(m_{j}+1)}$. When there are many targets present and/or the clutter rate is high this number can be very large. Since the size of the collection $\S$ depends on the number of measurement subsets, to develop a tractable implementation of the update step it is necessary to limit the number of measurement subsets. Instead of enumerating all possible measurement subsets, we construct them greedily and sequentially and retain only a few of them based on their associated scores.
Consider the measurement subset $W$ and the associated set $T_{W}$ as defined earlier. For the $i^{th}$ Gaussian component and the measurement subset $W$ we can associate a score function $\beta^{(i)}(W)$ defined as $$\begin{aligned}
\beta^{(i)}(W) &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\displaystyle \int \!\! \N_{(i)}(\x)
\left( \prod_{(j,l) \in T_{W}}{\!\!\!\! p_{d}^{j}\,h_{j}(\z^{j}_{l}|\x)}\right)
\left( \prod_{j:(j,*) \notin T_{W}}{\!\!\!\!\!\! q^{j}_{d}} \right) d\x}
{\displaystyle \prod_{(j,l) \in T_{W}}{c_{j}(\z^{j}_{l})}} \cdot
\label{eq:beta_W}\end{aligned}$$ The above score function is obtained by splitting the $d_{W}$ term in for each Gaussian component. Intuitively, this score can be interpreted as the ratio of the likelihood that the measurement subset $W$ was generated by the single target represented by the $i^{th}$ Gaussian component to the likelihood that the measurement subset $W$ was generated by the clutter process. The score $\beta^{(i)}(W)$ can be analytically calculated since the integral is solvable under Assumption 3 and using properties of Gaussian densities [@vo2005]. The score is high when the elements of the set $W$ truly are the measurements caused by the target associated with the $i^{th}$ Gaussian component. We use $\beta^{(i)}(W)$ to rank measurement subsets for each Gaussian component and retain only a fraction of them with the highest scores.
[Figure \[fig:trellis\_subsets\] (trellis diagram): each column corresponds to one sensor $j = 1, \dots, 5$; the nodes of a column are that sensor's measurements $\z^{j}_{1}, \z^{j}_{2}, \dots$ together with a no-detection node $\z^{j}_{\emptyset}$, and solid/dashed paths through the trellis indicate retained and candidate measurement subsets.]
For each Gaussian component, we select the measurement subsets by randomly ordering the sensors and incrementally incorporating information from each sensor in turn. We retain a maximum of $W_{\textrm{max}}$ subsets at each step. Figure \[fig:trellis\_subsets\] provides a graphical representation of the algorithm in the form of a trellis diagram. Each column of the trellis corresponds to observations from one of the sensors. The sensor number is indicated at the top of each column. The nodes of the trellis correspond to the sensor observations $(\z_{1}^{1}, \z_{2}^{1}, \dots, \, \z_{1}^{2}, \z_{2}^{2}, \dots)$ or the no detection case $(\z_{\emptyset}^{1}, \z_{\emptyset}^{2}, \dots)$.
The process of sequential construction of measurement subsets can be demonstrated using an example as follows. The solid lines in Figure \[fig:trellis\_subsets\] represent partial measurement subsets retained after processing observations from sensors 1 to 3. Now consider the measurement subset indicated by the thick solid line. It corresponds to the measurement subset $\{\z_{1}^{2}, \z_{1}^{3}\}$. When the sensor $4$ measurements are processed, this measurement subset is extended for each node of sensor $4$ as represented by the dashed lines. The scores $\beta^{(i)}(W)$ are calculated for these new measurement subsets using the expression in but limited to only the first $4$ sensors. This is done for each existing measurement subset in the sensor-measurement space and $W_{\textrm{max}}$ measurement subsets with highest scores are retained and considered at the next sensor. Although the process of constructing measurement subsets is dependent on the order in which sensors are processed, we observe from simulations that it has no significant effect on filter performance. Once the subsets have been selected, the ordering has no further effect in the update process.
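A minimal version of this trellis search, reusing `subset_score` from the previous sketch and scoring each partial subset with only the sensors processed so far, might look as follows; it is a sketch of the idea rather than the exact implementation used in the experiments.

```python
def greedy_subsets(mean, cov, scans, sensors, W_max):
    """Greedy construction of measurement subsets for one Gaussian component.
    `scans[j]` is the list of measurements of sensor j; after each sensor only
    the W_max highest-scoring partial subsets are retained."""
    hypotheses = [[]]                        # start from the empty subset
    processed = {}
    for j, Zj in scans.items():              # sensors in some fixed order
        processed[j] = sensors[j]
        extended = []
        for W in hypotheses:
            extended.append(W)                          # sensor j missed
            extended.extend(W + [(j, z)] for z in Zj)   # sensor j detected
        extended.sort(key=lambda W: subset_score(mean, cov, W, processed),
                      reverse=True)
        hypotheses = extended[:W_max]
    return hypotheses
```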
Constructing partitions {#sec:grouping}
-----------------------
The algorithm to construct partitions from subsets is similar to the above algorithm used to identify the best measurement subsets. Since the $V$ component of a partition is unique given the $W$ components, it is sufficient to identify the $W$ components to uniquely specify a partition $P$. A graphical representation of the algorithm is provided in Figure \[fig:trellis\_groups\]. Each column of this trellis corresponds to the set of measurement subsets $\{W^{i}_{1}, W^{i}_{2}, \dots \}$ identified by the $i^{th}$ Gaussian component. The component number $(i)$ is indicated at the top of each column. The node $W_{\emptyset}^{i}$ represents the empty measurement subset $W_{\emptyset}^{i} = \emptyset$ which is always included for each component and it corresponds to the event that the Gaussian component was not detected by any of the sensors. With each valid partition $P$ we associate the score $d_{P} = \displaystyle \prod_{W \in P}{d_{W}}$ with $d_{\emptyset} = 1$.
[Figure \[fig:trellis\_groups\] (trellis diagram): each column corresponds to one Gaussian component $i = 1, \dots, 5$; the nodes of a column are the measurement subsets $W^{i}_{1}, W^{i}_{2}, \dots$ retained for that component together with the empty subset $W^{i}_{\emptyset}$, and solid/dashed paths through the trellis indicate retained and candidate partitions.]
We greedily identify partitions of subsets by incrementally incorporating measurement subsets from the different components. For example the solid lines in Figure \[fig:trellis\_groups\] correspond to the partitions that have been retained after processing components number 1 to 3. The existing partitions are expanded using the measurement subsets from the $4^{th}$ component as indicated by the dashed lines. Some extensions are not included as they do not lead to a valid partition. Since the empty measurement subset $W_{\emptyset}^{i}$ is always included in the trellis, a partition can always be found. We process the Gaussian components in decreasing order of their associated weights. After processing each component, we retain a maximum of $P_{\textrm{max}}$ partitions corresponding to the ones with highest $d_{P}$. These selected partitions of measurement subsets are used in the update equations , and to compute the posterior PHDs and cardinality distribution. In our current implementation we select measurement subsets and construct measurement partitions using only the prior PHD information. Future research can focus on enhancing this construction procedure by including the current measurements along with the prior PHD.
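The partition trellis can be sketched in the same style. Below, `subsets[i]` is the list of subsets kept for the $i$-th component (assumed pre-sorted by component weight and always containing the empty subset), `d[(i, k)]` is the $d_{W}$ value of the $k$-th subset of component $i$, and measurements are identified by hashable $(j,l)$ index pairs so that reuse of a measurement across subsets can be detected; these conventions are ours.

```python
def greedy_partitions(subsets, d, P_max):
    """Greedily assemble partitions from per-component measurement subsets,
    keeping the P_max partitions with the largest d_P = prod of d_W."""
    partitions = [([], 1.0)]                 # (chosen subsets, running d_P)
    for i, cand in enumerate(subsets):       # components in decreasing weight
        extended = []
        for chosen, dP in partitions:
            used = {m for W in chosen for m in W}
            for k, W in enumerate(cand):
                if any(m in used for m in W):
                    continue                 # a measurement cannot be reused
                if W:                        # non-empty subset contributes d_W
                    extended.append((chosen + [W], dP * d[(i, k)]))
                else:                        # empty subset: component missed
                    extended.append((chosen, dP))
        extended.sort(key=lambda t: t[1], reverse=True)
        partitions = extended[:P_max]
    return partitions
```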
For the general multisensor PHD filter a slightly more accurate implementation can be used and is described as follows. After the first approximate step of identifying measurement subsets for each Gaussian component, instead of the approximate partition construction discussed in this section, we can find all possible partitions from the given collection of measurement subsets. This problem of finding all partitions can be mapped to the exact cover problem [@michael1979]. An efficient algorithm called Dancing Links has been suggested by Knuth [@knuth2000] for solving this problem. This implementation can be used when the number of sensors and measurement subsets are small.
Numerical simulations {#sec:simulations}
=====================
In this section we compare different multisensor multitarget tracking algorithms developed using the random finite set theory. Specifically we compare the following filters: iterated-corrector PHD (IC-PHD [@mahler2009a]), iterated-corrector CPHD (IC-CPHD), general multisensor PHD (G-PHD) and the general multisensor CPHD (G-CPHD) filter derived in this paper. Models used to simulate multitarget motion and multisensor observations are discussed in detail in the next subsections. The simulated observations are used by different algorithms to perform multitarget tracking. All the simulations were conducted using MATLAB [^2].
Target dynamics
---------------
The single target state is a four dimensional vector $\x = [x, y, v_x, v_y]$ consisting of its position coordinates $x$ and $y$ and its velocities $v_x$ and $v_y$ along the $x$-axis and $y$-axis respectively. The target state evolves according to the discretized version of the continuous time nearly constant velocity model [@barshalom2004] given by $$\begin{aligned}
\x_{k+1,i} &= \left[ \begin{array}{cccc} 1&0&T&0\\ 0&1&0&T\\ 0&0&1&0\\ 0&0&0&1 \end{array} \right] \x_{k,i} + \eta_{k+1,i} \\
\eta_{k+1,i} & \sim \N({\bf 0},\Sigma_{\eta}) \,, \;
\Sigma_{\eta} = \left[ \begin{array}{cccc} \frac{T^3}{3}&0&\frac{T^2}{2}&0 \\
0&\frac{T^3}{3}&0&\frac{T^2}{2} \\ \frac{T^2}{2}&0&T&0\\ 0&\frac{T^2}{2}&0&T \end{array} \right] \sigma_{\eta}^{2}
\nonumber\end{aligned}$$ where $T$ is the sampling period and $\sigma_{\eta}^{2}$ is the intensity of the process noise. We simulate $100$ time steps with a sampling period of $T = 1s$ and process noise intensity of $\sigma_{\eta} = 0.25m$. Figure \[fig:tracks\_8t\] shows the target tracks used in the simulations and Figure \[fig:card\_8t\] shows the variation of the number of targets over time. All the targets originate from one of the following four locations $(\pm 400m, \pm 400m)$ and targets are restricted to the $2000m \times 2000m$ square region centered at the origin. Targets 1 & 2 are present in the time range $k \in \ll 1 , 100 \rr$; targets 3 & 4 for $k \in \ll 21 , 100 \rr$; targets 5 & 6 for $k \in \ll 41 , 100 \rr$; and targets 7 & 8 for $k \in \ll 61 , 80 \rr$.
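A short NumPy sketch of this motion model, with the parameter values quoted above and a helper name of our choosing:

```python
import numpy as np

def simulate_cv_track(x0, steps, T=1.0, sigma_eta=0.25, rng=None):
    """Simulate one target under the discretized nearly-constant-velocity
    model; the state is [x, y, vx, vy]."""
    rng = np.random.default_rng() if rng is None else rng
    F = np.array([[1, 0, T, 0],
                  [0, 1, 0, T],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = sigma_eta**2 * np.array([[T**3/3, 0,      T**2/2, 0],
                                 [0,      T**3/3, 0,      T**2/2],
                                 [T**2/2, 0,      T,      0],
                                 [0,      T**2/2, 0,      T]])
    track = [np.asarray(x0, dtype=float)]
    for _ in range(steps - 1):
        track.append(F @ track[-1] + rng.multivariate_normal(np.zeros(4), Q))
    return np.array(track)

# e.g. a target born at (-400 m, -400 m) with an arbitrary initial velocity:
# track = simulate_cv_track([-400.0, -400.0, 5.0, 5.0], steps=100)
```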
Measurement model
-----------------
Measurements are collected independently by six sensors. When a sensor detects a target, the corresponding measurement consists of the position coordinates of the target corrupted by additive Gaussian noise. Thus if a target located at $(x,y)$ is detected by a sensor, the measurement gathered by the sensor is given by $$\begin{aligned}
\z = \left[\begin{array}{c} x \\ y \end{array}\right] +
\left[\begin{array}{c} w_x \\ w_y \end{array}\right]\end{aligned}$$ where $w_x$ and $w_y$ are independent zero-mean Gaussian noise terms with standard deviations $\sigma_{w_x}$ and $\sigma_{w_y}$ respectively. In our simulations we use $\sigma_{w_x} = \sigma_{w_y} = 10m$. The probability of detection of each sensor is constant throughout the monitoring region. Five of the sensors have a fixed probability of detection of $0.5$. The probability of detection of the sixth sensor is variable and is changed from $0.2$ to $1$ in increments of $0.1$. The clutter measurements made by each of the sensors follow a Poisson model with uniform spatial density and mean clutter rate $\lambda = 10$.
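A sketch of one scan under this measurement model; the clutter is drawn uniformly over the same $2000m \times 2000m$ region used for the targets, and the function name and defaults are ours.

```python
import numpy as np

def simulate_sensor_scan(targets, pd, sigma_w=10.0, clutter_rate=10,
                         region=(-1000.0, 1000.0), rng=None):
    """One scan of a single position sensor: each target in `targets`
    (states [x, y, vx, vy]) is detected with probability pd and observed
    with Gaussian noise of std sigma_w on each coordinate, plus a Poisson
    number of clutter points uniform on the square region."""
    rng = np.random.default_rng() if rng is None else rng
    Z = []
    for x in targets:
        if rng.random() < pd:
            Z.append(np.asarray(x[:2], float) + rng.normal(0.0, sigma_w, size=2))
    lo, hi = region
    Z.extend(rng.uniform(lo, hi, size=2) for _ in range(rng.poisson(clutter_rate)))
    return Z
```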
Filter implementation details and error metric {#sec:filter_details}
----------------------------------------------
All the filters model the survival probability at all times and at all locations as constant with $p_{sv} = 0.99$. The target birth intensity is modelled as a Gaussian mixture with four components centered at $(\pm 400, \pm 400, 0, 0)$, each with covariance matrix diag$([100,100,25,25])$ and weight $0.1$. The target birth cardinality distribution is assumed Poisson with mean $0.4$. We consider two cases of sensor ordering where the sensor with variable probability of detection is either processed first (Case 1) or last (Case 2).
For the different multisensor filters the PHD function is represented by a mixture of Gaussian densities whereas the cardinality distribution is represented by a vector of finite length which sums to one. This Gaussian mixture model approximation was first used in [@vo2006] and [@vo2007] for multitarget tracking using single sensor PHD and CPHD filters respectively. We perform pruning of Gaussian components with low weights and merging of Gaussian components in close vicinity [@vo2006] for computational tractability. For the iterated-corrector filters pruning and merging is done after processing each sensor since many components have negligible weight and propagating them has no significant impact on tracking accuracy. For the general multisensor PHD and CPHD filters pruning and merging is performed at the end of the update step since intermediate Gaussian components are not accessible. The general multisensor PHD and CPHD filters are implemented using the two-step greedy approach described in Section \[sec:apprx\_implementation\]. In our simulations the maximum number of measurement subsets per Gaussian component is set to $W_{\textrm{max}} = 6$ and the maximum number of partitions of measurement subsets is set as $P_{\textrm{max}} = 6$. For CPHD filters, the cardinality distribution is assumed to be zero for $n > 20$.
For the PHD filters, we estimate the number of targets by rounding the sum of weights of the Gaussian components to the nearest integer. For the CPHD filters, we estimate the number of targets as the peak of the posterior cardinality distribution. For all the filters, the target state estimates are the centres of the Gaussian components with highest weights in the posterior PHD. To reduce the computational overhead, after each time step we restrict the number of Gaussian components to a maximum of four times the estimated number of targets. When the estimated number of targets is zero we retain a maximum of four Gaussian components.
The tracking performance of the different filters are compared using the *optimal sub-pattern assignment* (OSPA) error metric [@schuhmacher2008]. For the OSPA metric, we set the cardinality penalty factor $c = 100$ and power $p = 1$. The OSPA error metric accounts for error in estimation of target states as well as the error in estimation of number of targets. Given the two sets of estimated multitarget state and the true multitarget state, it finds the best permutation of the larger set which minimizes its distance from the smaller set and assigns a fixed penalty $c$ for each cardinality error. We use the Euclidean distance metric and consider only the target positions while computing the OSPA error.
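For reference, the OSPA distance of order $p$ with cutoff $c$ reduces to a single optimal assignment problem; a sketch using SciPy's Hungarian solver, with the defaults set to the values used here, is given below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=100.0, p=1):
    """OSPA distance between two sets of 2-D positions with cutoff c and
    order p (Euclidean base distance)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                # ensure |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    if m == 0:
        return float(c)                      # pure cardinality error
    D = np.array([[min(np.linalg.norm(np.asarray(x) - np.asarray(y)), c)
                   for y in Y] for x in X])
    row, col = linear_sum_assignment(D**p)   # optimal sub-pattern assignment
    cost = (D[row, col]**p).sum() + c**p * (n - m)
    return (cost / n)**(1.0 / p)
```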
Results
-------
The target tracks shown in Figure \[fig:tracks\_8t\] are used in all the simulations. The generated observation sequence is changed by providing a different initialization seed to the random number generator. We generate 100 different observation sequences and report the average OSPA error obtained by running each multisensor filter over these 100 observation sequences. The probability of detection of the sensor with variable probability of detection is gradually increased from $0.2$ to $1$. Figure \[fig:plots\_ospa\_pd\] shows the average OSPA error as the probability of detection is changed for the two cases, Case 1 and Case 2. The IC-PHD filter performs significantly worse than all the other filters. For the IC-PHD filter Case 1, the accuracy improves relative to Case 2 as the probability of detection is increased since the sensor with more reliable information is processed towards the end.
Figure \[fig:plots\_ospa\_pd\_zoom\] shows a portion of Figure \[fig:plots\_ospa\_pd\] enlarged for clarity. We observe that for the G-PHD, IC-CPHD and G-CPHD filters there is very little difference between performance for Case 1 and Case 2. Thus the IC-CPHD filter performance does not depend significantly on the order in which sensors are processed. For the G-PHD and G-CPHD filters the order in which sensors are processed to greedily construct measurement subsets has little impact on the final filter performance. The G-CPHD filter is able to outperform both the G-PHD and the IC-CPHD filters and has the lowest average OSPA error. A box and whisker plot comparison of the G-PHD, IC-CPHD and G-CPHD filters is shown in Figure \[fig:plot\_box\_1\]. The median OSPA error and the $25-75$ percentiles are shown for different values of $p_d$ for the sensor with variable probability of detection.
We now examine the effect of the parameters $W_{\text{max}}$ and $P_{\text{max}}$, i.e., the maximum number of measurement subsets and the maximum number of partitions. $W_{\text{max}}$ is varied in the range $\{1,2,4,6,8,10\}$ and $P_{\text{max}}$ is varied in the range $\{1,2,4,6,8,10\}$. For this simulation we fix the probability of detection of all the six sensors to be $0.5$. All other parameters of the simulation are the same as before. We do tracking using the same tracks as before and over 100 different observation sequences for each pair of $(W_{\text{max}}, P_{\text{max}})$.
Figure \[fig:plot\_error\_time\] plots the effect of changing $W_{\text{max}}$ and $P_{\text{max}}$ on the average OSPA error and the average computational time required per time-step. Simulations were performed using algorithms implemented in Matlab on computers with two Xeon 4-core 2.5GHz processors and 14GB RAM. Each curve is obtained by fixing $P_{\text{max}}$ and changing $W_{\text{max}}$. Dashed curves correspond to G-PHD filter and solid curves correspond to G-CPHD filters. For a given pair of $(W_{\text{max}}, P_{\text{max}})$ values both the filters require almost the same computational time but the G-CPHD filter has a lower average OSPA error compared to the G-PHD filter. We observe that for each curve as $W_{\text{max}}$ increases the average OSPA error reaches a minimum quickly (around $W_{\text{max}} = 2$) and then starts rising. This is because as $W_{\text{max}}$ is increased the non-ideal measurement subsets also get involved in the construction of partitions leading to noise terms in the update. The computational time required grows approximately linearly with increase in $W_{\text{max}}$. As $P_{\text{max}}$ is varied the average OSPA error saturates at around $P_{\text{max}} = 4$ and increasing it beyond 4 has very little impact. Increasing $P_{\text{max}}$ does not significantly raise the computational time requirements of the approximate G-PHD and G-CPHD filter implementations.
We perform another set of simulations to study the effect of the number of sensors and clutter rate of the sensors on filter performance. In this simulation the number of sensors $s$ is varied in the range $\{2,4,6,8,10\}$ and the clutter rate $\lambda$ of the Poisson clutter process is varied in the range $\{1,5,10,15,20\}$ and is same for all the sensors. We fix the probability of detection of all the sensors to be $0.5$. The approximate greedy algorithm parameters are set to $W_{\text{max}} = 6$ and $P_{\text{max}} = 6$. All other parameters of the simulation are unchanged. Tracking is performed using the same target tracks as before and 100 different observation sequences are generated for each pair of $(s,\lambda)$.
Figures \[fig:plots\_sensor\_1\] and \[fig:plots\_sensor\_2\] plot the average computational time and the average OSPA error as the number of sensors is changed for different clutter rate values. Each curve is obtained by fixing the clutter rate and changing the number of sensors. Dashed curves correspond to the G-PHD filter and solid curves correspond to the G-CPHD filters. From Figure \[fig:plots\_sensor\_1\] we observe that for approximate greedy implementations of the G-PHD and G-CPHD filters the computational requirements grow linearly with the number of sensors. As the number of sensors is increased the average OSPA error reduces as seen from Figure \[fig:plots\_sensor\_2\]. The G-CPHD filter requires relatively fewer sensors to achieve the same accuracy as that of the G-PHD filter.
Extension to non-linear measurement model
-----------------------------------------
In this section we extend the Gaussian mixture based filter implementation discussed in Section \[sec:apprx\_implementation\] to include non-linear measurement models using the unscented Kalman filter [@julier1997] approach. The unscented extensions to non-linear models when a single sensor is present are discussed in [@vo2006] and [@vo2007] for the PHD and CPHD filters respectively. We implement the unscented versions of the general multisensor PHD and CPHD filters by repeatedly applying the equations provided in [@vo2006; @vo2007]. Specifically, the equations are recursively applied for each $\z \in W$ to evaluate the score function $\beta^{(i)}(W)$ while constructing the measurement subsets.
As an example, we consider the setup described in [@yu2013] based on at-sea experiments. Two targets are present within the monitoring region and portions of their tracks are shown in Figure \[fig:tracks\_drdc\]. The target state $\x = [x ,y]$ consists of its coordinates in the $x-y$ plane and the filters model the motion of individual targets using a random walk model given by $\x_{k+1,i} = \x_{k,i} + \eta_{k+1,i}$ where the process noise $\eta_{k+1,i}$ is zero-mean Gaussian with covariance matrix $\Sigma_{\eta} = \sigma_{\eta}^{2} \, \text{diag}(1,1)$. In our simulations we set $\sigma_{\eta} = 0.24 \, \text{km}$. Although we consider linear target dynamics in this paper, the unscented approach can be easily extended to include non-linear target dynamics as well.
The targets are monitored using acoustic sensors which collect bearings (angle) measurements. If sensor $j$ is present at location $[x^{j}, y^{j}]$ and a target detected by the sensor has coordinates $ [x, y]$ then the measurement made by this sensor is given by $$\begin{aligned}
z &= \text{arctan} \left(\frac{y - y^{j}}{x - x^{j}}\right) + w\end{aligned}$$ where the measurement noise $w$ is zero mean Gaussian with standard deviation $\sigma_{w}$ and ‘arctan’ denotes the four-quadrant inverse tangent function. The sensor locations are assumed to be known. The measurements $z$ are in the range $[0,360)$ degrees. Along with the target related measurements the sensors also record clutter measurements not associated with any target. Five sensors (which slowly drift over time) gather measurements and their approximate locations are indicated in Figure \[fig:tracks\_drdc\]. To demonstrate the feasibility of the proposed algorithms for non-linear measurement models, in this paper we consider sensor deployments and target trajectories based on at-sea experiments. The sensor measurements themselves are simulated to avoid the issue of measurement model mismatch. All sensors are assumed to have the same $\sigma_{w}$ and we vary $\sigma_{w}$ in the range $\{1,2,3,4\}$ (degrees) in our simulations. The probability of detection of each sensor is uniform throughout the monitoring region and is the same for all the sensors. The probability of detection of the sensors is changed from $0.7$ to $0.95$ in increments of $0.05$. The clutter measurements made by each of the sensors is a Poisson random finite set with uniform density in $[0,360)$ and mean clutter rate $\lambda = 5$.
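A sketch of this bearing model, using NumPy's `arctan2` for the four-quadrant inverse tangent and wrapping the noisy angle into $[0, 360)$ degrees; the helper name is ours.

```python
import numpy as np

def bearing_measurement(target_xy, sensor_xy, sigma_w_deg, rng=None):
    """Noisy bearing (degrees, in [0, 360)) from a sensor to a detected
    target, matching the measurement model above."""
    rng = np.random.default_rng() if rng is None else rng
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    z = np.degrees(np.arctan2(dy, dx)) + rng.normal(0.0, sigma_w_deg)
    return z % 360.0
```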
The general multisensor PHD and the general multisensor CPHD filters are used to perform tracking in this setup. Most of the implementation details are the same as discussed in Section \[sec:filter\_details\]. The target birth intensity is modeled as a two component Gaussian mixture with components centered at true location of the targets at time $k = 1$ and each having covariance matrix $\text{diag}([0.65,0.79])$ and weight $0.1$. The target birth cardinality distribution is assumed to be Poisson with mean $0.2$. We set $W_{\textrm{max}} = 6$ and $P_{\textrm{max}} = 6$. While calculating the OSPA error we use the cardinality penalty factor of $c = 2$ and power $p = 1$. To demonstrate the feasibility of the proposed algorithm, in our current implementation a-priori information about initial target locations is assumed to be known. In a more practical scenario where this information is unavailable the Gaussian components (for birth) can be initialized based on sensor measurements.
The average OSPA error obtained by running the algorithms over 100 different observation sequences are shown in Figure \[fig:avg\_ospa\_drdc\]. Each curve is obtained by varying the probability of detection of the sensors from $0.7$ to $0.95$. As the probability of detection increases there is gradual decrease in the average OSPA error. Different curves correspond to different values of measurement noise standard deviation $\sigma_{w}$. As $\sigma_{w}$ is increased the average OSPA error increases as expected. For each $\sigma_{w}$ the G-CPHD filter performs better than the G-PHD filter. Estimated target locations obtained by the general multisensor CPHD filter are shown in Figure \[fig:tracks\_drdc\] when $\sigma_{w} = 2$ and $p_d = 0.9$. The tracks are obtained by joining the closest estimates across time.
Conclusions {#sec:conclusion}
===========
In this paper we address the problem of multitarget tracking using multiple sensors. Many of the existing approaches do not make complete use of the multisensor information or are computationally infeasible. The contribution of this work is twofold. As our first contribution we derive update equations for the general multisensor CPHD filter. These update equations, similar to the general multisensor PHD filter update equations, are combinatorial in nature and hence computationally intractable. Our second contribution is in developing an approximate greedy implementation of the general multisensor CPHD and PHD filters based on a Gaussian mixture model. The algorithm avoids any combinatorial calculations without sacrificing tracking accuracy. The algorithm is also scalable since the computational requirements grow linearly in the number of sensors as observed from the simulations.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Florian Meyer for pointing out an issue in an earlier version of this manuscript. We also thank the anonymous reviewers for comments which helped improve the presentation of this work.
Recursive expression for partitions {#app:recursive}
===================================
Let $\S^{(\l)}$ be the collection of all permissible partitions of the set $Z^{1:\l}_{k+1}$ ($1 \leq \l \leq s$) where partitions are as defined in the equations -. Since the $V$ component of a partition is unique given the $W$ components, we do not explicitly specify the $V$ component in the recursive expression. Let $P \in \S^{(\l)}$ be any partition of $Z^{1:\l}_{k+1}$ which is given as $P = \{W_1,W_2,\dots,W_{|P|-1}, V \}$. Let $Z^{\l+1}_{k+1} = \{ \z^{\l+1}_{1}, \z^{\l+1}_{2}, \dots, \z^{\l+1}_{m_{\l+1}}\}$. Then we can express $\S^{(\l+1)}$ using $\S^{(\l)}$ and $Z^{\l+1}_{k+1}$ as given by the following relation $$\begin{aligned}
& \S^{(\l+1)} = \nonumber \\
& \bigcup_{P \in \S^{(\l)}} \bigcup_{n_{1}=0}^{m_{\l+1}} \bigcup_{n_{2}=0}^{\textrm{min}(m_{\l+1},|P|-1)}
\!\!\!\!\!\! \bigcup_{\substack{I_{1} \subseteq \ll 1 , m_{\l+1} \rr \\ |I_{1}| = n_{1}}}
\bigcup_{\substack{I_{2} \subseteq \ll 1 , m_{\l+1} \rr \\ |I_{2}| = n_{2} \\ I_{1} \cap I_{2} = \emptyset}}
\bigcup_{\substack{J \subseteq \ll 1 , |P|-1 \rr \\ |J| = |I_{2}|}} \nonumber \\
& \quad \bigcup_{B \in \mathcal{B}(I_{2},J)}
\left\{ \{ \z^{\l+1}_{i_{1}}\}_{i_{1} \in I_{1}} \cup
\{W_{B(i_{2})}, \z^{\l+1}_{i_{2}}\}_{i_{2} \in I_{2}} \cup
\{ W_{j}\}_{j \notin J} \right\}\end{aligned}$$ where $\mathcal{B}(I_{2},J)$ is the collection of all possible matchings [^3] from set $I_{2}$ to set $J$. The above relation mathematically expresses the fact that for each partition $P \in \S^{(\l)}$ and given $Z^{\l+1}_{k+1}$, a new partition belonging to $\S^{(\l+1)}$ can be constructed by adding some new singleton measurement subsets from $Z^{\l+1}_{k+1}$ (i.e. $\{ \z^{\l+1}_{i_{1}}\}_{i_{1} \in I_{1}}$), extending some existing subsets in $P$ by appending them with measurements from $Z^{\l+1}_{k+1}$ (i.e. $\{W_{B(i_{2})} \cup \z^{\l+1}_{i_{2}}\}_{i_{2} \in I_{2}}$) and retaining some existing measurement subsets (i.e. $\{ W_{j}\}_{j \notin J}$). By this definition we have $\S = \S^{(s)}$. As a special case for $\l = 1$, $$\begin{aligned}
\S^{(1)} &= \bigcup_{n=0}^{m_{1}} \bigcup_{\substack{I \subseteq \ll 1 , m_{1} \rr \\ |I| = n}}
\left\{ \{ \z^{1}_{i}\}_{i \in I} \right\}. \label{eq:union_r1}\end{aligned}$$
Functional derivatives {#app:functional_derivatives}
======================
We now review the notion of functional derivatives which play an important role in the derivation of filter update equations. A brief background is provided that is necessary for the derivations in this paper; for additional details see [@mahler2007], [@mahler2007B Ch. 11], [@mahler2014B]. Let $\F$ denote the set of mappings from $\Y$ to $\R$ and let $A$ be a functional mapping elements of $\F$ to $\R$. Let $u(\y)$ and $g(\y)$ be functions in $\F$. For the functional $A[u]$, its Gâteaux derivative along the direction of the function $g(\y)$ is defined as [@mahler2007B] $$\begin{aligned}
\frac{\partial A}{\partial g}[u] \, {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\, \lim_{\epsilon \rightarrow 0}
\frac{A[u + \epsilon \cdot g] - A[u]}{\epsilon}.\end{aligned}$$ In this paper we are interested in the Gâteaux derivatives when the function $g(\y)$ is the Dirac delta function $\delta_{\y_{1}}(\y)$ localized at $\y_{1}$ and the corresponding Gâteaux derivatives are called functional derivatives [@mahler2007B; @mahler2007]. In this case, the functional derivative is commonly written, $$\begin{aligned}
\frac{\partial A}{\partial \delta_{\y_{1}}}[u] \equiv \frac{\delta A}{\delta \y_{1}}[u].\end{aligned}$$ If the functional $A[u]$ is of the form $A[u] = \int \! {u(\y) g(\y) d\y}$ then we have $\frac{\delta A}{\delta \y_{1}}[u] = g(\y_{1})$.
We can also define higher order functional derivatives of $A[u]$. For a set $Y = \{ \y_{1}, \y_{2}, \cdots, \y_{n} \}$, the $n^{th}$ order derivative is denoted by $$\begin{aligned}
\frac{\delta^{n} A}{\delta Y}[u] &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\delta^{n} A}{\delta \y_{1} \delta \y_{2} \dots \delta \y_{n}}[u].\end{aligned}$$ We call $\frac{\delta^{n} A}{\delta Y}[u]$ the functional derivative of $A[u]$ with respect to the set $Y$.
For functionals $A_{1}[u]$, $A_{2}[u]$ and $A_{3}[u]$, the product rule for functional derivatives [@mahler2007B Ch. 11] gives $$\begin{aligned}
&\frac{\delta}{\delta Y} \{A_{1}[u] \, A_{2}[u] \, A_{3}[u]\} \nonumber \\
&= \sum_{Y_{1} \subseteq Y} \sum_{\substack{Y_{2} \subseteq Y \\ Y_{1} \cap Y_{2} = \emptyset}}
\frac{\delta A_{1}}{\delta Y_{1}}[u] \frac{\delta A_{2}}{\delta Y_{2}}[u]
\frac{\delta A_{3}}{\delta (Y-Y_{1}-Y_{2})}[u] \\
&= \sum_{n_{1}=0}^{|Y|} \sum_{n_{2}=0}^{|Y|} \sum_{\substack{Y_{1} \subseteq Y \\ |Y_{1}| = n_{1}}}
\sum_{\substack{Y_{2} \subseteq Y \\ |Y_{2}| = n_{2} \\ Y_{1} \cap Y_{2} = \emptyset}}
\frac{\delta A_{1}}{\delta Y_{1}}[u] \frac{\delta A_{2}}{\delta Y_{2}}[u]
\frac{\delta A_{3}}{\delta (Y-Y_{1}-Y_{2})}[u].
\label{eq:product_rule_3}\end{aligned}$$ As a special case, for the product of two functionals we have $$\begin{aligned}
\frac{\delta}{\delta Y} \{A_{1}[u] \, A_{2}[u]\} &=
\sum_{n=0}^{|Y|} \sum_{\substack{Y_{1} \subseteq Y \\ |Y_{1}| = n}}
\frac{\delta A_{1}}{\delta Y_{1}}[u] \frac{\delta A_{2}}{\delta (Y-Y_{1})}[u].
\label{eq:product_rule_2}\end{aligned}$$
Probability generating functional {#app:pgfl}
=================================
Let $u(\x)$ be a function with the mapping $u:\mathcal{X} \rightarrow [0,1]$, where $\mathcal{X}$ is the single target state space. For a set $X \subseteq \mathcal{X}$, define $u^X {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\prod_{\x \in X}{u(\x)}$. Let $\Xi$ be a random finite set with elements in $\mathcal{X}$ and let $f_{\Xi}(X)$ be its probability density function. The *probability generating functional* (PGFL [@mahler2007B]) of the random finite set $\Xi$ is defined as the following integral transform $$\begin{aligned}
G_{\Xi}[u] &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\int{u^X \; f_{\Xi}(X) \delta X}\end{aligned}$$ where the integration is a set integral [@mahler2007B].
For a constant $a \in [0,1]$ denote by $u(\x) \equiv a$ the constant function $u(\x) = a$, for all $\x \in \mathcal{X}$. Let $A[a]$ denote the value of the functional $A[u]$ evaluated at $u(\x) \equiv a$. Recall that for a random variable its moments are related to the derivatives of its PGF. Similarly, the first moment or the PHD function of a random finite set is related to the functional derivative of its PGFL. For the random finite set $\Xi$, its PHD function $D_{\Xi}(\x)$ is related to the functional derivative of the PGFL [@mahler2007B] as follows $$\begin{aligned}
D_{\Xi}(\x) = \frac{\delta G_{\Xi}}{\delta \x}[1]. \label{eq:phd_functional_derivative}\end{aligned}$$
The PGF $\pgf_{\Xi}(t)$ of the cardinality distribution of the random finite set $\Xi$ is obtained by substituting the constant function $u(\x) \equiv t$, in the PGFL i.e., $\pgf_{\Xi}(t) = G_{\Xi}[t]$. For an IIDC random finite set with spatial density function $\zeta(\x)$ we have the relation $G_{\Xi}[u] = \pgf_{\Xi}({\langle \zeta, u \rangle})$.
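As a familiar special case of these definitions (a standard identity, included only for orientation), a Poisson random finite set with PHD $D(\x)$ has $$G_{\Xi}[u] = e^{{\langle D, u-1 \rangle}}, \qquad \frac{\delta G_{\Xi}}{\delta \x}[u] = D(\x) \, G_{\Xi}[u], \qquad \frac{\delta G_{\Xi}}{\delta \x}[1] = D(\x),$$ consistent with the relation between the PHD and the functional derivative of the PGFL given above.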
Multitarget Bayes filter {#app:multitarget_bayes}
========================
Let $f_{k+1|k}(X|Z^{1:s}_{1:k})$ and $f_{k+1|k+1}(X|Z^{1:s}_{1:k+1})$ be the predicted and posterior multitarget state distributions at time $k+1$ and let $L_{k+1,j}(Z^{j}_{k+1}|X)$ denote the multitarget likelihood function for the $j^{th}$ sensor at time $k+1$. Since the sensor observations are independent conditional on the multitarget state, the update equation for the multitarget Bayes filter [@mahler2007B] is given by $$\begin{aligned}
f_{k+1|k+1}(X|Z^{1:s}_{1:k+1}) &\propto f_{k+1|k}(X|Z^{1:s}_{1:k})
\prod_{j=1}^{s}{L_{k+1,j}(Z^{j}_{k+1}|X)}.\end{aligned}$$ We now define a multivariate functional which is the integral transform of the quantity in the right hand side of the above equation. Under the conditions of Assumption 1, we can obtain a closed form expression for this multivariate functional, which on differentiation gives the PGFL of the posterior multitarget state distribution.
Let $g_{j}(\z), j = 1,2,\dots,s$ be functions that map the space $\mathcal{Z}^{j}$ to $[0,1]$ where $\mathcal{Z}^{j}$ is the space of observations of sensor $j$. The intermediate functions $g_{j}(\z)$ will be used to define functionals and later set to zero to obtain the PGFL of the posterior multitarget distribution. Let $u(\x)$ be a function mapping the state space $\mathcal{X}$ to $[0,1]$. For brevity, denote the vector of functions $[g_1,g_2,\dots,g_s]$ as $g_{1:s}$ and define $g_{j}^{Z^{j}} {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\prod_{\z \in Z^{j}}{g_{j}(\z)}$ where $Z^{j} \subseteq \mathcal{Z}^{j}$. We define the multivariate functional $F[g_1,g_2,\dots,g_s,u]$ as the following integral transform $$\begin{aligned}
&F[g_{1:s},u] {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\int{u^{X} \, \left( \prod_{j=1}^{s}{\mathcal{L}_{k+1,j}[g_{j}|X]} \right) \,
f_{k+1|k}(X|Z^{1:s}_{1:k}) \, \delta X} \label{eq:multivariate_functional} \\
&\text{where } \mathcal{L}_{k+1,j}[g_{j}|X] {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\int{g_{j}^{Z^{j}} \, L_{k+1,j}(Z^{j}|X) \delta Z^{j}}.\end{aligned}$$ Later we will relate the PGFL of the posterior multitarget distribution to the derivatives of the functional $F[g_{1:s},u]$ with respect to the sensor observations $Z^{1:s}_{k+1}$. Recall that $c_{j}(\z)$ denotes the clutter spatial distribution and $C_{j}(t)$ denotes the PGF of the clutter cardinality distribution for the $j^{th}$ sensor. Under Assumption 1 it can be shown that [@mahler2003] $$\begin{aligned}
\mathcal{L}_{k+1,j}[g_{j}|X] &= \int{g_{j}^{Z^{j}} \, L_{k+1,j}(Z^{j}|X) \delta Z^{j}} \\
&= C_{j}({\langle c_{j}, g_{j} \rangle}) \, \phi_{g_{j}}^{X} \,, \\
\textrm{where } \phi_{g_{j}}(\x) &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}1 - p^{j}_{d}(\x) + p^{j}_{d}(\x) \, p_{g_{j}}(\x) \\
p_{g_{j}}(\x) &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\int{g_{j}(\z) h_{j}(\z|\x) d\z}.\end{aligned}$$
Let $G_{k+1|k}[u]$ denote the PGFL of the predicted multitarget distribution. Using the above relations in we have $$\begin{aligned}
F[&g_{1:s},u] = \int \!\! {u^{X} \, \left( \prod_{j=1}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \,
\phi_{g_{j}}^{X} \right) \, f_{k+1|k}(X|Z^{1:s}_{1:k}) \, \delta X}\end{aligned}$$ Since both $u$ and $\phi_{g_{j}}$ are functions defined over the space $\mathcal{X}$, we can combine the product of $u^{X}$ and $\prod_{j=1}^{s}{\phi_{g_{j}}^{X}}$ and write $(u \, \prod_{j=1}^{s}{\phi_{g_{j}}})^{X}$. Hence we have $$\begin{aligned}
& F[g_{1:s},u] \nonumber \\
&= \left( \prod_{j=1}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \right) \,
\int{\left(u \, \prod_{j=1}^{s}{\phi_{g_{j}}} \right)^{X} \, f_{k+1|k}(X|Z^{1:s}_{1:k}) \, \delta X} \\
&= \left( \prod_{j=1}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \right) \, G[u\prod_{j=1}^{s}{\phi_{g_{j}}}] \\
&= \left( \prod_{j=1}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \right) \, \pgf({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}).\end{aligned}$$ The last two steps result from the definition of the PGFL and the assumption that the predicted multitarget distribution $f_{k+1|k}(X|Z^{1:s}_{1:k})$ is IIDC.
Let $G_{k+1|k+1}[u]$ be the PGFL of the multitarget density $f_{k+1|k+1}(X|Z^{1:s}_{1:k+1})$, and let $D_{k+1|k+1}(\x)$ be the posterior PHD function. From [@mahler2009a; @delande2010] we have the following relation $$\begin{aligned}
G_{k+1|k+1}[u] &= \frac{\displaystyle \frac{\delta F}{\delta Z^{1:s}_{k+1}}[0,0,\dots,0,u]}
{\displaystyle \frac{\delta F}{\delta Z^{1:s}_{k+1}}[0,0,\dots,0,1]}.
\label{eq:G_defn_gen}\end{aligned}$$ Since the PHD is the functional derivative of the PGFL, from $$\begin{aligned}
D_{k+1|k+1}(\x) &=
\frac{\displaystyle \frac{\delta F}{\delta \x \, \delta Z^{1:s}_{k+1}}[0,0,\dots,0,1]}
{\displaystyle \frac{\delta F}{\delta Z^{1:s}_{k+1}}[0,0,\dots,0,1]}.
\label{eq:phd_defn_gen}\end{aligned}$$ Note that the differentiation $\frac{\delta}{\delta Z^{j}_{k+1}}$ is with respect to the function variable $g_{j}$ and the differentiation $\frac{\delta}{\delta \x}$ is with respect to the function variable $u$. The general multisensor CPHD filter update equation is derived by evaluating the functional derivatives of $F[g_{1:s},u]$ in and .
We now define a quantity $\Gamma$ and the functionals $\Psi_{P}[g_{1:s},u]$ and $\varphi_{W}[g_{1:s},u]$. The functional derivatives of $F[g_{1:s},u]$ can be expressed in terms of these quantities. Let $$\begin{aligned}
\Gamma &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\prod_{j=1}^{s}{\left( \prod_{\z \in Z^{j}_{k+1}}{c_{j}(\z)} \right)}, \label{eq:Gamma} \\
\Psi_{P}&[g_{1:s},u] \nonumber \\
&{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left( \prod_{j=1}^{s} {C^{(m_{j}-|P|_{j})}_{j}({\langle c_{j}, g_{j} \rangle})} \right) \,
\pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}). \label{eq:Psi}\end{aligned}$$ For $W \in \mathcal{W}$, let $$\begin{aligned}
&\varphi_{W}[g_{1:s},u] \nonumber \\
&{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\displaystyle \int \!\! r(\x) u(\x) \left( \prod_{(i,l) \in T_{W}}{\!\!\!\! p_{d}^{i}(\x)\,h_{i}(\z^{i}_{l}|\x)}\right) \,
\left( \prod_{j:(j,*) \notin T_{W}}{\!\!\!\!\!\! \phi_{g_{j}}(\x)} \right) d\x}
{\displaystyle \prod_{(i,l) \in T_{W}}{c_{i}(\z^{i}_{l})}} \cdot
\label{eq:varphi}\end{aligned}$$ With these definitions we can prove, via mathematical induction, the following lemma.
\[lem:F\_derivative\] Under the conditions of Assumption 1, the functional derivative of $F[g_{1:s},u]$ with respect to the multisensor observation set $Z^{1:s}_{k+1}$ is given by $$\begin{aligned}
\frac{\delta F}{\delta Z^{1:s}_{k+1}}[g_{1:s},u] &=
\Gamma \sum_{P \in \S} \Psi_{P}[g_{1:s},u] \prod_{W \in P}{\varphi_{W}[g_{1:s},u]}\end{aligned}$$ where $\Gamma$, $\Psi_{P}[g_{1:s},u]$ and $\varphi_{W}[g_{1:s},u]$ are as defined in , and .
Lemma \[lem:F\_derivative\] is proved in Appendix \[app:F\_derivative\].
Proof of Lemma \[lem:F\_derivative\] {#app:F_derivative}
====================================
The derivation is based on the approach used by Mahler [@mahler2009a] to derive multisensor PHD filter equations for the two sensor case and its extension by Delande et al. [@delande2010] for the general case of $s$ sensors.
We prove using mathematical induction on $1 \leq \l \leq s$ the following result, $$\begin{aligned}
\frac{\delta F}{\delta Z^{1:\l}_{k+1}}[g_{1:s},u] &= \Gamma^{(\l)} \sum_{P \in \S^{(\l)}}
\Psi_{P}^{(\l)}[g_{1:s},u] \prod_{W \in P}{\varphi_{W}[g_{1:s},u]}
\label{eq:induction_statement}\end{aligned}$$ where, $$\begin{aligned}
\Gamma^{(\l)} &{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\prod_{j=1}^{\l}{\prod_{\z \in Z^{j}_{k+1}}{c_{j}(\z)}}, \\
\Psi_{P}^{(\l)}&[g_{1:s},u] {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\left( \prod_{j=1}^{\l}{C^{(m_{j}-|P|_{j})}_{j}({\langle c_{j}, g_{j} \rangle})} \right) \nonumber \\
& \times \left( \prod_{j=\l+1}^{s} {C_{j}({\langle c_{j}, g_{j} \rangle})} \right)
\times \pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}),\end{aligned}$$ and for $W \in \mathcal{W}$, $$\begin{aligned}
&\varphi_{W}[g_{1:s},u] \nonumber \\
&{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}\frac{\displaystyle \int \!\! r(\x) u(\x) \left( \prod_{(i,l) \in T_{W}}{\!\!\!\! p_{d}^{i}(\x)\,h_{i}(\z^{i}_{l}|\x)}\right) \,
\left( \prod_{j:(j,*) \notin T_{W}}{\!\!\!\!\! \phi_{g_{j}}(\x)} \right) d\x}
{\displaystyle \prod_{(i,l) \in T_{W}}{c_{i}(\z^{i}_{l})}} \cdot\end{aligned}$$
We first establish the induction result for the base case, i.e. $\l = 1$. Ignoring the time index let the observation set gathered by sensor 1 at time $k+1$ be $Z^{1}_{k+1} = \{ \z^{1}_{1}, \z^{1}_{2}, \dots, \z^{1}_{m_{1}}\}$. We have, for the case of $s$ sensors $$\begin{aligned}
F[g_{1:s},u] &= \left( \prod_{j=1}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \right) \,
\pgf({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}).\end{aligned}$$
Differentiating the above expression with respect to the set $Z^{1}_{k+1}$ we get $$\begin{aligned}
&\frac{\delta F}{\delta Z^{1}_{k+1}}[g_{1:s},u] \nonumber \\
&= \frac{\delta}{\delta Z^{1}_{k+1}}
\left\{ \left( \prod_{j=1}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \right) \,
\pgf({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \right\} \\
&= \left( \prod_{j=2}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \right) \frac{\delta}{\delta Z^{1}_{k+1}}
\left\{ {C_{1}({\langle c_{1}, g_{1} \rangle})} \, \pgf({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \right\}\end{aligned}$$ since the differential $\frac{\delta}{\delta Z^{1}_{k+1}}$ only differentiates the variable $g_1$. If $I \subseteq \ll 1 , m_{1} \rr$ we can express $Y \subseteq Z^{1}_{k+1}$ as $Y = \{ \z^{1}_{i}: i \in I \}$ for some $I$. We also have $Z^{1}_{k+1} - Y = \{ \z^{1}_{i}: i \notin I \}$. Using the product rule for functional derivatives from we have $$\begin{aligned}
&\frac{\delta}{\delta Z^{1}_{k+1}} \left\{ {C_{1}({\langle c_{1}, g_{1} \rangle})} \,
\pgf({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \right\} \nonumber \\
&= \sum_{n=0}^{m_{1}} \sum_{\substack{I \subseteq \ll 1 , m_{1} \rr \\ |I| = n}}
\frac{\delta}{\delta \{\z^{1}_{i}\}_{i \in I}} \pgf({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle})
\frac{\delta}{\delta \{\z^{1}_{i}\}_{i \notin I}} {C_{1}({\langle c_{1}, g_{1} \rangle})}.
\label{eq:rhs_1}\end{aligned}$$ Now we consider the derivatives of each of the individual terms in the above expression. By application of the chain rule for functional derivatives [@mahler2007B Ch. 11] $$\begin{aligned}
&\frac{\delta}{\delta \{\z^{1}_{i}\}_{i \in I}} \pgf({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \nonumber \\
&= \pgf^{(n)}({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \,
\prod_{i \in I}{{\langle r, u\,p^{1}_{d}\,h_{1}(\z^{1}_{i})\,\prod_{j=2}^{s}{\phi_{g_{j}}} \rangle}}.\end{aligned}$$ In the expression above and in later expressions we have used the convention that whenever $|I| = 0$, $\prod_{i \in I}() = 1$. Applying the chain rule to the second derivative $$\begin{aligned}
\frac{\delta}{\delta \{\z^{1}_{i}\}_{i \notin I}} {C_{1}({\langle c_{1}, g_{1} \rangle})}
&= C^{(m_{1}-n)}_{1}({\langle c_{1}, g_{1} \rangle}) \; \prod_{i \notin I}{c_{1}(\z^{1}_{i})} \\
&= C^{(m_1-n)}_{1}({\langle c_{1}, g_{1} \rangle}) \; \frac{\Gamma^{(1)}}{\displaystyle \prod_{i \in I}{c_{1}(\z^{1}_{i})}}.\end{aligned}$$ As before we have used the convention $\prod_{i \in I}() = 1$ when $|I| = 0$. Thus the right hand side of can be expressed as $$\begin{aligned}
\Gamma^{(1)} &\, \sum_{n=0}^{m_{1}} \sum_{\substack{I \subseteq \ll 1 , m_{1} \rr \\ |I| = n}}
\Bigg\{ \pgf^{(n)}({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \times \nonumber \\
& C^{(m_1-n)}_{1}({\langle c_{1}, g_{1} \rangle})
\prod_{i \in I}{\frac{{\langle r, u\,p^{1}_{d}\,h_{1}(\z^{1}_{i})\,\prod_{j=2}^{s}{\phi_{g_{j}}} \rangle}}
{c_{1}(\z^{1}_{i})}} \Bigg\}.\end{aligned}$$ In the double summation above, each set $I$ maps to a partition $P$ of the form $P = \bigcup_{i \in I}{ \{ \z^{1}_{i} \} }$ in $\S^{(1)}$ and vice versa. Hence, using the result in equation of Appendix \[app:recursive\], we have $$\begin{aligned}
&\frac{\delta F}{\delta Z^{1}_{k+1}}[g_{1:s},u] = \Gamma^{(1)} \sum_{P \in \S^{(1)}} \Bigg\{
C^{(m_1-|P|_{1})}_{1}({\langle c_{1}, g_{1} \rangle}) \times \nonumber \\
& \left( \prod_{j=2}^{s}{C_{j}({\langle c_{j}, g_{j} \rangle})} \right)
\pgf^{(|P|-1)}({\langle r, u\prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \;
\prod_{W \in P}{\varphi_{W}[g_{1:s},u]} \Bigg\}\end{aligned}$$ Note that for the empty partition $P = \{V\}$, there are no elements of the form $W$ in $P$. Hence in the above expression we use the convention $\prod_{W \in P}() = 1$ whenever $P = \{V\}$. Further grouping of the terms gives the compact expression $$\begin{aligned}
\frac{\delta F}{\delta Z^{1}_{k+1}}[g_{1:s},u] = \Gamma^{(1)} \sum_{P \in \S^{(1)}} \Psi_{P}^{(1)}[g_{1:s},u] \prod_{W \in P}{\varphi_{W}[g_{1:s},u]}.\end{aligned}$$ Hence the result is established for the case $\l = 1$.
Now assuming that the result is true for some $\l = b \geq 1$, we establish that the result holds for $\l = b+1 \leq s$. Let $Z^{b+1}_{k+1} = \{ \z^{b+1}_{1}, \, \z^{b+1}_{2}, \dots, \z^{b+1}_{m_{b+1}}\}$. We can write $$\begin{aligned}
\frac{\delta F}{\delta Z^{1:b+1}_{k+1}}[g_{1:s},u] &= \frac{\delta}{\delta Z^{b+1}_{k+1}}
\left\{ \frac{\delta F}{\delta Z^{1:b}_{k+1}}[g_{1:s},u] \right\}.\end{aligned}$$ Substituting the result for the case $\l = b$ we get $$\begin{aligned}
&\frac{\delta F}{\delta Z^{1:b+1}_{k+1}}[g_{1:s},u] \nonumber \\
&= \frac{\delta}{\delta Z^{b+1}_{k+1}} \Bigg\{ \Gamma^{(b)} \sum_{P \in \S^{(b)}}
\Psi_{P}^{(b)}[g_{1:s},u] \prod_{W \in P}{\varphi_{W}[g_{1:s},u]} \Bigg\} \\
&= \Gamma^{(b)} \left( \prod_{j=b+2}^{s} {C_{j}({\langle c_{j}, g_{j} \rangle})} \right) \sum_{P \in \S^{(b)}}
\left( \prod_{j=1}^{b} {C^{(m_{j}-|P|_{j})}_{j}({\langle c_{j}, g_{j} \rangle})} \right) \nonumber \\
& \quad \times \frac{\delta}{\delta Z^{b+1}_{k+1}} \Bigg\{ C_{b+1}({\langle c_{b+1}, g_{b+1} \rangle})
\pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \times \nonumber \\
& \qquad \qquad \prod_{W \in P}{\varphi_{W}[g_{1:s},u]}\Bigg\}. \label{eq:rhs_2}\end{aligned}$$
Let $I_{1} \subseteq \ll 1 , m_{b+1} \rr$ and $I_{2} \subseteq \ll 1 , m_{b+1} \rr$ such that $I_{1} \cap I_{2} = \emptyset$. Then we can express $Y_{1} \subseteq Z^{b+1}_{k+1}$ and $Y_{2} \subseteq Z^{b+1}_{k+1}$ satisfying $Y_{1} \cap Y_{2} = \emptyset$ as $Y_{1} = \{ \z^{b+1}_{i}: i \in I_{1} \}$ and $Y_{2} = \{ \z^{b+1}_{i}: i \in I_{2} \}$ respectively. Applying the product rule from to the expression above we have $$\begin{aligned}
& \frac{\delta}{\delta Z^{b+1}_{k+1}} \Bigg\{ C_{b+1}({\langle c_{b+1}, g_{b+1} \rangle})
\pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{\phi_{g_{j}}} \rangle}) \times \nonumber \\
& \qquad \qquad \prod_{W \in P}{\varphi_{W}[g_{1:s},u]}\Bigg\} \nonumber \\
& \; = \sum_{n_{1}=0}^{m_{b+1}} \sum_{n_{2}=0}^{\textrm{min}(m_{b+1},|P|-1)}
\sum_{\substack{I_{1} \subseteq \ll 1 , m_{b+1} \rr \\ |I_{1}| = n_{1}}}
\sum_{\substack{I_{2} \subseteq \ll 1 , m_{b+1} \rr \\ |I_{2}| = n_{2}; \; I_{1} \cap I_{2} = \emptyset}} \nonumber \\
& \qquad \Bigg\{ \frac{\delta}{\delta \{\z^{b+1}_{i_{1}}\}_{i_{1} \in I_{1}}}
\pgf^{(|P|-1)}({\langle r, u \prod_{n=1}^{s}{\phi_{g_{n}}} \rangle}) \times \nonumber \\
& \qquad \qquad \frac{\delta}{\delta \{\z^{b+1}_{i_{2}}\}_{i_{2} \in I_{2}}}
\left( \prod_{W \in P}{\varphi_{W}[g_{1:s},u]} \right) \times \nonumber \\
& \qquad \qquad \frac{\delta}{\delta \{\z^{b+1}_{i}\}_{i \notin I_{1} \cup I_{2}}}
C_{b+1}({\langle c_{b+1}, g_{b+1} \rangle}) \Bigg\} .\end{aligned}$$ The second summation above is restricted to the limit $\textrm{min}(m_{b+1},|P|-1)$ because the derivatives of $\prod_{W \in P}{\varphi_{W}[g_{1:s},u]}$ for $n_{2} > |P|-1$ are zero. Now considering each of the individual derivatives above we have $$\begin{aligned}
& \frac{\delta}{\delta \{\z^{b+1}_{i_{1}}\}_{i_{1} \in I_{1}}}
\pgf^{(|P|-1)}({\langle r, u \prod_{n=1}^{s}{\phi_{g_{n}}} \rangle}) \nonumber \\
&= \pgf^{(|P|+n_{1}-1)}({\langle r, u \prod_{n=1}^{s}{\phi_{g_{n}}} \rangle})
\prod_{i_{1} \in I_{1}}{\varphi_{\{\z^{b+1}_{i_{1}}\}}[g_{1:s},u] \, c_{b+1}(\z^{b+1}_{i_{1}})}.\end{aligned}$$ Denote $P = \{ W_{1}, W_{2}, \dots, W_{|P|-1}, V \}$ for notational convenience. Then we have $$\begin{aligned}
\frac{\delta}{\delta \{\z^{b+1}_{i_{2}}\}_{i_{2} \in I_{2}}} &
\left( \prod_{W \in P}{\varphi_{W}[g_{1:s},u]} \right) \nonumber \\
&= \sum_{\substack{J \subseteq \ll 1 , |P|-1 \rr \\ |J| = |I_{2}|}} \sum_{B \in \mathcal{B}(I_{2},J)}
\Bigg\{ \left( \prod_{j \notin J}{\varphi_{W_{j}}[g_{1:s},u]} \right) \times \nonumber \\
& \qquad \left( \prod_{i_{2} \in I_{2}}{\varphi_{W^{B}_{i_{2}}}
[g_{1:s},u] \, c_{b+1}(\z^{b+1}_{i_{2}})} \right) \Bigg\}\end{aligned}$$ where $\mathcal{B}(I_{2},J)$ is the collection of all possible matchings from set $I_{2}$ to set $J$ and we define the measurement subset $W^{B}_{i_{2}} {\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}W_{B(i_{2})} \cup \z^{b+1}_{i_{2}}$. Also $$\begin{aligned}
& \frac{\delta}{\delta \{\z^{b+1}_{i}\}_{i \notin I_{1} \cup I_{2}}}
\left( C_{b+1}({\langle c_{b+1}, g_{b+1} \rangle}) \right) \nonumber \\
&= C^{(m_{b+1}-n_{1}-n_{2})}_{b+1}({\langle c_{b+1}, g_{b+1} \rangle})
\prod_{i \notin I_{1} \cup I_{2}}{c_{b+1}(\z^{b+1}_{i})}.\end{aligned}$$
Combining the three derivatives into the right hand side of expression we get
$$\begin{aligned}
& \Gamma^{(b)} \left( \prod_{j=b+2}^{s} {C_{j}({\langle c_{j}, g_{j} \rangle})} \right)
\left( \prod_{\z^{b+1} \in Z^{b+1}_{k+1}}{\!\!\!\! c_{b+1}(\z^{b+1})}\right) \nonumber \\
& \sum_{P \in \S^{(b)}} \sum_{n_{1}=0}^{m_{b+1}} \sum_{n_{2}=0}^{\textrm{min}(m_{b+1},|P|-1)}
\!\!\!\!\!\! \sum_{\substack{I_{1} \subseteq \ll 1 , m_{b+1} \rr \\ |I_{1}| = n_{1}}}
\sum_{\substack{I_{2} \subseteq \ll 1 , m_{b+1} \rr \\ |I_{2}| = n_{2}; \; I_{1} \cap I_{2} = \emptyset}}
\sum_{\substack{J \subseteq \ll 1 , |P|-1 \rr \\ |J| = |I_{2}|}} \nonumber \\
& \sum_{B \in \mathcal{B}(I_{2},J)} \Bigg\{ C^{(m_{b+1}-n_{1}-n_{2})}_{b+1}({\langle c_{b+1}, g_{b+1} \rangle})
\left( \prod_{j \notin J}{\varphi_{W_{j}}[g_{1:s},u]} \right) \times \nonumber \\
& \quad \pgf^{(|P|+n_{1}-1)}({\langle r, u \prod_{n=1}^{s}{\phi_{g_{n}}} \rangle}) \,
\left( \prod_{j=1}^{b} {C^{(m_{j}-|P|_{j})}_{j}({\langle c_{j}, g_{j} \rangle})} \right) \times \nonumber \\
& \quad \left( \prod_{i_{1} \in I_{1}}{\varphi_{\{\z^{b+1}_{i_{1}}\}}[g_{1:s},u]} \right)
\left( \prod_{i_{2} \in I_{2}}{\varphi_{W^{B}_{i_{2}}}[g_{1:s},u]} \right) \Bigg\}.\end{aligned}$$
Using the result of Appendix \[app:recursive\], we can simplify the multiple summation term and write $$\begin{aligned}
&\frac{\delta F}{\delta Z^{1:b+1}_{k+1}}[g_{1:s},u] =
\Gamma^{(b+1)} \!\!\!\! \sum_{P \in \S^{(b+1)}} \Bigg\{
\left( \prod_{j=1}^{b+1} {C^{(m_{j}-|P|_{j})}_{j}({\langle c_{j}, g_{j} \rangle})} \right) \nonumber \\
& \qquad \left( \prod_{j=b+2}^{s} {C_{j}({\langle c_{j}, g_{j} \rangle})} \right)
\pgf^{(|P|-1)}({\langle r, u \prod_{n=1}^{s}{\phi_{g_{n}}} \rangle}) \times \nonumber \\
& \qquad \left( \prod_{W \in P}{\varphi_{W}[g_{1:s},u]} \right) \Bigg\} \nonumber \\
&= \Gamma^{(b+1)} \sum_{P \in \S^{(b+1)}} \Psi_{P}^{(b+1)}[g_{1:s},u]
\prod_{W \in P}{\varphi_{W}[g_{1:s},u]}.\end{aligned}$$ Hence we have established the result stated in using the method of mathematical induction. We obtain the result of Lemma \[lem:F\_derivative\] by substituting $\l = s$ in this result.
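As an illustrative aside (not part of the original derivation), the combinatorial object $\mathcal{B}(I,J)$ used in the induction step above, namely the collection of one-to-one mappings between two index sets of equal size, can be enumerated directly; the function name and the sample sets below are arbitrary illustrative choices.

```python
# Minimal sketch (illustration only): enumerate the matchings B(I, J),
# i.e. all one-to-one mappings between two index sets of equal size.
from itertools import permutations

def matchings(I, J):
    I, J = sorted(I), sorted(J)
    if len(I) != len(J):
        return []
    return [dict(zip(I, perm)) for perm in permutations(J)]

for B in matchings({2, 5}, {1, 3}):
    print(B)   # {2: 1, 5: 3} and {2: 3, 5: 1}
```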
Proof of Theorem \[thm:phd\_update\] {#app:phd_update}
====================================
For brevity denote $\Psi_{P}[0,0,\dots,0,u] = \Psi_{P}[u]$ and $\varphi_{W}[0,0,\dots,0,u] = \varphi_{W}[u]$. Substituting $g_{j} \equiv 0$ for $j = 1,2,\dots,s$ in the result of Lemma \[lem:F\_derivative\] we get $$\begin{aligned}
\frac{\delta F}{\delta Z^{1:s}_{k+1}}[0,0,\dots,0,u] &= \Gamma
\sum_{P \in \S} \Psi_{P}[u] \prod_{W \in P}{\varphi_{W}[u]}.
\label{eq:F_derv_0}\end{aligned}$$
Differentiating equation (\[eq:F\_derv\_0\]) with respect to set $\{\x\}$ we have $$\begin{aligned}
& \frac{\delta F}{\delta \x \delta Z^{1:s}_{k+1}}[0,0,\dots,0,u] =
\frac{\delta}{\delta \x} \left\{ \frac{\delta F} {\delta Z^{1:s}_{k+1}}[0,0,\dots,0,u] \right\} \\
&= \frac{\delta}{\delta \x} \left\{ \Gamma \sum_{P \in \S} \Psi_{P}[u]
\prod_{W \in P}{\varphi_{W}[u]} \right\} \\
&= \Gamma \sum_{P \in \S} \left( \prod_{j=1}^{s} {C^{(m_{j}-|P|_{j})}_{j}(0)} \right) \times \nonumber \\
& \qquad \frac{\delta}{\delta \x} \left\{ \pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{q^{j}_{d}} \rangle}) \,
\prod_{W \in P}{\varphi_{W}[u]} \right\}.\end{aligned}$$ Applying the product rule for set derivatives from $$\begin{aligned}
\frac{\delta}{\delta \x} &\left\{ \pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{q^{j}_{d}} \rangle}) \,
\prod_{W \in P}{\varphi_{W}[u]} \right\} \nonumber \\
&= \frac{\delta}{\delta \x} \left\{ \pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{q^{j}_{d}} \rangle}) \right\} \,
\prod_{W \in P}{\varphi_{W}[u]} + \nonumber \\
& \quad \pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{q^{j}_{d}} \rangle}) \frac{\delta}{\delta \x} \left\{
\prod_{W \in P}{\varphi_{W}[u]} \right\}.\end{aligned}$$ Evaluating the individual derivatives above and substituting the constant function $u(\x) \equiv 1$, we get $$\begin{aligned}
&\frac{\delta}{\delta \x} \left\{ \pgf^{(|P|-1)}({\langle r, u \prod_{j=1}^{s}{q^{j}_{d}} \rangle}) \right\}_{u \equiv 1}
= \pgf^{(|P|)}(\gamma) \, r(\x) \prod_{j=1}^{s}{q^{j}_{d}(\x)} \\
&\frac{\delta}{\delta \x} \left\{ \prod_{W \in P}{\varphi_{W}[u]} \right\}_{u \equiv 1}
= \left( \prod_{W \in P}{d_{W}} \right) \left( \sum_{W \in P} r(\x) \rho_{W}(\x) \right) \label{eq:ddx_varphi}\end{aligned}$$ where $\gamma$, $d_{W}$ and $\rho_{W}(\x)$ are defined in , and respectively. We note that for the empty partition $P = \{V\}$ there are no elements of the form $W$ in $P$. In this case the derivative in equation is zero since the quantity being differentiated is a constant equal to 1. To have a compact representation of the update equations we use the convention $\sum_{W \in P} () = 0$ when $P = \{V\}$. Hence we have $$\begin{aligned}
& \frac{\delta F}{\delta \x \delta Z^{1:s}_{k+1}}[0,0,\dots,0,1] =
\Gamma \sum_{P \in \S} \left( \prod_{j=1}^{s} {C^{(m_{j}-|P|_{j})}_{j}(0)} \right) \times \nonumber \\
& \quad \Bigg\{ \pgf^{(|P|)}(\gamma) \, r(\x) \prod_{j=1}^{s}{q^{j}_{d}(\x)}
\left( \prod_{W \in P}{d_{W}} \right) + \nonumber \\
& \qquad \pgf^{(|P|-1)}(\gamma) \left( \prod_{W \in P}{d_{W}} \right)
\left( \sum_{W \in P}{r(\x) \rho_{W}(\x)} \right) \Bigg\} \\
&= \Gamma \sum_{P \in \S} \left( \kappa_{P} \pgf^{(|P|)} \prod_{W \in P}{d_{W}} \right)
\, \left(r(\x) \prod_{j=1}^{s}{q^{j}_{d}(\x)}\right) \nonumber \\
& \quad + \Gamma \sum_{P \in \S} \left( \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}} \right)
\, \left(\sum_{W \in P}{r(\x) \, \rho_{W}(\x)}\right) \label{eq:phd_nume}\end{aligned}$$ where $\kappa_{P}$ is defined in . Substituting $u(\x) \equiv 1$ in equation (\[eq:F\_derv\_0\]) we have $$\begin{aligned}
\frac{\delta F}{\delta Z^{1:s}_{k+1}}[0,0,\dots,0,1] &= \Gamma
\sum_{P \in \S} \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}}. \label{eq:phd_deno}\end{aligned}$$ Dividing by and using the definition of PHD from , we get $$\begin{aligned}
& D_{k+1|k+1}(\x) = \frac{\displaystyle \frac{\delta F}{\delta \x \delta Z^{1:s}_{k+1}}[0,0,\dots,0,1]}
{\displaystyle \frac{\delta F}{\delta Z^{1:s}_{k+1}}[0,0,\dots,0,1]} \\
& \quad = r(\x) \left\{ \alpha_{0} \, \prod_{j=1}^{s}{q^{j}_{d}(\x)} + \sum_{P \in \S}
\alpha_{P} \, \left(\sum_{W \in P} \rho_{W}(\x) \right) \right\}\end{aligned}$$ where $\alpha_{0}$ and $\alpha_{P}$ are as given in and .

[**Cardinality update**]{}
We now derive the update equation for the posterior cardinality distribution. Using the expression for the posterior probability generating functional in and the results of and we have $$\begin{aligned}
G_{k+1|k+1}[u] &= \frac{\displaystyle \sum_{P \in \S} \Psi_{P}[u] \prod_{W \in P}{\varphi_{W}[u]}}
{\displaystyle \sum_{P \in \S} \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}}}.\end{aligned}$$ The probability generating function $\pgf_{k+1|k+1}(t)$ of the posterior cardinality distribution is obtained by substituting the constant function $u(\x) \equiv t$ in the expression for $G_{k+1|k+1}[u]$. Thus $$\begin{aligned}
\pgf_{k+1|k+1}(t)
&= \frac{\displaystyle \sum_{P \in \S} \Psi_{P}[t] \prod_{W \in P}{\varphi_{W}[t]}}
{\displaystyle \sum_{P \in \S} \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}}}.\end{aligned}$$ For constant $t$ we have $$\begin{aligned}
\displaystyle \prod_{W \in P}{\varphi_{W}[t]} &= t^{|P|-1} \prod_{W \in P}{d_{W}} \\
\text{and } \Psi_{P}[t] &= \left( \prod_{j=1}^{s} {C^{(m_{j}-|P|_{j})}_{j}(0)} \right) \pgf^{(|P|-1)}(t \gamma).\end{aligned}$$ Since $\pgf_{k+1|k+1}(t)$ is the PGF corresponding to the cardinality distribution $p_{k+1|k+1}(n)$, $$\begin{aligned}
&p_{k+1|k+1}(n) = \frac{1}{n!} \, \pgf^{(n)}_{k+1|k+1}(0) \\
&= \frac{1}{n!} \, \left\{ \frac{d^n}{d t^n} \frac{\displaystyle \sum_{P \in \S}
\Psi_{P}[t] t^{|P|-1} \prod_{W \in P}{d_{W}}}
{\displaystyle \sum_{P \in \S} \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}}} \right\}_{t = 0} \\
&= \frac{\displaystyle \sum_{P \in \S} \left( \prod_{j=1}^{s}
{C^{(m_{j}-|P|_{j})}_{j}(0)} \right) \, \prod_{W \in P}{d_{W}}
\left\{ \frac{d^n}{d t^n} t^{|P|-1} \, \pgf^{(|P|-1)}(t \gamma) \right\}_{t = 0}}
{n! \, \displaystyle \sum_{P \in \S} \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}}}.\end{aligned}$$ Evaluating the derivative we get $$\begin{aligned}
\bigg\{ \frac{d^n}{d t^n} & t^{|P|-1} \, \pgf^{(|P|-1)}(t \gamma) \bigg\}_{t=0} \nonumber \\
&= \begin{cases}
0 & \text{if $n < |P|-1$} \\
\displaystyle \frac{n!}{(n-|P|+1)!} \pgf^{(n)}(0) \, \gamma^{n-|P|+1} & \text{if $n \geq |P|-1$}.
\end{cases}\end{aligned}$$ We also have $\pgf^{(n)}(0) = n! \, p_{k+1|k}(n)$, hence $$\begin{aligned}
& p_{k+1|k+1}(n) = p_{k+1|k}(n) \, \times \nonumber \\
& \quad \frac{\displaystyle \sum_{\substack{P \in \S \\ |P| \leq n+1}} \frac{n!}{(n-|P|+1)!}
\left( \prod_{j=1}^{s} {C^{(m_{j}-|P|_{j})}_{j}(0)} \right) \, \gamma^{n-|P|+1} \prod_{W \in P}{d_{W}}}
{\displaystyle \sum_{P \in \S} \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}}}.\end{aligned}$$ We thus have $$\begin{aligned}
\frac{p_{k+1|k+1}(n)}{p_{k+1|k}(n)} &= \frac{\displaystyle \sum_{\substack{P \in \S \\ |P| \leq n+1}}
\left(\kappa_{P} \frac{n!}{(n-|P|+1)!} \gamma^{n-|P|+1} \prod_{W \in P}{d_{W}}\right)}
{\displaystyle \sum_{P \in \S} \kappa_{P} \pgf^{(|P|-1)} \prod_{W \in P}{d_{W}}}\end{aligned}$$ where $\kappa_{P}$ is as defined in .
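As a sanity check (not part of the original derivation), the derivative evaluation used above can be confirmed symbolically for a concrete prior cardinality distribution; the sketch below assumes a Poisson prior, whose PGF $G(t)=e^{\mu(t-1)}$ satisfies $G^{(k)}(t)=\mu^{k}G(t)$, and the symbol names are illustrative choices.

```python
# Hedged sanity check (illustration only) of the derivative identity above,
# for the Poisson PGF G(t) = exp(mu*(t-1)), so that G^{(k)}(t) = mu**k * G(t).
import sympy as sp

t, gamma, mu = sp.symbols('t gamma mu', positive=True)
G = lambda x: sp.exp(mu * (x - 1))

for p in range(1, 5):                              # p plays the role of |P|
    Gp = mu**(p - 1) * G(t * gamma)                # G^{(p-1)}(t*gamma)
    for n in range(p - 1, p + 3):                  # the case n >= |P| - 1
        lhs = sp.diff(t**(p - 1) * Gp, t, n).subs(t, 0)
        rhs = (sp.factorial(n) / sp.factorial(n - p + 1)
               * mu**n * sp.exp(-mu) * gamma**(n - p + 1))
        assert sp.simplify(lhs - rhs) == 0
print("derivative identity verified for the Poisson PGF")
```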
[Santosh Nannuru]{} is currently a Postdoc at the Scripps Institution of Oceanography, University of California, San Diego. He obtained his doctorate in electrical engineering from McGill University, Canada, in 2015. He received both his B.Tech and M.Tech degrees in electrical engineering from the Indian Institute of Technology, Bombay, in 2009. He worked as a design engineer at iKoa Semiconductors for a year before starting his Ph.D. He is a recipient of the McGill Engineering Doctoral Award (MEDA). His research interests are in sparse signal processing, Bayesian inference, Monte Carlo methods, and random finite sets.
[Stéphane Blouin]{} has been a professional engineer for 24 years. Dr. Stéphane Blouin received a B.Sc. degree in mechanical engineering (Laval U. in Québec City (QC), 1992), an M.Sc. degree in electrical engineering (École Polytechnique in Montréal (QC), 1995), and a Ph.D. degree in chemical engineering (Queen’s U. in Kingston (ON), 2003). With more than 15 years of industrial experience, Dr. Blouin held various R&D positions in Canada, France and the U.S.A. related to technology development and commercialization for automated processes, assembly lines, robotic systems, and process controllers. In 2010, he became a Defence Scientist at the Atlantic Research Centre of Defence R&D Canada (DRDC). Dr. Blouin is the first author of the paper which received the Best Paper Award at the 2015 International Conference on Sensor Networks (SENSORNETS). He currently holds adjunct professor positions at Dalhousie University (Halifax, Nova Scotia) and at Carleton University (Ottawa, Ontario). Dr. Blouin has authored more than 50 scientific documents, holds 8 inventions and patents, and is the Canadian authority on many international projects. His current research interests include theoretical aspects of dynamic modeling, real-time monitoring, control, and optimization as well as experimental research applied to adaptive signal processing, sensors, distributed sensor networks, underwater networks, and intelligent unmanned systems.
[Mark J. Coates]{} received the B.E. degree in computer systems engineering from the University of Adelaide, Australia, in 1995, and a Ph.D. degree in information engineering from the University of Cambridge, U.K., in 1999. He joined McGill University (Montreal, Canada) in 2002, where he is currently an Associate Professor in the Department of Electrical and Computer Engineering. He was a research associate and lecturer at Rice University, Texas, from 1999-2001. In 2012-2013, he worked as a Senior Scientist at Winton Capital Management, Oxford, UK. He was an Associate Editor of IEEE Transactions on Signal Processing from 2007-2011 and a Senior Area Editor for IEEE Signal Processing Letters from 2012-2015. In 2006, his research team received the NSERC Synergy Award in recognition of their successful collaboration with Canadian industry, which has resulted in the licensing of software for anomaly detection and Video-on-Demand network optimization. Coates’ research interests include communication and sensor networks, statistical signal processing, and Bayesian and Monte Carlo inference. His most influential and widely cited contributions have been on the topics of network tomography and distributed particle filtering. His contributions on the latter topic received awards at the International Conference on Information Fusion in 2008 and 2010.
[Michael Rabbat]{} (S’02–M’07–SM’15) received the B.Sc. degree from the University of Illinois, Urbana-Champaign, in 2001, the M.Sc. degree from Rice University, Houston, TX, in 2003, and the Ph.D. degree from the University of Wisconsin, Madison, in 2006, all in electrical engineering. He joined McGill University, Montréal, QC, Canada, in 2007, and he is currently an Associate Professor. During the 2013–2014 academic year he held visiting positions at Télécom Bretagne, Brest, France, the Inria Bretagne-Atlantique Research Centre, Rennes, France, and KTH Royal Institute of Technology, Stockholm, Sweden. He was a Visiting Researcher at Applied Signal Technology, Inc., Sunnyvale, USA, during the summer of 2003. Dr. Rabbat co-authored the paper which received the Best Paper Award (Signal Processing and Information Theory Track) at the 2010 IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS). He received an Honorable Mention for Outstanding Student Paper Award at the 2006 Conference on Neural Information Processing Systems (NIPS) and a Best Student Paper Award at the 2004 ACM/IEEE International Symposium on Information Processing in Sensor Networks (IPSN). He currently serves as Senior Area Editor for the <span style="font-variant:small-caps;">IEEE Signal Processing Letters</span> and as Associate Editor for <span style="font-variant:small-caps;">IEEE Transactions on Signal and Information Processing over Networks</span> and <span style="font-variant:small-caps;">IEEE Transactions on Control of Network Systems</span>. His research interests include distributed algorithms for optimization and inference, consensus algorithms, and network modelling and analysis, with applications in distributed sensor systems, large-scale machine learning, statistical signal processing, and social networks.
[^1]: S. Nannuru, M. Coates and M. Rabbat are with the Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada. S. Blouin is with DRDC Atlantic Research Centre, Halifax, Canada. e-mail: santosh.nannuru@mail.mcgill.ca, Stephane.Blouin@drdc-rddc.gc.ca, mark.coates@mcgill.ca, michael.rabbat@mcgill.ca. This research was conducted under PWGSC contract W7707-145675/001/HAL supported by Defence R&D Canada.
[^2]: The MATLAB code is available at http://networks.ece.mcgill.ca/software
[^3]: $\mathcal{B}(I,J)$ is the collection of all possible one-to-one mappings from set $I$ to set $J$.
---
author:
- 'Andrei Okounkov[^1]'
title: '$SL(2)$ and $z$-measures'
---
Introduction
============
This paper is about the $z$-measures which are a remarkable two-parametric family of measures on partitions introduced in [@KOV] in the context of harmonic analysis on the infinite symmetric group. In a series of papers, A. Borodin and G. Olshanski obtained several fundamental results on these $z$-measures, see their survey [@BO3] which appears in this volume and also [@BO4]. The culmination of this development is an exact determinantal formula for the correlation functions of the $z$-measures in terms of the hypergeometric kernel [@BO2]. We mention [@BOO] as one of the applications of this formula. The main result of this paper is a representation-theoretic derivation of the formula of Borodin and Olshanski.
In the early days of $z$-measures, it was already noticed that $z$-measures have some mysterious connection to representation theory of $SL(2)$. For example, the $z$-measure is actually positive if its two parameters $z$ and $z'$ are either complex conjugate $z'=\bar z$ or $z,z'\in (n,n+1)$ for some $n\in{\mathbb{Z}}$. In these cases $z-z'$ is either imaginary or lies in $(-1,1)$, which was certainly reminiscent of the principal and complementary series of representations of $SL(2)$.
Later, S. Kerov constructed an $SL(2)$-action on partitions for which the $z$-measures are certain matrix elements [@Ke]. Finally, Borodin and Olshanski computed the correlation functions of the $z$-measures in terms of the Gauss hypergeometric function, which is well known to arise as matrix elements of representations of $SL(2)$. The aim of this paper is to put these pieces together.
I want to thank A. Borodin, S. Kerov, G. Olshanski, and A. Vershik for numerous discussions of the $z$-measures. I also want to thank the organizers of the Random Matrices program at MSRI, especially P. Bleher, P. Deift, and A. Its. My research was supported by NSF under grant DMS-9801466.
The constructions of this paper were subsequently generalized beyond $SL(2)$ and $z$-measures in [@O].
The $z$-measures, Kerov’s operators, and correlation functions {#s1}
==============================================================
Definition of the $z$-measures
------------------------------
Let $z,z'\in{\mathbb{C}}$ be two parameters and consider the following measure on the set of all partitions ${\lambda}$ of $n$ $$\label{defM}
{\mathcal{M}}_n({\lambda})=\frac{n!}{(zz')_n}
\prod_{{\square}\in{\lambda}} \frac{(z+c({\square}))(z'+c({\square}))}{h({\square})^2} \,,$$ where $$(x)_n = x(x+1)\dots(x+n-1)\,,$$ the product is over all squares ${\square}$ in the diagram of ${\lambda}$, $h({\square})$ is the length of the corresponding hook, and $c({\square})$ stands for the content of the square ${\square}$. Recall that, by definition, the content of ${\square}$ is $$c({\square}) = \textup{column}({\square}) - \textup{row}({\square})\,,$$ where $\textup{column}({\square})$ denotes the column number of the square ${\square}$. The reader is referred to [@M] for general facts about partitions.
It is not immediately obvious from the definition that $$\label{prob}
\sum_{|{\lambda}|=n} {\mathcal{M}}_n({\lambda})=1 \,.$$ One possible proof of uses the following operators on partitions introduced by S. Kerov.
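Before turning to those operators, the normalization can also be confirmed numerically for small $n$; the following Python sketch (an illustration, not part of the original text) computes hook lengths and contents from scratch, and the parameter values $z'=\bar z$ are an arbitrary choice from the principal series.

```python
# Hedged numerical check (illustration only): the z-measures M_n sum to one
# over all partitions of n, for a generic choice of the parameters z, z'.
from math import factorial

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def hooks_and_contents(lam):
    """Hook lengths and contents of all squares of the diagram of lam."""
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []
    hooks, contents = [], []
    for i, row in enumerate(lam):
        for j in range(row):
            hooks.append(row - j + conj[j] - i - 1)
            contents.append(j - i)
    return hooks, contents

def product(xs):
    out = 1
    for x in xs:
        out *= x
    return out

def M_n(lam, z, zp):
    n = sum(lam)
    hooks, contents = hooks_and_contents(lam)
    rising = product(z * zp + k for k in range(n))          # (zz')_n
    return factorial(n) / rising \
        * product((z + c) * (zp + c) for c in contents) / product(h * h for h in hooks)

z, zp = 1.3 + 0.7j, 1.3 - 0.7j        # principal series: z' is the conjugate of z
for n in range(1, 7):
    print(n, sum(M_n(lam, z, zp) for lam in partitions(n)))  # each sum equals 1
```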
Kerov’s operators
-----------------
Consider the vector space with an orthonormal basis $\{{\delta}_{\lambda}\}$ indexed by all partitions of ${\lambda}$ of any size. Introduce the following operators $$\begin{aligned}
{2}
U\, {\delta}_{\lambda}&= \sum_{\mu={\lambda}+{\square}} &(z+c({\square})) \, &{\delta}_\mu \,\\
L\, {\delta}_{\lambda}&= &(zz'+2|{\lambda}|)\, &{\delta}_{\lambda}\,\\
D\, {\delta}_{\lambda}&= \sum_{\mu={\lambda}-{\square}} &(z'+c({\square})) \, &{\delta}_\mu \,,\end{aligned}$$ where $\mu={\lambda}+{\square}$ means that $\mu$ is obtained from ${\lambda}$ by adding a square ${\square}$ and $c({\square})$ is the content of this square. The letters $U$ and $D$ here stand for “up” and “down”.
These operators satisfy the commutation relations $$\label{comm}
[D,U]=L\,, \quad [L,U]=2U\,, \quad [L,D]=-2D\,,$$ same as for the following basis of ${\mathfrak{sl}(2)}$ $$U=\begin{pmatrix}0 & 1\\ 0 & 0 \end{pmatrix} \,, \quad
L=\begin{pmatrix}1 & 0\\ 0 & -1 \end{pmatrix}\,, \quad
D=\begin{pmatrix}0 & 0\\ -1 & 0 \end{pmatrix}\,.$$
In particular, it is clear that if $|{\lambda}|=n$ then $$(U^n {\delta}_\emptyset, {\delta}_{\lambda}) = \dim{\lambda}\prod_{{\square}\in{\lambda}} (z+c({\square}))$$ where $$\dim{\lambda}= n! \prod_{{\square}\in{\lambda}} h({\square})^{-1}$$ is the number of standard tableaux on ${\lambda}$. It follows that $${\mathcal{M}}_n({\lambda}) = \frac1{n!\, (zz')_n} \,
(U^n {\delta}_\emptyset,{\delta}_{\lambda}) \, (D^n {\delta}_{\lambda}, {\delta}_\emptyset)\,.$$ Using this presentation and the commutation relations one proves by induction on $n$.
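As an illustrative aside (not part of the original text), the commutation relation $[D,U]=L$ can also be checked directly on the partition basis; the Python sketch below stores partitions as weakly decreasing tuples, and the function names and parameter values are illustrative choices.

```python
# Hedged illustration (not from the paper): Kerov's operators on the free
# vector space spanned by partitions, and a check that [D,U] = L.
from collections import defaultdict

def addable(lam):
    """(mu, content) for every mu obtained from lam by adding one box."""
    out = []
    for i in range(len(lam) + 1):
        row = lam[i] if i < len(lam) else 0
        prev = lam[i - 1] if i > 0 else float('inf')
        if row < prev:
            out.append((lam[:i] + (row + 1,) + lam[i + 1:], row - i))
    return out

def removable(lam):
    """(mu, content) for every mu obtained from lam by removing one box."""
    out = []
    for i, row in enumerate(lam):
        nxt = lam[i + 1] if i + 1 < len(lam) else 0
        if row > nxt:
            mu = tuple(p for p in lam[:i] + (row - 1,) + lam[i + 1:] if p > 0)
            out.append((mu, row - 1 - i))
    return out

def U(vec, z):
    out = defaultdict(float)
    for lam, a in vec.items():
        for mu, c in addable(lam):
            out[mu] += a * (z + c)
    return out

def D(vec, zp):
    out = defaultdict(float)
    for lam, a in vec.items():
        for mu, c in removable(lam):
            out[mu] += a * (zp + c)
    return out

def L(vec, z, zp):
    return {lam: a * (z * zp + 2 * sum(lam)) for lam, a in vec.items()}

z, zp = 2.5, -0.7
for lam in [(), (1,), (3, 1), (4, 2, 2, 1)]:
    e = {lam: 1.0}
    bracket = defaultdict(float)
    for mu, a in D(U(e, z), zp).items():
        bracket[mu] += a
    for mu, a in U(D(e, zp), z).items():
        bracket[mu] -= a
    bracket = {mu: a for mu, a in bracket.items() if abs(a) > 1e-9}
    print(lam, bracket, L(e, z, zp))   # bracket should coincide with L(e)
```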
The measure ${\mathcal{M}}$ and its normalization
-------------------------------------------------
In a slightly different language, with induction on $n$ replaced by the use of generating functions, this computation goes as follows.
The sequence of the measures ${\mathcal{M}}_n$ can be conveniently assembled as in [@BO2] into one measure ${\mathcal{M}}$ on the set of all partitions of all numbers as follows $${\mathcal{M}}= (1-\xi)^{zz'} \sum_{n=0}^\infty \xi^n \, \frac{(zz')_n}{n!} \, {\mathcal{M}}_n\,,
\quad \xi\in [0,1)\,,$$ where $\xi$ is a new parameter. In other words, ${\mathcal{M}}$ is the mixture of the measures ${\mathcal{M}}_n$ by means of a negative binomial distribution on $n$ with parameter $\xi$.
It is clear that is now equivalent to ${\mathcal{M}}$ being a probability measure. It is also clear that $$\label{mM}
{\mathcal{M}}({\lambda}) = (1-\xi)^{zz'}
(e^{\sqrt{\xi}\, U}\, {\delta}_\emptyset,{\delta}_{\lambda}) \, (e^{\sqrt{\xi} \, D}\, {\delta}_{\lambda}, {\delta}_\emptyset)$$ Therefore $$\begin{aligned}
\label{Z}
\sum_{{\lambda}} {\mathcal{M}}({\lambda}) = (1-\xi)^{zz'} (e^{\sqrt{\xi} \, D} \, e^{\sqrt{\xi}\, U} \, {\delta}_\emptyset,
{\delta}_\emptyset)\end{aligned}$$ It follows from the definitions that $$\label{Dvac}
D \, {\delta}_\emptyset = 0\,, \quad L \, {\delta}_\emptyset = zz'\,
{\delta}_\emptyset\,, \quad
U^* \, {\delta}_\emptyset = 0\,,$$ where $U^*$ is the operator adjoint to $U$. Therefore, in order to evaluate , it suffices to commute $e^{\sqrt{\xi} \, D}$ through $e^{\sqrt{\xi} \, U}$.
The following computation in the group $SL(2)$ $$\begin{pmatrix}
1 & 0 \\
-\beta & 1
\end{pmatrix}
\,
\begin{pmatrix}
1 & \alpha \\
0 & 1
\end{pmatrix} =
\begin{pmatrix}
1 & \frac{\alpha}{1-\alpha\beta} \\
0 & 1
\end{pmatrix} \,
\begin{pmatrix}
\frac1{1-\alpha\beta} &0 \\
0 & {1-\alpha\beta}
\end{pmatrix} \,
\begin{pmatrix}
1 & 0 \\
-\frac\beta{1-\alpha\beta} & 1
\end{pmatrix}$$ implies that $$\begin{gathered}
\label{DU}
\exp\left(\beta\, D\right) \, \exp \left( \alpha\, U \right)=\\
\exp\left(\frac\alpha{1-\alpha\beta}\, U \right) \,
(1-\alpha\beta)^{-L}
\exp\left(\frac\beta{1-\alpha\beta}\, D\right) \,,\end{gathered}$$ provided $|\alpha\beta|<1$. Therefore, $$\begin{aligned}
\sum_{{\lambda}} {\mathcal{M}}({\lambda}) &=
(1-\xi)^{zz'} \left(\exp\left(\frac{\sqrt{\xi}}{1-\xi}\,U\right) \,
(1-\xi)^{-L} \, \exp\left(\frac{\sqrt{\xi}}{1-\xi}\,D\right)\, {\delta}_\emptyset,
{\delta}_\emptyset\right)\\
&= (1-\xi)^{zz'} \left(
(1-\xi)^{-L} \, {\delta}_\emptyset,
{\delta}_\emptyset\right) = 1 \,,\end{aligned}$$ as was to be shown.
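The elementary $2\times 2$ matrix identity used in this computation is easily confirmed symbolically; a minimal sketch (an illustration, not part of the original text), with sympy symbol names chosen for convenience:

```python
# Minimal symbolic check (illustration only) of the 2 x 2 matrix identity above.
import sympy as sp

a, b = sp.symbols('alpha beta')
lhs = sp.Matrix([[1, 0], [-b, 1]]) * sp.Matrix([[1, a], [0, 1]])
rhs = (sp.Matrix([[1, a / (1 - a * b)], [0, 1]])
       * sp.Matrix([[1 / (1 - a * b), 0], [0, 1 - a * b]])
       * sp.Matrix([[1, 0], [-b / (1 - a * b), 1]]))
print(sp.simplify(lhs - rhs))   # the zero matrix
```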
Correlation functions
---------------------
Introduce the following coordinates on the set of partitions. To a partition ${\lambda}$ we associate a subset $${\mathfrak{S}}({\lambda})=\{{\lambda}_i-i+1/2\}\subset {\mathbb{Z}}+{{\textstyle \frac12}}\,.$$ For example, $${\mathfrak{S}}(\emptyset)=\left\{-\frac 12,-\frac 32,-\frac 52,\dots\right\}$$ This set ${\mathfrak{S}}({\lambda})$ has the following geometric interpretation. Take the diagram of ${\lambda}$ and rotate it $135^\circ$ as in the following picture:
The positive direction of the axis points to the left in the above figure. The boundary of ${\lambda}$ forms a zigzag path and the elements of ${\mathfrak{S}}({\lambda})$, which are marked by $\bullet$, correspond to moments when this zigzag goes up.
Subsets $S\subset{\mathbb{Z}}+\frac12$ of the form $S={\mathfrak{S}}({\lambda})$ can be characterized by $$|S_+|=|S_-|<\infty$$ where $$S_+ = S \setminus \left({\mathbb{Z}}_{\le 0} - {{\textstyle \frac12}}\right) \,, \quad
S_- = \left({\mathbb{Z}}_{\le 0} - {{\textstyle \frac12}}\right) \setminus S \,.$$ The number $|{\mathfrak{S}}_+({\lambda})|=|{\mathfrak{S}}_-({\lambda})|$ is the number of squares in the diagonal of the diagram of ${\lambda}$ and the finite set ${\mathfrak{S}}_+({\lambda})\cup {\mathfrak{S}}_-({\lambda}) \subset {\mathbb{Z}}+\frac 12$ is known as the modified Frobenius coordinates of ${\lambda}$.
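As a small illustration (not part of the original text), the passage from $\lambda$ to ${\mathfrak{S}}(\lambda)$ and to the modified Frobenius coordinates takes only a few lines of Python; the function name and the sample partition are arbitrary choices.

```python
# Small illustration (not from the paper): S(lambda) and the modified
# Frobenius coordinates S_+ and S_- of a sample partition.
def frobenius(lam, depth=20):
    S = {(lam[i] if i < len(lam) else 0) - i - 0.5 for i in range(depth)}
    S_plus = sorted((s for s in S if s > 0), reverse=True)
    S_minus = sorted({-i - 0.5 for i in range(depth)} - S, reverse=True)
    return S_plus, S_minus

print(frobenius((4, 3, 1)))   # ([3.5, 1.5], [-0.5, -2.5]); both halves have
                              # length 2, the number of diagonal squares of (4, 3, 1)
```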
Given a finite subset $X\subset{\mathbb{Z}}+{{\textstyle \frac12}}$, define the *correlation function* by $$\rho(X)={\mathcal{M}}\big(\{{\lambda}, X\subset {\mathfrak{S}}({\lambda})\}\big) \,.$$ In [@BO2], A. Borodin and G. Olshanski proved that $$\rho(X)=\det\Big[K(x_i,x_j)\Big]_{x_i,x_j\in X}$$ where $K$ is the *hypergeometric kernel* introduced in [@BO2]. This kernel involves the Gauss hypergeometric function and the explicit formula for $K$ will be reproduced below.
It is our goal in the present paper to give a representation-theoretic derivation of the formula for correlation functions and, in particular, show how the kernel $K$ arises from matrix elements of irreducible $SL(2)$-modules.
SL(2) and correlation functions
===============================
Matrix elements of ${\mathfrak{sl}(2)}$-modules and Gauss hypergeometric function
---------------------------------------------------------------------------------
The fact that the hypergeometric function arises as matrix coefficients of $SL(2)$ modules is well known. A standard way to see this is to use a functional realization of these modules; the computation of matrix elements leads then to an integral representation of the hypergeometric function, see for example how matrix elements of $SL(2)$-modules are treated in [@Vi]. An alternative approach is to use explicit formulas for the action of the Lie algebra ${\mathfrak{sl}(2)}$ and it goes as follows.
Consider the ${\mathfrak{sl}(2)}$-module $V$ with the basis $v_k$ indexed by all half-integers $k\in{\mathbb{Z}}+{{\textstyle \frac12}}$ and the following action of ${\mathfrak{sl}(2)}$ $$\begin{aligned}
{2}\label{e1d1}
U\, v_k = &&(z+k+{{\textstyle \frac12}})\,&v_{k+1}\,,\\
L\, v_k = &&(2k+z+z')\,&v_{k}\,,\\
D\, v_k = &&(z'+k-{{\textstyle \frac12}})\,&v_{k-1}\,.\end{aligned}$$ It is clear that $$e^{\alpha\,U}\, v_k = \sum_{s=0}^\infty \frac{\alpha^s}{s!}\,
(z+k+{{\textstyle \frac12}})_s \, v_{k+s} \,.$$ Introduce the following notation $$(a)_{{\downarrow}s} = a(a-1)(a-2)\cdots (a-s+1) \,.$$ With this notation we have $$e^{\beta\, D} \, v_k = \sum_{s=0}^\infty \frac{\beta^s}{s!}\,
(z'+k-{{\textstyle \frac12}})_{{\downarrow}s} \, v_{k-s} \,.$$ Denote by ${\left[i\to j\right]}_{\alpha,\beta,z,z'}$ the coefficient of $v_j$ in the expansion of $e^{\alpha \,U} \, e^{\beta\, D}\, v_i$ $$e^{\alpha \,U} \, e^{\beta\, D}\, v_i = \sum_j \,
{\left[i\to j\right]}_{\alpha,\beta,z,z'}
\, v_j \,.$$ A direct computation yields $$\begin{gathered}
\label{e12}
{\left[i\to j\right]}_{\alpha,\beta,z,z'}= \\
\begin{cases}
{\displaystyle \frac{\alpha^{j-i}}{(j-i)!} \, (z+i+{{\textstyle \frac12}})_{j-i}} \,
{F\left(\begin{matrix} -z-i+{{\textstyle \frac12}}\,,\, -z'-i+{{\textstyle \frac12}}\\ j-i+1 \end{matrix}\, ;
\alpha\beta \right)} \,, & i \le j \,, \\
{\displaystyle \frac{\beta^{i-j}}{(i-j)!} \, (z'+j+{{\textstyle \frac12}})_{i-j}} \,
{F\left(\begin{matrix} -z-j+{{\textstyle \frac12}}\,,\, -z'-j+{{\textstyle \frac12}}\\ i-j+1 \end{matrix}\, ;
\alpha\beta \right)} \,,
& i \ge j \,,
\end{cases}\end{gathered}$$ where $${F\left(\begin{matrix} a \,,\, b \\ c \end{matrix}\, ;
z \right)}=\sum_{k=0}^\infty \frac{(a)_k\, (b)_k}{(c)_k \, k!} \,
z^k \,,$$ is the Gauss hypergeometric function.
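As a numerical illustration (not part of the original text), the closed form can be compared with a direct truncated expansion of $e^{\alpha U}e^{\beta D}v_i$ built from the two exponential series above; the sketch below assumes $|\alpha\beta|<1$ so that the series converge, the parameter values and truncation orders are arbitrary illustrative choices, and the same kind of check applies to the dual matrix elements introduced next.

```python
# Hedged numerical check (illustration only) of the closed form for the matrix
# coefficients [i -> j], compared against the truncated double series.
from math import factorial

def poch(x, n):                     # rising factorial (x)_n
    out = 1.0
    for k in range(n):
        out *= x + k
    return out

def fall(x, n):                     # falling factorial (x)_{down n}
    out = 1.0
    for k in range(n):
        out *= x - k
    return out

def F(a, b, c, x, terms=80):        # Gauss hypergeometric series, |x| < 1
    term, tot = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
        tot += term
    return tot

def bracket_direct(i, j, al, be, z, zp, terms=60):
    d = round(j - i)                # exp(be*D) lowers the index, exp(al*U) raises it
    tot = 0.0
    for s in range(terms):
        t = d + s
        if t < 0:
            continue
        tot += (be**s / factorial(s)) * fall(zp + i - 0.5, s) \
             * (al**t / factorial(t)) * poch(z + i - s + 0.5, t)
    return tot

def bracket_formula(i, j, al, be, z, zp):
    d = round(j - i)
    if d >= 0:
        return al**d / factorial(d) * poch(z + i + 0.5, d) \
               * F(-z - i + 0.5, -zp - i + 0.5, d + 1, al * be)
    return be**(-d) / factorial(-d) * poch(zp + j + 0.5, -d) \
           * F(-z - j + 0.5, -zp - j + 0.5, -d + 1, al * be)

z, zp, al, be = 0.8, 1.7, 0.3, 0.5  # parameters with |al*be| < 1
for i, j in [(0.5, 2.5), (1.5, -0.5), (-2.5, -2.5)]:
    print((i, j), bracket_direct(i, j, al, be, z, zp),
          bracket_formula(i, j, al, be, z, zp))   # the two values should agree
```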
Consider now the dual module $V^*$ spanned by functionals $v^*_j$ such that $${\langle}v^*_i, v_j {\rangle}={\delta}_{ij}$$ and equipped with the dual action of ${\mathfrak{sl}(2)}$ $$\begin{aligned}
{2}
U\, v^*_k &= - (z+k-{{\textstyle \frac12}})&&\,v^*_{k-1} \,, \\
D\, v^*_k &= - (z'+k+{{\textstyle \frac12}})&&\,v^*_{k+1} \,.\end{aligned}$$ Denote by ${\left[i\to j\right]}^*_{\alpha,\beta,z,z'}$ the coefficient of $v^*_j$ in the expansion of $e^{\alpha \,U} \, e^{\beta\, D}\, v^*_i$ $$e^{\alpha \,U} \, e^{\beta\, D}\, v^*_i = \sum_j \,
{\left[i\to j\right]}^*_{\alpha,\beta,z,z'}
\, v^*_j \,.$$ We have $$\begin{gathered}
\label{e13}
{\left[i\to j\right]}^*_{\alpha,\beta,z,z'}= \\
\begin{cases}
{\displaystyle \frac{(-\beta)^{j-i}}{(j-i)!} \, (z'+i+{{\textstyle \frac12}})_{j-i}} \,
{F\left(\begin{matrix} z+j+{{\textstyle \frac12}}\,,\, z'+j+{{\textstyle \frac12}}\\ j-i+1 \end{matrix}\, ;
\alpha\beta \right)} \,, & i \le j \,, \\
{\displaystyle \frac{(-\alpha)^{i-j}}{(i-j)!} \, (z+j+{{\textstyle \frac12}})_{i-j}} \,
{F\left(\begin{matrix} z+i+{{\textstyle \frac12}}\,,\, z'+i+{{\textstyle \frac12}}\\ i-j+1 \end{matrix}\, ;
\alpha\beta \right)} \,,
& i \ge j \,,
\end{cases}\end{gathered}$$
Remarks
-------
### Periodicity
Observe that representations whose parameters $z$ and $z'$ are related by the transformation $$(z,z')\mapsto (z+m,z'+m)\,, \quad m\in{\mathbb{Z}}\,,$$ are equivalent. The above transformation amounts to just a renumbering of the vectors $v_k$. G. Olshanski pointed out that this periodicity in $(z,z')$ is reflected in a similar periodicity of various asymptotic properties of $z$-measures, see Sections 10 and 11 of [@BO4].
### Unitarity
Recall that the $z$-measures are positive if either $z'=\bar z$ or $z,z'\in(n,n+1)$ for some $n$. By analogy with representation theory of $SL(2)$, these cases were called the principal and the complementary series.
Observe that in these cases the above representations have a positive definite Hermitian form $Q$ which is invariant in the following sense $$Q(Lu,v)=Q(u,Lv)\,, \quad Q(Uu,v)=Q(u,Dv)\,.$$ The form $Q$ is given by $$Q(v_k,v_k)=
\begin{cases}1 & z'=\bar z\,, \\
\dfrac{\Gamma(z'+k+\frac12)}{\Gamma(z+k+\frac12)}\,
&z,z'\in(n,n+1)\,,
\end{cases}$$ and $Q(v_k,v_l)=0$ if $k\ne l$. It follows that the operators $$\tfrac{i}2\,L,\tfrac12\,(U-D), \tfrac{i}2\,(U+D) \in {\mathfrak{sl}(2)}\,,$$ which form a standard basis of ${\mathfrak{su}}(1,1)$, are skew-Hermitian and hence this representation of ${\mathfrak{su}}(1,1)$ can be integrated to a unitary representation of the universal covering group of $SU(1,1)$. This group $SU(1,1)$ is isomorphic to $SL(2,{\mathbb{R}})$ and the above representations correspond to the principal and complementary series of unitary representations of the universal covering of $SL(2,{\mathbb{R}})$, see [@Pu].
The infinite wedge module
-------------------------
Consider the module $\Lambda^{\frac{\infty}2}\, V$ which is, by definition, spanned by vectors $$\delta_S=v_{s_1} \wedge v_{s_2} \wedge v_{s_3} \wedge \dots\,,$$ where $S=\{s_1>s_2>\dots\}\subset {\mathbb{Z}}+{{\textstyle \frac12}}$ is a subset such that both sets $$S_+ = S \setminus \left({\mathbb{Z}}_{\le 0} - {{\textstyle \frac12}}\right) \,, \quad
S_- = \left({\mathbb{Z}}_{\le 0} - {{\textstyle \frac12}}\right) \setminus S$$ are finite. We equip this module with the inner product in which the basis $\{\delta_S\}$ is orthonormal. Introduce the following operators $$\psi_k, \psi^*_k : \Lambda^{\frac{\infty}2}\, V \to
\Lambda^{\frac{\infty}2}\, V \,.$$ The operator $\psi_k$ is the exterior multiplication by $v_k$ $$\psi_k \left(f\right) = v_k \wedge f \,.$$ The operator $\psi^*_k$ is the adjoint operator; it can be also given by the formula $$\psi^*_k \left(v_{s_1} \wedge v_{s_2} \wedge v_{s_3} \right)=
\sum_i (-1)^{i+1} {\langle}v^*_k, v_{s_i} {\rangle}\,
v_{s_1} \wedge v_{s_2} \wedge \dots \wedge \widehat{v_{s_i}} \wedge
\dots \,.$$ These operators satisfy the canonical anticommutation relations $$\psi_k \psi^*_k + \psi^*_k \psi_k = 1\,,$$ all other anticommutators being equal to $0$. It is clear that $$\label{e14}
\psi_k \psi^*_k \, \delta_S =
\begin{cases}
\delta_S\,, & k \in S \,, \\
0 \,, & k \notin S \,.
\end{cases}$$ A general reference on the infinite wedge space is Chapter 14 of the book [@K].
The Lie algebra ${\mathfrak{sl}(2)}$ acts on $\Lambda^{\frac{\infty}2}\, V$. The actions of $U$ and $D$ are the obvious extensions of the action on $V$. In terms of the fermionic operators $\psi_k$ and $\psi^*_k$ they can be written as follows $$\begin{aligned}
U&=\sum_{k\in{\mathbb{Z}}+\frac12} (z+k+{{\textstyle \frac12}})\,\, \psi_{k+1} \psi^*_{k} \,,\\
D&=\sum_{k\in{\mathbb{Z}}+\frac12} (z'+k+{{\textstyle \frac12}})\,\, \psi_{k} \psi^*_{k+1} \,.\end{aligned}$$ The easiest way to define the action of $L$ is to set it equal to $[D,U]$ by definition. We obtain $$L=2H + (z+z')\, C + zz' \,,$$ where $H$ is the energy operator $$H=\sum_{k>0} k \, \psi_k \psi^*_k - \sum_{k<0} k \, \psi^*_k \psi_k
\,,$$ and $C$ is the charge $$C=\sum_{k>0} \psi_k \psi^*_k - \sum_{k<0} \psi^*_k \psi_k \,.$$ It is clear that $$C\, \delta_S = \left(|S_+|-|S_-|\right) \, \delta_S$$ and, similarly, $$H \, \delta_S = \left(\sum_{k\in S_+} k - \sum_{k \in S_-} k\right)\,
\delta_S\,.$$ The charge is preserved by the ${\mathfrak{sl}(2)}$ action.
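A quick numerical illustration (not part of the original text) of the last two formulas: on a vector $\delta_S$ with $S$ of the form $\{\lambda_i-i+\frac12\}$ the charge vanishes and the energy equals $|\lambda|$; the sample partitions below are arbitrary choices.

```python
# Small illustration (not from the paper): for S = S(lambda) the charge is 0
# and the energy equals |lambda|.
def charge_energy(lam, depth=40):
    S = {(lam[i] if i < len(lam) else 0) - i - 0.5 for i in range(depth)}
    S_plus = {s for s in S if s > 0}
    S_minus = {-i - 0.5 for i in range(depth)} - S
    return len(S_plus) - len(S_minus), sum(S_plus) - sum(S_minus)

for lam in [(), (1,), (4, 3, 1), (5, 5, 2, 1, 1)]:
    print(lam, charge_energy(lam), sum(lam))   # charge 0, energy equal to |lambda|
```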
Consider the zero charge subspace, that is, the kernel of $C$ $$\Lambda_0 \subset {\Lambda^{\frac\infty2}V}\,.$$ It is spanned by vectors which, abusing notation, we shall denote by $$\label{e15}
\delta_{\lambda}=\delta_{S({\lambda})} \,, \quad S({\lambda})=
\left\{{\lambda}_1-\tfrac12,{\lambda}_2-\tfrac32,{\lambda}_3-\tfrac52,\dots \right\}\,,$$ where ${\lambda}$ is a partition. One immediately sees that the action of ${\mathfrak{sl}(2)}$ on $\{\delta_{\lambda}\}$ is identical with Kerov’s operators.
Correlation functions
---------------------
Recall that the correlation functions were defined by $$\rho(X)={\mathcal{M}}\big(\{{\lambda}, X\subset {\mathfrak{S}}({\lambda})\}\big) \,,$$ where the finite set $$X=\{x_1,\dots,x_s\} \subset {\mathbb{Z}}+\tfrac12$$ is arbitrary.
The important observation is that and imply the following expression for the correlation functions $$\label{e17}
\rho(X) = (1-\xi)^{zz'} \,
\left( e^{\sqrt\xi\, D} \, \prod_{x\in X} \psi_{x} \psi^*_{x} \,
e^{\sqrt\xi\, U}\, {\delta_\emptyset},{\delta_\emptyset}\right) \,.$$ We apply to the same strategy we applied to , which is to commute the operators $e^{\sqrt\xi\, D}$ and $e^{\sqrt\xi\, U}$ all the way to the right and left, respectively, and then use . From , we have for any operator $A$ the following identity $$\begin{gathered}
\label{e18}
e^{\beta\, D}\, A \,e^{\alpha\, U}= \\
e^{\frac\alpha{1-\alpha\beta}\, U} \,
\left[
e^{-\frac\alpha{1-\alpha\beta}\, U}\,
e^{\beta\, D} \, A \,
e^{-\beta\, D}
e^{\frac\alpha{1-\alpha\beta}\, U}
\right]
(1-\alpha\beta)^{-L} e^{\frac\beta{1-\alpha\beta}\, D} \,.\end{gathered}$$ We now apply this identity with $\alpha=\beta=\sqrt\xi$ and $A=\prod \psi_x \psi^*_x$ to obtain $$\label{e19}
\rho(X) =
\left(G\, \prod_{x\in X} \psi_{x} \psi^*_{x} \, G^{-1}\, {\delta_\emptyset},{\delta_\emptyset}\right) \,,$$ where $$G=\exp\left(\frac{\sqrt{\xi}}{\xi-1}\, U\right)\,
\exp\left(\sqrt{\xi}\, D\right)\,.$$
Consider the following operators $$\begin{aligned}
2
\Psi_k &= G\, \psi_k \, G^{-1} &&= \sum_i
{\left[k\to i\right]}\, \psi_i \,, \label{e110}\\
\Psi^*_k &= G\, \psi^*_k \, G^{-1} &&= \sum_i
{\left[k\to i\right]}^*\, \psi^*_i \,, \label{e111} \end{aligned}$$ with the understanding that matrix elements without parameters stand for the following choice of parameters $$\label{e112}
{\left[k\to i\right]}={\left[k\to i\right]}_{\xi^{1/2}(\xi-1)^{-1},\, \xi^{1/2},z,z'}\,,$$ and with same choice of parameters for ${\left[k\to i\right]}^*$. The first equality in both and is a definition and the second equality follows from the definition of the operators $\psi_i$ and the definition of the matrix coefficients ${\left[i\to j\right]}_{\alpha,\beta,z,z'}$.
From we obtain $$\label{e114}
\rho(X)=\left(\prod_{x\in X} \Psi_{x} \Psi^*_{x} \, {\delta_\emptyset},{\delta_\emptyset}\right) \,.$$ Applying Wick’s theorem to , or simply unraveling the definitions in the right-hand side of , we obtain the following
We have $$\rho(X)=\det\big[K(x_i,x_j)\big]_{1\le i,j \le s} \,,$$ where the kernel $K$ is defined by $$K(i,j)=\left(\Psi_{i} \Psi^*_{j} \, {\delta_\emptyset},{\delta_\emptyset}\right) \,.$$
Observe that $$\left(\psi_l \, \psi^*_m \, {\delta_\emptyset},{\delta_\emptyset}\right)=
\begin{cases}
1\,, & l=m < 0 \,, \\
0\,, & \text{otherwise} \,.
\end{cases}$$ Therefore, applying the formulas and we obtain
We have $$\label{e115}
K(i,j)=\sum_{m=-1/2,-3/2,\dots}
{\left[i\to m\right]} \, {\left[j\to m\right]}^* \,,$$ with the agreement about matrix elements without parameters.
The formula is the analog of the Proposition 2.9 in [@BOO] for the discrete Bessel kernel.
We conclude this section with the following formula which, after substituting the formulas and for matrix elements, becomes the formula of Borodin and Olshanski [@BO2].
We have $$\label{e116}
K(i,j)=\frac{
z' \sqrt{\xi} {\left[i\to {{\textstyle \frac12}}\right]}\, {\left[j\to -{{\textstyle \frac12}}\right]}^* -
z \frac{\sqrt{\xi}}{(\xi-1)^2} {\left[i\to -{{\textstyle \frac12}}\right]}\, {\left[j\to {{\textstyle \frac12}}\right]}^* }{i-j} \,,$$ where for $i=j$ the right-hand side is defined by continuity.
More generally, set $$K(i,j)_{\alpha,\beta} = \left(\Psi_i(\alpha,\beta)\,
\Psi^*_j(\alpha,\beta) \, {\delta_\emptyset},{\delta_\emptyset}\right)\,.$$ where $$\begin{aligned}
2
\Psi_k &= e^{\alpha\, U} \, e^{\beta\, D}\, \psi_k \,
e^{-\beta\, D} \, e^{-\alpha\, U} &&= \sum_i
{\left[k\to i\right]}_{\alpha,\beta,z,z'}\, \psi_i \\
\Psi^*_k &= e^{\alpha\, U} \, e^{\beta\, D}\, \psi^*_k \,
e^{-\beta\, D} \, e^{-\alpha\, U} &&= \sum_i
{\left[k\to i\right]}^*_{\alpha,\beta,z,z'}\, \psi^*_i \,. \end{aligned}$$ We will prove that $$\begin{gathered}
\label{e117}
K(i,j)_{\alpha,\beta}=
\left(
\beta z' {\left[i\to {{\textstyle \frac12}}\right]}_{\alpha,\beta,z,z'}\, {\left[j\to -{{\textstyle \frac12}}\right]}^*_{\alpha,\beta,z,z'} -
\right.\\
\left.
\alpha (\alpha\beta-1) z
{\left[i\to -{{\textstyle \frac12}}\right]}_{\alpha,\beta,z,z'}\, {\left[j\to {{\textstyle \frac12}}\right]}^*_{\alpha,\beta,z,z'} \right)
/(i-j) \,.\end{gathered}$$ First, we treat the case $i\ne j$ in which case we can clear the denominators in . From the following computation with $2\times 2$ matrices $$\begin{gathered}
\begin{pmatrix}
1 & \alpha \\
0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
-\beta & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
\beta & 1
\end{pmatrix}
\begin{pmatrix}
1 & -\alpha \\
0 & 1
\end{pmatrix} = \\
\begin{pmatrix}
1-2\alpha\beta & 2\alpha(\alpha\beta-1) \\
-2\beta & 2\alpha\beta-1
\end{pmatrix}\end{gathered}$$ we conclude that $$e^{\alpha\, U} \, e^{\beta\, D} \, L
e^{-\beta\, D} \, e^{-\alpha\, U}
=L+T\,,$$ where $$T=-2\alpha\beta\, L + 2\beta\, D +2\alpha(\alpha\beta-1) \, U \,.$$ This can be rewritten as follows $$\begin{aligned}
[L,e^{\alpha\, U} \, e^{\beta\, D}]&= -T\,
e^{\alpha\, U} \, e^{\beta\, D}\,, \label{e118}\\
[L,e^{-\beta\, D} \, e^{-\alpha\, U}]&=
e^{-\beta\, D} \, e^{-\alpha\, U} \, T\,. \label{e119}\end{aligned}$$ Note that $$\label{e120}
[L,\psi_i \, \psi^*_j] = 2(i-j)\, \psi_i \, \psi^*_j \,.$$ From , , and we have $$\begin{gathered}
\label{e121}
\left[L,\Psi_i(\alpha,\beta)\,
\Psi^*_j(\alpha,\beta) \right]= \\
-\left[T,\Psi_i(\alpha,\beta)\,
\Psi^*_j(\alpha,\beta) \right]+2(i-j) \, \Psi_i(\alpha,\beta)\,
\Psi^*_j(\alpha,\beta) \,.\end{gathered}$$ Since ${\delta_\emptyset}$ is an eigenvector of $L$ we have $$\left(\left[L,\Psi_i(\alpha,\beta)\,
\Psi^*_j(\alpha,\beta) \right]\, {\delta_\emptyset},{\delta_\emptyset}\right) = 0$$ Expand this equality using and the relations $$\begin{aligned}
T \,
{\delta_\emptyset}&= - 2\alpha\beta z z' {\delta_\emptyset}+ 2\alpha(\alpha\beta-1) z \delta_{\square}\,,\\
T^*
{\delta_\emptyset}&= - 2\alpha\beta z z' {\delta_\emptyset}+ 2 \beta z' \delta_{\square}\,,\end{aligned}$$ where $T^*$ is the operator adjoint to $T$ and $\delta_{\square}$ is the vector corresponding to the partition $(1,0,0,\dots)$. We obtain $$\begin{gathered}
(i-j) K(i,j)_{\alpha,\beta} =
\beta z' \, \left(\Psi_i(\alpha,\beta)\,
\Psi^*_j(\alpha,\beta)\, {\delta_\emptyset},\delta_{\square}\right) - \\
\alpha(\alpha\beta-1) z \,
\left(\Psi_i(\alpha,\beta)\,
\Psi^*_j(\alpha,\beta)\, \delta_{\square},{\delta_\emptyset}\right)\end{gathered}$$ In order to obtain for $i\ne j$, it now remains to observe that $$\begin{aligned}
(\psi_l\, \psi^*_m \, {\delta_\emptyset},\delta_{\square})&=
\begin{cases}
1\,, & l=\frac12\,, \,\, m =-\frac12 \,,\\
0 \,, & \text{otherwise}\,,
\end{cases}
\\
(\psi_l\, \psi^*_m \, \delta_{\square},{\delta_\emptyset})&=
\begin{cases}
1\,, & l=-\frac12\,, \,\, m =\frac12 \,,\\
0 \,, & \text{otherwise}\,.
\end{cases}\end{aligned}$$ In the case $i=j$ we argue by continuity. It is clear from that $K(i,j)$ is an analytic function of $i$ and $j$ and so is the right-hand side of . The passage from to is based on the fact that the product $i$ times ${\left[i\to m\right]}_{\alpha,\beta,z,z'}$ is a linear combination of ${\left[i\to m\right]}_{\alpha,\beta,z,z'}$ and ${\left[i\to m\pm 1\right]}_{\alpha,\beta,z,z'}$ with coefficients which are linear functions of $m$. Since the matrix coefficients are, essentially, the hypergeometric function, such a relation must hold for any $i$, not just half-integers. Hence, and are equal for any $i\ne j$, not necessarily half-integers. Therefore, they are equal for $i=j$.
Rim-hook analogs
----------------
The same principles apply to rim-hook analogs of the $z$-measures which were also considered by S. Kerov [@Ke].
Recall that a rim hook of a diagram ${\lambda}$ is, by definition, a skew diagram ${\lambda}/\mu$ which is connected and lies on the rim of ${\lambda}$. Here connected means that the squares have to be connected by common edges, not just common vertices. Rim hooks of a diagram ${\lambda}$ are in the following 1-1 correspondence with the squares of ${\lambda}$: given a square ${\square}\in{\lambda}$, the corresponding rim hook consists of all squares on the rim of ${\lambda}$ which are (weakly) to the right of and below ${\square}$. The length of this rim hook is equal to the hook-length of ${\square}$.
The entire discussion of the previous section applies to the more general operators $$\begin{aligned}
{2}
U_r\, v_k = &&\left(z+\tfrac kr +{{\textstyle \frac12}}\right)\,&v_{k+r}\,,\\
L_r\, v_k = &&\left(\tfrac{2k}r+z+z'\right)\,&v_{k}\,,\\
D_r\, v_k = &&\left(z'+\tfrac kr-{{\textstyle \frac12}}\right)\,&v_{k-r}\,,\end{aligned}$$ which satisfy the same ${\mathfrak{sl}(2)}$ commutation relations. The easiest way to check the commutation relations is to consider $\frac kr$ rather than $k$ as the index of $v_k$; the above formulas then become precisely the formulas . The operator $U_r$ acts on the basis $\{\delta_{\lambda}\}$ as follows $$\label{e401}
U_r \, \delta_{\lambda}=\sum_{\mu={\lambda}+\textup{ rim hook}} (-1)^{\textup{height}+1}
\left(z+\frac1{r^2}\sum_{{\square}\in\textup{rim hook}}
c({\square}) \right) \delta_\mu \,,$$ where the summation is over all partitions $\mu$ which can be obtained from ${\lambda}$ by adding a rim hook of length $r$, the height is the number of horizontal rows occupied by this rim hook, and $c({\square})$ stands, as usual, for the content of the square ${\square}$. Similarly, the operator $D_r$ removes rim hooks of length $r$. These operators were considered by Kerov [@Ke].
It is clear that the action of the operators $e^{\alpha U_r}$ and $e^{\beta D_r}$ on a half-infinite wedge product like $$v_{s_1} \wedge v_{s_2} \wedge v_{s_3} \wedge \dots\,,$$ essentially (up to a sign which disappears in formulas like ) factors into the tensor product of $r$ separate actions on $$\bigwedge_{s_i\equiv k+\frac12\!\! \mod r} v_{s_i} \,, \quad k=0,\dots,r-1 \,.$$ Consequently, the analogs of the correlation functions have again a determinantal form with a certain kernel $K_r(i,j)$ which has the following structure. If $i\equiv j \mod r$ then $K_r(i,j)$ is essentially the kernel $K(i,j)$ with rescaled arguments. Otherwise, $K_r(i,j)=0$.
This factorization of the action on ${\Lambda^{\frac\infty2}V}$ is just one more way to understand the following well-known phenomenon. Let ${\mathbb{Y}}_r$ be the partially ordered set formed by partitions with respect to the following ordering: $\mu\le_r{\lambda}$ if $\mu$ can be obtained from ${\lambda}$ by removing a number of rim hooks with $r$ squares. The minimal elements of ${\mathbb{Y}}_r$ are called the $r$-cores. The $r$-cores are precisely those partitions which do not have any hooks of length $r$. We have $$\label{e41}
{\mathbb{Y}}_r \cong \bigsqcup_{\textup{$r$-cores}} ({\mathbb{Y}}_1)^r$$ as partially ordered sets. Here the Cartesian product $({\mathbb{Y}}_1)^r$ is ordered as follows: $$(\mu_1,\dots,\mu_r) \le ({\lambda}_1,\dots,{\lambda}_r)
\quad \Leftrightarrow \quad \mu_i \le_1 {\lambda}_i, \quad
i=1,\dots,r \,,$$ and the partitions corresponding to different $r$-cores are incomparable in the $\le_r$-order. Combinatorial algorithms which materialize the isomorphism are discussed in Section 2.7 of the book [@JK]. The $r$-core and the $r$-tuple of partitions which the isomorphism associates to a partition ${\lambda}$ are called the $r$-core of ${\lambda}$ and the $r$-quotient of ${\lambda}$. Among more recent papers dealing with $r$-quotients let us mention [@FS] where an approach similar to the use of ${\Lambda^{\frac\infty2}V}$ is employed, an analog of the Robinson-Schensted algorithm for ${\mathbb{Y}}_r$ is discussed, and further references are given.
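As an illustration (not part of the original text), the $r$-core and $r$-quotient can be computed with the standard abacus construction on beta-numbers; in the sketch below the sample partition is an arbitrary choice and the labelling of the quotient components follows one common convention.

```python
# Hedged sketch (not from the paper): r-core and r-quotient via beta-numbers
# (the abacus construction); the labelling of the quotient components is one
# of several common conventions.
def core_and_quotient(lam, r):
    N = r * ((len(lam) + r) // r)                      # pad to a multiple of r
    rows = list(lam) + [0] * (N - len(lam))
    beta = [rows[i] + N - 1 - i for i in range(N)]     # distinct beta-numbers
    runners = [sorted(((b - k) // r for b in beta if b % r == k), reverse=True)
               for k in range(r)]
    quotient = tuple(
        tuple(v - (len(run) - 1 - j) for j, v in enumerate(run)
              if v - (len(run) - 1 - j) > 0)
        for run in runners)
    core_beta = sorted((k + r * j for k in range(r) for j in range(len(runners[k]))),
                       reverse=True)
    core = tuple(b - (N - 1 - i) for i, b in enumerate(core_beta) if b - (N - 1 - i) > 0)
    return core, quotient

lam, r = (6, 4, 3, 1, 1), 3
core, quo = core_and_quotient(lam, r)
print(core, quo)                                       # (3, 2, 2, 1, 1) ((), (), (1, 1))
print(sum(lam) == sum(core) + r * sum(sum(q) for q in quo))   # True
```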
The factorization and the corresponding analog of the Robinson-Schensted algorithm play the central role in the recent paper [@B], see also [@R].
[99]{}
A. Borodin, *Longest increasing subsequence of random colored permutation*, math.CO/9902001.
A. Borodin, A. Okounkov, and G. Olshanski, *On asymptotics of the Plancherel measures for symmetric groups*, to appear in J.Amer. Math. Soc., math.CO/9905032.
A. Borodin and G. Olshanski, *Point processes and the infinite symmetric group*, Math. Res. Letters **5**, 799–816, 1998.
A. Borodin and G. Olshanski, *Distribution on partitions, point processes, and the hypergeometric kernel*, to appear in Comm. Math. Phys., math.RT/9904010.
A. Borodin and G. Olshanski, *Z-measures on partitions, Robinson-Schensted-Knuth correspondence, and $\beta=2$ random matrix ensembles*, this volume, math.CO/9905189,
S. Fomin and D. Stanton, *Rim hook lattices*, St. Petersburg Math. J., **9**, no. 5, 1007–1016, 1998.
G. James and A. Kerber, *The representation theory of the symmetric group*, Encyclopedia of Math. and its Appl., **16**, Addison-Wesley, 1981.
V. Kac, *Infinite dimensional Lie algebras*, Cambridge University Press.
S. Kerov, *private communication*.
S. Kerov, G. Olshanski, and A. Vershik, *Harmonic analysis on the infinite symmetric group. A deformation of the regular representation*, C. R. Acad. Sci.Paris Sér. I Math., **316**, no. 8, 1993, 773–778.
I. G. Macdonald, *Symmetric functions and Hall polynomials*, Clarendon Press, 1995.
A. Okounkov, *Infinite wedge and random partitions*, math.RT/9907127.
L. Pukanszky, *The Plancherel formula for the universal covering group of ${\rm SL}(2,R)$*, Math. Annalen **156**, 1964, 96–143.
E. M. Rains, *Increasing subsequences and the classical groups*, Electr. J. of Combinatorics, **5**(1), 1998.
N. Vilenkin, *Special functions and the theory of group representations*, Translations of Mathematical Monographs, **22**, AMS, 1968.
[^1]: Department of Mathematics, University of California at Berkeley, Evans Hall \#3840, Berkeley, CA 94720-3840. E-mail: okounkov@math.berkeley.edu
---
abstract: 'We consider the warped product manifold, ${\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} M^n$, with Riemannian metric $\gamma\equiv \operatorname{d}r^2 \oplus r^2 \sigma$, where $(M^n, \sigma)$ is a smooth closed Riemannian $n$-manifold. We investigate what sufficient curvature condition is required of $\sigma$ to ensure that a solution to the inverse mean curvature flow - commencing with a surface described as the graph of a global $C^{2,\alpha}$ function on $M^n$ - exists for all times $t>0$.'
author:
- Thomas Mullins
title: On the inverse mean curvature flow in warped product manifolds
---
introduction
============
The general outward curvature flow (of which the inverse mean curvature flow is a special case) was studied by Gerhardt [@ger1] and Urbas [@urbas] in ${\mathbb{R}}^{n+1}$ for a [*starshaped*]{} initial surface $S_0$, which is equivalent to there existing a $u_0:S^n \rightarrow {\mathbb{R}}_+$ such that $$S_0 = \operatorname{graph}u_0,$$ where $S^n$ is the unit $n$-sphere. In both papers, the idea was to use polar coordinates to describe the flow of the surfaces and prove existence for all times $t>0$. This is equivalent (see Example \[polar coords\]) to the flow of surfaces in the warped product ${\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} S^n$. So it was a natural question to ask: under what conditions on an arbitrary manifold $M^n$ will a solution to this flow in the warped product ${\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}}M^n$ also exist for all times $t>0$?\
\
Indeed, starting out with a sufficiently smooth function $u_0$ from the compact $n$-manifold $M^n$ into the positive real numbers, and defining an embedded hypersurface $M_0$ in the warped product ${\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} M^n$ in the obvious manner $$M_0 \equiv \operatorname{graph}u_0 = \{ (u_0(p),p), p\in M^n \},$$ we investigate the curvature conditions required of $M^n$. We will show that a positive definite Ricci tensor suffices for the solution to exist for all times $t>0$ and to remain graphical throughout the evolution. We also show, via a standard rescaling argument, that the solution is, in a certain sense, asymptotically ‘$\{\infty\} \times M^n$’. We now state the main theorem:
Let $(M^n,\sigma)$ be a smooth closed Riemannian manifold of dimension $n$, whose curvature satisfies $$\label{ricci bound}
\operatorname{Ric}_M(X) > 0$$ for all $X\in TM$. Let $u_0:M^n \rightarrow {\mathbb{R}}_+$ be of class $C^{2,\alpha}$ whose corresponding embedding $$x_0:M^n \rightarrow N^{n+1}, \quad p \mapsto (u_0(p),p),$$ has strictly positive mean curvature $H_0$. Then the evolution equation $$\label{eq}\left\{
\begin{array}{l l}
\dot{x} & = H^{-1}\nu \\
x_0 & = x(0,\cdot),
\end{array} \right.$$ where $\nu$ is the outward pointing unit normal to the evolving hypersurfaces $M_t \equiv x(t,M)$, and $H$ is the mean curvature of $M_t$, has a unique solution of class $H^{2+\beta,\frac{2+\beta}{2}}(Q_{\infty}) \cap H^{2+m+\gamma,\frac{2+m+\gamma}{2}}(Q_{\epsilon,\infty})$ for any $0<\beta < \alpha$, $\gamma \in (0,1)$, $\epsilon > 0$ and $m\geq 0$ that exists for all times $t>0$. The rescaled surfaces $$\tilde{M}_t = \tilde{x}(t,M) = \operatorname{graph}\tilde{u}(t,\cdot),$$ where $\tilde{u} \equiv u\operatorname{e}^{-t/n}$, converge exponentially fast to a constant embedding $$M^n \xhookrightarrow{r_{\infty}}\lbrace r_{\infty} \rbrace\times M^n ,$$ with $$r_{\infty} = \left[ \frac{|M_0|}{|M^n|}\right]^{1/n}.$$
notation and conventions {#not and conv}
------------------------
We are working primarily with a family of embeddings of an arbitrary $n$-manifold with a fixed metric $(M^n,\sigma)$ into an $(n+1)$-manifold also equipped with a fixed metric $(N^{n+1},\gamma)$. Coordinates on $N^{n+1}$ will be denoted by $\lbrace x^a \rbrace$ and on $M^n$ by $\lbrace y^i \rbrace$, where indices from the set $\lbrace a,b,c,d,e \rbrace$ run from $0$ to $n$ and indices from the set $\lbrace i,j,k,l,m,r,s \rbrace$ run from $1$ to $n$ throughout this paper. Indices will be raised and lowered (corresponding to contravariant and covariant quantities respectively) with the relevant metric, as is usual practice.\
\
With $g=\{g_{ij}\}$ we denote the induced metric on the embedded manifolds, $M_t$. Although $g=g(t)$, we suppress this dependence on $t$ to spare the reader a deluge of indices and arguments; we should, however, bear it in mind. The covariant differential operators on $M^n$, $M_t$ and $N^{n+1}$ are denoted by $D=D(\sigma)$, $\nabla = \nabla(g)$ and ${\overline{\nabla}}= {\overline{\nabla}}(\gamma)$ respectively, or simply by subscripted indices, running from $1$ to $n$ for $M^n$ and $M_t$ (it will always be clear from the context whether we are on $M^n$ or $M_t$) and from $0$ to $n$ for covariant differentiation on $N^{n+1}$. In situations of potential ambiguity a subscripted semicolon ‘$_;$’ will occasionally precede the index to be covariantly differentiated. For the partial derivative, we either replace the subscripted semicolon with a subscripted comma, as is common in the literature, or include the partial derivative symbol $\partial$ with appropriate subscripted indices.\
\
We will mainly use tensor notation to describe tensor quantities on $M^n$, $M_t$ and $N^{n+1}$. For instance if $w$ is a function from $N^{n+1}$ to ${\mathbb{R}}$ then $w_{;\,ab}$ (or simply $w_{ab}$) would represent the Hessian of $w$. For an arbitrary tensor $\mathcal{T}$ of valence $[ ^k _l ]$, we define the covariant derivative of that tensor $\nabla \mathcal{T}$ to be the $[^k _{l+1}]$-tensor by $$\label{tensor covariant derivative}
\begin{split}
(\nabla \mathcal{T})&(\omega_1,\dots,\omega_k,X^1,\dots,X^l,Z) = (\nabla_Z \mathcal{T})(\omega_1,\dots,\omega_k,X_1,\dots,X_l)\\
\equiv & Z\left(\mathcal{T}(\omega_1,\dots,\omega_k,X^1,\dots,X^l) \right) \\
& - \sum_p \mathcal{T}(\omega_1,\dots,\nabla_Z \omega_p,\dots, \omega_k,X^1,\dots,X^l) \\
& - \sum_q \mathcal{T}(\omega_1,\dots,\omega_k,X^1,\dots,\nabla_Z X^q,\dots,X^l).
\end{split}$$ Here, the $\omega_j$ are covectors and the $X^j$ are vectors. This can be expressed more concisely in index notation as follows $$\mathcal{T}^{i_1 ... i_k} _{j_1 ... j_l;p} = \mathcal{T}^{i_1 ... i_k} _{j_1 ... j_l,p} + \sum_{q=1}^k \tensor{\varGamma}{^{i_q}_p_e}\mathcal{T}^{i_1 ..e.. i_k} _{j_1 ... j_l} - \sum_{r=1}^l \tensor{\varGamma}{^e_p_{j_r}} \mathcal{T}^{i_1 ... i_k} _{j_1 ..e.. j_l},$$ where $\tensor{\varGamma}{^c_a_b}$ are the connection coefficients. The Einstein summation convention will be used throughout.\
\
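As a simple instance of the formula above (included purely for orientation, and not needed later), a $\left[ ^1_1 \right]$-tensor $\mathcal{T}$ has covariant derivative $$\mathcal{T}^{i}_{j;p} = \mathcal{T}^{i}_{j,p} + \tensor{\varGamma}{^i_p_e}\mathcal{T}^{e}_{j} - \tensor{\varGamma}{^e_p_j}\mathcal{T}^{i}_{e},$$ and applying the same rule to the metric itself yields $\sigma_{ij;p} = \sigma_{ij,p} - \tensor{\varGamma}{^e_p_i}\sigma_{ej} - \tensor{\varGamma}{^e_p_j}\sigma_{ie} = 0$, i.e. the Levi-Civita connection is compatible with the metric.\
\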
The norm of a tensor $\tensor{\mathcal{T}}{^k_i_j}$ of valence $\left[ ^1_2 \right]$ with respect to the metric $\sigma$ is defined by $$|\mathcal{T}|^2_{\sigma} \equiv \sigma_{ij} \sigma^{kl}\sigma^{rs} \tensor{\mathcal{T}}{^i_k_r}\tensor{\mathcal{T}}{^j_l_s}.$$ The norms with respect to $g$ and $\gamma$ are defined analogously, as is the case for a tensor of arbitrary valence.\
\
At times we will need to distinguish between the connection coefficients and other tensor quantities (for instance, curvature) on $(M^n,\sigma)$, on $(M_t,g)$ and on $(N^{n+1},\gamma)$. We shall do so by a superscripted index of the metric for $M^n$ and $M_t$, and a bar for $N^{n+1}$, for example the Christoffel symbols of the connections ${^{\sigma}\tensor{\varGamma}{^k_i_j}}$, ${^g\tensor{\varGamma}{^k_i_j}}$ and $\tensor{{\overline{\varGamma}}}{^c_a_b}$ for the corresponding quantities on $M^n$, $M_t$ and $N^{n+1}$ respectively.\
\
We adopt the standard convention for the commutator of the covariant derivative (the Riemann curvature tensor), corresponding to $$\label{commutator vec}
\begin{split}
R(U,V)X \equiv & \mathop{\nabla}\limits_{U}\mathop{\nabla}\limits_{V} X - \mathop{\nabla}\limits_{V}\mathop{\nabla}\limits_{U} X - \mathop{\nabla}\limits_{[U,V]} X \\
R(U,V,X,Z) \equiv & \langle R(U,V)X,Z \rangle,
\end{split}$$ for tangent vectors $U,V$ and vector fields $X,Z$. In index notation, this can be expressed for a contravariant quantity as $$\label{curvature index notation}
\nabla_a \nabla_b X^c - \nabla_b \nabla_a X^c = \tensor{R}{_a_b^c_d} X^d,$$ and for a covariant quantity as $$\label{commutator form}
\nabla_a \nabla_b \varphi_c - \nabla_b \nabla_a \varphi_c =\tensor{R}{_a_b_c^d} \varphi_d.$$ The Ricci curvature, as remarked in the introduction, is the trace of the Riemann curvature tensor. Formally $$\label{ricci}
\operatorname{Ric}(U,V) \equiv \sum^{n}_{i=1} R(U,E_i,E_i,V),$$ where $\{E_i\}_{i=1,...,n}$ is an orthonormal frame. Equivalently, in index notation this can be expressed as $$\operatorname{Ric}(U,V) = \tensor{R}{_i_k}U^i V^k \equiv \tensor{R}{_i_m_k^m}U^i V^k.$$ We can also define the quadratic form of the Ricci tensor, a notation which will at times prove useful: $$\operatorname{Ric}(U) \equiv \operatorname{Ric}(U,U).$$ The [*function space*]{}, $H^{2+m+\alpha,\frac{2+m+\alpha}{2}}(Q_T)$, in which our functions reside is defined in §\[imcf for graphs\]. The definition of the norm can be found in [@ger2 §2.5].
hypersurface geometry
=====================
the fundamental equations
-------------------------
Let us assert the fundamental equations from the theory of isometric immersions. These relate the geometry of the submanifold to the geometry of the ambient space via the second fundamental form, $A\equiv \{h_{ij}\}_{i,j=1,\dots,n}$. Recall that if $$x:M^n \rightarrow N^{n+1}$$ is an [*embedding*]{} (or [*immersion*]{}) of $M^n$ into $N^{n+1}$, then $\operatorname{d}\! x_p$ has rank $n$ for all $p\in M^n$.\
\
A wide range of literature covers this topic including proofs of the following relations, see for instance [@do; @carmo; @ger2].
There exists a symmetric $\left[^0_2\right]$-tensor field $h:TM \times TM \rightarrow {\mathbb{R}}$ that satisfies $$\label{gauss formula}
x_{ij} = - h_{ij}\nu.$$
The unit normal $\nu$ satisfies the identity $$\label{weingarten eq}
\nu_i = h^k_{i}x_k.$$
The second derivative of $\nu$ satisfies $$\nu_{ij} = h^k_{i;j}x_k - h^k_i h_{kj} \nu.$$
\[gauss eq lem\] The Riemann curvature of the submanifold is related to the curvature of the ambient space by $$\label{gauss eq}
{^g\tensor{R}{_i_j_k_l}} = h_{ik}h_{jl} - h_{il}h_{jk} + {\overline{R}}(x_i,x_j,x_k, x_l).$$
By contracting with the inverse metric $\gamma^{ab}$ of the ambient space, we obtain identities for the Ricci curvature as well as the scalar curvature of the submanifold. These we give explicitly in the following two corollaries.
\[ricci gauss\] The Ricci curvature of the submanifold is related to the curvature of the ambient space by $${^g\tensor{R}{_i_k}} = {\overline{\operatorname{Ric}}}(x_i,x_k) - {\overline{R}}(\nu,x_i,\nu,x_k) + Hh_{ik} - h_{i m} h^m_k$$
\[scalar gauss\] The Scalar curvature of the submanifold is related to the curvature of the ambient space by $${^gR} = {\overline{R}}- 2{\overline{\operatorname{Ric}}}(\nu) + H^2 - |A|^2_g,$$ where $A\equiv \{h_{ij}\}_{i,j = 1,\dots,n}$ (a notation that we will use on occasion).
Our final equation of the fundamental classical variety relates the ambient curvature to the differential of the second fundamental form.
The arguments of the $\left[^0_3\right]$-tensor $\nabla h$ commute as follows $$\label{codazzi}
h_{ij;k} - h_{ki;j} = {\overline{R}}(\nu, x_i, x_j, x_k).$$
We now look at how the second derivatives of the second fundamental form commute. These identities were first discovered by Simons [@simons] and come in very useful.
The second derivatives of $h_{ij}$ satisfy $$\label{hij der comm}
\begin{split}
h_{ij;kl} = & h_{kl;ij} + h_{mj} h_{il}h^m_k - h_{ml}h_{ij}h^m_k + h_{mj} h_{kl} h^m_i - h_{ml} h_{kj} h^m_i\\
& + {\overline{R}}( x_k, x_i, x_l, x_m) h^m_j +{\overline{R}}( x_k, x_i, x_j, x_m) h^m_l + {\overline{R}}(x_m, x_i, x_j, x_l ) h^m_k \\
& + {\overline{R}}(x_m, x_k, x_j, x_l) h^m_i+ {\overline{R}}(\nu, x_i, \nu, x_j) h_{kl} - {\overline{R}}(\nu, x_k, \nu, x_l) h_{ij}\\
& + ({\overline{\nabla}}{\overline{R}})(\nu, x_i, x_j, x_k, x_l) + ({\overline{\nabla}}{\overline{R}})(\nu, x_k, x_i, x_l, x_j ).
\end{split}$$
Using the Codazzi and Gauss equations together with the commutator of covariant tensor quantities and the tensor covariant derivative (and not forgetting the symmetries of ${\overline{R}}$) $$\begin{aligned}
\notag h_{ij;kl} = & \left( h_{ki;j} + {\overline{R}}(\nu, x_i, x_j, x_k) \right)_{;l} \\
\notag = & h_{ki;jl} + ({\overline{\nabla}}{\overline{R}})(\nu, x_i, x_j, x_k, x_l) \\
\label{three}& + {\overline{R}}(x_m, x_i, x_j, x_k) h^m_l - {\overline{R}}(\nu, x_i, \nu, x_k) h_{lj} - {\overline{R}}(\nu, x_i, x_j, \nu) h_{kl} \\
\notag h_{ki;jl} = & h_{ki;lj} + {^g\tensor{R}{_m_i_j_l}}h^m_k + {^g\tensor{R}{_m_k_j_l}}h^m_i + {\overline{R}}(x_m, x_i, x_j, x_l) h^m_k + {\overline{R}}(x_m, x_k, x_j, x_l) h^m_i\\
\notag = & h_{ki;lj} + h_{mj} h_{il}h^m_k - h_{ml}h_{ij}h^m_k + h_{mj} h_{kl} h^m_i - h_{ml} h_{kj} h^m_i\\
\label{two} & + {\overline{R}}(x_m, x_i, x_j, x_l) h^m_k + {\overline{R}}(x_m, x_k, x_j, x_l) h^m_i\\
\notag h_{ki;lj} = & h_{kl;ij} + ({\overline{\nabla}}{\overline{R}})(\nu, x_k, x_i, x_l, x_j) \\
\label{one} &+ {\overline{R}}(x_m, x_k, x_i, x_l) h^m_j - {\overline{R}}(\nu, x_k, \nu, x_l) h_{ij} - {\overline{R}}( \nu, x_k, x_i, \nu) h_{jl}.
\end{aligned}$$ Plugging \[one\] into \[two\] and then \[two\] into \[three\], and using again the symmetries of the full tensor ${\overline{R}}$, gives the desired result.
The Laplacian of the second fundamental form satisfies $$\begin{split}
\Delta_{M^{\prime}} h_{ij} = & H_{ij} - |A|^2_g h_{ij} + H h_{im} h^m_j + H {\overline{R}}(\nu, x_i, \nu, x_j)- {\overline{\operatorname{Ric}}}(\nu) h_{ij} \\
& + g^{kl}\big[ {\overline{R}}( x_k, x_i, x_l, x_m) h^m_j +{\overline{R}}( x_k, x_i, x_j, x_m) h^m_l + {\overline{R}}(x_m, x_i, x_j, x_l ) h^m_k \\
& + {\overline{R}}(x_m, x_k, x_j, x_l) h^m_i + ({\overline{\nabla}}_{x_l} {\overline{R}})(\nu, x_i, x_j, x_k) + ({\overline{\nabla}}_{ x_j } {\overline{R}})(\nu, x_k, x_i, x_l)\big].
\end{split}$$
Contract with $g^{kl}$ and use the (anti)symmetric properties of the full tensor ${\overline{R}}$.
geometry of the warped product {#wp geometry}
------------------------------
The [*warped product*]{}, $P\times_{h} S$, of two Riemannian manifolds, $(P^p,\varpi)$ and $(S^s,\varsigma)$, with [*warping factor*]{} $h$ is the product manifold, $P\times S$, equipped with the metric $$\gamma^{P\times_{h} S} = \varpi \oplus h^2 \varsigma.$$ In this manner, provided $h$ is a positive function, $(P\times_{h} S, \gamma^{P\times_{h} S})$ also becomes a Riemannian manifold.\
\
In this section we look at some properties of warped product spaces in a more specialised setting. We consider the warped product $I\times_f M^n$, where $I$ is an arbitrary one dimensional manifold parameterised by $r$, $(M^n,\sigma)$ is a Riemannian $n$-manifold with metric $\sigma$, and $f$ is a positive, monotonically increasing, sufficiently differentiable function of $r$. Thus we have the metric $$\label{wp metric}
\gamma =\operatorname{d}\!r^2 \oplus f^2 \sigma.$$ Indeed, this metric is conformal to a metric of the form $$\tag{*}\label{metric}
\operatorname{d}\!\rho^2 \oplus \rho^2 \sigma,$$ with conformal factor $\phi(\rho)$ where $\operatorname{d}\! r = \phi(\rho)\operatorname{d}\!\rho$ and $f(r) = \rho \phi(\rho)$, (see [@petersen]).\
\
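Purely as an illustration of this conformal change (only $f \equiv \operatorname{\bf{Id}}$ will be used below), consider the hyperbolic-type warping $f(r) = \sinh r$: one checks from $\operatorname{d}\! r = \phi(\rho)\operatorname{d}\!\rho$ and $\rho\phi(\rho) = \sinh r$ that $$\operatorname{d}\!r^2 \oplus \sinh^2\! r\, \sigma = \phi(\rho)^2\left( \operatorname{d}\!\rho^2 \oplus \rho^2 \sigma \right), \qquad \rho = \tanh(r/2), \quad \phi(\rho) = \frac{2}{1-\rho^2},$$ which for $M^n = S^n$ is just the Poincaré ball model of hyperbolic space (away from the centre) written in polar form.\
\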
During the course of this work, we concern ourselves only with metrics of the form \[metric\], or more accurately, with the warped product ${\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} M^n$.\
\
The arbitrarily chosen coordinates $\{ y^i \}$ of $M^n$ are extended via the radial function $r$ to coordinates $\{y^a\}$ of $N={\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} M^n$ whereby $$y^0 \equiv r.$$ In this manner, the basis $\{ e_i \equiv \partial / \partial y^i\}_{i=1,\dots,n}$ of $TM$ is extended to a basis, $\{ e_a \}_{a=0,\dots,n}$, of $TN$. This allows for convenient indexing in what follows. Moreover, tensor quantities on $N^{n+1}$ will be dealt with in this basis, for example $$\tensor{{\overline{R}}}{_a_b_c_d} \equiv {\overline{R}}(e_a,e_b,e_c,e_d).$$ The level sets, $\lbrace r \rbrace \times M^n\equiv \lbrace r = \text{const} \rbrace$, of the radial function $r$ of the warped product, have the induced metric $$\label{metric alpha}
\alpha_{ij} = r^2\sigma_{ij},$$ and the second fundamental form, $\beta_{ij},$ of the inclusion $M^n \xhookrightarrow{r} \lbrace r \rbrace \times M^n $ is easily shown to be $$\label{sff beta}
\beta_{ij} = r\sigma_{ij}.$$ We wish to express the connection on $N^{n+1}$ in terms of quantities on $M^n$ and the radial function $r$, whereby we consider surfaces $\lbrace r \rbrace \times M^n$. Using the well known formula for the Levi-Civita connection coefficients (see for example [@jost]) $$\label{christoffel define}
\tensor{{\overline{\varGamma}}}{^c_a_b} = \frac{1}{2} \gamma^{c e} (\partial_{a} \gamma_{e b} + \partial_{b} \gamma_{a e} - \partial_{e} \gamma_{a b}),$$ where $\partial_a \equiv \partial / \partial y^a$, the following fact almost derives itself $$\label{christoffel}
\tensor{{\overline{\varGamma}}}{^c_a_b} = \left\{
\begin{array}{l l}
\tensor{\varGamma}{^k_i_j} & \quad \text{if $(a,b,c)=(i,j,k)$}\\
-r\sigma_{ij} & \quad \text{if $c=0, (a,b) =(i,j)$ }\\
r^{-1} \delta^k_j & \quad \text{if $a=0, (b,c) =(j,k) \vee b=0, (a,c) = (j,k)$ }\\
0 & \quad \text{otherwise}.
\end{array} \right.$$
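As a quick check of two of these entries (a routine computation, recorded here only for convenience): since $\gamma_{00}=1$, $\gamma_{0i}=0$ and $\gamma_{ij}=r^2\sigma_{ij}$ with $\sigma$ independent of $r$, the formula \[christoffel define\] gives $$\tensor{{\overline{\varGamma}}}{^0_i_j} = -\tfrac{1}{2}\partial_r(r^2\sigma_{ij}) = -r\sigma_{ij}, \qquad \tensor{{\overline{\varGamma}}}{^k_0_j} = \tfrac{1}{2}r^{-2}\sigma^{kl}\partial_r(r^2\sigma_{lj}) = r^{-1}\delta^k_j,$$ in agreement with the second and third cases of \[christoffel\].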
The Riemann curvature tensor of the warped product ${\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} M^n$ vanishes in the radial direction, i.e. $$\label{curvature}
\tensor{{\overline{R}}}{_0_b_c_d} = 0.$$
It is easily derived from that the curvature tensor in terms of $\tensor{{\overline{\varGamma}}}{^c_a_b}$ (see [@wald §3]) can be expressed as $$\label{curvature gamma}
\tensor{{\overline{R}}}{_a_b_c^d} = \partial_b \tensor{{\overline{\varGamma}}}{^d_a_c} - \partial_a \tensor{{\overline{\varGamma}}}{^d_b_c} + \tensor{{\overline{\varGamma}}}{^d_b_e}\tensor{{\overline{\varGamma}}}{^e_a_c} - \tensor{{\overline{\varGamma}}}{^d_a_e}\tensor{{\overline{\varGamma}}}{^e_b_c}.$$ Combining and we deduce (sparing the details) $$\tensor{{\overline{R}}}{_0_j_k^l} = \tensor{{\overline{R}}}{_0_j_k^0} =0.$$ Lowering the final index with $\gamma_{ab}$ shows $$\tensor{{\overline{R}}}{_0_i_j_a} = \gamma_{ab} \tensor{{\overline{R}}}{_0_i_j^b} = 0.$$ The symmetries of ${\overline{R}}$ lead to the result.
The embedding $M^n \xhookrightarrow{r} \{r\}\times M^n$ has constant mean curvature of $r^{-1}n.$
Contracting the second fundamental form $\beta=r\sigma$ with the inverse induced metric $\alpha^{-1} = r^{-2}\sigma^{-1}$ gives $$H = \alpha^{ij} \beta_{ij} = r^{-1} \sigma^{ij} \sigma_{ij} = r^{-1}n.$$
$$\operatorname{Ric}_M \geq 0 \iff {\overline{\operatorname{Ric}}}\geq (1-n) \sigma$$
Once again using \[christoffel\] and \[curvature gamma\], we deduce $$\label{riemann level set}
{^{\alpha}\tensor{R}{_i_j_k^l}} = {^{\sigma}\tensor{R}{_i_j_k^l}},$$ where ${^{\alpha}\tensor{R}{_i_j_k^l}}$ is the Riemann curvature tensor with respect to the metric $\alpha_{ij}$ from \[metric alpha\] on the level set $\{ r \} \times M^n$. Taking the trace over $j$ and $l$ shows $$\operatorname{Ric}_{\{r\}\times M} = \operatorname{Ric}_M.$$ Using \[metric alpha\] and \[sff beta\] we compute $$\beta^i_j = \alpha^{im}\beta_{mj} = r^{-1}\delta^i_j.$$ Now apply Corollary \[ricci gauss\], substituting $\beta$ for $h$ and taking note of \[curvature\], to reveal $$\label{ric m}
\operatorname{Ric}_M = {\overline{\operatorname{Ric}}}+ (n-1)\sigma.$$
If we lower the index with $\alpha_{lm}$, we get $${^{\alpha}\tensor{R}{_i_j_k_m}} = \alpha_{lm}\left[{^{\sigma}\tensor{R}{_i_j_k^l}}\right] = r^2 \sigma_{lm}\left[ {^{\sigma}\tensor{R}{_i_j_k^l}}\right] = r^2 \left[ {^{\sigma}\tensor{R}{_i_j_k_m}} \right].$$ This equality shows that as the embedded hypersurface varies in the radial direction with equal magnitude at every point (i.e. rescales), its sectional curvatures scale like the inverse square of the radial distance. Intuitively this can be seen by considering a family of expanding 2-spheres in ${\mathbb{R}}^{3}$.
\[polar coords\] The archetypal example of a warped product manifold is ${\mathbb{R}}^{n+1} \setminus \{0\}$ equipped with the canonical Euclidean metric $\delta_{ij}$, expressed as $${\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} S^n$$ with the metric as in \[metric\], where $\sigma$ is now the round metric on $S^n$. This corresponds simply to expressing elements of ${\mathbb{R}}^{n+1}\setminus \{{\bf 0}\}$ in polar coordinates. As aforementioned, this is how Gerhardt and Urbas tackled the issue of the outward flow of an embedded surface in Euclidean space in [@ger1] and [@urbas] respectively.
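A quick consistency check (nothing new, just combining the formulas above): in this example the ambient space is flat, so \[ric m\] forces $$\operatorname{Ric}_{S^n} = (n-1)\sigma,$$ and hence for $n\geq 2$ the curvature hypothesis \[ricci bound\] of the main theorem is satisfied, consistent with the long time existence results of [@ger1; @urbas] for starshaped initial surfaces in ${\mathbb{R}}^{n+1}$.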
hypersurfaces as graphs of functions
------------------------------------
The theory of general hypersurfaces (see e.g. [@do; @carmo §6]) can easily be applied to the special case in which the ambient space is the warped product $N={\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} M^n$ from §\[wp geometry\], where the metric $\gamma$ of $N^{n+1}$, in terms of the radial function $r$ and the metric $\sigma$ of $M^n$, is expressed as $$\gamma = \operatorname{d}\! r^2 \oplus r^2 \sigma$$ and the embedding $x$ can be seen as the graph of a function $u:M^n\rightarrow {\mathbb{R}}_+$; note, however, that not every hypersurface of $N^{n+1}$ can be expressed in this way.\
\
In this respect we obtain the induced metric $g$: $$\label{g def}
g_{ij}(p) \equiv g_p(e_i, e_j) \equiv (x^{*}\gamma)_p(e_i, e_j) \equiv \gamma_{x(p)=(u(p),p)}(x_{*}(e_i), x_{*}(e_j))$$ where $$\label{xi}
x_{*}(e_i) = \operatorname{d}\!x(e_i) = u_ie_0 + e_i \equiv x_i,$$ and $e_i = \partial / \partial y^i$. Thus, we find $$g_{ij} = \gamma(x_i, x_j) = u_iu_j + u^2 \sigma_{ij} = u^2(\sigma_{ij} + \varphi_i \varphi_j),$$ where $\varphi \equiv \log u$ throughout. Since $u$ takes solely positive values, $\varphi$ is always well defined. The basis $\{ e_i = \partial / \partial y^i \}_{i=1,...,n}$ of $TM$ gives rise to a basis of $Tx(M)$ via the differential (or [*push forward*]{}, see previous section). These are precisely the vectors $\{ x_i \}_{i=1,...,n}$ of \[xi\].\
\
The outward pointing unit normal vector to $x(M)$, which necessarily satisfies $\gamma(\nu, x_i) = 0$ for all $i=1,\dots,n$, is consequently given by $$\nu = (\nu^a) = \gamma^{ab} \nu_{b}$$ where the covector $\nu_{a} $ has the components $(1, -Du)$ and the normalisation factor $v^{-1}$ where $$\label{v}
v^2 \equiv 1+u^{-2}|Du|^2_{\sigma}= 1+|D\varphi|^2_{\sigma}.$$ Thus, $$\label{norm}
\nu = (\nu^{a}) = v^{-1}(e_0 - u^{-2} u^k e_k ).$$ It is readily checked that this vector is indeed orthogonal to each $x_i$ with respect to $\gamma$. The inverse of the induced metric is $$g^{ij} = u^{-2}(\sigma^{ij} - \varphi^i \varphi^j / v^2),$$ where $\varphi^i \equiv \sigma^{ij}\varphi_j = \sigma^{ij}D_j\varphi$ and with a quick calculation, is easily seen to be the inverse of $g_{ij}$.
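As an elementary sanity check of these formulas (the constant graph, which also reappears later as a model solution): for $u \equiv r_0$ we have $D\varphi = 0$, hence $v=1$, $$g_{ij} = r_0^2\sigma_{ij} = \alpha_{ij}, \qquad \nu = e_0 = \partial / \partial r,$$ recovering the induced metric \[metric alpha\] of the level set $\{r_0\}\times M^n$ and its outward unit normal.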
\[conn g conn sigma\] The connection of $g$ is related to the connection of $\sigma$ by $$\label{conn g conn sigma eq}
\begin{split}
\left( \mathop{\nabla}\limits_X - \mathop{D}\limits_X \right) Y = & \sigma(D\varphi,X)Y + \sigma(D\varphi, Y)X \\
& + v^{-2}\left( D^2\varphi(X,Y) - u^{-2}g(X,Y) \right) D\varphi,
\end{split}$$ for $X,Y \in TM$. Equivalently $${^g\tensor{\varGamma}{^k_i_j}} = {^{\sigma}\tensor{\varGamma}{^k_i_j}}+ \varphi_i \delta^k_j + \varphi_j \delta^k_i - u^{-2}v^{-2}\varphi^kg_{ij} + v^{-2}\varphi^k D_{ij}\varphi.$$
Using the formula , we see $$\begin{split}
{^g\tensor{\varGamma}{^k_i_j}} = & \frac{1}{2}g^{kl} (g_{lj,i} + g_{il,j} - g_{ij,l}) \\
= & \frac{1}{2}g^{kl}\big\lbrace 2u^{-1}u_ig_{lj} + u^2\sigma_{lj,i} + u^2(\varphi_l \varphi_j)_{,i} \\
&\hspace{35pt} + 2u^{-1}u_jg_{il} + u^2\sigma_{il,j} + u^2(\varphi_i \varphi_l)_{,j}\\
&\hspace{35pt} - 2u^{-1}u_lg_{ij} - u^2\sigma_{ij,l} - u^2(\varphi_i \varphi_j)_{,l} \big\rbrace\\
= & \varphi_i \delta^k_j + \varphi_j \delta^k_i - \varphi_l g^{kl}g_{ij} + {^{\sigma}\tensor{\varGamma}{^k_i_j}} \\
& - \varphi^k \varphi^l v^{-2}\sigma_{ml} {^{\sigma}\tensor{\varGamma}{^m_i_j}} + \varphi^k \varphi_{j,i} - v^{-2}|D\varphi|^2_{\sigma} \varphi^k\varphi_{j,i}.
\end{split}$$ Using the facts $$1-|D\varphi|^2 v^{-2} = v^{-2}$$ and $$D_{ij} \varphi = \varphi_{j,i} - {^{\sigma}\tensor{\varGamma}{^m_i_j}}\varphi_m$$ leads to the sought after result.
Notice how the right hand side of is tensorial in both $X$ and $Y$ when contracted with a covector. This is consistent with the difference of two torsion free connections being a tensor of valence $\left[ ^1_2 \right]$ (in stark contrast to the expression $\omega(\mathop{\nabla}\limits_X Y)$, for covector $\omega$, not being tensorial in $Y$ for an arbitrary connection).
The second fundamental form, $A \equiv\{h_{ij} \}$, of the embedding $x$ is given as $$\label{sff eq}
h_{ij} = uv^{-1}(\sigma_{ij}+\varphi_i \varphi_j - \varphi_{ij}),$$ and the *Weingarten* map is given as $$h^i_j = \left[ uv \right]^{-1}\left( \delta^i_j + ( -\sigma^{ik} + \varphi^i \varphi^k v^{-2} ) \varphi _{kj}\right).$$
Since $x_i$ is covariant with respect to $g$ as well as contravariant with respect to $\gamma$ (see proof of Lemma \[gauss eq lem\]), in the $\{ e_a \}_{a=0,...,n}$ basis we have, $$x^a_{ij} = \partial^2_{ij}x^{a} + \tensor{{\overline{\varGamma}}}{^a_b_d} x_i^{b} x_j^{d} - {^g\tensor{\varGamma}{^k_i_j}}x^{a}_k.$$ and plugging this into the Gauss formula $$h_{ij} = - \gamma((\partial^2_{ij}x^{a} + \tensor{{\overline{\varGamma}}}{^a_b_d} x_i^{b} x_j^{d} - {^g\tensor{\varGamma}{^k_i_j}}x^{a}_k)e_{a}, \nu^{b}e_{b} ).$$ The partial (not covariant) derivative $x_{,ij} = u_{,ij} e_0$, so the first term is $$\gamma( x_{,ij}, \nu ) = v^{-1}u_{,ij}.$$ The final term disappears due to orthogonality so we look at the middle term. We notice that $\gamma( e_k, \nu )= -v^{-1} u_k$ and $\gamma( e_0, \nu ) = v^{-1}.$ Thus $$h_{ij} =-v^{-1}( u_{,ij} + \tensor{{\overline{\varGamma}}}{^0_b_d}x_i^{b} x_j^{d} - \tensor{{\overline{\varGamma}}}{^k_b_d}x_i^{b} x_j^{d} u_k).$$ With a little computation using and , $$\label{christoffel graph}
\tensor{{\overline{\varGamma}}}{^c_a_b}x_i^a x_j^b = \left\{
\begin{array}{l l}
{^{\sigma}\tensor{\varGamma}{^k_i_j}} & \quad \text{if $(a,b,c)=(i,j,k)$}\\
-u\sigma_{ij} & \quad \text{if $c=0, (a,b) =(i,j)$ }\\
u^{-1} u_i \delta^k_j & \quad \text{if $a=0, (b,c) =(j,k)$ }\\
u^{-1} u_j \delta^k_i & \quad \text{if $b=0, (a,c) = (i,k)$ }\\
0 & \quad \text{otherwise}
\end{array} \right.$$ Putting it all together we have $$\begin{aligned}
h_{ij} & = -v^{-1}\partial^2_{ij} u - v^{-1}(-u\sigma_{ij}) + v^{-1}({^{\sigma}\tensor{\varGamma}{^k_i_j}}u_k + 2u^{-1}u_iu_j)\\
& = uv^{-1}(\sigma_{ij} + \varphi_i \varphi_j - \varphi_{ij}),\end{aligned}$$ where we have used the fact that $u_{ij} = \partial^2_{ij} u - {^{\sigma}\tensor{\varGamma}{^k_i_j}}u_k$ (and seeing as this is a torsion free connection, $u_{ij} = u_{ji}$).\
\
For the second assertion, simply compute $h^i_j = g^{ik} h_{kj}$.
\[smc\] The mean curvature of the embedding $x:p\mapsto (u(p),p)$ can be expressed as follows $$\begin{split}
H & = [uv]^{-1} \left( n- \Delta_{\sigma} \varphi + D^2 \varphi( D\varphi, D\varphi )v^{-2} \right) \\
&= [uv]^{-1} \left( n- \sigma^{ij}\varphi_{ij}+ \varphi^i \varphi^j\varphi_{ij} v^{-2} \right)\\
& = [uv]^{-1} \left( n - u^2 \Delta_g \varphi \right).
\end{split}$$
Taking the $0^{\text{th}}$ component of the Gauss formula \[gauss formula\] into consideration yields $$\begin{split}
-v^{-1}h_{ij} = & u_{,ij} - {^g\tensor{\varGamma}{^k_i_j}}u_k + \tensor{{\overline{\varGamma}}}{^0_a_b}x_i^a x_j^b\\
= & \nabla_j u_i - u\sigma_{ij}\\
= & \nabla_j u_i - \beta_{ij},
\end{split}$$ where $\beta$ is the second fundamental form, as in \[sff beta\], of the level set $\{u(p)\} \times M^n$, and our computations are carried out at the point $p\in M^n$. This can be verified with an elaborate calculation using Proposition \[conn g conn sigma\], the details of which we spare the reader.
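Continuing the elementary check of the constant graph $u\equiv r_0$ (for orientation only): here $\varphi_i = \varphi_{ij} = 0$ and $v=1$, so the formulas above give $$h_{ij} = r_0\sigma_{ij} = \beta_{ij}, \qquad H = \frac{n}{r_0},$$ in agreement with \[sff beta\] and with the constant mean curvature $r^{-1}n$ of the level sets computed in §\[wp geometry\].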
inverse mean curvature flow for graphs {#imcf for graphs}
======================================
In §\[posing the problem\] we assume existence of a solution to \[eq\] on a maximal time interval $[0,T^{\star})$ and adapt this initial problem to the situation at hand to acquire a nonlinear [*scalar*]{} parabolic equation that is more manageable to work with. This fact is strengthened in §\[first order estimates\] where we prove existence of a solution for such an equation. We obtain two forms of this equation: one for the embedding function $$u:I\times M^n \longrightarrow {\mathbb{R}}_+$$ and the other for the natural logarithm of $u,$ $$\varphi \equiv \log u:I\times M^n \longrightarrow {\mathbb{R}}.$$ The advantage of the equation for $\varphi,$ as we shall soon see, is that the corresponding nonlinear operator for fixed $t$ only depends on the first and second derivatives of $\varphi$ and not on the function itself. This will come in handy for the maximum and comparison principles in §\[max and comp principles\].
Since a solution of the equation for one of these functions naturally implies a solution for the equation of the other (by taking logarithm or exponentiating respectively), the equations for $u$ and $\varphi$ will be interchanged and handled with equality throughout. Note that $\varphi$ is well defined, since $u_0>0$ everywhere on $M^n$ and the a priori estimates of §\[first order estimates\] deliver us positive lower bounds on $u$ for $t>0$.
In §\[max and comp principles\] as mentioned, we procure useful maximum and comparison principles for the acquired equations. These estimates are important for proving long time existence later on.\
\
We apply results of [@ger2; @lady] in §\[first order estimates\] to prove existence of a solution, $\varphi$, to the adapted problem, then we go on to prove some first order estimates.\
\
To conclude the section, in §\[evolution equations\], we study the so-called [*evolution equations*]{} of some quantities of interest on the evolving hypersurfaces. As we shall see, these are vital to the understanding of the behaviour of such solutions.\
\
We use the notation of [@ger2 §2.5] throughout this section and for the duration of the paper. In particular, $$Q_{T^{\star}} \equiv [0,T^{\star}) \times M^n$$ denotes the [*parabolic cylinder*]{} of existence of the solution, and the class of functions $H^{2+\beta, \frac{2+\beta}{2}}(\bar{Q}_{T^{\star}})$ to which (as we shall find out) the solution belongs is defined by $$\label{fcn space}
\varphi \in H^{2+\beta, \frac{2+\beta}{2}}(\bar{Q}_{T^{\star}}) \iff \varphi(t,\cdot) \in C^{2,\beta}(M^n) \: \wedge \: \varphi(\cdot,x) \in C^{1,\frac{\beta}{2}}([0,T^{\star})),$$ for $0<\beta<1$.
posing the problem {#posing the problem}
------------------
The family of embeddings $x:Q_{T^{\star}} \equiv [0,T^{\star})\times M^n \rightarrow N^{n+1}$, of $M^n$ into the warped product $(N^{n+1} = {\mathbb{R}}_+ \times_{\operatorname{\bf{Id}}} M^n,\gamma)$ takes the form $$x:(t,p)\mapsto \left(u(t,p(t)),p(t)\right),$$ so that in local coordinates $\lbrace y^i \rbrace$ on $M$ we can express the evolving hypersurface as $$\label{graph hs}
x(t,y^i(t)) = (u(t,y^i(t)), y^i(t)).$$ We take a total time derivative of \[graph hs\] and substitute it into the original equation \[eq\], using \[norm\], to get $$\label{radial x}
\dot{ x}^0 = {\frac{\operatorname{d}\! u }{\operatorname{d}\! t}} = \frac{\partial u}{\partial t} + u_i \dot{y}^i = H^{-1} \nu^0 = v^{-1} H^{-1}$$ for the radial component and $$\label{tangential x}
\dot{ x}^i = \dot{y}^i = -v^{-1}H^{-1}u^{-2}u^i$$ for the components tangential to the level sets of the radial function, $\lbrace r = \text{const} \rbrace$. Plugging \[tangential x\] into \[radial x\] we infer $$\begin{split}
\dot{u} & = v^{-1}H^{-1}u^{-2} |Du|^2_{\sigma} + v^{-1}H^{-1}\\
& = v^{-1}H^{-1}(1 + u^{-2}|Du|^2_{\sigma}) \\
& = vH^{-1}
\end{split}$$ where the dot $\dot{}$ over $u$ has taken the place of $\partial / \partial t$. Multiplying both sides by $u^{-1}$ and using Corollary \[smc\], we come to the conclusion that in this setting, the original problem is equivalent to the nonlinear parabolic (of course this still needs to be checked) equation $$\label{eq phi}
\left\{
\begin{array}{l l}
\dot{\varphi} - \left[ F(D\varphi, D^2 \varphi)\right]^{-1} &= 0\\
\varphi_0 &= \varphi(0,\cdot)
\end{array}
\right.$$ with $$\label{F}
F \equiv v^{-2}(n - \sigma^{ij}\varphi_{ij}+ \varphi^i \varphi^j\varphi_{ij} v^{-2} ) = Huv^{-1}.$$
\[F phi independent\] Notice how $F = F(D\varphi, D^2 \varphi)$ is independent of $\varphi$ in an undifferentiated guise. This fact will come in useful in the near future.
The equation for the scalar quantity $u$ will also be of use in the future. It is given by $$\label{eq u}
\left\{
\begin{array}{l l}
\dot{u} - H^{-1} v& =0 \\
u_0 & = u(0,\cdot),
\end{array} \right.$$ where $H = H(u,Du,D^2u)$ is expressed as $$H = [uv]^{-1} \left( n- \left( u \Delta_g u - |Du|^2_g \right) \right).$$ The final term here, $|Du|^2_g$ is the norm, in terms of $g$, of the gradient of $u$.
maximum and comparison principles {#max and comp principles}
---------------------------------
\[comp princ\] Let $\varphi$ and $\tilde{\varphi}$ solve \[eq phi\] in $Q_{T^{\star}}$ with initial data $\varphi_0$ and $\tilde{\varphi}_0$ respectively. Then $$\inf_M (\varphi_0 - \tilde{\varphi}_0) \leq \varphi - \tilde{\varphi} \leq \sup_M (\varphi_0 - \tilde{\varphi}_0).$$
Let $w \equiv (\varphi - \tilde{\varphi})\operatorname{e}^{\lambda t}$ for a small $\lambda$ to be chosen later on. Without loss of generality we assume $$\label{sup phi}
\sup_M (\varphi_0 - \tilde{\varphi}_0) \geq 0.$$ Let $0 < t_0$ be the first time the maximum of $w$ is attained and this occurs at the point $x_0 \in M^n$. At the [*spacetime*]{} point $(t_0,x_0)$ we have $$\begin{aligned}
0 \leq \dot{w} = & \lambda w + (\dot{\varphi} - \dot{\tilde{\varphi}}) \operatorname{e}^{\lambda t}\\
= & \lambda w + ( F^{-1} - \tilde{F}^{-1}) \operatorname{e}^{\lambda t},\end{aligned}$$ where $F\equiv F(D\varphi(t_0,x_0), D^2 \varphi(t_0,x_0))$ and $\tilde{F}\equiv F(D\tilde{\varphi}(t_0,x_0), D^2 \tilde{\varphi}(t_0,x_0))$. Since $w$ attains its spatial maximum at this point, we have $$D\varphi = D\tilde{\varphi} \quad \text{ and } \quad D^2 \varphi \leq D^2 \tilde{\varphi}.$$ Using these facts along with , we infer $$F \geq \tilde{F}.$$ Thus, at $(t_0, x_0)$ we conclude $$0\geq - \lambda w.$$ If we choose $\lambda < 0$ it follows that $$\sup_{Q_{T^{\star}}} w = w(t_0,x_0) \leq 0$$ Letting $\lambda \nearrow 0$, bearing in mind, implies the right hand inequality. The left hand inequality is proved analogously.
\[comp cor\] Let $\varphi$ solve \[eq phi\] on $Q_T$. Then the following holds $$\label{phi bound}
\inf_{M^n} \varphi_0 \leq \varphi - \frac{t}{n} \leq \sup_{M^n} \varphi_0.$$
Define $\tilde{\varphi} = t/n$. Then $\tilde{\varphi}$ solves \[eq phi\]. Apply Theorem \[comp princ\].
\[u exp bound\] By exponentiation, this is equivalent to $$\label{u bound}
\inf_{M^n} u_0 \leq u \operatorname{e}^{-t/n} \leq \sup_{M^n} u_0,$$ for the embedding function $u$, which solves \[eq u\].
The solution $\tilde{\varphi}$ of \[eq phi\] in the proof of Corollary \[comp cor\] corresponds to the solution of \[eq\] for which the initial embedding is given by $$x_0:p\mapsto (1,p).$$
As we shall be seeing shortly in §\[convergence\], the estimate on $u$ in Remark \[u exp bound\] alludes to the fact that if we were to introduce a factor of $\operatorname{e}^{-t/n}$, then the function $\tilde{u} = u\operatorname{e}^{-t/n}$ remains uniformly bounded from above and below. Thus we will be studying the convergence properties of for the corresponding rescaled embedding $\tilde{x}$ in §\[rescaled hypersurfaces\].
short time existence and first order estimates {#first order estimates}
----------------------------------------------
With the help of the parabolic theory [@ger2; @lady; @evans] we show that a solution to equation \[eq phi\] exists for a short time. Following this, as the heading suggests, we prove some first order estimates on $\varphi$.
Let $F = Huv^{-1}$ as defined in . Then $$a^{ij} \equiv -\frac{\partial F}{\partial \varphi_{ij}}$$ is positive definite.
Since $$F = v^{-2}(n - \sigma^{ij}\varphi_{ij}+ \varphi^i \varphi^j\varphi_{ij} v^{-2} ) = v^{-2}(n-u^2 g^{ij}\varphi_{ij}),$$ we simply differentiate $F$ with respect to $\varphi_{ij}$ to find $$-a^{ij} = \frac{\partial F}{\partial \varphi_{ij}} = - v^{-2}u^{2} g^{ij}.$$ Since $u$ and $v$ are both positive and $g^{ij}$ is positive definite, with $u^2 g^{ij}$ having trace $n-1+(1+|D\varphi|^2)^{-1}$ with respect to $\sigma$ (see [@mullins §2.2]), we conclude that $a^{ij}$ is positive definite.
\[aij pos def\] Since, as remarked upon above, $$\operatorname{trace}_{\sigma} \{u^2 g^{ij}\} = n-1+v^{-2},$$ and $v$ is uniformly bounded from above and below, we conclude that $F^{-1}$ is [*uniformly parabolic*]{} on $Q_{T^{\star}}$. We will also show this more explicitly later (see Theorem \[H bound\]) where we derive an evolution equation for $F^{-1} \equiv \chi H^{-1}$, where $\chi$ is to be defined at the beginning of §\[graphical nature\].
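Concretely (a two-line verification of the claim in the remark): with respect to $\sigma$, the matrix $u^2 g^{ij} = \sigma^{ij} - v^{-2}\varphi^i\varphi^j$ has eigenvalue $1 - v^{-2}|D\varphi|^2_{\sigma} = v^{-2}$ in the direction of $D\varphi$ and eigenvalue $1$ on the orthogonal complement, so that $$\sigma_{ij}\, u^2 g^{ij} = n - 1 + v^{-2}, \qquad v^{-2}\sigma^{ij} \leq u^2 g^{ij} \leq \sigma^{ij};$$ combined with the bounds on $v$ noted above, $a^{ij} = v^{-2}u^2 g^{ij}$ is uniformly elliptic.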
\[short time existence\] There exists a unique solution of class $H^{2+\beta,\frac{2+\beta}{2}}(Q_{T^{\star}})$ to , given $C^{2,\alpha}$ initial data, for $0<\beta<\alpha$, where $T^{\star}$ depends on $\beta$ and the initial data.
The prerequisites of [@ger2 Theorem 2.5.7] are fulfilled, since we have $$F^{-2} a^{ij}> 0,$$ i.e. \[eq phi\] is uniformly parabolic on some time interval $[0,T^{\star})$.
Let $\varphi$ solve \[eq phi\] on $Q_{T^{\star}}$. Then the following holds $$\inf_M \dot{\varphi}(0, \cdot) \leq \dot{\varphi} \leq \sup_M \dot{\varphi}(0, \cdot)$$
Differentiate \[eq phi\] with respect to $t$ to obtain $$\label{dt eq phi}
\ddot{\varphi} + F^{-2} \left( - a^{ij} D_i D_j \dot{\varphi} + a^i D_i \dot{\varphi} \right) = 0,$$ where $a^i = \partial F / \partial \varphi_i$. Applying the parabolic maximum principle (see [@evans §7.1 Theorem 8]) yields the result.
\[Dphi exp\] Let $\varphi$ solve \[eq phi\] on $Q_{T^{\star}}$ and let condition \[ricci bound\] hold. Then there exists a $\lambda > 0$, such that $$\label{Dphi exp eq}
|D\varphi|^2_{\sigma}\operatorname{e}^{\lambda t} \leq \sup_{M^n} |D\varphi_0|^2_{\sigma}$$ on $Q_{T^{\star}}$.
We aim to derive an evolution equation for the left hand side of and apply the maximum principle to deduce the inequality. Firstly, let $w = \frac{1}{2}|D\varphi|^2$. We differentiate in the direction of $D\varphi$ to obtain $$\begin{aligned}
\notag 0 & = \sigma^{kl}\varphi_l D_k (\dot{\varphi} - F^{-1})\\
\label{Dphi(eq)} & = \dot{w} + F^{-2}\big( -a^{ij}\sigma^{kl}D_k D_j \varphi_i \varphi_l + a^iD_i w\big),\end{aligned}$$ where $a^{i} \equiv \partial F/ \partial \varphi_i$, having used the following identities: $$\dot{w} = \sigma^{kl}\varphi_l D_k \dot{\varphi},$$ $$D_i w = \sigma^{kl}D_i\varphi_k \varphi_l.$$ Recall from \[commutator form\] that $$\label{curv phi}
D_k D_j \varphi_i = D_k D_i \varphi_j = D_iD_k \varphi_j - \tensor{R}{_i_k_j^m}\varphi_m,$$ where the first equality is on account of the torsion free connection $D$ on $M^n$. We also observe $$\label{D2w}
D_i D_j w = \sigma^{kl}D_iD_j\varphi_k \varphi_l + \sigma^{kl} D_j \varphi_k D_i\varphi_l.$$ Subtracting and adding the quantity $F^{-2}a^{ij}\sigma^{kl} D_j \varphi_k D_i\varphi_l$ (i.e. adding a clever 0) to and plugging in and we arrive at $$\label{wfin}
\dot{w} + F^{-2} \big( -a^{ij} D_iD_j w + a^{ij}\sigma^{kl}D_j\varphi_k D_i \varphi_l + a^{ij}\tensor{R}{_i_k_j_m}\varphi^m \varphi^k + a^i D_i w \big) = 0.$$ For second term inside the parenthesis we have: $$a^{ij}\sigma^{kl}D_j\varphi_{k} D_i \varphi_l \geq 0,$$ since $\{a^{ij}\}$ and $\sigma^{ij}$ are both positive definite and symmetric (see [@ding §4]). Subtracting this term from gives $$\label{wleq}
\dot{w} +F^{-2} \big( -a^{ij}D_i D_j w + a^{ij}\tensor{R}{_i_k_j_m}\varphi^m \varphi^k+ a^iD_iw \big) \leq 0.$$ Now multiply by $e^{\lambda t}$ and define $\tilde{w} = we^{\lambda t},$ to give $$\dot{w}e^{\lambda t} +F^{-2} \big( -a^{ij}D_i D_j \tilde{w} + a^{ij}\tensor{R}{_i_k_j_m}\varphi^k \varphi^m e^{\lambda t} + a^iD_i\tilde{w} \big) \leq 0,$$ and finally add and subtract $\lambda \tilde{w}$, leaving us with $$\dot{\tilde{w}} + F^{-2}\big( -a^{ij} D_i D_j \tilde{w} + a^{ij} \tensor{R}{_i_k_j_m}\varphi^k \varphi^m e^{\lambda t} + a^i D_i \tilde{w} -\lambda F^2 \tilde{w} \big) \leq 0.$$ Let $\mu$ be the smallest eigenvalue of $\{a^{ij}\}$ (in light of Remark \[aij pos def\] we have $\mu >0$ everywhere on $Q_{T^{\star}}$). Then together with $$a^{ij}\tensor{R}{_i_k_j_m} \varphi^k \varphi^m e^{\lambda t}\geq \mu \operatorname{Ric}_M(D\varphi)\operatorname{e}^{\lambda t} \geq \mu \delta |D\varphi|^2_{\sigma}\operatorname{e}^{\lambda t} = 2\mu \delta \tilde{w},$$ for a small $\delta >0$ (the existence of which is ensured in view of and the compactness of $M^n$), which implies $$\dot{\tilde{w}} + F^{-2} \big( -a^{ij}D_i D_j \tilde{w} + a^iD_i \tilde{w}+ (2\mu \delta - \lambda F^2)\tilde{w} \big) \leq 0.$$ Thus, by applying the maximum principle (see [@evans §7.1 Theorem 9]), the lemma is proved with $\lambda \leq 2\mu\delta F^{-2}$.
the evolution equations {#evolution equations}
-----------------------
Assuming short time existence of the flow , we focus our attention on how the solution changes shape, and how the quantities of interest evolve under the flow. The evolution equations are paramount to the study of how solutions behave in the long term, or in the neighbourhood of singularities.\
\
Since we are dealing with quantities that live on the evolving hypersurfaces, we will be covariantly differentiating with respect to the metric $g$ of $M_t$ and the [*pushforward basis*]{} of $T_{x(p)}M_t$, thus with the operator $\nabla$. Furthermore, the derivative with respect to $t$, designated as both $\dot{}$ and ${\frac{\operatorname{d}\! }{\operatorname{d}\! t}}$ interchangeably, now refers to the ambient covariant derivative in the direction of $\nu$ with a magnitude of $H^{-1}$, i.e. $${\frac{\operatorname{d}\! }{\operatorname{d}\! t}} = \frac{D}{\partial t}= H^{-1}{\overline{\nabla}}_{\nu}$$
The unit normal vector $\nu$ evolves according to $$\label{ev eq nu}
\dot{\nu} = H^{-2} \nabla H = H^{-2} H^{;k}x_k$$
Since $\nu$ is of unit length and ${\frac{\operatorname{d}\! }{\operatorname{d}\! t}}$ is compatible with the metric $\gamma$, we have $$\label{d of norm}
0 = {\frac{\operatorname{d}\! }{\operatorname{d}\! t}} 1 = {\frac{\operatorname{d}\! }{\operatorname{d}\! t}} \gamma(\nu, \nu) = 2 \gamma\left( \dot{\nu} , \nu \right),$$ or in other words, $\dot{\nu}$ resides exclusively in $TM_t$. We can express $\dot{\nu}$ thusly $$\label{nu dot}
\dot{\nu} = g^{ik} \gamma(\dot{\nu}, x_i) x_k = - g^{ik} \gamma(\nu, \dot{x}_i) x_k.$$ To see how the basis vector $x_i$ evolves, let $c:(-\epsilon,\epsilon)\longrightarrow M^n$ be a differentiable curve such that $${\frac{\operatorname{d}\!c }{\operatorname{d}\! s}} \bigg|_{s=0} = e_i.$$ Per definition $$x_i = x_{\star} (e_i) = {\frac{\operatorname{d}\! }{\operatorname{d}\! s}} \bigg|_{s=0} x(t,c(s)).$$ This shows subsequently that $$\label{ev basis}
\dot{x}_i = \frac{D}{\partial t} \frac{\partial}{\partial s} \bigg|_{s=0} x(t,c(s)) = \frac{D}{\partial s}\bigg|_{s=0} \frac{\partial}{\partial t} x(t,c(s)) = \frac{D}{\partial s}\bigg|_{s=0} (H^{-1}\nu)_{x(t,c(s))} = (H^{-1}\nu)_i$$ where we have used [@do; @carmo §3 Lemma 3.4] to reverse the order of differentiation (for details, see [@mullins]). Once again using , this time in the second equality of with $\operatorname{d}= ({\overline{\nabla}}_{x_i})^{\top} \equiv \nabla_i$, we conclude $$\dot{\nu} = -g^{ik}\gamma(\nu, (H^{-1}\nu)_i) x_k = -g^{ik} (H^{-1})_i\gamma(\nu,\nu) x_k = H^{-2} g^{ik} H_{;i} x_k.$$
\[commute\] What equation alludes to is the fact that the flows of the vector fields $H^{-1}\nu$ and $x_i$ commute, i.e. $$[H^{-1}\nu, x_{i}] = 0,$$ or equivalently $${\overline{\nabla}}_{H^{-1}\nu}x_i = {\overline{\nabla}}_{x_i}(H^{-1}\nu) \equiv (H^{-1}\nu)_i.$$
The metric $g_{ij}$ evolves according to $$\label{ev eq gij}
\dot{g}_{ij} = 2H^{-1}h_{ij}.$$
Using \[ev basis\], the Weingarten equation \[weingarten eq\], the symmetry of $g_{ij}$ and the fact that $\gamma(\nu,x_i) = 0$ we obtain $$\dot{g}_{ij} = 2\gamma(\dot{x}_i, x_j) = 2\gamma((H^{-1}\nu)_i, x_j) = 2H^{-1}\gamma(\nu_i, x_j) = 2H^{-1}h_{ij}.$$
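As a quick sanity check of \[ev eq gij\] (on the explicit round solution, using nothing beyond the formulas of §\[wp geometry\]): for $u = r_0\operatorname{e}^{t/n}$ one has $g_{ij} = u^2\sigma_{ij}$, $h_{ij} = u\sigma_{ij}$ and $H = n/u$, hence $$\dot{g}_{ij} = \frac{2}{n}u^2\sigma_{ij} = 2\,\frac{u}{n}\,u\sigma_{ij} = 2H^{-1}h_{ij},$$ as predicted.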
The inverse metric fulfills the evolution equation $$\label{ev eq inverse g}
\dot{g}^{ij} = -2H^{-1} h^{ij},$$ where $h^{ij} \equiv g^{ik} g^{jl} h_{kl}$.
Since $\delta^i_j = g^{ik} g_{kj}$ and $0 = \operatorname{d}\! \delta^i_j / \operatorname{d}\! t$, we have $$\dot{g}^{ik}g_{kj} = -g^{ik}\dot{g}_{kj}.$$ Now contracting both sides with $g^{jl}$ and plugging in \[ev eq gij\] for $\dot{g}_{kj}$ we have $$\dot{g}^{il} = -2g^{ik}g^{jl}H^{-1}h_{kj} = -2H^{-1}h^{il}.$$
\[ev eq measure\] The evolution of the induced measure $\operatorname{d}\!\mu$ satisfies $$\label{ev eq g}
{\frac{\operatorname{d}\! }{\operatorname{d}\! t}}(\operatorname{d}\!\mu) = \operatorname{d}\!\mu.$$
In coordinates, $$\operatorname{d}\!\mu = \sqrt{g}\operatorname{d}\!x,$$ where $g = \operatorname{det}g_{ij}$. [*Jacobi’s formula*]{} for taking the derivative of a determinant and the product rule give $${\frac{\operatorname{d}\! }{\operatorname{d}\! t}} \sqrt{g} = \frac{1}{2}( g)^{-\frac{1}{2}} g g^{ij} \frac{\partial g_{ij}}{\partial t} = \sqrt{g}H^{-1} g^{ij}h_{ij} = \sqrt{g}.$$
For a surface evolving according to , as long as the flow exists, the following holds $$|M_t| = |M_0|\operatorname{e}^t,$$ where $M_0$ is the initial surface.
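Indeed, by \[ev eq g\], $$\frac{\operatorname{d}\!}{\operatorname{d}\!t}|M_t| = \int_{M_t}\frac{\operatorname{d}\!}{\operatorname{d}\!t}(\operatorname{d}\!\mu) = |M_t|,$$ and integrating this ODE gives the claim. Note also (anticipating the rescaling of §\[rescaled hypersurfaces\]) that since $\tilde{g}_{ij} = \operatorname{e}^{-2t/n}g_{ij}$, the rescaled surfaces have constant area $|\tilde{M}_t| = \operatorname{e}^{-t}|M_t| = |M_0|$; if they converge to the slice $\{r_{\infty}\}\times M^n$, whose area is $r_{\infty}^n|M^n|$, this is consistent with the value $r_{\infty} = \left[|M_0|/|M^n|\right]^{1/n}$ asserted in the main theorem.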
The second fundamental form evolves according to $$\label{ev eq hij}
\begin{split}
{\frac{\operatorname{d}\!h_{ij} }{\operatorname{d}\! t}} = & -(H^{-1})_{;ij} + H^{-1}\left( h^m_i h_{mj} - {\overline{R}}(\nu, x_i, \nu, x_j)\right)\\
= & -2 H_i H_j H^{-3} + H_{ij} H^{-2} + H^{-1}\left( h^m_i h_{mj} - {\overline{R}}(\nu, x_i, \nu, x_j)\right).
\end{split}$$
We utilise , , Remark \[commute\] and a myriad of additional tricks to derive the following $$\begin{split}
{\frac{\operatorname{d}\!h_{ij} }{\operatorname{d}\! t}} =& - {\frac{\operatorname{d}\! }{\operatorname{d}\! t}} \gamma(x_{ij}, \nu) \\
= & - \gamma({\overline{\nabla}}_{H^{-1}\nu} {\overline{\nabla}}_{x_j} x_i, \nu) - \gamma(x_{ij}, \dot{\nu})\\
= &- \gamma({\overline{\nabla}}_{x_j}{\overline{\nabla}}_{H^{-1}\nu} x_i, \nu) - H^{-1}\gamma({\overline{R}}(x_i,\nu)x_j,\nu)+\gamma({\overline{\nabla}}_{[x_i, H^{-1}\nu]}x_j,\nu)\\
= &-\gamma({\overline{\nabla}}_{x_j}{\overline{\nabla}}_{x_i}(H^{-1}\nu), \nu) - H^{-1}\gamma({\overline{R}}(x_i,\nu)x_j,\nu)\\
= &-(H^{-1})_{ij} - H^{-1}\gamma(\nu,\nu_{ij}) - H^{-1}\gamma({\overline{R}}(x_i,\nu)x_j,\nu)\\
= &-(H^{-1})_{ij} + H^{-1} \left(h_i^k h_{kj} - {\overline{R}}(x_i,\nu,x_j,\nu) \right )
\end{split}$$
The Weingarten map $h^{i}_j$ satisfies $$\begin{split}\label{ev eq weing}
{\frac{\operatorname{d}\!h^i_j }{\operatorname{d}\! t}} = & -(H^{-1})^i_j - H^{-1}\left(h^i_k h^k_j + {\overline{R}}(x_k,\nu,x_j,\nu)g^{ki} \right ) \\
= & -2 H^i H_j H^{-3} + H^i_{j} H^{-2} - H^{-1}\left(h^i_k h^k_j + {\overline{R}}(x_k,\nu,x_j,\nu)g^{ki} \right )
\end{split}$$
Since $$\dot{h}^i_j = {\frac{\operatorname{d}\! }{\operatorname{d}\! t}}(g^{ik}h_{kj}) = \dot{g}^{ik}h_{kj} + g^{ik} \dot{h}_{kj},$$ we can use and to yield the result.
\[ev eq H\] The mean curvature evolves according to $$\begin{split}
{\frac{\operatorname{d}\!H }{\operatorname{d}\! t}} = & -\Delta_{M_t}(H^{-1}) - H^{-1}\left( |A|_g^2 + {\overline{\operatorname{Ric}}}(\nu) \right) \\
= & H^{-2} \Delta_{M_t} H -2H^{-3} |\nabla H|^2_g - H^{-1}\left( |A|_g^2 + {\overline{\operatorname{Ric}}}(\nu) \right).
\end{split}$$
Take the trace of .
By application of the maximum principle in the previous equation, in light of the negative sign on the $|A|^2$ term, we deduce that the mean curvature $H$ is uniformly bounded from above in terms of the ambient Ricci curvature and the initial embedding.
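In slightly more detail (a sketch only; the quantitative bounds we actually use are derived in §\[graphical nature\]): at an interior spatial maximum of $H$ we have $\nabla H = 0$ and $\Delta_{M_t}H \leq 0$, so the evolution equation \[ev eq H\] gives $${\frac{\operatorname{d}\!H }{\operatorname{d}\! t}} \leq -H^{-1}\left( |A|^2_g + {\overline{\operatorname{Ric}}}(\nu)\right) \leq -\frac{H}{n} + \frac{C}{H},$$ using $|A|^2_g \geq H^2/n$ and a bound $C$ on $|{\overline{\operatorname{Ric}}}(\nu)|$ along the flow (available since, by \[u bound\], the flow stays in the region $\{r \geq \inf_M u_0\}$, where the ambient Ricci curvature evaluated on unit vectors is bounded). Hence $\max_{M_t} H$ can never exceed $\max\big(\sup_{M_0}H,\ \sqrt{nC}\,\big)$.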
long term behaviour
===================
In order to show that the flow exists for all times $t > 0$, we need to bound the mean curvature with an exponential factor, $H \operatorname{e}^{\lambda t}$, from above and below by positive constants. In the process, we shall also show that then the surfaces $M_t$ remain as graphs throughout the evolution (see Theorem \[graphical nature\]) by deriving an evolution equation for the quantity $\chi = \gamma(X,\nu)^{-1}$ where $X = u \partial / \partial r$.\
\
In §\[convergence\] we rescale the embedded hypersurfaces with a factor of $\operatorname{e}^{-t/n}$, as already hinted at earlier on.\
\
Throughout this section we shall denote with $c$ and $C$ arbitrary positive constants that can, on occasion, vary from line to line (as contradictory as this sounds).
graphical nature {#graphical nature}
----------------
The quantities that we will be dealing with reside on the evolving surfaces $M_t$ and as such, we will generally be dealing with the basis $\{x_i\}_{i=1,...,n}$ of $TM_t$. Indeed, all covariant derivatives will be carried out with respect to $\nabla(g)$ and indices will be raised and lowered with respect to either $g_{ij}$ or $\gamma_{ab}$. The context should be clear.
Let $\chi = \gamma(X,\nu)^{-1}$ where $X = u \partial / \partial r$. Then
- $$X_{;i} = x_i \equiv u_i e_0 + e_i,$$
- $$\gamma( X_{;ij}, \nu ) = \gamma( x_{ij}, \nu ) = -h_{ij}$$
- $$\chi = vu^{-1}$$
\(i) is derived from $$X_{;i} = X^a_b x^b_i = \left( {\overline{\nabla}}_{u_i e_0 + e_i} (u e_0) \right)^{\top} = u_i e_0 + u\tensor{{\overline{\varGamma}}}{^a_i_0} e_a = u_i e_0 + \delta_i^k e_k = x_i.$$ (ii) follows (i) and , and (iii) is easily seen as a result of $\gamma(\partial / \partial r, \nu) = v^{-1}$.
The following evolution equation controls the graphical nature of the surfaces $M_t$.
\[graphical nature\] Let $\chi \equiv \gamma (X, \nu )^{-1}$ as above. Then $\chi$ satisfies the evolution equation $$\label{graphical nature eq}
\begin{split}
\left( {\frac{\operatorname{d}\! }{\operatorname{d}\! t}} - H^{-2}\Delta_{M_t} \right) \chi = -2\chi^{-1}H^{-2}|\nabla \chi|^2_g - \chi H^{-2} \left( |A|^2_g+ {\overline{\operatorname{Ric}}}(\nu) \right).
\end{split}$$
We begin by differentiating $\chi$ with respect to $t$ $$\dot{\chi} = -\chi^2 \big( H^{-1} + \gamma( X,H^{-2} \nabla H ) \big),$$ where we have used . Next we take first and second derivatives of $\chi$ in the directions $x_i$ and $x_j$ $$\chi_{;i} = -\chi^2 \big( \gamma( x_i, \nu )+ \gamma (X, \nu_i ) \big),$$ and $$\begin{aligned}
\label{chi ij}
\notag \chi_{ij} = & 2\chi^3 \big (\gamma( x_j, \nu )+ \gamma( X, \nu_j ) \big ) \big (\gamma( x_i, \nu )+ \gamma( X, \nu_i ) \big ) \\
\notag &-\chi^2 \big ( \gamma( x_{ij}, \nu ) + \gamma( x_j, \nu_i )+ \gamma( x_i, \nu_j )+ \gamma( X, \nu_{;ij}) \big )\\
\notag = &2\chi^{-1} \chi_i \chi_j - \chi^2 h_{ij} - \chi^2h^k_{i;j} \gamma( X,x_k )+ \chi h^k_i h_{kj}\\
\notag = & 2\chi^{-1} \chi_i \chi_j - \chi^2 h_{ij} + \chi h^k_i h_{kj} \\
& - \chi^2 \big ( h^{;k}_{ij} \gamma( X,x_k ) + {\overline{R}}(\nu, x_i, x_l, x_j) g^{kl}\gamma(X,x_k)\big ),
\end{aligned}$$ where we have used contracted with $g^{kl}$ to get the curvature term. Using the fact that $$g^{kl}\gamma( X, x_k)x_l = X^{\top} = X - \gamma( \nu,X )\nu,$$ we come to the conclusion that $$\begin{aligned}
\label{rn}
\notag {\overline{R}}( \nu, x_i, x_l, x_j) g^{kl}\gamma(X,x_k) = & {\overline{R}}(\nu, x_i, X, x_j) - {\overline{R}}(\nu, x_i, \nu, x_j) \chi^{-1} \\
\notag = &- {\overline{R}}(\nu, x_i, \nu, x_j) \chi^{-1},\end{aligned}$$ since $X = u \partial / \partial r$ and in light of \[curvature\]. Contracting $\chi_{ij}$ with $H^{-2} g^{ij}$ and subtracting it from $\dot{\chi}$ delivers $$\left( {\frac{\operatorname{d}\! }{\operatorname{d}\! t}} - H^{-2}\Delta_{M_t} \right) \chi = -2\chi^{-1}H^{-2}|\nabla \chi|^2_g - \chi H^{-2} \left( |A|^2_g+ {\overline{\operatorname{Ric}}}(\nu) \right),$$ as claimed.
\[H bound\] There exist positive constants $c$ and $C$, depending only on $u_0, Du_0, D^2 u_0$ and $n$ for which during the evolution process up to time $T$, the following holds $$c\leq H\operatorname{e}^{t/n} \leq C.$$
We shall prove only the lower bound here, the upper bound follows analogously with $\inf_M$ replacing $\sup_M$ in the following argument. We compute the evolution of $\chi H^{-1},$ using \[graphical nature eq\] and \[ev eq H\]. $$\begin{split}
{\frac{\operatorname{d}\!\ (\chi H^{-1}) }{\operatorname{d}\! t}} = & \dot{\chi} H^{-1} - H^{-2}\chi \dot{H} \\
= & H^{-3} \Delta \chi - 2 \chi^{-1} H^{-3}|\nabla \chi|^2_g + \chi H^{-2}\Delta(H^{-1})\\
= & H^{-2} \left( H^{-1}\Delta\chi + \chi \Delta(H^{-1}) - 2\chi^{-1}H^{-1} |\nabla \chi|^2_g \right).
\end{split}$$ Let $(t_0,x_0)\in M_t$ be an interior space-time point such that the following holds $$(\chi H^{-1})(t_0,x_0) = \sup_{t\in [0,T)} \sup_{M_t} (\chi H^{-1}).$$ Then at $(t_0,x_0)$, $$0 = (\chi H^{-1})_i = \chi_i H^{-1} + \chi (H^{-1})_i,$$ and also $${\frac{\operatorname{d}\!\ (\chi H^{-1}) }{\operatorname{d}\! t}}(x_0) \geq 0$$ Using these and the fact that $$\Delta(\eta\xi) = \Delta(\eta)\xi + \eta\Delta(\xi) + 2 \nabla \eta \cdot \nabla \xi,$$ for arbitrary real-valued functions $\eta$ and $\xi$, we get $$0 \leq {\frac{\operatorname{d}\!\ (\chi H^{-1}) }{\operatorname{d}\! t}} = H^{-2} \Delta(\chi H^{-1}) \leq 0,$$ since $(t_0,x_0)$ is a maximum. Adding $t\epsilon$ to $\chi H^{-1}$ and letting $\epsilon \searrow 0$ derives a contradiction and gives $$\chi H^{-1} \leq \sup_{M_0} \chi H^{-1}$$ or in other words $$\chi^{-1} H \geq \inf_{M_0} \chi^{-1} H.$$ Since $\chi^{-1} = u/v$ and $v\geq 1$ we have $$\inf_{M_0} \chi^{-1} H \leq \chi^{-1}H = uv^{-1}H \leq uH \leq H c^{\prime} \operatorname{e}^{t/n}$$ where the final inequality comes from . This subsequently offers $$H\operatorname{e}^{t/n} \geq c.$$
Since $[F(D\varphi,D^2\varphi)]^{-1} = \chi H^{-1}$, the upper and lower bound from the previous proof shows that remains uniformly parabolic on $Q_{T^{\star}}$.
The solution of stays graphical as long as the solution exists.
We wish to derive a differential inequality for $\chi$ and apply a quasilinear comparison principle (see [@lieberman Corollary 2.5]) to show that $\chi$ is bounded from above. From \[curvature\] we have $${\overline{\operatorname{Ric}}}(e_0,e_a) = {\overline{R}}_{0a} = {\overline{R}}_{a0} = 0.$$ Together with \[ric m\] and the definition \[norm\] of the outward pointing unit normal $\nu$, we deduce $$\begin{split}
{\overline{\operatorname{Ric}}}(\nu) = & \left[ v^{-1} u^{-2} \right]^2 {\overline{R}}_{ik} u^i u^k \\
= & \left[ uv \right]^{-2}\left( \operatorname{Ric}_M(D\varphi) - (n-1)|D\varphi|^2_{\sigma} \right).
\end{split}$$ Using the prerequisite along with the lower bound on $H$, the positivity of $\chi$ and $|A|^2$, and Lemma \[Dphi exp\] we infer $$\left( {\frac{\operatorname{d}\! }{\operatorname{d}\! t}} - H^{-2}\Delta_{M_t} \right) \chi \leq -2\chi^{-1}H^{-2}|\nabla \chi|^2_g + \chi H^{-2} c \operatorname{e}^{-\lambda t}.$$ In view of Theorem \[H bound\] we have $$H^{-2} \leq C \operatorname{e}^{2t/n}.$$ In order to apply a quasilinear comparison principle (see [@lieberman Corollary 2.5]), we need a function that satisfies $${\frac{\operatorname{d}\! }{\operatorname{d}\! t}}f(t) \geq C \operatorname{e}^{(2/n-\lambda)t}f(t),$$ but since we only want to show the boundedness of $\chi$ and are not interested in a sharp estimate, a solution to the equation $${\frac{\operatorname{d}\! }{\operatorname{d}\! t}}f(t) = C \operatorname{e}^{2t/n}f(t),$$ will suffice. Indeed $f(t)=C\operatorname{e}^{n\operatorname{e}^{2t/n}/2}$ does the job. We define the quasilinear parabolic operator $P$ by $$Pw \equiv \left( {\frac{\operatorname{d}\! }{\operatorname{d}\! t}} - H^{-2}\Delta_{M_t} \right) w + 2w^{-1} H^{-2}|\nabla w|_g^2 - H^{-2}c\operatorname{e}^{-\lambda t} w.$$ Then $$Pf > 0 \geq P\chi.$$ By applying a comparison principle [@lieberman Corollary 2.5], we deduce that $\chi$ is bounded from above on $\bar{Q}_{T^{\star}}$ by $f$ and is thus bounded on compact intervals, proving the claim.
rescaled hypersurfaces {#rescaled hypersurfaces}
----------------------
As alluded to previously, for the later convergence analysis we must rescale the embedded hypersurfaces so that the solutions remain inside a compact set. In doing this, we can investigate the asymptotics without having to appeal to spatial infinity. We use a factor of $\operatorname{e}^{-t/n}$, procured from \[u bound\]. In this section we investigate the a priori effect of rescaling.\
\
Consider the family of rescaled embeddings $\tilde{x}(t,\cdot)$ with $$\tilde{x}(t,p) = (\tilde{u}(t,p(t)),p(t)),$$ where $\tilde{u} = u\operatorname{e}^{-t/n}.$ Using this notation we observe the following rescaled quantities: $$\begin{aligned}
D\tilde{u} = & Du \operatorname{e}^{-t/n} \\
\dot{\tilde{u}} = & \dot{u}\operatorname{e}^{-t/n} - \frac{\tilde{u}}{n}\\
\tilde{\varphi} = & \varphi - \frac{t}{n}\\
\label{resc dphi} D\tilde{\varphi} = & D\varphi\\
\dot{\tilde{\varphi}} = & \dot{\varphi} - \frac{1}{n}.\end{aligned}$$ This shows that the induced metric on the rescaled evolving hypersurfaces satisfies $$\tilde{g}_{ij} = \gamma(\tilde{x}_i, \tilde{x}_j) = \operatorname{e}^{-2t/n} g_{ij},$$ while the inverse metric satisfies $$\tilde{g}^{ij} = \operatorname{e}^{2t/n} g^{ij}.$$ Thus, the rescaled second fundamental form satisfies $$\tilde{h}_{ij} = h_{ij} \operatorname{e}^{-t/n},$$ and the mean curvature as well as the Weingarten map rescale as expected $$\begin{aligned}
\tilde{h}^i_j = & h^i_j \operatorname{e}^{t/n} \\
\tilde{H} = & H\operatorname{e}^{t/n} .\end{aligned}$$ The rescaled hypersurfaces must now solve the equations: for $\tilde{\varphi}$ $$\label{resc eq phi}
\left\{
\begin{array}{l l}
\dot{\tilde{\varphi}} - \left[ F(\cdot,D\varphi, D^2 \varphi)\right]^{-1} +n^{-1} &= 0\\
\tilde{\varphi}_0 &= \varphi(0,\cdot),
\end{array} \right.$$ in light of \[eq phi\], and for $\tilde{u}$ $$\label{resc eq u}
\left\{
\begin{array}{l l}
\dot{\tilde{u}} - v \tilde{H}^{-1} + \tilde{u}n^{-1}& =0\\
\tilde{u}_0 & = \tilde{u}(0,\cdot),
\end{array} \right.$$ Both equations are obviously still uniformly parabolic on $Q_{T^{\star}}$.
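As a minimal worked example of the rescaled flow (the same round model as before): for $u = r_0\operatorname{e}^{t/n}$ we get $\tilde{u}\equiv r_0$, $D\tilde{u}=0$, $v=1$ and $\tilde{H} = H\operatorname{e}^{t/n} = n/r_0$, so that $$\dot{\tilde{u}} - v\tilde{H}^{-1} + \tilde{u}n^{-1} = 0 - \frac{r_0}{n} + \frac{r_0}{n} = 0;$$ the slices $\{r_0\}\times M^n$ are thus fixed points of \[resc eq u\], in keeping with the convergence statement of the main theorem.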
The rescaled quantity $|D\tilde{u}|$ decays exponentially fast to $0$.
From $\varphi =\log u$, we deduce $$|D\tilde{u}|^2 = |D\varphi|^2 \operatorname{e}^{-2t/n} u^2,$$ and using Lemma \[Dphi exp\] we have $$|D\tilde{u}|^2 \leq C \operatorname{e}^{-\lambda t - 2t/n }u^2.$$ Taking the root from both sides gives $$|D\tilde{u}| \leq C \operatorname{e}^{-\lambda t/ 2} u\operatorname{e}^{-t/n} \leq C^{\prime} \operatorname{e}^{-\lambda t /2},$$ where, for the final inequality, we have once again used \[u bound\].
convergence
-----------
Let us summarise the results we have gathered thus far. In §\[posing the problem\] we transformed the initial problem into two versions of a scalar problem, and respectively. In §\[first order estimates\] we proved existence of a solution $\varphi$ to , which is equivalent to existence of a solution $u = \operatorname{e}^{\varphi}$ to , on the parabolic cylinder $Q_{T^{\star}}$ for $0<T^{\star} \leq \infty$.\
\
We derived a priori estimates for $\varphi - t/n (\equiv \tilde{\varphi})$ and $u\operatorname{e}^{-t/n}(\equiv \tilde{u})$ in §\[max and comp principles\]; we then went on to bound $\dot{\varphi}$ from above and below by its initial data, and $|D\varphi|^2_{\sigma}$ from above by an exponentially decaying factor (which is equivalent to $1 < v \leq 1+C\operatorname{e}^{-\lambda t}$).\
\
We showed that the [*starshapedness*]{} of the solution as a hypersurface is maintained throughout the evolution in §\[graphical nature\] and then proved bounds on $H\operatorname{e}^{t/n}( \equiv \tilde{H})$ from above and below. These bounds show that the nonlinear operator $G=F^{-1}= v[uH]^{-1}$ of is uniformly bounded from above and below up to time $T^{\star}$.\
\
With these bounds, we then rescaled the surfaces in §\[rescaled hypersurfaces\] to acquire a solution $\tilde{u}$ to the uniformly parabolic equation which remains inside a compact set during its existence. We also showed that $|D\tilde{u}|$ decays exponentially fast.\
\
Now we wish to achieve higher order estimates for $\varphi$ independent of $t$. The notation we use throughout this section in terms of norms and spaces can be found in [@ger2 §2.5] and also in [@lady].
Let $\varphi$ solve in $Q_{T^{\star}}$. Then for all $\alpha \in (0,1)$, $\delta \in (0,T^{\star})$ and $m\geq 1$, $$\label{phi bound m}
|\varphi|_{m+\alpha,Q_{\delta,T^{\star}}} \leq C,$$
where $C$ does not depend on $t$ and $Q_{\delta,T^{\star}}\equiv [\delta,T^{\star}) \times M^n$.
Since $M^n$ is smooth and $F^{-1}$ is $C^{\infty}$ in its arguments, we can apply [@ger2 Theorem 2.5.10] to our situation, which states that under the above prerequisites, $\varphi$ is of class $H^{2+m+\alpha, \frac{2+m+\alpha}{2}}(Q_{\delta,T^{\star}})$ for $m\geq 1$. We differentiate in the direction of $e_k$, using to get $$0=e_k\left( \dot{\varphi} - F^{-1}\right) = \dot{\varphi}_k + F^{-2}\left\{-a^{ij}D_i D_j\varphi_{k} + a^{ij}[ {^{\sigma}\tensor{R}{_i_k_j^m}} ]\varphi_m + a^i D_i\varphi_k\right\}.$$ Since $a^{ij} = [u/v]^2g^{ij} = v^{-2}(\sigma^{ij}-\varphi^i \varphi^j v^{-2})$, using again the symmetries of $R_{ijkl}$, this transforms into $$0 = \dot{\varphi}_k + F^{-2} \left\{ - a^{ij}D_i D_j \varphi_k+ v^{-2}\operatorname{Ric}_M(D\varphi, e_k) + a^i D_i \varphi_k \right\}$$ In addition to this, we once again differentiate with respect to $t$ (see proof of Lemma \[Dphi exp\]) to achieve $$\begin{split}
\ddot{\varphi} + F^{-2}\left\{-a^{ij} \dot{\varphi_{ij}} + a^{i}\dot{\varphi_{i}}\right \} & = 0\\
\dot{\varphi}_1 + F^{-2} \left\{ - a^{ij}D_i D_j \varphi_1+ v^{-2}\operatorname{Ric}_M(D\varphi, e_1) + a^i D_i \varphi_1 \right\} &=0\\
\vdots \hspace{200pt} \vdots & \\
\dot{\varphi}_n + F^{-2} \left\{ - a^{ij}D_i D_j \varphi_n+ v^{-2}\operatorname{Ric}_M(D\varphi, e_n) + a^i D_i \varphi_n \right\} &=0,
\end{split}$$ which is a system of $(n+1)$ linear parabolic equations of second order for the first derivatives of $\varphi$ on the interval $[\delta,T^{\star})$. We may now apply [@lady Theorem 5.1], which delivers uniform bounds on $|\varphi_k|_{2+\alpha,Q_{\delta,T^{\star}}}$ for $k=1,\dots,n$. Using an induction argument achieves the result.
$|D^2 \varphi|$ decays exponentially fast to $0$.
An interpolation inequality due to Hamilton [@ham Cor. 12.7] reads $$\int |D^2 \varphi|^2 d\mu \leq C \left \{ \int |D^3 \varphi|^2 d\mu \right\}^{\frac{2}{3}} \left\{ \int |D\varphi|^2 d\mu \right\}^{\frac{1}{3}},$$ and in light of and the compactness of $M^n$ we infer $$\int |D^2 \varphi|^2 d\mu \leq C \operatorname{e}^{-\lambda t}.$$
This result implies $$\label{D2 u bound}
|D^2 \tilde{u}| \leq C \operatorname{e}^{-\beta t},$$ for some constant $\beta>0$. Indeed, the identity $$D^2 \varphi = u^{-1}D^2 u - u^{-2}Du \otimes Du,$$ together with our known estimates, provides $$|D^2 \tilde{u}| \leq c u^{-1}|D^2u| \leq c\left( |D^2 \varphi| + |D\varphi|^2 \right) \leq C \operatorname{e}^{-\beta t}.$$
The non-linear parabolic equation has a solution for all times $t>0$.
Arguing as in [@ger2 Lemma 2.6.1], we assume $T^{\star}< \infty$ is the maximal time of existence. Let $m\geq 0$. With the uniform bounds we have obtained for $|\varphi|_{2+m+\alpha,Q_{\delta,T^{\star}}}$, by applying Arzela-Ascoli, we see that there exists a sequence $t_j \rightarrow T^{\star}$ for which $$\varphi(t_j,\cdot) \xrightarrow[j \rightarrow \infty]{}\bar{\varphi},$$ in $C^{2+m,\beta}(M^n)$, for $0<\beta<\alpha$, with $\bar{\varphi}\in C^{2+m,\alpha}(M^n)$. Then choosing $\bar{\varphi}$ as initial value in Theorem \[short time existence\], for which the flow exists at least up to time $\epsilon_1>0$, we deduce that there exists an open set $U(\bar{\varphi}) \subset C^{2+m,\beta}(M^n)$, for whose members the flow also exists up to time $\epsilon_1$. Now choosing $j_0$ so large that $\varphi(t_{j_0}, \cdot) \in U(\bar{\varphi})\cap C^{2+m,\alpha}(M^n)$ and that $t_{j_0}+ \epsilon_1 > T^{\star}$, we extend the maximal time of existence by once again applying the existence Theorem \[short time existence\], this time with initial data $\varphi(t_{j_0},\cdot)$, thus contradicting the maximality of $T^{\star}$. Therefore, a solution exists for all $t>0$.
asymptotics
-----------
From , since  shows that $\tilde{u}$ stays inside a compact set during the evolution, and bearing Lemma \[Dphi exp\] in mind, we conclude that $\tilde{u}$ is asymptotically constant; we shall call this constant $r_{\infty}$. To determine the value of $r_{\infty}$ we will need to make some observations.
The rescaled quantities satisfy
- $$\tilde{g}_{ij}\xrightarrow[t \rightarrow \infty]{} r_{\infty}^2 \sigma_{ij}$$
- $$\tilde{h}^i_j \xrightarrow[t \rightarrow \infty]{} r_{\infty}^{-1} \delta^i_{j}$$
- \[rescaled volume\] $$\label{ev eq tilde gij}
\dot{\tilde{g}}_{ij} = 2\tilde{H}^{-1} \tilde{h}_{ij} - 2n^{-1} \tilde{g}_{ij}$$
- $${\frac{\operatorname{d}\! }{\operatorname{d}\! t}}(\operatorname{d}\! \tilde{\mu})= 0 $$
Using the already procured estimates we conclude for (i) and (ii) $$\tilde{g}_{ij} = \operatorname{e}^{-2t/n}g_{ij} = \tilde{u}^2 (\sigma_{ij} + \varphi_i \varphi_j) \xrightarrow[t \rightarrow \infty]{} r_{\infty}^2 \sigma_{ij},$$ $$\tilde{h}^i_j = [\tilde{u}v]^{-1} (\delta^i_j - \sigma^{ik} \varphi_{kj} + \varphi^{i}\varphi^{k}\varphi_{kj}v^{-2}) \xrightarrow[t \rightarrow \infty]{} r_{\infty}^{-1} \delta^i_{j}.$$ (iii) can be obtained from the evolution equation and the product rule and (iv) is simply a consequence of (iii) and the evolution equation (notice how contracting with $\tilde{g}^{ij}$ gives $0$).
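For the reader's convenience, the computation behind (iv) is the one-line identity $${\frac{\operatorname{d}\! }{\operatorname{d}\! t}}(\operatorname{d}\!\tilde{\mu}) = \frac{1}{2}\tilde{g}^{ij}\dot{\tilde{g}}_{ij}\,\operatorname{d}\!\tilde{\mu} = \left(\tilde{H}^{-1}\tilde{g}^{ij}\tilde{h}_{ij} - n^{-1}\tilde{g}^{ij}\tilde{g}_{ij}\right)\operatorname{d}\!\tilde{\mu} = \left(\tilde{H}^{-1}\tilde{H}-1\right)\operatorname{d}\!\tilde{\mu} = 0.$$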
What Lemma \[rescaled volume\](iii) alludes to is that during the evolution process of the surfaces, the volume stays constant. Hence, $r_{\infty}$ depends only on the initial embedding $u_0$.
$$r_{\infty} = \left[ \frac{|M_0|}{|M^n|}\right]^{1/n}.$$
Equation  implies that the volume element on the level sets of the radial function $r$ in $N^{n+1}$ and the volume element on $M^n$, denoted by $\operatorname{d}\!\mu^{\alpha}$ and $\operatorname{d}\!\mu^{\sigma}$ respectively, are related by $$\operatorname{d}\!\mu^{\alpha} = r^n \operatorname{d}\!\mu^{\sigma},$$ since, in local coordinates, $$\sqrt{\operatorname{det}\{\alpha_{ij}\}} = r^n \sqrt{\operatorname{det}\{\sigma_{ij} \}}.$$ Since the rescaled volume is constant in time by Lemma \[rescaled volume\] and $\tilde{u}$ converges to $r_{\infty}$, we thus have $$|M_0| = \lim_{t\to\infty}|\tilde{M}_t| = r_{\infty}^n |M^n|.$$
L. Caffarelli, L. Nirenberg & J. Spruck [*The Dirichlet problem for nonlinear second order elliptic equations, III; Functions of the eigenvalues of the Hessian*]{}, Acta Math. [**155** ]{} (1985) 261-301
M. do Carmo, [*Riemannian geometry. Translated from Portuguese by Francis Flaherty*]{}, Birkhäuser Boston 1992
M. do Carmo, [*Differential Forms and Applications*]{}, Springer-Verlag Berlin Heidelberg, 1994
J. Clutterbuck, [*Parabolic equations with continuous initial data*]{}, arXiv:math/0504455v1 \[math.AP\], 2004
Qi Ding, [*The inverse mean curvature flow in rotationally symmetric spaces*]{}, Chin. Ann. Math. [**32**]{}B(1) (2011) 27-44
L.C. Evans, [*Partial Differential Equations*]{}, Graduate studies in mathematics, Volume 19, American Mathematical Society, 1998
C. Gerhardt, [*Flow of nonconvex hypersurfaces into spheres* ]{}, J. Differential Geometry [**32**]{} (1990) 299 - 314
C. Gerhardt, [*Curvature Problems*]{}, Series in Geometry and Topology [**39**]{}, International Press 2006
R. Hamilton, [*Three-manifolds with positive Ricci curvature*]{}, J. Differential Geom. [**17**]{}, Number 2 (1982), 255-306
G. Huisken [*Flow by mean curvature of convex surfaces into spheres*]{}, J. Differential Geometry [**20**]{} (1984) 237 - 266
G. Huisken & A. Polden, [*Geometric evolution equations for hypersurfaces*]{}, Calc. of Var. and Geom. Ev. Problems, Lecture Notes in Mathematics [**1713**]{} (1999) 45-84
J. Jost, [*Riemannian geometry and geometric analysis*]{}, Universitext, Springer-Verlag, Berlin, 2002
N.V. Krylov, [*Nonlinear elliptic and parabolic equations of the second order*]{}, Reidel, Dordrecht, 1987
O. A. Ladyzenskaja, V. A. Solonnikov & N. N. Ural'ceva [*Linear and Quasi-linear Equations of Parabolic Type. Translated from the Russian by S. Smith*]{}, Translations of Mathematical Monographs. [**23**]{} American Mathematical Society, Rhode Island, 1968
G.M. Lieberman, [*Second Order Parabolic Differential Equations*]{}, World Scientific, 1996
T. Marquardt, [*The inverse mean curvature flow for hypersurfaces with boundary*]{}, Dissertation, Freie Universit[ä]{}t Berlin 2012
T. Mullins, [*On minimal submanifolds*]{}, Freie Universit[ä]{}t Berlin 2012
P. Petersen, [*Warped products*]{}, Petersen's UCLA webpage, http://www.math.ucla.edu/~petersen/warpedproducts.pdf
R. Penrose, [*The Road to Reality*]{}, Jonathan Cape, London, 2004
J. Simons, [*Minimal varieties in Riemannian manifolds*]{}, Ann. of Math. [**88**]{} (1968) 62 - 105
M. Spivak, [*A Comprehensive Introduction to Differential Geometry, Volume Three*]{}, Publish or Perish Inc., 1990
J.I.E. Urbas, [*On the expansion of starshaped hypersurfaces by symmetric functions of their principal curvatures*]{}, Math. Zeitschrift [**205**]{} (1990) 355-372
R.M. Wald, [*General Relativity*]{}, The University of Chicago Press, Chicago, 1984
T. Mullins, <span style="font-variant:small-caps;">R3S, Fraunhofer IZM, Gustav-Meyer-Allee 25, 13355 Berlin, Germany</span>
<thomas.mullins@tu-berlin.de>
---
author:
- 'K. Vida[^1], L. Kriskovics, K. Oláh'
title: A quest for activity cycles in low mass stars
---
Introduction
============
The 11 year-long activity cycle on the Sun has been known for a long time. On other stars, [@1978ApJ...224..182P] were the first to find long-term brightness changes which were interpreted as starspot cycles. Using long-term photometric measurements, [@2000AAA...356..643O] found that other stars can also have multiple activity cycles – similar to the Gleissberg and Suess cycles on the Sun – and that the lengths of the cycles vary in time.
[@1978ApJ...226..379W] and [@1995ApJ...438..269B] studied the chromospheric activity of F–M-type stars using long-term Ca H&K data from the Mt. Wilson survey, and detected cyclic variations in the H&K flux. Different relations are known between the rotation period, the length of the activity cycle, and other stellar properties. Using the Mt. Wilson survey data, [@1996ApJ...460..848B] found that there is a connection between the cycle length and the rotation period, namely, that faster rotating stars have shorter activity cycles. The authors also gave an explanation for the cycle lengths of different stars using dynamo theory. [@1993ApJ...414L..33S] proposed $(P_\mathrm{cyc}/P_\mathrm{rot})^2$, the square of the ratio of the cycle length and the rotation period, as a quantity to parametrize activity cycles. [@Brandenburg:1998dt] and [@1999ApJ...524..295S] found a dependence between the ratio of the cycle and rotational frequencies, $\omega_\mathrm{cyc}/\Omega$, and the Rossby number. [@2002AN....323..361O] studied photometric data of active stars, and found a correlation between the rotation period and cycle length similar to that of [@1996ApJ...460..848B].
Therefore, if we seek activity cycles, it is effective to monitor fast-rotating objects: cycles may be found after only a few years of monitoring, contrary to the case of stars with longer rotation periods. We present the analysis of four objects, all of them having a rotation/orbital period on the order of 0.5 days: EY Dra, V405And, GSC 3377-0296, and V374 Peg.
EY Dra is a single, active dM1-2 star that was recently analyzed in detail by [@2010AN....331..250V]. The star showed slow evolution on a monthly time scale, and possibly a flip-flop mechanism. The activity cycle presented here was also reported in that paper.
V405And is a grazing eclipsing binary which was first studied in detail by [@1997AAA...326..228C]. They found an orbital period of $P=0.465$ days and a small eclipse in the light curve. The spectral types of the components were also determined: M0V and M5V were found for the primary and the secondary, respectively. Both components showed strong emission in the region of the $H\alpha$ line. [@2009AAA...504.1021V] analyzed the system using $BV(RI)_C$ photometric measurements and spectroscopic data. Strong flares in the light curve and the $H\alpha$ region were observed, and by modeling the system they showed that the radius of the primary was significantly larger than the theoretically predicted value, while the size of the secondary fit the mass-radius relation. This result was later confirmed by [@2011AJ....142..106], who used an independent spectroscopy-based method to determine the size of the components.
GSC 3377-0296 has to date been studied only by [@2007IBVS.5772....1L], who identified the object as a heavily spotted RS CVn system with an orbital period of 0.422 days.
V374 Peg is a single, fully convective M4 dwarf with a mass of $0.28M_\odot$ [@2000AAA...364..217D]. According to [@2006Sci...311..633D] and [@2008MNRAS.384...77M], the star has very weak differential rotation, and a stable, poloidal, axisymmetric magnetic field.
In this paper new, long-term datasets of these objects are presented in the hope of finding activity cycles.
![image](vida_fig1a.eps){width="32.50000%"} ![image](vida_fig1b.eps){width="32.50000%"} ![image](vida_fig1c.eps){width="32.50000%"}
![image](vida_fig1d.eps){width="32.50000%"} ![image](vida_fig1e.eps){width="32.50000%"} ![image](vida_fig1f.eps){width="32.50000%"}
![image](vida_fig1g.eps){width="32.50000%"} ![image](vida_fig1h.eps){width="32.50000%"} ![image](vida_fig1i.eps){width="32.50000%"}
Observations
============
Data of EY Dra were obtained using the 60cm telescope of the Konkoly Observatory at Svábhegy, Budapest, equipped with a Wright Instruments $750\times1100$ CCD camera. V405And was observed both with the 60cm telescope and at the Piszkéstető Mountain Station of the Konkoly Observatory, using the 1m RCC telescope and a Princeton Instruments $1300\times1300$ CCD. The other targets were monitored only with the 1m RCC telescope. The observations presented in this paper were obtained using a Johnson $V$ filter. The datasets cover time ranges of 3–5 years (the exact values are shown in Table \[tab:table\]). Data reduction was done using standard IRAF[^2] procedures. Differential aperture photometry was done using the DAOPHOT package.
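For readers unfamiliar with the procedure, the following Python sketch is a simplified, illustrative stand-in for the IRAF/DAOPHOT pipeline actually used (the aperture and annulus radii below are arbitrary placeholder values): it shows what differential aperture photometry amounts to on a single background-subtracted frame.

```python
import numpy as np

def aperture_flux(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
    """Counts inside a circular aperture minus the local sky (annulus median)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)
    sky = np.median(image[(r >= r_in) & (r < r_out)])
    aperture = r < r_ap
    return image[aperture].sum() - sky * aperture.sum()

def differential_magnitude(image, target_xy, comparison_xy):
    """Instrumental magnitude of the variable relative to a comparison star."""
    f_var = aperture_flux(image, *target_xy)
    f_cmp = aperture_flux(image, *comparison_xy)
    return -2.5 * np.log10(f_var / f_cmp)
```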
Analysis & discussion
=====================
![*From top to bottom:* Typical phased light curves of EY Dra, V405And, GSC 3377-0296, and V374 Peg.[]{data-label="fig:phasedlc"}](vida_fig2.eps){width="48.00000%"}
![*From top to bottom:* $V$ light curve, Fourier-spectrum and spectral window of V374 Peg. Note, that the top plot is truncated, the eruptions caused by strong flares cannot be seen.[]{data-label="fig:v374peg"}](vida_fig3a.eps "fig:"){width="45.00000%"} ![*From top to bottom:* $V$ light curve, Fourier-spectrum and spectral window of V374 Peg. Note, that the top plot is truncated, the eruptions caused by strong flares cannot be seen.[]{data-label="fig:v374peg"}](vida_fig3b.eps "fig:"){width="45.00000%"}
![Rotation period–activity cycle relation, based on [@2002AN....323..361O]. Different colors denote data from different surveys. The line shows the fit to the shortest cycles.[]{data-label="fig:2002AN....323..361O"}](vida_fig4.eps){width="48.00000%"}
|                        | EY Dra      | V405 And    | GSC 3377-0296 | V374 Peg |
|------------------------|-------------|-------------|---------------|----------|
| Sp. type               | dM1-2       | dM0+dM5     | K3?           | dM4      |
| Binarity               | –           | +           | +             | –        |
| Mass ($M_\odot$)       | 0.5         | 0.49+0.21   | ?             | 0.28     |
| $P_\mathrm{rot}$ (d)   | 0.459       | 0.465       | 0.422         | 0.445    |
| $P_\mathrm{cyc}$ (d)   | 348         | 305         | 530           | –        |
| Significance           | 5.6$\sigma$ | 6.1$\sigma$ | 8.4$\sigma$   | –        |
| LC length (d)          | 1176        | 1391        | 1446          | 1112     |
: The basic properties of the objects, the lengths of the detected activity cycles, their significance, and the total length of the observed light curves.
\[tab:table\]
The Fourier analysis of the observed light curves was done using the MUFRAN software [@1990KOTN....1....1K]. The Fourier spectra of the data with the spectral windows of EY Dra, V405And, and GSC 3377-0296 are plotted in Fig. \[fig:figure\]. Before determining the length of the starspot cycle, the light curves were first pre-whitened with the rotational modulation. The main features in all Fourier spectra are the rotational modulation and a signal at twice the rotational frequency (and their aliases), as all targets typically have two dominant active regions on their surface (see Fig. \[fig:phasedlc\] with the phased light curves).
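The procedure can be illustrated with a short, schematic Python sketch (MUFRAN itself was used for the actual analysis; the frequency ranges below are only indicative): the rotational signal and its first harmonic are removed by linear least squares, and the residuals are then scanned at low frequencies for a cycle-like signal.

```python
import numpy as np

def prewhiten(t, mag, f_rot, n_harm=2):
    """Subtract the best-fitting sinusoids at f_rot, 2*f_rot, ... from the light curve."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.sin(2 * np.pi * k * f_rot * t), np.cos(2 * np.pi * k * f_rot * t)]
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, mag, rcond=None)
    return mag - design @ coeffs

def amplitude_spectrum(t, mag, freqs):
    """Least-squares amplitude of a sinusoid at each trial frequency."""
    amps = []
    for f in freqs:
        design = np.column_stack([np.ones_like(t),
                                  np.sin(2 * np.pi * f * t),
                                  np.cos(2 * np.pi * f * t)])
        c, *_ = np.linalg.lstsq(design, mag, rcond=None)
        amps.append(np.hypot(c[1], c[2]))
    return np.array(amps)

# e.g. residuals = prewhiten(t, mag, f_rot=1 / 0.45)            # t in days
#      spectrum  = amplitude_spectrum(t, residuals, np.linspace(1 / 1000, 1 / 100, 2000))
```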
The Fourier spectrum of V374 Peg does not indicate any cyclic long-term variations (see Fig. \[fig:v374peg\]). The other three objects, EY Dra, V405And, and GSC 3377-0296, all show a starspot cycle on the order of 300-500 days. The analysis of EY Dra by [@2010AN....331..250V] showed that this object probably possesses an activity cycle, but the cycles of V405And and GSC 3377-0296 were not known before. The actual cycle lengths with the basic stellar parameters are summarized in Table \[tab:table\]. The statistical significances of the detected cycles were estimated from the standard deviation of the Fourier spectra after being pre-whitened by the rotational signal.
Based on their masses, both EY Dra and the primary of V405And are above the limit of full convection (about $0.3M_\odot$, see e.g. @2001ApJ...559..353M). Thus, they probably have an interior structure similar to that of the Sun, and a dynamo capable of hosting an activity cycle.
The changes in the light curve of V405And beyond the proximity effects are dominated by the spottedness of the primary component, and not by the changes on the secondary. Concerning the $H\alpha$ spectra, the secondary component is active, too, but because of the intensity difference between the two components these changes are too small to observe. This means that the activity cycle is most probably associated with the primary component.
Currently, without any binary models, we do not know much about the structure of the GSC 3377-0296 components. However, the fact that we detected a stellar activity cycle indicates that at least one of the components should have a solar-like dynamo, and thus a radiative core and a convective envelope: according to [@Rudiger:2003gu], cyclic behavior is an “exotic exception” among the solutions of $\alpha^2$ dynamos, and they conclude that cyclic stellar activity can always be taken as an indicator of strong internal differential rotation.
[@2005AN....326..265K] and [@2006AAA...446.1027C] showed that fully convective stars are capable of hosting a large-scale, non-axisymmetric magnetic field with an $\alpha^2$ dynamo, if they have weak differential rotation. On the other hand, the model of [@2006ApJ...638..336D] suggests that these stars can have an axisymmetric magnetic field, supposing they have strong differential rotation. V374 Peg seems to have an axisymmetric magnetic field, and it is rotating almost as a rigid body (see @2006Sci...311..633D [@2008MNRAS.384...77M]), posing an interesting question to theoreticians. The fact that we did not find any sign of an activity cycle is consistent with the properties of an $\alpha^2$ dynamo (see @Rudiger:2003gu). MHD simulations of [@Browning:2008dn] showed that fully convective stars can have strong magnetic fields as a result of convective flows acting as a magnetic dynamo. Their models indicate no differential rotation, consistent with the observations of V374 Peg. [@Gastine:2012da] suggested an interesting resolution for the problem of the different magnetic field topologies. They showed that below a critical value of the Rossby number ($\mathrm{Ro}\approx0.1$) a bistable region exists, where both dipolar and multipolar fields can be generated, depending on the initial magnetic seed. These two branches would also manifest in the scale of the differential rotation: stars with axisymmetric fields should have weak differential rotation, as seen in V374 Peg.
The detected starspot cycles can be plotted together with the data from [@2002AN....323..361O] (see Fig. \[fig:2002AN....323..361O\]). The lengths of the activity cycles of our targets seem to be somewhat shorter than the previous findings, but fit the relation quite well. With our new data, we get $$\log (P_\mathrm{cyc}/P_\mathrm{rot} )=0.73 \log(1/P_\mathrm{rot})+2.89$$ for the correlation between the rotation period and the shortest activity cycle length for the sample of both single and binary stars from different surveys.
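As a simple numerical illustration of the fitted relation (the input period below is only a representative value for our targets):

```python
import numpy as np

def predicted_cycle_length(p_rot):
    """Shortest expected cycle length (days) from the fit above, for P_rot in days."""
    return p_rot * 10 ** (0.73 * np.log10(1.0 / p_rot) + 2.89)

print(predicted_cycle_length(0.45))   # ~6e2 days, the same order of magnitude as the 300-500 d cycles found here
```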
Such short cycles – on the time scale of one year – are not unprecedented, although they are not frequently detected, probably because of the long-term observations needed, densely covering as much of each observing season as possible. E.g. [@2010ApJ...723L.213M] detected an activity cycle of 1.6 years on the F8V-type $\iota$ Horologii, which is already of the same order of magnitude as those seen in our targets. The cycles found on the stars presented in this paper, with lengths on the order of one year, are the shortest ones known up to now.
Summary
=======
- We present long-term photometric measurements in the $V$ passband of four stars: EY Dra, V405And, GSC 3377-0296, and V374 Peg.
- The Fourier spectrum shows signs of long-term periodic changes in the case of EY Dra, V405And, and GSC 3377-0296, with cycle lengths of about 300–500 days, the shortest cycles known to date.
- The Fourier-spectrum of V374 Peg does not show any long-term cyclic behavior, which is consistent with the properties of an $\alpha^2$ dynamo.
We would like to thank the referee for his helpful comments. This project has been supported by the OTKA grant K-81421, the Lendület-2009, the Lendület-2012 Young Researchers’ Program of the Hungarian Academy of Sciences, and the HUMAN MB08C 81013 grant of the MAG Zrt.
Baliunas, S.L., et al., Astrophysical Journal, 1995, 438, 269–287
Baliunas, S.L., Nesme-Ribes, E., Sokoloff, D., Soon, W.H., Astrophysical Journal, 1996, 460, 848
Brandenburg, A., Saar, S.H., Turpin, C.R., The Astrophysical Journal, 1998, 498, L51–L54
Browning, M.K., The Astrophysical Journal, 2008, 676, 1262–1280
Chevalier, C., & Ilovaisky, S. A. 1997, Astronomy and Astrophysics, 326, 228
Chabrier, G., Küker, M. 2006, Astronomy and Astrophysics, 446, 1027
Delfosse, X., Forveille, T., S[é]{}gransan, D., Beuzit, J.-L., Udry, S., Perrier, C., & Mayor, M. 2000, Astronomy and Astrophysics, 364, 217
Dobler, W., Stix, M., & Brandenburg, A. 2006, The Astrophysical Journal, 638, 336
Donati, J.-F., Forveille, T., Collier Cameron, A., Barnes, J. R., Delfosse, X., Jardine, M. M., & Valenti, J. A. 2006, Science, 311, 633
Gastine, T. et al., Astronomy and Astrophysics, 2012, 549, L5
Kolláth Z., 1990, *The program package MUFRAN*, Occasional Technical Notes of Konkoly Observatory, No. 1 ([www.konkoly.hu/Mitteilungen/Mitteilungen.html\#TechNotes](www.konkoly.hu/Mitteilungen/Mitteilungen.html#TechNotes))
K[ü]{}ker, M., Rüdiger, G. 2005, Astronomische Nachrichten, 326, 265
Lloyd, C., Bernhard, K., & Monninger, G. 2007, Information Bulletin on Variable Stars, 5772, 1
Metcalfe, T.S. et al., The Astrophysical Journal Letters, 2010, 723, L213–L217
Morin, J., et al. 2008, Monthly Notices of the Royal Astronomical Society, 384, 77
Mullan, D. J., & MacDonald, J. 2001, The Astrophysical Journal, 559, 353
Ol[á]{}h, K., Koll[á]{}th, Z., & Strassmeier, K. G. 2000, Astronomy and Astrophysics, 356, 643
Ol[á]{}h, K., & Strassmeier, K. G. 2002, Astronomische Nachrichten, 323, 361
Phillips, M. J., & Hartmann, L. 1978, The Astrophysical Journal, 224, 182
Ribeiro, T., Baptista, R., & Kafka, S. 2011, The Astronomical Journal, 142, 106
Rüdiger, G., Elstner, D., Ossendrijver, M., Proceedings of the International Astronomical Union, 2003, 406, 15–21
Saar, S.H., Brandenburg, A., The Astrophysical Journal, 1999, 524, 295–310
Soon, W.H., Baliunas, S.L., Zhang, Q., Astrophysical Journal, 1993, 414, L33–L36
Vida, K., Ol[á]{}h, K., K[ő]{}v[á]{}ri, Zs., Korhonen, H., Bartus, J., Hurta, Zs., & Posztob[á]{}nyi, K. 2009, , 504, 1021
Vida, K., et al. 2010, Astronomische Nachrichten, 331, 250
Wilson, O.C., Astrophysical Journal, 1978, 226, 379–396
[^1]: Corresponding author:
[^2]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
---
abstract: |
A new generalization of Pascal’s triangle, the so-called hyperbolic Pascal triangles were introduced in [@BNSz]. The mathematical background goes back to the regular mosaics in the hyperbolic plane. The alternating sum of elements in the rows was given in the special case $\{4,5\}$ of the hyperbolic Pascal triangles. In this article, we determine the alternating sum generally in the hyperbolic Pascal triangle corresponding to $\{4,q\}$ with $q\ge5$.\
[*Key Words: Pascal triangle, hyperbolic Pascal triangle, alternating sum.*]{}\
[*MSC code: 11B99, 05A10.*]{}
author:
- 'László Németh[^1], László Szalay[^2] [^3]'
title: '**Alternating sums in hyperbolic Pascal triangles** '
---
Introduction {#sec:introduction}
============
In the hyperbolic plane there are infinitely many types of regular mosaics (see, for example, [@C]); they are denoted by the Schläfli symbol $\{p,q\}$, where $(p-2)(q-2)>4$. Each regular mosaic induces a so-called hyperbolic Pascal triangle (see [@BNSz]), following and generalizing the connection between the classical Pascal's triangle and the Euclidean regular square mosaic $\{4,4\}$. For more details see [@BNSz], but here we also collect some necessary information.
There are several approaches to generalizing Pascal's arithmetic triangle (see, for instance, [@BSz]). The hyperbolic Pascal triangle based on the mosaic $\{p,q\}$ can be depicted as a digraph, where the vertices and the edges are the vertices and the edges of a well-defined part of the lattice $\{p,q\}$, respectively; further, each vertex possesses a value giving the number of different shortest paths from the base vertex. Figure \[fig:Pascal\_46\_layer5\] illustrates the hyperbolic Pascal triangle when $\{p,q\}=\{4,6\}$. Generally, for $\{4,q\}$ the base vertex has two edges, the leftmost and the rightmost vertices have three, and the others have $q$ edges. The square-shaped cells surrounded by appropriate edges correspond to the regular squares in the mosaic. Apart from the winger elements, certain vertices (called “Type A” for convenience) have two ascendants and $q-2$ descendants, while the others (“Type B”) have one ascendant and $q-1$ descendants. In the figures we denote the vertices of type $A$ by red circles and the vertices of type $B$ by cyan diamonds, and the wingers by white diamonds. The vertices which are at a distance of $n$ edges from the base vertex are in row $n$.
The general method of drawing is the following. Going along the vertices of the $j^{th}$ row, according to the type of the elements (winger, $A$, $B$), we draw the appropriate number of edges downwards (2, $q-2$, $q-1$, respectively). Neighbouring edges of two neighbouring vertices of the $j^{th}$ row meet in the $(j+1)^{th}$ row, constructing a vertex of type $A$. The other descendants of row $j$ in row $j+1$ have type $B$. In the sequel, $\binomh{n}{k}$ denotes the $k^\text{th}$ element in row $n$, which is either the sum of the values of its two ascendants or the value of its unique ascendant. We note that the hyperbolic Pascal triangle has the property of vertical symmetry.
It is well-known that the alternating sum $\sum_{i=0}^{n}(-1)^i\,{n\choose i}$ of row $n$ in the classical Pascal's triangle is zero ($n\geq1$). In [@BNSz] we showed that the alternating sum $\sum_{i}(-1)^i\,\binomh{n}{i}$ is either 0 (if $n\equiv1\;(\bmod\ 3)$) or 2 (otherwise, with $n\ge5$) in the case $\{4,5\}$. In this paper, we determine an explicit form of the alternating sums generally for $\{4,q\}$ $(q\geq5)$. If one considers the result with $q=4$, it returns 0, in accordance with the classical Pascal's triangle. The definitions, the signs, the figures, and the method strictly follow the article [@BNSz].
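The construction described above is easy to implement; the short Python sketch below (given purely as an illustration of the drawing rule) generates the rows of the hyperbolic Pascal triangle for $\{4,q\}$ and evaluates the row lengths and alternating sums, reproducing, for instance, the $\{4,5\}$ values recalled above.

```python
def next_row(row, q):
    """One step of the drawing rule; row is a list of (value, kind) pairs,
    kind being 'W' (winger), 'A' or 'B'."""
    new = [(1, 'W')]
    for i in range(len(row) - 1):
        new.append((row[i][0] + row[i + 1][0], 'A'))   # neighbouring vertices meet in a type-A vertex
        val, kind = row[i + 1]
        if i + 1 < len(row) - 1:                       # inner vertices also get their own type-B children
            new.extend([(val, 'B')] * (q - 4 if kind == 'A' else q - 3))
    new.append((1, 'W'))
    return new

def alternating_sum(row):
    return sum((-1) ** i * value for i, (value, _) in enumerate(row))

q, row = 5, [(1, 'W'), (1, 'W')]                       # row 1 of the triangle {4,5}
for n in range(2, 12):
    row = next_row(row, q)
    print(n, len(row), alternating_sum(row))           # row lengths follow s_n; sums are 0 or 2 for n >= 5
```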
![Hyperbolic Pascal triangle linked to $\{4,6\}$ up to row 5[]{data-label="fig:Pascal_46_layer5"}](Pascal_46_layer5.pdf){width="0.99\linewidth"}
Main Theorems
=============
From this point we consider the hyperbolic Pascal triangle based on the mosaic $\{4,q\}$ with $q\geq5$. Denote by $s_n$ and $\hat{s}_n$ the number of the vertices, and the sum of the elements in row $n$, respectively.
The sequences $\{s_n\}$ and $\{\hat{s}_n\}$ can be given (see again [@BNSz]) by the ternary homogeneous recurrence relations $$\label{eq:recsn}
s_n=(q-1)s_{n-1}-(q-1)s_{n-2}+s_{n-3}\qquad (n\ge4),$$ (the initial values are $ s_1=2,\;s_2=3,\;s_3=q$) and $$\hat{s}_n=q\hat{s}_{n-1}-(q+1)\hat{s}_{n-2}+2\hat{s}_{n-3}\qquad (n\ge4),$$ (the initial values are $\hat{s}_1=2,\;\hat{s}_2=4,\;\hat{s}_3=2q$), respectively.
Let $\widetilde{s}_{n}$ be the alternating sum of the elements of the hyperbolic Pascal triangle (starting with a positive coefficient) in row $n$; we distinguish the cases of even and odd $q$.
\[th:even\] Let $q$ be even. Then $$\widetilde{s}_n=\sum_{i=0}^{s_n-1}(-1)^i\,\binomhh{n}{i}=
\left\{
\begin{array}{clll}
0, & {\rm if} & n=2t+1, & n\ge1,\\
-2(5-q)^{t-1}+2, & {\rm if} & n=2t, & n\ge2,
\end{array}
\right.$$ hold, further $\widetilde{s}_0=1$.
In case of $q=6$ we deduce $$\widetilde{s}_n=
\left\{
\begin{array}{clll}
0, & {\rm if} & n\ne 4t, & n\ge1,\\
4, & {\rm if} & n=4t, & n\ge4.
\end{array}
\right.$$
For $q=4$ Theorem \[th:even\] would return 0, providing the known result $\widetilde{s}_n=0$ for the original Pascal's triangle.
\[th:odd\] Let $q\ge5$ be odd. Then $\widetilde{s}_0=1$, further $$\widetilde{s}_n=\sum_{i=0}^{s_n-1}(-1)^i\,\binomhh{n}{i}=
\left\{
\begin{array}{clll}
0, & {\rm if} & n=3t+1,& n\ge1 ,\\
\phantom{2}(-2)^t(q-5)^{t-1}+2, & {\rm if} & n=3t-1,& n\ge n_1,\\
2(-2)^t(q-5)^{t-1}+2, & {\rm if} & n=3t,& n\ge n_2,
\end{array}
\right.$$ where $(n_1,n_2)=(2,3)$ and $(5,6)$ if $n>5$ and $n=5$, respectively. In the latter case $\widetilde{s}_2=0$, $\widetilde{s}_3=-2$. $$%\widetilde{s}_n=\sum_{i=0}^{s_n-1}(-1)^i\,\binomhh{n}{i}=
%\left\{
%\begin{array}{lll}
%0, & {\rm if} & n=3t+1, \; n\ge1,\\
%2, & {\rm if} & n\ne3t+1, \;n\ge5,
%\end{array}
%\right.$$
We note that, with the help of $\hat{s}_n$ and $\widetilde{s}_n$, we can easily determine the alternating sum with arbitrary weights $v$ and $w$.
$$\begin{aligned}
\widetilde{s}_{(v,w),n}&=&\sum_{i=0}^{s_n-1}\left(v{\delta_{0, \,i\bmod2}}+w{\delta_{1,\, i\bmod2}}\right)\binomhh{n}{i}=\frac{\hat{s}_n+\widetilde{s}_n}{2}v+\frac{\hat{s}_n-\widetilde{s}_n}{2}w\\
&=&
\frac{v+w}{2}\hat{s}_n+\frac{v-w}{2}\widetilde{s}_n,\end{aligned}$$
where $\delta_{j,i}$ is the Kronecker delta.
Proofs of Theorems
==================
Since the hyperbolic Pascal triangle has a symmetry axis, if $s_n$ is even then the alternating sum is zero. In the case when $s_n$ is odd (in the sequel, we assume it), the base of both proofs is to consider the vertices type $A$ and $B$ of row $n$ and to observe their influence on $\widetilde{s}_{n+2}$ or $\widetilde{s}_{n+3}$. We separate the contribution of each $\binomh{n}{i}$ individually, and then take their superposition.
Let $\widetilde{s}_n^{(A)}$ and $\widetilde{s}_n^{(B)}$ be the subsum of $\widetilde{s}_n$ restricted only to the elements of type $A$ and $B$, respectively. As $\binomh{n}{0}=\binomh{n}{s_n-1}=1$, then $$\label{eq:snAB}
\widetilde{s}_{n}=\widetilde{s}_n^{(A)}+\widetilde{s}_n^{(B)}+2.$$
Using the notation of [@BNSz], $x_A$ and $x_B$ denote the value of an element of type $A$ and $B$, respectively. (In the figures, we indicate them shortly by $x$.) Their contributions to $\widetilde{s}_{n+k}$ $(k\geq1)$ are denoted by ${\cal H}_k(x_A)$ and ${\cal H}_k(x_B)$, respectively; for example, ${\cal H}_k^{(A)}(x_A)$ and ${\cal H}_k^{(B)}(x_A)$ denote the contribution of $x_A$ from row $n$ to the alternating sum of row $n+k$ restricted to the elements of type $A$ and $B$, respectively.
\label{eq:inf}
{\cal H}_k(x_A)&=&{\cal H}_k^{(A)}(x_A)+{\cal H}_k^{(B)}(x_A),\\
{\cal H}_k(x_B)&=&{\cal H}_k^{(A)}(x_B)+{\cal H}_k^{(B)}(x_B),\\
{\cal H}_k(1)&=&{\cal H}_k^{(A)}(1)+{\cal H}_k^{(B)}(1)+1.\end{aligned}$$
Clearly, $\widetilde{s}_{0}=1$ and $\widetilde{s}_{2}=0$ hold for all $q\geq5$.
Proof for even $q$
------------------
If $n=2t+1$ $(n\ge1)$, then in accordance with relation $s_n$ is even, otherwise odd. So, we may suppose that $n$ is also even.
![The influence ${\cal H}_2(x_A)$ and ${\cal H}_2(x_B)$[]{data-label="fig:4q_even_HAB2"}](Pascal_4q_even_HAB2.pdf){width="0.99\linewidth"}
Figure \[fig:4q\_even\_HAB2\] shows the contributions of $x_A$ and $x_B$ from row $n$ to the alternating sum of row $n+2$. By the growing method of the hyperbolic Pascal triangle, a vertex of type $A$ in row $i$ generates $q-4$ vertices of type $B$ in row $i+1$; if the vertex is of type $B$, then it has $q-3$ generated vertices of type $B$ in row $i+1$. The value of a vertex is either the value of its ascendant or the sum of the values of its two ascendants. We consider the value $2x_A$ (and $2x_B$) of the vertices of type $A$ in row $n+2$ as $x_A+x_A$ (and $x_B+x_B$). In Figure \[fig:4q\_even\_HAB2\] we drew the values of the alternating sums belonging to $x$ (in row $n+2$) in blocks. The last but one row shows the values ${\cal H}_2^{(A)}(x_A)$ (or ${\cal H}_2^{(A)}(x_B)$), and the last row shows ${\cal H}_2^{(B)}(x_A)$ (or ${\cal H}_2^{(B)}(x_B)$) in blocks.
Put $\ve=\pm1$ and $\delta=\pm1$ in case of the vertex type $A$ and $B$ is being considered, respectively. (In Figure \[fig:4q\_even\_HAB2\], they are $+1$, it is the first sign in row $n+2$.) Now we obtain, by observing Figure \[fig:4q\_even\_HAB2\], that $$\begin{aligned}
{\cal H}_2^{(A)}(x_A) &=& \ve\cdot(-2x(q-4))= -2(q-4)\ve x_A,\\
{\cal H}_2^{(B)}(x_A) &=& \ve\cdot x(q-4)= (q-4)\ve x_A,\end{aligned}$$ and $$\begin{aligned}
{\cal H}_2^{(A)}(x_B) &=& \delta\cdot (-2x(q-3))= -2(q-3)\delta x_B,\\
{\cal H}_2^{(B)}(x_B) &=& \delta\cdot x(q-3) =(q-3)\delta x_B.\end{aligned}$$
Figure \[fig:4q\_even\_H12\] shows the influence of the left winger element of row $n$ on row $n+2$. For the right winger element the situation is the same, thanks to the vertical symmetry. Thus $$\begin{aligned}
{\cal H}_2^{(A)}(1) &=& -1,\\
{\cal H}_2^{(B)}(1) &=& 0.\end{aligned}$$
![The influence ${\cal H}_2(1)$[]{data-label="fig:4q_even_H12"}](Pascal_4q_even_H12.pdf){width="5cm"}
We have given the influence of an element located in row $n$ on row $n+2$. Let us suppose that $y$ is the value of the neighbouring element of $x$ in row $n$. Clearly, $y$ also has an influence on row $n+1$. The signs of $x$ and $y$ in the alternating sum in row $n$ are different, and the signs of the left-hand sides of their influence structures are also different in row $n+2$. In the figures, the leftmost elements of the influence structures are highlighted. (The signs of the rightmost values of the influence structures are also different.) The situation is the same in the case of the winger elements. Thus, according to [@BNSz], we can describe the change of the alternating sums from row $n$ to row $n+2$.
Summarising the results, we obtain the system of recurrence equations $$\begin{aligned}
\widetilde{s}_{n+2}^{(A)}&=&-2(q-4)\widetilde{s}_n^{(A)}-2(q-3)\widetilde{s}_n^{(B)}-2,\qquad (n\ge0),\label{eq:even_sA}\\
\widetilde{s}_{n+2}^{(B)}&=&\phantom{-2}(q-4)\widetilde{s}_n^{(A)}+\phantom{2}(q-3)\widetilde{s}_n^{(B)},\phantom{+1}\,\, \qquad (n\ge0).\label{eq:even_sB}\end{aligned}$$
Now we apply the following lemma (see [@BNSz]).
\[lemma:2seq\] Let $x_0$, $y_0$, further $u_i$, $v_i$ and $w_i$ ($i=1,2$) be complex numbers such that $u_2v_1\ne0$. Assume that for $n\ge n_0$ the terms of the sequences $\{x_n\}$ and $\{y_n\}$ satisfy $$\begin{aligned}
%\label{mix}
x_{n+1}&=&u_1x_n+v_1y_n+w_1,\\
y_{n+1}&=&u_2x_n+v_2y_n+w_2.\end{aligned}$$ Then for both sequences $$\label{nomix}
z_{n+3}=(u_1+v_2+1)z_{n+2}+(-u_1v_2+u_2v_1-u_1-v_2)z_{n+1}+(u_1v_2-u_2v_1)z_n$$ holds ($n\ge n_0$).
Thus we obtain $\widetilde{s}_{n+6}^{(A)}=(6-q)\widetilde{s}_{n+4}^{(A)}+(q-5)\widetilde{s}_{n+2}^{(A)}$ and $\widetilde{s}_{n+6}^{(B)}=(6-q)\widetilde{s}_{n+4}^{(B)}+(q-5)\widetilde{s}_{n+2}^{(B)}$. Obviously, $\widetilde{s}_{2}^{(A)}=-2$ and $\widetilde{s}_{2}^{(B)}=0$ hold. Using  and  we gain $\widetilde{s}_{4}^{(A)}=4q-18$ and $\widetilde{s}_{4}^{(B)}=-2(q-4)$.
From we conclude $$\label{eq:even_recu}
\widetilde{s}_{n+6}=(6-q)\widetilde{s}_{n+4}+(q-5)\widetilde{s}_{n+2},\qquad (n\ge0),$$ where $\widetilde{s}_{0}=1$, $\widetilde{s}_{2}=0$ and $\widetilde{s}_{4}=2(q-4)$.
The characteristic equation of is $$\widetilde{p}(x)=x^4+(q-6)x^2-(q-5)=\left(x^2-(5-q)\right)(x^2-1).$$ Further, we have $\widetilde{s}_{2t}=\alpha(5-q)^t+\beta$, and from $\widetilde{s}_2$ and $\widetilde{s}_4$ we realize $\alpha=-2/(5-q)$, $\beta=2$, and then $\widetilde{s}_{2t}=(-2/(5-q))(5-q)^t+2=-2(5-q)^{t-1}+2$.
For all $n\geq1$, $\widetilde{s}_n = -\widetilde{s}_{n}^{(B)}$, because $\widetilde{s}_i = -\widetilde{s}_{i}^{(B)}=0$ for $i=1,2,3$ and $\widetilde{s}_4 = -\widetilde{s}_{4}^{(B)}$.
Proof for odd $q$
-----------------
We examine rows $n\ne3t+1$ $(n\geq2)$, because the number of elements in row $n=3t+1$ is even (see relation ).
Here, apart from some details, we copy the treatment of the previous case. The first difference is that now we have to examine the influence of the elements of row $n$ on row $n+3$, because the nice property about the signs first appears three rows later. The influence structures ${\cal H}_3(x_A)$ and ${\cal H}_3(x_B)$ are rather complicated, so we split them into smaller parts. First we draw the structures ${\cal H}_2(x_A)$ and ${\cal H}_2(x_B)$ when $x_A$ and $x_B$ are in row $n+1$, and we describe their influence on row $n+3$. Then we combine the result with the branches of ${\cal H}_3(x_A)$ and ${\cal H}_3(x_B)$. In Figures \[fig:4q\_odd\_HA2\] and \[fig:4q\_odd\_HB2\] only ${\cal H}_2(x_A)$ and ${\cal H}_2(x_B)$ can be seen; later, in Figure \[fig:4q\_odd\_HAB3\], we consider the “skeleton” of ${\cal H}_3(x_A)$ and ${\cal H}_3(x_B)$.
![The influence ${\cal H}_2(x_A)$[]{data-label="fig:4q_odd_HA2"}](Pascal_4q_odd_HA2.pdf){width="0.80\linewidth"}
![The influence ${\cal H}_2(x_B)$[]{data-label="fig:4q_odd_HB2"}](Pascal_4q_odd_HB2.pdf){width="0.80\linewidth"}
![The influence ${\cal H}_3(x_A)$ and ${\cal H}_3(x_B)$[]{data-label="fig:4q_odd_HAB3"}](Pascal_4q_odd_HAB3.pdf){width="0.97\linewidth"}
![The influence ${\cal H}_3(1)$[]{data-label="fig:4q_odd_H13"}](Pascal_4q_odd_H13.pdf){width="0.78\linewidth"}
From the figures one can derive the observations $$\begin{aligned}
{\cal H}_3^{(A)}(x_A) &=& \ve\cdot(2x-4x-4x(q-4)+2x)=-4(q-4)\ve x_A,\\
{\cal H}_3^{(B)}(x_A) &=& \ve\cdot(-x+2x+2x(q-4)-x)=2(q-4)\ve x_A,\end{aligned}$$ and $$\begin{aligned}
{\cal H}_3^{(A)}(x_B) &=& \delta\cdot(2x-4x-4x(q-3)+2x)=-4(q-3)\delta x_B,\\
{\cal H}_3^{(B)}(x_B) &=& \delta\cdot(-x+2x+2x(q-3)-x)=2(q-3)\delta x_B.\end{aligned}$$ We also have $$\begin{aligned}
{\cal H}_3^{(A)}(1) &=& -5+2=-3,\\
{\cal H}_3^{(B)}(1) &=& 2-1=1.\end{aligned}$$
Combining the informations, it results the system of recurrence relations $$\begin{aligned}
\label{eq:_odd_recu01}
\widetilde{s}_{n+3}^{(A)}&=&-4(q-4)\widetilde{s}_n^{(A)}-4(q-3)\widetilde{s}_n^{(B)}+2\cdot(-3),\qquad (n\ge0),\\
\widetilde{s}_{n+3}^{(B)}&=&\;\;\;2(q-4)\widetilde{s}_n^{(A)}+2(q-3)\widetilde{s}_n^{(B)}+2\cdot1,\phantom{(-)}\qquad (n\ge0).\label{eq:_odd_recu02}\end{aligned}$$
Lemma \[lemma:2seq\] yields $\widetilde{s}_{n+9}^{(A)}=(11-2q)\widetilde{s}_{n+6}^{(A)}+(2q-10)\widetilde{s}_{n+3}^{(A)}$ and $\widetilde{s}_{n+9}^{(B)}=(11-2q)\widetilde{s}_{n+6}^{(B)}+(2q-10)\widetilde{s}_{n+3}^{(B)}$. Apparently, $\widetilde{s}_{2}^{(A)}=-2$, $\widetilde{s}_{2}^{(B)}=0$, and $\widetilde{s}_{3}^{(A)}=-6$, $\widetilde{s}_{3}^{(B)}=2$. From the system and we gain $\widetilde{s}_{5}^{(A)}=8q-38$, $\widetilde{s}_{5}^{(B)}=-4q+18$ and $\widetilde{s}_{6}^{(A)}=16q-78$, $\widetilde{s}_{6}^{(B)}=-8q+38$.
By we realize $$\label{eq:odd_recu}
\widetilde{s}_{n+9}=(11-2q)\widetilde{s}_{n+6}+(2q-10)\widetilde{s}_{n+3},\qquad (n\ge0),$$ where $\widetilde{s}_{2}=0$, $\widetilde{s}_{3}=-2$ and $\widetilde{s}_{5}=4q-18$, $\widetilde{s}_{6}=8q-38$.
In the case $q=5$, the equation gives $\widetilde{s}_{n+9}=\widetilde{s}_{n+6}$ and $\widetilde{s}_{5}=2$, $\widetilde{s}_{6}=2$. So $\widetilde{s}_{n}=2$ holds for all $n\geq 5$ with $n\ne3t+1$. In the case $q>5$, the characteristic equation of  is $$\widetilde{p}(x)=x^6+(2q-11)x^3-(2q-10)=\left(x^3-2(5-q)\right)(x^3-1).$$
Thus both $\widetilde{s}_{3t}$ and $\widetilde{s}_{3t-1}$ have the form $\alpha (2(5-q))^t+\beta$ for some $\alpha$ and $\beta$. Finally, if $n=3t-1$, then from $\widetilde{s}_2$ and $\widetilde{s}_5$ we obtain $\alpha=1/(q-5)$ and $\beta=2$. Hence $\widetilde{s}_{3t-1}=(10-2q)^t/(q-5)+2$. Otherwise, if $n=3t$, then from $\widetilde{s}_{3}$ and $\widetilde{s}_{6}$ we deduce $\alpha=2/(q-5)$ and $\beta=2$. Thus $\widetilde{s}_{3t}=2(10-2q)^t/(q-5)+2$.
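To see that these expressions coincide with the formulas stated in Theorem \[th:odd\], it suffices to note that $$\frac{(10-2q)^t}{q-5}=\frac{(-2)^t(q-5)^t}{q-5}=(-2)^t(q-5)^{t-1},\qquad \frac{2(10-2q)^t}{q-5}=2(-2)^t(q-5)^{t-1}.$$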
H. Belbachir, L. Németh and L. Szalay, [Hyperbolic Pascal triangles]{}, *Appl. Math. Comp.* [**273**]{} (2016), 453-464.
Belbachir, H. and Szalay, L., On the arithmetic triangles, Siauliai Math. Sem., [**9**]{} (17) (2014), 15-26.
Coxeter, H. S. M., Regular honeycombs in hyperbolic space, Proc. Int. Congress of Math., Amsterdam, Vol. III. (1954), 155-169.
[^1]: University of West Hungary, Institute of Mathematics, Hungary. *nemeth.laszlo@nyme.hu*
[^2]: Department of Mathematics and Informatics, J. Selye University, Hradna ul. 21., 94501 Komarno, Slovakia
[^3]: University of West Hungary, Institute of Mathematics, Hungary. *szalay.laszlo@nyme.hu*
---
abstract: 'Let $R$ be a Cohen-Macaulay local ring possessing a canonical module. In this paper we consider when the maximal ideal of $R$ is self-dual, i.e. it is isomorphic to its canonical dual as an $R$-module. Local rings satisfying this condition are called Teter rings, and have been studied by Teter, Huneke-Vraciu, Ananthnarayan-Avramov-Moore, and so on. In the positive dimensional case, we show such rings are exactly the endomorphism rings of the maximal ideals of some Gorenstein local rings of dimension one. We also provide some connection between the self-duality of the maximal ideal and near Gorensteinness.'
address: 'Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, Aichi 464-8602, Japan'
author:
- Toshinori Kobayashi
title: 'Local rings with a self-dual maximal ideal'
---
[^1] [^2] [^3]
Introduction
============
Let $R$ be a Cohen-Macaulay local ring with a canonical module $\omega$. For an $R$-module $M$, we denote by $M^\dag$ the $R$-module $\Hom_R(M,\omega)$. The $R$-module $M$ is called [*self-dual*]{} if there exists an isomorphism $M\xrightarrow[]{\cong} M^\dag$ of $R$-modules. Note that the self-duality of $R$-modules is independent of the choice of $\omega$.
Let $R$ and $S$ be artinian local rings such that $S$ maps onto $R$. Denote by $c_S(R)$ the colength $\ell_S(S)-\ell_S(R)$. In the case that $S$ is Gorenstein, the integer $c_S(R)$ is used to estimate homological properties of $R$, for example, see [@Kustin-Vraciu Theorem 7.5]. Ananthnarayan [@Ananthnarayan] introduced the [*Gorenstein colength*]{} $g(R)$ of an artinian local ring $(R,\m,k)$ to be the following integer $$g(R)\coloneqq\min\{c_S(R) \mid S \text{ is a Gorenstein artinian local ring mapping onto } R\}.$$
The number $g(R)$ measures how close $R$ is to a Gorenstein ring. Clearly, $g(R)$ is zero if and only if $R$ is Gorenstein. One can see that $g(R)=1$ if and only if $R$ is non-Gorenstein and $R\cong S/\mathsf{soc}(S)$ for an artinian Gorenstein ring $S$. These rings are called *Teter rings*. On Teter rings, the following characterization is known, which is an improvement of Teter’s result [@Teter]. This was proved by Huneke-Vraciu [@Huneke-Vraciu] under the assumption that $1/2\in R$ and $\mathsf{soc}(R)\subseteq\m^2$, and later Ananthnarayan-Avramov-Moore [@A-A-M] removed the assumption $\mathsf{soc}(R)\subseteq\m^2$. See also the result of Elias-Takatsuji .
\[Huneke-Vraciu, Ananthnarayan-Avramov-Moore, Elias-Takatuji\] \[thHVAAM\] Let $(R,\m,k)$ be an artinian local ring such that either $R$ contains $1/2$ or $R$ is equicharacteristic with $\mathsf{soc}(R)\subseteq \m^2$. Then the following are equivalent.
- $g(R)\leq 1$.
- Either $R$ is Gorenstein or $\m\cong \m^\dag$.
- Either $R$ is Gorenstein or there exists a surjective homomorphism $\omega \to \m$.
Moreover, Ananthnarayan [@Ananthnarayan] extended this theorem to the case $g(R)\leq 2$ as follows.
\[Ananthnarayan\] \[Anan\] Let $(R,\m)$ be an artinian local ring. Write $R\cong T/I$ where $(T,\m_T)$ is a regular local ring and $I$ is an ideal of $T$. Suppose $I\subseteq \m_T^6$ and $1/2\in R$. Then the following are equivalent.
- $g(R)\leq 2$.
- There exists a self-dual ideal $\mathfrak{a}\subseteq R$ such that $l(R/\mathfrak{a})\leq 2$.
In this paper, we try to extend the notion of Gorenstein colengths and the above results to the case that $R$ is a one-dimensional Cohen-Macaulay local ring.
For a local ring $(R,\m)$, we denote by $Q(R)$ the total quotient ring of $R$. An extension $S\subseteq R$ of local rings is called [*birational*]{} if $R\subseteq Q(S)$. In this case, $R$ and $S$ have the same total quotient ring.
Let $(S,\mathfrak{n})\subseteq (R,\m)$ be an extension of local rings. Suppose $\mathfrak{n}=\m\cap S$. Then $S\subseteq R$ is called [*residually rational*]{} if there is an isomorphism $S/\mathfrak{n}\cong R/\m$ induced by the natural inclusion $S\to R$. For example, if $S\subseteq R$ is module-finite and $S/\mathfrak{n}$ is algebraically closed, then it automatically follows that $S\subseteq R$ is residually rational. We introduce an invariant $bg(R)$ for local rings $R$ as follows, which is the infimum of the Gorenstein colengths over birational maps.
For a local ring $R$, we define $$bg(R)\coloneqq\inf\left\{\ell_S(R/S) \middle|
\begin{array}{l}
S\text{ is Gorenstein and }S\subseteq R\text{ is a module-finite}\\
\text{residually rational birational map of local rings}
\end{array}
\right\}.$$
We will state the main results of this paper by using this invariant. The first one is the following theorem, which gives a one-dimensional analogue of Theorem \[thHVAAM\].
\[thA\] Let $(R,\m)$ be a one-dimensional Cohen-Macaulay local ring having a canonical module $\omega$. Consider the following conditions.
- $bg(R)\leq 1$.
- Either $R$ is Gorenstein or there exists a Gorenstein local ring $(S,\mathfrak{n})$ of dimension one such that $R\cong \End_S(\mathfrak{n})$.
- Either $R$ is Gorenstein or $\m\cong \m^\dag$.
- There is a short exact sequence $0\to \omega \to \m \to k \to 0$.
- There is an ideal $I$ of $R$ such that $I\cong \omega$ (i.e. $I$ is a canonical ideal of $R$) and $l(R/I)\leq 2$.
Then the implications $(1)\implies (2) \implies (3) \iff (4) \iff (5)$ hold. The direction $(5) \implies (1)$ also holds if $R$ contains an infinite field $k$ as a subalgebra which is isomorphic to $R/\m$ via the projection $R\to R/\m$, i.e. $R$ has an infinite coefficient field $k\subseteq R$.
The existence of a canonical ideal $I$ of $R$ with $\ell_R(R/I)=2$ is considered by Dibaei-Rahimi [@DR]. Using their notion, the condition (5) above is equivalent to the condition that $\min(S_{\mathfrak{C}_R})\le 2$.
We also remark that Bass's idea [@B] tells us the importance of the endomorphism ring $\End_S(\mathfrak{n})$ of the maximal ideal $\mathfrak{n}$ of a Gorenstein local ring $S$ of dimension one. He showed that any torsion-free $S$-module without a non-zero free summand can be regarded as a module over $\End_S(\mathfrak{n})$. So we can analyze Cohen-Macaulay representations of $R$ via the ring $\End_S(\mathfrak{n})$ (see also [@LW Chapter 4]).
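As a toy illustration of condition (2) of Theorem \[thA\] (the example and the Python sketch below are only meant to be suggestive), one may compute $\End_S(\mathfrak{n})=\mathfrak{n}:\mathfrak{n}$ for a numerical semigroup ring on the level of exponent sets. For the Gorenstein ring $S=k[[t^3,t^4]]$ the computation returns the semigroup $\langle 3,4,5\rangle$, so $R=k[[t^3,t^4,t^5]]\cong\End_S(\mathfrak{n})$; since $\langle 3,4,5\rangle$ is not symmetric, $R$ is not Gorenstein, and Theorem \[thA\] then gives $\m\cong\m^\dag$ for this $R$.

```python
# Exponent-set sketch: colon operations on monomial fractional ideals of a
# numerical semigroup ring reduce to set arithmetic on exponents.
N = 60                                   # truncation bound; safe here, as all gaps are small

def semigroup(gens, bound=N):
    S = {0}
    changed = True
    while changed:
        changed = False
        for g in gens:
            for s in list(S):
                if s + g <= bound and s + g not in S:
                    S.add(s + g)
                    changed = True
    return S

S = semigroup([3, 4])                    # <3,4> is symmetric, so k[[t^3,t^4]] is Gorenstein
M = {s for s in S if s > 0}              # exponents of the maximal ideal n

endo = {x for x in range(N // 2)         # n : n = {x : x + M is contained in M}
        if all(x + m in M or x + m > N for m in M)}
print(sorted(endo)[:6])                  # -> [0, 3, 4, 5, 6, 7], i.e. the semigroup <3,4,5>
```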
As a corollary, we can characterize Cohen-Macaulay local rings $R$ of dimension one having minimal multiplicity and satisfying $bg(R)\leq 1$.
\[co1\] Let $(R,\m)$ be a one-dimensional Cohen-Macaulay local ring. Consider the following conditions.
- $bg(R)\leq 1$ and $R$ has minimal multiplicity.
- Either $e(R)\leq 2$ or $R$ is almost Gorenstein with $bg(R)=1$.
- $\m\cong \m^\dag$ and $R$ is almost Gorenstein.
- $\m\cong \m^\dag$ and $R$ has minimal multiplicity.
- $R$ is almost Gorenstein and has minimal multiplicity.
- There exists a Gorenstein local ring $(S,\mathfrak{n})$ of dimension one such that $e(S)\leq \edim S+1$ and $R\cong \End_S(\mathfrak{n})$.
- $\m\cong \m^\dag$ and $\rho(R)\le 2$.
Then $(1) \iff (2) \implies (3) \iff (4) \iff (5)$ holds. If $R/\m$ is infinite, then $(5) \iff (7)$ and $(6) \implies (4)$ hold. If $R$ has an infinite coefficient field $k\subseteq R$, then all the conditions are equivalent.
Here we use the notion of almost Gorenstein rings in the sense of [@GMP]. Also we denote by $e(S)$ the multiplicity of $S$ and by $\edim S$ the embedding dimension of $S$. A Gorenstein ring of dimension one satisfying $e(S)= \edim S+1$ is called *a ring of almost minimal multiplicity* or a *Gorenstein ring of minimal multiplicity*, and was studied by J. D. Sally [@Sally]. The invariant $\rho(R)$ is the *canonical index* of $R$, introduced by Ghezzi-Goto-Hong-Vasconcelos [@GGHV].
The second main theorem of this paper is the following, which is a one-dimensional analogue of Theorem \[Anan\].
\[thB\] Let $(R,\m)$ be a complete one-dimensional Cohen-Macaulay local ring. Consider the following conditions.
- $bg(R)\leq 2$.
- There exists a self-dual ideal $\mathfrak{a}\subseteq R$ such that $\ell_R(R/\mathfrak{a})\leq 2$.
Then $(1)$ implies $(2)$. The implication $(2) \implies (1)$ also holds if $R$ has an infinite coefficient field $k\subseteq R$.
In view of Theorem \[thA\], local rings with a self-dual maximal ideal are naturally constructed from Gorenstein local rings, and so their ubiquity is guaranteed. It is interesting to consider what good properties they have in comparison with Gorenstein rings. In section 3, we observe that a Cohen-Macaulay local ring $(R,\m)$ is nearly Gorenstein in the sense of Herzog-Hibi-Stamate [@HHS] if $\m$ is self-dual. The converse is not true in general; however, we have the following result.
\[th313\] Let $(R,\m,k)$ be a Cohen-Macaulay local ring of dimension one. Put $B=\m:\m$. Assume $k$ is infinite.
- If $B$ is local with Cohen-Macaulay type two and $R$ is nearly Gorenstein, then $R$ is almost Gorenstein and does not satisfy $\m\cong \m^\dag$.
- If $B$ is local with Cohen-Macaulay type three and $R$ is nearly Gorenstein, then either $R$ is almost Gorenstein or $\m\cong \m^\dag$.
We will provide a proof of Theorem \[th313\] in section 3. One should compare this theorem with the following result of Goto-Matsuoka-Phuong [@GMP Theorem 5.1].
Let $(R,\m,k)$ be a Cohen-Macaulay local ring of dimension one. Put $B=\m:\m$. Then $B$ is Gorenstein if and only if $R$ is almost Gorenstein and has minimal multiplicity.
In section 4, we deal with numerical semigroup rings having a self-dual maximal ideal. The definition of UESY-semigroups was given by [@Rosales]. These numerical semigroups are exactly the semigroups obtained by adding one element to a symmetric numerical semigroup. We will show that a numerical semigroup ring has a self-dual maximal ideal if and only if the corresponding numerical semigroup is UESY. After that, we also prove that the rings of UESY numerical semigroups have quasi-decomposable maximal ideals. According to [@NT], an ideal $I$ of $R$ is called [*quasi-decomposable*]{} if there exists a regular sequence $\underline{x}=x_1,\dots,x_t$ such that $I/(\underline{x})$ is decomposable as an $R$-module. Local rings with quasi-decomposable maximal ideal have some interesting properties; we can classify thick subcategories of the singularity category under some assumption on the punctured spectrum ([@NT Theorem 4.5]), and we have results on the vanishing of Ext and Tor ([@NT Section 6]).
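To make this concrete, here is a small illustrative computation (our toy example, continuing the one above) for $R=k[[t^3,t^4,t^5]]$, whose semigroup $\langle3,4,5\rangle$ is obtained from the symmetric semigroup $\langle3,4\rangle$ by adding one element. On exponent sets, the canonical ideal is $K=\{x\in\mathbb{Z}\mid F-x\notin S\}$ with $F$ the Frobenius number, the canonical dual of a monomial ideal $I$ is $K-I$, and two monomial fractional ideals are isomorphic precisely when one is a translate of the other (the standard identification for monomial fractional ideals over a numerical semigroup ring); the sketch below checks that $K-M$ is a translate of $M$, i.e. $\m\cong\m^\dag$.

```python
N = 60
S = {0} | set(range(3, N))                        # the numerical semigroup <3,4,5>
F = 2                                             # its Frobenius number
M = {s for s in S if s > 0}                       # exponents of the maximal ideal
K = {x for x in range(-10, N) if F - x not in S}  # exponents of the canonical ideal

dual_M = {x for x in range(-10, 40)               # K - M = {x : x + M is contained in K}
          if all(x + m in K or x + m >= N for m in M)}

shift = min(M) - min(dual_M)                      # = 3
print(all((x in dual_M) == (x + shift in M) for x in range(-5, 40)))   # True: m is self-dual
```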
In section 5, we characterize the endomorphism ring of a local hypersurface of dimension one, using Theorem \[thA\].
Proof of Theorem \[thA\] and \[thB\]
====================================
In this section, we prove Theorems \[thA\] and \[thB\]. Let $(R,\m)$ be a Noetherian local ring with total quotient ring $Q(R)$. Denote by $\widetilde R$ the integral closure of $R$ in $Q(R)$. A [*fractional ideal*]{} is a finitely generated $R$-submodule $I$ of $Q(R)$ with $IQ(R)=Q(R)$ (i.e. $I$ contains a non-zerodivisor of $R$). If $R$ has $\depth R \ge 1$, then every $\m$-primary ideal is a fractional ideal of $R$. For fractional ideals $I$ and $J$, we can naturally identify $\Hom_R(I,J)$ with the set $J:I=\{a\in Q(R)\mid aI\subseteq J\}$. In this way, the endomorphism ring $\End_R(\m)$ of $\m$ is identified with the $R$-subalgebra $\m:\m$ of $Q(R)$. Here we note that the extension $R\subseteq \m:\m$ is module-finite and hence $\m:\m$ is a commutative semilocal ring.
We record the following well-known lemma for use in the proof of Theorem \[thA\].
\[lem21\] Let $(R,\m)$ be a Cohen-Macaulay local ring of dimension one. Assume $R$ is not a discrete valuation ring (DVR). Then
- $R\subsetneq\m:\m=R:\m$
- $\ell_R(\m:\m/R)$ is equal to the Cohen-Macaulay type $r(R)$ of $R$.
- There exists a short exact sequence $$0 \to \m^{\oplus r(R)} \to R^{\oplus r(R)+1} \to \m:\m \to 0.$$ In particular, $\mu_R(\m:\m)=r(R)+1$ and $\syz_R(\m:\m)=\m^{\oplus r(R)}$.
\[lem22\] Let $(S,\mathfrak{n}) \subsetneq (R,\m)$ be a module-finite birational extension of one-dimensional local rings. Assume $R$ is reflexive as an $S$-module. Then we have birational extensions $S\subsetneq \mathfrak{n}:\mathfrak{n} \subseteq R$.
Note that $S$ is not a DVR (otherwise $S$ would be integrally closed and hence $S=R$), and so $S$ is properly contained in $\mathfrak{n}:\mathfrak{n}$ (Lemma \[lem21\]). By the assumption, $R=S:(S:R)$. Since $R$ has constant rank one over $S$ and $R$ is not isomorphic to $S$, one has $R=\mathfrak{n}:(\mathfrak{n}:R)$. Therefore, $(\mathfrak{n}:\mathfrak{n})\subseteq(\mathfrak{n}:\mathfrak{n})R\subseteq R$.
Now we can explain the proof of the implications $(1)\implies (2) \implies (3) \iff (4) \iff (5)$ of Theorem \[thA\].
(1) $\implies$ (2): Assume $bg(R)\leq 1$. If $bg(R)=0$, then $R$ is Gorenstein, and there is nothing to prove. We may assume $bg(R)=1$. Then there is a Gorenstein local ring $(S,\mathfrak{n})$ and a module-finite residually rational birational extension $S\subsetneq R$ with $\ell_S(R/S)=1$. In particular, $R$ is Cohen-Macaulay. By the previous lemma, we have $S\subsetneq \mathfrak{n}:\mathfrak{n}\subseteq R$. Therefore, it follows that $\ell_S(R/\mathfrak{n}:\mathfrak{n})=0$, in other words, $R=\mathfrak{n}:\mathfrak{n}=\mathfrak{n}:S$.
(2) $\Rightarrow$ (3): We may assume $S$ is not a DVR (otherwise $R\cong S$ and hence $R$ is Gorenstein). Identify $R$ with $\mathfrak{n}:\mathfrak{n}$. By Lemma \[lem21\], one has $\ell_S(R/S)=1$. Hence the colength $\ell_S(\m/\mathfrak{n})$ of the inclusion $\mathfrak{n}\subseteq \m$ is less than or equal to $1$. It is easy to check that $\m/\mathfrak{n}$ is an $R$-module and it has dimension one as a vector space over $R/\m$. Fix a preimage $t\in R$ of a basis $\overline{t}$ of $\m/\mathfrak{n}$. Then $\m=\mathfrak{n}+Rt$ and $\m^2=\mathfrak{n}^2+\m t\subseteq \mathfrak{n}$. This means $\m\subseteq S:\m$. We have another inclusion $$S:\m\subseteq S:\mathfrak{n}=R.$$ Since $Rt\not\subset S$, it follows that $S:\m=\m$. The fractional ideal $S:\m$ is isomorphic to $\m^\dag$ and so we obtain $\m\cong\m^\dag$.
\(3) $\implies$ (4): Applying the functor $(-)^\dag$ to the short exact sequence $0\to \m \to R \to k \to 0$, we see that the resulting exact sequence is $0 \to \omega \to \m^\dag \to \Ext^1_R(k,\omega)\cong k \to 0$. Replacing $\m^\dag$ by $\m$, using the assumption $\m\cong \m^\dag$, we get the desired exact sequence.
\(4) $\implies$ (3): Applying the functor $(-)^\dag$ to the short exact sequence $0\to \omega \to \m \to k \to 0$, we get an exact sequence $0 \to \m^\dag \to R \to \Ext^1_R(k,\omega)\cong k \to 0$. Then, the image of $\m^\dag$ in $R$ must be equal to $\m$ and hence one has an isomorphism $\m^\dag\cong \m$.
\(4) $\implies$ (5): The exact sequence $0 \to \omega \to \m \to k \to 0$ yields that there is an ideal $I\cong \omega$ such that the colength $\ell_R(\m/I)$ is one. The equality $\ell_R(\m/I)=2$ immediately follows from the above.
\(5) $\implies$ (4): Take an ideal $I\cong \omega$ such that $l(R/I)\le 2$. If $I=R$, then $R$ is Gorenstein and there is nothing to prove. So we may suppose that $I\subseteq \m$. If $I=\m$, then $R$ must be , since $\m\cong \omega$. Now we deal with an assumption that $I\subsetneq \m$. The inequality $l(R/I)\le 2$ implies that the equality $l(\m/I)=1$. Thus the exact sequence $0\to I \to \m \to k \to 0$ is induced.
0
By assuming $S$ is complete, we can proof (2)$\implies$(3), using the theory of Auslander-Reiten sequence. We may also assume $S$ is not a ,. Now identify $R$ with the fractional ideal $\mathfrak{n}:\mathfrak{n}$. Then we see that $\mathfrak{n}=S:R\cong \Hom_S(R,S)$ and this module is a canonical module of $R$. Since $R$ has constant rank as an $S$-module, it is locally free on punctured spectrum of $S$. So one can take the Auslander-Reiten sequence $0 \to \tau(R) \to E_R \to R \to 0$ of the $S$-module $R$, see [@Yoshino Theorem 3.4]. The Auslander-Reiten transration $\tau(R)$ of $R$ over $S$ is isomorphic to $\Hom_S(\Omega_S \tr_S R,S)\cong \Omega_S R\cong \mathfrak{n}$ ([@Yoshino Proposition 3.11]). In particular, $E_R$ has constant rank $2$.
On the other hand, we have irreducible morphisms $\mathfrak{n} \to S$ and $\m \to R$, and their canonical duals $S\to R$ and $\mathfrak{n} \to \m^\dag$. Therefore, $S\oplus \m$ and $S\oplus \m^\dag$ are direct summands of $E_R$. Comparing the ranks of these modules, we have $S\oplus \m\cong E_R \cong S\oplus \m^\dag$, and hence $\m\cong \m^\dag$ by the Krull-Schmidt property. $\blacksquare$
All that remains is to show the direction (5) $\implies$ (1). Let $(R,\m)$ be a Noetherian local ring containing a coefficient field $k\cong R/\m$. Let $I\subset R$ be a fractional ideal such that $\ell_R(R/I)<\infty$. Put $k+I\coloneqq\{a+b\mid a\in k, b\in I\}\subseteq R$, which is a $k$-subalgebra of $R$. Then, since $\dim_k R/(k+I)\leq \ell_R(R/I)<\infty$, $R$ is finitely generated as a $k+I$-module and hence $k+I$ is Noetherian local ring with a maximal ideal $(k+I)\cap \m=I$. Thus, the ring extension $k+I\subseteq R$ is module-finite residually rational and birational.
\[lmlmlm\] Let $(R,\m)$ be a one-dimensional Cohen-Macaulay local ring. Assume $R$ has a canonical ideal $I\cong \omega$ such that $l(R/I)=2$. Put $S=k+I$. Then $S$ is Gorenstein, and the colength $\ell_S(R/S)$ is equal to $1$.
$S$ is local with a maximal ideal $\mathfrak{n}=I$. The extension $S\subseteq R$ is module-finite, residually rational and birational. Since $I$ is a canonical ideal, we have $I:I=R$. Equivalently, $\mathfrak{n}:\mathfrak{n}=R$. In particular, the colength $\ell_S(S/R)$ is equal to the Cohen-Macaulay type of $S$ (Lemma \[lem21\]). Since $R$ and $S$ have same residue field $k$, we can see the equalities $\ell_S(R/S)=l_S(\m/\mathfrak{n})=\ell_R(\m/I)$. On the other hand, we have $$\ell_R(\m/I)=\ell_R(R/I)-\ell_R(R/\m)=2-1=1.$$ It follows that $S$ has Cohen-Macaulay type $1$, that is, $S$ is Gorenstein. Moreover, the colength $\ell_S(R/S)$ is equal to $\ell_R(\m/I)=1$.
Assume There is a canonical ideal $I$ such that $\ell_R(R/I)\le 2$. If $\ell_R(R/I)\le 1$, then $I=R$ or $\m$. In both of these cases, $R$ should be Gorenstein.
Thus we only need to consider the case $\ell_R(R/I)=2$. By previous lemma, the ring $S\coloneqq k+I$ is Gorenstein and the colength $\ell_S(R/S)$ is $1$. This shows $bg(R)le 1$. 0 Taking a $\m$-adic completion $\widehat{(-)}$, we still have an isomorphism $\widehat{\m}\cong \widehat{\m}^\dag$. In particular, $\widehat{\m}^\dag$ has a constant rank as an $\widehat{R}$-module. Since the isomorphisms $$(\widehat{\m}^\dag)_\mathfrak{p}\cong\Hom_{R_\mathfrak{p}}(\m_\mathfrak{p},\widehat{\omega}_\mathfrak{p})\cong \Hom_{R_\mathfrak{p}}(\widehat{R}_\mathfrak{p},\widehat{\omega}_\mathfrak{p})\cong \widehat{\omega}_\mathfrak{p}$$ holds for any non-maximal prime ideal $\mathfrak{p}$ of $\widehat{R}$, one can see that $\widehat{\omega}$ also has a constant rank. In other words, $\widehat{R}$ is generically Gorenstein. Then we can take a fractional ideal $R\subseteq K_R \subseteq \widetilde R$ such that $K_R$ is isomorphic to $\omega$ (see [@GMP Corollary 2.9]).
Now we identify the $R$-module $\m^\dag$ with the fractional ideal $K_R:\m$. Fix an isomorphism $\phi\colon K_R:\m\to \m$, which is a multiplication by an element $\phi=t/s\in \m:(K_R:\m)$, where $s\in R$ is an element and $t\in R$ is a non-zero divisor. We see that $$t/s\in\m:(K_R:\m)\subseteq \m:R=\m.$$ So we may assume $s=1$. It follows that $\m=t(K_R:\m)$. Put $I=tK_R$. Then $I=t(K_R:R)\subseteq t(K_R:\m)=\m$ and thus $I$ is an ideal of $R$. Set $S=k+I$. The extension $S\subseteq R$ is module-finite, residually rational and birational. We claim that $S$ is Gorenstein, and prove this in the followings. $I=tK_R$ implies $I:I=tK_R:tK_R=K_R:K_R=R$. Thus $R$ is the endomorphism ring of the maximal ideal $I$ of $S$. In particular, the colength $\ell_S(S/R)$ is equal to the Cohen-Macaulay type of $S$. Since $R$ and $S$ have same residue field $k$, we can see the equalities $\ell_S(R/S)=\ell_S(\m/I)=\ell_R(\m/I)$. On the other hand, we have $$\ell_R(\m/I)=\ell_R(t(K_R:\m)/t(K_R:R))=\ell_R((K_R:\m)/(K_R:R))=\ell_R(R/\m)=1.$$ It follows that $S$ has Cohen-Macaulay type $1$, that is, $S$ is Gorenstein.
We put the following lemma here, which will be used in the proof of Corollary \[co1\].
\[lem16\] Let $(R,\m)$ be a Cohen-Macaulay generically Gorenstien local ring of dimension one having a canonical module. Assume $R$ is not a ,. Then
- $R$ has minimal multiplicity if and only if $\m\cong \m:\m$.
- $R$ is almost Gorenstein in the sense of [@GMP] if and only if $\m^\dag\cong\m:\m$.
See [@Ooishi Proposition 2.5] and [@Ko Theorem 2.14] respectively.
We give a proof of Corollary \[co1\] as follows.
The implications (3) $\iff$ (4) $\iff$ (5) follow immediately from Lemma \[lem16\].
\(1) $\iff$ (2): In the case $bg(R)=0$, $R$ is Gorenstein and has minimal multiplicity and thus $e(R)\leq 2$. The converse also holds. Now suppose $bg(R)=1$. Then by Theorem \[thA\], $\m$ is isomorphic to $\m^\dag$. Therefore, $R$ has minimal multiplicity if and only if $R$ is almost Gorenstein.
\(1) $\implies$ (3): Clear.
Now assume the residue field $R/\m$ is infinite.
\(6) $\implies$ (4): Obviously, $S/\mathfrak{n}$ is also infinite. If $e(S)\leq \edim S$, then $e(S)\leq 2$ and $R$ also has an inequality $e(R)\leq 2$. This says that $R$ is Gorenstein and has minimal multiplicity. So we may assume $e(S)=\edim S+1$.
Take a minimal reduction $(t)$ of $\mathfrak{n}$ and a preimage $\delta\in \mathfrak{n}^2$ of a generator of the socle of $S/(t)$. Then $\mathfrak{n}^3=t\mathfrak{n}^2$ (see [@Sally2 Proof of (3.4)]), $\mathfrak{n}^2=t\mathfrak{n}+S\delta$ and $(t):_S \mathfrak{n}=(t)+S\delta$. Therefore $$R\cong \End_S(\mathfrak{n})\cong (t):_S \mathfrak{n}/t=S+S(\delta/t).$$ Identify $R$ with $S+S(\delta/t)$. Since $R$ is local and $\delta^2\in\mathfrak{n}^4=t^2\mathfrak{n}^2$, $(\delta/t)$ cannot be a unit of $R$. This shows $\m=\mathfrak{n}+S(\delta/t)$. By this equality, we also have an isomorphism $R/\m\cong S/\mathfrak{n}$ induced by $S\subseteq R$. Observe the following equalities $$t\m=t\mathfrak{n}+S\delta=\mathfrak{n}^2$$ and $$\m^2=(\mathfrak{n}+S(\delta/t))^2=\mathfrak{n}^2+\mathfrak{n}(\delta/t)+S(\delta/t)^2.$$ Then $\delta^2\in\mathfrak{n}^4=t^2\mathfrak{n}^2$ implies $S(\delta/t)^2\subseteq \mathfrak{n}^2$, and $\mathfrak{n}\delta\subseteq \mathfrak{n}^3=t\mathfrak{n}^2$ implies $\mathfrak{n}(\delta/t)\subseteq \mathfrak{n}^2$. So $\m^2=\mathfrak{n}^2=t\m$. This means that $R$ has minimal multiplicity.
It remains to show that $\m\cong \m^\dag$. By Theorem \[thA\], it holds that either $R$ is Gorenstein or $\m\cong \m^\dag$. In the case that $R$ is Gorenstein, it holds that $e(R)\le 2$ and so $\m$ is self-dual by [@Ooishi Theorem 2.6].
\(5) $\implies$ (7): Assume $R$ is almost Gorenstein and has minimal multiplicity. Then we already saw that $m$ is self-dual. It follows from [@GMP Theorem 3.16] that $\rho(R)\le 2$.
\(7) $\implies$ (5): It is enough to show that $\rho(R)\le 2$. Recall that $\rho(R)$ is the reduction number of a canonical ideal of $R$ ([@GGHV Definition 4.2]). So if $\rho(R)\le 1$, then $R$ is Gorenstein ([@GMP Theorem 3.7]). Assume $\rho(R)=2$. Combining [@DR Theorem 3.7, Proposition 3.8] and Theorem \[thA\], we obtain that $R$ is almost Gorenstein and has minimal multiplicity.
Finally, we deal with the assumption that $R$ contains a infinite field $k$ isomorphic to $R/\m$ via $R\to R/\m$.
\(4) $\implies$ (1): This follows directly from Theorem \[thA\].
\(1) $\implies$ (6): First we consider the case that $R$ is Gorenstein (i.e. $bg(R)=0$). In this case, $e(R)\le 2$ and $\edim R\le 2$ by the assumption. Take a minimal reduction $Rt$ of $\m$. Then $\m^2=t\m$. In particular, $\ell_R(\m/I)=\ell_R(\m/I+\m^2)\le 1$. Put $I=Rt$ and $S=k+I$. Then the ring-extension $S\subseteq R$ is module-finite, residually rational and birational. Since $I:I=R$ and $\ell_R(\m/I)\le 1$, we can see that $S$ is Gorenstein and $\End_S(I)\cong R$ by the similar argument in the proof of \[thA\] (3) $\implies$ (1). Furthermore, one has an equality $tI=I^2$, which particularly show that $S$ has minimal multiplicity, that is, $e(S)=\edim S$.
Now consider the case that $bg(R)=1$. Repeating the proof of Theorem \[thA\] (3) $\implies$ (1), there is a canonical ideal $I$ such that if we let $S=k+I$, then $S$ is Gorenstein local and $R\cong \End_S(\mathfrak{n})$, where $\mathfrak{n}$ is the maximal ideal of $S$. Since $R$ is Almost Gorenstein, it was shown in [@GMP Theorem 3.16] that there is a minimal reduction $Q=(t)\subseteq I$ of $I$ in $R$ such that $\ell_R(I^2/QI)\le 1$ and $QI^2=I^3$. Then it follows that $\ell_S(I^2/QI)\le 1$. Using [@Sally2 Proposition 3.3], the equality $e(S)=\edim S+1$ holds.
We give here an example of a ring $R$ with $bg(R)=1$.
Let $R=k[[t^3,t^4,t^5]]$ and $S=k[[t^3,t^4]]$ be numerical semigroup rings, where $k$ is a field. Then the natural inclusion $S\subseteq R$ is a module-finite birational extension of local rings with the same coefficient field. The colength $\ell_S(R/S)$ is equal to $1$. Since $R$ is non-Gorenstein and $S$ is Gorenstein, we have $bg(R)=1$.
We now turn to estimate the invariant $bg(R)$ in general. Suppose there exists a self-dual fractional ideal of $R$. Then we have an upper bound of $bg(R)$ as follows.
\[lem23\] Let $(R,\m)$ be a complete one-dimensional Cohen-Macaulay local ring. Assume $R$ contains an infinite coefficient field $k\cong R/\m$. Let $I\subseteq R$ be a fractional ideal of $R$. If $I$ is self-dual, then we have $bg(R)\leq l(R/I)$. In other words, the following inequality holds $$bg(R)\leq \inf\{\ell_R(R/I)\mid I\cong I^\dag \}.$$
In the case $I=R$, the self-duality of $I$ implies $R$ is Gorenstein. So we may assume $I\subseteq \m$. Take a non-zero divisor $t\in I$, and Put $B=k+I$. Then $B\subseteq R$ is a module-finite extension and $I$ is the maximal ideal of local ring $B$. Remark that $B$ is also complete. Since $B\subseteq R$ is birational, the $R$-isomorphism $I\to I^\dag$ is also a $B$-isomorphism, and $I^\dag$ is equal to the canonical dual of $I$ over $B$. By Theorem \[thA\], $bg(B)\leq 1$, that is, there is a Gorenstein ring $S$ and module-finite birational extension $S\subseteq B$. Then $S\subseteq R$ is also a module-finite birational extension. The calculation $$\ell_S(R/S)=\ell_S(R/B)+\ell_S(B/S)=\ell_R(\m/I)+1=\ell_R(R/I)$$ shows that $bg(R)\leq \ell_R(R/I)$.
As a corollary of this, we can see the finiteness of $bg(R)$ in the analytically unramified case.
Let $(R,\m)$ be a complete one-dimensional local ring. Assume $R$ contains an infinite coefficient field. If there exists a module-finite birational extension $R\subseteq T$ with a Gorenstein ring $T$, Then $bg(R)\leq l(R/aT)$ for any non-zero divisor $a\in T:R$ of $T$. In particular, if $R$ is analytically unramified, then $bg(R)\leq l(R/R:\widetilde R)<\infty$.
Since $T$ is Gorenstein, the $R$-module $aT\cong T$ is self-dual. So we can apply Lemma \[lem23\] for $I=aT$. If $R$ is analytically unramified, then the normalization $\widetilde R$ of $R$ in $Q(R)$ is Gorenstein, and $R\subseteq \widetilde R$ is finite birational. Take any non-zero element $a\in R:\widetilde R$. Then $bg(R)\leq l(R/a(R:\widetilde R))$. Since $R:\widetilde R$ is contained in $R$, we have an inequality $(R/a(R:\widetilde R))\le l(R/R:\widetilde R)$.
Ananthnarayan [@Ananthnarayan] shows the following inequalities hold for an artinian local ring $R$.
$$\ell_R(R/\omega^*(\omega))\leq \min\{\ell_R(R/I)\mid I\cong I^\dag\} \leq g(R).$$
Here $\omega^*(\omega)$ is the trace ideal of $\omega$; see Definition \[deftr\].
As analogies of these inequalities, the followings are natural questions.
\[Q1\] Let $(R,\m)$ be a one-dimensional Cohen-Macaulay local ring. Is an equality $$bg(R)\geq\inf\{\ell_R(R/I)\mid I\cong I^\dag\}$$ hold true?
\[Q2\] Let $(R,\m)$ be a one-dimensional generically Gorenstein local ring. Is an inequality $\ell_R(R/\omega^*(\omega))\leq bg(R)$ hold true?
By our main theorems \[thA\] and \[thB\], Question \[Q1\] is affirmative for $R$ with $bg(R)\leq 2$. Question \[Q2\] has positive answer given in Proposition \[p1\] if $bg(R)\leq 1$, but there is the following counter-example, even when $bg(R)=2$.
Let $R$ be the numerical semigroup ring $k[[H]]$, where $H=\langle 3,13,14\rangle$. Then $\ell_R(R/\omega^*(\omega))=\mathsf{res}(H)=4$ by [@HHS Remark 7.14]. there is, however, a Gorenstein subring $S=k[[H']]$ of $R$, where $H'=\langle 6,9,13,14,16,17\rangle$. The colength $\ell_S(R/S)$ is equal to $2$, and so this gives a counter-example of Question \[Q2\].
We now return to prove the Theorem \[thB\].
\(2) $\implies$ (1): This is a consequence of Lemma \[lem23\] by letting $I=\mathfrak{a}$.
\(1) $\implies$ (2): In the case $bg(R)\leq 1$, assertion is followed by Theorem \[thA\]. So we may assume $bg(R)=2$. Take a Gorenstein local ring $(S,\mathfrak{n})$ and module-finite residually rational birational extension $S\subset R$ satisfying $\ell_S(R/S)=2$.
Let $B$ be the ring $\mathfrak{n}:\mathfrak{n}$. By Lemma \[lem22\] and Lemma \[lem21\], we have $\ell_S(B/S)=1$ and $S\subsetneq B\subseteq R$. Therefore $\ell_B(R/B)=1$ and $B$ is local. Let $\m_B$ be the maximal ideal of $B$ and fix a preimage $t\in R$ of a basis $\overline{t}$ of the one-dimensional vector space $R/B$ over $B/\m_B$. By the relation $\m_Bt\subseteq B$ yields that $t\in B:\m_B=\m_B:\m_B$. Therefore $R=B+Bt\subseteq \m_B:\m_B$. In particular, $R\m_B\subseteq \m_B$. This says that $\m_B$ is an ideal of $R$. Since $bg(B)=1$, $\m_B$ is a self-dual ideal of $B$ by Theorem \[thA\]. Thus, it is also self-dual as $R$-module. One can also have equalities $$\ell_R(R/\m_B)=\ell_B(R/B)+\ell_B(B/\m_B)=2.$$
Let $(R,\m)$ be a one-dimensional local ring. Assume $R$ is complete, equicharacteristic and $bg(R)=n<\infty$. If there exists a Cohen-Macaulay local ring $(B,\m_B)$ with $bg(B)=1$ and module-finite residually rational birational extensions $B\subseteq R\subseteq \m_B:\m_B$ such that $\ell_B(R/B)+1\leq n$. Then, by the same argument of proof of Theorem \[thB\], it follows that $\m_B$ is a self-dual ideal of $R$ satisfying $\ell_R(R/\m_B)\leq n$. In this case, Question \[Q1\] is affirmative for $R$.
0
Let $R=k[[t^3,t^{10},t^{11}]]$ be a numerical semigroup ring, where $k$ is a field. We give some observation on Question \[Q1\] fo $R$. $R$ has minimal multiplicity and is not almost Gorenstein. By Corollary, $bg(R)\geq 2$. In the case $bg(R)=2$, Question \[Q1\] is affirmative by Theorem \[thB\]. However, we don’t know whether $bg(R)=2$ or not.
Let $S=k[[t^3,t^{10}]]$ and $\mathfrak{n}$ be its maximal ideal. Then $S$ is Gorenstein and $S\subseteq R$ is module-finite residually rational birational extension with $\ell_S(R/S)=3$. In particular, $bg(R)\leq 3$. Put $B=\mathfrak{n}:\mathfrak{n}=k[[t^3,t^{10},t^{17}]]$. Then $\m_B:\m_B$ is equal to $k[[t^3,t^7]]$ and does not contain $R$. Therefore, we cannot apply Remark for this choice of $S$.
Let $S'=k[[t^6,t^9,t^{10}, t^{13},t^{14}]]$ and $\mathfrak{n}'$ be its maximal ideal. Then $S'$ is Gorenstein and $S'\subseteq R$ is module-finite residually rational birational extension with $\ell_{S'}(R/S')=3$. Put $B'=\mathfrak{n}':\mathfrak{n}'=k[[t^6,t^9,t^{10},t^{13},t^{14},t^{17}]]$. Then $\m_{B'}:\m_{B'}$ is equal to $k[[t^3,t^4]]$, which contains $R$. By Remark, $R$ has a self-dual ideal $I=\m_{B'}$ with $\ell_R(R/I)=3$. Consequently, Question \[Q1\] is affirmative for $R$ regardless of whether $bg(R)=2$ or not.
The seif-duality of the maximal ideal
=====================================
In this section, we collect some properties of local rings $(R,\m)$ with $\m\cong \m^\dag$.
Let $(R,\m)$ be a Cohen-Macaulay local ring with a canonical module. Assume $\m\cong \m^\dag$. Then
- $\dim R\leq 1$.
- Let $x\in\m\setminus \m^2$ be a non-zero divisor of $R$. Then $R/(x)$ also has self-dual maximal ideal.
- $\edim(R)=r(R)+1$.
Suppose $\dim R\geq 2$ and $\omega$ is a canonical module of $R$. Applying $(-)^\dag$ to the exact sequence $0 \to \m \to R \to k \to 0$, we get an exact sequence $$0 \to \Hom_R(k,\omega) \to \m^\dag \to \omega \to \Ext_R^1(k,\omega).$$ By the assumption $\dim R\geq 2$ yields that $\Hom_R(k,\omega)=0=\Ext_R^1(k,\omega)$ and hence $\m^\dag \cong R$. From the isomorphism $\m\cong \m^\dag$, it follows that $\m$ is principal ideal. This shows that $\dim R\leq 1$, which is a contradiction. Thus, it must be $\dim R\leq 1$.
When $\dim R\ge 2$, the maximal ideal $\m$ cannot be self-dual. However, we suggest the following generalization of the self-duality of the maximal ideal in higher dimensional case.
$(R,\m)$ be a non-Gorenstein Cohen-Macaulay local ring of dimension $d>0$ having an infinite residue field. Assume $R$ has a canonical ideal $I$ satisfying $e(R/I)=2$. Then there is a regular sequence $\underline{x}=x_1,\dots,x_{d-1}$ such that $R/(\underline{x})$ has self-dual maximal ideal.
Since $R/I$ is Cohen-Macaulay of dimension $d-1$, we can take a minimal reduction $\underline{y}=y_1,\dots,y_{d-1}$ of the maximal ideal $\m/I$ in $R/I$. Then the length $l(R/I)/(\underline{y}))$ is equal to $e(R/I)(\le2)$. Let $\underline{x}=x_1,\dots,x_{d-1}$ be a preimage of $\underline{y}$ in $R$. As $I$ is unmixed, we can take $\underline{x}$ as a regular sequence in $R$. The tensor product $I'=I\otimes R/(\underline{x})$ is naturally isomorphic a canonical ideal of $R'=R/(\underline{x})$. The quotient $R'/I'$ has length $l(R/(I+\underline{x}))=l(R/I)/(\underline{y})\le 2$. Therefore $R'$ has self-dual maximal ideal by Theorem \[thA\].
Let $R=k[[x^3,x^2y,xy^2,y^3]]$ be the third Veronese subring of $k[[x,y]]$. Then $I=(x^3,x^2y)R$ is a canonical ideal of $R$. The quotient ring $R/I$ is isomorphic to $k[[s,t]]/(s^2)$, and hence $e(R/I)=2$.
Go back to the subject on self-duality of the maximal ideal. Recall the notion of trace ideal of an $R$-module and nearly Gorensteiness of local rings (see [@HHS]).
\[deftr\] Let $R$ be a commutative ring. For an $R$-module $M$, the [*trace ideal*]{} $M^*(M)$ of $M$ in $R$ is defined to be the ideal $\sum_{f\in \Hom_R(M,R)} \image f\subseteq R$.
A Cohen-Macaulay local ring $(R,\m)$ with a canonical module $\omega$ is called [*nearly Gorenstein*]{} if $\omega^*(\omega)\supseteq \m$.
\[lemng\] Let $(R,\m)$ be a Cohen-Macaulay local ring with a canonical module. The following are equivalent.
- $R$ is nearly Gorenstein.
- there is a surjective homomorphism $\omega^{\oplus n} \to \m$ for some $n$.
Moreover, if $\dim R\leq 1$, then we can add the following conditions.
- there is a short exact sequence $0 \to \m^\dag \to R^{\oplus n} \to M \to 0$ for some $n$ and maximal Cohen-Macaulay module $M$.
- there is a short exact sequence $0 \to \m^\dag \to \m^{\oplus n} \to M \to 0$ for some $n$ and maximal Cohen-Macaulay module $M$.
\(1) $\iff$ (2): By the definition of trace ideals, there is a surjection $\omega^{\oplus n}\to \omega^*(\omega)$ for some $n$. So the equivalence immediately follows.
Now assume $\dim R\le 1$. Then the maximal ideal $\m$ is maximal Cohen-Macaulay as an $R$-module. So the condition (2) is equivalent to that there is a short exact sequence $0\to M\to \omega^{\oplus n}\to \m \to 0$ for some $n$ and maximal Cohen-Macaulay module $M$. Taking the canonical duals, the equivalence of (2) and (3) follows.
We turn the equivalence of (3) and (4). We may assume $R$ is not a discrete valuation ring, and hence $\m^\dag$ is not a free $R$-module. So it follows from.
\[p1\] Let $(R,\m)$ be a Cohen-Macaulay local ring with a canonical module. Assume $\m\cong \m^\dag$. Then
- $R$ is nearly Gorenstein.
- If $R$ is non-Gorenstein and $2$ is invertible in $R$, then $R$ is G-regular.
We already saw that $\dim R\leq 1$ from Lemma \[lemng\].
\(1) In the case of $\dim R=0$, we have a short exact sequence $0\to \m \to R \to k \to 0$ and hence we can apply Lemma \[lemng\] (3) $\implies$ (1).
In the case of $\dim R=1$, we may assume $R$ is not a ,. Applying Lemma \[lemng\] to the short exact sequence in Theorem \[thA\] (4), it follows that $R$ is nearly Gorenstein.
\(2) In the case that $\dim R=0$, the statement is proved in [@Striuli-Vraciu Corollary 3.4]. So we may assume $\dim R=1$. Take $x\in \m\setminus \m^2$ a non-zero divisor. Applying $(-)^\dag$ to the exact sequence $0 \to \m \xrightarrow[]{x} \m \to \m/x\m \to 0$, we get an exact sequence $$0 \to \m^\dag \xrightarrow[]{x} \m^\dag \to \Ext^1_R(\m/x\m,\omega) \to 0.$$ This shows that $\m^\dag/x\m^\dag\cong \Ext^1_R(\m/x\m,\omega)\cong \Hom_{R/(x)}(\m/x\m,\omega/x\omega)$. Therefore, by $\m\cong \m^\dag$, $\m/x\m$ is self-dual as an $R/(x)$-module. On the other hand, there is a direct sum decomposition $\m/x\m\cong \overline{\m}\oplus k$ of $R/(x)$-modules, where $\overline{\m}$ is the maximal ideal of $R/(x)$. Since $k$ is self-dual over $R/(x)$, the $R/(x)$-module $\overline{\m}$ must be self-dual by the Krull-Schmidt theorem. $R/(x)$ is G-regular by [@Striuli-Vraciu Corollary 3.4]. It follows from [@Tak Proposition 4.2] that $R$ is also G-regular.
Let $R=k[[t^4,t^5,t^7]]$. Then $R$ is almost Gorenstein local ring of dimension one. Therefore, $R$ is G-regular and nearly Gorenstein. On the other hand, $R$ does not have minimal multiplicity, and hence $\m$ is not self-dual. This shows that the converse of Proposition \[p1\] doesn’t hold true in general.
The associated graded ring $\mathsf{gr}_\m(R)$ of a local ring $(R,\m)$ with self-dual maximal ideal need not be Cohen-Macaulay, for example, $R=k[[t^4,t^5,t^{11}]]$.
We use the notion of minimal faithful modules. The definition of them is given in below.
Let $R$ be a commutative ring. An $R$-module $M$ is called [*minimal faithful*]{} if it is faithful and no proper submodule or quotient module is faithful.
For an artinian local ring $R$, the $R$-module $R$ and a canonical module $\omega$ of $R$ (i.e. injective hull of the residue field) are minimal faithful.
The following is proved by Bergman [@Bergman Corollary 2].
\[bergman\] Let $A,B$ and $C$ be finite-dimensional vector spaces over a field $k$. and $f\colon A\times B\to C$ be a bilinear map. Assume the following conditions.
1. any nonzero element $a$ of $A$ induces a nonzero map $f(-,a)\colon B\to C$
2. any proper submodule $i\colon B'\to B$, there is a nonzero element $a\in A$ such that $f(i(-),a)\colon B'\to C$ is a zero map.
3. any proper quotient module $p\colon C\to C'$ there is a nonzero element $a\in A$ such that the map $p\circ f(-,a)\colon B\to C'$ is a zero map.
Then $\dim_k A\ge \dim_k B+\dim_k C -1$.
\[injhom\] Let $R$ be a commutative ring, $n$ be a positive integer, $M,N$ be $R$-modules and $f=[f_1,\dots,f_n]^t\colon N\to M^{\oplus n}$ be an $R$-homomorphism. Then $f$ is injective if and only if for any nonzero element $a\in \mathsf{soc}(N)$, there exists $i$ such that $f_i(a)\not=0$.
\[injdim\] Let $(R,\m,k)$ be an artinian local ring and $M, N$ be finitely generated faithful $R$-modules. Assume $M$ is minimal faithful. If $n$ is a positive integer such that exists an injective homomorphism $f=[f_1,\dots,f_n]\colon N \to M^{\oplus n}$, then the $k$-subspace $B$ of $\Hom_R(N,M)\otimes_R k$ generated by the image of $f_1,\dots,f_n$ has a dimension exactly equal to $n$ over $k$.
We only need to show $\dim_k B\ge n$. Assume there is a equation $f_1=a_1f_2+\cdots+a_nf_n+g$ for some $a_2,\dots,a_n\in R$ and $g\in \mathfrak{m}\Hom_R(N,M)$. Then for any element $a\in \mathsf{soc}(N)$, $g(a)=0$. So $n\ge 2$ and $f(a)\not=0$ implies there exists $i\ge 2$ such that $f_i(a)\not=0$. This particular says that the homomorphism $[f_2,\dots,f_n]\colon N\to M^{\oplus n-1}$ also an injection by Lemma \[injhom\], which is a contradiction to our assumption on $n$.
The following lemma is a generalization of the result of Gulliksen [@Gull Lemma 2].
\[lemcmf\] Let $(R,\m,k)$ be an artinian local ring and $M, N$ be finitely generated faithful $R$-modules. Assume $M$ is minimal faithful. If there exists an injective homomorphism $N \to M^{\oplus n}$ for some $n$, then $\dim_k \mathsf{soc}(M)\leq\dim_k \mathsf{soc}(N)$ and equality holds if and only if $N\cong M$.
Let $n$ be the minimal integer such that there is an injective map $N \to M^{\oplus n}$. Take a injective map $N \xrightarrow[]{[f_1,\dots,f_n]^t} M^{\oplus n}$ and set $B$ the $k$-subspace of $\Hom_R(N,M)\otimes_R k$ generated by the image of $f_1,\dots,f_n$. Then $\dim_k B=n$ by Lemma \[injdim\]. By letting $A=\mathsf{soc}(N)$ and $C=\mathsf{soc}(M)$, we have a bilinear map $A\times B \to C$ over $k$ satisfying the assumption (1) and (2) of Lemma \[bergman\] in view of Lemma \[injhom\] and \[injdim\]. We also verify the condition (3) of Lemma \[bergman\] as follows. Assume (3) is not satisfied. Then there is a subspace $C'$ of $C$ such that any nonzero element $a$ of $A$ induces a nonzero map $p\circ f(-,a)\colon B\to C/C'$, where $p\colon C\to C/C'$ is the natural surjection. Since $C/C'\subseteq M/C'$ as an $R$-module, we obtain an injective map $g\colon N\xrightarrow[]{q\circ f_1,\dots,q\circ f_n}(M/C')^{\oplus n}$, where $q\colon M\to M/C'$ is also the natural surjection. Since $N$ is faithful, there is an injective map $h$ from $R$ to some copies $N^{\oplus m}$ of $N$. Taking a composition of $h$ and $g^{\oplus m}$, one has an injective map from $R$ to $(M/C')^{\oplus mn}$. In particular, $M/C'$ is a faithful $R$-module, which contradicts the assumption that $M$ is minimal faithful.
Therefore, we can apply Lemma \[bergman\] and get an equality $\dim A\geq \dim B+\dim C-1$. It follows that $\dim \mathsf{soc}(N)-\dim \mathsf{soc}(M)\geq n-1\geq 0$. If the equalities hold, then $n=1$ and $N$ is isomorphic to a submodule of $M$. By the minimality of $M$, one has $N\cong M$.
We also give some basic properties of minimal faithful modules.
\[lemmf\] Let $(R,\m,k)$ be an artinian local ring. Then
1. Any minimal faithful $R$-module is indecomposable.
2. Assume $R$ has Cohen-Macaulay type at most three. Then $\ell_R(R)\leq \ell_R(M)$ for all faithful $R$-module $M$. In particular, a faithful $R$-module $M$ is minimal faithful if $\ell_R(M)=\ell_R(R)$.
(1): Let $M$ be a minimal faithful $R$-module, and assume that $M$ decomposes as direct sum $M=M_1\oplus M_2$ of $R$-modules. the faithfulness of $M$ yields that $\ann(M_1)\cap\ann(M_2)=0$. Take minimal generators $x_1,\dots,x_n$ of $M_1$ and $y_1,\dots,y_m$ of $M_2$. Without loss of generality, we may assume $n\le m$. Then the submodule $N$ of $M=M_1\oplus M_2$ generated by the elements $x_1+y_1,\dots,x_n+y_n,0+y_{n+1},\dots,0+y_m$ is proper and faithful. This contradicts that $M$ is minimal faithful. (2): This follows by [@Gull Theorem 1].
Let $(R,\m,k)$ be a commutative ring. A fractional ideal $I$ of $R$ is called [*closed*]{} [@BV] if the natural homomorphism $R\to \Hom_R(I,I)$ is an isomorphism.
Let $(R,\m,k)$ be a one-dimensional Cohen-Macaulay local ring. Set $B=\m:\m$. Then $\m$ is closed as a fractional ideal of $B$.
Let $(R,\m,k)$ be a one-dimensional Cohen-Macaulay local ring having a canonical module and $I$ be a fractional ideal of $R$. Then the following are equivalent.
- $I$ is closed.
- $I^\dag$ is closed.
- There is a surjective homomorphism $I^{\oplus n}\to \omega$ for some $n$.
- There is a short exact sequence $0\to R \to I^{\oplus n} \to M \to 0$ for some $n$ and maximal Cohen-Macaulay $R$-module $M$.
See [@BV Proposition 2.1]. Note that (4) follows by the canonical dual of (3), since $I$ is maximal Cohen-Macaulay as an $R$-module.
Let $(R,\m,k)$ be a Cohen-Macaulay local ring of dimension one. Put $B=\m:\m$. Assume $k$ is infinite.
- If $B$ is local with Cohen-Macaulay type two and $R$ is nearly Gorenstein, then $R$ is almost Gorenstein and does not satisfy $\m\cong \m^\dag$.
- If $B$ is local with Cohen-Macaulay type three and $R$ is nearly Gorenstein, then either $R$ is almost Gorenstein or $\m\cong \m^\dag$.
Take a minimal reduction $(t)$ of $\m_B$. Since $\m$ and $\m^\dag$ has constant rank one as a $B$-module, $\ell_B(B/tB)=e(B)=\ell_B(\m/t\m)=\ell_B(\m^\dag/t\m^\dag)$. As $B/tB$ has Cohen-Macaulay type less than or equal to three in both case (1) and (2), Lemma \[lemmf\] ensures that $\m/t\m$ and $\m^\dag/t\m^\dag$ are minimal faithful over $B/tB$. Consider the exact sequence $$0 \to \m^\dag \xrightarrow{\phi} \m^{\oplus n} \to M \to 0$$ as in Lemma \[lemng\]. Then $\phi\in \Hom_R(\m^\dag,\m^{\oplus n})=\Hom_B(\m^\dag,\m^{\oplus n})$. Therefore $M=\coker \phi$ is also a $B$-module and it is torsion-free over $B$ as well as over $R$. Moreover, the above sequence is an exact sequence of $B$-modules and $B$-homomorphisms. Tensoring $B/tB$ to this sequence, we have a short exact sequence $$\label{seq}
0 \to \m^\dag/t\m^\dag \xrightarrow{\phi\otimes B/tB} (\m/t\m)^{\oplus n} \to M/tM \to 0$$ of $B/tB$-modules.
(1): Applying Lemma \[lemcmf\] to the sequence (\[seq\]), we obtain the inequalities $$1\leq r_B(\m) \leq r_B(\m^\dag) < r_B(B)=2.$$ So one has either $\mathsf{soc}(\m/t\m)=1$ or $\mathsf{soc}(\m^\dag/t\m^\dag)=2$. In the former case, $\m^\dag$ must be a cyclic $B$-module and hence $\m^\dag\cong B$. $R$ is almost Gorenstein. In the case $\mathsf{soc}(\m^\dag/t\m^\dag)=2$, $\m$ must be a cyclic $B$-module by a similar argument. Thus we have $\m\cong B$. Then $R$ has minimal multiplicity. $B$ has type one. contradiction.
(2): Applying Lemma \[lemcmf\] to the sequence (\[seq\]), we obtain the inequalities $$1\leq r_B(\m) \leq r_B(\m^\dag) < r_B(B)=3.$$ In the case $1=r_B(\m)$, it follows by same argument as in (1) that $\m^\dag\cong B$ and $R$ is almost Gorenstein. So we only need to consider the case $r_B(\m)=r_B(\m^\dag)=2$. In this case, $\m/t\m$ should be isomorphic to $\m^\dag/t\m^\dag$ by lemma \[lemcmf\]. Put $\phi=[\phi_1,\dots,\phi_n]^t\colon \m^\dag\to\m^{\oplus n}$ and so $\phi\otimes B/tB=[\phi_1\otimes B/tB,\dots,\phi_n\otimes B/tB]^t$. Consider the canonical dual $(\phi\otimes B/tB)^\dag\colon (\m/t\m)^{\dag\oplus n}\to (\m/t\m)$, which is surjective. Since $\m/t\m$ is indecomposable (Lemma \[lemmf\]), the Nakayama’s lemma indicates that $\mathsf{jac}(\End(\m/t\m))\cdot(\m/t\m)\not= \m/t\m$. Therefore, one of the endomorphism $(\phi_1\otimes B/tB)^\dag,\dots,(\phi_n\otimes B/tB)^\dag$ of $\m/t\m$ must be not contained in $\mathsf{jac}(\End(\m/t\m))$, otherwise $(\phi\otimes B/tB)$ cannot be surjective. This means that one of the $\phi_1\otimes B/tB,\dots,\phi_n\otimes B/tB$ is an isomorphism. Say $\phi_i\otimes B/tB$ is an isomorphism. Then the $B$-homomorphism $\phi_i\colon \m^\dag \to \m$ is also surjective. Both $\m$ and $\m^\dag$ have constant rank, $\phi_i$ must be an isomorphism. This shows that $\m\cong \m^\dag$.
Let $(R,\m,k)$ be a complete Cohen-Macaulay local ring of dimension one with a canonical module. Assume $B\coloneqq\m:\m$ is local and $k$ is infinite. If $R$ is nearly Gorenstein with multiplicity $e(R)\leq 4$, then either $R$ is almost Gorenstein or $\m\cong \m^\dag$.
Take a minimal reduction $(t)$ of $R$. Then the multiplicity $e(\m,B)=\ell_R(B/tB)$ of $B$ as an $R$-module is equal to $4$, provided $B$ has a constant rank as an $R$-module. Deduce $$4=\ell_R(B/tB)\geq \ell_B(B/tB)\geq r(B),$$ where $r(B)$ is the Cohen-Macaulay type of $B$. Now we can apply Theorem \[th313\] and attain the desired statement.
Let $R=k[[t^5,t^6,t^7]]$. Then $R$ is nealy Gorenstein and has multiplicity $5$, however, neither $R$ is almost Gorenstein nor $\m\cong \m^\dag$.
Numerical semigroup rings
=========================
In this section, we deal with the numerical semigroup rings $(R,\m)$ having an isomorphism $\m\cong \m^\dag$. We begin the section with recalling preliminaries on numerical semigroup. Let $H\subsetneq \mathbb{N}$ be a numerical semigroup. The set of [*pseudo-Frobenius numbers*]{} $\mathsf{PF}(H)$ of $H$ is consisting of integers $a\in \mathbb{N}\setminus H$ such that $a+b\in H$ for any $b\in H\setminus\{0\}$. Then the maximal element $\mathsf{F}(H)$ of $\mathsf{PF}(H)$ is the Frobenius number of $H$. Set $H'=H\cup\{\mathsf{F}(H)\}$. Then $H'$ is also a numerical semigroup. A numerical semigroup of the form $H'=H\cup \{\mathsf{F}(H)\}$ for some symmtric numerical semigroup $H$ is called [*UESY-semigroup (unitary extension of a symmetric semigroup)*]{}, which is introduced in [@Rosales]. Note that $\mathsf{F}(H)$ is a minimal generator of $H'=H\cup \{\mathsf{F}(H)\}$. For a numerical semigoup $H$ and a field $k$, the [*numerical semigoup ring*]{} of $H$ over $k$ is the subring $k[[\{t^a\mid a\in H\}]]$ of $k[[t]]$, where $t$ is an indeterminate.
Let $H$ be a numerical semigroup, $k$ is an infinite field and $(R,\m)$ is the numerical semigroup ring $k[[H]]$. Then the following are equivalent.
- $\m$ is self-dual.
- $H$ is a UESY-semigroup.
\(1) $\implies$ (2): In the case that $H$ is symmetric, $e(R)\le 2$. Then it can easily shown that $H$ is UESY.
We may assume that $H$ is not symmetric. By Theorem \[thA\], there is a Gorenstein local subring $(S,\mathfrak{n})$ of $R$ such that $R=\mathfrak{n}:\mathfrak{n}$. Take a value semigroup $v(S)$ of $S$, where $v$ is the normalized valuation of $k[[t]]$, that is, $v$ takes $t$ to $1\in\mathbb{Z}$. Then $H=v(R)$, and $v(S)$ is symmetric by the result of Kunz [@Kunz]. Since $R\mathfrak{n}\subset \mathfrak{n}$, $v(S)\subseteq H \subseteq v(S)\cup \{F(v(S))\}$. Therefore, $H$ should be equal to $v(S)\cup \{F(v(S))\}$. In particular, $H$ is UESY.
\(2) $\implies$ (1): Describe $H$ as $H=H'\cup \{\mathsf{F}(H')\}$ with a symmtric numerical semigoup $H'$. Set $S=k[[H']]$. Then $\End_S(\m_S)$ is isomorphic to $R$. Thus by our theorem (Theorem \[thA\]), the maximal ideal $\m$ of $R$ is self-dual.
Let $H=\langle a_1,\dots,a_n\rangle$ be a symmetric numerical semigroup minimally generated by $\{a_i\}$ with $2<a_1<a_2<\cdots<a_n$ and $H'\coloneqq H\cup\{\mathsf{F}(H)\}$. Put $S=k[[H]]$ over an infinite field $k$ and $R=k[[H']]$. Then the maximal ideal of $R$ is quasi-decomposable.
Denote by $\m_R$ the maximal ideal of $R$. We will prove that the maximal ideal $\m_R/(t^{a_1})$ of $R/(t^{a_1})$ has a direct summand $I$ generated by the image of $t^{\mathsf{F}(H)}$, and $I\cong k$ as an $R$-module. Since $t^{\mathsf{F}(H)}$ is a minimal generator of $\m_R$, it is enough to show that $\m_Rt^{\mathsf{F}(H)}\subseteq t^{a_1}R$. So what we need to show is that $\mathsf{F}(H)+a_i-a_1\in H$ for all $i\not=1$ and $2\mathsf{F}(H)-a_1 \in H$. These follow by the fact that $\mathsf{F}(H)$ is the largest number in $\mathbb{N}\setminus H$ and the inequalities $a_i-a_i>0$ and $\mathsf{F}(H)-a_1>0$.
Further characterizations
=========================
The goal of this section is to give characterizations of local rings $R$ such that there exists a one-dimensional local hypersurface $(S,\mathfrak{n})$ such that $R\cong \End_S(\mathfrak{n})$.
Let $(R,\m)$ be a Cohen-Macaulay local ring of dimension one. Assume that $R$ has a canonical module and infinite coefficient field $k$. Then the followings are equivalent.
1. There is a local hypersurface $(S,\mathfrak{n})$ such that $R\cong \End_S(\mathfrak{n})$.
2. $e(R)\le 2$, or $R$ has type $2$ and a canonical ideal $I$ such that $I^2=\m I$ and $\ell_R(R/I)=2$.
3. $e(R)\le 2$, or $R$ has embedding dimension $3$, and a canonical ideal $I$ such that $I^2=\m^2$.
(1)$\implies$(2): Assume $e(R)>2$ and $R$ satisfies (1). Then $R$ is not Gorenstein, and $I\coloneqq \mathfrak{n}$ is a canonical ideal of $R$. Since $S$ is a hypersurface and not a ,, $\ell_R(I/I^2)=\ell_S(\mathfrak{n}/\mathfrak{n}^2)=2$. It forces the equality $I^2=\m I$, since $I$ is not a principal ideal.
(2)$\implies$(1): Consider the case that $e(R)\le 2$. Then by the proof of Corollary \[co1\] (1) $\implies$ (6), there is a Gorenstein local ring $(S,\mathfrak{n})$ such that $R\cong \End_S(\mathfrak{n})$ and $e(S)=\edim S$. In particular, $e(S)\le 2$ and $S$ is a hypersurface. Now consider the case that $R$ has type 2 and a canonical ideal $I$ such that $I^2=\m I$ and $\ell_R(R/I)=2$. One has equalities $\ell_R(I/I^2)=\ell_R(I/\m I)=2$. Put $S\coloneqq k+I$. Then $S$ is Gorenstein local with a maximal ideal $\mathfrak{n}\coloneqq I$, and $R\cong \End_S(\mathfrak{n})$ (Lemma \[lmlmlm\]). We can compute the embedding dimension $\edim S$ as follows: $$\edim S=\ell_S(\mathfrak{n}/\mathfrak{n}^2)=\ell_R(I/I^2)=2.$$ Therefore, $S$ should be a hypersurface. (2)$\implies$ (3): We may assume $R$ has type $2$. By the implication (2)$\implies$(1), we can calculate the embedding dimension of $R$ as $\edim R\le \edim S+1=3$, where $(S,\mathfrak{n})$ is a hypersurface with $R\cong \End_S(\mathfrak{n})$. Since $R$ is not Gorenstein, $\edim R$ should be equal to $3$. This means $\ell_R(\m/\m^2)=3$. On the other hand, one has $$\ell_R(\m/I^2)=\ell_R(\m/I)+\ell_R(I/I^2)=1+\ell_R(I/\m I)=1+2=3.$$ So the inclusion $I^2\subseteq \m^2$ yields that $I^2=\m^2$. The direction (3)$\implies$(2) also follows by similar calculations.
For a Cohen-Macaulay local ring $(R,\m)$ of dimension one, when is there a local complete intersection $(S,\mathfrak{n})$ with an isomorphism $R\cong \End_S(\mathfrak{n})$?
The author is grateful to his supervisor Ryo Takahashi for giving him kind advice throughout the paper, and to Osamu Iyama for his helpful comments on Theorem \[thA\]. The author is also grateful to Luchezar Avramov for useful comments.
[1]{} , The Gorenstein colength of an Artinian local ring. [*J. Algebra*]{} [**320**]{} (2008), no. 9, 3438-–3446. , Connected sums of Gorenstein local rings. [*J. Reine Angew. Math.*]{} [**667**]{} (2012), 149–-176. , On the ubiquity of Gorenstein rings, [*Math. Zeitschrift*]{} [**82**]{} (1963), 8-–28. , Minimal faithful modules over Artinian rings. [*Publ. Mat.*]{} [**59**]{} (2015), no. 2, 271-–300. , On the structure of closed ideals, [*Math. Scand.*]{} [**88**]{} (2001), no. 1, 3–16. , Rings with canonical reduction. `arXiv:1712.00755`. , On Teter rings. [*Proc. Roy. Soc. Edinburgh Sect. A*]{} [**147**]{} (2017), no. 1, 125–-139. , Invariants of Cohen–Macaulay rings associated to their canonical ideals. [*Journal of Algebra*]{}, [**489**]{}, 506–528. , Almost Gorenstein rings. [*J. Algebra*]{} [**379**]{} (2013), 355–-381. , On the length of faithful modules over Artinian local rings, [*Math. Scand.*]{} [**31**]{} (1972) 78–82. , The trace of the canonical module. arXiv:1612.02723. .[C. Huneke; A. Vraciu]{}, Rings which are almost Gorenstein. [*Pacific J. Math.*]{}, [**225**]{} (2006) no. 1, 85–102. , Syzygies of Cohen-Macaulay modules over one dimensional Cohen-Macaulay local rings, Preprint (2017), `arXiv:1710.02673`. , The value-semigroup of a one-dimensional Gorenstein ring, [*Proc. Amer. Math. Soc.*]{} [**25**]{} (1970), 748–751. , Totally reflexive modules over rings that are close to Gorenstein. `arXiv:1705.05714`. , Cohen-Macaulay Representations, Mathematical Surveys and Monographs, vol. 181, [*American Mathematical Society, Providence, RI*]{}, 2012. , Local rings with quasi-decomposable maximal ideal, [*Math. Proc. Cambridge Philos. Soc.*]{} (to appear). , On the self-dual maximal Cohen-Macaulay modules, [*J. Pure Appl. Algebra*]{} [**106**]{} (1996), no. 1, 93–102. , Numerical semigroups that differ from a symmetric numerical semigroup in one element. [*Algebra Colloq.*]{} [**15**]{} (2008), no. 1, 23-–32. , Tangent cones at Gorenstein singularities, [*Comp. Math.*]{} [**40**]{} (1980), 167–175. , Cohen-Macaulay local rings of embedding dimension e+d−2, [*J. Algebra*]{} [**83**]{} (1983), 393–408. , Some homological properties of almost Gorenstein rings, Commutative algebra and its connections to geometry, 201-–215, Contemp. Math., [**555**]{}, Amer. Math. Soc., Providence, RI, 2011. , On G-regular local rings, [*Comm. Algebra*]{} [**36**]{} (2008), 4472–-4491. , Rings which are a factor of a Gorenstein ring by its socle. [*Inventione Math*]{}, [**23**]{} (1974), 153–162. , Cohen-Macaulay modules over Cohen-Macaulay rings, London Mathematical Society Lecture Note Series, 146, [*Cambridge University Press, Cambridge*]{}, 1990.
[^1]: 2010 [*Mathematics Subject Classification.*]{} 13C14, 13E15, 13H10
[^2]: [*Key words and phrases.*]{} Gorenstein ring, canonical ideal, birational extension
[^3]: The auther was supported by JSPS Grant-in-Aid for JSPS Fellows 18J20660.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'For an algebraic set $X$ (union of varieties) embedded in projective space, we say that $X$ satisfies property $\textbf{N}_{d,p}$, $(d\ge 2)$ if the $i$-th syzygies of the homogeneous coordinate ring are generated by elements of degree $< d+i$ for $0\le i\le p$ (see [@EGHP2] for details). Much attention has been paid to linear syzygies of quadratic schemes $(d=2)$ and their geometric interpretations (cf. [@AK],[@EGHP1],[@HK],[@GL2],[@KP]). However, not very much is actually known about the case satisfying property $\textbf{N}_{3,p}$. In this paper, we give a sharp upper bound on the maximal length of a zero-dimensional linear section of $X$ in terms of graded Betti numbers (Theorem \[3-regulr-multisecants\] (a)) when $X$ satisfies property $\textbf{N}_{3,p}$. In particular, if $p$ is the codimension $e$ of $X$ then the degree of $X$ is less than or equal to $\binom{e+2}{2}$, and equality holds if and only if $X$ is arithmetically Cohen-Maucalay with $3$-linear resolution (Theorem \[3-regulr-multisecants\] (b)). This is a generalization of the results of Eisenbud et al. ([@EGHP1; @EGHP2]) to the case of ${\ensuremath{\mathrm{\bf N}}}_{3,p}$, $(p\leq e)$.'
address:
- 'Department of Mathematics Education, Kongju National University, 182, Shinkwan-dong, Kongju, Chungnam 314-701, Republic of Korea'
- 'Department of Mathematics, Korea Advanced Institute of Science and Technology, 373-1 Gusung-dong, Yusung-Gu, Daejeon, Korea'
author:
- 'Jeaman Ahn and Sijong Kwak${}^{*}$'
title: 'On Syzygies, degree, and geometric properties of projective schemes with property $\textbf{N}_{3,p}$'
---
[^1] [^2]
[^3]
Introduction
============
Throughout this paper, we will work with a non-degenerate reduced algebraic set (union of varieties) $X$ of dimension $n$ and codimension $e$ in $\P^{n+e}$ defined over an algebraically closed field $k$ of characteristic zero. We write $I_X:=\bigoplus_{m=0}^{\infty} H^0(\mathcal I_X(m))$ for the defining ideal of $X$ in the polynomial ring $R=k[x_0, x_1 ,\ldots, x_{n+e}]$.
It is known that if $X$ is a variety then $X$ satisfies the condition $\deg(X)\geq e+1$ and if equality holds then we say that $X$ has minimal degree. Del Pezzo [@DP] classified surfaces of minimal degree and Bertini [@Ber] extended the classfication to all dimensions. This classification was again extended to equidimenisonal algebraic sets that are connected in codimension one by Xambó [@X]. In this case, they are all 2-regular (in the sense of Castelnuovo-Mumford) and vice versa ([@EG]).
For more general algebraic sets, the inequality $\deg(X)\geq e+1$ does not hold. A simple example is a set of two skew lines in $\P^3$, which is $2$-regular. In [@EGHP1 Theorem 0.4] authors give a classification of $2$-regular algebraic sets in terms of “[*smallness*]{}”. This means that for every linear subspace $\Lambda$ such that $\Lambda\cap X$ is finite, the scheme $\Lambda\cap X$ is linearly independent. From this result if $X$ is $2$-regular then $\deg(X)\leq e+1$ and the extremal degree holds if and only if $X$ is a reduced aCM scheme whose defining ideal has $2$-linear resolution. Recall that $2$-regularity means the syzygies of $I_X$ are all linear. In a different paper [@EGHP2] carried out by the same authors, they introduce the notion ${\ensuremath{\mathrm{\bf N}}}_{2,p}$ (i.e. the syzygies of $I_X$ are linear for $p$ steps) and show that if $X$ satisfies the property ${\ensuremath{\mathrm{\bf N}}}_{2,p}$ then for every linear subspace $\Lambda$ of dimension $\leq p$ such that $\Lambda\cap X$ is finite, the scheme $\Lambda\cap X$ is $2$-regular and $\deg(\Lambda\cap X)\leq \dim\Lambda+1$ (See [@EGHP2 Theorem 1.1]). Since a small algebraic set is $2$-regular, this inequality implies that if $X$ satisfies the property ${\ensuremath{\mathrm{\bf N}}}_{2,e}$ then $X$ is $2$-regular [@EGHP2 Corollary 1.8].
Summing up these results, we have the following theorem:
\[Eisenbud et al\] Let $X\subset \P^{n+e}$ be a non-degenerate algebraic set (union of varieties) of dimension $n$ and codimension $e$. Suppose that $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{2,p}$.
- For a linear subspace $\Lambda$ of dimension $\leq p$, if $X\cap \Lambda$ is finite then $$\deg(X\cap \Lambda)\leq \dim\Lambda+1;$$
- In particular, if $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{2,e}$ (i.e. $X$ is $2$-regular) then $\deg(X)\leq e+1$ and the extremal degree holds if and only if $X$ is a reduced aCM scheme whose defining ideal has $2$-linear resolution.
In this context, it is natural to ask what we can say about algebraic sets satisfying property ${\ensuremath{\mathrm{\bf N}}}_{3,\alpha}$ with the Green-Lazarsfeld index $p$. (i.e. $p$ is the largest $k\ge 0$ such that $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{2,k}$. See [@BCR] for the definition). Although linear syzygies of quadratic schemes and their geometric properties have been understood by many authors (cf. [@AK; @AR; @EGHP1; @HK; @GL2; @KP]), not very much is actually known about the geometric properties of algebraic sets satisfying $\textbf{N}_{3,\alpha}$.
This paper is a starting point for a generalization of Theorem \[Eisenbud et al\] to the case of ${\ensuremath{\mathrm{\bf N}}}_{3,\alpha}$. Our main result is the following:
\[3-regulr-multisecants\] Let $X$ be a non-degenerate algebraic set in $\P^{n+e}$ of codimension $e\ge 1$ satisfying *property* $\textbf{N}_{3,\alpha}$ with the Green-Lazarsfeld index $p$. For a linear space $L^{\alpha}$ of dimension $\alpha\leq e$,
- if $X\cap L^{\alpha}$ is a finite scheme then, $$\deg(X\cap L^{\alpha})\leq 1+ \alpha+\min\left\{\frac{|\alpha-p|(\alpha+p+1)}{2}, \beta^R_{\alpha,2}(R/I_X)\right\};$$
- In particular, if $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ then $\deg(X)\leq \binom{e+2}{2}$ and the extremal degree holds if and only if $X$ is a reduced aCM scheme whose defining ideal has $3$-linear resolution. (hence 3-regular).
There is an algebraic set satisfying property ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ that is not $3$-regular (see Example \[example:2014\]). This means the condition ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ does not imply $3$-regularity. Besides, we do not know a nice characterization of algebraic sets having $3$-linear resolution corresponding to the case of $2$-linear resolution ([@EGHP1 Theorem 0.4]). Nevertheless, our result is just such a generalization of Theorem \[Eisenbud et al\] to the case of ${\ensuremath{\mathrm{\bf N}}}_{3,\alpha}$ with an alternative perspective and approach.
To achieve the result, we use the elimination mapping cone construction for graded modules and apply it to give a systematic approach to the relation between multisecants and graded Betti numbers. From the maximal bound for the length of finite linear sections of algebraic sets satisfying property ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ (in terms of the graded Betti numbers), the extremal cases can be characterized by the combinatorial property of the syzygies of generic initial ideals.
We also provide some illuminating examples of our results and corollaries via calculations done with [*Macaulay 2*]{} [@GS].
Preliminaries
=============
Notations and Definitions
-------------------------
For precise statements, we begin with notations and definitions used in the subsequent sections:
- We work over an algebraically closed field $k$ of characteristic zero.
- Unless otherwise stated, $X$ is a non-degenerate, reduced algebraic sets (union of varieties) of dimension $n$ in $\P^{n+e}$.
- For a finitely generated graded $R=k[x_0, x_1,\ldots,x_{n}]$-module $M=\bigoplus_{\nu \geq 0}M_{\nu}$, consider a minimal free resolution of $M$: $$\cdots \rightarrow \oplus_j R(-i-j)^{\beta^{R}_{i,j}(M)}\rightarrow\cdots\rightarrow\oplus_j R(-j)^{\beta^{R}_{0,j}(M)}\rightarrow M\rightarrow 0$$ where $\beta^R_{i,j}(M):=\dim_k{\operatorname{Tor}}_i^{R}(M,k)_{i+j}$. We write $\beta^R_{i,j}(M)$ as $\beta^R_{i,j}$ if it is obvious. We define the regularity of $M$ as follows: $${\operatorname{reg}}_R(M):= \max\{j\mid \beta^R_{i,j}(M)\neq 0 \text{ for some } i\}$$ In particular, ${\operatorname{reg}}(X):= {\operatorname{reg}}_R(I_X)$.
- One says that $M$ satisfies [*property ${\ensuremath{\mathrm{\bf N}}}^R_{d,\alpha}$*]{} if $\beta^R_{i,j}(M)=0$ for all $j\ge d$ and $0\le i\le\alpha$. We can also think of $M$ as a graded $S_t=k[x_t,\ldots,x_{n}]$-module by an inclusion map $S_t \hookrightarrow R$. As a graded $S_t$-module, we say that $M$ satisfies [*property ${\ensuremath{\mathrm{\bf N}}}^{S_t}_{d,\alpha}$*]{} if $\beta^{S_t}_{i,j}(M):=\dim_k{\operatorname{Tor}}_i^{S_t}(M,k)_{i+j}=0$ for all $j\ge d$ and $0\le i\le \alpha$.
- For an algebraic set $X$ in $\P^{n+e}$, the Green-Lazarsfeld index of $X$, denoted by $\text{index}_{\rm GL}(X)$, is the largest $\alpha\ge 0$ such that $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{2,\alpha}$.
Elimination Mapping Cone Construction
-------------------------------------
For a graded $R$-module $M$, consider the natural multiplicative $S_1=k[x_1,x_2,\ldots,x_{n}]$-module map $\varphi: M(-1) \stackrel{\times x_0}{\longrightarrow} M$ such that $\varphi(m)=x_0\cdot m$ and the induced map on the graded Koszul complex of $M$ over $S_1$: $$\overline{\varphi}: \Bbb F_{\bullet}=K_{\bullet}^{S_1}(M(-1)) \stackrel{\times x_0}{\longrightarrow} \Bbb G_{\bullet}=K_{\bullet}^{S_1}(M).$$
Then, we have the mapping cone $(C_{\bullet}(\overline{\varphi}),{\partial}_{\,\,\overline{\varphi}})$ such that $C_{\bullet}(\overline{\varphi})=\Bbb G_{\bullet}\bigoplus\Bbb F_{\bullet}[-1],$ and $W=\langle x_1, x_2,\ldots,x_{n} \rangle;$
- $C_i(\overline{\varphi})_{i+j}=[\Bbb G_{i}]_{i+j}\bigoplus
[\Bbb F_{i-1}]_{i+j}=\left(\wedge^i W\otimes M_j\right)\oplus \left(\wedge^{i-1} W\otimes M_j\right)$.\
- the differential ${\partial}_{\,\,\overline{\varphi}}: C_i(\,\,\overline{\varphi})
\rightarrow C_{i-1}(\overline{\varphi})$ is given by $${\partial}_{\,\,\overline{\varphi}}=\left(\begin{array}{lr}
{\partial}& \overline{\varphi} \\
0 & -{\partial}\\
\end{array}\right),$$ where ${\partial}$ is the differential of Koszul complex $K_{\bullet}^{S_1}(M)$.
From the exact sequence of complexes $$0\longrightarrow \Bbb G_{\bullet} \longrightarrow C_{\bullet}(\overline{\varphi}) \longrightarrow \Bbb F_{\bullet}[-1]\longrightarrow 0$$ and the natural isomorphism $H_i({C_{\bullet}(\overline{\varphi})})_{i+j}\simeq {\operatorname{Tor}}_i^R(M,k)_{i+j}$ (cf. Lemma 3.1 in [@AK]), we have the following long exact sequence in homology.
\[Thm:Mapping\_cone\_theorem\] For a graded $R$-module $M$, there is a long exact sequence: $$\begin{array}{ccccccccccccc}
{\longrightarrow}{\operatorname{Tor}}_{i}^{S_1}(M,k)_{i+j}{\longrightarrow}{\operatorname{Tor}}_{i}^R(M,k)_{i+j}{\longrightarrow}{\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{i-1+j}{\longrightarrow}& &&
\\[2ex]
{\stackrel{\delta=\times x_0}{\rightarrow}}{\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{i-1+j+1}{\longrightarrow}{\operatorname{Tor}}_{i-1}^R(M,k)_{i-1+j+1}{\longrightarrow}{\operatorname{Tor}}_{i-2}^{S_1}(M,k)_{i-2+j+1}&
\end{array}$$ whose connecting homomorphism $\delta$ is the multiplicative map $\times\, x_0$.
\[projective dimension\] Let $M$ be a finitely generated graded $R$-module and also finitely generated as an $S_1$-module. Then, $${\operatorname{proj.dim}}_{S_1}(M)={\operatorname{proj.dim}}_R(M)-1.$$
Let $\ell={\operatorname{proj.dim}}_R(M)$. Thus, $\beta^{R}_{\ell+1,j}(M)=0$ for all $j\ge 1$ and the following map $\delta=\times x_0$ is injective for all $j\ge 1$: $$0={\operatorname{Tor}}_{\ell +1}^{R}(M,k)_{\ell+1+j} {\rightarrow}{\operatorname{Tor}}_{\ell}^{S_1}(M,k)_{\ell+j} {\stackrel{\delta=\times x_0}{\rightarrow}} {\operatorname{Tor}}_{\ell}^{S_1}(M,k)_{\ell+j+1}.$$ But, ${\operatorname{Tor}}_{\ell}^{S_1}(M,k)_{\ell+j+1}=0$ for $j>>0$ due to finiteness of $M$ (as an $S_1$-module). Therefore, ${\operatorname{Tor}}_{\ell}^{S_1}(M,k)_{\ell+j}=0$ for all $j\ge 1$. On the other hand, $\beta^{R}_{\ell,j_{*}}(M)\neq 0$ for some $j_{*}>0$. So, $$0={\operatorname{Tor}}_{\ell}^{S_1}(M,k)_{\ell+j_{*}} {\rightarrow}{\operatorname{Tor}}_{\ell}^{R}(M,k)_{\ell+j_{*}}{\stackrel{}{\rightarrow}} {\operatorname{Tor}}_{\ell-1}^{S_1}(M,k)_{\ell-1+j_{*}}$$ is injective and $\beta^{S_1}_{\ell-1,j_{*}}(M)\neq 0$. Consequently, we get $${\operatorname{proj.dim}}_{S_1}(M)={\operatorname{proj.dim}}_R(M)-1,$$ as we wished.
\[prop: N\_dp\_as\_S\_module\] Let $M$ be a finitely generated graded $R$-module satisfying property ${\ensuremath{\mathrm{\bf N}}}^R_{d,\alpha}, (\alpha\geq 1)$. If $M$ is also finitely generated as an $S_1$-module, then we have the following:
- $M$ satisfies property ${\ensuremath{\mathrm{\bf N}}}^{S_1}_{d,\alpha-1}$. In particular, ${\operatorname{reg}}_{S_1}(M)={\operatorname{reg}}_{R}(M)$.
- $\beta^{S_1}_{i-1,d-1}(M)\leq \beta^{R}_{i,d-1}(M)$ for $1\le i\le \alpha$.
Suppose that $M$ satisfies ${\ensuremath{\mathrm{\bf N}}}^R_{d,\alpha}, (\alpha\geq 1)$ and let $1\le i\le \alpha$ and $j\geq d$.\
(a): Consider the exact sequence from Theorem \[Thm:Mapping\_cone\_theorem\] $$\begin{array}{ccccccccccccc}
\cdots {\rightarrow}{\operatorname{Tor}}_{i}^R(M,k)_{i+j}& {\rightarrow}{\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{i-1+j}&{\stackrel{\delta=\times x_0}{\rightarrow}}&&
\\[2ex]
&{\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{i-1+j+1}&{\rightarrow}& {\operatorname{Tor}}_{i-1}^R(M,k)_{i-1+j+1}&{\rightarrow}& \cdots
\end{array}$$ By the property $N^R_{d,\alpha}$, we see that ${\operatorname{Tor}}_{i}^R(M,k)_{i+j}=0$. Hence we obtain an isomorphism $${\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{(i-1)+j}{\stackrel{\delta=\times x_0}{\rightarrow}}{\operatorname{Tor}}_{i-1}^{S_1}(M,k)_
{i+j}.$$ By assumption that $M$ is a finitely generated $S_1$-module, we conclude that ${\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{(i-1)+j}=0$ for $1\le i\le \alpha $ and $j\geq d$. Hence $M$ satisfies ${\ensuremath{\mathrm{\bf N}}}^{S_1}_{d,\alpha-1}$. If $\alpha=\infty$, we have that ${\operatorname{reg}}_{S_1}(M)={\operatorname{reg}}_{R}(M)$.\
(b): Note that we have the following surjection map for $1\le i\le \alpha$ $${\operatorname{Tor}}_{i}^{R}(M,k)_{i+d-1} {\rightarrow}{\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{i-1+d-1} {\stackrel{\delta=\times x_0}{\rightarrow}} {\operatorname{Tor}}_{i-1}^{S_1}(M,k)_{i-1+d}=0,$$ which is obtained from Theorem \[Thm:Mapping\_cone\_theorem\]. This implies that for $1\le i\le \alpha$ $$\begin{aligned}
\beta^{S_1}_{i-1,d-1}(M)\leq \beta^R_{i,d-1}(M)\end{aligned}$$ as we wished.
\[Remk:1\] Let $M$ be a finitely generated graded $R$-module satisfying property $\textbf {N}^{R}_{d,\alpha}$ for some $\alpha\geq 1$. If $M$ is also finitely generated as an $S_t=k[x_{t}, x_{t+1}\ldots,x_n]$-module for every $1\leq t\leq \alpha$ then $M$ satisfies property $\textbf {N}^{S_t}_{d,\alpha-t}$. Moreover, in the strand of $j=d-1$, we have the inequality $$\beta^{S_{\alpha}}_{0,d-1}~\le \beta^{S_{\alpha-1}}_{1,d-1}~\le \cdots\le \beta^{S_1}_{\alpha-1,d-1}~\le \beta^R_{\alpha,d-1},$$ which follows from Proposition \[prop: N\_dp\_as\_S\_module\] (b).
The most interesting case is a projective coordinate ring $M=R/I_X$ of an algebraic set $X$. In this case, the elimination mapping cone theorem is naturally associated to outer projections of $X$. Our starting point is to understand some algebraic and geometric information on $X$ via the relation between ${\operatorname{Tor}}_i^R(R/I_X,k)$ and ${\operatorname{Tor}}_i^{S_{\alpha}}(R/I_X,k)$.
Let $X$ be a non-degenerate algebraic set of dimension $n$ in $\P^{n+e}$. Let $\Lambda=\P^{\alpha-1}$ be an $(\alpha-1)$-dimensional linear subspace with homogeneous coordinates $x_0, \ldots, x_{\alpha-1}$, $(\alpha\le e)$ such that $\Lambda\cap X$ is empty. Then each point $q_i=[0:0:\cdots:1:\cdots:0]$ whose $i$-th coordinate is $1$ is not contained in $X$ for $0\le i \le \alpha-1$. Therefore, there is a homogeneous polynomial $f_i \in I_X$ of the form $x_i^{m_i}+ g_i$, where $g_i\in R=k[x_0,x_1,\ldots, x_{n+e}]$ is a homogeneous polynomial of degree $m_i$ in which every power of $x_i$ is less than $m_i$. Consequently, $R/I_X$ is a finitely generated $S_{\alpha}=k[x_{\alpha}, x_{\alpha+1},\ldots, x_{n+e}]$-module with monomial generators $$\{x_0^{j_0}x_1^{j_1}\ldots x_{\alpha-1}^{j_{\alpha-1}}\mid 0\le j_k < m_k, 0\le k\le {\alpha-1}\}.$$ Note that the above generating set need not be minimal. If $X$ satisfies $\textbf {N}^{R}_{d,\alpha}$ then $X$ also satisfies $\textup{\textbf{N}}^{S_{\alpha}}_{d, 0}$. This implies that $R/I_X$ is generated in degree $< d$ as an $S_{\alpha}$-module, and thus $\beta^{S_{\alpha}}_{0,i}\le \binom{\alpha-1+i}{i}$ for $0\le i\le d-1$. To sum up, we have the following corollary.
\[N\_[d,p-1]{}as a S-module\] Suppose $X$ satisfies the property ${\textup{\textbf {N}}}^R_{d,\alpha}$ and consider the following minimal free resolution of $R/I_X$ as a graded $S_{\alpha}=k[x_{\alpha},\ldots, x_{n+e}]$-module: $$\cdots\rightarrow F_{1} \rightarrow F_0\rightarrow R/I_X \rightarrow 0.$$
- $R/I_X$ satisfies the property $\textup{\textbf{N}}^{S_{\alpha}}_{d, 0}$ as an $S_{\alpha}$-module;
- The Betti numbers of $F_0$ satisfy the following:
- $\beta^{S_{\alpha}}_{0,0}=1$, $\beta^{S_{\alpha}}_{0,1}=\alpha$, and $\beta^{S_{\alpha}}_{0,i}\le \binom{\alpha-1+i}{i}$ for $2\le i\le d-1$;
- Furthermore, $\beta^{S_{\alpha}}_{0,d-1}~\le \beta^{S_{\alpha-1}}_{1,d-1}~\le \cdots\le \beta^{S_1}_{\alpha-1,d-1}~\le \beta^R_{\alpha,d-1}$.
- When $\alpha=e$, $R/I_X$ is a free $S_{e}$-module if and only if $X$ is arithmetically Cohen-Macaulay. In this case, letting $d={\operatorname{reg}}(X)$, $$R/I_X=S_{e}\oplus S_{e}(-1)^e \oplus \cdots\oplus S_e(-d+1)^{\beta^{S_e}_{0,d-1}}$$ and ${\pi_{\Lambda}}_{*}\mathcal O_X =\mathcal O_{\P^n}\oplus\mathcal O_{\P^n}(-1)^{e}\oplus\cdots\oplus\mathcal O_{\P^n}(-d+1)^{\beta^{S_e}_{0,d-1}}$.
Note that $\binom{\alpha-1+i}{i}$ is the dimension of the vector space of all homogeneous polynomials of degree $i$ in the homogeneous coordinates $x_{0},\ldots, x_{\alpha-1}$ of $\Lambda=\P^{\alpha-1}$. Since $X$ is non-degenerate, $\{x_i\mid 0\le i\le \alpha-1\}$ is contained in the minimal generating set of $R/I_X$ as an $S_{\alpha}$-module. So, $\beta^{S_{\alpha}}_{0,1}=\alpha$. The remaining part of $(b)$ is given by Proposition \[prop: N\_dp\_as\_S\_module\] and the argument given in Remark \[Remk:1\] above.
For the proof of $(c)$, first note that by Corollary \[projective dimension\] and Proposition \[prop: N\_dp\_as\_S\_module\], $$\begin{array}{rcl}
{\operatorname{proj.dim}}_{S_e}(R/I_X)&=&{\operatorname{proj.dim}}_R(R/I_X)-e\\
{\operatorname{reg}}_{S_e}(R/I_X)&=&{\operatorname{reg}}_{R}(R/I_X).
\end{array}$$ Consequently, $R/I_X$ is a free $S_{e}$-module if and only if ${\operatorname{proj.dim}}_R(R/I_X)=e$, as we wished.
\[Remk:2\] If a reduced algebraic set $X$ is arithmetically Cohen-Macaulay, then it is locally Cohen-Macaulay, equidimensional and connected in codimension one. Furthermore, as shown in the Corollary \[N\_[d,p-1]{}as a S-module\], $${\pi_{\Lambda}}_{*}\mathcal O_X =\mathcal O_{\P^n}\oplus\mathcal O_{\P^n}(-1)^{e}\oplus\cdots\oplus\mathcal O_{\P^n}(-d+1)^{\beta^{S_e}_{0,d-1}}.$$
However, in general, if $X$ is locally Cohen-Macaulay and equidimensional, then ${\pi_{\Lambda}}_{*}\mathcal O_X$ is a vector bundle of rank $r=\deg(X)$. Furthermore, by the well-known splitting criterion due to Horrocks or Evans-Griffith ([@EvG], [@H]), ${\pi_{\Lambda}}_{*}\mathcal O_X$ is a direct sum of line bundles if and only if $H^i(\P^{n},{\pi_{\Lambda}}_{*}\mathcal O_X(j))=H^i(X, \mathcal O_{X}(j))=0$ for all $1\le i \le n-1$ and all $j\in \Z$. This condition is weaker than being arithmetically Cohen-Macaulay.
\[example:1\] To illustrate these notions, we show the simplest examples in the following table: Let $\Lambda=\P^{i-1}$ be a general linear subspace with coordinates $x_0, \cdots, x_{i-1}$ and regard $R/I$ as an $S_i=k[x_i, \cdots, x_{n+e}]$-module. Note that by Corollary \[projective dimension\] and Proposition \[prop: N\_dp\_as\_S\_module\], $${\operatorname{proj.dim}}_{S_i}(R/I_X)={\operatorname{proj.dim}}_R(R/I_X)-i \text { and } {\operatorname{reg}}_{S_i}(R/I_X)={\operatorname{reg}}_{R}(R/I_X).$$
(The table, which compares the graded Betti tables of $R/I$ regarded as an $R$-module, as an $S_1$-module, and as an $S_2$-module, is not reproduced here.)
In generic coordinates, the Betti table of $R/I$ as an $S_i$-module can be computed with Macaulay 2 ([@GS]).
Syzygetic properties of algebraic sets satisfying property $\textbf{N}_{3,e}$
=============================================================================
For an algebraic set $X$ of dimension $n$ in $\P^{n+e}$ satisfying property $\textbf {N}_{2,p}$, it is proved by Eisenbud et al. in [@EGHP2] that any finite scheme $X\cap \Lambda$, for a linear space $\Lambda$ of dimension $\le p$, is in linearly general position and has length at most $\dim\Lambda + 1$. In addition, they show a syzygetic rigidity: $X$ satisfies property $\textbf {N}_{2, e}$ if and only if $X$ is $2$-regular.
In this section, we give a proof of Theorem \[3-regulr-multisecants\]. This result gives us a sharp upper bound on the maximal length of a zero-dimensional linear section of $X$ in terms of graded Betti numbers when $X$ satisfies property $\textbf{N}_{3,p}$. In particular, if $p$ is the codimension $e$ of $X$ then $\deg(X)$ is at most $\binom{e+2}{2}$ and the equality holds if and only if $X$ is an arithmetically Cohen-Macaulay scheme with a $3$-linear resolution.
The proof of Theorem \[3-regulr-multisecants\] (a)
--------------------------------------------------
Let $X$ be a non-degenerate algebraic set in $\P^{n+e}$ of codimension $e\ge 1$ satisfying *property* $\textbf{N}_{3,\alpha}$ with the Green-Lazarsfeld index $p$. For a linear space $L^{\alpha}$ of dimension $\alpha\leq e$, we have to show that if $X\cap L^{\alpha}$ is a finite scheme then $$\label{main_formula}
\deg(X\cap L^{\alpha})\leq 1+ \alpha+\min\left\{\frac{|\alpha-p|(\alpha+p+1)}{2}, \beta^R_{\alpha,2}(R/I_X)\right\}.$$
Note that $\beta^R_{\alpha, 2}=0$ if $\alpha\le p$. In this case, the inequality $\eqref{main_formula}$ follows from [@EGHP2 Theorem 1.1] directly. Now we assume $\alpha > p$ and $\beta^R_{\alpha, 2}\neq 0$. Suppose $\dim(X\cap L^{\alpha})=0$ and choose a linear subspace $\Lambda\subset L^{\alpha}$ of dimension $(\alpha-1)$ disjoint from $X$ with homogeneous coordinates $x_0,\ldots, x_{\alpha-1}$.
Our main idea is to consider a projection $\pi_\Lambda:X\to \pi_\Lambda(X)\subset \P^{n+e-\alpha}$ and to regard $L^{\alpha}\cap X$ as a fiber of $\pi_\Lambda$ at the point $\pi_\Lambda(L^{\alpha}\setminus \Lambda)\in \pi_\Lambda(X)$. From Corollary \[N\_[d,p-1]{}as a S-module\] (a), we see that $R/I_X$ is finitely generated as an $S_{\alpha}=k[x_{\alpha},x_{\alpha+1}\ldots,x_{n+e}]$-module satisfying property ${\ensuremath{\mathrm{\bf N}}}^{S_{\alpha}}_{3,0}$. Thus, the minimal free resolution of $R/I_X$ is of the following form: $$\label{eq:mfr}
\cdots \longrightarrow S_{\alpha}\oplus S_{\alpha}(-1)^{\alpha}\oplus S_{\alpha}(-2)^{\beta^{S_{\alpha}}_{0,2}}\longrightarrow R/I_X\longrightarrow 0.$$ Sheafifying the sequence (\[eq:mfr\]), we have the following surjective morphism $$\cdots\rightarrow \mathcal O_{\P^{n+e-\alpha}}\oplus\mathcal O_{\P^{n+e-\alpha}}(-1)^{\alpha}\oplus
\mathcal O_{\P^{n+e-\alpha}}(-2)^{\beta^{S_{\alpha}}_{0,2}}{\stackrel{\widetilde{\varphi_{\alpha}}}{\rightarrow}}\pi_{{\Lambda}_{*}}{\mathcal O_X}\longrightarrow 0.$$ For any point $q\in \pi_{\Lambda}(X)$, note that $\pi_{{\Lambda}_{*}}{\mathcal O_X}\otimes k(q)\simeq
H^0(\langle \Lambda, q\rangle, {\mathcal O}_{{\pi_{\Lambda}}^{-1}(q)})$. Thus, by tensoring $\mathcal O_{\P^{n+e-\alpha}}(2)\otimes k(q)$ on both sides, we have the surjection on vector spaces: $$\label{eq:surjection1}
[\mathcal O_{\P^{n+e-\alpha}}(2)\oplus\mathcal O_{\P^{n+e-\alpha}}(1)^{\alpha}\oplus\mathcal O_{\P^{n+e-\alpha}}^{\beta^{S_{\alpha}}_{0,2}}]\otimes k(q)\twoheadrightarrow H^0(\langle \Lambda, q\rangle, {\mathcal O}_{{\pi_{\Lambda}}^{-1}(q)}(2)).$$ Therefore, $\langle \Lambda, q\rangle\cap X$ is $3$-regular and the length of any fiber of $\pi_{\Lambda}$ is at most $1+\alpha+\beta^{S_{\alpha}}_{0,2}$. Hence it is important to get an upper bound of $\beta^{S_{\alpha}}_{0,2}$.\
[**[Claim.]{}**]{}\[Claim:Betti numbers inequality\] The following inequalities on graded Betti numbers hold:
1. $\beta^{S_{\alpha}}_{0,2}~\le \beta^{S_{\alpha-1}}_{1,2}~\le \cdots\le \beta^{S_1}_{\alpha-1,2}~\le \beta^R_{\alpha,2},
~~~~~~ \alpha\le e=\text{codim}(X)$ ;
2. $\beta^{S_{\alpha}}_{0,2}~\le \frac{(\alpha-p)(\alpha+p+1)}{2}$.
Due to the claim, we have the following inequality: $$\beta^{S_{\alpha}}_{0,2}~\leq \min\{\frac{|\alpha-p|(\alpha+p+1)}{2}, \beta^R_{\alpha,2}(R/I_X)\}.$$ Therefore, the length of any fiber of $\pi_{\Lambda}: X \to \mathbb P^{n+e-\alpha}$ is at most $$1+\alpha+\beta^{S_{\alpha}}_{0,2}\leq 1+\alpha+\min\{\frac{|\alpha-p|(\alpha+p+1)}{2}, \beta^R_{\alpha,2}(R/I_X)\},$$ which completes a proof of Theorem \[3-regulr-multisecants\].\
Now let us prove the Claim. Note that Claim $({\rm i})$ follows directly from Corollary \[N\_[d,p-1]{}as a S-module\] (b) for $d=3$. Hence we only need to show Claim $({\rm ii})$. We consider the multiplicative maps appearing in the mapping cone sequence as follows: $$\label{eq:long exact sequence of tor}
\begin{array}{cccccccccccccc}
{\operatorname{Tor}}_{0}^{S_{\alpha}}(R/I_X,k)_{1}{\stackrel{\times x_{\alpha-1}}{\rightarrow}}{\operatorname{Tor}}_{0}^{S_{\alpha}}(R/I_X,k)_{2}\twoheadrightarrow{\operatorname{Tor}}_{0}^{S_{\alpha-1}}(R/I_X,k)_{2}{\rightarrow}0,\\[1ex]
{\operatorname{Tor}}_{0}^{S_{\alpha-1}}(R/I_X,k)_{1}{\stackrel{\times x_{\alpha-2}}{\rightarrow}}{\operatorname{Tor}}_{0}^{S_{\alpha-1}}(R/I_X,k)_{2}\twoheadrightarrow{\operatorname{Tor}}_{0}^{S_{\alpha-2}}(R/I_X,k)_{2}{\rightarrow}0,\\[1ex]
\cdots~~~\cdots~~~\cdots\\[1ex]
{\operatorname{Tor}}_{0}^{S_{p+1}}(R/I_X,k)_{1}{\stackrel{\times x_{p}}{\rightarrow}}{\operatorname{Tor}}_{0}^{S_{p+1}}(R/I_X,k)_{2}\twoheadrightarrow{\operatorname{Tor}}_{0}^{S_{p}}(R/I_X,k)_{2}=0.
\end{array}$$ Since $R/I_X$ satisfies property $\textbf {N}^{S_p}_{2,0}$ as an $S_p$-module, we get ${\operatorname{Tor}}_{0}^{S_{p}}(R/I_X,k)_{2}=0$. From the above exact sequences, we have the following inequalities on the graded Betti numbers by dimension counting:
$$\beta^{S_{\alpha}}_{0,2}\le \beta^{S_{\alpha}}_{0,1}+\beta^{S_{\alpha-1}}_{0,2}\le \beta^{S_{\alpha}}_{0,1}+\beta^{S_{\alpha-1}}_{0,1}+\beta^{S_{\alpha-2}}_{0,2}\le\cdots\le \beta^{S_{\alpha}}_{0,1}+\beta^{S_{\alpha-1}}_{0,1}+\cdots+\beta^{S_{p+1}}_{0,1}=$$ $$\alpha +(\alpha-1)+\cdots + (p+1)=\frac{({\alpha-p})(\alpha+p+1)}{2}.$$ Thus, we obtain the desired inequality $$\beta^{S_{\alpha}}_{0,2}(R/I_X)\le \min\left\{\frac{(\alpha-p)(\alpha+p+1)}{2}, \;\beta^R_{\alpha,2}(R/I_X)\right\},$$ as we claimed.
\[remk:2\] In the proof of Theorem \[3-regulr-multisecants\] (a), we consider the following surjection: $$[\mathcal O_{\P^{n+e-\alpha}}(2)\oplus\mathcal O_{\P^{n+e-\alpha}}(1)^{\alpha}\oplus\mathcal O_{\P^{n+e-\alpha}}^{\beta^{S_{\alpha}}_{0,2}}]\otimes k(q)
\twoheadrightarrow \,\,\,H^0(\langle \Lambda, q\rangle, {\mathcal O}_{{\pi_{\Lambda}}^{-1}(q)}(2))$$ where $[\mathcal O_{\P^{n+e-\alpha}}(2)\oplus\mathcal O_{\P^{n+e-\alpha}}(1)^{\alpha}\oplus\mathcal O_{\P^{n+e-\alpha}}^{\beta^{S_{\alpha}}_{0,2}}]\otimes k(q)
\subset H^0(\langle \Lambda, q\rangle,\mathcal O_{\langle \Lambda, q\rangle}(2))$. Thus, $\pi_{\Lambda}^{-1}(q)=X\cap \langle \Lambda, q\rangle$ is $2$-normal and so $3$-regular. Similarly, we can show that if $X$ satisfies ${\ensuremath{\mathrm{\bf N}}}_{d,\alpha}$, then every finite linear section $X\cap L^{\alpha}$ is $d$-regular, which was first proved by Eisenbud et al[@EGHP2 Theorem 1.1]. Moreover, from the following surjection as an $S_{\alpha}$-module $$S_{\alpha}\oplus S_{\alpha}(-1)^{\alpha}\oplus S_{\alpha}(-2)^{\beta^{S_{\alpha}}_{0,2}}\oplus S_{\alpha}(-3)^{\beta^{S_{\alpha}}_{0,3}}\cdots\oplus S_{\alpha}(-d+1)^{\beta^{S_{\alpha}}_{0,d-1}}\rightarrow R/I_X\rightarrow 0,$$ we see that if $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{d,\alpha}$ then $\deg(X\cap L^{\alpha})\leq 1+\alpha+ \ds\sum_{t=2}^{d-1} \beta^{S_{\alpha}}_{0,t}.$
The following result shows that if $X$ is a nondegenerate variety satisfying ${\ensuremath{\mathrm{\bf N}}}_{3,{\operatorname{e}}}$ then there is some rigidity toward the beginning and the end of the resolution; in terms of Betti diagrams, the following two conditions are equivalent: $$\text{property }{\ensuremath{\mathrm{\bf N}}}_{3,e}\text{ and }\beta^R_{e,2}=0 \quad\Longleftrightarrow\quad X \text{ is $2$-regular.}$$
\[prop1312:2-regular\] Suppose $X\subset \P^{n+e}$ is a non-degenerate variety of dimension $n$ and codimension $e$ with property ${\ensuremath{\mathrm{\bf N}}}_{3,e}$. Then, $\beta^R_{e,2}=0$ if and only if $X$ is 2-regular.
Let $L^e$ be a linear space of dimension $e$ and assume that $X\cap L^{e}$ is finite. By Theorem \[3-regulr-multisecants\] (a), $\deg(X\cap L^e)\leq 1+e+\beta^R_{e,2}$. Therefore, $\beta^R_{e,2}=0$ implies $\deg(X\cap L^e)\leq 1+e$. Since $X$ is a nondegenerate variety, this implies that $X$ is small, i.e. for every zero-dimensional intersection of $X$ with a linear space $L$, the degree of $X\cap L$ is at most $1+\dim(L)$ (see [@E Definition 11]). Then it follows directly from [@EGHP1 Theorem 0.4] that $X$ is $2$-regular.
What can we say about the case $\beta^R_{\alpha,2}=0$ where $\alpha<e$? In this case, we see that if $\Lambda\cap X$ is finite for a linear subspace $\Lambda$ of dimension $\leq \alpha$ then $\deg(\Lambda\cap X)\leq \dim\Lambda+1$. Note that this condition is a necessary condition for property ${\ensuremath{\mathrm{\bf N}}}_{2,\alpha}$. However, the converse is false in general, as for example in the case of a double structure on a line in $\P^3$ or the case of the plane with embedded point. (See [@EGHP2 Example 1.4].) We do not know if there are other cases when $X$ is a variety.
\(a) The union $X$ of two skew lines in $\P^3$ satisfies $\deg(X)=2<1+e=3$. The Betti table of $R/I_X$ is given by $${\tiny\texttt{
\begin{tabular}{c|ccccccccccccccccccccccccccccc}
& 0 & 1 & 2 & 3 & 4 & ... \\[1ex]
\hline\\
0 & 1 & 0 & 0 & 0 & 0&... \\[1ex]
1 & 0 & 4 & 4 & 1 & 0&... \\[1ex]
2 & 0 & 0 & 0 & 0 & 0&... \\[1ex]
\end{tabular}
}}$$ Note that $X$ is $2$-regular but not aCM.\
(b) Let $C$ be a rational normal curve in $\P^4$, which is $2$-regular. If $X=C\cup P$ for a general point $P\in \P^4$ then $\deg(X)=1+e=4$. However, a general hyperplane $L$ passing through $P$ is a $5$-secant $3$-plane, so that $\deg(L\cap X)=5>4=1+e$. This implies that $\beta^R_{e,2}(R/I_X)\neq 0$. If $P\in {\operatorname{Sec}}(C)$ then there is a $3$-secant line to $X$, and therefore $\beta^R_{1,2}(R/I_X)\neq 0$. For the two cases, Case 1: $P\in {\operatorname{Sec}}(C)$ and Case 2: $P\notin {\operatorname{Sec}}(C)$, the corresponding Betti tables for $X$ can be computed with Macaulay 2 ([@GS]); they are not reproduced here.
\[example:2014\] Let $C$ be a rational normal curve and $Z$ be a set of $4$ general points in $\P^3$. Using Macaulay 2, we can compute the Betti table of $X=C\cup Z$ (the table is not reproduced here).
Since the codimension $e$ of $X$ is two, $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{3,e}$. Note that $X$ is not $3$-regular. Unlike the case of ${\ensuremath{\mathrm{\bf N}}}_{2,e}$, the condition ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ in general does not imply $3$-regularity.
The proof of Theorem \[3-regulr-multisecants\] (b)
--------------------------------------------------
Suppose that $X$ satisfies property ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ and let $L^e\subset\P^{n+e}$ be a linear space of dimension $e$. If $X\cap L^{e}$ is finite then we have the following inequality from Theorem \[3-regulr-multisecants\] (a): $$\label{N3e_inequality}
\deg(X\cap L^e)\leq 1+e+\frac{(e-p)(e+p+1)}{2}\leq 1+e+\binom{e+1}{2}=\binom{e+2}{2}.$$ This implies that $\deg(X)\leq \binom{2+e}{2}$, since $\deg(X)$ is defined as $\deg(X\cap L^e)$ for a [*general*]{} linear space $L^e$ of dimension $e$.
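The middle inequality in $\eqref{N3e_inequality}$ holds because $$(e-p)(e+p+1)=e(e+1)-p(p+1)\le e(e+1)=2\binom{e+1}{2},$$ with equality if and only if $p=0$, while the last equality is just the identity $1+e+\binom{e+1}{2}=\binom{e+2}{2}$.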
The bound in $\eqref{N3e_inequality}$ is sharp because if $M$ is a $1$-generic matrix of size $3\times t$ for $t\geq 3$ then the determinantal variety $X$ defined by the maximal minors of $M$ achieves this degree bound. In this case, the minimal free resolution of $I_X$ is a $3$-linear resolution, which is given by the Eagon-Northcott complex.
Note that if $X$ is arithmetically Cohen-Macaulay and $I_X$ has a $3$-linear resolution then it was shown that $\deg(X)=\binom{e+2}{2}$ in [@EG Corollary 1.1]. The converse is not true in general. For example, let $Y$ be the secant variety of a rational normal curve in $\mathbb P^n$ and let $P$ be a general point in $\mathbb P^n$. Then the algebraic set $X=Y\cup P$ has the geometric degree $\binom{e+2}{2}$ but it does not satisfy $N_{3,e}$ because there exists a $(\binom{e+2}{2}+1)$-secant $e$-plane to $X$. This also implies that $I_X$ does not have a $3$-linear resolution.
It is natural to ask what makes the ideal $I_X$ have $3$-linear resolution under the condition $\deg(X)=\binom{e+2}{2}$. Theorem \[3-regulr-multisecants\] (b) shows that property ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ is sufficient for this.
Note that the condition ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ is essential and cannot be weakened. For example, let $S$ be a smooth complete intersection surface of type $(2,3)$ in $\P^4$. Then the codimension $e$ is two and $\deg(S)=6=\binom{e+2}{2}$. However $I_S$ does not have a $3$-linear resolution. Note that $S$ satisfies ${\ensuremath{\mathrm{\bf N}}}_{3,e-1}$ but not ${\ensuremath{\mathrm{\bf N}}}_{3,e}$.
For the proof we need the following lemma.
\[cor:303\] Suppose that $X$ satisfies *property* ${\rm {\bf N}}_{3,e}$ and $\deg(X)= \binom{e+2}{2}$. Then,
- $I_X$ has no quadric generators. This implies that $I_X$ is $3$-linear up to $e$-th step.
- $\binom{\alpha+1}{2}\le \beta^R_{\alpha,2}(R/I_X)$ for all $1\le {\alpha}\le e$.
Assume that $\deg(X)=\binom{e+2}{2}$ and there is a quadric hypersurface $Q$ containing $X$. Choose a general linear subspace $L^e$ of dimension $e$ such that $L^e\not\subset Q$. Then, we may assume that the point $q=(1,0,\cdots,0)$ is contained in $L^e\setminus Q$, and thus we have a surjective morphism $S_1\oplus S_1(-1)\twoheadrightarrow R/I_X$ (see the proof in [@AKS Theorem 4.2]). Hence, the multiplicative map $${\operatorname{Tor}}_{0}^{S_{1}}(R/I_X,k)_{1}{\stackrel{\times x_{0}}{\rightarrow}}{\operatorname{Tor}}_{0}^{S_{1}}(R/I_X,k)_{2}$$ is a zero map and ${\operatorname{Tor}}_{0}^{S_{1}}(R/I_X,k)_{2}=0$. Therefore, as in the proof of Theorem \[3-regulr-multisecants\] (a) (see the sequence (\[eq:long exact sequence of tor\]) for $\alpha=e$ and $p=0$), $$\beta^{S_{{\operatorname{e}}}}_{0,2}\le \beta^{S_{e}}_{0,1}+\beta^{S_{e-1}}_{0,1}+\cdots+\beta^{S_{2}}_{0,1}+\beta^{S_{1}}_{0,2}
=e +(e-1)+\cdots + 2+0= \binom{e+1}{2}-1.$$ Thus, $\deg(X\cap L^e)\le 1+e+\beta^{S_{{\operatorname{e}}}}_{0,2}\leq \binom{e+2}{2}-1$, which contradicts our assumption. So, there is no quadric vanishing on $X$ and the minimal free resolution of $I_X$ is $3$-linear up to the $e$-th step. In addition, in the case of $3$-linearity up to the $e$-th step, there are no syzygies in degree $2$ and $$\beta^{S_{\alpha}}_{0,2}=\beta^{S_{\alpha}}_{0,1}+\beta^{S_{\alpha-1}}_{0,1}+\cdots+\beta^{S_{2}}_{0,1}+\beta^{S_{1}}_{0,1}
=\binom{\alpha+1}{2}\le \beta^R_{\alpha,2}(R/I_X),$$ as we wished.
To prove Theorem \[3-regulr-multisecants\] (b), it suffices to show that $\deg(X)=\binom{e+2}{2}$ implies $I_X$ has a $3$-linear resolution under the condition ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ ([@EG Corollary 1.11]). Our proof is divided into four steps.
Step [*I*]{}. First we show that if $H$ is a general linear space of dimension $i$ where $e\leq i\leq n+e$, then $I_{{X\cap H},{H}}$ cannot have quadric generators.
For a general linear space $\Lambda$ of dimension $e$, we see from Remark \[remk:2\] that $I_{{X\cap \Lambda},{\Lambda}}$ is $3$-regular. Since $X\cap \Lambda$ is a zero-dimensional scheme of degree $$\deg(X\cap \Lambda)=\deg(X)=\binom{e+2}{2}=\binom{{\operatorname{codim}}(X\cap \Lambda,\Lambda)+2}{2},$$ it follows from Lemma \[cor:303\] that $I_{{X\cap \Lambda},{\Lambda}}$ has a 3-linear resolution and hence there is no quadric generator in the ideal $I_{{X\cap \Lambda},{\Lambda}}$. This implies that if $H$ is a general linear space of dimension $i$ for some $e\leq i\leq n+e$, then $I_{{X\cap H},{H}}$ cannot have quadric generators. In particular, if $H=\P^{n+e}$ then $I_X$ does not have quadric generators and hence $$\beta_{k,1}(R/I_X)=0 \text{ for all } k\geq 0.$$ $${\begin{tabular}{ccccccc}
{\tiny\texttt{
\begin{tabular}{c|ccccccccccccccccccccccccccccc}
& 0 & 1 & ... & e-1 & e & e+1 & e+2 & ... \\[1ex]
\hline\\
0 & 1 & 0 & ... & 0 & 0 & 0 & 0&... \\[1ex]
1 & 0 & * & ... & * & * & * & *&... \\[1ex]
2 & 0 & * & ... & * & * & * & *&... \\[1ex]
3 & 0 & 0 & ... & 0 & 0 & * & *&... \\[1ex]
\end{tabular}
}}
&$\Longrightarrow$ &
{\tiny\texttt{
\begin{tabular}{c|ccccccccccccccccccccccccccccc}
& 0 & 1 & ... & e-1 & e & e+1 & e+2 &... \\[1ex]
\hline\\
0 & 1 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
1 & 0 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
2 & 0 & * & ... & * & * & * &*& ... \\[1ex]
3 & 0 & 0 & ... & 0 & 0 & * &*& ... \\[1ex]
\end{tabular}
}}\\
&&&
\end{tabular}
}$$ Step [*II*]{}. The goal in this step is to show that $$\beta_{k,3}(I_X)=\beta_{k+1,2}(R/I_X)=0 \text{ for all } k\geq e.$$ $$\begin{tabular}{ccccccc}
{\tiny\texttt{
\begin{tabular}{c|ccccccccccccccccccccccccccccc}
& 0 & 1 & ... & e-1 & e & e+1 & e+2 &... \\[1ex]
\hline\\
0 & 1 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
1 & 0 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
2 & 0 & * & ... & * & * & * &*& ... \\[1ex]
3 & 0 & 0 & ... & 0 & 0 & * &*& ... \\[1ex]
\end{tabular}
}}
$\Longrightarrow$ &
{\tiny\texttt{
\begin{tabular}{c|ccccccccccccccccccccccccccccc}
& 0 & 1 & ... & e-1 & e & e+1 & e+2 &... \\[1ex]
\hline\\
0 & 1 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
1 & 0 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
2 & 0 & * & ... & * & * & 0 &0& ... \\[1ex]
3 & 0 & 0 & ... & 0 & 0 & * &*& ... \\[1ex]
\end{tabular}
}}\\
\end{tabular}$$ To show this, we prove that if $k\geq e$ then $\beta_{k,3}({\operatorname{gin}}I_X)=0$, where ${\operatorname{gin}}(I_X)$ is a generic initial ideal of $I_X$ with respect to the reverse lexicographic monomial order. Note that $\beta_{k,3}({\operatorname{gin}}(I_X))=0$ implies that $\beta_{k,3}(I_X)=0$ ([@G Proposition 2.28]). Let $\mathcal G({\operatorname{gin}}(I_X))_d$ be the set of [*monomial generators*]{} of ${\operatorname{gin}}(I_X)$ in degree $d$. For each monomial $T$ in $R=k[x_0,\ldots, x_n]$, we denote by $m(T)$ $$\max\{ i \geq 0 \mid \text{ a variable } x_i \text{ divides } T\}.$$ Now suppose that $$\label{eq: betti and gin}
\beta_{k,3}({\operatorname{gin}}(I_X))\neq 0 \text{ for some } k \geq e,$$ and let $k$ be the largest integer satisfying the condition $\eqref{eq: betti and gin}$. By the result of Eliahou-Kervaire [@EK] we see that $$\beta_{k,3}({\operatorname{gin}}(I_X))=\big|\{\,T\in \mathcal G({\operatorname{gin}}(I_X))_3 \mid m(T)=k \}\big|.$$ Since $\beta_{k,3}({\operatorname{gin}}(I_X))\neq 0$, we can choose a monomial $T\in \mathcal G({\operatorname{gin}}(I_X))_3$ such that $m(T)=k$. This implies that $T$ is divisible by $x_k$. If $H$ is a general linear space of dimension $k$ then it follows from [@G Theorem 2.30] that the ideal $$\label{eq:1312}
{\operatorname{gin}}(I_{{X\cap H},{H}})=\left[\frac{({\operatorname{gin}}(I_X),x_{k+1},\ldots, x_n)}{(x_{k+1},\ldots, x_n)}\right]^{\rm sat}=\left[\frac{({\operatorname{gin}}(I_X),x_{k+1},\ldots, x_n)}{(x_{k+1},\ldots, x_n)}\right]_{x_k\to 1}$$ has to contain the quadratic monomial $T/x_k$. This means that $X\cap H$ is cut out by a quadric hypersurface, which contradicts the result in Step [*I*]{}. Hence we conclude that $\beta_{k,3}(I_X)=0$ for all $k\geq e$.\
Step [*III*]{}. We claim that $$\mathcal G({\operatorname{gin}}(I_X))_3={\operatorname{gin}}(I_X)_3=k[x_0,\ldots,x_{e-1}]_3.$$
By Lemma \[cor:303\] and [@G Proposition 2.28], we see that $$\label{gin and reduction}
\binom{e+1}{2}\leq \beta_{e,2}(R/I_X) = \beta_{e-1,3}(I_X)\leq \beta_{e-1,3}({\operatorname{gin}}(I_X)).$$ Since $\beta_{k,3}({\operatorname{gin}}(I_X))=0$ for each $k\geq e$, any monomial generator $T\in \mathcal G({\operatorname{gin}}(I_X))_3$ cannot be divided by $x_k$ for any $k\geq e$. Thanks to the result of Eliahou-Kervaire [@EK] again, $$\begin{array}{llllllllllllllll}
\beta_{e-1,3}({\operatorname{gin}}(I_X)) & = &\big|\{ T\in \mathcal G({\operatorname{gin}}(I_X))_3 \mid m(T)=e-1 \}\big| \\[1ex]
& \leq & \dim_k \big( x_{e-1}\cdot k[x_0,\ldots, x_{e-1}]_2\big)\\[1ex]
& = & \ds \binom{e+1}{2}.
\end{array}$$ By dimension counting and equation $\eqref{gin and reduction}$, we have $\beta_{e-1,3}({\operatorname{gin}}(I_X))= \binom{e+1}{2}$ and thus $$\{ T\in \mathcal G({\operatorname{gin}}(I_X))_3 \mid m(T)=e-1 \}= x_{e-1}\cdot k[x_0,\ldots, x_{e-1}]_2,$$ which implies that $x_{e-1}^3\in {\operatorname{gin}}(I_X)$. Note that ${\operatorname{gin}}(I_X)$ does not have any quadratic monomial. Hence we conclude from the Borel-fixed property of ${\operatorname{gin}}(I_X)$ that $$\mathcal G({\operatorname{gin}}(I_X))_3={\operatorname{gin}}(I_X)_3=k[x_0,\ldots,x_{e-1}]_3.$$
Step [*IV*]{}. Finally, by the result in Step [*II*]{}, we only need to show that, for all $k\geq e$ and $j\geq 3$, $$\beta_{k,j}(I_X)=0.$$ $$\begin{tabular}{ccccccc}
{\tiny\texttt{
\begin{tabular}{c|ccccccccccccccccccccccccccccc}
& 0 & 1 & ... & e-1 & e & e+1 & e+2 &... \\[1ex]
\hline\\
0 & 1 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
1 & 0 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
2 & 0 & * & ... & * & * & 0 &0& ... \\[1ex]
3 & 0 & 0 & ... & 0 & 0 & * &*& ... \\[1ex]
4 & 0 & 0 & ... & 0 & 0 & * &*& ... \\[1ex]
\end{tabular}
}}
$\Longrightarrow$ &
{\tiny\texttt{
\begin{tabular}{c|ccccccccccccccccccccccccccccc}
& 0 & 1 & ... & e-1 & e & e+1 & e+2 &... \\[1ex]
\hline\\
0 & 1 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
1 & 0 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
2 & 0 & * & ... & * & * & 0 &0& ... \\[1ex]
3 & 0 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
4 & 0 & 0 & ... & 0 & 0 & 0 &0& ... \\[1ex]
\end{tabular}
}}\\
\end{tabular}$$ Since $\beta_{k,j}(I_X)\leq \beta_{k,j}({\operatorname{gin}}(I_X))$, it suffices to prove that ${\operatorname{gin}}(I_X)$ has no generators in degree $\geq 4$. To prove this, suppose that there is a monomial generator $T\in \mathcal G({\operatorname{gin}}(I_X))_j$ for some $j\geq 4$. Then the monomial $T$ can be written as a product of two monomials $N_1$ and $N_2$ such that $$N_1\in k[x_{e},\ldots, x_n], \quad N_2\in k[x_0, \ldots, x_{e-1}].$$ By the result in Step [*III*]{}, if the monomial $N_2$ is divisible by some cubic monomial in $k[x_0, \ldots, x_{e-1}]$ then $T$ cannot be a monomial generator of ${\operatorname{gin}}(I_X)$. Hence we see that $\deg(N_2)$ is at most $2$. If $\Lambda$ is a general linear space of dimension $e$ then it follows from the argument given in the proof of Step [*III*]{}, together with equation $\eqref{eq:1312}$, that $N_2\in {\operatorname{gin}}(I_{{X\cap \Lambda},{\Lambda}})$. Hence $I_{{X\cap \Lambda},{\Lambda}}$ contains a linear form or a quadratic polynomial, which contradicts the result proved in Step [*I*]{}.
The similar argument in the proof can also be applied to show Theorem \[Eisenbud et al\] (b).
In [@HK], the authors have shown that if a non-degenerate reduced scheme $X\subset \P^n$ satisfies ${\ensuremath{\mathrm{\bf N}}}_{2,p}$ for some $p\geq 1$ then the inner projection from any smooth point of $X$ satisfies at least property ${\ensuremath{\mathrm{\bf N}}}_{2,p-1}$. So it is natural to ask whether the inner projection from any smooth point of $X$ satisfies at least property ${\ensuremath{\mathrm{\bf N}}}_{3,p-1}$ when $X$ satisfies ${\ensuremath{\mathrm{\bf N}}}_{3,p}$ for some $p\geq 1$. Our result shows that this is not true in general. For example, if we consider the secant variety $X={\rm Sec}(C)$ of a rational normal curve $C$ then the inner projection $Y$ from any smooth point of $X$ has degree $$\deg(Y)=\binom{2+e}{2}-1=\binom{e+1}{2}+\binom{e}{1}>\binom{2+(e-1)}{2},$$ where $e={\rm codim}(X)$ and $e-1={\rm codim}(Y)$. This implies that $X$ satisfies ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ but $Y$ does not satisfy ${\ensuremath{\mathrm{\bf N}}}_{3,e-1}$.
Remark that there exists an [*algebraic set*]{} $X$ of degree $<\binom{e+2}{2}$ whose defining ideal $I_X$ has a $3$-linear resolution. For example, let $I=(x_0^3, x_0^2x_1, x_0x_1^2, x_1^3, x_0^2x_2)$ be a monomial ideal of $R=k[x_0,x_1,x_2,x_3]$. Note that the sufficiently generic distraction $D_{\mathcal L}(I)$ of $I$ is of the form $$D_{\mathcal L}(I)=(L_1L_2L_3, L_1L_2L_4, L_1L_4L_5, L_4L_5L_6, L_1L_2L_7),$$ where $L_i$ is a generic linear form for each $i=1,\ldots, 7$ (see [@BCR] for the definition of distraction). Then the algebraic set $X$ defined by the ideal $D_{\mathcal L}(I)$ is a union of $5$ lines and one point; its minimal free resolutions as an $R$-, $S_1$- and $S_2$-module are not reproduced here.
In this case, we see that $e=2$, $\deg(X)=5<\binom{2+2}{2}=6$ and there is a $6$-secant $2$-plane to $X$. We see that a general hyperplane section of $X$ is contained in a quadric hypersurface from $\beta_{e+1,2}(R/I_X)\neq 0$.
From Remark \[remk:2\], we know that if $X$ satisfies ${\ensuremath{\mathrm{\bf N}}}_{d, e}$, $(d\geq 2)$ then every linear section $X\cap L^{e}$ of dimension zero is $d$-regular, where $L^{e}$ is a linear space of dimension $e$. Moreover we can verify that $$\deg(X\cap L^{e})\leq 1+e+ \ds\sum_{t=2}^{d-1} \beta^{S_{e}}_{0,t}\leq \binom{e+d-1}{d-1}.$$
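Indeed, combining $\beta^{S_{e}}_{0,0}=1$, $\beta^{S_{e}}_{0,1}=e$ and $\beta^{S_{e}}_{0,t}\leq \binom{e-1+t}{t}$ for $2\le t\le d-1$ (Corollary \[N\_[d,p-1]{}as a S-module\] (b)) with the identity $\sum_{t=0}^{d-1}\binom{e-1+t}{t}=\binom{e+d-1}{d-1}$ gives the second inequality; for $d=2$ and $d=3$ the bound specializes to $\deg(X\cap L^{e})\le 1+e$ and $\deg(X\cap L^{e})\le \binom{e+2}{2}$, respectively, in agreement with the preceding sections.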
We close the paper with the following question.\
[**Question.**]{} Let $X$ be a non-degenerate algebraic set of dimension $n$ in $\mathbb P^{n+e}$ satisfying ${\ensuremath{\mathrm{\bf N}}}_{d,e}$. Then we have $\deg(X)\leq \binom{e+d-1}{d-1}$. Suppose that equality holds. Is always $X$ a reduced aCM scheme whose defining ideal has $d$-linear resolution?
[Acknowledgements]{}
We are thankful to F.-O. Schreyer for personal communications concerning examples of non $3$-regular algebraic sets satisfying ${\ensuremath{\mathrm{\bf N}}}_{3,e}$ by using Boij-Söderberg theory. We are also grateful to the anonymous referees for valuable and helpful suggestions. In addition, the program Macaulay 2 has been useful to us in computations of concrete examples.
[9999999]{}
J. Ahn; S. Kwak *Graded mapping cone theorem, multisecants and syzygies*, J. Algebra 331, (2011), 243–-262.
Alzati, Alberto; Russo, Francesco *On the $k$-normality of projected algebraic varieties*, Bull. Braz. Math. Soc. (N.S.) 33 (2002), no. 1, 27–48.
J. Ahn; S. Kwak and Y. Song *The degree complexity of smooth surfaces of codimension 2*, J. Symbolic Comput. 47, no. 5, 568–581.
E. Bertini, *Introduzione alla geometria proiettiva degli iperspazi*, Enrico Spoerri, Pisa, 1907.
Bruns, Winfried; Conca, Aldo; Römer, Tim *Koszul homology and syzygies of Veronese subalgebras*, Math. Ann. 351 (2011), no. 4, 761–-779.
A. M. Bigatti; A. Conca; L. Robbiano, *Generic initial ideals and distractions*, Comm. Algebra 33 (2005), no. 6, 1709–1732.
P. Del Pezzo; *Sulle superficie di ordine $n$ immerse nello spazio di $n + 1$ dimensioni*, Rend. Circ. Mat. Palermo [**1**]{} (1886).
D. Eisenbud, *Syzygies, Degree, and choices from a life in mathematics. Retiring presidential address*, Bulletin of the AMS. Volume 44, Number 3, July 2007, Pages 331–359.
D. Eisenbud; S. Goto [*Linear free resolutions and minimal multiplicity*]{}, J. Algebra 88 (1984), 89–133.
E. G. Evans; P. Griffith, [*The syzygy problem*]{}, Annals of Math.(2) 114(1981), no. 2, 323–333.
D. Eisenbud; M. Green; K. Hulek; S. Popescu, [*Small schemes and varieties of minimal degree*]{}, Amer. J. Math. 128 (2006), no. 6, 1363–1389.
D. Eisenbud; M. Green; K. Hulek; S. Popescu, [*Restriction linear syzygies: algebra and geometry*]{}, Compositio Math. 141 (2005), 1460–1478.
S. Eliahou; M. Kervaire, [*Minimal resolutions of some monomial ideals*]{}, J. Algebra 129 (1990) 1–25.
M. Green, *Generic Initial Ideals*, in Six lectures on Commutative Algebra, (Elias J., Giral J.M., Miró-Roig, R.M., Zarzuela S., eds.), Progress in Mathematics [**166**]{}, Birkhäuser, 1998, 119–186.
Grayson, Daniel R., Stillman, Michael E. *Macaulay 2: a software system for algebraic geometry and commutative algebra*, available over the web at http://www.math.uiuc.edu/Macaulay2.
G. Horrocks, *Vector bundles on the punctured spectrum of a local ring*, Proc. London Math. Society, (3) 14 (1964), 689–713.
K. Han; S. Kwak, *Analysis on some infinite modules, inner projection, and applications*, Trans. Amer. Math. Soc. 364 (2012), no. 11, 5791–5812.
M.Green; R. Lazarsfeld, [*Some results on the syzygies of finite sets and algebraic curves*]{}, Compositio Mathematica, 67 (1988), 301–314.
S. Kwak; E. Park *Some effects of property ${\rm N}\sb p$ on the higher normality and defining equations of nonlinearly normal varieties*, J. Reine Angew. Math. 582 (2005), 87–105.
S. Xambó, *On projective varieties of minimal degree*, Collect. Math. [**32**]{} (1981), no. 2, 149–-163.
[^1]: ${}^{*}$ Corresponding author.
[^2]: 1\. The first author was supported by the research grant of the Kongju National University in 2013 (No. 2013-0535)
[^3]: 2\. The second author was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIP) (No. 2013042157).
---
abstract: 'Sensory processing is hard because the variables of interest are encoded in spike trains in a relatively complex way. A major goal in sensory processing is to understand how the brain extracts those variables. Here we revisit a common encoding model [@olshausen_emergence_1996] in which variables are encoded linearly. Although there are typically more variables than neurons, this problem is still solvable because only a small number of variables appear at any one time (sparse prior). However, previous solutions usually require all-to-all connectivity, inconsistent with the sparse connectivity seen in the brain. Here we propose a principled algorithm that provably reaches the MAP inference solution but using sparse connectivity. Our algorithm is inspired by the mouse olfactory bulb, but our approach is general enough to apply to other modalities; in addition, it should be possible to extend it to nonlinear encoding models.'
author:
- |
Sina Tootoonian, Peter Latham\
Gatsby Computational Neuroscience Unit\
University College London\
London W1T 4JG, UK\
`[sina|pel]@gatsby.ucl.ac.uk`\
bibliography:
- 'ms.bib'
title: |
Sparse connectivity for MAP inference in\
linear models using sister mitral cells
---
Introduction
============
A prevalent idea in modern sensory neuroscience is that early sensory systems invert generative models of the environment to infer the hidden causes or latent variables that have produced sensory observations. Perhaps the simplest form of such inference is *maximum a posteriori* inference, or MAP inference for short, in which the most likely configuration of latent variables given the sensory inputs is reported. The implementation of MAP inference in neurally plausible circuitry often requires all-to-all connectivity between the neurons involved in the computation. Given that the latent variables are often very high dimensional, this can imply single neurons being connected to millions of others, a requirement that is impossible to achieve in most biological circuits. Here we show how a MAP inference problem can be reformulated to employ sparse connectivity between the computational units. Our formulation is inspired by the vertebrate olfactory system, but is completely general and can be applied in any setting where such an inference problem is being solved.
We begin by describing the olfactory setting of the problem, and highlight the requirement of all-to-all connectivity. Then we show how the MAP inference problem can be solved using convex duality to yield a biologically plausible circuit. Noting that it too suffers from all-to-all connectivity, we then derive a solution inspired by the anatomy of the vertebrate olfactory bulb that uses sparse connectivity.
Sparse coding in olfaction
--------------------------
We consider sparse coding [@olshausen_emergence_1996] as applied to olfaction [@koulakov_sparse_2011; @grabska-barwinska_demixing_2013; @tootoonian_dual_2014; @grabska-barwinska_probabilistic_2017; @kepple_deconstructing_2016]. Odors are modeled as high-dimensional, real valued latent variables ${\mathbf{x}}\in\mathbb{\mathbb{{R}}}^{N}$ drawn from a factorized distribution $$\begin{aligned}
\tag{Odor model} p({\mathbf{x}})=\prod_{i=1}^{N}p(x_i)=\frac{{1}}{Z}e^{-\phi({\mathbf{x}})}, \quad \phi({\mathbf{x}})= \beta\|{\mathbf{x}}\|_{1}+\frac{{\gamma}}{2}\|{\mathbf{x}}\|_{2}^{2} + \mathbb I({\mathbf{x}}\ge 0).\end{aligned}$$ The first two terms of $\phi$ embody an elastic net prior [@kepple_deconstructing_2016; @zou_regularization_2005] on molecular concentrations that models their observed sparsity in natural odors [@jouquand_sensory_2008], while the last term enforces the non-negativity of molecular concentrations and is defined as $ \mathbb I({\mathbf{x}}\ge 0) = \sum_{i=1}^N \mathbb I(x_i \ge 0)$, where $\mathbb I(x_i \ge 0) = 0$ when $x_i \ge 0$ and $\infty$ otherwise. The animal observes these latents indirectly via low dimensional glomerular responses ${\mathbf{y}}\in\mathbb{{R}}^{M}$, where $M\ll N$. Odors are transduced linearly into glomerular responses via the *affinity matrix* ${\mathbf{A}}$, where $A_{ij}$ is the response of glomerulus $i$ to a unit concentration of molecule $j$. This results in a likelihood $p({\mathbf{y}}|{\mathbf{x}})=\mathcal{{N}}({\mathbf{y}};{\mathbf{A}}{\mathbf{x}},\sigma^{2}{\mathbf{I}})$, where $\sigma^{2}$ is the noise variance. As in [@tootoonian_dual_2014], we assume that the olfactory system infers odors from glomerular inputs via MAP inference, i.e. by finding the vector ${\mathbf{x}}_{{\text{MAP}}}$ that minimizes the negative log posterior over odors given the inputs: $$\begin{aligned}
\tag{MAP inference} {\mathbf{x}}_{{\text{MAP}}}=\operatorname*{argmin}_{{\mathbf{x}}\in \mathbb{R}^N}\;\phi({\mathbf{x}})+\frac{{1}}{2\sigma^{2}}\|{\mathbf{y}}-{\mathbf{A}}{\mathbf{x}}\|_{2}^{2}\end{aligned}$$
A common approach to solving such problems is gradient descent [@olshausen_emergence_1996], with dynamics in ${\mathbf{x}}$: $$\begin{aligned}
\tag{Gradient descent}
\tau \frac{d{\mathbf{x}}}{dt} &= -\text{(leak)} + \frac{1}{\sigma^2}{\mathbf{A}}^T{\mathbf{y}}- \frac{1}{\sigma^2}{\mathbf{A}}^T{\mathbf{A}}{\mathbf{x}},
\end{aligned}$$ where we’ve absorbed the effects of the prior into the leak term for simplicity. These dynamics have a neural interpretation as feedforward excitation of the readout units ${\mathbf{x}}$ by the glomeruli ${\mathbf{y}}$ due to the ${\mathbf{A}}^T {\mathbf{y}}$ term, and recurrent inhibition among the readout units due to the $-{\mathbf{A}}^T{\mathbf{A}}{\mathbf{x}}$ term. This circuit is shown in Figure \[fig:all-to-all\]A.
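As a concrete, purely illustrative sketch of this architecture, the code below minimizes the MAP objective above on a small synthetic problem. Rather than integrating the continuous-time dynamics, it uses the standard proximal-gradient (ISTA-style) discretization of the same objective; the problem sizes, the random non-negative affinity matrix, and all parameter values are our own choices and are not meant to be biologically realistic.

```python
import numpy as np

# Toy problem: y = A @ x_true + noise, with a sparse, non-negative x_true.
# All sizes and parameter values below are illustrative choices.
rng = np.random.default_rng(0)
M, N, K = 20, 80, 5                       # glomeruli, molecules, active molecules
beta, gamma, sigma = 1.0, 1.0, 0.2        # elastic-net prior and noise scale
A = np.abs(rng.standard_normal((M, N))) / np.sqrt(N)   # non-negative affinity matrix
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.uniform(0.5, 1.5, size=K)
y = A @ x_true + sigma * rng.standard_normal(M)

# Proximal gradient descent (ISTA) on
#   phi(x) + ||y - A x||^2 / (2 sigma^2),
#   phi(x) = beta*||x||_1 + (gamma/2)*||x||_2^2 + I(x >= 0).
eta = sigma**2 / np.linalg.norm(A, ord=2)**2   # step size <= 1/L of the smooth term
x = np.zeros(N)
for _ in range(20000):
    u = x + (eta / sigma**2) * (A.T @ (y - A @ x))              # gradient step on the data term
    x = np.maximum(u - eta * beta, 0.0) / (1.0 + eta * gamma)   # prox of the elastic-net prior

# Optimality (KKT) conditions of the MAP problem, with g = A^T (y - A x) / sigma^2:
#   g_i = beta + gamma*x_i  where x_i > 0,   g_i <= beta  where x_i = 0.
g = A.T @ (y - A @ x) / sigma**2
kkt = np.where(x > 0, np.abs(g - beta - gamma * x), np.maximum(g - beta, 0.0))
print("max KKT violation :", kkt.max())
print("active molecules  : true =", K, ", recovered =", int((x > 1e-6).sum()))
```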
Another circuit is motivated by noting that ${\mathbf{A}}^T{\mathbf{y}}- {\mathbf{A}}^T {\mathbf{A}}{\mathbf{x}}= {\mathbf{A}}^T({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}})$. This suggests a predictive coding [@rao_predictive_1999] reformulation: $$\begin{aligned}
\tag{Predictive coding}
\tau_{\text{fast}} \frac{d{\mathbf{r}}}{dt} = -\text{(leak)} + {\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}, \quad \tau_{\text{slow}} \frac{d{\mathbf{x}}}{dt} = -\text{(leak)} + \frac{1}{\sigma^2}{\mathbf{A}}^T{\mathbf{r}}.
\end{aligned}$$ Here the new variable ${\mathbf{r}}$ encodes the residual after explaining the glomerular activations ${\mathbf{y}}$ with odor ${\mathbf{x}}$. The neural interpretation of these dynamics is that the residual units ${\mathbf{r}}$ receive feed-forward input from the glomeruli due to the ${\mathbf{y}}$ term and feedback inhibition from the readout units due to the $-{\mathbf{A}}{\mathbf{x}}$ term, while the readout units receive feedforward excitation from the residual units due to the ${\mathbf{A}}^T {\mathbf{r}}$ term. This circuit is shown in Figure \[fig:all-to-all\]B.
![Two architectures for MAP inference requiring all-to-all connectivity in general. Arrows indicate excitatory connections, knobs indicate inhibitory connections. (A) Gradient descent architecture. All-to-all feedforward excitation is required from the glomeruli to the readout units, and all-to-all recurrent inhibition between the readout units. (B) Predictive coding architecture. All mitral cells excite all granule cells and are in turn inhibited by them. No direct interaction among granule cells is required. Both architectures yield the MAP solution at convergence.[]{data-label="fig:all-to-all"}](circuit_all_to_all.pdf){width="\linewidth"}
The problem of all-to-all connectivity
--------------------------------------
Connectivity in the above circuits is determined by the affinity matrix ${\mathbf{A}}$. Given the combinatorial nature of receptor affinities [@nara_large-scale_2011], ${\mathbf{A}}$ can be dense, i.e. have many non-zero values. This will result in correspondingly dense, even all-to-all connectivity. For example, the gradient descent architecture would require each glomerulus to connect to every readout unit, and for each readout unit to connect to every other. If we assume that the cells in the piriform cortex correspond to the readout units, this will require, in the case of the mouse olfactory bulb, that each glomerulus directly connect to millions of piriform cortical neurons, and for each cortical neuron to directly connect to millions of others. Such dense connectivity is clearly biologically implausible. The predictive coding circuit obviates the need for recurrent inhibition among the readout units, but still requires each residual unit to excite and receive feedback from millions of cortical neurons, which again is implausible. This problematic requirement of all-to-all connectivity is not limited to olfaction: the sparse coding formulation above is quite generic so that any system thought to implement it, such as the early visual system [@olshausen_sparse_1997], is likely to face a similar problem.
Results
=======
To address the problem of all-to-all connectivity we will first show how MAP inference can be solved as a constrained optimization problem, resulting in a principled derivation of the predictive coding dynamics derived heuristically above. The resulting circuit also suffers from all-to-all connectivity. Taking inspiration from the anatomy of the olfactory bulb, we then show how the problem can be reformulated and solved using sparse connectivity.
MAP inference as constrained optimization
-----------------------------------------
The MAP inference problem is a high-dimensional *unconstrained* optimization problem, where we search over the full $N$-dimensional space of odors ${\mathbf{x}}$. In [@tootoonian_dual_2014] the authors showed how a similar compressed-sensing problem can be solved in the lower-, $M$-dimensional space of observations by converting it to a low-dimensional *constrained* optimization problem. Here we use similar methods to demonstrate how the MAP problem itself can be solved in the lower-dimensional space. We begin by introducing an auxiliary variable ${\mathbf{r}}$, and reformulate the problem as constrained optimization: $$\begin{aligned}
\tag{MAP inference, constrained} {\mathbf{x}}_{{\text{MAP}}}, {\mathbf{r}}_{{\text{MAP}}}=\operatorname*{argmin}_{\substack{{\mathbf{x}}\in \mathbb{R}^N \\{\mathbf{r}}\in \mathbb{R}^M}}\;\phi({\mathbf{x}})+\frac{{1}}{2\sigma^{2}}\|{\mathbf{r}}\|_{2}^{2} \quad \text{s.t.}\quad {\mathbf{r}}= {\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}.\end{aligned}$$ The Lagrangian for this problem is $${\mathcal{L}}({\mathbf{x}},{\mathbf{r}},{\boldsymbol{\lambda}})=\phi({\mathbf{x}})+\frac{1}{2\sigma^2}\|{\mathbf{r}}\|_2^2 + {\boldsymbol{\lambda}}^T({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}- {\mathbf{r}}),$$ where ${\boldsymbol{\lambda}}$ are the dual variables enforcing the constraint. The auxiliary variable ${\mathbf{r}}$ can be eliminated by extremizing ${\mathcal{L}}$ with respect to it: $$\nabla_{{\mathbf{r}}}{\mathcal{L}}= \frac{1}{\sigma^2}{\mathbf{r}}- {\boldsymbol{\lambda}},\quad \nabla_{{\mathbf{r}}}{\mathcal{L}}= 0 \implies {\mathbf{r}}= \sigma^2 {\boldsymbol{\lambda}}.$$ Plugging this value of ${\mathbf{r}}$ into ${\mathcal{L}}$ we get $${\mathcal{L}}({\mathbf{x}},{\boldsymbol{\lambda}})=\phi({\mathbf{x}})-\frac{1}{2}\sigma^2\|{\boldsymbol{\lambda}}\|_2^2 + {\boldsymbol{\lambda}}^T({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}).$$ After a change of variables to ${\boldsymbol{\lambda}}\leftarrow \sigma {\boldsymbol{\lambda}}$ (which we justify below) we arrive at $$\begin{aligned}
\tag{MAP Lagrangian}{\mathcal{L}_{\text{MAP}}}({\mathbf{x}},{\boldsymbol{\lambda}})=\phi({\mathbf{x}})-\frac{1}{2}\|{\boldsymbol{\lambda}}\|_{2}^{2}+\frac{1}{\sigma}{\boldsymbol{\lambda}}^{T}({\mathbf{y}}-{\mathbf{A}}{\mathbf{x}}).\end{aligned}$$ Extremizing ${\mathcal{L}_{\text{MAP}}}$ yields dynamics $$\begin{aligned}
\tag{Mitral cell firing rate relative to baseline}
\tau_{mc} \frac{d{\boldsymbol{\lambda}}}{dt} &= - {\boldsymbol{\lambda}}+ \frac{1}{\sigma}({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}})\\
\tag{Granule cell membrane voltage}
\tau_{gc} \frac{d{\mathbf{v}}}{dt} &= - {\mathbf{v}}+ {\mathbf{A}}^T{\boldsymbol{\lambda}},\\
\tag{Granule cell firing rate}
{\mathbf{x}}&= \frac{1}{\gamma \sigma}[{\mathbf{v}}- \beta \sigma]_+,\end{aligned}$$ where $[z]_+ = \text{max}(z,0)$ is the rectifying linear function. These dynamics can easily be shown to yield the MAP solution in the value of ${\mathbf{x}}$ at convergence (see Supplementary Information). The identification of ${\boldsymbol{\lambda}}$ and ${\mathbf{x}}$ with mitral and granule cells, respectively is natural as the dynamics indicate that (a) the ${\boldsymbol{\lambda}}$ variables are excited by the sensory input ${\mathbf{y}}$ and inhibited by ${\mathbf{x}}$, whereas (b) the much more numerous ${\mathbf{x}}$ variables receive their sole excitation from the ${\boldsymbol{\lambda}}$ variables, and (c) the connectivity of the ${\boldsymbol{\lambda}}$ and ${\mathbf{x}}$ variables is symmetric, reminiscent of the observed dendro-dendritic connections between mitral and granule cells [@shepherd_synaptic_2004]. The rescaling applied to ${\boldsymbol{\lambda}}$ is to keep mitral cell activity at convergence on the same order of magnitude as that of the receptor neurons, as qualitatively observed experimentally (compare for example [@shusterman_precise_2011] and [@duchamp-viret_odor_1999]): We assume without loss of generality that the elements of ${\mathbf{A}}$ and ${\mathbf{x}}$ are scaled such that the elements of ${\mathbf{y}}$ are $O(1)$ in magnitude. At convergence, ${\boldsymbol{\lambda}}= \sigma^{-1}({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}})$, and as we expect the elements of ${\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}$ to be $O(\sigma)$ at convergence, this results in the elements of ${\boldsymbol{\lambda}}$ being $O(1)$ in magnitude, as desired.
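The following sketch integrates these mitral/granule cell dynamics with a simple forward-Euler scheme on the same kind of toy problem as in the earlier sketch. The time constants, step size, integration time and all problem parameters are illustrative choices of ours (the rectified dynamics are not guaranteed to settle for arbitrary choices), and the printed diagnostics simply report how well the final state satisfies the fixed-point and MAP optimality conditions.

```python
import numpy as np

# Forward-Euler integration of the mitral-cell / granule-cell dynamics above.
# Problem sizes, parameter values, time constants, step size and integration
# time are all illustrative choices.
rng = np.random.default_rng(0)
M, N, K = 20, 80, 5
beta, gamma, sigma = 1.0, 1.0, 0.2
A = np.abs(rng.standard_normal((M, N))) / np.sqrt(N)
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.uniform(0.5, 1.5, size=K)
y = A @ x_true + sigma * rng.standard_normal(M)

tau_mc, tau_gc = 0.5, 1.0      # mitral and granule cell time constants
dt, T = 0.001, 40.0            # Euler step and total integration time
lam = np.zeros(M)              # mitral cell activity relative to baseline
v = np.zeros(N)                # granule cell membrane voltage
x = np.zeros(N)                # granule cell firing rate
for _ in range(int(T / dt)):
    dlam = -lam + (y - A @ x) / sigma
    dv = -v + A.T @ lam
    lam += (dt / tau_mc) * dlam
    v += (dt / tau_gc) * dv
    x = np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)

# At a fixed point, lam = (y - A x)/sigma and x should satisfy the MAP
# optimality conditions (same check as in the previous sketch).
g = A.T @ lam / sigma
kkt = np.where(x > 0, np.abs(g - beta - gamma * x), np.maximum(g - beta, 0.0))
print("fixed-point residual on lambda:", np.abs(lam - (y - A @ x) / sigma).max())
print("max KKT violation             :", kkt.max())
```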
It may seem odd that the readout of the computation is in the activity of the granule cells, which not only do not project outside of the olfactory bulb, but lack axons entirely [@shepherd_synaptic_2004]. However, cortical neurons can read out the results of the computation by simply mirroring the dynamics of the granule cells: $$\begin{aligned}
\tag{Piriform cell membrane voltage}
\tau_{pc} \frac{d{\mathbf{u}}}{dt} &= - {\mathbf{u}}+ {\mathbf{A}}^T{\boldsymbol{\lambda}},\\
\tag{Piriform cell firing rate}
{\mathbf{z}}&= \frac{1}{\gamma \sigma}[{\mathbf{u}}- \beta \sigma]_+,\end{aligned}$$ In this circuit cortical neurons receive exactly the same mitral cell input as the granule cells and integrate it in exactly the same way (in fact, there is an implied 1-to-1 correspondence between granule cells and piriform cortical neurons) but are not required to provide feedback to the bulb. Thus, basic olfactory inference can be performed entirely within the bulb, with the concomitant increases in computational speed, and the results can be easily read out in the cortex. As cortical feedback to the bulb (in particular to the granule cells, as this model would suggest) does exist [@shepherd_synaptic_2004], its role may be to incorporate higher level cognitive information and task contingencies into the inference computation. We leave the exploration of this hypothesis to future work.
These dynamics and their implied circuit are essentially the same as those of predictive coding described in the Introduction (Figure \[fig:all-to-all\]B), and hence suffer from the same problem of all-to-all connectivity. However, as we have derived our dynamics in a principled way from the original MAP inference problem, we can now elaborate them by taking inspiration from olfactory bulb anatomy to derive a circuit that can perform MAP inference but with sparse connectivity.
Incorporating sister mitral cells
----------------------------------
The circuit derived above (Figure \[fig:all-to-all\]B) implies that each glomerulus is sampled by a single mitral cell. However, in vertebrates there are many more mitral cells than glomeruli, but each mitral cell samples a single glomerulus, so that each mitral cell has several dozen ‘sister’ cells, all of which sample the same glomerulus [@shepherd_synaptic_2004]. This is shown schematically in Figure \[fig:sister\_mcs\]. Although sister mitral cells receive the same receptor inputs, their odor responses can vary, presumably due to differing interactions with the granule cell population [@dhawale_non-redundant_2010]. The computational role of the sister mitral cells has thus far remained unclear. Here we show how they can be used to perform MAP inference but with sparse connectivity.
![Sister mitral cells. In the vertebrate olfactory bulb, each glomerulus is sampled by not one but $\sim 25$ ‘sister’ cells [@shepherd_synaptic_2004]. Here we’ve shown a setting with 3 sisters/glomerulus.[]{data-label="fig:sister_mcs"}](sister_mcs_schematic.pdf){width="4in"}
We begin by noting the simple equalities $${\mathbf{A}}{\mathbf{x}}= \sum_{i=1}^n {\mathbf{A}}^i {\mathbf{x}}^i, \quad \phi({\mathbf{x}}) = \sum_{i=1}^n \phi({\mathbf{x}}^i),$$ for any separable function $\phi$ (such as ours), and any partitioning of the matrix ${\mathbf{A}}$ and corresponding partitioning of the vector ${\mathbf{x}}$ into $n$ blocks. For example, if we partition ${\mathbf{A}}$ and ${\mathbf{x}}$ into consecutive blocks, we’d have: $${\mathbf{A}}= [\underbrace{A_{:,1},\dots,A_{:,N/n}}_{{\mathbf{A}}^1},\dots,\underbrace{A_{:,N-N/n+1},\dots,A_{:,N}}_{{\mathbf{A}}^n}], \quad {\mathbf{x}}= [\underbrace{x_1,\dots,x_{N/n}}_{{\mathbf{x}}^1},\dots,\underbrace{x_{N-N/n+1},\dots,x_N}_{{\mathbf{x}}^n}].$$ This partitioning is shown schematically in Figure \[fig:partition\].
![An example partitioning of the affinity matrix ${\mathbf{A}}$ and the odor vector ${\mathbf{x}}$.[]{data-label="fig:partition"}](partitioning.pdf){width="4in"}
We can rewrite the Lagrangian ${\mathcal{L}_{\text{MAP}}}$ in terms of this partitioning as $${\mathcal{L}_{\text{MAP}}}({\mathbf{x}}, {\boldsymbol{\lambda}}) = {\mathcal{L}_{\text{MAP}}}(\{{\mathbf{x}}^i\},{\boldsymbol{\lambda}})=-\frac{1}{2}\|{\boldsymbol{\lambda}}\|_{2}^{2} + \frac{1}{\sigma}{\boldsymbol{\lambda}}^T{\mathbf{y}}+ \sum_{i=1}^n \phi({\mathbf{x}}^i) - \frac{1}{\sigma}{\boldsymbol{\lambda}}^{T}{\mathbf{A}}^i {\mathbf{x}}^i.$$ Note that although we’ve split ${\mathbf{A}}$ and ${\mathbf{x}}$ into $n$ blocks, we’re still using a single, shared ${\boldsymbol{\lambda}}$ variable. Extremizing with respect to the $\{{\mathbf{x}}^i\}$ and a shared ${\boldsymbol{\lambda}}$ would be an application of dual decomposition [@boyd_distributed_2011] to our problem. Instead, inspired by the presence of sister mitral cells, we reformulate the Lagrangian ${\mathcal{L}_{\text{MAP}}}$ by assigning to each block its own set ${\boldsymbol{\lambda}}^i$ of mitral cells, and introduce a corresponding set of variables ${\boldsymbol{\mu}}^i$ to enforce the constraint ${\boldsymbol{\lambda}}^i = {\boldsymbol{\lambda}}$. This yields $$\begin{aligned}
{\mathcal{L}_{\text{sis}}}(\{{\mathbf{x}}^i\},\{{\boldsymbol{\lambda}}^i\},\{{\boldsymbol{\mu}}^i\},{\boldsymbol{\lambda}}) = \sum_{i=1}^n \frac{1}{n\sigma }{\boldsymbol{\lambda}}^{i,T}{\mathbf{y}}+ \phi({\mathbf{x}}^i) &- \frac{1}{2n} \|{\boldsymbol{\lambda}}^i\|_2^2 - \frac{1}{\sigma}{\boldsymbol{\lambda}}^{i,T}{\mathbf{A}}^i {\mathbf{x}}^i\\
&+ {\boldsymbol{\mu}}^{i,T}({\boldsymbol{\lambda}}- {\boldsymbol{\lambda}}^i) - \frac{1}{2}\|{\boldsymbol{\lambda}}- {\boldsymbol{\lambda}}^i\|_2^2.\end{aligned}$$ The additional term $\frac{1}{2}\|{\boldsymbol{\lambda}}- {\boldsymbol{\lambda}}^i\|_2^2$ has been introduced because it does not alter the value of ${\mathcal{L}_{\text{sis}}}$ at the solution (since there ${\boldsymbol{\lambda}}= {\boldsymbol{\lambda}}^i$), while allowing us to eliminate ${\boldsymbol{\lambda}}$ by setting $\nabla_{{\boldsymbol{\lambda}}}{\mathcal{L}_{\text{sis}}}= 0$, yielding: $${\boldsymbol{\lambda}}= \overline{{\boldsymbol{\lambda}}} + \overline{{\boldsymbol{\mu}}},\quad \overline{{\boldsymbol{\lambda}}} = \frac{1}{n}\sum_{i=1}^n {\boldsymbol{\lambda}}^i, \quad \overline{{\boldsymbol{\mu}}} = \frac{1}{n}\sum_{i=1}^n {\boldsymbol{\mu}}^i.$$ The values $\overline{{\boldsymbol{\lambda}}}$ and $\overline {\boldsymbol{\mu}}$ are averages computed over blocks, and are variables that would be available at the glomeruli. For example $\overline{{\boldsymbol{\lambda}}}_i$ would be the average activity of all sister cells that innervate the $i$’th glomerulus.
As before, we derive dynamics by extremizing a Lagrangian, in this case ${\mathcal{L}_{\text{sis}}}$. As the $\{{\boldsymbol{\mu}}^i\}$ are the dual variables of a constrained *maximization* problem (that of maximizing ${\mathcal{L}_{\text{sis}}}$ with respect to $\{{\boldsymbol{\lambda}}^i\}$), their dynamics minimize ${\mathcal{L}_{\text{sis}}}$: $$\frac{d{\boldsymbol{\mu}}^i}{dt} \propto -\nabla_{{\boldsymbol{\mu}}^i}{\mathcal{L}_{\text{sis}}}= {\boldsymbol{\lambda}}^i - {\boldsymbol{\lambda}}= {\boldsymbol{\lambda}}^i - \overline{{\boldsymbol{\lambda}}} - \overline{{\boldsymbol{\mu}}} \implies \frac{d\overline{{\boldsymbol{\mu}}}}{dt} \propto -\overline{{\boldsymbol{\mu}}}.$$ Hence $\overline{{\boldsymbol{\mu}}}$ decays to zero irrespective of the other variables, and in particular, if it starts at 0 it will remain there. In the following we will assume that this initial condition is met so that $\overline {\boldsymbol{\mu}}= 0$ at all times, allowing us to eliminate it from the equations. The resulting dynamics that extremize ${\mathcal{L}_{\text{sis}}}$ are: $$\begin{aligned}
\tag{Mitral cell activity relative to baseline}\tau_{mc} \frac{d{\boldsymbol{\lambda}}^i}{dt} &= -(1 + \frac{1}{n}){\boldsymbol{\lambda}}^i + \frac{1}{\sigma}\left(\frac{{\mathbf{y}}}{n} - {\mathbf{A}}^i {\mathbf{x}}^i\right) + \overline{{\boldsymbol{\lambda}}} - {\boldsymbol{\mu}}^i\\
\tag{Granule cell membrane voltage}\tau_{gc} \frac{d{\mathbf{v}}^i}{dt} &= - {\mathbf{v}}^i + {\mathbf{A}}^{i,T}{\boldsymbol{\lambda}}^i\\
\tag{Granule cell firing rate}{\mathbf{x}}^i &= \frac{1}{\gamma \sigma }[{\mathbf{v}}^i - \beta \sigma]_+\\
\tag{Periglomerular cell activity relative to baseline, no leak}\tau_{pg} \frac{d{\boldsymbol{\mu}}^i}{dt} &= {\boldsymbol{\lambda}}^i - \overline{{\boldsymbol{\lambda}}}\end{aligned}$$ We have identified the ${\boldsymbol{\mu}}^i$ variables with olfactory bulb periglomerular cells because they inhibit the mitral cells and are in turn excited by them [@shepherd_synaptic_2004] and do not receive direct receptor input themselves, reminiscent of the Type II periglomerular cells of Kosaka and Kosaka [@kosaka_synaptic_2005].
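At a fixed point of these dynamics, the periglomerular equations force ${\boldsymbol{\lambda}}^i = \overline{{\boldsymbol{\lambda}}}$ for every block, and summing the mitral cell equations over blocks (using $\overline{{\boldsymbol{\mu}}}=0$) then gives $$\overline{{\boldsymbol{\lambda}}} = \frac{1}{\sigma}\left({\mathbf{y}}- \sum_{i=1}^n {\mathbf{A}}^i{\mathbf{x}}^i\right) = \frac{1}{\sigma}({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}), \qquad {\mathbf{v}}^i = {\mathbf{A}}^{i,T}\overline{{\boldsymbol{\lambda}}}.$$ These are exactly the fixed point conditions of the circuit with a single set of mitral cells, so any fixed point of the sparse circuit again yields the MAP solution in the granule cell activities.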
This circuit is shown schematically in Figure \[fig:sparse\_circuit\]. Importantly, in this circuit each mitral cell interacts only with the granule cells within its block, reducing mitral-granule connectivity by a factor of $n$ (though the *total* number of mitral-granule synapses has stayed the same due to the introduction of $n$ sister mitral cells per glomerulus). The information from the other granule cells is delivered indirectly to each mitral cell via the influences of the glomerular average $\overline {\boldsymbol{\lambda}}$ and periglomerular inhibition.
![Inference circuit with sparse connectivity using sister mitral cells. Sister cells now only interact with the granule cells within their own block, reducing their connectivity by a factor of $n$. Information is shared between blocks at the glomeruli and through the periglomerular cells.[]{data-label="fig:sparse_circuit"}](sparse_circuit.pdf){width="4in"}
Leaky periglomerular cells via an approximate Lagrangian
--------------------------------------------------------
The dynamics above imply that the periglomerular cells ${\boldsymbol{\mu}}^i$ do not leak, i.e., they are perfect integrators, a property that is obviously at odds with biology. To introduce a leak term we first recall that ${\boldsymbol{\mu}}^i$ dynamics minimize ${\mathcal{L}_{\text{sis}}}$. We then introduce an upper bound to ${\mathcal{L}_{\text{sis}}}$: $$\begin{aligned}
{\mathcal{L}_{\text{sis}}^{\varepsilon}}(\{{\boldsymbol{\mu}}^i\},\dots) = {\mathcal{L}_{\text{sis}}}(\{{\boldsymbol{\mu}}^i\},\dots) + \sum_{i=1}^n \frac{1}{2}\|{\boldsymbol{\lambda}}- {\boldsymbol{\lambda}}^i\|_2^2 - \frac{1}{2(1 + \varepsilon)}\|{\boldsymbol{\lambda}}- {\boldsymbol{\lambda}}^i\|_2^2 + \frac{1}{2}\varepsilon \|{\boldsymbol{\mu}}^i\|_2^2,\end{aligned}$$ where $\varepsilon \ge 0$ and we’ve suppressed the other arguments to the Lagrangians for clarity. The first two terms in the augmentation replace each $-\frac{1}{2}\|{\boldsymbol{\lambda}}- {\boldsymbol{\lambda}}^i\|_2^2$ term in ${\mathcal{L}_{\text{sis}}}$ with $-\frac{1}{2(1+\varepsilon)}\|{\boldsymbol{\lambda}}- {\boldsymbol{\lambda}}^i\|_2^2$, and the final term penalizes large values of ${\boldsymbol{\mu}}^i$. The dynamics that extremize ${\mathcal{L}_{\text{sis}}^{\varepsilon}}$ are the same as those that extremize ${\mathcal{L}_{\text{sis}}}$ above, except for those of the mitral and periglomerular cells, which are modified to: $$\begin{aligned}
\tau_{mc} \frac{d{\boldsymbol{\lambda}}^i}{dt} &= -(\frac{1}{1+\varepsilon} + \frac{1}{n}){\boldsymbol{\lambda}}^i + \frac{1}{\sigma}\left(\frac{{\mathbf{y}}}{n} - {\mathbf{A}}^i {\mathbf{x}}^i\right) + \frac{\overline{{\boldsymbol{\lambda}}}}{1+\varepsilon} - {\boldsymbol{\mu}}^i\\
\tau_{pg} \frac{d{\boldsymbol{\mu}}^i}{dt} &= - {\boldsymbol{\mu}}^i + \frac{1}{\varepsilon}({\boldsymbol{\lambda}}^i - \overline{{\boldsymbol{\lambda}}})\end{aligned}$$ Note that now the periglomerular cells are endowed with a leak, as desired. Because the resulting dynamics no longer extremize ${\mathcal{L}_{\text{sis}}}$, the solution no longer matches the MAP solution exactly, and is in fact denser. To understand this effect (see Supplementary Information), note that at $\varepsilon = 0$, ${\mathcal{L}_{\text{sis}}^{\varepsilon}}= {\mathcal{L}_{\text{sis}}}$, and the sister cells are ‘fully coupled’, i.e., the ${\boldsymbol{\mu}}^i$ variables are free to enforce the constraint ${\boldsymbol{\lambda}}^i = {\boldsymbol{\lambda}}$. The system then solves the MAP problem exactly by combining information from all blocks, yielding a sparse solution. As $\varepsilon \to \infty$ non-zero values of ${\boldsymbol{\mu}}^i$ result in progressively higher values for the Lagrangian, forcing ${\boldsymbol{\mu}}^i$ to zero in the limit. In this ‘fully uncoupled’ state each block attempts to explain its fraction ${\mathbf{y}}/n$ of the input independently of the others using only its own subset ${\mathbf{A}}^i$ of the affinity matrix, reducing overcompleteness and resulting in denser representations. For small values of $\varepsilon$ this can be counteracted by increasing the sparsity prior coefficient $\beta$.
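For concreteness, the following minimal Python sketch (not the code used for the figures) integrates the block equations above by forward Euler. The random affinity matrix, the odor vector, the random seed and the small integration step are illustrative assumptions; the remaining parameter values follow the caption of Figure \[fig:performance\] for the 4-block circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, n = 50, 1000, 4                         # glomeruli, granule cells, blocks
beta, gamma, sigma, eps = 170.0, 100.0, 1e-2, 1e-2
tau, dt, steps = 0.05, 2e-5, 25000            # 50 ms time constants, 0.5 s puff

A = rng.standard_normal((M, N)) / np.sqrt(M)  # illustrative affinity matrix
A3 = A.reshape(M, n, N // n).transpose(1, 0, 2)   # blocks A^i, shape (n, M, N/n)
x_true = np.zeros(N)
x_true[rng.choice(N, 3, replace=False)] = 1.0
y = A @ x_true                                # noiseless odor input

lam = np.zeros((n, M))                        # sister mitral cells lambda^i
mu = np.zeros((n, M))                         # periglomerular cells mu^i
v = np.zeros((n, N // n))                     # granule membrane voltages v^i

for _ in range(steps):
    x = np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)   # granule rates
    lam_bar = lam.mean(axis=0)                                # glomerular average
    Ax = np.einsum('imj,ij->im', A3, x)                       # A^i x^i
    dlam = (-(1.0 / (1 + eps) + 1.0 / n) * lam
            + (y / n - Ax) / sigma + lam_bar / (1 + eps) - mu)
    dmu = -mu + (lam - lam_bar) / eps
    dv = -v + np.einsum('imj,im->ij', A3, lam)                # A^{i,T} lambda^i
    lam += dt / tau * dlam                    # small step for explicit-Euler stability
    mu += dt / tau * dmu
    v += dt / tau * dv

x_hat = (np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)).ravel()
```

The small step size is only needed because of the stiff mitral-granule loop at these gains; any standard stiff ODE integrator could be substituted.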
Figure \[fig:performance\]A demonstrates the time course of the recovery error of the circuit in response to a 500 ms odor puff, as the number of blocks is varied, and averaged over 40 trials. Recovery error is defined as the mean sum of squares of the difference between the circuit’s estimate and the MAP solution normalized by the mean sum of squares of the MAP solution. The all-to-all circuit is able to reduce this error to near zero (numerical precision) as it is performing MAP inference exactly. As the multi-block circuits use a non-zero value of $\varepsilon$ they are only approximating the MAP solution, but can still greatly reduce the recovery error when using an optimized setting of the sparsity parameter $\beta$, as described above. Figure \[fig:performance\]B shows the output of the 4-block circuit for a typical input odor, demonstrating its close approximation to the MAP solution. In Figure \[fig:performance\]C the dynamics of two different cells and their sisters from another block are shown, demonstrating that they are similar, but not identical, as experimentally observed [@dhawale_non-redundant_2010], and Figure \[fig:performance\]D shows the activity of the corresponding periglomerular cells. Finally, Figure \[fig:performance\]E shows the membrane voltage and output firing rate of one of the active granule cells. Note that the firing rate has essentially stabilized by $\sim$200 ms after odor onset, broadly consistent with the fast olfactory discrimination times observed in rodents [@uchida_speed_2003].
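For reference, the error metric used in panel A can be written as a one-line helper (the averaging over trials is then taken outside); `x_circuit` and `x_map` are assumed to be the granule-rate vectors of the circuit and of the MAP solution for the same odor.

```python
import numpy as np

def recovery_error(x_circuit, x_map):
    # Normalized squared error of the circuit output relative to the MAP estimate.
    return np.sum((x_circuit - x_map) ** 2) / np.sum(x_map ** 2)
```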
![Performance. (A) Time course of recovery error for different circuits averaged over 40 random odors puffed for 500 ms at $t = 0$. Recovery error is mean squared error of granule cell activity relative to the MAP estimate, normalized by mean sum of squares of the MAP estimate. The all-to-all circuit essentially recovers the MAP solution; $n$-block circuits do so approximately as $\varepsilon>0$. Odors were sparse 1000-dimensional vectors ($N=1000$) with 3 randomly selected elements set to 1. All-to-all circuit had 50 mitral cells ($M=50$); $n$-block circuits had $50n$ mitral cells and the corresponding periglomerular cells. Other parameters: $\beta = 100\;\text{(all-to-all)}, 150\;\text{(2-block)}, 170\;\text{(4-block)}$; $\gamma = 100$; $\sigma = 10^{-2}$ (but no noise was actually added above); $\varepsilon = 10^{-2}$; $\tau_{mc} = \tau_{pg} = \tau_{gc} = \text{50 ms}$. (B) Example recovery. The output of a circuit is the vector of granule cell activations immediately before odor offset. Top panel: actual odor presented. Bottom two panels: MAP estimate (blue) and output of the 4-block circuit (orange), zoomed in (and sign-inverted in the bottom panel) to values near 1 and 0, respectively, to highlight discrepancies between the MAP estimate and the circuit output, demonstrating good agreement. (C) Sister mitral cells: The time course of two mitral cells and one each of their sisters, showing that the activities of sister cells are similar but not identical. (D) The activity of the periglomerular cells paired to the mitral cells in (C). (E) The membrane voltage and firing rate of a granule cell strongly activated by the odor. Firing rate is stable by $\sim 200$ ms after odor onset, consistent with fast odor discrimination in rodents [@uchida_speed_2003].[]{data-label="fig:performance"}](performance.pdf){width="\linewidth"}
Discussion
==========
Inspired by the sister mitral cells in the olfactory bulb, we have shown in this work how MAP inference, which often requires dense connectivity between computational units, can be reformulated in a principled way to yield a circuit with sparse connectivity, at the cost of introducing additional computational units. A key prediction of our model may appear to be that the mitral-granule cell connectome has block structure, in which granule cells only communicate with the mitral cells in their block and vice versa. As we show in the Supplemental Information, a simple generalization of our model shows that the MAP solution can be found with mitral-granule cell connectivity that does not have the block structure we have assumed here (though equally sparse). This generalization also accommodates the experimentally observed random sampling of glomeruli by mitral cells [@imai_construction_2014], in addition to the ordered one presented above where exactly $n$ sister cells sample each glomerulus.
Previous work by several groups has addressed sparse coding in olfaction [@koulakov_sparse_2011; @grabska-barwinska_demixing_2013; @tootoonian_dual_2014; @grabska-barwinska_probabilistic_2017; @kepple_deconstructing_2016]. Our work extends that of [@tootoonian_dual_2014] in insects by showing how the MAP problem itself can be solved, rather than the related compressed sensing problem addressed in that paper. In our work we propose that olfactory bulb granule cells encode odor representations, similar to [@koulakov_sparse_2011]. The authors there assumed a random mitral-granule connectome, resulting in ‘incomplete’ odor representations because granule cell firing rates are positive. In this work we assume that the connectome is set to its ‘correct’ value determined by the affinity matrix ${\mathbf{A}}$, obviating the need for negative rates and resulting in ‘complete’ representations. Even with such complete representations, mitral cell activity is not negligible, and allows for simple and exact readout of the inferred odor concentrations in downstream cortical areas. Furthermore, previous work [@tootoonian_dual_2014] has shown, albeit in a limited setting, that the correct value of the connectome can be learned via biologically plausible learning mechanisms. We expect that to be the case here, though we leave that determination to future work. The authors in [@grabska-barwinska_demixing_2013; @grabska-barwinska_probabilistic_2017] propose a model in which the olfactory bulb and cortex interact to infer odorant concentrations while retaining uncertainty information, rather than just providing point estimates as in MAP inference. The authors in [@kepple_deconstructing_2016] propose a bulbar-cortical circuit that represents odors based on ‘primacy’, the relative strengths of the strongest receptor responses, automatically endowing the system with the concentration invariance likely to be important in olfactory computation. We’ve shown that the MAP computation can be performed entirely within the bulb while allowing for easy and exact cortical readout and without the need for cortical feedback, retaining odor information and allowing downstream areas to perform concentration invariance and primacy computations, as needed. Extending our methods to provide uncertainty information is an important task that we leave to future work.
Supplementary Information {#supplementary-information .unnumbered}
=========================
Generalizing the model {#generalizing-the-model .unnumbered}
----------------------
Our model as formulated in the main text predicts that granule cells interact only with the mitral cells within their blocks. This predicts a block diagonal structure in the mitral-granule cell connectome, such as in Figure \[fig:connectivity\]B. However, this is not the only possible solution. To determine the set of all possible solutions we extend the derivations in the main text to a slightly more general setting. This generalization will also allow us to deal with the biologically observed random sampling of glomeruli by mitral cells in the following section.
Instead of considering the sister mitral cells separately as $\{{\boldsymbol{\lambda}}^i\}$, we can stack them into one large vector ${\boldsymbol{\xi}}$, similarly stack the periglomerular cells $\{{\boldsymbol{\mu}}^i\}$ into ${\boldsymbol{\mu}}$ and consider a generalized Lagrangian $${\mathcal{L}_{\text{sis}}^{\varepsilon}}({\mathbf{x}},{\boldsymbol{\xi}}, {\boldsymbol{\lambda}}, {\boldsymbol{\mu}})= \phi({\mathbf{x}})-\frac{1}{2}\|{\mathbf{F}}{\boldsymbol{\xi}}\|_{2}^{2}+\frac{1}{\sigma}{{\boldsymbol{\xi}}}^{T}({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}-{\mathbf{W}}{\mathbf{x}}) + {\boldsymbol{\mu}}^T ({\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}- {\boldsymbol{\xi}}) - \frac{\|{\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}- {\boldsymbol{\xi}}\|_2^2}{2(1+\varepsilon)} + \frac{\varepsilon}{2}\|{\boldsymbol{\mu}}\|_2^2.$$ Here ${\mathbf}F = \text{diag}(f_1,\dots,f_T)$ and ${\mathbf}G= \text{diag}(g_1,\dots,g_T)$, where $T$ is the total number of mitral cells. The binary matrix ${\mathbf{V}}$ indicates the glomeruli sampled by each mitral cell. As each mitral cell samples exactly one glomerulus, each row of ${\mathbf{V}}$ contains just one non-zero element, rendering the columns of ${\mathbf{V}}$ orthogonal (though not orthonormal), so that ${\mathbf{V}}^T{\mathbf{V}}$ is diagonal. ${\mathbf{G}}$ is the gain each mitral cell applies to its glomerular input, ${\mathbf{F}}$ can modify the leak time constant of each mitral cell, and ${\mathbf{W}}$ is the mitral-granule connectome. The relationship between ${\boldsymbol{\xi}}$ and ${\boldsymbol{\lambda}}$ at convergence must satisfy ${\boldsymbol{\xi}}= {\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}$, to mirror the sampling of glomeruli by mitral cells, where we’ve included the diagonal matrix ${\mathbf{D}}= {\text{diag}}(\{d_i\})$ to allow for cell-specific gain. We can then ask what conditions on these matrices ensure that when ${\boldsymbol{\xi}}= {\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}$, ${\mathcal{L}_{\text{sis}}}^0 = {\mathcal{L}_{\text{MAP}}}$. Plugging ${\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}$ in for ${\boldsymbol{\xi}}$, we have $${\mathcal{L}_{\text{sis}}}^0({\mathbf{x}},{\boldsymbol{\xi}}, {\boldsymbol{\lambda}}, {\boldsymbol{\mu}}) = \phi({\mathbf{x}})-\frac{1}{2}{\boldsymbol{\lambda}}^T {\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{F}}^T{\mathbf{F}}{\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}+\frac{1}{\sigma}{\boldsymbol{\lambda}}^T {\mathbf{V}}^T {\mathbf{D}}^T({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}-{\mathbf{W}}{\mathbf{x}}).$$ Then by inspection, if the following conditions $$(1)\; {\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{F}}^T {\mathbf{F}}{\mathbf{D}}{\mathbf{V}}= {\mathbf}I_M, \quad (2)\; {\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{G}}{\mathbf{V}}= {\mathbf}I_M, \quad\text{and}\quad (3)\; {\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{W}}= {\mathbf{A}}$$ are met, $${\mathcal{L}_{\text{sis}}}^0({\mathbf{x}},{\boldsymbol{\xi}}, {\boldsymbol{\lambda}}, {\boldsymbol{\mu}}) = \phi({\mathbf{x}})-\frac{1}{2}\|{\boldsymbol{\lambda}}\|_2^2 +\frac{1}{\sigma}{\boldsymbol{\lambda}}^T ({\mathbf{y}}-{\mathbf{A}}{\mathbf{x}}) = {\mathcal{L}_{\text{MAP}}}.$$ Hence, extremizing ${\mathcal{L}_{\text{sis}}}^0$ will yield the MAP solution at convergence (see below for a direct derivation). ${\mathcal{L}_{\text{sis}}}$ considered in the main text corresponds to $${\mathbf{V}}= {\mathbf}1_n \otimes {\mathbf{I}}_M, \quad {\mathbf{F}}^T{\mathbf{F}}= {\mathbf{G}}= \frac{1}{n}{\mathbf{I}}_{nM}, \quad {\mathbf{D}}= {\mathbf{I}}_{nM},$$ and ${\mathbf{A}}$ (e.g. Figure \[fig:connectivity\]A) partitioned to yield a block-diagonal mitral-granule connectome (Figure \[fig:connectivity\]B). This setting of the matrices satisfies the conditions above, guaranteeing that extremizing ${\mathcal{L}_{\text{sis}}}$ in the text yields the MAP solution.
Note that the third condition above implies that (with ${\mathbf{D}}= {\mathbf{I}}$ as in the main text) any ${\mathbf}W$ satisfying ${\mathbf}V^T {\mathbf}W = {\mathbf}A$ makes the extremization of ${\mathcal{L}_{\text{sis}}}^0$ yield the MAP solution. Although the block structured connectome in Figure \[fig:connectivity\]B satisfies this condition, so too does e.g. the connectome in Figure \[fig:connectivity\]C. This latter connectome was generated by performing a modified $\ell_0$ minimization on a connectivity matrix ${\mathbf{W}}$, subject to the third constraint above. The result has the same sparseness as the matrix in Figure \[fig:connectivity\]B, but without the block structure. Thus a block-structured mitral-granule connectome is not the only sparsity pattern that yields the MAP solution, and given that the biological connectome is likely a result of learning, the experimentally observed connectome is more likely to resemble that in Figure \[fig:connectivity\]C, and to lack block structure.
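As a simple illustration of this point (the panel in Figure \[fig:connectivity\]C itself was produced by the modified $\ell_0$ minimization described above, not by the construction below), one can build a non-block ${\mathbf{W}}$ with exactly the same number of non-zero entries by assigning each entry $A_{mj}$ to a single randomly chosen sister of glomerulus $m$; sizes and the seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, n = 50, 1000, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
V = np.kron(np.ones((n, 1)), np.eye(M))       # V = 1_n (x) I_M, shape (nM, M)

W = np.zeros((n * M, N))
for m in range(M):
    owners = rng.integers(n, size=N)          # which sister of glomerulus m
    W[owners * M + m, np.arange(N)] = A[m]    # receives each granule contact

assert np.allclose(V.T @ W, A)                # condition (3) with D = I
```

Any such assignment works because the rows belonging to the sisters of a glomerulus sum to the corresponding row of ${\mathbf{A}}$.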
### Random sampling of glomeruli {#random-sampling-of-glomeruli .unnumbered}
In this work we’ve assumed that each glomerulus is sampled by exactly the same number $n$ of sister mitral cells. In biological fact, glomeruli are sampled randomly by mitral cells, but subject to the constraint that each mitral cell samples exactly one glomerulus. As long as each glomerulus is sampled at least once, our generalized framework in the previous section can accommodate this situation. The random sampling of glomeruli can be modeled as each row of the matrix ${\mathbf{V}}$ having one *randomly selected* element set to 1. Setting ${\mathbf{D}}= {\mathbf{I}}_{nM}$, condition (2) then implies $${\mathbf{V}}^T {\mathbf{G}}{\mathbf{V}}= {\mathbf{I}}_M \implies {\mathbf{V}}^T {\mathbf{G}}{\mathbf{V}}{\mathbf}1 = {\mathbf}I_M {\mathbf}1 \implies {\mathbf{V}}^T {\mathbf{G}}{\mathbf}1 = {\mathbf}1 \implies {\mathbf{V}}^T {\mathbf}g = {\mathbf}1,$$ where ${\mathbf}g$ is the vector of diagonal elements of ${\mathbf{G}}$, and we’ve used the fact that ${\mathbf{V}}{\mathbf}1 = {\mathbf}1$. As ${\mathbf{V}}^T$ has more columns than rows, the last equation is under-determined, so we can take the solution with least Euclidean norm by assuming that ${\mathbf}g$ is in the range of ${\mathbf{V}}$ i.e. ${\mathbf}g = {\mathbf{V}}{\mathbf}g'$. We then have $${\mathbf{V}}^T {\mathbf}g= {\mathbf}1 \implies {\mathbf{V}}^T {\mathbf{V}}{\mathbf}g' = {\mathbf}1 \implies {\mathbf}g' = [n_1^{-1},\dots,n_M^{-1}]^T \implies {\mathbf{G}}= \text{diag}(\{n_i^{-1}\}),$$ where $n_i$ is the number of sister mitral cells innervating glomerulus $i$, so that ${\mathbf{G}}$ assigns to each mitral cell the gain $1/n_i$ of the glomerulus it samples. What this value of ${\mathbf{G}}$ implies in terms of the circuit is that glomerular activation is split evenly among the innervating sister cells, so that each receives $1/n_i$ of the excitation. This process would occur naturally as the result of the neurotransmitter released by receptor neurons in the glomerulus being distributed approximately evenly among the innervating mitral cell dendrites. As for the remaining variables, ${\mathbf}F$ can then be set to $ {\mathbf}F = \text{diag}(\{n_i^{-1/2}\})$, and ${\mathbf{W}}$ can be chosen arbitrarily as long as it satisfies ${\mathbf{V}}^T {\mathbf{W}}= {\mathbf{A}}$.
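A small numerical sketch of this construction (sizes and the seed are arbitrary) builds a random sampling matrix ${\mathbf{V}}$ together with the gains ${\mathbf{G}}$ and ${\mathbf{F}}$ described above, and checks condition (2).

```python
import numpy as np

rng = np.random.default_rng(2)
M, T = 50, 200                                   # glomeruli, mitral cells
glom = np.concatenate([np.arange(M),             # guarantee every glomerulus
                       rng.integers(M, size=T - M)])   # is sampled at least once
rng.shuffle(glom)
V = np.zeros((T, M))
V[np.arange(T), glom] = 1.0                      # one glomerulus per mitral cell

n_per_glom = V.sum(axis=0)                       # sisters per glomerulus
G = np.diag(1.0 / n_per_glom[glom])              # each cell gets 1/n_i of its input
F = np.diag(1.0 / np.sqrt(n_per_glom[glom]))     # F^T F = G, so condition (1) holds

assert np.allclose(V.T @ G @ V, np.eye(M))       # condition (2) with D = I
```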
### Generalizing the constraint on sister-cells {#generalizing-the-constraint-on-sister-cells .unnumbered}
As a final generalization, we consider the desired mapping between sister mitral cells ${\boldsymbol{\xi}}$ and the original mitral cell vector ${\boldsymbol{\lambda}}$. In the main text, we’ve required that ${\boldsymbol{\xi}}= {\mathbf{V}}{\boldsymbol{\lambda}}$ at convergence but we can generalize this to ${\boldsymbol{\xi}}= {\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}$, where ${\mathbf{D}}= {\text{diag}}(\{d_i\})$ can be interpreted as a gain applied to the constraint on each sister mitral cell. We can then use condition (2) to solve for ${\mathbf{G}}$: $${\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{G}}{\mathbf{V}}= {\mathbf{I}}_M \implies {\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{G}}{\mathbf{V}}{\mathbf}1 = {\mathbf{V}}^T ({\mathbf}{d g}) = {\mathbf}1,$$ where ${\mathbf}{dg} = [\{d_i g_i\}]^T$. As before, we can then assume that ${\mathbf}{dg}$ is in the range of ${\mathbf{V}}$, which like before yields $${\mathbf{V}}^T ({\mathbf}{dg})= {\mathbf}1 \implies {\mathbf{V}}^T {\mathbf{V}}({\mathbf}{dg})' = {\mathbf}1 \implies ({\mathbf}{dg})' = [n_1^{-1},\dots,n_M^{-1}]^T \implies {\mathbf{G}}= \text{diag}(\{(d_i n_i)^{-1}\}).$$ To solve for ${\mathbf{F}}$ we can then note that if condition 2 is satisfied, then condition 1 can be satisfied by setting ${\mathbf{F}}^T{\mathbf{F}}{\mathbf{D}}= {\mathbf{G}}$. We then have $${\text{diag}}(\{f_i^2d_i\}) = {\text{diag}}(\{(d_i n_i)^{-1}\}) \implies f_i^2 d_i = (d_i n_i)^{-1} \implies f_i^2 = d_i^{-2}n_i^{-1} \implies f_i = (d_i \sqrt{n_i})^{-1},$$ so that ${\mathbf{F}}= {\text{diag}}(\{d_i^{-1}n_i^{-1/2}\}),$ showing that our framework can accommodate this setting as well.
![Sparsifying connectivity. (A) The original all-to-all affinity matrix ${\mathbf{A}}$. (B) The affinity matrix sparsified by being split into blocks, as considered in the main text. (C) A learned affinity matrix with the same number of non-zero elements as that in (B), but without block structure. Both the matrices ${\mathbf{W}}$ in panels (B) and (C) satisfy the third matrix condition ${\mathbf{V}}^T{\mathbf{D}}^T {\mathbf{W}}= {\mathbf{A}}$ for ${\mathbf{V}}= {\mathbf}1_n \otimes {\mathbf{I}}_M$ and ${\mathbf{D}}= {\mathbf{I}}_{nM}$, demonstrating that a block diagonal mitral-granule connectome is not required for circuit dynamics to yield the MAP solution.[]{data-label="fig:connectivity"}](connectivity_slide.pdf){width="\linewidth"}
Derivation of the generalized dynamics {#derivation-of-the-generalized-dynamics .unnumbered}
--------------------------------------
We will derive the dynamics for ${\mathcal{L}_{\text{sis}}^{\varepsilon}}$ in the generalized setting introduced above. Dynamics for our variables are determined by extremizing this Lagrangian. We minimize with respect to ${\mathbf{x}}$ as it’s the primal variable, maximize with respect to ${\boldsymbol{\lambda}}$ and ${\boldsymbol{\xi}}$ because they are dual variables, and minimize with respect to ${\boldsymbol{\mu}}$ as it is the dual variable for the maximization with respect to ${\boldsymbol{\lambda}}$ and ${\boldsymbol{\xi}}$.
We first eliminate ${\boldsymbol{\lambda}}$ from the dynamics by setting its gradient to zero. The gradient is $$\nabla_{{\boldsymbol{\lambda}}} {\mathcal{L}_{\text{sis}}^{\varepsilon}}= {\mathbf{V}}^T{\mathbf{D}}^T {\boldsymbol{\mu}}+ \frac{1}{1 + \varepsilon}({\mathbf{V}}^T {\mathbf{D}}^T {\boldsymbol{\xi}}- {\mathbf{V}}^T {\mathbf{D}}^T{\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}).$$ Setting it to zero yields $${\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}= {\mathbf{V}}^T {\mathbf{D}}^T( {\boldsymbol{\xi}}+ (1 + \varepsilon) {\boldsymbol{\mu}}) \implies {\boldsymbol{\lambda}}= ({\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{D}}{\mathbf{V}})^{-1}{\mathbf{V}}^T {\mathbf{D}}^T({\boldsymbol{\xi}}+ (1 + \varepsilon) {\boldsymbol{\mu}}).$$ Thus $ {\boldsymbol{\lambda}}$ are the coefficients of the least-squares projection of the $T$-dimensional variable $ {\boldsymbol{\xi}}+ (1 + \varepsilon) {\boldsymbol{\mu}}$ into the $M$-dimensional span of ${\mathbf{D}}{\mathbf{V}}$. Then $${\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}= {\mathbf{D}}{\mathbf{V}}({\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{D}}{\mathbf{V}})^{-1}{\mathbf{V}}^T {\mathbf{D}}^T ( {\boldsymbol{\xi}}+ (1 + \varepsilon) {\boldsymbol{\mu}}) = {\mathbf{P}}({\boldsymbol{\xi}}+ (1 + \varepsilon) {\boldsymbol{\mu}})$$ is the projection of ${\boldsymbol{\xi}}+ (1 + \varepsilon) {\boldsymbol{\mu}}$ into the span of ${\mathbf{D}}{\mathbf{V}}$, and ${\mathbf{P}}= {\mathbf{D}}{\mathbf{V}}({\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{D}}{\mathbf{V}})^{-1}{\mathbf{V}}^T {\mathbf{D}}^T$ is the projection matrix.
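The projection matrix is straightforward to form explicitly; the sketch below (with an arbitrary positive ${\mathbf{D}}$ and a random sampling matrix ${\mathbf{V}}$, both illustrative) verifies that ${\mathbf{P}}$ is idempotent and fixes vectors in the range of ${\mathbf{D}}{\mathbf{V}}$.

```python
import numpy as np

rng = np.random.default_rng(3)
M, T = 50, 200
glom = rng.integers(M, size=T)
glom[:M] = np.arange(M)                          # every glomerulus sampled
V = np.zeros((T, M))
V[np.arange(T), glom] = 1.0
D = np.diag(rng.uniform(0.5, 2.0, size=T))       # arbitrary positive gains

DV = D @ V
P = DV @ np.linalg.inv(DV.T @ DV) @ DV.T         # projection onto span of DV

assert np.allclose(P @ P, P)                     # idempotent
lam = rng.standard_normal(M)
assert np.allclose(P @ (DV @ lam), DV @ lam)     # fixes vectors of the form DV lam
```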
The dynamics maximize ${\mathcal{L}_{\text{sis}}^{\varepsilon}}$ with respect to ${\boldsymbol{\xi}}$: $$\begin{aligned}
\dot{{\boldsymbol{\xi}}} \propto \nabla_{{\boldsymbol{\xi}}}{\mathcal{L}_{\text{sis}}^{\varepsilon}}&= -{\mathbf{F}}^T {\mathbf{F}}{\boldsymbol{\xi}}+ \frac{1}{\sigma}({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}- {\mathbf{W}}{\mathbf{x}}) - {\boldsymbol{\mu}}+ \frac{1}{1 + \varepsilon}({\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}- {\boldsymbol{\xi}})\\
&= -\left[{\mathbf{F}}^T {\mathbf{F}}+ \frac{1}{1+\varepsilon}{\mathbf{I}}\right] {\boldsymbol{\xi}}+ \frac{1}{\sigma}({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}- {\mathbf{W}}{\mathbf{x}}) - {\boldsymbol{\mu}}+ \frac{1}{1+\varepsilon}{\mathbf{P}}({\boldsymbol{\xi}}+ (1 + \varepsilon){\boldsymbol{\mu}}).\end{aligned}$$ The dynamics of ${\boldsymbol{\mu}}$ minimize ${\mathcal{L}_{\text{sis}}^{\varepsilon}}$ with respect to it: $$\begin{aligned}
\dot {\boldsymbol{\mu}}\propto -\nabla_{{\boldsymbol{\mu}}} {\mathcal{L}_{\text{sis}}^{\varepsilon}}&= -{\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}+ {\boldsymbol{\xi}}- \varepsilon {\boldsymbol{\mu}}= {\boldsymbol{\xi}}- {\mathbf{P}}( {\boldsymbol{\xi}}+ (1 + \varepsilon){\boldsymbol{\mu}}) - \varepsilon {\boldsymbol{\mu}}= ({\mathbf{I}}- {\mathbf{P}}) {\boldsymbol{\xi}}- (1 + \varepsilon){\mathbf{P}}{\boldsymbol{\mu}}- \varepsilon {\boldsymbol{\mu}}.\end{aligned}$$ Thus we can decompose ${\boldsymbol{\mu}}$ into orthogonal components ${\mathbf{P}}{\boldsymbol{\mu}}$ and $({\mathbf{I}}- {\mathbf{P}}) {\boldsymbol{\mu}}$, with dynamics $$\begin{aligned}
{\mathbf{P}}\dot{\boldsymbol{\mu}}\propto -{\mathbf{P}}{\boldsymbol{\mu}}, \quad ({\mathbf{I}}- {\mathbf{P}}) \dot {\boldsymbol{\mu}}\propto ({\mathbf{I}}- {\mathbf{P}}) ({\boldsymbol{\xi}}- \varepsilon {\boldsymbol{\mu}}).\end{aligned}$$ These imply that the ${\mathbf{P}}{\boldsymbol{\mu}}$ component decays to zero, and in particular that if it starts at zero, it will remain there. Therefore we will require that this initial condition holds so that ${\mathbf{P}}{\boldsymbol{\mu}}= 0$ for all $t$, and simplify the ${\boldsymbol{\mu}}$ dynamics to $$\dot {\boldsymbol{\mu}}\propto -{\boldsymbol{\mu}}+ \frac{1}{\varepsilon} ({\mathbf{I}}- {\mathbf{P}}){\boldsymbol{\xi}}.$$ The dynamics for ${\mathbf{v}}$ and ${\mathbf{x}}$ remain unchanged, with ${\mathbf{v}}$ integrating the input from ${\boldsymbol{\xi}}$ and ${\mathbf{x}}$ applying a rectifying nonlinearity. If we identify the activity of the mitral cells with ${\boldsymbol{\xi}}$, the dynamics for the full system can then be defined as $$\begin{aligned}
\tau_{mc}\frac{d{\mathbf{s}}}{dt} &= -({\mathbf{F}}^T{\mathbf{F}}+ \frac{1}{1+\varepsilon}{\mathbf{I}}) {\mathbf{s}}+ \frac{1}{\sigma}({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}- {\mathbf{W}}{\mathbf{x}}) + \frac{1}{1+\varepsilon}{\mathbf{P}}{\mathbf{s}}- {\boldsymbol{\mu}}\\
\tau_{pg}\frac{d{\boldsymbol{\mu}}}{dt} &= -{\boldsymbol{\mu}}+ \frac{1}{\varepsilon}({\mathbf{s}}- {\mathbf{P}}{\mathbf{s}})\\
\tau_{gc}\frac{d {\mathbf{v}}}{dt} &= -{\mathbf{v}}+ {\mathbf{W}}^T {\mathbf{s}}\\
{\mathbf{x}}&= \frac{1}{\gamma \sigma}[{\mathbf{v}}- \beta \sigma]_+\end{aligned}$$
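In matrix form, one integration step of these dynamics can be sketched as follows; the function assumes that ${\mathbf{V}},{\mathbf{F}},{\mathbf{G}},{\mathbf{W}}$ satisfy the three conditions, that ${\mathbf{P}}$ has been precomputed as above, and, for brevity, that the three time constants are equal (as in our simulations).

```python
import numpy as np

def euler_step(s, mu, v, y, V, F, G, W, P, beta, gamma, sigma, eps, tau, dt):
    # Shapes: s, mu in R^T; v in R^N; y in R^M; V (T,M); F,G,P (T,T); W (T,N).
    x = np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)   # granule rates
    ds = (-(F.T @ F) @ s - s / (1 + eps)
          + (G @ (V @ y) - W @ x) / sigma
          + P @ s / (1 + eps) - mu)
    dmu = -mu + (s - P @ s) / eps
    dv = -v + W.T @ s
    return s + dt / tau * ds, mu + dt / tau * dmu, v + dt / tau * dv
```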
The MAP solution is the stationary point of the dynamics {#the-map-solution-is-the-stationary-point-of-the-dynamics .unnumbered}
--------------------------------------------------------
The MAP problem is to find the ${\mathbf{x}}$ that minimizes the negative log posterior $\phi({\mathbf{x}}) + \frac{1}{2\sigma^2}\|{\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}\|_2^2$. This ${\mathbf{x}}$ satisfies $$0 \in \partial \phi({\mathbf{x}}) -\frac{1}{\sigma^2}{\mathbf{A}}^T ({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}) \implies {\mathbf{x}}= \frac{1}{\gamma \sigma^2}[{\mathbf{A}}^T ({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}) - \beta \sigma^2]_+,$$ where $\partial \phi({\mathbf{x}})$ is the subgradient of $\phi$.
We will first show that the dynamics that extremize ${\mathcal{L}_{\text{MAP}}}$ yield an ${\mathbf{x}}$ variable that satisfies this relation. We will then show that the same is true for the generalized dynamics described above.
Dynamics that extremize ${\mathcal{L}_{\text{MAP}}}$ are $$\begin{aligned}
\tau_{\lambda} \frac{d{\boldsymbol{\lambda}}}{dt} &= - {\boldsymbol{\lambda}}+ \frac{1}{\sigma}({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}})\\
\tau_{v} \frac{d{\mathbf{v}}}{dt} &= - {\mathbf{v}}+ {\mathbf{A}}^T{\boldsymbol{\lambda}},\\
{\mathbf{x}}&= \frac{1}{\gamma \sigma}[{\mathbf{v}}- \beta \sigma]_+,\end{aligned}$$ At convergence, we have $$\frac{d{\boldsymbol{\lambda}}}{dt} = 0 \implies {\boldsymbol{\lambda}}= \frac{1}{\sigma}({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}).$$ $$\frac{d{\mathbf{v}}}{dt} = 0 \implies {\mathbf{v}}= {\mathbf{A}}^T {\boldsymbol{\lambda}}= \frac{1}{\sigma}{\mathbf{A}}^T({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}).$$ $${\mathbf{x}}= \frac{1}{\gamma \sigma}[{\mathbf{v}}- \beta \sigma]_+ = \frac{1}{\gamma \sigma}[\frac{1}{\sigma}{\mathbf{A}}^T({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}) - \beta \sigma]_+.$$ Finally, using the fact that $[ax]_+ = a[x]_+$ for $a>0$ we have $${\mathbf{x}}= \frac{1}{\gamma \sigma^2}[{\mathbf{A}}^T({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}) - \beta \sigma^2]_+,$$ as desired.
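In practice, the MAP estimates used as reference in Figure \[fig:performance\] can be obtained simply by integrating these ${\boldsymbol{\lambda}}$, ${\mathbf{v}}$ dynamics to convergence; a minimal sketch (with an illustrative step size and duration) is given below.

```python
import numpy as np

def map_estimate(A, y, beta, gamma, sigma, tau=0.05, dt=2e-5, steps=25000):
    # Forward-Euler integration of the lambda / v dynamics above; returns the
    # granule-cell rates x at the end of the run as a numerical MAP estimate.
    M, N = A.shape
    lam = np.zeros(M)
    v = np.zeros(N)
    for _ in range(steps):
        x = np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)
        dlam = -lam + (y - A @ x) / sigma
        dv = -v + A.T @ lam
        lam += dt / tau * dlam
        v += dt / tau * dv
    return np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)
```

For the all-to-all parameters of Figure \[fig:performance\] one would call, e.g., `map_estimate(A, y, beta=100.0, gamma=100.0, sigma=1e-2)`.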
We will now show that when the generalized dynamics above for $\varepsilon = 0$ converge, the ${\mathbf{x}}$ variable satisfies this relation. We’ve assumed that ${\mathbf{P}}{\boldsymbol{\mu}}= 0$ as the dynamics will drive it there if it does not initially start at zero. Multiplying both sides with ${\mathbf{V}}^T {\mathbf{D}}^T$ and using the fact that ${\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{P}}= {\mathbf{V}}^T {\mathbf{D}}^T$, we have $${\mathbf{P}}{\boldsymbol{\mu}}= 0 \implies {\mathbf{V}}^T{\mathbf{D}}^T {\mathbf{P}}{\boldsymbol{\mu}}= {\mathbf{V}}^T {\mathbf{D}}^T {\boldsymbol{\mu}}= 0.$$ At convergence $\dot {\boldsymbol{\mu}}= 0$ which means ${\mathbf{P}}{\mathbf{s}}= {\mathbf{s}}$ which implies that ${\mathbf{s}}$ is in the range of ${\mathbf{D}}{\mathbf{V}}$, i.e. there exists ${\boldsymbol{\lambda}}$ such that ${\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}= {\mathbf{s}}$. Also $\dot {\mathbf{s}}= 0$, so we have, after substituting ${\mathbf{s}}$ for ${\mathbf{P}}{\mathbf{s}}$, $${\mathbf{F}}^T{\mathbf{F}}{\mathbf{s}}= \sigma^{-1}({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}- {\mathbf{W}}{\mathbf{x}}) - {\boldsymbol{\mu}}.$$ Substituting ${\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}$ for ${\mathbf{s}}$, and multiplying both sides by ${\mathbf{V}}^T {\mathbf{D}}^T$, we have $${\mathbf{V}}^T {\mathbf{D}}^T {\mathbf{F}}^T {\mathbf{F}}{\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}= \sigma^{-1}({\mathbf{V}}^T{\mathbf{D}}^T{\mathbf{G}}{\mathbf{V}}{\mathbf{y}}- {\mathbf{V}}^T{\mathbf{D}}^T {\mathbf{W}}{\mathbf{x}}) - {\mathbf{V}}^T {\mathbf{D}}^T {\boldsymbol{\mu}}.$$ Substituting in the three matrix constraints and the fact that at convergence ${\mathbf{V}}^T {\mathbf{D}}^T {\boldsymbol{\mu}}= 0$, we have $${\boldsymbol{\lambda}}= \frac{1}{\sigma}({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}).$$ Finally, $$\dot {\mathbf{v}}= 0 \implies {\mathbf{v}}= {\mathbf{W}}^T {\mathbf{s}}= {\mathbf{W}}^T {\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}= {\mathbf{A}}^T {\boldsymbol{\lambda}},$$ so $$\begin{aligned}
{\mathbf{x}}= \frac{1}{\gamma \sigma}[{\mathbf{A}}^T {\boldsymbol{\lambda}}- \beta \sigma]_+ = \frac{1}{\gamma \sigma}[\frac{1}{\sigma}{\mathbf{A}}^T ({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}) - \beta \sigma]_+ = \frac{1}{\gamma \sigma^2 }[{\mathbf{A}}^T({\mathbf{y}}- {\mathbf{A}}{\mathbf{x}}) - \beta \sigma^2]_+,\end{aligned}$$ as desired. Note that as ${\mathcal{L}_{\text{olf}}}$ is a particular case of ${\mathcal{L}_{\text{sis}}}$ in which ${\mathbf{s}}= {\boldsymbol{\lambda}}$ and ${\mathbf{D}}= {\mathbf{F}}= {\mathbf{G}}= {\mathbf{V}}= {\mathbf{I}}_M$, so that ${\mathbf{P}}{\mathbf{s}}= {\mathbf{s}}$ and the various matrix constraints are met, the fact that the generalized dynamics arrive at the MAP solution automatically guarantees that the dynamics extremizing ${\mathcal{L}_{\text{olf}}}$ also do.
Understanding the behaviour of the uncoupled circuit {#understanding-the-behaviour-of-the-uncoupled-circuit .unnumbered}
----------------------------------------------------
The generalized Lagrangian that accommodates leaky periglomerular cells is $${\mathcal{L}_{\text{sis}}^{\varepsilon}}({\mathbf{x}},{\boldsymbol{\xi}}, {\boldsymbol{\lambda}}, {\boldsymbol{\mu}})= \phi({\mathbf{x}})-\frac{1}{2}\|{\mathbf{F}}{\boldsymbol{\xi}}\|_{2}^{2}+\frac{1}{\sigma}{{\boldsymbol{\xi}}}^{T}({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}-{\mathbf{W}}{\mathbf{x}}) + {\boldsymbol{\mu}}^T ({\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}- {\boldsymbol{\xi}}) - \frac{1}{2(1+\varepsilon)}\|{\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}- {\boldsymbol{\xi}}\|_2^2 + \frac{1}{2}\varepsilon\|{\boldsymbol{\mu}}\|_2^2.$$ Changing the $\varepsilon$ parameter allows us to vary the dynamics of the system from a ‘fully coupled’ state at $\varepsilon = 0$ to a ‘fully uncoupled’ state as $\varepsilon \to \infty$. The fully coupled state ${\mathcal{L}_{\text{sis}}}^0$ is equivalent to ${\mathcal{L}_{\text{sis}}}$, in which the variables ${\boldsymbol{\mu}}$ are free to enforce the constraint ${\mathbf{s}}= {\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}$, thus coupling the ${\mathbf{s}}$ variables and solving the MAP problem exactly, as we’ve shown above. In the fully uncoupled limit, any non-zero value of ${\boldsymbol{\mu}}$ incurs infinite loss, clamping its value at 0. This implies that the ${\mathbf{s}}$ are no longer required to satisfy the ${\mathbf{s}}= {\mathbf{D}}{\mathbf{V}}{\boldsymbol{\lambda}}$ constraint, hence our term ‘fully uncoupled’ for this state of the circuit. The Lagrangian reduces to $${\mathcal{L}_{\text{sis}}}^\infty({\mathbf{x}},{\boldsymbol{\xi}})= \phi({\mathbf{x}})-\frac{1}{2}\|{\mathbf{F}}{\boldsymbol{\xi}}\|_{2}^{2}+\frac{1}{\sigma}{{\boldsymbol{\xi}}}^{T}({\mathbf{G}}{\mathbf{V}}{\mathbf{y}}-{\mathbf{W}}{\mathbf{x}}),$$ which by inspection is just a larger version of the original MAP Lagrangian.
The behaviour of the fully uncoupled state is easiest to understand in the simple $n$-block setting, where each of the glomeruli is sampled evenly by the mitral cells and the ${\mathbf{A}}$ matrix is partitioned evenly among the blocks. This corresponds to a setting of $${\mathbf{V}}= \mathbf{1}_n \otimes {\mathbf{I}}_M, \quad {\mathbf{F}}^T{\mathbf{F}}= {\mathbf{G}}= n^{-1}{\mathbf{I}}_{nM}, \quad {\mathbf{D}}= {\mathbf{I}}_{nM}.$$ The Lagrangian then reduces to $$\begin{aligned}
{\mathcal{L}_{\text{sis}}}^\infty({\mathbf{x}},{\boldsymbol{\xi}}) &= \phi({\mathbf{x}})-\frac{1}{2n }\|{\boldsymbol{\xi}}\|_{2}^{2}+\frac{1}{\sigma}{{\boldsymbol{\xi}}}^{T}(\frac{1}{n}{\mathbf{V}}{\mathbf{y}}-{\mathbf{W}}{\mathbf{x}})\\
&= \sum_{i=1}^n \phi({\mathbf{x}}^i)-\frac{1}{2n}\|{\boldsymbol{\xi}}^i\|_{2}^{2}+\frac{1}{\sigma}{\boldsymbol{\xi}}^{i,T}(\frac{{\mathbf{y}}}{n}-{\mathbf{A}}^i {\mathbf{x}}^i),\\
&= \sum_{i=1}^n \ell({\mathbf{x}}^i, {\boldsymbol{\xi}}^i).\end{aligned}$$ Hence the Lagrangian is just the sum of $n$ terms that can be extremized independently, emphasizing the ‘fully uncoupled’ nature of this state. To understand the nature of the solutions in this state, we note the similarity of each of the $\ell({\mathbf{x}}^i, {\boldsymbol{\xi}}^i)$ terms to the MAP Lagrangian ${\mathcal{L}_{\text{MAP}}}$. We have $$\begin{aligned}
\ell({\mathbf{x}}^i, {\boldsymbol{\xi}}^i) &= \phi({\mathbf{x}}^i) - \frac{1}{2n}\|{\boldsymbol{\xi}}^i\|_{2}^{2}+\frac{1}{\sigma}{\boldsymbol{\xi}}^{i,T}(\frac{{\mathbf{y}}}{n}-{\mathbf{A}}^i {\mathbf{x}}^i).\end{aligned}$$ Rescaling ${\boldsymbol{\xi}}^i \leftarrow {\boldsymbol{\xi}}^i/\sqrt{n}$, we get $$\ell({\mathbf{x}}^i, {\boldsymbol{\xi}}^i) = \phi({\mathbf{x}}^i) - \frac{1}{2}\|{\boldsymbol{\xi}}^i\|_{2}^{2}+\frac{\sqrt{n}}{\sigma} {\boldsymbol{\xi}}^{i,T}(\frac{{\mathbf{y}}}{n}-{\mathbf{A}}^i {\mathbf{x}}^i)$$ We recognize this as a MAP Lagrangian, thus revealing that extremizing $\ell({\mathbf{x}}^i, {\boldsymbol{\xi}}^i)$ is equivalent to MAP inference but with input signal and noise variance scaled by $1/n$, and limited to ${\mathbf{A}}^i$ and the corresponding latents. Thus in the fully uncoupled regime each block attempts to explain its fraction ${\mathbf{y}}/n$ of the input independently of the other blocks by solving its own MAP inference problem using only its own partition ${\mathbf{A}}^i$ of the affinity matrix, resulting in denser representations due to a reduction in overcompleteness.
---
abstract: 'We study a simple reaction-diffusion population model (proposed by A. Windus and H. J. Jensen, J. Phys. A: Math. Theor. **40**, 2287 (2007)) on scale-free networks. In the case of fully random diffusion, the network topology does not affect the critical death rate, whereas the heterogeneous connectivity makes the steady population density and the critical population density smaller. In the case of modified diffusion, the critical death rate and the steady population density are higher while the critical population density is lower, which favors the survival of the species. The results are obtained with a mean-field framework and confirmed by computer simulations.'
author:
- 'An-Cai Wu,$^{1,2}$ Xin-Jian Xu,$^{2,3}$ José F. F. Mendes,$^{2,}$[^1] and Ying-Hai Wang$^{1,}$[^2]'
title: 'A simple reaction-diffusion population model on scale-free networks'
---
Recently, Windus and Jensen [@jpa07; @tpb07] introduced a population model on lattices with diffusion and birth/death according to $2A\rightarrow 3A$ and $A\rightarrow\phi$ for an individual $A$. They found that the model displays a phase transition from an active to an absorbing state which is continuous in $1+1$ dimensions and of first order in higher dimensions [@jpa07]. They also investigated the importance of fluctuations and that of the population density, particularly with respect to Allee effects in regular lattices [@tpb07]. It was found that there exists a critical population density below which the probability of extinction is greatly increased, and the probability of survival for small populations can be increased by a reduction in the size of the habitat [@tpb07].
In the study of complex networks [@Albert], an important issue is to investigate the effect of their complex topological features on dynamical processes [@review07], such as the spread of infectious diseases [@pv01a] and the reaction-diffusion (RD) process [@GA04]. For most real networks, the connectivity distribution has power-law tails $P(k)\sim k^{- \gamma}$, namely, a characteristic value for the degrees is absent, hence the scale-free (SF) property. In this Brief Report, we shall study the simple RD population model [@jpa07] on SF networks.
Consider an arbitrary finite network consisting of nodes $i=1,\ldots,N$ and links connecting them. Each node is either occupied by a single individual (1) or empty (0). We randomly choose a node. If it is occupied, the individual dies with probability $p_{\rm d}$, leaving the node empty. If the individual does not die, a nearest neighbor-node is randomly chosen. If the neighboring node is empty, the individual moves there; otherwise, the individual reproduces with probability $p_{\rm b}$ producing a new individual on another randomly selected neighboring node, conditional on that node being empty. A time step is defined as $N$ such update attempts, where $N$ is the number of network nodes. In homogeneous networks (such as regular lattices and Erd[ö]{}s-R[é]{}nyi (ER) random networks [@erdos59]), the mean-field (MF) equation for the density of active nodes $\rho(t)$ is given by [@tpb07] $$\frac{d \rho(t)}{d t} = -p_{\rm d}\rho(t) + p_{\rm b}(1-p_{\rm
d})\rho^{2}(t)(1-\rho(t)), \label{rateeq00}$$ which has three stationary states $$\bar\rho_0 = 0, \quad
\bar\rho_\pm=\frac{1}{2}\left(1\pm\sqrt{1-\frac{4p_{\rm d}}{p_{\rm
b}(1-p_{\rm d})}}\right). \label{steady states}$$ For $4p_{\rm d} > p_{\rm b}(1-p_{\rm d})$, $\bar\rho_0$ is the only real stationary state, and one can obtain a critical death rate $p_{{\rm d}_{\rm c}} =p_{\rm b}/(4+p_{\rm b})$ which separates the active phase representing survival and the absorbing state of extinction. Simple analysis shows that $\bar\rho_+$ and $\bar\rho_0$ are stable stationary states, whereas $\bar\rho_-$ is unstable and therefore represents a critical density $\rho_{\rm c}$ below which extinction will occur in all cases. Thus, for $p_{\rm d} < p_{{\rm
d}_{\rm c}}$, one can write [@tpb07] $$\rho(t) \rightarrow \left\{
\begin{array}{cl}
0 & \text{if} \quad \rho(t) < \rho_{\rm c}, \\
\bar\rho_+ & \text{if} \quad \rho(t) > \rho_{\rm c},
\end{array} \right.
\quad \text{as} \quad t \rightarrow \infty.$$ At $p_{\rm d} = p_{{\rm d}_{\rm c}}$, the stationary density jumps from $1/2$ to $0$, resulting in a first-order phase transition.
In order to study this process analytically on SF networks in which the degree distribution has the form $P(k)\sim k^{- \gamma}$ and nodes show large degree fluctuations, we are forced to consider the partial densities $\rho_k(t)$, representing the density of individuals in nodes of degree $k$ at time $t$ [@pv01a]. To obtain a rate equation for $\rho_k(t)$, we use a microscopic approach which has been applied in diffusion-annihilation [@da05] and multicomponent RD processes on SF networks [@mrd06]. Let $n_i(t)$ be a dichotomous random variable taking the value $0$ or $1$ according to whether node $i$ is empty or occupied by an individual $A$, respectively. Using this formulation, the state of the system at time $t$ is completely defined by the state vector ${\bf n}(t)=\{n_1(t),n_2(t),\cdots,n_N(t)\}$. The evolution of ${\bf n}(t)$ after a time step can be expressed as $$n_i(t+1)=n_i(t)\eta+[1-n_i(t)]\xi, \label{evolution}$$ where $\eta$ and $\xi$ are dichotomous random variables taking the values $0$ or $1$ with the probabilities specified below,
$$\eta= \left\{
\begin{array}{cl}
0; & \displaystyle{p= p_{\rm d}+(1-p_{\rm d})
\left[1-\frac{1}{k_i}\sum_j a_{ij} n_j(t) \right]}, \\[0.5cm]
1; & \displaystyle{1-p},
\end{array}
\right.$$
$$\xi= \left\{
\begin{array}{cl}
1; & \displaystyle p= \displaystyle{\sum_j \frac{(1-p_{\rm
d})a_{ij} n_j(t)}{k_j} \left[1+\frac{p_{\rm b}}{k_j}\sum_l
a_{jl} n_l(t) \right]}, \\[0.5cm]
0; & \displaystyle{1-p}
\end{array}
\right.$$
Obviously, $p$ is the probability that an occupied node $i$ becomes empty. If node $i$ is empty, there are two processes that can make it become occupied. One is that a surviving neighbor moves to $i$, which happens with probability $\sum_j \frac{(1-p_{\rm d}) a_{ij} n_j(t)}{k_j}$; the other is that a surviving neighbor $j$, after selecting an occupied neighbor $l$ (with probability $\frac{1}{k_j}\sum_l a_{jl} n_l(t)$), reproduces a new individual on $i$, which gives the term $\sum_j \frac{(1-p_{\rm d}) a_{ij} n_j(t)}{k_j} \frac{p_{\rm b}}{k_j}\sum_l a_{jl} n_l(t)$. Taking the average of Eq. (\[evolution\]), we obtain
$$\begin{aligned}
\langle n_i(t+1) | {\bf n}(t)\rangle &=& n_i(t)(1-p_{\rm
d})\frac{1}{k_i}\sum_j a_{ij} n_j(t)+(1-n_i(t))\left[\sum_j
\frac{(1-p_{\rm d}) a_{ij} n_j(t)}{k_j} (1+\frac{p_{\rm
b}}{k_j}\sum_l a_{jl} n_l(t))\right], \label{evolution_averaged}\end{aligned}$$
which describes the average evolution of the system, conditioned on its state at the previous time step. In the MF approximation, $\langle n_i(t)n_j(t)\rangle \equiv \langle n_i(t)
\rangle \langle n_j(t) \rangle$ and $\langle
n_i(t)n_j(t)n_l(t)\rangle \equiv \langle n_i(t) \rangle \langle
n_j(t) \rangle \langle n_l(t) \rangle$. Thus, after multiplying Eq. (\[evolution\_averaged\]) by the probability of finding the system in state ${\bf n}$ at time $t$ and summing over all possible configurations, we obtain
$$\begin{aligned}
\rho_i(t+1) & \equiv & \langle n_i(t+1) \rangle =
\rho_i(t)\frac{(1-p_{\rm d})}{k_i}\sum_j a_{ij}
\rho_j(t)+(1-\rho_i(t))\left[\sum_j \frac{(1-p_{\rm d}) a_{ij}
\rho_j(t)}{k_j} (1+\frac{p_{\rm b}}{k_j}\sum_l a_{jl}
\rho_l(t))\right]. \label{evolution02}\end{aligned}$$
We assume that nodes with the same degree are statistically equivalent, i.e., $$\rho_i(t) \equiv \rho_k(t) \quad \forall i \in \mathcal{V}(k),$$ and have $$\sum_j a_{ij}=\sum_{k'}\sum_{j \in
\mathcal{V}(k')}a_{ij}=\sum_{k'}kP(k'|k) \quad \forall i \in
\mathcal{V}(k),$$ where $\mathcal{V}(k)$ is the set of nodes of degree $k$.
We split the sum with index $j$ into two sums over $k'$ and $\mathcal{V}(k')$, respectively. The double sum over $a_{ij}$ is related to the conditional probability $P(k'|k)$ that a node of given degree $k$ has a neighbor of degree $k'$. In the present work we restrict ourselves to uncorrelated networks, for which the conditional probability takes the simple form $P(k' | k) = k' P(k') / \langle k\rangle$. Thus, from Eq. (\[evolution02\]) and after some formal manipulations, we obtain
$$\rho_k(t+1)=\rho_k(t)(1-p_{\rm d})\Theta(\rho(t))+\frac{k}{\langle
k\rangle} (1-p_{\rm d})(1-\rho_k(t))\rho(t)[1+p_{\rm
b}\Theta(\rho(t))],
\label{evolution04}$$
where $ \rho(t)$ is the total density of active individuals and $\Theta(\rho(t))$ is the probability that any given link points to an occupied node $$\Theta(\rho(t)) = \frac{1}{\langle k\rangle} \sum_k k P(k)
\rho_k(t). \label{eq:5}$$ From Eq. (\[evolution04\]), we can obtain the following rate equation
$$\frac{d \rho_k(t)}{d t}=-\rho_k(t)+\rho_k(t)(1-p_{\rm
d})\Theta(t)+\frac{k}{\langle k\rangle} (1-p_{\rm
d})(1-\rho_k(t))\rho(t)[1+p_{\rm b}\Theta(\rho(t))]. \label{rateeqk}$$
Imposing stationarity $\partial_t \rho_k(t) =0$, we obtain $$\rho_k=\frac{\frac{k}{\langle k\rangle} (1-p_{\rm d})\rho(1+p_{\rm b}\Theta(\rho))}{1-(1-p_{\rm d})\Theta(\rho)+\frac{k}{\langle k\rangle}(1-p_{\rm d})\rho(1+p_{\rm b}\Theta(\rho))}.
\label{rhok}$$ These equations imply that the higher the node connectivity, the higher the probability of being in an occupied state. This inhomogeneity must be taken into account in the computation of $\Theta(\rho)$. Multiplying Eq. (\[rateeqk\]) by $P(k)$ and summing over $k$, we obtain a rate equation for $\rho(t)$ $$\frac{d \rho(t)}{d t} = -\rho(t)p_{\rm d}+p_{\rm b}(1-p_{\rm d})[1-\Theta(\rho(t))]\rho(t) \Theta(\rho(t)).
\label{rateeq}$$ Notably, the above equation reduces to Eq. (\[rateeq00\]) upon imposing $\Theta(\rho(t))=\rho(t)$, as appropriate for homogeneous networks. It also has three stationary states $$\label{sf steady states}
\bar\rho^{SF}_0 = 0, \quad \Theta(\bar\rho^{SF}_\pm)=\bar\rho_\pm=\frac{1}{2}\left(1\pm\sqrt{1-\frac{4p_{\rm d}}{p_{\rm b}(1-p_{\rm d})}}\right).$$ The critical death rate $p_{{\rm d}_{\rm c}} =p_{\rm b}/(4+p_{\rm
b})$ is the same as that in homogeneous networks. Since, naturally, $\frac{d \Theta(\rho)}{d \rho}>0$, we find that $\bar\rho^{SF}_+$ and $\bar\rho^{SF}_0$ are stable stationary states, whereas $\bar\rho^{SF}_-$ is unstable and therefore represents a critical density $\rho^{SF}_{\rm c}$ below which extinction will occur in all cases. In SF networks, the higher the node connectivity, the higher the probability of being in an occupied state (Eq. (\[rhok\])). The presence of nodes with very large degree results in $\bar\rho^{SF}_\pm < \Theta(\bar\rho^{SF}_\pm)$. We conclude that both the steady population density and the critical population density in SF networks are smaller than those in homogeneous networks.
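The degree-based mean-field equations are also simple to integrate numerically. The sketch below (with an illustrative degree cutoff, step size and integration time) evolves Eq. (\[rateeqk\]) for a power-law degree distribution with $\gamma=3$ and $k_{min}=3$, and returns the stationary $\rho$ and $\Theta$; the latter should approach $\bar\rho_+$ of Eq. (\[steady states\]) while the former stays below it, as argued above.

```python
import numpy as np

gamma_exp, kmin, kmax = 3.0, 3, 100
k = np.arange(kmin, kmax + 1, dtype=float)
Pk = k ** (-gamma_exp)
Pk /= Pk.sum()
kmean = np.sum(k * Pk)

def evolve_sf(pb, pd, dt=0.01, steps=50000):
    # Forward-Euler integration of the degree-class rate equations,
    # starting from a fully occupied network (rho_k = 1 for all k).
    rho_k = np.ones_like(k)
    for _ in range(steps):
        rho = np.sum(Pk * rho_k)
        theta = np.sum(k * Pk * rho_k) / kmean
        rho_k += dt * (-rho_k + rho_k * (1.0 - pd) * theta
                       + k / kmean * (1.0 - pd) * (1.0 - rho_k)
                       * rho * (1.0 + pb * theta))
        rho_k = np.clip(rho_k, 0.0, 1.0)     # guard against Euler overshoot
    return np.sum(Pk * rho_k), np.sum(k * Pk * rho_k) / kmean

print(evolve_sf(0.5, 0.05))   # steady (rho, Theta); Theta approaches the homogeneous rho_+
```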
![Simulation results of $\frac{1}{\rho_{k}}-1$ against the reciprocal of $k$ in log-log scale on uncorrelated random SF networks with exponent $\gamma=3.0$, $k_{min}=3$ and $N=10^{3}$. The birth rate is $p_{\rm b}=0.5$. All the plots recover the form predicted in Eq. (\[rhok\]).[]{data-label="rhokk"}](1rho1k.eps){width="\columnwidth"}
![ (a) Stationary population densities $\rho$ against the death rate $p_{\rm d}$. (b) Critical initial population densities $\rho(0)$ for different death rates $p_{\rm d}$. All the data were obtained with $p_{\rm b}=0.5$ and $N=1000$. MF results for the homogeneous network (line) and simulation results for the ER random network (triangles), uncorrelated random SF networks with exponent $\gamma=3.0$ (circles) and $2.5$ (squares).[]{data-label="rhopd"}](rhopda.eps){width="\columnwidth"}
The numerical simulations performed on uncorrelated SF networks confirm the picture extracted from the analytic treatment. We construct the uncorrelated SF network by the algorithm [@ucmmodel] with minimum degree $k_\text{{min}}=3$ and size $N=1000$. The simulations are carried out on an initially fully occupied network for obtaining $\rho_k$ and steady states. To find the critical population density, which is unstable, we instead use the initial population density $\rho (0)$ and find the value of $p_{\rm d}$ that separates the active and absorbing states. The prevalence $\rho_{k}$ is computed and averaged over $100$ runs for each network configuration, on $10$ different realizations of the network. Figure \[rhokk\] shows the behavior of the probability $\rho_{k}$ that a node with degree $k$ is occupied. The numerical value of the slope in log-log scale is about $0.98$, which is in good agreement with the theoretical value $1$ in Eq. (\[rhok\]). In Fig. \[rhopd\], for ER networks with average degree $\langle k\rangle=14$ and size $N=1000$, both plots of the stationary population density $\rho$ and the critical initial population density $\rho(0)$ are consistent with the MF results in homogeneous networks (Eq. (\[steady states\])). For SF networks, the steady population density and the critical initial population density are smaller than those of homogeneous networks, which agrees with Eq. (\[sf steady states\]). Furthermore, the more heterogeneous the SF network, i.e., the smaller the degree exponent of the SF network, the smaller the densities. Note that despite the different network topologies, the critical death rate $p_{{\rm d}_{\rm c}}$ is unchanged, as predicted by Eq. (\[sf steady states\]).
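For completeness, a bare-bones Monte Carlo sketch of the microscopic model on such a network is given below; the degree-sequence generator is only a rough stand-in for the algorithm of [@ucmmodel] (with the usual $\sqrt{N}$ structural cutoff), the reproduction rule follows the mean-field reading used above (the offspring is placed on another uniformly chosen neighbor of the parent), and sizes, seeds and the run length are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, gamma_exp, kmin = 1000, 3.0, 3

# Degree sequence ~ k^(-3), k >= 3, capped at the ~sqrt(N) structural cutoff.
deg = np.floor(kmin * (1.0 - rng.random(N)) ** (-1.0 / (gamma_exp - 1.0))).astype(int)
deg = np.minimum(deg, int(np.sqrt(N)))
if deg.sum() % 2:
    deg[0] += 1
G = nx.Graph(nx.configuration_model([int(d) for d in deg], seed=0))
G.remove_edges_from(nx.selfloop_edges(G))
nbrs = [list(G.neighbors(i)) for i in range(N)]

def run(pb, pd, tmax=200):
    occ = np.ones(N, dtype=bool)              # start fully occupied
    for _ in range(tmax * N):                 # one time step = N node selections
        i = rng.integers(N)
        if not occ[i]:
            continue
        if rng.random() < pd:                 # death
            occ[i] = False
            continue
        if not nbrs[i]:
            continue
        j = nbrs[i][rng.integers(len(nbrs[i]))]
        if not occ[j]:                        # diffusion to an empty neighbor
            occ[i], occ[j] = False, True
        elif rng.random() < pb:               # birth onto another random neighbor
            l = nbrs[i][rng.integers(len(nbrs[i]))]
            if not occ[l]:
                occ[l] = True
    return occ.mean()

print(run(pb=0.5, pd=0.05))                   # steady density for p_d below p_dc
```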
![(a) Stationary population densities $\rho$ against the death rate $p_{\rm d}$. (b) Critical initial population densities $\rho(0)$ versus the death rate $p_{\rm d}$. All the data were obtained with $p_{\rm b}=0.5$ and $N=1000$. MF results for the homogeneous network (line), simulation results in the uncorrelated random SF networks with exponent $\gamma=3.0$ with fully random diffusion (triangles), and the modified diffusion under $\alpha=1$ and $\beta=-2$ (circles).[]{data-label="modd"}](rhopdb.eps){width="\columnwidth"}
In the above model, the neighboring node is chosen with full randomness, and we call this fully random diffusion. In the following, we shall redesign the diffusion strategies of the population model on SF networks. If the randomly chosen particle does not die, a nearest neighbor node $j$ is randomly chosen with a probability proportional to $k^{\alpha}_j$. If the neighboring node $j$ is empty, the particle moves there; otherwise, the particle reproduces with probability $p_{\rm b}$, producing a new particle on another neighboring node $l$ which is chosen with probability proportional to $k^{\beta}_l$, conditional on that node being empty. We can also write the MF equation in the uncorrelated random SF network for this modified diffusion case $$\frac{d \rho(t)}{d t} = -\rho(t)p_{\rm
d}+p_{\rm b}(1-p_{\rm d})(1-\Theta_{\beta}(\rho(t)))\rho(t)
\Theta_{\alpha}(\rho(t)),
\label{rateeqab}$$ where $$\Theta_{\alpha}(\rho(t)) = \frac{1}{\langle
k^{1+\alpha}\rangle} \sum_k k^{1+\alpha} P(k) \rho_k(t).
\label{eq:citaab}$$ In the extinction state $\rho(t)=0$, it is natural that $\Theta_{\alpha}(\rho(t))=0$. At lowest order in $\rho(t)$ we can write $\Theta_{\alpha}(\rho(t))\simeq A\rho(t)$ and, similarly, $\Theta_{\beta}(\rho(t))\simeq B\rho(t)$, where $A$ and $B$ are coefficients. In the steady state, following the previous analysis, we easily obtain the critical death rate $$p_{{\rm d}_{\rm c}} =\frac{p_{\rm b}}{(4\frac{B}{A}+p_{\rm b})},
\label{pdc}$$ which depends on the ratio $B/A$. If $B<A$, the modified model has a larger critical death rate than the original one. Similar to Eq. (\[rhok\]), the partial density takes the form $(\frac{1}{\rho_k}-1)\sim 1/D(k)$ with $D(k)=\frac{k^{1+\alpha}}{\langle k^{1+\alpha} \rangle}+\frac{k^{1+\beta}}{\langle k^{1+\beta} \rangle}p_{\rm b}\Theta_{\alpha}$. Considering that $0<p_{\rm b}$ and $\Theta_{\alpha}<1$, we can neglect the second term in $D(k)$. If $\alpha>-1$, higher-degree nodes have larger partial densities $\rho_k$, and choosing $\beta<\alpha$ then gives $B<A$; on the other hand, if $\alpha<-1$, higher-degree nodes have smaller $\rho_k$, and choosing $\beta > \alpha$ gives $B<A$.
In Fig. \[modd\] the diffusion exponents are $\alpha=1$ and $\beta=-2$. From the previous discussion, we obtain $B<A$. Thus, the modified model has a larger critical death rate. Furthermore, it has a larger steady population density and a smaller critical population density. From the viewpoint of conservation, it is desirable that a population system have a larger critical death rate, a larger steady population density and a lower critical population density at the same time. Our modified model has this property under the condition $B<A$.
In summary, we have studied a simple RD population model on SF networks by analytical methods and computer simulations. We find that in the case of fully random diffusion, the network topology cannot change the value of the critical death rate $p_{{\rm d}_{\rm c}}$. However, the more heterogeneous the network, the smaller the steady population density and the critical population density. For the modified diffusion strategy, we obtain a larger critical death rate and a higher steady population density and, at the same time, a lower critical population density, which favors the survival of the species. In the present work we consider a population model with only a single species; it would be interesting to investigate multi-species population models with predator-prey, mutualistic, or competitive interactions on complex networks, which is left to future work.
This work was partially supported by NSFC/10775060, SOCIALNETS, and POCTI/FIS/61665.
A. Windus and H. J. Jensen, J. Phys. A: Math. Theor. **40**, 2287 (2007).
A. Windus and H. J. Jensen, Theor. Popul. Biol. **72**, 459 (2007).
R. Albert and A.-L. Barabási, Rev. Mod. Phys. **74**, 47 (2002); S. N. Dorogovtsev and J. F. F. Mendes, Adv. Phys. **51**, 1079 (2002); M. E. J. Newman, SIAM Rev. **45**, 167 (2003).
S. N. Dorogovtsev, A. V. Goltsev and J. F. F. Mendes, arXiv:0705.0010
R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. [**86**]{}, 3200 (2001).
L. K. Gallos and P. Argyrakis, Phys. Rev. Lett. **92**, 138301 (2004).
P. Erdös and A. Rényi, Publ. Math. **6**, 290 (1959).
M. Catanzaro, M. Bogu[ñ]{}[á]{} and R. Pastor-Satorras, Phys. Rev. E [**71**]{}, 056104 (2005).
S. Weber and M. Porto, Phys. Rev. E [**74**]{}, 046108 (2006).
M. Catanzaro, M. Bogu[ñ]{}[á]{} and R. Pastor-Satorras, Phys. Rev. E [**71**]{}, 027103 (2005).
[^1]: Electronic address: jfmendes@ua.pt
[^2]: Electronic address: yhwang@lzu.edu.cn
---
abstract: 'We introduce a class of special geometries associated to the choice of a differential graded algebra contained in $\Lambda^*{\mathbb{R}}^n$. We generalize some known embedding results, which effectively characterize the real analytic Riemannian manifolds that can be realized as submanifolds of a Riemannian manifold with special holonomy, to this more general context. In particular, we consider the case of hypersurfaces inside nearly-Kähler and $\alpha$-Einstein-Sasaki manifolds, proving that the corresponding evolution equations always admit a solution in the real analytic case.'
address: 'Dipartimento di Matematica e Applicazioni, Università di Milano Bicocca, via Cozzi 53, 20125 Milano, Italy.'
author:
- Diego Conti
bibliography:
- 'torsion.bib'
title: Embedding into manifolds with torsion
---
[^1]
There are a number of known results concerning the problem of embedding a generic Riemannian manifold into a manifold with special holonomy. At the very least for this problem to make sense the embedding should be isometric, but often extra conditions are required. A classical example is that of special Lagrangian submanifolds of Calabi-Yau manifolds, introduced in [@HarveyLawson:CalibratedGeometries] in the context of minimal submanifolds; it is a result of [@Bryant:Calibrated] that every compact, oriented real analytic Riemannian $3$-manifold can be embedded isometrically as a special Lagrangian submanifold in a $6$-manifold with holonomy contained in ${\mathrm{SU}}(3)$ (see also [@Matessi:Isometric] for a generalization). A similar result holds for coassociative submanifolds in $7$-manifolds with holonomy contained in ${{\mathrm{G}_2}}$ (see [@Bryant:Calibrated]); the case of codimension one embeddings in manifolds with special holonomy was considered in [@Hitchin:StableForms; @ContiSalamon].
Whilst the above results have been obtained using characterizations in terms of differential forms, the codimension one embedding problem can be uniformly rephrased using the language of spinors. Indeed, if $M$ is a Riemannian spin manifold with a parallel spinor, then a hypersurface inherits a generalized Killing spinor $\psi$, namely satisfying $$\nabla_X\psi = \frac12 A(X)\cdot\psi$$ where $A$ is a section of the bundle of symmetric endomorphism of $TM$, corresponding to the Weingarten tensor, and the dot represents Clifford multiplication. One can ask if, given a Riemannian spin manifold $(N,g)$ with a generalized Killing spinor $\psi$, the spinor can be extended to a parallel spinor on a Riemannian manifold $M\supset N$ containing $N$ as a hypersurface; if so, $M$ is Ricci flat and the second fundamental form is determined by the tensor $A$, which is an intrinsic property of $(N,g,\psi)$. Counterexamples exist in the smooth category [@Bryant:NonEmbedding]; in the real analytic category, the general existence of such an embedding has been established in [@BarGauduchonMoroianu] under the assumption that the tensor $A$ is Codazzi, and in [@ContiSalamon] for six-dimensional $N$. A characterization of the $(N,g,\psi)$ as above that embed into an irreducible $M$ follows from Theorem \[thm:AbstractEmbedding\] in this paper. The existence of an embedding for any real analytic $(N,g,\psi)$ has been recently proved by Ammann [@Ammann].
The spinor formulation lends itself to a natural variation, where one replaces the parallel spinor on the ambient manifold with a real Killing spinor, namely a spinor $\psi$ for which $\nabla_X\psi = \frac12 X\cdot\psi$. Compact manifolds with a Killing spinor are classified in [@Bar], and they are either Einstein-Sasaki (possibly $3$-Einstein-Sasaki) manifolds, nearly-Kähler six-manifolds, nearly-parallel ${{\mathrm{G}_2}}$-manifolds, or round spheres. Aside from the somewhat degenerate case of the sphere, each of these geometries can be associated to a finite-dimensional differential graded algebra (DGA). For instance, nearly-Kähler geometry corresponds to the algebra of ${\mathrm{SU}}(3)$-invariant elements in the exterior algebra $\Lambda^*{\mathbb{R}}^6$, which is generated by a $2$-form $\omega$ and two $3$-forms $\psi^\pm$; the differential graded algebra structure is given by the operator $$d(\omega)=3\psi^+, \quad d(\psi^-)=-2\omega\wedge\omega, \quad d(\psi^+)=0.$$ A hypersurface inside a nearly-Kähler $6$-manifold has an induced structure, called a nearly hypo ${\mathrm{SU}}(2)$-structure, and the codimension one embedding problem for nearly hypo $5$-manifolds amounts to determining whether the “nearly hypo evolution equations” admit a solution [@Fernandez:NearlyHypo].
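As a quick consistency check (this uses only the standard relation $\omega\wedge\psi^\pm=0$ among ${\mathrm{SU}}(3)$-invariant forms, and is not needed in the sequel), the operator $d$ above does square to zero: $$d(d\omega)=3\,d\psi^+=0,\qquad d(d\psi^-)=-2\,d(\omega\wedge\omega)=-4\,d\omega\wedge\omega=-12\,\psi^+\wedge\omega=0,\qquad d(d\psi^+)=0.$$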
In this paper we give a characterization of the DGA’s that define a special geometry (Lemma \[lemma:ZEmpty\]), and classify these DGA structures on the algebras $(\Lambda^*{\mathbb{R}}^{2n})^{{\mathrm{SU}}(n)}$ and $(\Lambda^*{\mathbb{R}}^{2n+1})^{{\mathrm{SU}}(n)}$ (Propositions \[prop:derivation\] and \[prop:oddderivation\]). We define the exterior differential system associated to such a DGA, and provide examples in which it is involutive (Theorem \[thm:involutive\]); as a byproduct, we obtain that the intrinsic torsion of an ${\mathrm{SU}}(n)$-structure on a manifold of dimension $2n+1$ is entirely determined by the exterior derivative of the defining forms, like in the well-known even-dimensional case (Proposition \[prop:SUnIsStronglyAdmissible\]). We formulate the embedding problem for geometries defined by arbitrary DGA’s and submanifolds of arbitrary codimension, and determine a sufficient condition for the embedding to exist by means of Cartan-Kähler theory (Theorem \[thm:AbstractEmbedding\]). This result requires that the manifold and metric be real analytic, which appears to be a necessary restriction due to the counterexamples of [@Bryant:NonEmbedding]. Though the sufficient condition is not very explicit, we show it to hold in some explicit examples, including those corresponding to an ambient geometry with a real Killing spinor (Theorems \[thm:nearlyhypo\] and \[thm:go\]), and those where the DGA is generated by a single stable form (Proposition \[prop:stable\]). In addition, we study a new geometry defined by a $5$-form in nine dimensions whose stabilizer ${\mathrm{SO}}(3)\subset{\mathrm{SO}}(9)$ acts irreducibly on ${\mathbb{R}}^9$, proving an embedding result in that case (Theorem \[thm:so3Embedding\]). A different irreducible representation of ${\mathrm{SO}}(3)$ has been studied in [@Nurowski:SO3], along with its associated five-dimensional geometry, which is however defined by a tensor rather than differential forms. In seven dimensions, the ${\mathrm{SO}}(3)$-invariant forms are also invariant under ${{\mathrm{G}_2}}$; the nine-dimensional case seems more in line with the DGA approach of this paper, since the irreducible action of ${\mathrm{SO}}(3)$ on ${\mathbb{R}}^9$ preserves two differential forms, each with stabilizer ${\mathrm{SO}}(3)$.
${\mathrm{PSU}}(3)$-structures and calibrated submanifolds {#sec:psu3}
==========================================================
In this section we give a concrete introduction to the embedding problem, in the context of ${\mathrm{PSU}}(3)$-structures on $8$-manifolds. This type of structure has been studied in [@Hitchin:StableForms; @Witt:SpecialMetrics].
The structure group ${\mathrm{PSU}}(3)={\mathrm{SU}}(3)/{\mathbb{Z}}_3$ acts faithfully on ${\mathbb{R}}^8$ via the adjoint representation; a ${\mathrm{PSU}}(3)$-structure on an $8$-manifold $M$ is a reduction of the bundle of frames to the group ${\mathrm{PSU}}(3)\subset{\mathrm{SO}}(8)$. The group ${\mathrm{PSU}}(3)$ fixes a $3$-form on ${\mathbb{R}}^8$, which in terms of an orthonormal basis $e^1,\dotsc, e^8$ can be written as $$\rho=e^{123}+\frac12 e^1\wedge (e^{47}-e^{56})+\frac12 e^2\wedge (e^{46}+e^{57})+\frac12 e^3\wedge (e^{45}-e^{67})+\frac{\sqrt3}2e^8\wedge (e^{45}+e^{67})\;.$$ This form is stable in the sense of [@Hitchin:StableForms], a fact which plays a significant rôle in the proof of Theorem \[thm:psu3Embedding\] below.
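Stability can also be checked numerically: it amounts to the infinitesimal orbit ${\mathfrak{gl}}(8,{\mathbb{R}})\cdot\rho$ being all of $\Lambda^3({\mathbb{R}}^8)^*$, i.e. having dimension $56$. The following sketch is purely illustrative (it is not part of the argument; index and sign conventions are the obvious ones):

```python
import numpy as np
from itertools import permutations

# Nonzero components of rho in the orthonormal basis e^1,...,e^8 displayed above.
comps = {(1, 2, 3): 1.0, (1, 4, 7): 0.5, (1, 5, 6): -0.5, (2, 4, 6): 0.5,
         (2, 5, 7): 0.5, (3, 4, 5): 0.5, (3, 6, 7): -0.5,
         (4, 5, 8): np.sqrt(3) / 2, (6, 7, 8): np.sqrt(3) / 2}

# Totally antisymmetric coefficient array rho[a,b,c] = rho(e_a, e_b, e_c).
rho = np.zeros((8, 8, 8))
for (i, j, k), v in comps.items():
    i, j, k = i - 1, j - 1, k - 1
    for p in permutations((i, j, k)):
        sign = 1.0 if p in {(i, j, k), (j, k, i), (k, i, j)} else -1.0
        rho[p] = sign * v

def infinitesimal_action(X):
    """(X.rho)(u,v,w) = -rho(Xu,v,w) - rho(u,Xv,w) - rho(u,v,Xw)."""
    return -(np.einsum('ma,mbc->abc', X, rho)
             + np.einsum('mb,amc->abc', X, rho)
             + np.einsum('mc,abm->abc', X, rho))

rows = []
for a in range(8):
    for b in range(8):
        X = np.zeros((8, 8))
        X[a, b] = 1.0
        rows.append(infinitesimal_action(X).ravel())

# rank 56 = dim of 3-forms on R^8; the 8-dimensional kernel is the stabilizer psu(3).
print(np.linalg.matrix_rank(np.vstack(rows)))
```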
By construction, an $8$-manifold $M$ with a ${\mathrm{PSU}}(3)$-structure carries two canonical forms, which we also denote by $\rho$ and $*\rho$. Requiring these forms to be parallel is too strong a condition in order to obtain interesting examples (see [@Hitchin:StableForms]), and one is led to consider the weaker conditions $$\label{eqn:PSU3Geometries} d\rho=0, \quad d*\rho=0, \quad d\rho=0=d*\rho.$$ We shall see in Section \[sec:Preparatory\] that to each geometry in \[eqn:PSU3Geometries\] one can associate an exterior differential system, which is involutive in the first two cases (see Proposition \[prop:stable\]), though it does not appear to be involutive in the last case.
Up to a normalization constant, we can assume that the comass of $\rho$, namely the maximum of the map $$\label{eqn:comass}
Gr_3^+({\mathbb{R}}^8)\to{\mathbb{R}}, \quad E\to \rho(E),$$ is equal to one. Then if $M$ is an $8$-manifold with a ${\mathrm{PSU}}(3)$-structure such that the associated form $\rho$ is closed, $\rho$ is a calibration in the sense of [@HarveyLawson:CalibratedGeometries]. An oriented three-dimensional $N\subset M$ is said to be calibrated by $\rho$ if $\rho(T_xN)=1$ for all $x$ in $N$. In analogy with [@Bryant:Calibrated], we can prove the following:
\[thm:psu3Embedding\] If $N$ is a compact, orientable, real analytic Riemannian $3$-manifold, then $N$ can be embedded isometrically as a calibrated submanifold in an $8$-manifold $M$ with a ${\mathrm{PSU}}(3)$-structure with $\rho$ closed.
The proof of this theorem will be given in Section \[sec:AbstractEmbedding\]. For the moment, we point out that by a real analytic Riemannian manifold we mean a real analytic manifold with a real analytic Riemannian metric.
The only rôle of the dimension three in the proof of Theorem \[thm:psu3Embedding\] is that of ensuring that $N$ is parallelizable. In general, one has to take this as a hypothesis. For example, we can consider the analogous problem of $5$-manifolds calibrated by $*\rho$:
\[thm:psu3Embedding2\] If $N$ is a compact, parallelizable, real analytic Riemannian $5$-manifold, then $N$ can be embedded isometrically as a calibrated submanifold of an $8$-manifold $M$ with a ${\mathrm{PSU}}(3)$-structure with $*\rho$ closed.
The form $\rho\in\Lambda^3{\mathfrak{su}}(3)$ can also be defined in terms of the Killing form $B$ of ${\mathfrak{su}}(3)$, as it satisfies (up to a constant) $$\rho(X,Y,Z)=B([X,Y],Z).$$ More generally, any Lie algebra has a standard $3$-form defined this way, and one can study the associated geometry. However, the results of this paper have no evident application to this general situation, since the form is generally not stable.
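To make the last formula concrete, here is a small numerical sketch (illustrative only; it assumes the standard Gell-Mann basis $\lambda_1,\dotsc,\lambda_8$ of ${\mathfrak{su}}(3)$ with $\operatorname{tr}(\lambda_a\lambda_b)=2\delta_{ab}$, and the overall constant is irrelevant): the structure constants $f_{abc}$, which up to scale are the components of $B([X,Y],Z)$, reproduce exactly the coefficients of $\rho$ written in Section \[sec:psu3\].

```python
import numpy as np

# The eight Gell-Mann matrices (a standard basis adapted to su(3)).
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]),
    np.diag([1, 1, -2]).astype(complex) / np.sqrt(3),
]

def f(a, b, c):
    """Structure constant f_abc, from [l_a, l_b] = 2i f_abc l_c and tr(l_a l_b) = 2 delta_ab."""
    comm = lam[a - 1] @ lam[b - 1] - lam[b - 1] @ lam[a - 1]
    return float(np.real(-0.25j * np.trace(comm @ lam[c - 1])))

for abc in [(1, 2, 3), (1, 4, 7), (1, 5, 6), (2, 4, 6), (2, 5, 7),
            (3, 4, 5), (3, 6, 7), (4, 5, 8), (6, 7, 8)]:
    print(abc, round(f(*abc), 4))
# Output: 1, 0.5, -0.5, 0.5, 0.5, 0.5, -0.5, 0.866, 0.866 -- the coefficients of rho.
```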
Special geometries, differential graded algebras and exterior differential systems {#sec:Preparatory}
==================================================================================
In this section we introduce the exterior differential system associated to special geometries of a specific type, modelled on a certain differential graded algebra (DGA), and we introduce the Cartan-Kähler machinery, which will be used in later sections to prove an abstract embedding theorem. As a first application, we determine sufficient conditions for this structure to determine an involutive exterior differential system (implying, in particular, local existence).
We now introduce some notation which will be fixed for the rest of the paper. Let $M$ be a manifold of dimension $n$, and let $\pi\colon F\to M$ be the bundle of frames. We set $T={\mathbb{R}}^n$ as a representation of ${\mathrm{O}}(n)\subset{\mathrm{GL}}(n,{\mathbb{R}})$, so that $TM=F\times_{{\mathrm{GL}}(n,{\mathbb{R}})}T$; in addition, there is a fixed metric on $T$, which will be freely identified with its dual $T^*$. Let $G$ be a compact Lie group in ${\mathrm{O}}(n)$. The fixed set $(\Lambda^*T)^G$ of $G$ in the exterior algebra over $T$ maps naturally to forms on $F$ by a map $$\label{eqn:TautologicalMorphism}
\iota\colon (\Lambda^*T)^G\to\Omega(F).$$ Notice that $\iota$ does not depend on $G$, but only on $T$ and the tautological form on $F$; we shall sometimes write $\iota_T$ rather than $\iota$ to emphasize the dependence. If $P_G\subset F$ is a $G$-structure, the restriction of the image of $\iota$ to $P_G$ determines a finite dimensional subalgebra of $\Omega(M)$.
Let $\mathcal{I}^G(F)\subset \Omega(F)$ be the ideal generated by $d(\iota((\Lambda^*T)^G))$; by construction, this is a differential ideal, modelled on the graded algebra $(\Lambda^*T)^G$. More generally, let $A$ be a graded subalgebra of $(\Lambda^*T)^G$; we say that a linear endomorphism $f$ of $A$ is a [*differential operator*]{} if $(A,f)$ is a DGA, namely $f^2=0$, the graded Leibnitz rule holds and $f$ is the sum of linear maps $$f\colon (A\cap \Lambda^pT)\to (A\cap \Lambda^{p+1}T).$$ Given the DGA $(A,f)$, one can consider the ideal $\mathcal{I}_{A,f}(F)$ generated by $$\left\{d\iota(\alpha)-\iota(f(\alpha)), d\iota(f(\alpha))\mid \alpha\in A\right\}.$$ This is a differential ideal which coincides with $\mathcal{I}^G(F)$ if $(A,f)=((\Lambda^*T)^G,0)$. Since the algebra $A$ is encoded in the definition of $f$, we shall often write $\mathcal{I}_f(F)$ for $\mathcal{I}_{A,f}(F)$.
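For instance (a worked case, using the nearly-Kähler DGA recalled in the introduction, with $A=(\Lambda^*{\mathbb{R}}^6)^{{\mathrm{SU}}(3)}$ generated by $\omega,\psi^\pm$), the ideal $\mathcal{I}_f(F)$ is generated by $$d\iota(\omega)-3\,\iota(\psi^+),\qquad d\iota(\psi^+),\qquad d\iota(\psi^-)+2\,\iota(\omega)\wedge\iota(\omega),\qquad d\bigl(\iota(\omega)\wedge\iota(\omega)\bigr);$$ one checks, using the Leibnitz rule and the relation $\omega\wedge\psi^\pm=0$, that the generators coming from the remaining elements of $A$ already lie in the ideal these four generate.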
We fix a torsion-free connection form $\omega$, whose components will be denoted by $\omega_{ij}$, and the tautological form $\theta$ with components $\theta_i$; by the torsion-free assumption, $$\label{eqn:structure}
d\theta_i=-\sum_{j=1}^n\omega_{ij}\wedge\theta_j.$$ Let $Gr_n(TF)$ be the Grassmannian of $n$-planes inside $TF$; by definition, an element of $Gr_n(TF)$ is transverse to $\pi$ if it does not intersect the vertical distribution $\ker\pi_*$. Transverse elements define an open subset $Gr_n(TF,\pi)\subset Gr_n(TF)$, on which we have natural coordinates $(u,p_{ijk})$, where $u$ represents a set of coordinates on $F$ and $p_{ijk}$ are determined by $$\label{eqn:grassmann}
\omega_{ij}=\sum_{k=1}^n p_{ijk}\theta_k.$$ We define $V_n(\mathcal{I}_f)$ as the set of $\pi$-transverse integral elements of dimension $n$, namely those elements $E$ of $Gr_n(F,\pi)$ such that all forms in $\mathcal{I}_f$ restrict to zero on $E$. In terms of the coordinates $(u,p_{ijk})$, we see from \[eqn:structure\] and \[eqn:grassmann\] that $V_n(\mathcal{I}_f)$ is cut out by affine equations in the $p_{ijk}$ with constant coefficients; hence, it is either empty or a smooth manifold.
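For instance (a minimal worked case), if $A$ contains a $1$-form, say $\theta_\ell\in A$ with $f(\theta_\ell)=\sum_{j<k}c_{jk}\,\theta_j\wedge\theta_k$ — as happens for the form $\alpha=\theta^{2n+1}$ of Section \[sec:oddSUn\] — then on a transverse plane $E$ the equations \[eqn:structure\] and \[eqn:grassmann\] give $$d\theta_\ell|_E=\sum_{j<k}(p_{\ell jk}-p_{\ell kj})\,\theta_j\wedge\theta_k,$$ so the corresponding equations cutting out $V_n(\mathcal{I}_f)$ read $p_{\ell jk}-p_{\ell kj}=c_{jk}$, which are indeed affine with constant coefficients.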
The coordinates $p_{ijk}$ determine a natural map $$\label{eqn:grassmannian}
{\mathfrak{gl}}(n,{\mathbb{R}})\otimes T\to Gr_n(T_uF,\pi).$$ Under this map, $V_n(\mathcal{I}_f)$ corresponds to an affine space $$Z_f=Z_{A,f}\subset {\mathfrak{gl}}(n,{\mathbb{R}})\otimes T.$$ We shall say that an element of ${\mathfrak{gl}}(n,{\mathbb{R}})\otimes T$ is an [*integral*]{} of $\mathcal{I}_f$ if it lies in $Z_f$. Notice that the map depends on the choice of a connection, but not so the space $Z_f$, i.e. whether a point in ${\mathfrak{gl}}(n,{\mathbb{R}})\otimes T$ is mapped to an integral.
Define $Z'_f$, $Z''_f$ by the diagram of $G$-equivariant maps $$\xymatrix{
{\mathfrak{gl}}(n,{\mathbb{R}})\otimes T\ar[r]^\cong &T\otimes {\mathfrak{gl}}(n,{\mathbb{R}})\ar[r]& T\otimes \Lambda^2T\ar[r]& \frac{ T\otimes \Lambda^2T}{{\mathfrak{g}}\otimes T}\\
Z_f\ar@{^{(}->}[u]\ar[rr]&& Z'_f\ar@{^{(}->}[u]\ar[r] & Z''_f\ar@{^{(}->}[u]}$$ where the vertical arrows are inclusions and the horizontal arrows are surjective.
We can explain the relation between exterior differential systems and special geometries with the following:
\[prop:SpecialGeometries\] The affine spaces $Z_f$, $Z'_f$, $Z''_f$ are invariant under $G$, and the vector space $$T\otimes S^2T+ {\mathfrak{g}}\otimes T$$ is contained in a translate of $Z_f$. If $E\in Gr_n(T_uF,\pi)$, the following are equivalent:
- $E$ is an integral element of $\mathcal{I}_f(F)$.
- Given any connection extending $E$, in the sense that the connection form $\tilde\omega$ satisfies $\ker\tilde\omega_u=E$, the torsion map $$\Theta\colon F\to T\otimes \Lambda^2T$$ satisfies $\Theta(u)\in Z'_f$.
- Given any local $G$-structure $P_G\subset F$ extending $E$, in the sense that $T_uP_G\supset E$, the intrinsic torsion map $$\tau\colon P_G\to \frac{ T\otimes \Lambda^2T}{{\mathfrak{g}}\otimes T}$$ satisfies $\tau(u)\in Z''_f$.
In general, one is interested in finding $G$-structures $P_G$ which not only extend an integral element, but are themselves integrals of $\mathcal{I}_f$, i.e. such that $\mathcal{I}_f$ restricts to zero on $P_G$. Then the map $\iota$ descends to an injective DGA morphism $(A,f)\to\Omega(M)$, and Proposition \[prop:SpecialGeometries\] tells us that the intrinsic torsion at each point is forced to lie in some space $Z''_f$.
First, we discuss the problem of whether $Z_f$ is empty.
\[lemma:ZEmpty\] The space $Z_{A,f}$ is non-empty if and only if $f$ extends to a degree $1$ derivation $\tilde f$ of the graded algebra $\Lambda^*T$. In that case, $Z_{A,f}$ is a translate of $Z_{A,0}$, and $\tilde f$ can be taken to be $G$-equivariant.
Take an element $\eta$ of ${\mathfrak{gl}}(T)\otimes T$; its image in $T\otimes\Lambda^2T$ can be viewed as a linear map $T\to\Lambda^2T$, which extends uniquely to a degree $1$ derivation $d_\eta$ of $\Lambda^*T$ (which fails to be a differential operator since $(d_\eta)^2\neq 0$ in general). It follows from the definition that $\eta$ is in $Z'_{A,f}$ if and only if $d_\eta$ restricted to $A$ coincides with $f$, and two such $\eta$ differ by a derivation that restricts to zero on $A$, i.e. an element of $Z'_{A,0}$.
For the last part of the statement, observe that the group ${\mathrm{GL}}(T)$ acts on the space of derivations of degree one on $\Lambda^*T$ by $$g\cdot f(\alpha)= g(f(g^{-1}\alpha)), \quad \alpha\in \Lambda^*T.$$ By construction, if $f'$ is a derivation that coincides with $f$ on $(\Lambda^*T)^G$, then for any $g\in G$ the derivation $g\cdot f'$ also restricts to $f$ on $A\subset (\Lambda^*T)^G$. Thus we can define $$\tilde f=\int_{g\in G} (gf')\mu_g,$$ where $\mu_g$ is the Haar measure, and obtain a $G$-equivariant derivation of degree one which extends $f$.
In fact, the condition of Lemma \[lemma:ZEmpty\] is not trivially satisfied; for an example, we refer to Section \[sec:so3\].
We can now introduce a special class of DGA’s for which $Z_f$ has a particularly simple description. Let ${\mathfrak{g}}^\perp$ denote the orthogonal complement of ${\mathfrak{g}}$ in $\Lambda^2T$.
\[prop:cit\] For a given compact Lie group $G$ acting on $T$ and $A=(\Lambda^*T)^G$, $Z''_0=\{0\}$ if and only if $$Z_0=({\mathfrak{g}}\otimes T)\oplus (T\otimes S^2T).$$ In this case, $$\operatorname{codim}V_n(\mathcal{I})=\dim T\otimes {\mathfrak{g}}^\perp.$$ For every differential operator $f$ on $(\Lambda^*T)^G$, either $Z''_f$ is empty or $Z''_f=\{\xi_f\}$, where $\xi_f$ is fixed under the action of $G$. Moreover the intrinsic torsion of any $G$-structure $P_G$ is completely determined by the map $$d\circ\iota\colon (\Lambda^*T)^G\to\Omega(P_G).$$ In particular, a $G$-structure is an integral of $\mathcal{I}_f$ if and only if its intrinsic torsion map $$P_G\to{\mathfrak{g}}^\perp\otimes T$$ takes constantly the value $\xi_f$.
We say $G$ is [*strongly admissible*]{} if one of the equivalent conditions of Proposition \[prop:cit\] hold, and that $P_G$ is a $G$-structure with [*constant intrinsic torsion*]{} if the intrinsic torsion map is constant. Proposition \[prop:cit\] suggests that it makes sense to consider $G$-structures with constant intrinsic torsion, for $G$ a strongly admissible group; constant intrinsic torsion geometries are the DGA equivalent of Killing spinors, though more general. Examples of these structures will be given in Section \[sec:SUn\] and Section \[sec:oddSUn\].
In the case that $f$ is zero, looking for integral $G$-structures is equivalent to requiring that the holonomy be contained in $G$, as in [@Bryant]. However, the same techniques also work in the case of constant intrinsic torsion (that is, when $f$ is non-zero) or in the case of subalgebras $A\subset(\Lambda^*T)^G$. Indeed, in the rest of this section we give a sufficient condition on $(A,f)$ ensuring that $\mathcal{I}_{A,f}$ is involutive.
Now fix an exterior differential system $\mathcal{I}_f$. A subspace $E\subset T_uF$ is integral if $\mathcal{I}_f$ restricts to zero on $E$. If $v_1,\dotsc,v_k$ is a basis of $E$, the space of [*polar equations*]{} of $E$ is $$\mathcal{E}(E)=\left\{(v_1\wedge\dotsb\wedge v_k){\lrcorner\,}\alpha_u \mid \alpha\in \mathcal{I}_f\right\}.$$ Then the annihilator of $\mathcal{E}(E)$ is the union of all integral elements containing $E$. Given $E\in V_n(\mathcal{I}_f)$, a flag $$\label{eqn:flagE}
\{0\}=E_0\subsetneq\dotsb\subsetneq E_n=E$$ is [*ordinary*]{} (resp. [*regular*]{}) if for all $k<n$ (resp. $k\leq n$) the dimension of $\mathcal{E}(E')$ is constant for $E'$ in a neighbourhood of $E_k$ in $Gr_k(F)$. By Cartan’s test, the flag satisfies $$\operatorname{codim}V_n(\mathcal{I}_f)\geq c_0 + \dotsb + c_{n-1},\quad c_k=\dim\mathcal{E}(E_k),$$ and equality holds if and only if the flag is ordinary. Moreover by the Cartan-Kähler theorem the terminus $E$ of an ordinary flag is contained in an integral manifold of dimension $n$ (see [@BryantEtAl]).
Since the map $\iota$ establishes an isomorphism between $T$ and $E^*$, we can define another flag $$\label{eqn:flagW}
\{0\}=W_0\subsetneq\dotsb\subsetneq W_n=T, \quad E_k=\iota(W_k^\perp)^o\cap E$$ Here the notation is that if $V\subset W$, we define $V^o\subset W^*$ as the subspace of those elements of $W^*$ that vanish on $V$. We now show that the numbers $c(E_k)$ do not depend on $E$ or $f$, but only on the flag \[eqn:flagW\].
\[lemma:cw\] Given a compact Lie group $G$ acting orthogonally on $T={\mathbb{R}}^n$, and a graded subalgebra $A\subset (\Lambda^*T)^G$, to every $W\subset T$ one can associate a subspace $H(W)\subset{\mathfrak{gl}}(T)$ with the following property. Let $F$ be a frame bundle over an $n$-dimensional manifold $M$, $f$ a differential operator on $A$, $E$ in $V_n(\mathcal{I}_f(F))$, and $E'\subset E$ the subspace corresponding to $W\subset T$. Then the polar space $H(E')$ for the exterior differential system $\mathcal{I}_f(F)$ satisfies $$H(E')=E'\oplus H(W).$$
If $E\subset T_uF$, we can think of $\iota$ as a map from $T$ to $T_u^*F$. Since $E\supset E'$ is an integral element, all the forms in $\mathcal{E}(E')$ vanish on $E$. This means that $\mathcal{E}(E')$ does not intersect $\iota(T)$. We can therefore consider the reduced polar equations, namely the image of $\mathcal{E}(E')$ under the map $$T^*_uF \to \frac{T_u^*F}{\iota(T)}\cong {\mathfrak{gl}}(T)^*.$$ When passing to the quotient, the components $\iota(f(\alpha))$ of the generators vanish, and so the reduced polar equations for $\mathcal{I}_{A,f}(F)$ are the same as those for $\mathcal{I}_{A,0}(F)$. Since the structure equation \[eqn:structure\] does not depend on the point $u$, or the manifold $M$, their image in ${\mathfrak{gl}}(T)^*$ depends only on $W$. Defining $H(W)$ as the subspace $\mathcal{E}(E')^o$ of ${\mathfrak{gl}}(T)$, the statement follows.
Noting that $\dim H(E)+\dim\mathcal{E}(E)=n^2+n$, we set $$c(W)=n^2+n-\dim H(W).$$ Thus, Lemma \[lemma:cw\] asserts that $c(E)$ equals $c(W)$ regardless of $f$, $F$ and $E$.
\[dfn:ordinary\] We say a flag $W_0\subsetneq\dotsb\subsetneq W_n=T$ is [*$\mathcal{I}_A$-ordinary*]{} if $$c(W_0)+\dotsb+c(W_{n-1})=\dim \frac{{\mathfrak{gl}}\otimes T}{Z_0}.$$
This is a purely algebraic definition, which does not involve directly an exterior differential system. One can show that the $c(W_i)$ add up to the dimension of $T\otimes{\mathfrak{g}}^\perp$ if and only if $G$ is strongly admissible and the flag is $\mathcal{I}_A$-ordinary (see Proposition \[prop:SUnIsStronglyAdmissible\]).
\[prop:localexistence\] Given a compact Lie group $G$ acting orthogonally on $T={\mathbb{R}}^n$, and a subalgebra $A\subset (\Lambda^*T)^G$, suppose there is an $\mathcal{I}_A$-ordinary flag $$W_0\subsetneq\dotsb\subsetneq W_n=T.$$ Then, for every differential operator $f$ on $A$ which extends to a derivation of degree one of $\Lambda^*T$ and every real analytic $n$-dimensional manifold $M$ with frame bundle $F$, the exterior differential system $\mathcal{I}_f(F)$ is involutive.
By Lemma \[lemma:ZEmpty\], $Z_{A,f}$ is a translate of $Z_{A,0}$; in particular, at each $u\in F$ there is an integral element $E\in V_n(\mathcal{I}_f(F))$. By Lemma \[lemma:cw\], the flag determines a flag $E_0\subset\dotsb\subset E_n$ such that $$c_0+\dotsb+c_{n-1}=\dim \frac{{\mathfrak{gl}}\otimes T}{Z_0}.$$ On the other hand the right-hand side equals the codimension of $V_n(\mathcal{I}_f(F))$, and so by Cartan’s test the flag $E_0\subset\dotsb\subset E_n$ is ordinary. Thus, every integral element is ordinary, and so by the Cartan-Kähler theorem it is tangent to a real analytic integral of dimension $n$.
We can now give a sufficient condition for algebras generated by a single stable form. Recall from [@Hitchin:StableForms] that a form $\phi\in\Lambda^pT$ is [*stable*]{} if its infinitesimal orbit ${\mathfrak{gl}}(T)\cdot\phi$ relative to the standard ${\mathrm{GL}}(T)$ action coincides with $\Lambda^pT$.
More generally, given a subspace $E\subset T$, we shall say that a form $\alpha$ in $\Lambda^pT$ is $E$-stable if the restriction map $\pi_E\colon \Lambda^pT\to \Lambda^pE$ satisfies $$\label{eqn:Estable}
\pi_E({\mathfrak{gl}}(T)\cdot\alpha)=\Lambda^p E.$$ In particular, if $\alpha$ is stable then it is also $E$-stable for all $E\subset T$. The converse is not true: for instance, a $4$-form in ${\mathbb{R}}^8$ with stabilizer ${\mathrm{Spin}}(7)$ is not stable, but it is $E$-stable for every $E\subsetneq T$, since the restriction to a space of codimension one has stabilizer conjugate to ${{\mathrm{G}_2}}$.
\[prop:stable\] Let $\alpha$ be an $E$-stable form on $T$, where $E$ has codimension one in $T$. Then every flag $E_0\subsetneq\dotsb\subsetneq E_{n}$ in $T$ with $E_{n-1}=E$ is $\mathcal{I}^\alpha$-ordinary.
Since $\pi_{E_i}$ factors through $\pi_{E_{n-1}}$, $\alpha$ is $E_i$-stable for all $i$. Thus we obtain that $\mathcal{E}(E_i)$ consists of $\binom{i}{p}$ equations. On the other hand, the space $Z^\alpha$ is defined by at most $\binom{n}{p+1}$ equations. By Cartan's inequality, $$\binom{p}{p}+\binom{p+1}{p}+\dotsb+\binom{n-1}{p}\leq \operatorname{codim}Z^{\alpha} \leq \binom{n}{p+1},$$ and a straightforward calculation shows that equality must hold.
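The calculation alluded to is the standard identity $$\sum_{k=p}^{n-1}\binom{k}{p}=\binom{n}{p+1},$$ so the two bounds coincide and every inequality above is in fact an equality.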
In general, for algebras $A$ generated by a single form, an $\mathcal{I}_A$-ordinary flag may exist even if the form is not $E$-stable for any $E$ of codimension one. For instance, the form $e^{12}+e^{34}$ in ${\mathbb{R}}^7$ is not $E$-stable for any six-dimensional $E$, but the algebra $A$ it generates admits an $\mathcal{I}_A$-ordinary flag. However, the above proposition applies to some significant cases. Recall from Section \[sec:psu3\] that ${\mathbb{R}}^8$ admits a stable three-form $\rho$ with stabilizer ${\mathrm{PSU}}(3)$. Indeed more stable forms exist (see [@LePanakVanzura]), but we shall focus on forms with compact stabilizers.
\[thm:involutive\] If $A$ is one of $$(\Lambda^*{\mathbb{R}}^8)^{{\mathrm{Sp}}(2){\mathrm{Sp}}(1)}, (\Lambda^*{\mathbb{R}}^8)^{{\mathrm{Spin}}(7)}, {\mathbb{R}}\rho\subset\Lambda^*{\mathbb{R}}^8, {\mathbb{R}}(*\rho)\subset\Lambda^*{\mathbb{R}}^8\;$$ then for every real analytic manifold $M$ of the appropriate dimension with frame bundle $F$, the exterior differential system $\mathcal{I}_{A,0}(F)$ is involutive. Moreover if $A$ is one of $$(\Lambda^*{\mathbb{R}}^7)^{{{\mathrm{G}_2}}},(\Lambda^*{\mathbb{R}}^{2k})^{{\mathrm{SU}}(k)}, (\Lambda^*{\mathbb{R}}^{2k+1})^{{\mathrm{SU}}(k)},$$ for every real analytic manifold $M$ of the appropriate dimension $n$ with frame bundle $F$ and every differential operator $f$ on $A$ which extends to a derivation of degree one of $\Lambda^*{\mathbb{R}}^n$, the exterior differential system $\mathcal{I}_{A,f}(F)$ is involutive.
Given Proposition \[prop:localexistence\], it suffices to show that a flag satisfying Definition \[dfn:ordinary\] exists in each case. For the first part of the statement, we observe that each algebra is generated by an $E$-stable form and apply Proposition \[prop:stable\]. Concerning the second part, we refer to [@Bryant] for the case of ${{\mathrm{G}_2}}$, and to Lemmas \[lemma:sun\], \[lemma:sunodd\] for the case of ${\mathrm{SU}}(n)$.
By the second part of the theorem, it follows that the exterior differential systems associated to nearly-parallel ${{\mathrm{G}_2}}$-structures, nearly-Kähler ${\mathrm{SU}}(3)$-structures and $\alpha$-Einstein-Sasaki ${\mathrm{SU}}(k)$-structures are involutive. These structures are also characterized by the existence of a Killing spinor. Thus, this result can be viewed as an “analogue with torsion” to Bryant’s result in the case of a parallel spinor [@Bryant].
Computing the $c_k$ is in general not a straightforward task. For fixed $G$, one can resort to the use of a computer (see [@Conti:SymbolicComputations]). In general, one has to carry out the computation by hand (see Sections \[sec:SUn\], \[sec:oddSUn\]).
The abstract embedding theorem {#sec:AbstractEmbedding}
==============================
In this section we formalize the embedding problem introduced in Section \[sec:psu3\], and prove an abstract embedding theorem implying Theorems \[thm:psu3Embedding\] and \[thm:psu3Embedding2\]; further applications will be given in later sections.
Let $W\subset T$ be a subspace. The action of $G$ on $T$ restricts to an action of $$H=G\cap\mathrm{O}(W)$$ on $W$. There is a natural map $$p\colon (\Lambda^*T)^G\to (\Lambda^*W)^H,$$ which is not in general surjective. Given a differential operator $f$ on $A\subset(\Lambda^*T)^G$, there is at most one differential operator $f_W$ such that the diagram $$\label{eqn:fw}
\begin{aligned}
\xymatrix{
A\ar[d]^p\ar[r]^f& A\ar[d]^p\\
p(A)\ar[r]^{f_W}& p(A)
}
\end{aligned}$$ commutes; the condition for the existence of $f_W$ is that $$\ker p \subset\ker p\circ f.$$ When $f=0$ the condition is trivially satisfied, and we will denote the induced map by $0_W$. The map $f_W$ can be used to define an induced exterior differential system on certain submanifolds of $M$.
Indeed, let $\iota\colon N\to M$ be a submanifold of $M$, with the same dimension as $W$, and assume the normal bundle is trivial. A principal bundle, whose fibre consists of the group ${\mathrm{GL}}(T,W)$ of matrices in ${\mathrm{GL}}(T)$ mapping $W$ to itself, is induced on $N$ by $$F_N=\{u\in \iota^*F\mid u(W)\subset T_{\pi(u)}N\}.$$ If one fixes a trivialization of the normal bundle, one can view the frame bundle of $N$ as a reduction to ${\mathrm{GL}}(W)$ of the principal bundle $F_N$; however, it will be more convenient to work with $F_N$. Although $F_N$ is not a ${\mathrm{GL}}(T,W)$-structure, it carries a tautological form nonetheless, and so it has a tautological morphism of the type \[eqn:TautologicalMorphism\], i.e. $$\iota_W\colon p((\Lambda^*T)^G)\to \Omega(F_N).$$ Like in Section \[sec:Preparatory\], it follows that $f_W$ determines an exterior differential system on $F_N$ which we denote by $\mathcal{I}_{f_W}(F_N)$.
We wish to relate integrals of $\mathcal{I}_{f}(F)$ and of $\mathcal{I}_{f_W}(F_N)$. In order to do so, we need to establish a compatibility condition between the embedding $N\subset M$ and the inclusion $W\subset T$.
Suppose $N$ is an embedded submanifold of $M$ with the same dimension as $W\subset T$ and $P_G$ is a $G$-structure on $M$; if the intersection $$\iota^*F_N \cap \iota^* P_G\subset\iota^*F$$ contains a principal bundle $P_H$ on $N$ with fibre $H$, we say that the pair $(N,P_H)$ is embedded in $(M,P_G)$ with type $W\subset T$.
Notice that in the above definition, the intersection $$(\iota^*F_N \cap \iota^* P_G)_x=\left\{u\in (P_G)_x\mid u(W)\subset T_{\pi(x)}N\right\}$$ is either empty or a single $\tilde H$-orbit, where $$\tilde H=G\cap(\mathrm{O}(W)\times\mathrm{O}(W^\perp)).$$ So the first condition required in the definition is that the intersection is never empty, or equivalently, a principal bundle with fibre $\tilde H$. On the other hand, giving a reduction to $H$ of this bundle is the same as giving a trivialization of the normal bundle.
\[prop:submanifold\] Fix a manifold $M$ and an exterior differential system $\mathcal{I}_f$ on its frame bundle. If a $G$-structure $P_G$ is an integral of $\mathcal{I}_f$ and $(N,P_H)$ is embedded in $(M,P_G)$ with type $W\subset T$, then $\ker p\subset \ker p\circ f$ and $P_H$ is an integral for $\mathcal{I}_{f_W}$.
Consider the commutative diagram $$\xymatrix{
A\ar@{^{(}->}[r]\ar[d]^p&(\Lambda^*T)^G\ar[d]^p\ar[r]^{\iota_T} &\Omega(F)\ar[r]\ar[d] &\Omega(P_G)\ar[d]\\
p(A)\ar@{^{(}->}[r]&(\Lambda^*W)^H\ar[r]^{\iota_W} &\Omega(F_N)\ar[r]&\Omega(P_H)
}$$ where the straight unlabeled arrows are restriction maps. Let $p(\alpha)$ be a form in $p(A)$. On $P_H$, $$d(\iota_W(p(\alpha)))=d(\iota_T(\alpha))=\iota_T(f(\alpha))=\iota_W(p(f(\alpha))),$$ and so if $p(\alpha)=0$, then $p(f(\alpha))=0$. Since $p\circ f=f_W\circ p$, $P_H$ is an integral of $\mathcal{I}_{f_W}$.
Conversely, we can pose the following general problem, which we shall call the [*embedding problem*]{}. Let $N$ be a manifold of the same dimension as $W$, and let $F_N$ be the bundle of frames on $N$. Suppose $F_N$ has a $H$-reduction $P_H$ which is an integral of $\mathcal{I}_{f_W}(F_N)$. Can we embed $(N,P_H)$ into a pair $(M,P_G)$ with type $W\subset T$, in such a way that $P_G$ is an integral of $\mathcal{I}_{f}$?
In principle one could also consider submanifolds $N$ with non-trivial normal bundle. So far, this would amount to replacing the structure group $H$ with $\tilde H$. However, the main result of this section makes use of Cartan-Kähler theory, which requires enlarging one dimension at a time. Thus, the manifolds one obtains have trivial normal bundle and the $\tilde H$-structure induced by the embedding reduces to $H$. For this reason, the theorem will not apply to $\tilde H$-structures which do not admit a $H$-reduction. See also the comments in [@Bryant:Calibrated] and the remark at the end of this section.
It was shown in [@Bryant:NonEmbedding] that one cannot solve the embedding problem in general in the non-real-analytic setting. However, keeping the real analytic assumption, we will prove an abstract embedding theorem, which generalizes the results mentioned in Section \[sec:psu3\], does not reference any specific instance of $G$ and $T$, and holds in the constant intrinsic torsion case as well.
\[lemma:extends\] Assume $Z_f$ is not empty. For every choice of $W\subset T$, the linear projection ${\mathfrak{gl}}(T)\otimes T\to{\mathfrak{gl}}(W)\otimes W$ induces by restriction a surjective map $Z_f\to Z_{f_W}$.
It follows from the diagram that the image of $Z_f$ is contained in $Z_{f_W}$; in particular $Z_{f_W}$ is not empty. Since the projection is linear, the image of $Z_f$ has the same dimension as the image of $Z_0$. Moreover, as $Z_{f_W}$ is not empty, its dimension does not depend on $f$. Summing up, it suffices to prove the statement for $f=0$.
The rest of the proof, while algebraic in nature, is best explained working at a point $u$ of the frame bundle $F$. Let $W$ have dimension $k$ and $T$ have dimension $n$. Consider the natural map $${\mathfrak{gl}}(W)\otimes W\to {\mathfrak{gl}}(T)\otimes T\to Gr_n(T_uF,\pi)\xrightarrow{q} Gr_k(T_uF,\pi),$$ where the map $q$ is given by the choice of $W\subset T$ and the correspondence of \[eqn:grassmann\]. We claim that the image $E$ of an element of $Z_{0_W}$ under this map is an integral element of $\mathcal{I}_{A,0}(F)$. Indeed, let $\alpha$ be in $A$; we can decompose it according to $T=W\oplus W^\perp$, so that $$\alpha=p(\alpha)+\sum \beta_i\wedge\gamma_i, \quad \beta_i\in\Lambda^*W^\perp, \gamma_i\in\Lambda^*W.$$ Restricting to the space $E$, $d\iota(p(\alpha))$ is zero because $E$ is the image of an element in $Z_{0_W}$, $\iota(\beta_i)$ is zero because of how $q$ is defined, and finally $d\iota(\beta_i)$ is zero because $E$ is the image of an element in ${\mathfrak{gl}}(W)\otimes W$. Thus, $d\iota(\alpha)$ is zero on $E$ for all $\alpha$, and $\pi$-transverse integral elements of $\mathcal{I}_{0_W}$ are mapped to $\pi$-transverse integral elements.
Therefore, we obtain a commutative diagram $$\xymatrix{
Z_0\ar[d]\ar[r] & V_n(\mathcal{I}_{A,0}(F))\ar[d]^q\\
Z_{0_W}\ar[r] & V_k(\mathcal{I}_{A,0}(F))
}$$ where the bottom arrow is the map constructed above; it suffices to show that the map $q$ is surjective. In other words, we must prove that for the exterior differential system $\mathcal{I}_{A,0}(F)$, every $\pi$-transverse integral element $E\subset T_uF$ of dimension $k<n$ is contained in a $\pi$-transverse integral element of dimension $n$.
Let $e$ be a generator of $\Lambda^k E$, and set $$\mathcal{\tilde E}(E)=\{e{\lrcorner\,}\phi_u\mid \phi\in\mathcal{I}^G(F)\}\;.$$ By construction, $\mathcal{\tilde E}(E)$ is closed under wedging by forms in the exterior algebra $\Lambda E^o$ over the space $E^o\subset T^*_uF$ of $1$-forms that vanish on $E$. The polar equations $\mathcal{E}(E)$ are the subspace of $\mathcal{\tilde E}(E)$ consisting of forms of degree one; a vector space $E'\supset E$ of dimension $k+1$ is integral if and only if all forms in $\mathcal{E}(E)$ vanish on $E'$. Introducing the space of “horizontal” one-forms $$L=(\ker\pi_{*u})^o,$$ we now prove that $\mathcal{E}(E)$ is transverse to $L$. Indeed, let $\Theta$ be a generator of $\Lambda^nL$, and let $\Theta_o$ be a generator of $\Lambda^{n-k}(E^o\cap L)$. Then $\Theta_o$ does not lie in $\mathcal{\tilde E}(E)$, because otherwise $\mathcal{I}_{A,0}$ would contain an element of the form $\Theta+\beta$, $e{\lrcorner\,}\beta=0$; this is absurd, because it implies that the zero element in ${\mathfrak{gl}}(T)\otimes T$ is not an integral of $\mathcal{I}_{A,0}$. Thus $\mathcal{\tilde E}(E)$, which is closed under wedging with forms in $\Lambda E^o$, does not contain any $1$-form in $E^o\cap L$. On the other hand $E$ is integral, so $\mathcal{E}(E)$ is contained in $E^o$ and $$\mathcal{E}(E)\cap L=\mathcal{E}(E)\cap (E^o\cap L)=\{0\}\;.$$ Thus, a basis of $L$ restricts to linearly independent elements on $\mathcal{E}(E)^o$; this implies the existence of a $\pi$-transverse integral $E'\supset E$ of dimension $k+1$, and by repeating the argument we obtain the required integral element of dimension $n$.
In Proposition \[prop:localexistence\] we introduced a sufficient condition for $\mathcal{I}_f$ to be involutive. An involutive exterior differential system is a necessary ingredient for our embedding theorem, but the condition needs to be refined slightly. We say that $W\subset T$ is *relatively admissible* if there is an $\mathcal{I}^G$-ordinary flag $$\label{eqn:relativelyadmissible}
W_0\subsetneq\dotsb\subsetneq W_{n}=T, \quad W_k=W.$$
We can now prove the main result of this section, which also gives Theorems \[thm:psu3Embedding\], \[thm:psu3Embedding2\] as corollaries.
\[thm:AbstractEmbedding\] Let $G\subset {\mathrm{O}}(T)$ be a compact group, and let $W\subset T$ be a relatively admissible subspace, with $H=G\cap {\mathrm{O}}(W)$. Let $f$ be a differential operator on $A\subset (\Lambda^*T)^G$ which extends to a derivation of degree $1$ on $\Lambda^*T$ and assume $\ker p\subset \ker p\circ f$; let $f_W$ be the differential operator induced by the diagram \[eqn:fw\]. Then every real analytic pair $(N,P_H)$, with $N$ a manifold of the same dimension as $W$ and $P_H$ an integral of $\mathcal{I}_{p(A),f_W}$, can be embedded in a pair $(M,P_G)$ with type $W\subset T$, where $P_G$ is an integral of $\mathcal{I}_{A,f}$.
We shall prove that if $W'=W_{k+1}$, with $W_{k+1}$ defined by the flag , then $(N,P_H)$ can be embedded with type $W\subset W'$ in a real analytic pair $(N',P_K)$, where $$K=G\cap\mathrm{O}(W'),$$ and $P_K$ is an integral of $\mathcal{I}_{f_{W'}}$. By definition, either $W'=T$ and there is nothing left to prove, or $W'\subset T$ is relatively admissible and $(N',P_K)$ satisfies the hypotheses of the theorem. Therefore, we can repeat the argument, obtaining a chain of embedded pairs. Now, if $(N,P_H)$ is embedded in $(N',P_{H'})$ with type $W\subset W'$, and $(N',P_{H'})$ is embedded in $(M,P_G)$ with type $W'\subset T$, then $(N,P_H)$ is embedded in $(M,P_G)$ with type $W\subset T$. Therefore, the statement will follow.
Having reduced the problem to one-dimensional steps, we proceed by adapting the proof of [@ContiSalamon]. Suppose $P_H$ is an integral of $\mathcal{I}_{f_W}$ on $F_N$. Let $M=N\times {\mathbb{R}}^{n-k}$; denote by $q\colon M\to N$ the natural projection. The frame bundle $F_N$ of $N$ pulls back to a ${\mathrm{GL}}(W)$-structure $q^*F_N$ on $M$. We claim that $(q^*P_H)|_{N\times\{0\}}$ is an integral of the exterior differential system $\mathcal{I}_{f}(q^*F_N)$. Indeed, consider the commutative diagram $$\xymatrix{
A\ar[r]^{\iota_{T}}\ar[d]^{p} &\Omega\left(q^*F_N|_{N\times\{0\}}\right)\ar[r]&\Omega\left(q^*P_H|_{N\times\{0\}}\right)\\
p(A)\ar[r]^{\iota_W} &\Omega(F_N)\ar[r]\ar[u]_\cong&\Omega(P_H)\ar[u]_\cong
}$$ where the unlabeled arrows are restriction maps. For any $\alpha\in A$, we see that the form upstairs $$d\iota_{T}(\alpha)-\iota_{T} f(\alpha)$$ can be identified with the form downstairs $$d\iota_{W}(p(\alpha))-\iota_{W} f_{W}(p(\alpha)),$$ which vanishes on $P_H$. Therefore $(q^*P_H)|_{N\times\{0\}}$ is an integral of $\mathcal{I}_{f}(q^*F_N)$, and in particular an integral of $\mathcal{I}_{f}(F_M)$, where $F_M$ denotes the frame bundle of $M$.
Now fix a torsion-free connection on $F_N$, pull it back to $q^*F_N$ and extend it to $F_M$. At a point $u$ of $P_H$, we can write $$T_uF_N=W\oplus {\mathfrak{gl}}(W).$$ Choose an $E\subset T_uF_N$ such that $$T_uP_H=E\oplus {\mathfrak{h}}.$$ Then the usual map identifies $E$ with an element of $Z_{f_W}\subset{\mathfrak{gl}}(W)\otimes W$, which we also denote by $E$. Passing to the diffeomorphic bundle $q^*P_H|_{N\times\{0\}}$, the point $u$ corresponds to a point $(u,0)$, and the splitting $$T_{(u,0)}q^*F_N=T\oplus {\mathfrak{gl}}(W)$$ is compatible with the splitting of $T_uF_N$; thus, the tangent space to $(q^*P_H)|_{N\times\{0\}}$ at ${(u,0)}$ can also be identified with $E+{\mathfrak{h}}$, where $E$ is in $Z_{f_W}$. By Lemma \[lemma:extends\], $E$ is the image of an element $\tilde E$ of $Z_f$ under the map ${\mathfrak{gl}}(T)\otimes T\to{\mathfrak{gl}}(W)\otimes W$. In terms of bundles, this means that $$T_{(u,0)}q^*P_H\subset \tilde E\oplus {\mathfrak{h}}$$ where $\tilde E$ is in $V_n(\mathcal{I}_f(q^*F_N))$. Since $W\subset T$ is relatively admissible, and using Lemma \[lemma:cw\], we can find a flag $$E_0\subset \dotsb\subset E_{n}=\tilde E, \quad E=E_k;$$ which satisfies Cartan's test, and is therefore ordinary. In particular, $E$ is regular.
By the $G$-invariance (and therefore $H$-invariance) of $\mathcal{I}_f$, a $\pi$-transverse element $E_0$ is integral if and only if $E_0+{\mathfrak{h}}$ is integral, and the polar spaces are related by $$H(E_0)\oplus{\mathfrak{h}}=H(E_0+{\mathfrak{h}}).$$ In order to apply the Cartan-Kähler theory, we must quotient out the $H$-invariance. Thus, we consider the quotient $F_M/H$. Then the manifold $(q^*P_H/H)|_{N\times\{0\}}$ is an integral of $\mathcal{I}_f(F_M/H)$. Moreover since ${\mathfrak{h}}$ is contained in any polar space, the ordinary flag $E_0\subsetneq \dotsb\subsetneq E_n$ determines an ordinary flag $E_0'\subsetneq \dotsb \subsetneq E_n'$ on the quotient $F_M/H$. In particular, $E_k'$ is regular, and since this holds at all $u$, the integral manifold $(q^*P_H/H)|_{N\times\{0\}}$ is regular.
The Cartan-Kähler theorem requires a “restraining manifold”, namely a real analytic submanifold $R$ of $F_M/H$ which intersects each polar space of $(q^*P_H/H)|_{N\times\{0\}}$ transversely, and such that the codimension of the former is the extension rank of the latter. We shall define $R$ as the quotient of a $H$-invariant submanifold $\tilde R$ of $F_M$ with the same properties. The extension rank can be defined as $$r(E+{\mathfrak{h}})=\dim H(E+{\mathfrak{h}})-\dim (E+{\mathfrak{h}})-1,$$ and since $H(E)\supset{\mathfrak{h}}\oplus E_n$, it satisfies $$r(E+{\mathfrak{h}})=\dim H(E)-k-1-\dim{\mathfrak{h}} \geq n-k-1\geq 0.$$ By Lemma \[lemma:cw\], the polar space of $E$ satisfies $H(E)=\tilde E\oplus H(W)$. Let $V$ be a $H$-invariant complement of $H(W)$ in ${\mathfrak{gl}}(T)$, and let $U$ be the image under the exponential map of a small neighbourhood of zero in $V$, so that $U$ is invariant under the adjoint action of $H$ and the product map $U\times H\to {\mathrm{GL}}(T)$ is an embedding.
Denote by $N'$ the subset $$\{(x,y_1,\dotsc, y_{n-k})\in N\times{\mathbb{R}}^{n-k}\mid y_2=\dotsc=y_{n-k}=0\}.$$ Then the manifold $$\tilde R=(q^*P_H\cdot U)|_{N'}$$ is a fibre bundle over $N'$ with fibre $H\times U$. However, by construction $H\times U=U\times H$, and so $\tilde R$ is $H$-invariant; its codimension is $$\dim V+n-k-1- \dim{\mathfrak{h}}=\dim H(E)-k-1-\dim{\mathfrak{h}}=r(E+{\mathfrak{h}}).$$ Moreover, at a point $(u,0)$ of $q^*P_H$ we have $$T_{(u,0)}\tilde R=T_{(u,0)}q^*P_H|_{N\times\{0\}} \oplus \left\langle\frac\partial{\partial y_1}\right\rangle \oplus V,$$ so $$T_{(u,0)}\tilde R+H(E)=E+H(W)+V=T_{(u,0)}F_M.$$ Then the quotient $R=\tilde R/H$ is the restraining manifold we need.
Applying the Cartan-Kähler theorem to $F_M/H$, and taking the preimage in $F_M$, we obtain an $H$-invariant integral manifold $Q$, $(q^*P_H)|_{N\times\{0\}}\subset Q\subset \tilde R$ of dimension $k+1+\dim {\mathfrak{h}}$. By construction, $$T_{(u,0)}Q=E_{k+1}\oplus {\mathfrak{h}}$$ where $E_{k+1}$ is $\pi$-transverse. Up to restricting $N'$, $P_K=Q\cdot K$ is a $K$-structure on $N'$ as well as an integral manifold of $\mathcal{I}_f$, and therefore of $\mathcal{I}_{f_{W'}}$; moreover, $(N,P_H)$ is embedded in $(N',P_K)$ with type $W\subset W'$.
As a corollary, we obtain the following:
Let $A$ be the algebra generated by $\rho$, and let $W\subset{\mathbb{R}}^8$ be a subspace calibrated by $\rho$, i.e. a maximum point for \[eqn:comass\]. Since $\rho$ is stable, $W\subset{\mathbb{R}}^8$ is relatively admissible with respect to $\mathcal{I}_A$. Being three-dimensional, the manifold $N$ is parallelizable, and compactness ensures that a real analytic parallelism exists (see [@Bryant:Calibrated; @Bochner]), which can be taken to be orthonormal. Thus, $N$ has a real analytic $\{e\}$-structure, which is an integral of $\mathcal{I}_{0_W}$ because $A$ is generated by a three-form. By Theorem \[thm:AbstractEmbedding\] the manifold $N$ can be embedded with type $W\subset{\mathbb{R}}^8$ in a manifold $M$ with a ${\mathrm{PSU}}(3)$-structure whose associated form is closed. Since $W$ is a calibrated space, $N$ is a calibrated submanifold.
Analogous to the proof of Theorem \[thm:psu3Embedding\], except that parallelizability is now part of the hypothesis.
Notice that Theorem \[thm:AbstractEmbedding\] applies to all the geometries appearing in Theorem \[thm:involutive\], and in particular to $$\label{eqn:ParallelSpinor}
(\Lambda^*{\mathbb{R}}^{2n})^{{\mathrm{SU}}(n)}, (\Lambda^*{\mathbb{R}}^7)^{{\mathrm{G}_2}}, (\Lambda^*{\mathbb{R}}^8)^{{\mathrm{Spin}}(7)}.$$ In these cases every codimension one $W\subset T$ is relatively admissible, because the structure group acts transitively on the sphere. So, in some sense Theorem \[thm:AbstractEmbedding\] answers the generalized Killing spinor embedding problem mentioned in the introduction. In fact, by [@Wang:ParallelSpinorsAndParallelForms] the holonomy of a simply-connected, irreducible Riemannian manifold with a parallel spinor is either ${{\mathrm{G}_2}}$, ${\mathrm{Spin}}(7)$, ${\mathrm{SU}}(n)$ or ${\mathrm{Sp}}(n)$. Since ${\mathrm{Sp}}(n)$ is contained in ${\mathrm{SU}}(2n)$, a real analytic Riemannian manifold with a generalized Killing spinor $\psi$ can be isometrically embedded in an irreducible Riemannian manifold with a parallel spinor restricting to $\psi$ if and only if the $G$-structure defined by $\psi$ is an integral of $\mathcal{I}_{p(A),0_W}$, where $A$ is a DGA in \[eqn:ParallelSpinor\]. Other examples will be given in Sections \[sec:hypo\], \[sec:go\].
Theorem \[thm:AbstractEmbedding\] does not require that $G$ be admissible or strongly admissible. However, if the group is strongly admissible Proposition \[prop:cit\] gives a much stronger interpretation in terms of intrinsic torsion.
One can weaken the assumptions slightly and consider submanifolds whose normal bundle is flat rather than trivial; in other words, one replaces the group $H$ with a larger group $H'\subset G$ that acts discretely on $W^\perp$. The idea is that any $H'$-structure on $N$ reduces to $H$ when pulled back to the universal covering $\tilde N$. One can then apply the theorem to $\tilde N\subset \tilde M$, and obtain the embedding $N\subset M$ by means of a quotient. Indeed, with notation as in the proof of the theorem, $\pi_1(N)$ acts properly discontinuously on $q^*P$ and therefore on $R$. This action is compatible with the product action on $N\times{\mathbb{R}}$. Under suitable assumptions on $N$, e.g. if $\pi_1(N)$ is finite or $N$ is compact, we can restrict $P_{H'}$ and $N'$ to make them $\pi_1(N)$-invariant; moreover, the action is properly discontinuous because it is on the base.
The structure group ${\mathrm{SU}}(n)\subset {\mathrm{O}}(2n)$ {#sec:SUn}
==============================================================
In this paper we consider ${\mathrm{SU}}(n)$ both as a subgroup of ${\mathrm{O}}(2n)$ and as a subgroup of ${\mathrm{O}}(2n+1)$; in either case it is strongly admissible. In this section we consider it as a subgroup of ${\mathrm{O}}(2n)$, and show that it admits an $\mathcal{I}^{{\mathrm{SU}}(n)}$-ordinary flag, so that Theorem \[thm:AbstractEmbedding\] applies. This result will be used in Section \[sec:hypo\].
Let $T={\mathbb{R}}^{2n}$, $n\geq 2$, with basis $\theta_1,\dotsc,\theta_{2n}$, and identify ${\mathrm{SU}}(n)$ with the subgroup of ${\mathrm{O}}(T)$ which fixes the real two-form and complex $n$-form $$\label{eqn:SUnForms}
F=\theta^{12}+\dotsc + \theta^{2n-1,2n}, \quad \Omega=\Omega^+ + i\Omega^-=(\theta_1+i\theta_2)\dotsm (\theta_{2n-1}+i\theta_{2n}).$$
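For instance, for $n=2$ these forms read $F=\theta^{12}+\theta^{34}$ and $\Omega=(\theta_1+i\theta_2)\wedge(\theta_3+i\theta_4)$, so that $\Omega^+=\theta^{13}-\theta^{24}$ and $\Omega^-=\theta^{14}+\theta^{23}$.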
Define a flag $E_0\subsetneq\dotsb\subsetneq E_{2n}=T$, where for $0\leq k\leq n$ $$E_k={\operatorname{Span}\left\{\theta_1,\theta_3,\dotsc,\theta_{2k-1}\right\}},\quad E_{n+k}=E_n\oplus{\operatorname{Span}\left\{\theta_2,\theta_4,\dotsc, \theta_{2k}\right\}}.$$ Let $S={\mathfrak{gl}}(T)$, with basis $\omega_{ij}$. If $F$ is the frame bundle over a $2n$-dimensional manifold with a fixed torsion-free connection, the direct sum $T\oplus S\subset\Omega(F)$ maps isomorphically to $T^*F_u$ for any $u\in F$. In addition, exterior differentiation on $F$ restricts to an operator $$\label{eqn:formald}
d\colon \Lambda T\to S\otimes\Lambda T,\quad d(\theta_i)=\sum_{1\leq j\leq 2n}\omega_{ij}\theta_j.$$ In order to compute the numbers $c(W_k)$ defined after Lemma \[lemma:cw\] we can work in the exterior algebra over $T\oplus S$, using the formal operator $d$ defined by .
\[lemma:sun\] With respect to the exterior differential system $\mathcal{I}^{{\mathrm{SU}}(n)}$, the flag defined above satisfies $$c(W_k)=\begin{cases}
\binom{k}{2}& k<n\\
2-3n^2+4nk-n-\binom{k}2& n\leq k<2n\\
1+3n+6\binom{n}2&k=2n\\
\end{cases}.$$ In particular, $$\sum_{k=0}^{2n-1} c_k = 2n(n^2-n+1)=\dim {\mathbb{R}}^{2n}\otimes\frac{{\mathfrak{so}}(2n)}{{\mathfrak{su}}(n)}.$$
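For instance, for $n=2$ the formula gives $c(W_0)=c(W_1)=0$, $c(W_2)=3$ and $c(W_3)=9$, which indeed sum to $12=\dim{\mathbb{R}}^4\otimes\frac{{\mathfrak{so}}(4)}{{\mathfrak{su}}(2)}$.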
Let $E\subset T$ be the vector space spanned by some subset of $\{\theta_1,\dotsc,\theta_{2n}\}$. We write $$\Lambda^n(E)=L(E)\oplus C(E)$$ where $L(E)$ is spanned by elements $\theta_{i_1}\wedge\dotsb \wedge \theta_{i_n}$ such that the set $\{\theta_{i_j}\}$ does not contain any pair $\{\theta_{2j-1},\theta_{2j}\}$, and $C(E)$ is spanned by elements $\theta_{i_1}\wedge\dotsb \wedge \theta_{i_n}$ such that the set $\{\theta_{i_j}\}$ contains at least one pair $\{\theta_{2j-1},\theta_{2j}\}$. Similarly, we write $$\Lambda^2(E)=L^2(E)\oplus C^2(E),$$ where $C^2(E)$ is spanned by elements of the form $\theta_{2j-1}\wedge \theta_{2j}$ and $L^2(E)$ is its natural complement. In addition, we further decompose $L(E)$ as $$L(E)=L^+(E)\oplus L^-(E)$$ where $\theta_{i_1}\wedge\dotsb\wedge \theta_{i_n}$ lies in $L^+(E)$ or $L^-(E)$ according to whether the number of odd $i_k$ is even or odd.
Since we have fixed a basis, the space $T\oplus S$ can be identified with its dual; thus, we have natural maps $$\phi\colon \Lambda^2 T\to S,\quad \phi(v\wedge w)=dF(v,w),$$ $$\phi^\pm\colon \Lambda^n T\to S,\quad \phi^\pm(v_1\wedge\dotsm \wedge v_n)=d\Omega^\pm(v_1,\dotsc,v_n),$$ where as usual $\Omega^\pm$ are real forms such that $\Omega=\Omega^++i\Omega^-$. The space of polar equations of $E$ can be expressed as $$\mathcal{E}(E)=\phi(\Lambda^2(E))+ \phi^+(\Lambda^n(E))+ \phi^-(\Lambda^n(E)).$$ To compute its dimension, we decompose $S$ as $$S=S_1\oplus S_2\oplus S_3$$ where $$S_1={\operatorname{Span}\left\{\omega_{ii}\mid 1\leq i\leq 2n\right\}},\quad S_2={\operatorname{Span}\left\{\omega_{2k,2k-1},\omega_{2k-1,2k}\mid 1\leq k\leq n\right\}},$$ and $S_3$ is spanned by the $\omega_{ij}$ that do not lie in $S_1\oplus S_2$.
As a first observation, notice that $\phi$ can be represented as matrix multiplication by the invertible square matrix representing the complex structure on ${\mathbb{R}}^{2n}$, and so it is injective. With respect to the above decomposition, $$\label{eqn:imagesplits1}
\phi(L^2(E))\subseteq S_3, \quad \phi(C^2(E))\subseteq S_1,$$ and more explicitly $$\label{eqn:explicitly}
\phi(C^2(E_k))={\operatorname{Span}\left\{\omega_{2h-1,2h-1}+\omega_{2h,2h}\right\}}_{1\leq h\leq k-n}.$$ Similarly, we have $$\label{eqn:imagesplits2}
\phi^\pm(L^\pm(E))\subseteq S_1,\quad
\phi^\pm(L^\mp(E))\subseteq S_2,
\quad \phi^\pm(C(E))\subseteq S_3.$$ Next observe that $\phi^\pm$ is injective on $L^\pm(E)$. Now write $$\Omega=(\theta_1+i\theta_2)\wedge\Omega_{n-1},\quad \Omega_{n-1}=(\theta_3+i\theta_4)\dotsm (\theta_{2n-1}+i\theta_{2n}).$$ If $\theta_1\wedge\alpha$ is in $L$, where $\alpha$ has no component in $\theta_1\wedge\Lambda T$, then $$(\theta_1\wedge\alpha){\lrcorner\,}d\Omega +i (\theta_2\wedge\alpha){\lrcorner\,}d\Omega = -(\omega_{11}-\omega_{22}+i\omega_{21}+i\omega_{12}) \alpha{\lrcorner\,}\Omega_{n-1}.$$ So if $\theta_1\wedge\alpha$ is in $L^+$ then $\alpha{\lrcorner\,}\Omega_{n-1}$ is a real non-zero number, which we can normalize to $1$, finding that $$\begin{aligned}
\phi^+(\theta_1\wedge\alpha) &= \phi^-(\theta_2\wedge\alpha) -\omega_{11}+\omega_{22}\\
\phi^-(\theta_1\wedge\alpha) &= -\phi^+(\theta_2\wedge\alpha) -(\omega_{21}+\omega_{12})\end{aligned}$$ If $E=E_k$, $k>n$ we can compute $\mathcal{E}(E_k)$ in three steps.
(*i*) The space $\mathcal{E}(E_k)\cap S_1$ contains $\omega_{11}-\omega_{22}$, and therefore, by \[eqn:explicitly\], it contains both $\omega_{11}$ and $\omega_{22}$. By relabeling the indices, it follows that $\mathcal{E}(E_k)\cap S_1$ contains the elements $$\omega_{2h-1,2h-1},\omega_{2h,2h},\quad 1\leq h\leq k-n.$$ In addition $\mathcal{E}(E_k)$ contains $$\omega_{11}+\omega_{33}+\dotsc + \omega_{2n-1,2n-1}=-\phi^+(\theta_1\wedge\theta_3\wedge\dotsb\wedge \theta_{2n-1}).$$ Thus $$\dim \mathcal{E}(E_k)\cap S_1=\begin{cases}
0& k<n\\
2(k-n)+1& n\leq k<2n\\
2n&k=2n
\end{cases}$$
(*ii*) The space $\mathcal{E}(E_k)\cap S_2$ contains $\omega_{21}+\omega_{12}$, and $$\mathcal{E}(E_k)\cap S_2={\operatorname{Span}\left\{\omega_{21}+\omega_{12}\right\}}+\phi^+(L^-(E)).$$ By relabeling the indices, it follows that $\mathcal{E}(E_k)\cap S_2$ contains the elements $$\omega_{2h-1,2h}+\omega_{2h,2h-1},\quad 1\leq h\leq k-n.$$ In addition $\mathcal{E}(E_k)$ contains $$\omega_{21}+\omega_{43}+\dotsc + \omega_{2n,2n-1}=-\phi^-(\theta_1\wedge \theta_3\wedge \dotsb\wedge \theta_{2n-1}).$$ Thus $$\dim \mathcal{E}(E_k)\cap S_2=\begin{cases}
0& k< n\\
(k-n)+1& n\leq k\leq 2n
\end{cases}$$
(*iii*) Finally, $\phi^\pm(C(E))$ is spanned by the elements $\phi^\pm(\alpha_{ij})$, where $$\alpha_{ij}=\theta_{2i-1}{\lrcorner\,}\left(\theta_{2j}\wedge \theta_1\wedge\theta_3\wedge \dotsb\wedge \theta_{2n-1}\right), \quad i\neq j.$$ For $1\leq i, j\leq n$, define $$S_{ij}={\operatorname{Span}\left\{\omega_{hk}\mid \left\{\left[\frac{h+1}2\right],\left[\frac{k+1}2\right]\right\}=\{i,j\}\right\}}.$$ Then $$S_3=\bigoplus_{i<j} S_{ij},$$ and $\phi^\pm(\alpha_{ij})$ is in $S_{ij}$; more precisely $$\begin{aligned}
\phi^+(\alpha_{ij})&=\omega_{2i-1,2j}-\omega_{2i,2j-1},& \phi^-(\alpha_{ij})&=\omega_{2i,2j}-\omega_{2i-1,2j-1},\\
\phi^+(\alpha_{ji})&=\omega_{2j-1,2i}-\omega_{2j,2i-1},& \phi^-(\alpha_{ji})&=\omega_{2j,2i}-\omega_{2j-1,2i-1}.\end{aligned}$$ On the other hand $$\begin{aligned}
\phi(\theta_{2i-1}\wedge\theta_{2j-1})&=\omega_{2i,2j-1}-\omega_{2j,2i-1},&
\phi(\theta_{2i-1}\wedge\theta_{2j})&=\omega_{2i,2j}+\omega_{2j-1,2i-1},\\
\phi(\theta_{2i}\wedge\theta_{2j-1})&=-\omega_{2i-1,2j-1}-\omega_{2j,2i},&
\phi(\theta_{2i}\wedge\theta_{2j})&=-\omega_{2i-1,2j}+\omega_{2j-1,2i}.\end{aligned}$$ These elements span a six-dimensional space, so $S_{ij}\cap\mathcal{E}(T)$ has dimension $6$. In general, the dimension of $\mathcal{E}(E_k)\cap S_{ij}$ only depends on how many of the elements $\theta_{2i-1}$, $\theta_{2i}$, $\theta_{2j-1}$, $\theta_{2j}$ lie in $E_k$. It follows that, if $1\leq i<j\leq n$: $$\dim\mathcal{E}(E_k)\cap S_{ij}=\begin{cases}
0&k<j\\
1& j\leq k\leq n\\
1&0\leq k-n<i\\
4&i\leq k-n<j\\
6& i<j\leq k-n \end{cases}$$ Therefore $$\dim \mathcal{E}(E_k)\cap S_3=\begin{cases}
\binom{k}{2}& k\leq n\\
6\binom{k-n}2+4(k-n)(2n-k)+\binom{2n-k}2& n\leq k\leq 2n\\
\end{cases}$$ Since, by \[eqn:imagesplits1\] and \[eqn:imagesplits2\], $$\mathcal{E}(E_k)=(\mathcal{E}(E_k)\cap S_1) \oplus (\mathcal{E}(E_k)\cap S_2)\oplus (\mathcal{E}(E_k)\cap S_3),$$ the statement follows.
The structure group ${\mathrm{SU}}(n)\subset {\mathrm{O}}(2n+1)$ {#sec:oddSUn}
================================================================
In this section we consider ${\mathrm{SU}}(n)$ as a structure group in dimension $2n+1$. We show that it is strongly admissible, and we construct an $\mathcal{I}^{{\mathrm{SU}}(n)}$-ordinary flag, so that Theorem \[thm:AbstractEmbedding\] applies. This result will be used in Section \[sec:go\].
Let $T={\mathbb{R}}^{2n+1}$; the space $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ is spanned by $\alpha$, $F$ and $\Omega^\pm$, where $$\label{eqn:oddSUnForms}
\begin{gathered}
\alpha=\theta^{2n+1},\quad F=\theta^{12}+\dotsb+\theta^{2n-1,2n},\\
\Omega=\Omega^+ + i\Omega^-=(\theta^1+i\theta^2)\dotsm (\theta^{2n-1}+i\theta^{2n}).
\end{gathered}$$ We start by constructing an $\mathcal{I}^{{\mathrm{SU}}(n)}$-ordinary flag. Let $E_0\subsetneq\dotsb\subsetneq E_{2n+1}$ be the flag such that, for $0\leq k\leq n$, $$\begin{gathered}
E_k={\operatorname{Span}\left\{\theta_1,\theta_3,\dotsc,\theta_{2k-1}\right\}}, \quad 0\leq k\leq n\\
E_{n+k}=E_n\oplus{\operatorname{Span}\left\{\theta_2,\theta_4,\dotsc, \theta_{2k}\right\}}, \quad 1\leq k<n\\
E_{2n}=E_{2n-1}\oplus{\operatorname{Span}\left\{\theta_{2n+1}\right\}}, \quad E_{2n+1}=T\end{gathered}$$
\[lemma:sunodd\] With respect to $\mathcal{I}^{{\mathrm{SU}}(n)}$, the flag $E_0\subsetneq\dotsb\subsetneq E_{2n+1}$ satisfies $$c(E_k)=\begin{cases}
\binom{k}{2}+k& k<n\\
2+\binom{n}2+n& k=n\\
2-3n^2+4nk-n-\binom{k}2+k& n< k<2n\\
3n^2+2n+1&k=2n\\
\end{cases}.$$ In particular the sum $c_0+\dotsb+c_{2n}$ equals $2n^3+3n^2+3n+1$.
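For instance, for $n=2$ (that is, ${\mathrm{SU}}(2)$-structures in dimension five) the values are $0,1,5,12,17$, which indeed sum to $35=\dim{\mathbb{R}}^5\otimes{\mathfrak{su}}(2)^\perp$.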
We proceed like in the proof of Lemma \[lemma:sun\]. We have to consider however an extra map $$\phi^\alpha\colon \Lambda^1(E)\to S, \phi^\alpha(v)=v{\lrcorner\,}d\theta_{2n+1}.$$ For fixed $k$ with $k<2n$, we define $\mathcal{\tilde E}(E_k)$ as the space of polar equations with respect to $\mathcal{I}^{{\mathrm{SU}}(n)}$, and view the space of polar equations computed in Lemma \[lemma:sun\] as a subspace $\mathcal{E}(E_k)\subset \mathcal{\tilde E}(E_k)$; we obtain $$\mathcal{\tilde E}(E_k)=\mathcal{E}(E_k)\oplus{\operatorname{Span}\left\{\omega_{2n+1,i}\mid \theta_i\in E_k\right\}}.$$ This accounts for the values of $c_k$ for $k<2n$. For $k=2n$, we can see directly that $$\mathcal{\tilde E}(E_{2n})=\mathcal{\tilde E}(E_{2n-1})\oplus {\operatorname{Span}\left\{\omega_{i,2n+1}\mid i\leq 2n+1\right\}}.\qedhere$$
As a consequence, we obtain a generalization of a result of [@ContiSalamon] concerning the intrinsic torsion of an ${\mathrm{SU}}(n)$-structure:
\[prop:SUnIsStronglyAdmissible\] The flag $E_0\subsetneq\dotsb\subsetneq E_{2n+1}$ is $\mathcal{I}^{{\mathrm{SU}}(n)}$-ordinary, and the group ${\mathrm{SU}}(n)\subset {\mathrm{O}}(2n+1)$ is strongly admissible. In particular, given an ${\mathrm{SU}}(n)$-structure $(\alpha,F,\Omega)$ on $M^{2n+1}$, its intrinsic torsion is entirely determined by $d\alpha,dF,d\Omega$.
By Cartan’s test and Proposition \[prop:SpecialGeometries\], $$\dim T\otimes{\mathfrak{su}}(n)^\perp =2n^3+3n^2+3n+1\leq \operatorname{codim}Z^{{\mathrm{SU}}(n)}\leq \dim T\otimes{\mathfrak{su}}(n)^\perp,$$ so equality must hold and the statement follows.
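As an illustrative aside (a symbolic sketch of ours, not part of the original argument), the left-hand dimension $\dim T\otimes{\mathfrak{su}}(n)^\perp=(2n+1)\bigl(\dim{\mathfrak{so}}(2n+1)-\dim{\mathfrak{su}}(n)\bigr)$ can be confirmed to expand to $2n^3+3n^2+3n+1$:

```python
# Illustrative sketch: verify dim(T (x) su(n)^perp) = 2n^3 + 3n^2 + 3n + 1
# for T = R^(2n+1), using dim so(2n+1) = n(2n+1) and dim su(n) = n^2 - 1.
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)
dim_T = 2 * n + 1
dim_so = n * (2 * n + 1)       # dim so(2n+1)
dim_su = n**2 - 1              # dim su(n)
total = sp.expand(dim_T * (dim_so - dim_su))
assert sp.simplify(total - (2 * n**3 + 3 * n**2 + 3 * n + 1)) == 0
print(total)                   # 2*n**3 + 3*n**2 + 3*n + 1
```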
Hypo and nearly hypo evolution equations {#sec:hypo}
========================================
In this section we come to a concrete application of Theorem \[thm:AbstractEmbedding\] concerning the group ${\mathrm{SU}}(n)$. Let $T={\mathbb{R}}^{2n}$, $n\geq 2$, with basis $e_1,\dotsc,e_{2n}$, and let $F$, $\Omega$ be as in .
The group ${\mathrm{SU}}(n)$ is both admissible, meaning that ${\mathrm{SU}}(n)$ is the subgroup of ${\mathrm{GL}}(2n,{\mathbb{R}})$ that fixes $(\Lambda^*T)^{{\mathrm{SU}}(n)}$, and strongly admissible, meaning that integrals of $\mathcal{I}_f$ are structures with constant intrinsic torsion. However, essentially only two constant intrinsic torsion geometries occur in this case.
\[prop:derivation\] Let $f$ be a differential operator on $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ that extends to a derivation of degree one on $\Lambda^*T$. Then either $f=0$ or $n=3$ and $$f(F)=\lambda \Omega^+ + \mu \Omega^-,\quad f(\Omega^+)=\frac23\mu F^2,\quad f(\Omega^-)=-\frac23\lambda F^2.$$
If $n\neq 3$ there are no invariant three-forms; thus, $f(F)=0$. Suppose $f\neq 0$. Then the space of invariant $(n+1)$-forms must be nontrivial, which implies that $n$ is odd and $f(\Omega^\pm)$ is a multiple of $F^{\frac{n+1}2}$. By the Leibniz rule, it follows that $$0=f(\Omega^\pm\wedge F)=f(\Omega^\pm)\wedge F,$$ and so $f(\Omega^\pm)=0$.
For $n=3$, it follows from the Leibniz rule and $F^3=\frac32\Omega^+\wedge\Omega^-$ that $f$ must be as in the statement. Any such $f$ extends to a derivation of $\Lambda^*T$ by setting $$f(\alpha)=\alpha{\lrcorner\,}(\lambda\Omega^+ + \mu\Omega^-), \quad \alpha\in T.\qedhere$$
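The identity $F^3=\tfrac32\,\Omega^+\wedge\Omega^-$ invoked above is easily verified; the following self-contained sketch (ours, using an ad hoc wedge product on index tuples rather than any particular library) checks it explicitly for $n=3$:

```python
# Illustrative check of F^3 = (3/2) Omega^+ ^ Omega^- on R^6 (the n = 3 case),
# using a minimal hand-rolled wedge product on sorted index tuples.

def sort_sign(idx):
    """Sort an index tuple, returning (sorted tuple, permutation sign); 0 if repeated."""
    if len(set(idx)) < len(idx):
        return None, 0
    lst, sign = list(idx), 1
    for i in range(len(lst)):
        for j in range(len(lst) - 1 - i):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                sign = -sign
    return tuple(lst), sign

def wedge(a, b):
    """Wedge product of forms given as {index tuple: coefficient} dictionaries."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            key, s = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

F = {(1, 2): 1, (3, 4): 1, (5, 6): 1}                              # e^12 + e^34 + e^56
Op = {(1, 3, 5): 1, (1, 4, 6): -1, (2, 3, 6): -1, (2, 4, 5): -1}   # Re Omega
Om = {(1, 3, 6): 1, (1, 4, 5): 1, (2, 3, 5): 1, (2, 4, 6): -1}     # Im Omega

print(wedge(wedge(F, F), F))   # {(1, 2, 3, 4, 5, 6): 6}
print(wedge(Op, Om))           # {(1, 2, 3, 4, 5, 6): 4}  ->  F^3 = (3/2) Op ^ Om
```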
So, for the group ${\mathrm{SU}}(n)$ acting on ${\mathbb{R}}^{2n}$ there are only two geometries to consider in our setup, namely Calabi-Yau geometry ($f=0$) and nearly-Kähler geometry ($n=3$, $f\neq 0$). In the latter case, it is customary to normalize the constants so that $\lambda=3$, $\mu=0$, although the normalization is irrelevant to the discussion to follow.
Now take $W\subset T$ of codimension one. Since ${\mathrm{SU}}(n)$ acts transitively on the sphere in $T$, all choices of $W$ are equivalent; we shall fix $$W={\operatorname{Span}\left\{e^1,\dotsc, e^{2n-1}\right\}}.$$ Then the induced structure group $H$ is $${\mathrm{SU}}(n-1)={\mathrm{SU}}(n)\cap {\mathrm{O}}({2n-1}).$$ The space $(\Lambda^*W)^{{\mathrm{SU}}(n-1)}$ is spanned by the forms $\alpha$, $F$ and $\Omega^\pm$ defined in \[eqn:oddSUnForms\], with a shift in the value of $n$. On the other hand $p(\Lambda^*T({\mathrm{SU}}(n)))$ is spanned by the forms $$\alpha\wedge\Omega^+, \quad \alpha\wedge\Omega^-, F.$$ Thus the induced differential operator $0_W$ satisfies $$0_W(F)=0,\quad 0_W(\alpha\wedge\Omega^\pm)=0\;.$$ Since all codimension one subspaces of $T$ are conjugate under ${\mathrm{SU}}(n)$, given a manifold $M$ with an ${\mathrm{SU}}(n)$-structure $P_{{\mathrm{SU}}(n)}$, any hypersurface $N\subset M$ admits an ${\mathrm{SU}}(n-1)$-structure $P_{{\mathrm{SU}}(n-1)}$ such that $(N,P_{{\mathrm{SU}}(n-1)})$ is embedded in $(M,P_{{\mathrm{SU}}(n)})$ with type $W\subset T$. Thus, Proposition \[prop:submanifold\] reduces to the known fact that an oriented hypersurface in a manifold with holonomy ${\mathrm{SU}}(n)$ admits an ${\mathrm{SU}}(n-1)$-structure which is an integral of $\mathcal{I}_{0_W}$, called a [*hypo*]{} structure (see [@ContiSalamon; @ContiFino]). On the other hand if $f$ is the nearly-Kähler differential operator, the induced differential operator $f_W$ is characterized by $$f_W(F)=2\alpha\wedge\Omega^+,\quad f_W(\alpha\wedge\Omega^-)=3F\wedge F\;.$$ Again by Proposition \[prop:submanifold\], we recover the known fact that an oriented hypersurface in a nearly-Kähler $6$-manifold admits an ${\mathrm{SU}}(2)$-structure which is an integral of $\mathcal{I}_{f_W}$, called a [*nearly-hypo*]{} structure (see [@Fernandez:NearlyHypo]).
In either case, if the hypersurface $N\subset M$ is compact, one can use the exponential map to identify a tubular neighbourhood of $N$ with $N\times(a,b)$, and rewrite the Kähler form and complex volume on $M$ as $$\alpha(t)\wedge dt + F(t),\quad (\alpha(t)+idt)\wedge\Omega(t)$$ where $t$ is a coordinate on $(a,b)$ and $(\alpha(t),\Omega(t),F(t))$ is a one-parameter family of ${\mathrm{SU}}(n-1)$-structures on $N$. In this language, finding an embedding of $(N,P_{{\mathrm{SU}}(n-1)})$ in $(M,P_{{\mathrm{SU}}(n)})$ amounts to finding a solution of certain evolution equations (see [@ContiSalamon; @ContiFino; @Fernandez:NearlyHypo]).
A sketch of the proof of the existence of a solution to the evolution equations was given in [@ContiFino], while [@Fernandez:NearlyHypo] left it as an open problem. We can now give a complete and simultaneous proof of both cases, in the guise of the following:
\[thm:nearlyhypo\] If $(N,\alpha,F,\Omega)$ is a compact real analytic hypo manifold of dimension $2n-1$, the hypo evolution equations admit a solution, and determine an embedding $i\colon N\to M$ into a real analytic $2n$-manifold with holonomy ${\mathrm{SU}}(n)$. If $(N,\alpha,F,\Omega)$ is a compact real analytic nearly-hypo $5$-manifold, the nearly-hypo evolution equations admit a solution, and determine an embedding $i\colon N\to M$ into a real analytic manifold with a nearly-Kähler ${\mathrm{SU}}(3)$-structure.
Since ${\mathrm{SU}}(n)$ is strongly admissible, Proposition \[prop:cit\] implies that $$\operatorname{codim}V_n(\mathcal{I}_f)=\dim {\mathbb{R}}^{2n}\otimes\frac{{\mathfrak{so}}(2n)}{{\mathfrak{su}}(n)}.$$ Then the flag $E_0\subsetneq\dotsb\subsetneq E_{2n}=T$ is $\mathcal{I}^{{\mathrm{SU}}(n)}$-ordinary, implying that ${\mathbb{R}}^{2n-1}\subset {\mathbb{R}}^{2n}$ is relatively admissible. On the other hand the differential operator $f$ extends to a derivation of $\Lambda^*T$ by Proposition \[prop:derivation\]. The statement now follows from Theorem \[thm:AbstractEmbedding\].
By the same argument, we also obtain a complete proof of a result stated in [@Bryant:Calibrated]:
Every real analytic, parallelizable, compact Riemannian $n$-manifold can be embedded isometrically as a special Lagrangian submanifold in a manifold with holonomy ${\mathrm{SU}}(n)$.
Notice that the assumption of real analyticity refers not only to the manifold, but to the structure as well.
$\alpha$-Einstein-Sasaki geometry and hypersurfaces {#sec:go}
===================================================
In this section we classify the constant intrinsic torsion geometries for the group ${\mathrm{SU}}(n)\subset {\mathrm{O}}(2n+1)$, and write down evolution equations for hypersurfaces which are orthogonal to the characteristic direction, in analogy with Section \[sec:hypo\].
Let $T={\mathbb{R}}^{2n+1}$. The space $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ is spanned by the forms $\alpha$, $F$ and $\Omega^\pm$, defined in \[eqn:oddSUnForms\]. In order to classify the differential operators on $(\Lambda^*T)^{{\mathrm{SU}}(n)}$, we observe that every element $g$ of the normalizer $N({\mathrm{SU}}(n))$ of ${\mathrm{SU}}(n)$ in ${\mathrm{O}}(2n+1)$ maps $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ to itself; this defines a natural notion of equivalence among differential operators.
\[prop:oddderivation\] Let $f$ be a derivation of $(\Lambda^*T)^{{\mathrm{SU}}(n)}$; then $f$ is a differential operator that extends to a derivation of degree one on $\Lambda^*T$ if and only if one of the following holds:
- (A) $f(\alpha)=0$, $f(F)=2\lambda\alpha\wedge F$, $f(\Omega)=n(\lambda-\mu i)\alpha\wedge\Omega$;
- (B) $f(\alpha)=\lambda F$, $f(F)=0$, $f(\Omega)=-\mu i\alpha\wedge\Omega$;
- (C) $n=2$, and up to $N({\mathrm{SU}}(2))$ action, $\tilde f$ has the form (A) or (B);
- (D) $n=3$, $f(\alpha)=0$, $f(F)=3\lambda\Omega^- - 3\mu\Omega^+$, $f(\Omega)=2(\lambda+i\mu)F^2$;
here $\lambda$ and $\mu$ are real constants.
By Lemma \[lemma:ZEmpty\], we have to classify the ${\mathrm{SU}}(n)$-equivariant derivations of degree one $\tilde f$ of $\Lambda^*T$ with $\tilde f^2=0$ on $(\Lambda^*T)^{{\mathrm{SU}}(n)}$. As a representation of ${\mathrm{SU}}(n)$, we have $$\operatorname{Hom}(T,\Lambda^2T)=({\mathbb{R}}\oplus[\Lambda^{1,0}])\otimes ([\Lambda^{2,0}]\oplus {[\![}\Lambda^{1,1}_0{]\!]}\oplus{\mathbb{R}}\oplus [\Lambda^{1,0}]).$$ Decomposing into irreducible components, this tensor product always contains three trivial components, one of which is in ${\mathbb{R}}\otimes{\mathbb{R}}$, and the other two in $[\Lambda^{1,0}]\otimes [\Lambda^{1,0}]$. Moreover, if $n=2$, both $[\Lambda^{1,0}]\otimes [\Lambda^{1,0}]$ and ${\mathbb{R}}\otimes[\Lambda^{2,0}]$ each contain two more trivial components, and if $n=3$, $[\Lambda^{1,0}]\otimes [\Lambda^{2,0}]$ contains two trivial components. Thus, we have to consider three different cases.
(*i*) If $n>3$ and $\tilde f$ is an invariant derivation, there are constants $\lambda$, $\mu$ and $k$ such that $$\tilde f(\alpha) = kF, \quad \tilde f(e^i)=\alpha\wedge (\lambda e^i+\mu e_i{\lrcorner\,}F),$$ so $\tilde f(F)=2\lambda \alpha \wedge F$. Since $f$ is a differential operator, it follows that $$0=\tilde f(\lambda\alpha\wedge F)=\lambda k F\wedge F,$$ hence either $\lambda=0$ or $k=0$. On the other hand, in complex terms, $$\tilde{f}(\Omega)=n(\lambda-\mu i)\alpha\wedge\Omega\;,$$ so we obtain (A) or (B). Notice that these are indeed differential operators, i.e. $f^2=0$.
(*ii*) If $n=3$, the invariant derivation $\tilde f$ has the form $$\tilde f(\alpha) = kF, \quad \tilde f(e^i)=\alpha\wedge (\lambda e^i+\mu e_i{\lrcorner\,}F)+\beta e_i{\lrcorner\,}\Omega^+ + \gamma e_i{\lrcorner\,}\Omega^-,$$ and therefore $$\tilde f(F)=2\lambda \alpha \wedge F +3\beta\Omega^- - 3\gamma\Omega^+,\quad
\tilde f(\Omega)=2(\beta+i\gamma)F^2+3(\lambda-i\mu)\alpha\wedge\Omega.$$ If $k\neq 0$, then $$0=\tilde f^2(\alpha)=2\lambda \alpha \wedge F +3\beta\Omega^- - 3\gamma\Omega^+,$$ so $\lambda=0=\beta=\gamma$ and $\tilde f$ lies in the second family. On the other hand if $k=0$, then $$\tilde f(2\lambda \alpha \wedge F +3\beta\Omega^- - 3\gamma\Omega^+)=3(\lambda\beta-3\mu\gamma)\alpha\wedge\Omega^- -3 (\lambda\gamma+3\mu\beta)\alpha\wedge\Omega^+$$ must be zero. Similarly, $$0=\tilde f^2(\Omega)=(\lambda+3i\mu)(\beta+i\gamma)\alpha \wedge F^2,$$ so either $\beta=\gamma=0$, or $\lambda=\mu=0$, corresponding to (A) and (D) in the statement.
(*iii*) If $n=2$, $\tilde f$ has the form $$\tilde f(\alpha) = kF+\beta\Omega^+ + \gamma\Omega^-, \quad \tilde f(e^i)=\alpha\wedge (\lambda e^i+\mu e_i{\lrcorner\,}F +\sigma e_i{\lrcorner\,}\Omega^++\tau e_i{\lrcorner\,}\Omega^-)\;;$$ since the normalizer $N({\mathrm{SU}}(2))$ contains a copy of ${\mathrm{SU}}(2)$ that acts as rotations on the space spanned by $F$ and $\Omega^\pm$, we can assume that $\mu F +\sigma \Omega^++\tau \Omega^-$ is a multiple of $F$, i.e. $\sigma=0=\tau$. We obtain $$\tilde f(F)=2\lambda F\wedge\alpha,\quad
\tilde f(\Omega)=2(\lambda-\mu i)\alpha\wedge\Omega$$ The condition $f^2=0$ is then equivalent to $$\begin{gathered}
\lambda k=0, \quad \lambda\beta=0, \quad \lambda\gamma =0, \quad \mu\gamma=0, \quad \mu\beta=0.\end{gathered}$$ Now if $\gamma=0=\beta$ we are in case (A) or (B). Otherwise, we have $\mu=0$, i.e. $$\tilde f(\omega)=2\lambda \omega\wedge\alpha, \quad \omega\in{\operatorname{Span}\left\{F,\Omega^+,\Omega^-\right\}}.$$ Thus, we can use the action of $N({\mathrm{SU}}(2))$ to reduce to the case where $\tilde{f}(\alpha)$ is a multiple of $F$.
The differential operators appearing in Proposition \[prop:oddderivation\], each denoted by the corresponding letter, can be interpreted as follows. The differential operator $A$ determines a codimension one foliation $\ker\alpha$, where each leaf has an integrable induced ${\mathrm{SU}}(n)$-structure. Conversely, given an integrable ${\mathrm{SU}}(n)$-structure $(F,\Omega)$ on $M^{2n}$, we can define an ${\mathrm{SU}}(n)$-structure on $M^{2n}\times{\mathbb{R}}$ by $$\tilde F = e^{2\lambda t} F,\quad \alpha = dt, \quad \tilde\Omega = e^{n(\lambda - i\mu) t}\Omega,$$ which is an integral of $\mathcal{I}_A$.
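Indeed, since $dF=0=d\Omega$ on $M^{2n}$ and $\alpha=dt$, a direct computation gives $$d\alpha=0,\qquad d\tilde F=2\lambda\,\alpha\wedge\tilde F,\qquad d\tilde\Omega=n(\lambda-i\mu)\,\alpha\wedge\tilde\Omega,$$ which are exactly the relations encoded by the differential operator $A$.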
Similarly, the differential operator $D$ corresponds to a foliation with nearly-Kähler leaves. On the other hand $B$ corresponds to $\alpha$-Einstein-Sasaki geometry (see [@ContiSalamon] for a proof of this fact in the five-dimensional case). The methods of Section \[sec:hypo\] apply with minimal changes; the main difference is that here ${\mathrm{SU}}(n)$ does not act transitively on the sphere in $T={\mathbb{R}}^{2n+1}$. Indeed, codimension one subspaces $W\subset T$ have an invariant, namely the angle $\Gamma$ that they form with the characteristic direction $e_{2n+1}$. However, by [@Conti:CohomogeneityOne], if $M$ is $\alpha$-Einstein-Sasaki and one uses the exponential map to identify a tubular neighbourhood of a hypersurface $N\subset M$ with the product $N\times(a,b)$, the angle $\Gamma$ is constant along the radial direction.
Let us consider the case that $W$ is tangent to the characteristic direction, i.e. $$W={\operatorname{Span}\left\{e_1,\dots,e_{2n-1},e_{2n+1}\right\}}.$$ Then ${\mathrm{SU}}(n)\cap {\mathrm{O}}(W)={\mathrm{SU}}(n-1)$, and the space $(\Lambda^*W)^{{\mathrm{SU}}(n-1)}$ is spanned by $$\alpha=e^{2n+1},\quad \beta = e^{2n-1}, \quad F=e^{12}+\dotsb+e^{2n-3,2n-2},$$ and the real and imaginary part of $$\Omega^+ + i\Omega^-=(e^1+ie^2)\dotsm (e^{2n-3}+ie^{2n-2}).$$ Then $p(\Lambda^*T({\mathrm{SU}}(n)))$ is spanned by the forms $\alpha$, $F$ and $\Omega\wedge\beta$; the projection induces an operator satisfying $$\label{eqn:Go}
B_W(\alpha)=\lambda F, \quad B_W(\Omega\wedge\beta)=-i\mu\alpha\wedge\Omega\wedge\beta, \quad B_W(F)=0.$$
Given an integral $(\alpha(0),F(0),\Omega(0),\beta(0))$ of $\mathcal{I}_{B_W}$ on a $2n$-dimensional manifold $N$, embedding $N$ into a manifold $M$ with an ${\mathrm{SU}}(n)$-structure which is an integral of $\mathcal{I}_B$ is equivalent to extending $(\alpha(0),F(0),\Omega(0),\beta(0))$ to a one-parameter family of ${\mathrm{SU}}(n-1)$-structures $(\alpha(t),F(t),\Omega(t),\beta(t))$ such that $$\label{eqn:GoEvolution}
\frac{\partial }{\partial t}\alpha = -\lambda \beta, \quad \frac{\partial }{\partial t}F = -d\beta, \quad
\frac{\partial }{\partial t}(\beta\wedge\Omega) = id\Omega -\mu\alpha\wedge\Omega$$ Indeed, in that case $(\alpha(t),F(t),\Omega(t),\beta(t))$ is an integral of $\mathcal{I}_{B_W}$ for all $t$, and the forms $$\alpha(t), \quad \Omega(t)\wedge (\beta(t)+idt), \quad F(t)+\beta(t)\wedge dt$$ define an ${\mathrm{SU}}(n)$-structure on $N\times(a,b)$ which is an integral of $\mathcal{I}_B$. The converse follows from the fact that the angle $\Gamma$ is constant along the radial direction.
Applying Theorem \[thm:AbstractEmbedding\], we obtain an odd-dimensional (with reference to the ambient space) analogue of Theorem \[thm:nearlyhypo\].
\[thm:go\] If $N$ is a real analytic compact manifold of dimension $2n$ with a real analytic ${\mathrm{SU}}(n-1)$-structure $(\alpha,F,\Omega,\beta)$ which is an integral of $\mathcal{I}_{B_W}$ as defined in \[eqn:Go\], the evolution equations admit a solution with initial data $(\alpha,F,\Omega,\beta)$, and determine an embedding $i\colon N\to M$ into a real analytic $\alpha$-Einstein-Sasaki manifold as a hypersurface tangent to the characteristic direction.
An irreducible geometry in dimension $9$ {#sec:so3}
========================================
As a final application, we consider a special geometry modelled on the $9$-dimensional irreducible representation $T$ of ${\mathrm{SO}}(3)$. This geometry is described by a $4$-form and a $5$-form; if closed, the $5$-form defines a calibration, and one can look for calibrated embeddings of five-manifolds in this sense; more generally, one can fix a five-dimensional subspace of ${\mathbb{R}}^9$, and look for “compatible” embeddings.
To make things explicit, we fix a basis of ${\mathfrak{so}}(3)$ satisfying $$[H,X]=\sqrt2Y, \quad [H,Y]=-\sqrt2X, \quad [X,Y]=\sqrt2H\;;$$ and identify the representation $T$ with ${\mathbb{R}}^9$ in such a way that the generic element $aH+bX+cY$ of ${\mathfrak{so}}(3)$ acts as the matrix $$\left(\begin{array}{ccccccccc}0&2 b&0&0&-4 a \sqrt{2}&-2 c&0&0&0\\-2 b&0& \sqrt{7} b&0&-2 c&-3 a \sqrt{2}&- c \sqrt{7}&0&0\\0&- \sqrt{7} b&0&3 b&0&- c \sqrt{7}&-2 a \sqrt{2}&-3 c&0\\0&0&-3 b&0&0&0&-3 c&- a \sqrt{2}&2 \sqrt{5} b\\4 a \sqrt{2}&2 c&0&0&0&2 b&0&0&0\\2 c&3 a \sqrt{2}& c \sqrt{7}&0&-2 b&0& \sqrt{7} b&0&0\\0& c \sqrt{7}&2 a \sqrt{2}&3 c&0&- \sqrt{7} b&0&3 b&0\\0&0&3 c& a \sqrt{2}&0&0&-3 b&0&2 c \sqrt{5}\\0&0&0&-2 \sqrt{5} b&0&0&0&-2 c \sqrt{5}&0\end{array}\right)$$ Since this matrix is skew-symmetric, the action of ${\mathrm{SO}}(3)$ preserves the standard metric on ${\mathbb{R}}^9$.
By standard representation theory, one sees that the action of ${\mathrm{SO}}(3)$ on $\Lambda^*T$ leaves invariant a $4$-form $\gamma$; apart from its Hodge dual $*\gamma$ and the volume form, this is the only invariant. If $e^1,\dotsc, e^9$ is the standard basis of ${\mathbb{R}}^9$, using the expression for the ${\mathrm{SO}}(3)$ action given above we can identify $\gamma$ as $$\begin{gathered}
\label{eqn:SO3gamma}
\gamma=\frac{1}{4}\sqrt5\left( -e^{2589}+ e^{1249} -e^{1689}+ e^{4569} \right)
- (e^{1357}+e^{1256})
+\frac{7}{8} e^{3478}\\
+\frac{1}{8}\sqrt{35}\left(-e^{3689} -e^{2789}+e^{4679}+ e^{2349} \right)
-\frac{1}{2} e^{1458}
+\frac{1}{8} e^{2367}\\
+\frac38(e^{3456}+e^{5678}-e^{2468}-e^{2457}-e^{2358}-e^{1467}-e^{1368}+e^{1278}+e^{1234})\end{gathered}$$ Thus, on a $9$-manifold $M$ with an ${\mathrm{SO}}(3)$-structure one has two canonical forms, which we also denote by $\gamma$ and $*\gamma$. The intrinsic torsion space has dimension $279$ and splits into $25$ irreducible components, but the most natural torsion classes to consider are those defined by one of $$\label{eqn:SO3Geometries} d\gamma=0, \quad d*\gamma=0, \quad d\gamma=0=d*\gamma.$$ By Section \[sec:Preparatory\], to each geometry in \[eqn:SO3Geometries\] one can associate an exterior differential system, which is involutive in the second case (by Lemma \[lemma:gammaEstable\] and Proposition \[prop:stable\]), though it does not appear to be involutive in the other two cases.
It would also be natural to consider the geometry defined by $d\gamma=\lambda *\gamma$ with $\lambda$ a non-zero constant, but structures of this type do not exist. Indeed, let $A=(\Lambda^*T)^{{\mathrm{SO}}(3)}$. Then $A$ is generated by two elements $\gamma$, $*\gamma$, but the differential operator defined by $f(\gamma)=*\gamma$ does not extend to a derivation of $\Lambda^*T$, since $\operatorname{Hom}(T,\Lambda^2T)$ has no trivial submodule (see Lemma \[lemma:ZEmpty\]).
Now suppose that $*\gamma$ is closed; this means that $8$ out of the $25$ components of the intrinsic torsion vanish (counting dimensions, $84$ out of $279$). As in Section \[sec:psu3\], we can renormalize $*\gamma$ into a calibration and try to obtain an embedding result. However, this time the form is not stable, but a weaker result holds:
\[lemma:gammaEstable\] Let $\alpha$ be a vector in the finite set $$\label{eqn:finiteset}\{e^1,e^2,e^4,e^5,e^6,e^8\}\subset{\mathbb{R}}^9.$$ Then the form $*\gamma$ is $\alpha^\perp$-stable.
In order to prove an analogue of Theorem \[thm:psu3Embedding2\], one would have to show that ${\mathbb{R}}^9$ contains calibrated subspaces orthogonal to one of the vectors in \[eqn:finiteset\]. This appears to be a difficult problem, because the structure group is small and $*\gamma$ has a complicated form.
At any rate, one can fix a $5$-dimensional $W\subset{\mathbb{R}}^9$ and consider submanifolds $N\subset M$ embedded with type $W\subset{\mathbb{R}}^9$, in the sense that at each point $x\in N$ one can choose a coframe $e^1,\dotsc, e^9$ such that $\gamma$ has the form \[eqn:SO3gamma\] and the coframe maps $T_xN\subset T_xM$ to $W\subset{\mathbb{R}}^9$.
\[thm:so3Embedding\] If $N$ is a compact, parallelizable, real analytic Riemannian $5$-manifold and $W\subset{\mathbb{R}}^9$ is a $5$-dimensional subspace orthogonal to one of the vectors $$e^1,e^2,e^4,e^5,e^6,e^8$$ then $N$ can be embedded with type $W\subset{\mathbb{R}}^9$ in a $9$-manifold $M$ with an ${\mathrm{SO}}(3)$-structure such that $*\gamma$ is closed.
Lemma \[lemma:gammaEstable\] identifies six $8$-dimensional subspaces $E\subset{\mathbb{R}}^9$ such that the $5$-form $*\gamma$ is $E$-stable. By Proposition \[prop:stable\], if $W$ is contained in such an $E$, then $W\subset {\mathbb{R}}^9$ is relatively admissible, and by Theorem \[thm:AbstractEmbedding\] the statement follows.
We leave it as an open problem to determine the subspaces $W\subset{\mathbb{R}}^9$ calibrated by $*\gamma$.
[^1]: This research has been carried out despite the effects of the Italian law 133/08. This law drastically reduces public funds to public Italian universities, with dangerous consequences for free scientific research, and will prevent young researchers from obtaining a position, either temporary or tenured, in Italy. The author is protesting against this law to obtain its cancellation (see [http://groups.google.it/group/scienceaction]{}).
---
abstract: 'Age determination is undertaken for nearby early-type (BAF) stars, which constitute attractive targets for high-contrast debris disk and planet imaging surveys. Our analysis sequence consists of: acquisition of $uvby\beta$ photometry from catalogs, correction for the effects of extinction, interpolation of the photometry onto model atmosphere grids from which atmospheric parameters are determined, and finally, comparison to the theoretical isochrones from pre-main sequence through post-main sequence stellar evolution models, accounting for the effects of stellar rotation. We calibrate and validate our methods at the atmospheric parameter stage by comparing our results to fundamentally determined $T_\text{eff}$ and $\log g$ values. We validate and test our methods at the evolutionary model stage by comparing our results on ages to the accepted ages of several benchmark open clusters (IC 2602, $\alpha$ Persei, Pleiades, Hyades). Finally, we apply our methods to estimate stellar ages for 3493 field stars, including several with directly imaged exoplanet candidates.'
author:
- 'Trevor J. David and Lynne A. Hillenbrand'
bibliography:
- 'main.bib'
title: 'The Ages of Early-Type Stars: Strömgren Photometric Methods Calibrated, Validated, Tested, and Applied to Hosts and Prospective Hosts of Directly Imaged Exoplanets'
---
Introduction {#sec:intro}
============
In contrast to other fundamental stellar parameters such as mass, radius, and angular momentum – that for certain well-studied stars and stellar systems can be anchored firmly in observables and simple physics – stellar ages for stars other than the Sun have no firm basis. Ages are critical, however, for many investigations involving time scales including formation and evolution of planetary systems, evolution of debris disks, and interpretation of low mass stars, brown dwarfs, and so-called planetary mass objects that are now being detected routinely as faint point sources near bright stars in high contrast imaging surveys.
The Era of Direct Imaging of Exoplanets {#subsec:directimaging}
---------------------------------------
Intermediate-mass stars ($1.5-3.0\ M_{\odot}$) have proven themselves attractive targets for planet search work. Hints of their importance first arose during initial data return from IRAS in the early 1980s, when several A-type stars (notably Vega but also $\beta$ Pic and Fomalhaut) as well as the K star Eps Eri – collectively known as “the fab four” – distinguished themselves by showing mid-infrared excess emission due to optically thin dust in Kuiper-Belt-like locations. Debris disks are signposts of planets, which dynamically stir small bodies, resulting in dust production. Spitzer results in the late 2000s solidified the spectral type dependence of debris disk presence (e.g. @carpenter2006 [@wyatt2008]) for stars of common age. For a random sample of field stars, however, the primary variable determining the likelihood of debris is stellar age [@kains2011].
The correlation in radial velocity studies of giant planet frequency with stellar mass [@fischer2005; @gaidos2013] is another line of evidence connecting planet formation efficiency to stellar mass. The claim is that while $\sim$14% of A stars have one or more $>1 M_\mathrm{Jupiter}$ companions at $<$5 AU, only $\sim$2% of M stars do (@johnson2010, c.f. @lloyd2013 [@schlaufman2013]).
Consistently interpreted as indicators of hidden planets, debris disks finally had their long-awaited observational connection to planets with the watershed discovery of [*directly imaged*]{} planetary mass companions. These were – like the debris disks before them – found first around intermediate-mass A-type stars, rather than the solar-mass FGK-type stars that had been the subject of much observational work at high contrast during the 2000s. HR 8799 [@marois2008; @marois2010] followed by Fomalhaut [@kalas2008] and $\beta$ Pic [@lagrange2009; @lagrange2010] have had their planets [*and indeed one planetary system*]{}, digitally captured by ground-based and/or space-based high contrast imaging techniques. Of the known [*bona fide*]{} planetary mass ($< 10 M_\text{Jup}$) companions that have been directly imaged, six of the nine are located around the three A-type host stars mentioned above, with the others associated with lower mass stars including the even younger 5-10 Myr old star 1RXS 1609-2105 [@lafreniere2008; @ireland2011] and brown dwarf 2MASS 1207-3933 [@chauvin2004] and the probably older GJ 504 [@kuzuhara2013]. Note that to date these directly imaged objects are all “super-giant planets" and not solar system giant planet analogs (e.g. Jupiter mass or below).
Based on the early results, the major direct imaging planet searches have attempted to optimize success by preferentially observing intermediate-mass, early-type stars. The highest masses are avoided due to the limits of contrast. Recent campaigns include those with all the major large aperture telescopes: Keck/NIRC2, VLT/NACO, Gemini/NICI, and Subaru/HiCAO. Current and near-future campaigns include Project 1640 (P1640; Hinkley et al. 2011) at Palomar Observatory, Gemini Planet Imager (GPI), operating on the Gemini South telescope, VLT/SPHERE, and Subaru/CHARIS. The next-generation TMT and E-ELT telescopes both feature high contrast instruments.
@mawet2012 compares instrumental contrast curves in their Figure 1. Despite the technological developments over the past decade, given the as-built contrast realities, only the largest, hottest, brightest, and therefore the youngest planets, i.e. those less than a few to a few hundred Myr in age, are still self-luminous enough to be amenable to direct imaging detection. Moving from the 3-10 $M_\text{Jupiter}$ detections at several tens of AU that are possible today or soon, to detection of lower mass, more Earth-like planets located at smaller, more terrestrial-zone separations, will require pushing to higher contrast from future space-based platforms. The targets of future surveys, whether ground or space, are, however, not likely to be substantially different from the samples targeted in today’s ground-based surveys.
The most important parameter really is age, since the brightness of planets decreases so sharply with increasing age due to the rapid gravitational contraction and cooling [@fortney2008; @burrows2004]. There is thus a premium on identifying the closest, youngest stars.
The Age Challenge {#subsec:agechallenge}
-----------------
Unlike the other fundamental parameters of stellar mass (unambiguously determined from measurements of double-lined eclipsing binaries and application of Kepler’s laws) and stellar radius (unambiguously measured from interferometric measurements of angular diameters and parallax measurements of distances), there are no directly interpretable observations leading to stellar age.
Solar-type stars ($\sim 0.7-1.4 M_{\odot}$, spectral types F6-K5) were the early targets of radial velocity planet searches and later debris disk searches that can imply the presence of planets. For these objects, although more work remains to be done, there are established activity-rotation-age diagnostics that are driven by the presence of convective outer layers and can serve as proxies for stellar age [e.g. @mamajek2008].
For stars significantly different from our Sun, however, and in particular the intermediate-mass stars ($\sim 1.5-3.0 M_{\odot}$, spectral types A0-F5 near the main sequence) of interest here, empirical age-dating techniques have not been sufficiently established or calibrated. Ages have been investigated recently for specific samples of several tens of stars using color-magnitude diagrams by @nielsen2013 [@vigan2012; @moor2006; @su2006; @rhee2007; @lowrance2000].
Perhaps the most robust ages for young BAF stars come from clusters and moving groups, which contain not only the early-type stars of interest, but also lower mass stars to which the techniques mentioned above can be applied. These groups are typically dated using a combination of stellar kinematics, lithium abundances, rotation-activity indicators, and placement along theoretical isochrones in a color-magnitude diagram. The statistics of these coeval stellar populations greatly reduce the uncertainty in derived ages. However, only four such groups exist within $\sim$ 60 pc of the Sun and the number of early-type members is small.
Field BAF stars having late-type companions at wide separation could have ages estimated using the methods valid for F6-K5 age dating. However, these systems are not only rare in the solar neighborhood, but considerable effort is required in establishing companionship e.g. [@stauffer1995; @barrado1997; @song2000]. Attempts to derive fractional main sequence ages for A-stars based on the evolution of rotational velocities are ongoing [@zorec2012], but this method is undeveloped and a bimodal distribution in $v \sin i$ for early-type A-stars may inhibit its utility. Another method, asteroseismology, which detects low-order oscillations in stellar interiors to determine the central density and hence age, is a heavily model-dependent method, observationally expensive, and best suited for older stars with denser cores.
The most general and quantitative way to age-date A0-F5 field stars is through isochrone placement. As intermediate-mass stars evolve quickly along the H-R diagram, they are better suited for age-dating via isochrone placement relative to their low-mass counterparts which remain nearly stationary on the main sequence for many Gyr [@soderblom2010]. Indeed, the mere presence of an early-type star on the main sequence suggests moderate youth, since the hydrogen burning phase is relatively short-lived. However, isochronal ages are obviously model-dependent and they do require precise placement of the stars on an H-R diagram implying a parallax. The major uncertainties arise from lack of information regarding metallicity [@nielsen2013], rotation [@collinssmith1985] and multiplicity [@derosa2014].
Our Approach
------------
![image](f1.pdf){width="95.00000%"}
Although many nearby BAF stars are individually well studied, there is historically no modern data set leading to a set of consistently derived stellar ages for this population of stars. Here we apply Strömgren photometric techniques and, by combining modern stellar atmospheres and modern stellar evolutionary codes, we develop methods for robust age determination for stars more massive than the Sun. The technique uses specific filters, careful calibration, definition of photometric indices, correction for any reddening, interpolation of physical atmospheric parameters from index plots, correction for rotation, and finally Bayesian estimation of stellar ages from evolutionary models that predict the atmospheric parameters as a function of mass and age.
Specifically, our work uses high-precision archival $uvby\beta$ photometry and model atmospheres so as to determine the fundamental stellar atmospheric parameters $T_\text{eff}$ and $\log g$. Placing stars accurately in an $\log T_\mathrm{eff}$ vs. $\log g$ diagram leads to derivation of their ages and masses. We consider [@bressan2012] evolutionary models that include pre-main sequence evolutionary times (2 Myr at 3 $M_{\odot}$ and 17 Myr at 1.5 $M_{\odot}$), which are a significant fraction of any intermediate mass star’s absolute age, as well as [@ekstrom2012] evolutionary models that self-consistently account for stellar rotation, which has non-negligible effects on the inferred stellar parameters of rapidly rotating early-type stars. Figure \[fig:evolutiont\] shows model predictions for the evolution of both physical and observational parameters.
The primary sample to which our technique is applied in this work consists of 3499 BAF field stars within 100 pc and with $uvby\beta$ photometry available in the [@hauck1998] catalog, hereafter HM98. The robustness of our method is tested at different stages with several control samples. To assess the uncertainties in our atmospheric parameters we consider (1) 69 $T_\mathrm{eff}$ standard stars from [@boyajian2013] or [@napiwotzki1993]; (2) 39 double-lined eclipsing binaries with standard $\log{g}$ from [@torres2010]; (3) 16 other stars from [@napiwotzki1993], also for examining $\log{g}$. To examine isochrone systematics, stars in four open clusters are studied (31 members of IC 2602, 51 members of $\alpha$ Per, 47 members of the Pleiades, and 47 members of the Hyades). Some stars belonging to sample (1) above are also contained in the large primary sample of field stars.
The Strömgren Photometric System {#sec:uvby}
================================
![The $u, v, b, y,$ H$\beta_\text{wide}$ and H$\beta_\text{narrow}$ passbands. Overplotted on an arbitrary scale is the synthetic spectrum of an A0V star generated by [@munari2005] from an ATLAS9 model atmosphere. The $uvby$ filter profiles are those of [@bessell2011], while the H$\beta$ filter profiles are those originally described in [@crawford1966] and the throughput curves are taken from [@castelli2006].[]{data-label="fig:filters"}](f2.pdf){width="45.00000%"}
Strömgren photometric methods have indeed historically been used to determine stellar parameters for early-type stars. Recent applications include work by @nieva2013 [@dallemese2012; @onehag2009; @allende1999]. An advantage over more traditional color-magnitude diagram techniques [@nielsen2013; @derosa2014] is that distance knowledge is not required, so the distance-age degeneracy is removed. Also, metallicity effects are relatively minor (as addressed in an Appendix), and rotation effects are well-modelled and can be corrected for (§ \[subsec:vsinicorrection\]).
Description of the Photometric System {#subsec:uvbydescription}
-------------------------------------
The $uvby\beta$ photometric system comprises four intermediate-band filters ($uvby$), first advanced by [@stromgren1966], plus the H$\beta$ narrow and wide filters developed by [@crawford1958]; see Figure \[fig:filters\]. Together, the two filter sets form a well-calibrated system that was specifically designed for studying earlier-type BAF stars, for which the hydrogen line strengths and continuum slopes in the Balmer region rapidly change with temperature and gravity.
From the fluxes contained in the six passbands, five $uvby\beta$ indices are defined. The color indices, ($b-y$) and ($u-b$), and the $\beta$-index,
$$\beta = \mathrm{H}\beta_\text{narrow} - \mathrm{H}\beta_\text{wide},$$
are all sensitive to temperature and weakly dependent on surface gravity for late A- and F-type stars. The Balmer discontinuity index,
$$c_1 = (u-v) - (v-b),$$
is sensitive to temperature for early type (OB) stars and surface gravity for intermediate (AF) spectral types. Finally, the metal line index,
$$m_1 = (v-b) - (b-y),$$
is sensitive to the metallicity $[M/H]$.
For each index, there is a corresponding intrinsic, dereddened index denoted by a naught subscript, with, e.g., $c_0$, $(b-y)_0$, and $(u-b)_0$ referring to the intrinsic, dereddened equivalents of the indices $c_1$, $(b-y)$, and $(u-b)$, respectively. Furthermore, although reddening is expected to be negligible for the nearby sources of primary interest to us, automated classification schemes that divide a large sample of stars for analysis into groups corresponding to earlier than, around, and later than the Balmer maximum will sometimes rely on the reddening-independent indices defined by [@crawfordmandwewala1976] for A-type dwarfs:
$$\begin{aligned}
[c_1] &= c_1 - 0.19 (b-y) \\
[m_1] &= m_1 + 0.34 (b-y) \\
[u-b] &= [c_1] + 2 [m_1].
\end{aligned}$$
Finally, two additional indices useful for early A-type stars, $a_0$ and $r^*$, are defined as follows:
$$\begin{aligned}
a_0 &= 1.36(b-y)_0 + 0.36m_0 + 0.18c_0 - 0.2448 \\
&= (b-y)_0 + 0.18[(u-b)_0 - 1.36], \\
r^* &= 0.35c_1 - 0.07(b-y)-(\beta-2.565).
\end{aligned}$$
Note that $r^*$ is a reddening free parameter, and thus indifferent to the use of reddened or unreddened photometric indices.
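As a concrete illustration (a minimal sketch of our own; variable names are arbitrary, and for $a_0$ the dereddened, naught-subscript inputs are assumed), the derived indices follow directly from the catalog quantities $(b-y)$, $m_1$, $c_1$, and $\beta$:

```python
# Minimal sketch: derived Stromgren indices from (b-y), m1, c1, beta,
# following the definitions above. For a0, dereddened inputs are assumed.

def stromgren_indices(by, m1, c1, beta):
    """Return the reddening-independent and auxiliary indices for one star."""
    c1_sq = c1 - 0.19 * by                 # [c1]
    m1_sq = m1 + 0.34 * by                 # [m1]
    ub_sq = c1_sq + 2.0 * m1_sq            # [u-b]
    a0 = 1.36 * by + 0.36 * m1 + 0.18 * c1 - 0.2448
    rstar = 0.35 * c1 - 0.07 * by - (beta - 2.565)   # reddening free
    return {"[c1]": c1_sq, "[m1]": m1_sq, "[u-b]": ub_sq, "a0": a0, "r*": rstar}

# Hypothetical example values:
print(stromgren_indices(by=0.05, m1=0.18, c1=1.00, beta=2.88))
```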
Extinction Correction {#subsec:reddening}
---------------------
Though the sample of nearby stars to which we apply the Strömgren methodology is assumed to be unextincted or only lightly extincted, interstellar reddening is significant for the more distant stars, including those in the open clusters used in § \[subsec:openclustertests\] to test the accuracy of the ages derived using our $uvby\beta$ methodology. In the cases where extinction is thought to be significant, corrections are performed using the `UVBYBETA`[^1] and `DEREDD`[^2] programs for IDL.
These IDL routines take as input $(b-y), m_1, c_1, \beta$, and a class value (between 1 and 8) that is used to roughly identify what region of the H-R diagram an individual star resides in. For our sample, stars belong to only four of the eight possible classes. These classes are summarized as follows: (1) B0-A0, III-V, $2.59 < \beta < 2.88$, $-0.20 < c_0 < 1.00$, (5) A0-A3, III-V, $2.87 < \beta < 2.93$, $-0.01 < (b-y)_0 < 0.06$, (6) A3-F0, III-V, $2.72 < \beta < 2.88$, $0.05 < (b-y)_0 < 0.22$, and (7) F1-G2, III-V, $2.60 < \beta < 2.72$, $0.22 < (b-y)_0 < 0.39$. The class values in this work were assigned to individual stars based on their known spectral types (provided in the XHIP catalog [@anderson2011]), and $\beta$ values where needed. In some instances, for A0-A3 stars assigned to class (5) with values of $\beta < 2.87$, the dereddening procedure was unable to proceed. For these cases, stars were either assigned to class (1) if they were spectral type A0-A1, or to class (6) if they were spectral type A2-A3.
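The class assignment just described reduces to a simple rule; the sketch below (our own rough helper, not the `UVBYBETA` routine itself, and assuming compact spectral-type strings such as 'A2V') illustrates the logic, including the $\beta < 2.87$ fallback:

```python
# Illustrative sketch of the class assignment described above (classes 1, 5, 6, 7),
# based on a literature spectral type and, where needed, beta.

def dereddening_class(sptype, beta=None):
    """Return the UVBYBETA class for a III-V star, given e.g. sptype='A2V'."""
    letter, subclass = sptype[0].upper(), float(sptype[1])
    if letter == "B":
        cls = 1                            # B0-A0
    elif letter == "A" and subclass <= 3:
        cls = 5                            # A0-A3
    elif letter == "A" or (letter == "F" and subclass == 0):
        cls = 6                            # A3-F0
    else:
        cls = 7                            # F1-G2
    # Fallback for class (5) stars with beta < 2.87:
    if cls == 5 and beta is not None and beta < 2.87:
        cls = 1 if subclass <= 1 else 6    # A0-A1 -> class (1), A2-A3 -> class (6)
    return cls

print(dereddening_class("A2V", beta=2.85))   # hypothetical example -> 6
```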
Depending on the class of an individual star, the program then calculates the dereddened indices $(b-y)_0, m_0, c_0$, the color excess $E(b-y)$, $\delta m_0$, the absolute V magnitude, $M_V$, the stellar radius and effective temperature. Notably, the $\beta$ index is unaffected by reddening as it is the flux difference between two narrow band filters with essentially the same central wavelength. Thus, no corrections are performed on $\beta$ and this index can be used robustly in coarse classification schemes.
To transform $E(b-y)$ to $A_V$, we use the extinction measurements of [@schlegel1998] and to propagate the effects of reddening through to the various $uvby\beta$ indices we use the calibrations of [@crawfordmandwewala1976]:
$$\begin{aligned}
E(m_1) &= -0.33 E(b-y) \\
E(c_1) &= 0.20 E(b-y) \\
E(u-b) &= 1.54 E(b-y).\end{aligned}$$
From these relations, given the intrinsic color index $(b-y)_0$, the reddening free parameters $m_0, c_0, (u-b)_0,$ and $a_0$ can be computed.
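A minimal sketch of this propagation (ours, written independently of the IDL routines), using $X_0 = X - E(X)$ together with the relations above, is:

```python
# Illustrative sketch: deredden the Stromgren indices given a color excess E(b-y),
# using X_0 = X - E(X) with the reddening relations quoted above.

def deredden(by, m1, c1, ub, eby):
    """Return (b-y)_0, m_0, c_0, (u-b)_0 for reddened indices and E(b-y) in mag."""
    by0 = by - eby
    m0 = m1 + 0.33 * eby        # since E(m1) = -0.33 E(b-y)
    c0 = c1 - 0.20 * eby        # since E(c1) = +0.20 E(b-y)
    ub0 = ub - 1.54 * eby       # since E(u-b) = +1.54 E(b-y)
    return by0, m0, c0, ub0

# Hypothetical example with E(b-y) = 0.02 mag:
print(deredden(by=0.10, m1=0.18, c1=0.95, ub=1.40, eby=0.02))
```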
In § \[subsec:tefflogguncertainties\] we quantify the effects of extinction and extinction uncertainty on the final atmospheric parameter estimation, $T_\mathrm{eff}, \log g$.
Utility of the Photometric System {#subsec:uvbyutility}
---------------------------------
From the four basic Strömgren indices – $b-y$ color, $\beta$, $c_1$, and $m_1$ – accurate determinations of the stellar atmospheric parameters $T_\text{eff}, \log g$, and $[M/H]$ are possible for B, A, and F stars. Either empirical [e.g. @crawford1979; @lester1986; @olsen1988; @smalley1993; @smalley1995; @clem2004] or theoretical [e.g. @balona1984; @moon1985; @napiwotzki1993; @balona1994; @lejeune1999; @castelli2006; @castelli2004; @onehag2009] calibrations are necessary. Uncertainties of 0.10 dex in $\log g$ and 260 K in $T_\text{eff}$ are claimed as achievable, and we reassess these uncertainties ourselves in § \[subsec:tefflogguncertainties\].
Determination of Atmospheric Parameters $T_\mathrm{eff}, \log g$ {#sec:atmosphericparameters}
================================================================
Procedure {#subsec:atmosphericparameters}
---------
![image](f3.pdf){width="99.00000%"} ![image](f4.pdf){width="99.00000%"}
Once equipped with $uvby\beta$ colors and indices and an understanding of the effects of extinction, we arrive at the fundamental parameters $T_\text{eff}$ and $\log g$ for program stars by interpolation among theoretical color grids (generated by convolving filter sensitivity curves with model atmospheres) or by explicit formulae (often polynomials) that can be derived empirically or using the theoretical color grids. In both cases, calibration to a sample of stars with atmospheric parameters that have been independently determined through fundamental physics is required. See e.g. [@figueras1991] for further description.
Numerous calibrations, both theoretical and empirical, of the $uvby\beta$ photometric system exist. For this work we use the [@castelli2006; @castelli2004] color grids generated from solar metallicity (Z=0.017, in this case) ATLAS9 model atmospheres using a microturbulent velocity parameter of $\xi = $ 0 km s$^{-1}$ and the new ODF. We do not use the alpha-enhanced color grids. The grids are readily available from F. Castelli[^3] or R. Kurucz[^4].
Prior to assigning atmospheric parameters to our program stars directly from the model grids, we first investigated the accuracy of the models on samples of BAF stars with fundamentally determined $T_\mathrm{eff}$ (through interferometric measurements of the angular diameter and estimations of the total integrated flux) and $\log g$ (from measurements of the masses and radii of double lined eclipsing binaries). We describe these validation procedures in § \[subsec:teffvalidation\] and § \[subsec:loggvalidation\].
Atmospheric parameter determination occurs in three different observational Strömgren planes depending on the temperature regime (see Figure \[fig:uvbygrids\]); this is in order to avoid the degeneracies that are present in all single observational planes when mapped onto the physical parameter space of $\log T_\text{eff}$ and $\log g$.
Building on the original work of, e.g., [@stromgren1951; @stromgren1966], [@moon1985] and later [@napiwotzki1993] suggested assigning physical parameters in the following three regimes: for cool stars ($T_\text{eff} \leq$ 8500 K), $\beta$ or $(b-y)$ can be used as a temperature indicator and $c_0$ is a surface gravity indicator; for intermediate temperature stars (8500 K $\leq T_\text{eff} \leq$ 11000 K), the temperature indicator is $a_0$ and the surface gravity indicator is $r^*$; finally, for hot stars ($T_\text{eff} \gtrsim$ 11000 K), the $c_0$ or the $[u-b]$ indices can be used as a temperature indicator while $\beta$ is a gravity indicator (note that the role of $\beta$ is reversed for hot stars compared to its role for cool stars). We adopt here $c_1$ vs. $\beta$ for the hottest stars, $a_0$ vs. $r^*$ for the intermediate temperatures, and $(b-y)$ vs. $c_1$ for the cooler stars.
Choosing the appropriate plane for parameter determination effectively means establishing a crude temperature sequence prior to fine parameter determination; in this, the $\beta$ index is critical. Because the $\beta$ index switches from being a temperature indicator to a gravity indicator in the temperature range of interest to us (spectral type B0-F5, luminosity class IV/V stars), atmospheric parameter determination proceeds depending on the temperature regime. For the $T_\mathrm{eff}$ and $\log g$ calibrations described below, temperature information existed for all of the calibration stars, though this is not the case for our program stars. In the general case we must rely on photometric classification to assign stars to the late, intermediate, and early groups, and then proceed to determine atmospheric parameters in the relevant $uvby\beta$ planes.
[@ttmoon1985] provides a scheme, present in the `UVBYBETA` IDL routine, for roughly identifying the region of the H-R diagram in which a star resides. However, because our primary sample of field stars are assumed to be unextincted, and because the `UVBYBETA` program relies on user-inputted class values based on unverified spectral types from the literature, we opt for a classification scheme based solely on the $uvby\beta$ photometry.
[@monguio2014], hereafter M14, designed a sophisticated classification scheme, based on the work of [@stromgren1966]. The M14 scheme places stars into early (B0-A0), intermediate (A0-A3), and late (later than A3) groups based solely on $\beta$, the reddened color $(b-y)$, and the reddening-free parameters $[c_1], [m_1], [u-b]$. The M14 scheme improves upon the previous method of [@figueras1991] by imposing two new conditions (see their Figure 2 for the complete scheme) intended to prevent the erroneous classification of some stars. For our sample of 3499 field stars (see § \[subsec:fieldstars\]), there are 699 stars lacking $\beta$ photometry, all but three of which cannot be classified by the M14 scheme. For such cases, we rely on supplementary spectral type information and manually assign these unclassified stars to the late group. Using the M14 scheme, the final makeup of our field star sample is 85.9% late, 8.4% intermediate, and 5.7% early.
Sample and Numerical Methods
----------------------------
For all stars in this work, $uvby\beta$ photometry is acquired from the [@hauck1998] compilation (hereafter HM98), unless otherwise noted. HM98 provides the most extensive compilation of $uvby\beta$ photometric measurements, taken from the literature and complete to the end of 1996 (the photometric system has seen less frequent usage/publication in more modern times). The HM98 compilation includes 105,873 individual photometric measurements for 63,313 different stars, culled from 533 distinct sources, and are presented both as individual measurements and weighted means of the literature values.
The HM98 catalog provides $(b-y), m_1, c_1,$ and $\beta$ and the associated errors in each parameter if available. From these indices $a_0$ and $r^*$ are computed according to Equations (7), (8) & (9). The ATLAS9 $uvby\beta$ grids provide a means of translating from ($b-y, m_1, c_1, \beta, a_0, r^*$) to a precise combination of ($T_\mathrm{eff}, \log g$). Interpolation within the model grids is performed on the appropriate grid: ($(b-y)$ vs. $c_1$ for the late group, $a_0$ vs. $r^*$ for the intermediate group, and $c_1$ vs. $\beta$ for the early group).
The interpolation is linear and performed using the SciPy routine `griddata`. Importantly, the model $\log g$ values are first converted into linear space so that $g$ is determined from the linear interpolation procedure before being brought back into log space. The model grids used in this work are spaced by 250 K in $T_\mathrm{eff}$ and 0.5 dex in $\log g$. To improve the precision of our method of atmospheric parameter determination in the future, it would be favorable to use model color grids that have been calculated at finer resolutions, particularly in $\log g$, directly from model atmospheres. However, the grid spacings stated above are fairly standardized among extant $uvby\beta$ grids.
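A minimal sketch of this step for a late-group star, with a tiny set of hypothetical grid nodes standing in for the ATLAS9 tables, might look like the following:

```python
# Minimal sketch of the interpolation step for a late-group star in the
# (b-y) vs. c1 plane. The grid node values below are hypothetical placeholders;
# the real nodes come from the ATLAS9 uvbybeta color tables, spaced by
# 250 K in Teff and 0.5 dex in log g.
import numpy as np
from scipy.interpolate import griddata

grid_by   = np.array([0.10, 0.12, 0.11, 0.13])     # model (b-y)
grid_c1   = np.array([0.90, 0.88, 0.95, 0.93])     # model c1
grid_teff = np.array([7750., 7500., 7750., 7500.]) # model Teff (K)
grid_logg = np.array([4.0, 4.0, 4.5, 4.5])         # model log g

points = np.column_stack([grid_by, grid_c1])

def interp_late(by, c1):
    """Interpolate (Teff, log g) for a late-group star in the (b-y)-c1 plane."""
    teff = griddata(points, grid_teff, (by, c1), method="linear")
    # Interpolate in linear g, then return to log space (as described above):
    g = griddata(points, 10.0 ** grid_logg, (by, c1), method="linear")
    return float(teff), float(np.log10(g))

print(interp_late(0.115, 0.915))
```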
Rotational Velocity Correction {#subsec:vsinicorrection}
------------------------------
![Vectors showing the magnitude and direction of the rotational velocity corrections at 100 (black), 200, and 300 (light grey) km s$^{-1}$ for a grid of points in log(Teff)-log$g$ space, with PARSEC isochrones overlaid for reference. While typical A-type stars rotate at about 150 km s$^{-1}$, high-contrast imaging targets are sometimes selected for slow rotation and hence favorable inclinations, typically $v \sin i <$50 km s$^{-1}$ or within the darkest black vectors. For rapid rotators, a 100$\%$ increase in the inferred age due to rotational effects is not uncommon.[]{data-label="fig:rotation-vectors"}](f5.pdf){width="49.00000%"}
Early-type stars are rapid rotators, with rotational velocities of $v \sin i \gtrsim 150$ km s$^{-1}$ being typical. For a rotating star, both surface gravity and effective temperature decrease from the poles to the equator, changing the mean gravity and temperature of a rapid rotator relative to a slower rotator [@sweetroy1953]. Vega, rotating with an inferred equatorial velocity of $v_\mathrm{eq} \sim 270$ km s$^{-1}$ at a nearly pole-on inclination, has measured pole-to-equator gradients in $T_\mathrm{eff}$ and $\log{g}$ that are $\sim$ 2400 K and $\sim$ 0.5 dex, respectively [@peterson2006]. The apparent luminosity change due to rotation depends on the inclination: a pole-on ($i=0^\circ$) rapid rotator appears more luminous than a nonrotating star of the same mass, while an edge-on ($i=90^\circ$) rapid rotator appears less luminous than a nonrotating star of the same mass. [@sweetroy1953] found that a $(v \sin i)^2$ correction factor could describe the changes in luminosity, gravity, and temperature.
The net effect of stellar rotation on inferred age is to make a rapid rotator appear cooler, more luminous, and hence older when compared to a nonrotating star of the same mass (or more massive when compared to a nonrotating star of the same age). Optical colors can be affected since the spectral lines of early type stars are strong and broad. @kraftwrubel1965 demonstrated specifically in the Str[ö]{}mgren system that the effects are predominantly in the gravity indicators ($c_1$, which then also affects the other gravity indicator $r^*$) and less so in the temperature indicators ($b-y$, which then affects $a_0$).
[@figueras1998], hereafter FB98, used Monte-Carlo simulations to investigate the effect of rapid rotation on the measured $uvby\beta$ indices, derived atmospheric parameters, and hence isochronal ages of early-type stars. Those authors concluded that stellar rotation conspires to artificially enhance isochronal ages derived through $uvby\beta$ photometric methods by 30-50% on average.
To mitigate the effect of stellar rotation on the parameters $T_\text{eff}$ and $\log(g)$, FB98 presented the following corrective formulae for stars with $T_\mathrm{eff} > 11000$ K:
$$\begin{aligned}
\Delta T_\mathrm{eff} &= 0.0167 (v \sin i)^2 + 218, \\
\Delta \log g &= 2.10 \times 10^{-6} (v \sin i)^2 + 0.034.\end{aligned}$$
For stars with $8500 \mathrm{K} \leq T_\mathrm{eff} \leq 11000 \mathrm{K}$, the analogous formulae are:
$$\begin{aligned}
\Delta T_\mathrm{eff} &= 0.0187 (v \sin i)^2 + 150, \\
\Delta \log g &= 2.92 \times 10^{-6} (v \sin i)^2 + 0.048.\end{aligned}$$
In both cases, $\Delta T_\mathrm{eff}$ and $\Delta \log g$ are *added* to the $T_\mathrm{eff}$ and $\log g$ values derived from $uvby\beta$ photometry.
Notably, the rotational velocity correction depends on whether the star belongs to the early, intermediate, or late group. Specifically, FB98 define three regimes: $T_\mathrm{eff} <$ 8830 K (no correction), 8830 K $< T_\mathrm{eff} <$ 9700 K (correction for intermediate A0-A3 stars), and $T_\mathrm{eff} >$ 9700 K (correction for stars earlier than A3).
[@song2001], who performed a similar isochronal age analysis of A-type stars using $uvby\beta$ photometry, extended the FB98 rotation corrections to stars earlier and later than B7 and A4, respectively. In the present work, a more conservative approach is taken and the rotation correction is applied only to stars in the early or intermediate groups, as determined by the classification scheme discussed in § \[subsec:atmosphericparameters\]. This decision was partly justified by the abundance of late-type stars that fall below the ZAMS in the open cluster tests (§ \[subsec:openclustertests\]), for which the rotation correction would have a small (due to the lower rotational velocities of late-type stars) but exacerbating effect on these stars whose surface gravities are already thought to be overestimated.
We include these corrections and, as illustrated in Figure \[fig:rotation-vectors\], emphasize that in their absence we would err on the side of overestimating the age of a star, meaning conservatively overestimating rather than underestimating companion masses based on assumed ages. As an example, for a star with $T_\mathrm{eff} \approx$ 13,275 K and $\log g$ $\approx$ 4.1, assumed to be rotating edge-on at 300 km s$^{-1}$, neglecting to apply the rotation correction would result in an age of $\sim$ 100 Myr. Applying the rotation correction to this star results in an age of $\sim$ 10 Myr.
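For reference, a sketch of how these additive corrections can be applied in practice is given below; the mapping of the two sets of formulae onto the 8830 K and 9700 K boundaries quoted above reflects our reading of FB98 and should be treated as an assumption:

```python
# Sketch of the FB98 rotational correction as described above (additive in
# Teff and log g). The assignment of formulae to the 8830/9700 K boundaries
# is our interpretation of the text.

def rotation_correction(teff, logg, vsini):
    """Return (Teff, log g) corrected for rotation, with v sin i in km/s."""
    v2 = vsini ** 2
    if teff > 9700.0:                      # stars earlier than ~A3
        dteff, dlogg = 0.0167 * v2 + 218.0, 2.10e-6 * v2 + 0.034
    elif teff > 8830.0:                    # intermediate A0-A3 stars
        dteff, dlogg = 0.0187 * v2 + 150.0, 2.92e-6 * v2 + 0.048
    else:                                  # no correction below 8830 K
        dteff, dlogg = 0.0, 0.0
    return teff + dteff, logg + dlogg

# The example quoted above: a 13,275 K, log g ~ 4.1 star at v sin i = 300 km/s
print(rotation_correction(13275.0, 4.1, 300.0))
```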
Of note, the FB98 corrections were derived for atmospheric parameters determined using the synthetic $uvby\beta$ color grids of [@moon1985]. It is estimated that any differences in derived atmospheric parameters resulting from the use of color grids other than those of [@moon1985] are less than the typical measurement errors in those parameters. In § \[subsec:tefflogguncertainties\] we quantify the effects of rotation and rotation correction uncertainty on the final atmospheric parameter estimation, $T_\mathrm{eff}, \log g$.
Calibration and Validation Using the HM98 Catalog
=================================================
In this section we assess the effective temperatures and surface gravities derived from atmospheric models and $uvby\beta$ color grids relative to fundamentally determined temperatures (§ \[subsec:teffvalidation\]) and surface gravities (§ \[subsec:loggvalidation\]).
Effective Temperature {#subsec:teffvalidation}
---------------------
A fundamental determination of $T_\mathrm{eff}$ is possible through an interferometric measurement of the stellar angular diameter and an estimate of the total integrated flux. We gathered 69 stars (listed in Table \[table:teffcal\]) with fundamental $T_\mathrm{eff}$ measurements from the literature and determine photometric temperatures for these objects from interpolation of $uvby\beta$ photometry in ATLAS9 model grids.
Fundamental $T_\mathrm{eff}$ values were sourced from [@boyajian2013], hereafter B13, and [@napiwotzki1993], hereafter N93. Several stars have multiple interferometric measurements of the stellar radius, and hence multiple fundamental $T_\mathrm{eff}$ determinations. For these stars, identified as those objects with multiple radius references in Table \[table:teffcal\], the mean $T_\mathrm{eff}$ and standard deviation were taken as the fundamental measurement and standard error. Among the 16 stars with multiple fundamental $T_\mathrm{eff}$ determinations by between 2 and 5 authors, there is a scatter of typically several percent (with 0.1-4% range).
Additional characteristics of the $T_\mathrm{eff}$ “standard” stars are summarized as follows: spectral types B0-F9, luminosity classes III-V, 2 km s$^{-1}$ $\leq v \sin i \leq$ 316 km s$^{-1}$, mean and median $v \sin i$ of 58 and 26 km s$^{-1}$, respectively, 2.6 pc $\leq d \leq$ 493 pc, and a mean and median \[Fe/H\] of -0.08 and -0.06 dex, respectively. Line-of-sight rotational velocities were acquired from the [@glebocki2005] compilation and \[Fe/H\] values were taken from SIMBAD. Variability and multiplicity were considered, and our sample is believed to be free of any possible contamination due to either of these effects.
From the HM98 compilation we retrieved $uvby\beta$ photometry for these “effective temperature standards.” The effect of reddening was considered for the hotter, statistically more distant stars in the N93 sample. Comparing mean $uvby\beta$ photometry from HM98 with the dereddened photometry presented in N93 revealed that nearly all of these stars have negligible reddening ($E(b-y) \leq$ 0.001 mag). The exceptions are HD 82328, HD 97603, HD 102870, and HD 126660 with color excesses of $E(b-y)=$ 0.010, 0.003, 0.011, and 0.022 mag, respectively. Inspection of Table \[table:teffcal\] indicates that despite the use of the reddened HM98 photometry the $T_\mathrm{eff}$ determinations for three of these four stars are still of high accuracy. For HD 97603, there is a discrepancy of $>$ 300 K between the fundamental and photometric temperatures. However, the $uvby\beta$ $T_\mathrm{eff}$ using reddened photometry for this star is actually hotter than the fundamental $T_\mathrm{eff}$. Notably, the author-to-author dispersion in multiple fundamental $T_\mathrm{eff}$ determinations for HD 97603 is also rather large. As such, the HM98 photometry was deemed suitable for all of the “effective temperature standards.”
![*Top:* Comparison of the temperatures derived from the ATLAS9 $uvby\beta$ color grids (T$_{uvby}$) and the fundamental effective temperatures ($\mathrm{T_{fund}}$) taken from B13 and N93. *Bottom:* Ratio of $uvby\beta$ temperature to fundamental temperature, as a function of $T_\mathrm{uvby}$. For the majority of stars, the $uvby\beta$ grids can predict $\mathrm{T_{eff}}$ to within $\sim 5 \%$ without any additional correction factors. []{data-label="fig:teff-cal-3"}](f6.pdf){width="45.00000%"}
For the sake of completeness, different model color grids were investigated, including those of [@fitzpatrick2005], which were recently calibrated for early group stars, and those of [@onehag2009], which were calibrated from MARCS model atmospheres for stars cooler than 7000 K. We found that the grids which best matched the fundamental effective temperatures were the solar-metallicity ATLAS9 grids with no alpha-enhancement, a microturbulent velocity of 0 km s$^{-1}$, and the new opacity distribution function (ODF). The ATLAS9 grids with a microturbulent velocity of 2 km s$^{-1}$ were also tested, but were found to worsen both the fractional $T_\mathrm{eff}$ error and the scatter, though only marginally (by a few tenths of a percent).
For the early group stars, temperature determinations were attempted in both the $c_1-\beta$ and \[$u-b$\]-$\beta$ planes. The $c_1$ index was found to be a far better temperature indicator in this regime, with the \[$u-b$\] index underestimating $T_\mathrm{eff}$ relative to the fundamental values by $>$10% on average. Temperature determinations in the $c_1-\beta$ plane, however, were only $\approx$ 1.9% cooler than the fundamental values, regardless of whether $c_1$ or the dereddened index $c_0$ was used. This is not surprising, as the $c_1-\beta$ plane is not particularly susceptible to reddening.
At intermediate temperatures, the $a_0-r^*$ plane is used. In this regime, the ATLAS9 grids were found to overestimate $T_\mathrm{eff}$ by $\approx$ 2.0% relative to the fundamental values.
Finally, for the late group stars temperature determinations were attempted in the $(b-y)-c_1$ and $\beta-c_1$ planes. In this regime, $(b-y)$ was found to be a superior temperature indicator, improving the mean fractional error marginally and reducing the RMS scatter by more than 1%. In this group, the model grids overpredict $T_\mathrm{eff}$ by $\approx$ 2.4% on average, regardless of whether the reddened or dereddened indices are used.
Figure \[fig:teff-cal-3\] shows a comparison of the temperatures derived from the ATLAS9 $uvby\beta$ color grids and the fundamental effective temperatures given in B13 and N93. For the majority of stars the color grids can predict the effective temperature to within about 5 $\%$. A slight systematic trend is noted in Figure \[fig:teff-cal-3\], such that the model color grids overpredict $T_\mathrm{eff}$ at low temperatures and underpredict $T_\mathrm{eff}$ at high temperatures. We attempt to correct for this systematic effect by applying $T_\mathrm{eff}$ offsets in three regimes according to the mean behavior of each group: late and intermediate group stars were shifted to cooler temperatures by 2.4% and 2.0%, respectively, and early group stars were shifted by 1.9% toward hotter temperatures. After offsets were applied, the remaining RMS error in temperature determinations for these “standard” stars was 3.3%, 2.5%, and 3.5% for the late, intermediate, and early groups, respectively, or 3.1% overall.
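To make this correction step concrete, a minimal Python sketch is given below. The offsets are those quoted above; the function name, and the use of the uncorrected grid temperature together with the nominal group boundaries (8500 K and 11000 K) to assign group membership, are illustrative simplifications rather than a description of the actual pipeline.

```python
def apply_teff_offset(teff_uvby):
    """Shift a grid-derived T_eff by the mean offset of its group:
    late/intermediate groups shifted cooler by 2.4%/2.0%, early group
    shifted hotter by 1.9% (nominal boundaries at 8500 K and 11000 K)."""
    if teff_uvby < 8500.0:        # late group
        return 0.976 * teff_uvby
    elif teff_uvby <= 11000.0:    # intermediate group
        return 0.980 * teff_uvby
    else:                         # early group
        return 1.019 * teff_uvby
```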
Taking the uncertainties or dispersions in the fundamental $T_\mathrm{eff}$ determinations as the standard error, there is typically a 5-6 $\sigma$ discrepancy between the fundamental and photometric $T_\mathrm{eff}$ determinations. However, given the large author-to-author dispersion observed for stars with multiple fundamental $T_\mathrm{eff}$ determinations, it is likely that the formal errors on these measurements are underestimated. Notably, N93 does not publish errors for the fundamental $T_\mathrm{eff}$ values, which are literature means. However, those authors did find fractional errors in their photometric $T_\mathrm{eff}$ ranging from 2.5-4% for BA stars.
In § \[subsec:openclustertests\], we opted not to apply systematic offsets, instead assigning $T_\mathrm{eff}$ uncertainties in three regimes according to the average fractional uncertainties noted in each group. In our final $T_\mathrm{eff}$ determinations for our field star sample (§ \[subsec:fieldstars\]) we attempted to correct for the slight temperature systematics and applied offsets, using the magnitude of the remaining RMS error (for all groups considered collectively) as the dominant source of uncertainty in our $T_\mathrm{eff}$ measurement (see § \[subsec:tefflogguncertainties\]).
We investigated rotational effects on our temperature determinations for the $T_\mathrm{eff}$ standards, as demonstrated in Figure \[fig:teffcal-vsini\]. Notably, the FB98 $v\sin{i}$ corrections appear to enhance the discrepancy between our temperature determinations and the fundamental temperatures for the late and intermediate groups, while moderately improving the accuracy for the early group. For the late group this is expected, as the correction formulae were originally derived for intermediate and early group stars. Note, however, that only two stars in the calibration sample exhibit projected rotational velocities $>200$ km s$^{-1}$. We examine the utility of the $v\sin{i}$ correction further in § \[subsec:loggvalidation\] & § \[subsec:openclustertests\].
The effect of metallicity on the determination of $T_\mathrm{eff}$ from the $uvby\beta$ grids is investigated in Figure \[fig:teff-cal-2\], which shows the ratio of the grid-determined temperature to the fundamental temperature as a function of \[Fe/H\]. The sample of temperature standards spans a large range in metallicity, yet there is no indication of any systematic effect with \[Fe/H\], justifying our choice to assume solar metallicity throughout this work (see further discussion of metallicity effects in the Appendix).
The effect of reddening on our temperature determinations was also considered, but since the vast majority of sources with fundamental effective temperatures are nearby, no significant reddening was expected. Indeed, we found no systematic trend of the temperature residuals as a function of distance.
In summary, our finding that the ATLAS9-predicted $T_\mathrm{eff}$ values are $\sim 2 \%$ hotter than the fundamental values for AF stars is consistent with the results of [@bertone2004], who found $T_\mathrm{eff}$ values from fits of ATLAS9 models to spectrophotometry to be 4-8% warmer than those determined from the infrared flux method (IRFM). We correct for these systematics with group-dependent offsets of magnitude $\sim 2\%$, after which the remaining RMS error between the $uvby\beta$ temperatures and the fundamental values is $\sim 3\%$.
![Ratio of the $uvby\beta$ temperature to fundamental temperature as a function of $v \sin i$, for the late (left), intermediate (middle), and early (right) group stars. The solid horizontal colored lines indicate the mean ratios in each case. The arrows represent both the magnitude and direction of the change in the ratio $T_{uvby}/T_\mathrm{fund}$ after applying the FB98 rotation corrections. The dashed horizontal colored lines indicate the mean ratios after application of the rotation correction. The rotation correction appears to improve temperature estimates for early group stars, but worsen estimates for the late and intermediate groups. Notably, however, the vast majority of $T_\mathrm{eff}$ standards are slowly rotating ($v\sin{i}<150$ km s$^{-1}$). Note one rapidly rotating intermediate group star extends beyond the scale of the figure, with a rotation-corrected $T_{uvby}/T_\mathrm{fund}$ ratio of $\approx 1.26$.[]{data-label="fig:teffcal-vsini"}](f7.pdf){width="48.00000%"}
![Ratio of the $uvby\beta$ temperature to fundamental temperature as a function of \[Fe/H\]. There is no indication that the grids systematically overestimate or underestimate $T_\mathrm{eff}$ for different values of \[Fe/H\].[]{data-label="fig:teff-cal-2"}](f8.pdf){width="30.00000%"}
[cccccccccccc]{}
4614 & F9V & 5973 $\pm$ 8 & 3 & 5915 & 4.442 & -0.28 & 1.8 & 0.372 & 0.185 & 0.275 & 2.588\
5015 & F8V & 5965 $\pm$ 35 & 3 & 6057 & 3.699 & 0.04 & 8.6 & 0.349 & 0.174 & 0.423 & 2.613\
5448 & A5V & 8070 & 18 & 8350 & 3.964 & & 69.3 & 0.068 & 0.189 & 1.058 & 2.866\
6210 & F6Vb & 6089 $\pm$ 35 & 1 & 5992 & 3.343 & -0.01 & 40.9 & 0.356 & 0.183 & 0.475 & 2.615\
9826 & F8V & 6102 $\pm$ 75 & 2,4 & 6084 & 3.786 & 0.08 & 8.7 & 0.346 & 0.176 & 0.415 & 2.629\
16765 & F7V & 6356 $\pm$ 46 & 1 & 6330 & 4.408 & -0.15 & 30.5 & 0.318 & 0.160 & 0.355 & 2.647\
16895 & F7V & 6153 $\pm$ 25 & 3 & 6251 & 4.118 & 0.00 & 8.6 & 0.325 & 0.160 & 0.392 & 2.625\
17081 & B7V & 12820 & 18 & 12979 & 3.749 & 0.24 & 23.3 & -0.057 & 0.104 & 0.605 & 2.717\
19994 & F8.5V & 5916 $\pm$ 98 & 2 & 5971 & 3.529 & 0.17 & 7.2 & 0.361 & 0.185 & 0.422 & 2.631\
22484 & F9IV-V & 5998 $\pm$ 39 & 3 & 5954 & 3.807 & -0.09 & 3.7 & 0.367 & 0.173 & 0.376 & 2.615\
30652 & F6IV-V & 6570 $\pm$ 131 & 3,6 & 6482 & 4.308 & 0.00 & 15.5 & 0.298 & 0.163 & 0.415 & 2.652\
32630 & B3V & 17580 & 18 & 16536 & 4.068 & & 98.2 & -0.085 & 0.104 & 0.318 & 2.684\
34816 & B0.5IV & 27580 & 18 & 28045 & 4.286 & -0.06 & 29.5 & -0.119 & 0.073 & -0.061 & 2.602\
35468 & B2III & 21230 & 18 & 21122 & 3.724 & -0.07 & 53.8 & -0.103 & 0.076 & 0.109 & 2.613\
38899 & B9IV & 10790 & 18 & 11027 & 3.978 & -0.16 & 25.9 & -0.032 & 0.141 & 0.906 & 2.825\
47105 & A0IV & 9240 & 18 & 9226 & 3.537 & -0.28 & 13.3 & 0.007 & 0.149 & 1.186 & 2.865\
48737 & F5IV-V & 6478 $\pm$ 21 & 3 & 6510 & 3.784 & 0.14 & 61.8 & 0.287 & 0.169 & 0.549 & 2.669\
48915 & A0mA1Va & 9755 $\pm$ 47 & 7,8,9,10,11 & 9971 & 4.316 & 0.36 & 15.8 & -0.005 & 0.162 & 0.980 & 2.907\
49933 & F2Vb & 6635 $\pm$ 90 & 12 & 6714 & 4.378 & -0.39 & 9.9 & 0.270 & 0.127 & 0.460 & 2.662\
56537 & A3Vb & 7932 $\pm$ 62 & 3 & 8725 & 4.000 & & 152 & 0.047 & 0.198 & 1.054 & 2.875\
58946 & F0Vb & 6954 $\pm$ 216 & 3,18 & 7168 & 4.319 & -0.25 & 52.3 & 0.215 & 0.155 & 0.615 & 2.713\
61421 & F5IV-V & 6563 $\pm$ 24 & 11,13,14,15,18 & 6651 & 3.983 & -0.02 & 4.7 & 0.272 & 0.167 & 0.532 & 2.671\
63922 & B0III & 29980 & 18 & 29973 & 4.252 & 0.16 & 40.7 & -0.122 & 0.043 & -0.092 & 2.590\
69897 & F6V & 6130 $\pm$ 58 & 1 & 6339 & 4.290 & -0.26 & 4.3 & 0.315 & 0.149 & 0.384 & 2.635\
76644 & A7IV & 7840 & 18 & 8232 & 4.428 & -0.03 & 142 & 0.104 & 0.216 & 0.856 & 2.843\
80007 & A2IV & 9240 & 18 & 9139 & 3.240 & & 126 & 0.004 & 0.140 & 1.273 & 2.836\
81937 & F0IVb & 6651 $\pm$ 27 & 3 & 7102 & 3.840 & 0.17 & 146 & 0.211 & 0.180 & 0.752 & 2.733\
82328 & F5.5IV-V & 6299 $\pm$ 61 & 3,18 & 6322 & 3.873 & -0.16 & 7.1 & 0.314 & 0.153 & 0.463 & 2.646\
90839 & F8V & 6203 $\pm$ 56 & 3 & 6145 & 4.330 & -0.11 & 8.6 & 0.341 & 0.171 & 0.333 & 2.618\
90994 & B6V & 14010 & 18 & 14282 & 4.219 & & 84.5 & -0.066 & 0.111 & 0.466 & 2.730\
95418 & A1IV & 9181 $\pm$ 11 & 3,18 & 9695 & 3.899 & -0.03 & 40.8 & -0.006 & 0.158 & 1.088 & 2.880\
97603 & A5IV(n) & 8086 $\pm$ 169 & 3,6,18 & 8423 & 4.000 & -0.18 & 177 & 0.067 & 0.195 & 1.037 & 2.869\
102647 & A3Va & 8625 $\pm$ 175 & 5,6,18 & 8775 & 4.188 & 0.07 & 118 & 0.043 & 0.211 & 0.973 & 2.899\
102870 & F8.5IV-V & 6047 $\pm$ 7 & 3,18 & 6026 & 3.689 & 0.12 & 5.4 & 0.354 & 0.187 & 0.416 & 2.628\
118098 & A2Van & 8097 $\pm$ 43 & 3 & 8518 & 4.163 & -0.26 & 200 & 0.065 & 0.183 & 1.006 & 2.875\
118716 & B1III & 25740 & 18 & 23262 & 3.886 & & 113 & -0.112 & 0.058 & 0.040 & 2.608\
120136 & F7IV-V & 6620 $\pm$ 67 & 2 & 6293 & 3.933 & 0.24 & 14.8 & 0.318 & 0.177 & 0.439 & 2.656\
122408 & A3V & 8420 & 18 & 8326 & 3.500 & -0.27 & 168 & 0.062 & 0.164 & 1.177 & 2.843\
126660 & F7V & 6202 $\pm$ 35 & 3,6,18 & 6171 & 3.881 & -0.02 & 27.7 & 0.334 & 0.156 & 0.418 & 2.644\
128167 & F4VkF2mF1 & 6687 $\pm$ 252 & 3,18 & 6860 & 4.439 & -0.32 & 9.3 & 0.254 & 0.134 & 0.480 & 2.679\
130948 & F9IV-V & 5787 $\pm$ 57 & 1 & 5899 & 4.065 & -0.05 & 6.3 & 0.374 & 0.191 & 0.321 & 2.625\
136202 & F8IV & 5661 $\pm$ 87 & 1 & 6062 & 3.683 & -0.04 & 4.9 & 0.348 & 0.170 & 0.427 & 2.620\
141795 & kA2hA5mA7V & 7928 $\pm$ 88 & 3 & 8584 & 4.346 & 0.38 & 33.1 & 0.066 & 0.224 & 0.950 & 2.885\
142860 & F6V & 6295 $\pm$ 74 & 3,6 & 6295 & 4.130 & -0.17 & 9.9 & 0.319 & 0.150 & 0.401 & 2.633\
144470 & B1V & 25710 & 18 & 25249 & 4.352 & & 107 & -0.112 & 0.043 & -0.005 & 2.621\
162003 & F5IV-V & 5928 $\pm$ 81 & 3 & 6469 & 3.916 & -0.03 & 11.9 & 0.294 & 0.147 & 0.497 & 2.661\
164259 & F2V & 6454 $\pm$ 113 & 3 & 6820 & 4.121 & -0.03 & 66.4 & 0.253 & 0.153 & 0.560 & 2.690\
168151 & F5Vb & 6221 $\pm$ 39 & 1 & 6600 & 4.203 & -0.28 & 9.7 & 0.281 & 0.143 & 0.472 & 2.653\
169022 & B9.5III & 9420 & 18 & 9354 & 3.117 & & 196 & 0.016 & 0.102 & 1.176 & 2.778\
172167 & A0Va & 9600 & 18 & 9507 & 3.977 & -0.56 & 22.8 & 0.003 & 0.157 & 1.088 & 2.903\
173667 & F5.5IV-V & 6333 $\pm$ 37 & 3,18 & 6308 & 3.777 & -0.03 & 16.3 & 0.314 & 0.150 & 0.484 & 2.652\
177724 & A0IV-Vnn & 9078 $\pm$ 86 & 3 & 9391 & 3.870 & -0.52 & 316 & 0.013 & 0.146 & 1.080 & 2.875\
181420 & F2V & 6283 $\pm$ 106 & 16 & 6607 & 4.187 & -0.03 & 17.1 & 0.280 & 0.157 & 0.477 & 2.657\
185395 & F3+V & 6516 $\pm$ 203 & 3,4 & 6778 & 4.296 & 0.02 & 5.8 & 0.261 & 0.157 & 0.502 & 2.688\
187637 & F5V & 6155 $\pm$ 85 & 16 & 6192 & 4.103 & -0.09 & 5.4 & 0.333 & 0.151 & 0.380 & 2.631\
190993 & B3V & 17400 & 18 & 16894 & 4.195 & -0.14 & 140 & -0.083 & 0.100 & 0.295 & 2.686\
193432 & B9.5V & 9950 & 18 & 10411 & 3.928 & -0.15 & 23.4 & -0.021 & 0.134 & 1.015 & 2.852\
193924 & B2IV & 17590 & 18 & 17469 & 3.928 & & 15.5 & -0.092 & 0.087 & 0.271 & 2.662\
196867 & B9IV & 10960 & 18 & 10837 & 3.861 & -0.06 & 144 & -0.019 & 0.125 & 0.889 & 2.796\
209952 & B7IV & 13850 & 18 & 13238 & 3.913 & & 215 & -0.061 & 0.105 & 0.576 & 2.728\
210027 & F5V & 6324 $\pm$ 139 & 6 & 6496 & 4.187 & -0.13 & 8.6 & 0.294 & 0.161 & 0.446 & 2.664\
210418 & A2Vb & 7872 $\pm$ 82 & 3 & 8596 & 3.966 & -0.38 & 136 & 0.047 & 0.161 & 1.091 & 2.886\
213558 & A1Vb & 9050 $\pm$ 157 & 3 & 9614 & 4.175 & & 128 & 0.002 & 0.170 & 1.032 & 2.908\
215648 & F6V & 6090 $\pm$ 22 & 3 & 6198 & 3.950 & -0.26 & 7.7 & 0.331 & 0.147 & 0.407 & 2.626\
216956 & A4V & 8564 $\pm$ 105 & 5,18 & 8857 & 4.198 & 0.20 & 85.1 & 0.037 & 0.206 & 0.990 & 2.906\
218396 & F0+($\lambda$ Boo) & 7163 $\pm$ 84 & 17 & 7540 & 4.435 & & 47.2 & 0.178 & 0.146 & 0.678 & 2.739\
219623 & F8V & 6285 $\pm$ 94 & 1 & 6061 & 3.85 & 0.04 & 4.9 & 0.351 & 0.169 & 0.395 & 2.624\
222368 & F7V & 6192 $\pm$ 26 & 3 & 6207 & 3.988 & -0.14 & 6.1 & 0.330 & 0.163 & 0.399 & 2.625\
222603 & A7V & 7734 $\pm$ 80 & 1 & 8167 & 4.318 & & 62.8 & 0.105 & 0.203 & 0.891 & 2.826

\[table:teffcal\]
Surface Gravity {#subsec:loggvalidation}
---------------
To assess the surface gravities derived from the $uvby\beta$ grids, we compare to results on both double-lined eclipsing binary and spectroscopic samples.
### Comparison with Double-Lined Eclipsing Binaries
[@torres2010] compiled an extensive catalog of 95 double-lined eclipsing binaries with fundamentally determined surface gravities for all 180 individual stars. Eclipsing binary systems allow for dynamical determinations of the component masses and geometrical determinations of the component radii. From the mass and radius of an individual component, the Newtonian surface gravity, $g=GM/R^2$, can be calculated.
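For reference, the following sketch evaluates this Newtonian surface gravity in the cgs units used throughout; the constants adopted below are standard values and the function name is ours.

```python
import numpy as np

G_CGS = 6.674e-8                     # gravitational constant [cm^3 g^-1 s^-2]
M_SUN, R_SUN = 1.989e33, 6.957e10    # solar mass [g] and radius [cm]

def logg_from_mass_radius(mass_msun, radius_rsun):
    """log10 of the Newtonian surface gravity g = GM/R^2 in cm s^-2,
    from a dynamical mass and a geometric radius in solar units."""
    g = G_CGS * mass_msun * M_SUN / (radius_rsun * R_SUN) ** 2
    return np.log10(g)

# Example: a 1 M_sun, 1 R_sun star yields log g ~ 4.44 dex.
```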
From these systems, 39 of the primary components have $uvby\beta$ photometry available for determining surface gravities using our methodology. The spectral type range for these systems is O8-F2, with luminosity classes of IV, V. The mass ratio (primary/secondary) for these systems ranges from $\approx$ 1.00-1.79, and the orbital periods of the primaries range from $\approx$ 1.57-8.44 days. In the cases of low mass ratios, the primary and secondary components should have nearly identical fundamental parameters, assuming they are coeval. In the cases of high mass ratios, given that the individual components are presumably unresolved, we assume that the primary dominates the $uvby\beta$ photometry. For both cases (of low and high mass ratios), we assume that the photometry allows for accurate surface gravity determinations for the primary components and so we only consider the primaries from the [@torres2010] sample.
It is important to note that the eclipsing binary systems used for the surface gravity calibration are more distant than the stars for which we can interferometrically determine angular diameters and effective temperatures. Thus, for the surface gravity calibration it was necessary to compute the dereddened indices $(b-y)_0, m_0, c_0$ in order to obtain the highest accuracy possible for the intermediate-group stars, which rely on $a_0$ (an index constructed from dereddened colors) as a temperature indicator. Notably, however, we found that the dereddened photometry actually worsened the $\log{g}$ determinations for the early and late groups. Dereddened colors were computed using the IDL routine `UVBYBETA`.
The results of the $\log g$ calibration are presented in Table \[table:loggcal\] and Figure \[fig:logg-cal-fig1\]. As described above, for the late group stars ($T_\mathrm{eff} <$ 8500 K), $\log g$ is determined in the $(b-y)-c_1$ plane. The mean and median of the $\log g$ residuals (in the sense of grid-fundamental) are -0.001 dex and -0.038 dex, respectively, and the RMS error is 0.145 dex. As in § \[subsec:teffvalidation\], we found that the $\beta-c_1$ plane produced less accurate atmospheric parameters, relative to fundamental determinations, for late group stars.
For the intermediate group stars (8500 K $\leq T_\mathrm{eff} \leq$ 11000 K), $\log g$ is determined in the $a_0-r^*$ plane. The mean and median of the $\log g$ residuals are -0.060 dex and -0.069 dex, respectively, with an RMS error of 0.091 dex. For the early group stars ($T_\mathrm{eff} >$ 11000 K), $\log g$ is determined in the $c_1-\beta$ plane. The mean and median of the $\log g$ residuals are -0.0215 dex and 0.024 dex, respectively, with an RMS error of 0.113 dex. The $[u-b]-\beta$ plane was also investigated for early group stars, but was found to produce $\log{g}$ values of lower accuracy relative to the fundamental determinations.
When considered collectively, the mean and median of the $\log g$ residuals for all stars are -0.017 dex and -0.034 dex, and the RMS error is 0.127 dex. The uncertainties in our surface gravities that arise from propagating the photometric errors through our atmospheric parameter determination routines are of order $\sim$ 0.02 dex, significantly smaller than the uncertainties demonstrated by the comparison to fundamental values of $\log g$.
As stated above, the main concern with using double-lined eclipsing binaries as surface gravity calibrators for our photometric technique is contamination from the unresolved secondary components. The $\log g$ residuals were therefore examined as a function of both mass ratio and orbital period. While the amplitude of the scatter is marginally larger for low mass ratio or short period systems, in all of these subsets our $\log g$ determinations are within 0.2 dex of the fundamental values $\approx 85\%$ of the time.
To assess any potential systematic inaccuracies of the grids themselves, the surface gravity residuals were examined as a function of $T_\mathrm{eff}$ and the grid-determined $\log g$. Figure \[fig:logg-cal-fig3\] shows the $\log g$ residuals as functions of $T_\mathrm{eff}$ and $\log g$. No significant systematic effects as a function of either effective temperature or $\log g$ were found in the $uvby\beta$ determinations of $\log g$.
The effect of rotational velocity on our $\log g$ determinations was also considered. As before, $v \sin i$ data for the surface gravity calibrators were collected from [@glebocki2005]. As seen in Figure \[fig:dlogg-vsini\], the majority of the $\log g$ calibrators are relatively slowly rotating ($v \sin i \leq$ 150 km s$^{-1}$). While the $v \sin i$ correction increases the accuracy of our $\log g$ determinations for the early-group stars in most cases, the correction appears to worsen our determinations for the intermediate group, which appear systematically high to begin with.
The potential systematic effect of metallicity on our $\log g$ determinations is considered in Figure \[fig:logg-cal-fig8\], which shows the surface gravity residuals as a function of \[Fe/H\]. Metallicity measurements were available for very few of these stars, and were primarily taken from [@ammons2006; @anderson2012]. Nevertheless, there does not appear to be a global systematic trend in the surface gravity residuals with metallicity. There is a larger scatter in the $\log g$ determinations for the more metal-rich, late-type stars; however, it is not clear that this effect is strictly due to metallicity.
In summary, for the open cluster tests we assign $\log g$ uncertainties in three regimes: $\pm$ 0.145 dex for stars belonging to the late group, $\pm$ 0.091 dex for the intermediate group, and $\pm$ 0.113 dex for the early group.
For our sample of nearby field stars we opt to assign a uniform systematic uncertainty of $\pm$ 0.116 dex for all stars. We do not attempt to correct for any systematic effects by applying offsets in $\log g$, as we did with $T_\mathrm{eff}$. As noted in the discussion of the $T_\mathrm{eff}$ calibration, we do apply the $v \sin i$ correction to both intermediate and early group stars, as these corrections permit us to better reproduce open cluster ages (as presented in § \[subsec:openclustertests\]).
![image](f9.pdf){width="99.00000%"}
![Surface gravity residuals, $\Delta \log g$ (in the sense of fundamental-$uvby\beta$), as a function of $uvby\beta$-determined $\log{(T_\mathrm{eff})}$ (left) and $\log{g}$ (right). Solid points represent eclipsing binary primaries from [@torres2010] and open circles are stars with spectroscopic $\log{g}$ determinations in N93. Of the 39 eclipsing binaries, only six have residuals greater than 0.2 dex in magnitude. This implies that the $uvby\beta$ grids determine $\log g$ to within 0.2 dex of fundamental values $\sim 85\%$ of the time. Surface gravity residuals are largest for the cooler stars. The photometric surface gravities agree better with the spectroscopic determinations than with those of the eclipsing binary sample. There is no indication of a global systematic offset in the $uvby\beta$-determined $\log g$ values as a function of either $T_\mathrm{eff}$ or $\log g$. []{data-label="fig:logg-cal-fig3"}](f10.pdf){width="45.00000%"}
![Surface gravity residuals, $\Delta \log g$ (in the sense of fundamental-$uvby\beta$), of eclipsing binary primaries as a function of $v \sin i$. Arrows indicate the locations of points after application of the [@figueras1998] $v\sin{i}$ correction, where in this case late group stars received the same correction as the intermediate group.[]{data-label="fig:dlogg-vsini"}](f11.pdf){width="45.00000%"}
![Surface gravity residuals, $\Delta \log g$ (in the sense of fundamental-$uvby\beta$), as a function of \[Fe/H\]. The metallicity values have been taken primarily from [@ammons2006], with additional values coming from [@anderson2012]. While metallicities are available for only a few of the surface gravity calibrators used here, there does not appear to be a systematic trend in the residuals with \[Fe/H\]. The scatter is larger for the more metal-rich late-type stars; however, it is confined to a relatively small range in \[Fe/H\], and it is not clear that it is truly a metallicity effect. []{data-label="fig:logg-cal-fig8"}](f12.pdf){width="45.00000%"}
[ccccccccccccc]{}
EM Car & O8V & 34000 $\pm$ 2000 & 21987 & 3.855 $\pm$ 0.016 & 3.878 & 146.0 & & 0.279 & -0.042 & 0.083 & 2.617\
V1034 Sco & O9V & 33200 $\pm$ 900 & 28228 & 3.923 $\pm$ 0.008 & 3.969 & 159.0 & -1.0 & 0.190 & -0.024 & -0.068 & 2.587\
AH Cep & B0.5Vn & 29900 $\pm$ 1000 & 24867 & 4.017 $\pm$ 0.009 & 4.115 & 154.0 & & 0.290 & -0.064 & 0.003 & 2.611\
V578 Mon & B1V & 30000 $\pm$ 740 & 25122 & 4.176 $\pm$ 0.015 & 4.200 & 107.0 & & 0.206 & -0.024 & -0.003 & 2.613\
V453 Cyg & B0.4IV & 27800 $\pm$ 400 & 24496 & 3.725 $\pm$ 0.006 & 3.742 & 130.0 & & 0.212 & -0.004 & -0.004 & 2.590\
CW Cep & B0.5V & 28300 $\pm$ 1000 & 22707 & 4.050 $\pm$ 0.019 & 3.716 & 120.0 & & 0.355 & -0.077 & 0.050 & 2.601\
V539 Ara & B3V & 18100 $\pm$ 500 & 17537 & 3.924 $\pm$ 0.016 & 3.964 & 85.6 & & -0.033 & 0.089 & 0.268 & 2.665\
CV Vel & B2.5V & 18100 $\pm$ 500 & 17424 & 3.999 $\pm$ 0.008 & 3.891 & 42.8 & & -0.057 & 0.083 & 0.273 & 2.659\
AG Per & B3.4V & 18200 $\pm$ 800 & 15905 & 4.213 $\pm$ 0.020 & 4.311 & 92.6 & -0.04 & 0.048 & 0.079 & 0.346 & 2.708\
U Oph & B5V & 16440 $\pm$ 250 & 15161 & 4.076 $\pm$ 0.004 & 3.954 & 350.0 & & 0.081 & 0.050 & 0.404 & 2.695\
V760 Sco & B4V & 16900 $\pm$ 500 & 15318 & 4.176 $\pm$ 0.019 & 4.061 & & & 0.169 & 0.023 & 0.392 & 2.701\
GG Lup & B7V & 14750 $\pm$ 450 & 13735 & 4.298 $\pm$ 0.009 & 4.271 & 123.0 & & -0.049 & 0.115 & 0.514 & 2.747\
$\zeta$ Phe & B6V & 14400 $\pm$ 800 & 13348 & 4.121 $\pm$ 0.004 & 4.153 & 111.0 & & -0.039 & 0.118 & 0.559 & 2.747\
$\chi^2$ Hya & B8V & 11750 $\pm$ 190 & 11382 & 3.710 $\pm$ 0.007 & 3.738 & 131.0 & & -0.020 & 0.110 & 0.841 & 2.769\
V906 Sco & B9V & 10400 $\pm$ 500 & 10592 & 3.656 $\pm$ 0.012 & 3.719 & 81.3 & & 0.039 & 0.101 & 0.996 & 2.805\
TZ Men & A0V & 10400 $\pm$ 500 & 10679 & 4.224 $\pm$ 0.009 & 4.169 & 14.4 & & 0.000 & 0.142 & 0.918 & 2.850\
V1031 Ori & A6V & 7850 $\pm$ 500 & 8184 & 3.559 $\pm$ 0.007 & 3.793 & 96.0 & & 0.076 & 0.174 & 1.106 & 2.848\
$\beta$ Aur & A1m & 9350 $\pm$ 200 & 9167 & 3.930 $\pm$ 0.005 & 3.894 & 33.2 & -0.11 & 0.017 & 0.173 & 1.091 & 2.889\
V364 Lac & A4m: & 8250 $\pm$ 150 & 7901 & 3.766 $\pm$ 0.005 & 3.707 & & & 0.107 & 0.168 & 1.061 & 2.875\
V624 Her & A3m & 8150 $\pm$ 150 & 7902 & 3.832 $\pm$ 0.014 & 3.794 & 38.0 & & 0.111 & 0.230 & 1.025 & 2.870\
V1647 Sgr & A1V & 9600 $\pm$ 300 & 9142 & 4.252 $\pm$ 0.008 & 4.087 & & & 0.040 & 0.174 & 1.020 & 2.899\
VV Pyx & A1V & 9500 $\pm$ 200 & 9560 & 4.087 $\pm$ 0.008 & 4.004 & 22.1 & & 0.028 & 0.161 & 1.013 & 2.881\
KW Hya & A5m & 8000 $\pm$ 200 & 8053 & 4.078 $\pm$ 0.006 & 4.390 & 16.6 & & 0.122 & 0.232 & 0.832 & 2.827\
WW Aur & A5m & 7960 $\pm$ 420 & 8401 & 4.161 $\pm$ 0.005 & 4.286 & 35.8 & & 0.081 & 0.231 & 0.944 & 2.862\
V392 Car & A2V & 8850 $\pm$ 200 & 10263 & 4.296 $\pm$ 0.011 & 4.211 & 163.0 & & 0.097 & 0.108 & 1.019 & 2.889\
RS Cha & A8V & 8050 $\pm$ 200 & 7833 & 4.046 $\pm$ 0.022 & 4.150 & 30.0 & & 0.136 & 0.186 & 0.866 & 2.791\
MY Cyg & F0m & 7050 $\pm$ 200 & 7054 & 3.994 $\pm$ 0.019 & 3.882 & & & 0.219 & 0.226 & 0.709 & 2.756\
EI Cep & F3V & 6750 $\pm$ 100 & 6928 & 3.763 $\pm$ 0.014 & 3.904 & 16.2 & 0.27 & 0.234 & 0.199 & 0.658 & 2.712\
FS Mon & F2V & 6715 $\pm$ 100 & 6677 & 4.026 $\pm$ 0.005 & 3.992 & 40.0 & 0.07 & 0.266 & 0.148 & 0.594 & 2.688\
PV Pup & A8V & 6920 $\pm$ 300 & 7327 & 4.255 $\pm$ 0.009 & 4.386 & 66.4 & & 0.200 & 0.169 & 0.636 & 2.722\
HD 71636 & F2V & 6950 $\pm$ 140 & 6615 & 4.226 $\pm$ 0.014 & 4.104 & 13.5 & 0.15 & 0.278 & 0.157 & 0.496 &\
RZ Cha & F5V & 6450 $\pm$ 150 & 6326 & 3.905 $\pm$ 0.006 & 3.808 & & 0.02 & 0.312 & 0.155 & 0.482 &\
BW Aqr & F7V & 6350 $\pm$ 100 & 6217 & 3.979 $\pm$ 0.018 & 3.877 & & & 0.328 & 0.165 & 0.432 & 2.650\
V570 Per & F3V & 6842 $\pm$ 50 & 6371 & 4.234 $\pm$ 0.019 & 3.998 & 44.9 & 0.06 & 0.308 & 0.165 & 0.441 &\
CD Tau & F6V & 6200 $\pm$ 50 & 6325 & 4.087 $\pm$ 0.007 & 3.973 & 18.9 & 0.19 & 0.314 & 0.178 & 0.436 &\
V1143 Cyg & F5V & 6450 $\pm$ 100 & 6492 & 4.322 $\pm$ 0.015 & 4.155 & 19.8 & 0.22 & 0.294 & 0.165 & 0.451 & 2.663\
VZ Hya & F3V & 6645 $\pm$ 150 & 6199 & 4.305 $\pm$ 0.003 & 4.182 & & -0.22 & 0.333 & 0.145 & 0.370 & 2.629\
V505 Per & F5V & 6510 $\pm$ 50 & 6569 & 4.323 $\pm$ 0.016 & 4.325 & 31.4 & -0.03 & 0.287 & 0.142 & 0.435 & 2.654\
HS Hya & F4V & 6500 $\pm$ 50 & 6585 & 4.326 $\pm$ 0.005 & 4.471 & 23.3 & 0.14 & 0.287 & 0.160 & 0.397 & 2.648
\[table:loggcal\]
### Comparison with Spectroscopic Measurements
The Balmer lines are a sensitive surface gravity indicator for stars hotter than $T_\mathrm{eff} \gtrsim$ 9000 K and can be used as a semi-fundamental surface gravity calibration for the early- and intermediate-group stars. Surface gravities derived with this method are considered semi-fundamental rather than fundamental because the method still relies on model atmospheres to fit the observed line profiles. Nevertheless, surface gravities determined in this way are considered to be of high fidelity, and so we performed an additional consistency check, comparing our $uvby\beta$ values of $\log g$ to well-determined spectroscopic $\log g$ measurements.
N93 fit theoretical profiles of hydrogen Balmer lines from [@kurucz1979] to high resolution spectrograms of the H$\beta$ and H$\gamma$ lines for a sample of 16 stars with $uvby\beta$ photometry. The sample of 16 stars was mostly drawn from the list of photometric $\beta$ standards of [@crawford1966]. We compared the $\log g$ values we determined through interpolation in the $uvby\beta$ color grids to the semi-fundamental spectroscopic values determined by N93. The results of this comparison are presented in Table \[table:loggspec\].
Though N93 provide dereddened photometry for the spectroscopic sample, we found that using the raw HM98 photometry produced significantly better results (yielding an RMS error three times lower). For the early group stars, the atmospheric parameters were determined in both the $c_0-\beta$ plane and the $[u-b]-\beta$ plane. In both cases, $\beta$ is the gravity indicator, but we found that the $\log g$ values calculated when using $c_0$ as a temperature indicator for hot stars better matched the semi-fundamental spectroscopic $\log g$ values. This is consistent with the effective temperature calibration, which suggested that $c_0$ predicts the effective temperatures of hot stars better than $[u-b]$. As before, $\log{g}$ for intermediate group stars is determined in the $a_0-r^*$ plane.
We tested $uvby\beta$ color grids of different metallicity, alpha-enhancement, and microturbulent velocity and determined that the non-alpha-enhanced, solar metallicity grids with microturbulent velocity $v_\mathrm{turb}$ = 0 km s$^{-1}$ best reproduced the spectroscopic surface gravities for the sample of 16 early- and intermediate-group stars measured by N93.
The $\log g$ residuals, in the sense of (spectroscopic – grid), as a function of the grid-calculated effective temperatures are plotted in Figure \[fig:logg-cal-fig3\]. There is no evidence for a significant systematic offset in the residuals as a function of either the $uvby\beta$-determined $T_\mathrm{eff}$ or $\log{g}$. For the early group, the mean and median surface gravity residuals are -0.007 dex and 0.004 dex, respectively, with RMS 0.041 dex. For the intermediate group, the mean and median surface gravity residuals are -0.053 dex and -0.047 dex, respectively, with RMS 0.081 dex. Considering both early and intermediate group stars collectively, the mean and median surface gravity residuals are -0.027 dex and -0.021 dex, and the RMS 0.062 dex.
One issue that may cause statistically larger errors in the $\log g$ determinations compared to the $T_\mathrm{eff}$ determinations is the linear interpolation in a low-resolution logarithmic space (the $uvby\beta$ colors are calculated at steps of 0.5 dex in $\log g$). To mitigate this effect one requires either more finely gridded models or an interpolation scheme that takes the logarithmic gridding into account.
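As a minimal sketch of this interpolation step (and of the coarseness issue it inherits), the example below linearly interpolates observed colors onto an irregular set of model nodes with `scipy.interpolate.griddata`. The grid file name and column layout are hypothetical stand-ins for a tabulated ATLAS9 color grid, not the files actually used here.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical tabulation of an ATLAS9 color grid: synthetic (c1, beta)
# colors at nodes spaced in T_eff and at 0.5 dex steps in log g.
teff_n, logg_n, c1_n, beta_n = np.loadtxt("atlas9_c1_beta_grid.txt", unpack=True)
nodes = np.column_stack([c1_n, beta_n])

def params_from_colors(c1_obs, beta_obs):
    """Linearly interpolate observed (c1, beta) onto the model nodes to
    recover (T_eff, log g); the coarse 0.5 dex log g sampling limits the
    precision of this simple linear scheme."""
    teff = griddata(nodes, teff_n, (c1_obs, beta_obs), method="linear")
    logg = griddata(nodes, logg_n, (c1_obs, beta_obs), method="linear")
    return float(teff), float(logg)
```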
[ccccccccccc]{}
63 & A2V & 8970 & 9047 & 3.73 & 3.912 & 0.026 & 0.181 & 1.050 & 1.425 & 2.881\
153 & B2IV & 20930 & 20635 & 3.78 & 3.872 & -0.090 & 0.087 & 0.134 & 0.264 & 2.627\
1641 & B3V & 16890 & 16528 & 4.07 & 4.044 & -0.085 & 0.104 & 0.319 & 0.485 & 2.683\
2421 & AOIV & 9180 & 9226 & 3.49 & 3.537 & 0.007 & 0.149 & 1.186 & 1.487 & 2.865\
4119 & B6V & 14570 & 14116 & 4.18 & 4.176 & -0.062 & 0.111 & 0.481 & 0.673 & 2.730\
4554 & AOVe & 9360 & 9398 & 3.82 & 3.863 & 0.006 & 0.155 & 1.112 & 1.425 & 2.885\
5191 & B3V & 17320 & 16797 & 4.28 & 4.292 & -0.080 & 0.106 & 0.297 & 0.470 & 2.694\
6588 & B3IV & 17480 & 17025 & 3.82 & 3.864 & -0.065 & 0.079 & 0.292 & 0.418 & 2.661\
7001 & AOVa & 9540 & 9508 & 4.01 & 3.977 & 0.003 & 0.157 & 1.088 & 1.403 & 2.903\
7447 & B5III & 13520 & 13265 & 3.73 & 3.712 & -0.016 & 0.088 & 0.575 & 0.743 & 2.707\
7906 & B9IV & 10950 & 10838 & 3.85 & 3.861 & -0.019 & 0.125 & 0.889 & 1.130 & 2.796\
8585 & A1V & 9530 & 9615 & 4.11 & 4.175 & 0.002 & 0.170 & 1.032 & 1.373 & 2.908\
8634 & B8V & 11330 & 11247 & 3.69 & 3.672 & -0.035 & 0.113 & 0.868 & 1.077 & 2.768\
8781 & B9V & 9810 & 9868 & 3.54 & 3.593 & -0.011 & 0.128 & 1.129 & 1.380 & 2.838\
8965 & B8V & 11850 & 11721 & 3.47 & 3.422 & -0.031 & 0.100 & 0.784 & 0.969 & 2.725\
8976 & B9IVn & 11310 & 11263 & 4.23 & 4.260 & -0.035 & 0.131 & 0.831 & 1.076 & 2.833\
\[table:loggspec\]
Summary of Atmospheric Parameter Uncertainties {#subsec:tefflogguncertainties}
----------------------------------------------
Precise and accurate stellar ages are the ultimate goal of this work. The accuracy of our ages is determined by both the accuracy with which we can determine atmospheric parameters and any systematic uncertainties associated with the stellar evolutionary models and our assumptions in applying them. The precision, on the other hand, is determined almost entirely by the precision with which we determine the atmospheric parameters and, because there are practical limits to how well we can ever determine $T_\mathrm{eff}$ and $\log{g}$, by the location of the star in the H-R diagram (e.g. stars closer to the main sequence will always have less precise ages with this method).
It is thus important to provide a detailed accounting of the uncertainties involved in our atmospheric parameter determinations, as the final uncertainties quoted in our ages will arise purely from the values of the $\sigma_{T_\mathrm{eff}}, \sigma_{\log{g}}$ used in our $\chi^2$ calculations. Below we consider the contribution of the systematics already discussed, as well as the contributions from errors in interpolation, photometry, metallicity, extinction, rotational velocity, multiplicity, and spectral peculiarity.
[**Systematics:**]{} The dominant source of uncertainty in our atmospheric parameter determinations is the set of systematics quantified in § \[subsec:teffvalidation\] and § \[subsec:loggvalidation\]. All systematic effects inherent to the $uvby\beta$ method, and to the particular model color grids chosen, which we will call $\sigma_\mathrm{sys}$, are embedded in the comparisons to the stars with fundamentally or semi-fundamentally determined parameters, and are summarized as approximately $\sim 3.1\%$ in $T_\mathrm{eff}$ and $\sim 0.116$ dex in $\log{g}$. We also found that for stars with available \[Fe/H\] measurements, the accuracy with which we can determine atmospheric parameters using $uvby\beta$ photometry does not vary systematically with metallicity, though we further address metallicity issues both below and in the Appendix.
[**Interpolation Precision:**]{} To estimate the errors in atmospheric parameters due to the numerical precision of the interpolation procedures employed here, we generated 1000 random points in each of the three relevant $uvby\beta$ planes. For each point, we obtained ten independent $T_\mathrm{eff}, \log{g}$ determinations to test the repeatability of the interpolation routine. The scatter among these independent determinations was found to be $<10^{-10}$ (in K and dex, respectively), and thus numerical errors are assumed to be zero.
[**Photometric Errors:**]{} Considering the most basic element of our approach, there are uncertainties due to the propagation of photometric errors through our atmospheric parameter determination pipeline. As discussed in § \[subsec:fieldstars\], the photometric errors are generally small ($\sim 0.005$ mag in a given index). Translating each point within the rectangular region defined by the magnitude of the mean photometric error in each index, and then interpolating to find the atmospheric parameters of the perturbed point, we take the maximum and minimum values of $T_\mathrm{eff}$ and $\log{g}$ to estimate the error due to photometric measurement uncertainty.
To simplify the propagation of photometric errors for individual stars, we performed simulations with randomly generated data to ascertain the mean uncertainty in $T_\mathrm{eff}$, $\log{g}$ that results from typical errors in each of the $uvby\beta$ indices.
We begin with the HM98 photometry and associated measurement errors for our sample (3499 stars within 100 pc, B0-F5, luminosity classes IV-V). Since the HM98 compilation does not provide $a_0$ or $r^*$ (these quantities are calculated from the four fundamental indices), we calculate their uncertainties under the crude approximation that the $uvby\beta$ indices are uncorrelated. Under this assumption, the uncertainties associated with $a_0$ and $r^*$ are as follows:
$$\begin{aligned}
\sigma_{a_0} &= \sqrt{1.36^2\sigma_{b-y}^2+0.36^2\sigma_{m_1}^2+0.18^2\sigma_{c_1}^2} \\
\sigma_{r^*} &= \sqrt{0.07^2\sigma_{b-y}^2+0.35^2\sigma_{c_1}^2+\sigma_{\beta}^2}.\end{aligned}$$
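These propagation formulae translate directly into code; a minimal sketch is given below (the function names and the illustrative index errors in the comment are ours).

```python
import numpy as np

def sigma_a0(s_by, s_m1, s_c1):
    """Uncertainty in a0 from uncorrelated errors in (b-y), m1, and c1."""
    return np.sqrt((1.36 * s_by) ** 2 + (0.36 * s_m1) ** 2 + (0.18 * s_c1) ** 2)

def sigma_rstar(s_by, s_c1, s_beta):
    """Uncertainty in r* from uncorrelated errors in (b-y), c1, and beta."""
    return np.sqrt((0.07 * s_by) ** 2 + (0.35 * s_c1) ** 2 + s_beta ** 2)

# e.g. sigma_a0(0.003, 0.004, 0.005) ~ 0.0044 mag for illustrative input errors
```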
A model for the empirical probability distribution function (hereafter PDF) for the error in a given $uvby\beta$ index is created through a normalized histogram with 25 bins. From this empirical PDF, one can randomly draw values for the error in a given index. For each $uvby\beta$ plane, 1,000 random points in the appropriate range of parameter space were generated with photometric errors drawn as described above. The eight ($T_\mathrm{eff}$, $\log{g}$) values corresponding to the corners and midpoints of the “standard error rectangle” centered on the original random data point are then evaluated. The maximally discrepant ($T_\mathrm{eff}$, $\log{g}$) values are saved and the overall distributions of $\Delta T_\mathrm{eff}/T_\mathrm{eff}$ and $\Delta \log{g}$ are then analyzed to assess the mean uncertainties in the atmospheric parameters derived in a given $uvby\beta$ plane due to the propagation of typical photometric errors.
For the late group, points were generated in the range of $(b-y)-c_1$ parameter space bounded by 6500 K $\leq T_\mathrm{eff} \leq$ 9000 K and $3.0 \leq \log{g} \leq 5.0$. In this group, typical photometric uncertainties of $\left \langle \sigma_\mathrm{b-y} \right \rangle$ = 0.003 mag and $\left \langle \sigma_\mathrm{c_1} \right \rangle$ = 0.005 mag lead to average uncertainties of 0.6 % in $T_\mathrm{eff}$ and 0.055 dex in $\log{g}$. For the intermediate group, points were generated in the range of $a_0-r^*$ parameter space bounded by 8500 K $\leq T_\mathrm{eff} \leq$ 11000 K and $3.0 \leq \log{g} \leq 5.0$. In this group, typical photometric uncertainties of $\left \langle \sigma_\mathrm{a_0} \right \rangle$ = 0.005 mag and $\left \langle \sigma_\mathrm{r^*} \right \rangle$ = 0.005 mag lead to average uncertainties of 0.8 % in $T_\mathrm{eff}$ and 0.046 dex in $\log{g}$. For the early group, points were generated in the range of $c_1-\beta$ parameter space bounded by 10000 K $\leq T_\mathrm{eff} \leq$ 30000 K and $3.0 \leq \log{g} \leq 5.0$. In this group, typical photometric uncertainties of $\left \langle \sigma_\mathrm{c_1} \right \rangle$ = 0.005 mag and $\left \langle \sigma_\mathrm{\beta} \right \rangle$ = 0.004 mag lead to average uncertainties of 1.1 % in $T_\mathrm{eff}$ and 0.078 dex in $\log{g}$. Across all three groups, the mean uncertainty due to photometric errors is $\approx 0.9\%$ in $T_\mathrm{eff}$ and $\approx 0.060$ dex in $\log{g}$.
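A condensed sketch of this simulation is given below. The `invert_plane` argument stands in for a grid-inversion routine (e.g. the hypothetical `params_from_colors` sketch above); the 25-bin empirical PDF and the corner/midpoint sampling of the "standard error rectangle" follow the description in the text.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def error_sampler(measured_errors, nbins=25):
    """Empirical PDF of index errors built from a 25-bin histogram;
    returns a function that draws random error values from it."""
    counts, edges = np.histogram(measured_errors, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    probs = counts / counts.sum()
    return lambda n=1: rng.choice(centers, size=n, p=probs)

def max_excursions(x, y, sx, sy, invert_plane):
    """Evaluate (T_eff, log g) at the corners and midpoints of the error
    rectangle around (x, y) and return the largest |Delta T_eff|/T_eff
    and |Delta log g| relative to the central point."""
    t0, g0 = invert_plane(x, y)
    offsets = [(dx, dy) for dx in (-sx, 0.0, sx) for dy in (-sy, 0.0, sy)
               if (dx, dy) != (0.0, 0.0)]
    results = np.array([invert_plane(x + dx, y + dy) for dx, dy in offsets])
    return (np.max(np.abs(results[:, 0] - t0)) / t0,
            np.max(np.abs(results[:, 1] - g0)))
```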
[**Metallicity Effects:**]{} For simplicity and homogeneity, our method assumes solar composition throughout. However, the metallicity distribution of our sample is more accurately represented as a Gaussian centered at -0.109 dex with $\sigma \approx$ 0.201 dex. Metallicity is a small, but non-negligible, effect: allowing \[M/H\] to change by $\pm$ 0.5 dex can lead to differences in the assumed $T_\mathrm{eff}$ of $\sim$ 1-2$\%$ for late-, intermediate-, and some early-group stars, or differences of up to $6\%$ for stars hotter than $\sim$ 17000 K (of which there are few in our sample). In $\log{g}$, shifts of $\pm$0.5 dex in \[M/H\] can lead to differences of $\sim 0.1$ dex in the assumed $\log{g}$ for late- or early-group stars, or $\sim 0.05$ dex in the narrow region occupied by intermediate-group stars.
Here, we estimate the uncertainty the metallicity approximation introduces to the fundamental stellar parameters derived in this work. We begin with the actual $uvby\beta$ data for our sample, and \[Fe/H\] measurements from the XHIP catalog [@anderson2012], which exist for approximately 68% of our sample. Those authors collected photometric and spectroscopic metallicity determinations of Hipparcos stars from a large number of sources, calibrated the values to the high-resolution catalog of [@wu2011] in an attempt to homogenize the various databases, and published weighted means for each star. The calibration process is described in detail in §5 of [@anderson2012].
For each of the stars with available \[Fe/H\] in our field star sample, we derive $T_\mathrm{eff}, \log{g}$ in the appropriate $uvby\beta$ plane for the eight cases of \[M/H\]=-2.5,-2.0,-1.5,-1.0,-0.5,0.0,0.2,0.5. Then, given the measured \[Fe/H\], and making the approximation that \[M/H\]=\[Fe/H\], we perform a linear interpolation to find the most accurate values of $T_\mathrm{eff}, \log{g}$ given the color grids available. We also store the atmospheric parameters a given star would be assigned assuming \[M/H\]=0.0. Figure \[fig:metallicity-error\] shows the histograms of $T_\mathrm{eff}/T_\mathrm{eff, [M/H]=0}$ and $\log{g}-\log{g}_\mathrm{[M/H]=0}$. We take the standard deviations in these distributions to reflect the typical error introduced by the solar metallicity approximation. For $T_\mathrm{eff}$, there is a 0.8% uncertainty introduced by the true dispersion of metallicities in our sample, and for $\log{g}$, the uncertainty is 0.06 dex. These uncertainties in the atmospheric parameters are naturally propagated into uncertainties in the age and mass of a star through the likelihood calculations outlined in § \[subsubsec:formalism\].
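The per-star interpolation step can be sketched as follows, using the eight grid metallicities listed above; the function and array names are illustrative rather than taken from the actual pipeline.

```python
import numpy as np

MH_GRID = np.array([-2.5, -2.0, -1.5, -1.0, -0.5, 0.0, 0.2, 0.5])

def params_at_feh(feh, teff_per_mh, logg_per_mh):
    """Linearly interpolate the (T_eff, log g) values derived in each of
    the eight [M/H] color grids to the star's measured [Fe/H], taking
    [M/H] = [Fe/H]; values outside the grid are clipped to the endpoints."""
    teff = np.interp(feh, MH_GRID, np.asarray(teff_per_mh))
    logg = np.interp(feh, MH_GRID, np.asarray(logg_per_mh))
    return teff, logg
```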
![Distributions of the true variations in $T_\mathrm{eff}$ (left) and $\log{g}$ (right) caused by our assumption of solar metallicity. The “true” $T_\mathrm{eff}$ and $\log{g}$ values are determined for the $\sim 68\%$ of our field star sample with \[Fe/H\] measurements in XHIP and from linear interpolation between the set of atmospheric parameters determined in eight ATLAS9 grids [@castelli2006; @castelli2004] that vary from -2.5 to 0.5 dex in \[M/H\].[]{data-label="fig:metallicity-error"}](f13.pdf){width="49.00000%"}
[**Reddening Effects:**]{} For the program stars studied here, interstellar reddening is assumed negligible. Performing the reddening corrections (described in § \[subsec:reddening\]) on our presumably unreddened sample of stars within 100 pc, we find, for the $\sim 80\%$ of stars for which dereddening proved possible, that the distribution of $A_V$ values in our sample is approximately Gaussian with a mean and standard deviation of $\mu=0.007, \sigma=0.125$ mag, respectively (see Figure \[fig:sample-specs\]). Of course, negative $A_V$ values are unphysical, but applying the reddening corrections to our $uvby\beta$ photometry and deriving the atmospheric parameters for each star in both the corrected and uncorrected cases gives us an estimate of the uncertainties in those parameters due to our assumption of negligible reddening out to 100 pc. The resulting distributions of $T_\mathrm{eff,0}/T_\mathrm{eff}$ and $\log{g}_0-\log{g}$, where the naught subscripts indicate the dereddened values, are sharply peaked at 1 and 0, respectively. The FWHM of these distributions indicates an uncertainty of $<0.2\%$ in $T_\mathrm{eff}$ and $\sim 0.004$ dex in $\log{g}$. For the general case of sources at larger distances that may suffer more significant reddening, the systematic effects of under-correcting for extinction are illustrated in Figure \[fig:redvectors\].
![The effect of interstellar reddening on atmospheric parameters derived from $uvby\beta$ photometry. The isochrones and mass tracks plotted are those of [@bressan2012]. The tail of each vector represents a given point in a specific photometric plane ($(b-y)-c_1$ for the late group stars in red, $a_0-r^*$ for the intermediate group stars in teal, and $c_1-\beta$ for the early group stars in black) and its corresponding value in \[$T_\mathrm{eff}, \log{g}$\]. The tip of the vector points to the new value of \[$T_\mathrm{eff}, \log{g}$\] after each point in photometric space has been “dereddened” assuming arbitrary values of $A_V$. The shifts in $uvby\beta$ space have been computed according to the extinction measurements of [@schlegel1998] and [@crawfordmandwewala1976], assuming $A_V \simeq 4.237 E(b-y)$. The magnitudes of $A_V$ chosen for this figure represent the extremes of the values expected for our sample of nearby stars and are meant to illustrate the directionality of the effects of reddening as propagated through the $uvby\beta$ planes. Finally, note that for the early group (black vectors) the $A_V$ values are an order of magnitude larger and much higher than expected for our sample. Again, this is to illustrate the directionality of the reddening effect, which is particularly small for the early group, which relies on $c_1$, the Balmer discontinuity index, for temperature and on $\beta$, a color between two narrow-band filters with nearly the same central wavelength, for $\log{g}$.[]{data-label="fig:redvectors"}](f14.pdf){width="49.00000%"}
[**Uncertainties in Projected Rotational Velocities:**]{} The [@glebocki2005] compilation contains mean $v\sin{i}$ measurements, as well as individual measurements from multiple authors. Of the 3499 stars in our sub-sample of the HM98 catalog, 2547 stars have $v\sin{i}$ values based on 4893 individual $v\sin{i}$ measurements, 1849 of which have an accompanying measurement error. Of these measurements, 646 are for intermediate or early group stars, for which rotation corrections are performed in our method. The mean fractional error in $v\sin{i}$ for this subset of measurements is $\sim 13\%$. Calculating the atmospheric parameters for these stars, then performing the FB98 $v\sin{i}$ corrections using $v_\mathrm{rot}$ and $v_\mathrm{rot} \pm \sigma_{v_\mathrm{rot}}$, allows us to estimate the magnitude of the uncertainty in $T_\mathrm{eff}, \log{g}$ due to the uncertainties in the $v\sin{i}$ measurements. The resulting RMS errors in $T_\mathrm{eff}$ and $\log{g}$ are 0.7% and 0.01 dex, respectively. When $v\sin{i}$ measurements are not available, an average value based on the spectral type can be assumed, resulting in a somewhat larger error. The systematic effects of under-correcting for rotation are illustrated in Figure \[fig:rotation-vectors\].
[**Influence of Multiplicity:**]{} In a large study such as this one, a high fraction of stars are binaries or higher-order multiples. Slightly more than 30% of our sample stars are known to be members of multiple systems. We choose not to treat these stars differently, given the unknown multiplicity status of much of the sample, and caution our readers to use due care regarding this issue.
[**Influence of Spectral Peculiarities:**]{} Finally, early-type stars possess several peculiar subclasses (e.g. Ap, Bp, Am, etc. stars) for which anomalous behavior has been reported in the $uvby\beta$ system with respect to their “normal-type” counterparts. Some of these peculiarities have been linked to rotation, which we do account for. We note that peculiar subclasses constitute $\sim 4\%$ of our sample and that these stars could suffer unquantified errors in the determination of fundamental parameters when employing a broad methodology based on calibrations derived from mostly normal-type stars (see Tables \[table:teffcal\] & \[table:loggcal\] for a complete accounting of the spectral types used for calibrations). As these subclasses were included in the atmospheric parameter validation stage (§ \[sec:atmosphericparameters\]), and satisfactory accuracies were still obtained, we chose not to adjust our approach for these stars and estimate that the uncertainty introduced by their inclusion is negligible.
[**Final Assessment:**]{} Our final atmospheric parameter uncertainties are dominated by the systematic effects quantified in § \[subsec:teffvalidation\] and § \[subsec:loggvalidation\], with the additional effects outlined above contributing very little to the total uncertainty. The largest additional contributor comes from the photometric error. Adding in quadrature the sources $\sigma_\mathrm{sys}, \sigma_\mathrm{num}, \sigma_\mathrm{phot}, \sigma_\mathrm{[Fe/H]}, \sigma_{v\sin{i}}$ and $\sigma_\mathrm{A_V}$ results in final error estimates of 3.4% in $T_\mathrm{eff}$ and 0.14 dex in $\log{g}$.
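The quoted totals follow from adding the individual terms in quadrature; the short sketch below reproduces them using the values summarized in this section (the interpolation term is zero, the $A_V$ term in $T_\mathrm{eff}$ is taken at its $0.2\%$ upper limit, and multiplicity/peculiarity are not assigned a term).

```python
import numpy as np

# Fractional T_eff error terms [%]: systematics, photometry, [Fe/H], v sin i, A_V
teff_terms = [3.1, 0.9, 0.8, 0.7, 0.2]
# log g error terms [dex]: systematics, photometry, [Fe/H], v sin i, A_V
logg_terms = [0.116, 0.060, 0.060, 0.010, 0.004]

sigma_teff = np.sqrt(np.sum(np.square(teff_terms)))   # ~3.4 %
sigma_logg = np.sqrt(np.sum(np.square(logg_terms)))   # ~0.14 dex
```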
The use of $uvby\beta$ photometry to determine fundamental stellar parameters has been estimated in previous literature to lead to uncertainties of just 2.5% in $T_\text{eff}$ and 0.1 dex in $\log g$ [@asiain1997]; our assessment of the errors is somewhat higher.
The uncertainties that we derive in our Strömgren method work can be compared with those given by other methods. The Geneva photometric system ($U,B1,B2, V1, G$ filters), like the Strömgren system, has been used to derive $T_\mathrm{eff}, \log{g}$, and \[M/H\] values based on atmospheric grids [@kobinorth1990; @kunzli1997], with [@kunzli1997] finding errors of 150-250 K (a few percent) in $T_\mathrm{eff}$ and 0.1-0.15 dex in $\log g$, comparable to our values. From stellar model atmosphere fitting to high dispersion spectra, errors of 1-5% in $T_\mathrm{eff}$ and 0.05-0.15 dex (typically 0.1 dex) in $\log g$ are quoted for early type stars [e.g. @nieva2011], though systematic effects in $\log g$ of the order of an additional 0.1 dex may be present. [@wu2011] tabulate the dispersions in atmospheric parameters among many different studies, finding author-to-author values that differ for OBA stars by 300-5000 K in $T_\text{eff}$ (3-12%) and 0.2-0.6 dex in $\log{g}$ (cm/s$^2$), and for FGK stars by 40-100 K in $T_\text{eff}$ and 0.1-0.3 dex in $\log{g}$ (cm/s$^2$).
Age Estimation from Isochrones {#sec:ageestimation}
==============================
Selection of Evolutionary Models
--------------------------------
Once $T_\text{eff}$ and $\log g$ have been established, ages are determined through a Bayesian grid search of the fundamental parameter space encompassed by the evolutionary models. In this section we discuss the selection of evolution models, the Bayesian approach, numerical methods, and resulting age/mass uncertainties.
![Comparison of PARSEC isochrones (solid lines), Ekström isochrones in the rotating case (dashed lines), and Ekström isochrones in the non-rotating case (dotted lines). The solid black lines are evolutionary tracks for stars of intermediate-mass, from the PARSEC models. All evolutionary tracks plotted are for solar metallicity.[]{data-label="fig:isochrone-compare"}](f15.pdf){width="49.00000%"}
Two sets of isochrones are considered in this work. The model families are compared in Figure \[fig:isochrone-compare\]. The PARSEC solar-metallicity isochrones of [@bressan2012], hereafter B12, take into account in a self-consistent manner the pre-main-sequence phase of evolution. The PARSEC models are the most recent iteration of the Padova evolutionary models, with significant revisions to the major input physics such as the equation of state, opacities, nuclear reaction rates and networks, and the inclusion of microscopic diffusion. The models are also based on the new reference solar composition, $Z = 0.01524$ from [@caffau2011], but can be generated for a wide range of metallicities. The B12 models cover the mass range $0.1-12 M_\odot$.
PARSEC isochrones are attractive because early-type dwarfs evolve relatively rapidly, with the pre-main-sequence phase constituting a significant fraction of their lifetimes, i.e. $\tau_\text{PMS} / \tau_\text{MS}$ is larger than for stars of later types. For stars with effective temperatures in the range 6500 K - 25000 K (approximately spectral types B0-F5), the B12 models predict pre-main sequence lifetimes ranging from $\sim$ 0.2-40 Myr, main-sequence lifetimes from $\sim$ 14 Myr - 2.2 Gyr, and the ratio $\tau_\mathrm{PMS}/\tau_\mathrm{MS} \sim 1.6-2.4 \%$. A star of given initial mass can thus be followed consistently through the pre-MS, MS, and post-MS evolutionary stages. As a consequence, most points in $T_\text{eff}-\log{g}$ space will have both pre-ZAMS and post-ZAMS ages as possible solutions. Figure \[fig:evolutiont\] illustrates the evolution of atmospheric and corresponding photometric properties according to the PARSEC models.
The solar-metallicity isochrones of [@ekstrom2012], hereafter E12, also use updated opacities and nuclear reaction rates, and are the first to take into account the effects of rotation on global stellar properties at intermediate masses. They are available for both non-rotating stars and stars that commence their lives on the ZAMS with a rotational velocity of 40$\%$ of their critical rotational velocity ($v_\text{rot,i}/v_\text{crit}$ = 0.4); however, the E12 models do not take the pre-main sequence phase into account. The E12 models currently exist only for solar metallicity (Z=0.014 is used), but cover a wider range of masses ($0.8-120 M_\odot$).
The E12 models are attractive because they explicitly account for rotation, though at a fixed percentage of the breakup velocity. All outputs of stellar evolutionary models (e.g. lifetimes, evolutionary scenarios, nucleosynthesis) are affected by axial stellar rotation, which for massive stars enhances the MS lifetime by about 30$\%$ and may increase isochronal age estimates by about 25$\%$ [@meynet2000]. In terms of atmospheres, for A-type stars, stellar rotation increases the strength of the Balmer discontinuity relative to a non-rotating star with the same color index [@maeder1970]. In the E12 models, the convective overshoot parameter was selected to reproduce the observed main sequence width at intermediate masses, which is important for our aim of distinguishing the ages of many field stars clustered on the main sequence with relatively large uncertainties in their surface gravities. Figure \[fig:isochrone-compare\] shows, however, that there is close agreement between the B12 and the rotating E12 models. Thus, there is not a significant difference between the two model sets in regards to the predicted width of the MS band.
It should be noted that the $uvby\beta$ grids of [@castelli2006; @castelli2004] were generated assuming a solar metallicity value of Z=0.017. As discussed elsewhere, metallicity effects are not the dominant uncertainty in our methods, and we are thus not concerned about the very small metallicity differences between the two model isochrone sets, nor about the third metallicity value assumed in the model atmospheres.
In matching data to evolutionary model grids, a general issue is that nearly any given point in an H-R diagram (or equivalently in $T_\mathrm{eff}$-$\log g$ space), can be reproduced by multiple combinations of stellar age and mass. Bayesian inference can be used to determine the relative likelihoods of these combinations, incorporating prior knowledge about the distributions of the stellar parameters being estimated.
Bayesian Age Estimation
-----------------------
A simplistic method for determining the theoretical age and mass for a star on the Hertzsprung-Russell (H-R) diagram is interpolation between isochrones or evolutionary models. A problem with this approach, as pointed out by [@gtakeda2007; @pont2004], is that interpolation between isochrones accounts for neither the nonlinear mapping of time onto the H-R diagram nor the non-uniform distribution of stellar masses observed in the Galaxy. As a consequence, straightforward interpolation between isochrones results in an age distribution for field stars that is biased towards older ages compared to the distribution predicted by stellar evolutionary theory.
Bayesian inference of stellar age and mass aims to eliminate such a bias by accounting for observationally and/or theoretically motivated distribution functions for the physical parameters of interest. As an example, for a given point with error bars on the H-R diagram, a lower stellar mass should be considered more likely due to the initial mass function. Likewise, due to the longer main-sequence timescales for lower mass stars, a star that is observed to have evolved off the main sequence should have a probability distribution in mass that is skewed towards higher masses, i.e. because higher mass stars spend a more significant fraction of their entire lifetime in the post-MS stage.
### Bayes Formalism {#subsubsec:formalism}
Bayesian estimation of the physical parameters can proceed from comparison of the data with a selection of models. Bayes’ Theorem states:
$$P(\mathrm{model|data}) \propto P(\mathrm{data|model}) \times P(\mathrm{model})$$
The probability of a model given a set of data is proportional to the product of the probability of the data given the model and the probability of the model itself. In the language of Bayesian statistics, this is expressed as:
$$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior}.$$
Our model is the set of stellar parameters, age ($\tau$) and mass ($M_*$), and our data are the measured effective temperature, $T_\mathrm{eff}$, and surface gravity, $\log g$, for a given star. At any given combination of age and mass, the predicted $T_\mathrm{eff}$ and $\log g$ are provided by stellar evolutionary models. The $\chi^2$ statistic for an individual model can be computed as follows:
$$\begin{aligned}
\chi^2 (\tau, M_*) &= \sum \frac{(O-E)^2}{\sigma^2} \\
&= \frac{[(T_\mathrm{eff})_O-(T_\mathrm{eff})_E]^2}{\sigma_{T_\mathrm{eff}}^2} + \frac{[(\log{g})_O-(\log{g})_E]^2}{\sigma_{\log{g}}^2},\end{aligned}$$
where the subscripts O and E refer to the observed and expected (or model) quantities, respectively, and $\sigma$ is the measurement error in the relevant quantity.
Assuming Gaussian measurement errors, the relative likelihood of the observed ($T_\mathrm{eff}$, $\log g$) for a specific ($\tau$, $M_*$) combination is:
$$\begin{aligned}
P(\mathrm{data|model}) &= P(T_\mathrm{eff, obs}, \log{g}_\mathrm{obs} | \tau, M_*) \\
&\propto \exp\left[ -\frac{1}{2}\chi^2(\tau, M_*) \right]. \end{aligned}$$
Finally, the joint posterior probability distribution for a model with age $\tau$ and mass $M_*$, is given by:
$$\begin{aligned}
P(\mathrm{model|data}) &= P(\tau, M_*|T_\mathrm{eff, obs}, \log{g}_\mathrm{obs}) \\
&\propto \exp\left[ -\frac{1}{2}\chi^2(\tau, M_*) \right] P(\tau)P(M_*),\end{aligned}$$
where $P(\tau)$ and $P(M_*)$ are the prior probability distributions in age and mass, respectively. The prior probabilities of age and mass are assumed to be independent such that $P(\tau,M_*)=P(\tau)P(M_*)$.
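For concreteness, the evaluation of this posterior at a single ($\tau$, $M_*$) grid point can be written in a few lines of Python. The following is a minimal sketch, with illustrative function and argument names, assuming the model-predicted $T_\mathrm{eff}$ and $\log g$ and the prior weights at that grid point are already in hand.

``` python
import numpy as np

def log_posterior_point(teff_obs, sig_teff, logg_obs, sig_logg,
                        teff_model, logg_model, prior_age, prior_mass):
    """Unnormalized log-posterior for a single (tau, M_*) combination.

    teff_model, logg_model : model-predicted values at this (tau, M_*)
    prior_age, prior_mass  : P(tau) and P(M_*) evaluated at this grid point
    """
    chi2 = ((teff_obs - teff_model) / sig_teff) ** 2 \
         + ((logg_obs - logg_model) / sig_logg) ** 2
    return -0.5 * chi2 + np.log(prior_age) + np.log(prior_mass)
```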
### Age and Mass Prior Probability Distribution Functions {#subsubsec:priors}
Standard practice in the Bayesian estimation of stellar ages is to assume an age prior that is uniform in linear age, e.g. [@pont2004; @jorgensen2005; @gtakeda2007; @nielsen2013]. There are two main justifications for choosing a uniform age prior: 1) it is the least restrictive choice of prior, and 2) at this stage the assumption is consistent with observations that suggest a fairly constant star formation rate in the solar neighborhood over the past 2 Gyr [@cignoni2006].
Since the evolutionary models are logarithmically gridded in age, the relative probability of age bin $i$ is given by the bin width in linear age divided by the total range in linear age:
$$P(\log(\tau_{i}) \leq \log(\tau) < \log(\tau_{i+1})) = \frac{\tau_{i+1}-\tau_i}{\tau_n - \tau_0},$$
where $\tau_n$ and $\tau_0$ are the largest and smallest allowed ages, respectively. This weighting scheme gives a uniform probability distribution in linear age.
As noted by [@gtakeda2007], it is important to understand that assuming a flat prior in linear age corresponds to a highly non-uniform prior in the measured quantities of $\log T_\mathrm{eff}$ and $\log g$. This is due to the non-linear mapping between these measurable quantities and the physical quantities of mass and age in evolutionary models. Indeed, the ability of the Bayesian approach to implicitly account for this effect is considered one of its main strengths.
As is standard in the Bayesian estimation of stellar masses, an initial mass function (IMF) is assumed for the prior probability distribution of all possible stellar masses. Several authors point out that Bayesian estimates of physical parameters are relatively insensitive to the mass prior (i.e. the precise form of the IMF assumed), especially in the case of parameter determination over a small or moderate range in mass space. For this work considering BAF stars, the power law IMF of [@salpeter1955] is assumed for the mass prior, so that the relative probability of mass bin $i$ is given by the following expression:
$$P(M_i \leq M < M_{i+1}) \propto M_i^{-2.35}.$$
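Both priors are simple to construct numerically. The short sketch below (with illustrative grid limits and spacings) weights each logarithmic age bin by its width in linear age and each mass bin by the Salpeter power law.

``` python
import numpy as np

# Logarithmically spaced grid of log10(age/yr) and uniform grid in mass
# (grid limits and spacings are illustrative)
log_age = np.linspace(6.0, 10.0, 321)
mass = np.linspace(1.0, 10.0, 321)            # M_sun

# Age prior: uniform in linear age, so each log-age bin is weighted by its
# linear width (here the grid points are treated as bin edges)
lin_age = 10.0 ** log_age
age_prior = np.diff(lin_age)
age_prior /= age_prior.sum()

# Mass prior: Salpeter (1955) IMF, dN/dM proportional to M^-2.35
mass_prior = mass ** -2.35
mass_prior /= mass_prior.sum()
```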
### Numerical Methods {#subsubsec:numericalmethods}
As [@gtakeda2007] point out, in Bayesian age estimation interpolation should be performed only along isochrones and not between them. To avoid biasing our derived physical parameters from interpolating between isochrones, we generated a dense grid of PARSEC models. The evolutionary models were acquired with a spacing of 0.0125 dex in log(age/yr) and 0.0001 $M_\odot$ in mass. All probabilities were then computed on a 321 $\times$ 321 grid ranging from log(age/yr)=6 to 10 and from 1-10 $M_\odot$.
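A schematic of the grid evaluation and the subsequent marginalization, assuming the model-predicted $T_\mathrm{eff}$ and $\log g$ have been tabulated on the (age, mass) grid (the array names below are illustrative), is:

``` python
import numpy as np

def marginalized_pdfs(teff_obs, sig_teff, logg_obs, sig_logg,
                      teff_grid, logg_grid, age_prior, mass_prior):
    """Joint and 1-D marginalized posteriors on an (age, mass) model grid.

    teff_grid, logg_grid : model predictions with shape (n_age, n_mass)
    age_prior, mass_prior: 1-D prior weights along the age and mass axes
    """
    chi2 = ((teff_obs - teff_grid) / sig_teff) ** 2 \
         + ((logg_obs - logg_grid) / sig_logg) ** 2
    # subtract the minimum chi^2 for numerical stability before exponentiating
    joint = np.exp(-0.5 * (chi2 - chi2.min())) * np.outer(age_prior, mass_prior)
    joint /= joint.sum()
    age_pdf = joint.sum(axis=1)    # marginalize over mass
    mass_pdf = joint.sum(axis=0)   # marginalize over age
    return joint, age_pdf, mass_pdf
```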
### Age and Mass Uncertainties
Confidence intervals in age and mass are determined from the one-dimensional marginalized posterior probability distributions for each parameter. Since the marginalized probability distributions can often be asymmetric, the region chosen for determining confidence intervals is that of the Highest Posterior Density (HPD). This method selects the smallest range in a parameter that encompasses $N \%$ of the probability. The HPD method is discussed in more detail in the appendix.
Notably, uncertainties in the ages depend on where in the $\log g$ and $\log T_\text{eff}$ parameter space the star is located, and on whether a pre-main sequence or a post-zero-age-main sequence age is more appropriate. In the pre-main sequence phase, both atmospheric parameters are important in age determination. For post-ZAMS stars, however, the relative importance of the two parameters changes. When stars have just left the ZAMS and are starting to evolve through the MS phase, $\log g$ must be known precisely (within the range of $\sim$4.3 to 4.45) in order to derive a good age estimate; the age at which this departure from the ZAMS occurs is a function of mass (earlier for more massive stars). Otherwise, once late B, A, and early F stars are comfortably settled on the MS, their evolution is at roughly constant temperature (see Figure \[fig:isochrone-compare\]) and so the gravity precision becomes far less important, with temperature precision now critical.
The Methodology Tested on Open Clusters {#subsec:openclustertests}
=======================================
![Histograms of the visual extinction, $A_V$, in magnitudes for individual members of the four open clusters considered here. The extinction values are calculated using the relation $A_V=4.237E(b-y)$, with the $(b-y)$ color excesses computed as described in § \[subsec:reddening\].[]{data-label="fig:clusters-av"}](f16.pdf){width="45.00000%"}
![image](f17.pdf){width="49.00000%"} ![image](f18.pdf){width="49.00000%"} ![image](f19.pdf){width="49.00000%"} ![image](f20.pdf){width="49.00000%"}
An important test of our methods is to assess the ages derived from our combination of $uvby\beta$ photometry, atmospheric parameter placement, and comparison to evolutionary models relative to the accepted ages for members of well-studied open clusters. We investigate four such clusters with rigorous age assessment in previous literature: IC 2602, $\alpha$ Persei, the Pleiades, and the Hyades.
The youngest ($\lesssim 20-30$ Myr) open clusters may be age-dated kinematically, by tracing the space motions of individual members back to the time when the stars were in closest proximity to one another [@soderblom2010]. After $\lesssim$ 1 galactic rotation period, however, individual member motions are randomized to the extent of limiting the utility of the kinematic method. Beyond $\sim 20-30$ Myr, the most precise open cluster ages come from the lithium depletion boundary (LDB) technique. This method uses the lithium abundances, which diminish predictably with time, of the lowest mass cluster members to converge on precise ($\sim$10%) ages. LDB ages are available for IC 2602: $\tau = 46^{+6}_{-5}$ Myr [@dobbie2010], $\alpha$ Per: $\tau = 90 \pm 10$ Myr [@stauffer1999], and the Pleiades: $\tau = 125 \pm 8$ Myr [@stauffer1998]. The LDB technique does not work past $\sim$ 250 Myr, so the Hyades is dated based on isochrone fitting in the H-R diagram using stars with high precision distance measurements, with currently accepted age $625 \pm 50$ Myr [@perryman1998].
Process
-------
Membership probabilities, $uvby\beta$ photometry, and projected rotational velocities are obtained for members of these open clusters via the WEBDA open cluster database[^5]. For the Pleiades, membership information was augmented and cross-referenced with [@stauffer2007]. Both individual $uvby\beta$ measurements and calculations of the mean and scatter from the literature measurements are available from WEBDA in each of the photometric indices. As the methodology requires accurate classification of the stars according to regions of the H-R diagram, we inspected the spectral types and $\beta$ indices and considered only spectral types B0-F5 and luminosity classes III-V for our open cluster tests.
In contrast to the field stars studied in the next section, the open clusters studied here are distant enough for interstellar reddening to significantly affect the derived stellar parameters. The photometry is thus dereddened as described in § \[subsec:reddening\]. Figure \[fig:clusters-av\] shows the histograms of the visual extinction $A_V$ for each cluster, with the impact of extinction on the atmospheric parameter determination illustrated above in Figure \[fig:redvectors\].
In many cases, individual cluster stars have multiple measurements of $v \sin i$ in the WEBDA database and we select the measurement from whichever reference is the most inclusive of early-type members. In very few cases does a cluster member have no rotational velocity measurement present in the database; for these stars we assume the mean $v \sin i$ according to the $T_\mathrm{eff}-v\sin{i}$ relation presented in Appendix B of [@gray2005book].
Atmospheric parameters are determined for each cluster member, as described in § \[sec:atmosphericparameters\]. Adopting our knowledge from the comparison to fundamental and semi-fundamental atmospheric parameters (§ \[subsec:teffvalidation\] & § \[subsec:loggvalidation\]), a uniform 1.6$\%$ shift towards cooler $T_\mathrm{eff}$ was applied to all temperatures derived from the model color grids to account for systematic effects in those grids. The FB98 $v \sin i$ corrections were then applied to the atmospheric parameters. The $v\sin{i}$ corrections prove to be a crucial step in achieving accurate ages for the open clusters (particularly for the Pleiades).
Results
-------
The results of applying our procedures to open cluster samples appear in Figure \[fig:cluster-hrds\]. While the exact cause(s) of the remaining scatter observed in the empirical isochrones for each cluster is not known, possible contributors may be systematic or astrophysical in nature, or due to incorrect membership information. Multiplicity, variability, and spectral peculiarities were among the causes investigated for this scatter, but the exclusion of objects on the basis of these criteria did not improve age estimation for any individual cluster. The number of stars falling below the theoretical ZAMS, particularly for stars with $\log{T_\mathrm{eff}} \lesssim 3.9$, is possibly systematic and may be due to an incomplete treatment of convection by the ATLAS9 models. This source of uncertainty is discussed in further detail in § \[subsec:belowzams\].
For each cluster, we publish the individual stars considered, along with relevant parameters, in Tables \[table:ic2602\], \[table:alphaper\], \[table:pleiades\], & \[table:hyades\]. In each table, the spectral types and $v\sin{i}$ measurements are from WEBDA, while the dereddened $uvby\beta$ photometry and atmospheric parameters are from this work.
### Ages from Bayesian Inference
Once atmospheric parameters have been determined, age determination proceeds as outlined in § \[sec:ageestimation\]. For each individual cluster member, the $\chi^2$, likelihood, and posterior probability distribution are calculated for each point on a grid ranging from log(age/yr)=6.5 to 10, with masses restricted to $1 \leq M/M_\odot \leq 10$. The resolution of the grid is 0.0175 dex in log(age/yr) and 0.045 $M_\odot$ in mass. The 1-D marginalized posterior PDFs for each individual cluster member are normalized and then summed to obtain an overall posterior PDF in age for the cluster as a whole. This composite posterior PDF is also normalized prior to the determination of statistical measures (mean, median, confidence intervals). Additionally, the posterior PDFs in log(age) for each member are multiplied to obtain the total probability in each log(age) bin that all members have a single age. While the summed PDF depicts better the behavior of individual stars or groups of stars, the multiplied PDF is best for assigning a single age to the cluster and evaluating any potential systematics of the isochrones themselves.
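A possible implementation of the two ways of combining the member PDFs is sketched below; accumulating the product in log space is an implementation detail added here to avoid numerical underflow.

``` python
import numpy as np

def combine_member_pdfs(member_pdfs):
    """Summed and multiplied composite age PDFs for a cluster.

    member_pdfs : array of shape (n_stars, n_age_bins); each row is a
                  normalized 1-D marginalized age PDF of one member.
    """
    summed = member_pdfs.sum(axis=0)
    summed /= summed.sum()

    # product of member PDFs, accumulated in log space for stability
    log_prod = np.log(np.clip(member_pdfs, 1e-300, None)).sum(axis=0)
    multiplied = np.exp(log_prod - log_prod.max())
    multiplied /= multiplied.sum()
    return summed, multiplied
```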
As shown in Figure \[fig:cluster-hist\], the summed age PDFs for each cluster generally follow the same behavior: (1) the peaks are largely determined by the early group (B-type) stars, which have well-defined ages due to their unambiguous locations in the $T_\mathrm{eff}-\log{g}$ diagram; (2) examining the age posteriors for individual stars, the intermediate group stars tend to overpredict the cluster age relative to the early group stars, and the late group stars do the same relative to the intermediate group; the relatively numerous and broad PDFs of the later group stars thus produce a large tail at older ages in each of the summed PDFs. For IC 2602 and the Pleiades, the multiplied PDFs have median ages and uncertainties that are in close agreement with the literature ages. Notably, the results of the open cluster tests favor an age for the Hyades that is older ($\sim$ 800 Myr) than the accepted value, though not quite as old as the recent estimate of 950$\pm$100 Myr from [@brandt2015]. The Bayesian age analysis also favors an age for $\alpha$ Per that is younger ($\sim$ 70 Myr) than the accepted value based on lithium depletion, but older than the canonical 50 Myr from the Upper Main Sequence Turnoff [@mermilliod1981]. In an appendix, we perform the same analysis for the open clusters on p($\tau$) rather than p($\log{\tau}$), yielding similar results.
![image](openclusters-log-v1.pdf){width="90.00000%"}
The results of the open cluster test are presented in Table \[table:clusters\]. Note that all statistical measures of the marginalized age PDFs quoted hereafter are from PDFs normalized in log(age), as opposed to converting to linear age and then normalizing. This choice was made because 1) the isochrones are provided in uniform logarithmic age bins, and 2) the marginalized PDFs of individual stars are more symmetric (and thus better characterized by traditional statistical measures) in log(age) than in linear age. Notably, the median age is the same regardless of whether one analyzes prob($\log \tau$) or prob($\tau$). This issue is discussed further in an appendix. In general, there is very close agreement between the Bayesian-method ages from the B12 and the rotating E12 models. For IC 2602 and the Pleiades, our analysis yields median cluster ages (as determined from the multiplied PDFs) that are within 1-$\sigma$ of accepted values, regardless of the evolutionary models considered. The Bayesian analysis performed with the PARSEC models favors an age for $\alpha$ Persei that is $\sim$ 20% younger than the currently accepted value, and an age for the Hyades that is $\sim$ 20% older.
[cccccccccccc]{}
IC 2602 & 46$^{+6}_{-5}$ & [@ekstrom2012] & 80 & 32-344 & 42 & 41-46 & 39\
& & [@bressan2012] & 79 & 27-284 & 46 & 44-50 & 37\
$\alpha$ Persei & 90$^{+10}_{-10}$ & [@ekstrom2012] & 234 & 83-1618 & 71 & 68-74 & 50\
& & [@bressan2012] & 226 & 74-1500 & 70 & 69-74 & 48\
Pleiades & 125$^{+8}_{-8}$ & [@ekstrom2012] & 277 & 81-899 & 128 & 126-130 & 126\
& & [@bressan2012] & 271 & 85-948 & 123 & 121-126 & 115\
Hyades & 625$^{+50}_{-50}$ & [@ekstrom2012] & 872 & 518-1940 & 827 & 812-837 & 631\
& & [@bressan2012] & 844 & 487-1804 & 764 & 747-780 & 501
### Ages from Isochrone Fitting
As a final test of the two sets of evolutionary models, we used $\chi^2$-minimization to find the best-fitting isochrone for each cluster. By fitting all members of a cluster simultaneously, we are able to assign a single age to all stars, test the accuracy of the isochrones for stellar ensembles, and test the ability of our $uvby\beta$ method to reproduce the shapes of coeval stellar populations in $T_\mathrm{eff}-\log{g}$ space. For this exercise, we did not interpolate between isochrones, choosing instead to use the default spacing for each set of models (0.1 dex and 0.0125 dex in log(age/yr) for the E12 and B12 models, respectively). For the best results, we consider only the sections of the isochrones with $\log{g}$ between 3.5 and 5.0 dex. The results of this exercise are shown in Figure \[fig:bestfit-isochrones\]. The best-fitting E12 isochrone (including rotation) is consistent with accepted ages to within 1% for the Pleiades and Hyades, $\sim 15\%$ for IC 2602, and $\sim 44\%$ for $\alpha$-Per. For the B12 models, the best-fit isochrones are consistent with accepted ages to $\sim 8\%$ for the Pleiades, $\sim 20\%$ for the Hyades and IC 2602, and $\sim 47\%$ for $\alpha$-Per. The B12 models produce systematically younger ages than the E12 models, by a fractional amount that increases with absolute age.
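One plausible realization of such an ensemble fit, sketched here under the assumption that each star is compared to its nearest point on a given isochrone (a detail not specified above), is:

``` python
import numpy as np

def best_fit_isochrone(teff_obs, sig_teff, logg_obs, sig_logg, isochrones):
    """Single best-fitting isochrone age for an ensemble of cluster members.

    isochrones : dict mapping log(age/yr) -> (teff_iso, logg_iso) arrays
                 sampled along the isochrone, restricted to 3.5 < log g < 5.0.
    Each star contributes the chi^2 of its nearest isochrone point.
    """
    best_log_age, best_chi2 = None, np.inf
    for log_age, (teff_iso, logg_iso) in isochrones.items():
        chi2 = 0.0
        for t, st, g, sg in zip(teff_obs, sig_teff, logg_obs, sig_logg):
            d2 = ((t - teff_iso) / st) ** 2 + ((g - logg_iso) / sg) ** 2
            chi2 += d2.min()
        if chi2 < best_chi2:
            best_log_age, best_chi2 = log_age, chi2
    return best_log_age, best_chi2
```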
As detailed above, the open cluster tests revealed that our method is able to distinguish between ensembles of differing ages, from tens to hundreds of Myr, at least in a statistical sense. For individual stars, large uncertainties may remain, particularly for the later types, owing almost entirely to the difficulty in determining both precise and accurate surface gravities. The open cluster tests also demonstrate the importance of a $v \sin i$ correction for early (B0-A0) and intermediate (A0-A3) group stars in determining accurate stellar parameters. While the $v \sin i$ correction was not applied to the late group (A3-F5 in this case) stars, it is likely that stars in this group experience non-negligible gravity darkening. The typically unknown inclination angle, $i$, also contributes significant uncertainties in derived stellar parameters and hence ages.
![image](f25.pdf){width="45.00000%"} ![image](f26.pdf){width="45.00000%"} ![image](f27.pdf){width="45.00000%"} ![image](f28.pdf){width="45.00000%"} ![image](f29.pdf){width="45.00000%"} ![image](f30.pdf){width="45.00000%"} ![image](f31.pdf){width="45.00000%"} ![image](f32.pdf){width="45.00000%"}
The Methodology Applied to Nearby Field Stars {#subsec:fieldstars}
=============================================
![image](f33.pdf){width="90.00000%"}
![Histograms of the uncertainties (in mag) for different $uvby\beta$ indices for the sample of $\sim$ 3500 field stars discussed in § \[subsec:fieldstars\]. The solid lines in each plot indicate the position of the mean uncertainty in that parameter. Uncertainties in $a_0$ and $r^*$ are calculated according to Eqns. (13) & (14).[]{data-label="fig:hm98err"}](f34.pdf){width="45.00000%"}
![image](f35.pdf){width="99.00000%"}
As an application of our developed, calibrated, validated, and tested methodology, we consider the complete HM98 photometric catalog of 63,313 stars. We are interested only in nearby stars that are potential targets for high contrast imaging campaigns, and for which interstellar extinction is negligible. We thus perform a distance cut at 100 pc, using distances from the XHIP catalog [@anderson2012]. We perform an additional cut in spectral type (using information from XHIP), considering only B0-F5 stars belonging to luminosity classes IV,V, because this is the range for which our method has been shown to work with high fidelity and additionally these are the primary stars of interest to near-term high-contrast imaging surveys. In total, we are left with 3499 stars. Figure \[fig:sample-specs\] shows the distribution of our field star sample in spectral type, distance, $A_V$, \[Fe/H\], and $v\sin{i}$. The distributions of photometric errors in given $uvby\beta$ indices are shown in Figure \[fig:hm98err\], and the mean errors in each index are summarized as follows: $\left \langle \sigma_{b-y} \right \rangle, \left \langle \sigma_{m_1} \right \rangle, \left \langle \sigma_{c_1} \right \rangle, \left \langle \sigma_{\beta} \right \rangle, \left \langle \sigma_{a_0} \right \rangle, \left \langle \sigma_{r^*} \right \rangle = 0.003, 0.004, 0.005, 0.004, 0.005, 0.005$ mag.
Projected rotational velocities for the sample of nearby field stars are sourced from the [@glebocki2005] compilation, which contains $v\sin{i}$ measurements for 2874 of the stars, or $\sim 82\%$ of the sample. For an additional 8 stars $v\sin{i}$ measurements are collected from [@zorec2012], and for another 5 stars $v\sin{i}$ values come from [@schroeder2009]. For the remaining stars without $v\sin{i}$ measurements, a projected rotational velocity is assumed according to the mean $v\sin{i}-T_\mathrm{eff}$ relation from Appendix B of [@gray2005book]. Atmospheric parameters are corrected for rotational velocity effects as outlined in § \[subsec:vsinicorrection\].
Atmospheric parameter determination was not possible for six stars, due to discrepant positions in the relevant $uvby\beta$ planes: HIP 8016 (a B9V Algol-type eclipsing binary), HIP 12887 (a poorly studied F3V star), HIP 36850 (a well-studied A1V+A2Vm double star system), HIP 85792 (a well-studied Be star, spectral type B2Vne), HIP 97962 (a moderately studied B9V star), and HIP 109745 (an A0III star, classified in XHIP as an A1IV star). Consequently, ages and masses were not computed for these stars.
An H-R diagram of the entire sample is shown in Figure \[fig:hrd\], with the evolutionary models of [@bressan2012] overlaid. Equipped with atmospheric parameters for the remaining 3493 stars, and assuming uniform uncertainties of 3.4% and 0.14 dex in $T_\mathrm{eff}$ and $\log{g}$, respectively, ages and masses were computed via the process outlined in § \[sec:ageestimation\]. Posterior probabilities were calculated on a uniform 321$\times$321 grid of the [@bressan2012] models, gridded from 1 Myr-10 Gyr in steps of 0.0125 dex in log(age), and from 1-10$M_\odot$ in steps of 0.028$M_\odot$. As the [@bressan2012] models exist for high resolution timesteps, no interpolation between isochrones was required.
From the 2D joint posterior PDF, we obtain the marginalized 1D PDFs in age and mass, from which we compute the mean (expected value), median, mode (most probable value), as well as 68% and 95% confidence intervals. Interpolated ages and masses are also included, and these values may be preferred, particularly for objects with an interpolated age $\lesssim 10^8$ yr and a $\log{g}$ placing them near the ZAMS (see § \[subsec:belowzams\] for more detail). The table of ages and masses for all 3493 stars, including our newly derived atmospheric parameters, is available as an electronic table, and a portion (sorted in ascending age) is presented here in Table \[table:fieldstars\]. In rare instances (for $\sim 5\%$ of the sample), true 68% and 95% confidence intervals were not obtained due to numerical precision, the star's location near the edge of the computational grid, or some combination of the two effects. In these cases the actual confidence interval quoted is noted as a flag in the electronic table.
[ccccccccccccccc]{}
65474 & 24718 & 4.00 & 6 & 7 & 9 & 5-12 & 2-14 & 13 & 9.59 & 9.61 & 9.62 & 9.4-9.9 & 9.2-10.0 & 10.26\
61585 & 21790 & 4.32 & 6 & 7 & 11 & 4-18 & 1-21 & 1 & 7.53 & 7.52 & 7.48 & 7.3-7.7 & 7.1-8.0 & 7.34\
61199 & 16792 & 4.18 & 18 & 22 & 33 & 13-53 & 3-60 & 36 & 4.84 & 4.83 & 4.78 & 4.6-5.0 & 4.5-5.2 & 4.95\
60718 & 16605 & 4.35 & 19 & 23 & 35 & 13-55 & 3-61 & 1 & 4.75 & 4.74 & 4.70 & 4.6-4.9 & 4.4-5.1 & 4.58\
60000 & 15567 & 4.12 & 21 & 26 & 40 & 14-65 & 4-77 & 60 & 4.27 & 4.26 & 4.22 & 4.1-4.4 & 4.0-4.6 & 4.48\
100751 & 17711 & 3.94 & 23 & 29 & 43 & 20-50 & 5-52 & 48 & 5.41 & 5.42 & 5.35 & 5.1-5.6 & 5.0-5.9 & 5.91\
23767 & 16924 & 4.10 & 23 & 30 & 44 & 18-56 & 4-61 & 46 & 4.96 & 4.95 & 4.92 & 4.7-5.2 & 4.5-5.4 & 5.14\
92855 & 19192 & 4.26 & 24 & 29 & 34 & 23-38 & 8-40 & 8 & 6.39 & 6.37 & 6.25 & 6.0-6.6 & 5.8-7.1 & 5.95\
79992 & 14947 & 3.99 & 26 & 31 & 48 & 18-78 & 4-89 & 88 & 4.01 & 4.00 & 3.97 & 3.8-4.1 & 3.7-4.3 & 4.45
As with the open clusters, we can sum the individual, normalized PDFs in age to produce composite PDFs for various subsets of our sample. Figure \[fig:compositeage\] depicts the composite age PDF for our entire sample, as well as age PDFs for the subsets of B0-B9, A0-A4, A5-A9, and F0-F5 stars. From these PDFs we can ascertain the statistical properties of these subsets of solar neighborhood stars, which are presented in Table \[table:compositeagepdfs\].
![Normalized composite age PDFs for our sample of field B0-F5 stars within 100 pc. The normalized composite PDFs are created by summing the normalized, 1D marginalized age PDFs of individual stars in a given spectral type grouping. The black curve represents the composite PDF for all spectral types, while the colored curves represent the composite PDFs for the spectral type groups B0-B9, A0-A4, A5-A9, F0-F5 (see legend). Circles represent the expectation values of the composite PDFs, while squares represent the medians. The solid and dashed lines represent the 68% and 95% confidence intervals, respectively, of the composite PDFs. The statistical measures for these composite PDFs are also presented in Table \[table:compositeagepdfs\].[]{data-label="fig:compositeage"}](f36.pdf){width="45.00000%"}
[cccccc]{} B0-B9 & 93 & 122 & 147 & 56-316 & 8-410\
A0-A4 & 296 & 365 & 392 & 200-794 & 39-1090\
A5-A9 & 572 & 750 & 854 & 434-1372 & 82-1884\
F0-F5 & 1554 & 1884 & 2024 & 1000-4217 & 307-6879\
\[table:compositeagepdfs\]
Empirical Mass-Age Relation {#subsec:massagerelation}
---------------------------
From our newly derived set of ages and masses of solar-neighborhood B0-F5 stars, we can determine an empirical mass-age relation. Using the mean ages and masses for all stars in our sample, we performed a linear least squares fit using the NumPy polyfit routine, yielding the following relation, valid for stars $1.04<M/M_\odot<9.6$:
$$\log(\mathrm{age/yr}) = 9.532 - 2.929 \log\left ( \frac{M}{M_\odot}\right ).$$
The RMS error between the data and this relation is a fairly constant 0.225 dex as a function of stellar mass.
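The fit itself reduces to a single call to `numpy.polyfit` in log-log space; a minimal sketch with placeholder data (the actual fit uses the mean ages and masses of the full catalog) is:

``` python
import numpy as np

# Illustrative stand-ins for the catalog's mean masses (M_sun) and ages (yr);
# the published relation is derived from the mean values for all sample stars.
mass = np.array([1.2, 1.6, 2.1, 3.0, 5.0, 8.0])
age = np.array([3.1e9, 1.6e9, 8.0e8, 3.2e8, 9.0e7, 3.0e7])

log_mass, log_age = np.log10(mass), np.log10(age)
slope, intercept = np.polyfit(log_mass, log_age, deg=1)
rms = np.std(log_age - (intercept + slope * log_mass))
print(f"log(age/yr) = {intercept:.3f} {slope:+.3f} log(M/Msun); RMS = {rms:.3f} dex")
```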
![image](f37.pdf){width="49.00000%"} ![image](f38.pdf){width="49.00000%"}
Empirical Spectral-Type-Age/Mass Relations {#subsec:sptagerelation}
------------------------------------------
We can also derive empirical spectral-type-age and spectral-type-mass relations for the solar neighborhood, using the mean ages and masses derived from our 1D marginalized posterior PDFs, together with spectral type information from XHIP. These relations are plotted in Figure \[fig:sptrelations\], and summarized in Table \[table:sptagerelation\].
[cccccc]{} B0 & 19 & & 4.75 & & 1\
B1 & 6 & & 9.59 & & 1\
B2 & 15 & 13 & 6.96 & 0.81 & 2\
B3 & 41 & 16 & 5.22 & 0.50 & 3\
B4 & 26 & 12 & 4.94 & 0.59 & 4\
B5 & 44 & 16 & 3.94 & 0.49 & 5\
B6 & 84 & 51 & 3.69 & 0.23 & 4\
B7 & 140 & 209 & 3.23 & 0.60 & 13\
B8 & 99 & 43 & 3.28 & 0.38 & 18\
B9 & 154 & 86 & 2.88 & 0.88 & 67\
A0 & 285 & 437 & 2.47 & 0.40 & 120\
A1 & 313 & 217 & 2.23 & 0.29 & 132\
A2 & 373 & 320 & 2.11 & 0.30 & 144\
A3 & 462 & 412 & 2.07 & 0.96 & 100\
A4 & 540 & 333 & 1.84 & 0.19 & 37\
A5 & 514 & 350 & 1.86 & 0.81 & 81\
A6 & 628 & 265 & 1.85 & 0.49 & 46\
A7 & 574 & 262 & 1.78 & 0.30 & 79\
A8 & 642 & 272 & 1.64 & 0.11 & 62\
A9 & 800 & 339 & 1.62 & 0.21 & 102\
F0 & 994 & 544 & 1.52 & 0.19 & 324\
F1 & 948 & 352 & 1.51 & 0.13 & 68\
F2 & 1280 & 526 & 1.42 & 0.19 & 441\
F3 & 1687 & 633 & 1.34 & 0.23 & 605\
F4 & 1856 & 600 & 1.30 & 0.12 & 129\
F5 & 2326 & 697 & 1.27 & 0.18 & 905\
\[table:sptagerelation\]
Discussion
==========
The precision of the age-dating method described here relies on the use of Strömgren $uvby\beta$ photometry to finely distinguish stellar atmosphere parameters and compare them to isochrones from stellar evolution models. For ages $\leq$ 10 Myr and $\gtrsim$ 100 Myr, in particular, there is rapid evolution of $\log{T_\text{eff}}$ and $\log{g}$ for intermediate-mass stars (see Figure \[fig:evolutiont\]). This enables greater accuracy in age determination through isochrone placement for stars in this mass and age range. Fundamentally, our results rely on the accuracy of both the stellar evolution models and the stellar atmosphere models that we have adopted. Accuracy is further set by the precision of the photometry, the derived atmospheric parameters, the calibration of the isochrones, and the ability to determine whether an individual star is contracting onto the main sequence or expanding off of it. By using isochrones that include both pre-MS and post-MS evolution in a self-consistent manner [@bressan2012], we can determine pre-ZAMS in addition to post-ZAMS ages for every data point in ($T_\mathrm{eff}$, $\log{g}$) space.
Above, we have described our methodology in detail, including corrections for reddening and rotation, and we have presented quality control tests that demonstrate the precision and accuracy of our ages. In this section we describe several aspects of our analysis of specific interest, including the context of previous estimates of stellar ages for early type stars (§ \[sec:context\]), how to treat stars with locations apparently below the ZAMS (§ \[subsec:belowzams\]), and discussion of notable individual objects (§ \[subsec:specialstars\]). In the future we will apply our methods to new spectrophotometry.
Methods Previously Employed in Age Determination for Early Type Stars {#sec:context}
---------------------------------------------------------------------
In this section we place our work on nearby open cluster stars and approximately 3500 nearby field stars in the context of previous age estimation methods for BAF stars.
[@song2001] utilized a method quite similar to ours, employing $uvby\beta$ data from the catalogs of [@hauck1980; @olsen1983; @olsen1984], the color grids of [@moon1985] including a temperature-dependent gravity modification suggested by [@napiwotzki1993], and isochrones from [@schaller1992], to determine the ages of 26 Vega-like stars.
For A-type stars, [@vican2012] determined ages for *Herschel* DEBRIS survey stars by means of isochrone placement in log($T_\text{eff}$)-log($g$) space using [@li2008] and [@pinsonneault2004] isochrones, and atmospheric parameters from the literature. [@rieke2005] published age estimates for 266 B- & A-type main sequence stars using cluster/moving group membership, isochrone placement in the H-R diagram, and literature ages (mostly coming from earlier application of $uvby\beta$ photometric methods).
Among later type F dwarfs, previous age estimates come primarily from the Geneva-Copenhagen Survey [@casagrande2011], but their reliability is limited by the substantially different values published in various iterations of the catalog [@nordstrom2004; @holmberg2007; @holmberg2009; @casagrande2011] and by the inherent difficulty of isochrone dating these later type dwarfs.
More recently, [@nielsen2013] applied a Bayesian inference approach to the age determination of 70 B- & A-type field stars via $M_V$ vs $B-V$ color-magnitude diagram isochrone placement, assuming a constant star formation rate in the solar neighborhood and a Salpeter IMF. [@derosa2014] estimated the ages of 316 A-type stars through placement in a $M_K$ vs $V-K$ color-magnitude diagram. Both of these broad-band photometric studies used the theoretical isochrones of [@siess2000].
Considering the above sources of ages, the standard deviation among them indicates author-to-author scatter ranging from only 15% for some stars up to 145% for others, with a typical value of 40%. The full range (as opposed to the dispersion) of published ages is 3-300%, with a peak around the 80% level. The value of the age estimates presented here resides in the large sample of early type stars and the uniform methodology applied to them.
Stars Below the Main Sequence {#subsec:belowzams}
-----------------------------
In Figure \[fig:hrd\] it may be noted that many stars, particularly those with $\log \mathrm{T_{eff}} \leq 3.9$, are located well below the model isochrones. Using rotation-corrected atmospheric parameters, $\sim$ 540 stars, or $\sim 15\%$ of the sample, fall below the theoretical ZAMS.
Prior studies also faced a similarly large fraction of stars falling below the main sequence. [@song2001] arbitrarily assigned an age of 50 Myr to any star lying below the 100 Myr isochrone used in that work. [@tetzlaff2011] arbitrarily shifted stars towards the ZAMS and treated them as ZAMS stars.
Several possibilities might be invoked to explain the large population of stars below the $\log{g}-\log{T_\mathrm{eff}}$ isochrones: (1) failure of evolutionary models to predict the true width of the MS, (2) spread of metallicities, with the metal-poor MS residing beneath the solar-metallicity MS, (3) overaggressive correction for rotational velocity effects, or (4) systematics involved in surface gravity or luminosity determinations. Of these explanations, we consider (4) the most likely, with (3) also contributing somewhat. @valenti2005 found a 0.4 dex spread in $\log{g}$ among their main sequence FGK stars along with a 0.1 dex shift downward relative to the expected zero metallicity main sequence.
The Bayesian age estimates for stars below the MS are likely to be unrealistically old, so we compared the ages for these stars with interpolated ages. Using the field star atmospheric parameters and [@bressan2012] models, we performed a 2D linear interpolation with the SciPy routine `griddata`. Stars below the main sequence could be easily identified by selecting objects with $\log{\mathrm{(age/yr)}}_\mathrm{Bayes} - \log{\mathrm{(age/yr)}}_\mathrm{interp} > 1.0$. Notably, for these stars below the MS, the linear interpolation produces more realistic ages than the Bayesian method. A comparison of the Bayesian and interpolated ages for all stars is presented in Figure \[fig:bayesinterp\]. Of note, there is closer agreement between the Bayesian and interpolation methods in regards to estimating masses.
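A sketch of this interpolation step, assuming the model grid has been flattened into parallel arrays of $T_\mathrm{eff}$, $\log g$, log(age), and mass (the argument names are illustrative), is:

``` python
import numpy as np
from scipy.interpolate import griddata

def interpolated_age_mass(teff_obs, logg_obs,
                          teff_grid, logg_grid, log_age_grid, mass_grid):
    """2-D linear interpolation of age and mass at (log Teff, log g).

    teff_grid, logg_grid, log_age_grid, mass_grid : flattened arrays
    sampling the evolutionary model grid.
    """
    points = np.column_stack([np.log10(teff_grid), logg_grid])
    target = np.array([[np.log10(teff_obs), logg_obs]])
    log_age = griddata(points, log_age_grid, target, method='linear')[0]
    mass = griddata(points, mass_grid, target, method='linear')[0]
    return log_age, mass
```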
Figure \[fig:bayesinterp\] further serves to illustrate the difference between the Bayesian ages and the interpolated ages, which scatter over an order of magnitude about a 1:1 relationship. A number of stars that fall below the MS and have independently constrained ages are examined in detail in § \[subsec:specialstars\]. These stars have interpolated ages that are more in line with prior studies, and in light of this, we publish the interpolated ages in addition to the Bayesian ages in the final electronic table.
![Comparison of ages for BAF field stars derived through 2D linear interpolation and Bayesian inference. Grey points represent those stars with $\Delta \log{\mathrm{age/yr}} > 1$ (in the sense of Bayesian minus interpolated), which coincide with the same stars that reside below the MS.[]{data-label="fig:bayesinterp"}](f39.pdf){width="40.00000%"}
Stars of Special Interest {#subsec:specialstars}
-------------------------
In this section we discuss stars of particular interest given that they have either spatially resolved debris disks, detected possibly planetary mass companions, or both. As a final test of the [@bressan2012] evolutionary models and our Bayesian age and mass estimation method, we performed our analysis on these stars, including the Sun.
![image](f40.pdf){width="45.00000%"} ![image](f41.pdf){width="45.00000%"}
### Sun
The atmospheric parameters of our Sun are known with a precision that is orders of magnitude higher than what is obtainable for nearby field stars. One would thus expect the assumed priors to have a negligible influence on the Bayesian age and mass estimates.
The effective temperature of the Sun is calculated to be $T_\mathrm{eff} = 5771.8 \pm 0.7$ K from the total solar irradiance [@kopp2011], the solar radius [@haberreiter2008], the IAU 2009 definition of the AU, and the CODATA 2010 value of the Stefan-Boltzmann constant. The solar surface gravity is calculated to be $\log{g} = 4.43812 \pm 0.00013$ dex from the IAU 2009 value of $GM_\odot$ and the solar radius [@haberreiter2008]. Using these values, our Bayesian analysis yields a median age of 5.209$\pm$0.015 Gyr. The Bayesian estimation also yields a mass estimate of 0.9691$\pm$0.0003 $M_\odot$. Performing a 2D linear interpolation yields a slightly older age of 5.216 Gyr and slightly lower mass of 0.9690 $M_\odot$. As expected, the precise solar values lead to an elliptical joint posterior PDF in age and mass, and symmetric 1D marginalized PDFs. The difference between the Bayesian age estimate and interpolated age is negligible in this regime of extremely small uncertainties. This test also demonstrates that the [@bressan2012] evolutionary models may introduce a systematic overestimation of ages and underestimation of masses towards cooler temperatures, though because the Sun is substantially different from our sample stars we do not extrapolate this conclusion to our sample.
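As a consistency check, the quoted solar $T_\mathrm{eff}$ can be recovered to within $\sim$1 K from the cited reference values:

``` python
import numpy as np

# Reference values named in the text (illustrative reconstruction)
TSI = 1360.8              # W m^-2, total solar irradiance (Kopp & Lean 2011)
AU = 1.495978707e11       # m, IAU 2009 definition
R_sun = 6.95658e8         # m, Haberreiter et al. (2008)
sigma_SB = 5.670373e-8    # W m^-2 K^-4, CODATA 2010

L_sun = 4.0 * np.pi * AU**2 * TSI                               # solar luminosity
teff_sun = (L_sun / (4.0 * np.pi * R_sun**2 * sigma_SB))**0.25  # Stefan-Boltzmann
print(f"Teff(Sun) = {teff_sun:.1f} K")                          # ~5772 K
```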
### HR 8799
HR 8799 is located near the ZAMS and is metal-poor with \[Fe/H\]=$−0.47 \pm 0.10$ dex [@gray1999]. However, because HR 8799 is a $\lambda$ Boo peculiar-type star, its photospheric metallicity may not reflect the global stellar metal abundance. The age of HR 8799 is believed to be $30^{+20}_{-10}$ Myr based on its proposed membership to the Columba association [@zuckerman2011].
Figure \[fig:hrd\] shows that HR 8799 lies well below the theoretical ZAMS. This location is well-documented from other spectroscopic and photometric analyses of the star, and is likely due to a combination of its genuine youth and subsolar metallicity. Consistent with the discussion in §8.2 and as illustrated in Figure \[fig:bayesinterp\], our Bayesian age analysis leads to an unrealistically old age for the star, with a median age of 956 Myr and a 68% confidence interval of 708-1407 Myr. The Bayesian approach also seems to overestimate the mass, with a median mass of $1.59 M_\odot$ and 68% confidence interval of $1.49-1.68 M_\odot$. Notably, however, 2D linear interpolation leads to more reasonable age estimates: 26 Myr assuming our newly derived atmospheric parameters ($T_\mathrm{eff}$=7540 K, $\log{g}$ = 4.43), or 25 Myr using $T_\mathrm{eff}$=7430 K and $\log{g}$ = 4.35 from [@gray1999].
### $\beta$ Pic
[@zuckerman2001] assigned an age of 12 Myr to $\beta$ Pic based on its proposed membership to the moving group of the same name. Isochronal age estimates for the star have ranged from the ZAMS age to 300 Myr [@barrado1999]. [@nielsen2013] performed a Bayesian analysis and concluded a median age of 109 Myr with a 68% confidence interval of 82-134 Myr. Although barely below the ZAMS, $\beta$ Pic in our own Bayesian analysis has a much older median age of 524 Myr with a 68% confidence interval of 349-917 Myr. Prior authors also have noted that $\beta$ Pic falls below the ZAMS on a color-magnitude diagram. As was the case for HR 8799, we conclude that our erroneous age for $\beta$ Pic is due to the dominance of the prior assumptions in exactly such a scenario.
However, the interpolated age using our atmospheric parameters of $T_\mathrm{eff}$=8300 K, $\log{g}$=4.389, is 20 Myr. Using the [@gray2006] values of $T_\mathrm{eff}$=8052 K (within $1\sigma$ of our determination), $\log{g}$=4.15 ($>1.5\sigma$ away from our surface gravity) we obtain an interpolated age of 308 Myr.
### $\kappa$ And
$\kappa$ Andromedae is another proposed member of the Columba association [@zuckerman2011]. Using the nominal 30 Myr age, [@carson2013] suggested a companion discovered via direct imaging is of planetary mass ($12-13 M_\mathrm{Jup}$). [@hinkley2013] refuted this claim, concluding an age of $220\pm100$ Myr from multiple isochronal analyses in §3.2 of that work. This older age estimate leads to a model-dependent companion mass of $50^{+16}_{-13} M_\mathrm{Jup}$. Our Bayesian analysis allows us to nearly rule out a 30 Myr age with a 95% confidence interval of 29-237 Myr. The mean, median, mode, and 68% confidence interval of the 1D marginalized posterior PDF in age for $\kappa$ And are 118, 150, 191, and 106-224 Myr, respectively. Notably, $\kappa$ And has a projected rotational velocity of $v\sin{i} \sim 160$ km s$^{-1}$ [@glebocki2005], and we find its rotation corrected atmospheric parameters ($T_\mathrm{eff} = 11903 \pm 405$ K, $\log{g} = 4.35 \pm 0.14$ dex) produce an interpolated age of 16 Myr. Using uncorrected atmospheric parameters ($T_\mathrm{eff} = 11263 \pm 383$ K, $\log{g} = 4.26 \pm 0.14$ dex) leads to an interpolated age of 25 Myr.
### $\zeta$ Delphini
[@derosa2014] recently published the discovery of a wide companion to $\zeta$ Delphini (HIP 101589). Those authors estimated the age of the system as 525$\pm$125 Myr, from the star’s positions on a color-magnitude and a temperature-luminosity diagram, leading to a model-dependent companion mass of 50$\pm$15 M$_\mathrm{Jup}$. Our method yields a mean age of 552 Myr, with 68% and 95% confidence intervals of 531-772 Myr, and 237-866 Myr, respectively. Our revised age is in agreement with the previous estimate of [@derosa2014], although favoring the interpretation of an older system and thus more massive companion. The interpolated ages for $\zeta$ Del are somewhat older: 612 Myr for the rotation-corrected atmospheric parameters $T_\mathrm{eff}$=8305 K, $\log{g}$=3.689, or 649 Myr for the uncorrected parameters $T_\mathrm{eff}$=8639 K, $\log{g}$=3.766. Note, in this case moderate rotation ($v \sin i =$ 99.2 km s$^{-1}$) leads to a discrepancy of only $\approx 6\%$ in the derived ages.
### 49 Ceti
49 Ceti does not have a known companion at present, but does possess a resolved molecular gas disk [@hughes2008]. The star is a proposed member of the 40 Myr Argus association, which would require cometary collisions to explain the gaseous disk that should have dissipated by $\sim$ 10 Myr due to radiation pressure [@zuckerman2012]. With a mean rotational velocity of $\sim$ 190 km s$^{-1}$ [@glebocki2005], and evidence that the star is highly inclined to our line of sight, rotational effects on photometric H-R diagram placement are prominent. Our $uvby\beta$ atmospheric parameters for 49 Ceti are $T_\mathrm{eff}$ = 10007 $\pm$ 340 K, $\log{g} = 4.37 \pm 0.14$ dex, after rotational effects were accounted for. These parameters place the star essentially on the ZAMS, with an interpolated age of 9 Myr, calling into question the cometary genesis of its gaseous disk. However, the uncorrected atmospheric parameters ($T_\mathrm{eff} = 9182 \pm 309$ K, $\log{g}=4.22 \pm 0.14$ dex) are more consistent with the A1 spectral type and produce an interpolated age of 57 Myr, which seems to support the cometary collision hypothesis. This case illustrates the importance of high-precision atmospheric parameters.
Conclusions
===========
In the absence of finely calibrated empirical age indicators, such as the rotation-activity-age relation for solar-type stars [e.g. @mamajek2008], ages for early spectral type stars typically have come from open cluster and moving group membership, or through association with a late-type companion that can be age-dated through one of the applicable empirical methods. Because of their rapid evolution, early type stars are amenable to age dating via isochrones. In this paper we have investigated the use of Strömgren photometric techniques for estimating stellar atmospheric parameters, which are then compared to isochrones from modern stellar evolution models.
Bayesian inference is a particularly useful tool in the estimation of parameters such as age and mass from evolutionary models for large samples that span considerable ranges in temperature, luminosity, mass, and age. The Bayesian approach produces unbiased ages relative to a straightforward interpolation among isochrones, which leads to age estimates that are biased towards older ages. However, as noted earlier, stars located beyond the range of the theory (below the theoretical ZAMS in our case) are assigned unreasonably old ages with the Bayesian method. This presumably is due to the clustering of isochrones and the dominance of the prior in inference scenarios in which the prior probability is changing quickly relative to the magnitude of the uncertainty in the atmospheric parameters. Linear interpolation for stars apparently below the MS may produce more reasonable age estimates.
The most important parameter for determining precise stellar ages near the ZAMS is the luminosity or surface gravity indicator. Effective temperatures, or observational proxies for temperature, are currently estimated with suitable precision. However, $\log g$, luminosity, or absolute magnitude (requiring a precise distance as well), are not currently estimated with the precision needed to meaningfully constrain the ages of field stars near the ZAMS. This effect is particularly pronounced for lower temperature stars where, for a given shift in $\log g$, the inferred age can change by many orders of magnitude. Our open cluster tests indicated that the age uncertainties due to the choice of evolutionary models are not significant compared to those introduced by the uncertainties in the surface gravities.
We have derived new atmospheric parameters (taking stellar rotation into account) and model-dependent ages and masses for 3493 BAF stars within 100 pc of the Sun. Our method of atmospheric parameter determination was calibrated and validated to stars with fundamentally determined atmospheric parameters. We further tested and validated our method of age estimation using open clusters with well-known ages. In determining the uncertainties in all of our newly derived parameters we conservatively account for the effects of systematics, metallicity, numerical precision, reddening, photometric errors, and uncertainties in $v\sin{i}$ as well as unknown rotational velocities.
Field star ages must be considered with caution. At minimum, our homogeneously derived set of stellar ages provides a relative youth ordering. For those stars below the MS we encourage the use of interpolated ages rather than Bayesian ages, unless more precise atmospheric parameters become available. Using the new set of ages, we presented an empirical mass-age relation for solar neighborhood B0-F5 stars. We also presented empirical relations between spectral type and age/mass and we discussed ages in detail for several famous low mass companion and/or debris disk objects. An anticipated use of our catalog is in the prioritization of targets for direct imaging of brown dwarf and planetary mass companions. David & Hillenbrand (2015b, *in preparation*) will explore how ages derived using this methodology can be applied to investigations such as debris disk evolution.
The authors wish to thank John Stauffer for his helpful input on sources of $uvby\beta$ data for open clusters and Timothy Brandt for helpful discussions during the proof stage of this work regarding the open cluster analysis, resulting in the appendix material concerning logarithmic versus linear approaches and a modified version of Figure 17. This material is based upon work supported in 2014 and 2015 by the National Science Foundation Graduate Research Fellowship under Grant No. DGE‐1144469. This research has made use of the WEBDA database, operated at the Institute for Astronomy of the University of Vienna, as well as the SIMBAD database and VizieR catalog access tool, operated at CDS, Strasbourg, France.
Metallicity Effects {#metallicity}
-------------------
We do not account explicitly for metallicity in this study, having assumed solar values in both our atmospheric models and our evolutionary grids. Our analysis in the $T_\mathrm{eff}$ and $\log g$ calibrations found that for stars with fundamentally determined atmospheric parameters and available \[Fe/H\] measurements, the accuracy with which we can determine atmospheric parameters using $uvby\beta$ photometry does not vary systematically with metallicity.
The effects of different metallicity assumptions on the Str[ö]{}mgren index atmospheric grids are illustrated in Figure \[fig:grids\_metallicity\]. Moving from the atmospheric grid to the evolutionary grid, Figure 17 of @valenti2005 illustrates that for the coolest stars under consideration here, which were the focus of their study, variation of metallicity from +0.5 to -0.5 dex in \[Fe/H\] corresponds to a +0.1 to -0.1 dex shift in $\log g$ of an evolutionary isochrone. Among hotter stars, Figure \[fig:grids\_metallicity\] shows that metallicity uncertainty affects temperatures only slightly, and gravities minimally if at all.
We similarly calculated the effect on atmospheric parameter determination when allowing the model color grids to vary from +0.5 to -0.5 dex in \[M/H\], which notably represent the extremes of the metallicity range included in our sample (less than 1% of stars considered here have $\left | [\mathrm{Fe/H}] \right |>0.5$ dex). Figures \[fig:met-teff\] & \[fig:met-logg\] examine in detail the effects of metallicity on $T_\mathrm{eff}, \log{g}$ determinations in the relevant $uvby\beta$ planes. In summary, $T_\mathrm{eff}$ variations of up to $\sim 1\%$ in the $(b-y)-c_1$ plane, $\sim 2\%$ in the $a_0-r^*$ plane, and $6\%$ in the $c_1-\beta$ plane are possible with shifts of $\pm$ 0.5 dex in \[M/H\]. Notably, however, $T_\mathrm{eff}$ variations above the 2% level are only expected in the $c_1-\beta$ plane for stars hotter than $\sim 17000$ K, or roughly spectral type B4, of which there are very few in our sample. Similarly metallicity shifts of $\pm$ 0.5 dex can cause variations of $\sim$ 0.1 dex in $\log{g}$ in the $(b-y)-c_1$ and $c_1-\beta$ planes, while the same variation in the $a_0-r^*$ plane produces surface gravity shifts closer to $\sim 0.05$ dex.
By contrast, metallicity effects are more prominent in color-magnitude techniques. Recently, @nielsen2013 executed a Bayesian analysis of the locations of Gemini/NICI targets in the $M_V$ vs $B-V$ diagram to derive their ages, including confidence contours for the stellar masses, ages, and metallicities. That work demonstrates that, in this particular color-magnitude diagram, higher metallicity correlates with larger inferred masses and younger inferred ages. Metal poor stars will have erroneously young ages attributed to them when solar metallicity is assumed.
![image](f42.pdf){width="99.00000%"}
![image](f43.pdf){width="30.00000%"} ![image](f44.pdf){width="30.00000%"} ![image](f45.pdf){width="30.00000%"}
![image](f46.pdf){width="30.00000%"} ![image](f47.pdf){width="30.00000%"} ![image](f48.pdf){width="30.00000%"}
Confidence Intervals
--------------------
All confidence intervals in age and mass quoted in this work are the bounds of the Highest Posterior Density (HPD) Region. For a given posterior probability density, $p(\theta | x)$, the $100(1-\alpha) \%$ HPD region is defined as the subset, $\mathcal{C}$, of $\theta$ values:
$$\mathcal{C} = \left \{ \theta : p(\theta | x) \geq p^*\right \},$$
where $p^*$ is the largest number such that
$$\int_{\theta: p(\theta | x) \geq p^*} p(\theta | x) \mathrm{d} \theta = 1- \alpha.$$
In other words, the HPD region is the set of most probable values (corresponding to the smallest range in $\theta$) that encloses $100(1-\alpha) \%$ of the posterior mass. The HPD method is particularly suited for finding confidence intervals of skewed probability distributions, such as the stellar age posteriors studied in this work. To find the HPD region numerically, the threshold $p^*$ is lowered iteratively, integrating the normalized posterior PDF over the region where it exceeds $p^*$, until the enclosed area (or volume) reaches the desired confidence level.
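A compact numerical implementation of the HPD interval, equivalent to the iterative thresholding described above in that it accumulates the most probable bins until the requested fraction of the posterior mass is enclosed, might look like the following (assuming a unimodal, normalized 1-D PDF):

``` python
import numpy as np

def hpd_interval(x, pdf, frac=0.68):
    """Highest Posterior Density interval of a normalized 1-D PDF on grid x.

    Accumulates grid points from most to least probable until `frac` of the
    total probability is enclosed; for a unimodal PDF the selected points
    form a contiguous interval, whose endpoints are returned.
    """
    order = np.argsort(pdf)[::-1]                # most probable bins first
    cum = np.cumsum(pdf[order])
    n_keep = np.searchsorted(cum, frac) + 1
    kept = order[:n_keep]
    return x[kept].min(), x[kept].max()
```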
Open Cluster Tables
-------------------
[ccccccccc]{} HD 91711 & B8 V & -0.062 & 0.146 & 0.457 & 2.745 & 14687 $\pm$ 235 & 4.467 $\pm$ 0.113 & 153\
HD 91839 & A1 V & 0.025 & 0.178 & 1.033 & 2.904 & 9509 $\pm$ 152 & 4.188 $\pm$ 0.091 & 146\
HD 91896 & B7 III & -0.081 & 0.093 & 0.346 & 2.660 & 16427 $\pm$ 263 & 3.782 $\pm$ 0.113 & 155\
HD 91906 & A0 V & 0.016 & 0.177 & 1.005 & 2.889 & 9799 $\pm$ 157 & 4.146 $\pm$ 0.113 & 149\
HD 92275 & B8 III/IV & -0.056 & 0.125 & 0.562 & 2.709 & 13775 $\pm$ 220 & 3.852 $\pm$ 0.113 & 153\
HD 92467 & B9.5 III & -0.026 & 0.168 & 0.833 & 2.851 & 11178 $\pm$ 179 & 4.423 $\pm$ 0.113 & 110\
HD 92478 & A0 V & 0.010 & 0.183 & 0.978 & 2.925 & 9586 $\pm$ 153 & 4.431 $\pm$ 0.091 & 60\
HD 92535 & A5 V n & 0.104 & 0.194 & 0.884 & 2.838 & 8057 $\pm$ 129 & 4.344 $\pm$ 0.145 & 140\
HD 92536 & B8 V & -0.043 & 0.131 & 0.705 & 2.795 & 13183 $\pm$ 211 & 4.423 $\pm$ 0.113 & 250\
HD 92568 & A M & 0.209 & 0.237 & 0.625 & 2.748 & 7113 $\pm$ 114 & 4.341 $\pm$ 0.145 & 126\
HD 92664 & B8 III P & -0.083 & 0.118 & 0.386 & 2.702 & 15434 $\pm$ 247 & 4.145 $\pm$ 0.113 & 65\
HD 92715 & B9 V nn & -0.027 & 0.136 & 0.882 & 2.836 & 12430 $\pm$ 199 & 4.362 $\pm$ 0.113 & 290\
HD 92783 & B8.5 V nn & -0.033 & 0.124 & 0.835 & 2.804 & 12278 $\pm$ 196 & 4.130 $\pm$ 0.113 & 230\
HD 92837 & A0 IV nn & -0.007 & 0.160 & 0.953 & 2.873 & 10957 $\pm$ 175 & 4.322 $\pm$ 0.113 & 220\
HD 92896 & A3 IV & 0.114 & 0.193 & 0.838 & 2.831 & 8010 $\pm$ 128 & 4.425 $\pm$ 0.145 & 139\
HD 92938 & B3 V n & -0.075 & 0.105 & 0.384 & 2.690 & 15677 $\pm$ 251 & 4.015 $\pm$ 0.113 & 120\
HD 92966 & B9.5 V nn & -0.019 & 0.158 & 0.930 & 2.878 & 11372 $\pm$ 182 & 4.445 $\pm$ 0.113 & 225\
HD 92989 & A0.5 Va & 0.008 & 0.180 & 0.982 & 2.925 & 9979 $\pm$ 160 & 4.480 $\pm$ 0.091 & 148\
HD 93098 & A1 V s & 0.017 & 0.180 & 0.993 & 2.915 & 9688 $\pm$ 155 & 4.385 $\pm$ 0.091 & 135\
HD 93194 & B3 V nn & -0.078 & 0.105 & 0.357 & 2.668 & 17455 $\pm$ 279 & 4.015 $\pm$ 0.113 & 310\
HD 93424 & A3 Va & 0.060 & 0.197 & 0.950 & 2.890 & 8852 $\pm$ 142 & 4.247 $\pm$ 0.113 & 95\
HD 93517 & A1 V & 0.052 & 0.196 & 0.976 & 2.919 & 9613 $\pm$ 154 & 4.510 $\pm$ 0.091 & 220\
HD 93540 & B6 V nn & -0.065 & 0.116 & 0.476 & 2.722 & 15753 $\pm$ 252 & 4.308 $\pm$ 0.113 & 305\
HD 93549 & B6 V & -0.066 & 0.123 & 0.454 & 2.729 & 15579 $\pm$ 249 & 4.422 $\pm$ 0.113 & 265\
HD 93607 & B2.5 V n & -0.084 & 0.102 & 0.292 & 2.675 & 17407 $\pm$ 279 & 4.098 $\pm$ 0.113 & 160\
HD 93648 & A0 V n & 0.041 & 0.188 & 1.025 & 2.890 & 9672 $\pm$ 155 & 4.157 $\pm$ 0.091 & 215\
HD 93714 & B2 IV-V n & -0.092 & 0.100 & 0.201 & 2.647 & 18927 $\pm$ 303 & 3.979 $\pm$ 0.113 & 40\
HD 93738 & A0 V nn & -0.027 & 0.158 & 0.842 & 2.817 & 12970 $\pm$ 208 & 4.336 $\pm$ 0.113 & 315\
HD 93874 & A3 IV & 0.071 & 0.203 & 0.947 & 2.896 & 8831 $\pm$ 141 & 4.367 $\pm$ 0.091 & 142\
HD 94066 & B5 V n & -0.068 & 0.117 & 0.439 & 2.680 & 15096 $\pm$ 242 & 3.792 $\pm$ 0.113 & 154\
HD 94174 & A0 V & 0.046 & 0.193 & 0.946 & 2.907 & 9305 $\pm$ 149 & 4.391 $\pm$ 0.113 & 149
[ccccccccc]{} BD$+$49 868 & F5 V & 0.261 & 0.165 & 0.459 & 2.683 & 6693 $\pm$ 107 & 4.455 $\pm$ 0.145 & 20\
HD 19767 & F0 V N & 0.176 & 0.178 & 0.756 & 2.765 & 7368 $\pm$ 118 & 4.174 $\pm$ 0.145 & 140\
HD 19805 & A0 Va & -0.000 & 0.161 & 0.931 & 2.887 & 10073 $\pm$ 161 & 4.344 $\pm$ 0.113 & 20\
HD 19893 & B9 V & -0.031 & 0.131 & 0.850 & 2.807 & 12614 $\pm$ 202 & 4.176 $\pm$ 0.113 & 280\
HD 19954 & A9 IV & 0.150 & 0.200 & 0.794 & 2.792 & 7632 $\pm$ 122 & 4.297 $\pm$ 0.145 & 85\
HD 20135 & A0 P & -0.011 & 0.186 & 0.970 & 2.848 & 10051 $\pm$ 161 & 3.998 $\pm$ 0.113 & 35\
BD$+$49 889 & F5 V & 0.292 & 0.156 & 0.418 & 2.656 & 6430 $\pm$ 103 & 4.352 $\pm$ 0.145 & 65\
BD$+$49 896 & F4 V & 0.261 & 0.168 & 0.472 & 2.686 & 6686 $\pm$ 107 & 4.410 $\pm$ 0.145 & 30\
HD 20365 & B3 V & -0.079 & 0.103 & 0.346 & 2.681 & 16367 $\pm$ 262 & 4.025 $\pm$ 0.113 & 145\
HD 20391 & A1 Va n & 0.026 & 0.179 & 1.006 & 2.901 & 10415 $\pm$ 167 & 4.386 $\pm$ 0.091 & 260\
HD 20487 & A0 V N & -0.016 & 0.151 & 0.976 & 2.856 & 11659 $\pm$ 187 & 4.198 $\pm$ 0.113 & 280\
BD$+$47 808 & F1 IV N & 0.183 & 0.179 & 0.759 & 2.763 & 7281 $\pm$ 116 & 4.062 $\pm$ 0.145 & 180\
BD$+$48 892 & F3 IV-V & 0.246 & 0.167 & 0.524 & 2.696 & 6800 $\pm$ 109 & 4.359 $\pm$ 0.145 & 20\
BD$+$48 894 & F0 IV & 0.174 & 0.202 & 0.734 & 2.770 & 7416 $\pm$ 119 & 4.284 $\pm$ 0.145 & 75\
HD 20809 & B5 V & -0.074 & 0.109 & 0.395 & 2.696 & 15934 $\pm$ 255 & 4.097 $\pm$ 0.113 & 200\
HD 20842 & A0 Va & -0.005 & 0.157 & 0.950 & 2.886 & 10258 $\pm$ 164 & 4.325 $\pm$ 0.113 & 85\
HD 20863 & B9 V & -0.034 & 0.134 & 0.810 & 2.813 & 12154 $\pm$ 194 & 4.267 $\pm$ 0.113 & 200\
BD$+$49 914 & F5 V & 0.281 & 0.170 & 0.431 & 2.664 & 6520 $\pm$ 104 & 4.395 $\pm$ 0.145 & 120\
HD 20919 & A8 V & 0.168 & 0.191 & 0.757 & 2.775 & 7463 $\pm$ 119 & 4.259 $\pm$ 0.145 & 50\
BD$+$49 918 & F1 V N & 0.186 & 0.183 & 0.770 & 2.755 & 7235 $\pm$ 116 & 3.977 $\pm$ 0.145 & 175\
HD 20931 & A1 Va & 0.018 & 0.174 & 0.979 & 2.911 & 9588 $\pm$ 153 & 4.342 $\pm$ 0.113 & 85\
BD$+$47 816 & F4 V & 0.271 & 0.155 & 0.452 & 2.672 & 6600 $\pm$ 106 & 4.399 $\pm$ 0.145 & 28\
HD 20961 & B9.5V & -0.019 & 0.163 & 0.920 & 2.875 & 10537 $\pm$ 169 & 4.344 $\pm$ 0.113 & 25\
BD$+$46 745 & F4 V & 0.274 & 0.169 & 0.462 & 2.674 & 6566 $\pm$ 105 & 4.332 $\pm$ 0.145 & 160\
HD 20969 & A8 V & 0.186 & 0.192 & 0.715 & 2.758 & 7291 $\pm$ 117 & 4.239 $\pm$ 0.145 & 20\
HD 20986 & A3 V N & 0.046 & 0.190 & 1.004 & 2.896 & 9584 $\pm$ 153 & 4.243 $\pm$ 0.091 & 210\
HD 21005 & A5 V N & 0.074 & 0.189 & 0.987 & 2.862 & 8266 $\pm$ 132 & 4.197 $\pm$ 0.145 & 250\
HD 21091 & B9.5IV nn & -0.019 & 0.152 & 0.938 & 2.856 & 12477 $\pm$ 200 & 4.416 $\pm$ 0.113 & 340\
HD 21092 & A5 V & 0.054 & 0.218 & 0.938 & 2.893 & 8775 $\pm$ 140 & 4.311 $\pm$ 0.091 & 75\
TYC 3320-1715-1 & F4 V & 0.281 & 0.153 & 0.469 & 2.663 & 6495 $\pm$ 104 & 4.220 $\pm$ 0.145 & 110\
HD 21152 & B9 V & -0.018 & 0.158 & 0.943 & 2.868 & 11306 $\pm$ 181 & 4.353 $\pm$ 0.113 & 225\
HD 232793 & F5 V & 0.311 & 0.172 & 0.377 & 2.645 & 6274 $\pm$ 100 & 4.362 $\pm$ 0.145 & 93\
HD 21181 & B8.5V N & -0.038 & 0.122 & 0.784 & 2.766 & 13726 $\pm$ 220 & 4.119 $\pm$ 0.113 & 345\
HD 21239 & A3 V N & 0.045 & 0.190 & 0.997 & 2.910 & 9182 $\pm$ 147 & 4.320 $\pm$ 0.091 & 145\
HD 21278 & B5 V & -0.073 & 0.111 & 0.398 & 2.705 & 15274 $\pm$ 244 & 4.152 $\pm$ 0.113 & 75\
HD 21302 & A1 V N & 0.022 & 0.177 & 0.989 & 2.888 & 10269 $\pm$ 164 & 4.301 $\pm$ 0.091 & 230\
BD$+$48 923 & F4 V & 0.270 & 0.153 & 0.464 & 2.673 & 6603 $\pm$ 106 & 4.362 $\pm$ 0.145 & 20\
HD 21345 & A5 V N & 0.051 & 0.208 & 0.969 & 2.893 & 9435 $\pm$ 151 & 4.324 $\pm$ 0.091 & 200\
HD 21398 & B9 V & -0.030 & 0.145 & 0.825 & 2.837 & 11615 $\pm$ 186 & 4.372 $\pm$ 0.113 & 135\
HD 21428 & B3 V & -0.077 & 0.105 & 0.363 & 2.686 & 16421 $\pm$ 263 & 4.076 $\pm$ 0.113 & 200\
HD 21481 & A0 V N & -0.013 & 0.164 & 0.993 & 2.858 & 11187 $\pm$ 179 & 4.141 $\pm$ 0.113 & 250\
HD 21527 & A7 IV & 0.093 & 0.231 & 0.855 & 2.856 & 8231 $\pm$ 132 & 4.486 $\pm$ 0.145 & 80\
HD 21551 & B8 V & -0.048 & 0.118 & 0.673 & 2.746 & 14869 $\pm$ 238 & 4.220 $\pm$ 0.113 & 380\
HD 21553 & A6 V N & 0.072 & 0.206 & 0.921 & 2.872 & 8381 $\pm$ 134 & 4.414 $\pm$ 0.145 & 150\
HD 21619 & A6 V & 0.052 & 0.221 & 0.935 & 2.894 & 8843 $\pm$ 141 & 4.329 $\pm$ 0.091 & 90\
BD$+$49 957 & F3 V & 0.258 & 0.168 & 0.500 & 2.687 & 6699 $\pm$ 107 & 4.334 $\pm$ 0.145 & 56\
HD 21641 & B8.5V & -0.042 & 0.131 & 0.721 & 2.747 & 12914 $\pm$ 207 & 3.929 $\pm$ 0.113 & 215\
BD$+$49 958 & F1 V & 0.198 & 0.188 & 0.732 & 2.739 & 7137 $\pm$ 114 & 3.989 $\pm$ 0.145 & 155\
HD 21672 & B8 V & -0.050 & 0.119 & 0.649 & 2.747 & 13473 $\pm$ 216 & 4.071 $\pm$ 0.113 & 225\
BD$+$48 944 & A5 V & 0.063 & 0.220 & 0.931 & 2.886 & 8799 $\pm$ 141 & 4.305 $\pm$ 0.091 & 120\
HD 21931 & B9 V & -0.029 & 0.147 & 0.835 & 2.829 & 11998 $\pm$ 192 & 4.343 $\pm$ 0.113 & 205\
[ccccccccc]{} HD 23157 & A9 V & 0.168 & 0.190 & 0.725 & 2.778 & 7463 $\pm$ 121 & 4.369 $\pm$ 0.145 & 100\
HD 23156 & A7 V & 0.111 & 0.215 & 0.815 & 2.837 & 8046 $\pm$ 130 & 4.498 $\pm$ 0.145 & 70\
HD 23247 & F3 V & 0.237 & 0.174 & 0.527 & 2.704 & 6863 $\pm$ 111 & 4.424 $\pm$ 0.145 & 40\
HD 23246 & A8 V & 0.170 & 0.184 & 0.758 & 2.773 & 7409 $\pm$ 120 & 4.234 $\pm$ 0.145 & 200\
HD 23288 & B7 V & -0.051 & 0.120 & 0.636 & 2.747 & 13953 $\pm$ 226 & 4.151 $\pm$ 0.113 & 280\
HD 23302 & B6 III & -0.054 & 0.098 & 0.638 & 2.690 & 13308 $\pm$ 216 & 3.478 $\pm$ 0.113 & 205\
HD 23289 & F3 V & 0.244 & 0.164 & 0.521 & 2.699 & 6796 $\pm$ 110 & 4.387 $\pm$ 0.145 & 40\
HD 23326 & F4 V & 0.250 & 0.164 & 0.514 & 2.691 & 6741 $\pm$ 109 & 4.358 $\pm$ 0.145 & 40\
HD 23324 & B8 V & -0.052 & 0.116 & 0.634 & 2.747 & 13748 $\pm$ 223 & 4.126 $\pm$ 0.113 & 255\
HD 23338 & B6 IV & -0.061 & 0.104 & 0.553 & 2.702 & 13696 $\pm$ 222 & 3.772 $\pm$ 0.113 & 130\
HD 23351 & F3 V & 0.249 & 0.176 & 0.507 & 2.695 & 6755 $\pm$ 109 & 4.391 $\pm$ 0.145 & 80\
HD 23361 & A2.5Va n & 0.069 & 0.201 & 0.959 & 2.872 & 8356 $\pm$ 135 & 4.309 $\pm$ 0.145 & 235\
HD 23375 & A9 V & 0.180 & 0.187 & 0.710 & 2.765 & 7336 $\pm$ 119 & 4.318 $\pm$ 0.145 & 75\
HD 23410 & A0 Va & 0.004 & 0.164 & 0.975 & 2.899 & 10442 $\pm$ 169 & 4.382 $\pm$ 0.113 & 200\
HD 23409 & A3 V & 0.070 & 0.202 & 0.980 & 2.892 & 8903 $\pm$ 144 & 4.270 $\pm$ 0.091 & 170\
HD 23432 & B8 V & -0.039 & 0.127 & 0.758 & 2.793 & 12695 $\pm$ 206 & 4.250 $\pm$ 0.113 & 235\
HD 23441 & B9 V N & -0.029 & 0.135 & 0.858 & 2.822 & 11817 $\pm$ 191 & 4.209 $\pm$ 0.113 & 200\
HD 23479 & A9 V & 0.188 & 0.166 & 0.716 & 2.755 & 7239 $\pm$ 117 & 4.212 $\pm$ 0.145 & 150\
HD 23489 & A2 V & 0.033 & 0.183 & 1.012 & 2.907 & 9170 $\pm$ 149 & 4.239 $\pm$ 0.091 & 110\
HD 23512 & A2 V & 0.057 & 0.196 & 1.035 & 2.909 & 8852 $\pm$ 143 & 4.214 $\pm$ 0.091 & 145\
HD 23511 & F5 V & 0.279 & 0.174 & 0.412 & 2.674 & 6521 $\pm$ 106 & 4.477 $\pm$ 0.145 & 28\
HD 23514 & F5 V & 0.285 & 0.179 & 0.443 & 2.668 & 6450 $\pm$ 104 & 4.307 $\pm$ 0.145 & 40\
HD 23513 & F5 V & 0.278 & 0.170 & 0.423 & 2.673 & 6528 $\pm$ 106 & 4.447 $\pm$ 0.145 & 30\
HD 23568 & B9.5Va n & -0.024 & 0.139 & 0.914 & 2.847 & 11731 $\pm$ 190 & 4.301 $\pm$ 0.113 & 240\
HD 23567 & F0 V & 0.159 & 0.196 & 0.735 & 2.788 & 7560 $\pm$ 122 & 4.407 $\pm$ 0.145 & 50\
HD 23585 & F0 V & 0.168 & 0.185 & 0.713 & 2.780 & 7472 $\pm$ 121 & 4.405 $\pm$ 0.145 & 100\
HD 23608 & F5 V & 0.278 & 0.177 & 0.482 & 2.673 & 6492 $\pm$ 105 & 4.185 $\pm$ 0.145 & 110\
HD 23607 & F0 V & 0.108 & 0.203 & 0.814 & 2.841 & 8085 $\pm$ 131 & 4.534 $\pm$ 0.145 & 12\
HD 23629 & A0 V & -0.001 & 0.163 & 0.986 & 2.899 & 10340 $\pm$ 168 & 4.342 $\pm$ 0.113 & 170\
HD 23632 & A0 Va & 0.006 & 0.167 & 1.009 & 2.899 & 10461 $\pm$ 169 & 4.312 $\pm$ 0.113 & 225\
HD 23628 & A4 V & 0.090 & 0.189 & 0.904 & 2.853 & 8163 $\pm$ 132 & 4.381 $\pm$ 0.145 & 215\
HD 23643 & A3.5V & 0.079 & 0.194 & 0.943 & 2.862 & 8258 $\pm$ 134 & 4.301 $\pm$ 0.145 & 185\
HD 23733 & A9 V & 0.207 & 0.177 & 0.672 & 2.736 & 7066 $\pm$ 114 & 4.174 $\pm$ 0.145 & 180\
HD 23732 & F5 V & 0.258 & 0.172 & 0.460 & 2.688 & 6695 $\pm$ 108 & 4.473 $\pm$ 0.145 & 50\
HD 23753 & B8 V N & -0.046 & 0.113 & 0.712 & 2.736 & 13096 $\pm$ 212 & 3.859 $\pm$ 0.113 & 240\
HD 23791 & A9 V+ & 0.139 & 0.214 & 0.758 & 2.811 & 7776 $\pm$ 126 & 4.480 $\pm$ 0.145 & 85\
HD 23850 & B8 III & -0.048 & 0.102 & 0.701 & 2.695 & 13446 $\pm$ 218 & 3.483 $\pm$ 0.113 & 280\
HD 23863 & A8 V & 0.116 & 0.201 & 0.857 & 2.826 & 7926 $\pm$ 128 & 4.354 $\pm$ 0.145 & 160\
HD 23872 & A1 Va n & 0.032 & 0.182 & 1.013 & 2.894 & 10028 $\pm$ 162 & 4.247 $\pm$ 0.091 & 240\
HD 23873 & B9.5Va & -0.023 & 0.143 & 0.907 & 2.852 & 10897 $\pm$ 177 & 4.255 $\pm$ 0.113 & 90\
HD 23886 & A4 V & 0.068 & 0.214 & 0.915 & 2.880 & 8974 $\pm$ 145 & 4.343 $\pm$ 0.091 & 165\
HD 23912 & F3 V & 0.274 & 0.154 & 0.481 & 2.671 & 6531 $\pm$ 106 & 4.242 $\pm$ 0.145 & 130\
HD 23924 & A7 V & 0.100 & 0.223 & 0.852 & 2.852 & 8121 $\pm$ 132 & 4.460 $\pm$ 0.145 & 100\
HD 23923 & B8.5V N & -0.033 & 0.124 & 0.839 & 2.794 & 12911 $\pm$ 209 & 4.159 $\pm$ 0.113 & 310\
HD 23948 & A1 Va & 0.033 & 0.191 & 0.984 & 2.905 & 9237 $\pm$ 150 & 4.307 $\pm$ 0.091 & 120\
HD 24076 & A2 V & 0.008 & 0.168 & 0.923 & 2.867 & 10196 $\pm$ 165 & 4.298 $\pm$ 0.091 & 155\
HD 24132 & F2 V & 0.245 & 0.149 & 0.597 & 2.692 & 6744 $\pm$ 109 & 4.182 $\pm$ 0.145 & 230 \[table:pleiades\]
[ccccccccc]{} HD 26015 & F3 V & 0.252 & 0.174 & 0.537 & 2.693 & 6732 $\pm$ 109 & 4.244 $\pm$ 0.145 & 25\
HD 26462 & F1 IV-V & 0.230 & 0.165 & 0.596 & 2.710 & 6916 $\pm$ 112 & 4.291 $\pm$ 0.145 & 30\
HD 26737 & F5 V & 0.274 & 0.168 & 0.477 & 2.674 & 6558 $\pm$ 106 & 4.263 $\pm$ 0.145 & 60\
HD 26911 & F3 V & 0.258 & 0.176 & 0.525 & 2.690 & 6682 $\pm$ 108 & 4.228 $\pm$ 0.145 & 30\
HD 27176 & A7 m & 0.172 & 0.187 & 0.785 & 2.767 & 7380 $\pm$ 120 & 4.087 $\pm$ 0.145 & 125\
HD 27397 & F0 IV & 0.171 & 0.194 & 0.770 & 2.766 & 7410 $\pm$ 120 & 4.173 $\pm$ 0.145 & 100\
HD 27429 & F2 VN & 0.240 & 0.171 & 0.588 & 2.693 & 6828 $\pm$ 111 & 4.270 $\pm$ 0.145 & 150\
HD 27459 & F0 IV & 0.129 & 0.204 & 0.871 & 2.812 & 7782 $\pm$ 126 & 4.198 $\pm$ 0.145 & 35\
HD 27524 & F5 V & 0.285 & 0.161 & 0.461 & 2.656 & 6461 $\pm$ 105 & 4.213 $\pm$ 0.145 & 110\
HD 27561 & F4 V & 0.270 & 0.162 & 0.482 & 2.677 & 6594 $\pm$ 107 & 4.284 $\pm$ 0.145 & 30\
HD 27628 & A2 M & 0.133 & 0.225 & 0.707 & 2.756 & 7944 $\pm$ 129 & 4.743 $\pm$ 0.145 & 30\
HD 27819 & A7 IV & 0.080 & 0.209 & 0.982 & 2.857 & 8203 $\pm$ 133 & 4.170 $\pm$ 0.145 & 35\
HD 27901 & F4 V N & 0.238 & 0.178 & 0.597 & 2.704 & 6837 $\pm$ 111 & 4.233 $\pm$ 0.145 & 110\
HD 27934 & A5 IV-V & 0.064 & 0.201 & 1.053 & 2.867 & 8506 $\pm$ 138 & 3.884 $\pm$ 0.091 & 90\
HD 27946 & A7 V & 0.149 & 0.192 & 0.840 & 2.783 & 7584 $\pm$ 123 & 4.112 $\pm$ 0.145 & 210\
HD 27962 & A3 V & 0.020 & 0.193 & 1.046 & 2.889 & 9123 $\pm$ 148 & 4.004 $\pm$ 0.091 & 30\
HD 28024 & A9 IV- N & 0.165 & 0.175 & 0.947 & 2.753 & 7279 $\pm$ 118 & 3.503 $\pm$ 0.145 & 215\
HD 28226 & A M & 0.164 & 0.213 & 0.771 & 2.775 & 7493 $\pm$ 121 & 4.248 $\pm$ 0.145 & 130\
HD 28294 & F0 IV & 0.198 & 0.173 & 0.694 & 2.745 & 7174 $\pm$ 116 & 4.194 $\pm$ 0.145 & 135\
HD 28319 & A7 III & 0.097 & 0.198 & 1.011 & 2.831 & 7945 $\pm$ 129 & 3.930 $\pm$ 0.145 & 130\
HD 28355 & A7 m & 0.112 & 0.226 & 0.908 & 2.832 & 7930 $\pm$ 128 & 4.207 $\pm$ 0.145 & 140\
HD 28485 & F0 V+ N & 0.200 & 0.192 & 0.717 & 2.740 & 7129 $\pm$ 115 & 4.035 $\pm$ 0.145 & 150\
HD 28527 & A5 m & 0.085 & 0.218 & 0.964 & 2.856 & 8180 $\pm$ 133 & 4.194 $\pm$ 0.145 & 100\
HD 28546 & A7 m & 0.142 & 0.234 & 0.796 & 2.809 & 7726 $\pm$ 125 & 4.354 $\pm$ 0.145 & 30\
HD 28556 & F0 IV & 0.147 & 0.202 & 0.814 & 2.795 & 7645 $\pm$ 124 & 4.244 $\pm$ 0.145 & 140\
HD 28568 & F5 V & 0.274 & 0.168 & 0.466 & 2.676 & 6564 $\pm$ 106 & 4.315 $\pm$ 0.145 & 55\
HD 28677 & F2 V & 0.214 & 0.176 & 0.654 & 2.725 & 7032 $\pm$ 114 & 4.161 $\pm$ 0.145 & 100\
HD 28911 & F5 V & 0.283 & 0.163 & 0.459 & 2.663 & 6481 $\pm$ 105 & 4.249 $\pm$ 0.145 & 40\
HD 28910 & A9 V & 0.144 & 0.200 & 0.830 & 2.796 & 7659 $\pm$ 124 & 4.213 $\pm$ 0.145 & 95\
HD 29169 & F2 V & 0.236 & 0.183 & 0.567 & 2.708 & 6880 $\pm$ 111 & 4.321 $\pm$ 0.145 & 80\
HD 29225 & F5 V & 0.276 & 0.171 & 0.461 & 2.675 & 6547 $\pm$ 106 & 4.316 $\pm$ 0.145 & 45\
HD 29375 & F0 IV-V & 0.187 & 0.187 & 0.740 & 2.754 & 7257 $\pm$ 118 & 4.106 $\pm$ 0.145 & 155\
HD 29388 & A5 IV-V & 0.062 & 0.199 & 1.047 & 2.870 & 8645 $\pm$ 140 & 3.927 $\pm$ 0.091 & 115\
HD 29499 & A M & 0.140 & 0.231 & 0.826 & 2.810 & 7713 $\pm$ 125 & 4.266 $\pm$ 0.145 & 70\
HD 29488 & A5 IV-V & 0.080 & 0.196 & 1.017 & 2.852 & 8127 $\pm$ 132 & 4.025 $\pm$ 0.145 & 160\
HD 30034 & A9 IV- & 0.150 & 0.195 & 0.813 & 2.791 & 7610 $\pm$ 123 & 4.218 $\pm$ 0.145 & 75\
HD 30210 & A5 m & 0.091 & 0.252 & 0.955 & 2.845 & 8126 $\pm$ 132 & 4.181 $\pm$ 0.145 & 30\
HD 30780 & A9 V+ & 0.122 & 0.207 & 0.900 & 2.813 & 7823 $\pm$ 127 & 4.141 $\pm$ 0.145 & 155\
HD 31845 & F5 V & 0.294 & 0.165 & 0.439 & 2.658 & 6396 $\pm$ 104 & 4.229 $\pm$ 0.145 & 25\
HD 32301 & A7 IV & 0.079 & 0.202 & 1.034 & 2.847 & 8116 $\pm$ 131 & 3.975 $\pm$ 0.145 & 115\
HD 33254 & A7 m & 0.132 & 0.251 & 0.835 & 2.824 & 7797 $\pm$ 126 & 4.306 $\pm$ 0.145 & 30\
HD 33204 & A7 m & 0.149 & 0.245 & 0.803 & 2.796 & 7634 $\pm$ 124 & 4.270 $\pm$ 0.145 & 30\
HD 25202 & F4 V & 0.206 & 0.172 & 0.695 & 2.724 & 7082 $\pm$ 115 & 4.064 $\pm$ 0.145 & 160\
HD 28052 & F0 IV-V N & 0.153 & 0.183 & 0.934 & 2.767 & 7431 $\pm$ 120 & 3.733 $\pm$ 0.145 & 170\
HD 18404 & F5 IV & 0.269 & 0.169 & 0.481 & 2.680 & 6605 $\pm$ 107 & 4.299 $\pm$ 0.145 & 0\
HD 25570 & F4 V & 0.249 & 0.147 & 0.557 & 2.688 & 6752 $\pm$ 109 & 4.183 $\pm$ 0.145 & 34\
HD 40932 & A2 M & 0.079 & 0.205 & 0.978 & 2.853 & 8224 $\pm$ 133 & 4.191 $\pm$ 0.145 & 18 \[table:hyades\]
Alternative Treatment of Open Clusters
--------------------------------------
As described in § \[subsubsec:numericalmethods\], the 1-D marginalized PDF in age for an individual star is computed on a model grid that is uniformly spaced in log(age). As such, the prior probability of each bin is also encoded in log(age) (see § \[subsubsec:priors\]). Thus, the resultant PDF naturally has units of $d\,p(\log{\tau})/d\log{\tau}$, where $p$ is probability and $\tau$ is age.
In order to transform p($\log{\tau}$) to p($\tau$), one uses the conversion p($\tau$) = p($\log{\tau}$)/$\tau$. Statistical measures *other than the median*, such as the mean, mode, and confidence intervals, will differ depending on whether the PDF being quantified is p($\log{\tau}$) or p($\tau$). For example, $10^{\left \langle \log{\tau} \right \rangle} \neq \left \langle \tau \right \rangle$. Strictly speaking, however, both values are meaningful, and authors frequently choose to report one or the other in the literature. In the case at hand, p($\log{\tau}$) for an individual star is more symmetric than its linear counterpart, p($\tau$). As such, one could reasonably argue that $10^{\left \langle \log{\tau} \right \rangle}$ is a more meaningful metric than $\left \langle \tau \right \rangle$.
In either case, because the PDFs in age and log(age) are both skewed, the median (which, again, is the same regardless of whether p($\tau$) or p($\log{\tau}$) is under consideration) is actually the most meaningful quantification of the PDF, since it is less susceptible to extreme values than either the mean or the mode.
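The conversion and the invariance of the median under the change of variable can be illustrated with a short numerical sketch; the skewed test PDF below is purely illustrative and is not one of our stellar posteriors.

```python
import numpy as np

# Grid uniform in ln(age), mirroring a model grid that is uniform in log(age).
ln_tau = np.linspace(np.log(0.01), np.log(15.0), 2000)   # ages in Gyr, illustrative
tau = np.exp(ln_tau)

# An arbitrary skewed PDF in log(age), standing in for a stellar age posterior.
p_log = np.exp(-0.5 * ((ln_tau - 0.5) / 0.8)**2) * (1.0 + 0.8 * np.tanh(2.0 * (ln_tau - 0.5)))
p_log /= np.trapz(p_log, ln_tau)

# p(tau) = p(log tau) / tau  (Jacobian of the log transform)
p_lin = p_log / tau
p_lin /= np.trapz(p_lin, tau)   # guard against numerical error

def median(x, pdf):
    """Median of a tabulated PDF via its cumulative distribution."""
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))))
    return np.interp(0.5, cdf / cdf[-1], x)

print("10^<log tau> :", np.exp(np.trapz(ln_tau * p_log, ln_tau)))  # differs from <tau>
print("<tau>        :", np.trapz(tau * p_lin, tau))
print("medians      :", np.exp(median(ln_tau, p_log)), median(tau, p_lin))  # agree
```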
With respect to the open clusters, regardless of whether our analyses are performed in logarithmic or linear space, our results favor ages that are younger and older than the accepted values for $\alpha$ Per and the Hyades, respectively.
![image](openclusters-lin-v1.pdf){width="90.00000%"}
[cccccccccccc]{}
IC 2602 & 46$^{+6}_{-5}$ & [@ekstrom2012] & 22 & 3-39 & 41 & 41-42\
& & [@bressan2012] & 24 & 3-40 & 40 & 37-43\
$\alpha$ Persei & 90$^{+10}_{-10}$ & [@ekstrom2012] & 41 & 3-68 & 63 & 61-68\
& & [@bressan2012] & 45 & 3-71 & 62 & 58-66\
Pleiades & 125$^{+8}_{-8}$ & [@ekstrom2012] & 61 & 3-113 & 125 & 122-131\
& & [@bressan2012] & 77 & 3-117 & 112 & 107-120\
Hyades & 625$^{+50}_{-50}$ & [@ekstrom2012] & 118 & 3-403 & 677 & 671-690\
& & [@bressan2012] & 288 & 17-593 & 738 & 719-765
[^1]: <http://idlastro.gsfc.nasa.gov/ftp/pro/astro/uvbybeta.pro>
[^2]: <http://idlastro.gsfc.nasa.gov/ftp/pro/astro/deredd.pro>
[^3]: <http://wwwuser.oats.inaf.it/castelli>
[^4]: <http://kurucz.harvard.edu/grids/gridP00ODFNEW/uvbyp00k0odfnew.dat>
[^5]: <http://www.univie.ac.at/webda/>
---
abstract: 'We show results of broadband dielectric measurements on the charge-ordered, proposed to be multiferroic material LuFe$_2$O$_4$. The temperature and frequency dependence of the complex permittivity, as investigated for temperatures above and below the charge-order transition near $T_{CO}\approx 320$ K and for frequencies up to 1 GHz, can be well described by a standard equivalent-circuit model considering Maxwell-Wagner-type contacts and hopping-induced AC conductivity. No pronounced contribution of intrinsic dipolar polarization could be found, and thus the ferroelectric character of the charge order in LuFe$_2$O$_4$ has to be questioned.'
author:
- 'D. Niermann$^{1}$'
- 'F. Waschkowski$^{1}$'
- 'J. de Groot$^2$'
- 'M. Angst$^2$'
- 'J. Hemberger$^{1}$'
title: |
Dielectric properties of charge ordered LuFe$_2$O$_4$ revisited:\
The apparent influence of contacts
---
In recent years, multiferroics, i.e. materials combining at least two coexisting ferroic order parameters in a single thermodynamic phase, have attracted remarkable interest in condensed matter physics. Currently most promising with respect to application as well as to fundamental aspects is the class of magnetoelectric multiferroics in which ferroelectricity is coupled to magnetism [@Spaldin05; @Khomskii09]. Such coupling could enable the control of the electric polarization via a magnetic field and of the magnetic order via an electric field. However, the coexistence of ferroelectricity and (ferro-)magnetism needs a certain level of complexity as it may be generated via the interplay of structural and electronic degrees of freedom in transition metal oxides. Among these underlying mechanisms [@Khomskii09] two main scenarios for the onset of ferroelectricity may be highlighted: systems in which ferroelectricity is driven by partially frustrated spiral [@Kimura03; @Cheong07] or collinear [@Dagotto07; @Giovanetti11] magnetism, and systems in which ferroelectricity arises from complex charge order [@Brink08], discussed e.g. for nearly half-doped rare earth manganites [@Efremov04; @Joos07], nickelates [@Giovanetti09], magnetite [@Picozzi09], or in particular LuFe$_2$O$_4$ [@Ikeda05]. For this latter class of materials the residual conductivity at the charge order (CO) transition, which in addition may depend on magnetic and electric fields [@Joos07; @Sichelschmidt01], makes it difficult to probe the theoretically predicted onset of ferroelectricity via macroscopic methods like pyrocurrent, hysteretic $P(E)$ loops, or dielectric permittivity measurements. In such cases contact contributions may add capacitive [@Lunkenheimer02; @Biskup05] or even magneto-capacitive contributions [@Catalan06] which can mask the intrinsic sample properties.
The mixed valence (Fe$^{2+}$/Fe$^{3+}$) system LuFe$_2$O$_4$ was proposed to show a novel type of ferroelectricity based on frustrated charge order within triangular Fe-O double layers at $T_{CO} \approx 330$ K [@Ikeda05], which is even proposed to be coupled to magnetism and magnetic fields [@Subramanian06; @Mulders11]. The corresponding ferroelectric moment was suggested to result from a CO configuration of polar bilayers with a Fe$^{2+}$/Fe$^{3+}$ imbalance within both sublayers [@Ikeda05; @Angst08]. Below the charge order transition magnetic order sets in at $T_N=240$ K, which is altered in a further magneto-structural, first-order type transition at $T_{LT} \approx 175$ K [@Xu08]. However, the large permittivity values, magneto-capacitive effects and temperature-dependent polarization measurements reported for this material suffer from being influenced by the relatively high residual conductivity. Thus, unambiguous evidence for ferroelectricity is difficult to obtain by means of dielectric measurements. Schottky-type depletion layers at the contact interfaces or grain boundaries can lead to Maxwell-Wagner effects [@Maxwell-Wagner], and hopping conductivity can give a further frequency-dependent contribution to the apparent dielectric constant [@Lunkenheimer02]. Such effects have already been demonstrated for poly-crystalline samples of LuFe$_2$O$_4$ for temperatures below 300 K and for frequencies up to the MHz range [@Ren11]. Here we report on broadband spectroscopic investigations of the permittivity in high-quality LuFe$_2$O$_4$ single crystals below and above the CO transition for temperatures up to 400 K and frequencies up to 1 GHz in order to separate intrinsic and non-intrinsic contributions to the dielectric properties and to elucidate the potentially polar nature of the CO state.
The single-crystalline samples of LuFe$_2$O$_4$ were grown employing the floating-zone method [@Christianson08]. Structural and magnetic measurements confirmed the known behavior: in the high temperature phase the samples are hexagonal and show the known sequence of phase transitions at $T_{CO}=320$ K, $T_{N}=240$ K, and $T_{LT}=175$ K. The samples are from the same batch as used for the latest structural studies published in Refs. and . The dielectric measurements were made in a commercial $^4$He-flow magneto-cryostat ([Quantum-Design PPMS]{}) employing a home-made coaxial-line inset. The complex, frequency dependent dielectric response was measured using a frequency-response analyzer ([Novocontrol]{}) for frequencies from 1 Hz to 1 MHz. For higher frequencies up to 1 GHz a micro-strip setup was employed and the complex transmission coefficient ($S_{12}$) was evaluated via a vector network analyzer ([Rhode & Schwarz]{}). All measurements were performed with the electric field along the crystallographic $c$-axis (the direction for which a spontaneous ferroelectric moment was postulated [@Ikeda05]) with a small stimulus of the order $E_0\approx 1$ V$_{rms}$/mm. The contacts were applied to the plate-like single-crystals using silver paint in sandwich geometry with a typical electrode area of $A\approx 1$ mm$^2$ and a thickness of $d\approx0.4$ mm. The uncertainty in the determination of the exact geometry together with additional (but constant) contributions of stray capacitances results in an uncertainty in the absolute values for the permittivity of up to 20 %. Additional specific heat measurements were carried out in a commercial system ([Quantum-Design PPMS]{}).
![ (Color online) Temperature dependence of the complex dielectric permittivity $\varepsilon^*(T)$ as measured for frequencies between 1 Hz and 1 GHz equally spaced with two frequencies per decade. The middle frame displays the 1 GHz curve of the real part on a linear scale together with specific heat data around the charge order transition. []{data-label="eps(T)"}](eps_T_inset.pdf){width="1.0\columnwidth"}
Fig. 1 shows the temperature dependence of the complex permittivity for temperatures 100 K $< T <$ 400 K. In the real part $\varepsilon'(T)$ a pronounced step is observed from a low value of roughly $\varepsilon_i \approx 30$ for high frequencies and low temperatures to high values of several thousand for low frequencies and high temperatures. This feature resembles the findings of high permittivity values reported in the literature [@Ikeda05; @Subramanian06]. Already at this point it is remarkable that these high $\varepsilon$-values for low enough frequencies ($\nu < 1$ MHz) do persist for temperatures above $T_{CO}$, and thus obviously do not depend on the onset of possibly ferroelectric charge order. These steps in $\varepsilon'(T)$ are accompanied by cusp-like features in the imaginary part $\varepsilon''(T)$ (Fig. \[eps(T)\], lower frame), which is, however, dominated by a steep, nearly logarithmic, and strongly frequency-dependent increase with temperature. This type of behavior is due to the influence of conductivity $\sigma$, which in general is connected to the dielectric loss $\varepsilon''$ via the relation $\sigma'= \omega\varepsilon_0\varepsilon''$. The details of these conductivity contributions will be discussed later, but already at this point it shall be mentioned that for high enough frequencies such non-intrinsic features are suppressed (or rather shifted to higher temperatures) and only the intrinsic features persist. Such high-frequency data ($\nu=1$ GHz) for $\varepsilon'(T)$ are displayed in the middle frame of Fig. \[eps(T)\], this time on a linear scale. At $T_{LT}\approx 175$ K a small step-like anomaly with a distinct temperature hysteresis is reminiscent of the magneto-structural transition. The magnetic transition at $T_N=240$ K does not show up in the dielectric data, which calls a pronounced magneto-electric coupling into question. But most remarkably, at $T_{CO}\approx 320$ K no divergent behavior in the permittivity can be detected. In contrast, at the point where the charge order sets in (as reconfirmed via the peak in the specific heat measured on the very same sample, see inset in Fig. \[eps(T)\]) $\varepsilon'(T)$ is decreased. This is not compatible with the formation of a spontaneous polarization of the order of several $\mu$C/cm$^2$ as reported in the literature [@Ikeda05].
![(Color online) Frequency dependence of the complex dielectric permittivity $\varepsilon^*(\nu)$ for temperatures between 140 K and 400 K (in steps of 20 K) and in the frequency range 1 Hz $\leq \nu \leq$ 1 GHz. The data ($\circ$) for the real and imaginary part were fitted simultaneously using the equivalent circuit model described in the text. The fitting results are displayed as solid lines. []{data-label="eps(nu)"}](eps_nu_.pdf){width="1.0\columnwidth"}
![(Color online) Temperature dependence of model parameters gained from the fitting of the permittivity spectra as displayed in Fig. \[eps(nu)\]. The upper frame displays the different conductivity contributions, i.e. the intrinsic DC-conductivity $\sigma_{DC}$, the intrinsic variable range hopping contribution to the conductivity $\sigma_{0}\omega^s$ at a frequency of $\omega = 2\pi\nu=1$ GHz, and the non-intrinsic contribution due to the contact resistance $G_C=1/R_C$ normalized to the sample geometry $A/d$ for reasons of comparability. The solid lines in the $\sigma_{DC}(T)$ curve depict the change of slope near the CO-transition. The middle frame displays the non-intrinsic capacitive contribution due to the contacts and the lower frame, finally, gives the intrinsic permittivity contribution of the material. []{data-label="parameter"}](parameter.pdf){width="0.9\columnwidth"}
In order to shed light onto the origin of the large permittivity values obtained from the dielectric measurements we evaluated the frequency dependent complex permittivity (Fig. \[eps(nu)\]). The data roughly can be described as temperature dependent Debye-like steps in $\varepsilon'(\nu)$ accompanied by corresponding peaks in the dielectric loss $\varepsilon''(\nu)$ superimposed by a contribution $\propto 1/\nu$ due to the temperature dependent conductivity. The time constant which defines the step-position for each temperature is given by the effective sample resistance and the contact capacitance $\tau=R(T)C$. The data were quantitatively modeled using an equivalent-circuit description sketched in the lower frame of Fig. \[eps(nu)\]. In addition to the intrinsic DC-conductivity $\sigma_{DC}$ and the intrinsic, (for the regarded spectral range) frequency independent permittivity $\varepsilon_i$ of the material the equivalent-circuit model [@Lunkenheimer02] contains also the conductance $G_C$ and capacitance $C_C$ of the contacts. In poly-crystalline material additional heterogeneities, e.g. grain boundaries, might be considered, which in small single crystals, however, are absent. A further conductivity contribution results from hopping processes in the sample and can be modeled using a frequency dependent term for the ac-conductivity $\sigma_0\omega^s$ (with $\omega = 2\pi\nu$). This term contributes not only to the dielectric loss $\varepsilon''=\sigma'/ (\omega\varepsilon_0)$, but also gives a corresponding Kramers-Kronig consistent contribution to the real part of the permittivity and is commonly described as [*universal dielectric response*]{} [@Jonscher96]. The fits to the data were calculated simultaneously for the real and imaginary part and are displayed as solid lines in Fig. \[eps(nu)\]. The data can convincingly be modeled above and below $T_{CO}$ over the full spectral range of nine decades without the need of further contributions reminiscent of the onset of ferroelectric order.
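For illustration, a minimal numerical sketch of this equivalent-circuit description is given below. All parameter values are placeholders of plausible magnitude rather than our fit results, and the hopping term is implemented in the common $\sigma_0(i\omega)^s$ form, which produces the Kramers-Kronig-consistent real-part contribution automatically and differs from the $\sigma_0\omega^s$ notation used above only by a constant prefactor.

```python
import numpy as np

eps0 = 8.854e-12                       # vacuum permittivity (F/m)
A, d = 1e-6, 0.4e-3                    # electrode area (m^2) and sample thickness (m)
C_geo = eps0 * A / d                   # empty-cell capacitance of the sample geometry

def eps_eff(nu, eps_i, sigma_dc, sigma_0, s, R_C, C_C):
    """Effective permittivity of the bulk (eps_i, sigma_dc, UDR hopping term)
    in series with a Maxwell-Wagner-type contact (R_C parallel to C_C)."""
    w = 2 * np.pi * nu
    sigma_bulk = sigma_dc + sigma_0 * (1j * w)**s + 1j * w * eps0 * eps_i
    Z_bulk = d / (A * sigma_bulk)
    Z_contact = R_C / (1 + 1j * w * R_C * C_C)
    Z_total = Z_bulk + Z_contact
    return 1.0 / (1j * w * C_geo * Z_total)      # eps* = eps' - i*eps''

nu = np.logspace(0, 9, 400)                      # 1 Hz ... 1 GHz
# placeholder parameters (orders of magnitude only, not the published fit values)
eps_star = eps_eff(nu, eps_i=35, sigma_dc=1e-3, sigma_0=1e-9, s=0.6,
                   R_C=1e6, C_C=7000 * C_geo)
eps_real, eps_loss = eps_star.real, -eps_star.imag
```

In such a sketch the apparent relaxation step in $\varepsilon'(\nu)$ appears where $\omega\tau\approx1$ with $\tau$ set by the effective resistance and the contact capacitance, mimicking a colossal effective permittivity without any intrinsic dipolar contribution.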
The results for the corresponding temperature-dependent fitting parameters are displayed in Fig. \[parameter\]. The upper frame gives the contributions to the conductivity or the dielectric loss, respectively. The intrinsic DC-conductivity of the sample $\sigma_{DC}$ (red curve in the upper frame of Fig. \[parameter\]) shows an approximately exponential decrease with decreasing temperature as expected for semiconductors. Around $T_{CO}$ a change of slope in this semi-logarithmic representation can be identified (see solid lines in the upper frame of Fig. \[parameter\]), reflecting the change of charge carrier mobility at the charge order transition. Similar results were obtained from Mößbauer-spectroscopy [@Xu08]. However, it is remarkable that the contribution of the contact resistance $1/G_C$ (blue curve in the upper frame of Fig. \[parameter\], displayed as normalized to the sample geometry) dominates the different contributions at all temperatures. The hopping contribution (green curve in the upper frame of Fig. \[parameter\]) is displayed as $\sigma_{0}\omega^s$ for $\omega/2\pi=1$ GHz. Again near $T_{CO}$ a small anomaly can be detected. The parameter $s$ possesses a weak and monotonic temperature dependence around values close to $s \approx 0.6$, in agreement with canonical expectations [@Lunkenheimer02; @Jonscher96]. The middle frame gives the results for the non-intrinsic contact capacitance $C_C$, displayed as a contribution to the “effective” dielectric constant, i.e. normalized to the geometric capacitance of the sample. The large values of $\approx 7000$ are more or less constant within the error bars, which strongly increase for lower temperatures as the corresponding relaxational step shifts more and more out of the regarded frequency range. Such a weakly temperature-dependent capacitance contribution can be understood in terms of very thin depletion layers formed by the Schottky-type metal-semiconductor interfaces at the electrodes. This contribution dominates the capacitive response of the sample in the low-frequency, high-temperature regime. The intrinsic contribution to the dielectric constant $\varepsilon_i(T)$ is displayed in the lower frame of Fig. \[parameter\]. The residual values lie between 30 and 40, comparable to other transition metal oxides [@Lunkenheimer02] but far from the large “effective” values generated by the contacts. The curvature of $\varepsilon_i(T)$ corroborates the data obtained for high frequencies as displayed in the middle frame of Fig. \[eps(T)\]. Again, the decrease of the permittivity on crossing $T_{CO}$ into the CO-phase and the absence of any divergent characteristic at the transition temperature do not point towards the onset of ferroelectricity. This interpretation is consistent with recent results of structural refinements of x-ray diffraction data from the charge-ordered phase of LuFe$_2$O$_4$, where the polar character of the bilayers could not be verified [@Groot12]. Also, scenarios in which disorder smears out the onset of spontaneous polarization and relaxor ferroelectric behavior emerges can be ruled out as an explanation for the relaxational features found in the dielectric response of LuFe$_2$O$_4$. Such a scenario has been proposed e.g. for the charge ordered phase of magnetite [@Schrettle11], but then the corresponding relaxation strengths should increase towards lower temperatures, while in the present case the effective relaxation strength decreases, in accordance with the interpretation of an origin due to contacts and hopping conductivity.
In addition, we repeat that such a strongly conductivity-dominated scenario may explain the reported anomalies in pyro-current measurements or the $P(T)$ data derived from them [@Maglione08]: charges are trapped inside the “hetero-structure” of contacts and sample for low conductivity values at low temperatures and are released when the conductivity is enhanced at higher temperatures close to the CO-transition.
Summarizing, we performed broadband dielectric spectroscopy on single-crystalline LuFe$_2$O$_4$ in the frequency range 1 Hz $<\nu<$ 1 GHz for temperatures well above and below the charge order transition at $T_{CO} \approx 320$ K. The results for the frequency- and temperature-dependent complex permittivity can be modeled quantitatively in terms of extrinsic contact contributions and intrinsic contributions due to finite DC-conductivity, hopping-induced AC-conductivity, and intrinsic dielectric permittivity. The results for the intrinsic dielectric properties do not possess any features reminiscent of the onset of ferroelectric order. Thus we suggest reconsidering the polar nature of the charge-ordered state in LuFe$_2$O$_4$. In order to elucidate the ordering phenomena in this interesting but complex system, further experimental and theoretical investigations are highly desirable.
This work has been funded by the DFG through SFB608 (Cologne). Support from the initiative and networking fund of the Helmholtz Association through funding of the Helmholtz University Young Investigator Group “Complex Ordering Phenomena in Multifunctional Oxides” is gratefully acknowledged. MA thanks D. Mandrus, B.C. Sales, W. Tian and R. Jin for their assistance during sample synthesis, also supported by US-DOE.
[99]{}
D.I. Khomskii, Physics [**2**]{}, 20 (2009)
N.A. Spaldin and M. Fiebig, Science [**309**]{}, 391 (2005)
T. Kimura [*et al.*]{}, Nature [**426**]{}, 55 (2003)
S.-W. Cheong and M. Mostovoy, Nature Materials [**6**]{}, 13 (2007)
S. Picozzi [*et al.*]{}, Phys. Rev. Lett. [**99**]{}, 227201 (2007)
G. Giovannetti [*et al.*]{}, Phys. Rev. B [**83**]{}, 060402 (2011)
J. van den Brink and D.I. Khomskii, J. Phys.: Condens. Matter [**20**]{}, 434217 (2008)
Efremov [*et al.*]{}, Nat. Mater. [**3**]{}, 853 (2004).
Ch. Joos [*et al.*]{}, PNAS [**104**]{}, 13597 (2007)
G. Giovannetti [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 156401 (2009)
K. Yamauchi, T. Fukushima, and S. Picozzi, Phys. Rev. B [**79**]{}, 212404 (2009)
N. Ikeda [*et al.*]{}, Nature [**436**]{}, 1136 (2005)
J. Sichelschmidt [*et al.*]{} , Euro. Phys. J. B [**20**]{}, 7 (2001)
N. Biskup [*et al.*]{}, Phys. Rev. B [**72**]{}, 024115 (2005).
P. Lunkenheimer [*et al.*]{}, Phys. Rev. B [**66**]{}, 052105 (2002)
G. Catalan [*et al.*]{}, Appl. Phys. Lett. [**88**]{}, 102902 (2006)
M.A. Subramanian [*et al.*]{}, Adv. Mater. [**18**]{}, 1737 (2006).
A.M. Mulders [*et al.*]{}, Phys. Rev. B [**84**]{}, 140403 (2011).
X.S. Xu [*et al.*]{}, Phys. Rev. Lett. [**101**]{}, 227602 (2008).
J.C. Maxwell, [*Treatise on Electricity and Magnetism*]{}, 3rd ed. (Dover, New York, 1991); R.J. Wagner, Ann. Phys. (Leipzig) [**40**]{}, 817 (1913)
P. Ren [*et al.*]{}, J. Appl. Phys. [**109**]{}, 074109 (2011).
A.D. Christianson [*et al.*]{}, Phys. Rev. Lett. [**100**]{}, 107601 (2008).
M. Angst [*et al.*]{}, Phys. Rev. Lett. [**101**]{}, 227601 (2008).
J. de Groot [*et al.*]{}, arXiv:1112.0978v1
A.K. Jonscher [*Universal Relaxation Law*]{} (Chelsea, New York, 1996)
F. Schrettle [*et al.*]{}, Phys. Rev. B [**83**]{}, 195109 (2011)
M. Maglione and M.A. Subramanian, Appl. Phys. Lett. [**93**]{}, 032902 (2008)
---
abstract: 'We study two canonical online optimization problems under capacity/budget constraints, the [*fractional*]{} one-way trading problem (OTP) and the [*integral*]{} online knapsack problem (OKP) with an infinitesimal assumption. Under the competitive analysis framework, it is well known that both problems have the same optimal competitive ratio. However, these two problems have been investigated by distinct approaches under separate contexts in the literature. There is a gap in the understanding of the connection between these two problems and of the nature of their online algorithm design. This paper provides a unified framework for the online algorithm design, analysis and optimality proof for both problems. We find that the infinitesimal assumption of the OKP is the key that connects it to the OTP, both in the analysis of online algorithms and in the construction of worst-case instances. With this unified understanding, our framework shows its potential for analyzing other extensions of the OKP and the OTP in a more systematic manner.'
author:
- 'Ying Cao, Bo Sun, Danny H.K. Tsang'
bibliography:
- 'my\_bib.bib'
title: '**Optimal Online Algorithms for One-Way Trading and Online Knapsack Problems: A Unified Competitive Analysis** '
---
INTRODUCTION
============
Online optimization under capacity/budget constraints is a classical and challenging problem. Two well-known examples are the one-way trading problem (OTP) and the online knapsack problem (OKP).
In the OTP, an investor plans to trade a total amount of 1 dollar into yen. The exchange rates $p_i$ arrive online and are bounded, i.e., $p_i\in [L,U]$. The investor must immediately decide how much to trade at each exchange rate. If $x_i$ dollars are traded at the $i$th exchange rate $p_i$, $p_i x_i$ is the amount of yen the investor gains. The goal is to maximize the amount of yen traded after processing the $N$th exchange rate $\sum_{i=1}^N p_i x_i$, while respecting the budget limit $\sum_{i=1}^N x_i \le 1$. It is well-known that $(\ln (U/L)+1)$-competitive algorithms can be designed, e.g., the threat-based algorithm in [@el2001optimal] and the *CR-Pursuit* algorithm in [@Lin2019]. The 0-1 knapsack problem is a classic problem in computer science, where a decision maker selects a best subset of items that maximizes the total value of the knapsack contents without exceeding the normalized capacity limit $1$. Whereas in the OKP, the items arrive online. The value $v_i$ and weight $w_i$ of the $i$th item are only revealed upon its arrival. An online decision is made on whether to accept the item ($z_i=1$) or not ($z_i=0$). There exist no online algorithms with bounded competitive ratios for the OKP in the general setting [@babaioff2007knapsack]. However, ($\ln (U/L) +1$)-competitive algorithms can be designed [@zhou2008budget][@zhang2017optimal] under the [*infinitesimal assumption*]{} that the weight of each item is much smaller than the capacity (i.e., $\max_i w_i \ll 1$), and the bounded value-to-weight ratio assumption (i.e., $v_i/w_i\in [L,U]$). The infinitesimal assumption is a technical simplification but has been shown to hold in practical applications like cloud computing systems [@zhang2017optimal] and widely accepted in the literature. In this paper, we refer to the OKP with the infinitesimal assumption as OKP.
Both problems and their many variants have appeared in numerous applications, such as portfolio selection, cloud resource allocation, and keyword auctions, and thus have attracted considerable attention. A typical variant for both problems is to assume that the arrivals follow certain distributions[@tran2015efficient][@fujiwara2011average] or come in random order[@albers2019improved] in order to circumvent the analysis of the worst-case scenario. For the OTP, unbounded prices [@chin2015competitive] and interrelated prices [@schroeder2018optimal] have been considered recently; for the OKP, knapsacks with unknown capacity[@disser2017packing] and removable items[@han2013randomized] are interesting generalizations.
Motivated by the gaps in understanding the nature of the challenges in online algorithm design, we aim to unify the online algorithms for the OTP and the OKP into a threshold-based algorithm, in which the competitive performance of the algorithm mainly depends on the threshold function. We provide a sufficient condition on the threshold function that ensures a bounded competitive ratio, and design the best possible threshold function based on this sufficient condition. Finally, we derive the lower bound of the competitive ratios of the OTP and the OKP. Although all results match those in the literature, the existing works approach the results by distinct methods and lack a systematic way of designing and analyzing related problems. Rather than the results themselves, this paper mainly focuses on the analysis and proofs. Our contributions are two-fold.
- We unify the online algorithms for the OTP and the OKP into a threshold-based algorithm and show that the unified algorithm can achieve the optimal competitive ratios under a unified competitive analysis.
- We provide new proofs for the lower bound of the competitive ratios for the OTP and the OKP. The connection between these two problems is revealed in the construction of the worst-case instances.
A UNIFIED ALGORITHM / Our Results
=================================
Notations
---------
Since the two problems originally use distinct sets of terms, we unify the notations for brevity of the problem formulation and clarify the different meanings here. Let $x_i$ denote the amount of dollars traded at the $i$th exchange rate for the OTP, while for the OKP it represents the capacity occupied by the $i$th item, i.e., $w_i z_i$. Let $b_i$ denote the exchange rate $p_i$ in the OTP, and the value-to-weight ratio of the $i$th item in the OKP, i.e., $\frac{v_i}{w_i}$. The LP that characterizes the offline problem of the OTP under the unified notation is: $$\begin{aligned}
\underset{\bm{x}}{\text{maximize}} \quad &\sum_{i=1}^{N}b_i x_i\\
s.t. \quad & \sum_{i=1}^{N}x_i \le 1 \\
& x_i \ge 0, \forall i \in [N]
\end{aligned}
\tag{1}\label{primal_otp}$$ The dual problem of (\[primal\_otp\]) is $$\begin{aligned}
\underset{\lambda}{\text{minimize}} \quad & \lambda\\
s.t. \quad & \lambda \geq b_i, \forall i \in [N]
\end{aligned}
\tag{2}\label{dual_otp}$$ Change the last constraint of (\[primal\_otp\]) to $0\le x_i\le w_i$, then the resulting LP serves as an upper bound of the OKP, and its dual is $$\begin{aligned}
\underset{\lambda,\bm{\beta}}{\text{minimize}} \quad &\lambda+\sum_{i=1}^{N}w_i \beta_i\\
s.t. \quad & \lambda+\beta_i\ge b_i,\forall i \in [N] \\
& \lambda \ge 0,\beta_i \ge 0, \forall i \in [N]
\end{aligned}
\tag{3}\label{dual_okp}$$
A Unified Algorithm
-------------------
Both the OTP and the OKP aim to allocate a single budget-constrained resource sequentially. Since the current decision affects future decisions through the budget constraint, we need an estimate of the value of the remaining resource to facilitate decision-making. Our idea is to use a threshold function to estimate the value of the resource.
A threshold function $\phi(y): [0,1]\to [0,\infty)$ estimates the marginal cost of a resource at utilization $y$.
Given $\phi(y)$, we can estimate the pseudo-cost of allocating an amount $x_i$ of the resource by $\int_{y^{(i-1)}}^{y^{(i-1)} + x_i}\phi(\delta)d\delta$. Our unified algorithm then chooses the $x_i$ that maximizes the pseudo-revenue $b_ix_i - \int_{y^{(i-1)}}^{y^{(i-1)} + x_i}\phi(\delta)d\delta$. The overall algorithm is summarized in Algorithm 1.
[ **Initialize:** ]{} $\phi(y)$, $y^{(0)} = 0$, $b_i\in[L,U]$. For each arrival $i$:

1. Determine $$\begin{aligned}
\tag{4} \label{eq:online-decision}
x_i=\arg\max_{x\in S}\ b_i x-\int_{y^{(i-1)}}^{y^{(i-1)}+x}\phi(\delta)d\delta;\end{aligned}$$

2. Update $y^{(i)} = y^{(i-1)}+x_i$;

3. If $y^{(i)}>1$, set $x_i=0$.
For the OTP, $S$ is the set of positive real numbers, whereas $S$ is the set $\{0,w_i\}$ for the OKP. Note that step 3 is only necessary for the OKP, and then step 1 reduces to $x_i=\begin{cases}
w_i, & b_i\ge \phi(y^{(i-1)})\\
0, & otherwise
\end{cases}$, which corresponds to the update equation in [@zhou2008budget]. Algorithm 1 can be easily applied in the posted-price setting by its nature.
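For concreteness, a minimal Python sketch of Algorithm 1 is given below (the function and variable names are our own). For the OTP the maximizer in step 1 has the closed form $x_i=\max\{0,\phi^{-1}(b_i)-y^{(i-1)}\}$ because $\phi$ is non-decreasing; for the OKP it reduces to the accept/reject rule stated above.

```python
def algorithm1(arrivals, phi, phi_inv, problem="OTP"):
    """Unified threshold-based algorithm (Algorithm 1).
    arrivals : iterable of b_i (OTP) or of (b_i, w_i) pairs (OKP)
    phi      : threshold function on [0, 1]
    phi_inv  : generalized inverse of phi, as used in (6)
    Returns the online objective value and the final utilization y."""
    y, revenue = 0.0, 0.0
    for item in arrivals:
        if problem == "OTP":
            b = item
            # closed-form maximizer of b*x minus the integral of phi over [y, y+x]
            x = max(0.0, phi_inv(b) - y) if b >= phi(y) else 0.0
        else:  # OKP with infinitesimal items: accept iff the density clears the threshold
            b, w = item
            x = w if (b >= phi(y) and y + w <= 1.0) else 0.0
        y += x
        revenue += b * x
    return revenue, y
```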
Main Results
------------
A standard measure for the performance of an online algorithm is the competitive ratio. Under the unified notation, define an arrival instance $\mathcal{A}$ as $\{b_i\}_{\forall i\in [N]}$ for the OTP, and as $\{b_i,w_i\}_{\forall i\in [N]}$ for the OKP. Given the arrival instance $\mathcal{A}$, denote the objective values achieved by the online algorithm and by the offline optimum by $\text{ALG}(\mathcal{A})$ and $\text{OPT}(\mathcal{A})$, respectively. If $$\begin{aligned}
\alpha = \max_{\mathcal{A}}{}\frac{\text{OPT}(\mathcal{A})}{\text{ALG}(\mathcal{A})},\end{aligned}$$ then we say the online algorithm is $\alpha$-competitive. The competitive ratio of Algorithm 1 only depends on the choice of the function $\phi$. We find the sufficient conditions of $\phi$ for Algorithm 1 to be $\alpha$-competitive as in the following theorem.
\[sufficiency\] Algorithm 1 is $\alpha$-competitive for both the OTP and the OKP if $\phi$ is given by $$\phi(y) = \left\{
\begin{array}{cc}
L & y \in [0,\omega]\\
\varphi(y) & y \in [\omega,1],
\end{array}
\right .$$ where $\omega$ is a budget/capacity utilization level that satisfies $\frac{1}{\alpha}\le \omega \le 1$, and $\varphi(y)$ is an increasing function that satisfies $$\tag{5}\label{suffi_diffeq}
\left\{
\begin{array}{l}
\varphi(y)\ge\frac{1}{\alpha}\varphi'(y),y \in [\omega,1] \\
\varphi(\omega)=L, \varphi(1)\ge U.
\end{array}
\right .$$
As shown in the theorem, $\phi$ is composed of two segments, a constant one followed by an increasing one of exponential type. Note that the functions used in [@zhou2008budget][@zhang2017optimal] satisfy these conditions; however, those functions are introduced without a principled derivation. In contrast, by the following theorem, we can characterize the performance limit over the space of eligible functions and rigorously identify the function that admits the smallest (best) competitive ratio.
\[opt\_CR\_for\_alg1\] Given $L$ and $U$, the best competitive ratio that can be achieved by Algorithm 1 is $(\ln \theta+1)$, and the corresponding $\phi^*$ is unique, where $\theta=U/L.$
We show that no other online algorithm can do better than Algorithm 1 by the following theorem.
\[lower\_bound\_thm\] Given $L$ and $U$, $(\ln\theta+1)$ is the lowest possible competitive ratio for both the OTP and the OKP.
In the next section, we introduce the primal-dual analysis framework, with which we prove Theorem \[sufficiency\]. Subsequently, we prove Theorem \[opt\_CR\_for\_alg1\] by the Gronwall’s inequality. In Section \[lower\_bounds\], we show Theorem \[lower\_bound\_thm\] by adversarial arguments.
COMPETITIVE ANALYSIS
====================
Primal-Dual Competitive Analysis
--------------------------------
Given the arrival instance $\mathcal{A}$, we denote the primal and dual objective values after processing $b_n$ by $P_n(\mathcal{A})$ and $D_n(\mathcal{A})$, respectively. For simplicity, we drop the argument and write $P_n$ and $D_n$ hereinafter. We briefly introduce the framework by giving the following lemma.
\[primal-dual-lemma\] An online algorithm is $\alpha$-competitive if it can determine the primal variables $x$ and construct dual variables $\lambda$ based on the primal variables such that
- (*Feasible Solutions*) $x$ and $\lambda$ are feasible solutions of the primal and the dual.
- (*Initial Inequality*) there exists an index $k\in [N]\cup \{0\}$ such that $P_k\ge \frac{1}{\alpha}D_k$.
- (*Incremental Inequalities*) for $i\in \{k+1,\dots,N\}$, $$P_i-P_{i-1}\ge \frac{1}{\alpha}(D_i-D_{i-1}).$$
The primal feasibility is trivial since any online algorithm must first produce a feasible solution to the problem. It suffices to prove $P_N\ge \frac{1}{\alpha}D_N$ since $$ALG = P_N \ge \frac{1}{\alpha}D_N \overset{(a)}{\ge} \frac{1}{\alpha}D^* \overset{(b)}{\ge} \frac{1}{\alpha}OPT,$$ where ($a$) is due to the dual feasibility, and ($b$) is due to the weak duality. Suppose there exists a $k$ such that $P_i-P_{i-1} \ge \frac{1}{\alpha}(D_i-D_{i-1})$ holds for all $i\in \{k+1,\dots,N\}$; then we have $P_N-P_k \ge \frac{1}{\alpha}(D_N-D_k)$. Combining this with the initial inequality leads to $P_N\ge \frac{1}{\alpha}D_N$. We thus complete the proof.
Note that the primal-dual competitive analysis framework that we use is more general than those used in the existing works, in that the initial inequality may start from any $k\in [N]\cup \{0\}$ rather than from $0$.
Next we show the proofs of Theorems \[sufficiency\] and \[opt\_CR\_for\_alg1\] for the OTP, and highlight the differences between them and the OKP case.
Analysis of OTP
---------------
(**Feasible Solutions**) First we show that the primal and dual solutions given by Algorithm 1 are feasible, $$\begin{aligned}
x_i = \begin{cases}
\phi^{-1}(b_i) - y^{(i-1)} & b_i \ge \phi(y^{(i-1)})\\
0 & b_i < \phi(y^{(i-1)}),
\end{cases}
\tag{6}\label{primal_solution_expression}\end{aligned}$$ where $\phi^{-1}(b)=\begin{cases}\omega & b=L \\ \varphi^{-1}(b) & b>L\end{cases}$. (\[primal\_solution\_expression\]) ensures $\forall i,x_i \ge 0$, and $\phi(1)\ge U$ ensures $\phi^{-1}(U)\le 1$. Since $y^{(N)} = \phi^{-1}(\max_{i\in[N]} b_i)$, we have $y^{(N)}\le \phi^{-1}(U)\le 1$. Thus the primal solutions are feasible. Construct the dual variables as $$\lambda_i = \phi(y^{(i)}).$$ Since $\phi(y)$ is non-decreasing, $\lambda_N=\phi(y^{(N)}) \ge \phi(y^{(i)}), \forall i\in[N]$. Thus $\lambda_N$ is a feasible solution to the dual.
(**Initial Inequality**) For the OTP, $P_0=0, D_0=\phi(y^{(0)})=L>0$. When $k\ge 1$, the primal objective at the end of the $k$th time slot is $ P_k=\sum_{i=1}^k b_i x_i, $ while the dual objective is $ D_k=\lambda_k=\phi(y^{(k)}). $
Since $y^{(0)}=0$ and $\forall i,\phi(0) = L\le b_i$, by (\[primal\_solution\_expression\]), we have $$x_1 = \phi^{-1}(b_1) = \begin{cases}\omega & b_1=L \\ \varphi^{-1}(b_1) & b_1>L.\end{cases}$$ Because $\varphi(y)$ is an increasing function, we have $$x_1\ge \omega\ge \frac{1}{\alpha}.$$ Since $b_1=\phi(x_1)$, it follows that $$P_1= b_1 x_1 \ge \frac{b_1}{\alpha}=\frac{1}{\alpha}\phi(x_1)=\frac{1}{\alpha}D_1.$$ (**Incremental Inequalities**) Next we show the incremental inequalities for $i>1$. Note that when $x_i=0$, $P_i=P_{i-1}$ and $D_i=D_{i-1}$. In this case, the incremental inequalities $P_i-P_{i-1}\ge \frac{1}{\alpha}(D_i-D_{i-1})$ always holds. Thus, we only need to focus on the case where $x_i>0,\forall i>1$, when the behavior of the algorithm is controlled by the second segment of $\phi$, which satisfies $\varphi(y)\ge\frac{1}{\alpha}\varphi'(y)$ for $y\in [\omega,1]$, and two boundary conditions $\varphi(\omega)=L$ and $\varphi(1)\ge U$.
The change in the primal objective is given as follows: $$P_i-P_{i-1}=b_i x_i\overset{(a)}{=} \phi(y^{(i)})x_i,$$ where ($a$) is due to (\[primal\_solution\_expression\]) and $y^{(i)}=y^{(i-1)}+x_i$.
The change in the dual objective is given as follows: $$\begin{aligned}
D_i-D_{i-1}=\lambda_i-\lambda_{i-1}&=\phi(y^{(i)})-\phi(y^{(i-1)}) \\
&= \varphi(y^{(i)})-\varphi(y^{(i-1)}).\end{aligned}$$
By the Cauchy mean value theorem, for every segment $[y^{(i-1)},y^{(i)}]$, there exists a $\delta_i \in [y^{(i-1)},y^{(i)}]$ such that
$$\frac{\varphi(y^{(i)})-\varphi(y^{(i-1)})}{y^{(i)}-y^{(i-1)}}=\varphi'(\delta_i).$$
Since $\forall y\in [\omega,1],\varphi(y)\ge\frac{1}{\alpha}\varphi'(y)$, and $\varphi(y)$ is increasing, we have
$$\alpha \varphi(y^{(i)}) \ge \alpha \varphi(\delta_i) \ge \frac{\varphi(y^{(i)})-\varphi(y^{(i-1)})}{y^{(i)}-y^{(i-1)}}.$$ Because $y^{(i)}-y^{(i-1)}>0$, we have $$\varphi(y^{(i)})(y^{(i)}-y^{(i-1)})\ge \frac{1}{\alpha}(\varphi(y^{(i)})-\varphi(y^{(i-1)})),$$ where the LHS is $P_i-P_{i-1}$, and the RHS is $\frac{1}{\alpha}(D_i-D_{i-1})$. Thus $P_i-P_{i-1}\ge \frac{1}{\alpha}(D_i-D_{i-1})$ holds for all $i>1$.
Therefore, Theorem \[sufficiency\] holds for the OTP.
Theorem \[opt\_CR\_for\_alg1\] characterizes the performance limit of Algorithm 1.
(**Best Competitive Ratio**) By the differential form of the Gronwall’s Inequality[@mitrinovic1991inequalities], if there exists a $\varphi$ that satisfies $$\varphi(y)\ge\frac{1}{\alpha}\varphi'(y),y \in [\omega,1],$$ where $\omega\in[\frac{1}{\alpha},1]$, it is bounded as follows: $$\varphi(y)\le \varphi(\omega)\exp{\bigg(\int_{\omega}^{y}\alpha dt\bigg)}, y\in[\omega,1].$$ Substituting the first boundary condition $\varphi(\omega)=L$, we have $$\varphi(y)\le L\exp{\big(\alpha(y-\omega)\big)}, y\in[\omega,1].$$ If the other boundary condition $\varphi(1)\ge U$ holds, it implies $$L\exp{\big(\alpha(1-\omega)\big)}\ge U,$$ otherwise $\varphi(1)\le L\exp{\big(\alpha(1-\omega)\big)} < U$, which incurs infeasibility. From the inequality above, we have $$\begin{aligned}
\omega \le 1-\frac{1}{\alpha}\ln \theta.\end{aligned}$$ A necessary condition for $\omega\ge \frac{1}{\alpha}$ to hold is $$\begin{aligned}
1-\frac{1}{\alpha}\ln \theta \ge \frac{1}{\alpha},\end{aligned}$$ and thus the competitive ratio $\alpha \ge \ln{\theta}+1.$
(**$\phi^*$ and Its Uniqueness**) When $\alpha$ takes the smallest possible value $\alpha^*=\ln\theta+1$, the corresponding $\phi^*$s satisfy $$\phi^*(y) = \left\{
\begin{array}{cc}
L & y \in [0,\omega],\\
\varphi^*(y) & y \in [\omega,1],
\end{array}
\right .$$ where $\omega \in [\frac{1}{\ln \theta+1},1]$ and $\varphi^*$s are given by $$\tag{7}\label{opt_diffeq}
\left\{
\begin{array}{l}
\varphi^*(y)\ge\frac{1}{\ln \theta+1}{\varphi^*}'(y),y \in [\omega,1] \\
\varphi^*(\omega)=L, \varphi^*(1)\ge U.
\end{array}
\right .$$ By the Gronwall’s inequality, we have $$\begin{aligned}
\tag{8} \label{gronwall_ineq_opt}
\varphi^*(y)&\le L\exp\big((\ln\theta+1)(y-\omega)\big) \\
&\overset{(a)}{\le} L \exp\big((\ln\theta+1)y-1\big),y\in[\omega,1],\end{aligned}$$ where ($a$) is due to $\omega\ge \frac{1}{\ln\theta+1}$. Then we have $$\begin{aligned}
\varphi^*(1)\le L\exp(\ln\theta)=L\theta=U.\end{aligned}$$ Combining with the second boundary condition $\varphi^*(1)\ge U$, we have $\varphi^*(1)= U$. Substituting it into (\[gronwall\_ineq\_opt\]), we have $$\begin{aligned}
L\exp\big((\ln\theta+1)(1-\omega)\big)\ge U, \\
1-\omega\ge \frac{\ln\theta}{\ln\theta+1},\\
\omega \le \frac{1}{\ln\theta+1}.\end{aligned}$$ Because $\omega\ge \frac{1}{\ln\theta+1}$, we have $\omega=\frac{1}{\ln\theta+1}$. Therefore, the solution space of (\[opt\_diffeq\]) is equivalent to the solution space of the following differential inequality with equality boundary conditions: $$\tag{9}\label{opt_final_diffineq}
\left\{
\begin{array}{l}
u(y)\ge\frac{1}{\ln \theta+1}{u}'(y),y \in [\frac{1}{\ln\theta+1},1] \\
u(\frac{1}{\ln\theta+1})=L, u(1)=U.
\end{array}
\right .$$ The differential equation counterpart is as follows: $$\tag{10}\label{opt_final_diffeq}
\left\{
\begin{array}{l}
v(y)=\frac{1}{\ln \theta+1}{v}'(y),y \in [\frac{1}{\ln\theta+1},1] \\
v(\frac{1}{\ln\theta+1})=L, v(1)=U.
\end{array}
\right .$$ The unique solution to (\[opt\_final\_diffeq\]) is $v(y)=\frac{L}{e}e^{(\ln\theta+1)y}$. By the Gronwall’s inequality, any feasible solution of (\[opt\_final\_diffineq\]) is bounded by $v(y)$ from above. Next, we are going to show that the solution of (\[opt\_final\_diffineq\]) is unique and is exactly $v(y)$.
Suppose $u$ is a feasible solution to (\[opt\_final\_diffineq\]) and $u(y)<v(y)$ for $y\in \mathrm{I}$, where $\mathrm{I}\subset [\frac{1}{\ln\theta+1},1]$. We know that for any $y\in [\frac{1}{\ln\theta+1},1]$, $$\begin{aligned}
v'=(\ln\theta+1)v,\end{aligned}$$ so for $y\in \mathrm{I}$, we have $$v'=(\ln\theta+1)v>(\ln\theta+1)u\ge u'.$$ Take integral of $u'$ over $[\frac{1}{\ln\theta+1},1]$, we have $$\begin{aligned}
\int_{\frac{1}{\ln\theta+1}}^{1}u'&=u\big|_{\frac{1}{\ln\theta+1}}^{1}=U-L,\end{aligned}$$ Meanwhile, it can be expressed as $$\begin{aligned}
\int_{\frac{1}{\ln\theta+1}}^{1}u'&=\int_{\mathrm{I}}u'+\int_{[\frac{1}{\ln\theta+1},1]\backslash\mathrm{I}} u'\\
&<\int_{\mathrm{I}}v'+\int_{[\frac{1}{\ln\theta+1},1]\backslash\mathrm{I}} u'\\
&=\int_{\frac{1}{\ln\theta+1}}^{1}v'=v\big|_{\frac{1}{\ln\theta+1}}^{1}=U-L,\end{aligned}$$ which gives the contradiction $U-L<U-L$. Thus $u(y)=v(y)$ for $y\in [\frac{1}{\ln\theta+1},1]$. In conclusion, the optimal $\phi^*$ achieving the competitive ratio $(\ln\theta+1)$ is unique and $$\phi^*(y) = \left\{
\begin{array}{cc}
L & y \in [0,\frac{1}{\ln\theta+1}],\\
\frac{L}{e}e^{(\ln\theta+1)y} & y \in (\frac{1}{\ln\theta+1},1].
\end{array}
\right .$$
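For reference, the optimal threshold and the generalized inverse used in (6) can be written down directly; the short sketch below (names are ours) simply re-checks the boundary conditions $\phi^*(\omega)=L$, $\phi^*(1)=U$ and $\phi^{*-1}(U)=1$ numerically.

```python
import numpy as np

def make_phi_star(L, U):
    """Optimal threshold phi* and its generalized inverse."""
    alpha = np.log(U / L) + 1.0          # optimal competitive ratio ln(theta)+1
    omega = 1.0 / alpha                  # utilization up to which phi* stays at L

    def phi(y):
        return L if y <= omega else (L / np.e) * np.exp(alpha * y)

    def phi_inv(b):                      # generalized inverse used in (6)
        return omega if b <= L else (1.0 + np.log(b / L)) / alpha

    return phi, phi_inv, alpha, omega

phi, phi_inv, alpha, omega = make_phi_star(L=1.0, U=10.0)
assert abs(phi(omega) - 1.0) < 1e-9 and abs(phi(1.0) - 10.0) < 1e-9
assert abs(phi_inv(10.0) - 1.0) < 1e-9
```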
Analysis of OKP
---------------
We highlight the differences in the analysis of the OKP. The primal feasibility holds trivially and the dual variables are constructed as follows: $$\begin{aligned}
\lambda = \lambda_N,\quad \beta_i=\begin{cases}b_i-\lambda_i & x_i = w_i\\
0 & x_i = 0\end{cases},\end{aligned}$$ where $\lambda_i = \phi(y^{(i-1)})$. When $x_i = w_i$, based on the decision-making rule (\[eq:online-decision\]), we must have $b_i \ge \phi(y^{(i-1)})$. Therefore, $\beta_i \ge 0, \forall i \in [N]$. The constraint of the dual problem is $$\begin{aligned}
\lambda + \beta_i - b_i =
\begin{cases}
\lambda - \lambda_i \ge 0& x_i = w_i\\
\lambda - b_i \ge 0 & x_i = 0.
\end{cases}\end{aligned}$$ Thus the dual feasibility holds.
Based on the online decision rule (\[eq:online-decision\]), the online algorithm accepts all of the first $k$ items, since $b_i\ge L=\phi(y^{(i-1)})$ as long as $y^{(i-1)}<\omega$; here $k$ denotes the index at which $\sum_{i=1}^k w_i = \omega$. Also note that $\lambda_i = L, \forall i\in [k]$. Then we have $$\begin{aligned}
D_k &= \lambda_k + \sum_{i=1}^k w_i \beta_i = \lambda_k + \sum_{i=1}^k w_i (b_i - \lambda_i)\\
&= L(1 - \omega) + \sum_{i=1}^k w_ib_i\\
&\le \frac{1}{\omega}(\sum_{i=1}^k w_ib_i) \le \alpha \sum_{i=1}^k w_ib_i = \alpha P_k\end{aligned}$$ Thus, there exists $k$ that satisfies the initial inequality.
With regard to the incremental inequalities, we have $$\begin{gathered}
P_i-P_{i-1} = b_i w_i\\
D_i-D_{i-1} = \phi(y^{(i)})-\phi(y^{(i-1)})+w_i(b_i-\phi(y^{(i-1)})\\
\overset{(a)}{=}w_i [\phi'(y^{(i-1)})+b_i-\phi(y^{(i-1)})]\end{gathered}$$ where $(a)$ is due to the fact $\phi'(y^{(i-1)})=\frac{\phi(y^{(i)})-\phi(y^{(i-1)})}{w_i}$ and $w_i=y^{(i)}-y^{(i-1)}$ (using the infinitesimal weight assumption). Combining the ODE (\[suffi\_diffeq\]) with the fact that $b_i\ge \phi(y^{(i-1)})$, the incremental inequality holds for $i\in [N]$. Thus Theorem $\ref{sufficiency}$ holds for the OKP. Note that the proof of Theorem \[opt\_CR\_for\_alg1\] holds generally for the two problems.
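A quick numerical sanity check of the OKP analysis is sketched below (the random instance generator and all names are ours, chosen only for illustration): infinitesimal items are fed to the threshold rule with $\phi^*$, and the result is compared against the fractional relaxation optimum, which upper-bounds OPT; the empirical ratio stays below $\ln\theta+1$.

```python
import numpy as np

rng = np.random.default_rng(0)
L, U = 1.0, 10.0
alpha = np.log(U / L) + 1.0
omega = 1.0 / alpha
phi = lambda y: L if y <= omega else (L / np.e) * np.exp(alpha * y)

# random OKP instance: infinitesimal weights, bounded value densities, total weight > 1
N = 20000
w = rng.uniform(5e-5, 1.5e-4, size=N)
b = rng.uniform(L, U, size=N)

# online threshold rule (Algorithm 1 specialized to the OKP)
y, alg = 0.0, 0.0
for bi, wi in zip(b, w):
    if bi >= phi(y) and y + wi <= 1.0:
        y += wi
        alg += bi * wi

# fractional knapsack optimum (upper bound on the integral offline OPT)
opt, cap = 0.0, 1.0
for idx in np.argsort(-b):           # greedily fill by decreasing value density
    take = min(w[idx], cap)
    opt += b[idx] * take
    cap -= take
    if cap <= 0.0:
        break

print("empirical OPT/ALG =", opt / alg, " <= ln(theta)+1 =", alpha)
```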
Lower Bounds {#lower_bounds}
============
In this section, we show that the lower bound of the OTP and that of the OKP coincide. Denote Algorithm 1 with $\phi^*$ by $\text{ALG}_{\phi^*}$. We follow the same approach as before: we first present the proofs for the OTP and then call attention to the differences for the OKP case.
Lower Bounds of OTP
-------------------
Below we find the family of the worst-case sequences under which $\text{ALG}_{\phi^*}$ incurs a ratio of $\ln\theta+1$.
\[worst-case-seq\] Given $L$ and $U$, the family of the worst-case sequences of $\text{ALG}_{\phi^*}$ in the OTP are denoted by $\{\hat{\delta}_k\}_{k\in \mathbb{N}^+}$, where $\hat{\delta}_k=\{\hat{b}_1,\dots,\hat{b}_k\}$, $\hat{b}_i\in[L,U]$ and the rates satisfy $$\hat{b}_1=L,\hat{b}_{i}=\hat{b}_{i-1}+\epsilon_{i-1}, i>1,
\lim_{k\rightarrow \infty} \hat{b}_k=U,$$ where $\epsilon_i$s are infinitesimal positive values. The amount traded by $\text{ALG}_{\phi^*}$ at the exchange rate $\hat{b}_i$ is denoted by $\hat{x}_i$ and they satisfy $$\hat{x}_1=\frac{1}{\ln\theta+1}, \hat{x}_i=\frac{\ln \hat{b}_i/\hat{b}_{i-1}}{\ln\theta+1},i>1,
\lim_{k\rightarrow \infty} \sum_{i=1}^k \hat{x}_i = 1.$$
The proof of Lemma \[worst-case-seq\] is in the Appendix.
Let ALG be any online algorithm different from $\text{ALG}_{\phi^*}.$ We show that ALG cannot achieve a competitive ratio smaller than $\ln\theta+1$ by using an adversarial argument.
Let $\hat{\delta}=\{L,L+\epsilon_1,\dots,U\}$. We first present $\hat{b}_1=L$ to ALG. If ALG exchanges $x'_1<\hat{x}_1=1/(\ln\theta+1)$, then we end the sequence. In that case, ALG cannot achieve $\ln\theta+1$, because $$\frac{\text{OPT}(\hat{\delta}_1)}{\text{ALG}(\hat{\delta}_1)}=\frac{1}{x'_1}>\ln\theta+1.$$ Thus we can assume that ALG spends an amount $x'_1\ge\hat{x}_1$; in this case, we continue and present $\hat{b}_2$ to ALG. In general, if after processing the $k$th exchange rate the total amount of dollars spent is less than $\sum_{i=1}^k \hat{x}_i$, we immediately end the sequence. Otherwise, we continue and present $\hat{b}_{k+1}$, etc.
Let $f(k)=\sum_{i=1}^k (x'_i-\hat{x}_i)$ and let $\mathbb{K}=\{k\in \mathbb{N}\,|\,f(k)<0\}$. If $\mathbb{K}$ is nonempty, denote its minimum by $j$; we then have $$\begin{aligned}
&x'_1 \ge \hat{x}_1, \\
&x'_1+x'_2 \ge \hat{x}_1+\hat{x}_2, \dotsc \\
&\sum_{i=1}^{j-1}x'_i \ge \sum_{i=1}^{j-1}\hat{x}_i, \\
&\sum_{i=1}^{j}x'_i < \sum_{i=1}^{j}\hat{x}_i.\end{aligned}$$ Thus ALG could gain more by spending exactly the same as $\text{ALG}_{\phi^*}$ at the first $(j-1)$ exchange rates and by spending $$\Tilde{x'_j}=x'_j+\sum_{i=1}^{j-1}(x'_i-\hat{x}_i)$$ at the $j$th exchange rate. Since $\Tilde{x'_j}<\hat{x}_j$, ALG cannot guarantee the competitive ratio of $\ln\theta+1$. If $f(k)\ge0$ for all $k\in \mathbb{N}^+$, we have $$\begin{aligned}
\liminf_{k\rightarrow\infty} f(k)\ge0,\quad\limsup_{k\rightarrow\infty} f(k)\ge0.\end{aligned}$$ Since ALG cannot exceed the capacity limit, we have $$\lim_{k\rightarrow\infty}\sum_{i=1}^k x'_i\le 1,$$ and we also have $\lim_{k\rightarrow \infty} \sum_{i=1}^k \hat{x}_i = 1$; therefore $$\limsup_{k\rightarrow\infty}f(k)\le0.$$ Together with $\liminf_{k\rightarrow\infty}f(k)\ge 0$, this forces $$\limsup f(k)=\liminf f(k)=0,$$ so the limit exists and $ \lim_{k\rightarrow\infty}f(k)= 0. $ By Abel summation (summation by parts), we have $$\begin{aligned}
\sum_{i=1}^k \hat{b}_i (x'_i-\hat{x}_i)&= \sum_{i=1}^{k-1}f(i)(\hat{b}_i-\hat{b}_{i+1})+f(k)\hat{b}_k \\
&\overset{(1)}{\le} f(k)\hat{b}_k,\end{aligned}$$ where (1) holds because $f(i)\ge 0$ for all $i$ and $\{\hat{b}_i\}$ is increasing, so every term $f(i)(\hat{b}_i-\hat{b}_{i+1})$ is non-positive.
Thus, the performance gap between ALG and $\text{ALG}_{\phi^*}$ for this infinite exchange rate sequence is $$\begin{aligned}
\lim_{k\rightarrow\infty}\sum_{i=1}^k \hat{b}_i (x'_i-\hat{x}_i) \le \lim_{k\rightarrow\infty}f(k)\hat{b}_k=0,\end{aligned}$$ since $\hat{b}_k\le U$ is bounded.
Therefore, any online algorithm for the OTP cannot achieve a better competitive ratio than $\text{ALG}_{\phi^*}$. The lowest possible competitive ratio is $\ln\theta+1$.
Lower Bounds of OKP
-------------------
We show that, with a slight modification, the sequences $\{\hat{\delta}_k\}_{k\in \mathbb{N}^+}$ are also worst-case sequences for the OKP.
Consider a family of value-to-weight ratio sequences $\{I_b\}$ indexed by $b\in[L,U]$. $I_b$ is composed of a collection of subsequences, the ratios in the $i$th subsequence all being $\hat{b}_i$, $i\in \mathbb{N}^+$, where $\hat{b}_i\le b$ and $\hat{b}_i$ is as given in Lemma \[worst-case-seq\]. The length of each subsequence is large enough that it can fill the capacity of the knapsack even if presented alone. Note that, given $I_b$, the resource allocation strategy is analogous to the OTP case. The offline optimal solution is to select only from the subsequence with $\hat{b}_i=b$ until reaching the capacity limit, whereas $\text{ALG}_{\phi^*}$ accepts an item as long as its value-to-weight ratio is no less than $\phi^*(y)$, where $y$ is the current capacity utilization level. Therefore $\{I_b\}_{b\in [L,U]}$ are the worst-case sequences for the OKP.
Regarding the proof of Theorem \[lower\_bound\_thm\], one can replace the worst-case sequence $\hat{\delta}$ with $I_U$, present one subsequence at a time to $\text{ALG}$ instead of a single arrival, and act adversarially in the same way in response to the decisions made by $\text{ALG}$; the results still hold.
APPENDIX {#appendix .unnumbered}
========
Denote any strictly increasing sequence of length $k$ by $\delta_k=\{b_1,\dots,b_k\}$. We can focus on strictly increasing sequences because $\text{ALG}_{\phi^*}$ trades only when the current exchange rate is the highest observed so far. Any sequence can be transformed into a strictly increasing one by keeping only the exchange rates that are higher than all of their predecessors and omitting the rest; the optimum in hindsight is not affected by this transformation. By (\[primal\_solution\_expression\]), we have $$\begin{aligned}
x_1&={\phi^*}^{-1}(b_1)
=\frac{\ln(b_1 e/L)}{\ln\theta+1},\\
x_i&={\phi^*}^{-1}(b_i)-{\phi^*}^{-1}(b_{i-1})\\
&=\frac{\ln(b_i/b_{i-1})}{\ln\theta+1},i\ge 2.\end{aligned}$$ Denote the total amount of yen $\text{ALG}_{\phi^*}$ trades for $\delta_k$ by ALG($\delta_k$) and the offline optimal one by OPT($\delta_k$). We have $\text{OPT}(\delta_k)=p_k,$ and $$\begin{aligned}
\text{ALG}(\delta_k)&=\sum_{i=1}^k b_i x_i\\
&=\frac{b_1 \ln(b_1 e/L)+\sum_{i=2}^k b_i \ln(b_i/b_{i-1})}{\ln\theta+1}.\end{aligned}$$ Let $r_k(b_1,\dots,b_k)=\frac{b_1 \ln(b_1 e/L)+\sum_{i=2}^k b_i \ln(b_i/b_{i-1})}{b_k}$. So the competitive ratio for $\text{ALG}_{\phi^*}$ can be expressed as $$\begin{aligned}
\underset{\{b_1,\dots,b_k,k\}}{\max}\frac{\text{OPT}(\delta_k)}{\text{ALG}(\delta_k)}
&= \frac{\ln\theta+1}{\underset{\{b_1,\dots,b_k,k\}}{\min}r_k(b_1,\dots,b_k)}.\end{aligned}$$ Because $\text{ALG}_{\phi^*}$ can achieve $\ln\theta+1$ with $\phi^*$ by Theorem \[opt\_CR\_for\_alg1\], we know that $\underset{\{b_1,\dots,b_k,k\}}{\min}r_k=1$. Next, we look for $\{b_1,\dots,b_k\} $ that minimize $r_k(b_1,\dots,b_k)$ for each $k$. When $k=1$, $r_1(b_1)=\ln(b_1e/L)\ge 1$, so $\hat{\delta}_1=\{L\}$, $\hat{x}_1=\frac{1}{\ln\theta+1}$ and $\frac{\text{OPT}(\hat{\delta}_1)}{\text{ALG}(\hat{\delta}_1)}=\ln\theta+1$. When $k=2$, $$\begin{aligned}
r_2(b_1,b_2)=\frac{b_1 \ln(b_1 e/L)+b_2 \ln(b_2/b_{1})}{b_2}.\end{aligned}$$ The first order derivatives are $$\begin{aligned}
\frac{\partial r_2}{\partial b_1}=\frac{\ln (b_1 e/L)+1-b_2/b_1}{b_2} \quad \frac{\partial r_2}{\partial b_2}=\frac{b_2-b_1\ln(b_1 e/L)}{{b_2}^2}\end{aligned}$$
We notice that $\partial r_2/\partial b_1$ and $\partial r_2/\partial b_2$ cannot be zero simultaneously. This means that $r_2$ has no critical points, and the minimum value of $r_2$ on $[L,U]\times[L,U]$ must be attained on the boundary. It turns out that $r_2$ attains its minimum at $(b_1,b_2)=(L,L)$. Since we require $b_2>b_1$, we look for a point close to $(L,L)$ at which the value of $r_2$ increases as little as possible. Notice that $\partial r_2/\partial b_1\big|_{(L,L)}>0$ and $\partial r_2/\partial b_2\big|_{(L,L)}=0$, so increasing $b_2$ to $b_2+\epsilon$ with infinitesimal positive $\epsilon$ increases $r_2$ the least. So $\hat{\delta}_2=\{L,L+\epsilon\}$ and $\frac{\text{OPT}(\hat{\delta}_2)}{\text{ALG}(\hat{\delta}_2)}\rightarrow(\ln\theta+1)^-$ as $\epsilon\rightarrow 0^+$. For general $k\ge 3$, $$\begin{aligned}
r_k(b_1,\dots,b_k)=\frac{b_1\ln(b_1 e/L)+\sum_{i=2}^k b_i\ln(b_i/b_{i-1})}{b_k}\end{aligned}$$ The first order derivatives are: $$\begin{aligned}
\frac{\partial r_k}{\partial b_1}=\frac{\ln (b_1 e/L)+1-b_2/b_1}{b_2}\end{aligned}$$ $$\begin{aligned}
\frac{\partial r_k}{\partial b_k}=\frac{b_k-b_{k-1}r_{k-1}(b_1,\dots,b_{k-1})}{{b_k}^{2}}\end{aligned}$$ $$\begin{aligned}
\frac{\partial r_k}{\partial b_i}=\frac{\ln(b_i/b_{i-1})+1-b_{i+1}/b_i}{b_k}, i=2,\dots,k-1.\end{aligned}$$ Notice that, as in the case $k=2$, $\partial r_k/\partial b_k$ and $\partial r_k/\partial b_{k-1}$ cannot be zero at the same time, and $r_k$ attains its minimum when $b_i=L$ for all $i\in [k]$. The strictly increasing sequence closest to this minimum point is $
\{L,L+\epsilon_1,\dots,L+\sum_{i=1}^{k-1}\epsilon_i\}$, where the $\epsilon_i$ are infinitesimal positive values, and we have $\frac{\text{OPT}(\hat{\delta}_k)}{\text{ALG}(\hat{\delta}_k)}\rightarrow(\ln\theta+1)^-$ as $\sum_{i=1}^{k-1} \epsilon_i\rightarrow 0^+.$ Moreover, each $\hat{\delta}_k$ is a prefix of $\hat{\delta}_m$ for $m\ge k.$ From these observations, we claim that as long as the exchange rate sequence increases slowly enough from $L$, it is a worst-case sequence for $\text{ALG}_{\phi^*}$.
To verify the claim, let $\hat{b}_k=L+\sum_{i=1}^{k-1}\epsilon_i$; we have $$\begin{aligned}
\text{ALG}(\hat{\delta}_k)&=\sum_{i=1}^k \hat{b}_i x_i \\
&= \frac{L}{\ln\theta+1} + \sum_{i=2}^k \hat{b}_i \bigg(\frac{\ln(\hat{b}_i)-\ln(\hat{b}_{i-1})}{\ln\theta+1}\bigg) \\
&= \frac{L}{\ln\theta+1}+\int_{L}^{\hat{b}_k}\gamma\cdot\frac{d\ln(\gamma)}{\ln\theta+1}
= \frac{\hat{b}_k}{\ln\theta+1},\end{aligned}$$ and thus $\frac{\text{OPT}(\hat{\delta}_k)}{\text{ALG}(\hat{\delta}_k)}=\underset{\{b_1,\dots,b_k,k\}}{\max}\frac{\text{OPT}(\delta_k)}{\text{ALG}(\delta_k)}=\ln\theta+1.$ Since the exchange rates are increasing and upper bounded by $U$, the sequence $\{\hat{b}_k\}$ converges; in the worst-case family the $\epsilon_i$ are chosen so that $$\lim_{k\rightarrow\infty}\hat{b}_k = U,$$ and thus $\lim_{k\rightarrow \infty} \sum_{i=1}^k \hat{x}_i = 1$.
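As a sanity check of the computation above, the following short Python snippet (ours, purely illustrative) evaluates the amounts $\hat{x}_i$ prescribed by Lemma \[worst-case-seq\] on a finely discretized increasing sequence from $L$ to $U$ and confirms numerically that $\sum_i\hat{x}_i\to 1$ and $\text{ALG}(\hat{\delta}_k)\to \hat{b}_k/(\ln\theta+1)$.

```python
import math

# Numerical check of ALG(delta_hat_k) = b_hat_k / (ln(theta) + 1)
# for a slowly increasing rate sequence from L towards U (equal epsilons).
L, U = 1.0, 20.0
c = math.log(U / L) + 1.0

k = 20000
rates = [L + (U - L) * i / (k - 1) for i in range(k)]      # b_hat_1 = L, ..., b_hat_k ~ U

# amounts prescribed by the lemma: x_1 = 1/c, x_i = ln(b_i / b_{i-1}) / c
amounts = [1.0 / c] + [math.log(rates[i] / rates[i - 1]) / c for i in range(1, k)]

alg = sum(b * x for b, x in zip(rates, amounts))
print("sum of traded amounts  :", sum(amounts))            # -> approaches 1
print("ALG                    :", alg)
print("b_hat_k / (ln(theta)+1):", rates[-1] / c)            # -> same value
print("OPT / ALG              :", rates[-1] / alg, " vs ", c)
```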
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'It is shown that there exist two inner automorphisms which lead to different forms of the systems of equations of an integrable hierarchy. We present the discrete and Backlund transformations connected with such systems and a general formula for multi-soliton solutions based on these symmetries.'
author:
- |
A.N. Leznov\
Universidad Autonoma del Estado de Morelos,\
CIICAp,Cuernavaca, Mexico
title: |
$\sigma_1$ and $\sigma_2$ automorphisms of the systems of equations of integrable hierarchies.\
Discrete and Backlund transformations
---
Introduction
============
In a series of papers by the author starting in the 1980s [@i], a simple method for constructing integrable systems together with their soliton-like solutions was proposed. This method requires only two calculational steps – solving a system of linear algebraic equations and performing a Gaussian decomposition of a polynomial into a product determined by its roots. This led to a nonlinear symmetry of integrable systems termed a discrete substitution or an integrable mapping. A discrete substitution is a nonlinear transformation (with no additional parameters) that, given an arbitrary solution of an integrable system, produces a new solution according to certain rules.
All systems of equations invariant with respect to such a mapping are united into a hierarchy of integrable systems that bears the name of the corresponding discrete substitution. This substitution is a canonical transformation [@7],[@8] which explains successful applications of methods of canonical transformation theory to integrable systems [@FT].
Let us assume that an integrable system has some inner automorphism $\sigma$. This implies that there is a solution that is also invariant under $\sigma$. Usually, such solutions are interesting for applications. In general, a discrete transformation will not commute with $\sigma$. However, it is possible, starting from a non-invariant solution, to produce a “good”, invariant solution after several applications of the discrete transformation (the implementation of this scheme is outlined in Refs. [@10] and [@11]).
On the other hand, it is well known that Backlund transformations contain additional parameters and therefore allow one to increase the number of parameters in a solution after each application. Because the method of discrete substitution allowed n-soliton solutions to be obtained directly and explicitly, the author previously had no interest in Backlund transformations from the point of view of obtaining soliton solutions. But Backlund transformations have a wider significance, and we would like to close this gap in the present paper.
The goal of the present paper is to show how to construct Backlund transformations for integrable systems using the methods of [@10],[@11]. We restrict ourselves to the well-known example of the nonlinear Schrödinger equation to ease the demonstration of the main idea and of the methods needed to solve this problem. The extension to the general case is obvious and will be clear after considering this single example.
Discrete transformation
=======================
All considerations of the present paper are given for the simplest example, the nonlinear Schrödinger equation. However, it will become clear from our arguments, the detailed calculations below, and the references that our method is applicable to all integrable systems with soliton-like solutions.
The nonlinear Schrödinger equation can be represented in two forms: the classical one for a single complex-valued unknown function $\psi(x,t)$, $$-2i\psi_t+\psi_{xx}+2(\psi\bar\psi)\psi=0$$ \[CNL\] and as a wider system of two equations for two unknown complex-valued functions $u,v$, $$-2i u_t+u_{xx}+2(uv)u=0,\quad 2iv_t+v_{xx}+2(uv)v=0$$ \[SCNL\] (The factor of 2 multiplying the time derivatives can of course be eliminated by redefining the time variable.) The last system is obviously invariant under the inner automorphism $\sigma_1$: $u\to
\bar v,v\to \bar u$, and thus it has solutions invariant with respect to this change of variables. Such solutions satisfy $u=\bar v=\psi$ and are therefore also solutions of (\[CNL\]).
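As an independent check of the sign convention used in (\[CNL\]) (this verification is ours; the assumed one-soliton profile is the standard sech shape), the following Python/SymPy snippet confirms symbolically that $\psi=\eta\,\mathrm{sech}(\eta x)\,e^{-i\eta^2 t/2}$ satisfies $-2i\psi_t+\psi_{xx}+2|\psi|^2\psi=0$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
eta = sp.symbols('eta', positive=True)

# candidate one-soliton profile for the convention -2i psi_t + psi_xx + 2|psi|^2 psi = 0
psi = eta * sp.sech(eta * x) * sp.exp(-sp.I * eta**2 * t / 2)
abs_psi_sq = eta**2 * sp.sech(eta * x)**2        # |psi|^2, since the phase has modulus one

residual = -2 * sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) + 2 * abs_psi_sq * psi
print(sp.simplify(residual.rewrite(sp.exp)))     # -> 0, so the profile solves the equation
```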
The system (\[SCNL\]) is invariant with respect to the following nonlinear invertible transformation $$U={1\over v},\quad V=v\big(uv-(\ln v)_{xx}\big),\qquad v={1\over U},\quad u=U\big(UV-(\ln U)_{xx}\big)$$ \[DM\] which is referred to as a discrete transformation, discrete substitution or an integrable mapping.
Transformation (\[DM\]) takes a given solution $(u,v)$ of the system (\[SCNL\]) to a new one $(U,V)$. But this transformation is not invariant with respect to $\sigma_1$. Thus, if the initial solution has a property $(u=\bar v)$, the new solution $(U,V)$ will not have this property. Now we make the following very important observation. If we apply the direct discrete transformation $m$ times to a solution of (\[CNL\]) $(u=\bar v)$ with the result $U^{+m},V^{+m}$ and apply the inverse transformation to the same solution $m$ times with the result $u^{-m},v^{-m}$, the resulting new solutions are related as follows: $U^{+m}=\bar
u^{-m},V^{+m}=\bar v^{-m}$. Thus, if we start from an obvious linear solution of the system (\[SCNL\]), $u_0=0$, $2i{v_0}_t+{v_0}_{xx}=0$, and after $2m$ steps of the direct discrete transformation obtain a solution of $2i{u_{2m}}_t+{u_{2m}}_{xx}=0$, $v_{2m}=0$ with $u_{2m}=\bar v_0$, we are guaranteed that at the $m$th step we will obtain an $m$-soliton solution of (\[CNL\]) – the traditional nonlinear Schrödinger equation. This approach together with the corresponding mathematical formalism is described in detail in [@10],[@11].
Backlund transformation
=======================
Now we would like to find not the symmetry of the enlarged system (\[SCNL\]) but, independently, the symmetry of (\[CNL\]). This symmetry transformation can contain additional numerical parameters, and after each application of it we increase the number of parameters in the solution. This is exactly Backlund's original idea. However, one has to keep in mind that, applied to the general solution of the equation, this transformation cannot add any new independent parameters, but can only change the initial functions on which the general solution depends.
Consider a solution of the classical nonlinear Schrödinger equation $\psi=u=\bar v$. This equation may be written in Lax pair form $$g_xg^{-1}=i\pmatrix{\lambda & u \cr v & -\lambda \cr},\qquad g_tg^{-1}=i\pmatrix{\lambda^2 -{uv\over 2} & \lambda u-i{u_x\over 2} \cr \lambda v+i{v_x\over 2} & -\lambda^2+{uv\over 2} \cr}$$ \[LA\] Now let us use the formalism of [@11] and try to find a new solution of the classical nonlinear Schrödinger equation in the form $$G=\pmatrix{\lambda+A & B \cr C & \lambda+D \cr}g$$ \[IMP\] where the functions $A,B,C,D$ are determined by imposing the condition that the matrix $G$ has a linear dependence between its columns (and rows) with constant coefficients $(c^i_1,c^i_2)$ at two points $\lambda_{1,2}$ of the complex $\lambda$ plane. In the Appendix we show that this condition is not independent but follows directly from (\[IMP\]).
From this condition we immediately obtain $\det P=(\lambda-\lambda_1)(\lambda-\lambda_2)$ and a linear system of equations relating the functions $A,B,C,D$ to the matrix elements of $g$, the parameters $\lambda_i$, and the vectors $c^i$: $$(\lambda_1+A)(gc)^1_1+B(gc)^1_2=0,\quad (\lambda_2+A)(gc)^2_1+B(gc)^2_2=0$$ \[LS1\] $$C(gc)^1_1+(\lambda_1+D)(gc)^1_2=0,\quad C(gc)^2_1+(\lambda_2+D)(gc)^2_2=0$$ \[LS2\] In particular, we have $$(\lambda_1-\lambda_2)+B\left({(gc)^1_2\over (gc)^1_1}-{(gc)^2_2\over (gc)^2_1}\right)=0,\quad (\lambda_1-\lambda_2)+C\left({(gc)^1_1\over (gc)^1_2}-{(gc)^2_1\over (gc)^2_2}\right)=0$$ \[ESP\] Now let us obtain a new L-A pair. For the matrix element $(G_xG^{-1})_{11}$ we have $$(G_xG^{-1})_{11}={\det(\cdots)\over \det P}=i\lambda,$$ where we used the fact that the determinants in the numerator and denominator have zeros at the same points. Similarly, we obtain $$(G_xG^{-1})_{12}={\det(\cdots)\over \det P}=i(u-2B)$$ and finally $$G_xG^{-1}=i\pmatrix{\lambda & u-2B \cr v+2C & -\lambda \cr}$$ \[!!\]
If we did not demand the condition of complex conjugation $u=v^*$ to hold, this would exactly be a Backlund transformation for the enlarged nonlinear system (\[SCNL\]).
But if we start from a solution of the nonlinear Schrödinger equation (\[CNL\]) with $u=v^*$ and would like to obtain a solution of the same equation, we have to require, according to the last expression, that $\bar C=-B$.
To this end, it is necessary to use the known explicit expressions (\[ESP\]). From the L-A representation (\[LA\]) it follows that $g$ can be considered a unitary matrix, $gg^H=1$, under the assumption that $\lambda$ is real.
This means that $g^{-1}=g^H$, or in matrix form (in what follows $\bar f\equiv f^*$), $$g(\lambda)_{22}=g^*(\lambda)_{11},\quad g(\lambda)_{12}=-g^*(\lambda)_{21}.$$ Taking into account that the matrix elements of $g$ are analytic functions of $\lambda$ (as solutions of a differential equation with coefficients analytic in $\lambda$), we conclude that $$(g(\lambda)_{22})^*=g(\lambda^*)_{11},\quad (g(\lambda)_{11})^*= g(\lambda^*)_{22},$$ $$(g(\lambda)_{21})^*=-g(\lambda^*)_{12},\quad (g(\lambda)_{12})^*=-g(\lambda^*)_{21}.$$ Now we can evaluate $\bar C$. We have $$F(\lambda_1,\alpha_1)\equiv{(gc)^1_1\over (gc)^1_2}={g(\lambda_1)_{11}c^1_1+g(\lambda_1)_{12}c^1_2\over g(\lambda_1)_{21}c^1_1+g(\lambda_1)_{22}c^1_2}={g(\lambda_1)_{11}+g(\lambda_1)_{12}\alpha_1\over g(\lambda_1)_{21}+g(\lambda_1)_{22}\alpha_1}$$ \[MIDL\] where $\alpha_1={c^1_2\over c^1_1}$. According to the above formulae for complex conjugation, we have $$(F(\lambda_1,\alpha_1))^*\equiv\Big({(gc)^1_1\over (gc)^1_2}\Big)^*={g(\lambda^*_1)_{22}-g(\lambda^*_1)_{21} \alpha^*_1\over -g(\lambda^*_1)_{12}+g(\lambda^*_1)_{11}\alpha^*_1}=
-{1\over F(\lambda_1^*,-{1\over \alpha^*_1})}.$$ For further manipulations let us rewrite (\[ESP\]) in terms of the functions $F_1,F_2$ introduced above: $$(\lambda_1-\lambda_2)+C\big(F(\lambda_1,\alpha_1)-F(\lambda_2,\alpha_2)\big)=0,\quad(\lambda_1-\lambda_2)+B\left({1\over F(\lambda_1,\alpha_1)}-{1\over F(\lambda_2,\alpha_2)}\right)=0$$ \[MIDL1\] The condition $B=-C^*$ together with the conjugation properties of the functions $F$ leads to $$\lambda_1=\lambda^*_2,\qquad\alpha^*_1=-{1\over \alpha_2}$$ \[BEK!\] Thus, if for some solution of (\[CNL\]) the corresponding element $g$ is known, a new solution of the same equation, differing from the initial one by a pair of complex parameters, can be constructed following the rules of the present section (but keep in mind the comments on this subject at the beginning of this section).
Multisoliton solutions
======================
Now let us apply the transformation of the previous section $n$ times to a certain solution of the nonlinear Schrödinger equation (\[CNL\]). Each transformation is defined by two complex parameters $\lambda_i,\alpha_i$. After $n$ applications, we have for the corresponding matrix $G_n$ $$G_n=\pmatrix{\lambda+A & B \cr
C & \lambda +D \cr}G_{n-1}=
\pmatrix{\tilde P^n_{11} & P^{n-1}_{12} \cr
P^{n-1}_{21} & \tilde P^{n}_{22} \cr}g_0$$ Here the notation $\tilde P$ means that the coefficient of the highest degree of the corresponding polynomial is equal to one. $G_n$ has zero vectors at the $2n$ points $\lambda_i,\lambda_i^*$ of the complex $\lambda$ plane, with proportionality coefficients $\alpha_i,-{1\over \alpha_i^*}$. For this reason all coefficients of the polynomials entering $G_n$ can be obtained from the linear system of equations $$\lambda_i^n+\sum_{k=0}^{n-1} P^k_{11}\lambda_i^k +\sum_{k=0}^{n-1} F(\lambda_i,\alpha_i)P^k_{12} \lambda_i^k=0$$ $$\sum_{k=0}^{n-1} F^{-1}(\lambda_i,\alpha_i)P^k_{21} \lambda_i^k +\lambda_i^n+\sum_{k=0}^{n-1} P^k_{22}\lambda_i^k=0$$ where $P^k_{ij}$ are the coefficients of $\lambda^k$ in the corresponding polynomial. By Cramer's rule, the coefficients of interest for further considerations are $$P^{n-1}_{12}=-{\det_{2n} (\lambda_i^{n-1},\dots,\lambda_i,1;\;\lambda_i^n;\;F_i\lambda_i^{n-2},\dots,F_i\lambda_i,F_i)\over\det_{2n}(\lambda_i^{n-1},\dots,\lambda_i,1;\;F_i\lambda_i^{n-1},\dots,F_i\lambda_i,F_i)}$$ \[FIN\] $$P^{n-1}_{21}=-{\det_{2n}(\lambda_i^n;\;F^{-1}_i\lambda_i^{n-2},\dots,F^{-1}_i\lambda_i,F^{-1}_i;\;\lambda_i^{n-1},\dots,\lambda_i,1)\over\det_{2n}(F^{-1}_i\lambda_i^{n-1},\dots,F^{-1}_i\lambda_i,F^{-1}_i;\;\lambda_i^{n-1},\dots,\lambda_i,1)}$$ \[FINI\] In the last formulae we wrote symbolically the structure of each of the $2n$ rows of the corresponding determinant matrices. In the case of a diagonal initial matrix $g_0=\exp i(\lambda x+\lambda^2 t) h$, formula (\[FIN\]) was presented in [@10] without any connection to the Backlund transformation of the present paper.
Using exactly the same technique as in the previous section we obtain $$(G_n)_xG_n^{-1}=i\pmatrix{\lambda & u-2P^{n-1}_{12} \cr v+2P^{n-1}_{21} & -\lambda \cr}$$ \[!!!\] Thus, after $n$ steps of the Backlund transformation, each of which is defined by the parameters $(\lambda_i,\lambda_i^*, \alpha_i, -{1\over \alpha_i^*})$, we get a new solution of the nonlinear Schrödinger equation $$U=u-2P^{n-1}_{12},\quad V=v+2P^{n-1}_{21}$$ From the explicit expressions (\[FIN\]) and (\[FINI\]) it follows that all Backlund transformations commute (the functions $P^{n-1}_{12},P^{n-1}_{21}$ are symmetric under permutations of the pairs $(\lambda_i,\alpha_i)$).
If we choose the initial solution in a “zero” form $u=v^*=0,\quad g=\exp i2(\lambda x+\lambda^2 t) h$, we obtain an n-soliton solution in an explicit form. This solution of course coincides with the one obtained previously in [@10],[@11] using the method of discrete transformation and repeated in the previous section.
The result of [@10] corresponds to the decomposition of the determinant of order $2n$ into a sum of products of the corresponding minors of order $n$.
Backlund transformation in its original form
============================================
As follows from the material of the previous section, the Backlund transformation, in contrast to the discrete one, is not local. Indeed, to obtain a new solution $U=V^*$ from a solution $u=v^*$, we have to solve the L-A system (which is a system of linear differential equations), obtain the element $g$, and use its matrix elements to construct the new solution.
This fact is related to Backlund's original result, which gives relations between the derivatives of the new and old solutions. We now show how such relations can be obtained from the formalism of the previous section.
Using the definition of $F(\lambda_i,\alpha_i)$ and taking into account the equations of the L-A pair (\[LA\]), we obtain $$F^s_x\equiv (F(\lambda_s,\alpha)_s)_x=i(v-2\lambda_s F^s-u(F^s)^2),$$ $$F^s_t=i((\lambda_s v+i{v_x\over 2})-2(\lambda_s^2-{uv\over 2})F^s-(\lambda_su-i{u_x\over 2})(F^s)^2)$$ Calculating $F^1,F^2$ via $2B=u-U,\ 2C=V-v$ from (\[MIDL1\]) and substituting the result into the system of derivatives above (with respect to the $x$ or $t$ arguments), we arrive at a system of two ordinary differential equations of the first order connecting $u,v$ and $U,V$ and containing the parameters $\lambda_1=\lambda^*_2$ in explicit form. Two additional parameters $\alpha_1,\ \alpha_2=-{1\over \alpha_1^*}$ arise in the process of integrating the last system, in which $u=v^*$ is considered known and $U=V^*$ unknown, or vice versa.
Second automorphism $\sigma_2$ of nonlinear Schrodinger system
==============================================================
Let us perform the change of variables $v=e^{i\theta}, u=e^{-i\theta}(R-{i\theta_{xx}\over 2})$ in (\[SCNL\]), where $R,\theta$ are two new unknown complex functions. The sense of this substitution will be clarified a little later. Instead of (\[SCNL\]) we have $$2\theta_t+(\theta_x)^2=2R,\qquad 2R_t-{1\over 2}\theta_{xxxx}-2(\theta_x R)_x=0$$ \[SCHL1\] Of course the last system is invariant under the obvious exchange $R\to R^*,\theta\to
\theta^*$, which we call the second inner automorphism $\sigma_2$ of the nonlinear Schrödinger system.
The system (\[SCHL1\]), after excluding $R$, is equivalent to the single equation $$\theta_{tt}-{1\over 4}\theta_{xxxx}+\left({3\over 2}\theta_x^2+\theta_t\right) \theta_{xx}=0$$ \[s2\] The author cannot say much about the system (\[SCHL1\]) or equation (\[s2\]), because it seems he has never encountered them in such a form in the literature.
Method of discrete substitution
-------------------------------
The nature of the $\sigma_2$ automorphism consists in the fact that if the discrete transformation (\[DM\]) is interrupted at the $(2n+1)$th step (and not at the $2n$th one, as it was in the case of $\sigma_1$), it leads to (\[s2\]). Indeed, in this case in the middle of the lattice there arise two solutions ($u_n={D_{n-1}\over D_n}, v_n={D_{n+1}\over D_n}$) and ($u_{n+1}={D_n\over D_{n+1}}, v_{n+1}={D_{n+2}\over D_{n+1}}$), which are connected by the conditions $v_n^*=u_{n+1}={1\over v_n},\quad u_n^*=v_{n+1}=v_n(u_nv_n+(\ln v_n)_{xx})$, which lead to (\[SCHL1\]) and (\[s2\]).
Using the technique of the discrete transformation, the $n$-soliton solution of equation (\[s2\]) and of the system (\[SCHL1\]) may be represented in terms of the following notation: $$F=\sum_{k=1}^{2n+1} c_k e^{i2L_k},\quad f=\sum_{k=1}^{2n+1} {1\over c_k} e^{-iL_k}{1\over \prod'
(\lambda_k-\lambda_j)^2}$$ where $L_i=\lambda_i^2t+\lambda_ix$ and $\lambda_i,c_i$ are $(2n+1)$ numerical parameters connected via the equation $F^*=f$. From the last condition there follows a limitation on the parameters of the problem: $2s+1$ parameters $\lambda_{\beta}^*=\lambda_{\beta}$ are real, and the remaining $2(n-s)$ come in complex conjugated pairs $\lambda_A^*=\lambda_B$ ($1 \leq A,B\leq (n-s)$). In all cases $c_ic_i^*=\prod'(\lambda_i-\lambda_j)^2$; this number is positive for any choice of the $\lambda$ parameters above. Finally $$\theta=i\ln {D_n\over D_{n+1}},\quad R=(\ln(D_nD_n^*))_{xx}$$ where $D_s$ is the determinant of order $s$ of the matrix (typical for discrete-transformation calculations; a rough numerical sketch of this construction is given after the matrix below):
$$\pmatrix{ F & F_x & F_{xx} &....\cr
F_x & F_{xx} & F_{xxx} &...\cr
F_{xx} & F_{xxx} & F_{xxxx} &...\cr}$$
Method of the Backlund transformation
-------------------------------------
It is necessary to repeat word for word all the calculations of Section 3 up to (\[!!\]), keeping in mind that in all formulae connecting $g$ and $(u,v)$ we now have $v=e^{i\theta},u=e^{-i\theta}(R-{i\over 2}\theta_{xx})$.
The “one”-soliton solution, the simplest solution of (\[SCHL1\]), is the following: $(\theta=-2(\lambda_0^2t+\lambda_0x)\equiv -2L_0,\ R=0)$. It is obvious that the corresponding group element $g$ belongs to the group of lower triangular matrices and may be represented in the following form: $$g(x,t:\lambda)=e^{\alpha X_-}e^{\tau H}$$ Taking into account the equations of the L-A pair (\[LA\]) we obtain in consequence $$g_xg^{-1}=(\alpha_x+2\tau_x\alpha)X_-+\tau_xH=i(\lambda H+e^{-2iL_0}X_-)$$ $$g_tg^{-1}=(\alpha_t+2\tau_t\alpha)X_-+\tau_tH=i(\lambda^2 H+e^{-2iL_0}(\lambda+\lambda_0)X_-)$$ The solution of the last equations is trivial, with the final result $$\alpha={1\over 2}{e^{-i2L_0}\over \lambda-\lambda_0},\quad\tau=i(\lambda^2t+\lambda x)$$
In the general case let us represent the element $g$ in the usual form for the $SL(2,C)$ group, $$g=e^{\alpha X_+}e^{\tau H} e^{\beta X_-}=\pmatrix{e^{\tau}+\alpha\beta e^{-\tau} & \alpha e^{-\tau} \cr \beta e^{-\tau} & e^{-\tau} \cr}$$ \[PAR\] $$g'g^{-1}=(\alpha'-2\tau'\alpha-\beta'e^{-2\tau}) X_++(\tau'+\alpha\beta'e^{-2\tau})H+ \beta'e^{-2\tau}X_-$$ \[PAR1\] Taking into account the equations of the L-A pair (\[LA\]) we obtain $$i\lambda=(\tau_x+\beta_x\alpha e^{-2\tau}),\quad iv=\beta_x e^{-2\tau}$$ From the last relation we conclude that $\beta_x e^{-2\tau}$ does not depend on $\lambda$, and $$\alpha={i\lambda-\tau_x \over \beta_x e^{-2\tau}}$$ Now let us rewrite the equation (\[ESP\]) defining $C$, substituting (\[PAR\]) into it (to avoid confusion we rename the parameters $\alpha_{1,2}$ of (\[MIDL\]) as $\nu_{1,2}$): $${1\over C}+{1\over \lambda_1-\lambda_2}\left({e^{2\tau_1}\over \beta_1+\nu_1}+\alpha_1- {e^{2\tau_2}\over \beta_2+\nu_2}-\alpha_2\right)=0$$ \[!!!\] The new solution $V=v+2C$ (the result of the transformation (\[!!\])) and the condition of its invariance with respect to $\sigma_2$, $VV^*=1$, may be rewritten (since $vv^*=1$) as $${v\over C}+{v^*\over C^*}=-2$$ \[CC\] or, using (\[!!!\]) together with $iv=\beta_x e^{-2\tau}$, $${v\over C}+{\beta_xe^{-2\tau}\over i(\lambda_1-\lambda_2)}({e^{2\tau_1}\over \beta_1+\nu_1}+\alpha_1-{e^{2\tau_2}\over \beta_2+\nu_2}-\alpha_2)=0$$ Besides this, from (\[CC\]) (with $v^*={1\over v}$) we also obtain $$v=C(-1+i\sqrt {{1\over CC^*}-1}),\quad V=C(1+i\sqrt {{1\over CC^*}-1})$$
Keeping in mind all the expressions and comments obtained above, we rewrite the last equality in the form $${v\over C}+1+{1\over i(\lambda_1-\lambda_2)}\Big[{\beta^1_x\over \beta^1+\nu_1}-\tau^1_x-
{\beta^2_x\over \beta^2+\nu_2}+\tau^2_x\Big]=0,$$ $${v\over C}+1+{1\over i(\lambda_1-\lambda_2)}
\Big[\ln {(\beta^1+\nu_1)e^{-\tau_1}\over (\beta ^2+\nu_2)e^{-\tau_2}}\Big]_x=0$$ Summing this with its complex conjugate and not forgetting about (\[CC\]), we come to the equation $${1\over i(\lambda_1-\lambda_2)}\Big[\ln {(\beta^1+\nu_1)e^{-\tau_1}\over (\beta^2+\nu_2)e^{-\tau_2}}\Big]_x-{1\over i(\lambda_1^*-\lambda_2^*)}\Big[\ln {(\beta^1+\nu_1)e^{-\tau_1}\over (\beta^2+\nu_2)e^{-\tau_2}}\Big]^*_x=0$$ \[VIC\] To find the relations connecting the four so-far arbitrary numerical parameters $\lambda_{1,2},\nu_{1,2}$, it is necessary to take into account the second conjugation condition dictated by the automorphism $\sigma_2$, which allows one to find the connection between the $\beta$ and $\beta^*$ functions.
Comparing (\[PAR1\]) and the L-A pair representation (\[LA\]), we obtain $$(\alpha'-2\tau'\alpha-\beta'e^{-2\tau})=i u\equiv i {1\over v} uv=i{1\over v}(r-{1\over 2}(\ln v)_{xx})$$ Substituting all the relations obtained above, we have $$-r+{i\over 2}\theta_{xx}=\lambda^2-\tau_{x,x}+(\tau_x)^2 +(\lambda+i\tau_x)\theta_x$$ In the L-A pair formalism $\lambda$ has to be considered a real parameter. Thus the imaginary part of the last equation has the form $$\theta_{xx}={1\over i}(\tau^*-\tau)_{xx}+(\tau^*+\tau)_x\theta_x+(\tau_x)^2-(\tau^*_x)^2$$ The last equation has the obvious first integral $$\theta_x-{1\over i}(\tau^*-\tau)_x=c e^{(\tau^*+\tau)}$$ Recalling that $ie^{i\theta}=\beta_x e^{-2\tau}$ and substituting this into the last equation, we transform it to the form $${1\over i}(\ln \beta_x e^{-(\tau^*+\tau)})_x=c_1 e^{(\tau^*+\tau)}\quad
{1\over i}(\beta_x e^{-(\tau^*+\tau)})_x=c_1 \beta_x$$ A second integration leads to the final result $${1\over i}\beta_x e^{-(\tau^*+\tau)}=e^{i\theta}e^{-(\tau^*-\tau)}=c_1\beta+c_0$$ \[EXTRA\] We draw the reader's attention to the fact that in (\[EXTRA\]) the parameters $c_1,c_0$ do not depend on the $(x,t)$ variables, but only on the real parameter $\lambda$. Inverting (\[EXTRA\]) we obtain $\beta$ in a form in which its structure with respect to complex conjugation becomes clear: $$\beta+\nu={1\over c_1}e^{i\theta}e^{-(\tau^*-\tau)}+\nu-{c_0\over c_1}$$ Putting the indices $k=1,2$ for $\lambda$ in the last expression, we come to the elements entering (\[VIC\]): $$\beta_k+\nu_k={1\over c_1^k}e^{i\theta}e^{-(\tau^*_k-\tau_k)}+\tilde \nu_k$$ where $\tilde \nu_k=\nu_k-{c_0^k\over c_1^k}$, and $$\beta_k^*+\nu^*_k={1\over (c_1^k)^*}e^{-i\theta}e^{(\tau^*_k-\tau_k)}+\tilde \nu_k^*$$ After substituting the last expressions into (\[VIC\]) and performing some simple manipulations, we arrive at two possibilities: the first is $\lambda_1=\lambda_1^*,\ \lambda_2=\lambda_2^*,\ \tilde \nu_1\tilde \nu_1^*={1\over c_1^1(c_1^1)^*},\ \tilde \nu_2\tilde \nu_2^*={1\over c_1^2(c_1^2)^*}$, and the second is $\lambda_1=\lambda_2^*,\ \tilde \nu_1\tilde \nu_2^*={1\over c_1^1(c_1^2)^*}$. Of course, in the case of multisoliton solutions this result coincides with the one obtained above by the method of discrete substitution.
Outlook
=======
First, we would like to explain why, in spite of our comments in the Introduction, Backlund transformations are interesting objects for investigation. Backlund transformations, as well as discrete ones, are canonical transformations. This means that under these transformations the densities of conserved quantities change by total derivatives. In the case considered above this applies in particular to the energy density. Therefore, if the energy density of the initial solution is $e$, after the application of the Backlund transformation it will be $e_B=e+\Delta_x$. The term $\Delta$ contains all the parameters $\lambda_1=\lambda^*_2,\alpha_1=-{1\over \alpha_2^*}$ which define the Backlund transformation. To obtain an explicit expression for $\Delta$, an additional (not very cumbersome) calculation using the technique of [@10],[@11] is necessary. If $e$ itself has been obtained from some other solution, and so on, we will have the final expression $$e_n=\sum \Delta^i_x$$ Each $\Delta^i$ contains only the parameters of the previous transformations. Because the total energy of an $n$-soliton solution is equal to $n$, it is very natural to assume that all terms of the last sum have the same behavior at infinity and contribute 1 to the total energy after integration.
It is completely obvious that $\sigma_1$ and $\sigma_2$ exist for all integrable systems possessing soliton solutions and a discrete transformation. Indeed, as was explained above, this is related only to the parity of the step, even ($2n$) or odd ($2n+1$), at which the discrete transformation applied to the initial solution is interrupted.
Without any doubt, all results of this paper also apply to supersymmetric integrable hierarchies [@LS] and multicomponent matrix-type hierarchies [@LY],[@LYY].
Which systems are connected to $\sigma_2$ is completely unknown to the author at this moment.
Acknowledgements
================
The author is indebted to E.A. Yusbashyan for discussions of the results and for substantial help in preparing the manuscript for publication.
APPENDIX
========
In this Appendix we would like to show that the linear-dependence condition imposed on $G$ in Section 3 is not independent but follows directly from (\[IMP\]), and to derive the relations satisfied by the functions $A,B,C,D$.
Let us rewrite (\[LA\]) and (\[IMP\]) in equivalent form $$\pmatrix{A_x & B_x \cr
C_x & D_x \cr}+\pmatrix{\lambda+A & B \cr
C & \lambda +D \cr}i\pmatrix{ \lambda, & u \cr
v & -\lambda \cr}=i\pmatrix{ \lambda, & U \cr
V & -\lambda \cr}\pmatrix{\lambda+A & B \cr
C & \lambda +D \cr}$$ $$\pmatrix{A_t & B_t \cr
C_t & D_t \cr}+\pmatrix{\lambda+A & B \cr
C & \lambda +D \cr}i\pmatrix{ \lambda^2 -{uv\over 2}, & \lambda u-i{u_x\over 2} \cr
\lambda v+i{v_x\over 2} & -\lambda^2+{uv\over 2} \cr}=$$ $$i\pmatrix{\lambda^2 -{UV\over 2}, & \lambda U-i{U_x\over 2} \cr
\lambda V+i{V_x\over 2} & -\lambda^2+{UV\over 2} \cr}
\pmatrix{\lambda+A & B \cr
C & \lambda +D \cr}$$ From which we have the following system of equations $$u-U=2B,\quad v-V=-2C,\quad iA_x=Bv-CU,\quad iD_x=Cu-BV,\quad iB_x=Au-DU,\quad
iC_x=Dv-AV$$ and the same kind of equations with respect to $t$ differentiation. As a result, $A+D$ and $AD-BC$ do not depend on the $x,t$ arguments and are thus representable in the form $A+D=\lambda_1+\lambda_2,\ AD-BC=\lambda_1\lambda_2$. Besides this, the functions $B,C$ satisfy the following system of equations
$$-iC_t-{1\over 2}( C_{xx}+4ikC_x+4k^2C)-C(BC)+{(C_x)^2B\over 4(\epsilon -CB)}=0$$ $$i B_t -{1\over 2}(B_{xx}-4ikB_x+4k^2B)-B(BC)+{(B_x)^2C\over 4(\epsilon -CB)}=0$$ where $k=\lambda_1+\lambda_2,\epsilon=({\lambda_1-\lambda_2)\over 2})^2$.
The last system is of course equivalent to the nonlinear Schrödinger equation. Moreover, the solution of (\[SCNL\]) is connected with the solution of the above system via the relations $$v=-C-{iC_x+kC\over 2\sqrt {\epsilon-BC}},\quad u=B+{iB_x-kC\over 2\sqrt {\epsilon-BC}}$$
[9]{}
A.N. Leznov, The method of scalar $L$-$A$ pair and the soliton solutions of the periodic Toda lattice, in: Proceedings of the II International Workshop “Nonlinear and Turbulent Processes”, Kiev 1983, Gordon and Breach, New York, 1984, pp. 1437–1453.
A.N. Leznov, Functional Anal. Appl. 18 (1984) 83–86.
A.N. Leznov, Lett. Math. Phys. 8, no. 5 (1984) 379–385.
A.N. Leznov, Lett. Math. Phys. 8, no. 4 (1984) 353–358.
A.N. Leznov and A.V. Razumov, J. Math. Phys. 35 (1994) 4067–4080.
A.N. Leznov and A.V. Razumov, J. Math. Phys. 35 (1994) 1738–1754.
A.N. Leznov, Completely integrable systems, Preprint IHEP 92-112 (1992) 1–66.
A.N. Leznov, Physics of Elementary Particles and Atomic Nuclei 27, no. 5 (1996) 1161–1246.
A.N. Leznov and A.S. Sorin, Phys. Lett. B 389 (1996) 494–502.
A.N. Leznov and E.A. Yusbashyan, Lett. Math. Phys. 35 (1995) 345–349.
A.N. Leznov and E.A. Yusbashyan, Nucl. Phys. B 496, no. 3 (1997) 643–653.
L.D. Faddeev and L.A. Takhtajan, Hamiltonian Approach in Soliton Theory, Nauka, Moscow (1985).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Recent networking research has identified that data-driven congestion control (CC) can be more efficient than traditional CC in TCP. Deep reinforcement learning (RL), in particular, has the potential to learn optimal network policies. However, RL suffers from instability and over-fitting, deficiencies which so far render it unacceptable for use in datacenter networks. In this paper, we analyze the requirements for RL to succeed in the datacenter context. We present a new emulator, Iroko, which we developed to support different network topologies, congestion control algorithms, and deployment scenarios. Iroko interfaces with the OpenAI gym toolkit, which allows for fast and fair evaluation of different RL and traditional CC algorithms under the same conditions. We present initial benchmarks on three deep RL algorithms compared to TCP New Vegas and DCTCP. Our results show that these algorithms are able to learn a CC policy which exceeds the performance of TCP New Vegas on a dumbbell and fat-tree topology. We make our emulator open-source and publicly available: [<https://github.com/dcgym/iroko>]{}.'
author:
- |
Fabian Ruffy[^1] [^2]\
`fruffy@cs.ubc.ca`\
Michael Przystupa \
`bot267@cs.ubc.ca`\
Ivan Beschastnikh\
`bestchai@cs.ubc.ca`\
bibliography:
- 'bibliography.bib'
title: 'Iroko: A Framework to Prototype Reinforcement Learning for Data Center Traffic Control'
---
### Acknowledgments {#acknowledgments .unnumbered}
We would like to thank Mark Schmidt for his feedback on this paper. This work is supported by Huawei and the Institute for Computing, Information and Cognitive Systems (ICICS) at UBC.
[^1]: Equal contribution
[^2]: University of British Columbia
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
\[sec:abstract\]
The popularity and wide spread of IoT technology has brought about a rich hardware infrastructure over which it is possible to run powerful applications that were not previously imagined. Among this infrastructure there is medical hardware, which is progressively advancing, although at a slower pace. Nevertheless, medical devices are now powerful enough to run more sophisticated functions and applications and to exchange big data with external systems in a secure and safe fashion. Towards the design of an architecture for the interoperability of medical devices, this paper initially focuses on the background work undertaken by the author towards this objective. The paper briefly describes the role of software in the advances of medical systems and their possibilities for interoperability. It focuses attention on the distribution software layer that is responsible for connectivity, efficiency, and time-sensitivity in the basic operation of medical systems, such as the exchange of information and commands across devices and systems. The paper analyses a number of previous works on middleware (mostly performed in the author's research group, but also in the broader research community), and pays special attention to middleware for web-based systems and how it relates to the development of distributed medical systems.
author:
- Imad Eddine Touahria
bibliography:
- 'References.bib'
title: An incomplete and biased survey on selected resource management for distributed applications as basis for interoperability of medical devices
---
Introduction {#sec:introduction}
=============
As more IoT devices populate everyday life and penetrate every system and society, the available hardware infrastructures become progressively richer. New application paradigms, such as *Social Dispersed Computing* introduced in [@SocialDispersedComputing], envision the creation of highly complex systems that will provide powerful and intelligent solutions to many of today's problems such as mobility, transport, energy, health, smart living, etc.
The number of patients in need of continuous monitoring by 2025 will be approximately 1.2 billion [@world2003diet]. This indicates the need to provide more sophisticated logic to build intelligent systems with very efficient operation; for this, it is necessary to find more efficient ways of designing and developing medical devices and their corresponding software, enabling continuous monitoring through smart devices (mobile or bedside) that will improve the quality of care and safety.
The integration of newer hardware and software technology in health care systems is an opportunity for decreasing mortality levels and will provide, overall, improved and smarter patient care. Health monitoring solutions can be provided on many available devices, from a small medical application deployed on a PDA (Personal Digital Assistant) to measure heart rate, to complex systems deployed on a hospital server that ensure the monitoring of patients with critical or chronic diseases and clinician decision support for diagnosis. Medical devices are an example of the new era of smart health care, as these devices offer new possibilities for physicians (on the user side) and provide new opportunities for research on the design and development of health care software that is data-centric and provides efficient interoperability. Critical clinical scenarios are deeply in need of such medical devices, especially in cases where patients are located at remote sites and need remote diagnosis services. The systems that will support these scenarios will have to integrate devices that are mobile [@7336385; @8363196; @8336776] or fixed close to the monitored patient, such as at the bed or somewhere around her/his home or living site [@5070972]. Manufacturers provide numerous alternatives for the hardware and software of medical systems, with a high degree of variability. These differences may generate integration problems when building holistic clinical solutions. The network of the care center (i.e., hospital, etc.) has different computational servers, switches, data recorders and medical devices/equipment; the latter have to be integrated in the network as Plug and Play (PnP) devices. Manufacturers have to adhere to standards for medical systems (operation, data exchange, etc.) in order to achieve integrability in the network. Improving the assistance given to patients depends greatly on the capacity of medical systems to collect large amounts of data in real time, exchange them, process them, and create new knowledge that can assist medical decisions. Collecting data from multiple sensors and devices can only be done if *interoperability* is achieved effectively, efficiently, and in a timely manner.
Interoperability is fundamental in the medical domain. At the same time, it is a great challenge given the enormous variety of actors, ranging from medical devices to computational units to human actors. As defined in [@fdainteroperability], *medical device interoperability* is the ability to safely, securely, and effectively exchange and use information among one or more devices, products, technologies, or systems. The exchanged data can be used for a number of purposes, such as displaying patient information, monitoring, storage, interpretation, analysis, and taking autonomous action on, or controlling, another product. OR.NET [@ORNET] and ICE [@CIMIT] are two examples of projects whose objective is to provide a safe and interoperable network of medical devices. This will be described later in this paper.
For safety reasons, medical systems have traditionally been deployed in isolation. However, the proliferation of networking technology and of distribution software and middleware has also reached eHealth, in such a way that medical systems are becoming more open. To support such interconnection in a *safe* way, the involved actors agreed on the necessity of a common environment to facilitate interoperability and safety. The most accepted one is the Integrated Clinical Environment (ICE), which is a new solution involving a number of stakeholders whose final goal is to realize an interoperable network of medical devices in a *safe* way [@8029619]. To fulfill this goal, manufacturers have to use standards such as ICE (Integrated Clinical Environment) or other medical-device interoperability projects. Communication standards are beneficial for manufacturers, health care providers, and users: according to the West Health organization [@WEST], the take-up of functional interoperability in medical devices can save up to \$30 billion.
Having said the above, this paper focuses on the impact of software technology on the development of intelligent health solutions and their associated medical systems. By making a biased survey centered on the work of some researchers, the paper exposes selected contributions of a limited part of the research community that target the development of distribution software; this is an exercise in analyzing these specific contributions, which feed the author's background, and taking them as a basis for producing a new, enhanced solution tailored to the needs of medical device interoperability.
Paper structure {#sec:structure}
---------------
The paper is structured as follows. Section \[sec:rolemw\] describes the role of the distribution software for developing distributed medical applications, emphasizing the web technology that facilitates interoperability via standard protocols. Section \[sec:group\] describes the state of the art on middleware design and distribution, focusing on the work of the author's group that serves as his reference for improving middleware design for medical systems; other technologies relevant to his current work are also listed. Section \[sec:standards\] illustrates some examples of frameworks for medical device coordination and interoperability projects. Section \[sec:conclusion\] presents the conclusions and the future work that will be part of the author's thesis.
Distribution software design for medical systems {#sec:rolemw}
================================================
Medical systems experience the same difficulties as other systems as far as interoperability is concerned. Connectivity problems arise due to incompatibilities at the software level (e.g., operating system, networking protocols, format of exchanged data, etc.). This problem has been well identified by the community that develops and designs medical systems, and a number of projects have appeared that are studying ways of solving interoperability problems while providing safe solutions at the same time. The distribution software typically takes care of this type of technical issue, so the middleware community has a lot to say in this picture. Solutions will have to consider ICE, which is the most popular and widely adopted standard for medical device interoperability. Most of its implementations are based on publish-subscribe middleware such as the Data Distribution Service [@DDS]. Nevertheless, there is a long road towards obtaining proper solutions for efficient communication and interoperation in a timely, flexible, and reconfigurable manner.
On the author's road to providing one such solution, this paper performs a study of a reduced set of solutions in the research on designing and developing real-time middleware. The paper analyzes the research on distribution software of his reference distributed real-time systems group, which will be used as the background to the research in this field. This work has the intention of clearly exposing selected problems and solutions in real-time middleware design for general distributed applications and also for cyber-physical systems. The analyzed work starts with techniques for resource management focused at the operating system level, the design of software engineering solutions for middleware, service-oriented techniques, real-time reconfiguration, modeling of cyber-physical systems and real-time online verification, and hardware acceleration embedded into the middleware logic. Taking this as a solid baseline, the author will extend these works to suit the needs of medical systems.
Middleware definition {#sec:middefi}
---------------------
The term *“middleware”* has been used in different contexts. At first, it was described as a layer of software located between platforms and applications in distributed systems [@Bernstein1996MiddlewareAM]; it is widely defined as a software layer that connects two or more heterogeneous applications, systems, or structures [@Bishop03asurvey] and that, in the end, provides an interface to transfer data and commands among the different stakeholders. In the medical domain, HL7 is one of the most important middleware technologies (referred to in sections \[sec:networking\] and \[sec:interoperability2\]).
Middleware is a reusable software layer that provides standard interfaces and protocols for frequently encountered problems like heterogeneity, interoperability, scalability, fault-tolerance, security, and resource-sharing [@5635187]. Middleware technology aims to protect the application layer from problems generated by the lower layers, such as device heterogeneity, data encoding, and security encryption algorithms. Middleware also constitutes a way to achieve *interoperability*, and this is done by handling the heterogeneity of computer architectures, operating systems, programming languages, and networking protocols to facilitate application development and management [@Geihs:2001:MCA:619064.621730].
Middleware technologies
-----------------------
### RMI.
RMI (Remote Method Invocation) [@RMI] is a Java API (Application Programming Interface) and, as its name indicates, it is used to call remote methods. Reference \[8\] defines RMI as a technology to create communication between Java-based applications, where Java objects can be invoked on the same machine or on different machines deployed on different networks; it supports object polymorphism, it is easy to write, secure and mobile, and it supports distributed garbage collection. RMI is based on TCP/IP technology and was proposed as a Java counterpart of the mechanism called RPC (Remote Procedure Call). The two machines that exchange data with RMI technology both have to run a JVM (Java Virtual Machine); RMI is object oriented and implements the client/server model.
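RMI itself is a Java API, so a faithful example would be Java code; as a minimal stand-in for the same remote-invocation pattern, the following sketch uses Python's standard `xmlrpc` module (the port, method name and returned data are invented for illustration, not taken from any real medical system):

```python
from threading import Thread
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

def get_heart_rate(patient_id):
    # stand-in for a query to a bedside monitor
    return {"patient": patient_id, "bpm": 72}

# "remote" side: register the function under a name the client can invoke
server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True, logRequests=False)
server.register_function(get_heart_rate, "get_heart_rate")
Thread(target=server.serve_forever, daemon=True).start()

# client side: the proxy makes the remote call look like a local one
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.get_heart_rate("patient-42"))
```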
### CORBA
CORBA (Common Object Request Broker Architecture) [@CORBA] is an OMG standard and one of the first middleware technologies developed. It allows different software applications, developed in different programming languages and located in different places, to communicate with each other via an interface broker. The core concept in CORBA is the Object Request Broker (ORB); the ORB allows a client application to request services from a server application without knowing the details of the server application or the configuration of the network where it is deployed. CORBA uses the Interface Definition Language (IDL) to define the interfaces of objects, which facilitates the communication between different actors.
### iLand.
iLand [@6198329] is an open source middleware that has been applied in industrial prototypes, including medical systems. It follows the classical principles of a layered middleware design; though its architecture is independent of the underlying communication network protocol, the reference implementation of iLand uses a DDS backbone. iLand includes a number of enhanced functions to support dynamically re-configurable applications based on services: light-weight services in the real-time version and web services in the soft and best effort version with QoS guarantees.
### DDS.
DDS stands for Data Distribution Service [@DDS]; it is a middleware protocol and API standard for data-centric connectivity standardized by the OMG (Object Management Group). DDS provides a middleware architecture for the publish/subscribe messaging pattern and integrates the components of a system, providing the low-latency data connectivity, high reliability, and scalable architecture that business- and mission-critical Internet of Things (IoT) applications need. DDS is *data centric*, which is ideal for the Internet of Things: data centricity means that DDS knows what data it stores and controls how to share that data.
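The following toy Python sketch (ours) illustrates the topic-based publish/subscribe pattern that DDS provides; it is not the DDS API (topic names and data are invented), only an in-process illustration of the data-centric idea:

```python
from collections import defaultdict

class TopicBus:
    """Toy in-process publish/subscribe bus illustrating the topic-based,
    data-centric pattern that DDS provides (this is NOT the DDS API)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        for callback in self._subscribers[topic]:
            callback(sample)

bus = TopicBus()
# a monitoring application subscribes to a vital-signs topic ...
bus.subscribe("vitals/heart_rate", lambda s: print("alarm check:", s))
# ... and a (simulated) medical device publishes samples on it
bus.publish("vitals/heart_rate", {"patient": "patient-42", "bpm": 72})
```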
Middleware position in distributed medical solutions {#sec:midpos}
----------------------------------------------------
The development of distributed applications has been greatly enhanced by middleware technology; one important realization of the latter in medical systems is the automatic handling of patient records. Current practice often uses health information systems (HIS) and electronic health records (EHR) in an informal manner, with ad hoc protocols and interoperability solutions, in order to develop clinical systems [@marisolmedCP20162].
Middleware technology plays an important role in medical systems; in what follows we mention some of these roles:
-   Improves data transfer mechanisms between medical devices and the computational servers where applications are deployed.
-   Defines a security layer with algorithms and encryption methods, and therefore enhances data security.
-   Allows the programmer to concentrate on the application logic, because the hardware details are abstracted away by the middleware.
-   Achieves interoperability by connecting medical devices, computational servers, applications and data storage solutions.
-   Patient data must remain private; middleware can be the intermediary that ensures a certain level of privacy, especially when data is transferred to third-party stakeholders.
-   Improves the portability of applications [@4519606], so that an application can be executed on many platforms.
Over recent years, middleware has proven successful in addressing the ever-increasing complexity of distributed systems in a reusable way [@Issarny:2007:PFM:1253532.1254722]. Thus, middleware is taking an important place in the design of complex systems such as medical applications that control medical devices with a direct impact on patient safety, or a medical system that controls drug injection into a patient's body.
Middleware is characterized by abstracting the low-level details of the communication protocols and the hardware characteristics of devices away from programmers [@sensorsice]. The distributed nature of medical applications requires the use of technologies such as middleware that provide a bridge between layers or levels: the hardware layer can be connected to the application layer via a *middleware*, and the protocol level can be connected to the security level via a *middleware*, so the use of middleware depends on the context of its application. By abstracting the lower-layer problems and concentrating on the application layer, the programmer can produce better solutions. According to [@4809544], middleware technology can be classified into two types depending on the level of use: *low-level* middleware sits between sensor nodes and medical devices, and *high-level* middleware connects different applications and transfers data between them. \[sec:midexa\]
Baseline {#sec:group}
========
This section analyses the state-of-the-art work on which the author will build. It describes the work of the author's group related to the design of resource management strategies for middleware in order to build time-sensitive and efficient systems, as well as real-time middleware design for cyber-physical systems and medical systems.
**Resource management and components for real-time**. This section presents techniques and algorithms to manage the computational resources; it constitutes the lowest abstraction level related to efficient resource assignment and enforcement techniques to avoid interference in the execution among several application tasks.
Controlling the execution at thread or task level to make efficient use of resources is essential in distributed applications. There are a number of contributions on the architecture of real-time middleware from a real-time perspective. One of the first was Hola-QoS, described in [@HolaQOS] and [@the01]; others include real-time scheduling for distributed actions [@ares]; real-time quality of service management [@ambient]; mode changing policies for timely execution [@modechanges; @dualBand]; agents modeling real-time properties [@mata]; identification of Linux kernel properties for improving locking [@peteradaeurope]; architecting open source projects for Linux [@peterspe]; analysis of temporal behavior at bus level within a multiprocessor [@groba]; and reconfiguration scheduling [@aina].
Other contributions, for service-oriented systems [@soca] or for component-based modeling over QoS networking [@demiguel], have progressively shed light on how to handle execution in systems using more abstract modeling, such as encapsulation through services and the separation of application from networking responsibilities.
On the distribution software side, a number of works have been published to fine tune the resource management policies inside the distribution software. In [@canojspe] and [@canocorr], a solution to build real-time component replacement for OSGi systems is provided. The work [@iadis] presents a higher abstraction to separate concerns in managing thread-level resources flexibly. Also, [@qosbidimensional] provides a higher-abstraction resource management policy based on a quality of service specification inside a middleware to flexibly reconfigure applications based on services. Based on this, [@RTServiceComposition] provides a survey on real-time service composition in distributed systems and proposes a new solution to achieve real-time composition, further enhanced by [@mgvj27], which lays down different alternatives for reconfiguration; and [@c29] describes two alternatives for application reconfiguration built inside the middleware, one at task level and the other at service level. Other approaches describe garbage collection techniques inside the middleware, such as [@j22], that support real-time execution even during system maintenance time.
Other works provide higher-level designs of specific applications based on real-time middleware, such as [@j16], which uses the Data Distribution Service for remote control.
Regarding the new virtualization paradigms that provide execution infrastructures where applications can coexist, [@ddscsi; @c35] study the performance and [@j26] provides predictable cloud computing.
**Cyber-physical systems and IoT spheres**. Cyber-physical systems and IoT are highly related because both refer to the monitoring of and actuation over physical objects. In the case of medical systems, the physical objects are the patients, and the sensors and actuators that monitor them are medical devices (which are very close to IoT devices). Medical devices are part of a medical cyber-physical system (MCPS) that monitors the physical conditions of patients and either acts on them or helps decide how a human physician will act on the patients, providing advice through a recommendation system or a decision support system. Cyber-physical systems have to employ rigorous design techniques because they are critical systems; for this, they have to use formal methods to verify that the system properties are kept at all times.
CPS have to be capable of evolving and should, therefore, be flexible. This is so because CPS in the medical domain target the monitoring of humans; humans behave largely unpredictably, which may raise unstable situations and trigger unexpected events. This means that these systems cannot be fully designed offline, as not all situations are known at design time. CPS have to support evolution and flexibility, which are important challenges. It is extremely challenging to integrate the uncertainty raised by the unknown monitored conditions of a patient with the real-time reaction that is needed, as all the possible situations that could happen when the system is in operation cannot be known a priori. In [@JSSPoliMi], a formal design based on Petri nets is described to model systems that can evolve; this technique was also explored in [@Compsac2014]. A different formal-method approach is applied in [@Bersani18] and [@BersaniHASE18]. Also, [@c38] inserts online verification mechanisms based on Petri nets inside a distribution middleware.
In the above works, the focus is placed on the design of the software component interactions relative to their timing properties and other behavioral parameters that are modeled. However, the communication across these components is a key aspect that must be analyzed in order to achieve communication and interaction infrastructures that support timely interaction and variable conditions, such as load peaks or the coexistence of components with heterogeneous resource usage patterns. For this purpose, there are also a number of contributions on the design of distribution middleware for CPS, as middleware is the key software layer capable of abstracting distribution and interaction, masking situations where a node can receive a peak of requests from other nodes; the systems must be resilient to these and other situations, and they have to continue to work at all times. The design of adaptive middleware is provided in the OmaCy architecture [@omacy]. In [@FGCSreview] and [@JSAReiser], an analysis of this problem is outlined. In [@CSIcpsmiddleware], the design of a scheme for attending simultaneous requests is provided. In [@RPiDDS], a model for integrating the Data Distribution Service with single-board computers such as the Raspberry Pi is provided; this is further reworked in [@DDSFace] for a different domain, avionics. Also, there have been a number of dedicated research contributions to building real-time facilities in middleware, such as [@jsareaction2012; @JSAReiser], among others, or to building abstractions for the utilization of multiple interaction paradigms, such as [@adaeurope11] or [@adaeurope12].
**Medical systems**. It is fundamental to analyze the position of middleware within medical systems in order to develop safe execution solutions that also provide timely operation; failure to meet these requirements may lead to life-threatening hazards. Service-oriented architectures such as iLAND [@ilandTII] (which has been well proven in a number of critical domains) have been integrated with ICE in [@sensorsice]. The reconfiguration capacity [@isorcyork] and timely communication of iLAND have been proposed as the core interoperation backbone for ICE. A number of studies for profiling the actual performance of communication middleware, such as [@uc3muma], have been particularized for medical systems in works such as [@sigbediland] for the Internet Communications Engine and [@sigbedamqp] for AMQP. Moreover, a number of improvements to their execution, achieved by making the middleware aware of the underlying hardware structure, have been undertaken in [@sac2017], and the benefits of this acceleration in medical systems for remote patient monitoring have been exemplified in [@uc3mupv] for eHealth services and in [@ades] for audio surveillance.
It is important to bear in mind that designing medical systems requires detailed design and validation of properties, as they are critical systems. There are different design and development frameworks for critical systems that have to be applied considering not only the functional properties of the application-level logic, but also the non-functional properties of the whole software stack. Some frameworks, such as [@infsof; @c44], support web-based monitoring of critical software projects like the development of medical systems, including their set of libraries, such as the distribution middleware.
Key definition of *Medical Devices*
===================================
According to the WHO (World Health Organization) definition of “medical device” [@WHODEVICE], there are a number of possible hardware systems, containing software and hardware parts, that sample patient data by reading vital signs, use networking protocols to exchange or share data, and offer different degrees of mobility.
The development of medical devices is subject to strict regulations established by the FDA (Food and Drug Administration) [@FDA]. The FDA has faced criticism that its strict regulations hamper innovation; this was the case, for instance, with X-ray machines [@ekelman1988technological]. On the other side of the Atlantic, the European Union adopted the MDD (Medical Device Directive) as the governing set of standards and regulations for medical-device manufacturers. Some of these directives are the Medical Device Directive (MDD 93/42/EEC), the In Vitro Diagnostic Medical Device Directive (IVDMDD 98/79/EC) and the Active Implantable Medical Device Directive (AIMDD 90/385/EEC) [@french2012medical]. Overall, regulations in different countries have followed different paths; however, the countries most involved in medical device regulation established the Global Harmonization Task Force (GHTF) and, after that, the International Medical Device Regulators Forum (IMDRF) [@IMDRF] appeared with the goal of accelerating medical device regulatory harmonization and convergence at the international level concerning the safety, performance and quality of medical devices.
According to the EU Borderline Manual [@manualborder], the following types of software should generally be classified as medical devices:
- picture archiving and communication systems;
- mobile apps for processing ECGs;
- software for delivery and management of cognitive remediation and rehabilitation programs;
- software for information management and patient monitoring; and
- mobile apps for the assessment of moles (e.g. making a recommendation about any changes).
Also, the EU provides recommendations on types of software that should generally not be considered medical devices. Examples are mobile apps for communication between patients and caregivers during childbirth or for viewing the anatomy of the human body, software for the interpretation of particular guidelines, and mobile apps for managing pictures of moles (e.g., recording updates and changes over time).
As indicated by the World Health Organization [@WHODEVICE], software can be considered a medical device. Moreover, the IMDRF [@IMDRF] defined the concept of *“Software as a Medical Device”*: software intended to be used for one or more medical purposes that performs these purposes without being part of a hardware medical device. This follows the classical software engineering vision and the software-as-a-service paradigm. It should be noted that this study considers software to be a medical device when the functional properties of the software are enough to handle a clinical situation in which the software service is used as a whole medical device.
In the context of this paper, software is a key part, and middleware is a part of software stacks. As the importance of software increases in the health domain [@6334808], the importance of middleware also rises. Given the relation of cyber-physical systems to medical devices, medical software also has vital safety requirements that force it to adhere to strict parameters concerning data accuracy, integrity, security, and verification. This means that its design should follow strict and rigorous validation and verification techniques, just as CPS do.
Analysis of medical devices characteristics
-------------------------------------------
### Safety.
*Safety* [@7814546] is the most important parameter in the development of medical systems. For this reason, their design must comply with standards such as the European MDD [@199342EC; @200747EC], ISO 13485 [@134852012], IEC 62366 [@623662007], ISO 14971 [@149712012] and others.
The high variability (e.g., displays, sensors, actuators, communication capacities, and even materials) across different devices challenges their design.
To overcome the communication challenges, medical devices are progressively being provided with standard input/output ports, leaving behind the proprietary data output ports that used to vary greatly across suppliers of medical devices. The most common ports are the RS 232 port (DB-9, DB-15, and DB-25), RJ 45, wireless LAN, Bluetooth, USB, or proprietary data connection systems developed by suppliers so that the data can be used by their own IT systems. The following are the most common hardware connections used by suppliers to input data into the device: PS/2 (for keyboard or mouse input), USB, RS 232, and digital data input [@8029619].
![Hardware configuration of medical devices according to ICE architecture[]{data-label="fig:hardware"}](config.png)
### Timing requirements. {#sec:time}
Medical systems are time-sensitive. Different systems have different requirements for timing behavior, ranging from QoS (quality of service) and best effort to hard real-time guarantees. Therefore, medical devices should support different levels of real-time guarantees in the collection of data, the processing of data, the visualization of medical data and alarms, and the actuation on the patient or system. Whether a medical device respects its timing requirements depends on the ability of its software to process and transfer data, on the quality of its hardware components, and on the correct installation of electrodes and cables.
The timing requirements of a medical device include the execution time of an operation (C), its deadline (D), and its period (T) [@8029619]; these timing requirements are defined at design time (requirements collection phase) and verified at run time (system operation). Manufacturers can only specify and guarantee the standalone real-time properties of individual devices, which can be distributed via the ISO/IEEE 11073 medical device description format [@7844043]; when devices are connected in a network or to other devices, the actual timing measurements can only be obtained by executing the system.
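As a minimal sketch of how the (C, T) parameters can be used at design time, the following Python fragment checks the classical Liu-Layland utilization bound for rate-monotonic scheduling (a sufficient but not necessary condition, assuming D = T); the task set values are illustrative and not taken from any real device.

```python
def rm_utilization_test(tasks):
    """tasks: list of (C, T) pairs with worst-case execution time C and period T.
    Returns True if the Liu-Layland bound guarantees schedulability under
    rate-monotonic scheduling (assuming deadlines equal to periods)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Illustrative task set: sampling, filtering and alarm tasks of a monitor (times in ms)
tasks = [(2, 10), (5, 40), (10, 100)]
print(rm_utilization_test(tasks))   # True: total utilization 0.425 <= bound ~0.78
```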
### Electromagnetic compatibility. {#sec:Electromagnetic}
In some clinical cases, more than one medical device can be integrated into the same OR (Operating Room), and many medical devices can be used for different patients within a small clinical area. Electromagnetic fields (EMF) may cause problems in electronic devices [@doi:10.1080/10803548.2009.11076785] and generate safety hazards in medical devices; the level of these hazards increases when the affected MD plays an important role in the treatment process, as with implantable infusion pumps and cardiac pacemakers. An example of a hazard generated by radio frequencies is an overdose of insulin from infusion pumps exposed to EMF from mobile phones or RFID (radio frequency identification devices, usually emitting RF EMF of 0.8–2.4 GHz) [@karpowicz2013assessment].
The radio frequencies generated by a medical device or another source (infrared, microwave or ultrasound therapy) can affect the patient’s body and the bodies of others (the physiotherapist or bystanders) if they find themselves within the EMF [@Gryz2014EnvironmentalIO]; thus, medical devices must be designed to operate within specified radio-frequency intervals in which their behavior is not affected by frequency differences. The impact of EMF can produce wrong readings in a medical device, deteriorate the hardware components of another MD, and may also cause cancer in clinicians who are exposed to EMF for long periods.
### Data accuracy. {#sec:accuracy}
The final goal of a medical device is to monitor patient safety, in some cases inject drugs into the patient’s body, intervene in surgical operations, and provide accurate data for later use by clinicians and statisticians. The large flow of data in healthcare applications generates the need for a well-defined description of the rich and structured data required to represent the variety of data used in clinical environments [@weiningerintegrated], and thereby data that is *correct*. To obtain accurate data, the clinician responsible for the proper use of the device must ensure the correct installation of electrodes and cables on the patient’s body, appropriate environmental conditions for the MD, and the absence of interoperability issues. \[sec:Vulnerabilities\]
Interoperability alternatives {#sec:standards}
=============================
HIMSS (Healthcare Information and Management Systems Society) [@himms] defines *interoperability* in healthcare at three levels:
- Data exchange between two different information technology systems. This includes the ability to exchange the data, to receive it, and to interpret it. This is the foundational interoperability level.
- Structure or format of the exchanged data. This refers to the standards for message formats. It implies that the data content and meaning, as well as its operational purpose, is preserved and unaltered.
- Semantic. This level leverages the two levels above and the coding of the data (that includes its vocabulary) in order to allow the receiver systems to interpret the data.
All these levels are common to general purpose middleware applied to the specific domain of health care.
Data exchange standards {#sec:networking}
-----------------------
Communication and data exchange between medical devices is a basic support for *interoperability*. The basic data exchanged is the EHR. As such, there have been a number of EHR incentive programs for building efficient data management in this respect. Two health IT standards competed years ago for this purpose:
**CDA**. This stands for Clinical Document Architecture, which was backed by Health Level Seven International (HL7).
**CCR**. This stands for Continuity of Care Record, which was promoted by the American Society for Testing and Materials (ASTM). There were a number of proposals for harmonizing the two; however, CDA won over CCR in a first battle. Nowadays, there exists **C-CDA**, the Consolidated CDA format, which is promoted by the Office of the National Coordinator for Health Information Technology (ONC).
Some other alternatives are shown below.
**DICOM** (Digital Imaging and Communications in Medicine) is a standard developed by the National Electrical Manufacturers Association (NEMA) to store, retrieve, and transfer digital medical information, basically imaging data, between various medical devices such as scanners, printers, and network hardware [@7988050].
**HL7 version 2** is a standard designed for messaging in a centralized or distributed environment. It also supports the definition of interfaces for communicating with third parties that do not adhere to data exchange standards. The HL7 standard is one of the most popular ones; precisely, 95% of hospitals and 95% of medical-related equipment and information systems across America use it. It is also in use in Germany, Japan and other developed countries [@5974909].
**HL7 version 3** is a newer version of HL7 version 2 that uses the eXtensible Markup Language (XML), a powerful tool on the web for transferring data. Version 3 also integrates web services and the Web Services Description Language (WSDL) [@5687707]. Version 3 of HL7 promotes semantic interoperability and defines a more explicit methodology for the development of messages [@7377675].
Health specific seamless-communication standards {#sec:interoperability2}
------------------------------------------------
In the following, a number of protocols that provide seamless interoperability in medical systems are described.
**HL7**. This standard is listed again here because its main goal is to provide seamless and fully secure integration across a network of medical devices. HL7 defines data transfer formats for interoperability across medical devices and HIS (Health Information Systems). **ISO/IEEE 11073** designates a set of standards for plug-and-play interoperability of medical devices. ISO/IEEE 11073 defines a common framework for the establishment of a unified data structure model [@7494019]. The objective of ISO/IEEE 11073 (X73PHD) [@11073; @staff2010iso] is to standardize Personal Health Devices (PHDs) and allow semantic interoperability of medical devices by defining the structure of the data and the protocol for information delivery between individual medical devices (such as glucose meters, weight scales, blood pressure monitors, etc.) and the manager (computer, smart phone, set-top box, etc.), which collects and manages the information from the individual medical devices [@6152915].
**FHIR** (Fast Healthcare Interoperability Resources) of HL7 is an environment and framework for EHR exchange that integrates both the market needs and the most up-to-date technologies [@8031128]. FHIR targets the REST (Representational State Transfer) architectural style as presented by Fielding [@Fielding:2000:ASD:932295]. For this purpose, it models the actors of healthcare scenarios as resources: medical software, clinician models, medical devices, medications, or IT structures, among others. FHIR is the most recent standard of the series HL7 v2, HL7 v3, CDA developed by HL7 [@6627810].
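As an illustration of the resource-oriented modeling that FHIR promotes, the following Python sketch builds a simplified, FHIR-style Observation payload and serializes it to JSON; the field values (codes, references) are made up for the example, and the snippet is schematic rather than a validated FHIR resource.

```python
import json

# Schematic, FHIR-style Observation for a heart-rate measurement.
# Values (codes, references) are illustrative only.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "8867-4",
                    "display": "Heart rate"}]
    },
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}

# A RESTful client would POST this JSON document to an Observation endpoint.
print(json.dumps(observation, indent=2))
```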
Web-services based interoperability
-----------------------------------
**DPWS** (Devices Profile for Web Services) uses SOA (Service Oriented Architecture) to provide interoperable, cross-platform, cross-domain, and network-agnostic access to devices and their services [@7325198]. DPWS is used for embedded devices with limited resources by enabling Web services in IoT applications. DPWS requires WSDL (Web Service Description Language) and SOAP (Simple Object Access Protocol) to communicate the device services, but it does not need a registry such as UDDI (Universal Description, Discovery and Integration) for service discovery. DPWS aims to achieve interoperability by applying the loosely coupled Web services concept to MD operation and data encryption.
**MDPWS** (Medical Devices Profile for Web Services) is part of the IEEE 11073-20702 series of standards and uses the principles of DPWS in the medical device interoperability domain, with some modifications such as more restricted security mechanisms compared with DPWS; e.g., client authentication via HTTP authentication is withdrawn in favor of X.509 v3 certificates [@7390446]. MDPWS applies the principles of DPWS while respecting the high-acuity patient environment and the complexity of medical devices.
Software frameworks for medical systems
---------------------------------------
### OSCP {#sec:OSCP}
OSCP (Open Surgical Communication Protocol) [@oscpde] is based on the data transmission technology Medical Device Profile for Web Services (MDPWS, standardized as IEEE 11073-20702) and the Domain Information and Service Model (standardized as IEEE 11073-10207). As mentioned before, MDPWS is based on a SOA architecture and it allows devices to detect and find other medical devices in a local network using WS-Discovery.
The description of devices is based on formal notations and the creation of device description templates. As it is based on formalisms, it supports the use of different logics to verify the correctness and properties of the systems and the devices, such as LTL (Linear Temporal Logic), Smart Assertion Logic for Temporal Logic (SALT), regular LTL (RLTL), etc.
### ICE {#sec:ICE}
ICE (Integrated Clinical Environment) [@CIMIT] is an architecture defined in 2009 in the ASTM (American Society for Testing and Materials) F2761 standard. ICE was designed to bridge the gap created by the high heterogeneity of medical devices. ICE aims at providing a set of specifications and an architecture that implement a plug-and-play environment to create networks of medical devices and a communication gateway between them, where messages and commands are exchanged appropriately. To survive into the next generation of medical applications and systems, manufacturers will have to adhere to the ICE specifications. One of the most important design decisions of ICE is that medical devices have a network output port and must produce data that can be managed through ICE interfaces. The main ICE objectives are the following:
[$\bullet$]{}[=1em =0em]{}
Improve patient safety by coordinating medical device actions and avoiding incorrect medical decisions caused by faulty device operation.
Ensure support for clinicians in their monitoring and treatment operations, where clinical aid information is generated by a set of workflows implemented in the ICE framework logic.
Create a flexible communication bus between medical devices, servers running medical applications, and the clinicians.
Implement an interoperable network of medical devices and computational servers where data and messages are exchanged in real time.
Define standards for the hardware and software characteristics or dimensions of medical devices that will be used by manufacturers to produce medical devices that comply with ICE.
In ICE-based systems, *safety* is the ability to implement interoperability between heterogeneous medical devices in a single high-acuity patient environment where communication is done via software or hardware interfaces. ICE aims to improve patient *safety* by elaborating and deploying interoperability of the medical devices, thus creating an interoperable communication bus between heterogeneous medical devices over which messages and commands are exchanged.
![Hardware configuration of medical devices according to ICE architecture[]{data-label="fig:hardware"}](layers.png)
### OR.NET. {#sec:OR.NET}
OR.NET is a solution developed by German academia and industry for medical device integration and medical system interoperability in the operating room and its surroundings. The objective of OR.NET is to develop basic concepts for the secure dynamic networking of computer-controlled medical devices in the operating room and clinic [@ORNET]. In the end, these concepts are evaluated and transformed into standards. OR.NET aims to create a service-oriented architecture for the safe and secure dynamic interconnection of medical devices in the OR context [@7318708].
Besides OSCP, OR.NET also allows the use of other communication protocols (e.g., DICOM and HL7 version 2). OSCP explicitly does not try to replace these widespread protocols; instead, dedicated gateways are specified that enable the operation of the DICOM and HL7 protocols despite the separation between the OR network and the hospital network.
Conclusion and future works {#sec:conclusion}
===========================
This paper has analyzed the baseline for the design of middleware that applies to general distributed applications and can later be taken as a take-off platform for designing the adjustments needed for medical systems. The analysis is confined to the research group of reference for the author, comprising the fields that are part of embedded research curricula [@j1], such as resource management techniques, middleware design, cyber-physical systems, and software engineering techniques for functional design considering non-functional properties. Also, technologies related to medical devices have been studied in detail, from data retrieval to processing and final decisions for clinicians. Requirements for the correct operation of medical devices have been presented, and interoperability, as the core concept in medical device communication, has been detailed with examples.
This work is the basis for future research aiming to study and define the position of middleware technology in medical systems, especially in medical device communications, with ICE as the development objective. An architecture for the development of ICE-based systems is under study; this work is the outline for medical device communications and data exchange through real implementations in clinical environments.
\[Bibliography\]
---
abstract: 'The minimality of the penalty function associated with a convex risk measure is analyzed in this paper. First, in a general static framework, we provide necessary and sufficient conditions for a penalty function defined in a convex and closed subset of the absolutely continuous measures with respect to some reference measure $\mathbb{P}$ to be minimal on this set. When the probability space supports a Lévy process, we establish results that guarantee the minimality property of a penalty function described in terms of the coefficients associated with the density processes. The set of density processes is described and the convergence of its quadratic variation is analyzed.'
author:
- '[Daniel Hernández–Hernández[^1] Leonel Pérez-Hernández[^2]]{}'
title: ' Characterization of the minimal penalty of a convex risk measure with applications to Lévy processes.'
---
**Key words:** Convex risk measures, Fenchel-Legendre transformation, minimal penalization, Lévy process.\
**Mathematical Subject Classification:** 91B30, 46E30.
Introduction
============
The definition of coherent risk measure was introduced by Artzner *et al.* in their fundamental works [@ADEH; @1997], [@ADEH; @1999] for finite probability spaces, giving an axiomatic characterization that was later extended by Delbaen [@Delbaen; @2002] to general probability spaces. In the papers mentioned above one of the fundamental axioms was positive homogeneity; in further works it was removed, leading to the concept of convex risk measure introduced by Föllmer and Schied [@FoellSch; @2002; @a], [@FoellSch; @2002; @b], Frittelli and Rosazza Gianin [@FritRsza; @2002], [@FritRsza; @2004] and Heath [@Heath; @2000].
This is a rich area that has received a lot of attention and in which much work has been developed. There exists by now a well established theory in the static and dynamic cases, but there are still many unanswered questions in the static framework that need to be analyzed carefully. The one we focus on in this paper is the characterization of the penalty functions that are minimal for the corresponding static risk measure. Up to now, there are mainly two ways to deal with minimal penalty functions, namely the definition and the biduality relation. With the results presented in this paper we can start with a penalty function, which essentially discriminates models within a convex closed subset of probability measures absolutely continuous with respect to (w.r.t.) the market measure, and then guarantee that it corresponds to the minimal penalty of the corresponding convex risk measure on this subset. This property is, as we will see, closely related to the lower semicontinuity of the penalty function, and the difficulty of proving this property depends on the structure of the probability space.
We first provide a general framework, within a measurable space with a reference probability measure $\mathbb{P}$, and show necessary and sufficient conditions for a penalty function defined in a convex and closed subset of the absolutely continuous measures with respect to the reference measure to be minimal within this subset. The characterization of the form of the penalty functions that are minimal when the probability space supports a Lévy process is then studied. This requires characterizing the set of absolutely continuous measures for this space, which is done using results that describe the density process for spaces supporting semimartingales with the weak predictable representation property. Roughly speaking, using the weak representation property, every density process splits into two parts: one is related to the continuous local martingale part of the decomposition and the other to the corresponding discontinuous one. A kind of continuity property is shown for the quadratic variation of a sequence of densities converging in $L^{1}$. From this characterization of the densities, a family of penalty functions is proposed, which turns out to be minimal for the risk measures generated by duality.
The paper is organized as follows. Section 2 contains the description of the minimal penalty functions for a general probability space, providing necessary and sufficient conditions, the latter restricted to a subset of equivalent probability measures. Section 3 reports the structure of the densities for a probability space that supports a Lévy process and the convergence properties needed to prove the lower semicontinuity of the set of penalty functions defined in Section 4. In this last section we show that these penalty functions are minimal.
Minimal penalty function of risk measures concentrated in $\mathcal{Q}_{\ll }\left( \mathbb{P}\right) $. \[Sect Minimal Penalty Function of CMR\]
=================================================================================================================================================
Any penalty function $\psi $ induces a convex risk measure $\rho $, which in turn has a representation by means of a minimal penalty function $\psi _{\rho }^{\ast }.$ Starting with a penalty function $\psi $ concentrated in a convex and closed subset of the set of absolutely continuous probability measures with respect to some reference measure $\mathbb{P}$, in this section we give necessary and sufficient conditions to guarantee that $\psi $ is the minimal penalty within this set. We begin by briefly recalling some known results from the theory of static risk measures, and then a characterization of minimal penalties is presented.
Preliminaries from static measures of risk [Subsect:\_Preliminaries\_SCRM]{}
----------------------------------------------------------------------------
Let $X:\Omega \rightarrow \mathbb{R}$ be a mapping from a set $\Omega $ of possible market scenarios, representing the discounted net worth of the position. Uncertainty is represented by the measurable space $(\Omega,
\mathcal{F})$, and we denote by $\mathcal{X}$ the linear space of bounded financial positions, including constant functions.
1. The function $\rho :\mathcal{X}\rightarrow \mathbb{R}$, quantifying the risk of $X$, is a *monetary risk measure* if it satisfies the following properties: $$\begin{array}{rl}
\text{Monotonicity:} & \text{If }X\leq Y\text{ then }\rho \left( X\right)
\geq \rho \left( Y\right) \ \forall X,Y\in \mathcal{X}.\end{array}
\label{Monotonicity}$$$\smallskip \ $$$\begin{array}{rl}
\text{Translation Invariance:} & \rho \left( X+a\right) =\rho \left(
X\right) -a\ \forall a\in \mathbb{R}\ \forall X\in \mathcal{X}.\end{array}
\label{Translation Invariance}$$
2. When this function satisfies also the convexity property $$\begin{array}{rl}
& \rho \left( \lambda X+\left( 1-\lambda \right) Y\right) \leq \lambda \rho
\left( X\right) +\left( 1-\lambda \right) \rho \left( Y\right) \ \forall
\lambda \in \left[ 0,1\right] \ \forall X,Y\in \mathcal{X},\end{array}
\label{Convexity}$$it is said that $\rho $ is a convex risk measure.
3. The function $\rho $ is called normalized if $\rho \left(
0\right) =0$, and sensitive, with respect to a measure $\mathbb{P}$, when for each $X\in L_{+}^{\infty }\left( \mathbb{P}\right) $ with $\mathbb{P}\left[ X>0\right] >0$ we have that $\rho \left( -X\right) >\rho \left(
0\right) .$
We say that a set function $\mathbb{Q}:\mathcal{F}\rightarrow \left[ 0,1\right] $ is a *probability content* if it is finitely additive and $\mathbb{Q}\left( \Omega
\right) =1$. The set of *probability contents* on this measurable space is denoted by $\mathcal{Q}_{cont}$. From the general theory of static convex risk measures [@FoellSch; @2004], we know that any map $\psi :\mathcal{Q}_{cont}\rightarrow \mathbb{R}\cup \{+\infty \},$ with $\inf\nolimits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\psi (\mathbb{Q})\in \mathbb{R}$, induces a static convex measure of risk as a mapping $\rho :\mathfrak{M}_{b}\rightarrow \mathbb{R}$ given by $$\rho (X):=\sup\nolimits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\psi (\mathbb{Q})\right\} .
\label{Static_CMR_induced_by_phi}$$Here $\mathfrak{M}$ denotes the class of measurable functions and $\mathfrak{M}_{b}$ the subclass of bounded measurable functions. The function $\psi$ will be referred to as a *penalty function*. Föllmer and Schied [@FoellSch; @2002; @b Theorem 3.2] and Frittelli and Rosazza Gianin [@FritRsza; @2002 Corollary 7] proved that any convex risk measure is essentially of this form.
More precisely, a convex risk measure $\rho $ on the space $\mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) $ has the representation $$\rho (X)=\sup\limits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\psi _{\rho }^{\ast }\left( \mathbb{Q}\right)
\right\} , \label{Static_CMR_Robust_representation}$$where $$\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) :=\sup\limits_{X\in \mathcal{A}_{\rho }}\mathbb{E}_{\mathbb{Q}}\left[ -X\right] , \label{Def._minimal_penalty}$$and $\mathcal{A}_{\rho }:=\left\{ X\in \mathfrak{M}_{b}:\rho (X)\leq
0\right\} $ is the *acceptance set* of $\rho .$
The penalty $\psi _{\rho }^{\ast }$ is called the *minimal penalty function* associated with $\rho $ because, for any other penalty function $\psi $ fulfilling $\left( \ref{Static_CMR_Robust_representation}\right) $, $\psi \left( \mathbb{Q}\right) \geq \psi _{\rho }^{\ast }\left( \mathbb{Q}\right) $ for all $\mathbb{Q}\in \mathcal{Q}_{cont}.$ Furthermore, the minimal penalty function satisfies the following biduality relation $$\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) =\sup_{X\in \mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) }\left\{ \mathbb{E}_{\mathbb{Q}}\left[
-X\right] -\rho \left( X\right) \right\} ,\quad \forall \mathbb{Q\in }\mathcal{Q}_{cont}. \label{static convex rsk msr biduality}$$
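As a standard illustration of this duality (stated here only as a reminder, following the textbook treatment of Föllmer and Schied; the parameter $\beta$ is a generic risk-aversion coefficient), the entropic risk measure
$$\rho_{\beta}(X):=\frac{1}{\beta}\log \mathbb{E}_{\mathbb{P}}\left[ e^{-\beta X}\right] ,\qquad \beta >0,$$
is a convex risk measure whose minimal penalty function is the scaled relative entropy
$$\psi _{\rho_{\beta} }^{\ast }\left( \mathbb{Q}\right) =\frac{1}{\beta }H\left( \mathbb{Q}\mid \mathbb{P}\right) :=\frac{1}{\beta }\mathbb{E}_{\mathbb{Q}}\left[ \log \frac{d\mathbb{Q}}{d\mathbb{P}}\right] \quad \text{for }\mathbb{Q}\in \mathcal{Q}_{\ll }(\mathbb{P}),$$
and $+\infty $ otherwise.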
Let $\mathcal{Q}\left( \Omega ,\mathcal{F}\right) $ be the family of probability measures on the measurable space $\left( \Omega ,\mathcal{F}\right) .$ Among the measures of risk, the class of those that are concentrated on the set of probability measures $\mathcal{Q\subset Q}_{cont}$ is of special interest. Recall that a function $I:E\subset \mathbb{R}^{\Omega }\rightarrow \mathbb{R}$ is *sequentially continuous from below (above)* when $\left\{ X_{n}\right\} _{n\in \mathbb{N}}\uparrow
X\Rightarrow \lim_{n\rightarrow \infty }I\left( X_{n}\right) =I\left(
X\right) $ ( respectively $\left\{ X_{n}\right\} _{n\in \mathbb{N}}\downarrow X\Rightarrow \lim_{n\rightarrow \infty }I\left( X_{n}\right)
=I\left( X\right) $). Föllmer and Schied [@FoellSch; @2004] proved that any sequentially continuous from below convex measure of risk is concentrated on the set $\mathcal{Q}$. Later, Krätschmer [@Kraetschmer; @2005 Prop. 3 p. 601] established that the sequential continuity from below is not only a sufficient but also a necessary condition in order to have a representation, by means of the minimal penalty function in terms of probability measures.
We denote by $\mathcal{Q}_{\ll }(\mathbb{P})$ the subclass of probability measures absolutely continuous with respect to $\mathbb{P}$ and by $\mathcal{Q}_{\approx }\left( \mathbb{P}\right) $ the subclass of equivalent probability measures. Of course, $\mathcal{Q}_{\approx }\left( \mathbb{P}\right) \subset \mathcal{Q}_{\ll }(\mathbb{P})\subset \mathcal{Q}\left(
\Omega ,\mathcal{F}\right) $.
\[Remarkpsi(Q)=+oo\_for\_Q\_not\_<<\] When a convex risk measure in $\mathcal{X}:=L^{\infty }\left( \mathbb{P}\right) $ satisfies the property $$\rho \left( X\right) =\rho \left( Y\right) \text{ if }X=Y\ \mathbb{P}\text{-a.s.} \label{rho(X)=rho(Y)_for_X=Y}$$and is represented by a penalty function $\psi $ as in $\left( \ref{Static_CMR_induced_by_phi}\right) $, we have that $$\mathbb{Q}\in \mathcal{Q}_{cont}\setminus \mathcal{Q}_{cont}^{\ll
}\Longrightarrow \psi \left( \mathbb{Q}\right) =+\infty ,
\label{psi(Q)=+oo_for_Q_not_<<}$$where $\mathcal{Q}_{cont}^{\ll }$ is the set of contents absolutely continuous with respect to $\mathbb{P}$; see [@FoellSch; @2004].
Minimal penalty functions [Subsect:\_Minimal\_penalty\_functions]{}
-------------------------------------------------------------------
The minimality property of the penalty function turns out to be quite relevant, and it is a desirable property that is not easy to prove in general. For instance, in the study of robust portfolio optimization problems (see, for example, Schied [@Schd; @2007] and Hernández-Hernández and Pérez-Hernández [@PerHer]), using duality techniques, the minimality property is a necessary condition in order to have a well posed dual problem. More recently, the dual representations of dynamic risk measures were analyzed by Barrieu and El Karoui [@BaElKa2009], while the connection with BSDEs and $g$-expectations has been studied by Delbaen *et al.* [@DelPenRz]. The minimality of the penalty function also plays a crucial role in the characterization of the time consistency property for dynamic risk measures (see Bion-Nadal [@BionNa2008], [@BionNa2009]).
In the next sections we will show some of the difficulties that appear when proving the minimality of the penalty function for a probability space $(\Omega, \mathcal{F},\mathbb{P})$ that supports a Lévy process. However, to establish the results of this section we only need to fix a probability space $(\Omega, \mathcal{F}, \mathbb{P})$.
When we deal with a set of absolutely continuous probability measures $\mathcal{K}\subset \mathcal{Q}_{\ll }(\mathbb{P})$ it is necessary to make reference to some topological concepts, meaning that we are considering the corresponding set of densities and the strong topology in $L^{1}\left(
\mathbb{P}\right) .$ Recall that within a locally convex space, a convex set $\mathcal{K}$ is weakly closed if and only if $\mathcal{K}$ is closed in the original topology [@FoellSch; @2004 Thm A.59].
\[static minimal penalty funct. in Q(<<) <=>\] Let $\psi :\mathcal{K}\subset \mathcal{Q}_{\ll }(\mathbb{P})\rightarrow \mathbb{R}\cup \{+\infty
\} $ be a function with $\inf\nolimits_{\mathbb{Q}\in \mathcal{K}}\psi (\mathbb{Q})\in \mathbb{R},$ and define the extension $\psi (\mathbb{Q}):=\infty $ for each $\mathbb{Q}\in \mathcal{Q}_{cont}\setminus \mathcal{K}$, with $\mathcal{K}$ a convex closed set. Also, define the function $\Psi $, with domain in $L^{1}$, as $$\Psi \left( D\right) :=\left\{
\begin{array}{rl}
\psi \left( \mathbb{Q}\right) & \text{if }D=d\mathbb{Q}/d\mathbb{P}\text{
for }\mathbb{Q}\in \mathcal{K} \\
\infty & \text{otherwise.}\end{array}\right.$$Then, for the convex measure of risk $\rho (X):=\sup\limits_{\mathbb{Q}\in
\mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\psi
\left( \mathbb{Q}\right) \right\} $ associated with $\psi $ the following assertions hold:
$\left( a\right) $ If $\rho $ has as minimal penalty $\psi _{\rho }^{\ast }$ the function $\psi $ (i.e. $\psi $ $=\psi _{\rho }^{\ast }$ ), then $\Psi $ is a proper convex function and lower semicontinuous w.r.t. the (strong) $L^{1}$-topology or equivalently w.r.t. the weak topology $\sigma \left(
L^{1},L^{\infty }\right) $. $\left( b\right) $ If $\Psi $ is lower semicontinuous w.r.t. the (strong) $L^{1}$-topology or equivalently w.r.t. the weak topology $\sigma \left(
L^{1},L^{\infty }\right) ,$ then $$\psi \mathbf{1}_{\mathcal{Q}_{\ll }(\mathbb{P})}=\psi _{\rho }^{\ast }\mathbf{1}_{\mathcal{Q}_{\ll }(\mathbb{P})}.
\label{PSI_l.s.c=>psi*=psi_on_Q<<}$$
*Proof:* $\left( a\right) $ Recall that $\sigma \left(
L^{1},L^{\infty }\right) $ is the coarsest topology on $L^{1}\left( \mathbb{P}\right) $ under which every linear functional induced by an element of $L^{\infty }\left( \mathbb{P}\right) $ is continuous, and hence $\Psi _{0}^{X}\left( Z\right) :=\mathbb{E}_{\mathbb{P}}\left[ Z\left(
-X\right) \right] $, with $Z\in L^1$, is a continuous function for each $X\in
\mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) $ fixed. For $\delta \left(
\mathcal{K}\right) :=\left\{ Z:Z=d\mathbb{Q}/d\mathbb{P}\text{ with }\mathbb{Q}\in \mathcal{K}\right\} $ we have that$$\Psi _{1}^{X}\left( Z\right) :=\Psi _{0}^{X}\left( Z\right) \mathbf{1}_{\delta \left( \mathcal{K}\right) }\left( Z\right) +\infty \times \mathbf{1}_{L^{1}\setminus \delta \left( \mathcal{K}\right) }\left( Z\right)$$is clearly lower semicontinuous on $\delta \left( \mathcal{K}\right) .$ For $Z^{\prime }\in L^{1}\left( \mathbb{P}\right) \setminus \delta \left(
\mathcal{K}\right) $ arbitrarily fixed, we have from Hahn-Banach’s Theorem that there is a continuous linear functional $l\left( Z\right) $ with $l\left(
Z^{\prime }\right) <\inf_{Z\in \delta \left( \mathcal{K}\right) }l\left(
Z\right) $. Taking $\varepsilon :=\frac{1}{2}\left\{ \inf_{Z\in \delta
\left( \mathcal{K}\right) }l\left( Z\right) -l\left( Z^{\prime }\right)
\right\} $ we have that the weak open ball $B\left( Z^{\prime },\varepsilon
\right) :=\left\{ Z\in L^{1}\left( \mathbb{P}\right) :\left\vert l\left(
Z^{\prime }\right) -l\left( Z\right) \right\vert <\varepsilon \right\} $ satisfies $B\left( Z^{\prime },\varepsilon \right) \cap \delta \left( \mathcal{K}\right) =\varnothing .$ Therefore, $\Psi _{1}^{X}\left( Z\right) $ is weak lower semicontinuous on $L^{1}\left( \mathbb{P}\right) ,$ as well as $\Psi
_{2}^{X}\left( Z\right) :=\Psi _{1}^{X}\left( Z\right) -\rho \left( X\right)
.$ If $$\psi \left( \mathbb{Q}\right) =\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) =\sup_{X\in \mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right)
}\left\{ \int Z\left( -X\right) d\mathbb{P}-\rho \left( X\right) \right\},$$ where $Z:=d\mathbb{Q}/d\mathbb{P},$ we have that $\Psi \left( Z\right)
=\sup_{X\in \mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) }\left\{ \Psi
_{2}^{X}\left( Z\right) \right\} $ is the supremum of a family of convex lower semicontinuous functions with respect to the topology $\sigma \left(
L^{1},L^{\infty }\right) $, and $\Psi \left( Z\right) $ preserves both properties.
$\left( b\right) $ For the Fenchel-Legendre transform (conjugate function) $\Psi ^{\ast }:\ L^{\infty }\left( \mathbb{P}\right) \longrightarrow \mathbb{R}$ we have, for each $U\in L^{\infty }\left( \mathbb{P}\right) $, $$\Psi ^{\ast }\left( U\right) =\sup\limits_{Z\in \delta \left( \mathcal{K}\right) }\left\{ \int ZUd\mathbb{P}-\Psi \left( Z\right) \right\} =\sup\limits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ U\right] -\psi \left( \mathbb{Q}\right) \right\} \equiv \rho \left( -U\right) .$$ It follows from the lower semicontinuity of $\Psi $ w.r.t. the weak topology $\sigma \left( L^{1},L^{\infty }\right) $ that $\Psi =\Psi ^{\ast \ast }$. Considering the weak$^{\ast }$-topology $\sigma \left( L^{\infty }\left(
\mathbb{P}\right) ,L^{1}\left( \mathbb{P}\right) \right) $ for $Z=d\mathbb{Q}/d\mathbb{P}$ we have that $$\psi \left( \mathbb{Q}\right) =\Psi \left( Z\right) =\Psi ^{\ast \ast
}\left( Z\right) =\sup\limits_{U\in L^{\infty }\left( \mathbb{P}\right)
}\left\{ \int Z\left( -U\right) d\mathbb{P-}\Psi ^{\ast }\left( -U\right)
\right\} =\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) .$$$\Box $
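As a minimal numerical sanity check of the biduality $\Psi =\Psi ^{\ast \ast }$ used above (a sketch on a toy finite probability space, not part of the proof; the measures `P`, `Q` and the parameter `beta` below are made-up illustrative values), one can compare the entropic penalty obtained through the biduality with its closed form:

```python
import numpy as np
from scipy.optimize import minimize

# Finite Omega with three scenarios; P is the reference measure, Q << P.
P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.3, 0.5])
beta = 2.0

def rho(X):
    """Entropic risk measure rho(X) = (1/beta) log E_P[exp(-beta X)]."""
    return np.log(np.dot(P, np.exp(-beta * X))) / beta

# Minimal penalty via the biduality relation: sup_X { E_Q[-X] - rho(X) }.
objective = lambda X: -(np.dot(Q, -X) - rho(X))        # minimize the negative
res = minimize(objective, x0=np.zeros(3))
penalty_biduality = -res.fun

# Closed-form candidate: relative entropy H(Q|P) scaled by 1/beta.
penalty_entropy = np.dot(Q, np.log(Q / P)) / beta

print(penalty_biduality, penalty_entropy)   # the two values should agree closely
```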
1. As pointed out in Remark \[Remarkpsi(Q)=+oo\_for\_Q\_not\_<<\], we have that $$\mathbb{Q}\in \mathcal{Q}_{cont}\setminus \mathcal{Q}_{cont}^{\ll }\Longrightarrow \psi _{\rho
}^{\ast }\left( \mathbb{Q}\right) =+\infty =\psi \left( \mathbb{Q}\right).$$ Therefore, under the conditions of Lemma \[static minimal penalty funct. in Q(<<) <=>\] $\left( b\right) $ the penalty function $\psi $ might differ from $\psi _{\rho
}^{\ast }$ on $\mathcal{Q}_{cont}^{\ll }\setminus \mathcal{Q}_{\ll }.$ For instance, the penalty function defined as $\psi \left( \mathbb{Q}\right) :=\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{\ll }(\mathbb{P})}\left( \mathbb{Q}\right) $ leads to the worst case risk measure $\rho (X):=\sup\nolimits_{\mathbb{Q}\in \mathcal{Q}_{\ll }(\mathbb{P})}\mathbb{E}_{\mathbb{Q}}\left[ -X\right] $, which has as minimal penalty the function $$\psi _{\rho }^{\ast
}\left( \mathbb{Q}\right) =\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{cont}^{\ll }}\left( \mathbb{Q}\right).$$
2. Note that the total variation distance $d_{TV}\left( \mathbb{Q}^{1},\mathbb{Q}^{2}\right) :=\sup_{A\in \mathcal{F}}\left\vert \mathbb{Q}^{1}\left[ A\right] -\mathbb{Q}^{2}\left[ A\right] \right\vert $, with $\mathbb{Q}^{1},\;\mathbb{Q}^{2}\in \mathcal{Q}_{\ll }$, fulfills $d_{TV}\left( \mathbb{Q}^{1},\mathbb{Q}^{2}\right) \leq \left\Vert d\mathbb{Q}^{1}/d\mathbb{P}-d\mathbb{Q}^{2}/d\mathbb{P}\right\Vert _{L^{1}}$. Therefore, the minimal penalty function is lower semicontinuous in the total variation topology; see Remark 4.16 (b) p. 163 in [@FoellSch; @2004].
Preliminaries from stochastic calculus\[Sect. Preliminaries\]
=============================================================
Within a probability space which supports a semimartingale with the weak predictable representation property, there is a representation of the density processes of the absolutely continuous probability measures by means of two coefficients. Roughly speaking, this means that the dimension of the linear space of local martingales is two. Through these coefficients we can represent every local martingale as a combination of two components, namely a stochastic integral with respect to the continuous part of the semimartingale and an integral with respect to its compensated jump measure. This is of course the case for local martingales, and all the more so does this observation about the dimensionality hold for the martingales associated with the corresponding density processes. In this section we review those concepts of stochastic calculus that are relevant to understand these representation properties, and prove a kind of continuity property for the quadratic variation of a sequence of uniformly integrable martingales converging in $L^{1}$. This result is one of the contributions of this paper.
Fundamentals of Lévy and semimartingales processes [Subsect:\_Fundamentals\_Levy\_and\_Semimartingales]{}
---------------------------------------------------------------------------------------------------------
Let $\left( \Omega ,\mathcal{F},\mathbb{P}\right) $ be a probability space. We say that $L:=\left\{ L_{t}\right\} _{t\in \mathbb{R}_{+}}$ is a Lévy process for this probability space if it is an adapted càdlàg process with independent stationary increments starting at zero. The filtration considered is $\mathbb{F}:=\left\{ \mathcal{F}_{t}^{\mathbb{P}}\left( L\right) \right\} _{t\in \mathbb{R}_{+}}$, the completion of its natural filtration, i.e. $\mathcal{F}_{t}^{\mathbb{P}}\left( L\right)
:=\sigma \left\{ L_{s}:s\leq t\right\} \vee \mathcal{N}$ where $\mathcal{N}$ is the $\sigma $-algebra generated by all $\mathbb{P}$-null sets. The jump measure of $L$ is denoted by $\mu :\Omega \times \left( \mathcal{B}\left(
\mathbb{R}_{+}\right) \otimes \mathcal{B}\left( \mathbb{R}_{0}\right)
\right) \rightarrow \mathbb{N}$ where $\mathbb{R}_{0}:=\mathbb{R}\setminus
\left\{ 0\right\} $. The dual predictable projection of this measure, also known as its Lévy system, satisfies the relation $\mu ^{\mathcal{P}}\left( dt,dx\right) =dt\times \nu \left( dx\right) $, where $\nu \left(
\cdot \right) :=\mathbb{E}\left[ \mu \left( \left[ 0,1\right] \times \cdot
\right) \right] $ is the intensity or Lévy measure of $L.$
The Lévy-Itô decomposition of $L$ is given by $$L_{t}=bt+W_{t}+\int\limits_{\left[ 0,t\right] \times \left\{ 0<\left\vert
x\right\vert \leq 1\right\} }xd\left\{ \mu -\mu ^{\mathcal{P}}\right\}
+\int\limits_{\left[ 0,t\right] \times \left\{ \left\vert x\right\vert
>1\right\} }x\mu \left( ds,dx\right) . \label{Levy-Ito_decomposition}$$It implies that $L^{c}=W$ is the Wiener process, and hence $\left[ L^{c}\right] _{t}=t$, where $\left( \cdot \right) ^{c}$ and $\left[ \,\cdot \,\right] $ denote the continuous martingale part and the process of quadratic variation of any semimartingale, respectively. For the predictable quadratic variation we use the notation $\left\langle \,\cdot \,\right\rangle $.
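As a rough numerical illustration of this decomposition (a minimal sketch, assuming a finite-activity jump part so that the small-jump compensation term can be ignored; all parameter values are illustrative), the following Python code simulates a path of a Lévy process as drift plus Brownian motion plus a compound Poisson sum of jumps.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_path(T=1.0, n=1000, b=0.1, jump_rate=5.0, jump_scale=0.5):
    """Simulate L_t = b*t + W_t + compound-Poisson jumps on a uniform grid.
    Finite jump activity is assumed, so no small-jump truncation/compensation is needed."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    # Brownian increments
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    # Compound Poisson increments: Poisson number of jumps per step, Gaussian jump sizes
    n_jumps = rng.poisson(jump_rate * dt, size=n)
    jumps = np.array([rng.normal(0.0, jump_scale, k).sum() for k in n_jumps])
    increments = b * dt + dW + jumps
    return t, np.concatenate(([0.0], np.cumsum(increments)))

t, L = levy_path()
print(L[-1])   # value of the simulated path at time T
```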
Denote by $\mathcal{V}$ the set of càdlàg, adapted processes with finite variation, and let $\mathcal{V}^{+}\subset \mathcal{V}$ be the subset of non-decreasing processes in $\mathcal{V}$ starting at zero. Let $\mathcal{A}\subset \mathcal{V}$ be the class of processes with integrable variation, i.e. $A\in \mathcal{A}$ if and only if $\bigvee_{0}^{\infty }A\in
L^{1}\left( \mathbb{P}\right) $, where $\bigvee_{0}^{t}A$ denotes the variation of $A$ over the finite interval $\left[ 0,t\right] $. The subset $\mathcal{A}^{+}=\mathcal{A\cap V}^{+}$ represents those processes which are also increasing i.e. with non-negative right-continuous increasing trajectories. Furthermore, $\mathcal{A}_{loc}$ (resp. $\mathcal{A}_{loc}^{+}$) is the collection of adapted processes with locally integrable variation (resp. adapted locally integrable increasing processes). For a càdlàg process $X$ we denote by $X_{-}:=\left( X_{t-}\right) $ the left hand limit process, where $X_{0-}:=X_{0}$ by convention, and by $\bigtriangleup X=\left( \bigtriangleup X_{t}\right) $ the jump process $\bigtriangleup X_{t}:=X_{t}-X_{t-}$.
Given an adapted càdlàg semimartingale $U$, the jump measure and its dual predictable projection (or compensator) are denoted by $\mu _{U}\left( \left[ 0,t\right] \times A\right) :=\sum_{s\leq t}\mathbf{1}_{A}\left(
\triangle U_{s}\right) $ and $\mu _{U}^{\mathcal{P}}$, respectively. Further, we denote by $\mathcal{P}\subset \mathcal{F}\otimes \mathcal{B}\left( \mathbb{R}_{+}\right) $ the predictable $\sigma $-algebra and by $\widetilde{\mathcal{P}}:=\mathcal{P}\otimes \mathcal{B}\left( \mathbb{R}_{0}\right) .$ With some abuse of notation, we write $\theta _{1}\in
\widetilde{\mathcal{P}}$ when the function $\theta _{1}:$ $\Omega \times
\mathbb{R}_{+}\times \mathbb{R}_{0}\rightarrow \mathbb{R}$ is $\widetilde{\mathcal{P}}$-measurable and $\theta \in \mathcal{P}$ for predictable processes.
Let $$\begin{array}{clc}
\mathcal{L}\left( U^{c}\right) := & \left\{ \theta \in \mathcal{P}:\exists
\left\{ \tau _{n}\right\} _{n\in \mathbb{N}}\text{ sequence of stopping
times with }\tau _{n}\uparrow \infty \right. & \\
& \left. \text{and }\mathbb{E}\left[ \int\limits_{0}^{\tau _{n}}\theta ^{2}d\left[ U^{c}\right] \right] <\infty \ \forall n\in \mathbb{N}\right\} &
\end{array}
\label{Def._L(U)}$$be the class of predictable processes $\theta \in \mathcal{P}$ integrable with respect to $U^{c}$ in the sense of local martingale, and by $$\Lambda \left( U^{c}\right) :=\left\{ \int \theta _{0}dU^{c}:\theta _{0}\in
\mathcal{L}\left( U^{c}\right) \right\}$$the linear space of processes which admits a representation as the stochastic integral with respect to $U^{c}$. For an integer valued random measure $\mu ^{\prime }$ we denote by $\mathcal{G}\left( \mu ^{\prime
}\right) $ the class of $\widetilde{\mathcal{P}}$-measurable processes $\theta _{1}:$ $\Omega \times \mathbb{R}_{+}\times \mathbb{R}_{0}\rightarrow
\mathbb{R}$ satisfying the following conditions: $$\begin{array}{cl}
\left( i\right) & \theta _{1}\in \widetilde{\mathcal{P}}, \\
\left( ii\right) & \int\limits_{\mathbb{R}_{0}}\left\vert \theta _{1}\left(
t,x\right) \right\vert \left( \mu ^{\prime }\right) ^{\mathcal{P}}\left(
\left\{ t\right\} ,dx\right) <\infty \ \forall t>0, \\
\left( iii\right) & \text{The process } \\
& \left\{ \sqrt{\sum\limits_{s\leq t}\left\{ \int\limits_{\mathbb{R}_{0}}\theta _{1}\left( s,x\right) \mu ^{\prime }\left( \left\{ s\right\}
,dx\right) -\int\limits_{\mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left(
\mu ^{\prime }\right) ^{\mathcal{P}}\left( \left\{ s\right\} ,dx\right)
\right\} ^{2}}\right\} _{t\in \mathbb{R}_{+}}\in \mathcal{A}_{loc}^{+}.\end{array}$$The set $\mathcal{G}\left( \mu ^{\prime }\right) $ represents the domain of the functional $\theta _{1}\rightarrow \int \theta _{1}d\left( \mu ^{\prime
}-\left( \mu ^{\prime }\right) ^{\mathcal{P}}\right) ,$ which assign to $\theta _{1}$ the unique purely discontinuous local martingale $M$ with $$\bigtriangleup M_{t}=\int\limits_{\mathbb{R}_{0}}\theta _{1}\left(
t,x\right) \mu ^{\prime }\left( \left\{ t\right\} ,dx\right) -\int\limits_{\mathbb{R}_{0}}\theta _{1}\left( t,x\right) \left( \mu ^{\prime }\right) ^{\mathcal{P}}\left( \left\{ t\right\} ,dx\right) .$$
We use the notation $\int
\theta _{1}d\left( \mu ^{\prime }-\left( \mu ^{\prime }\right) ^{\mathcal{P}}\right) $ to write the value of this functional at $\theta _{1}$. It is important to point out that this functional is not, in general, the integral with respect to the difference of two measures. For a detailed exposition on these topics see He, Wang and Yan [@HeWanYan] or Jacod and Shiryaev [@Jcd&Shry; @2003], which are our basic references.
In particular, for the Lévy process $L$ with jump measure $\mu $, $$\mathcal{G}\left( \mu \right) \equiv \left\{ \theta _{1}\in \widetilde{\mathcal{P}}:\left\{ \sqrt{\sum\limits_{s\leq t}\left\{ \theta _{1}\left(
s,\triangle L_{s}\right) \right\} ^{2}\mathbf{1}_{\mathbb{R}_{0}}\left(
\triangle L_{s}\right) }\right\} _{t\in \mathbb{R}_{+}}\in \mathcal{A}_{loc}^{+}\right\} , \label{G(miu) Definition}$$since $\mu ^{\mathcal{P}}\left( \left\{ t\right\} \times A\right) =0$, for any Borel set $A$ of $\mathbb{R}_{0}$.
We say that the semimartingale $U$ has the *weak property of predictable representation* when $$\mathcal{M}_{loc,0}=\Lambda \left( U^{c}\right) +\left\{ \int \theta
_{1}d\left( \mu _{U}-\mu _{U}^{\mathcal{P}}\right) :\theta _{1}\in \mathcal{G}\left( \mu _{U}\right) \right\} ,\ \label{Def_weak_predictable_repres.}$$where the previous sum is the linear sum of the vector spaces, and $\mathcal{M}_{loc,0}$ is the linear space of local martingales starting at zero.
Let $\mathcal{M}$ and $\mathcal{M}_{\infty }$ denote the class of càdlàg martingales and of càdlàg uniformly integrable martingales, respectively. The following lemma is of independent interest, since it describes the continuity properties of the quadratic variation along a convergent sequence of uniformly integrable martingales. It will play a central role in the proof of the lower semicontinuity of the penalization function introduced in Section \[Sect Penalty Function for densities\]. Observe that the assertion of this lemma is valid in a general filtered probability space, and not only for the completed natural filtration of the Lévy process introduced above.
\[E\[|Mn-M|\]->0=>\[Mn-M\](oo)->0\_in\_P\]For $\left\{ M^{\left( n\right)
}\right\} _{n\in \mathbb{N}}\subset \mathcal{M}_{\infty }$ and $M\in
\mathcal{M}_{\infty }$ the following implication holds $$M_{\infty }^{\left( n\right) }\overset{L^{1}}{\underset{n\rightarrow \infty }{\longrightarrow }}M_{\infty }\Longrightarrow \left[ M^{\left( n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\longrightarrow }0.$$Moreover,$$M_{\infty }^{\left( n\right) }\overset{L^{1}}{\underset{n\rightarrow \infty }{\longrightarrow }}M_{\infty }\Longrightarrow \left[ M^{\left( n\right) }-M\right] _{t}\overset{\mathbb{P}}{\underset{n\rightarrow \infty }{\longrightarrow }}0\;\; \forall t.$$
*Proof.* From the $L^{1}$ convergence of $M_{\infty
}^{\left( n\right) }$ to $M_{\infty }$, we have that $\{M_{\infty }^{\left(
n\right) }\}_{n\in \mathbb{N}}\cup \left\{ M_{\infty }\right\} $ is uniformly integrable, which is equivalent to the existence of a convex and increasing function $G:[0,+\infty )\rightarrow \lbrack 0,+\infty )$ such that $$\left( i\right) \quad \lim_{x\rightarrow \infty }\frac{G\left( x\right) }{x}=\infty ,$$and $$\left( ii\right) \quad \sup_{n\in \mathbb{N}}\mathbb{E}\left[ G\left(
\left\vert M_{\infty }^{\left( n\right) }\right\vert \right) \right] \vee
\mathbb{E}\left[ G\left( \left\vert M_{\infty }\right\vert \right) \right]
<\infty .$$Now, define the stopping times $$\tau _{k}^{n}:=\inf \left\{ u>0:\sup_{t\leq u}\left\vert M_{t}^{\left(
n\right) }-M_{t}\right\vert \geq k\right\} .$$Observe that the estimate $\sup_{n\in \mathbb{N}}\mathbb{E}\left[ G\left(
\left\vert M_{\tau _{k}^{n}}^{\left( n\right) }\right\vert \right) \right]
\leq \sup_{n\in \mathbb{N}}\mathbb{E}\left[ G\left( \left\vert M_{\infty
}^{\left( n\right) }\right\vert \right) \right] $ implies the uniform integrability of $\left\{ M_{\tau _{k}^{n}}^{\left( n\right) }\right\}
_{n\in \mathbb{N}}$ for each fixed $k$. Since any uniformly integrable càdlàg martingale is of class $\mathcal{D}$, the uniform integrability of $\left\{ M_{\tau _{k}^{n}}\right\} _{n\in \mathbb{N}}$ follows for all $k\in \mathbb{N}$, and hence $\left\{ \sup\nolimits_{t\leq \tau
_{k}^{n}}\left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \right\}
_{n\in \mathbb{N}}$ is uniformly integrable. This, together with the maximal inequality applied to the nonnegative submartingale $\left\vert M^{\left( n\right) }-M\right\vert $, $$\begin{aligned}
\mathbb{P}\left[ \sup_{t\in \mathbb{R}_{+}}\left\vert M_{t}^{\left( n\right)
}-M_{t}\right\vert \geq \varepsilon \right] &\leq &\frac{1}{\varepsilon }\left\{ \sup_{t\in \mathbb{R}_{+}}\mathbb{E}\left[ \left\vert M_{t}^{\left(
n\right) }-M_{t}\right\vert \right] \right\} \\
&\leq &\frac{1}{\varepsilon }\mathbb{E}\left[ \left\vert M_{\infty }^{\left(
n\right) }-M_{\infty }\right\vert \right] \longrightarrow 0,\end{aligned}$$yields the convergence of $\left\{ \sup\nolimits_{t\leq \tau
_{k}^{n}}\left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \right\}
_{n\in \mathbb{N}}$ in $L^{1}$ to $0$. The second Davis’ inequality [@HeWanYan Thm. 10.28] guarantees that, for some constant $C$, $$\mathbb{E}\left[ \sqrt{\left[ M^{\left( n\right) }-M\right] _{\tau _{k}^{n}}}\right] \leq C\mathbb{E}\left[ \sup\limits_{t\leq \tau _{k}^{n}}\left\vert
M_{t}^{\left( n\right) }-M_{t}\right\vert \right] \underset{n\rightarrow
\infty }{\longrightarrow }0\quad \forall k\in \mathbb{N},$$and hence $\left[ M^{\left( n\right) }-M\right] _{\tau _{k}^{n}}\underset{n\rightarrow \infty }{\overset{\mathbb{P}}{\longrightarrow }}0$ for all $k\in \mathbb{N}.$
Finally, to prove that $\left[ M^{\left( n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\rightarrow }0$ we argue by contradiction: if $\left[ M^{\left( n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\nrightarrow }0$, then there exist $\varepsilon >0$ and $\left\{ n_{k}\right\} _{k\in \mathbb{N}}\subset \mathbb{N}$ with $$d\left( \left[ M^{\left( n_{k}\right) }-M\right] _{\infty },0\right) \geq \varepsilon$$for all $k\in \mathbb{N}$, where $d\left( X,Y\right) :=\inf \left\{ \varepsilon >0:\mathbb{P}\left[ \left\vert X-Y\right\vert >\varepsilon \right] \leq \varepsilon \right\} $ is the Ky Fan metric. To keep the notation as simple as possible, we denote this subsequence again as the original sequence. Using a diagonal argument, a subsequence $\left\{ n_{i}\right\} _{i\in \mathbb{N}}\subset \mathbb{N}$ can be chosen with the property that $d\left( \left[ M^{\left( n_{i}\right) }-M\right] _{\tau
_{k}^{n_{i}}},0\right) <\frac{1}{k}$ for all $i\geq k.$ Since $$\lim_{k\rightarrow \infty }\left[ M^{\left( n_{i}\right) }-M\right] _{\tau
_{k}^{n_{i}}}=\left[ M^{\left( n_{i}\right) }-M\right] _{\infty }\quad
\mathbb{P}-a.s.,$$we can find some $k\left( n_{i}\right) \geq i$ such that $$d\left( \left[ M^{\left( n_{i}\right) }-M\right] _{\tau _{k\left(
n_{i}\right) }^{n_{i}}},\left[ M^{\left( n_{i}\right) }-M\right] _{\infty
}\right) <\frac{1}{i}.$$Then, using the estimate $$\mathbb{P}\left[ \left\vert \left[ M^{\left( n_{k}\right) }-M\right] _{\tau
_{k\left( n_{k}\right) }^{n_{k}}}-\left[ M^{\left( n_{k}\right) }-M\right]
_{\tau _{k}^{n_{k}}}\right\vert >\varepsilon \right] \leq \mathbb{P}\left[
\left\{ \sup\limits_{t\in \mathbb{R}_{+}}\left\vert M_{t}^{\left(
n_{k}\right) }-M_{t}\right\vert \geq k\right\} \right] ,$$it follows that $$d\left( \left[ M^{\left( n_{k}\right) }-M\right] _{\tau _{k\left(
n_{k}\right) }^{n_{k}}},\left[ M^{\left( n_{k}\right) }-M\right] _{\tau
_{k}^{n_{k}}}\right) \underset{k\rightarrow \infty }{\longrightarrow }0,$$which yields a contradiction with $\varepsilon \leq d\left( \left[ M^{\left(
n_{k}\right) }-M\right] _{\infty },0\right) $. Thus, $\left[ M^{\left(
n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\rightarrow }0.$ The last part of this lemma follows immediately from the first statement. $\Box $
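The Ky Fan metric appearing in the preceding proof metrizes convergence in probability. The following short numerical sketch (an illustration only; the sample sizes, the grid-based approximation and the toy random variables are our own assumptions) estimates the Ky Fan distance from paired samples and shows it shrinking along a sequence converging in probability.

```python
import numpy as np

def ky_fan_distance(x, y, grid_size=2001):
    """Approximate d(X,Y) = inf{eps > 0 : P(|X-Y| > eps) <= eps} from paired
    samples, by scanning a grid of candidate values of eps."""
    diff = np.abs(np.asarray(x) - np.asarray(y))
    eps_grid = np.linspace(0.0, diff.max() + 1e-12, grid_size)
    feasible = np.array([np.mean(diff > e) <= e for e in eps_grid])
    return eps_grid[np.argmax(feasible)]   # smallest feasible grid point

# Toy sequence Y_n = X + noise/n converges to X in probability,
# so the estimated Ky Fan distance should decrease in n.
rng = np.random.default_rng(2)
X = rng.normal(size=100_000)
for n in (1, 5, 25, 125):
    Y_n = X + rng.normal(size=X.size) / n
    print(n, ky_fan_distance(X, Y_n))
```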
Using Doob's optional stopping theorem we conclude that, for $M\in \mathcal{M}_{\infty }$ and a stopping time $\tau $, we have $M^{\tau }\in \mathcal{M}_{\infty }$; the following result is therefore an immediate corollary.
\[E\[|(Mn-M)thau|\]->0=>\[Mn-M\]thau->0\_in\_P\]For $\left\{ M^{\left(
n\right) }\right\} _{n\in \mathbb{N}}\subset \mathcal{M}_{\infty }$, $M\in
\mathcal{M}_{\infty }$ and any stopping time $\tau $, the following implication holds: $$M_{\tau }^{\left( n\right) }\overset{L^{1}}{\rightarrow }M_{\tau
}\Longrightarrow \left[ M^{\left( n\right) }-M\right] _{\tau }\overset{\mathbb{P}}{\longrightarrow }0.$$
*Proof.* Applying Lemma \[E\[|Mn-M|\]->0=>\[Mn-M\](oo)->0\_in\_P\] to the stopped martingales $\left( M^{\left( n\right) }\right) ^{\tau }$ and $M^{\tau }$, we obtain $\left[ \left( M^{\left( n\right) }\right) ^{\tau }-M^{\tau }\right] _{\infty }=\left[ M^{\left( n\right) }-M\right] _{\infty }^{\tau }=\left[ M^{\left( n\right) }-M\right] _{\tau }\overset{\mathbb{P}}{\longrightarrow }0.$ $\Box $
Density processes \[Sect. Density\_Processes\]
----------------------------------------------
Given an absolutely continuous probability measure $\mathbb{Q}\ll \mathbb{P}$ on a filtered probability space carrying a semimartingale with the weak predictable representation property, the structure of the density process has been studied extensively by several authors; see Theorem 14.41 in He, Wang and Yan [@HeWanYan] or Theorem III.5.19 in Jacod and Shiryaev (2003).
Denote by $D_{t}:=\mathbb{E}\left[ \left. \frac{d\mathbb{Q}}{d\mathbb{P}}\right\vert \mathcal{F}_{t}\right] $ the càdlàg version of the density process. For the increasing sequence of stopping times $\tau
_{n}:=\inf \left\{ t\geq 0:D_{t}<\frac{1}{n}\right\} $ $n\geq 1$ and $\tau
_{0}:=\sup_{n}\tau _{n}$ we have $D_{t}\left( \omega \right) =0$ $\forall
t\geq \tau _{0}\left( \omega \right) $ and $D_{t}\left( \omega \right) >0$ $\forall t<\tau _{0}\left( \omega \right) ,$ i.e.$$D=D\mathbf{1}_{[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[},
\label{D=D1[[0,To[[}$$and the process $$\frac{1}{D_{s-}}\mathbf{1}_{[\hspace{-0.05cm}[D_{-}\not=0]\hspace{-0.04cm}]}\text{ is integrable w.r.t. }D, \label{1/D_integrable_wrt_D}$$where we abuse notation by setting $[\hspace{-0.05cm}[D_{-}\not=0]\hspace{-0.04cm}]:=\left\{ \left( \omega ,t\right) \in \Omega \times \mathbb{R}_{+}:D_{t-}\left( \omega \right) \neq 0\right\} .$ Both conditions $\left( \ref{D=D1[[0,To[[}\right) $ and $\left( \ref{1/D_integrable_wrt_D}\right) $ are necessary and sufficient for a semimartingale to be an *exponential semimartingale* [@HeWanYan Thm. 9.41], i.e., $D=\mathcal{E}\left( Z\right) $, the Doléans-Dade exponential of another semimartingale $Z$. In that case we have $$\tau _{0}=\inf \left\{
t>0:\triangle Z_{t}=-1\right\}. \label{Tau0=JumpZ=-1}$$
It is well known that Lévy processes satisfy the weak property of predictable representation [@HeWanYan] when the completed natural filtration is considered. In the following lemma we present the characterization of the density processes in this case.
\[Q<<P =>\] Given an absolutely continuous probability measure $\mathbb{Q}\ll \mathbb{P}$, there exist coefficients $\theta _{0}\in \mathcal{L}\left(
W\right) \ $and $\theta _{1}\in \mathcal{G}\left( \mu \right) $ such that $$\frac{d\mathbb{Q}_{t}}{d\mathbb{P}_{t}}=\frac{d\mathbb{Q}_{t}}{d\mathbb{P}_{t}}\mathbf{1}_{[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[}=\mathcal{E}\left( Z^{\theta }\right) \left( t\right) , \label{Dt=exp(Zt)}$$where $Z_{t}^{\theta }\in \mathcal{M}_{loc}$ is the local martingale given by$$Z_{t}^{\theta }:=\int\limits_{]0,t]}\theta _{0}dW+\int\limits_{]0,t]\times
\mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right)
-ds\ \nu \left( dx\right) \right) , \label{Def._Ztheta(t)}$$and $\mathcal{E}$ represents the Doléans-Dade exponential of a semimartingale. The coefficients $\theta _{0}$ and $\theta _{1}$ are $dt$-a.s. and $\mu _{\mathbb{P}}^{\mathcal{P}}\left( ds,dx\right) $-a.s. unique on $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]$ and $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]\times \mathbb{R}_{0}$, respectively, for $\mathbb{P}$-almost all $\omega $. Furthermore, the coefficients can be chosen with $\theta _{0}=0$ on $]\hspace{-0.05cm}]\tau _{0},\infty \lbrack
\hspace{-0.04cm}[$ and $\theta _{1}=0$ on $]\hspace{-0.05cm}]\tau
_{0},\infty \lbrack \hspace{-0.04cm}[\times \mathbb{R}$ .
*Proof.* We only address the uniqueness of the coefficients $\theta _{0}$ and $\theta _{1},$ because the representation follows from $\left( \ref{D=D1[[0,To[[}\right) $ and $\left( \ref{1/D_integrable_wrt_D}\right) .$ Let us assume that there are two vectors $\theta :=\left(
\theta _{0},\theta _{1}\right) $ and $\theta ^{\prime }:=\left( \theta
_{0}^{\prime },\theta _{1}^{\prime }\right) $ satisfying the representation, i.e. $$\begin{array}{rl}
D_{u}\mathbf{1}_{[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[} & =\int
D_{t-}d\{\int\limits_{]0,t]}\theta _{0}\left( s\right)
dW_{s}+\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right)
\left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \} \\
& =\int D_{t-}d\{\int\limits_{]0,t]}\theta _{0}^{\prime }\left( s\right)
dW_{s}+\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}^{\prime }\left(
s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right)
\},\end{array}$$and thus$$\begin{aligned}
\triangle D_{t} &=&D_{t-}\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu
\left( dx\right) \right) \right) \\
&=&D_{t-}\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta
_{1}^{\prime }\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu
\left( dx\right) \right) \right) .\end{aligned}$$Since $D_{t-}>0$ on $[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[,$ it follows that $$\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left(
s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right)
\right) =\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta
_{1}^{\prime }\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu
\left( dx\right) \right) \right) .$$ Since two purely discontinuous local martingales with the same jumps are equal, it follows $$\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left(
\mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right)
=\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}^{\prime }\left(
s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right)$$and thus $$\int D_{t-}d\{\int\limits_{]0,t]}\theta _{0}\left( s\right) dW_{s}\}=\int
D_{t-}d\{\int\limits_{]0,t]}\theta _{0}^{\prime }\left( s\right) dW_{s}\}.$$Then, $$0=\left[ \int D_{s-}d\left\{ \int\nolimits_{]0,s]}\left( \theta
_{0}^{\prime }\left( u\right) -\theta _{0}\left( u\right) \right)
dW_{u}\right\} \right] _{t}=\int\limits_{]0,t]}\left( D_{s-}\right)
^{2}\left\{ \theta _{0}^{\prime }\left( s\right) -\theta _{0}\left( s\right)
\right\} ^{2}ds$$and thus $\theta _{0}^{\prime }=\theta _{0}\ dt$-$a.s$ on $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]$ for $\mathbb{P}$-almost all $\omega $.
On the other hand, $$\begin{aligned}
0 &=&\left\langle \int \left\{ \theta _{1}^{\prime }\left( s,x\right)
-\theta _{1}\left( s,x\right) \right\} \left( \mu \left( ds,dx\right) -ds\
\nu \left( dx\right) \right) \right\rangle _{t} \\
&=&\int\limits_{]0,t]\times \mathbb{R}_{0}}\left\{ \theta _{1}^{\prime
}\left( s,x\right) -\theta _{1}\left( s,x\right) \right\} ^{2}\nu \left(
dx\right) ds,\end{aligned}$$implies that $\theta _{1}\left( s,x\right) =\theta _{1}^{\prime }\left(
s,x\right) \quad \mu _{\mathbb{P}}^{\mathcal{P}}\left( ds,dx\right) $-a.s. on $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]\times \mathbb{R}_{0}$ for $\mathbb{P}$-almost all $\omega $. $\Box $
For $\mathbb{Q}\ll \mathbb{P}$ the function $\theta _{1}\left( \omega
,t,x\right) $ described in Lemma \[Q<<P =>\] determines the density of the predictable projection $\mu _{\mathbb{Q}}^{\mathcal{P}}\left( dt,dx\right) $ with respect to $\mu _{\mathbb{P}}^{\mathcal{P}}\left( dt,dx\right) $ (see He, Wang and Yan [@HeWanYan] or Jacod and Shiryaev (2003)). More precisely, for $B\in \left( \mathcal{B}\left( \mathbb{R}_{+}\right)
\otimes \mathcal{B}\left( \mathbb{R}_{0}\right) \right) $ we have $$\mu _{\mathbb{Q}}^{\mathcal{P}}\left( \omega ,B\right) =\int_{B}\left(
1+\theta _{1}\left( \omega ,t,x\right) \right) \mu _{\mathbb{P}}^{\mathcal{P}}\left( dt,dx\right) . \label{Q<<P=>_miu_wrt_Q}$$
In what follows we restrict ourselves to the time interval $\left[ 0,T\right]
, $ for some $T>0$ fixed, and we take $\mathcal{F}=\mathcal{F}_{T}.$ The corresponding classes of density processes associated to $\mathcal{Q}_{\ll }(\mathbb{P})$ and $\mathcal{Q}_{\approx }\left( \mathbb{P}\right) $ are denoted by $\mathcal{D}_{\ll }\left( \mathbb{P}\right) $ and $\mathcal{D}_{\approx }\left( \mathbb{P}\right) $, respectively. For instance, in the former case $$\mathcal{D}_{\ll }\left( \mathbb{P}\right) :=\left\{ D=\left\{ D_{t}\right\}
_{t\in \left[ 0,T\right] }:\exists \mathbb{Q}\in \mathcal{Q}_{\ll }\left(
\mathbb{P}\right) \text{ with }D_{t}=\left. \frac{d\mathbb{Q}}{d\mathbb{P}}\right\vert _{\mathcal{F}_{t}}\right\} , \label{Def._D<<}$$and the processes in this set are of the form $$\begin{array}{rl}
D_{t}= & \exp \left\{ \int\limits_{]0,t]}\theta
_{0}dW+\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right)
\left( \mu \left( ds,dx\right) -\nu \left( dx\right) ds\right) -\frac{1}{2}\int\limits_{]0,t]}\left( \theta _{0}\right) ^{2}ds\right\} \times \\
& \times \exp \left\{ \int\limits_{]0,t]\times \mathbb{R}_{0}}\left\{ \ln
\left( 1+\theta _{1}\left( s,x\right) \right) -\theta _{1}\left( s,x\right)
\right\} \mu \left( ds,dx\right) \right\}\end{array}
\label{D(t) explicita}$$for $\theta _{0}\in \mathcal{L}\left( W\right) $ and $\theta _{1}\in
\mathcal{G}\left( \mu \right) $.
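As a numerical illustration of the preceding formula, the following sketch simulates the terminal density $D_{T}=\mathcal{E}\left( Z^{\theta }\right) \left( T\right) $ for a toy Lévy process (Brownian motion plus a compound Poisson process with finite Lévy measure) and constant coefficients $\theta _{0},\theta _{1}$; it checks that $\mathbb{E}_{\mathbb{P}}\left[ D_{T}\right] \approx 1$ and that the jump intensity under $\mathbb{Q}$ is multiplied by $1+\theta _{1}$, in agreement with $\left( \ref{Q<<P=>_miu_wrt_Q}\right) $. The choice of process, jump law and parameter values is an assumption made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Levy process (assumption): L_t = W_t + compound Poisson with rate lam
# and standard normal jumps, so nu(dx) = lam * N(0,1)(dx) is finite.
T, lam = 1.0, 2.0
theta0, theta1 = 0.5, 0.3          # constant coefficients, theta1 > -1

n_paths = 200_000
W_T = rng.normal(0.0, np.sqrt(T), n_paths)   # terminal Brownian value
N_T = rng.poisson(lam * T, n_paths)          # number of jumps on [0, T]

# With constant coefficients the explicit formula for D_t reduces to
#   D_T = exp(theta0*W_T - theta0^2*T/2) * exp(-theta1*lam*T) * (1 + theta1)^N_T .
D_T = (np.exp(theta0 * W_T - 0.5 * theta0**2 * T)
       * np.exp(-theta1 * lam * T) * (1.0 + theta1) ** N_T)

print("E_P[D_T]          ~", D_T.mean())           # should be close to 1
print("E_Q[N_T]          ~", (D_T * N_T).mean())   # reweighted expectation
print("(1+theta1)*lam*T   =", (1.0 + theta1) * lam * T)
```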
The set $\mathcal{D}_{\ll }\left( \mathbb{P}\right) $ is characterized as follows.
\[D<<\_<=>\] $D$ belongs to $\mathcal{D}_{\ll }\left( \mathbb{P}\right) $ if and only if there are $\theta _{0}\in \mathcal{L}\left( W\right) $ and $\theta _{1}\in \mathcal{G}\left( \mu \right) $ with $\theta _{1}\geq -1$ such that $D_{t}=\mathcal{E}\left( Z^{\theta }\right) \left( t\right) \
\mathbb{P}$-a.s. $\forall t\in \left[ 0,T\right] $ and $\mathbb{E}_{\mathbb{P}}\left[ \mathcal{E}\left( Z^{\theta }\right) \left( t\right) \right] =1\
\forall t\geq 0$, where $Z^{\theta }\left( t\right) $ is defined by $\left( \ref{Def._Ztheta(t)}\right) .$
*Proof.* The necessity follows from Lemma \[Q<<P =>\]. Conversely, let $\theta _{0}\in \mathcal{L}\left( W\right) $ and $\theta _{1}\in \mathcal{G}\left( \mu \right) $ with $\theta _{1}\geq -1$ satisfy the conditions of the statement. Since $D_{t}=\mathcal{E}\left( Z^{\theta }\right) \left( t\right) =1+\int_{]0,t]}D_{s-}dZ_{s}^{\theta }\in \mathcal{M}_{loc}$ is a nonnegative local martingale (nonnegativity follows from $\theta _{1}\geq -1$), it is a supermartingale, and by assumption it has constant expectation. Therefore it is a martingale, and hence the density process of an absolutely continuous probability measure. $\Box$
Since density processes on $\left[ 0,T\right] $ are uniformly integrable martingales, the following proposition follows immediately from Lemma \[E\[|Mn-M|\]->0=>\[Mn-M\](oo)->0\_in\_P\] and Corollary \[E\[|(Mn-M)thau|\]->0=>\[Mn-M\]thau->0\_in\_P\].
\[E\[|Dn-D|\]->0 => \[Dn-D\](T)->0\_in\_P\] Let $\left\{ \mathbb{Q}^{\left(
n\right) }\right\} _{n\in \mathbb{N}}$ be a sequence in $\mathcal{Q}_{\ll }(\mathbb{P})$, with $D_{T}^{\left( n\right) }:=\left. \frac{d\mathbb{Q}^{\left( n\right) }}{d\mathbb{P}}\right\vert _{\mathcal{F}_{T}}$ converging to $D_{T}:=\left. \frac{d\mathbb{Q}}{d\mathbb{P}}\right\vert _{\mathcal{F}_{T}}$ in $L^{1}\left( \mathbb{P}\right) $. For the corresponding density processes $D_{t}^{\left( n\right) }:=\mathbb{E}_{\mathbb{P}}\left[
D_{T}^{\left( n\right) }\left\vert \mathcal{F}_{t}\right. \right] $ and $D_{t}:=\mathbb{E}_{\mathbb{P}}\left[ D_{T}\left\vert \mathcal{F}_{t}\right. \right] $, for $t\in \left[ 0,T\right] $, we have$$\left[ D^{\left( n\right) }-D\right] _{T}\overset{\mathbb{P}}{\rightarrow }0.$$
Penalty functions for densities\[Sect Penalty Function for densities\]
======================================================================
Now, we shall introduce a family of penalty functions for the density processes described in Section \[Sect. Density\_Processes\], for the absolutely continuous measures $\mathbb{Q}\in \mathcal{Q}_{\ll }\left(
\mathbb{P}\right) $.
Let $h:\mathbb{R}_{+}\mathbb{\rightarrow R}_{+}$ and $h_{0},$$h_{1}:\ \mathbb{R\rightarrow R}_{+}$ be convex functions with $0=h\left(
0\right) =h_{0}\left( 0\right) =h_{1}\left( 0\right) $. Define the penalty function, with $\tau_0$ as in (\[Tau0=JumpZ=-1\]), by $$\begin{array}{rl}
\vartheta \left( \mathbb{Q}\right) := & \mathbb{E}_{\mathbb{Q}}\left[
\int\limits_{0}^{T\wedge \tau _{0}}h\left( h_{0}\left( \theta _{0}\left(
t\right) \right) +\int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right)
h_{1}\left( \theta _{1}\left( t,x\right) \right) \nu \left( dx\right)
\right) dt\right] \mathbf{1}_{\mathcal{Q}_{\ll }}\left( \mathbb{Q}\right) \\
& +\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{\ll
}}\left( \mathbb{Q}\right) ,\end{array}
\label{Def._penalty_theta}$$ where $\theta _{0},$ $\theta _{1}$ are the processes associated to $\mathbb{Q}$ from Lemma \[Q<<P =>\] and $\delta \left( t,x\right) :\mathbb{R}_{+}\times \mathbb{R}_{0}\rightarrow \mathbb{R}_{+}$ is an arbitrary fixed nonnegative function $\delta \left( t,x\right) \in \mathcal{G}\left( \mu
\right) $. Since $\theta _{0}\equiv 0$ on $[\hspace{-0.05cm}[\tau
_{0},\infty \lbrack \hspace{-0.04cm}[$ and $\theta _{1}\equiv 0$ on $[\hspace{-0.05cm}[\tau _{0},\infty \lbrack \hspace{-0.04cm}[\times \mathbb{R}_{0}$, we have, from the conditions imposed on $h,h_{0},$ and $h_{1}$,$$\begin{array}{rl}
\vartheta \left( \mathbb{Q}\right) = & \mathbb{E}_{\mathbb{Q}}\left[
\int\limits_{0}^{T}h\left( h_{0}\left( \theta _{0}\left( t\right) \right)
+\int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta
_{1}\left( t,x\right) \right) \nu \left( dx\right) \right) dt\right] \mathbf{1}_{\mathcal{Q}_{\ll }}\left( \mathbb{Q}\right) \\
& +\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{\ll
}}\left( \mathbb{Q}\right) .\end{array}
\label{Def._penalty_theta_(2)}$$Further, define the convex measure of risk $$\rho \left( X\right) :=\sup_{\mathbb{Q\in }\mathcal{Q}_{\ll }(\mathbb{P})}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\vartheta \left( \mathbb{Q}\right) \right\} . \label{rho def.}$$Notice that $\rho $ is a normalized and sensitive measure of risk. For each class of probability measures introduced so far, the subclass of those measures with a finite penalization is considered. We will denote by $\mathcal{Q}^{\vartheta },$ $\mathcal{Q}_{\ll }^{\vartheta }(\mathbb{P})$ and $\mathcal{Q}_{\approx }^{\vartheta }(\mathbb{P})$ the respective subclasses, i.e. $$\mathcal{Q}^{\vartheta }:=\left\{ \mathbb{Q}\in \mathcal{Q}:\vartheta \left(
\mathbb{Q}\right) <\infty \right\} ,\ \mathcal{Q}_{\ll }^{\vartheta }(\mathbb{P}):=\mathcal{Q}^{\vartheta }\cap \mathcal{Q}_{\ll }(\mathbb{P})\text{ and }\mathcal{Q}_{\approx }^{\vartheta }(\mathbb{P}):=\mathcal{Q}^{\vartheta }\cap \mathcal{Q}_{\approx }(\mathbb{P}). \label{Def._Qdelta(P)}$$Notice that $\mathcal{Q}_{\approx }^{\vartheta
}(\mathbb{P})\neq \varnothing .$
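To make the definitions above concrete, the following sketch continues the toy example used earlier (Brownian motion plus a compound Poisson process with rate $\lambda $ and standard normal jumps, constant coefficients, $h\left( u\right) =u$, $h_{0}\left( x\right) =x^{2}/2$, $h_{1}\left( x\right) =x^{2}$, $\delta \equiv 1$); these specific choices are assumptions for illustration only. For constant coefficients the penalty is the deterministic quantity $T\left( \theta _{0}^{2}/2+\lambda \theta _{1}^{2}\right) $, and maximizing the Monte Carlo estimate of $\mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\vartheta \left( \mathbb{Q}\right) $ over a grid of constant coefficients gives a lower bound for $\rho \left( X\right) $.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (assumption): L_t = W_t + compound Poisson(rate lam, N(0,1) jumps).
T, lam, n_paths = 1.0, 2.0, 100_000
W_T = rng.normal(0.0, np.sqrt(T), n_paths)
N_T = rng.poisson(lam * T, n_paths)
J_T = rng.normal(0.0, np.sqrt(N_T))     # sum of N_T i.i.d. standard normal jumps
L_T = W_T + J_T                         # the position whose risk we measure: X = L_T

def objective(th0, th1):
    """Monte Carlo estimate of E_Q[-X] - vartheta(Q) for constant (th0, th1)."""
    D_T = (np.exp(th0 * W_T - 0.5 * th0**2 * T)
           * np.exp(-th1 * lam * T) * (1.0 + th1) ** N_T)
    penalty = T * (0.5 * th0**2 + lam * th1**2)   # deterministic for constant coefficients
    return np.mean(D_T * (-L_T)) - penalty        # E_Q[.] = E_P[D_T .]

grid = [(a, b) for a in np.linspace(-2.0, 2.0, 21) for b in np.linspace(-0.8, 2.2, 16)]
best = max(grid, key=lambda p: objective(*p))
print("best (theta0, theta1) ~", best, " objective ~", round(objective(*best), 3))
# Restricted to constant coefficients the optimum is T/2 = 0.5 at (-1, 0); this is
# only a lower bound for rho(L_T), which is a supremum over all Q << P.
```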
The next theorem establishes the minimality on $\mathcal{Q}_{\ll }\left(
\mathbb{P}\right) $ of the penalty function introduced above for the risk measure $\rho $. Its proof is based on the sufficient conditions given in Theorem \[static minimal penalty funct. in Q(<<) <=>\].
\[theta=minimal penalty function\] The penalty function $\vartheta $ defined in $\left( \ref{Def._penalty_theta}\right) $ is equal to the minimal penalty function of the convex risk measure $\rho $, given by $\left( \ref{rho def.}\right) $, on $\mathcal{Q}_{\ll }\left( \mathbb{P}\right) $, i.e.$$\vartheta \mathbf{1}_{\mathcal{Q}_{\ll }\left( \mathbb{P}\right) }=\psi
_{\rho }^{\ast }\mathbf{1}_{\mathcal{Q}_{\ll }\left( \mathbb{P}\right) }.$$
*Proof:* From Lemma \[static minimal penalty funct. in Q(<<) <=>\] $\left( b\right) $, we need to show that the penalization $\vartheta $ is proper, convex and that the corresponding identification, defined as $\Theta \left( Z\right) :=\vartheta \left( \mathbb{Q}\right) $ if $Z\mathbb{\in }\delta \left( \mathcal{Q}_{\ll }\left( \mathbb{P}\right)
\right) :=\left\{ Z\in L^{1}\left( \mathbb{P}\right) :Z=d\mathbb{Q}/d\mathbb{P}\text{ with }\mathbb{Q}\in \mathcal{Q}_{\ll }\left( \mathbb{P}\right)
\right\} $ and $\Theta \left( Z\right) :=\infty $ on $L^{1}\setminus \delta
\left( \mathcal{Q}_{\ll }\left( \mathbb{P}\right) \right) $, is lower semicontinuous with respect to the strong topology.
First, observe that the function $\vartheta $ is proper, since $\vartheta
\left( \mathbb{P}\right) =0$. To verify the convexity of $\vartheta $, choose $\mathbb{Q}$, $\widetilde{\mathbb{Q}}\in \mathcal{Q}_{\ll
}^{\vartheta }$ and define $\mathbb{Q}^{\lambda }:=\lambda \mathbb{Q}+\left(
1-\lambda \right) \widetilde{\mathbb{Q}}$, for $\lambda \in \left[ 0,1\right]
$. Notice that the corresponding density process can be written as $D^{\lambda }:=\dfrac{d\mathbb{Q}^{\lambda }}{d\mathbb{P}}=\lambda D+\left(
1-\lambda \right) \widetilde{D}$ $\mathbb{P}$-a.s.
Now, from Lemma \[Q<<P =>\], let $\left( \theta _{0},\theta _{1}\right) $ and $(\widetilde{\theta }_{0},\widetilde{\theta }_{1})$ be the processes associated to $\mathbb{Q}$ and $\widetilde{\mathbb{Q}}$, respectively, and observe that from$$D_{t}=1+\int\limits_{\left[ 0,t\right] }D_{s-}\theta _{0}\left( s\right)
dW_{s}+\int\limits_{\left[ 0,t\right] \times \mathbb{R}_{0}}D_{s-}\theta
_{1}\left( s,x\right) d\left( \mu \left( ds,dx\right) -ds\nu \left(
dx\right) \right) $$and the corresponding expression for $\widetilde{D}$ we have for $\tau
_{n}^{\lambda }:=\inf \left\{ t\geq 0:D_{t}^{\lambda }\leq \frac{1}{n}\right\} $ $$\int\limits_{0}^{t\wedge \tau _{n}^{\lambda }}\left( D_{s-}^{\lambda
}\right) ^{-1}dD_{s}^{\lambda }=\int\limits_{0}^{t\wedge \tau _{n}^{\lambda
}}\tfrac{\lambda D_{s-}\theta _{0}\left( s\right) +\left( 1-\lambda \right)
\widetilde{D}_{s-}\widetilde{\theta }_{0}\left( s\right) }{\left( \lambda
D_{s-}+\left( 1-\lambda \right) \widetilde{D}_{s-}\right) }dW_{s}+\int\limits_{\left[ 0,t\wedge \tau _{n}^{\lambda }\right] \times
\mathbb{R}_{0}}\tfrac{\lambda D_{s-}\theta _{1}\left( s,x\right) +\left(
1-\lambda \right) \widetilde{D}_{s-}\widetilde{\theta }_{1}\left( s,x\right)
}{\left( \lambda D_{s-}+\left( 1-\lambda \right) \widetilde{D}_{s-}\right) }d\left( \mu -\mu _{\mathbb{P}}^{\mathcal{P}}\right) .$$The weak predictable representation property of the local martingale $\int\nolimits_{0}^{t\wedge \tau _{n}^{\lambda }}\left( D_{s-}^{\lambda
}\right) ^{-1}dD_{s}^{\lambda }$ yields, on the other hand, $$\int\limits_{0}^{t\wedge \tau _{n}^{\lambda }}\left( D_{s-}^{\lambda
}\right) ^{-1}dD_{s}^{\lambda }=\int\limits_{0}^{t\wedge \tau _{n}^{\lambda
}}\theta _{0}^{\lambda }\left( s\right) dW_{s}+\int\limits_{\left[ 0,t\wedge
\tau _{n}^{\lambda }\right] \times \mathbb{R}_{0}}\theta _{1}^{\lambda
}\left( s,x\right) d\left( \mu -\mu _{\mathbb{P}}^{\mathcal{P}}\right) ,$$where the identification $$\theta _{0}^{\lambda }\left( s\right) =\frac{\lambda D_{s-}\theta _{0}\left(
s\right) +\left( 1-\lambda \right) \widetilde{D}_{s-}\widetilde{\theta }_{0}\left( s\right) }{\left( \lambda D_{s-}+\left( 1-\lambda \right)
\widetilde{D}_{s-}\right) },$$and $$\theta _{1}^{\lambda }\left( s,x\right) =\frac{\lambda D_{s-}\theta
_{1}\left( s,x\right) +\left( 1-\lambda \right) \widetilde{D}_{s-}\widetilde{\theta }_{1}\left( s,x\right) }{\left( \lambda D_{s-}+\left( 1-\lambda
\right) \widetilde{D}_{s-}\right) }$$is possible thanks to the uniqueness of the representation in Lemma \[Q<<P =>\]. The convexity now follows from the convexity of $h$, $h_{0}$ and $h_{1}$, using the fact that any convex function is continuous in the interior of its domain. More specifically, $$\begin{array}{rl}
\vartheta \left( \mathbb{Q}^{\lambda }\right) \leq & \mathbb{E}_{\mathbb{Q}^{\lambda }}\left[ \int\limits_{\left[ 0,T\right] }\tfrac{\lambda D_{s}}{\left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) }h\left( h_{0}\left( \theta _{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( s,x\right) h_{1}\left( \theta _{1}\left( s,x\right)
\right) \nu \left( dx\right) \right) ds\right] \\
& +\mathbb{E}_{\mathbb{Q}^{\lambda }}\left[ \int\limits_{\left[ 0,T\right] }\tfrac{\left( 1-\lambda \right) \widetilde{D}_{s}}{\left( \lambda
D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) }h\left( h_{0}\left(
\widetilde{\theta }_{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( s,x\right) h_{1}(\widetilde{\theta }_{1}\left( s,x\right)
)\nu \left( dx\right) \right) ds\right] \\
= & \int\limits_{\left[ 0,T\right] }\int\limits_{\Omega }\dfrac{\lambda D_{s}}{\left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) }h\left( h_{0}\left( \theta _{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( s,x\right) h_{1}\left( \theta _{1}\left( s,x\right)
\right) \nu \left( dx\right) \right) \\
& \ \ \ \ \ \ \ \ \times \left( \lambda D_{s}+\left( 1-\lambda \right)
\widetilde{D}_{s}\right) \mathbf{1}_{\left\{ \lambda D_{s}+\left( 1-\lambda
\right) \widetilde{D}_{s}>0\right\} }d\mathbb{P}ds \\
& +\int\limits_{\left[ 0,T\right] }\int\limits_{\Omega }\dfrac{\left(
1-\lambda \right) \widetilde{D}_{s}}{\left( \lambda D_{s}+\left( 1-\lambda
\right) \widetilde{D}_{s}\right) }h\left( h_{0}\left( \widetilde{\theta }_{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left(
s,x\right) h_{1}(\widetilde{\theta }_{1}\left( s,x\right) )\nu \left(
dx\right) \right) \\
& \ \ \ \ \ \ \ \ \times \left( \lambda D_{s}+\left( 1-\lambda \right)
\widetilde{D}_{s}\right) \mathbf{1}_{\left\{ \lambda D_{s}+\left( 1-\lambda
\right) \widetilde{D}_{s}>0\right\} }d\mathbb{P}ds \\
= & \lambda \vartheta \left( \mathbb{Q}\right) +\left( 1-\lambda \right)
\vartheta \left( \widetilde{\mathbb{Q}}\right) ,\end{array}$$where we used that $\left\{ \int\nolimits_{\mathbb{R}_{0}}\delta \left(
t,x\right) h_{1}\left( \theta _{1}\left( t,x\right) \right) \nu \left(
dx\right) \right\} _{t\in \mathbb{R}_{+}}$ and $\left\{ \int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}(\widetilde{\theta }_{1}\left(
t,x\right) )\nu \left( dx\right) \right\} _{t\in \mathbb{R}_{+}}$ are predictable processes.
It remains to prove the lower semicontinuity of $\Theta $. As pointed out earlier, it is enough to consider a sequence of densities $Z^{\left(
n\right) }:=\frac{d\mathbb{Q}^{\left( n\right) }}{d\mathbb{P}}\in \delta
\left( \mathcal{Q}_{\ll }\left( \mathbb{P}\right) \right) $ converging in $L^{1}\left( \mathbb{P}\right) $ to $Z:=\frac{d\mathbb{Q}}{d\mathbb{P}}$. Denote the corresponding density processes by $D^{\left( n\right) }$ and $D$, respectively. In Proposition \[E\[|Dn-D|\]->0 => \[Dn-D\](T)->0\_in\_P\] we verified the convergence in probability to zero of the quadratic variation process $$\begin{aligned}
\left[ D^{\left( n\right) }-D\right] _{T} &=&\int\limits_{0}^{T}\left\{
D_{s-}^{\left( n\right) }\theta _{0}^{\left( n\right) }\left( s\right)
-D_{s-}\theta _{0}\left( s\right) \right\} ^{2}ds \\
&&+\int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\left\{
D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right)
-D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}\mu \left( ds,dx\right) .\end{aligned}$$This implies that $$\left.
\begin{array}{cc}
& \int\nolimits_{0}^{T}\left\{ D_{s-}^{\left( n\right) }\theta _{0}^{\left(
n\right) }\left( s\right) -D_{s-}\theta _{0}\left( s\right) \right\} ^{2}ds\overset{\mathbb{P}}{\rightarrow }0, \\
\text{and } & \\
& \int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\left\{
D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right)
-D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}\mu \left( ds,dx\right)
\overset{\mathbb{P}}{\rightarrow }0.\end{array}\right\} \label{[]=>*}$$Then, for an arbitrary but fixed subsequence, there exists a sub-subsequence such that $\mathbb{P}$-a.s. $$\left\{ D_{s-}^{\left( n\right) }\theta _{0}^{\left( n\right) }\left(
s\right) -D_{s-}\theta _{0}\left( s\right) \right\} ^{2}\overset{L^{1}\left(
\lambda \right) }{\longrightarrow }0$$and $$\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left(
s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}\overset{L^{1}\left( \mu \right) }{\longrightarrow }0,$$where for simplicity we have denoted the sub-subsequence as the original sequence. Now, we claim that for the former sub-subsequence it also holds that $$\left\{
\begin{array}{c}
D_{s-}^{\left( n\right) }\theta _{0}^{\left( n\right) }\left( s\right)
\overset{\lambda \times \mathbb{P}\text{-a.s.}}{\longrightarrow }D_{s-}\theta _{0}\left( s\right) , \\
\smallskip \ \\
D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right)
\overset{\mu \times \mathbb{P}\text{-a.s.}}{\longrightarrow }D_{s-}\theta
_{1}\left( s,x\right) .\end{array}\right. \label{[]=>*.1}$$
We first prove the second assertion in $\left( \ref{[]=>*.1}\right) $. Assume the opposite; then there exists $C\in
\mathcal{B}\left( \left[ 0,T\right] \right) \otimes \mathcal{B}\left(
\mathbb{R}_{0}\right) \otimes \mathcal{F}_{T}$, with $\mu \times \mathbb{P}\left[ C\right] >0$, and such that for each $\left( s,x,\omega \right) \in C$ $$\lim_{n\rightarrow \infty }\left\{ D_{s-}^{\left( n\right) }\theta
_{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right)
\right\} ^{2}=c\neq 0,$$or the limit does not exist.
Let $C\left( \omega \right) :=\left\{ \left( t,x\right) \in \left[ 0,T\right]
\times \mathbb{R}_{0}:\left( t,x,\omega \right) \in C\right\} $ be the $\omega $-section of $C$. Observe that $B:=\left\{ \omega \in \Omega :\mu \left[ C\left( \omega \right) \right] >0\right\} $ has positive probability: $\mathbb{P}\left[ B\right] >0.$
From $\left( \ref{[]=>*}\right) $, any arbitrary but fixed subsequence has a sub-subsequence converging $\mathbb{P}$-a.s. Denoting such a sub-subsequence simply by $n$, we can fix $\omega \in B$ with$$\begin{aligned}
&&\int\nolimits_{C\left( \omega \right) }\left\{ D_{s-}^{\left( n\right)
}\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left(
s,x\right) \right\} ^{2}d\mu \left( s,x\right) \\
&\leq &\int\nolimits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\left\{
D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right)
-D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}d\mu \left( s,x\right)
\underset{n\rightarrow \infty }{\longrightarrow }0,\end{aligned}$$and hence $\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right)
}\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}$ converges in $\mu $-measure to $0$ on $C\left( \omega \right) .$ Again, for any subsequence there is a sub-subsequence converging $\mu $-a.s. to $0$. Furthermore, for an arbitrary but fixed $\left( s,x\right) \in C\left(
\omega \right) $, when the limit does not exist $$\begin{array}{clc}
a & :=\underset{n\rightarrow \infty }{\lim \inf }\left\{ D_{s-}^{\left(
n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta
_{1}\left( s,x\right) \right\} ^{2} & \\
& \neq \underset{n\rightarrow \infty }{\lim \sup }\left\{ D_{s-}^{\left(
n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta
_{1}\left( s,x\right) \right\} ^{2} & =:b,\end{array}$$and we can choose converging subsequences $n\left( i\right) $ and $n\left(
j\right) $ with $$\begin{aligned}
\underset{i\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( i\right)
}\theta _{1}^{n\left( i\right) }\left( s,x\right) -D_{s-}\theta _{1}\left(
s,x\right) \right\} ^{2} &=&a \\
\underset{j\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( j\right)
}\theta _{1}^{n\left( j\right) }\left( s,x\right) -D_{s-}\theta _{1}\left(
s,x\right) \right\} ^{2} &=&b.\end{aligned}$$From the above argument, there are sub-subsequences $n\left( i\left(
k\right) \right) $ and $n\left( j\left( k\right) \right) $ such that $$\begin{aligned}
a &=&\underset{k\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( i\left(
k\right) \right) }\theta _{1}^{n\left( i\left( k\right) \right) }\left(
s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}=0 \\
b &=&\underset{k\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( j\left(
k\right) \right) }\theta _{1}^{n\left( j\left( k\right) \right) }\left(
s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}=0,\end{aligned}$$which is clearly a contradiction.
For the case when $$\underset{n\rightarrow \infty }{\lim }\left\{ D_{s-}^{\left( n\right)
}\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left(
s,x\right) \right\} ^{2}=c\neq 0,$$the same argument applies and yields a subsequence converging to $0$, which is again a contradiction. Therefore, the second part of our claim in $\left( \ref{[]=>*.1}\right) $ holds.
Since $D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) ,\ D_{s-}\theta _{1}\left( s,x\right) \in \mathcal{G}\left( \mu \right) $, we have, in particular, that $D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) \in \widetilde{\mathcal{P}}$ and $D_{s-}\theta _{1}\left( s,x\right) \in \widetilde{\mathcal{P}}$; hence the set $C$ of points at which the convergence just established fails belongs to $\widetilde{\mathcal{P}}$ and satisfies $\mu \times \mathbb{P}\left[ C\right] =0$. From the definition of the predictable projection it follows that $$\begin{aligned}
0 &=&\mu \times \mathbb{P}\left[ C\right] =\int\limits_{\Omega }\int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\mathbf{1}_{C}\left( s,x,\omega \right) d\mu \,d\mathbb{P}=\int\limits_{\Omega }\int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\mathbf{1}_{C}\left( s,x,\omega \right) d\mu _{\mathbb{P}}^{\mathcal{P}}\,d\mathbb{P} \\
&=&\int\limits_{\Omega }\int\limits_{\mathbb{R}_{0}}\int\limits_{\left[ 0,T\right] }\mathbf{1}_{C}\left( s,x,\omega \right) ds\,d\nu \,d\mathbb{P}=\lambda
\times \nu \times \mathbb{P}\left[ C\right] ,\end{aligned}$$and thus $$D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right)
\overset{\lambda \times \nu \times \mathbb{P}\text{-a.s.}}{\longrightarrow }D_{s-}\theta _{1}\left( s,x\right) .$$
Since $\int\limits_{\Omega \times \left[ 0,T\right] }\left\vert
D_{t-}^{\left( n\right) }-D_{t-}\right\vert d\mathbb{P}\times dt=\int\limits_{\Omega \times \left[ 0,T\right] }\left\vert D_{t}^{\left( n\right) }-D_{t}\right\vert d\mathbb{P}\times dt\leq T\,\mathbb{E}_{\mathbb{P}}\left[ \left\vert D_{T}^{\left( n\right) }-D_{T}\right\vert \right] \longrightarrow 0$ (the equality holds because, for each fixed $\omega $, $D_{t-}\neq D_{t}$ for at most countably many $t$, while the inequality follows from Jensen's inequality applied to $D_{t}^{\left( n\right) }-D_{t}=\mathbb{E}_{\mathbb{P}}\left[ D_{T}^{\left( n\right) }-D_{T}\left\vert \mathcal{F}_{t}\right. \right] $), we have that $\left\{ D_{t-}^{\left( n\right) }\right\} _{t\in \left[ 0,T\right] }$ $\overset{L^{1}\left( \lambda \times \mathbb{P}\right) }{\longrightarrow }\left\{ D_{t-}\right\} _{t\in \left[ 0,T\right] }$ and $\left\{ D_{t}^{\left( n\right) }\right\} _{t\in \left[ 0,T\right] }$ $\overset{L^{1}\left( \lambda \times \mathbb{P}\right) }{\longrightarrow }\left\{ D_{t}\right\} _{t\in \left[ 0,T\right] }.$ Then, for an arbitrary but fixed subsequence $\left\{ n_{k}\right\} _{k\in \mathbb{N}}\subset
\mathbb{N}$, there is a sub-subsequence $\left\{ n_{k_{i}}\right\} _{i\in
\mathbb{N}}\subset \mathbb{N}$ such that $$\begin{array}{ccc}
D_{t-}^{\left( n_{k_{i}}\right) }\theta _{1}^{\left( n_{k_{i}}\right)
}\left( t,x\right) & \overset{\lambda \times \nu \times \mathbb{P}\text{-a.s.}}{\longrightarrow } & D_{t-}\theta _{1}\left( t,x\right) , \\
D_{t-}^{\left( n_{k_{i}}\right) } & \overset{\lambda \times \mathbb{P}\text{-a.s.}}{\longrightarrow } & D_{t-}, \\
D_{t}^{\left( n_{k_{i}}\right) } & \overset{\lambda \times \mathbb{P}\text{-a.s.}}{\longrightarrow } & D_{t}.\end{array}$$Furthermore, $\mathbb{Q}\ll \mathbb{P}$ implies that $\lambda \times \nu
\times \mathbb{Q}\ll \lambda \times \nu \times \mathbb{P}$, and then $$\begin{array}{ccc}
D_{t-}^{\left( n_{k_{i}}\right) }\theta _{1}^{\left( n_{k_{i}}\right)
}\left( t,x\right) & \overset{\lambda \times \nu \times \mathbb{Q}\text{-a.s.}}{\longrightarrow } & D_{t-}\theta _{1}\left( t,x\right) , \\
D_{t-}^{\left( n_{k_{i}}\right) } & \overset{\lambda \times \nu \times
\mathbb{Q}\text{-a.s.}}{\longrightarrow } & D_{t-},\end{array}$$and $$D_{t}^{\left( n_{k_{i}}\right) }\overset{\lambda \times \nu \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }D_{t}. \label{[]=>*.2}$$Finally, noting that $\inf_{t\leq T} D_{t}>0$ $\mathbb{Q}$-a.s., we obtain $$\theta _{1}^{\left( n_{k_{i}}\right) }\left( t,x\right) \overset{\lambda
\times \nu \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }\theta _{1}\left(
t,x\right) . \label{[]=>*.3}$$
The first assertion in $\left( \ref{[]=>*.1}\right) $ can be proved using essentially the same kind of ideas used above for the proof of the second part, concluding that for an arbitrary but fixed subsequence $\left\{
n_{k}\right\} _{k\in \mathbb{N}}\subset \mathbb{N}$, there is a sub-subsequence $\left\{ n_{k_{i}}\right\} _{i\in \mathbb{N}}\subset \mathbb{N}$ such that $$\left\{ D_{t}^{\left( n_{k_{i}}\right) }\right\} _{t\in \left[ 0,T\right] }\overset{\lambda \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }\left\{
D_{t}\right\} _{t\in \left[ 0,T\right] } \label{[]=>*.4}$$and $$\left\{ \theta _{0}^{\left( n_{k_{i}}\right) }\left( t\right) \right\}
_{t\in \left[ 0,T\right] }\overset{\lambda \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }\left\{ \theta _{0}\left( t\right) \right\} _{t\in \left[
0,T\right] }. \label{[]=>*.5}$$
We are now ready to finish the proof of the theorem, observing that $$\begin{aligned}
&&\underset{n\rightarrow \infty }{\lim \inf }\vartheta \left( \mathbb{Q}^{\left( n\right) }\right) \\
&=&\underset{n\rightarrow \infty }{\lim \inf }\int\limits_{\Omega \times \left[ 0,T\right] }\left\{ h\left( h_{0}\left( \theta _{0}^{\left( n\right)
}\left( t\right) \right) +\int\nolimits_{\mathbb{R}_{0}}\delta \left(
t,x\right) h_{1}\left( \theta _{1}^{\left( n\right) }\left( t,x\right)
\right) \nu \left( dx\right) \right) \right\} \dfrac{D_{t}^{\left( n\right) }}{D_{t}}d\left( \lambda \times \mathbb{Q}\right) .\end{aligned}$$Let $\left\{ n_{k}\right\} _{k\in \mathbb{N}}\subset \mathbb{N}$ be a subsequence for which the limit inferior is realized. Using $\left( \ref{[]=>*.2}\right) ,\left( \ref{[]=>*.3}\right) ,\left( \ref{[]=>*.4}\right) $ and $\left( \ref{[]=>*.5}\right) $ we can pass to a sub-subsequence $\left\{ n_{k_{i}}\right\} _{i\in \mathbb{N}}\subset \mathbb{N}$ and, from Fatou's lemma and the continuity of $h$, $h_{0}$ and $h_{1}$, it follows that $$\begin{aligned}
&&\underset{n\rightarrow \infty }{\lim \inf }\ \vartheta \left( \mathbb{Q}^{\left( n\right) }\right) \\
&\geq &\int\limits_{\Omega \times \left[ 0,T\right] }\underset{i\rightarrow
\infty }{\lim \inf }\left( \left\{ h\left( h_{0}\left( \theta _{0}^{\left(
n_{k_{i}}\right) }\left( t\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}^{\left(
n_{k_{i}}\right) }\left( t,x\right) \right) \nu \left( dx\right) \right)
\right\} \tfrac{D_{t}^{\left( n_{k_{i}}\right) }}{D_{t}}\right) d\left(
\lambda \times \mathbb{Q}\right) \\
&\geq &\int\limits_{\Omega \times \left[ 0,T\right] }h\left( h_{0}\left(
\theta _{0}\left( t\right) \right) +\int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}\left( t,x\right) \right) \nu \left( dx\right)
\right) d\left( \lambda \times \mathbb{Q}\right) \\
&=&\vartheta \left( \mathbb{Q}\right) .\end{aligned}$$$\Box $
[99]{} Artzner, P. ; Delbaen, F. ; Eber, J.M. and Heath, D. 1997 Thinking coherently, RISK Magazine 10, pp 68-71.
Artzner, P. ; Delbaen, F. ; Eber, J.M. and Heath, D. 1999 Coherent measures of risk, Math. Finance, 9, pp 203-228.
Barrieu, P. and El Karoui, N. 2009 Pricing, hedging and optimally designing derivatives via minimization of risk measures. In: Volume on Indifference Pricing (ed: Rene Carmona), Princeton University Press, 2009.
Bion-Nadal, J. 2008 Dynamic risk measures: Time consistency and risk measures from BMO martingales, Finance and Stochastics 12, pp 219-244.
Bion-Nadal, J. 2009 Time consistent dynamic risk processes, Stochastic Processes and their Applications, 119, pp 633-654.
Delbaen, F. 2002 Coherent risk measures on general probability spaces in Advances in Finance and Stochastics, Essays in Honor of Dieter Sondermann, pp 1-37, Eds. K. Sandmann, Ph. Schönbucher. Berlin, Heidelberg, New York: Springer.
Delbaen, F.; Peng, S. and Rosazza Gianin, E. 2010 Representation of the penalty term of dynamic concave utilities, Finance and Stochastics 14, pp 449-472.
Föllmer, H. and Schied, A. 2002 Convex measures of risk and trading constraints, Finance and Stochastics 6, pp 429-447.
Föllmer, H. and Schied, A. 2002 Robust Preferences and Convex Risk Measures in Advances in Finance and Stochastics, Essays in Honor of Dieter Sondermann, 39-56, Eds. K. Sandmann, Ph. Schönbucher. Berlin, Heidelberg, New York: Springer.
Föllmer, H. and Schied, A. 2004 Stochastic Finance. An Introduction in Discrete Time (2nd. Ed.), de Gruyter Studies in Mathematics 27.
Frittelli, M. and Rosazza Gianin, E. 2002 Putting order in risk measures, Journal of Banking & Finance 26, pp 1473 - 1486.
Frittelli, M. and Rosazza Gianin, E. 2004 Dynamic Convex Risk Measures, in Risk Measures for the 21st Century, pp 227 - 248, Ed. G. Szegö, Wiley.
Heath, D. 2000 Back to the future. Plenary lecture at the First World Congress of the Bachelier Society, Paris.
He, S.W. ; Wang, J.G. and Yan, J.A. 1992 Semimartingale theory and stochastic calculus, Beijing, Science Press.
Hernández-Hernández, D. and Pérez-Hernández, L. 2011 Robust utility maximization for Lévy processes: Penalization and Solvability, arXiv 1206.0715.
Jacod, J. and Shiryaev, A. 2003 Limit Theorems for Stochastic Processes (2nd Ed.), Springer.
Krätschmer, V. 2005 Robust representation of convex risk measures by probability measures, Finance and Stochastics 9, pp 597 - 608.
Schied, A. 2007 Optimal investments for risk- and ambiguity-averse preferences: a duality approach, Finance and Stochastics 11, pp 107 - 129.
[^1]: Centro de Investigación en Matemáticas, Apartado postal 402, Guanajuato, Gto. 36000, México. E-mail: dher@cimat.mx
[^2]: Departamento de Economía y Finanzas, Universidad de Guanajuato, DCEA Campus Guanajuato, C.P. 36250, Guanajuato, Gto. E-mail: lperezhernandez@yahoo.com
---
abstract: 'We study two kinetically constrained models in a quenched random environment. The first model is a mixed threshold Fredrickson-Andersen model on $\zz^{2}$, where the update threshold is either $1$ or $2$. The second is a mixture of the Fredrickson-Andersen $1$-spin facilitated constraint and the North-East constraint in $\zz^{2}$. We compare three time scales related to these models: the bootstrap percolation time for emptying the origin, the relaxation time of the kinetically constrained model, and the time for emptying the origin of the kinetically constrained model; and we study the effect of the random environment on each of them.'
author:
- Assaf Shapira
bibliography:
- 'random\_constraint.bib'
title: Kinetically Constrained Models with Random Constraints
---
Introduction
============
Kinetically constrained models (KCMs) are a family of interacting particle systems introduced by physicists in order to study glassy and granular materials [@garrahansollichtoninelli2011kcm; @Ritort]. These are reversible Markov processes on the state space $\left\{ 0,1\right\} ^{V}$, where $V$ is the set of vertices of some graph. The equilibrium measure of these processes is a product measure of i.i.d. Bernoulli random variables, and their nontrivial behavior is due to kinetic constraints: the state of each site is resampled at rate $1$, but only when a certain local constraint is satisfied. This condition expresses the fact that sites are blocked when there are not enough empty sites in their vicinity. One example of such a constraint is that of the Fredrickson-Andersen $j$-spin facilitated model on $\zz^{d}$, in which an update is only possible if at least $j$ nearest neighbors are empty [@FA1984kcm]. We will refer to this constraint as the FA$j$f constraint. Another example is the North-East constraint: the underlying graph is $\zz^{2}$, and an update is possible only if both the site above and the site to the right are empty [@Lalley2006northeast]. These constraints result in the lengthening of the time scales describing the dynamics as the density of empty sites $q$ tends to $0$. This happens since sites belonging to large occupied regions can only be changed when empty sites penetrate from the outside. The main difficulty in the analysis of KCMs is that they are not attractive, which prevents us from using tools such as monotone coupling and censoring often used in the study of Glauber dynamics. See [@martinelli2018universality2d] for more details.
A closely related family of models are the bootstrap percolation models, which are, unlike KCMs, monotone deterministic processes in discrete time. The state space of the bootstrap percolation is the same as that of the KCM, and they share the same family of constraints; but in the bootstrap percolation occupied sites become empty (deterministically) whenever the constraint is satisfied, and empty sites can never be filled. The initial conditions of the bootstrap percolation are i.i.d. Bernoulli random variables with parameter $1-q$, i.e., they are chosen according to the equilibrium measure of the KCM. In this paper we will refer to the bootstrap percolation that corresponds to a certain constraint by its KCM name, so, for example, the $j$-neighbor bootstrap percolation will be referred to as the bootstrap percolation with the FA$j$f constraint.
In the examples given previously of the FA$j$f model and the North-East model the constraints are translation invariant. Universality results for general homogeneous models have been studied recently for the bootstrap percolation in a series of works that provide a good understanding of their behavior [@balisterbollobas2016subcriticalbp; @bollobas2016universality2d; @bollobassmithuzzell2015universalityzd; @hartarsky2018mathcal]. Inspired by the tools developed for the bootstrap percolation, universality results on the KCMs could also be obtained for systems with general translation invariant constraints [@martinellitonitelli2016towardsuniversality; @martinelli2018universality2d]. Another class of models widely studied in the bootstrap percolation setting consists of models in a random environment, such as the bootstrap percolation on the polluted lattice [@gravner1997polluted; @gravner2017polluted], Galton-Watson trees [@bollobas2014bootstrapongwcriticality], random regular graphs [@balogh2007randomregular; @janson2009percolationexplosion] and the Erdős-Rényi graph [@janson2012Gnp]. KCMs in random environments have also been studied in the physics literature, see [@schulz1994modifiedfa; @willart1999dynlengthscale].
In this paper we will consider two models on the two dimensional lattice with random constraints. We will focus on the divergence of time scales when the equilibrium density of empty sites $q$ is small.
The time scale that is commonly considered in KCMs is the relaxation time, i.e., the inverse of the spectral gap. This time scale determines the slowest possible relaxation of correlation between observables, but for homogeneous systems it often coincides with typical time scales of the system (see, e.g., [@mareche2018duarte]). However, when the system is not homogeneous it will in general not describe actual time scales that we observe. We will see in this paper that very unlikely configurations of the disorder that appear far away determine the relaxation time, even though the observed local behavior is not likely to be affected by these remote regions.
Another time scale that is natural to look at is the first time at which the origin (or any arbitrary vertex) is empty. In the bootstrap percolation literature, this is indeed the time most commonly studied. This time can be observed physically, and we will see that it is not significantly affected by the “bad” regions far away from the origin.
In this paper we compare the three time scales: the time it takes for the origin to be emptied with the bootstrap percolation, the relaxation time for the KCM, and the first time the origin is empty in the KCM. We will first analyze these time scales in a mixed threshold FA model on the two dimensional lattice. The second model we consider is a mixed KCM, in which vertices have either the FA1f constraint or the North-East constraint. This will be an example of a model in which the relaxation time is infinite, but the time at which the origin becomes empty is almost surely finite.
Mixed threshold Fredrickson-Andersen model
==========================================
Model and notation
------------------
In this section we will treat two models: the mixed threshold bootstrap percolation on $\zz^{2}$ and the mixed threshold Fredrickson-Andersen model on $\zz^{2}$. Both models will live in the same random environment, which will determine the threshold at each vertex of $\zz^{2}$. It will be denoted $\omega$, and the threshold at a vertex $x$ will be $\omega_{x}\in\left\{ 1,2\right\} $. This environment is chosen according to a measure $\nu$, which depends on a parameter $\pi\in\left(0,1\right)$. $\nu$ will be a product measure: for each vertex $x\in\zz^{2}$, $\omega_{x}$ equals $1$ with probability $\pi$ and $2$ with probability $1-\pi$, independently of the other vertices. Sites with threshold $1$ will be called *easy*, and sites with threshold $2$ *difficult*.
Both the bootstrap percolation and the FA dynamics are defined on the state space $\Omega=\left\{ 0,1\right\} ^{\zz^{2}}$. For a configuration $\eta\in\Omega$, we say that a site $x$ is *empty* if $\eta_{x}=0$ and *occupied* if $\eta_{x}=1$. For $\eta\in\Omega$ and $x\in\zz^{2}$ define the constraint $$\begin{aligned}
c_{x}\left(\eta\right) & =\begin{cases}
1 & \sum_{y\sim x}\left(1-\eta_{y}\right)\ge\omega_{x}\\
0 & \text{otherwise}
\end{cases}.\label{eq:faconstraint}\end{aligned}$$
We can now define the bootstrap percolation with these constraints: it is a deterministic dynamics in discrete time, where at each time step $t$ empty vertices stay empty, and an occupied vertex $x$ becomes empty if the constraint is satisfied, namely $c_{x}\left(\eta\left(t-1\right)\right)=1$. The initial conditions for the bootstrap percolation are random, depending on a parameter $q\in\left(0,1\right)$. They are chosen according to the measure $\mu$, defined as a product of independent Bernoulli measures: $$\begin{aligned}
\mu & =\bigotimes_{x\in\zz^{2}}\mu_{x},\\
\mu_{x} & \sim\text{Ber}\left(1-q\right).\end{aligned}$$
The Fredrickson-Andersen model is a continuous time Markov process on $\Omega$. It is reversible with respect to the equilibrium measure $\mu$ defined above, and its generator $\mathcal{L}$ is defined by $$\mathcal{L}f=\sum_{x}c_{x}\left(\mu_{x}f-f\right)\label{eq:generatorofkcm}$$ for any local function $f$. We will denote by $\mathcal{D}$ the Dirichlet form associated with $\mathcal{L}$. Probabilities and expected values with respect to this process starting at $\eta$ will be denoted by $\pp_{\eta}$ and $\ee_{\eta}$. When starting from equilibrium we will use $\pp_{\mu}$ and $\ee_{\mu}$.
Finally, for any event $A\subseteq\Omega$, we define the hitting time $$\tau_{A}=\inf\left\{ t\,:\,\eta(t)\in A\right\} .$$ The hitting time is defined for both the KCM and the bootstrap percolation. For the time it takes to empty the origin we will use the notation $$\tau_{0}=\tau_{\left\{ \eta_{0}=0\right\} }.$$
Results
-------
The first result concerns the bootstrap percolation. It will say that for small values of $q$, $\tau_{0}$ scales as $\frac{1}{\sqrt{q}}$. To avoid confusion we stress that $\mu$ and $\pp_{\mu}$ depend on $q$, even though this dependence is not expressed explicitly in the notation.
Consider the bootstrap percolation with the mixed FA constraint. Then $\nu$-almost surely $$\begin{aligned}
\lim_{q\rightarrow0}\,\mu\left[\tau_{0}\ge\frac{a}{\sqrt{q}}\right]\xrightarrow{a\rightarrow\infty}0,\label{eq:fa12d2bpupperbound}\\
\lim_{q\rightarrow0}\,\mu\left[\tau_{0}\le\frac{a}{\sqrt{q}}\right]\xrightarrow{a\rightarrow0}0.\label{eq:fa12d2bplowerbound}\end{aligned}$$
For the KCM we have an exponential divergence of the relaxation time, but a power law behavior of $\tau_{0}$.
\[thm:scalingoftimeforfa12\]Consider the KCM with the mixed FA constraint.
1. There exists a constant $c>0$ (that does not depend on $\pi,q$) such that $\nu$-almost surely the relaxation time of the dynamics is at least $e^{\nicefrac{c}{q}}$.
2. $\nu$-almost surely there exist $\underline{\alpha}$ and $\overline{\alpha}$ (which may depend on $\omega$) such that $$\begin{aligned}
\pp_{\mu}\left[\tau_{0}\ge q^{-\overline{\alpha}}\right] & \xrightarrow{q\rightarrow0}0,\label{eq:fa12d2kcmupperbound}\\
\pp_{\mu}\left[\tau_{0}\le q^{-\underline{\alpha}}\right] & \xrightarrow{q\rightarrow0}0.\label{eq:fa12d2kcmlowerbound}\end{aligned}$$
Moreover, $\ee_{\mu}\left[\tau_{0}\right]\ge q^{-\underline{\alpha}}$ for $q$ small enough.
\[rem:exponentsaretruelyrandom\]We will see that the two exponents $\underline{\alpha}$ and $\overline{\alpha}$ cannot be deterministic there is $\alpha_{0}\in\rr$ such that $\nu\left(\overline{\alpha}<\alpha_{0}\right)>0$ but $\nu\left(\underline{\alpha}<\alpha_{0}\right)<1$.
In these two theorems we see that while $\tau_{0}$ for the bootstrap percolation behaves like $q^{-\nicefrac{1}{2}}$, its scaling for the FA is random. In the proof we will see in detail the reason for this difference, but we can already describe it heuristically. The bootstrap percolation is dominated by the sites far away from the origin, and once these sites are emptied the origin will be emptied as well. The influence of the environment far away becomes deterministic by a law of large numbers, so we do not see the randomness of $\omega$ in the exponent. On the contrary, in the FA dynamics even when sites far away are empty, one must empty many sites in a close neighborhood of the origin simultaneously before the origin can be emptied. Therefore, in order to empty the origin we must overcome a large energy barrier, which makes $\tau_{0}$ bigger. This effect depends on the structure close to the origin, so it feels the randomness of the environment.
For simplicity, we have chosen to focus on the two dimensional case. However, a more general result can also be obtained. In the next two theorems we will consider the bootstrap percolation and KCM on $\zz^{d}$. The thresholds $\left\{ \omega_{x}\right\} _{x\in\zz^{d}}$ are i.i.d., according to a law that we denote by $\nu$. We will also assume that the probability that the threshold is $1$ is nonzero, and that the probability that the threshold is more than $d$ is zero.
For the bootstrap percolation model described above, $\nu$-almost surely $$\begin{aligned}
\lim_{q\rightarrow0}\,\mu\left[\tau_{0}\ge aq^{-\nicefrac{1}{d}}\right]\xrightarrow{a\rightarrow\infty}0,\label{eq:fa1jbpupperbound}\\
\lim_{q\rightarrow0}\,\mu\left[\tau_{0}\le aq^{-\nicefrac{1}{d}}\right]\xrightarrow{a\rightarrow0}0.\label{eq:fa1jbplowerbound}\end{aligned}$$
For the KCM described above, $\nu$-almost surely there exist $\underline{\alpha}$ and $\overline{\alpha}$ (which may depend on $\omega$) such that $$\begin{aligned}
\pp_{\mu}\left[\tau_{0}\ge q^{-\overline{\alpha}}\right] & \xrightarrow{q\rightarrow0}0,\label{eq:fa1jkcmupperbound}\\
\pp_{\mu}\left[\tau_{0}\le q^{-\underline{\alpha}}\right] & \xrightarrow{q\rightarrow0}0.\label{eq:fa1jkcmlowerbound}\end{aligned}$$
Mixed north-east and FA1$f$ KCM
===============================
Model and notation
------------------
In this section we will consider again a kinetically constrained dynamics in an environment with mixed constraints. This time, however, the two constraints we will have are FA1f and north-east. That is, using the same $\omega$ and $\nu$ as before, for $x$ such that $\omega_{x}=1$
$$\begin{aligned}
c_{x}\left(\eta\right) & =\begin{cases}
1 & \sum_{y\sim x}\left(1-\eta_{y}\right)\ge1\\
0 & \text{otherwise}
\end{cases},\end{aligned}$$
and when $\omega_{x}=2$
$$\begin{aligned}
c_{x}\left(\eta\right) & =\begin{cases}
1 & \eta_{x+e_{1}}=0\text{ and }\eta_{x+e_{2}}=0\\
0 & \text{otherwise}
\end{cases}.\end{aligned}$$
For the same $\mu$, we can define $\mathcal{L}$ as in \[eq:generatorofkcm\]. Note that $c_{x}$ (and therefore $\mathcal{L}$) are not the same as those of the previous section, even though we use the same letters to describe them. Again, the hitting time of a set $A$ will be denoted by $\tau_{A}$, and $\tau_{0}=\tau_{\left\{ \eta_{0}=0\right\} }$.
We restrict ourselves to the case where $\pi$ is greater than the critical probability for the Bernoulli site percolation on $\zz^{2}$, denoted by $p^{\text{SP}}$. The critical probability for the oriented percolation on $\zz^{2}$ will be denoted by $p^{\text{OP}}$.
Our choice of regime, where easy sites percolate, guarantees that all sites are emptiable for the bootstrap percolation. The infinite cluster $\mathcal{C}$ of easy sites is emptiable since it must contain an empty site somewhere. The connected components of $\zz^{2}\setminus\mathcal{C}$ are finite, and have an emptiable boundary, so each of them will also be emptied eventually.
This choice, however, is not the only one for which all sites are emptiable. For any fixed environment $\omega$ there is a critical value $q_{c}$ such that above $q_{c}$ all sites are emptiable and below $q_{c}$ some sites remain occupied forever. For $\pi>p^{\text{SP}}$ we already know that $\nu$-almost surely $q_{c}=0$. In fact, the same argument gives a slightly better result by allowing sites to be difficult if they are also empty. This implies that $q_{c}\le1-\frac{1-p^{\text{SP}}}{1-\pi}$. On the other hand, if there is an infinite up-right path of difficult sites that are all occupied, this path could never be emptied. This will imply that $q_{c}\ge1-\frac{p^{\text{OP}}}{1-\pi}$.
Results
-------
We will see for this model that it is possible for the relaxation time to be infinite while the tail of the distribution of $\tau_{0}$ still decays exponentially, with a rate that scales polynomially in $q$.
Consider the kinetically constrained model described above, with $\pi>p^{\text{SP}}$ and $q\le q^{\text{OP}}$.
1. $\nu$-almost surely the spectral gap is $0$, i.e., the relaxation time is infinite.
2. There exist two positive constants $c,C$ depending on $\pi$ and a $\nu$-random variable $\tau$ such that
1. $\pp_{\mu}\left(\tau_{0}\ge t\right)\le e^{-\nicefrac{t}{\tau}}$ for all $t>0$,
2. $\nu\left(\tau\ge t\right)\le C\,t^{\frac{c}{\log q}}$ for $t$ large enough.
Some tools
==========
In this section we will present some tools that will help us analyze the kinetically constrained models that we have introduced. We will start by considering a general state space $\Omega$, and any Markov process on $\Omega$ that is reversible with respect to a certain measure $\mu$. We denote its generator by $\mathcal{L}$ and the associated Dirichlet form by $\mathcal{D}$. We will consider, for some event $A$, its hitting time $\tau_{A}$. With some abuse of notation, we use $\tau_{A}$ also for the $\mu$-random variable giving for every state $\eta\in\Omega$ the expected hitting time at $A$ starting from that state: $$\tau_{A}\left(\eta\right)=\ee_{\eta}\left(\tau_{A}\right).$$
$\tau_{A}\left(\eta\right)$ satisfies the following Poisson problem: $$\begin{aligned}
\mathcal{L}\tau_{A} & =-1\text{ on }A^{c},\label{eq:poissonproblem}\\
\tau_{A} & =0\text{ on }A.\nonumber \end{aligned}$$
By multiplying both sides of the equation by $\tau_{A}$ and integrating with respect to $\mu$, we obtain
\[cor:dirichletequalsexpectation\]$\mu\left(\tau_{A}\right)=\mathcal{D}\tau_{A}$.
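Spelled out, using the standard identity $\mathcal{D}f=-\mu\left(f\mathcal{L}f\right)$ for the reversible generator, the computation is $$\mathcal{D}\tau_{A}=-\mu\left(\tau_{A}\mathcal{L}\tau_{A}\right)=\mu\left(\tau_{A}\One_{A^{c}}\right)=\mu\left(\tau_{A}\right),$$ where the second equality uses \[eq:poissonproblem\] and the last one the fact that $\tau_{A}$ vanishes on $A$.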
Rewriting this corollary as $\mu\left(\tau_{A}\right)=\frac{\mu\left(\tau_{A}\right)^{2}}{\mathcal{D}\tau_{A}}$, it resembles a variational principle introduced in [@AsselhaDaiPra2001quasistationary] that will be useful in the following. In order to formulate it we will need to introduce some notation.
For an event $A\subseteq\Omega$, $V_{A}$ is the set of all functions in the domain of $\mathcal{L}$ that vanish on the event $A$. Note that, in particular, $\tau_{A}\in V_{A}$.
\[def:taubar\]For an event $A\subseteq\Omega$, $$\overline{\tau}_{A}=\sup_{0\neq f\in V_{A}}\,\frac{\mu\left(f^{2}\right)}{\mathcal{D}f}.$$
The following proposition is given in the first equation of the proof of Theorem 2 in [@AsselhaDaiPra2001quasistationary]:
\[prop:exponentialdecayofhittingtime\] $\pp_{\mu}\left[\tau_{A}>t\right]\le e^{-t/\overline{\tau}_{A}}$.
In particular, \[prop:exponentialdecayofhittingtime\] implies that $\mu\left(\tau_{A}\right)\le\bar{\tau}_{A}$. This, however, could be derived much more simply from $$\mu\left(\tau_{A}\right)^{2}\le\mu\left(\tau_{A}^{2}\right)\le\overline{\tau}_{A}\mathcal{D}\tau_{A}=\overline{\tau}_{A}\mu\left(\tau_{A}\right).$$ Note that whenever $\tau_{A}$ is not constant on $A^{c}$ this inequality is strict. Thus, on the one hand, \[prop:exponentialdecayofhittingtime\] gives an exponential decay of $\pp_{\mu}\left[\tau_{A}>t\right]$, which is stronger than the information on the expected value we can obtain from the Poisson problem in \[eq:poissonproblem\]. On the other hand, $\overline{\tau}_{A}$ could be larger than the actual expectation of $\tau_{A}$.
In order to bound the hitting time from below we will formulate a variational principle that will characterize $\tau_{A}$.
For $f\in V_{A}$, let $$\mathcal{T}f=2\mu\left(f\right)-\mathcal{D}f.$$
\[prop:variationalprincipleforhittingtime\]$\tau_{A}$ maximizes $\mathcal{T}$ in $V_{A}$. Moreover, $\mu\left(\tau_{A}\right)=\sup_{f\in V_{A}}\mathcal{T}f$.
Consider $f\in V_{A}$, and let $\delta=f-\tau_{A}$. Using the self-adjointness of $\mathcal{L}$, , and the fact that $\delta\in V_{A}$ we obtain $$\begin{aligned}
\mathcal{T}f & =\mathcal{T}\left(\tau_{A}+\delta\right)\\
& =2\mu\left(\tau_{A}\right)+2\mu\left(\delta\right)-\mathcal{D}\tau_{A}-\mathcal{D}\delta+2\mu\left(\delta\mathcal{L}\tau_{A}\right)\\
& =\mathcal{T}\tau_{A}-\mathcal{D}\delta.\end{aligned}$$ By the positivity of the Dirichlet form, $\mathcal{T}$ is indeed maximized by $\tau_{A}$. Finally, by , $$\begin{aligned}
\sup_{f\in V_{A}}\mathcal{T}f & =\mathcal{T}\tau_{A}=2\mu\left(\tau_{A}\right)-\mathcal{D}\tau_{A}=\mu\left(\tau_{A}\right).\end{aligned}$$
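In the computation above, the cancellation of the two $\delta$ terms is the following elementary step (using \[eq:poissonproblem\] and $\delta\in V_{A}$): $$\mu\left(\delta\mathcal{L}\tau_{A}\right)=\mu\left(\delta\,\One_{A^{c}}\,\mathcal{L}\tau_{A}\right)=-\mu\left(\delta\One_{A^{c}}\right)=-\mu\left(\delta\right),$$ so that $2\mu\left(\delta\right)+2\mu\left(\delta\mathcal{L}\tau_{A}\right)=0$.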
As an immediate consequence we can deduce the monotonicity of the expected hitting time:
\[cor:monotonicityofhittingtime\]Let $\mathcal{D}$ and $\mathcal{D}^{\prime}$ be the Dirichlet forms of two reversible Markov processes defined on the same space, such that both share the same equilibrium measure $\mu$. We denote the expectations with respect to these processes starting at equilibrium by $\ee_{\mu}$ and $\ee_{\mu}^{\prime}$. Assume that the domain of $\mathcal{D}$ is contained in the domain of $\mathcal{D}^{\prime}$, and that for every $f\in\text{Dom}\mathcal{D}$ $$\mathcal{D}f\le\mathcal{D}^{\prime}f.$$ Then, for an event $A\subseteq\Omega$, $$\ee_{\mu}\tau_{A}\le\ee_{\mu}^{\prime}\tau_{A}.$$
We will now restrict ourselves to kinetically constrained models. Fix a graph $G$ and take $\Omega=\left\{ 0,1\right\} ^{G}$. For every vertex $x\in G$ and a state $\eta\in\Omega$ we define a constraint $c_{x}\left(\eta\right)\in\left\{ 0,1\right\} $. The constraint does not depend on the value at $x$, and is non-increasing in $\eta$. The equilibrium measure $\mu$ is a product measure. The generator of this process, operating on a local function $f$, is given by $$\mathcal{L}f=\sum_{x}c_{x}\left(\mu_{x}f-f\right)$$ and its Dirichlet form by $$\mathcal{D}f=\mu\left(\sum_{x}c_{x}\text{Var}_{x}f\right).$$
Fix a subgraph $H$ of $G$, and denote the complement of $H$ in $G$ by $H^{c}$.
We will compare the dynamics of this KCM to the dynamics restricted to $H$, with boundary conditions that are the most constrained ones.
\[def:restricteddynamics\]The restricted dynamics on $H$ is the KCM defined by the constraints $$c_{x}^{H}\left(\eta\right)=c_{x}\left(\eta^{H}\right),$$ where, for $\eta\in\left\{ 0,1\right\} ^{H}$, $\eta^{H}$ is the configuration given by
$$\eta^{H}(x)=\begin{cases}
\eta_{x} & x\in H\\
1 & x\in H^{c}
\end{cases}.$$ We will denote the corresponding generator by $\mathcal{L}_{H}$ and its Dirichlet form by $\mathcal{D}_{H}$.
\[claim:restrictingdirichlet\]For any $f$ in the domain of $\mathcal{L}$ $$\mathcal{D}f\ge\mu_{H^{c}}\mathcal{D}_{H}f.$$
$c_{x}^{H}\le c_{x}$ and $\text{Var}_{x}f$ is positive, therefore $$\begin{aligned}
\mathcal{D}f & =\mu\left(\sum_{x}c_{x}\text{Var}_{x}f\right)\ge\mu\left(\sum_{x\in H}c_{x}^{H}\text{Var}_{x}f\right).\end{aligned}$$
The next claim will allow us to relate the spectral gap of the restricted dynamics to the variational principles discussed earlier.
\[claim:gapofrestricteddynamics\]Let $\gamma_{H}$ be the spectral gap of $\mathcal{L}_{H}$, and fix an event $A$ that depends only on the occupation of the vertices of $H$. Then for all $f\in V_{A}$
1. $\mathcal{D}f\ge\mu\left(A\right)\gamma_{H}\,\left(\mu f\right)^{2}$,
2. $\mathcal{D}f\ge\frac{\mu\left(A\right)}{1+\mu\left(A\right)}\gamma_{H}\,\mu\left(f^{2}\right)$
First, note that $\mu_{H}\left(A\right)\le\mu_{H}\left(f=0\right)\le\mu_{H}\left(\left|f-\mu_{H}f\right|\ge\mu_{H}f\right)$. Therefore, by the Chebyshev inequality and the fact that $\mu\left(A\right)=\mu_{H}\left(A\right)$, $$\mu\left(A\right)\le\frac{\text{Var}_{H}f}{\left(\mu_{H}f\right)^{2}}.\label{eq:boundingvarianceoverexpectaitionsquared}$$ Then, \[claim:restrictingdirichlet\] implies $$\begin{aligned}
\mathcal{D}f & \ge\mu_{H^{c}}\mathcal{D}_{H}f\ge\gamma_{H}\mu_{H^{c}}\text{Var}_{H}f\ge\mu\left(A\right)\gamma_{H}\,\mu_{H^{c}}\left(\mu_{H}f\right)^{2}\ge\mu\left(A\right)\gamma_{H}\,\left(\mu f\right)^{2}\end{aligned}$$ by the Jensen inequality. For the second part, we use inequality \[eq:boundingvarianceoverexpectaitionsquared\] in the form $$\text{Var}_{H}f\ge\mu\left(A\right)\left(\mu_{H}\left(f^{2}\right)-\text{Var}_{H}f\right),$$ which implies $$\text{Var}_{H}f\ge\frac{\mu\left(A\right)}{1+\mu\left(A\right)}\mu_{H}\left(f^{2}\right).$$ The result then follows by applying \[claim:restrictingdirichlet\].
Proof of the results
====================
Mixed threshold bootstrap percolation on $\protect\zz^{2}$
----------------------------------------------------------
### Proof of \[eq:fa12d2bpupperbound\]
For the upper bound we will find a specific mechanism in which a cluster of empty sites could grow until it reaches the origin.
\[def:goodsquareforfa12\]A square (that is, a subset of $\zz^{2}$ of the form $x+\left[L\right]^{2}$) is *good* if it contains at least one easy site in each line and in each column.
\[claim:squaresaregood\]Fix $L$. The probability that a square of side $L$ is good is at least $1-2Le^{-\pi L}$.
$$\begin{aligned}
\pp\left[\text{easy site in each line}\right] & =\left[1-\left(1-\pi\right)^{L}\right]^{L}\ge1-Le^{-\pi L}.\end{aligned}$$
The same bound holds for $\pp\left[\text{easy site in each column}\right]$, and then we conclude by the union bound.
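The inequality in the display above is Bernoulli's inequality together with $1-\pi\le e^{-\pi}$: $$\left[1-\left(1-\pi\right)^{L}\right]^{L}\ge1-L\left(1-\pi\right)^{L}\ge1-Le^{-\pi L}.$$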
The square $\left[L\right]^{2}$ is *excellent* if for every $2\le i\le L$ at least one of the sites in $\left\{ i\right\} \times\left[i-1\right]$ is easy, and at least one of the sites in $\left[i-1\right]\times\left\{ i\right\} $ is easy. For other squares of side $L$ being excellent is defined by translation.
We will use $p_{L}$ to denote the probability that a square of side $L$ is excellent. Note that $p_{L}$ depends only on $\pi$ and not on $q$.
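For concreteness (this explicit formula is not used later): the $2\left(L-1\right)$ sets $\left\{ i\right\} \times\left[i-1\right]$ and $\left[i-1\right]\times\left\{ i\right\} $, $2\le i\le L$, are pairwise disjoint, so the corresponding events are independent and $$p_{L}=\prod_{i=2}^{L}\left[1-\left(1-\pi\right)^{i-1}\right]^{2}\ge\pi^{2\left(L-1\right)}>0.$$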
*[Figure: step-by-step emptying of an excellent square and propagation of the empty region through neighboring good squares; $e$ marks easy sites and $0$ marks emptied sites.]*
The next two claims will show how a cluster of empty sites can propagate; see the figure above.
\[claim:infectingexcellentsquares\]Assume that $\left[L\right]^{2}$ is excellent, and that $\left(1,1\right)$ is initially empty. Then $\left[L\right]^{2}$ will be entirely emptied by time $L^{2}$.
This can be done by induction on the size of the empty square: assume that $\left[l\right]^{2}$ is entirely emptied for some $l<L$. By the definition of an excellent square, there is an easy site $x\in\left\{ l+1\right\} \times\left[l\right]$. Its neighbor to the left is empty (since it is in $\left[l\right]^{2}$), so at the next time step this site will also be empty. Once $x$ is empty, the two sites $x\pm e_{2}$ can be emptied, and then the sites $x\pm2e_{2}$ and so on, as long as they stay in $\left\{ l+1\right\} \times\left[l\right]$. Thus, at time $l$ all sites in $\left\{ l+1\right\} \times\left[l\right]$ will be empty, and by the same reasoning the sites of $\left[l\right]\times\left\{ l+1\right\} $ will also be empty. Since $\left(l+1,l+1\right)$ has two empty neighbors it will be emptied at step $l+1$, and thus $\left[l+1\right]^{2}$ will be emptied.
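One way to count the total time in this induction (a crude bound, sufficient for the claim): passing from an empty $\left[l\right]^{2}$ to an empty $\left[l+1\right]^{2}$ takes at most $l+1$ time steps, so the square is emptied by time $$\sum_{l=1}^{L-1}\left(l+1\right)=\frac{\left(L-1\right)\left(L+2\right)}{2}\le L^{2}.$$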
\[claim:infectionpropagatesforfa2\]Assume that $\left[L\right]^{2}$ is good, and that it has a neighboring square that is entirely empty by time $T$. Then $\left[L\right]^{2}$ will be entirely empty by time $T+L^{2}$.
We can empty $\left[L\right]^{2}$ line by line (or column by column, depending on whether its empty neighbor is in the horizontal direction or the vertical one). For each line, we start by emptying the easy site that it contains, and then continue to propagate.
Until the end of the proof of the upper bound, $L$ will be the minimal length for which the probability to be good exceeds $p^{\text{SP}}+0.01$.
$\mathcal{C}$ will denote the infinite cluster of good boxes of the form $Li+\left[L\right]^{2}$ for $i\in\zz^{2}$. $\mathcal{C}_{0}$ will denote the cluster of the origin surrounded by a path in $\mathcal{C}$, or just the origin if it is in $\mathcal{C}$. $\partial\mathcal{C}_{0}$ will be the outer boundary of $\mathcal{C}_{0}$ (namely the boxes of $\mathcal{C}$ that have a neighbor in $\mathcal{C}_{0}$). Note that $\mathcal{C}_{0}$ is finite and that $\partial\mathcal{C}_{0}$ is connected.
\[claim:infectionreachestheorigin\]Assume that at time $T$ one of the boxes on $\partial\mathcal{C}_{0}$ is entirely empty. Then by time $T+T_{0}$ the origin will be empty, where $T_{0}=\left(\left|\partial\mathcal{C}_{0}\right|+\left|\mathcal{C}_{0}\right|\right)L^{2}$.
By \[claim:infectionpropagatesforfa2\], the boundary $\partial\mathcal{C}_{0}$ will be emptied by time $T+L^{2}\left|\partial\mathcal{C}_{0}\right|$. Then, at each time step at least one site of $\mathcal{C}_{0}$ must be emptied, since no finite region could stay occupied forever.
Assume that a box $Li+\left[L\right]^{2}$ in $\mathcal{C}$ is empty at time $T$. Also, assume that the graph distance in $\mathcal{C}$ between this box and $\partial\mathcal{C}_{0}$ is $l$. Then by time $T+lL^{2}+T_{0}$ the origin will be empty.
This is again a direct application of claims \[claim:infectionpropagatesforfa2\] and \[claim:infectionreachestheorigin\].
Finally, we will use the following result from percolation theory:
For $l$ large enough, the number of boxes in $\mathcal{C}$ that are at graph distance in $\mathcal{C}$ at most $l$ from $\partial\mathcal{C}_{0}$ is greater than $\theta l^{2}$, where $\theta$ depends only on the probability that a box is good.
By ergodicity the cluster $\mathcal{C}$ has an almost sure positive density, so in particular $$\liminf_{l\rightarrow\infty}\frac{\left|\mathcal{C}\cap\left[-l,l\right]^{2}\right|}{\left|\left[-l,l\right]^{2}\right|}>0.$$ By [@AntalPisztora1996chemicaldistancesupercritical], there exists a positive constant $\rho$ such that, for $l$ large enough, all boxes of $\mathcal{C}$ in $\left[-\rho l,\rho l\right]^{2}$ are at graph distance at most $l$ from the origin. Combining these two facts proves the claim.
This claim together with a large deviation estimate yields
\[cor:densityofexcellentboxes\]For $l$ large enough, the number of excellent boxes in $\mathcal{C}$ that are at graph distance in $\mathcal{C}$ at most $l$ from $\partial\mathcal{C}_{0}$ is greater than $\theta^{\prime}l^{2}$, where $\theta^{\prime}=0.99\,\theta p_{L}$.
We can now put all the ingredients together and obtain the upper bound.
Fix $c>0$, and let $l=\frac{c}{\sqrt{q}}$. By \[cor:densityofexcellentboxes\], for $q$ small, there are at least $\frac{\theta^{\prime}c^{2}}{q}$ excellent boxes at distance smaller than $l$ from $\partial\mathcal{C}_{0}$. If one of them contains an empty site at its bottom left corner, the origin will be emptied by time $\left(l+1\right)L^{2}+T_{0}$. For $q$ small enough, this time is bounded by $\frac{2cL^{2}}{\sqrt{q}}$. The probability that none of them does is $\left(1-q\right)^{\frac{\theta^{\prime}c^{2}}{q}}$, which tends to $0$ uniformly in $q$ as $c\rightarrow\infty$.
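For the last step note that, uniformly in $q$, $$\left(1-q\right)^{\frac{\theta^{\prime}c^{2}}{q}}\le e^{-\theta^{\prime}c^{2}}\xrightarrow{c\rightarrow\infty}0.$$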
### Proof of \[eq:fa12d2bplowerbound\]
The lower bound results from the simple observation that the origin can only be emptied by time $t$ if there is an initially empty site at distance at most $t$. The probability of that event is at most $1-\left(1-q\right)^{4t^{2}}$, and taking $t=\frac{a}{\sqrt{q}}$ and $q$ small enough this probability is bounded by $1-e^{-2a}$. This tends to $0$ as $a\rightarrow0$, uniformly in $q$, which finishes the proof.
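To make this estimate explicit (one possible choice of constants): for $q\le\nicefrac{1}{2}$ one has $\left(1-q\right)^{\nicefrac{1}{q}}\ge e^{-2}$, so with $t=\frac{a}{\sqrt{q}}$ $$1-\left(1-q\right)^{4t^{2}}\le1-e^{-8a^{2}}\le8a^{2},$$ which is indeed below $1-e^{-2a}$ once $a\le\nicefrac{1}{4}$, and tends to $0$ as $a\rightarrow0$ uniformly in $q$.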
Mixed threshold KCM on $\protect\zz^{2}$
----------------------------------------
### Spectral gap
The spectral gap of this model is dominated by that of the FA2f model. Fix any $\gamma$ strictly greater than the gap of FA2f. Then there is a local non-constant function $f$ such that $$\frac{\mathcal{D}^{\text{FA2f}}f}{\text{Var}f}\le\gamma,$$ where $\mathcal{D}^{\text{FA2f}}$ is the Dirichlet form of the FA2f model.
Since $f$ is local, it is supported in some square of size $L\times L$, for $L$ big enough. $\nu$-almost surely it is possible to find a far away square in $\zz^{2}$ of size $L\times L$ that contains only difficult sites. By translation invariance of the FA2f model we can assume that this is the square in which $f$ is supported. In this case, $\mathcal{D}f=\mathcal{D}^{\text{FA2f}}f$, and this shows that indeed the gap of the model with random threshold is smaller than that of FA2f, which by [@cancrini2008kcmzoo] is bounded by $e^{-\nicefrac{c}{q}}$.
### Proof of \[eq:fa12d2kcmupperbound\]
In this part we will use the variational tools of the previous section in order to bound $\tau_{0}$ by a path argument. As in the proof of the upper bound for the bootstrap percolation, we will consider the good squares (see \[def:goodsquareforfa12\]) and their infinite cluster. In fact, by \[claim:squaresaregood\], choosing $L$ big enough we may assume that the box $\left[L\right]^{2}$ is in this cluster. Let us fix this $L$ until the end of this part. We will also choose an infinite self avoiding path of good boxes starting at the origin and denote it by $i_{0},i_{1},i_{2},\dots$. Note that this path depends on $\omega$ but not on $\eta$.
On this cluster empty sites will be able to propagate, and the next definition will describe the seed needed in order to start this propagation.
A box in $\zz^{2}$ is *essentially empty* if it is good and contains an entire line or an entire column of empty sites. This will depend on both $\omega$ and $\eta$.
In order to guarantee the presence of an essentially empty box we will fix $l=q^{-L-1}$, and define the bad event
$B=\left\{ \text{none of the boxes }i_{0},\dots,i_{l}\text{ is essentially empty}\right\} $. For fixed $\omega$ the path $i_{0},i_{1},i_{2},\dots$ is fixed, and $B$ is an event in $\Omega$.
A simple bound shows that $$\mu\left(B\right)\le\left(1-q^{L}\right)^{l}\le e^{-\nicefrac{1}{q}}.\label{eq:badeventhassmallprob}$$
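Indeed, the boxes $i_{0},\dots,i_{l}$ are disjoint, each of them (being good) is essentially empty with probability at least $q^{L}$ (it suffices that one fixed line of it is empty), and therefore $$\mu\left(B\right)\le\left(1-q^{L}\right)^{l}\le e^{-q^{L}l}=e^{-q^{L}q^{-L-1}}=e^{-\nicefrac{1}{q}}.$$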
We can use this bound in order to bound the hitting time at $B$:
\[claim:taubisbig\]There exists $C>0$ such that $\pp_{\mu}\left(\tau_{B}\le t\right)\le Ce^{-\nicefrac{1}{q}}\,t$.
We use the graphical construction of the Markov process. In order to hit $B$, we must hit it at a certain clock ring taking place in one of the sites of $\cup_{n=0}^{l}\left(Li_{n}+\left[L\right]^{2}\right)$. Therefore, $$\begin{aligned}
\pp\left(\tau_{B}\le t\right) & \le\mathbb{P}\left[\text{more than }2\left(2L+1\right)^{2}lt\text{ rings by time }t\right]+2\left(2L+1\right)^{2}l\,t\,\mu\left(B\right)\\
& \le e^{-\left(2L+1\right)^{2}q^{-L-1}t}+2\left(2L+1\right)^{2}q^{-L-1}\,t\,e^{-\nicefrac{1}{q}}\le Ce^{-\nicefrac{1}{q}}\,t.\end{aligned}$$
In order to bound $\tau_{0}$ we will study the hitting time of $A=B\cup\left\{ \eta\left(0\right)=0\right\} $.
\[lem:fa12path\]Fix $\eta\in\Omega$. Then there exists a path $\eta_{0},\dots,\eta_{N}$ of configurations and a sequence of sites $x_{0},\dots x_{N-1}$ such that
1. $\eta_{0}=\eta$,
2. $\eta_{N}\in A$,
3. $\eta_{i+1}=\eta_{i}^{x_{i}}$,
4. $c_{x_{i}}\left(\eta_{i}\right)=1$,
5. $N\le4L^{2}l$,
6. For all $i\le N$, $\eta_{i}$ differs from $\eta$ at at most $3L$ points, contained in at most two neighboring boxes.
If $\eta\in A$, we take the path $\eta$ with $N=0$. Otherwise $\eta\in B^{c}$, so there is an essentially empty box among $i_{0},\dots,i_{l}$, which contains an empty column (or row). We can then create an empty column (row) next to it and propagate that column (row) as in \[fig:propagationgacolumn12\]. When the path rotates we can rotate this propagating column (row) as shown in \[fig:rotatingcol12\].
*[Figures \[fig:propagationgacolumn12\] and \[fig:rotatingcol12\]: an empty column is created next to an essentially empty box, propagated through the neighboring good box, and rotated into a row when the path of good boxes turns; $e$ marks easy sites and $0$ marks empty sites.]*
We can use this path together with \[cor:dirichletequalsexpectation\] in order to bound $\tau_{A}$.
\[lem:upperboundontaul\]There exists $C_{L}>0$ (that may depend on $L$ but not on $q$) such that $\mu\left(\tau_{A}\right)\le C_{L}\,q^{-5L-2}$.
Since $\tau_{A}$ vanishes on $A$ and $\eta_{N}\in A$, taking the path defined in \[lem:fa12path\] we may write $$\tau_{A}\left(\eta\right)=\sum_{i=0}^{N-1}\left(\tau_{A}\left(\eta_{i}\right)-\tau_{A}\left(\eta_{i+1}\right)\right).$$ In the following we use the notation $$\nabla_{x}\tau_{A}\left(\eta\right)=\tau_{A}\left(\eta\right)-\tau_{A}\left(\eta^{x}\right).$$
Then, by the Cauchy-Schwarz inequality, $$\begin{aligned}
\mu\left(\tau_{A}\right)^{2} & \le & \mu\left(\tau_{A}^{2}\right)=\sum_{\eta}\mu\left(\eta\right)\left(\sum_{i=0}^{N-1}\nabla_{x_{i}}\tau_{A}\left(\eta_{i}\right)\right)^{2}\\
& \le & \sum_{\eta}\mu\left(\eta\right)N\sum_{i}c_{x_{i}}\left(\eta_{i}\right)\left(\nabla_{x_{i}}\tau_{A}\left(\eta_{i}\right)\right)^{2}\\
& = & \sum_{\eta}\mu\left(\eta\right)N\sum_{i}\sum_{z}\sum_{\eta^{\prime}}c_{z}\left(\eta^{\prime}\right)\left(\nabla_{z}\tau_{A}\left(\eta^{\prime}\right)\right)^{2}\One_{z=x_{i}}\One_{\eta^{\prime}=\eta_{i}}.\end{aligned}$$ By property number 6 of the path, we know that $\mu\left(\eta\right)\le q^{-3L}\mu\left(\eta^{\prime}\right)$, so we obtain
$$\mu\left(\tau_{A}\right)^{2}\le q^{-3L}N\sum_{\eta^{\prime}}\mu\left(\eta^{\prime}\right)\sum_{z}c_{z}\left(\eta^{\prime}\right)\left(\nabla_{z}\tau_{A}\left(\eta^{\prime}\right)\right)^{2}\sum_{i}\One_{z=x_{i}}\sum_{\eta}\One_{\eta^{\prime}=\eta_{i}}.$$ Still using property $6$, $\eta$ differs from $\eta^{\prime}$ at at most $3L$ points, all of them in the box containing $z$ or in one of the two neighboring boxes. This gives the bound $\sum_{\eta}\One_{\eta^{\prime}=\eta_{i}}\le\left(3L^{2}\right)^{3L}$. Finally, bounding $\One_{z=x_{i}}$ by $1$, $$\begin{aligned}
\mu\left(\tau_{A}\right)^{2} & \le q^{-3L}\left(3L^{2}\right)^{3L}N^{2}\sum_{\eta^{\prime}}\mu\left(\eta^{\prime}\right)\sum_{z}c_{z}\left(\eta^{\prime}\right)\left(\nabla_{z}\tau_{A}\left(\eta^{\prime}\right)\right)^{2}\\
& \le16\left(3L^{2}\right)^{3L}L^{4}q^{-5L-2}\,\mathcal{D}\tau_{A}.\end{aligned}$$ This concludes the proof of the lemma by \[cor:dirichletequalsexpectation\].
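Explicitly, combining the last display with \[cor:dirichletequalsexpectation\], $$\mu\left(\tau_{A}\right)^{2}\le16\left(3L^{2}\right)^{3L}L^{4}q^{-5L-2}\,\mu\left(\tau_{A}\right),$$ so $\mu\left(\tau_{A}\right)\le C_{L}\,q^{-5L-2}$ with $C_{L}=16\left(3L^{2}\right)^{3L}L^{4}$.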
Using this lemma and the bound on $\tau_{B}$ in \[claim:taubisbig\], we can finish the estimation of the upper bound. By the Markov inequality $$\mu\left(\tau_{A}\ge C_{L}\,q^{-5L-3}\right)\le q.$$ On the other hand, since $A=B\cup\left\{ \eta_{0}=0\right\} $, $$\begin{aligned}
\mu\left(\tau_{A}<C_{L}\,q^{-5L-3}\right) & \le\mu\left(\tau_{0}<C_{L}\,q^{-5L-3}\right)+\mu\left(\tau_{B}<C_{L}\,q^{-5L-3}\right)\\
& \le\mu\left(\tau_{0}<C_{L}\,q^{-5L-3}\right)+C_{L}^{\prime}e^{-\nicefrac{1}{q}}.\end{aligned}$$ Therefore $$\mu\left(\tau_{0}\ge C_{L}\,q^{-5L-3}\right)\le q+C_{L}^{\prime}e^{-\nicefrac{1}{q}},$$ and taking $\overline{\alpha}=5L+3$ will suffice.
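The Markov step above is simply $$\mu\left(\tau_{A}\ge C_{L}\,q^{-5L-3}\right)\le\frac{\mu\left(\tau_{A}\right)}{C_{L}\,q^{-5L-3}}\le\frac{C_{L}\,q^{-5L-2}}{C_{L}\,q^{-5L-3}}=q.$$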
Concerning \[rem:exponentsaretruelyrandom\], fix $L_{0}$ such that the probability that $\left[L_{0}\right]^{2}$ is good exceeds $p^{\text{SP}}$. Then, for $\alpha_{0}=5L_{0}+3$, $\nu\left(\overline{\alpha}\le\alpha_{0}\right)>0$. We will see later that the other inequality holds as well for the same $\alpha_{0}$.
### Proof of \[eq:fa12d2kcmlowerbound\]
A trivial bound could be obtained by taking any $\underline{\alpha}<1$, since the rate at which the origin becomes empty is always at most $q$. We will, however, look for a bound that better describes the effect of the disorder, and that will allow us to prove \[rem:exponentsaretruelyrandom\]. The basic observation for the estimation of this lower bound is that if $\left[-L,L\right]^{2}$ is initially occupied, and if it contains only difficult sites, then at some point we will need to empty at least $\frac{L}{2}$ sites of this box simultaneously before the origin could be emptied. This energy barrier forces $\tau_{0}$ to be greater than $q^{-\nicefrac{L}{2}}$.
In the following we will fix $L$ such that $\left[-L,L\right]^{2}$ contains only difficult sites (so it is not the same $L$ we have used for the upper bound).
For a rectangle $R$, the span of $R$ is the set of sites that can be emptied by the bootstrap percolation using only the empty sites ($0$s) of $R$. If the span of $R$ equals $R$ we say that $R$ is internally spanned.
The next fact is a consequence of the fact that a set which is stable under the bootstrap percolation must be a rectangle [@aizenmanlebowitz1988metastabilityz2].
\[fact:spanisunionofrects\]The span of a rectangle $R$ is a union of internally spanned rectangles.
For $x\in\zz^{2}$, let $\overline{G}_{x}$ be the event that the origin is in the span of $\left[-L,L\right]^{2}$ for $\eta$, but not for $\eta^{x}$. $G_{x}$ is defined as the intersection of $\overline{G}_{x}$ with the event $\left\{ c_{x}=1\right\} $.
First, note that legal flips of sites in the interior of a rectangle or outside the rectangle cannot change its span. Therefore, $G_{x}=\emptyset$ for $x$ which is not on the inner boundary of $\left[-L,L\right]^{2}$.
Fix $x$ on the inner boundary of $\left[-L,L\right]^{2}$, and let $\eta\in G_{x}$. Then $x$ and the origin belong to the same internally spanned rectangle.
Recalling \[fact:spanisunionofrects\], we consider an internally spanned rectangle containing the origin. If it did not contain $x$, the origin would be in the span of $\left[-L,L\right]^{2}$ also for $\eta^{x}$, contradicting the definition of $G_{x}$.
\[cor:barrierof2f\]Fix $x$ on the inner boundary of $\left[-L,L\right]^{2}$. Then $\mu\left(G_{x}\right)\le\binom{L^{2}}{L/2}q^{\nicefrac{L}{2}}$.
Assume without loss of generality that $x$ is on the right boundary. Then there must be an internally spanned rectangle in $\left[-L,L\right]^{2}$ whose width is at least $L$. Since all sites of $\left[-L,L\right]^{2}$ are difficult, this rectangle cannot contain two consecutive columns that are entirely occupied (the first site of such a pair of columns to be emptied would have at most one empty neighbor), therefore it must contain at least $\frac{L}{2}$ empty sites.
We can use the same argument as in the proof of \[claim:taubisbig\]. Defining $G=\cup_{x}G_{x}$, this argument tells us that the hitting time $\tau_{G}$ is bigger than $C\,q^{-\nicefrac{L}{2}+1}$ with probability that tends to $1$ as $q\rightarrow0$, for some constant $C$ that depends on $L$. If we start with a configuration for which the origin is not in the span of $\left[-L,L\right]^{2}$, it could only be emptied after $\tau_{G}$: at the first instant in which the span of $\left[-L,L\right]^{2}$ includes the origin, $G_{x}$ must occur for the site $x$ that has just been flipped. Since the probability to start with an entirely occupied $\left[-L,L\right]^{2}$ tends to $1$ as $q\rightarrow0$, \[eq:fa12d2kcmlowerbound\] is satisfied for $\underline{\alpha}=\frac{L}{2}-1$.
In order to bound also the expected value of $\tau_{0}$ we will use \[prop:variationalprincipleforhittingtime\]. Let us consider the function $$f=\One_{0\text{ is not in the span of }\left[-L,L\right]^{2}}.$$ We can bound its Dirichlet form using \[cor:barrierof2f\]: $$\begin{aligned}
\mathcal{D}f & =\mu\left(\sum_{x}c_{x}\text{Var}_{x}f\right)\le\mu\left(\sum_{x}c_{x}\,q\One_{G_{x}}\right)\\
& \le q\,16L\,\binom{L^{2}}{L/2}\,q^{\nicefrac{L}{2}}=C_{L}\,q^{\nicefrac{L}{2}+1}.\end{aligned}$$ The expected value is bounded from below by the probability that all sites are occupied: $$\mu f\ge\left(1-q\right)^{\left(2L+1\right)^{2}}.$$
Now consider for some $\lambda\in\rr$ the rescaled function $\overline{f}=\lambda f$. $$\begin{aligned}
\mathcal{T}\overline{f} & =2\mu\overline{f}-\mathcal{D}\overline{f}\\
& \ge2\lambda\left(1-q\right)^{\left(2L+1\right)^{2}}-\lambda^{2}\,C_{L}\,q^{\nicefrac{L}{2}+1}.\end{aligned}$$ The optimal choice of $\lambda$ is $\frac{\left(1-q\right)^{\left(2L+1\right)^{2}}}{C_{L}}q^{-\nicefrac{L}{2}-1}$, which yields $$\begin{aligned}
\mathcal{T}\overline{f} & \ge\frac{\left(1-q\right)^{2\left(2L+1\right)^{2}}}{C_{L}}q^{-\nicefrac{L}{2}-1}.\end{aligned}$$ Together with \[prop:variationalprincipleforhittingtime\], the fact that $\overline{f}$ vanishes on $\left\{ \eta_{0}=0\right\} $ implies that $\mu\left(\tau_{0}\right)\ge C_{L}^{\prime}q^{-\nicefrac{L}{2}-1}$, and therefore $\ee_{\mu}\left(\tau_{0}\right)\ge q^{-\underline{\alpha}}$ for $q$ small enough.
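The optimization over $\lambda$ is the standard one: for constants $m,D>0$, $$\sup_{\lambda\in\rr}\left(2\lambda m-\lambda^{2}D\right)=\frac{m^{2}}{D},\qquad\text{attained at }\lambda=\frac{m}{D},$$ applied here with $m=\left(1-q\right)^{\left(2L+1\right)^{2}}$ and $D=C_{L}\,q^{\nicefrac{L}{2}+1}$.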
Concerning \[rem:exponentsaretruelyrandom\], note that for every $\alpha$ $$\nu\left(\underline{\alpha}\ge\alpha\right)\ge\nu\left(L\ge2\alpha+4\right)=\left(1-\pi\right)^{\left(4\alpha+9\right)^{2}}.$$ In particular for $\alpha_{0}$ defined in the proof of the upper bound $\nu\left(\underline{\alpha}\ge\alpha_{0}\right)>0$.
Mixed threshold models on $\protect\zz^{d}$
-------------------------------------------
The argument above, for the case of $\zz^{2}$, works also in more general settings, as long as the probability to be easy (i.e., threshold $1$) is strictly positive.
The lower bound for the bootstrap percolation is immediate, since it is only at scale $q^{-\nicefrac{1}{d}}$ that an empty site can be found. The lower bound for the kinetically constrained model could be analyzed similarly to the two dimensional case using the methods of [@BaloghBollobasDuminilMorris1012sharpbpzd], but in order to keep things simple we could take $\underline{\alpha}<1$, which suffices since the rate at which the origin becomes empty is always at most $q$.
For the upper bound of both the bootstrap percolation and the kinetically constrained model we need to construct a path that will empty the origin. First, note that we may assume that sites have either threshold $1$ (easy) or threshold $d$ (difficult) with probabilities $\pi$ and $1-\pi$, for some $\pi>0$ (raising all intermediate thresholds to $d$ only slows the dynamics down, so an upper bound for the modified model implies one for the original). In this case the path is described explicitly in [@martinellitonitelli2016towardsuniversality]. It is constructed for the FA$d$f model, but we will only need to adapt the definitions there in order to take into account the easy sites.
Fix $L$ equal to the parameter $n$ defined in the beginning of subsection 5.1 of [@martinellitonitelli2016towardsuniversality], replacing $q$ by $\pi$ and taking $\epsilon$ such that good boxes (see \[def:goodboxinzd\] below) percolate, and the origin belongs to the infinite cluster. We then consider, just as before, an infinite path of good boxes starting at the origin.
The easy bootstrap percolation will be the threshold $d$ bootstrap percolation defined on $\zz^{d}$, with initial conditions in which easy sites are empty and difficult sites are filled. A set $V\subseteq\zz^{d}$ is *easy internally spanned* if it is internally spanned for this process.
\[def:goodboxinzd\]A *good* box $V=x+\left[L\right]^{d}$ is a box for which the event $G_{1}$ in Definition 5.4 of [@martinellitonitelli2016towardsuniversality] occurs, replacing “internally spanned” by “easy internally spanned”.
An *excellent* box is a box that, after adding a single easy site at its corner, becomes easy internally spanned. $p_{L}$ will be the probability that the box $\left[L\right]^{d}$ is excellent. Note that (as for the two dimensional case) $p_{L}$ is nonzero, and that it does not depend on $q$.
An *essentially empty* box $V=x+\left[L\right]^{d}$ will correspond to the event $G_{2}$ in Definition 5.4 of [@martinellitonitelli2016towardsuniversality]: it is a good box in which the first slice in any direction is empty.
Being good and being excellent are events measurable with respect to the disorder $\omega$, whereas being essentially empty depends also on the configuration of the empty and filled sites. The definition of the bad event $B$ remains the same as in the previous section, and taking $l=q^{-dL^{d-1}-1}$ its probability has the same decay.
With these definitions, replacing “supergood” by “essentially empty”, the proof in section 5 of [@martinellitonitelli2016towardsuniversality] shows how to propagate an essentially empty box along the path of good boxes, corresponding to figures \[fig:propagationgacolumn12\] and \[fig:rotatingcol12\] in the two dimensional case. We may then consider a path as the one of \[lem:fa12path\]. That is, for any configuration $\eta$ there is a path $\eta_{0},\dots,\eta_{N}$ with flips $x_{0},\dots,x_{N-1}$ such that
1. $\eta_{0}=\eta$,
2. $\eta_{N}\in A$,
3. $\eta_{i+1}=\eta_{i}^{x_{i}}$,
4. $c_{x_{i}}\left(\eta_{i}\right)=1$,
5. $N\le cL^{d}l$ for some $c>0$,
6. For all $i\le N$, $\eta_{i}$ differs from $\eta$ at at most $cL^{d-1}$ points, contained in at most two neighboring boxes.
Applying the exact same argument as for the two dimensional case yields the upper bounds.
Mixed North-East and FA1f
-------------------------
### Spectral gap
This is the same argument as for the previous model: one can always find arbitrarily large regions of difficult sites, so the gap is bounded by that of the north-east model. Since for the parameters we have chosen the north-east model is not ergodic, it has zero gap [@cancrini2008kcmzoo].
### Hitting time
Let $A$ be the event $\left\{ \eta_{0}=0\right\} $. Recall \[def:taubar\] and let $$\tau=\overline{\tau}_{A}.$$ The exponential tail of $\tau_{0}$ is a consequence of \[prop:exponentialdecayofhittingtime\], so we are left with proving that $\nu\left(\tau\ge t\right)\le C\,t^{\frac{c}{\log q}}$ for some constant $c$. We will do that by choosing a subgraph on which we can estimate the gap, and then apply \[claim:gapofrestricteddynamics\].
Since $\pi$ is greater than the critical probability for the Bernoulli site percolation, there will be an infinite cluster of easy sites $\mathcal{C}$. We denote by $\mathcal{C}_{0}$ the cluster of the origin surrounded by a path in $\mathcal{C}$. $\partial\mathcal{C}_{0}$ will be the outer boundary of $\mathcal{C}_{0}$, i.e., the sites in $\mathcal{C}$ that have a neighbor in $\mathcal{C}_{0}$. Then, we fix a self avoiding infinite path of easy sites $v_{0},v_{1},\dots$ starting with the sites of $\partial\mathcal{C}_{0}$. That is, $v_{0},\dots,v_{\left|\partial\mathcal{C}_{0}\right|}$ is a path that encircles $\mathcal{C}_{0}$, and then $v_{\left|\partial\mathcal{C}_{0}\right|+1},\dots$ continues to infinity. We will denote $\mathcal{V}=\left\{ v_{i}\right\} _{i\in\nn}$. Let $H=\mathcal{V}\cup\mathcal{C}_{0}$, and consider the restricted dynamics $\mathcal{L}_{H}$ introduced in \[def:restricteddynamics\]. We split the dynamics in two: for some local function $f$ on $H$, $$\begin{aligned}
\mathcal{L}_{H}f & =\mathcal{L}^{\mathcal{C}_{0}}f+\mathcal{L}^{\mathcal{V}}f,\\
\mathcal{L}^{\mathcal{V}} & =\sum_{i\in\nn}c_{v_{i}}^{H}\left(\mu_{v_{i}}f-f\right),\\
\mathcal{L}^{\mathcal{C}_{0}} & =\sum_{x\in\mathcal{C}_{0}}c_{x}^{H}\left(\mu_{x}f-f\right).\end{aligned}$$
Note that the boundary conditions of the $\mathcal{C}_{0}$ dynamics depend on the state of the vertices in $\mathcal{V}$ and vice versa. We will denote by $\mathcal{L}_{0}^{\mathcal{C}_{0}}$ the $\mathcal{C}_{0}$ dynamics with empty boundary conditions and by $\mathcal{L}_{1}^{\mathcal{V}}$ the $\mathcal{V}$ dynamics with occupied boundary conditions. All generators come with their Dirichlet forms carrying the same superscript and subscript.
We will bound the gap of $\mathcal{L}_{H}$ using the gaps of $\mathcal{L}_{1}^{\mathcal{V}}$, $\mathcal{L}_{0}^{\mathcal{C}_{0}}$ and the following block dynamics: $$\mathcal{L}^{b}f=\left(\mu_{\mathcal{V}}\left(f\right)-f\right)+\One_{\partial\mathcal{C}_{0}\text{ is empty}}\left(\mu_{\mathcal{C}_{0}}f-f\right).$$ Denote the spectral gaps of $\mathcal{L}_{1}^{\mathcal{V}}$, $\mathcal{L}_{0}^{\mathcal{C}_{0}}$, $\mathcal{L}^{b}$, and $\mathcal{L}_{H}$ by $\gamma_{1}^{\mathcal{V}}$, $\gamma_{0}^{\mathcal{C}_{0}}$, $\gamma^{b}$, and $\gamma_{H}$, respectively.
By Proposition 4.4 of [@cancrini2008kcmzoo]:
$$\gamma^{b}=1-\sqrt{1-q^{\left|\partial\mathcal{C}_{0}\right|}},$$ i.e., $\text{Var}f\le\frac{1}{1-\sqrt{1-q^{\left|\partial\mathcal{C}_{0}\right|}}}\mathcal{D}^{b}f$ for any local function $f$.
Let us now use this gap in order to relate $\gamma_{H}$ to $\gamma^{\mathcal{V}}$ and $\gamma^{\mathcal{C}_{0}}$:
$$\gamma_{H}\ge\gamma^{b}\min\left\{ \gamma_{1}^{\mathcal{V}},\gamma_{0}^{\mathcal{C}_{0}}\right\} .$$
Fix a non-constant local function $f$. $$\begin{aligned}
\text{Var}f & \le\frac{1}{\gamma^{b}}\mathcal{D}^{b}f=\frac{1}{\gamma^{b}}\left[\mu\left(\text{Var}_{\mathcal{V}}f\right)+\mu\left(\One_{\partial\mathcal{C}_{0}\text{ is empty}}\text{Var}_{\mathcal{C}_{0}}f\right)\right]\\
& \le\frac{1}{\gamma^{b}}\left[\frac{1}{\gamma_{1}^{\mathcal{V}}}\mu\left(\mathcal{D}_{1}^{\mathcal{V}}f\right)+\frac{1}{\gamma_{0}^{\mathcal{C}_{0}}}\mu\left(\One_{\partial\mathcal{C}_{0}\text{ is empty}}\mathcal{D}_{0}^{\mathcal{C}_{0}}f\right)\right]\\
& \le\frac{1}{\gamma^{b}}\max\left\{ \frac{1}{\gamma_{1}^{\mathcal{V}}},\frac{1}{\gamma_{0}^{\mathcal{C}_{0}}}\right\} \mathcal{D}_{H}f.\end{aligned}$$
We are left with estimating $\gamma_{1}^{\mathcal{V}}$ and $\gamma_{0}^{\mathcal{C}_{0}}$.
There exists $C>0$ such that $\gamma_{1}^{\mathcal{V}}\ge Cq^{3}$.
The Dirichlet form $\mathcal{D}_{1}^{\mathcal{V}}$ dominates the Dirichlet form of FA1f on $\zz_{+}$, since the FA1f constraint on the path is more restrictive; and that dynamics has a spectral gap proportional to $q^{3}$ (see [@cancrini2008kcmzoo]).
For $\gamma_{0}^{\mathcal{C}_{0}}$ we will use the bisection method, comparing the gap on a box to that of a smaller box. For $L\in\nn$, let $\mathcal{L}_{L}^{\text{NE}}$ be the generator of the north-east dynamics in the box $\left[L\right]^{2}$ with empty boundary (for the north east model this is equivalent to putting empty boundary only above and to the right). Denote its gap by $\gamma_{\left[L\right]^{2}}^{\text{NE}}$. By monotonicity we can restrict the discussion to this dynamics, i.e., $$\gamma_{0}^{\mathcal{C}_{0}}\ge\gamma_{\text{diam }\mathcal{C}_{0}}^{\text{NE}}.\label{eq:expandingnetosquare}$$
We will now bound $\gamma^{\text{NE}}$ (see also Theorem 6.16 of [@cancrini2008kcmzoo]).
$\gamma_{\left[L\right]^{2}}^{\text{NE}}\ge e^{3\log q\,L}$.
We will prove the result for $L_{k}=2^{k}$ by induction on $k$. Then monotonicity will complete the argument for all $L$. Consider the box $\left[L_{k}\right]^{2}$, and divide it in two rectangles $R_{-}=\left[L_{k-1}\right]\times\left[L_{k}\right]$ and $R_{+}=\left[L_{k-1}+1,L_{k}\right]\times\left[L_{k}\right]$. We will run the following block dynamics $$\begin{aligned}
\mathcal{L}^{b\text{NE}}f & =\left(\mu_{R_{+}}f-f\right)+\One_{\partial_{-}R_{+}\text{ is empty}}\left(\mu_{R_{-}}f-f\right),\end{aligned}$$ where $\partial_{-}R_{+}$ is the inner left boundary of $R_{+}$. Again, by Proposition 4.4 of [@cancrini2008kcmzoo], $$\begin{aligned}
\text{gap}\left(\mathcal{L}^{b\text{NE}}\right) & =1-\sqrt{1-\mu\left(\One_{\partial_{-}R_{+}\text{ is empty}}\right)}\\
& =1-\sqrt{1-q^{L_{k}}}.\end{aligned}$$ Therefore for every local function $f$ $$\begin{aligned}
\text{Var}f & \le\frac{1}{1-\sqrt{1-q^{L_{k}}}}\mathcal{D}^{b\text{NE}}f\\
& =\frac{1}{1-\sqrt{1-q^{L_{k}}}}\mu\left(\text{Var}_{R_{+}}f+\One_{\partial_{-}R_{+}\text{ is empty}}\text{Var}_{R_{-}}f\right)\\
& \le\frac{1}{1-\sqrt{1-q^{L_{k}}}}\mu\left(\frac{1}{\gamma_{R_{+}}^{\text{NE}}}\mathcal{D}_{R_{+}}^{\text{NE}}f+\frac{1}{\gamma_{R_{-}}^{\text{NE}}}\mathcal{D}_{R_{-}}^{\text{NE}}f\right),\end{aligned}$$ where $\gamma_{R}^{\text{NE}},\mathcal{D}_{R}^{\text{NE}}$ are the spectral gap and Dirichlet form of the north-east dynamics in $R$ with empty boundary conditions for any fixed rectangle $R$. We see that $$\gamma_{\left[L_{k}\right]^{2}}^{\text{NE}}\ge\left(1-\sqrt{1-q^{L_{k}}}\right)\gamma_{\left[L_{k-1}\right]\times\left[L_{k}\right]}^{\text{NE}}.$$ If we repeat the same argument dividing $\left[L_{k-1}\right]\times\left[L_{k}\right]$ into the rectangles $\left[L_{k-1}\right]\times\left[L_{k-1}\right]$ and $\left[L_{k-1}\right]\times\left[L_{k-1}+1,L_{k}\right]$, we obtain
$$\gamma_{L_{k-1}\times L_{k}}^{\text{NE}}\ge\left(1-\sqrt{1-q^{L_{k-1}}}\right)\gamma_{\left[L_{k-1}\right]^{2}}^{\text{NE}}.$$ Hence, $$\log\gamma_{\left[L_{k}\right]^{2}}^{\text{NE}}\ge\log\gamma_{\left[L_{k-1}\right]^{2}}^{\text{NE}}+2^{k}\log q-\log4,$$ yielding $$\log\gamma_{\left[L_{k}\right]^{2}}^{\text{NE}}\ge\log q\sum_{n=1}^{k}2^{n}-k\log4$$ which finishes the proof.
We can now put everything together. Let $L$ be the diameter of $\mathcal{C}_{0}$. By the second part of \[claim:gapofrestricteddynamics\], $$\begin{aligned}
\tau & \le\frac{1+q}{q}\,\frac{1}{\gamma_{H}}\le\frac{1+q}{q}\,\frac{1}{\left(1-\sqrt{1-q^{\left|\partial\mathcal{C}_{0}\right|}}\right)\min\left\{ \gamma_{1}^{\mathcal{V}},\gamma_{0}^{\mathcal{C}_{0}}\right\} }\label{eq:boundingtauforneinfixedomega}\\
& \le q^{-4L-1}.\nonumber \end{aligned}$$
Finally, we will use the sharpness of the phase transition for the site percolation on the dual graph (see [@AizenmanBarsky87percosharpness; @DuminilCopinTassion16percoshaprness]):
There exists a positive constant $c_{2}$ that depends on $\pi$ such that $\nu\left(L\ge D\right)\le e^{-c_{2}D}$ for any $D\in\nn$.
Using this claim and \[eq:boundingtauforneinfixedomega\], we obtain
$$\begin{aligned}
\nu\left(\tau\ge t\right) & \le\nu\left(q^{-4L-1}\ge t\right)=\nu\left(L\ge\frac{\log t}{4\log\frac{1}{q}}-\frac{1}{4}\right)\\
& \le\,C\,t^{\nicefrac{c}{\log q}}.\end{aligned}$$
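One admissible choice of constants is $c=\nicefrac{c_{2}}{4}$ and $C=e^{\nicefrac{c_{2}}{4}}$, since $$\nu\left(L\ge\frac{\log t}{4\log\frac{1}{q}}-\frac{1}{4}\right)\le e^{\nicefrac{c_{2}}{4}}\,e^{-\frac{c_{2}\log t}{4\log\nicefrac{1}{q}}}=e^{\nicefrac{c_{2}}{4}}\,t^{\frac{c_{2}}{4\log q}}.$$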
Conclusions and further questions
=================================
We have seen here two simple examples of KCMs in random environments. These examples show that when the environment has some rare remote “bad” regions, the relaxation time fails to describe the true observed time scales of the system. Since the dynamics is not attractive, techniques such as monotone coupling and censoring cannot be applied to these models either. In order to overcome this difficulty we considered the hitting time $\tau_{0}$, which on the one hand describes a physically measurable observable, and on the other hand can be studied using variational principles. We formulated some tools based on these variational principles and used them in order to understand the behavior of $\tau_{0}$ in both models.
For future research, one may try to apply these tools to kinetically constrained models in other types of random environments, such as the polluted lattice, more general mixtures of constraints, and models on random graphs.
There are also some questions left open concerning the models studied here. For the first model, it is natural to conjecture that $\tau_{0}$ scales as $q^{-\alpha}$ for some random $\alpha$. We can also look at the $\pi$ dependence of $\alpha$: we know that when there are not many easy sites this time should become larger, until it reaches the FA2f time when $\pi=0$. Looking at the proof of \[thm:scalingoftimeforfa12\], we can see that $\overline{\alpha}$ scales like $\frac{\log\nicefrac{1}{\pi}}{\pi}$, and $\underline{\alpha}$ scales like $\pi^{-\nicefrac{1}{2}}$. It seems more likely, however, that the actual exponent $\alpha$ behaves like $\frac{1}{\pi}$: the $\log\frac{1}{\pi}$ of the upper bound comes up also in the proof bounding the gap of the FA2f model [@martinellitonitelli2016towardsuniversality], and also there it is conjectured that it does not appear in the true scaling. In fact, if we use the path that we have chosen in order to bound $\tau_{0}$ also for the bootstrap percolation, we will have the same $\log\frac{1}{q}$ factor, and in this case it is known that it does not appear in the correct scaling. The lower bound of $\pi^{-\nicefrac{1}{2}}$ could be improved, at the price of complicating the proof. In the proof we assume that $\left[-L,L\right]^{2}$ contains only difficult sites. If we require instead that enough lines and columns are entirely difficult (but not necessarily the entire square), we can obtain a bound that scales as $\frac{1}{\pi}$. It is worth noting here that for the bootstrap percolation, by repeating the arguments of [@aizenmanlebowitz1988metastabilityz2; @holroyd2003sharp] with some minor adaptations, we can show that the scaling of the prefactor $a$ with $\pi$ is between $e^{c/\pi}$ and $e^{C/\pi}$ for some $c,C>0$.
For the mixed threshold FA model, $\nu$-almost surely the limit $\lim_{q\rightarrow0}\frac{\log\tau_{0}}{\log\nicefrac{1}{q}}$ exists. Its value $\alpha$ is a random variable whose law depends on $\pi$. Moreover, the law of $\pi\alpha$ converges (in some sense) to a non-trivial law as $\pi$ tends to $0$.
The mixed North-East and FA1f model also raises many questions, among them finding the critical probability $q_{c}$, and characterizing the behavior of both the bootstrap percolation and the KCM in the different parameter regimes.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Cristina Toninelli and Fabio Martinelli for the helpful discussions and useful advice. I acknowledge the support of the ERC Starting Grant 680275 MALIG.
---
abstract: 'Having accurate tools to describe non-classical, non-Gaussian environmental fluctuations is crucial for designing effective quantum control protocols. We show how the Keldysh approach to quantum noise characterization can be usefully employed to characterize frequency-dependent noise, focusing on the quantum bispectrum (i.e. frequency-resolved third cumulant). Using the paradigmatic example of photon shot noise fluctuations in a driven bosonic mode, we show that the quantum bispectrum can be a powerful tool for revealing distinctive non-classical noise properties, including an effective breaking of detailed balance by quantum fluctuations. The Keldysh-ordered quantum bispectrum can be directly accessed using existing noise spectroscopy protocols.'
author:
- 'Yu-Xin Wang'
- 'A. A. Clerk'
bibliography:
- 'refFCS.bib'
title: 'Spectral characterization of non-Gaussian quantum noise: Keldysh approach and application to photon shot noise'
---
[*Introduction–* ]{}An accurate description of environmental fluctuations is crucial for quantum information processing and quantum control. While it is common to assume noise that is both classical and Gaussian, there are many physically-relevant situations where these assumptions fail [@Cywinski2014; @Viola2014; @Viola2016; @Oliver2019; @Ramon2019]. Understanding how to usefully characterize non-Gaussian, non-classical noise in a frequency-resolved manner could enable the design of more optimal dynamical decoupling protocols, enhancing qubit coherence. It could also provide fundamental insights into the nature of the underlying dissipative environment.
For classical noise, the frequency-resolved higher noise cumulants (so-called polyspectra [@Rao1984book]) provide a full characterization. Recent work has proposed [@Viola2016; @Ramon2019] and demonstrated [@Oliver2019] measurement protocols to reconstruct polyspectra using a qubit driven by classical non-Gaussian noise. However, the proper generalization of these ideas to truly [*quantum*]{} non-Gaussian noise (as produced by a generic quantum bath) remains an interesting open question; here, the non-commutativity of noise operators at different times poses a challenge as to how best to define polyspectra.
In this paper, we show that the Keldysh approach [@Levitov1996; @Nazarov2003; @BelzigPRL2010; @Clerk2011; @Hofer2017], a method used extensively to characterize low-frequency noise, also provides an unambiguous and practically useful way to describe non-Gaussian quantum bath noise in the frequency domain. It provides a systematic way to construct a quasiprobability distribution to describe the noise, and to assess whether the noise can be faithfully mimicked by completely classical noise processes [@Nazarov2003; @Clerk2011]. It also has a direct operational meaning: the “quantum polyspectra" we introduce are [*exactly*]{} the quantities that contribute to the dephasing of a coupled qubit at each order in the coupling. Moreover, these quantities can be measured using [*the same*]{} non-Gaussian noise spectroscopy techniques designed for classical noise sources [@Viola2016; @Oliver2019; @RBLiu2019]; one does not have to decide in advance whether the noise is classical or quantum to perform the characterization. Note that a recent work presented a method to measure arbitrary quantum bath correlation functions [@RBLiu2019]; in contrast, our work focuses on characterizing the most physically-relevant correlation function at each order and identifying a corresponding quasiprobability.
To highlight the utility of our approach, we apply it to the concrete but non-trivial case of photon shot noise in a driven-damped bosonic mode (a relevant source of dephasing noise in circuit QED systems [@Schoelkopf2006; @Devoret2019] among others). Prior work used the Keldysh approach to study this noise at zero frequency [@Clerk2011; @Clerk2016]; here we instead focus on the behaviour of the frequency-resolved third cumulant, the “quantum bispectrum" (QBS). We show that the QBS reveals important new physics and distinct quantum signatures: at low temperatures, qualitatively new features emerge that would never be present in a classical model with only thermal fluctuations. We also show that the QBS is a generic tool for revealing the breaking of detailed balance and violation of Onsager-like symmetry relations. We find that the photon shot noise QBS violates detailed balance at low temperatures.
[*Keldysh ordering and quantum polyspectra–* ]{} Consider first a classical noise process $\xi(t)$. Its moment generating function (MGF) is defined as $$\begin{aligned}
\label{Eq:MGFcl}
\Lambda_{\rm class}[F(t); t_f] & = \overline{ \exp \left[ -i \int_0^{t_f} dt\, F(t) \xi(t) \right] },\end{aligned}$$ where the bar indicates a stochastic average. Functional derivatives of $\Lambda_{\rm class}$ with respect to $F(t)$ can be used to calculate arbitrary-order correlation functions of $\xi(t)$, while functional derivatives of $\ln \Lambda_{\rm class}$ generate the cumulants of $\xi(t)$ (see e.g. [@JacobsStochasticBook]). Fourier transforming these cumulants yields the polyspectra, which completely characterize the noise in the frequency domain [@Rao1984book]. In the quantum case, our noise is a Heisenberg picture operator $\hat{\xi}(t)$ whose evolution is generated by the Hamiltonian of some bath; we take $\hat{\xi}(t)$ to be Hermitian for simplicity. Defining correlation functions now has some subtlety, as $\hat{\xi}(t)$ will not in general commute with itself at different times; hence, different time-ordering choices yield different results. Correlation functions at a given order describe both how the bath responds to external perturbations and its intrinsic fluctuations [@Kamenev2011book]. We are interested here in characterizing the latter quantity, and asking whether these fluctuations are equivalent to an effective classical noise process.
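For purely classical noise records the polyspectra can be estimated directly from sampled data. As a minimal numerical sketch (the noise model, segment averaging and overall normalization below are illustrative choices, not taken from any specific protocol), the bispectrum can be approximated by averaging biperiodograms $X[\omega_1]X[\omega_2]X^*[\omega_1+\omega_2]$ over independent segments:

```python
# Minimal sketch: crude bispectrum estimate of a synthetic, skewed classical
# noise record via averaged biperiodograms.  Normalization and windowing needed
# for a quantitatively accurate estimate are omitted.
import numpy as np

rng = np.random.default_rng(0)
n_seg, n_t, dt = 200, 512, 0.05
gamma = 1.0

def noise_segment():
    # Illustrative non-Gaussian classical noise: a squared Ornstein-Uhlenbeck process.
    x = np.zeros(n_t)
    for i in range(1, n_t):
        x[i] = x[i - 1] - gamma * x[i - 1] * dt + np.sqrt(dt) * rng.normal()
    return x**2 - np.mean(x**2)

acc = np.zeros((n_t, n_t), dtype=complex)
idx = np.arange(n_t)
for _ in range(n_seg):
    X = np.fft.fft(noise_segment()) * dt          # discrete approximation of X[omega]
    acc += X[:, None] * X[None, :] * np.conj(X[(idx[:, None] + idx[None, :]) % n_t])
bispec = acc / (n_seg * n_t * dt)                 # averaged biperiodogram (up to normalization)

omega = 2 * np.pi * np.fft.fftfreq(n_t, dt)
print("S_2 estimate at (w1, w2) = ({:.2f}, {:.2f}):".format(omega[1], omega[1]), bispec[1, 1])
```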
The well-developed machinery of Keldysh quantum field theory provides a precise method for accomplishing our task [@Levitov1996; @Nazarov2003; @BelzigPRL2010; @Clerk2011; @Hofer2017]. While this approach is completely general, the simplest derivation is to imagine coupling an ancilla qubit to $\hat{\xi}$, such that the only qubit dynamics is from the interaction picture Hamiltonian $\hat H_{\mathrm{int}} (t) = \frac{1}{2} F(t) \hat{\xi}(t) \hat \sigma_z$. We then use the dephasing of the qubit to [*define*]{} the MGF of the noise in the quantum case, exactly like we would if the noise were classical: $$\begin{aligned}
\label{Eq:MGFqu}
& \Lambda[F( t );t_f] \equiv
\langle \hat{\sigma}_{-}(t_f) \rangle / \langle \hat{\sigma}_{-}(0) \rangle,
\\
\label{Eq:MGFK}
= &
\textrm{Tr} \left[
\mathcal{T} e^{- \frac{i}{2} \int_0^{t_f} dt' F(t') \hat{\xi}(t')}
\hat{\rho}_{\rm B}
\tilde{\mathcal{T}} e^{-\frac{i}{2} \int_0^{t_f} dt' F(t') \hat{\xi}(t')}
\right].
$$ Here $\hat{\rho}_B$ is the initial bath density matrix, the trace is over bath degrees of freedom, and $\mathcal{T}$ ($\tilde{\mathcal{T}}$) is the time-ordering (anti-time ordering) symbol. Expanding $\Lambda$ in powers of $F(t')$ defines correlation functions at a given order with a particular time-ordering prescription (the so-called Keldysh ordering). We stress that this approach amounts to trying to ascribe the qubit evolution to an effective classical stochastic process; this correspondence then defines cumulants (and implicitly a quasiprobability) for the quantum noise of interest. For truly classical noise, Eq. (\[Eq:MGFK\]) reduces to the classical MGF in Eq. (\[Eq:MGFcl\]). The moments and corresponding quasiprobability defined here are intrinsic to the noisy system, and can be used to predict the outcomes of a wide class of schemes designed to measure this noise [@Nazarov2003; @Hofer2017]. They also have a direct role in Keldysh non-equilibrium field theory: they characterize the fluctuations of the “classical" field associated with the operator $\hat{\xi}(t)$. At each order, the Keldysh-ordered correlation function describes the intrinsic fluctuations of the system [@Kamenev2011book]. In contrast, the remaining independent correlation functions at the same order describe how the system responds to external fields which couple to $\hat{\xi}(t)$ (either higher-order response coefficients, or generalized noise susceptibilities; see Supplemental Material (SM) for details [@SI]). We stress again that at each order, the Keldysh-ordered correlation function is precisely the correlation function “seen" by the qubit.
It follows that the Keldysh-ordered cumulants $C^{(k)}(\vec{t}_{k}) \equiv \langle \langle \hat{\xi}\left( t_1\right) \cdots
\hat \xi \left( t_k\right) \rangle\rangle_\mathcal{K}$ of the noise can be generated from $\chi[F(t);t_f] \equiv \ln \Lambda[ F(t); t_f ] $ via $$\label{eq:ncdef}
\chi[F(t);t_f] = \sum_{\ell=1}^\infty\!\frac{(-i)^\ell }{ \ell !}\prod_{j=1}^\ell
\left[ \int_0^{t_f}\!\! dt_j F(t_j) \right]\!C^{(\ell)}(\vec{t}_{\ell}),$$ where we define $\vec{t}_n \equiv (t_1,\ldots,t_n)$. Explicit expressions for the first few Keldysh-ordered cumulants are provided in Eqs. (\[SEq:Kcml\_2\]) and (\[SEq:Kcml\_3\]) of the SM [@SI]. The Keldysh-ordered second cumulant is simply a symmetrized correlation function, whereas the third cumulant corresponds to suppressing time-orderings where the earliest operator appears in the middle of an expectation value.
For stationary noise, the $k$-th order cumulant $C^{(k)}(\vec{t}_{k})$ only depends on the $k-1$ time separations $\tau_j \equiv t_{j+1}-t_1$, $j= 1,\ldots, k-1 $. We thus define the quantum polyspectra as Fourier transforms of the Keldysh-ordered cumulants with respect to $\{ \tau_j \}$: $$\begin{aligned}
&S_{n} [\vec{\omega}_{n}] \equiv \!\int_{\mathbb{R}^n}\! d\vec{\tau}_{n} \,
e^{-i \vec{\omega}_{n}\cdot\vec{\tau}_{n}} C^{(n+1)}(\vec{\tau}_{n}), \quad n \geq 1.
\label{eq::polyspectra}\end{aligned}$$ For more discussion on properties of classical polyspectra, see Refs. [@Elgar1994; @Viola2016; @Ramon2019]. The zero frequency limit $\omega_j \to 0 $ of $S_{n} [\vec{\omega}_{n}]$ characterizes fluctuations in $\hat m=\int_0^t dt' \hat \xi ( t' ) $ in the long-time limit (so-called full counting statistics (FCS)). This is the typical setting where the Keldysh approach has found great utility, largely for studying electronic current fluctuations. Here we extend the method to study non-classical, non-Gaussian noise [*at non-zero frequencies*]{} (see also Ref. [@PekolaPRB2006] for an application to frequency-dependent current noise).
![image](S2_10.eps){width="180mm"}
[*Photon shot noise dephasing–* ]{} In what follows, we focus on the physics of the frequency-dependent third cumulant, the so-called quantum bispectrum (QBS) $S_2[\omega_1,\omega_2]$; we drop the subscript $2$ hereafter. The quantum non-Gaussian noise of interest will be the energy fluctuations of a driven-damped bosonic mode. As we will see, the frequency-dependent QBS reveals a host of physics that is not apparent if one only considers the low-frequency behaviour of fluctuations (as has been studied earlier [@Harris2010; @Clerk2011; @Clerk2016]).
Our “bath" here is a driven damped cavity mode $c$ (frequency $\omega_c$, Markovian energy decay rate $\gamma$). Coupling the photon number of this mode to an ancilla qubit, working in a rotating frame at the drive frequency, and letting $\hat{\rho}$ denote the qubit-cavity reduced density matrix, the system dynamics follows the master equation $$\dot {\hat \rho} =
- i [ \hat H_0 + \hat H_{\mathrm{int}} (t) ,\hat \rho ] + \gamma ( {\bar n}_\mathrm{th} + 1 ) \mathcal{D} [ \hat c ]\hat \rho + \gamma {\bar n}_\mathrm{th} \mathcal{D} [\hat c^\dag ]\hat \rho.
\label{Eq:meq}$$ Here $ \mathcal{D} [ \hat A ]\hat \rho = \hat A\hat \rho {\hat A^\dag } - ( {{\hat A^\dag }\hat A\hat \rho + \hat \rho {\hat A^\dag }\hat A} )/2$ is the Lindblad dissipator and ${\bar n}_\mathrm{th} $ the thermal photon number associated with the cavity dissipation. Letting $f$ ($\delta$) denote the drive amplitude (detuning), the cavity Hamiltonian $\hat H_0$ is $$\hat H_0 =- \delta {\hat c^\dag }\hat c - \left( f \hat c + \mathrm{H.c.} \right) .$$ Finally, to extract the statistics of the photon number $\hat n = \hat c^\dag \hat c$, we couple the driven cavity to an ancilla qubit via $\hat H_{\mathrm{int}} \left( t\right)= F(t) \hat n \hat \sigma_z /2$. With this setup in place, we solve the master equation in Eq. (\[Eq:meq\]) to obtain the time-dependent qubit dephasing; from Eq. (\[Eq:MGFK\]), this directly yields the Keldysh-ordered MGF $\Lambda$ for the cavity photon number fluctuations. Even with an arbitrary time-dependent coupling $F(t)$, the qubit dephasing can be solved exactly using an extension of the phase space method in Ref. [@Utami2007] (see also SM [@SI]). An equivalent approach is to calculate correlation functions using standard techniques (e.g. quantum regression theorem, Heisenberg-Langevin equations) [@Gardiner2004], and then apply the Keldysh ordering defined in Eqs. (\[Eq:MGFK\]) and (\[eq:ncdef\]). For the remainder of the paper, we will always take the long time limit $t_f \rightarrow \infty$, so that the fluctuations are stationary. One finds the QBS can be written as: $$\begin{aligned}
S[\omega_1,\omega_2] =
S_{\rm th}[\omega_1,\omega_2] +
S_{\rm dr}[\omega_1,\omega_2],
\label{eq:QBSDecomposition}\end{aligned}$$ where the first term is completely independent of the drive $f$, and the second term is proportional to $|f|^2$.
[*Quantum bispectrum of drive-independent photon number fluctuations–* ]{} The $f$-independent QBS $S_{\rm th}[\omega_1,\omega_2]$ can be calculated by solving Eq. (\[Eq:meq\]) with $f=0$. In this case, the cavity relaxes to a thermal steady state with no coherence between different Fock states. Its fluctuations can thus be mapped to a classical Markovian master equation. For such a classical and thermal Markov process, the bispectrum $S_{\mathrm{th} } [ {\omega}_{1},{\omega}_{2} ] $ [*must*]{} always be real [@Semerjian2004; @Sinitsyn2016]. In the SM, we show that in our case $S_{\mathrm{th} } [ {\omega}_{1},{\omega}_{2} ] $ must also be positive semidefinite [@SI]. Letting $\omega_{3} \equiv -\omega_1 -\omega_2$ in all equations that follow, our full calculation for the Keldysh-ordered QBS yields as expected a real, positive function: $$\label{Eq:bispmth}
S_{\mathrm{th} } [ {\omega}_{1},{\omega}_{2} ] =
\mathcal{C}_{{\bar n}_\mathrm{th} } \gamma^2
\left( 6 \gamma^2+ \sum\limits_{j=1}^3 { \omega}^2 _j \right) \Big / \prod\limits_{j=1}^3 ( \gamma^2 + { \omega}^2 _j ),$$ with $\mathcal{C}_{{\bar n}_\mathrm{th} }= {\bar n}_\mathrm{th} ( {\bar n}_\mathrm{th} +1 ) ( 2{\bar n}_\mathrm{th} +1) $. We see that the frequency dependence of this contribution to the bispectrum is the same both in the classical high-temperature limit ${\bar n}_\mathrm{th} \to \infty $, and in the extreme quantum limit ${\bar n}_\mathrm{th} \to 0$; the only temperature dependence is in the prefactor. $S_{\mathrm{th} } [ {\omega}_{1},{\omega}_{2} ]$ vanishes in the absence of thermal fluctuations (i.e. ${\bar n}_\mathrm{th} \to 0$). In the limit ${\bar n}_\mathrm{th} \to 0$, this expression corresponds (as expected) to the bispectrum of asymmetric telegraph noise (see e.g., [@Sinitsyn2013]), corresponding to fluctuations between the $n=0$ and $n=1$ Fock states. While our general result here suggests that the $\omega$ dependence of the QBS is not sensitive to quantum corrections, we will see that this is not true as soon as a coherent drive is added to the system.
[*Quantum bispectrum of driven photon-number fluctuations–* ]{} We now consider the drive-dependent contribution to the bispectrum, $S_{\mathrm{dr} } [ {\omega}_{1},{\omega}_{2} ]$ in Eq. (\[eq:QBSDecomposition\]). This quantity only depends on the drive amplitude $f$ through the overall prefactor ${\bar n}_{ \rm{dr} } =4 |f|^2/(\gamma^2+ 4\delta^2)$ (the intracavity photon number generated by $f$). Note that $S_{\mathrm{dr} } [ {\omega}_{1},{\omega}_{2} ]$ remains non-zero at zero temperature, and is the only contribution to the QBS in this limit.
We find that the drive-dependent QBS shows striking quantum signatures. In the classical limit of high temperatures, it is always real and positive (like the purely thermal contribution) [@SI]. However, as temperature is lowered and quantum fluctuations become more important, this quantity can have a negative real part, and even a non-zero imaginary part. These purely quantum features become more pronounced as the magnitude of the drive detuning $\delta$ is increased. The real and imaginary parts of $S_{\mathrm{dr} } [ {\omega}_{1},{\omega}_{2} ]$ are plotted for zero temperature in Figs. \[fig:S2\_10\](b) and \[fig:S2\_10\](c) for a large drive detuning ($\delta / \gamma =10$).
We start by discussing the surprising negativity of the real part of the zero-temperature QBS. Negativity in the zero-frequency limit was already discussed in [@Clerk2011; @Clerk2016]. These works showed that this negativity is a purely quantum effect, and that moreover, for large detunings the magnitude of the third cumulant is so great that the fluctuations cannot be described by a positive-definite quasiprobability distribution. Our results show how this striking non-classicality also manifests itself in the [*non-zero*]{} frequency fluctuations. It also demonstrates that this quantum correction has a non-trivial frequency dependence. We find that the quantum bispectrum $ S_{\mathrm{dr}} [ {\omega}_{1},{\omega}_{2}]$ has a different frequency dependence in the quantum limit (${\bar n}_{ \rm{th} }=0$) from its classical limit ${\tilde S}_{ \mathrm{cl}}[ {\omega}_{1},{\omega}_{2}]$. To see this, it is useful to write $$\frac{S_{\mathrm{dr}} [ {\omega}_{1},{\omega}_{2}] }{{\bar n}_{ \rm{dr} }}
= ( 2{\bar n}_{ \rm{th} }+1)^2 \! {\tilde S}_{ \mathrm{cl}}[ {\omega}_{1},{\omega}_{2}]
+
{\tilde S}_{ \mathrm{q}}[ {\omega}_{1},{\omega}_{2}] .$$ The first term is the classical contribution which dominates in the high-temperature limit; ${\tilde S}_{ \mathrm{cl}}[ {\omega}_{1},{\omega}_{2}]$ is independent of both $ {\bar n}_{ \rm{dr} } ,{\bar n}_{ \rm{th} }$, and is real and positive for all frequencies. Its form can be found directly from a classical Langevin equation calculation (see Eq. of the SM [@SI]). In contrast, the second term is the temperature-independent quantum correction. It has a [*completely different*]{} frequency dependence from the classical limit, as described by ${\tilde S}_{ \mathrm{q}}[ {\omega}_{1},{\omega}_{2}]$ $${\tilde S}_{ \mathrm{q}}[ {\omega}_{1},{\omega}_{2}] = -\frac{1}{2 } \! \! \sum\limits_{ \substack{\alpha \ne \beta \\ \alpha ,\beta = 1,2,3 }} \frac{\frac{\gamma }{2} + i{ \omega} _\beta }{ (\gamma - i { \omega} _\alpha) [( \frac{\gamma }{2} + i { \omega}_\beta )^2+ \delta ^2]}.
\label{eq:Sdr2qu}$$ This function can have both a negative real part, and a non-zero imaginary part. In the quantum limit ${\bar n}_{ \rm{th} }=0$, one finds that the real part of the QBS only becomes negative above a critical value of the detuning $|\delta|$. Moreover, the initial onset of negativity occurs at $\omega_1 = \omega_2 = 0$. In the large-detuning regime $|\delta| \gg \gamma$, the negative region of the QBS is peaked near a polygon whose shape is defined by the resonance conditions $\omega_j = \pm \delta$ ($j=1,2,3$).
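As a minimal numerical sketch (with illustrative values of $\gamma$, $\delta$ and of the frequency grid), one can evaluate ${\tilde S}_{\mathrm{q}}$ directly from the expression above and map out where its real part becomes negative:

```python
# Minimal sketch: evaluate the quantum correction S_q[w1, w2] defined above on a
# frequency grid and locate where its real part turns negative.  The values of
# gamma, delta and the grid are illustrative choices.
import numpy as np

gamma, delta = 1.0, 10.0

def s_q(w1, w2):
    w = np.array([w1, w2, -w1 - w2])
    total = 0.0 + 0.0j
    for a in range(3):
        for b in range(3):
            if a == b:
                continue
            num = gamma / 2 + 1j * w[b]
            den = (gamma - 1j * w[a]) * ((gamma / 2 + 1j * w[b]) ** 2 + delta ** 2)
            total += num / den
    return -0.5 * total

ws = np.linspace(-15.0, 15.0, 301)
re = np.array([[s_q(w1, w2).real for w1 in ws] for w2 in ws])
print("most negative Re S_q on grid :", re.min())
print("fraction of grid with Re < 0 :", np.mean(re < 0))
```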
![Frequency dependence of the imaginary parts of the photon-shot noise bispectrum $\mathrm{Im} S [ {\omega}_{1} ,{\omega}_{2} ]$ in the extreme quantum limit ${\bar n}_{ \rm{th} }=0$ at different detunings. Parameters: (a) $\delta / \gamma = 0$, (b) $\delta / \gamma = 1$. Note that for ${\bar n}_{ \rm{th} }=0$ the full quantum bispectrum coincides with the drive-dependent contribution $S_{\rm dr} [ {\omega}_{1},{\omega}_{2} ]$.[]{data-label="fig:ImS2"}](ImS2.eps){width="85mm"}
![Time-dependent Keldysh-ordered photon-shot noise third cumulant $C_{\mathrm{dr} }^{(3)}(| t|,|t|)$ in the quantum limit ${\bar n}_{ \rm{th} } =0$ for $t>0$ (red solid lines) and $t<0$ (orange dashed lines). Difference between curves highlights asymmetry under time reversal $t \to -t$. In contrast, the thin blue curves correspond to the same correlator $C_{\mathrm{dr} }^{(3)}(| t|,|t|)$ in the classical limit normalized by thermal photon number, which is symmetric. All correlation functions are normalized by in-cavity drive photon number ${\bar n}_{ \rm{dr} }$. Detunings are: (a) $\delta=0$, and (b) $\delta/\gamma=5$.[]{data-label="fig:timeskn"}](n3K.eps){width="85mm"}
[*Imaginary bispectrum and violations of detailed balance–* ]{} We now turn to another striking feature of the photon shot noise QBS: while in the classical, high-temperature limit it is always real, the quantum correction ${\tilde S}_{ \mathrm{q}}[ {\omega}_{1},{\omega}_{2}]$ has a non-zero imaginary part (see Fig. \[fig:ImS2\]). We stress that this non-trivial imaginary bispectrum can [*only*]{} be probed at finite frequency: by its very definition in Eq. , the imaginary part of the QBS must vanish if either $ \omega_1=0 $ or $\omega_2 =0$. The non-zero imaginary QBS is directly related to the basic symmetries of our quantum noise process, in particular the violation of higher-order Onsager reciprocity relations [@Onsager1931; @Semerjian2004; @Sinitsyn2016]. If a temporal cumulant $C^{(n+1)}(\vec{\tau}_{n})$ is invariant under $\vec{\tau}_{n} \to -\vec{\tau}_{n}$ (i.e. exhibits Onsager reciprocity), then the corresponding polyspectrum must be real [@Brillinger1967]. Further, a classical Markov process obeying detailed balance always respects this symmetry. In our system quantum corrections (as described by ${\tilde S}_{ \mathrm{q}}[ {\omega}_{1},{\omega}_{2}]$) cause a breaking of this symmetry and hence of detailed balance. There is a long history of studying detailed balance in non-equilibrium driven-dissipative quantum systems (see e.g. [@Kubo1966; @Agarwal1973; @Carmichael1976; @Tomita1973; @Tomita1974; @Carmichael2002]); the QBS provides yet another tool for exploring this physics. In the SM, we discuss another related quantum system which exhibits an apparent breaking of detailed balance, namely a cavity driven by squeezed noise [@SI].
For a heuristic understanding of this symmetry breaking, we consider a simpler object, the temporal (Keldysh-ordered) third cumulant $C^{(3)}(\tau_1,\tau_2)$ at $\tau_1 = \tau_2 = t$. The non-zero imaginary QBS implies that this correlator differs for $t$ and $-t$ (see Fig. \[fig:timeskn\]). Using the definition of Keldysh ordering in Eqs. (\[Eq:MGFK\]) and (\[eq:ncdef\]) we find: $$\begin{aligned}
& \langle \delta \hat n(0) \delta \hat n(t ) \delta \hat n(t )\rangle_\mathcal{K} =
\label{eq:KeldyshOrderedSkewness} \\
& \frac{1 }{2} \langle \{\delta \hat n(0) ,[\delta \hat n(t )]^2 \} \rangle
-\frac{ \Theta(-t)}{4} \langle [ \delta \hat n(t), [ \delta \hat n(t ),\delta \hat n(0)] ] \rangle
\nonumber ,\end{aligned}$$ where $\delta \hat{n}(t) = \hat{n}(t) - \langle \hat{n}(t) \rangle$, and $\Theta(t)$ is the Heaviside step-function. One finds that any imaginary quantum correction $ \mathrm{Im} \tilde{S}_q[\omega_1,\omega_2]$ is entirely due to the second term on the RHS; it is thus completely responsible for the lack of time-symmetry. What does this mean physically? As we have emphasized, the Keldysh ordering is relevant for any measurement protocol that directly probes $\hat{n}(t)$ [@Nazarov2003; @Hofer2017]. In contrast, the first term on the RHS of Eq. (\[eq:KeldyshOrderedSkewness\]) would be relevant if we correlated a measurement of $\delta \hat{n}$ with a separate, direct measurement of $\delta \hat{n}^2$ (i.e. the Keldysh approach would give this answer for this sort of setup [@Hofer2017]). These protocols are not equivalent: measuring $\delta \hat{n}$ and then squaring the result has a different back-action than if one directly measured $\delta \hat{n}^2$. The latter measurement provides less information (and hence has less backaction), as it provides no information on the sign of $\delta \hat{n}$. This now provides a heuristic way of understanding the second term on the RHS of Eq. (\[eq:KeldyshOrderedSkewness\]) (and the consequent lack of time symmetry). For $t<0$, one is first measuring $\delta \hat{n}^2$. As a result, the two measurement protocols have different backaction effects, and the two correlation functions are distinct. In contrast, for $t>0$, the earlier measurement is the same in both protocols, hence the backaction effect is identical, and the two protocols agree.
While our heuristic explanation here invokes measurement backaction, we stress yet again that the Keldysh-ordered correlation function is an [*intrinsic*]{} property of the driven cavity system [@Kamenev2011book; @Nazarov2003], with a relevance that goes beyond the analysis of just a single measurement setup. Further, this is the ordering that is “chosen" by our qubit: if one simply interprets the qubit dephasing as arising from classical noise, then the Keldysh ordered bispectrum (with its imaginary part) plays the role of the bispectrum of this effective classical noise.
[*Conclusions–* ]{} We have shown how the Keldysh approach to quantum noise provides a meaningful way to define the polyspectra of non-classical, non-Gaussian noise. In the experimentally-relevant case of photon shot noise fluctuations in a driven-damped resonator, the quantum bispectrum reveals distinct quantum features and a surprising quantum-induced breaking of detailed balance. We stress that our approach amounts to interpreting the dephasing of a qubit by quantum noise as arising from an effective classical noise process. As such, the same noise spectroscopy techniques that have been used successfully to measure classical bispectra with qubits [@Viola2016; @Oliver2019] can be directly used (without modification) to measure our quantum bispectra.
This work was supported as part of the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences.
**Supplementary Material: Spectral characterization of non-Gaussian quantum noise**
Distinguishing fluctuations from response properties
====================================================
As discussed in the main text, a general $n$-point quantum correlation function describes both the intrinsic fluctuation properties of the system of interest (i.e. quantities that play the role of classical noise), as well as the response properties of the system to external applied fields. The situation is very clear at second order, where the product ${\hat{\xi}}(t) {\hat{\xi}}(t')$ can be decomposed as the sum of a commutator and an anti-commutator. The commutator determines the retarded Green function $$G^R(t) \equiv -i \Theta(t) \left \langle [ {\hat{\xi}}(t), {\hat{\xi}}(0) ] \right \rangle.$$ This describes how the average value $\langle {\hat{\xi}}(t) \rangle$ changes to first order in response to an external perturbing field $V(t)$ entering the Hamiltonian as $$\hat{H}_{\rm ext}(t) = V(t) {\hat{\xi}}.$$ The relevant Kubo formula is: $$\delta \langle {\hat{\xi}}(t) \rangle =
\int_{-\infty}^{\infty} dt' G^R(t-t') V(t').$$ In contrast, the anti-commutator describes the symmetrized noise spectral density: $$S[\omega] \equiv \frac{1}{2} \int dt e^{i \omega t} \langle \{ {\hat{\xi}}(t), {\hat{\xi}}(0) \} \rangle .$$ As has been discussed in many places (see e.g. Ref. [@ClerkRMP]), this spectral density plays the role of a classical noise spectral density.
The Keldysh technique provides an unambiguous way of extending this separation between noise and response to higher orders. A full exposition of this method is beyond the scope of this paper; we refer the reader to Ref. [@Kamenev2011book]. We sketch the main ideas needed here. In the path-integral formulation of the Keldysh technique, each operator corresponds to two different fields, the classical field $\xi_{\rm cl}(t)$ and the quantum field $\xi_{\rm q}(t)$. Averages of these fields (weighted by the appropriate Keldysh action describing the system) then correspond to operator averages with a particular time ordering. One finds that:
- Averages only involving quantum fields are necessarily zero.
- Averages involving at least one classical field $\xi_{\rm cl}(t)$ and one or more quantum fields $\xi_{\rm q}(t)$ can always be interpreted as response coefficients to an external perturbation of the form $\hat{H}_{\rm ext}(t)$.
- Averages [*only*]{} involving classical fields $\xi_{\rm cl}(t)$ do not correspond to any kind of response function. Instead, they describe the intrinsic fluctuation properties of the system.
Formally, this dichotomy arises because the perturbation $\hat{H}_{\rm ext}(t)$ enters the action of the system as a term that [*only*]{} involves the quantum field, i.e. $S_{\rm ext} = \int dt V(t) \xi_{\rm q}(t)$. Perturbation theory in $V(t)$ thus necessarily introduces powers of the quantum field. For example, at second order we have:
- The average $\overline{\xi_{\rm cl}(t) \xi_{\rm q}(0)}$ is directly proportional to the retarded Green function $G^R(t)$, and thus describes linear response to the external field.
- The average $\overline{\xi_{\rm cl}(t) \xi_{\rm cl}(0)}$ is proportional to $\langle \{ {\hat{\xi}}(t), {\hat{\xi}}(0) \} \rangle$ and thus determines the usual symmetrized noise spectral density.
The same decomposition applies at higher orders. Consider third order correlators. The average of three classical fields $\overline{\xi_{\rm cl}(t_1) \xi_{\rm cl}(t_2) \xi_{\rm cl}(0)}$ is precisely the Keldysh ordered correlator discussed in the main text; it cannot be associated with a response coefficient. The remaining non-zero correlators describe different kinds of response:
- The average $\overline{\xi_{\rm cl}(t) \xi_{\rm q}(t') \xi_{\rm q}(t'')}$ represents a second-order Kubo response coefficient. It determines to second order how $\langle {\hat{\xi}}(t) \rangle$ is modified by $\hat{H}_{\rm ext}(t')$ at earlier times (i.e. how it depends on $V(t')$ and $V(t'')$).
- The average $\overline{\xi_{\rm cl}(t) \xi_{\rm cl}(t') \xi_{\rm q}(t'')}$ describes a first-order noise susceptibility [@Reulet2007]. It determines how the symmetrized correlator $\langle \{ {\hat{\xi}}(t), {\hat{\xi}}(t') \} \rangle$ is modified to first order by $\hat{H}_{\rm ext}(t'')$.
The arguments sketched here provide perhaps the deepest justification for considering Keldysh ordered correlation functions: they provide a clear and unambiguous way to distinguish fluctuation properties from response properties. We stress that an arbitrary correlation function can always be written as a linear combination of the Keldysh-ordered correlator (which describes pure noise) and additional terms describing response properties.
Explicit expressions for the second and third Keldysh-ordered cumulants
=======================================================================
For concreteness, here we provide explicit expressions for the first few Keldysh-ordered cumulants $C^{(k)}(\vec{t}_{k}) \equiv \langle \langle \hat{\xi}\left( t_1\right) \cdots \hat \xi \left( t_k\right) \rangle\rangle_\mathcal{K}$ defined by Eqs. (\[Eq:MGFK\]) and (\[eq:ncdef\]) of the main text. The second order cumulant function $C^{(2)}(\vec{t}_{2}) $ is just the symmetrized auto-correlation function of $ \hat \xi ( t )$ $$\label{SEq:Kcml_2}
C^{(2)}(\vec{t}_{2}) = \langle \langle \hat \xi\left( {t_1} \right) \hat \xi\left( {t_2} \right) \rangle \rangle_\mathcal{K} = \frac{1}{2}\langle\{ \delta \hat \xi\left( {t_1} \right) , \delta \hat \xi\left( {t_2} \right) \}\rangle,$$ where $\delta \hat \xi=\hat \xi -\langle \hat \xi \rangle $. However, the third cumulant corresponds to a more complex ordering $$\label{SEq:Kcml_3}
C^{(3)}(\vec{t}_{3}) = \langle \langle \hat \xi\left( {t_1} \right) \hat \xi\left( {t_2} \right) \hat \xi\left( {t_3} \right)\rangle \rangle_\mathcal{K} = \frac{1}{4}\sum_{\vec{\pi}_3 \in \mathcal{P}_3} [1-\Theta( t_{\pi _1} - t_{\pi _2}) \Theta( t_{\pi _3} - t_{\pi _2})] \langle \delta \hat \xi (t_{\pi _1}) \delta \hat \xi (t_{\pi _2}) \delta \hat \xi (t_{\pi _3}) \rangle,$$ where $\mathcal{P}_3$ denotes the set of all possible permutations of $(123)$ indices. Such ordering is given by an average over all permutations of the three displaced operators $\delta \hat \xi (t_j)$, except for the terms where the earliest time appears in the middle position (as implied by the step functions), in agreement with expansion of the operator in Eq. in powers of coupling $F(t')$. A similar expression of Keldysh-ordered third cumulant has also been derived for current operators in Ref. [@PekolaPRB2006].
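The ordering prescription of Eq. (\[SEq:Kcml\_3\]) is straightforward to implement numerically. The minimal sketch below assumes a user-supplied (hypothetical) routine `three_point(ta, tb, tc)` returning the ordered moment $\langle \delta\hat\xi(t_a)\delta\hat\xi(t_b)\delta\hat\xi(t_c)\rangle$, e.g. obtained from the quantum regression theorem for a concrete model:

```python
# Minimal sketch of the Keldysh ordering of the third cumulant given above:
# average over permutations of a given three-point function, suppressing the
# orderings in which the earliest time sits in the middle.
from itertools import permutations

def keldysh_third_cumulant(three_point, t1, t2, t3):
    """three_point(ta, tb, tc) is a hypothetical user-supplied routine returning
    <d_xi(ta) d_xi(tb) d_xi(tc)> in the written (left-to-right) operator order."""
    theta = lambda t: 1.0 if t > 0 else (0.5 if t == 0 else 0.0)  # Theta(0)=1/2 convention assumed
    total = 0.0
    for ta, tb, tc in permutations((t1, t2, t3)):
        weight = 1.0 - theta(ta - tb) * theta(tc - tb)   # drop "earliest in the middle" orderings
        total += weight * three_point(ta, tb, tc)
    return total / 4.0

# Sanity check: for a commuting (classical) three-point function the Keldysh
# ordering reduces to the ordinary moment, since 4 of the 6 permutations survive
# and the prefactor is 1/4.
print(keldysh_third_cumulant(lambda *ts: 1.0, 0.0, 1.0, 2.0))   # -> 1.0
```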
Phase space method for computing Keldysh-ordered cumulants of a driven damped cavity
====================================================================================
In this section, we outline the phase space method to calculate Keldysh-ordered cumulants. However, we remark that once we have defined the unique Keldysh ordering for each higher cumulant using Eqs. (\[Eq:MGFK\]) and (\[eq:ncdef\]), standard techniques for computing multi-point correlation functions (e.g. Langevin equations of motion, and the quantum regression theorem) work equally well for the Keldysh-ordered cumulants.
In the phase space method, we need to solve the time evolution of qubit coherence $\hat \rho _{ \uparrow \downarrow } (t) \equiv \langle\uparrow \! |\hat \rho (t)|\! \downarrow \rangle$ $$\dot {\hat \rho}_{ \uparrow \downarrow } =
- i [ \hat H_0 ,\hat \rho_{ \uparrow \downarrow } ] - i \{ \hat H_{\mathrm{int}}(t) ,\hat \rho_{ \uparrow \downarrow } \}+ \gamma ( {\bar n}_\mathrm{th} + 1 ) \mathcal{D} [ \hat c ]\hat \rho_{ \uparrow \downarrow } + \gamma {\bar n}_\mathrm{th} \mathcal{D} [\hat c^\dag ]\hat \rho_{ \uparrow \downarrow },
\label{SEq:meqcoh}$$ which is a direct extension, with a time-modulation $F(t)$ in the interaction $\hat H_{\mathrm{int}} \left( t\right)= {\lambda} F(t) \hat n \hat \sigma_z /2$, of the technique used in Ref. [@Utami2007]. Here we use a constant coefficient $\lambda$ to keep track of orders in the expansion in the coupling; at the end of the calculation, one can always set $\lambda=1$. We stress that if we replace the time-independent coupling $\lambda$ with a time-dependent one, the relevant derivations in Ref. [@Utami2007] still hold rigorously, and we refer interested readers to this paper for more detail.
Without loss of generality, the system initial state can be chosen as a product state between the qubit and the cavity, with the cavity in thermal equilibrium. Thus, the Wigner function $W(x,p;t)$ of the coherence operator $\hat \rho _{ \uparrow \downarrow } (t) $ is Gaussian throughout the time evolution. Moreover, for the Fourier transform of $W(x,p;t)$, we can assume the following ansatz [@Utami2007] $$W\left[ {k,q;t} \right] = e^{ - \nu ( t )} \exp \left( - i [ {k\bar x (t) + q\bar p (t)} ] - \frac{1}{2} (k^2+q^2) \sigma _s (t) \right),$$ from which the moment generating function can be computed as $\Lambda[F( t );t_f] = e^{-\nu ( t_f )} $. Substituting this ansatz into the master equation in Eq. (\[SEq:meqcoh\]), we then need to solve a set of ordinary differential equations for the coefficient functions
\[SEq:cohODE\] $$\begin{aligned}
&\dot \nu_\mathrm{th} = i\lambda F ( t ) \left( {\sigma _s- \frac{1}{2}} \right) ,\\
& {\dot \sigma }_s = \gamma \left( {\bar n}_\mathrm{th}+ \frac{1}{2} \right) - \gamma {\sigma _s} - iF ( t ) \lambda \sigma _s^2 + \frac{ i\lambda F \left( t\right) }{4} , \\
&\dot \nu_\mathrm{dr} = \frac{{i\lambda }}{2} F(t) ({\bar x}^2 +{\bar p}^2) ,\\
& \dot {\bar x} = - \delta \bar p + \sqrt 2 \, \mathrm{Im}f - iF ( t ) \lambda {\sigma _s} \bar x- \frac{\gamma }{2}\bar x , \label{SEq:mfGSantcoef-xb} \\
& \dot {\bar p} = \delta \bar x + \sqrt 2 \, \mathrm{Re}f - i F (t) \lambda {\sigma _s} {\bar p} - \frac{\gamma }{2}\bar p , \label{SEq:mfGSantcoef-pb}
\end{aligned}$$
where the exponent $\nu (t)=\nu_\mathrm{th}(t) +\nu_\mathrm{dr}(t)$ can be automatically written as a sum of drive-independent and drive-dependent parts.
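As a minimal numerical sketch (illustrative parameters; $\lambda=1$; the cavity is initialized in an undriven thermal state and the coupling is taken constant), the coefficient equations above can be integrated directly for a given profile $F(t)$, yielding the dephasing factor $\Lambda(t_f)=e^{-\nu(t_f)}$:

```python
# Minimal sketch: integrate the coefficient ODEs above for a constant coupling
# F(t) = F0 and read off Lambda(t_f) = exp(-nu(t_f)).  Parameters, the choice of
# F(t) and the initial conditions are illustrative; lambda is set to 1.
import numpy as np
from scipy.integrate import solve_ivp

gamma, delta, nth = 1.0, 0.5, 0.2
f_drive = 0.3 + 0.0j           # drive amplitude f
F0 = 0.1                       # constant qubit-cavity coupling

def rhs(t, y):
    nu_th, nu_dr, sigma_s, xbar, pbar = y
    F = F0
    dnu_th = 1j * F * (sigma_s - 0.5)
    dnu_dr = 0.5j * F * (xbar ** 2 + pbar ** 2)
    dsigma = gamma * (nth + 0.5) - gamma * sigma_s - 1j * F * sigma_s ** 2 + 0.25j * F
    dxbar = -delta * pbar + np.sqrt(2) * f_drive.imag - 1j * F * sigma_s * xbar - 0.5 * gamma * xbar
    dpbar = delta * xbar + np.sqrt(2) * f_drive.real - 1j * F * sigma_s * pbar - 0.5 * gamma * pbar
    return [dnu_th, dnu_dr, dsigma, dxbar, dpbar]

# state ordering: nu_th, nu_dr, sigma_s, xbar, pbar (complex-valued)
y0 = np.array([0, 0, nth + 0.5, 0, 0], dtype=complex)
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-8, atol=1e-10)
nu_tf = sol.y[0, -1] + sol.y[1, -1]
print("Lambda(t_f) =", np.exp(-nu_tf))
```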
The Keldysh-ordered cumulants $C^{(\ell)}(\vec{t}_{\ell})$ can now be extracted using the equation (see Eq. (\[eq:ncdef\]) of the main text) $$\label{SEq:ncdef}
\nu (t_f) = - \sum_{\ell=1}^\infty\!\lambda^\ell\frac{ (-i)^\ell }{ \ell !}\prod_{j=1}^\ell
\left[ \int_0^{t_f}\!\! dt_j F(t_j) \right]\!C^{(\ell)}(\vec{t}_{\ell}),$$ i.e. the cumulants can be obtained by solving Eqs. (\[SEq:cohODE\]) perturbatively in orders of $\lambda$, and comparing the results to the integrals above. Since the cumulant functions $C^{(\ell)}(\vec{t}_{\ell})$ must be symmetric under permutations of their variables $\{\vec{t}_{\ell} \}$, this procedure leads to a unique result. For example, for the photon shot noise in a driven damped cavity discussed in the main text, the first few drive-independent contributions to the cumulants are given by
$$\begin{aligned}
& C_{\mathrm{th} }^{(1)}( t_1) = {\bar n}_\mathrm{th} ,\\
& C_{\mathrm{th} }^{(2)}(\vec{t}_2) ={\bar n}_\mathrm{th} ( {\bar n}_\mathrm{th} +1 ) e^{-\gamma | t_1 - t_2 |}, \\
&C_{\mathrm{th} }^{(3)}( \vec{t}_{3}) = {\bar n}_\mathrm{th} ( {\bar n}_\mathrm{th} +1 ) ( 2{\bar n}_\mathrm{th} +1 ) \exp \left( - \frac{\gamma }{2} | t_1 - t_2 | - \frac{\gamma }{2}| t_2 - t_3 | - \frac{\gamma }{2}| t_1 - t_3 | \right) . \label{SEq:cmlth_3}
\end{aligned}$$
Taking the Fourier transform of Eq. (\[SEq:cmlth\_3\]) for the third cumulant, we obtain the drive-independent QBS, as given by Eq. (\[Eq:bispmth\]) of the main text.
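As a consistency check (a minimal numerical sketch with illustrative parameters and a truncated, discretized integration grid), the double Fourier integral of the thermal third cumulant can be evaluated directly and compared against the closed form of Eq. (\[Eq:bispmth\]):

```python
# Minimal sketch: numerically Fourier transform the thermal third cumulant
# (with t1 = 0, tau_j = t_{j+1} - t_1) and compare with the closed-form thermal
# bispectrum quoted in the main text.  Grid and parameters are illustrative, so
# the two values should agree only up to discretization error.
import numpy as np

gamma, nth = 1.0, 0.5
C_pref = nth * (nth + 1) * (2 * nth + 1)

tau = np.linspace(-30.0, 30.0, 1501)
dt = tau[1] - tau[0]
T1, T2 = np.meshgrid(tau, tau, indexing="ij")
C3 = C_pref * np.exp(-0.5 * gamma * (np.abs(T1) + np.abs(T1 - T2) + np.abs(T2)))

def S_numeric(w1, w2):
    return np.sum(C3 * np.exp(-1j * (w1 * T1 + w2 * T2))) * dt * dt

def S_closed(w1, w2):
    w = np.array([w1, w2, -w1 - w2])
    return C_pref * gamma ** 2 * (6 * gamma ** 2 + np.sum(w ** 2)) / np.prod(gamma ** 2 + w ** 2)

for w1, w2 in [(0.0, 0.0), (0.7, -0.3)]:
    print((w1, w2), S_numeric(w1, w2).real, S_closed(w1, w2))
```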
Proof of non-negative energy shot noise bispectrum in a classical driven damped oscillator
==========================================================================================
In the classical limit ${\bar n}_\mathrm{th} \to \infty $, the cavity mode annihilation operator $\hat c$ in the main text can be described by a classical stochastic variable $c(t)$, describing the amplitude of a driven damped classical harmonic oscillator. The equation of motion is now given by $$d c = - ( {\gamma }/{2}-i\delta )c dt + if dt+ \sqrt{ \gamma{\bar n}_\mathrm{eff} } dW ,$$ where ${\bar n}_\mathrm{eff} ={\bar n}_\mathrm{th} +1/2 \simeq {\bar n}_\mathrm{th}$ (the $1/2$ correction is added so that second-order correlators between $c(t)$, $c^*(t')$ match their symmetrized quantum counterparts), and $dW$ is a complex-valued Wiener increment. The solution to this stochastic differential equation can be written as $c(t) = c_0 + \zeta (t)$, where $c_0 = \langle c(t) \rangle$ is a complex constant number, and $\zeta (t)$ is a complex zero-mean stochastic variable. In the long time limit, $\zeta (t)$ is Gaussian and stationary, satisfying the equation $$\langle\zeta^*(t)\zeta(t') \rangle ={\bar n}_\mathrm{eff} \exp \!\left[-i \delta (t-t') -\frac{\gamma}{2} |t-t'|\right],$$ whereas all other second correlators vanish $\langle\zeta(t)\zeta(t') \rangle =\langle\zeta^*(t)\zeta^*(t') \rangle^* \equiv 0 $. The photon number operator $\hat n$ then corresponds to the energy of the classical oscillator $n (t)=|c(t)|^2$, so that its Fourier transform can be expressed using Fourier components of $\zeta(t)$ as $$n [\omega] = \int dt e^{i \omega t} n(t)= |c_0|^2 +\int d\omega' \zeta^* [\omega - \omega']\zeta[ \omega']+c^*_0 \zeta[ \omega] +c_0 \zeta^*[ \omega] .$$
Since the Fourier transform $\zeta [\omega]$ of a Gaussian variable must also be Gaussian, polyspectra of $n(t)$ can be calculated using the expression above by applying Wick’s theorem. Noting that all the anomalous correlators vanish, the only contractions that contribute would be given by terms of the following form $$\langle\zeta^*[\omega]\zeta[\omega'] \rangle = \frac{\gamma {\bar n}_\mathrm{eff}}{\left(\omega - \delta\right)^2+\left(\frac{\gamma}{2}\right)^2} \delta (\omega +\omega') ,$$ which is always non-negative. It is then straightforward to show that both drive-independent and drive-dependent contributions to polyspectra must also be non-negative for all frequencies. In particular, the frequency dependence ${\tilde S}_{ \mathrm{cl}}[ {\omega}_{1},{\omega}_{2}]$ of the drive-dependent bispectrum in the classical limit (see main text for definition) is real and positive semidefinite, which can be explicitly written as $$\begin{aligned}
{\tilde S}_{ \mathrm{cl}}& [ {\omega}_{1},{\omega}_{2}] = \frac{1}{4} \sum\limits_{ \alpha \ne \beta ; \,\alpha ,\beta = 1,2,3 } \frac{\gamma ^2}{ \left[ {\left( \frac{\gamma }{2} \right)}^2 + {\left( \omega _\alpha +\delta \right)}^2 \right]\left[ {\left( \frac{\gamma }{2} \right)}^2 + {\left( \omega _\beta - \delta \right)}^2 \right] } .
\label{Seq:QBSdrcl}\end{aligned}$$
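A minimal simulation sketch of this classical limit (Euler-Maruyama discretization with illustrative parameters; the complex Wiener increment is taken with $\mathbb{E}[dW\,dW^*]=dt$) is:

```python
# Minimal sketch: Euler-Maruyama simulation of the classical Langevin equation
# above, followed by a crude check that the energy n(t) = |c(t)|^2 has a
# positive equal-time third cumulant in this classical limit.
import numpy as np

rng = np.random.default_rng(1)
gamma, delta, f, n_eff = 1.0, 2.0, 1.5, 5.0       # illustrative parameters
dt, n_steps, n_burn = 1e-3, 200_000, 20_000

c = 0.0 + 0.0j
n_rec = np.empty(n_steps - n_burn)
# complex Wiener increments with E[dW dW*] = dt (assumed convention)
noise = (rng.normal(0, np.sqrt(dt), n_steps) + 1j * rng.normal(0, np.sqrt(dt), n_steps)) / np.sqrt(2)
for k in range(n_steps):
    c += (-(gamma / 2 - 1j * delta) * c + 1j * f) * dt + np.sqrt(gamma * n_eff) * noise[k]
    if k >= n_burn:
        n_rec[k - n_burn] = abs(c) ** 2

dn = n_rec - n_rec.mean()
print("mean n :", n_rec.mean(), " (expect ~ n_eff + 4 f^2 / (gamma^2 + 4 delta^2))")
print("<dn^3> :", np.mean(dn ** 3), " (positive in the classical limit)")
```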
Temporal skewness for squeezed bath photon fluctuations
=======================================================
In the main text, we show a violation of higher-order Onsager reciprocity relations solely due to quantum corrections in the temporal third cumulant (skewness), which can be probed by an imaginary part in the QBS. Here we provide an example where the temporal skewness exhibits time asymmetry in both the classical and the quantum limits, and the skewness function also reveals insights into non-equilibrium dynamics in well-defined classical systems. We again consider photon shot noise in a dissipative bosonic mode, but now driven by squeezed noise. The master equation is $$\dot {\hat \rho} = - i [ \hat H_0 + \hat H_{\mathrm{int} },\hat \rho ] + \gamma ( {\bar n}_\mathrm{cl} + 1 ) \mathcal{D} [ \hat s_r ]\hat \rho + \gamma {\bar n}_\mathrm{cl} \mathcal{D} [\hat s_r^\dag ]\hat \rho ,
\label{Seq:meqSQ}$$ where $\hat s_r = \hat c \cosh{r} + \hat c^\dag \sinh r$ denotes the squeezed bath operator. In the rotating frame, the oscillator Hamiltonian is $\hat H_0 =- \delta {\hat c^\dag }\hat c$, and its interaction with the qubit is $\hat H_{\mathrm{int}} (t) = \frac{1}{2} F(t) \hat{n}(t) \hat \sigma_z$. Such a noise model has a well-defined classical limit if we let ${\bar n}_\mathrm{cl} \to \infty$, where the bosonic mode can be equivalently described by a classical stochastic variable $c(t)$. We note that the steady state of the corresponding classical model is not thermal equilibrium, enabling a violation of Onsager-like relations even in the classical limit.
For concreteness, we again consider the temporal third cumulant $C ^{(3)}( t,t)$, which can be written as a sum of classical and quantum contributions as $$C ^{(3)}( t,t)= (2{\bar n}_\mathrm{cl} + 1)^3 f( t ) \left[ {\tilde C}_{\mathrm{cl}}^{(3)}(t) - \frac{1}{(2{\bar n}_\mathrm{cl} + 1)^2}\right],$$ where $f( t )=e^{ - \gamma | t |}\cosh (2r )/4$ is an even function of time $t$ and independent of ${\bar n}_\mathrm{cl}$. The coefficient function ${\tilde C}_{\mathrm{cl}}^{(3)}(t)$ for the classical contribution is given by $${\tilde C}_{\mathrm{cl}}^{(3)}(t) =\cosh^2 (2r)+ \frac{ \gamma ^2 \sinh^2 (2r) }{ \gamma ^2+ 4\delta ^2} [ 1+ 2 \cos ( \delta t +\delta | t | ) ].$$ The situation is now reversed: the quantum correction is symmetric under time reversal $t \to -t$, whereas the classical contribution is asymmetric for a generic nonzero detuning $\delta \ne 0$.
The time asymmetry in $C ^{(3)}( t,t)$ has its roots in classical non-equilibrium dynamics: in the classical limit ${\bar n}_\mathrm{cl} \gg 1$, we can introduce two real quadratures $x$ and $p$ defined by $c=( x + i p)/ \sqrt 2$ to describe the corresponding classical oscillator. Their dynamics satisfies the stochastic differential equations
$$\begin{aligned}
& dx = (-\delta p - \frac{\gamma }{2} x ) dt + e^{r} \sqrt {\gamma {\bar n}_\mathrm{eff}} dW_1 , \\
& dp = ( \delta x - \frac{\gamma }{2} p ) dt + e^{-r} \sqrt {\gamma {\bar n}_\mathrm{eff}} dW_2 ,\end{aligned}$$
where ${\bar n}_\mathrm{eff} ={\bar n}_\mathrm{cl} +1/2 \simeq {\bar n}_\mathrm{cl}$, and $dW_1 $ and $dW_2$ are independent Wiener increments. These equations formally also describe the time evolution of a resonantly coupled pair of real harmonic modes, where the interaction strength is given by $|\delta |$, and each oscillator is also coupled to a thermal reservoir with thermal excitations $e^{\pm 2 r} {\bar n}_\mathrm{eff}$. This coupled two-mode system for $r \ne 0$ is a typical example of a non-equilibrium system that violates detailed balance, manifested as time asymmetry in cross correlation functions $\langle A(0)B(t)\rangle$ [@Tomita1973; @Tomita1974; @Carmichael2002]. Noting that $n(t)$ corresponds to the total energy in the classical limit, the skewness $C ^{(3)}( t,t)$ can then be viewed as a correlation function between the energy fluctuation $\delta n(0)$ and the higher-order fluctuation $[\delta n(t )]^2$ at a different time. Thus, the time asymmetry in $C ^{(3)}( t,t)$ is again a signature of detailed balance violation, which in turn is due to the imbalanced thermal baths set by the nonzero $r$.
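A minimal simulation sketch of these quadrature equations (illustrative parameters; statistical errors on third-order correlators decay slowly, so long runs are needed for a clean comparison) that contrasts $\langle\delta n(0)[\delta n(+t)]^2\rangle$ with $\langle\delta n(0)[\delta n(-t)]^2\rangle$ is:

```python
# Minimal sketch: simulate the classical quadrature SDEs above and compare the
# skewness-type correlator <dn(0) dn(t)^2> at lags +t and -t; for delta != 0 and
# r != 0 the two differ, signalling the violation of detailed balance.
import numpy as np

rng = np.random.default_rng(2)
gamma, delta, r, n_eff = 1.0, 1.0, 0.8, 10.0      # illustrative parameters
dt, n_steps, n_burn = 1e-2, 500_000, 10_000        # increase n_steps for better statistics

dW = rng.normal(0.0, np.sqrt(dt), size=(n_steps, 2))
x = p = 0.0
n_series = np.empty(n_steps - n_burn)
for k in range(n_steps):
    x, p = (x + (-delta * p - 0.5 * gamma * x) * dt + np.exp(r) * np.sqrt(gamma * n_eff) * dW[k, 0],
            p + (delta * x - 0.5 * gamma * p) * dt + np.exp(-r) * np.sqrt(gamma * n_eff) * dW[k, 1])
    if k >= n_burn:
        n_series[k - n_burn] = 0.5 * (x * x + p * p)   # n = |c|^2 with c = (x + i p)/sqrt(2)

dn = n_series - n_series.mean()
lag = int(round(1.0 / dt))                       # lag t = 1 (in units of 1/gamma)
forward = np.mean(dn[:-lag] * dn[lag:] ** 2)     # ~ <dn(0) dn(+t)^2>
backward = np.mean(dn[lag:] * dn[:-lag] ** 2)    # ~ <dn(0) dn(-t)^2>
print("forward  <dn(0) dn(+t)^2>:", forward)
print("backward <dn(0) dn(-t)^2>:", backward)
```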
---
abstract: 'The aim of this paper is to study static spherically symmetric noncommutative $F(T,T_G)$ wormhole solutions along with Lorentzian distribution. Here, $T$ and $T_G$ are torsion scalar and teleparallel equivalent Gauss-Bonnet term, respectively. We take a particular redshift function and two $F(T,T_G)$ models. We analyze the behavior of shape function and also examine null as well as weak energy conditions graphically. It is concluded that there exist realistic wormhole solutions for both models. We also study the stability of these wormhole solutions through equilibrium condition and found them stable.'
author:
- |
M. Sharif [^1] and Kanwal Nazir [^2] [^3]\
Department of Mathematics, University of the Punjab,\
Quaid-e-Azam Campus, Lahore-54590, Pakistan.
title: '**Lorentz Distributed Noncommutative $F(T,T_G)$ Wormhole Solutions**'
---
[**Keywords:**]{} Noncommutative geometry; Modified gravity; Wormhole.\
[**PACS:**]{} 02.40.Gh; 04.50.Kd; 95.35.+d.
Introduction
============
It is well known from different cosmological observations that our universe undergoes accelerated expansion, a discovery that has opened up new research directions. A plethora of work has been performed to explain this phenomenon. It is believed that behind this expansion there is a mysterious force dubbed dark energy (DE), identified by its negative pressure. Its nature is generally described by the following two well-known approaches. The first approach modifies the matter part of the general relativity (GR) action, which gives rise to several DE models including the cosmological constant, k-essence, Chaplygin gas, quintessence, etc. [@1]-[@5].
The second approach modifies the gravitational sector itself, which results in modified theories of gravity. Among these theories, $F(T)$ theory [@8] is a viable modification achieved through a torsional formulation. Various cosmological features of this theory have been investigated, such as solar system constraints, static wormhole solutions, Birkhoff’s theorem, instability ranges of collapsing stars and many more [@11]-[@11b]. Recently, a modified version of $F(T)$ theory has been proposed by including higher-order torsion correction terms, named $F(T,T_G)$ theory, which depends upon $T$ and $T_G$ [@16a]. This is a completely different theory which does not reduce to $F(T)$ or to any other modified theory. It is a novel modified gravity theory containing no curvature terms. The dynamical analysis [@16] and cosmological applications [@16b] of this theory turn out to be very captivating.
Chattopadhyay et al. [@18] studied the pilgrim DE model and reconstructed $F(T,T_{G})$ models by assuming a flat FRW metric. Jawad et al. [@19] explored the reconstruction scheme in this theory by considering a particular ghost DE model. Jawad and Debnath [@17] worked on the reconstruction scenario by taking a new pilgrim DE model and evaluated different cosmological parameters. Zubair and Jawad discussed thermodynamics at the apparent horizon [@17a]. We developed reconstructed models by assuming different eras of DE and their combinations with FRW and Bianchi type I universe models, respectively [@20].
The study of wormhole solutions provides fascinating insights into cosmology, especially in modified theories. Agnese and Camera [@24a] discussed static spherically symmetric and traversable wormhole solutions in Brans-Dicke scalar tensor theory. Anchordoqui et al. [@25a] showed the existence of analytical wormhole solutions and concluded that there may exist a wormhole sustained by normal matter. Lobo and Oliveira [@21] considered $f(R)$ theory to examine traversable wormhole geometries through different equations of state. They found that wormhole solutions may exist in this theory and discussed the behavior of the energy conditions. B$\ddot{o}$hmer et al. [@22] examined static traversable $F(T)$ wormhole geometry by considering a particular $F(T)$ model and constructed physically viable wormhole solutions. Dynamical wormhole solutions have also been studied in this theory by assuming an anisotropic fluid [@23]. Recently, Sharif and Ikram [@24] explored static wormhole solutions and investigated energy conditions in $f(G)$ gravity. They found that these conditions are satisfied only for barotropic fluid in some particular regions.
General relativity does not explain microscopic physics (which is completely described through quantum theory). Classically, the smooth texture of spacetime breaks down at short distances. In GR, the spacetime geometry is deformed by gravity, while it is quantized through quantum gravity. To overcome this problem, noncommutative geometry establishes a remarkable framework that describes the dynamics of spacetime at short distances. This framework introduces a minimum length scale in good agreement with the Planck length. The consequences of noncommutativity can be examined in GR by taking the standard form of the Einstein tensor and an altered form of the matter tensor.
Noncommutative geometry is considered an essential property of spacetime which plays an impressive role in several areas. Rahaman et al. [@29] explored wormhole solutions in noncommutative geometry and showed the existence of asymptotically flat solutions in four dimensions. Abreu and Sasaki [@30] studied the null (NEC) and weak (WEC) energy conditions for a noncommutative wormhole. Jamil et al. [@31] carried out a similar analysis in $f(R)$ theory. Sharif and Rani [@32] investigated wormhole solutions with the effects of an electrostatic field and for galactic halo regions in $F(T)$ gravity.
Recently, Bhar and Rahaman [@33] considered a Lorentzian distributed density function and examined whether wormhole solutions exist in spacetimes of different dimensions with noncommutative geometry. They found that wormhole solutions can exist only in four and five dimensions, but no wormhole solution exists in dimensions higher than five. Jawad and Rani [@34] investigated Lorentz distributed noncommutative wormhole solutions in $F(T)$ gravity. We have explored noncommutative geometry in $F(T,T_G)$ gravity and found that the effective energy-momentum tensor, rather than noncommutative geometry, is responsible for the violation of the energy conditions [@35]. Inspired by all these attempts, we investigate whether physically acceptable wormholes exist in $F(T,T_G)$ gravity with noncommutative Lorentz distributed geometry. We study the wormhole geometry and the corresponding energy conditions.
The paper is arranged as follows. Section **2** briefly recalls the basics of $F(T,T_G)$ theory, the wormhole geometry and energy conditions. In section **3**, we investigate physically acceptable wormhole solutions and energy conditions for two particular $F(T,T_G)$ models. In section **4**, we analyze the stability of these wormhole solutions. Last section summarizes the results.
$F(T,T_G)$ Gravity
==================
This section presents a basic review of $F(T,T_G)$ gravity. The idea of such an extension is to construct an action involving higher-order torsion terms. On the curvature side, besides simple modifications such as $f(R)$ theory, one can modify the action by introducing higher-order curvature correction terms such as the GB combination $G$ or functions $f(G)$. In a similar way, one can start from the teleparallel theory and construct an action by introducing higher-order torsion correction terms.
The most important variable in the underlying gravity is the tetrad field $e_{a}(x^{\lambda})$. The simplest one is the trivial tetrad, which can be expressed as $e_{a}={\delta^{\lambda}}_{a}\partial_{\lambda}$ and $e^{b}={\delta_{\lambda}}^{b}dx^{\lambda}$, where the Kronecker delta is denoted by $\delta^{\lambda}_{a}$. These tetrad fields are of less interest as they result in zero torsion. On the other hand, non-trivial tetrad fields are more suitable for constructing the teleparallel theory because they give non-zero torsion. They can be expressed as $$\nonumber
h_{a}={h_{a}^{~\lambda}}\partial_{\lambda},\quad
h^{b}={h^{b}_{~\lambda}}dx^{\lambda}.$$ The non-trivial tetrad satisfies $h_{~\lambda}^{a}h^{~\lambda}_{b}=\delta^{a}_{b}$ and $h_{~\lambda}^{a}h^{~\mu}_{a}=\delta^{\mu}_{\lambda}$. The tetrad fields can be related to the metric tensor through $$\nonumber
g_{\lambda\mu}=\eta_{ab}h_{\lambda}^{a}h^{b}_{\mu},$$ where $\eta_{ab}=diag(1,-1,-1,-1)$ is the Minkowski metric. Here, Greek indices $(\lambda,\mu)$ represent coordinates on manifold and Latin indices $(a, b)$ correspond to the coordinates on tangent space. The other field is described as the connection 1-forms ${\omega^{a}}_{b}(x^{\lambda})$ which are the source of parallel transportation, also known as Weitzenb$\ddot{o}$ck connection. It has the following form $$\nonumber
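As a simple numerical sketch of this relation (flat spacetime in spherical coordinates is used purely as an illustrative example), one can check that a diagonal tetrad reproduces the corresponding metric through $g_{\lambda\mu}=\eta_{ab}h^{a}_{\lambda}h^{b}_{\mu}$:

```python
# Minimal sketch: numerical check of g_{lambda mu} = eta_{ab} h^a_lambda h^b_mu
# for a simple diagonal tetrad (flat spacetime in spherical coordinates),
# evaluated at an arbitrary point (r, theta).
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
r, theta = 2.0, 0.7

h = np.diag([1.0, 1.0, r, r * np.sin(theta)])   # tetrad components h^a_lambda
g = np.einsum("ab,al,bm->lm", eta, h, h)        # reconstructed metric g_{lambda mu}

print(np.allclose(g, np.diag([1.0, -1.0, -r**2, -(r * np.sin(theta))**2])))  # True
```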
\omega^{\mu}_{\lambda\nu}={h^{\mu}}_{a}{h^{a}}_{\lambda,\nu}.$$
The structure coefficients $C^{c}_{ab}$ appear in commutation relation of the tetrad as $$\nonumber
C^{c}_{ab}=h_{c}^{-1}[h_{a},h_{b}],$$ where $$\nonumber
C^{c}_{ab}={h^{\mu}}_{b}{h^{\lambda}}_{a}({h^{c}}_{\lambda,\mu}-{h^{c}}_{\mu,\lambda}).$$ The torsion as well as curvature tensors has the following expressions $$\begin{aligned}
\nonumber
T^{a}_{bc}&=&-C^{a}_{bc}-\omega^{a}_{bc}+\omega^{a}_{cb},\\\nonumber
R^{a}_{bcd}&=&\omega^{e}_{bd}\omega^{a}
_{ec}+\omega^{a}_{bd,c}-C^{e}_{cd}\omega^{a}_{be}
-\omega^{e}_{bc}\omega^{a}_{ed}-\omega^{a}_{bc,d}.\end{aligned}$$ The contorsion tensor can be described as $$\nonumber
K_{abc}=-K_{bac}=\frac{1}{2}(-T_{abc}+T_{cab}-T_{bca}).$$ Both the torsion scalars are written as $$\begin{aligned}
\nonumber
T&=&\frac{1}{4}T^{abc}T_{abc}+\frac{1}{2}T^{abc}T_{cba}
-T_{ab}^{~~a}T^{cb}_{~~c},\\\nonumber
T_G&=&({K^{ea_{2}}}_{b}{K^{a_{3}}}_{fc}{K^{a_{1}}}_{ea}
{K^{fa_{4}}}_{d}+2{K^{ea_{4}}}_{f}{K^{f}}_{cd}
{K^{a_{1}a_{2}}}_{a}{{K^{a_{3}}}_{eb}}
+2{K^{ea_{4}}}_{c,d}{K^{a_{3}}}_{eb}\\\nonumber
&\times&{K^{a_{1}a_{2}}}_{a}-2{{K^{a_{3}}}_{eb}{K^{e}}
_{fc}K^{a_{1}a_{2}}}_{a}{K^{fa_{4}}}_{d})
\delta^{abcd}_{a_{1}a_{2}a_{3}a_{4}}.\end{aligned}$$ This comprehensive theory has been proposed by Kofinas and Saridakis [@16] whose action is described as $$\nonumber
S=\int
h\left[\frac{F(T,T_G)}{\kappa^{2}}+\mathcal{L}_{m}\right]d^4x,$$ where $\mathcal{L}_{m}$ is the matter Lagrangian, $\kappa^{2}=1$, $g$ represents the determinant of the metric tensor and $h=\sqrt{-g}=\det(h^a_\lambda)$. The field equations obtained by varying the action with respect to $h^a_\lambda$ are given as $$\begin{aligned}
\nonumber
&&C^{b}_{~cd}(H^{dca}+2H^{[ac]d})+(-T_G F_{T_G}(T,T_G)+F(T,T_G)-T
F_{T}(T,T_G))\eta^{ab}\\\nonumber
&+&2(H^{[ba]c}-H^{[kcb]a}+H^{[ac]b})C_{~dc}^{d}+2(-H^{[cb]a}
+H^{[ac]b}+H^{[ba]c})_{,c}+4H^{[db]c}\\\label{1}&\times&
C_{(dc)}^{~~~a}+T^{a}_{~cd}H^{cdb}-\mathcal{H}^{ab}=\kappa^2\mathcal{T}^{ab},\end{aligned}$$ where $$\begin{aligned}
\nonumber
H^{abc}&=&(\eta^{ac}K^{bd}_{~~d}-K^{bca})F_{T}(T,T_{G})+F_{T_{G}}(T,T_{G})[
(\epsilon^{ab}_{~~lf}K^{d}_{~qr}K^{l}_{~dp}\\\nonumber
&+&2K^{bc}_{~~p}\epsilon^{a}_{~dlf}K^{d}_{~qr}
+K^{il}_{~~p}\epsilon_{qdlf}K^{jd}_{~~r})K^{qf}_{~~t}\epsilon^{kprt}
+\epsilon^{ab}_{~~ld}K^{fd}_{~~p}\epsilon^{cprt}(K^{l}_{~fr,t}
\\\nonumber&-&\frac{1}{2}C^{q}_{~tr}K^{l}_{~fq})
+\epsilon^{cprt}K^{df}_{~p}\epsilon^{al}_{~~df}(K^{b}_{~kr,t}
-\frac{1}{2}C^{q}_{~tr}K^{b}_{~lq})]
+\epsilon^{cprt}\epsilon^{a}_{~ldf}\\\nonumber&\times&[F_{T_{G}}(T,T_{G})
K^{bl}_{~~[q}K^{df}_{~~r]}C^{q}_{~pt}+(K^{bl}_{~p}F_{T_{G}}(T,T_{G})
K^{df}_{~r})_{,t}],\\\nonumber
\mathcal{H}^{ab}&=&F_{T}(T,T_G)\epsilon^{a}_{~lce}K^{l}
_{~fr}\epsilon^{brte}K^{fc}_{~~t}.\end{aligned}$$ Here, $\mathcal{T}^{ab}$ represents the matter energy-momentum tensor. The functions $F_{T}$ and $F_{T_{G}}$ are the derivatives of $F$ with respect to $T$ and $T_{G}$, respectively. Notice that for $F(T,T_{G})=-T$, the teleparallel equivalent of GR is recovered, while setting $T_G=0$ reduces the theory to $F(T)$ gravity.
Next, we explain the wormhole geometry as well as energy conditions in this gravity.
Wormhole Geometry
-----------------
Wormholes connect two disconnected models of the universe or two distant regions of the same universe (interuniverse or intrauniverse wormholes). A wormhole has essentially a tube-, bridge- or tunnel-like appearance, and this tunnel provides a shortcut between two distant cosmic regions. The well-known example of such a structure was given by Misner and Wheeler [@36] in the form of solutions of the Einstein field equations named wormhole solutions. Einstein and Rosen made another attempt and established the Einstein-Rosen bridge.
The first attempt to introduce the notion of traversable wormholes was made by Morris and Thorne [@37]. Lorentzian traversable wormholes are particularly fascinating because one may traverse from one end of the wormhole to the other [@37a]. Traversability is possible in the presence of exotic matter, which produces the repulsion that keeps the throat of the wormhole open. Being generalizations of the Schwarzschild wormhole, these wormholes have no event horizon and allow two-way travel. The spacetime for static, spherically symmetric and traversable wormholes is defined as [@37] $$\label{2}
ds^2=e^{2\alpha(r)}dt^2-\frac{dr^2}{\left(1-\frac{\beta(r)}{r}\right)}
-r^2d\theta^2-r^2\sin^2\theta d\phi^2,$$ where $\alpha(r)$ is the redshift function and $\beta(r)$ represents the shape function. The gravitational redshift is measured through the function $\alpha(r)$, whereas $\beta(r)$ controls the wormhole shape. The radial coordinate $r$, the redshift function and the shape function must satisfy a few conditions for a traversable wormhole. The redshift function must satisfy the no-horizon condition, which is necessary for traversability; to avoid horizons, $\alpha(r)$ must be finite everywhere. For this purpose, the redshift function is often taken to vanish, which implies $e^{2\alpha(r)}\rightarrow1$. There are two properties of the shape function that maintain the wormhole geometry. The first is positivity, i.e., $\beta(r)$ must be a positive function. The second is the flaring-out condition, i.e., $\left(\frac{\beta(r)-r\beta'(r)}{\beta^2(r)}\right)>0$ together with $\beta(r_{th})=r_{th}$ and $\beta'(r_{th})<1$, where $r_{th}$ is the wormhole throat radius. The condition of asymptotic flatness ($\frac{\beta(r)}{r}\rightarrow0$ as $r\rightarrow\infty$) should also be fulfilled by the spacetime at large distances.
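These requirements on $\beta(r)$ are easy to verify numerically for any candidate shape function. The short Python sketch below checks positivity, the throat conditions, the flaring-out inequality and asymptotic flatness for an illustrative shape function $\beta(r)=\sqrt{r_{th}\,r}$; this particular $\beta$ is a toy example and is not one of the functions obtained later in the paper.

```python
import numpy as np

# Toy shape function (illustrative only): beta(r) = sqrt(r_th * r)
r_th = 1.0
beta  = lambda r: np.sqrt(r_th * r)
dbeta = lambda r: 0.5 * np.sqrt(r_th / r)        # beta'(r)

r = np.linspace(r_th, 500.0, 5000)

print("positivity       :", bool(np.all(beta(r) > 0)))
print("throat conditions:", np.isclose(beta(r_th), r_th), "beta'(r_th) =", dbeta(r_th))
print("flaring out      :", bool(np.all((beta(r) - r * dbeta(r)) / beta(r)**2 > 0)))
print("beta(r)/r at large r:", beta(r[-1]) / r[-1])   # -> 0 signals asymptotic flatness
```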
To investigate the wormhole solutions, we assume a diagonal tetrad [@37] as $$\begin{aligned}
\label{3}
h^{a}_{\lambda}=diag\left(e^{\alpha(r)},\left(1-\frac{\beta(r)}{r}\right)^{-1/2}
,~r,~r\sin\theta\right).\end{aligned}$$ This is the simplest and most frequently used tetrad for the Morris-Thorne static spherically symmetric metric. It also yields a non-zero $T_G$, which is the basic ingredient of this theory, whereas some other tetrad choices lead to a vanishing $T_G$. Thus this orthonormal basis is the most suitable one for this theory. The torsion scalars turn out to be $$\begin{aligned}
\label{4}
T&=&\frac{4}{r}\left(1-\frac{\beta(r)}{r}\right)\alpha'+\frac{2}{r^2}\left(1-\frac{\beta(r)}{r}\right),\\\nonumber
T_G&=&\frac{8\beta(r)\alpha'(r)}{r^4}-\frac{8\beta(r)\alpha'^2(r)}{r^3}
\left(1-\frac{\beta(r)}{r}\right)+\frac{12\beta(r)\alpha'(r)\beta'(r)}{r^4}\\\label{5}
&-&\frac{8\beta'(r)\alpha'(r)}{r^3}
-\frac{12\beta^2(r)\alpha'(r)}{r^5}-\frac{8\beta(r)\alpha''(r)}{r^3}
\left(1-\frac{\beta(r)}{r}\right).\end{aligned}$$ The simplest way to satisfy the no-horizon condition for a traversable wormhole is to take $\alpha(r)=0$. Substituting this choice into the above torsion scalars, however, gives $T_G=0$, so that the function $F(T,T_G)$ reduces to $F(T)$, representing $F(T)$ theory. Hence, we cannot take $\alpha(r)$ to be a constant function; instead we assume $\alpha(r)$ of the form $$\begin{aligned}
\label{6}
\alpha(r)=-\frac{\psi}{r},\quad\psi>0,\end{aligned}$$ which is finite and non-zero for $r>0$. Also, it satisfies asymptotic flatness as well as no horizon condition. We assume that anisotropic matter threads the wormhole for which the energy-momentum tensor is defined as $$\begin{aligned}
\nonumber
\mathcal{T}^{(m)}_{~~\lambda\mu}=(p_t+\rho)V_\lambda
V_\mu-g_{\lambda\mu}p_t+(p_r-p_t)\eta_\mu\eta_\lambda,\end{aligned}$$ where $\rho$, $V_\lambda$, $\eta_\lambda$, $p_r$ and $p_t$ represent the energy density, the four-velocity, the radial spacelike four-vector orthogonal to $V_\lambda$, and the radial and tangential components of pressure, respectively. We consider the energy-momentum tensor in the form $\mathcal{T}^{(m)}_{~~\lambda\mu}=diag(\rho, -p_r,-p_t, -p_t)$. Using Eqs.(\[2\])-(\[6\]) in (\[1\]), we obtain the field equations as $$\begin{aligned}
\nonumber
\rho&=&F(T,T_G)+\frac{2\beta'(r)}{r^2}F_T(T,T_G)
-T_GF_{T_G}(T,T_G)-TF_T(T,T_G)-\frac{4F_{T}'}{r}(1\\\nonumber&-&
\frac{\beta(r)}{r})+\frac{4F_{T_G}'(T,T_G)}{r^3}\left(\frac{5\beta(r)}{r}
-\frac{3\beta^2(r)}{r^2}-2-3\beta'(r)\left(1-
\frac{\beta(r)}{r}\right)\right)\\\label{7}&+&
\frac{8F_{T_G}''(T,T_G)}{r^2}\left(1-\left(2-\frac{\beta(r)}{r}\right)
\frac{\beta(r)}{r}\right)
,\\\nonumber
p_r&=&-F(T,T_G)+F_T(T,T_G)\left(T-\frac{4\psi}{r^3}-\frac{2\beta(r)}{r^3}
+\frac{4\beta(r)\psi}{r^4}\right)
+T_GF_{T_G}(T,T_G)
\\\label{8}&+&\frac{48}{r^4}\left(1-\frac{\beta(r)}{r}\right)^2
\psi F_{T_G}'(T,T_G),\\\nonumber
p_t&=&-F(T,T_G)+T_GF_{T_G}(T,T_G)+TF_T(T,T_G)+\left(\frac{\beta(r)}{r^3}-\frac{2\psi}{r^3}
-\frac{\beta'(r)}{r^2}
\right.\\\nonumber&+&\left.\frac{\beta(r)\psi}{r^4}
+\frac{2\psi^2}{r^4}+\frac{\beta'(r)\psi}{r^3}-\frac{2\beta(r)\psi^2}{r^5}
-\frac{4\beta(r)\psi}{r^4}+\frac{4\psi}{r^3}\right)F_{T}(T,T_G)\\\nonumber
&+&2\left(\frac{1}{r}-\left(1-\frac{\beta(r)}{r}\right)\psi'+\frac{\beta(r)
\psi}{r^3}\right)F_{T}'(T,T_G)
+\left(\frac{12\psi\beta(r)}{r^5}-\frac{12\psi\beta^2(r)}{r^6}
\right.\\\nonumber&+&\left.\frac{16\beta(r)\psi^2}{r^6}
+\frac{12\psi\beta'(r)}{r^4} -\frac{8\psi^2}{r^5}
-\frac{8\beta^2(r)\psi^2}{r^7}+\frac{12\beta(r)\psi\beta'(r)}{r^5}
-\frac{16\psi}{r^4}\right.\\\nonumber&-&\left.\frac{16\beta^2(r)\psi}{r^6}-\frac{32\beta(r)\psi}{r^5}
\right)F_{T_G}'(T,T_G)+\frac{8\psi}{r^3}\left(1
-\frac{\beta(r)}{r}\left(2+\frac{\beta(r)}{r}\right)\right)\\\label{9}&\times&F_{T_G}''(T,T_G),\end{aligned}$$ where prime stands for the derivative with respect to $r$.
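The torsion scalars in Eqs. (\[4\]) and (\[5\]) can be transcribed directly into a computer algebra system, which makes the role of the redshift function transparent: every term of $T_G$ carries a derivative of $\alpha$, so a constant redshift function kills $T_G$. The sympy sketch below illustrates this; the shape function $\beta(r)=\sqrt{r_{th}r}$ used here is only an illustrative placeholder, not one of the numerical solutions.

```python
import sympy as sp

r, psi, rth = sp.symbols('r psi r_th', positive=True)
beta  = sp.sqrt(rth * r)      # placeholder shape function (assumption)
alpha = -psi / r              # redshift function of Eq. (6)

b, da, dda, dbe = beta / r, sp.diff(alpha, r), sp.diff(alpha, r, 2), sp.diff(beta, r)

# Eq. (4): torsion scalar T
T = 4/r * (1 - b) * da + 2/r**2 * (1 - b)

# Eq. (5): Gauss-Bonnet-type torsion scalar T_G
TG = (8*beta*da/r**4 - 8*beta*da**2/r**3*(1 - b) + 12*beta*da*dbe/r**4
      - 8*dbe*da/r**3 - 12*beta**2*da/r**5 - 8*beta*dda/r**3*(1 - b))

# A constant redshift function (psi = 0) makes T_G vanish identically,
# while T stays non-zero; this is why alpha(r) = -psi/r is adopted.
print(sp.simplify(TG.subs(psi, 0)))   # 0
print(sp.simplify(T.subs(psi, 0)))    # non-zero
```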
Energy Conditions
-----------------
Energy conditions are considered both in GR and in modified theories of gravity. In GR, the wormhole geometry requires their violation, which signals the presence of exotic rather than realistic matter; in modified theories this violation can instead be attributed to the effective dark-source terms. The origin of these conditions is the Raychaudhuri equations together with the requirement of attractive gravity [@38]. For timelike and null vector field congruences $u^\lambda$ and $k^\lambda$, respectively, the Raychaudhuri equations are formulated as follows $$\begin{aligned}
\nonumber
\frac{d\Theta}{d\tau}-\omega_{\lambda\mu}\omega^{\lambda\mu}+R_{\lambda\mu}u^\lambda
u^\mu+\frac{1}{3}\Theta^2+\sigma_{\lambda\mu}\sigma^{\lambda\mu}&=&0,\\\nonumber
\frac{d\Theta}{d\chi}-\omega_{\lambda\mu}\omega^{\lambda\mu}+R_{\lambda\mu}k^\lambda
k^\mu+\frac{1}{2}\Theta^2+\sigma_{\lambda\mu}
\sigma^{\lambda\mu}&=&0,\end{aligned}$$ where the expansion scalar $\Theta$ describes the expansion of the volume, the shear tensor $\sigma^{\lambda\mu}$ provides information about the volume distortion, and the vorticity tensor $\omega^{\lambda\mu}$ describes the rotation of the congruence. The positive parameters $\tau$ and $\chi$ parametrize the timelike and null congruences on the manifold, respectively. In the above equations, we may neglect the quadratic terms since we consider small volume distortion without rotation. Thus these equations reduce to $\Theta=-\tau R_{\lambda\mu}u^\lambda
u^\mu=-\chi R_{\lambda\mu}k^\lambda k^\mu$. The expression $\Theta<0$ ensures the attractiveness of gravity which leads to $R_{\lambda\mu}u^\lambda u^\mu\geq0$ and $R_{\lambda\mu}k^\lambda
k^\mu\geq0$. In modified theories, the Ricci tensor is replaced by the effective energy-momentum tensor, i.e., $\mathcal{T}_{\lambda\mu}^{(eff)}u^\lambda u^\mu\geq0$ and $\mathcal{T}_{\lambda\mu}^{(eff)}k^\lambda k^\mu\geq0$ which introduce effective pressure and effective energy density in these conditions.
It is well-known that the violation of the NEC is the basic ingredient for developing a traversable wormhole (due to the existence of exotic matter). In GR, this type of matter renders the wormhole non-realistic, since normal matter would fulfill the NEC. In modified theories, we work with an effective energy density and pressure by including the effective energy-momentum tensor $\mathcal{T}^{eff}_{\lambda\mu}$ in the corresponding energy conditions. This effective energy-momentum tensor is given as $$\nonumber
\mathcal{T}^{(eff)}_{\lambda\mu}=\mathcal{T}^{(H)}_{\lambda\mu}+\mathcal{T}^{(m)}_{\lambda\mu},$$ where $\mathcal{T}^{(H)}_{\lambda\mu}$ contains the dark source terms of the underlying $F(T,T_G)$ theory. The violation of the NEC formulated with $\mathcal{T}^{(eff)}_{\lambda\mu}$ keeps the wormhole throat open and thus confirms the presence of a traversable wormhole. In this way, there is a chance for normal matter to fulfill these conditions, so realistic wormhole solutions can exist in this modified scenario.
The four conditions, namely the null (NEC), weak (WEC), dominant (DEC) and strong (SEC) energy conditions, are described as follows (a small numerical check of these inequalities is sketched after this list):
- NEC: $p_n^{(eff)}+\rho^{(eff)}\geq 0$, where $n=1,2,3$,
- WEC: $p_n^{(eff)}+\rho^{(eff)}\geq0,\quad\rho^{(eff)}\geq0$,
- DEC: $\rho^{(eff)}\pm p_n^{(eff)}\geq0,\quad\rho^{(eff)}\geq0$,
- SEC: $p_n^{(eff)}+\rho^{(eff)}\geq0,\quad\rho^{(eff)}+3p^{(eff)}\geq0$.
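As a practical aid, the inequalities above can be checked pointwise once radial profiles of the (effective or matter) density and pressures are available. The following sketch is a generic checker; the profiles passed to it in the example are arbitrary placeholders rather than the solutions of Eqs. (\[7\])-(\[9\]), and the SEC is implemented in its anisotropic reading $\rho+p_r+2p_t\geq0$.

```python
import numpy as np

def energy_conditions(rho, p_r, p_t):
    """Pointwise NEC/WEC/DEC/SEC checks for an anisotropic fluid.
    Returns boolean arrays (True where the condition holds)."""
    rho, p_r, p_t = map(np.asarray, (rho, p_r, p_t))
    nec = (rho + p_r >= 0) & (rho + p_t >= 0)
    wec = nec & (rho >= 0)
    dec = (rho >= 0) & (rho - np.abs(p_r) >= 0) & (rho - np.abs(p_t) >= 0)
    sec = nec & (rho + p_r + 2 * p_t >= 0)   # anisotropic form of rho + 3p >= 0
    return {"NEC": nec, "WEC": wec, "DEC": dec, "SEC": sec}

# placeholder profiles on a radial grid (not the model solutions)
r = np.linspace(1.0, 2.0, 11)
checks = energy_conditions(1 / r**2, -1 / r**3, 0.5 / r**3)
print({name: bool(flag.all()) for name, flag in checks.items()})
```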
Solving Eqs.(\[7\]) and (\[8\]) for effective energy density and pressure, we evaluate the radial effective NEC as $$\nonumber
p_r^{(eff)}+\rho^{(eff)}=\frac{1}{F_T(T,T_G)}\left(\frac{-\beta(r)}{r^3}
+\frac{\beta'(r)}{r^2}+\frac{2}{r}
\left(1-\frac{\beta(r)}{r}\right)\alpha'\right).$$ So, $p_r^{(eff)}+\rho^{(eff)}<0$ represents the violation of effective NEC as $$\label{9pN}
\left(\frac{-\beta(r)}{r^3}+\frac{\beta'(r)}{r^2}+\frac{2}{r}
\left(1-\frac{\beta(r)}{r}\right)\alpha'\right)<0.$$ If this condition holds, it indicates that a traversable wormhole exists in this gravity.
Wormhole Solutions
==================
Noncommutative geometry provides a fundamental discretization of spacetime and has proved effective in different areas. It plays an important role in eliminating the divergences that originate in GR. In the noncommutative framework, smeared objects take the place of point-like structures. For the Lorentzian distribution, the energy density of a particle-like, static, spherically symmetric object with mass $\mathcal{M}$ has the following form [@39] $$\label{10}
\rho_{NCL}=\frac{\mathcal{M}\sqrt{\theta}}{\pi^2(\theta+r^2)^2},$$ where $\theta$ is the noncommutative parameter. Comparing Eqs.(\[7\]) and (\[10\]), i.e., $\rho_{NCL}=\rho$, we obtain $$\begin{aligned}
\nonumber
\frac{\mathcal{M}\sqrt{\theta}}{\pi^2(\theta+r^2)^2}&=&F(T,T_G)
+\frac{2\beta'(r)F_T}{r^2}-TF_T(T,T_G)
-\frac{4F'_T(T,T_G)}{r}\left(1\right.\\\nonumber&-&\left.\frac{\beta(r)}{r}\right)-
T_GF_{T_G}(T,T_G)+\frac{4F_{T_G}'(T,T_G)}{r^3}\left(\frac{5\beta(r)}{r}
-2-\frac{3\beta^2(r)}{r^2}\right.\\\nonumber&-&\left.3\left(1-
\frac{\beta(r)}{r}\right)\beta'(r)\right)
+\frac{8F_{T_G}''(T,T_G)}{r^2}\left(1-\frac{\beta(r)}{r}
\left(2\right.\right.\\\label{11}&-&\left.\left.
\frac{\beta(r)}{r}\right)\right).\end{aligned}$$ The above equation contains two unknown functions $F(T,T_G)$ and $\beta(r)$. In order to solve this equation, we have to assume one of them and evaluate the other one. Next, we consider some specific and viable models from $F(T,T_G)$ theory and investigate the wormhole solutions under Lorentzian distributed noncommutative geometry. We also discuss the corresponding energy conditions.
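A quick sanity check on the Lorentzian-smeared source (\[10\]) is that it integrates to the total mass, $\int_0^\infty 4\pi r^2\rho_{NCL}\,dr=\mathcal{M}$, so the noncommutative parameter $\theta$ only controls how the mass is spread out. The snippet below verifies this numerically with the parameter values $\mathcal{M}=15$ and $\theta=0.5$ that are adopted for the numerical solutions later in the text.

```python
import numpy as np
from scipy.integrate import quad

M, theta = 15.0, 0.5   # values adopted for the numerical wormhole solutions

def rho_ncl(r):
    """Lorentzian-distributed noncommutative energy density, Eq. (10)."""
    return M * np.sqrt(theta) / (np.pi**2 * (theta + r**2)**2)

total_mass, _ = quad(lambda r: 4 * np.pi * r**2 * rho_ncl(r), 0.0, np.inf)
print(total_mass)   # ~ 15.0, i.e. the smeared source carries the full mass M
```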
First Model
-----------
The first model is considered as [@17a] $$\begin{aligned}
\label{12}
F(T,T_G)=-T+\gamma_1(T^2+\gamma_2T_G)+\gamma_3(T^2+\gamma_4T_G)^2,\end{aligned}$$ where $\gamma_{1}$, $\gamma_{2}$, $\gamma_{3}$ and $\gamma_{4}$ are arbitrary constants. Here, we take $\gamma_{2}$ and $\gamma_{4}$ as dimensionless, whereas $\gamma_{1}$ and $\gamma_{3}$ carry the appropriate dimensions of length. This model involves second-order contributions in $T_G$ and up to fourth-order contributions from the torsion scalar $T$. Using Eqs.(\[4\]), (\[5\]) and (\[12\]) in (\[11\]), we obtain a complicated differential equation for $\beta(r)$ that cannot be handled analytically. We therefore solve it numerically, choosing the model parameters as $\gamma_1=81$, $\gamma_2=-0.0091$, $\gamma_3=12$, $\gamma_4=32$. The values of the remaining parameters $\mathcal{M}=15$, $\theta=0.5$ and $\psi=1$ are taken from [@34]. To plot the graph of $\beta(r)$, we take the initial values $\beta(1)=0.7$, $\beta'(1)=9.9$ and $\beta''(1)=5.5$. Figure **1** (left panel) shows the increasing behavior of the shape function $\beta(r)$. We discuss the wormhole throat by plotting $\beta(r)-r$ in the right panel.
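The differential equation itself is too lengthy to reproduce, but the numerical strategy is standard: solve Eq. (\[11\]) for the highest derivative of $\beta$, rewrite the result as a first-order system, and integrate it from the quoted initial data. The sketch below shows only this scaffolding; the right-hand side used here is an explicit placeholder, not the actual field equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta_system(r, y):
    """y = (beta, beta', beta'').  The third derivative should come from
    solving Eq. (11) with model (12); the expression below is a placeholder."""
    b, db, ddb = y
    dddb = -ddb / r        # placeholder right-hand side, NOT the actual equation
    return [db, ddb, dddb]

# initial values quoted in the text: beta(1)=0.7, beta'(1)=9.9, beta''(1)=5.5
sol = solve_ivp(beta_system, (1.0, 10.0), [0.7, 9.9, 5.5], dense_output=True)

r = np.linspace(1.0, 10.0, 400)
beta = sol.sol(r)[0]
r_th = r[np.argmin(np.abs(beta - r))]   # locate where beta(r) - r crosses zero
print(r_th)
```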
The throat radius is the point where $\beta(r)-r$ crosses the $r$-axis. Here, the throat radius is located at $r_{th}=1.029$, which also satisfies the condition $\beta(r_{th})=r_{th}$ up to two digits, i.e., $\beta(1.029)=1.028$. The third graph of Figure **1** implies that the spacetime does not satisfy the asymptotic flatness condition. The upper left panel of Figure **2** shows the validity of condition (\[9pN\]); the violation of the effective NEC thus confirms the presence of a traversable wormhole. Figure **2** also shows the plots of $\rho+p_r$ (upper right panel), $\rho+p_t$ (lower left panel) and $\rho$ (lower right panel) for normal matter, which exhibit positive behavior in the interval $1.003<r<1.015$. This shows that ordinary matter satisfies the NEC and a physically acceptable wormhole solution is achieved for this model.
Second Model
------------
We assume the second model as [@16] $$\label{13}
F(T,T_G)=-T+\eta_1\sqrt{T^2+\eta_2T_G},$$ where $\eta_1$ and $\eta_2$ are arbitrary constants. We obtain a differential equation by substituting Eqs.(\[4\]), (\[5\]) and (\[13\]) in (\[11\]). A numerical technique is used to calculate $\beta(r)$ from this differential equation, assuming the same values of $\theta$, $\mathcal{M}$ and $\psi$ as above. The model parameters are taken as $\eta_1=-1.1259$ and $\eta_2=-0.9987$. Also, we impose the following conditions: $\beta(1.5)=2.1$, $\beta'(1.5)=-133.988$ and $\beta''(1.5)=-60000$. We then examine the properties necessary for the development of the wormhole structure. The plot of the shape function is shown in Figure **3** (left panel), which displays increasing behavior for all values of $r$. It can be noted that $\beta(0.5)=0.5$. In the upper right panel, we plot $\beta(r)-r$ versus $r$ to locate the wormhole throat. It can be observed that the throat is located at a small value of $r$.
The lower graph shows the behavior of $\frac{\beta(r)}{r}$. It can be seen that, as $r$ increases, the curve of $\frac{\beta(r)}{r}$ approaches $0$; hence the spacetime satisfies the asymptotic flatness condition. The upper left panel of Figure **4** displays the negative behavior and shows the validity of condition (\[9pN\]). For a physically acceptable wormhole solution, we check the graphical behavior of the NEC and WEC for the matter energy density and pressure. Figure **4** shows that $\rho+p_r$, $\rho+p_t$ and $\rho$ behave positively in the intervals $1.32\leq r\leq1.474$, $1.28\leq r\leq1.342$ and $1.307\leq r\leq1.471$, respectively. The common region of these intervals is $1.32\leq r\leq1.342$. This indicates that the NEC and WEC are satisfied in a very small interval. Thus there can exist a tiny (micro) physically acceptable wormhole for this model, i.e., one with a small radius and a narrow throat.
Equilibrium Condition
=====================
In this section, we investigate the equilibrium structure of the wormhole solutions. For this purpose, we consider the generalized Tolman-Oppenheimer-Volkoff equation in an effective manner as $$\nonumber
-p'^{(eff)}_r-(p^{(eff)}_r+\rho^{(eff)})\left(\frac{\alpha'}{2}\right)
+(p^{(eff)}_t-p^{(eff)}_r)\left(\frac{2}{r}\right)=0,$$ with the metric $ds^2=e^{2\alpha(r)}dt^2-e^{\nu(r)}dr^2-r^2d\theta^2-r^2\sin^2\theta d\phi^2$, where $e^{\nu(r)}=\left(1-\frac{\beta(r)}{r}\right)^{-1}$. The above equation can be written as $$\label{14L}
-p'^{(eff)}_r-(p^{(eff)}_r+\rho^{(eff)})\left(\frac{M^{(eff)}e^{\frac{\alpha-\nu}{2}}}{r^2}\right)
+(p^{(eff)}_t-p^{(eff)}_r)\left(\frac{2}{r}\right)=0,$$ where the effective gravitational mass is defined as $M^{(eff)}=\frac{1}{2}\left(r^2e^{\frac{\nu-\alpha}{2}}\right)\nu'$. The equilibrium picture describes the stability of the corresponding wormhole solutions in terms of three forces, known as the gravitational force $F_{gf}$, the anisotropic force $F_{af}$ and the hydrostatic force $F_{hf}$. The gravitational force arises from the gravitating mass, the anisotropic force from the pressure anisotropy of the system, and the hydrostatic force from the hydrostatic fluid. We can rewrite Eq.(\[14L\]) as
$$\label{16L}
F_{hf}+F_{gf}+F_{af}=0,$$
where $$\begin{aligned}
\nonumber
F_{gf}&=&-(p_r^{(eff)}+\rho^{(eff)})\left(\frac{e^{\frac{\alpha-\nu}{2}}M^{(eff)}}{r^2}\right),
F_{hf}=-p'^{(eff)}_r,\\\nonumber
F_{af}&=&(p_t^{(eff)}-p_r^{(eff)})\left(\frac{2}{r}\right).\end{aligned}$$ Further, we examine the stability of the wormhole solutions for the first and second models through the equilibrium condition. Using Eqs.(\[7\])-(\[9\]) and (\[12\]) in (\[16L\]), we obtain a lengthy equation for the first model. Applying a numerical technique, we plot the graphs of the three forces defined above. Figure **5** shows that all three forces cancel their effects and balance each other in the interval $4.8\leq r\leq5$. This means that the wormhole solution satisfies the equilibrium condition for the first model. Next, we take the second model and follow the same procedure using Eq.(\[13\]). After simplification, we finally obtain a differential equation and solve it numerically. Figure **6** indicates that the gravitational force vanishes while the anisotropic and hydrostatic forces cancel each other completely. Hence, for this model, the system is balanced, which confirms the stability of the corresponding wormhole solution.
Concluding Remarks
==================
In general relativity, the structure of a wormhole rests on the condition that the NEC is violated. This violation points to a mysterious form of matter in the universe, known as exotic matter and distinguished by its negative energy density. The amount of this exotic matter should be minimized to obtain a physically viable wormhole. In modified theories, however, the situation may be completely different. This paper investigates noncommutative wormhole solutions with the Lorentzian distribution in $F(T,T_G)$ gravity. For this purpose, we have assumed a diagonal tetrad and a particular redshift function. We have examined these wormhole solutions graphically.
For the first model, the shape function satisfies all the properties necessary for the wormhole geometry except asymptotic flatness. In this case, the WEC and NEC for normal matter are also satisfied. Hence, this model provides a realistic wormhole solution, threaded by normal rather than exotic matter, in a small interval. The violation of the effective NEC confirms the traversability of the wormhole. Furthermore, the second model fulfills all the properties of the shape function and also satisfies the WEC and NEC for normal matter. There exists a micro wormhole solution which is supported by normal matter, and this model satisfies the traversability condition (\[9pN\]). We have investigated the stability of both models through the equilibrium condition, and stability is attained for both models.
Bhar and Rahaman [@33] examined, in GR, whether wormhole solutions exist in noncommutative spacetime with the Lorentzian distribution in different dimensions. They found that wormhole solutions appear only in four and five dimensions, while no solution exists in higher dimensions. It is interesting to mention that we have also obtained wormhole solutions that satisfy all the conditions and are stable in $F(T,T_G)$ gravity. Our results are consistent with the corresponding limits of the teleparallel equivalent of GR. For the first model, if we substitute $\gamma_1=\gamma_3=0$, then the behavior of the shape function $\beta(r)$ and of the energy conditions in teleparallel theory remains the same as in this theory. For the second model, $\eta_1=0$ provides no result, but if we consider $\eta_2=0$, then $\beta(r)$ as well as the energy conditions show consistent behavior.
In $F(T)$ gravity [@41], the noncommutative wormhole solutions obtained with a diagonal tetrad are supported by normal matter. In the present work, we have likewise obtained solutions that are threaded by normal matter. Kofinas et al. [@42] discussed spherically symmetric solutions in scalar-torsion gravity in which a scalar field is coupled to torsion with a derivative coupling; they obtained an exact solution representing a new wormhole-like geometry with interesting physical features. We conclude that in $F(T,T_G)$ gravity, noncommutative geometry with the Lorentzian distribution is a more favorable choice for obtaining physically acceptable wormhole solutions than the noncommutative geometry considered in [@35].\
\
**The authors have no conflict of interest for this research.**
[40]{}
Kamenshchik, A.Y., Moschella, U. and Pasquier, V.: Phys. Lett. B **511**(2001)265.
Li, M.: Phys. Lett. B **603**(2004)1.
Cai, R.G.: Phys. Lett. B **657**(2007)228.
Wei, H.: Commun. Theor. Phys. **52**(2009)743.
Sheykhi, A., Jamil, M.: Gen. Relativ. Gravit. **43**(2011)2661.
Linder, E.V.: Phys. Rev. D **124**(2010)127301.
Ferraro, R. and Fiorini, F.: Phys. Rev. D **75**(2007)084031.
Bengochea, G.R. and Ferraro, R.: Phys. Rev. D **79**(2009)124019.
Linder, E.V.: Phys. Rev. D **81**(2010)127301.
Kofinas, G. and Saridakis, E.N.: Phys. Rev. D **90**(2014)084044.
Kofinas, G., Leon, G. and Saridakis, E.N.: Class. Quantum Grav. **31**(2014)175011
Kofinas, G. and Saridakis, E.N.: Phys. Rev. D **90**(2014)084045.
Chattopadhyay, S., Jawad, A., Momeni, D. and Myrzakulov, R.: Astrophys. Space Sci. **353**(2014)279.
Jawad, A., Rani, S. and Chattopadhyay, S.: Astrophys. Space Sci. **360**(2014)37.
Jawad, A. and Debnath, U.: Commun. Theor. Phys. **64**(2015)145.
Zubair, M. and Jawad, A.: Astrophys. Space Sci. **360**(2015)11.
Sharif, M. and Nazir, K.: Mod. Phys. Lett. A **31**(2016)1650175; Can. J. Phys. **95**(2017)297.
Agnese, A. and Camera, M.L.: Phys. Rev. D **51**(1995)2011.
Anchordoqui, L.A., Bergliaffa, S.E.P. and Torres, D.F.: Phys. Rev. D **55**(1997)5226.
Lobo, F.S.N. and Oliveira, M.A.: Phys. Rev. D **80**(2009)104012.
Böhmer, C.G., Harko, T. and Lobo, F.S.N.: Phys. Rev. D **85**(2012)044033.
Sharif, M. and Rani, S.: Gen. Relativ. Gravit. **45**(2013)2389.
Sharif, M. and Ikram, A.: Int. J. Mod. Phys. D **24**(2015)1550003.
Rahaman, F., Islam, S., Kuhfittig, P.K.F. and Ray, S.: Phys. Rev. D **86**(2012)106010.
Abreu, E.M.C. and Sasaki, N.: arXiv:1207.7130.
Jamil, M., et al.: J. Kor. Phys. Soc. **65**(2014)917.
Sharif, M. and Rani, S.: Eur. Phys. J. Plus **129**(2014)237; Adv. High Energy Phys. **2014**(2014)691497.
Bhar, P. and Rahaman, F.: Eur. Phys. J. Plus **74**(2014)3213.
Jawad, A. and Rani, S.: Eur. Phys. J. C **75**(2015)173.
Sharif, M. and Nazir, K.: Mod. Phys. Lett. A **32**(2017)1750083.
Misner, C.W. and Wheeler, J.A.: Ann. Phys. **2**(1957)525.
Morris, M.S. and Thorne, K.S.: Am. J. Phys. **56**(1988)395.
Visser, M.: *Lorentzian Wormholes: From Einstein to Hawking* (AIP Press, New York, 1995).
Carroll, S.: *Spacetime and Geometry: An Introduction to General Relativity* (Addison-Wesley, 2004).
Nicolini, P., Smailagic, A. and Spallucci, E.: Phys. Lett. B **632**(2006)547.
Sharif, M. and Rani, S.: Phys. Rev. D **88**(2013)123501.
Kofinas, G., Papantonopoulos, E. and Saridakis, E.N.: Phys. Rev. D **91**(2015)104034.
[^1]: msharif.math@pu.edu.pk
[^2]: awankanwal@yahoo.com
[^3]: On leave from Department of Mathematics, Lahore College for Women University, Lahore-54000, Pakistan.
---
abstract: 'We investigate the effect of electronic correlations on the coupling of electrons to Holstein phonons in the one-band Hubbard model. We calculate the static electron-phonon vertex within linear response of Kotliar-Ruckenstein slave-bosons in the paramagnetic saddle-point approximation. Within this approach the on-site Coulomb interaction $U$ strongly suppresses the coupling to Holstein phonons at low temperatures. Moreover the vertex function does [*not*]{} show particularly strong forward scattering. Going to larger temperatures $kT\sim t$ we find that after an initial decrease with $U$, the electron-phonon coupling starts to [*increase*]{} with $U$, confirming a recent result of Cerruti, Cappelluti, and Pietronero. We show that this behavior is related to an unusual reentrant behavior from a phase separated to a paramagnetic state upon [*decreasing*]{} the temperature.'
author:
- Erik Koch
- Roland Zeyher
title: 'Renormalization of the electron-phonon coupling in the one-band Hubbard model'
---
The relevance of phonons for high-temperature superconductivity has been debated since the discovery of the high-T$_c$ cuprates. Recently, strong renormalization effects of the electrons near the Fermi surface, observed in angle-resolved photoemission in several cuprates, have been at least partially ascribed to phonons [@lanzara]. Furthermore, quantum Monte Carlo simulations of the Hubbard-Holstein model suggest that the electron-phonon coupling shows forward scattering and no substantial suppression at large U and small dopings [@hanke], similar as in the $1/N$ expansion for the t-J model [@zeyher]. On the other hand, it has been pointed out [@keller] that at small dopings the Kotliar-Ruckenstein (K-R) slave-boson approach [@KR] might yield results quite different from the 1/N expansion. Below we study the influence of strong electronic correlations on the electron-phonon coupling using the K-R approach. The quantity of interest is the static vertex function $\Gamma$ which acts as a momentum-dependent, multiplicative renormalization factor for the bare electron-phonon coupling.
We consider the one-band Hubbard model on a square lattice with nearest and next-nearest neighbor hopping, $t$ and $t'$, respectively, $$\label{H0}
H=-t \!\!\sum_{\langle i,j\rangle,\sigma}\!
c_{j\sigma}^\dagger c_{i\sigma}^{\phantom{\dagger}}
-t'\!\!\!\sum_{\langle\langle i,j\rangle\rangle,\sigma}\!\!
c_{j\sigma}^\dagger c_{i\sigma}^{\phantom{\dagger}}
+ U \sum_i n_{i\uparrow} n_{i\downarrow}\,.$$ For the non-interacting system the dispersion relation is $\varepsilon_k=-2t(\cos(k_x)\!+\!\cos(k_y))-4t'\cos(k_x)\cos(k_y)$, and the density of states has a logarithmic van Hove singularity at $4t'$. For this model we want to study the influence of electronic correlations on the coupling of electrons to an external field $V_i$. The bare coupling has the form $H' = \,\sum_{i,\sigma} n_{i\sigma}V_i$. Writing $V_i=g u_i$, $H'$ describes also the interaction of electrons and atomic displacements $u_i$ with coupling constant $g$. The linear change in the one-particle Green’s function $G(p)$ due to $V_q$ is, $$\label{var}
{{\delta G(p)}\over {\delta V_q}} = G(p) \Gamma(p,q) G(p+q),$$ with the charge or electron-phonon vertex $\Gamma(p,q) = -\delta G^{-1}(p) / \delta V_q$. The components of the three-dimensional vectors $p$ and $q$ consist of a frequency and a two-dimensional momentum. For the calculation of the vertex we use the slave-boson technique of Kotliar and Ruckenstein [@KR]. The basic idea of our approach [@sblin] is to calculate linear responses by linearizing the saddle-point equations for the perturbed system about the homogeneous saddle-point solution. We consider only paramagnetic solutions. Then there are three slave-bosons, $e$, $p$, and $d$, describing empty, singly, and doubly occupied sites, and two Lagrange parameters $\lambda^{(1)}$ and $\lambda^{(2)}$ enforcing consistency between slave-fermions and slave-bosons. The linear response to a charge-like perturbation of a given wave-vector can be determined by solving the $5\times5$ system of linear equations given in [@sblin]. Considering only static fields $V_{\bf q}$ we find $$\label{SBvertex}
\Gamma({\bf p},{\bf q}) = 1+{\delta\lambda^{(2)}\over \delta\,V_{\bf q}}
+z\,\left(\varepsilon_{\bf p}
+\varepsilon_{\bf p+q}\right){{\delta\,z}\over
\delta\,V_{\bf q}}.$$ The first term in Eq. (\[SBvertex\]) is due to the explicit dependence of $G^{-1}$ on $V$, the remaining terms are obtained by taking the derivative of the self-energy with respect to $V$. $\Gamma$ does not depend on frequencies because we assumed zero frequency in $q$ and because the saddle-point self-energy is frequency-independent. $z$ is given by the Kotliar-Ruckenstein choice $$\label{KRz}
z={(e+d)p\over\sqrt{1-p^2-d^2}\sqrt{1-e^2-p^2}}\;.$$
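The renormalization factor $z$ above has a simple limiting behavior that is easy to check numerically: in the uncorrelated limit $U=0$ at half filling, all four local configurations are equally probable ($e^2=d^2=p^2=1/4$, with the completeness relation $e^2+2p^2+d^2=1$), and $z=1$, while suppressing double occupancy lowers $z$ below one (band narrowing). The snippet below is only this consistency check; it does not solve the saddle-point equations, and the larger-$U$ amplitudes shown are schematic.

```python
import numpy as np

def z_factor(e, p, d):
    """Paramagnetic Kotliar-Ruckenstein renormalization z (p_up = p_down = p)."""
    return (e + d) * p / (np.sqrt(1 - p**2 - d**2) * np.sqrt(1 - e**2 - p**2))

# U = 0, half filling: all local configurations equally likely -> z = 1
print(z_factor(0.5, 0.5, 0.5))          # 1.0

# schematic suppressed double occupancy at half filling (larger U)
d = e = 0.3
p = np.sqrt((1 - e**2 - d**2) / 2)      # completeness e^2 + 2p^2 + d^2 = 1
print(z_factor(e, p, d))                # < 1: correlation-induced band narrowing
```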
In the limit $U\to\infty$ our approach reduces to method (II) of Ref. [@keller], and we have checked that for large $U$ we recover the results given in their Fig. 1.
While it was shown that the slave-boson linear-response method gives very good results for the charge susceptibility (see, e.g., Fig. 1 of Ref. [@sblin] for a comparison with exact diagonalization), it is not clear [*a priori*]{} how well it will work for the charge vertex $\Gamma({\bf p},{\bf q})$. As a check we have calculated the static vertex for a small system using exact diagonalization. The result for the scattering of an electron from a state just below, to a state just above the Fermi surface is shown in Fig. \[lanc\]. Considering the fact that in exact diagonalization the number of particles is fixed, while the slave-boson calculations are performed in the grand canonical ensemble, the agreement between both methods is remarkably good. This indicates that the slave-boson approach should work well at zero temperature.
To find out how well the slave-bosons work at finite temperatures we compare to the quantum Monte Carlo (QMC) calculations of Ref. [@hanke]. Fig. 4 of that work shows the effective electron-phonon coupling $g({\bf p},{\bf q})$, as defined in their Eq. (7), for the Hubbard model on an $8\times8$ lattice with filling $n=0.88$, calculated at the lowest fermionic Matsubara frequency, for an inverse temperature of $\beta=2$. For comparison, we show in Fig. \[hanke\] the results of our slave-boson calculations for the same model at $\omega=0$ and a slightly different filling $n=0.875$. Also here we find a remarkable agreement. In particular, we find that after an initial decrease the coupling for forward scattering (small $\bf q$) starts to [*increase*]{} for $U\gtrsim8$. This seems to indicate that the slave-boson method also works well at finite temperatures. Moreover, Eq. (\[SBvertex\]) naturally explains why the QMC results for different electron momenta $\bf p$, shown in Fig. 4 of Ref. [@hanke], are so similar.
After comparing the results of the slave-boson linear response calculations to more accurate methods, which are, however, limited to small systems (exact diagonalization) or finite temperatures (quantum Monte Carlo), we now turn to very large systems at very low temperatures. First we calculate the electron-phonon vertex for electrons on the Fermi surface, where in Eq. (\[SBvertex\]) $\varepsilon_{\bf p}$ and $\varepsilon_{{\bf p}+{\bf q}}$ are both replaced by the Fermi energy of the non-interacting system. Fig. \[gamma\] shows the vertex for momentum transfer $\bf q$ along high symmetry lines in the Brillouin zone for the Hubbard model at essentially zero temperature. The effect of next-nearest neighbor hopping is illustrated by comparing calculations for $t'=0$ and $t'=-0.35\,t$. We find that in both cases the on-site Coulomb interaction strongly reduces the electron-phonon coupling. This is not completely unexpected as the charge response should be strongly suppressed by an on-site Coulomb interaction. It is, however, in striking difference to the behavior at higher temperature (Fig. \[hanke\]). Also, while $\Gamma({\bf q})$ shows a broad peak around ${\bf q}=0$, we do not find a particularly pronounced forward scattering. In fact, the electron-phonon vertex is often strongest close to ${\bf q}=(\pi,0)$. This is different from what was found within an $1/N$ expansion [@zeyher]. The $1/N$ expansion relies on the smallness of $1/\delta N$, i.e., it breaks down at small dopings $\delta$. This can be seen from the fact that the charge-charge correlation function remains in leading order finite for $\delta \rightarrow 0$ though the exact correlation function vanishes in this limit. The Kotliar-Ruckenstein method, on the other hand, reproduces this limit correctly in leading order which makes it plausible that in this case the charge vertex is smaller than in the 1/N expansion, especially at smaller dopings [@keller]. Which of the two methods is more reliable, in particular, near optimal doping, is not clear and can probably only be judged by comparison with exact numerical methods.
To assess the importance of the electron-phonon coupling for superconductivity we calculate the renormalization factor $$\Lambda_\alpha =
{\int_{\mathrm{FS}}{dp \over|{\bf v_p} |}
\int_{\mathrm{FS}}{dp'\over|{\bf v_{p'}}|}
g_\alpha({\bf p})\Gamma({\bf p},{\bf p'-p})g_\alpha({\bf p'}) \over
z^2\;
\int_{\mathrm{FS}}{dp \over|{\bf v_p} |}
\int_{\mathrm{FS}}{dp'\over|{\bf v_{p'}}|} g_\alpha^2({\bf p})}$$ for the pairing channels $g_s({\bf p})=1$, $g_{s^\ast}({\bf p})=\cos(p_x)+\cos(p_y)$, $g_{p_x}({\bf p})=\sin(p_x)$, $g_{d_{x^2-y^2}}({\bf p})=\cos(p_x)-\cos(p_y)$, and $g_{d_{xy}}({\bf p})=\sin(p_x)\sin(p_y)$. $\Lambda_\alpha$ is equal to the ratio $\lambda_\alpha/\lambda^{(0)}_\alpha$, where $\lambda_\alpha$ and $\lambda^{(0)}_\alpha$ denote the dimensionless electron-phonon coupling constants in the interacting and non-interacting cases, respectively. To judge the importance of forward scattering we also calculate the renormalization factor for transport, $$\Lambda_{\mathrm{tr}}
={\int_{\mathrm{FS}}{dp \over|{\bf v_p} |}
\int_{\mathrm{FS}}{dp'\over|{\bf v_{p'}}|}
\Gamma({\bf p},{\bf p'-p})\,|{\bf v}({\bf p})-{\bf v}({\bf p'})|^2 \over
2\,z^2\;
\int_{\mathrm{FS}}{dp \over|{\bf v_p} |}
\int_{\mathrm{FS}}{dp'\over|{\bf v_{p'}}|} |{\bf v}({\bf p})|^2}\,.$$ The results are shown in Fig. \[lambda\]. We find that for $U\lesssim10$ the $s$-wave couplings decrease almost exponentially with $U$. For the special case of the Hubbard model with nearest neighbor hopping only ($t'=0$) we have $\Lambda_s^\ast=\Lambda_s$, since $g_{s^\ast}$ is constant on the Fermi surface. Moreover $\Lambda_\mathrm{tr}\approx\Lambda_s$, reflecting that there is no pronounced forward scattering; only for larger $U$ does $\Lambda_\mathrm{tr}$ become somewhat smaller than $\Lambda_s$. But by then both coupling constants are already very small. The higher pairing channels are even weaker, starting from zero at $U=0$, going through a maximum around $U=2$ only to decay almost exponentially. We can thus conclude that within Kotliar-Ruckenstein slave-boson theory, restricting the system to be paramagnetic, the contribution of Holstein phonons to superconductivity should be very small.
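The Fermi-surface averages entering $\Lambda_\alpha$ can be estimated on a discrete Brillouin-zone grid by replacing $\int_{\mathrm{FS}}dp/|{\bf v_p}|$ with a smeared energy delta function. The sketch below does this for the five pairing channels; since it uses a structureless placeholder $\Gamma\equiv1$ and $z=1$ (and an illustrative chemical potential), it only illustrates the geometry of the channels, e.g. that the $p$- and $d$-wave couplings vanish for a momentum-independent vertex, and not the correlated results shown in Fig. \[lambda\].

```python
import numpy as np

t, tp, mu = 1.0, -0.35, -0.5        # hoppings and an illustrative chemical potential
N, sigma = 200, 0.05                # k-grid size and delta-function smearing

k = np.linspace(-np.pi, np.pi, N, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing='ij')
eps = -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)

# smeared delta(eps_k - mu); on the 2D grid this reproduces the Fermi-surface
# line integral with its 1/|v_p| weight, up to a constant that cancels in the ratio
w = np.exp(-(eps - mu)**2 / (2 * sigma**2))

channels = {
    's'      : np.ones_like(kx),
    's*'     : np.cos(kx) + np.cos(ky),
    'p_x'    : np.sin(kx),
    'd_x2-y2': np.cos(kx) - np.cos(ky),
    'd_xy'   : np.sin(kx) * np.sin(ky),
}

Gamma, z = 1.0, 1.0                 # placeholders for the charge vertex and KR factor
for name, g in channels.items():
    num = Gamma * np.sum(w * g)**2                 # constant Gamma factorizes the double sum
    den = z**2 * np.sum(w) * np.sum(w * g**2)
    print(f"{name:8s} Lambda = {num/den:.3f}")
```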
We now come back to the surprising upturn of the electron-phonon vertex for $U\gtrsim8$ shown in Fig. \[hanke\] and also found in the QMC calculations of Ref. [@hanke]. Calculating $\Gamma({\bf q})$ at $kT\sim t$, we indeed find a drastically different behavior than for $T\to0$: Instead of monotonically decreasing with $U$, the coupling starts to [*increase*]{} and develops a very strong forward scattering peak. An example is shown in Fig. \[gammabe\]. Looking at the charge response function $\chi({\bf q})$ in the paramagnetic phase we find that this behavior is a precursor of a phase-separation instability, i.e., a divergence of $\chi({\bf q}=0)$. This has already been pointed out in Ref. [@emmanuele]. Calculating the phase diagram, we find a very peculiar reentrant behavior around the phase separated region, as shown in Fig. \[phasedia\]: Upon cooling, the system phase separates, but at low enough temperature it reverts back to the paramagnetic phase. Since in our calculations we only allow for a paramagnetic phase, other phases might mask the phase separation. Also, since slave-bosons may have problems at high temperatures [@KR], it is not clear if the Hubbard model really shows such a reentrant behavior. Nevertheless, we find a qualitatively similar behavior in the limit $U\to\infty$ in the gauge invariant $1/N$ expansion (i.e., a theory without Bose condensation). Phase separation at finite $T$ has also been proposed in Refs. [@Woelfle; @Imada; @Cappelluti]. Moreover the good agreement with the quantum Monte Carlo calculations of Ref. [@hanke] suggests that our approach might indeed capture the relevant physics. It would therefore be interesting to test the phase diagram shown in Fig. \[phasedia\] with QMC: A calculation for, e.g., $\beta=1$ and $U=4\ldots8$, i.e., at temperatures and values of $U$ where QMC has few problems, should show clear signs of phase separation. Of course, these calculations should be done at $\omega=0$, as the extrapolation from finite Matsubara frequencies might be difficult close to an instability.
In conclusion, we have studied the influence of strong electronic correlations on the electron-phonon interaction for the Hubbard-Holstein model using the Kotliar-Ruckenstein slave boson method. For high temperatures the boundaries of the phase-separated state were determined in the $T-U$ plane for different dopings and the increase of the static vertex $\Gamma$ near the boundaries was studied, confirming and extending recent results of Ref. [@emmanuele]. At low temperatures and moderate or small dopings we found that $\Gamma$ does not exhibit pronounced forward scattering behavior and that $\Gamma$ reduces dramatically the electron-phonon coupling. It seems that exact numerical calculations are necessary to judge the reliability of the $1/N$ and the Kotliar-Ruckenstein approaches.
We would like to thank O. Dolgov, O. Gunnarsson, W. Hanke, M. Lavagna, A. Muramatsu, and D. Vollhardt for fruitful discussions.
[99]{}
A. Lanzara, P.V. Bogdanov, X.J. Zhou, S.A. Kellar, D.L. Feng, E.D. Lu, T. Yoshida, H. Eisaki, A. Fujimori, K. Kishio, J.-I. Shimoyama, T. Noda, S. Uchida, Z. Hussain, and Z.-X. Shen, Nature [**412**]{}, 510 (2001).
Z.B. Huang, W. Hanke, E. Arrigoni, and D.J. Scalapino, cond-mat/0306131.
M.L. Kulić and R. Zeyher, Phys. Rev. B [**49**]{}, 4395 (1994); R. Zeyher and M.L. Kulić, Phys. Rev. B [**53**]{}, 2850 (1996).
J. Keller, C.E. Leal, and F. Forsthofer, Physica B [**206&207**]{}, 739 (1995).
G. Kotliar and A.E. Ruckenstein, Phys. Rev. Lett. [**57**]{}, 1362 (1986).
E. Koch, Phys. Rev. B [**64**]{}, 165113 (2001).
B. Cerruti, E. Cappelluti, and L. Pietronero, cond-mat/0307190 and cond-mat/0312654.
P. Wölfle, J. Low Temp. Phys. [**99**]{}, 625 (1995).
S. Onoda and M. Imada, cond-mat/0304580.
E. Cappelluti and R. Zeyher, Phys. Rev. B [**59**]{}, 6475 (1999).
---
abstract: 'Multiple hypothesis testing is a fundamental problem in high dimensional inference, with wide applications in many scientific fields. In genome-wide association studies, tens of thousands of tests are performed simultaneously to find if any SNPs are associated with some traits and those tests are correlated. When test statistics are correlated, false discovery control becomes very challenging under arbitrary dependence. In the current paper, we propose a novel method based on principal factor approximation, which successfully subtracts the common dependence and weakens significantly the correlation structure, to deal with an arbitrary dependence structure. We derive an approximate expression for false discovery proportion (FDP) in large scale multiple testing when a common threshold is used and provide a consistent estimate of realized FDP. This result has important applications in controlling FDR and FDP. Our estimate of realized FDP compares favorably with Efron (2007)’s approach, as demonstrated in the simulated examples. Our approach is further illustrated by some real data applications. We also propose a dependence-adjusted procedure, which is more powerful than the fixed threshold procedure.'
author:
- 'Jianqing Fan, Xu Han and Weijie Gu'
bibliography:
- 'amsis-ref.bib'
title: '**Estimating False Discovery Proportion Under Arbitrary Covariance Dependence[^1]** '
---
[**Keywords:**]{} Multiple hypothesis testing, high dimensional inference, false discovery rate, arbitrary dependence structure, genome-wide association studies.
Introduction
============
Multiple hypothesis testing is a fundamental problem in modern research on high dimensional inference, with wide applications in scientific fields such as biology, medicine, genetics, neuroscience, economics and finance. For example, in genome-wide association studies, massive amounts of genomic data (e.g. SNPs, eQTLs) are collected and tens of thousands of hypotheses are tested simultaneously to find whether any of these genomic data are associated with some observable traits (e.g. blood pressure, weight, some disease); in finance, thousands of tests are performed to see which fund managers have winning ability (Barras, Scaillet & Wermers 2010).
The False Discovery Rate (FDR) was introduced in the celebrated paper by Benjamini & Hochberg (1995) for large scale multiple testing. By definition, FDR is the expected proportion of falsely rejected null hypotheses among all of the rejected null hypotheses. The classification of tested hypotheses can be summarized in Table 1:
  ------------ ---------------------- -------------------- -------
                Number not rejected    Number rejected      Total
   True Null    $U$                    $V$                  $p_0$
   False Null   $T$                    $S$                  $p_1$
                $p-R$                  $R$                  $p$
  ------------ ---------------------- -------------------- -------
  : Classification of tested hypotheses[]{data-label="ar"}
Various testing procedures have been developed for controlling FDR, among which there are two major approaches. One is to compare the $P$-values with a data-driven threshold as in Benjamini & Hochberg (1995). Specifically, let $p_{(1)} \leq p_{(2)} \leq \cdots \leq p_{(p)}$ be the ordered observed $P$-values of $p$ hypotheses. Define $k = \text{max}\Big\{i: p_{(i)} \leq i\alpha/p \Big\}$ and reject $H_{(1)}^0, \cdots, H_{(k)}^0$, where $\alpha$ is a specified control rate. If no such $i$ exists, reject no hypothesis. The other related approach is to fix a threshold value $t$, estimate the FDR, and choose $t$ so that the estimated FDR is no larger than $\alpha$ (Storey 2002). Finding such a common threshold is based on a conservative estimate of FDR. Specifically, let $\mathrm{\widehat{FDR}}(t) = \widehat{p}_0t/(R(t)\vee 1)$, where $R(t)$ is the number of total discoveries with the threshold $t$ and $\widehat{p}_0$ is an estimate of $p_0$. Then solve $t$ such that $\mathrm{\widehat{FDR}}(t)\leq\alpha$. The equivalence between the two methods has been theoretically studied by Storey, Taylor & Siegmund (2004) and Ferreira & Zwinderman (2006).
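For concreteness, both procedures are only a few lines of code. The sketch below implements the Benjamini-Hochberg step-up rule and the fixed-threshold rule based on $\mathrm{\widehat{FDR}}(t)=\widehat{p}_0t/(R(t)\vee 1)$, using the common estimate $\widehat{p}_0=\#\{p_i>\lambda\}/(1-\lambda)$; the simulated $P$-values and the choice $\lambda=0.5$ are illustrative.

```python
import numpy as np

def bh_rejections(pvals, alpha):
    """Benjamini-Hochberg step-up: reject the k smallest P-values with
    k = max{i : p_(i) <= i*alpha/p} (k = 0 if no such i exists)."""
    p = len(pvals)
    order = np.argsort(pvals)
    ok = np.nonzero(np.sort(pvals) <= alpha * np.arange(1, p + 1) / p)[0]
    k = ok[-1] + 1 if ok.size else 0
    return order[:k]                        # indices of rejected hypotheses

def storey_threshold(pvals, alpha, lam=0.5):
    """Largest threshold t with p0_hat * t / (R(t) v 1) <= alpha."""
    pvals = np.asarray(pvals)
    p0_hat = np.sum(pvals > lam) / (1.0 - lam)     # estimate of p_0
    candidates = [t for t in np.sort(pvals)
                  if p0_hat * t / max(np.sum(pvals <= t), 1) <= alpha]
    return max(candidates) if candidates else 0.0

# toy example: 950 null P-values and 50 strong signals
rng = np.random.default_rng(0)
pv = np.concatenate([rng.uniform(size=950), rng.uniform(0, 1e-3, size=50)])
print(len(bh_rejections(pv, 0.15)), storey_threshold(pv, 0.15))
```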
Both procedures have been shown to control the FDR for independent test statistics. However, in practice, test statistics are usually correlated. Although Benjamini & Yekutieli (2001) and Clarke & Hall (2009) argued that, when the null distribution of the test statistics satisfies some conditions, the dependent case in multiple testing is asymptotically the same as the independent case, multiple testing under general dependence structures is still a very challenging and important open problem. Efron (2007) pioneered the work in this field and noted that correlation must be accounted for in deciding which null hypotheses are significant, because the accuracy of false discovery rate techniques will be compromised in high correlation situations. There are several papers showing that the Benjamini-Hochberg procedure or Storey's procedure can control FDR under some special dependence structures, e.g. Positive Regression Dependence on Subsets (Benjamini & Yekutieli 2001) and weak dependence (Storey, Taylor & Siegmund 2004). Sarkar (2002) also shows that FDR can be controlled by a generalized stepwise multiple testing procedure under positive regression dependence on subsets. However, even if the procedures are valid under these special dependence structures, they still suffer from efficiency loss by not using the actual dependence information. In other words, there are universal upper bounds for a given class of covariance matrices.
In the current paper, we develop a procedure for high dimensional multiple testing which can deal with any arbitrary dependence structure and fully incorporate the covariance information. This is in contrast with the previous literature, which considers multiple testing under special dependence structures; e.g. Sun & Cai (2009) developed a multiple testing procedure where the parameters underlying the test statistics follow a hidden Markov model, and Leek & Storey (2008) and Friguet, Kloareg & Causeur (2009) studied multiple testing under factor models. More specifically, consider the test statistics $$(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}}),$$ where ${\mbox{\boldmath $\Sigma$}}$ is known and $p$ is large. We would like to simultaneously test $H_{0i}: \mu_i=0$ vs $H_{1i}: \mu_i\neq0$ for $i=1,\cdots,p$. Note that ${\mbox{\boldmath $\Sigma$}}$ can be any non-negative definite matrix. Our procedure is called Principal Factor Approximation (PFA). The basic idea is to first take out the principal factors that derive the strong dependence among the observed data $Z_1,\cdots,Z_p$ and to account for such dependence in the calculation of the false discovery proportion (FDP). This is accomplished by the spectral decomposition of ${\mbox{\boldmath $\Sigma$}}$ and taking out the largest common factors so that the remaining dependence is weak. We then derive the asymptotic expression of the FDP, defined as $V/R$, that accounts for the strong dependence. The realized but unobserved principal factors that derive the strong dependence are then consistently estimated. The estimate of the realized FDP is obtained by substituting the consistent estimate of the unobserved principal factors.
We are especially interested in estimating FDP under the high dimensional sparse problem, that is, $p$ is very large, but the number of $\mu_i\neq0$ is very small. In section 2, we will explain the situation under which ${\mbox{\boldmath $\Sigma$}}$ is known. Sections 3 and 4 present the theoretical results and the proposed procedures. In section 5, the performance of our procedures is critically evaluated by various simulation studies. Section 6 is about the real data analysis. All the proofs are relegated to the Appendix and the Supplemental Material.
Motivation of the Study
=======================
In genome-wide association studies, consider $p$ SNP genotype data for $n$ individual samples, and further suppose that a response of interest (i.e. gene expression level or a measure of phenotype such as blood pressure or weight) is recorded for each sample. The SNP data are conventionally stored in an $n\times p$ matrix ${\mbox{\bf X}}=(x_{ij})$, with rows corresponding to individual samples and columns corresponding to individual SNPs. The total number $n$ of samples is on the order of hundreds, and the number $p$ of SNPs is on the order of tens of thousands.
Let $X_j$ and $Y$ denote, respectively, the random variables that correspond to the $j$th SNP coding and the outcome. The biological question of the association between genotype and phenotype can be restated as a problem in multiple hypothesis testing, [*i.e.*]{}, the simultaneous tests for each SNP $j$ of the null hypothesis $H_j$ of no association between the SNP $X_j$ and $Y$. Let $\{X_j^i\}_{i=1}^n$ be the sample data of $X_j$, $\{Y^i\}_{i=1}^n$ be the independent sample random variables of $Y$. Consider the marginal linear regression between $\{Y^i\}_{i=1}^n$ and $\{X_j^i\}_{i=1}^n$: $$\label{gwj1}
(\alpha_j,\beta_j)={\mathrm{argmin}}_{a_j,b_j}\frac{1}{n}\sum_{i=1}^nE(Y^i-a_j-b_jX_j^i)^2, \ \ \ j=1,\cdots,p.$$ We wish to simultaneously test the hypotheses $$H_{0j}:\quad\beta_j=0\quad\text{vs}\quad H_{1j}:\quad\beta_j\neq0, \quad\quad j=1,\cdots,p$$ to see which SNPs are correlated with the phenotype.
Recently, statisticians have shown increasing interest in the high dimensional sparse problem: although the number of hypotheses to be tested is large, the number of false nulls ($\beta_j\neq0$) is very small. For example, among 2000 SNPs, there may be only 10 SNPs which contribute to the variation in phenotypes or in a certain gene expression level. Our purpose is to find these 10 SNPs by multiple testing with some statistical accuracy.
Because of the sample correlations among $\{X_j^i\}_{i=1,j=1}^{i=n,j=p}$, the least-squares estimators $\{\widehat{\beta}_j\}_{j=1}^p$ for $\{\beta_j\}_{j=1}^p$ in (1) are also correlated. The following result describes the joint distribution of $\{\widehat{\beta}_j\}_{j=1}^p$. The proof is straightforward.
[Let $\widehat{\beta}_j$ be the least-squares estimator for $\beta_j$ in (1) based on $n$ data points, $s_{kl}$ be the sample correlation between $X_k$ and $X_l$. Assume that the conditional distribution of $Y^i$ given $X_1^i,\cdots,X_p^i$ is $N(\mu(X_1^i,\cdots,X_p^i),\sigma^2)$. Then, conditioning on $\{X_j^i\}_{i=1,j=1}^{i=n,j=p}$, the joint distribution of $\{\widehat{\beta}_j\}_{j=1}^p$ is $(\widehat{\beta}_1,\cdots,\widehat{\beta}_p)^T\sim N((\beta_1,\cdots,\beta_p)^T,{\mbox{\boldmath $\Sigma$}}^*)$, where the $(k,l)$th element in ${\mbox{\boldmath $\Sigma$}}^*$ is $\displaystyle{\mbox{\boldmath $\Sigma$}}_{kl}^*=\sigma^2s_{kl}/(ns_{kk}s_{ll})$.]{}
For ease of notation, let $Z_1,\cdots,Z_p$ be the standardized random variables of $\widehat{\beta}_1,\cdots,\widehat{\beta}_p$, that is, $$\label{b2}
Z_i=\frac{\widehat{\beta}_i}{{\mbox{SD}}(\widehat{\beta}_i)}=\frac{\widehat{\beta}_i}{\sigma/(\sqrt{n}s_{ii})}, \quad\quad i=1,\cdots,p.$$ In the above, we implicitly assume that $\sigma$ is known and the above standardized random variables are z-test statistics. The estimate of residual variance $\sigma^2$ will be discussed in Section 6 via refitted cross-validation (Fan, Guo & Hao, 2011). Then, conditioning on $\{X_j^i\}$, $$\label{c1}
(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}}),$$ where $\mu_i=\sqrt{n}\beta_is_{ii}/\sigma$ and covariance matrix ${\mbox{\boldmath $\Sigma$}}$ has the $(k,l)$th element as $s_{kl}$. Simultaneously testing (2) based on $(\widehat{\beta}_1,\cdots,\widehat{\beta}_p)^T$ is thus equivalent to testing
$$\label{c2}
H_{0j}:\quad\mu_j=0\quad\text{vs}\quad H_{1j}:\quad \mu_j\neq0, \quad\quad j=1,\cdots,p$$
based on $(Z_1,\cdots,Z_p)^T$.
In (4), ${\mbox{\boldmath $\Sigma$}}$ is the population covariance matrix of $(Z_1,\cdots,Z_p)^T$, and is known based on the sample data $\{X_j^i\}$. The covariance matrix ${\mbox{\boldmath $\Sigma$}}$ can have an arbitrary dependence structure. We would like to clarify that ${\mbox{\boldmath $\Sigma$}}$ is known and there is no estimation of the covariance matrix of $X_1,\cdots,X_p$ in this setup.
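The construction of the test statistics in (3) and (4) is easy to reproduce in a small simulation. The sketch below generates a correlated design, standardizes the columns so that the sample covariances coincide with the sample correlations (consistent with the use of $s_{kl}$ above), computes the marginal least-squares slopes and the corresponding $Z_i$, and forms the known correlation matrix ${\mbox{\boldmath $\Sigma$}}$. All numerical settings (the AR(1)-type design, 10 non-null coefficients, known $\sigma=1$) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 200, 500, 1.0

# correlated design with corr(X_k, X_l) = 0.5^{|k-l|} (illustrative)
cov = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
X -= X.mean(axis=0)
X /= X.std(axis=0)                    # unit sample variance: s_kk = 1

beta = np.zeros(p)
beta[:10] = 0.5                       # sparse signal: 10 false nulls
Y = X @ beta + sigma * rng.standard_normal(n)
Y -= Y.mean()

bhat = X.T @ Y / (X**2).sum(axis=0)   # marginal least-squares slopes, Eq. (1)
Z = np.sqrt(n) * bhat / sigma         # standardized statistics, Eq. (3) with s_ii = 1
Sigma = X.T @ X / n                   # known correlation matrix of (Z_1, ..., Z_p)

print(np.round(Z[:12], 2))            # the first 10 entries carry the signal
```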
Estimating False Discovery Proportion
=====================================
From now on assume that among all the $p$ null hypotheses, $p_0$ of them are true and $p_1$ hypotheses ($p_1 = p - p_0$) are false, and $p_1$ is supposed to be very small compared to $p$. For ease of presentation, we let $p$ be the sole asymptotic parameter, and assume $p_0\rightarrow\infty$ when $p\rightarrow\infty$. For a fixed rejection threshold $t$, we will reject those $P$-values no greater than $t$ and regard them as statistically significant. Because of its powerful applicability, this procedure has been widely adopted; see, e.g., Storey (2002), Efron (2007, 2010), among others. Our goal is to estimate the realized FDP for a given $t$ in the multiple testing problem (\[c2\]) based on the observations (\[c1\]) under an arbitrary dependence structure of ${\mbox{\boldmath $\Sigma$}}$. Our methods and results have direct implications for the situation in which ${\mbox{\boldmath $\Sigma$}}$ is unknown, the setting studied by Efron (2007, 2010) and Friguet et al (2009). In the latter case, ${\mbox{\boldmath $\Sigma$}}$ needs to be estimated with certain accuracy.
Approximation of FDP
--------------------
Define the following empirical processes: $$\begin{aligned}
V(t) & = & \#\{true \ null \ P_i: P_i \leq t\}, \nonumber\\
S(t) & = & \#\{false \ null \ P_i: P_i \leq t\} \quad \text{and} \nonumber\\
R(t) & = & \#\{P_i: P_i \leq t\}, \nonumber\end{aligned}$$ where $t\in[0,1]$. Then, $V(t)$, $S(t)$ and $R(t)$ are the number of false discoveries, the number of true discoveries, and the number of total discoveries, respectively. Obviously, $R(t) = V(t) + S(t)$, and $V(t)$, $S(t)$ and $R(t)$ are all random variables, depending on the test statistics $(Z_1,\cdots,Z_p)^T$. Moreover, $R(t)$ is observed in an experiment, but $V(t)$ and $S(t)$ are both unobserved.
By definition, ${\mbox{FDP}}(t)=V(t)/R(t)$ and ${\mbox{FDR}}(t) = E\Big[V(t)/R(t)\Big]$. One goal is to control FDR$(t)$ at a predetermined rate $\alpha$, say $15\%$. There is also substantial research interest in the statistical behavior of the number of false discoveries $V(t)$ and the false discovery proportion ${\mbox{FDP}}(t)$, which are unknown but realized given an experiment. One may even argue that controlling FDP is more relevant, since it is directly related to the current experiment.
We now approximate $V(t)/R(t)$ for the high dimensional sparse case $p_1\ll p$. Suppose $(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}})$ in which ${\mbox{\boldmath $\Sigma$}}$ is a correlation matrix. We need the following definition for weakly dependent normal random variables; other definitions can be found in Farcomeni (2007).
Suppose $(K_1,\cdots,K_p)^T\sim N((\theta_1,\cdots,\theta_p)^T,{\mbox{\bf A}})$. Then $K_1,\cdots,K_p$ are called weakly dependent normal variables if $$\lim_{p\rightarrow\infty}p^{-2}\sum_{i,j}|a_{ij}|=0,$$ where $a_{ij}$ denote the $(i,j)$th element of the covariance matrix ${\mbox{\bf A}}$.
Our procedure is called principal factor approximation (PFA). The basic idea is to decompose any dependent normal random vector as a factor model with weakly dependent normal random errors. The details are shown as follows. Firstly apply the spectral decomposition to the covariance matrix ${\mbox{\boldmath $\Sigma$}}$. Suppose the eigenvalues of ${\mbox{\boldmath $\Sigma$}}$ are $\lambda_1,\cdots,\lambda_{p}$, which have been arranged in decreasing order. If the corresponding orthonormal eigenvectors are denoted as ${\mbox{\boldmath $\gamma$}}_1,\cdots,{\mbox{\boldmath $\gamma$}}_p$, then $${\mbox{\boldmath $\Sigma$}}=\sum_{i=1}^p\lambda_i{\mbox{\boldmath $\gamma$}}_i{\mbox{\boldmath $\gamma$}}_i^T.$$ If we further denote ${\mbox{\bf A}}= \sum_{i=k+1}^p\lambda_i{\mbox{\boldmath $\gamma$}}_i{\mbox{\boldmath $\gamma$}}_i^T$ for an integer $k$, then $$\|{\mbox{\bf A}}\|_F^2 =\lambda_{k+1}^2+\cdots+\lambda_p^2,$$ where $\|\cdot\|_F$ is the Frobenius norm. Let ${\mbox{\bf L}}=(\sqrt{\lambda_1}{\mbox{\boldmath $\gamma$}}_1,\cdots,\sqrt{\lambda_k}{\mbox{\boldmath $\gamma$}}_k)$, which is a $p\times k$ matrix. Then the covariance matrix ${\mbox{\boldmath $\Sigma$}}$ can be expressed as $$\label{b19}
{\mbox{\boldmath $\Sigma$}}={\mbox{\bf L}}{\mbox{\bf L}}^T+{\mbox{\bf A}},$$ and $Z_1,\cdots,Z_p$ can be written as $$\label{b20}
Z_i=\mu_i+{\mbox{\bf b}}_i^T{\mbox{\bf W}}+K_i, \quad\quad i=1,\cdots,p,$$ where ${\mbox{\bf b}}_i=(b_{i1},\cdots,b_{ik})^T$, $(b_{1j},\cdots,b_{pj})^T=\sqrt{\lambda_j}{\mbox{\boldmath $\gamma$}}_j$, the factors are ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$ $\sim N_k(0,{\mbox{\bf I}}_k)$ and the random errors are $ (K_1,\cdots,K_p)^T$ $\sim N(0,{\mbox{\bf A}})$. Furthermore, $W_1,\cdots,W_k$ are independent of each other and independent of $K_1,\cdots,K_p$. Changing a probability if necessary, we can assume (\[b20\]) is the data generation process. In expression (\[b20\]), $\{\mu_i=0\}$ correspond to the true null hypotheses, while $\{\mu_i\neq0\}$ correspond to the false ones. Note that although (\[b20\]) is not exactly a classical multifactor model because of the existence of dependence among $K_1,\cdots,K_p$, we can nevertheless show that $(K_1,\cdots,K_p)^T$ is a weakly dependent vector if the number of factors $k$ is appropriately chosen.
We now discuss how to choose $k$ such that $(K_1,\cdots,K_p)^T$ is weakly dependent. Denote by $a_{ij}$ the $(i,j)$th element in the covariance matrix ${\mbox{\bf A}}$. If we have $$\label{c3}
p^{-1}(\lambda_{k+1}^2+\cdots+\lambda_p^2)^{1/2} \longrightarrow 0 \ \text{as} \ p\rightarrow\infty,$$ then by the Cauchy-Schwartz inequality $$p^{-2}\sum_{i,j}|a_{ij}|\leq p^{-1}\|{\mbox{\bf A}}\|_F=p^{-1}(\lambda_{k+1}^2+\cdots+\lambda_p^2)^{1/2}\longrightarrow0 \ \text{as} \ p\rightarrow\infty.$$ Note that $\sum_{i=1}^p\lambda_i=tr({\mbox{\boldmath $\Sigma$}})=p$, so that (\[c3\]) is self-normalized. Note also that the left hand side of (\[c3\]) is bounded by $p^{-1/2}\lambda_{k+1}$, which tends to zero whenever $\lambda_{k+1}=o(p^{1/2})$. Therefore, we can assume that $\lambda_k>cp^{1/2}$ for some $c>0$. In particular, if $\lambda_1=o(p^{1/2})$, the matrix ${\mbox{\boldmath $\Sigma$}}$ is weakly dependent and $k=0$. In practice, we always choose the smallest $k$ such that $$\frac{\sqrt{\lambda_{k+1}^2+\cdots+\lambda_{p}^2}}{\lambda_1+\cdots+\lambda_p} < \varepsilon$$ holds for a predetermined small $\varepsilon$, say, $0.01$.
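This choice of $k$ can be implemented directly from the eigenvalues. Below is a small illustrative helper (the naming and the default $\varepsilon=0.01$ are our own choices, matching the rule above) that returns the smallest such $k$.

```python
import numpy as np

def choose_num_factors(Sigma, eps=0.01):
    """Smallest k with sqrt(lambda_{k+1}^2+...+lambda_p^2) / (lambda_1+...+lambda_p) < eps."""
    lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]    # eigenvalues in decreasing order
    total = lam.sum()                                 # equals p for a correlation matrix
    # tail_sq[k] = lambda_{k+1}^2 + ... + lambda_p^2, with tail_sq[p] = 0
    tail_sq = np.append(np.cumsum((lam ** 2)[::-1])[::-1], 0.0)
    for k in range(lam.size + 1):
        if np.sqrt(tail_sq[k]) / total < eps:
            return k
    return lam.size
```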
Suppose $(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}})$. Choose an appropriate $k$ such that $$(C0) \ \ \ \ \ \ \ \ \frac{\sqrt{\lambda_{k+1}^2+\cdots+\lambda_{p}^2}}{\lambda_1+\cdots+\lambda_p}=O(p^{-\delta}) \ \ \ \text{for} \ \ \delta>0.$$ Let $\sqrt{\lambda_j}{\mbox{\boldmath $\gamma$}}_j=(b_{1j},\cdots,b_{pj})^T$ for $j=1,\cdots,k$. Then $$\label{a50}
\lim_{p\rightarrow\infty}\Big\{\mathrm{FDP}(t)-\frac{\sum_{i\in\text{\{true null\}}}\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]}{\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\Big]}\Big\}=0 \ \ \text{a.s.},$$ where $a_i = (1-\sum_{h=1}^kb_{ih}^2)^{-1/2}$, $\eta_i = {\mbox{\bf b}}_i^T{\mbox{\bf W}}$ with ${\mbox{\bf b}}_i=(b_{i1},\cdots,b_{ik})^T$ and ${\mbox{\bf W}}\sim N_k(0,{\mbox{\bf I}}_k)$ in (\[b20\]), and $\Phi(\cdot)$ and $z_{t/2}=\Phi^{-1}(t/2)$ are the cumulative distribution function and the $t/2$ lower quantile of a standard normal distribution, respectively.
Note that condition (C0) implies that $K_1,\cdots,K_p$ are weakly dependent random variables; it is in fact slightly stronger, as it requires (\[c3\]) to converge to zero at a polynomial rate in $p$.
Theorem 1 gives an asymptotic approximation for FDP$(t)$ under a general dependence structure. To the best of our knowledge, it is the first result to explicitly spell out the impact of dependence. It is also closely connected with the existing results for the independence and weak dependence cases. If we let $b_{ih}=0$ for $i=1,\cdots,p$ and $h=1,\cdots,k$ in (\[b20\]) and let $K_1,\cdots,K_p$ be weakly dependent or independent normal random variables, then Theorem 1 reduces to the weak dependence case or the independence case, respectively. In these two specific cases, the numerator of (\[a50\]) is just $p_0t$. Storey (2002) used an estimate for $p_0$, resulting in an estimator of ${\mbox{FDP}}(t)$ of the form $\widehat{p}_0t/R(t)$. This estimator has been shown to control the false discovery rate under independence and weak dependence. However, for general dependence, Storey's procedure will not work well because it ignores the correlation effect among the test statistics, as shown by (\[a50\]). Further discussion of the relationship between our result and other leading research on multiple testing under dependence is given in Section 3.4.
The results in Theorem 1 can be better understood through the special dependence structures below. These specific cases are also considered by Roquain & Villers (2010), Finner, Dickhaus & Roters (2007) and Friguet, Kloareg & Causeur (2009) under somewhat different settings.
**Example 1: \[Equal Correlation\]** If ${\mbox{\boldmath $\Sigma$}}$ has $\rho_{ij}=\rho\in[0,1)$ for $i\neq j$, then we can write $$Z_i=\mu_i+\sqrt{\rho}W+\sqrt{1-\rho}K_i \ \ \ i=1,\cdots,p$$ where $W\sim N(0,1)$, $K_i\sim N(0,1)$, and $W$ and all $K_i$’s are independent of each other. Thus, we have $$\lim_{p\rightarrow\infty}\Big[\mathrm{FDP}(t)-\frac{p_0\Big[\Phi(d(z_{t/2}+\sqrt{\rho}W))+\Phi(d(z_{t/2}-\sqrt{\rho}W))\Big]}{\sum_{i=1}^p\Big[\Phi(d(z_{t/2}+\sqrt{\rho}W+\mu_i))+\Phi(d(z_{t/2}-\sqrt{\rho}W-\mu_i))\Big]}\Big]=0 \ \ \text{a.s.},$$ where $d=(1-\rho)^{-1/2}$.
Note that Delattre & Roquain (2011) studied the FDP in a particular case of equal correlation. They provided a slightly different decomposition of $\{Z_i\}_{i=1}^p$ in the proof of their Lemma 3.3, where the errors $K_i$ sum to zero. Theorem 2.1 of Finner, Dickhaus & Roters (2007) also gives a result similar to Theorem 1 for the equal correlation case.
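As a quick numerical sanity check of Example 1 (purely illustrative and not part of our formal simulation studies; the sample size, sparsity pattern and nonzero means below are assumed values of our own choosing), one can simulate equal-correlated $Z$-statistics and compare the realized ${\mbox{FDP}}(t)$ with the expression above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p, p0, rho, t = 2000, 1990, 0.5, 0.01
mu = np.concatenate([np.zeros(p0), np.full(p - p0, 2.0)])   # assumed sparse nonzero means

W = rng.standard_normal()                                    # common factor realization
Z = mu + np.sqrt(rho) * W + np.sqrt(1 - rho) * rng.standard_normal(p)
pvals = 2 * norm.cdf(-np.abs(Z))

R = np.sum(pvals <= t)
V = np.sum(pvals[:p0] <= t)                                  # first p0 coordinates are true nulls
fdp_true = V / max(R, 1)

# Approximation from Example 1 with d = (1 - rho)^{-1/2}
d, z_half = 1 / np.sqrt(1 - rho), norm.ppf(t / 2)
num = p0 * (norm.cdf(d * (z_half + np.sqrt(rho) * W)) + norm.cdf(d * (z_half - np.sqrt(rho) * W)))
den = np.sum(norm.cdf(d * (z_half + np.sqrt(rho) * W + mu)) + norm.cdf(d * (z_half - np.sqrt(rho) * W - mu)))
print(fdp_true, num / den)    # the two quantities should be close for large p
```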
**Example 2: \[Multifactor Model\]** Consider a multifactor model: $$\label{c4}
Z_i=\mu_i+\eta_i+a_i^{-1}K_i, \quad\quad i=1,\cdots,p,$$ where $\eta_i$ and $a_i$ are defined in Theorem 1 and $K_i\sim N(0,1)$ for $i=1,\cdots,p$. All the $W_h$’s and $K_i$’s are independent of each other. In this model, $W_1,\cdots,W_k$ are the $k$ common factors. By Theorem 1, expression (\[a50\]) holds.
Note that the covariance matrix for model (\[c4\]) is $${\mbox{\boldmath $\Sigma$}}= {\mbox{\bf L}}{\mbox{\bf L}}^T + {\mathrm{diag}}(a_1^{-2}, \cdots, a_p^{-2}).$$ When $\{a_j\}$ is not a constant, the columns of ${\mbox{\bf L}}$ are not necessarily eigenvectors of ${\mbox{\boldmath $\Sigma$}}$. In other words, when principal component analysis is used, the decomposition (\[b19\]) can yield a different ${\mbox{\bf L}}$ and condition (\[c3\]) can require a different value of $k$. In this sense, there is a subtle difference between our approach and that in Friguet, Kloareg & Causeur (2009) when principal component analysis is used. Theorem 1 should be understood as a result for any decomposition (\[b19\]) that satisfies condition (C0). Because we use principal components as approximate factors, our procedure is called principal factor approximation. In practice, if one knows that the test statistics come from a factor model structure, a multiple testing procedure based on this factor model is preferable. However, when such a factor structure is not clear, our procedure can deal with an arbitrary covariance dependence structure.
In Theorem 1, since ${\mbox{FDP}}$ is bounded by 1, taking expectations on both sides of equation (\[a50\]) and applying the Portmanteau lemma yields the convergence of the FDR:
Under the assumptions in Theorem 1, $$\label{a51}
\lim_{p\rightarrow\infty}\Big\{\mathrm{FDR}(t)-E\Big[\frac{\sum_{i\in\text{\{true null\}}}\Big\{\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big\}}{\sum_{i=1}^p\Big\{\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\Big\}}\Big]\Big\}=0.$$
The expectation on the right hand side of (\[a51\]) is with respect to standard multivariate normal variables $(W_1,\cdots,W_k)^T\sim N_k(0,{\mbox{\bf I}}_k)$.
The proof of Theorem 1 is based on the following result.
Under the assumptions in Theorem 1, $$\begin{aligned}
\lim_{p\rightarrow\infty}\Big[p^{-1}R(t)-p^{-1}\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\Big]\Big]=0 \ \ \text{a.s.}, \label{a54}\\
\lim_{p\rightarrow\infty}\Big[p_0^{-1}V(t)-p_0^{-1}\sum_{i\in\text{\{true null\}}}\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]\Big]=0 \ \ \text{a.s.}. \label{a54a}\end{aligned}$$
The proofs of Theorem 1 and Proposition 2 are shown in the Appendix.
Estimating Realized FDP
-----------------------
In Theorem 1 and Proposition 2, the summation over the set of true null hypotheses is unknown. However, due to the high dimensionality and sparsity, both $p$ and $p_0$ are large and $p_1$ is relatively small. Therefore, we can use $$\label{a52}
\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]$$ as a conservative surrogate for $$\label{a53}
\sum_{i\in\text{\{true null\}}}\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big].$$ Since only $p_1$ extra terms are included in (\[a52\]), the substitution is accurate enough for many applications.
Recall that ${\mbox{FDP}}(t)=V(t)/R(t)$, in which $R(t)$ is observable and known. Thus, only the realization of $V(t)$ is unknown. The mean of $V(t)$ is $E\Big[\sum_{i\in\text{\{true null\}}}I(P_i\leq t)\Big]=p_0t$, since the $P$-values corresponding to the true null hypotheses are uniformly distributed. However, the dependence structure affects the variance of $V(t)$, which can be much larger than the value $p_0 t (1-t)$ given by the binomial formula under independence. Owen (2005) has theoretically studied the variance of the number of false discoveries. In our framework, expression (\[a52\]) is a function of i.i.d. standard normal variables. Given $t$, the variance of (\[a52\]) can be obtained by simulations, and hence the variance of $V(t)$ can be approximated via (\[a52\]). Relevant simulation studies will be presented in Section 5.
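For instance, the variance of (\[a52\]) can be approximated by straightforward Monte Carlo over the factors. A rough sketch is given below, assuming the vector `a` of the $a_i$'s and the $p\times k$ loading matrix `B` collecting the ${\mbox{\bf b}}_i$'s are available (these names are hypothetical and introduced only for illustration).

```python
import numpy as np
from scipy.stats import norm

def approx_var_false_discoveries(a, B, t, n_mc=10000, seed=0):
    """Monte Carlo variance of sum_i [Phi(a_i(z_{t/2}+eta_i)) + Phi(a_i(z_{t/2}-eta_i))],
    where eta_i = b_i^T W and W ~ N(0, I_k), as an approximation to var(V(t))."""
    rng = np.random.default_rng(seed)
    z_half = norm.ppf(t / 2)
    k = B.shape[1]
    W = rng.standard_normal((n_mc, k))     # Monte Carlo draws of the common factors
    eta = W @ B.T                          # n_mc x p matrix of eta_i realizations
    V_approx = (norm.cdf(a * (z_half + eta)) + norm.cdf(a * (z_half - eta))).sum(axis=1)
    return V_approx.var()
```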
In recent years, there has been substantial interest in the realized random variable FDP itself in a given experiment, rather than in controlling the FDR, as we are usually concerned about the number of false discoveries given the observed sample of test statistics rather than an average of FDP over hypothetical replications of the experiment. See Genovese & Wasserman (2004), Meinshausen (2005), Efron (2007), Friguet et al (2009), etc. In our problem, by Proposition 2 it is known that $V(t)$ is approximately $$\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big].$$ Let $$\mathrm{FDP_A}(t)=\Big(\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]\Big)/R(t),$$ if $R(t)\neq0$ and $\mathrm{FDP_A}(t)=0$ when $R(t)=0$. Given observations $z_1,\cdots,z_p$ of the test statistics $Z_1,\cdots,Z_p$, if the unobserved but realized factors $W_1,\cdots,W_k$ can be estimated by $\widehat{W}_1,\cdots,\widehat{W}_k$, then we can obtain an estimator of $\mathrm{FDP_A}(t)$ by $$\label{b21}
\widehat{{\mbox{FDP}}}(t)=\min\Big(\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\widehat{\eta}_i))+\Phi(a_i(z_{t/2}-\widehat{\eta}_i))\Big],R(t)\Big)/R(t),$$ when $R(t)\neq0$ and $\widehat{{\mbox{FDP}}}(t)=0$ when $R(t)=0$. Note that in (\[b21\]), $\widehat{\eta}_i=\sum_{h=1}^kb_{ih}\widehat{W}_h$ is an estimator for $\eta_i={\mbox{\bf b}}_i^T{\mbox{\bf W}}$.
The following procedure is one practical way to estimate ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$ based on the data. For observed values $z_1,\cdots,z_p$, we choose the smallest $90\%$ of $|z_i|$’s, say. For ease of notation, assume the first $m$ $z_i$’s have the smallest absolute values. Then approximately $$\label{b22}
Z_i={\mbox{\bf b}}_i^T{\mbox{\bf W}}+K_i,\quad i=1,\cdots,m.$$ The approximation from (\[b20\]) to (\[b22\]) stems from the intuition that large $|\mu_i|$'s tend to produce large $|z_i|$'s, as $Z_i\sim N(\mu_i,1)$ for $1\leq i\leq p$, and the sparsity makes the approximation errors negligible. Finally, we apply robust $L_1$-regression to the equation set (\[b22\]) and obtain the least-absolute-deviation estimates $\widehat{W}_1,\cdots,\widehat{W}_k$. We use $L_1$-regression rather than $L_2$-regression because there might be nonzero $\mu_i$'s involved in (\[b22\]), and $L_1$ is more robust to outliers than $L_2$. Other possible methods include using penalized methods such as the LASSO or SCAD to explore the sparsity. For example, one can minimize $$\sum_{i=1}^p (Z_i - \mu_i - {\mbox{\bf b}}_i^T{\mbox{\bf W}})^2 + \sum_{i=1}^p p_\lambda(|\mu_i|)$$ with respect to $\{\mu_i\}_{i=1}^p$ and ${\mbox{\bf W}}$, where $p_\lambda(\cdot)$ is a folded-concave penalty function (Fan and Li, 2001).
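A minimal sketch of this estimation step and of the estimator (\[b21\]) is given below, under the assumption that the loading matrix `B` (rows ${\mbox{\bf b}}_i^T$) and the vector `a` of the $a_i$'s have already been computed as in the earlier sketches. For simplicity the $L_1$ fit is done by direct minimization of the sum of absolute deviations; in practice a dedicated quantile-regression solver could be substituted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def estimate_fdp(z, B, a, t, keep_frac=0.90):
    """Estimate the realized FDP(t) in the spirit of (b21):
    1. fit the factors W by L1-regression on the smallest 90% of the |z_i|'s;
    2. plug eta_hat = B @ w_hat into the upper-bound formula and divide by R(t)."""
    p, k = B.shape
    keep = np.argsort(np.abs(z))[: int(keep_frac * p)]   # indices of the smallest |z_i|'s

    def sad(w):                                           # sum of absolute deviations
        return np.abs(z[keep] - B[keep] @ w).sum()

    w_hat = minimize(sad, np.zeros(k), method="Nelder-Mead").x
    eta_hat = B @ w_hat

    z_half = norm.ppf(t / 2)
    pvals = 2 * norm.cdf(-np.abs(z))
    R = np.sum(pvals <= t)
    if R == 0:
        return 0.0
    V_hat = (norm.cdf(a * (z_half + eta_hat)) + norm.cdf(a * (z_half - eta_hat))).sum()
    return min(V_hat, R) / R
```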
The estimator (\[b21\]) performs significantly better than Efron (2007)’s estimator in our simulation studies. One difference is that in our setting ${\mbox{\boldmath $\Sigma$}}$ is known. The other is that we give a better approximation as shown in Section 3.4.
Efron (2007) proposed the concept of conditional FDR. Consider $E(V(t))/R(t)$ as one type of FDR definitions (see Efron (2007) expression (46)). The numerator $E(V(t))$ is over replications of the experiment, and equals a constant $p_0t$. But if the actual correlation structure in a given experiment is taken into consideration, then Efron (2007) defines the conditional FDR as $E(V(t)|A)/R(t)$ where $A$ is a random variable which measures the dependency information of the test statistics. Estimating the realized value of $A$ in a given experiment by $\widehat{A}$, one can have the estimated conditional FDR as $E(V(t)|\widehat{A})/R(t)$. Following Efron’s proposition, Friguet et al (2009) gave the estimated conditional FDR by $E(V(t)|\widehat{{\mbox{\bf W}}})/R(t)$ where $\widehat{{\mbox{\bf W}}}$ is an estimate of the realized random factors ${\mbox{\bf W}}$ in a given experiment.
Our estimator in (\[b21\]) for the realized FDP in a given experiment can be understood as an estimate of conditional FDR. Note that (\[a53\]) is actually $E(V(t)|\{\eta_i\}_{i=1}^p)$. By Proposition 2, we can approximate $V(t)$ by $E(V(t)|\{\eta_i\}_{i=1}^p)$. Thus the estimate of conditional FDR $E(V(t)|\{\widehat{\eta}_i\}_{i=1}^p)/R(t)$ is directly an estimate of the realized FDP $V(t)/R(t)$ in a given experiment.
Asymptotic Justification
------------------------
Let ${\mbox{\bf w}}=(w_1,\cdots,w_k)^T$ be the realized values of $\{W_h\}_{h=1}^k$, and $\widehat{{\mbox{\bf w}}}$ be an estimator for ${\mbox{\bf w}}$. We now show in Theorem 2 that $\widehat{{\mbox{FDP}}}(t)$ in (\[b21\]) based on a consistent estimator $\widehat{{\mbox{\bf w}}}$ has the same convergence rate as $\widehat{{\mbox{\bf w}}}$ under some mild conditions.
If the following conditions are satisfied:
- (C1) $R(t)/p>H$ for some $H>0$ as $p\rightarrow\infty$,
- (C2) $\min_{1\leq i\leq p}\min(|z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}|,|z_{t/2}-{\mbox{\bf b}}_i^T{\mbox{\bf w}}|)\geq \tau>0$,
- (C3) $\|\widehat{{\mbox{\bf w}}}-{\mbox{\bf w}}\|_2=O_p(p^{-r})$ for some $r>0$,
then $|\mathrm{\widehat{FDP}}(t)-\mathrm{FDP_A}(t)|=O(\|{\widehat {\mbox{\bf w}}}-{\mbox{\bf w}}\|_2)$.
In Theorem 2, (C2) is a reasonable condition because $z_{t/2}$ is a large negative number when the threshold $t$ is small and ${\mbox{\bf b}}_i^T{\mbox{\bf w}}$ is a realization from a normal distribution $N(0, \sum_{h=1}^kb_{ih}^2)$ with $\sum_{h=1}^kb_{ih}^2<1$. Thus $z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}$ or $z_{t/2}-{\mbox{\bf b}}_i^T{\mbox{\bf w}}$ is unlikely to be close to zero.
Theorem 3 shows the asymptotic consistency of the $L_1-$regression estimators under model (\[b22\]). Portnoy (1984b) has proven the asymptotic consistency of robust regression estimators when the random errors are i.i.d. However, his proof does not work here because of the weak dependence of the random errors. Our result allows $k$ to grow with $m$, even faster than the rate $o(m^{1/4})$ imposed by Portnoy (1984b).
Suppose (\[b22\]) is a correct model. Let ${\widehat {\mbox{\bf w}}}$ be the $L_1-$regression estimator: $${\widehat {\mbox{\bf w}}}\equiv {\mathrm{argmin}}_{{\mbox{\boldmath $\beta$}}\in R^k}\sum_{i=1}^m|Z_i-{\mbox{\bf b}}_i^T{\mbox{\boldmath $\beta$}}|$$ where ${\mbox{\bf b}}_i=(b_{i1},\cdots,b_{ik})^T$. Let ${\mbox{\bf w}}=(w_1,\cdots,w_k)^T$ be the realized values of $\{W_h\}_{h=1}^k$. Suppose $k=O(m^{\kappa})$ for $0\leq\kappa<1-\delta$. Under the assumptions
- (C4) $\sum_{j=k+1}^p\lambda_j^2\leq\eta$ for $\eta=O(m^{2\kappa})$,
- (C5) $$\lim_{m\rightarrow\infty}\sup_{\|{\mbox{\bf u}}\|=1}m^{-1}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)=0$$ for a constant $d>0$,
- (C6) $a_{\max}/a_{\min}\leq S$ for some constant $S$ when $m\rightarrow\infty$, where $1/a_i$ is the standard deviation of $K_i$,
- (C7) $a_{\min}=O(m^{(1-\kappa)/2})$.
We have $\|{\widehat {\mbox{\bf w}}}-{\mbox{\bf w}}\|_2=O_p(\sqrt{\frac{k}{m}})$.
(C4) is stronger than (C0) in Theorem 1 as (C0) only requires $\sum_{j=k+1}^p\lambda_j^2=O(p^{2-2\delta})$. (C5) ensures the identifiability of ${\mbox{\boldmath $\beta$}}$, which is similar to Proposition 3.3 in Portnoy (1984a). (C6) and (C7) are imposed to facilitate the technical proof.
We now briefly discuss the role of the number of factors $k$. To make the approximation in Theorem 1 hold, we need $k$ to be large. On the other hand, to make the realized factors estimable with reasonable accuracy, we hope to choose a small $k$, as demonstrated in Theorem 3. Thus, the practical choice of $k$ should be made with care.
Since $m$ is chosen as a certain large proportion of $p$, the combination of Theorems 2 and 3 shows the asymptotic consistency of $\widehat{{\mbox{FDP}}}(t)$ based on the $L_1-$regression estimator of ${\mbox{\bf w}}=(w_1,\cdots,w_k)^T$ in model (\[b22\]): $$|\mathrm{\widehat{FDP}}(t)-\mathrm{FDP_A}(t)|=O_p(\sqrt{\frac{k}{m}}).$$
The results in Theorem 3 are based on the assumption that (\[b22\]) is a correct model. In the following, we show that even if (\[b22\]) is not a correct model, the effects of misspecification are negligible when $p$ is sufficiently large. To facilitate the mathematical derivations, we instead consider the least-squares estimator. Suppose we are estimating ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$ from (\[b20\]). Let ${\mbox{\bf X}}$ be the design matrix of model (\[b20\]). Then the least-squares estimator for ${\mbox{\bf W}}$ is $\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}={\mbox{\bf W}}+({\mbox{\bf X}}^T{\mbox{\bf X}})^{-1}{\mbox{\bf X}}^T({\mbox{\boldmath $\mu$}}+{\mbox{\bf K}})$, where ${\mbox{\boldmath $\mu$}}=(\mu_1,\cdots,\mu_p)^T$ and ${\mbox{\bf K}}=(K_1,\cdots,K_p)^T$. Instead, we estimate $W_1,\cdots,W_k$ based on the simplified model (\[b22\]), which ignores the sparse $\{\mu_i\}$. Then the least-squares estimator for ${\mbox{\bf W}}$ is $\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}^*={\mbox{\bf W}}+({\mbox{\bf X}}^T{\mbox{\bf X}})^{-1}{\mbox{\bf X}}^T{\mbox{\bf K}}={\mbox{\bf W}}$, in which we utilize the orthogonality between ${\mbox{\bf X}}$ and ${\mathrm{var}}({\mbox{\bf K}})$. The following result shows that the effect of misspecification in model (\[b22\]) is negligible when $p\rightarrow\infty$, and establishes the consistency of the least-squares estimator.
The bias due to ignoring non-nulls is controlled by $$\|\widehat{{\mbox{\bf W}}}_{\mathrm{LS}}-{\mbox{\bf W}}\|_2=\|\widehat{{\mbox{\bf W}}}_{\mathrm{LS}}-\widehat{{\mbox{\bf W}}}_{\mathrm{LS}}^*\|_2\leq\|{\mbox{\boldmath $\mu$}}\|_2\Big(\sum_{i=1}^k\lambda_i^{-1}\Big)^{1/2}.$$
In Theorem 1, we can choose an appropriate $k$ such that $\lambda_k>cp^{1/2}$, as noted in the discussion preceding Theorem 1. Therefore, $\sum_{i=1}^k\lambda_i^{-1}\rightarrow 0$ as $p\rightarrow\infty$ is a reasonable condition. When $\{\mu_i\}_{i=1}^p$ are truly sparse, it is expected that $\|{\mbox{\boldmath $\mu$}}\|_2$ grows slowly or is even bounded, so that the bound in Theorem 4 is small. $L_1-$regression is expected to be even more robust to the outliers in the sparse vector $\{\mu_i\}_{i=1}^p$.
Dependence-Adjusted Procedure
-----------------------------
A problem of the method used so far is that the ranking of statistical significance is completely determined by the ranking of the test statistics $\{|Z_i|\}$. This is undesirable and can be inefficient for the dependent case: the correlation structure should also be taken into account. We now show how to use the correlation structure to improve the signal to noise ratio.
Note that by (\[b20\]), $Z_i-{\mbox{\bf b}}_i^T{\mbox{\bf W}}\sim N(\mu_i,a_i^{-2})$ where $a_i$ is defined in Theorem 1. Since $a_i^{-1}\leq1$, the signal to noise ratio increases, which makes the resulting procedure more powerful. Thus, if we know the true values of the common factors ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$, we can use $a_i(Z_i-{\mbox{\bf b}}_i^T{\mbox{\bf W}})$ as the test statistics. The dependence-adjusted $p$-values $\displaystyle 2\Phi(-|a_i(Z_i-{\mbox{\bf b}}_i^T{\mbox{\bf W}})|)$ can then be used. Note that this testing procedure has different thresholds for different hypotheses based on the magnitude of $Z_i$, and has incorporated the correlation information among hypotheses. In practice, given $Z_i$, the common factors $\{W_h\}_{h=1}^k$ are realized but unobservable. As shown in section 3.2, they can be estimated. The dependence adjusted $p$-values are then given by $$\label{d1}
2\Phi(-|a_i(Z_i-{\mbox{\bf b}}_i^T\widehat{{\mbox{\bf W}}})|)$$ for ranking the hypotheses where $\widehat{{\mbox{\bf W}}}=(\widehat{W}_1,\cdots,\widehat{W}_k)^T$ is an estimate of the principal factors. We will show in section 5 by simulation studies that this dependence-adjusted procedure is more powerful. The “factor adjusted multiple testing procedure" in Friguet et al (2009) shares a similar idea.
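The adjustment itself is a one-line computation once the factors have been estimated. The sketch below reuses the hypothetical names `B`, `a` and `w_hat` from the earlier sketches (loadings, scaling constants $a_i$, and the estimated realized factors, respectively).

```python
import numpy as np
from scipy.stats import norm

def dependence_adjusted_pvalues(z, B, a, w_hat):
    """Adjusted p-values 2*Phi(-|a_i (Z_i - b_i^T W_hat)|) in the spirit of (d1)."""
    adjusted = a * (z - B @ w_hat)      # subtract the estimated common factors, rescale to unit variance
    return 2 * norm.cdf(-np.abs(adjusted))
```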
Relation with Other Methods
---------------------------
Efron (2007) proposed a novel parametric model for $V(t)$: $$V(t)=p_0t\Big[1+2A\frac{(-z_{t/2})\phi(z_{t/2})}{\sqrt{2}t}\Big],$$ where $A\sim N(0,\alpha^2)$ for some real number $\alpha$ and $\phi(\cdot)$ stands for the probability density function of the standard normal distribution. The correlation effect is explained by the dispersion variate $A$. His procedure is to estimate $A$ from the data and use $$p_0t\Big[1+2\widehat{A}\frac{(-z_{t/2})\phi(z_{t/2})}{\sqrt{2}t}\Big]\Big/R(t)$$ as an estimator for the realized ${\mbox{FDP}}(t)$. Note that the above expressions are adaptations of his procedure for the one-sided test to our two-sided test setting. In his simulation, the above estimator captures the general trend of the FDP, but it is not accurate and deviates from the true FDP with a large amount of variability. Consider our estimator $\widehat{\mathrm{FDP}}(t)$ in (\[b21\]). Write $\widehat{\eta}_i=\sigma_iQ_i$ where $Q_i\sim N(0,1)$. When $\sigma_i\rightarrow0$ for all $i\in\{\text{true null}\}$, by a second order Taylor expansion,
$$\widehat{\mathrm{FDP}}(t)\approx\frac{p_0t}{R(t)}\Big[1+\sum_{i\in\{\text{true null}\}}\sigma_i^2(Q_i^2-1)\frac{(-z_{t/2})\phi(z_{t/2})}{p_0t}\Big].$$
By comparison with Efron’s estimator, we can see that $$\widehat{A}=\frac{1}{\sqrt{2}p_0}\sum_{i\in\{\text{true null}\}}\Big[\widehat{\eta}_i^2-E(\widehat{\eta}_i^2)\Big].$$ Thus, our method is more general.
Leek & Storey (2008) considered a general framework for modeling the dependence in multiple testing. Their idea is to model the dependence via a factor model and to reduce the multiple testing problem from the dependent case to the independent case by accounting for the effects of common factors. They also provided a method for estimating the common factors. In contrast, our problem is different from Leek & Storey's, and we estimate the common factors by principal factor approximation and other methods. In addition, we provide the approximate FDP formula and its consistent estimate.
Friguet, Kloareg & Causeur (2009) followed closely the framework of Leek & Storey (2008). They assumed that the data come directly from a multifactor model with independent random errors, then used the EM algorithm to estimate all the parameters in the model and obtained an estimator for FDP$(t)$. In particular, they subtract $\eta_i$ out of (\[c4\]) based on their estimate from the EM algorithm to improve the efficiency. However, in their studies the ratio of the estimated number of factors to the true number of factors varies with the dependence structure, owing to the EM algorithm, thus leading to inaccurate estimates of ${\mbox{FDP}}(t)$. Moreover, it is hard to derive theoretical results based on the estimator from their EM algorithm. Compared with their results, our procedure does not assume any specific dependence structure of the test statistics. What we do is to decompose the test statistics into an approximate factor model with weakly dependent errors, derive the factor loadings, and estimate the unobserved but realized factors by $L_1$-regression. Since the theoretical distribution of $V(t)$ is known, estimator (\[b21\]) performs well provided $W_1,\cdots,W_k$ are well estimated.
Approximate Estimation of FDR
=============================
In this section we propose some ideas that can asymptotically control the FDR, rather than the FDP, under arbitrary dependence. Although their validity is yet to be established, the simulation studies reveal promising results. Therefore, they are worth some discussion and serve as a direction for our future work.
Suppose that the number of false null hypotheses $p_1$ is known. If the signal $\mu_i$ for $i\in\text{\{false null\}}$ is strong enough such that $$\label{b24}
\Phi\Big(a_i(z_{t/2}+\eta_i+\mu_i)\Big)+\Phi\Big(a_i(z_{t/2}-\eta_i-\mu_i)\Big)\approx1,$$ then asymptotically the FDR is approximately given by $$\label{b25}
{\mbox{FDR}}(t)=E\Big\{\frac{\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]}{\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]+p_1}\Big\},$$ which is the expectation of a function of $W_1,\cdots,W_k$. Note that ${\mbox{FDR}}(t)$ is a known function and can be computed by Monte Carlo simulation. For any predetermined error rate $\alpha$, we can use the bisection method to solve for $t$ so that ${\mbox{FDR}}(t)=\alpha$. Since $k$ is not large, the Monte Carlo computation is sufficiently fast for most applications.
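A possible implementation of this calibration is sketched below, under the assumptions that $p_1$, the $a_i$'s and the loading matrix are known, and that the approximate FDR in (\[b25\]) is monotone in $t$ (so that bisection is valid); the helper names are our own.

```python
import numpy as np
from scipy.stats import norm

def fdr_approx(t, a, B, p1, n_mc=5000, seed=0):
    """Monte Carlo evaluation of the approximate FDR in (b25)."""
    rng = np.random.default_rng(seed)                 # fixed seed keeps the function deterministic in t
    z_half = norm.ppf(t / 2)
    W = rng.standard_normal((n_mc, B.shape[1]))
    eta = W @ B.T
    V = (norm.cdf(a * (z_half + eta)) + norm.cdf(a * (z_half - eta))).sum(axis=1)
    return np.mean(V / (V + p1))

def calibrate_threshold(alpha, a, B, p1, lo=1e-10, hi=0.5, iters=50):
    """Bisection for t such that the approximate FDR equals alpha,
    assuming the approximate FDR is increasing in t."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fdr_approx(mid, a, B, p1) > alpha:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```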
The requirement (\[b24\]) is not very strong. First of all, $\Phi(3)\approx0.9987$, so (\[b24\]) will hold if either argument of $\Phi(\cdot)$ is greater than 3. Secondly, $1-\sum_{h=1}^kb_{ih}^2$ is usually very small. For example, if it is $0.01$, then $a_i=(1-\sum_{h=1}^kb_{ih}^2)^{-1/2}\approx10$, which means that if either $z_{t/2}+\eta_i+\mu_i$ or $z_{t/2}-\eta_i-\mu_i$ exceeds 0.3, then (\[b24\]) is approximately satisfied. Since the sample size $n$ enters the problem in Section 2 through $\mu_i=\sqrt{n}\beta_i\widehat{\sigma}_i/\sigma$, (\[b24\]) is not a very strong condition on the signal strength $\{\beta_i\}$.
Note that Finner et al (2007) considered a “Dirac uniform model", where the $p$-values corresponding to a false hypothesis are exactly equal to 0. This model might be potentially useful for FDR control. The calculation of (\[b25\]) requires the knowledge of the proportion $p_1$ of signal in the data. Since $p_1$ is usually unknown in practice, there is also future research interest in estimating $p_1$ under arbitrary dependency.
Simulation Studies
==================
In the simulation studies, we consider $p=2000$, $n=100$, $\sigma=2$, the number of false null hypotheses $p_1=10$ and the nonzero $\beta_i=1$, unless stated otherwise. We present 6 different dependence structures for ${\mbox{\boldmath $\Sigma$}}$ of the test statistics $(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}})$. Following the setting in Section 2, ${\mbox{\boldmath $\Sigma$}}$ is the correlation matrix of a random sample of size $n$ of the $p-$dimensional vector ${\mbox{\bf X}}_i=(X_{i1},\cdots,X_{ip})$, and $\mu_j=\sqrt{n}\beta_j\widehat{\sigma}_j/\sigma$, $j=1,\cdots,p$. The data-generating processes for the vectors ${\mbox{\bf X}}_i$ are as follows.
- **\[Equal correlation\]** Let ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})^T\sim N_p(0,{\mbox{\boldmath $\Sigma$}})$ where ${\mbox{\boldmath $\Sigma$}}$ has diagonal element 1 and off-diagonal element $1/2$.
- **\[Fan & Song’s model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $\{X_{k}\}_{k=1}^{1900}$ be i.i.d. $N(0,1)$ and $$X_{k}=\sum_{l=1}^{10}X_{l}(-1)^{l+1}/5+\sqrt{1-\frac{10}{25}}\epsilon_{k}, \ \ k=1901,\cdots,2000,$$ where $\{\epsilon_{k}\}_{k=1901}^{2000}$ are standard normally distributed.
- **\[Independent Cauchy\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $\{X_{k}\}_{k=1}^{2000}$ be i.i.d. Cauchy random variables with location parameter 0 and scale parameter 1.
- **\[Three factor model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $$X_{j}=\rho_j^{(1)}W^{(1)}+\rho_j^{(2)}W^{(2)}+\rho_j^{(3)}W^{(3)}+H_{j},$$ where $W^{(1)}\sim N(-2,1)$, $W^{(2)}\sim N(1,1)$, $W^{(3)}\sim N(4,1)$, $\rho_{j}^{(1)},\rho_{j}^{(2)},\rho_{j}^{(3)}$ are i.i.d. $U(-1,1)$, and $H_{j}$ are i.i.d. $N(0,1)$.
- **\[Two factor model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $$X_{j}=\rho_j^{(1)}W^{(1)}+\rho_j^{(2)}W^{(2)}+H_{j},$$ where $W^{(1)}$ and $W^{(2)}$ are i.i.d. $N(0,1)$, $\rho_{j}^{(1)}$ and $\rho_{j}^{(2)}$ are i.i.d. $U(-1,1)$, and $H_{j}$ are i.i.d. $N(0,1)$.
- **\[Nonlinear factor model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $$X_{j}=\sin(\rho_j^{(1)}W^{(1)})+sgn(\rho_j^{(2)})\exp(|\rho_j^{(2)}|W^{(2)})+H_{j},$$ where $W^{(1)}$ and $W^{(2)}$ are i.i.d. $N(0,1)$, $\rho_{j}^{(1)}$ and $\rho_{j}^{(2)}$ are i.i.d. $U(-1,1)$, and $H_{j}$ are i.i.d. $N(0,1)$.
Fan & Song’s Model has been considered in Fan & Song (2010) for high dimensional variable selection. This model is close to the independent case but has some special dependence structure. Note that although we have used the term “factor model" above to describe the dependence structure, it is not the factor model for the test statistics $Z_1,\cdots,Z_p$ directly. The covariance matrix of these test statistics is the sample correlation matrix of $X_1,\cdots,X_p$. The effectiveness of our method is examined in several aspects. We first examine the goodness of approximation in Theorem 1 by comparing the marginal distributions and variances. We then compare the accuracy of FDP estimates with other methods. Finally, we demonstrate the improvement of the power with dependence adjustment.
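For concreteness, the two factor model above can be generated as in the sketch below (an illustrative data-generating step only; the placement of the nonzero coefficients in the first $p_1$ coordinates is an assumption made purely for illustration). The sample correlation matrix of the generated $X_j$'s then plays the role of ${\mbox{\boldmath $\Sigma$}}$ for the test statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 2000

# Two factor model: X_j = rho1_j * W1 + rho2_j * W2 + H_j
rho1 = rng.uniform(-1, 1, p)
rho2 = rng.uniform(-1, 1, p)
W1 = rng.standard_normal((n, 1))
W2 = rng.standard_normal((n, 1))
H = rng.standard_normal((n, p))
X = W1 * rho1 + W2 * rho2 + H            # n x p sample

Sigma = np.corrcoef(X, rowvar=False)     # sample correlation matrix of X_1, ..., X_p

# Test-statistic means mu_j = sqrt(n) * beta_j * sigma_hat_j / sigma
# (nonzero beta placed in the first p1 coordinates for illustration)
sigma, p1 = 2.0, 10
beta = np.zeros(p)
beta[:p1] = 1.0
mu = np.sqrt(n) * beta * X.std(axis=0, ddof=1) / sigma
Z = rng.multivariate_normal(mu, Sigma)   # one realization of the test statistics
```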
**Distributions of FDP and its approximation:** Without loss of generality, we consider a dependence structure based on the two factor model above. Let $n=100$, $p_1=10$ and $\sigma=2$. Let $p$ vary from 100 to 1000 and $t$ be either 0.01 or 0.005. The distributions of ${\mbox{FDP}}(t)$ and its approximate expression in Theorem 1 are plotted in Figure \[a59\]. The convergence of the distributions is self-evident. Table 2 summarizes the total variation distance between the two distributions.
(Figure \[a59\]: distributions of ${\mbox{FDP}}(t)$ and its approximation in Theorem 1 under the two factor model, for $p$ from 100 to 1000 and $t=0.01$ or $0.005$.)
$p=100$ $p=500$ $p=1000$
----------- --------- --------- ----------
$t=0.01$ 0.6668 0.1455 0.0679
$t=0.005$ 0.6906 0.2792 0.1862
: Total variation distance between the distribution of ${\mbox{FDP}}$ and its limiting distribution in Figure 1. The total variation distance is calculated with the “TotalVarDist” function (with the “smooth” option) in the R software.
**Variance of $V(t)$:** The variance of the number of false discoveries under correlated test statistics is usually large compared with that of the independent case, which is $p_0t(1-t)$. Thus the ratio of the variance of the number of false discoveries in the dependent case to that in the independent case can be considered as a measure of the correlation effect; see Owen (2005). Estimating the variance of the number of false discoveries is an interesting problem. With approximation (\[a54a\]), this can easily be computed. In Table \[e0\], we compare the true variance of the number of false discoveries, the variance of expression (\[a53\]) (which is infeasible in practice) and the variance of expression (\[a52\]) under the 6 different dependence structures. It shows that the variance computed based on expression (\[a52\]) approximately equals the variance of the number of false discoveries. Therefore, we provide a fast alternative method for estimating the variance of the number of false discoveries, in addition to the results in Owen (2005). Note that the variance for the independent case is merely less than 2. The impact of dependence is very substantial.
Dependence Structure ${\mathrm{var}}(V(t))$ ${\mathrm{var}}(V)$ ${\mathrm{var}}(V.up)$
------------------------ ------------------------ --------------------- ------------------------
Equal correlation 180.9673 178.5939 180.6155
Fan & Song’s model 5.2487 5.2032 5.2461
Independent Cauchy 9.0846 8.8182 8.9316
Three factor model 81.1915 81.9373 83.0818
Two factor model 53.9515 53.6883 54.0297
Nonlinear factor model 48.3414 48.7013 49.1645
: Comparison for variance of number of false discoveries (column 2), variance of expression (\[a53\]) (column 3) and variance of expression (\[a52\]) (column 4) with $t=0.001$ based on 10000 simulations. []{data-label="e0"}
True FDP PFA Storey B-H
------------------------ ------------- ------------- ------------- -------------
Equal correlation $6.67\%$ $6.61\%$ $2.99\%$ $3.90\%$
($15.87\%$) ($15.88\%$) ($10.53\%$) ($14.58\%$)
Fan & Song’s model $14.85\%$ $14.85\%$ $13.27\%$ $14.46\%$
($11.76\%$) ($11.58\%$) ($11.21\%$) ($13.46\%$)
Independent Cauchy $13.85\%$ $13.62\%$ $11.48\%$ $13.21\%$
($13.60\%$) ($13.15\%$) ($12.39\%$) ($15.40\%$)
Three factor model $8.08\%$ $8.29\%$ $4.00\%$ $5.46\%$
($16.31\%$) ($16.39\%$) ($11.10\%$) ($16.10\%$)
Two factor model $8.62\%$ $8.50\%$ $4.70\%$ $5.87\%$
($16.44\%$) ($16.27\%$) ($11.97\%$) ($16.55\%$)
Nonlinear factor model $6.63\%$ $6.81\%$ $3.20\%$ $4.19\%$
($15.56\%$) ($15.94\%$) ($10.91\%$) ($15.31\%$)
: Comparison of FDP values for our method based on equation (\[b25\]) without taking expectation (PFA) with Storey’s procedure and Benjamini-Hochberg’s procedure under six different dependence structures, where $p=2000$, $n=200$, $t=0.001$, and $\beta_i=1$ for $i\in\text{\{false null\}}$. The computation is based on 10000 simulations. The means of FDP are listed with the standard deviations in the brackets.[]{data-label="e1"}
**Comparing methods of estimating FDP:** Under different dependence structures, we compare FDP values using our procedure PFA in equation (\[b25\]) without taking expectation and with $p_1$ known, Storey's procedure with $p_1$ known ($(p-p_1)t/R(t)$), and the Benjamini-Hochberg procedure. Note that the Benjamini-Hochberg procedure is an FDR control procedure rather than an FDP estimating procedure. The Benjamini-Hochberg FDP is obtained by using the mean of “True FDP” in Table \[e1\] as the control rate in the B-H procedure. Table \[e1\] shows that our method performs much better than Storey's procedure and the Benjamini-Hochberg procedure, especially under strong dependence structures (rows 1, 4, 5, and 6), in terms of both the mean and the variance of the distribution of FDP. Recall that the expected value of FDP is the FDR. Table 3 also compares the FDR of the three procedures by looking at the averages. Note that the actual FDR from the B-H procedure under dependence is much smaller than the control rate, which suggests that the B-H procedure can be quite conservative under dependence.
**Comparison with Efron's Methods:** We now compare the estimated values from our method PFA (\[b21\]) and from Efron (2007)'s estimator with the true values of the false discovery proportion, under the 6 different dependence structures. Efron (2007)'s estimator was developed for estimating FDP under unknown ${\mbox{\boldmath $\Sigma$}}$. In our simulation study, we have used a known ${\mbox{\boldmath $\Sigma$}}$ for Efron's estimator for fair comparison. The results are depicted in Figure \[a60\], Figure \[c7\] and Table \[e2\]. Figure \[a60\] shows that our estimated values correctly track the trends of FDP with a smaller amount of noise. It also shows that both our estimator and Efron's estimator tend to overestimate the true FDP, since $\mathrm{FDP_A}(t)$ is an upper bound of the true $\mathrm{FDP}(t)$. They are close only when the number of false nulls $p_1$ is very small. In the current simulation setting, we choose $p_1=50$ compared with $p=1000$; therefore, it is not a very sparse case. However, even in this case, our estimator still performs very well for the six different dependence structures. Efron (2007)'s estimator is illustrated in Figure \[a60\] with his suggestions for estimating the parameters; it captures the general trend of the true FDP but with a large amount of noise. Figure \[c7\] shows that the relative errors of PFA concentrate around 0, which suggests good accuracy of our method in estimating FDP. Table \[e2\] summarizes the relative errors of the two methods.
------------------------ -------- -------- -------- --------
                          $\text{RE}_{\text{P}}$ mean   $\text{RE}_{\text{P}}$ SD   $\text{RE}_{\text{E}}$ mean   $\text{RE}_{\text{E}}$ SD
Equal correlation 0.0241 0.1262 1.4841 3.6736
Fan & Song’s model 0.0689 0.1939 1.2521 1.9632
Independent Cauchy 0.0594 0.1736 1.3066 2.1864
Three factor model 0.0421 0.1657 1.4504 2.6937
Two factor model 0.0397 0.1323 1.1227 2.0912
Nonlinear factor model 0.0433 0.1648 1.3134 4.0254
------------------------ -------- -------- -------- --------
: Means and standard deviations of the relative error between true values of FDP and estimated FDP under the six dependence structures in Figure \[a60\]. $\text{RE}_{\text{P}}$ and $\text{RE}_{\text{E}}$ are the relative errors of our PFA estimator and Efron (2007)’s estimator, respectively. RE is defined in Figure \[c7\].[]{data-label="e2"}
**Dependence-Adjusted Procedure:** We compare the dependence-adjusted procedure described in section 3.4 with the testing procedure based only on the observed test statistics without using correlation information. The latter is to compare the original z-statistics with a fixed threshold value and is labeled as “fixed threshold procedure” in Table \[e5\]. With the same FDR level, a procedure with smaller false nondiscovery rate (FNR) is more powerful, where ${\mbox{FNR}}=E[T/(p-R)]$ using the notation in Table 1.
------------------------ -------- ------- ----------- -------- ------- -----------
                          Fixed threshold procedure          Dependence-adjusted procedure
                          FDR      FNR     Threshold          FDR      FNR     Threshold
Equal correlation 17.06% 4.82% 0.06 17.34% 0.35% 0.001
Fan & Song’s model 6.69% 6.32% 0.0145 6.73% 1.20% 0.001
Independent Cauchy 7.12% 0.45% 0.019 7.12% 0.13% 0.001
Three factor model 5.46% 3.97% 0.014 5.53% 0.31% 0.001
Two factor model 5.00% 4.60% 0.012 5.05% 0.39% 0.001
Nonlinear factor model 6.42% 3.73% 0.019 6.38% 0.68% 0.001
------------------------ -------- ------- ----------- -------- ------- -----------
: Comparison of Dependence-Adjusted Procedure with Fixed Threshold Procedure under six different dependence structures, where $p=1000$, $n=100$, $\sigma=1$, $p_1=200$, nonzero $\beta_i$ simulated from $U(0,1)$ and $k=n-3$ over 1000 simulations.[]{data-label="e5"}
In Table \[e5\], without loss of generality, for each dependence structure we fix the threshold value at 0.001 and reject a hypothesis when its dependence-adjusted $p$-value (\[d1\]) is smaller than 0.001. Then we find the corresponding threshold value for the fixed threshold procedure such that the FDRs of the two testing procedures are approximately the same. The FNR for the dependence-adjusted procedure is much smaller than that of the fixed threshold procedure, which suggests that the dependence-adjusted procedure is more powerful. Note that in Table \[e5\], $p_1=200$ compared with $p=1000$, implying that the better performance of the dependence-adjusted procedure is not limited to the sparse situation. This is expected, since subtracting the common factors out gives the problem a higher signal to noise ratio.
Real Data Analysis
==================
Our proposed multiple testing procedures are now applied to genome-wide association studies, in particular expression quantitative trait locus (eQTL) mapping. It is known that the expression levels of the gene CCT8 are highly related to Down Syndrome phenotypes. In our analysis, we use genotype data on over two million SNPs and CCT8 gene expression data for 210 individuals from three different populations, testing which SNPs are associated with the variation in CCT8 expression levels. The SNP data are from the International HapMap project, which includes 45 Japanese in Tokyo, Japan (JPT), 45 Han Chinese in Beijing, China (CHB), 60 Utah residents with ancestry from northern and western Europe (CEU) and 60 Yoruba in Ibadan, Nigeria (YRI). The Japanese and Chinese populations are further grouped together to form the Asian population (JPTCHB). To save space, we omit the description of the data pre-processing procedures. Interested readers can find more details from the websites http://pngu.mgh.harvard.edu/~purcell/plink/res.shtml and ftp://ftp.sanger.ac.uk/pub/genevar/, and the paper Bradic, Fan & Wang (2010).
We further introduce two sets of dummy variables $(\mbox{\bf d}_1,\mbox{\bf d}_2)$ to recode the SNP data, where $\mbox{\bf d}_1=(d_{1,1},\cdots,d_{1,p})$ and $\mbox{\bf d}_2=(d_{2,1},\cdots,d_{2,p})$, representing three categories of polymorphisms, namely, $(d_{1,j},d_{2,j})=(0,0)$ for $\text{SNP}_j=0$ (no polymorphism), $(d_{1,j},d_{2,j})=(1,0)$ for $\text{SNP}_j=1$ (one nucleotide has polymorphism) and $(d_{1,j},d_{2,j})=(0,1)$ for $\text{SNP}_j=2$ (both nucleotides have polymorphisms). Let $\{Y^i\}_{i=1}^n$ be the independent sample random variables of $Y$, $\{d_{1,j}^i\}_{i=1}^n$ and $\{d_{2,j}^i\}_{i=1}^n$ be the sample values of $d_{1,j}$ and $d_{2,j}$ respectively. Thus, instead of using model (\[gwj1\]), we consider two marginal linear regression models between $\{Y^i\}_{i=1}^n$ and $\{d_{1,j}^i\}_{i=1}^n$: $$\label{gwj2}
\min_{\alpha_{1,j},\beta_{1,j}}\frac{1}{n}\sum_{i=1}^nE(Y^i-\alpha_{1,j}-\beta_{1,j} d_{1,j}^i)^2, \ \ \ j=1,\cdots,p$$ and between $\{Y^i\}_{i=1}^n$ and $\{d_{2,j}^i\}_{i=1}^n$: $$\label{gwj3}
\min_{\alpha_{2,j},\beta_{2,j}}\frac{1}{n}\sum_{i=1}^nE(Y^i-\alpha_{2,j}-\beta_{2,j} d_{2,j}^i)^2, \ \ \ j=1,\cdots,p.$$ For ease of notation, we denote the recoded $n \times 2p$ dimensional design matrix as ${\mbox{\bf X}}$. The missing SNP measurements are imputed as $0$ and the redundant SNP data are excluded. Finally, the logarithm transform of the raw CCT8 gene expression data is used. The details of our testing procedure are summarized as follows.
- To begin with, consider the full model $Y=\alpha+{\mbox{\bf X}}\beta + \epsilon$, where $Y$ is the CCT8 gene expression data, ${\mbox{\bf X}}$ is the $n \times 2p$ dimensional design matrix of the SNP codings and $\epsilon_i\sim N(0,\sigma^2)$, $i=1,\cdots,n$ are the independent random errors. We adopt the refitted cross-validation (RCV) (Fan, Guo & Hao 2010) technique to estimate $\sigma$ by $\widehat{\sigma}$, where LASSO is used in the first (variable selection) stage.
- Fit the marginal linear models (\[gwj2\]) and (\[gwj3\]) for each (recoded) SNP and obtain the least-squares estimate $\widehat{\beta}_j$ for $j=1,\cdots,2p$. Compute the values of $Z$-statistics using formula (\[b2\]), except that $\sigma$ is replaced by $\widehat{\sigma}$.
- Calculate the P-values based on the $Z$-statistics and compute $R(t)=\#\{P_j: P_j \leq t\}$ for a fixed threshold $t$.
- Apply eigenvalue decomposition to the population covariance matrix ${\mbox{\boldmath $\Sigma$}}$ of the $Z$-statistics. By Proposition 1, ${\mbox{\boldmath $\Sigma$}}$ is the sample correlation matrix of $(d_{1,1},d_{2,1},\cdots,$ $d_{1,p},d_{2,p})^T$. Determine an appropriate number of factors $k$ and derive the corresponding factor loading coefficients $\{b_{ih}\}_{i=1,\ h=1}^{i=2p,\ h=k}$.
- Order the $Z$-statistics by absolute value and keep the smallest $m=95\%\times2p$ of them. Apply $L_1$-regression to the equation set (\[b22\]) and obtain its solution $\widehat{W}_1,\cdots,\widehat{W}_k$. Plug them into (\[b21\]) to get the estimated $\text{FDP}(t)$. A schematic sketch of the whole procedure is given below.
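The schematic sketch below summarizes the bulleted steps in code form. It is purely illustrative: `Y`, `X` and `sigma_hat` stand for the expression vector, the recoded SNP design matrix and the RCV estimate of $\sigma$, none of which are provided here; the $Z$-statistic formula is our analogue of (\[b2\]) based on $\mu_j=\sqrt{n}\beta_j\widehat{\sigma}_j/\sigma$; and the helper functions `choose_num_factors`, `principal_factor_decomposition` and `estimate_fdp` are the hypothetical ones from the earlier sketches.

```python
import numpy as np
from scipy.stats import norm

def eqtl_fdp_pipeline(Y, X, sigma_hat, t, eps=0.01):
    """Schematic marginal-regression + PFA pipeline for the eQTL analysis."""
    n, P = X.shape                                      # P = 2p recoded SNP columns
    # Marginal least-squares slopes and Z-statistics (analogue of formula (b2))
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean()
    sd = Xc.std(axis=0, ddof=1)
    beta_hat = (Xc.T @ Yc) / (Xc ** 2).sum(axis=0)
    z = np.sqrt(n) * beta_hat * sd / sigma_hat

    # P-values, total discoveries, and PFA of the sample correlation matrix
    pvals = 2 * norm.cdf(-np.abs(z))
    R = np.sum(pvals <= t)
    Sigma = np.corrcoef(X, rowvar=False)
    k = choose_num_factors(Sigma, eps)                  # from the earlier sketch
    B, A = principal_factor_decomposition(Sigma, k)     # from the earlier sketch
    a = 1 / np.sqrt(np.clip(1 - (B ** 2).sum(axis=1), 1e-12, None))

    # Estimate the realized factors on the smallest 95% |z| and return the estimated FDP
    return R, estimate_fdp(z, B, a, t, keep_frac=0.95)  # from the earlier sketch
```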
For each intermediate step of the above procedure, the outcomes are summarized in the following figures. Figure \[a61\] illustrates the trend of the RCV-estimated standard deviation $\widehat{\sigma}$ with respect to different model sizes. Our result is similar to that in Fan, Guo & Hao (2010), in that although $\widehat{\sigma}$ is influenced by the selected model size, it is relatively stable and thus provides reasonable accuracy. The empirical distributions of the $Z$-values are presented in Figure \[a62\], together with the fitted normal density curves. As pointed out in Efron (2007, 2010), due to the existence of dependence among the $Z$-values, their densities are either narrowed or widened and are not $N(0,1)$ distributed. The histograms of the $P$-values are further provided in Figure \[a63\], giving a crude estimate of the proportion of the false nulls for each of the three populations.
\[0.40\][![$\widehat{\sigma}$ of the three populations with respect to the selected model sizes, derived by using refitted cross-validation (RCV).[]{data-label="a61"}](Figure_4 "fig:")]{}
\[0.40\][![Empirical distributions and fitted normal density curves of the $Z$-values for each of the three populations. Because of dependency, the $Z$-values are no longer $N(0,1)$ distributed. The empirical distributions, instead, are $N(0.12,1.22^2)$ for CEU, $N(0.27,1.39^2)$ for JPT and CHB, and $N(-0.04,1.66^2)$ for YRI, respectively. The density curve for CEU is closest to $N(0,1)$ and the least dispersed among the three.[]{data-label="a62"}](Figure_5 "fig:")]{}
\[0.40\][![Histograms of the $P$-values for each of the three populations. []{data-label="a63"}](Figure_6 "fig:")]{}
\[0.40\][![Number of total discoveries, estimated number of false discoveries and estimated False Discovery Proportion as functions of thresholding $t$ for CEU population (row 1), JPT and CHB (row 2) and YRI (row 3). The $x$-coordinate is $-\log t$, the minus $\log_{10}$-transformed thresholding.[]{data-label="a64"}](Figure_7_CEU "fig:")]{} \[0.40\][![Number of total discoveries, estimated number of false discoveries and estimated False Discovery Proportion as functions of thresholding $t$ for CEU population (row 1), JPT and CHB (row 2) and YRI (row 3). The $x$-coordinate is $-\log t$, the minus $\log_{10}$-transformed thresholding.[]{data-label="a64"}](Figure_7_JPTCHB "fig:")]{} \[0.40\][![Number of total discoveries, estimated number of false discoveries and estimated False Discovery Proportion as functions of thresholding $t$ for CEU population (row 1), JPT and CHB (row 2) and YRI (row 3). The $x$-coordinate is $-\log t$, the minus $\log_{10}$-transformed thresholding.[]{data-label="a64"}](Figure_7_YRI "fig:")]{}
The main results of our analysis are presented in Figure \[a64\], which depicts the number of total discoveries $R(t)$, the estimated number of false discoveries $\widehat{V}(t)$ and the estimated false discovery proportion $\widehat{\text{FDP}}(t)$ as functions of (the minus $\log_{10}$-transformed) threshold $t$ for the three populations. As can be seen, in each case both $R(t)$ and $\widehat{V}(t)$ decrease as $t$ decreases, but $\widehat{\text{FDP}}(t)$ exhibits zigzag patterns and does not always decrease along with $t$, which results from the cluster effect of the $P$-values. A closer study of the outputs further shows that for all populations, the estimated FDP has a general trend of decreasing to a limit of around $0.1$ to $0.2$, which backs up the intuition that a large proportion of the smallest $P$-values should correspond to false nulls (true discoveries) when the $Z$-statistics are very large; however, at most other threshold values, the estimated FDPs are at a high level. This is possibly due to small signal-to-noise ratios in eQTL studies.
The selected SNPs, together with the estimated FDPs, are reported in Table \[a67\]. It is worth mentioning that Deutsch et al. (2005) and Bradic, Fan & Wang (2010) also worked on the same CCT8 data to identify significant SNPs in the CEU population. Deutsch et al. (2005) performed an association analysis for each SNP using ANOVA, while Bradic, Fan & Wang (2010) proposed a penalized composite quasi-likelihood variable selection method. Their findings differ as well: the first group identified the four SNPs with the smallest P-values (exactly the same as ours), whereas the second group discovered only one SNP, rs965951, among those four, arguing that the other three SNPs make little additional contribution conditioning on the presence of rs965951. Our results for the CEU population are consistent with those of the latter group, in the sense that the false discovery rate is high in our findings and our association study is marginal rather than a joint model over several SNPs.
Population Threshold \# Discoveries Estimated FDP Selected SNPs
------------ ----------------------- ---------------- --------------- ---------------------
JPTCHB $1.61 \times 10^{-9}$ 5 $ 0.1535$ rs965951 rs2070611
rs2832159 rs8133819
rs2832160
YRI $1.14 \times 10^{-9}$ 2 $ 0.2215$ rs9985076 rs965951
CEU $6.38 \times 10^{-4}$ 4 $ 0.8099$ rs965951 rs2832159
rs8133819 rs2832160
: Information on the selected SNPs and the associated FDP for a particular threshold. Note that the density curve of the $Z$-values for the CEU population is close to $N(0,1)$, so the approximate $\widehat{\text{FDP}}(t)$ equals $pt/R(t)\approx 0.631$. Therefore our high estimated FDP is reasonable.[]{data-label="a67"}
Population Threshold \# Discoveries Estimated FDP Selected SNPs
------------ ----------------------- ---------------- --------------- -------------------------
JPTCHB $2.89 \times 10^{-4}$ 5 $ 0.1205$ rs965951 rs2070611
rs2832159 rs8133819
rs2832160
YRI $8.03 \times 10^{-5}$ 4 $ 0.2080$ rs7283791 rs11910981
rs8128844 rs965951
CEU $5.16 \times 10^{-2}$ 6 $ 0.2501$ rs464144\* rs4817271
rs2832195 rs2831528\*
rs1571671\* rs6516819\*
: Information on the selected SNPs for a particular threshold based on the dependence-adjusted procedure. The number of factors $k$ in (\[d1\]) equals 10. The estimated FDP is based on estimator (\[b21\]), applying PFA to the adjusted $Z$-values. A $*$ marks the indicator of SNP equal to 2; otherwise the selected covariate is the indicator of SNP equal to 1.[]{data-label="a68"}
Table \[a68\] lists the SNPs selected by the dependence-adjusted procedure. For JPTCHB, with a slightly smaller estimated FDP, the dependence-adjusted procedure selects the same SNPs as the fixed-threshold procedure, which suggests that these 5 SNPs are significantly associated with the variation in gene CCT8 expression levels. For YRI, rs965951 is selected by both procedures, but the dependence-adjusted procedure selects three other SNPs which do not appear in Table \[a67\]. For CEU, the selections based on the two procedures are quite different. However, since the estimated FDP for CEU is much smaller in Table \[a68\] and the signal-to-noise ratio of the test statistics is higher under the dependence-adjusted procedure, the group selected in Table \[a68\] seems more reliable.
Discussion
==========
We have proposed a new method (principal factor approximation) for high dimensional multiple testing where the test statistics have an arbitrary dependence structure. For multivariate normal test statistics with a known covariance matrix, we can express the test statistics as an approximate factor model with weakly dependent random errors by applying spectral decomposition to the covariance matrix. We then obtain an explicit expression for the false discovery proportion in large scale simultaneous tests. This result has important applications in controlling FDP and FDR. We also provide a procedure to estimate the realized FDP, which, in our simulation studies, correctly tracks the trend of FDP with a smaller amount of noise.
To take the dependence structure of the test statistics into account, we propose a dependence-adjusted procedure with a different threshold value for the magnitude of $Z_i$ in each hypothesis. This procedure has been shown in simulation studies to be more powerful than the fixed threshold procedure. An interesting research question is how to take advantage of the dependence structure so that the testing procedure is more powerful, or even optimal, under arbitrary dependence structures.
While our procedure is based on a known correlation matrix, we would expect that it can be adapted to the case of an estimated covariance matrix. The question is then how accurately the covariance matrix should be estimated so that a simple substitution procedure gives an accurate estimate of FDP.
We provide a simple method to estimate the realized principal factors. A more accurate method is probably the use of a penalized least-squares method to explore the sparsity and to estimate the realized principal factors.
Appendix
========
Lemma 1 is fundamental to our proofs of Theorem 1 and Proposition 2. The result is known in probability theory; a formal statement and proof can be found in Lyons (1988).
Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of real-valued random variables such that $E|X_n|^2\leq1$. If $|X_n|\leq1$ a.s. and $\sum_{N\geq1}\frac{1}{N}E|\frac{1}{N}\sum_{n\leq N}X_n|^2<\infty$, then $\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n\leq N}X_n=0 \ \ a.s.$.
**Proof of Proposition 2:** Note that $P_i = 2\Phi(-|Z_i|)$. Based on the expression of $(Z_1,\cdots,Z_p)^T$ in (\[b20\]), $\Big\{I(P_i\leq t|W_1,\cdots,W_k)\Big\}_{i=1}^p$ are dependent random variables. Nevertheless, we want to prove $$\label{b27}
p^{-1}\sum_{i=1}^p[I(P_i\leq t|W_1,\cdots,W_k)-P(P_i\leq t|W_1,\cdots,W_k)]\stackrel{p\rightarrow\infty}{\longrightarrow}0 \ a.s. .$$ Letting $X_i=I(P_i\leq t|W_1,\cdots,W_k)-P(P_i\leq t|W_1,\cdots,W_k)$, by Lemma 1 the conclusion (\[b27\]) is correct if we can show $${\mbox{Var}}\Big(p^{-1}\sum_{i=1}^{p}I(P_i\leq t|W_1,\cdots,W_k)\Big)=O_p(p^{-\delta}) \ \ \text{for some} \ \delta>0.$$ To begin with, note that $$\begin{aligned}
&&{\mbox{Var}}\Big(p^{-1}\sum_{i=1}^{p}I(P_i\leq t|W_1,\cdots,W_k)\Big)\\
&=&p^{-2}\sum_{i=1}^{p}{\mbox{Var}}\Big(I(P_i\leq t|W_1,\cdots,W_k)\Big)\\
&&+2p^{-2}\sum_{1\leq i<j\leq p}{\mbox{Cov}}\Big(I(P_i\leq t|W_1,\cdots,W_k),I(P_j\leq t|W_1,\cdots,W_k)\Big).\end{aligned}$$ Since ${\mbox{Var}}\big(I(P_i\leq t|W_1,\cdots,W_k)\big)\leq\frac{1}{4}$, the first term in the right-hand side of the last equation is $O_p(p^{-1})$. For the second term, the covariance is given by $$\begin{aligned}
&&P(P_i\leq t,P_j\leq t|W_1,\cdots,W_k)-P(P_i\leq t|W_1,\cdots,W_k)P(P_j\leq t|W_1,\cdots,W_k)\\
&=&P(|Z_i|<-\Phi^{-1}(t/2),|Z_j|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)\\
&&-P(|Z_i|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)P(|Z_j|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)\end{aligned}$$ To simplify the notation, let $\rho_{ij}^k$ be the correlation between $K_i$ and $K_j$. Without loss of generality, we assume $\rho_{ij}^k>0$ (for $\rho_{ij}^k<0$, the calculation is similar). Denote by $$c_{1,i}= a_i(-z_{t/2}-\eta_i-\mu_i), \ \ \ c_{2,i}= a_i(z_{t/2}-\eta_i-\mu_i).$$ Then, from the joint normality, it can be shown that $$\begin{aligned}
\label{e3}
&&P(|Z_i|<-\Phi^{-1}(t/2),|Z_j|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)\nonumber\\
&=&P(c_{2,i}/a_i<K_i<c_{1,i}/a_i, c_{2,j}/a_j<K_j<c_{1,j}/a_j)\nonumber\\
&=&\int_{-\infty}^{\infty}\Big[\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{1,i}}{(1-\rho_{ij}^k)^{1/2}}\Big)-\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{2,i}}{(1-\rho_{ij}^k)^{1/2}}\Big)\Big]\\
&&\quad\quad\times\Big[\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{1,j}}{(1-\rho_{ij}^k)^{1/2}}\Big)-\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{2,j}}{(1-\rho_{ij}^k)^{1/2}}\Big)\Big]\phi(z)dz.\nonumber\end{aligned}$$
Next we will use Taylor expansion to analyze the joint probability further. We have shown that $(K_1,\cdots,K_p)^T\sim N(0,{\mbox{\bf A}})$ are weakly dependent random variables. Let $cov_{ij}^k$ denote the covariance of $K_i$ and $K_j$, which is the $(i,j)$th element of the covariance matrix ${\mbox{\bf A}}$. We also let $b_{ij}^k=(1-\sum_{h=1}^kb_{ih}^2)^{1/2}(1-\sum_{h=1}^kb_{jh}^2)^{1/2}$. By the Hölder inequality, $$\begin{aligned}
p^{-2}\sum_{i,j=1}^p|cov_{ij}^k|^{1/2}\leq p^{-1/2}(\sum_{i,j=1}^p|cov_{ij}^k|^2)^{1/4}=\Big[p^{-2}\sum_{i=k+1}^{p}\lambda_i^2\Big]^{1/4}\rightarrow0\end{aligned}$$ as $p\rightarrow\infty$. For each $\Phi(\cdot)$, we apply a Taylor expansion with respect to $(cov_{ij}^k)^{1/2}$, $$\begin{aligned}
\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{1,i}}{(1-\rho_{ij}^k)^{1/2}}\Big)&=&\Phi\Big(\frac{(cov_{ij}^k)^{1/2}z+(b_{ij}^k)^{1/2}c_{1,i}}{(b_{ij}^k-cov_{ij}^k)^{1/2}}\Big)\\
&=&\Phi(c_{1,i})+\phi(c_{1,i})(b_{ij}^k)^{-1/2}z(cov_{ij}^k)^{1/2}\\
&&\quad\quad\quad+\frac{1}{2}\phi(c_{1,i})c_{1,i}(b_{ij}^k)^{-1}(1-z^2)cov_{ij}^k+R(cov_{ij}^k).\end{aligned}$$ where $R(cov_{ij}^k)$ is the Lagrange remainder term in the Taylor expansion, and $R(cov_{ij}^k)=f(z)O(|cov_{ij}^k|^{3/2})$, in which $f(z)$ is a polynomial in $z$ of degree at most 6.
Therefore, (\[e3\]) equals $$\begin{aligned}
&&\Big[\Phi(c_{1,i})-\Phi(c_{2,i})\Big]\Big[\Phi(c_{1,j})-\Phi(c_{2,j})\Big]\\
&&\quad\quad+\Big(\phi(c_{1,i})-\phi(c_{2,i})\Big)\Big(\phi(c_{1,j})-\phi(c_{2,j})\Big)(b_{ij}^k)^{-1}cov_{ij}^k+O(|cov_{ij}^k|^{3/2}),\end{aligned}$$ where we have used the fact that $\int_{-\infty}^{\infty} z\phi(z)dz=0$, $\int_{-\infty}^{\infty} (1-z^2)\phi(z)dz=0$, and that the moments of the standard normal distribution are finite. Now since $P(|Z_i|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)=\Phi(c_{1,i})-\Phi(c_{2,i})$, we have $$\begin{aligned}
&&{\mbox{Cov}}\Big(I(P_i\leq t|W_1,\cdots,W_k),I(P_j\leq t|W_1,\cdots,W_k)\Big)\\
&=&\Big(\phi(c_{1,i})-\phi(c_{2,i})\Big)\Big(\phi(c_{1,j})-\phi(c_{2,j})\Big)a_ia_jcov_{ij}^k+O(|cov_{ij}^k|^{3/2}).\end{aligned}$$ In the last line, $\big(\phi(c_{1,i})-\phi(c_{2,i})\big)\big(\phi(c_{1,j})-\phi(c_{2,j})\big)a_ia_j$ is bounded by some constant except on a countable collection of measure zero sets. Let $C_i$ be defined as the set $\{z_{t/2}+\eta_i+\mu_i=0\}\cup\{z_{t/2}-\eta_i-\mu_i=0\}$. On the set $C_i^c$, $\big(\phi(c_{1,i})-\phi(c_{2,i})\big)a_i$ converges to zero as $a_i\rightarrow\infty$. Therefore, $\big(\phi(c_{1,i})-\phi(c_{2,i})\big)\big(\phi(c_{1,j})-\phi(c_{2,j})\big)a_ia_j$ is bounded by some constant on $(\bigcup_{i=1}^pC_i)^c$.
By the Cauchy-Schwartz inequality and $(C0)$ in Theorem 1, $p^{-2}\sum_{i,j}|cov_{i,j}^k|=O(p^{-\delta})$. Also we have $|cov_{ij}^k|^{3/2}<|cov_{ij}^k|$. On the set $(\bigcup_{i=1}^pC_i)^c$, we conclude that $${\mbox{Var}}\Big(p^{-1}\sum_{i=1}^pI(P_i\leq t|W_1,\cdots,W_k)\Big)=O_p(p^{-\delta}).$$ Hence by Lemma 1, for fixed $(w_1,\cdots,w_k)^T$, $$\label{g1}
p^{-1}\sum_{i=1}^p\big\{I(P_i\leq t|W_1=w_1,\cdots,W_k=w_k)
-P(P_i\leq t|W_1=w_1,\cdots,W_k=w_k)\big\}\stackrel{p\to\infty}{\longrightarrow}0\ \text{a.s.}.$$ If we define the probability space on which $(W_1,\cdots,W_k)$ and $(K_1,\cdots,K_p)$ are constructed as in (10) to be $(\Omega, \mathcal{F},\nu)$, with $\mathcal{F}$ and $\nu$ the associated $\sigma-$algebra and (Lebesgue) measure, then in a more formal way, $(\ref{g1})$ is equivalent to $$p^{-1}\sum_{i=1}^p\big\{I(P_i(\omega)\leq t|W_1=w_1,\cdots,W_k=w_k)
-P(P_i\leq t|W_1=w_1,\cdots,W_k=w_k)\big\}\stackrel{p\to\infty}{\longrightarrow}0$$ for each fixed $(w_1,\cdots,w_k)^T$ and almost every $\omega\in\Omega$, leading further to $$p^{-1}\sum_{i=1}^p\big\{I(P_i(\omega)\leq t)
-P(P_i\leq t|W_1(\omega),\cdots,W_k(\omega))\big\}\stackrel{p\to\infty}{\longrightarrow}0$$ for almost every $\omega\in\Omega$, which is the definition for $$p^{-1}\sum_{i=1}^p\big\{I(P_i\leq t)
-P(P_i\leq t|W_1,\cdots,W_k)\big\}\stackrel{p\to\infty}{\longrightarrow}0\ \text{a.s.}.$$ Therefore, $$\lim_{p\to\infty}p^{-1}\sum_{i=1}^p\Big\{I(P_i\leq t)
-\big [\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\big ]\Big\}=0\ \text{a.s.}.$$ With the same argument we can show $$\lim_{p\to\infty}p_0^{-1}\Big\{V(t)-\sum_{i\in\{\text{true null}\}}
\big [\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\big ]\Big\}=0\ \text{a.s.}$$ for the high dimensional sparse case. The proof of Proposition 2 is now complete.\
**Proof of Theorem 1:**\
For ease of notation, denote $\sum_{i=1}^p\big [\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\big ]$ as $\tilde R(t)$ and $\sum_{i\in\{\text{true null}\}}\big [\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\big ]$ as $\tilde V(t)$, then
$$\begin{array}{rl}
&\displaystyle\lim_{p\to\infty}\Big \{\text{FDP}(t)
-\displaystyle\frac{\sum_{i\in\{\text{true null}\}}\big [\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\big ]}
{\sum_{i=1}^p\big [\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\big ]}\Big\} \\
= &\displaystyle\lim_{p\to\infty}\Big\{\displaystyle\frac{V(t)}{R(t)}-\displaystyle\frac{\tilde V(t)}{\tilde R(t)}\Big\} \\
= &\displaystyle\lim_{p\to\infty}\displaystyle\frac{(V(t)/p_0)[(\tilde R(t)-R(t))/p]+(R(t)/p)[(V(t)-\tilde V(t))/p_0]}
{R(t)\tilde R(t)/(p_0p)}\\
= &0\ \text{a.s.}
\end{array}$$
by the results in Proposition 2 and the fact that both $p_0^{-1}V(t)$ and $p^{-1}R(t)$ are bounded random variables. The proof of Theorem 1 is complete.
**Proof of Theorem 2:** Letting $$\begin{aligned}
\Delta_1&=&\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+{\mbox{\bf b}}_i^T{\widehat {\mbox{\bf w}}}))-\Phi(a_i(z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}))\Big]\quad\quad\text{and}\\
\Delta_2&=&\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}-{\mbox{\bf b}}_i^T{\widehat {\mbox{\bf w}}}))-\Phi(a_i(z_{t/2}-{\mbox{\bf b}}_i^T{\mbox{\bf w}}))\Big],\end{aligned}$$ we have $$\widehat{{\mbox{FDP}}}(t)-{\mbox{FDP}}_A(t)=(\Delta_1+\Delta_2)/R(t).$$ Consider $\Delta_1=\sum_{i=1}^p\Delta_{1i}$. By the mean value theorem, there exists $\xi_i$ between ${\mbox{\bf b}}_i^T{\widehat {\mbox{\bf w}}}$ and ${\mbox{\bf b}}_i^T{\mbox{\bf w}}$ such that $\Delta_{1i}=\phi(a_i(z_{t/2}+\xi_i))a_i{\mbox{\bf b}}_i^T({\widehat {\mbox{\bf w}}}-{\mbox{\bf w}})$, where $\phi(\cdot)$ is the standard normal density function.
Next we will show that $\phi(a_i(z_{t/2}+\xi_i))a_i$ is bounded by a constant. Without loss of generality, we consider the case in (C2) where $z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}<-\tau$. By (C3), we can choose $p$ sufficiently large that $z_{t/2}+\xi_i<-\tau/2$. The function $g(a)=a\exp(-a^2x^2/8)$ is maximized at $a=2/x$. Therefore, $$\sqrt{2\pi}\phi(a_i(z_{t/2}+\xi_i))a_i<a_i\exp(-a_i^2\tau^2/8)\leq2\exp(-1/2)/\tau.$$ For $z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}>\tau$ we obtain the same result. In both cases, there is a constant $D$ such that $\phi(a_i(z_{t/2}+\xi_i))a_i\leq D$.
By the Cauchy-Schwarz inequality, we have $\sum_{i=1}^p|b_{ih}|\leq(p\sum_{i=1}^pb_{ih}^2)^{1/2}=(p\lambda_h)^{1/2}$. Therefore, by the Cauchy-Schwarz inequality and the fact that $\sum_{h=1}^k\lambda_h<p$, we have $$\begin{aligned}
|\Delta_{1}|&\leq&D\sum_{i=1}^p\Big[\sum_{h=1}^k|b_{ih}||\widehat{w}_h-w_h|\Big]\\
&\leq&D\sum_{h=1}^k(p\lambda_h)^{1/2}|\widehat{w}_h-w_h|\\
&\leq&D\sqrt{p}\Big(\sum_{h=1}^k\lambda_h\sum_{h=1}^k(\widehat{w}_h-w_h)^2\Big)^{1/2}\\
&<&Dp\|{\widehat {\mbox{\bf w}}}-{\mbox{\bf w}}\|_2.\end{aligned}$$ By (C1) in Theorem 2, $R(t)/p>H$ for $H>0$ when $p\rightarrow\infty$. Therefore, $|\Delta_{1}/R(t)|=O(\|\widehat{{\mbox{\bf w}}}-{\mbox{\bf w}}\|_2)$. For $\Delta_{2}$, the result is the same. The proof of Theorem 2 is now complete.
**Proof of Theorem 3:** Without loss of generality, we assume that the true value of ${\mbox{\bf w}}$ is zero, and we need to prove $\|{\widehat {\mbox{\bf w}}}\|_2=O_p(\sqrt{\frac{k}{m}})$. Let $L: R^k\rightarrow R^k$ be defined by $$L_j({\mbox{\bf w}})=m^{-1}\sum_{i=1}^m b_{ij}sgn(K_i-{\mbox{\bf b}}_i^T{\mbox{\bf w}})$$ where $sgn(x)$ is the sign function of $x$ and equals zero when $x=0$. Then we want to prove that there is a root ${\widehat {\mbox{\bf w}}}$ of the equation $L({\mbox{\bf w}})=0$ satisfying $\|{\widehat {\mbox{\bf w}}}\|_2^2=O_p(k/m)$. By classical convexity argument, it suffices to show that with high probability, ${\mbox{\bf w}}^TL({\mbox{\bf w}})<0$ with $\|{\mbox{\bf w}}\|_2^2=Bk/m$ for a sufficiently large constant $B$.
Let $V={\mbox{\bf w}}^TL({\mbox{\bf w}})=m^{-1}\sum_{i=1}^mV_i$, where $V_i=({\mbox{\bf b}}_i^T{\mbox{\bf w}})sgn(K_i-{\mbox{\bf b}}_i^T{\mbox{\bf w}})$. By Chebyshev’s inequality, $P(V<E(V)+h\times {\mbox{SD}}(V))>1-h^{-2}$. Therefore, to prove the result in Theorem 3, we want to derive the upper bounds for $E(V)$ and ${\mbox{SD}}(V)$ and show that $\forall h>0$, $\exists B$ and $M$ s.t. $\forall m>M$, $P(V<0)>1-h^{-2}$.
We will first present a result from Polya (1945), which will be very useful for our proof. For $x>0$, $$\label{a70}
\Phi(x)=\frac{1}{2}\Big[1+\sqrt{1-\exp(-\frac{2}{\pi}x^2)}\Big](1+\delta(x)) \ \ \ \text{with} \ \ \sup_{x>0}|\delta(x)|<0.004.$$ The variance of $V$ is shown as follows: $${\mbox{Var}}(V)=m^{-2}\sum_{i=1}^m{\mbox{Var}}(V_i)+m^{-2}\sum_{i\neq j}{\mbox{Cov}}(V_i,V_j).$$ Write ${\mbox{\bf w}}=s{\mbox{\bf u}}$ with $\|{\mbox{\bf u}}\|_2=1$ where $s=(Bk/m)^{1/2}$. By (C5), (C6) and (C7) in Theorem 3, for sufficiently large $m$, $$\begin{aligned}
\label{d1}
\sum_{i=1}^m{\mbox{Var}}(V_i)&=&\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d){\mbox{Var}}(V_i)+\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d){\mbox{Var}}(V_i)\nonumber \\
&=&\Big[\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d){\mbox{Var}}(V_i)\Big](1+o(1)),\end{aligned}$$ and $$\begin{aligned}
\label{d2}
\sum_{i\neq j}{\mbox{Cov}}(V_i,V_j)&=&\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|\leq d){\mbox{Cov}}(V_i,V_j)\nonumber \\
&&+2\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)\nonumber\\
&&+\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)\nonumber\\
&=&\Big[\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)\Big](1+o(1)).\end{aligned}$$ We will prove (\[d1\]) and (\[d2\]) in detail at the end of the proof of Theorem 3.
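Before the approximation (\[a70\]) is used repeatedly below, here is a quick numerical check of it (our own verification; the grid of $x$ values is arbitrary).

```python
# Numerical check of Polya's approximation (a70):
# Phi(x) = 0.5 * (1 + sqrt(1 - exp(-2 x^2 / pi))) * (1 + delta(x)),  sup_{x>0} |delta(x)| < 0.004.
import numpy as np
from scipy.stats import norm

x = np.linspace(1e-4, 8.0, 4000)
approx = 0.5 * (1.0 + np.sqrt(1.0 - np.exp(-2.0 * x**2 / np.pi)))
delta = norm.cdf(x) / approx - 1.0
print("max |delta(x)| on the grid:", np.abs(delta).max())   # stays below the 0.004 bound
```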
For each pair of $V_i$ and $V_j$, it is easy to show that $${\mbox{Cov}}(V_i,V_j)=4({\mbox{\bf b}}_i^T{\mbox{\bf w}})({\mbox{\bf b}}_j^T{\mbox{\bf w}})\Big[P(K_i<{\mbox{\bf b}}_i^T{\mbox{\bf w}},K_j<{\mbox{\bf b}}_j^T{\mbox{\bf w}})-\Phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\Phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})\Big].$$ The above formula includes ${\mbox{Var}}(V_i)$ as a special case.
By Polya’s approximation (\[a70\]), $$\label{d3}
{\mbox{Var}}(V_i)=({\mbox{\bf b}}_i^T{\mbox{\bf w}})^2\exp\Big\{-\frac{2}{\pi}(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})^2\Big\}(1+\delta_i) \ \ \text{with} \ |\delta_i|<0.004.$$ Hence $$\begin{aligned}
\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d){\mbox{Var}}(V_i)&\leq&\sum_{i=1}^ms^2\exp\Big\{-\frac{2}{\pi}(a_ids)^2\Big\}(1+\delta_i)\\
&\leq&2ms^2\exp\Big\{-\frac{2}{\pi}(a_{\min}ds)^2\Big\}.\end{aligned}$$ To compute ${\mbox{Cov}}(V_i,V_j)$, we have $$\begin{aligned}
&&P(K_i<{\mbox{\bf b}}_i^T{\mbox{\bf w}},K_j<{\mbox{\bf b}}_j^T{\mbox{\bf w}})\\
&=&\int_{-\infty}^{\infty}\Phi\Big(\frac{(|\rho_{ij}^k|)^{1/2}z+a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}}}{(1-|\rho_{ij}^k|)^{1/2}}\Big)\Phi\Big(\frac{\delta_{ij}^k(|\rho_{ij}^k|)^{1/2}z+a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}}}{(1-|\rho_{ij}^k|)^{1/2}}\Big)\phi(z)dz\\
&=&\Phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\Phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})+\phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})a_ia_jcov_{ij}^k(1+o(1)),\end{aligned}$$ where $\delta_{ij}^k=1$ if $\rho_{ij}^k\geq0$ and $-1$ otherwise. Therefore, $$\label{d4}
{\mbox{Cov}}(V_i,V_j)= 4({\mbox{\bf b}}_i^T{\mbox{\bf w}})({\mbox{\bf b}}_j^T{\mbox{\bf w}})\phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})a_ia_jcov_{ij}^k(1+o(1)),$$ and $$\begin{aligned}
&&|\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)|\\
&<&\sum_{i\neq j}s^2\exp\Big\{-(a_{\min}ds)^2\Big\}a_{\max}^2|cov_{ij}^k|(1+o(1)).\end{aligned}$$ Consequently, we have $${\mbox{Var}}(V)<\frac{2}{m}s^2\exp\Big\{-\frac{2}{\pi}(a_{\min}ds)^2\Big\}a_{\max}^2\Big[\frac{1}{m}\sum_i\sum_j|cov_{ij}^k|\Big].$$ We apply (C4) in Theorem 3 and the Cauchy-Schwarz inequality to get $\frac{1}{m}\sum_i\sum_j|cov_{ij}^k|\leq(\sum_{i=k+1}^p\lambda_i^2)^{1/2}$ $\leq\eta^{1/2}$, and conclude that the standard deviation of $V$ is bounded by $$\sqrt{2}sm^{-1/2}\exp\Big\{-\frac{1}{\pi}(a_{\min}ds)^2\Big\}a_{\max}(\eta)^{1/4}.$$ In the derivations above, we used the fact that ${\mbox{\bf b}}_i^T{\mbox{\bf u}}\leq\|{\mbox{\bf b}}_i\|_2<1$, and that the covariance matrix of the $K_i$ in (21) of the paper is a submatrix of the covariance matrix of the $K_i$ in (10).
Next we will show that $E(V)$ is bounded from above by a negative constant. Using $x(\Phi(x)-\frac{1}{2})\geq0$, we have $$\begin{aligned}
-E(V)&=&\frac{2}{m}\sum_{i=1}^m{\mbox{\bf b}}_i^T{\mbox{\bf w}}\Big[\Phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})-\frac{1}{2}\Big]\\
&\geq&\frac{2ds}{m}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)\Big[\Phi(a_ids)-\frac{1}{2}\Big]\\
&=&\frac{2ds}{m}\sum_{i=1}^m\Big[\Phi(a_ids)-\frac{1}{2}\Big]-\frac{2ds}{m}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)\Big[\Phi(a_ids)-\frac{1}{2}\Big].\end{aligned}$$ By (C5) in Theorem 3, $\frac{1}{m}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)\rightarrow0$, so for sufficiently large $m$, we have $$-E(V)\geq \frac{ds}{m}\sum_{i=1}^m\Big[\Phi(a_ids)-\frac{1}{2}\Big].$$ An application of (\[a70\]) to the right hand side of the last line leads to $$\begin{aligned}
-E(V)\geq \frac{ds}{m}\sum_{i=1}^m\frac{1}{2}\sqrt{1-\exp\big\{-\frac{2}{\pi}(a_{\min}ds)^2\big\}}.\end{aligned}$$ Note that $$1-\exp(-\frac{2}{\pi}x^2)=\frac{2}{\pi}x^2\sum_{l=0}^{\infty}\frac{1}{(l+1)!}(-\frac{2}{\pi}x^2)^l>\frac{2}{\pi}x^2\sum_{l=0}^{\infty}\frac{1}{l!}(-\frac{1}{\pi}x^2)^l=\frac{2}{\pi}x^2\exp(-\frac{1}{\pi}x^2),$$ so we have $$-E(V)\geq \frac{d^2}{2}s^2\sqrt{\frac{2}{\pi}}a_{\min}\exp\Big\{-\frac{1}{2\pi}(a_{\min}ds)^2\Big\}.$$ To show that $\forall h>0$, $\exists B$ and $M$ s.t. $\forall m>M$, $P(V<0)>1-h^{-2}$, by Chebyshev’s inequality and the upper bounds derived above, it is sufficient to show that $$\frac{d^2}{2}s^2\sqrt{\frac{1}{\pi}}a_{\min}\exp\Big\{-\frac{1}{2\pi}(a_{\min}ds)^2\Big\}>hsm^{-1/2}\exp\Big\{-\frac{1}{\pi}(a_{\min}ds)^2\Big\}a_{\max}\eta^{1/4}.$$ Recall $s=(Bk/m)^{1/2}$, after some algebra, this is equivalent to show $$d^2(Bk)^{1/2}(\pi)^{-1/2}\exp\Big\{\frac{1}{2\pi}(a_{\min}ds)^2\Big\}>2h\eta^{1/4}\frac{a_{\max}}{a_{\min}}.$$ By (C6), then for all $h>0$, when $B$ satisfies $d^2(Bk)^{1/2}(\pi)^{-1/2}>2h\eta^{1/4}S$, we have $P(V<0)>1-h^{-2}$. Note that $k=O(m^{\kappa})$ and $\eta=O(m^{2\kappa})$, so $k^{-1/2}\eta^{1/4}=O(1)$. To complete the proof of Theorem 3, we only need to show that (\[d1\]) and (\[d2\]) are correct.
To prove (\[d1\]), by (\[d3\]) we have $$\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d){\mbox{Var}}(V_i)\leq\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)s^2d^2,$$ and $$\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}| > d){\mbox{Var}}(V_i)\geq\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}| > d)s^2d^2\exp\Big\{-\frac{2}{\pi}a_{\max}^2s^2\Big\}.$$ Recall $s=(Bk/m)^{1/2}$, then by (C6) and (C7), $\exp\Big\{\frac{2}{\pi}a_{\max}^2s^2\Big\}=O(1)$. Therefore, by (C5) we have $$\frac{\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d){\mbox{Var}}(V_i)}{\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}| > d){\mbox{Var}}(V_i)}\rightarrow0 \ \ \text{as} \ \ m\rightarrow\infty,$$ so (\[d1\]) is correct. With the same argument and by (\[d4\]), we can show that (\[d2\]) is also correct. The proof of Theorem 3 is now complete.
**Proof of Theorem 4:** Note that $\|\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}-\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}^*\|_2=\|({\mbox{\bf X}}^T{\mbox{\bf X}})^{-1}{\mbox{\bf X}}^T{\mbox{\boldmath $\mu$}}\|_2$. By the definition of ${\mbox{\bf X}}$, we have ${\mbox{\bf X}}^T{\mbox{\bf X}}=\Lambda$, where $\Lambda={\mathrm{diag}}(\lambda_1,\cdots,\lambda_k)$. Therefore, by the Cauchy-Schwarz inequality, $$\|\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}-\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}^*\|_2=\Big[\sum_{i=1}^k\big(\frac{\sqrt{\lambda_i}{\mbox{\boldmath $\gamma$}}_i^T{\mbox{\boldmath $\mu$}}}{\lambda_i}\big)^2\Big]^{1/2}\leq\|{\mbox{\boldmath $\mu$}}\|_2\Big(\sum_{i=1}^k\frac{1}{\lambda_i}\Big)^{1/2}.$$ The proof is complete.
[99]{}
Barras, L., Scaillet, O. and Wermers, R. (2010). False Discoveries in Mutual Fund Performance: Measuring Luck in Estimated Alphas. [*Journal of Finance*]{}, [**65**]{}, 179-216.
Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. [*Journal of the Royal Statistical Society, Series B*]{}, [**57**]{}, 289-300.
Benjamini, Y. and Yekutieli, D. (2001). The Control of the False Discovery Rate in Multiple Testing Under Dependency. [*The Annals of Statistics*]{}, [**29**]{}, 1165-1188.
Bradic, J., Fan, J. and Wang, W. (2010). Penalized Composite Quasi-Likelihood For Ultrahigh-Dimensional Variable Selection. [*Journal of the Royal Statistical Society, Series B*]{}, [**73(3)**]{}, 325-349.
Clarke, S. and Hall, P. (2009). Robustness of Multiple Testing Procedures Against Dependence. [*The Annals of Statistics*]{}, [**37**]{}, 332-358.
Delattre, S. and Roquain, E. (2011). On the False Discovery Proportion Convergence under Gaussian Equi-Correlation. [*Statistics and Probability Letters*]{}, [**81**]{}, 111-115.
Deutsch, S., Lyle, R., Dermitzakis, E.T., Attar, H., Subrahmanyan, L., Gehrig, C., Parand, L., Gagnebin, M., Rougemont, J., Jongeneel, C.V. and Antonarakis, S.E. (2005). Gene expression variation and expression quantitative trait mapping of human chromosome 21 genes. [*Human Molecular Genetics*]{}, [**14**]{}, 3741-3749.
Efron, B. (2007). Correlation and Large-Scale Simultaneous Significance Testing. [*Journal of the American Statistical Association*]{}, [**102**]{}, 93-103.
Efron, B. (2010). Correlated Z-Values and the Accuracy of Large-Scale Statistical Estimates. [*Journal of the American Statistical Association*]{}, [**105**]{}, 1042-1055.
Fan, J., Guo, S. and Hao, N. (2010). Variance Estimation Using Refitted Cross-Validation in Ultrahigh Dimensional Regression. [*Journal of the Royal Statistical Society: Series B*]{} to appear.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. [*Journal of American Statistical Association*]{}, [**96**]{}, 1348-1360.
Fan, J. and Song, R. (2010). Sure Independence Screening in Generalized Linear Models with NP-Dimensionality. [*Annals of Statistics*]{}, [**38**]{}, 3567-3604.
Ferreira, J. and Zwinderman, A. (2006). On the Benjamini-Hochberg Method. [*Annals of Statistics*]{}, [**34**]{}, 1827-1849.
Farcomeni, A. (2007). Some Results on the Control of the False Discovery Rate Under Dependence. [*Scandinavian Journal of Statistics*]{}, [**34**]{}, 275-297.
Friguet, C., Kloareg, M. and Causeur, D. (2009). A Factor Model Approach to Multiple Testing Under Dependence. [*Journal of the American Statistical Association*]{}, [**104**]{}, 1406-1415.
Finner, H., Dickhaus, T. and Roters, M. (2007). Dependency and False Discovery Rate: Asymptotics. [*Annals of Statistics*]{}, [**35**]{}, 1432-1455.
Genovese, C. and Wasserman, L. (2004). A Stochastic Process Approach to False Discovery Control. [*Annals of Statistics*]{}, [**32**]{}, 1035-1061.
Leek, J.T. and Storey, J.D. (2008). A General Framework for Multiple Testing Dependence. [*PNAS*]{}, [**105**]{}, 18718-18723.
Lyons, R. (1988). Strong Laws of Large Numbers for Weakly Correlated Random Variables. [*The Michigan Mathematical Journal*]{}, [**35**]{}, 353-359.
Meinshausen, N. (2006). False Discovery Control for Multiple Tests of Association under General Dependence. [*Scandinavian Journal of Statistics*]{}, [**33(2)**]{}, 227-237.
Owen, A.B. (2005). Variance of the Number of False Discoveries. [*Journal of the Royal Statistical Society, Series B*]{}, [**67**]{}, 411-426.
Polya, G. (1945). Remarks on computing the probability integral in one and two dimensions. [*Proceedings of the first Berkeley symposium on mathematical statistics and probability*]{}, 63-78.
Portnoy, S. (1984a). Tightness of the sequence of c.d.f. processes defined from regression fractiles. [*Robust and Nonlinear Time Series Analysis*]{}. Springer-Verlag, New York, 231-246.
Portnoy, S. (1984b). Asymptotic behavior of M-estimators of $p$ regression parameters when $p^2/n$ is large; I. Consistency. [*Annals of Statistics*]{}, [**12**]{}, 1298-1309.
Roquain, E. and Villers, F. (2011). Exact Calculations For False Discovery Proportion With Application To Least Favorable Configurations. [*Annals of Statistics*]{}, [**39**]{}, 584-612.
Sarkar, S. (2002). Some Results on False Discovery Rate in Stepwise Multiple Testing Procedures. [*Annals of Statistics*]{}, [**30**]{}, 239-257.
Storey, J.D. (2002). A Direct Approach to False Discovery Rates. [*Journal of the Royal Statistical Society, Series B*]{}, [**64**]{}, 479-498.
Storey, J.D., Taylor, J.E. and Siegmund, D. (2004). Strong Control, Conservative Point Estimation and Simultaneous Conservative Consistency of False Discovery Rates: A Unified Approach. [*Journal of the Royal Statistical Society, Series B*]{}, [**66**]{}, 187-205.
Sun, W. and Cai, T. (2009). Large-scale multiple testing under dependency. [*Journal of the Royal Statistical Society, Series B*]{}, [**71**]{}, 393-424.
[^1]: Jianqing Fan is Frederick L. Moore’18 professor, Department of Operations Research & Financial Engineering, Princeton University, Princeton, NJ 08544, USA and honorary professor, School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, China (Email: jqfan@princeton.edu). Xu Han is assistant professor, Department of Statistics, University of Florida, Florida, FL 32606 (Email: xhan@princeton.edu). Weijie Gu is graduate student, Department of Operations Research & Financial Engineering, Princeton University, Princeton, NJ 08544 (Email: wgu@princeton.edu). The paper was completed while Xu Han was a postdoctoral fellow at Princeton University. This research was partly supported by NSF Grants DMS-0704337 and DMS-0714554 and NIH Grant R01-GM072611. The authors are grateful to the editor, associate editor and referees for helpful comments.
---
abstract: 'I present a simple and robust method of quantum state reconstruction using non-ideal detectors able to distinguish only between the presence and absence of photons. Using the scheme, one is able to determine the value of the Wigner function at any given point on the phase plane using an expectation-maximization estimation technique.'
author:
- 'D. Mogilevtsev'
title: Quantum state reconstruction with binary detectors
---
The development of effective and robust methods of quantum state reconstruction is a task of crucial importance for quantum optics and quantum informatics. One needs such methods to verify the preparation of states, to analyze changes occurring in the course of the dynamics, to infer information about the processes causing such dynamics, to estimate the influence of decoherence and noise-induced errors, to improve measurement procedures, and to characterize quantum devices.
For existing schemes of quantum state reconstruction, losses are the major obstacle. In a real experiment they are unavoidable; the detectors one has to use to collect the data necessary for the reconstruction are not ideal. The presence of losses limits the possibility of reconstruction. For example, in quantum tomography, which is to date the most advanced and most successfully realized reconstruction method, the detection efficiency should exceed $50\%$ to make inference of the quantum state from the collected data possible. However, the very presence of losses can be turned to advantage and used for reconstruction purposes.
In 1998 it was predicted in the work [@moghrad98] that non-ideal binary detectors can be used for complete reconstruction of a quantum state. A detector able to distinguish only between the presence and absence of photons is also able to provide sufficient data for the reconstruction. Such a detector must be non-ideal, since an ideal binary detector measures only the probability of finding the signal in the vacuum state.
To perform the reconstruction one needs a set of probe states mixed with the signal state on a beam-splitter. Coherent states were suggested for the probe. When the probe is the vacuum, the procedure gives information sufficient for inference of the photon number distribution of the quantum state. Inference of the photon number distribution was discussed in the works [@mog98]. This scheme was implemented experimentally to realize a multichannel fiber loop detector [@loop]. Very recently it was developed further by implementing maximum-likelihood estimation realized with the help of the expectation-maximization (EM) algorithm [@em1], and demonstrated experimentally. The reconstruction procedure based on the EM algorithm was shown to be robust with respect to imperfections of the measurement procedure such as, for example, fluctuations in the values of the detector efficiencies. In contrast with the quantum tomography reconstruction scheme, such a procedure does not impose a lower limit on the detector efficiency and requires quite a modest number of measurements to achieve good reconstruction accuracy.
Here we demonstrate how to reconstruct a quantum state using sets of binary detectors with different efficiencies. Let us consider the following simple set-up: the signal state (described by the density matrix $\rho$) is mixed on a beam-splitter with the probe coherent state $|\beta\rangle$. Then the probability $p$ of having *no* counts simultaneously on the two detectors is measured (as will be seen later, it is in fact possible to use only one detector for the reconstruction). According to Mandel's formula, this probability is $$p=\langle:\exp{\{-\nu_cc^{\dagger}c-\nu_dd^{\dagger}d\}}:\rangle,
\label{p01}$$ where $\nu_c,\nu_d$ are the efficiencies of the first and second detectors; $c^{\dagger},c$ and $d^{\dagger},d$ are the creation and annihilation operators of the output modes and $::$ denotes normal ordering. For simplicity we assume here that there is no 'dark current', so that in the absence of a signal the detectors produce no clicks. Let us assume that the beam-splitter transforms the input modes $a$ and $b$ in the following way $$c=a\cos(\alpha)+b\sin(\alpha),\qquad
d=b\cos{(\alpha)}-a\sin{(\alpha)}. \label{tr1}$$ Then averaging over the probe mode $b$, from Eqs. (\[p01\]) and (\[tr1\]) one obtains $$p=e^yTr\{:\exp{\{-{\bar\nu}(a^{\dagger}+\gamma^*)(a+\gamma)\}}:\rho\},
\label{p02}$$ where $$\begin{aligned}
\nonumber {\bar\nu}=\nu_c\cos^2(\alpha)+\nu_d\sin^2(\alpha), \\
\gamma=\beta(\nu_d-\nu_c)\cos(\alpha)\sin(\alpha)/{\bar\nu},
\\ \nonumber
y=-|\beta|^2\nu_c\nu_d\sin^2(2\alpha)/{\bar\nu}. \label{coef1}\end{aligned}$$ Finally, from Eq. (\[p02\]) one obtains $$p=e^y\sum\limits_{n=0}(1-{\bar\nu})^n\langle
n|D^{\dagger}(\gamma)\rho D(\gamma)|n\rangle, \label{p03}$$ where $D(\gamma)=\exp{\{\gamma a^{\dagger}-\gamma^*a\}}$ is a coherent shift operator, and $|n\rangle$ denotes a Fock state of the signal mode $a$.
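As a simple consistency check of the structure of Eq. (\[p03\]) (our own illustration, with arbitrary numerical values): for a coherent signal state the distribution $R_n(0)=\langle n|\rho|n\rangle$ is Poissonian, and the geometric weighting $\sum_n(1-{\bar\nu})^nR_n(0)$ reduces to the familiar no-click probability $\exp(-{\bar\nu}|\alpha_s|^2)$.

```python
# Check: sum_n (1 - nu)^n e^{-|a|^2} |a|^{2n} / n!  =  exp(-nu |a|^2)   (illustrative parameters)
import math

alpha2, nu = 1.7, 0.43                     # |alpha_s|^2 and overall efficiency (arbitrary choices)
lhs = sum((1.0 - nu)**k * math.exp(-alpha2) * alpha2**k / math.factorial(k) for k in range(80))
rhs = math.exp(-nu * alpha2)
print(lhs, rhs)                            # the two values coincide
```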
The essence of the reconstruction procedure is the measurement of $p$ for different values of ${\bar\nu}$ at a fixed value of the parameter $\gamma$. Let us, for example, assume the detector efficiencies $\nu_c,\nu_d$ to be constant, with $\nu_c\neq\nu_d$ (one of the efficiencies can be set to zero, i.e., only one detector might be used in the scheme). Then for arbitrary $\gamma$ let us mix the signal state with the probe coherent state having the amplitude $$\beta_j=2{\gamma(\nu_c\cos^2(\alpha_j)+\nu_d\sin^2(\alpha_j))\over
(\nu_d-\nu_c)\sin(2\alpha_j)}
\label{beta}$$ and measure a set of probabilities $p$ for different values of the beam-splitter rotation angle $\alpha_j$. Then we have a linear positive inverse problem of finding the quantities $$\begin{aligned}
R_{n}(\gamma)=\langle n|D^{\dagger}(\gamma)\rho
D(\gamma)|n\rangle,\end{aligned}$$ which can be solved by an iterative EM algorithm similar to the one used in earlier works for the reconstruction of the diagonal elements of the signal-state density matrix. Note that $R_{n}(0)$ are the diagonal elements of the density matrix of the signal.
The EM algorithm for the suggested reconstruction scheme is as follows. We assume the signal density matrix to be truncated in the Fock-state basis to an $N\times N$ matrix, and the number of different values of $\alpha_j$ to be $M\geq N$. Then we choose an initial set of $R^{(0)}_n(\gamma)>0$, $\forall n$, and implement the following iterative procedure: $$R^{k+1}_n(\gamma)=R^{k}_n(\gamma)\sum\limits_{j=0}^{M-1}{(1-{\bar\nu}_j)^np_j^{exp}
\over f_jp_j^{(k)}},
\label{em}$$ where $p_j^{exp}$ is the experimentally measured frequency of having no clicks on both detectors for a given $\alpha_j$, and $p_j^{(k)}$ is the left-hand side of Eq. (\[p03\]) calculated using the result of the $k$-th iteration. The weights are $$\begin{aligned}
\nonumber f_j=\sum\limits_{n=0}^{N-1}(1-{\bar\nu}_j)^n.\end{aligned}$$ The procedure (\[em\]) guarantees positivity and unit sum of the reconstructed $R_n(\gamma)$.
Finally, having reconstructed quantities $R_n(\gamma)$, it is straightforward to find a value of the Wigner function at the point $\gamma$ [@wig]: $$W(\gamma^*,\gamma)={2\over\pi}\sum\limits_{n=0}^{N-1}(-1)^nR_n(\gamma).
\label{wign}$$
The scheme can be simplified even further if we set the efficiency of the second detector to zero, $\nu_d=0$. Then $p$ is simply the probability of having no clicks on a single detector. We also have $y=0$, and the parameter $\gamma$ does not depend on the efficiency: $$\gamma=-\beta \tan(\alpha).
\label{gnew}$$
Thus, one can determine $R_n(\gamma)$ by varying the efficiency $\nu_c$ and keeping $\alpha$ constant without even changing the amplitude of the probe coherent state $\beta$.
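A minimal numerical sketch of this single-detector protocol at $\gamma=0$ is given below (our own illustration: the signal state, the set of efficiencies, the truncation $N$, the noiseless data and the iteration count are all arbitrary choices). For robustness the multiplicative update uses the standard expectation-maximization normalization for linear positive inverse problems, in which the factor $\sum_j(1-\nu_j)^n$ appears outside the sum over settings; this differs slightly from the weighting $f_j$ of Eq. (\[em\]).

```python
# Minimal simulation of the single-detector protocol at gamma = 0 (illustrative values):
# here p_j = sum_n (1 - nu_j)^n R_n, with R_n the photon-number distribution of the signal.
import math
import numpy as np

N, M = 15, 25                                   # Fock cutoff and number of efficiency settings
nu = np.linspace(0.05, 0.95, M)                 # detector efficiencies nu_j (assumed known)
n = np.arange(N)

# "True" signal: coherent state with mean photon number 2 (Poissonian statistics), truncated at N
mean_n = 2.0
R_true = np.array([math.exp(-mean_n) * mean_n**k / math.factorial(k) for k in n])
R_true /= R_true.sum()

C = (1.0 - nu)[:, None] ** n[None, :]           # C[j, n] = (1 - nu_j)^n
p_exp = C @ R_true                              # no-click probabilities (noiseless data assumed)

# Multiplicative EM update in the standard form for linear positive inverse problems;
# the weighting differs slightly from Eq. (em): sum_j (1 - nu_j)^n sits outside the sum over j.
R = np.full(N, 1.0 / N)                         # uniform starting point R_n^(0)
for _ in range(3000):
    p_k = C @ R                                 # model prediction p_j^(k)
    R *= (C.T @ (p_exp / p_k)) / C.sum(axis=0)  # stays positive by construction
    R /= R.sum()                                # explicit renormalization to unit sum

print("max |R_rec - R_true| :", np.abs(R - R_true).max())
print("W(0,0) from Eq. (wign):", 2.0 / np.pi * np.sum((-1.0) ** n * R))
```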
Figure \[fig2\] illustrates the reconstruction of the Wigner function of a coherent signal state with the help of the one-detector version of the method.
Starting from the uniform distribution $R_n^{(0)}(\gamma)=1/N$, very good reconstruction results were achieved with only $10^3$ iterations of the EM algorithm and $10^4$ measurements for each point on the phase plane. As was mentioned in the work, the choice of $R_n^{(0)}(\gamma)\neq 0$ indeed does not strongly influence the convergence for any given point. However, for different points on the phase plane the rate of convergence might differ strongly. In regions of more rapid change of the Wigner function one needs more iterations and more measurements to achieve the same precision (as can be seen in Figure \[fig2\](d); here the variance is smaller near the peak of $W(\gamma,\gamma^*)$). An explanation is easily found from the formula (\[wign\]): in a region of rapid change one needs to find with high precision several comparable $R_n(\gamma)$, whereas, for example, the behavior of the Wigner function near $\gamma=\alpha$ is defined mostly by $R_0(\gamma)$. Also, the precision is influenced by the truncation number $N$. Increasing the region of the phase plane where the Wigner function is to be estimated, one also needs to increase $N$. In Figure \[fig2\](b) one can see the difference between the exact and truncated Wigner functions. As a consequence (as Figures \[fig2\](c) and (d) show), in the regions where the truncation error is significant the errors of the reconstruction procedure are also increased.
For illustration of how the total error of the reconstruction propagates, we use the average distance between values of the exact and the reconstructed Wigner functions $$\Delta W={1\over
N_p}\sum\limits_{\forall\gamma}|W_{exact}(\gamma^*,\gamma)-W(\gamma^*,\gamma)|,
\label{dwig}$$ where the summation is taken over all points on the phase plane where the estimation was made; $N_p$ stands for the number of points on the phase plane.
It can be seen in Figure \[fig3\] that for a smaller number of iterations $N_{it}$ an increase of $N_r$ leads to quick convergence while the number of measurements is still small; after that the error $\Delta W$ decreases very slowly with increasing $N_r$. An increase in the number of iterations leads to much slower convergence for small $N_r$. However, with increasing $N_r$ the accuracy then improves more rapidly; the error goes below the values achieved for smaller numbers of iterations. Generally, Figure \[fig3\] confirms an observation made in the work [@paris_teor]: for performing the reconstruction procedure it is reasonable to use a number of iterations close to the number of measurements, since for $N_{r}\gg N_{it}$ the accuracy improves very slowly, and too large an $N_{it}$ might even lead to an increase of $\Delta W$.
From a practical point of view, to perform the reconstruction it is reasonable first to estimate the photon number distribution (i.e., to find the set of $R_n(0)$). This provides a clue for estimating the region of the phase plane sufficient for the reconstruction, and also the truncation number necessary for the purpose.
It is possible to infer elements $\rho_{mn}$ of the signal state density matrix in the Fock state basis using the reconstructed Wigner function in the following way [@glaub]: $$\rho_{mn}=2\int d^2\gamma (-1)^nW(\gamma^*,\gamma)D_{mn}(2\gamma),
\label{ro1}$$ where $$\begin{aligned}
\nonumber D_{mn}(2\gamma)=\exp{\{-2|\gamma|^2\}}\sqrt{m!n!}\times
\\ \nonumber
\sum\limits_{l=0}^{min(m,n)}{(2\gamma)^{n-l}(-2\gamma^*)^{m-l}\over
l!(m-l)!(n-l)!}.\end{aligned}$$
In Figure \[fig4\] an example of the matrix elements $\rho_{mn}$ of the squeezed vacuum state $$|r\rangle=\exp\{-r(a^{\dagger2}-a^2)/2\}|0\rangle \label{sq1}$$ is demonstrated. One can see that even for a modest number of points on the phase plane ($50$ points along each axis) and measurements ($10^4$ per point) the accuracy of the reconstruction of $\rho_{mn}$ is remarkable. The truncation error does not influence these elements much, because the function $D_{mn}(2\gamma)$ is small in the regions where the truncation error strongly influences the reconstructed Wigner function. One can mention here that for the reconstruction of $\rho_{mn}$ from the quantities $R_n(\gamma)$ one does not need to find the Wigner function in the whole region required for the integral $\int d^2\gamma
W(\gamma,\gamma^*)$ to be close to unity. It is sufficient to find $R_{n}(\gamma)$ on $N$ circles on the phase plane [@wel]. However, this approach leads to a non-positive problem of inferring $\rho_{mn}$ from the set of $R_{n}(\gamma)$.
To conclude, we have suggested and discussed a simple and robust method for the reconstruction of the quantum state of light. The method requires only a binary detector, a coherent probe state and a beam-splitter. The reconstruction problem is linear and positive, and is solved with the help of the fast and efficient EM algorithm of maximum-likelihood estimation. With the help of the method, the value of the Wigner function of the signal state can be found at any point on the phase plane.
The author acknowledges financial support from the Brazilian agency CNPq and the Belarussian foundation BRRFI.
[99]{}
K. Vogel and H. Risken, Phys. Rev. A**40**, 2847 (1989).
M. Raymer, M. Beck, in ‘Quantum states estimation’, M. G. A. Paris and J. Rehacek Eds., Lect. Not. Phys. 649 (Springer, Heidelberg, 2004).
D. Mogilevtsev, Z. Hradil and J. Perina, Quantum. Semicl. Opt. **10**, 345 (1998).
D. Mogilevtsev, Opt. Comm. **156**, 307 (1998); D. Mogilevtsev, Acta Physica Slovaca **49**, 743 (1999).
J. Rehacek, Z. Hradil, O. Haderka, J. Perina Jr, M. Hamar, Phys. Rev. A **67**, 061801(R) (2003); O. Haderka, M. Hamar, J. Perina Jr, Eur. Phys. J. D **28**, 149 (2004).
A. P. Dempster, N. M. Laird, D. B. Rubin, J. R. Statist. Soc. B**39**, 1 (1977); R. A. Boyles, J. R. Statist. Soc. B**45**, 47 (1983); Y. Vardi and D. Lee, J. R. Statist. Soc. B**55**, 569 (1993).
A.R. Rossi, S. Olivares, M.G.A. Paris, Phys. Rev. A **70**, 055801 (2005); A.R. Rossi and M.G.A. Paris, Eur. Phys. J. D **32**, 223 (2005).
M. Bondani and G. Zambra, A. Andreoni, M. Gramegna, M. Genovese and G. Brida, A. Rossi and M. Paris, preprint arXiv quant-ph/0502060 (2005).
J. Rehacek, Z. Hradil, and M. Jezek, preprint arXiv quant-ph/0009093v2 (2000).
A. Royer, Phys. Rev. Lett. **52**, 1064 (1984); H.Moya-Cessa and P. L. Knight, Phys. Rev. A**48**, 2479 (1993); S. Wallentowitz and W. Vogel, Phys. Rev. A**53**, 4528 (1996).
K. E. Cahill and R. J. Glauber, Phys. Rev. **177**, 1857 (1969); *ibid.* Phys. Rev. **177**, 1882 (1969).
T. Opatrny and D.-G. Welsch, Phys. Rev. A**55**, 1462 (1997).
---
abstract: 'The use of an interferometer to perform an ultra-precise parameter estimation under noisy conditions is a challenging task. Here we discuss nearly optimal measurement schemes for a well-known, sensitive input state: squeezed vacuum and coherent light. We find that a single mode intensity measurement, while the simplest and able to beat the shot-noise limit, is outperformed by other measurement schemes in the low-power regime. However, at high powers, intensity measurement is only outperformed by a small factor. Specifically, we confirm that an optimal measurement choice under lossless conditions is the parity measurement. In addition, we also discuss the performance of several other common measurement schemes when considering photon loss, detector efficiency, phase drift, and thermal photon noise. We conclude that, with noise considerations, homodyne remains near optimal in both the low and high power regimes. Surprisingly, some of the remaining investigated measurement schemes, including the previously optimal parity measurement, do not remain even near optimal when noise is introduced.'
author:
- 'Bryan T. Gard'
- Chenglong You
- 'Devendra K. Mishra'
- Robinjeet Singh
- Hwang Lee
- 'Thomas R. Corbitt'
- 'Jonathan P. Dowling'
bibliography:
- 'bib.bib'
title: 'Nearly Optimal Measurement Schemes in a Noisy Mach-Zehnder Interferometer with Coherent and Squeezed Vacuum'
---
[^1]
Introduction
============
Typical parameter estimation with the use of interferometric schemes aims to estimate some unknown parameter which is probed with the input quantum states of light. In principle, the sensitivity of these measurements depends on the chosen input states of light, the interferometric scheme, the noise encountered and the detection scheme performed at the output. For a real-world example, perhaps the most sensitive of these types of interferometers are the large scale interferometers used as gravitational wave sensors [@Abbott2016; @GEO6001; @GEO6002; @LIGO1; @LIGO2; @JapanTAMA; @italyVirgo]. In general, if classical states of light are used, then the most sensitive measurement is limited to a classical bound, the shot-noise limit (SNL) [@lloyd2004; @dd2015; @dowling2002]. Despite the remarkable precision possible with classical states, improvements are still possible. Here we discuss nearly optimal measurements achievable when one considers input states of coherent and squeezed vacuum [@Caves1981; @Smerzi07], under many common noisy conditions and in realistic power regimes which are applicable to general interferometry.
It is of practical interest to consider the difficulty of implementing any particular measurement scheme, as every additional optical element introduces further loss into the interferometer. It has been previously shown that the parity measurement is one example of an optimal measurement for coherent and squeezed vacuum input states under lossless conditions [@Kaushik1]. It was also previously shown that a more involved detection scheme is optimal under photon loss [@onohoff09]. However, here we will discuss various common detection schemes, which are easier to implement in practice and perform nearly optimally. A lossy MZI with Fock-state inputs has also been discussed in previous works [@focknoisemzi; @focknoisemzi2].
While there are many technical challenges in using squeezed states of light, we show here that some of the measurement techniques commonly used in a classical setup are no longer near optimal. In addition, some measurements exhibit problems with effects such as phase drift and thermal photon noise. With the goal of choosing a simple, yet well-performing measurement, we investigate homodyne [@Paris97], parity measurements [@gerry2000; @gerry2003; @gerry2005; @gerry2010; @leeparity] and compare them to a standard intensity measurement. These measurements form a set that is either simple to implement or known to be optimal in the lossless case. Specifically, we confirm that, under lossless conditions, the parity measurement achieves the smallest phase variance. However, under noisy conditions, surprisingly the parity measurement suffers greatly, while the homodyne measurement continues to give a nearly optimal phase measurement. The parity measurement under losses was briefly discussed in the context of entangled coherent states by Joo [*et al*.]{} [@Spiller11]. For the lossless case, we divide our results into two regimes, the low power regime ($|\alpha|^2<500$, e.g. small scale sensors), in which different detection schemes can lead to significantly different phase variances, and the high power regime ($|\alpha|^2>10^{5}$, e.g. large scale, devoted interferometry), where all detection schemes are nearly optimal. While our scheme may hint at applications for setups like LIGO, a much more focused analysis, outside the scope of our investigation, would be required before drawing conclusions about LIGO's performance.
Method
======
The interferometer considered here is a Mach-Zehnder interferometer (MZI) [@MZI1] as shown in Fig. \[fig:MZI\] and is mathematically equivalent to a Michelson interferometer. Here, an input of a coherent state $(|\alpha\rangle)$ and squeezed vacuum $(|\chi \rangle)$ is used. With this input state, it is known that the phase sensitivity can be below the SNL, typically defined as $\Delta^2 \phi_{\textrm{SNL}}=1/N$, where $N$ is the mean number of photons entering the MZI [@Caves1981].
![A general Mach-Zehnder interferometer with coherent $|\alpha\rangle$ and squeezed vacuum $|\chi\rangle$ states as input. Beam splitters (BS) mix the two spatial modes, while mirrors (M) impart a phase shift, which can be safely neglected since it is common to both modes. A phase shift $\phi$ represents the phase difference between the two arms of the MZI, which can be due to a path length difference. Our goal is to estimate the unknown parameter $\phi$, which corresponds to the interaction of the quantum state with some process of interest.[]{data-label="fig:MZI"}](MZI_Plain.jpg){width="\columnwidth"}
Because of its close connection to the parity measurement, we shall describe our states in terms of Wigner functions. One can construct any Gaussian state's Wigner function directly from the first and second moments by way of $$W_\rho(\textbf{X})=\frac{1}{\pi^N}\frac{1}{\sqrt{\textrm{det}(\boldsymbol{\sigma})}}e^{-(\textbf{X}-\textbf{d})^\intercal\boldsymbol{\sigma}^{-1}(\textbf{X}-\textbf{d})}$$ where the covariance matrix is $\boldsymbol{\sigma}=\sigma_{ij}=\langle X_i X_j + X_j X_i\rangle -2\langle X_i\rangle \langle X_j \rangle$, the mean vector is $d_j=\langle X_j \rangle$, and $X_i, X_j$ are orthogonal phase space variables.
We use this general form for our chosen input states of a coherent state $(|\alpha\rangle)$ and squeezed vacuum $(|\chi=re^{i\delta}\rangle)$ to define our states by, $$\begin{aligned}
W_{\alpha}(x_1,p_1)&=\frac{1}{\pi} \exp \biggl( 2 |\alpha| \left(\sqrt{2} (p_1 \sin\theta+x_1 \cos\theta)-|\alpha| \right)\\
& -p_1^2-x_1^2 \biggr),\\
W_{\chi}(x_2,p_2)&=\frac{1}{\pi} \exp \biggl(\sinh (2 r) \left(2 p_2 x_2 \sin \delta+\cos\delta \left(x_2^2-p_2^2\right)\right)\\
&-\left(p_2^2+x_2^2\right) \cosh (2 r)\biggr).\end{aligned}$$ Here $\alpha,\theta$ are the coherent amplitude and phase, respectively while $r,\delta$ denote the squeezing parameter and phase. As the input state we consider is a product state, it can be written in terms of the product [@Adesso2014], $$\begin{split}
W(\textbf{X})&=W_{\alpha}(x_1,p_1)\times W_{\chi}(x_2,p_2)\\
&=&\frac{1}{\pi^2} e^{-p_1^2-(x_1-\sqrt{2}\alpha)^2}\times e^{-e^{2r} p_2^2-e^{-2r} x_2^2}.
\end{split}$$ For simplicity, both states are taken to have equal initial phases, $\theta=\delta=0$, as this gives rise to the optimal phase sensitivity (discussed later). This simply defines the coherent state to be displaced along the $x_1$ direction and the squeezed vacuum to have its reduced-noise (squeezed) quadrature along $p_2$, with the anti-squeezed quadrature along $x_2$ [@bib:GerryKnight05]. The average photon number in the coherent state is $N_{\textrm{coh}}=|\alpha|^2$ and in the squeezed vacuum state $N_{\textrm{sqz}}=\sinh^2{r}$, which sets the SNL to be $\Delta^2 \phi_{\textrm{SNL}}=1/N_{\textrm{tot}}=1/(|\alpha|^2+\sinh^2{r})$.
The propagation of this Wigner function is accomplished by the transformation of the phase space variables through the MZI, dictated by its optical elements. These transformations are described by $$\begin{aligned}
\textrm{BS}(1/2)=\frac{1}{\sqrt{2}}
\left(
\begin{array}{cccc}
1&0&1&0\\
0&1&0&1\\
1&0&-1&0\\
0&1&0&-1\\
\end{array} \right)\end{aligned}$$ $$\begin{aligned}
\textrm{PS}(\phi)=
\left(
\begin{array}{cccc}
\cos(\frac{\phi}{2})&-\sin(\frac{\phi}{2})&0&0\\
\sin(\frac{\phi}{2})&\cos(\frac{\phi}{2})&0&0\\
0&0&\cos(\frac{\phi}{2})&\sin(\frac{\phi}{2})\\
0&0&-\sin(\frac{\phi}{2})&\cos(\frac{\phi}{2})\\
\end{array} \right) ,\end{aligned}$$ where both beam splitters are fixed to be 50-50 and $\phi$ represents the unknown phase difference between the two arms of our MZI. We have chosen to use a symmetric phase model in order to simplify calculations as well as agree with previous results [@nori1; @Hofmann]. Our goal then will be minimizing our uncertainty in the estimation of the unknown parameter $\phi$. Using these transforms, the total transform for the phase space variables is given by, $$\begin{aligned}
\left(
\begin{array}{cc}
x_{1\textrm{f}}\\
p_{1\textrm{f}}\\
x_{2\textrm{f}}\\
p_{2\textrm{f}}
\end{array}\right)
=\textrm{BS}(1/2) \cdot \textrm{PS}(\phi) \cdot \textrm{BS}(1/2) \cdot
\left(
\begin{array}{cc}
x_{1}\\
p_{1}\\
x_{2}\\
p_{2}
\end{array} \right) ,\end{aligned}$$ where $\{x_{j\textrm{f}},p_{j\textrm{f}}\}$ represent the phase space variables, for each mode, after propagation through the MZI.
We can also consider photon loss in the model by way of two mechanisms, photon loss to the environment inside the interferometer and photon loss at the detectors due to their inefficiency. Both of these can be modeled by placing a fictitious beam splitter in the interferometer with vacuum and an interferometer arm as input and tracing over one of the output modes, to mimic loss of photons to the environment [@photon_loss]. This linear photon loss mechanism can be modeled with the use of a relatively simple transform, since these states are all of Gaussian form. Specifically, this amounts to a transform of the covariance matrix according to $\sigma_L=(1-L)\mathbb{I} \cdot \sigma +L\mathbb{I}$, where $0\leq L\leq 1$ is the combined photon loss and $\mathbb{I}$ is the 4$\times$4 identity matrix. Similarly the mean vector is transformed according to $\textbf{d}_L=\sqrt{(1-L)}\mathbb{I}\cdot\textbf{d}$ [@onohoff09; @2016APS..MAR.G1277B].
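To make the covariance-matrix bookkeeping explicit, the following sketch (our own illustration; the values $|\alpha|^2=100$, $r=1$ and $L=0.2$ are illustrative) propagates $(\boldsymbol{\sigma},\textbf{d})$ through $\textrm{BS}\cdot\textrm{PS}(\phi)\cdot\textrm{BS}$, applies the loss map, and evaluates the phase variance of an $\hat{x}$-quadrature homodyne measurement on output mode 1 at $\phi=\pi$, which can be compared with the closed-form lossy homodyne expression and the lossy QCRB quoted in the next section.

```python
import numpy as np

def mzi_output(phi, alpha, r, L):
    """sigma and d of the two output modes for |alpha> (mode 1) and squeezed vacuum (mode 2),
    using the conventions above (sigma is twice the usual covariance)."""
    c, s = np.cos(phi / 2.0), np.sin(phi / 2.0)
    BS = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, -1, 0], [0, 1, 0, -1]]) / np.sqrt(2.0)
    PS = np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, c, s], [0, 0, -s, c]])
    S = BS @ PS @ BS
    sigma_in = np.diag([1.0, 1.0, np.exp(2 * r), np.exp(-2 * r)])   # delta = 0 squeezed vacuum
    d_in = np.array([np.sqrt(2.0) * alpha, 0.0, 0.0, 0.0])          # coherent state along x_1
    sigma = (1.0 - L) * (S @ sigma_in @ S.T) + L * np.eye(4)        # loss map on both modes
    d = np.sqrt(1.0 - L) * (S @ d_in)
    return sigma, d

alpha, r, L = 10.0, 1.0, 0.2        # |alpha|^2 = 100, r = 1, L = 20% (illustrative values)

def homodyne_phase_var(phi, dphi=1e-6):
    """Delta^2 phi for an x-quadrature homodyne measurement on output mode 1."""
    sigma, d = mzi_output(phi, alpha, r, L)
    slope = (mzi_output(phi + dphi, alpha, r, L)[1][0] - d[0]) / dphi
    return (sigma[0, 0] / 2.0) / slope**2    # Var(x) = sigma_xx / 2 in these conventions

print("numerical homodyne variance at phi = pi       :", homodyne_phase_var(np.pi))
print("closed form 1/(|a|^2 e^{2r}) + L/(|a|^2(1-L))  :",
      1 / (alpha**2 * np.exp(2 * r)) + L / (alpha**2 * (1 - L)))
print("lossy QCRB, Eq. (eq:qcrbnoise)                 :",
      (L * (np.exp(2 * r) - 1) + 1)
      / ((1 - L) * (alpha**2 * np.exp(2 * r) + np.sinh(r)**2 * (L * (np.exp(2 * r) - 1) + 1))))
```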
Results and Discussion
======================
Quantum Cramér-Rao Bound
------------------------
We consider an optimal measurement scheme in the sense of one saturating the quantum Cramér-Rao bound (QCRB) [@cramer1946mathematical; @QFI_Caves], which gives the best phase sensitivity possible for a chosen interferometer setup and input states. This optimality is independent of measurement scheme and it remains a separate task to show which measurement scheme achieves this optimal bound [@Smerzi07]. In what we call the classical version of this setup, a coherent state and vacuum state are used as input. With these two input states, the best sensitivity one can achieve is bounded by the SNL, which is achievable with many different detection schemes. Many interferometer models mainly focus on analytical analysis of Fisher information [@Datta11; @Walmsley09] when there is loss and phase drift. While this analysis is useful in that it demonstrates a 'best case scenario', it is often unclear whether the optimal detection scheme is practical to realize in an actual experimental setup. Thus, in our analysis, we are more focused on Fisher information *and* how it compares with specific detection schemes, under noisy conditions.
The benefit of using squeezed vacuum in place of vacuum is then that the phase measurement can now reach below the SNL. In order to compare various choices of measurement schemes, we not only need to calculate the various measurement outcomes, but also need to show the best sensitivity attainable with these input states. The best phase measurement one can do is given by the quantum Cramér-Rao bound [@cramer1946mathematical] and is related to the quantum Fisher information (QFI, $\mathscr{F}$) [@QFI_Caves], simply by $\Delta^2 \phi_{\textrm{QCRB}}=\mathscr{F}^{-1}$. For the input states of a coherent and squeezed vacuum, one can use the Schwinger representation to calculate the QFI, since these are pure states [@nori1; @dd1a]. Another option, and the method we use here, instead utilizes the Gaussian form of the states and can be calculated directly in terms of covariance and mean [@QFI1; @parisgaussian; @gao2014]. This method applies to pure and mixed states, as long as it maintains Gaussian form. Using this formalism, the QCRB for a coherent state and squeezed vacuum into an MZI can be found to be [@Smerzi07; @Kaushik1], $$\Delta^2 \phi_{\textrm{QCRB}}=\frac{1}{|\alpha|^2 \textrm{e}^{2r}+\sinh^2(r)}.
\label{eq:qcrb}$$ While this gives us a bound on the best sensitivity obtainable with these given input states, it does not directly consider loss or even tell us which detection scheme attains this bound.
Specific Measurements under lossless conditions
-----------------------------------------------
Now that we have a bound on the best possible sensitivity, we now seek to show how various choices of measurement compare to this bound. We consider some standard measurement choices including single-mode intensity, intensity difference, homodyne, and parity. While each of these measurements would require a significant reconfiguration of any interferometer, it is worthwhile to show how each choice impacts the resulting phase sensitivity measurement. We utilize the bosonic creation and annihilation operators ($\hat{a}^\dagger,\hat{a}$), which obey the commutation relation, $[\hat{a},\hat{a}^\dagger]=1$. We also utilize the quadrature operators ($\hat{x},\hat{p}$) which are related to the creation and annihilation operators by the transform $\hat{a}_j=\frac{1}{\sqrt{2}}(\hat{x}_j+i \hat{p}_j)$. These quadrature operators obey a similar commutator, $[\hat{x},\hat{p}]=i$.
In terms of our output Wigner function, $\langle \hat{O}\rangle_{\textrm{sym}}=\int_{-\infty}^{\infty}O(\textbf{X}) W(\textbf{X}) d\textbf{X}$, where "sym" indicates that this integral yields the symmetrically ordered expectation value of the operator $\hat{O}$. Each measurement operator $\hat{O}$ gives rise to a phase uncertainty by way of $\Delta^2 \phi=\Delta^2\hat{O}/|\partial \langle \hat{O}\rangle/\partial \phi|^2$.
Starting with the simplest measurement, an intensity measurement is given by $\hat{O}=\hat{a}^{\dagger}\hat{a}=(\hat{x}^2+\hat{p}^2-1)/2$, which is implemented by simply collecting the outgoing light directly onto a detector. For homodyne detection, $\hat{O}=\hat{x}$ (we find the optimal homodyne measurement is taken along the $x$ quadrature). For a balanced homodyne detection scheme, one would impinge one of the outgoing light outputs onto a 50-50 beam splitter, along with a coherent state of the same frequency as the input coherent state (usually this is derived from the same source) and perform intensity difference between the two outputs of this beam splitter. While there exist other implementations of homodyne detection than the one we describe here, we choose a standard balanced homodyne scheme for simplicity. A standard intensity difference is defined as $\hat{O}= \hat{a}^{\dagger}\hat{a}- \hat{b}^{\dagger}\hat{b}$. This particular measurement choice is also explored in Ref. [@paris2015]. Parity detection is defined to be $\hat{O}=\hat{\Pi}=(-1)^{\hat{a}^{\dagger}\hat{a}}$, whose expectation value is $\langle \hat \Pi \rangle=\pi W(0,0)$. Parity detection has been implemented experimentally, though focusing on its ability for super-resolution [@Cohen14]. In the lossless case all of the chosen measurements can surpass the SNL to various degrees; in order of improving phase sensitivity, single-mode intensity performs the worst, followed by intensity difference, homodyne, and finally parity. The analytical forms of each detection scheme, at their respective minima, are listed below and we confirm that, under lossless conditions, the parity measurement matches the QCRB [@Kaushik1], $$\Delta^2 \phi_{\hat{\Pi}}=\frac{1}{|\alpha|^2 \textrm{e}^{2r}+\sinh^2(r)}.$$ Homodyne attains $$\Delta^2 \phi_{\hat{x}}=\frac{1}{|\alpha|^2 \textrm{e}^{2r}},$$ and intensity difference attains $$\Delta^2 \phi_{\hat{a}^{\dagger}\hat{a}- \hat{b}^{\dagger}\hat{b}}=\frac{\textrm{e}^{-2r}(4|\alpha|^2+(\textrm{e}^{2r}-1)^2)}{(\cosh(2r)-2|\alpha|^2-1)^2},$$ while a single-mode intensity measurement attains a minimum of $$\begin{split}
&\Delta^2 \phi_{\hat{a}^{\dagger}\hat{a}}=\\
&\frac{4|\alpha|^2\textrm{e}^{-2r}+2\cosh(2r)+4\sqrt{2} |\alpha|\sinh(2r)-2}{(\cosh(2r)-2|\alpha|^2-1)^2}.
\end{split}$$ We can notice that for high coherent state powers ($|\alpha|^2\gg1$), each detection scheme's leading term in its respective phase variance is given by $\Delta^2 \phi_{\textrm{all}}\approx (|\alpha|^2 \textrm{e}^{2r})^{-1}$, which is nearly optimal since the $\sinh^2(r)$ term in the QCRB is negligible compared to $|\alpha|^2 \textrm{e}^{2r}$ at large $\alpha$. From these forms, we can say that in the low-photon-number regime ($|\alpha|^2<500$) the difference between these detection schemes can be significant, but in the high-photon-number regime ($|\alpha|^2>10^{5}$) there is little difference between the various detection schemes.
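A direct numerical evaluation of these lossless minima (our own illustration at $|\alpha|^2=100$, $r=1$) makes the ordering quoted above explicit.

```python
# Evaluate the lossless phase-variance minima quoted above at |alpha|^2 = 100, r = 1 (illustrative).
import numpy as np

a2, r = 100.0, 1.0
e2r, sh, ch = np.exp(2 * r), np.sinh(r), np.cosh(2 * r)

results = {
    "parity"               : 1.0 / (a2 * e2r + sh**2),                              # equals the QCRB
    "homodyne"             : 1.0 / (a2 * e2r),
    "intensity difference" : np.exp(-2 * r) * (4 * a2 + (e2r - 1)**2) / (ch - 2 * a2 - 1)**2,
    "single-mode intensity": (4 * a2 * np.exp(-2 * r) + 2 * ch
                              + 4 * np.sqrt(2 * a2) * np.sinh(2 * r) - 2) / (ch - 2 * a2 - 1)**2,
    "SNL"                  : 1.0 / (a2 + sh**2),
}
for name, var in results.items():
    print(f"{name:22s} {var:.3e}")
# Printed in this order, the values increase from parity (the QCRB) up to the SNL,
# reproducing the ordering stated in the text for the low-power regime.
```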
Lossy Interferometer
-------------------
We now consider the effects of loss and calculate the lossy QCRB. This is done following the same loss procedure described previously. The lossy QCRB of this mixed state becomes [@onohoff09] $$\begin{split}
&\Delta^2 \phi^{\textrm{Loss}}_{\textrm{QCRB}}=\\
&\frac{L(e^{2r}-1)+1}{(1-L)\lbrace|\alpha|^2e^{2r}+\sinh^2(r)[L(e^{2r}-1)+1]\rbrace}.
\label{eq:qcrbnoise}
\end{split}$$
Note that this QCRB with loss accounts only for linear photon loss, arising from loss inside the interferometer and from inefficient detectors. In reality, there may be more specific sources of noise one needs to consider, but our purpose here is to show a preliminary case in which simple loss models are considered. We note that a measurement scheme proposed by Ono and Hofmann is exactly optimal under loss (thus it is able to achieve the bound given by Eq. \[eq:qcrbnoise\]) [@onohoff09], but we wish to explore how simpler measurement schemes perform when compared with this bound.
In the case of losses, the forms of the phase variances necessarily become much less appealing. For this reason, we list only the analytical form of the homodyne measurement under loss, as it is our prime candidate for a nearly optimal measurement. The phase variance of homodyne in a lossy interferometer is given by $$\Delta^2 \phi_{\hat{x}}(L)=\frac{1}{|\alpha|^2 e^{2r}}+\frac{L}{|\alpha|^2 (1-L)},$$ where we have fixed the phase to its optimal value $\phi=\pi$ to obtain the phase variance minimum. We can note several interesting comparisons from this form, including the obvious $\Delta^2 \phi_{\hat{x}}(L) \geq \Delta^2 \phi_{\hat{x}}$ and $\Delta^2 \phi_{\hat{x}}(0)=\Delta^2 \phi_{\hat{x}}$. However, if we investigate Eq. \[eq:qcrbnoise\] for high powers (large $|\alpha|$), we find $$\begin{split}
\Delta^2 \phi^{\textrm{Loss}}_{\textrm{QCRB}}&=\frac{1}{|\alpha|^2 e^{2r}}+\frac{L}{|\alpha|^2 (1-L)}+\mathcal{O}\left(\frac{1}{|\alpha|^4}\right)\\ \nonumber
&\approx \Delta^2 \phi_{\hat{x}}(L) . \nonumber
\end{split}$$ This expansion illustrates the fact that homodyne is nearly optimal and approaches the QCRB in the large power limit. One can see in Fig. \[fig:phases\] that, when loss is considered, parity detection suffers greatly, while other detection schemes are still able to achieve sub-SNL phase variances. In all but the intensity and parity measurement schemes, the optimal phase (the point at which each curve achieves its minimum) has a constant value and therefore should not prove overly difficult to stabilize. In the case of the intensity and parity measurements, however, this optimal phase depends on both the squeezing strength $r$ and the amplitude of the coherent state $|\alpha|$. Therefore, fluctuations in the source will actually shift the optimal phase setting and in general degrade the phase measurement in these schemes. Note that, in practice, typical experiments use an offset to remain near these optimum values, but purposely stay slightly away from the minimum due to noise considerations. Just as in the lossless case, Fig. \[fig:phases\] shows that under lossy conditions homodyne remains nearly optimal in the low power regime. In contrast, the previously optimal measurement, parity, is now not able to even reach sub-SNL sensitivity.
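To quantify "nearly optimal", the following is our own evaluation of the two closed-form expressions above, with $r=1$ and $L=0.2$ chosen to match the parameters of Fig. \[fig:phases\].

```python
# Ratio of the lossy homodyne variance to the lossy QCRB for r = 1 and L = 0.2.
import numpy as np

r, L = 1.0, 0.2
for a2 in [10.0, 1e2, 1e3, 1e5]:
    hom  = 1.0 / (a2 * np.exp(2 * r)) + L / (a2 * (1.0 - L))
    qcrb = (L * (np.exp(2 * r) - 1) + 1) / ((1.0 - L) * (a2 * np.exp(2 * r)
             + np.sinh(r)**2 * (L * (np.exp(2 * r) - 1) + 1)))
    print(f"|alpha|^2 = {a2:8.0f}   homodyne/QCRB = {hom / qcrb:.6f}")
# The ratio approaches 1 from above as |alpha|^2 grows, as the expansion above indicates.
```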
![Log plot of phase variance for various detection schemes for a coherent state and squeezed vacuum into an MZI, as a function of the unknown phase difference $\phi$. Loss parameters have been set to $L=20\%$. Input state parameters for each respective state are set to $|\alpha|^2=500$ and $r=1$. SNL and QCRB are also plotted with the same loss parameters. Note that a homodyne measurement nearly, but not exactly, reaches the QCRB as shown by the inset.[]{data-label="fig:phases"}](phases2){width="\columnwidth"}
We can also plot the phase variance as a function of average photon number, shown in Fig. \[fig:nums\] for large powers, which can be related to the light's optical frequency and power by $|\alpha|^2=P/(\hbar \omega_0)$ [@GEO_bound]. The phase variances shown in Fig. \[fig:nums\] are at their respective minima in terms of optimal phase. In this form, it is clear that a parity measurement suffers greatly under lossy conditions and at high powers. Parity may also be difficult to implement in certain interferometric setups as it either involves number counting (which is not feasible at very large powers) or several homodyne measurements [@plick2010]. Alternatively, a single homodyne measurement is nearly optimal in this lossy case, still only requires measurement on a single mode, is simpler to implement than parity, and is not nearly as sensitive to loss. We note that while homodyne appears to meet the QCRB in Fig. \[fig:phases\], it actually does not exactly reach the QCRB (as indicated in the inset of Fig. \[fig:phases\]). While intensity difference is also close in phase variance to a homodyne measurement (when $|\alpha|^2>100$), it requires utilization of both output modes for phase measurement, which may not be feasible in some setups. We note that while we have chosen typical parameters for $|\alpha|$ and $r$, the trend of homodyne achieving near optimal measurement generalizes to other parameter choices as well.
![Log plot of phase variance for various detection schemes for a coherent state and squeezed vacuum into an MZI, as a function of the average coherent photon number, $|\alpha|^2$, shown in the very large power regime. Total loss parameters have been set to $L=20\%$. We have assumed one can set the control phase to its optimal value, to obtain the best phase variance in each measurement choice. Squeezing strength in the squeezed state is set to $r=1$. Note that Parity is now not able to achieve even sub-SNL, due to loss, while homodyne and intensity difference quickly approach the QCRB (appear on top of one another). SNL and QCRB are also plotted with the same lossy parameters.[]{data-label="fig:nums"}](nums2){width="\columnwidth"}
Phase Drift
-----------
Returning to Fig. \[fig:phases\], it is clear at which value of the phase the various measurements attain their lowest variance. It is at this value of the phase that one attempts to always take measurements, with the use of a control phase inside the interferometer. The width of each curve can then be interpreted as the chosen measurement scheme’s resistance to phase drift. Phase drift comes about because the control phases in the interferometer cannot be set with infinite precision; in general, the control phase will vary around the optimal phase setting. For this reason we model this phase drift in a more quantitative way, using the analytical forms of the phase variances of the various measurements as functions of the unknown phase $\phi$. We simulate phase drift by computing a running average of the phase variance, with a pseudo-randomly chosen phase near the optimal phase for each measurement. This pseudo-random choice is drawn from a Gaussian distribution whose mean is fixed at each measurement’s respective optimal phase and whose standard deviation is chosen to be $\sigma=0.15$. This gives a clearer picture of each measurement’s behavior under phase drift. For simplicity, we focus on the lossless case for this treatment of phase drift.
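A schematic sketch of this averaging procedure is given below; the variance curve used there is a placeholder (an assumed quadratic well, not one of the analytical forms from the paper) and serves only to illustrate the running-average construction.

```python
# Schematic sketch of the phase-drift averaging described above.  The
# function `phase_variance` is a placeholder standing in for the analytical
# variance of whichever detection scheme is being studied.
import numpy as np

rng = np.random.default_rng(0)

def phase_variance(phi, phi_opt=np.pi / 2, floor=2.0e-3, curvature=0.5):
    # Placeholder curve with minimum value `floor` at the optimal phase.
    return floor + curvature * (phi - phi_opt) ** 2

sigma, phi_opt, n_max = 0.15, np.pi / 2, 200
draws = rng.normal(phi_opt, sigma, size=n_max)            # drifted control phases
running_avg = np.cumsum(phase_variance(draws)) / np.arange(1, n_max + 1)
print(running_avg[[0, 9, 49, 199]])
# The running average settles toward the drift-averaged value
# floor + curvature * sigma**2 as the number of measurements grows.
```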
Shown in Fig. \[fig:phasedrift\] is the ratio of the phase variance to the QCRB for each measurement scheme, as a function of the number of measurements. As the number of measurements is increased, the phase variance asymptotes to the ideal measurement case, given by the phase variance at the optimal phase. This is an illustration of the law of large numbers: as the number of measurements increases, each scheme approaches its true average. However, as is clear from Fig. \[fig:phasedrift\], the schemes approach their averages at significantly different rates. We can see that parity performs fairly poorly compared to the other measurement schemes. Among the intensity, homodyne, and intensity difference measurements, it is clear that homodyne and intensity difference attain a small phase variance while also being more tolerant of phase drift. This confirms a special case of Genoni [*et al*.]{} [@Paris11], who showed that homodyne measurement is resistant to phase diffusion in pure Gaussian states. In principle, each of the different measurement schemes will attain its respective phase variance minimum as the number of measurements increases to infinity, but it is instructive to see how quickly a finite number of measurements approaches the ideal phase variance minimum.
[![Log plots of phase variance ratio to the lossless QCRB as a function of number of measurements ($M$) under phase drift noise only. For all plots shown, $|\alpha|^2=100$, $r=1$ and the standard deviation of the chosen Gaussian distribution is fixed to $\sigma=0.15$. Note that parity and intensity measurement remain noisy even after 200 averaged measurements, where homodyne and intensity difference approach their optimal phase variance quickly in $M$.[]{data-label="fig:phasedrift"}](pdriftintparity.eps "fig:"){width="0.8\columnwidth"}]{}
[![Log plots of phase variance ratio to the lossless QCRB as a function of number of measurements ($M$) under phase drift noise only. For all plots shown, $|\alpha|^2=100$, $r=1$ and the standard deviation of the chosen Gaussian distribution is fixed to $\sigma=0.15$. Note that parity and intensity measurement remain noisy even after 200 averaged measurements, where homodyne and intensity difference approach their optimal phase variance quickly in $M$.[]{data-label="fig:phasedrift"}](pdrifthomointdiff.eps "fig:"){width="0.8\columnwidth"}]{}
Thermal Photon Noise
--------------------
In addition to photon loss, detector efficiency, and phase drift, we also model the inevitable interaction with thermal photon noise from the environment. This is accomplished much in the same way as the photon loss model, but here we consider a thermal photon state incident on a fictitious beam splitter in each arm of the interferometer and trace out one of its output modes. This admits a tunable amount of thermal photon noise into the interferometer (set by the average photon number of the thermal state). The effect of this unwanted thermal photon noise on the phase variance of the various measurements is shown in Fig. \[fig:thermal\]. From this we can see that even a relatively low average number of thermal photons significantly degrades the phase variance of every scheme, and drastically affects the parity scheme, pushing it significantly above the SNL. The SNL and QCRB under this noise model do not directly incorporate the additional thermal photons. Also in this regime, a standard single-mode intensity measurement no longer achieves sub-SNL phase variance, but homodyne and intensity difference continue to reach sub-SNL. We also note that the advantage of homodyne over intensity difference measurement is significantly reduced in the presence of thermal photon noise, but homodyne still maintains its superiority. The introduction of a larger average thermal photon number continues to degrade all measurements so that they no longer beat the SNL, but this example showcases their behavior under this noise model. It should be noted that in the optical regime the thermal occupation at room temperature is approximately $n_{\textrm{th}}\approx10^{-20}$, so many interferometric schemes see no significant contribution from this model of thermal photon noise, whereas experiments at microwave frequencies can have $n_{\textrm{th}}\approx1$, where this model is more applicable.
![Log plot of phase variance of the various detection schemes, with introduction of thermal photon noise into both interferometer arms, of total average photon number of $n_{\textrm{th}}=1$. Strength of the two input sources are set to $|\alpha|^2=500$, $r=1$. Note that homodyne loses most of its previous advantage over intensity difference but remains the superior measurement choice.[]{data-label="fig:thermal"}](thm2){width="0.9\columnwidth"}
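For completeness, the sketch below illustrates the standard Gaussian-state bookkeeping behind this fictitious-beam-splitter model; the mixing rule and the chosen parameters are assumptions made for illustration and are not taken from the simulations above.

```python
# A minimal Gaussian-state sketch of the thermal-noise model described above:
# a mode of quadrature variance V is mixed with a thermal mode of mean photon
# number n_th on a beam splitter of transmissivity eta, and the ancilla is
# traced out.
import numpy as np

def mix_with_thermal(V, eta, n_th, V_vac=0.5):
    """Quadrature variance after tracing out the thermal ancilla.

    V_vac is the vacuum quadrature variance in the chosen convention
    (1/2 for the Wigner-function normalization used in the appendix).
    """
    return eta * V + (1.0 - eta) * (2.0 * n_th + 1.0) * V_vac

r, eta = 1.0, 0.8                      # illustrative squeezing and transmissivity
V_squeezed = 0.5 * np.exp(-2.0 * r)    # squeezed quadrature variance
for n_th in (0.0, 1.0, 10.0):
    print(n_th, mix_with_thermal(V_squeezed, eta, n_th))
# Even n_th ~ 1 lifts the squeezed quadrature well above its lossless value,
# consistent with the degradation of every scheme seen in Fig. [fig:thermal].
```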
We therefore recommend a homodyne measurement as the simplest, nearly optimal choice for a setup such as the one discussed here. Homodyne is a typical measurement in interferometry experiments; it is a single-mode measurement and is comparatively resistant to photon loss, imperfect detector efficiency, and phase drift. It shows its main benefit in the low power regime, but performs nearly optimally in both the low and high power regimes.
Conclusion
==========
In this paper, we have examined the performance of many common interferometric measurement schemes. While all are able to achieve a sub-SNL phase variance in the lossless case, for the choice of a coherent and squeezed vacuum input state, all are outperformed by a homodyne measurement when loss is introduced. While these measurements each come with their own challenges in implementation, we have shown that each measurement’s performance can vary significantly under different noise models. We have also shown that in the high-photon-number regime, with loss, most measurement schemes approach the QCRB, except for parity, which suffers significantly. Our results may imply that simpler measurement schemes are overall appealing when using large powers. The behavior of each measurement scheme under phase drift and thermal photon noise is also discussed, and we find that homodyne and intensity difference measurements behave best within these models. This should be expected, as both homodyne and intensity difference measurements operate in a similar way, subtracting intensities between two modes and thereby removing common noise sources. Therefore, when considering ease of implementation as well as near-optimal detection, we conclude that homodyne is nearly optimal under loss, phase drift, and thermal photon noise, for the specific choice of coherent and squeezed vacuum input states and in both power regimes.
B.T.G. would like to acknowledge support from the National Physical Science Consortium & National Institute of Standards and Technology graduate fellowship program as well as helpful discussions with Dr. Emanuel Knill at NIST-Boulder. C.Y. would like to acknowledge support from an Economic Development Assistantship from the National Science Foundation and the Louisiana State University System Board of Regents. D.K.M. would like to acknowledge support from the University Grants Commission, New Delhi, India, for a Raman Fellowship. T.R.C. would like to acknowledge support from National Science Foundation grant PHY-1150531. This document has been assigned the LIGO document number LIGO-P1600084. J.P.D. would like to acknowledge support from the Air Force Office of Scientific Research, the Army Research Office, the Boeing Corporation, the National Science Foundation, and the Northrop Grumman Corporation. We would all also like to thank Dr. Haixing Miao and the MQM LIGO group for helpful discussions. All authors contributed equally to this work.
Declarations
============
Appendix 1: Measurement
-----------------------
We have shown that different choices of measurement lead to varied ability to perform parameter estimation. Here we show the details of each detection’s analytical calculation for the ideal, lossless case. For each detection choice, $\hat{O}$, we need to calculate $$\begin{split}
\Delta^2 \phi&=\Delta^2\hat{O}/|\partial \langle \hat{O}\rangle/\partial \phi|^2\\
&=(\langle \hat{O}^2\rangle-\langle \hat{O}\rangle^2)/|\partial \langle \hat{O}\rangle/\partial \phi|^2
\end{split}$$ that is, we need the variance of the chosen measurement and the derivative of its first moment. Therefore, in general we need the first and second moments for each chosen detection scheme. We also need the Wigner function at the output; it is obtained by following the transformation of phase space variables described in the text and results in a final output Wigner function of $$\begin{split}
&W(x_1,p_1,x_2,p_2)=\\
&\frac{1}{\pi^2}\textrm{Exp}\{-\frac{1}{2}e^{-2r}[p_2^2+x_1^2+e^{4r}(p_1^2+x_2^2)\\
&+e^{2r}(p_1^2+p_2^2+x_1^2+x_2^2+4|\alpha|^2\\
&+4\sqrt{2}|\alpha|(x_2\cos{(\phi/2)}-p_1\sin{(\phi/2)}))\\
&+2e^{2r}\sinh{r}((p_2^2-x_1^2+e^{2r}\cos{\phi}(p_1-x_2)(p_1+x_2))\\
&+2\sin{\phi}(p_2x_1+e^{2r}p_1x_2))]\},
\end{split}
\label{eq:wigout}$$ where subscripts label spatial modes, $\alpha$ is the amplitude of the coherent state, $r$ is the squeezing strength of the squeezed state, and we have chosen $\theta_{\textrm{coh}}=\delta_{\textrm{sqz}}=0$ for simplicity. For an intensity measurement, written in terms of the phase space quadrature operators $\hat{x},\hat{p}$, the second moment is $$\begin{split}
\langle (\hat{a}^\dagger\hat{a})^2\rangle&=\langle (\hat{a}^\dagger\hat{a})^2\rangle_{\textrm{sym}}-\langle \hat{a}^\dagger\hat{a}\rangle-\frac{1}{2}\\
&=\int_{-\infty}^{\infty}\left[\frac{1}{4}(x^2+p^2)^2 -\frac{1}{2}(x^2+p^2)\right]W(\textbf{X}) d\textbf{X}-\frac{1}{2}
\end{split}$$ where “sym” denotes the symmetric operator form, calculated from $\langle(\hat{a}^\dagger\hat{a})^2\rangle_{\textrm{sym}}=\int_{-\infty}^{\infty}\frac{1}{4}(x^2+p^2)^2 \times W(\textbf{X}) d\textbf{X}$, and $W(\textbf{X})$ is the output Wigner function given by Eq. \[eq:wigout\]. The variance is then $$\begin{split}
\Delta^2(\hat{a}^\dagger\hat{a})&=\langle (\hat{a}^\dagger\hat{a})^2\rangle-\langle \hat{a}^\dagger\hat{a}\rangle^2\\
&=\int_{-\infty}^{\infty}\left[\frac{1}{4}(x^2+p^2)^2 -\frac{1}{2}(x^2+p^2)\right]W(\textbf{X}) d\textbf{X}-1/2\\
&-\left[\int_{-\infty}^{\infty}\frac{1}{2}(x^2+p^2)W(\textbf{X}) d\textbf{X}-1/2\right]^2
\end{split}$$ For homodyne detection, since the optimal homodyne measurement is along $\hat{x}$, we simply have, $$\Delta^2\hat{x}=\int_{-\infty}^{\infty}x^2 W(\textbf{X})d\textbf{X} -\left(\int_{-\infty}^{\infty}x W(\textbf{X})d\textbf{X}\right)^2$$ For intensity difference we have, $$\begin{split}
&\Delta^2[(\hat{a}^\dagger\hat{a})_1-(\hat{a}^\dagger\hat{a})_2]=\\
&\int_{-\infty}^{\infty}\frac{1}{4}\left[(x_1^2+p_1^2)-(x_2^2+p_2^2)\right]^2W(\textbf{X}) d\textbf{X}\\
&-\left\{\int_{-\infty}^{\infty}\frac{1}{2}\left[(x_1^2+p_1^2)-(x_2^2+p_2^2)\right]W(\textbf{X}) d\textbf{X}\right\}^2
\end{split}$$ and finally, for the parity measurement, $$\Delta^2\hat{\Pi}=1-[\pi~W(0,0)]^2$$ utilizing $\langle \hat{\Pi}^2\rangle=1$, where $W(0,0)$ is the value of the Wigner function at the origin of phase space.
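The error-propagation step is the same for every detection choice; the sketch below (an illustration only, with placeholder moment functions rather than the Wigner-function integrals above) shows a generic finite-difference evaluation of $\Delta^2\phi=\Delta^2\hat{O}/|\partial\langle\hat{O}\rangle/\partial\phi|^2$.

```python
# Generic finite-difference sketch of the error-propagation formula used in
# this appendix.  `mean_O` and `var_O` are placeholders standing in for the
# moments of whichever observable O is being considered.
import numpy as np

def mean_O(phi):              # placeholder first moment <O>(phi)
    return np.cos(phi)

def var_O(phi):               # placeholder variance Delta^2 O(phi)
    return 0.5

def phase_variance(phi, h=1.0e-6):
    dO = (mean_O(phi + h) - mean_O(phi - h)) / (2.0 * h)   # d<O>/dphi
    return var_O(phi) / dO ** 2

print(phase_variance(np.pi / 2))   # = 0.5 / sin^2(pi/2) = 0.5 for this placeholder
```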
Competing Interests
-------------------
All authors declare that they have no competing interests.
Authors’ contributions
----------------------
B.T.G. drafted the manuscript and wrote code for simulations. C.Y. and D.K.M. assisted with simulations and interpretation of results. R.S., H.L., T.R.C., and J.P.D. edited the manuscript, guided the simulation goals, and shaped the purpose of the project. All authors read and approved the final manuscript.
[^1]: Corresponding author.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'A mapping is obtained relating radial screened Coulomb systems with low screening parameters to radial anharmonic oscillators in $N$-dimensional space. Using the formalism of supersymmetric quantum mechanics, it is shown that exact solutions of these potentials exist when the parameters satisfy certain constraints.'
author:
- 'B. Gönül, O. Özer, M. Koçak'
- 'Department of Engineering Physics,Faculty of Engineering,'
- 'University of Gaziantep, 27310 Gaziantep-Türkiye'
title: Unified treatment of screening Coulomb and anharmonic oscillator potentials in arbitrary dimensions
---
PACS numbers: 03.65.Ca, 03.65.Ge, 11.30.Na
Introduction
============
Since the appearance of quantum mechanics there has been continual interest in models for which the corresponding Schrödinger equation is exactly solvable. Solvable potential problems have played a dual role in physics. First, they have served as useful aids in modeling realistic physical problems, and second, they offer an interesting field of investigation in their own right. Related to this latter area, the concept of solvability has changed to some extent in recent years. With regard to solvability of the Schrödinger equation, there are three interesting classes of potentials.
The first class consists of the exactly solvable potentials, for which all energy levels and the corresponding wave functions can be obtained in explicit form. The hydrogen atom and the harmonic oscillator are the best-known examples of this type. The second class is the so-called quasi-exactly solvable (QES) potentials, for which a finite number of eigenstates of the corresponding Hamiltonian can be found exactly in explicit form. The first examples of such potentials were given in [@singh]. Subsequently, several methods for generating partially solvable potentials were worked out; as a result, many QES potentials were found [@turbiner]. The third class is the conditionally exactly solvable potentials [@dutra], for which the eigenvalue problem for the corresponding Hamiltonian is exactly solvable only when the parameters of the potential obey certain conditions.
Although modern computational facilities permit the construction of solutions for any well-behaved potential to any degree of accuracy, there remains continuing interest in exact solutions for a wider range of potentials. In connection with this, the technique of changing the independent coordinate has always been a useful tool in the solution of the Schrödinger equation. For one thing, this allows something of a systematic approach, enabling one to recognize the equivalence of superficially unrelated quantum mechanical problems. For example, solvable Natanzon [@natanzon] potentials in nonrelativistic quantum mechanics are known to group into two disjoint classes depending on whether the Schrödinger equation can be reduced to a hypergeometric or a confluent hypergeometric equation. All the potentials within each class are connected via point canonical transformations. Gangopadhyaya and his co-workers [@gango] established a connection also between the two classes with appropriate limiting procedures and redefinition of parameters, thereby inter-relating all known solvable potentials. In order for the Schrödinger equation to be mapped into another Schrödinger equation, there are severe restrictions on the nature of the coordinate transformation. Coordinate transformations which satisfy these restrictions give rise to new solvable problems. When the relationship between coordinates is implicit, the new solutions are only implicitly determined, while if the relationship is explicit then the newly found solvable potentials are also shape invariant [@gango; @junker]. In a more specific application of these ideas, Kostelecky et al. [@kostelecky1] were able to relate, using an explicit coordinate transformation, the Coulomb problem in $N$-dimensions with the $N$-dimensional harmonic oscillator. Other explicit applications of the coordinate transformation idea can be found in the review articles of Haymaker and Rau [@haymaker].
Moreover, an anti-isospectral transformation, also called a duality transformation, was recently introduced [@krajewska]. This transformation relates the energy levels and wave functions of two QES potentials. Many recent papers \[10-12, and the references therein\] have addressed this subject of coordinate transformations, placing particular emphasis on QES power-law potentials, which are also, to some extent, the subject of the present work. The generalization to higher dimensions of one-dimensional QES potentials was discussed in a recent paper [@mavromatis] and a few specific $N$-dimensional solutions were listed. In that work, applying a suitable transformation, these potentials were shown to be related to the isotropic oscillator plus Coulomb potential system and some normalized isolated solutions for this system were obtained.
The importance of the study of QES potentials, apart from intrinsic academic interest, rests on the possibility of using their solutions to test the quality of numerical methods and on the possible existence of real physical systems that they could represent. For instance, anharmonic oscillators and screened Coulomb (or Yukawa) potentials represent simplified models of many situations found in different fields of physics. These problems have been studied for years and a general solution has not yet been found.
The problem of quantum anharmonic oscillators has been the subject of much discussion for decades, both from an analytical and a numerical point of view, because of its important applications in quantum-field theory [@itzykson], molecular physics [@reid], and solid-state and statistical physics [@kittle; @pathria]. Various methods have been used successfully for the one-dimensional anharmonic oscillators with various types of anharmonicities. Relatively less attention has been given to the anharmonic oscillators in higher-dimensional space because of the presence of angular-momentum states that make the problem more complicated. The recent works [@chaudhuri; @morales] have shown that there are many interesting features of the anharmonic oscillators and the perturbed Coulomb problems in higher-dimensional space, and have discussed the explicit connection between these two potentials.
In the spirit of the works in Refs. [@chaudhuri; @morales], we show the mappings between screened Coulomb potentials (with low screening parameters) and anharmonic oscillator potentials in $N$-dimensional space, which have not been previously linked. The connection between these potentials is also checked numerically by the use of the Lagrange-mesh calculation technique [@baye; @hesse]. Next we study the $N$-dimensional screened Coulomb problem and higher order anharmonic oscillators within the framework of supersymmetric quantum mechanics (SSQM) [@cooper] and show that SSQM yields exact solutions for a single state only.
In the following section, we outline the mapping procedure used throughout the article and give its applications. We also discuss in the same section exact supersymmetric treatments of the ground state solutions for the initial and transformed potentials. Analysis and discussion of the results obtained are given in section 3. Section 4 contains concluding remarks.
Mappings between the two distinct systems
=========================================
The radial Schrödinger equation for a spherically symmetric potential in $N$-dimensional space
$$-\frac{1}{2}\left[\frac{d^{2}R}{dr^{2}}+\frac{N-1}{r}\frac{dR}{dr}\right]+\frac{%
\ell(\ell+N-2)}{2r^{2}}R =[E-V(r)]R$$
is transformed to $$-\frac{d^{2}\Psi}{dr^{2}}+\left[\frac{(M-1)(M-3)}{4r^{2}}+2V(r)\right]\Psi=2E\Psi$$ where $\Psi(r)=r^{(N-1)/2}R(r)$ is the reduced radial wave function and $M=N+2\ell$. Note that the solutions for a particular central potential are the same as long as $M$ remains unaltered. For instance, the $s$-wave eigensolutions and eigenvalues in four-dimensional space are identical to the $p$-wave solutions in two dimensions.
We substitute $r=\alpha\rho^{2}/2$ and $R=F(\rho)/\rho^{\lambda}$, suggested by the known transformations between the Coulomb and harmonic oscillator problems [@kostelecky1; @bergmann] and used by [@chaudhuri; @morales] to show the mappings between unperturbed Coulomb and anharmonic oscillator problems, and transform Eq.(1) to another Schrödinger-like equation in $N^{\prime}=2N-2-2\lambda$ dimensional space with angular momentum $L=2\ell+\lambda$, $$-\frac{1}{2}\left[\frac{d^{2}F}{d\rho^{2}}+\frac{N^{\prime}-1}{\rho}\frac{dF}{%
d\rho}\right]+\frac{L(L+N^{\prime}-2)}{2\rho^{2}}F
=[\hat{E}-\hat{V}(\rho)]F$$ where $$\hat{E}-\hat{V}(\rho)=E\alpha^{2}\rho^{2}-\alpha^{2}\rho^{2}V(\alpha%
\rho^{2}/2)$$ and $\alpha$ is a parameter to be adjusted properly. Thus, by this transformation, the $N$-dimensional radial Schrödinger equation with angular momentum $\ell$ can be transformed to an $N^{\prime}=2N-2-2\lambda$ dimensional equation with angular momentum $L=2\ell+\lambda$. The significance of the mapping parameter $\lambda$ is discussed below.
A student of introductory quantum mechanics often learns that the Schrödinger equation can be solved numerically for all angular momenta for the screened Coulomb and anharmonic oscillator problems. Less frequently, the student is made aware of the relation between these two problems, which are linked by the simple change of the independent variable discussed in detail throughout this section. Under this transformation, energies and coupling constants trade places, and orbital angular momenta are rescaled. Thus, we show here that there is really only one quantum mechanical problem, not two.
The static screened Coulomb potential $$V_{SC}(r)=-e^{2}\exp(-\delta r)/r$$ where $\delta$ is a screening parameter, is known to describe adequately the effective interaction in the many-body environments of a variety of fields such as atomic, nuclear, solid-state and plasma physics. In nuclear physics it goes under the name of the Yukawa potential (with $e^{2}$ replaced by another coupling constant), and in plasma physics it is commonly known as the Debye-Hückel potential. Eq. (5) also describes the potential of an impurity in a metal and in a semiconductor [@weisbuch]. Since the Schrödinger equation for such potentials does not admit exact analytic solutions, various approximate methods \[23, 24, and the references therein\], both analytic and numerical, have been developed.
Considering the recent interest in various power-law potentials in the literature, we work throughout the article within the framework of a low screening parameter. In this case, the screened Coulomb potential can be written in the form $$\begin{aligned}
V_{SC}(r)&=&-e^{2}~\frac{\exp(-\delta r)}{r}\cong
-\frac{e^{2}}{r}+e^{2}\delta-
-\frac{e^{2}\delta^{2}}{2}r+\frac{e^{2}\delta^{3}}{6}r^{2}-
\frac{e^{2}\delta^{4}}{24}r^{3}+
\frac{e^{2}\delta^{5}}{120}r^{4}%
\nonumber
\end{aligned}$$ $$\begin{aligned}
&=&\frac{A_{1}}{r}+A_{2}+A_{3}r+A_{4}r^{2}+A_{5}r^{3}+A_{6}r^{4}
\end{aligned}$$ The literature is rich with examples of particular solutions for such power-law potentials employed in different fields of physics; for recent applications see Refs. [@znojil; @alberg]. At this stage one may wonder why the series expansion is truncated at a low order. This can be understood as follows. It is widely appreciated that convergence is not an important or even desirable property for series approximations in physical problems. Specifically, a slowly convergent approximation which requires many terms to achieve reasonable accuracy is much less valuable than a divergent series which gives accurate answers in a few terms. This is clearly the case for the screening Coulomb problem [@doren]. This also explains why leading orders of the $1/N$ expansion converge at a similar rate for the hydrogen atom, the screening Coulomb potential, and the two-electron atom even though the last two of these series diverge eventually [@chatterjee]. In addition, the complexity of calculating higher order terms in the series for the corresponding transformed potential grows rapidly. Hence, if an accurate approximation cannot be achieved in a few terms, the present method may not be useful. However, in section 3 we show that the present technique gives excellent estimates for the energy eigenvalues of both the truncated screening Coulomb and the anharmonic oscillator problems in higher dimensions.
Though the mapping procedure introduced is valid for any bound state, throughout this work we are concerned only with the ground state. We have two reasons for this restriction. The first reason is that the exact analytical ground state solutions for the potentials of interest are available, which provides a test for the reliability of the present technique. The second reason is that the present model works well for low lying states, as will be shown in section 3. Proceeding, therefore, with the choice of $\alpha^{2}=1/|E_{n=0}|$ in Eq. (4), the screened Coulomb problem in Eq. (6) is transformed to an anharmonic oscillator problem such that $$\hat{V}(\rho)=\left(1+\frac{A_{2}}{|E_{n=0}|}\right)\rho^{2}+
\frac{A_{3}}{2|E_{n=0}|^{3/2}}\rho^{4}+
\frac{A_{4}}{4|E_{n=0}|^{2}}\rho^{6}+
\frac{A_{5}}{8|E_{n=0}|^{5/2}}\rho^{8}+
\frac{A_{6}}{16|E_{n=0}|^{3}}\rho^{10}$$ with the eigenvalue $$\hat{E}_{n=0}=\frac{-2A_{1}}{|E_{n=0}|^{1/2}}$$
Thus, the system given by Eq. (6) in $N$-dimensional space is reduced to another system defined by Eq. (7) in $N^{\prime}=2N-2-2\lambda$ dimensional space. In other words, by changing the independent variable in the radial Schrödinger equation, we have been able to demonstrate a close equivalence between the screened Coulomb potential and anharmonic oscillator potentials.
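As an aside (not part of the original presentation), the mapping of Eqs. (6)-(8) is simple enough to tabulate numerically; the sketch below builds the truncated Yukawa coefficients and the transformed oscillator data from a chosen $\delta$ and $|E_{n=0}|$, using a row of Table II as an illustrative check.

```python
# Minimal sketch of the mapping in Eqs. (6)-(8): given the screening
# parameter delta, the coupling e^2 and the ground-state energy |E_0| of the
# truncated screened Coulomb problem, build the coefficients of the
# transformed anharmonic oscillator and its eigenvalue E-hat.
import numpy as np

def yukawa_coefficients(delta, e2=1.0):
    """A_1 ... A_6 of the truncated expansion in Eq. (6)."""
    return (-e2, e2 * delta, -e2 * delta**2 / 2, e2 * delta**3 / 6,
            -e2 * delta**4 / 24, e2 * delta**5 / 120)

def oscillator_map(delta, E0_abs, e2=1.0):
    """Coefficients of rho^2 ... rho^10 in Eq. (7) and E-hat of Eq. (8)."""
    A1, A2, A3, A4, A5, A6 = yukawa_coefficients(delta, e2)
    coeffs = (1.0 + A2 / E0_abs,
              A3 / (2.0 * E0_abs**1.5),
              A4 / (4.0 * E0_abs**2),
              A5 / (8.0 * E0_abs**2.5),
              A6 / (16.0 * E0_abs**3))
    E_hat = -2.0 * A1 / np.sqrt(E0_abs)
    return coeffs, E_hat

# Example: delta = 0.025, |E_0| = 0.475461 (the l = 0, N = 3 entry of Table I)
coeffs, E_hat = oscillator_map(0.025, 0.475461)
print(E_hat)   # about 2.9005, in line with the corresponding row of Table II
```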
For almost two decades, the study of higher order anharmonic potentials has been of interest to physicists and mathematicians for understanding a number of newly discovered phenomena such as structural phase transitions [@khare], polaron formation in solids [@emin], and the concept of the false vacuum in field theory [@coleman]. Unfortunately, not much work has been carried out on anharmonic potentials like the one defined by (7), except the works in Refs. \[32-34\]. Our investigation in $N$-dimensional space, beyond showing an explicit connection between two distinct systems involving potentials of type (6) and (7), would also contribute to the literature on the solutions of such potentials in arbitrary dimensions, given the recent wide interest in lower-dimensional field theory.
Eqs. (7) and (8) are the most important expressions in the present work. To test explicitly whether these results are reliable, one needs exact analytical solutions for the potentials of interest, which are found below within the framework of supersymmetric quantum mechanics.
Supersymmetric treatment for the ground state
---------------------------------------------
Using the formalism of SSQM [@cooper] we set the superpotential $$W(r)=\frac{a_{1}}{r}+a_{2}+a_{3}r+a_{4}r^{2},~a_{4}<0$$ for the potential in (6) and identify $V_{+}(r)$ supersymmetric partner potential with the corresponding effective potential so that $$\begin{aligned}
V_{+}(r)&=&W^{2}(r)+W^{\prime}(r)=\frac{2a_{1}a_{2}}{r}+
[a_{2}^{2}+a_{3}(2a_{1}+1)]+2(a_{1}a_{4}+a_{4}+a_{2}a_{3})r
\nonumber
\end{aligned}$$ $$\begin{aligned}
+(2a_{2}a_{4}+a_{3}^{2})r^{2}
+2a_{3}a_{4}r^{3}+a_{4}^{2}r^{4}+\frac{a_{1}(a_{1}-1)}{r^{2}}
\nonumber
\end{aligned}$$ $$\begin{aligned}
=\left(\frac{2A_{1}}{r}+2A_{2}+2A_{3}r+2A_{4}r^{2}+2A_{5}r^{3}+2A_{6}r^{4}\right)
+\frac{(M-1)(M-3)}{4r^{2}}-2E_{n=0}
\end{aligned}$$ where $n=0,1,2,...$ is the radial quantum number. The relations between the parameters in (10) satisfy the supersymmetric definitions $$a_{1}=\frac{M-1}{2},~a_{2}=\frac{2A_{1}}{M-1},~
a_{3}=-\frac{A_{5}}{\sqrt{2A_{6}}},~a_{4}=-\sqrt{2A_{6}}$$ Note that in order to retain the well-behaved solution at $r\rightarrow \infty$ we have chosen the negative sign in $a_{4}$. The potential in (6) admits the exact solutions $$\Psi_{n=0}(r)= N_{0} r^{a_{1}}
\exp\left(a_{2}r+\frac{a_{3}}{2}r^{2}+\frac{a_{4}}{3}r^{3}\right)$$ where $N_{0}$ is the normalization constant, with the physically acceptable eigenvalues $$E_{n=0}=A_{2}-\frac{1}{2}\left[\frac{4A_{1}^{2}}{(M-1)^{2}}-\frac{A_{5}}
{\sqrt{2A_{6}}}M\right]$$ under the constraints $$A_{1}=-(M-1)\frac{8A_{6}A_{4}-2A_{5}^{2}}{16A_{6}\sqrt{2A_{6}}}~~,~~
A_{3}=-\sqrt{2A_{6}}\left[\frac{(M+1)}{2}+\frac{A_{1}}{(M-1)}\frac{A_{5}}{A_{6}}\right]~,$$ from which one arrives at the uniquely determined values of $M\approx 5$ and $\delta\approx 0.28$ when atomic units are used in (6). The results obtained here are a generalization of the work in Ref. [@znojil].
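As a quick numerical check (our own illustration, not part of the original derivation), the two constraints of Eq. (14) can be solved simultaneously for $(M,\delta)$ with $e^{2}=1$, which reproduces values close to those quoted above.

```python
# Sketch: solve the supersymmetric constraints of Eq. (14) for (M, delta)
# with e^2 = 1, using the truncated Yukawa coefficients of Eq. (6).
import numpy as np
from scipy.optimize import fsolve

def constraints(p, e2=1.0):
    M, delta = p
    A1 = -e2
    A3 = -e2 * delta**2 / 2
    A4 = e2 * delta**3 / 6
    A5 = -e2 * delta**4 / 24
    A6 = e2 * delta**5 / 120
    c1 = A1 + (M - 1) * (8 * A6 * A4 - 2 * A5**2) / (16 * A6 * np.sqrt(2 * A6))
    c2 = A3 + np.sqrt(2 * A6) * ((M + 1) / 2 + A1 / (M - 1) * A5 / A6)
    return (c1, c2)

M, delta = fsolve(constraints, x0=(5.0, 0.3))
print(M, delta)   # roughly consistent with the quoted M ~ 5 and delta ~ 0.28
```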
For the anharmonic oscillator potential in (7), we set the corresponding superpotential $$W(\rho)=a\rho^{5}+b\rho^{3}+\frac{c}{\rho}+d\rho,~a<0,~d<0$$ which leads to $$\Psi_{n=0}(\rho)= C_{0} \rho^{c}
\exp\left(\frac{a}{6}\rho^{6}+\frac{b}{4}\rho^{4}+\frac{d}{2}\rho^{2}\right)$$ with $C_{0}$ being the corresponding normalization constant. Note that $\Psi(\rho)$ satisfies a differential equation analogous to Eq. (2) and is related to $F(\rho)$ in Eq. (3) as $\Psi(\rho)=\rho^{(N'-1)/2}~F(\rho)$, in analogy with the relation between $R(r)$ and $\Psi(r)$. Identifying $V_{+}(\rho)$ with the effective potential, we arrive at an expression for the physically meaningful ground state eigenvalues of the anharmonic oscillator potential in arbitrary dimensions, $$\hat{E}_{n=0}=-\frac{d}{2}(2c+1)=\frac{8A_{6}A_{4}-2A_{5}^{2}}{16A_{6}\sqrt{2A_{6}}}
\frac{M'}{|E_{n=0}|^{1/2}}$$ where $M'=N'+2L$, and the relations between the potential parameters satisfy the supersymmetric constraints $$a=\pm\sqrt{\frac{A_{6}}{8}}\frac{1}{|E_{n=0}|^{3/2}},
b=\frac{A_{5}}{8a}\frac{1}{|E_{n=0}|^{5/2}}, c=\frac{M'-1}{2},
d=\frac{1}{2a}\left(\frac{A_{4}}{2|E_{n=0}|^{2}}-b^{2}\right)$$
As we are dealing with a confined particle system, negative values for $a$ and $d$ are of course the right choice to ensure that the wave function is well behaved at infinity. Our results are in agreement with the existing literature for three dimensions [@kaushal1; @kaushal2] (the case $N'=3$) and for two dimensions [@dong] (the case $N'=2$, $L\rightarrow L-1/2$).
In sum, we have shown that SSQM yields exact solutions for a single state only for the underlying quasi-exactly solvable potentials in higher dimensions, with some constraints on the coupling constants. These constraints differ for each eigenvalue, and hence the various solutions do not correspond to the same potential and are not orthogonal. We have not found these solutions discussed in the literature.
Significance of the mapping parameter
-------------------------------------
To show explicitly the physics behind the transformation described above, and to make clear the significance of the mapping parameter $\lambda$, we consider Eqs. (13) and (17) together within the same frame and arrive at $$\hat{E}_{n=0}=-\frac{M'}{M-1}\frac{A_{1}}{|E_{n=0}|^{1/2}}$$
To be consistent with Eq. (8) we must require $\lambda$ to be an integer with $0\leq\lambda\leq1$, such that $$\frac{M'}{M-1}=\frac{2(N-1-\lambda)+2(2\ell+\lambda)}{N+2\ell-1}=2$$
It is a general feature of this map that, in both cases ($\lambda=0,1$), the spectrum of the $N$-dimensional screened Coulomb problem is related to half the spectrum of the $N'$-dimensional anharmonic oscillator for any even integer $N'$. The reader is referred to [@kostelecky1] for a comprehensive discussion of similar conformal mappings in the language of physics.
It is worthwhile at this stage to note that Chaudhuri and Mondal [@chaudhuri] recently studied the relations between anharmonic oscillators and perturbed Coulomb potentials in higher dimensions, but their results correspond only to the case $\lambda=1$; in that case the three-dimensional perturbed Coulomb problem and the four-dimensional anharmonic oscillator cannot be related. However, by introducing an extra degree of freedom for the mapping parameter ($\lambda=0$) through our equations, we can reproduce the well-known results found in the literature in three dimensions. With this exact correspondence we can check Eq. (8), using exact results for the screened Coulomb potential and numerically calculated results for the anharmonic oscillator potential.
Results and Discussion
======================
In this section numerical applications of the transformation presented in the previous section are given. Calculations to check the validity of the equations developed for the screened Coulomb and anharmonic potentials are also given here. Table I displays the exact eigenvalues of the screened Coulomb potential in three and five dimensions obtained using the Lagrange-mesh calculation technique [@baye; @hesse] for selected values of the potential parameters. The highly accurate Lagrange-mesh results agree well with the best existing numerical and theoretical values obtained in three dimensions [@dutt; @lam]. Due to the constraint on the potential parameter $A_{1}$ expressed in Eq. (14), we are not able to show in the same table the corresponding exact energy values which can be calculated by Eq. (13). For the work of interest in this paper we set $A_{1}=-1$; consequently, the $\delta$ values satisfying the condition in Eq. (14) fall outside the scope of the present work, which has been carried out only for low screening parameters.
Further, our calculation results shown in Table I make clear that the eigenvalues of the five-dimensional screened Coulomb problem with any angular momentum quantum number $\ell$, for a particular $\delta$ value, are equal to those of the same system with $\ell+1$ in three dimensions, because $M=N+2\ell$ remains unaltered for these states. The tabulated results support the work of Imbo and Sukhatme [@imbo], in which they formulated SSQM for spherically symmetric potentials in $N$ spatial dimensions and showed that the supersymmetric partner of a given potential can be effectively treated as being in $N+2$ dimensions ($\ell\rightarrow\ell+1$). This fact was exploited in their calculations using the shifted $1/N$ expansion.
It is also noted that for very small values of the screening parameter, the screened Coulomb potential reduces to the Coulomb potential, which is shape invariant and has a supersymmetric character. Therefore, the related supersymmetric partner potentials, such as $V_{\ell}$ and $V_{\ell+1}$, are expected to have the same spectra except for the ground-state energy. This can easily be seen in Table I for the case of $\delta=0.001$ in both dimensions considered. For instance, the supersymmetric partner of the $s$-orbital ($\ell=0$) spectrum of hydrogen is the $p$-orbital ($\ell=1$) spectrum of the same system.
Finally, the eigenvalues for the anharmonic oscillator in four dimensions, calculated exactly by the use of Eq. (8) from the known exact results for the screened Coulomb problem in three dimensions, are displayed in Table II. These eigenvalues are checked by the Lagrange-mesh calculations and tabulated in the same table. For low lying states, the results obtained with the present technique agree well with the numerical calculations, but this agreement deteriorates quickly for higher lying states.
Concluding Remarks
==================
The mapping problems in arbitrary dimensions have been the subject of several papers and have served to illustrate various aspects of quantum mechanics of considerable pedagogical value. As the objective of this presentation we have highlighted a different facet of these studies and established a very general connection between the screened Coulomb and anharmonic oscillator potentials in higher dimensional space through the application of a suitable transformation, the purpose being to emphasize the pedagogical value residing in this interrelationship between two of the most practical applications of quantum mechanics. The exact ground state solutions for the potentials considered are obtained analytically within the framework of supersymmetric quantum mechanics, which provides a testing ground for benchmark calculations. Although much work has been done in the literature on similar problems, an investigation such as the one discussed in this paper was, to our knowledge, missing.
[**Acknowledgments**]{}
We are greatly indebted to Daniel Baye for his kindness in providing us with the modified version of the computer code used to perform the Lagrange-mesh calculations. We are also grateful to the referees for suggesting several amendments.
[99]{}
V. Singh, S. N. Biswas, K. Dutta, Phys. Rev. D [**18**]{},1901 (1978) ; G. P. Flessas, Phys Lett. A [**72**]{},289 (1979) ; M. Razavy, Am. J. Phys. [**48**]{},285 (1980) ; A. Khare, Phys. Lett. A [**83**]{},237 (1981).
A. V. Turbiner, A. G. Ushveridze, Phys. Lett. A [**126**]{},181 (1987) ; A. Turbiner, Sov. Phys. JETP [**67**]{},230 (1988) ; A. Turbiner, Commun. Math. Phys. [**118**]{},467 (1988) ; M. A. Shifman, Int. Jour. Mod. Phys. A [**4**]{},2897 (1989) ; P. Roy, Y. P. Varshni, Mod. Phys. Lett. A [**6**]{},1257 (1991) ; A. V. Turbiner, Phys. Lett B [**276**]{},95 (1992) ; M. Taler and A. V. Turbiner, J. Phys. A [**26**]{},697 (1993) ; A. Gangopadhyaya et.al., Phys. Lett. A [**208**]{},261 (1995) ; V. V. Ulyanov et al., Fiz. Nizk. Temp. [**23**]{},110 (1997).
A. De Souza Dutra, Phys. Rev. A [**47**]{},R2435 (1993) ; N. Nag, R. Roychoudhury, Y. P. Varshni, Phys. Rev. A [**49**]{},5098 (1994) ; R. Dutt et al., J. Phys. A [**28**]{},L107 (1995) ; G. Junker and P. Roy, Phys. Lett A [**232**]{},155 (1997).
G. A. Natanzon, Theor. Math. Phys. [**38**]{},146 (1979).
A. Gangopadhyaya, P. K. Panigrahi and U. P. Sukhatme, Helv. Phys. Acta [**67**]{},363 (1994).
G. Junker, J. Phys. A [**23**]{}, L881 (1990); R. De, R. Dutt and U. Sukhatme, J. Phys. A [**25**]{}, L843 (1992).
V. A. Kostelecky, M. M. Nieto, and D. R. Truax, Phys. Rev. D [**32**]{},2627 (1985); V. A. Kostelecky and N. Russell, J. Math. Phys. [**37**]{},2166 (1996).
R. Haymaker and A. R. P. Rau, Am. J. Phys. [**54**]{},928 (1986).
A. Krajewska, A. Ushveridze and Z. Walczak, Mod. Phys. Lett. A [**12**]{},1225 (1997).
R. N. Chaudhuri and M. Mondal, Phys Rev. A [**52**]{},1850 (1995).
D. A. Morales and Z. Parra-Mejias, Can. J. Phys. [**77**]{},863 (1999).
B. Gönül, O. Özer, M. Koçak, D. Tutcu and Y. Cançelik, J. Phys. A [**34**]{},8271 (2001).
H. A. Mavromatis, J. Math. Phys. [**39**]{},2592 (1998).
C. Itzykson, and J. B. Zuber, [*Quantum Field Theory*]{} (McGraw-Hill, New York, 1980).
C. Reid, J. Mol. Spectrosc. [**36**]{},183 (1970).
C. Kittel, [*Introduction to Solid State Physics*]{} (Wiley, New York, 1986).
R.K. Pathria, [*Statistical Mechanics*]{} (Pergamon, Oxford, 1986).
D. Baye, J. Phys. B [**28**]{},4399 (1995).
M. Hesse and D. Baye, J. Phys. B [**32**]{},5605 (1999).
F. Cooper, A. Khare and U. Sukhatme, Phys. Rep. [**251**]{},267 (1995).
E. Schrödinger, Proc. R. Irish Acad. Sec. A [**46**]{},183 (1941); D. Bergmann and Y. Frishman, J. Math. Phys. [**6**]{},1855 (1965); S. Chandrasekhar, Newton’s Principia for the common reader, Clarendon Press, Oxford, 1995; V. I. Arnol’d and V. A. Vasil’ev, Not. Am. Math. Soc. [**36**]{},1148 (1989); A. K. Grant and J. L. Rosner, Am. J. Phys. [**62**]{},310 (1994).
C. Weisbuch, B. Vinter, [*Quantum Semiconductor Heterostructures* ]{} (Academic Press, New York, 1993); P. Harrison, [*Quantum Wells, Wires and Dots* ]{} (John Wiley and Sons, England, 2000).
R. Dutt, A. Ray and P.P. Ray, Phys. Lett. A [**83**]{},65 (1981).
C.S. Lam and Y.P. Varshni, Phys. Rev. A [**4**]{},1875 (1971).
M. Znojil, J. Math. Chem. [**26**]{},157 (1999).
M. Alberg, L. Wilets, Phys. Lett. A [**286**]{},7 (2001).
D. J. Doren and D. R. Herschbach, Phys. Rev. A [ **34**]{},2665 (1986).
A. Chatterjee, Phys. Rep. [**186**]{},250 (1990).
A.Khare and S. N. Behra, Pramana J. Phys. [**14**]{},327 (1980).
D. Emin and T. Holstein, Phys. Rev. Lett. [**36**]{},323 (1976); Phys. Today [**35**]{},34 (1982).
S. Coleman, [*Aspects of Symmetry*]{}, selected Erice Lectures (Cambridge Univ. Press, Cambridge, 1988).
R. S. Kaushal, Ann. Phys. (N.Y) [**206**]{},90 (1991).
R. S. Kaushal and D. Parashar, Phys. Lett. A [**170**]{},335 (1992).
S. Dong and Z. Ma, quant-ph/9901037.
T. D. Imbo and U. P. Sukhatme, Phys. Rev. Lett. [**54**]{},2184 (1985).
[cccccc]{}\
\
\
$\delta$&$\ell$&n=0&n=1&n=2&n=3\
0.001&0&-0.499 000&-0.124 003&-0.054 562&-0.030 262\
&1&-0.124 002&-0.054 561&-0.030 261&-0.019 018\
&2&-0.054 561&-0.030 260&-0.019 017&-0.012 914\
0.005&0&-0.495 019&-0.120 074&-0.050 720&-0.026 537\
&1&-0.120 062&-0.050 708&-0.026 526&-0.015 428\
&2&-0.050 684&-0.026 503&-0.015 406&-0.009 474\
0.010&0&-0.490 075&-0.115 293&-0.046 199&-0.022 356\
&1&-0.115 245&-0.046 153&-0.022 313&-0.011 622\
&2&-0.046 061&-0.022 228&-0.011 543&-0.006 070\
0.020&0&-0.480 296&-0.106 148&-0.038 020&-0.015 377\
&1&-0.105 963&-0.037 852&-0.015 232&-0.005 891\
&2&-0.037 515&-0.014 939&-0.005 653&-0.001 521\
0.025&0&-0.475 461&-0.101 776&-0.034 329&-0.012 495\
&1&-0.101 492&-0.034 079&-0.012 287&-0.003 770\
&2&-0.033 573&-0.011 865&-0.003 458& 0.000 253\
\
\
\
\
0.001&0&-0.124 002&-0.054 561&-0.030 261&-0.019 018\
&1&-0.054 561&-0.030 260&-0.019 017&-0.012 914\
&2&-0.030 259&-0.019 016&-0.012 912&-0.009 237\
0.005&0&-0.120 062&-0.050 708&-0.026 526&-0.015 428\
&1&-0.050 684&-0.026 503&-0.015 406&-0.009 474\
&2&-0.026 468&-0.015 373&-0.009 443&-0.005 961\
0.010&0&-0.115 245&-0.046 153&-0.022 313&-0.011 622\
&1&-0.046 061&-0.022 228&-0.011 543&-0.006 070\
&2&-0.022 099&-0.011 425&-0.005 965&-0.002 980\
0.020&0&-0.105 963&-0.037 852&-0.015 232&-0.005 891\
&1&-0.037 515&-0.014 939&-0.005 653&-0.001 521\
&2&-0.014 491&-0.005 286&-0.001 263& 0.000 885\
0.025&0&-0.101 492&-0.034 079&-0.012 287&-0.003 770\
&1&-0.033 573&-0.011 865&-0.003 458& 0.000 253\
&2&-0.011 216&-0.002 974& 0.000 524& 0.003 087\
[cccccc]{}\
\
\
$\delta$&$\ell$&L&$|E_{n=0}|$&$\hat{E}_{n=0}$&$\hat{E}_{n=0}$\
&&&(taken from Table I)&Lagrange-mesh calculations&Exact value (Eq. 8)\
\
0.001&0&0&0.499 000&2.831 259&2.831 259\
&1&2&0.124 002&5.679 579&5.679 573\
&2&4&0.054 561&8.562 285&8.562 268\
0.005&0&0&0.495 019&2.842 624&2.842 622\
&1&2&0.120 062&5.772 014&5.772 012\
&2&4&0.050 684&8.883 704&8.883 714\
0.010&0&0&0.490 075&2.856 927&2.856 924\
&1&2&0.115 245&5.891 401&5.891 406\
&2&4&0.046 061&9.318 882&9.318 871\
0.020&0&0&0.480 296&2.885 862&2.885 862\
&1&2&0.105 963&6.144 014&6.144 024\
&2&4&0.037 515&10.325 883&10.325 891\
0.025&0&0&0.475 461&2.900 499&2.900 498\
&1&2&0.101 492&6.277 884&6.277 896\
&2&4&0.033 573&10.915 282&10.915 281\
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The motion of a collisionless plasma is described by the Vlasov-Poisson system, or in the presence of large velocities, the relativistic Vlasov-Poisson system. Both systems are considered in one space and one momentum dimension, with two species of oppositely charged particles. A new identity is derived for both systems and is used to study the behavior of solutions for large times.'
author:
- |
Robert Glassey\
Stephen Pankavich\
Jack Schaeffer
title: Decay in Time for a One Dimensional Two Component Plasma
---
*Mathematics Subject Classification: 35L60, 35Q99, 82C21, 82C22, 82D10.*
Introduction
============
Consider the Vlasov-Poisson system (which we shall abbreviate as “VP”):
$$\label{VP}
\left \{ \begin{gathered}
\partial_t f + v \ \partial_x f + E(t,x) \ \partial_v f = 0,\\
\partial_t g + \frac{v}{m} \ \partial_x g - E(t,x) \ \partial_v g = 0,\\
\rho(t,x) = \int \left ( f(t,x,v) - g(t,x,v) \right ) \ dv, \\
E(t,x) = \frac{1}{2} \left ( \int_{-\infty}^x \rho(t,y) \ dy -
\int_x^\infty \rho(t,y) \ dy \right ).
\end{gathered} \right.$$
Here $t \geq 0$ is time, $x \in \mathbb{R}$ is position, $v \in
\mathbb{R}$ is momentum, $f$ is the number density in phase space of particles with mass one and positive unit charge, and $g$ is the number density of particles with mass $m > 0$ and negative unit charge. The effect of collisions is neglected. The initial conditions $$f(0,x,v) = f_0(x,v) \geq 0$$ and $$g(0,x,v) = g_0(x,v)
\geq 0$$ for $(x,v) \in \mathbb{R}^2$ are given where it is assumed that $f_0,g_0 \in C^1(\mathbb{R}^2)$ are nonnegative, compactly supported, and satisfy the neutrality condition $$\label{neutrality} \iint f_0 \ dv dx = \iint g_0 \ dv\, dx.$$ Using the notation $$\hat{v}_m = \frac{v}{\sqrt{m^2 + v^2}},$$ the relativistic Vlasov-Poisson system (abbreviated “RVP”) is $$\label{RVP} \left \{ \begin{gathered}
\partial_t f + \hat{v}_1 \ \partial_x f + E \ \partial_v f = 0,\\
\partial_t g + \hat{v}_m \ \partial_x g - E \ \partial_v g = 0,\\
\rho(t,x) = \int \left ( f - g \right ) \ dv, \\
E(t,x) = \frac{1}{2} \left ( \int_{-\infty}^x \rho \ dy -
\int_x^\infty \rho \ dy \right ).
\end{gathered} \right.$$
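For orientation (our illustration, not part of the original text), the field formula above is simply half the difference between the charge to the left and to the right of $x$; a short numerical sketch of this reconstruction on a grid is given below.

```python
# Minimal sketch of the one-dimensional field formula: given rho sampled on
# a grid, E(x) = 1/2 ( int_{-inf}^x rho dy - int_x^inf rho dy ).
import numpy as np

def field_from_rho(x, rho):
    """Trapezoid-rule evaluation of E(x) on the grid points x."""
    cell = 0.5 * (rho[1:] + rho[:-1]) * np.diff(x)    # charge in each cell
    left = np.concatenate(([0.0], np.cumsum(cell)))   # charge to the left of x_i
    total = left[-1]
    return left - 0.5 * total                         # = 1/2 (left - right)

# Neutral test profile: a positive and a negative Gaussian bump.
x = np.linspace(-10.0, 10.0, 2001)
rho = np.exp(-(x - 2.0) ** 2) - np.exp(-(x + 2.0) ** 2)
E = field_from_rho(x, rho)
print(E[0], E[-1])   # both ~ 0 because the total charge vanishes
```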
It is well known that solutions of (\[VP\]) and (\[RVP\]) remain smooth for all $t \geq 0$ with $f(t,\cdot, \cdot)$ and $g(t,\cdot,\cdot)$ compactly supported for all $t \geq 0$. In fact, this is known for the three-dimensional version of (\[VP\]) ([@LP],[@Pfa]), but not for the three-dimensional version of (\[RVP\]). The literature regarding large time behavior of solutions is quite limited. Some time decay is known for the three-dimensional analogue of (\[VP\])([@GS], [@IR], [@Per]). Also, there are time decay results for (\[VP\]) (in dimension one) when the plasma is monocharged (set $g \equiv 0$) ([@BKR], [@BFFM], [@Sch]). In the work that follows, two species of particles with opposite charge are considered, thus the methods used in [@BKR], [@BFFM], and [@Sch] do not apply. References [@DD], [@Dol], and [@DR] are also mentioned since they deal with time-dependent rescalings and time decay for other kinetic equations.\
In the next section an identity is derived for (\[VP\]) which shows that certain positive quantities are integrable in $t$ on the interval $[0,\infty)$. The identity is modified to address (\[RVP\]) also, but the results are weaker. These identities seem to be linked to the one-dimensional situation and do not readily generalize to higher dimensions. Additionally, it is not clear if there is an extension which allows for more than two species of particles. However, since this model allows for attractive forces between ions of differing species, it is sensible to expect that additional species of ions will only strengthen repulsive forces and cause solutions to decay faster in time. In Section $3$, the $L^4$ integrability of both the positive and negative charge densities is derived and used to show time decay of the local charge. Finally, in Section $4$, the main identity and the $L^4$ integrability will be used to show decay in time of the electric field for both (\[VP\]) and (\[RVP\]).
The Identity
============
The basic identities for (\[VP\]) and (\[RVP\]) will be derived in this section. The following theorem lists their main consequences :
\[identity\] Assume that $f_0$ and $g_0$ are nonnegative, compactly supported, $C^1$, and satisfy (\[neutrality\]). Then, for a solution of (\[VP\]), there exists $C > 0$ depending only on $f_0,g_0$, and $m$ such that $$\int_0^\infty \iiint f(t,x,w)
f(t,x,v) (w-v)^2 \ dw\, dv\, dx\, dt \leq C,$$ $$\int_0^\infty \iiint
g(t,x,w) g(t,x,v) (w-v)^2 \ dw\, dv\, dx\, dt \leq C,$$ and $$\int_0^\infty \int E^2 \int (f + g) \ dv\, dx\,
dt \leq C.$$ For a solution of (\[RVP\]) there is $C > 0$ depending only on $f_0,g_0$, and $m$ such that $$\int_0^\infty
\iiint f(t,x,w) f(t,x,v) (w-v)(\hat{w}_1 - \hat{v}_1) \ dw\, dv\,
dx\, dt \leq C,$$ $$\int_0^\infty \iiint g(t,x,w) g(t,x,v) (w-v)(\hat{w}_m - \hat{v}_m) \ dw\, dv\, dx\, dt \leq C,$$ and $$\int_0^\infty \int E^2 \int (f + g) \ dv\, dx\,
dt \leq C.$$ Moreover, $(w-v)(\hat{w}_m - \hat{v}_m) \geq 0$ for all $w,v \in \mathbb{R}$, $m > 0$.
[**Proof:**]{} Suppose $$\label{2.1}
\partial_t a + \omega(v) \partial_xa + B(t,x) \partial_v a = 0$$ where $a(t,x,v), \omega(v), B(t,x)$ are $C^1$, and $a(t,\cdot,\cdot)$ is compactly supported for each $t \geq 0$. Let $$A(t,x) = \int a \ dv$$ and $${\mathcal{A}}(t,x) =
\int_{-\infty}^x A(t,y) \ dy.$$ Note that $\partial_x {\mathcal{A}}= A$ and $$\begin{aligned}
\partial_t {\mathcal{A}}& = &
-\int_{-\infty}^x \int \left ( \omega(v) \partial_y a(t,y,v) +
B(t,y) \partial_v a(t,y,v) \right ) \ dy\,dv \\
& = & - \int \omega(v) a(t,x,v) \ dv.\end{aligned}$$ By (\[2.1\]) it follows that $$\label{2.2}
\begin{split} 0 & = {\mathcal{A}}(t,x) \int v \left ( \partial_t a +
\omega(v) \partial_x a + B(t,x) \partial_v a \right ) \ dv \\
& =: I + II + III.
\end{split}$$ Then, $$\label{2.3}
\begin{split}
I & = \partial_t\left ({\mathcal{A}}\int a v \ dv \right ) -
(\partial_t {\mathcal{A}}) \int a v \ dv \\
& = \partial_t \left ({\mathcal{A}}\int a v \ dv \right ) + \left (\int a v \
dv \right) \left (\int a \omega(v) \ dv \right ),
\end{split}$$ $$\label{2.4}
II = \partial_x \left ( {\mathcal{A}}\int a v \omega(v) \ dv \right ) - A
\int a v \omega(v) \ dv,$$ and $$\label{2.5}
\begin{split}
III &= -{\mathcal{A}}B \int a \ dv \\
& = -{\mathcal{A}}B A \\
& = -B \partial_x(\frac{1}{2} {\mathcal{A}}^2) \\
& = -\partial_x \left (\frac{1}{2} {\mathcal{A}}^2 B \right ) + \frac{1}{2}{\mathcal{A}}^2
\partial_x B.
\end{split}$$ Using (\[2.3\]), (\[2.4\]), and (\[2.5\]) in (\[2.2\]) we get
$$\label{2.6}
\begin{split}
0 &= \partial_t \left ( {\mathcal{A}}\int a v \ dv \right) +
\partial_x \left ( {\mathcal{A}}\int a v \omega(v) \ dv -
\frac{1}{2} {\mathcal{A}}^2 B \right ) \\
& \quad + \left ( \int a v \ dv \right ) \left (\int a \omega(v) \
dv \right ) - A \int a v \omega(v) \ dv + \frac{1}{2} {\mathcal{A}}^2
\partial_x B.
\end{split}$$
Next consider (\[VP\]) and let $$F(t,x) := \int f \ dv, \quad
G(t,x) := \int g \ dv,$$ and $${\mathcal{F}}(t,x) := \int_{-\infty}^x F(t,y) \
dy, \quad {\mathcal{G}}:= \int_{-\infty}^x G(t,y) \ dy.$$ Applying (\[2.6\]) twice, once with $a = f$, $\omega(v) = v$ and $B = E$, and once with $a = g$, $\omega(v) = \frac{v}{m}$, and $B = -E$, and adding the results we find $$\label{2.7}
\begin{split}
0 & = \partial_t \left ( {\mathcal{F}}\int f v \ dv + {\mathcal{G}}\int g v \ dv
\right ) + \partial_x \left ( {\mathcal{F}}\int f v^2 \ dv + m^{-1} {\mathcal{G}}\int
g v^2 \ dv \right ) \\
& \quad - \partial_x \left ( \frac{1}{2} {\mathcal{F}}^2 E - \frac{1}{2} {\mathcal{G}}^2
E \right ) + \left ( \int f v \ dv \right)^2 + m^{-1} \left (\int
g v \ dv \right )^2 \\
& \quad - F\int f v^2 \ dv - m^{-1}G \int g v^2 \ dv +
\frac{1}{2}{\mathcal{F}}^2\rho - \frac{1}{2}{\mathcal{G}}^2\rho.
\end{split}$$ It follows directly from (\[VP\]) and (\[neutrality\]) that $$\int \rho(t,x) \ dx = \int \rho(0,x) \ dx = 0$$ and hence that $E \rightarrow 0$ as $\vert x \vert \rightarrow \infty$. In addition, $$E = {\mathcal{F}}- {\mathcal{G}}.$$ Hence $$\begin{aligned}
\int ({\mathcal{F}}^2 - {\mathcal{G}}^2) \rho \ dx & = & \int ({\mathcal{F}}+ {\mathcal{G}}) E \ \partial_x E
\ dx \\
& = & -\frac{1}{2} \int \partial_x({\mathcal{F}}+ {\mathcal{G}}) E^2 \ dx \\
& = & -\frac{1}{2} \int(F + G) E^2 \ dx.\end{aligned}$$ Integration of (\[2.7\]) in $x$ yields $$\label{2.8}
\begin{split}
0 & = \frac{d}{dt} \left ( \int {\mathcal{F}}\int f v \ dv\,dx + \int {\mathcal{G}}\int
g v \ dv\,dx \right ) \\
& \quad + \int \left [ \left ( \int f v \ dv \right)^2 - F\int f
v^2 \ dv + m^{-1} \left (\left (\int g v \ dv \right )^2 - G \int
g v^2 \ dv \right ) \right ] \ dx \\
& \quad - \frac{1}{4}\int (F+G) E^2 \ dx.
\end{split}$$ Notice that exchanging $w$ and $v$ we can write $$\begin{aligned}
-\left (\int f v \ dv \right)^2 + F\int f v^2 \ dv & = & \left (
\int f(t,x,w) \ dw \right ) \left ( \int f(t,x,v) v^2 \ dv \right
) \\
& & \quad - \left ( \int f(t,x,w) w \ dw \right ) \left (\int
f(t,x,v) v \ dv \right ) \\
& = & \iint f(t,x,w) f(t,x,v) \left ( \frac{1}{2}w^2 +
\frac{1}{2}v^2 - wv \right ) \ dw\,dv \\
& = & \frac{1}{2} \iint f(t,x,w) f(t,x,v) (w - v)^2 \ dw\, dv\end{aligned}$$ and similarly for $g$. Thus, (\[2.8\]) yields $$\label{2.9}
\begin{split}
\frac{d}{dt} \left ( \int {\mathcal{F}}\int f v \ dv\,dx + \int {\mathcal{G}}\int g v \
dv\,dx \right ) & = \frac{1}{2} \iiint f(t,x,w)
f(t,x,v) (w - v)^2 \ dw\, dv\, dx \\
& \quad + \frac{1}{2} m^{-1} \iiint g(t,x,w) g(t,x,v) (w - v)^2 \
dw\, dv\, dx \\
& \quad + \frac{1}{4} \int (F + G) E^2 \ dx \\
& \geq 0.
\end{split}$$ Consider the energy $$\iint (f + m^{-1}g) v^2 \ dv\,dx + \int E^2 \
dx.$$ Note that due to (\[neutrality\]), $E(t,\cdot)$ is compactly supported and $\int E^2 \ dx$ is finite (this would fail without (\[neutrality\])). It is standard to show that the energy is constant in $t$. Similarly $\iint f \ dv\, dx = \iint g \ dv\,
dx$ is constant and $f, g \geq 0$. Hence, $$\begin{aligned}
\left \vert \int {\mathcal{F}}\int f v \ dv\, dx \right \vert & \leq &
C\iint f \vert v \vert \ dv\, dx \\
& \leq & C \left ( \iint f \ dv\, dx \right )^\frac{1}{2} \ \left
( \iint f v^2 \ dv\, dx \right )^\frac{1}{2} \\
& \leq & C\end{aligned}$$ and similarly for $g$. Now it follows from (\[2.9\]) that $$\label{2.10} \int_0^\infty \iiint \left ( f(t,x,w) f(t,x,v) +
g(t,x,w) g(t,x,v) \right ) (w - v)^2 \ dw\,dv\,dx\,dt \leq C$$ and $$\label{2.11} \int_0^\infty \int (F + G) E^2 \ dx \,dt \leq C.$$
Next consider (\[RVP\]). Applying (\[2.6\]) twice, once with $a
= f$, $\omega(v) = \hat{v}_1$, $B = E$ and once with $a = g$, $\omega(v) = \hat{v}_m$, $B = -E$, and adding the results we get $$\begin{aligned}
0 & = & \partial_t \left ( {\mathcal{F}}\int f v \ dv + {\mathcal{G}}\int g v \ dv
\right ) + \partial_x \left ( {\mathcal{F}}\int f v \hat{v}_1 \ dv + {\mathcal{G}}\int g v \hat{v}_m \ dv \right ) \\
& & \quad - \partial_x \left ( \frac{1}{2} {\mathcal{F}}^2 E - \frac{1}{2} {\mathcal{G}}^2
E \right ) + \left ( \int f v \ dv \right) \left (\int f \hat{v}_1 \
dv \right ) + \left (\int g v \ dv \right ) \left ( \int g \hat{v}_m \ dv \right ) \\
& & \quad - F\int f v \hat{v}_1 \ dv - G \int g v \hat{v}_m \ dv +
\frac{1}{2}{\mathcal{F}}^2\rho - \frac{1}{2}{\mathcal{G}}^2\rho.\end{aligned}$$ Proceeding as before we obtain the result $$\label{2.12}
\begin{split}
0 & = \frac{d}{dt} \left ( \int {\mathcal{F}}\int f v \
dv\,dx + \int {\mathcal{G}}\int g v \ dv\,dx \right ) \\
& \quad + \int \left [ \left (\int f v \ dv \right ) \left (\int f
\hat{v}_1 \ dv \right) - F\int f v \hat{v}_1 \ dv \right. \\
& \quad + \left. \left (\int g v \ dv \right ) \left (\int g
\hat{v}_m \ dv \right) - G \int g v
\hat{v}_m \ dv \right ] \ dx \\
& \quad - \frac{1}{4} \int (F + G) E^2 \ dx.
\end{split}$$ Note that $$\begin{gathered}
-\left (\int f v \ dv \right ) \left (\int f \hat{v}_1 \ dv
\right) + F\int f v \hat{v}_1 \ dv \\
= \left (\int f(t,x,w) \ dw \right) \left (\int f(t,x,v) v
\hat{v}_1 \ dv \right ) - \left (\int f(t,x,w) w \ dw \right)
\left (\int f(t,x,v) \hat{v}_1 \ dv \right ) \\
= \frac{1}{2} \iint f(t,x,w) f(t,x,v) (v \hat{v}_1 + w
\hat{w}_1 - w \hat{v}_1 - v\hat{w}_1) \ dw\,dv \\
= \frac{1}{2} \iint f(t,x,w) f(t,x,v) (w - v) (\hat{w}_1 -
\hat{v}_1) \ dw\,dv.
\end{gathered}$$ By the mean value theorem for any $w$ and $v$, there is $\xi$ between them such that $$\hat{w}_1 - \hat{v}_1 = (1 +
\xi^2)^{-\frac{3}{2}} (w - v)$$ and hence $$\label{2.13} (w - v)(\hat{w}_1 - \hat{v}_1) = (1 +
\xi^2)^{-\frac{3}{2}} (w - v)^2 \geq 0.$$ Similar results hold for $g$. For solutions of (\[RVP\]) $$\iint
\left ( f \sqrt{1 + \vert v \vert^2} + g\sqrt{m^2 + \vert v \vert^2}
\right ) \ dv\, dx + \frac{1}{2} \int E^2 \ dx = const.$$ and mass is conserved so $$\begin{aligned}
\left \vert \int {\mathcal{F}}\int f v \ dv\,dx + \int {\mathcal{G}}\int g v \ dv\,dx
\right \vert & \leq & C \iint f \vert v \vert \ dv\,dx + C \iint g
\vert v \vert \ dv\,dx \\
& \leq & C.\end{aligned}$$ Hence it follows by integrating (\[2.12\]) in $t$ that $$\int_0^\infty \iiint f(t,x,w) f(t,x,v) (w - v)(\hat{w}_1
- \hat{v}_1) \ dw\,dv\,dx\,dt \leq C,$$ $$\int_0^\infty \iiint
g(t,x,w) g(t,x,v)(w - v)(\hat{w}_m - \hat{v}_m) \ dw\,dv\,dx\,dt
\leq C,$$ and $$\int_0^\infty \int (F + G)E^2 \ dx\,dt \leq C.$$ Theorem \[identity\] now follows.
$\square$
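The symmetrization identity used at the beginning of the preceding proof, $\left(\int f\,dw\right)\left(\int f v^2\,dv\right) - \left(\int f v\,dv\right)^2 = \frac{1}{2}\iint f(w)f(v)(w-v)^2\,dw\,dv$, can be checked numerically on a discretized profile; the following Python snippet is an illustration only (the grid and the profile $f$ are arbitrary choices, not taken from the text).

```python
import numpy as np

# Numerical illustration (not part of the proof) of the symmetrization identity
#   (int f dw)(int f v^2 dv) - (int f v dv)^2
#     = (1/2) * iint f(w) f(v) (w - v)^2 dw dv
# for a discretized, nonnegative profile f(v), using plain Riemann sums.
rng = np.random.default_rng(0)
v = np.linspace(-5.0, 5.0, 401)
dv = v[1] - v[0]
f = rng.random(v.size) * np.exp(-v**2)      # arbitrary nonnegative profile

F0 = np.sum(f) * dv                          # int f dv
m1 = np.sum(f * v) * dv                      # int f v dv
m2 = np.sum(f * v**2) * dv                   # int f v^2 dv
lhs = F0 * m2 - m1**2

diff2 = (v[:, None] - v[None, :])**2         # (w - v)^2 on the grid
rhs = 0.5 * np.sum(np.outer(f, f) * diff2) * dv**2

print(lhs, rhs)                              # agree up to round-off
```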
Decay Estimates
===============
In this section we will derive some consequences of the identity from the previous section. We begin by taking $m=1$ and defining $\hat{v} := \hat{v}_m = \hat{v}_1$. Consider solutions to either the system (\[VP\]) or the system (\[RVP\]), and define as above $$F(t,x)=\int f(t,x,v)\,dv, \quad G(t,x)=\int g(t,x,v)\,dv.$$
\[L4bound\] Let $f,\,g$ satisfy the VP system (\[VP\]). Assume that the data functions $f_0,\,g_0$ satisfy the hypotheses of Theorem \[identity\]. Then $$\int_0^{\infty}\int F^4(t,x)\,dx\,dt<\infty$$ and $$\int_0^{\infty}\int G^4(t,x)\,dx\,dt<\infty.$$ When $f,\,g$ satisfy the RVP system (\[RVP\]) and the data functions $f_0,\,g_0$ satisfy the hypotheses of Theorem \[identity\] we have $$\int_0^\infty \left (\int F(t,x)^\frac{7}{4} \ dx
\right )^4 \ dt < \infty$$ with the same result valid for $G$.
[**Proof:**]{} Consider the classical case (\[VP\]). By Theorem \[identity\] we know that $$k(t,x) := \iint (w-v)^2f(t,x,v) f(t,x,w)\,dv\,dw$$ is integrable over all $x,\,t$. Next we partition the set of integration: $$F(t,x)^2 = \iint f(t,x,v) f(t,x,w)\,dv\,dw=\int_{|v-w|<R}+\int_{|v-w|>R} =: I_1+I_2.$$ Clearly we have $I_2\le R^{-2}k(t,x)$. In the integral for $I_1$ we write $$\int_{|v-w|<R}f(t,x,w)\,dw=\int_{v-R}^{v+R}f(t,x,w)\,dw\le 2\|f_0\|_{\infty}R.$$ Thus $$I_1\le c\cdot R \cdot F.$$ Set $RF=R^{-2}k$ or $R=k^{1\over 3}F^{-{1\over 3}}$. Then $F^4(t,x)\le ck(t,x)$ so $F^4(t,x)$ is integrable over all $x,\,t$. The result for $G$ is exactly the same. Now we will find by a similar process the corresponding estimate for solutions to the relativistic version (\[RVP\]). To derive it we will use the estimate from (\[2.13\]), which implies that for $1+|v|+|w|\le S$, there is a constant $c>0$ such that $$(v-w)(\hat v - \hat w)\ge cS^{-3}|v-w|^2.$$ From Theorem \[identity\] with $m=1$ we know that $$k_r(t,x) := \iint (v-w)(\hat v - \hat w)f(t,x,v) f(t,x,w)\,dv\,dw$$ is integrable over all $x,\,t$. Now write $$F(t,x)^2=\iint f(t,x,v) f(t,x,w)\,dv\,dw=\int_{(v-w)(\hat v - \hat w)<R}
+\int_{(v-w)(\hat v - \hat w)>R} =: I_1+I_2.$$ Clearly $I_2\le
R^{-1}k_r(t,x)$. To estimate $I_1$ we partition it as $$I_1=\iint_{(v-w)(\hat v - \hat w)<R \atop
1+|v|+|w|<S} f(t,x,v) f(t,x, w)\,dv\,dw+ \iint_{(v-w)(\hat v -
\hat w)<R\atop 1+|v|+|w|>S} f(t,x,v) f(t,x,w)\,dv\,dw =: I_1' +
I_1''.$$ On $I_1'$ we have by the above estimate $$R\ge (v-w)(\hat v - \hat w)\ge c|v-w|^2S^{-3}.$$ Therefore on $I_1'$ we have $|v-w|<cR^{1\over 2}S^{3\over 2}$ so that $$I_1'\le c\int f(t,x,v) \int_{v-cR^{1\over 2}S^{3\over 2}}
^{v+cR^{1\over 2}S^{3\over 2}}f(t,x,w)\,dw\,dv\le c\cdot
F(t,x)\cdot R^{1\over 2}S^{3\over 2}.$$ $I_1''$ is more troublesome. By the energy and mass bounds, $$I_1''\le S^{-1}\iint (1+|v|+|w|)f(t,x,v) f(t,x,w)\,dv\,dw\le cS^{-1}e(t,x)
F(t,x)$$ where $e(t,x)=\int \sqrt{1+v^2}f(t,x,v)\,dv$. Find $S$ first by setting $$F(t,x)\cdot R^{1\over 2}S^{3\over 2}=S^{-1}e(t,x)F(t,x),$$ that is, $$S=e(t,x)^{2\over 5}R^{-{1\over 5}}.$$ Thus we get for $I_1$ the bound $$I_1\le cS^{-1}e(t,x)F(t,x)= cF(t,x)R^{1\over 5}e(t,x)^{3\over 5}.$$ Above we had $I_2\le R^{-1}k_r(t,x)$. So now set $$F(t,x)R^{1\over 5}e(t,x)^{3\over 5}=R^{-1}k_r(t,x)$$ to find $R$. The result is $$R=k_r(t,x)^{5\over 6}F(t,x)^{-{5\over 6}}e(t,x)^{-{1\over 2}}.$$ Finally then $$F(t,x)^2\le cR^{-1}k_r(t,x)=ck_r(t,x)^{1\over 6}F(t,x)^{5\over 6}e(t,x)^{1\over 2}$$ which is the same as ${F(t,x)^{7}\over e(t,x)^3}\le ck_r(t,x)$. At this point we may integrate in time to produce the result $$\label{L4RVP1} \int_0^{\infty}\int{(\int f(t,x,v)\,dv)^{7}\over
(\int \sqrt{1+v^2}f(t,x,v)\,dv)^3}\,dx\,dt<\infty.$$ Alternatively we can isolate $F(t,x)^7$ on the left side to find $F(t,x)^7 \leq c k_r(t,x) e(t,x)^3$. Then raise both sides to the $\frac{1}{4}$th power, integrate in $x$ and use Hölder’s inequality. Hence we get the bound $$\int F(t,x)^\frac{7}{4} \ dx \leq \left (\int k_r(t,x) \
dx \right )^\frac{1}{4} \left (\int e(t,x) \ dx \right
)^\frac{3}{4}.$$ We use the time–independent bound on $\int e(t,x)
\ dx$ from conservation of energy to get the estimate $$\int F(t,x)^\frac{7}{4} \ dx \leq C \left ( \int k_r(t,x) \ dx
\right )^\frac{1}{4}.$$ Finally, we raise both sides to the $4$th power and integrate in time to produce the result $$\label{L4RVP2} \int_0^\infty \left (\int F(t,x)^\frac{7}{4} \ dx
\right )^4 \ dt < \infty.$$ This is the corresponding estimate for solutions to (\[RVP\]).
$\square$
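As a sanity check on the exponent bookkeeping in the relativistic estimate above, the following sympy sketch (ours, not part of the proof) treats $F$, $e$ and $k_r$ as positive numbers, verifies the balancing choices $S=e^{2/5}R^{-1/5}$ and $R=k_r^{5/6}F^{-5/6}e^{-1/2}$, and confirms that the resulting bound reduces to $F^7/e^3 \le c\,k_r$.

```python
import sympy as sp

# Symbolic check (illustration only) of the exponent bookkeeping above.
# F, e, k stand for F(t,x), e(t,x) and k_r(t,x), treated as positive numbers.
F, R, e, k = sp.symbols('F R e k', positive=True)

S_star = e**sp.Rational(2, 5) * R**sp.Rational(-1, 5)                          # choice of S
R_star = k**sp.Rational(5, 6) * F**sp.Rational(-5, 6) * e**sp.Rational(-1, 2)  # choice of R

# balance of the two pieces of I_1 at S = S_star:  F R^(1/2) S^(3/2) = e F / S
print(sp.simplify(F * sp.sqrt(R) * S_star**sp.Rational(3, 2) - e * F / S_star))  # -> 0

# balance of I_1 ~ F R^(1/5) e^(3/5) against I_2 ~ k/R at R = R_star
expr = F * R**sp.Rational(1, 5) * e**sp.Rational(3, 5) - k / R
print(sp.simplify(expr.subs(R, R_star)))                                         # -> 0

# the bound F^2 <= c k / R_star is then equivalent to F^7 <= c k e^3
print(sp.simplify((F**2 * R_star / k)**6))                                        # -> F**7/(e**3*k)
```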
Now we will use these estimates to show that the local charges tend to $0$ as $t\to \infty$ for solutions to both sets of equations.
\[Localcharge\] Let $f,\,g$ be solutions to either the classical VP system (\[VP\]) or to the relativistic RVP system (\[RVP\]) for which the assumptions of Theorem \[L4bound\] hold. Then for any fixed $R
> 0$ the local charges satisfy $$\lim_{t\to \infty}\int_{|x|<R}F(t,x)\,dx=
\lim_{t\to \infty}\int_{|x|<R}G(t,x)\,dx=0.$$
[**Proof:**]{} We begin with solutions to the classical equation (\[VP\]). From above we know that $$\int_0^{\infty}\int
F^4(t,x)\,dx\,dt<\infty.$$ By the Hölder inequality $$\int_{|x|<R}F(t,x)\,dx\le \left(\int
F^4(t,x)\,dx\right)^{1/4}(2R)^{3/4}$$ and therefore $$\label{holder4}
\int_0^{\infty}\left[\int_{|x|<R}F(t,x)\,dx\right]^4\,dt<\infty.$$ Now by the Vlasov equation for $f$ $$F_t=-\int (vf_x+Ef_v)\,dv = -\partial_x \int vf\,dv.$$ Integrate this formula in $x$ over $|x|<R$: $$\partial_t \int_{|x|<R}F(t,x)\,dx=- \int_{|x|<R}\partial_x \int vf\,dv\,dx=
-\int vf(t,R,v)\,dv+\int vf(t,-R,v)\,dv.$$ Call $$j_f(t,x)=\int vf(t,x,v)\,dv.$$ Then $j_f(t,x)$ is boundedly integrable over all $x$ by the energy. Next we compute $$\begin{aligned}
\partial_t \left[\int_{|x|<R}F(t,x)\,dx\right]^4 & = &
4\left[\int_{|x|<R}F(t,x)\,dx\right]^3\int_{|x|<R}F_t(t,x)\,dx\\
& = &
4\left[\int_{|x|<R}F(t,x)\,dx\right]^3\left[-j_f(t,R)+j_f(t,-R)\right].\end{aligned}$$ For $0<R_1<R_2$ integrate this in $R$ over $[R_1,R_2]$: $$\begin{aligned}
\frac{d}{dt} \int_{R_1}^{R_2}\left[\int_{|x|<R}F(t,x)\,dx\right]^4\,
dR & = & 4\int_{R_1}^{R_2}\left[\int_{|x|<R}F(t,x)\,dx\right]^3
\left[-j_f(t,R)+j_f(t,-R)\right]\,dR \\
&\leq & 4\left[\int_{|x|<R_2}F(t,x)\,dx\right]^3\int_{R_1}^{R_2}
\left|-j_f(t,R)+j_f(t,-R)\right|\,dR \\
&\leq & c\left[\int_{|x|<R_2}F(t,x)\,dx\right]^3\end{aligned}$$ for some constant $c$ depending only on the data. For $t_2>t_1>1$ multiply this by $t-t_1$ and integrate in $t$ over $[t_1,t_2]$: $$\int_{t_1}^{t_2}(t-t_1)\partial_t \int_{R_1}^{R_2}\left[\int_{|x|<R}F(t,x)\,dx\right]^4\,dR\,dt
\le
c\int_{t_1}^{t_2}(t-t_1)\left[\int_{|x|<R_2}F(t,x)\,dx\right]^3\,dt.$$ Integrating the left side by parts we get $$(t_2-t_1) \int_{R_1}^{R_2}\left[\int_{|x|<R}F(t_2,x)\,dx\right]^4\,dR
-\int_{t_1}^{t_2}\int_{R_1}^{R_2}\left[\int_{|x|<R}F(t,x)\,dx\right]^4\,dR\,dt.$$ Now take $t_2=t,\,t_1=t-1$. Then we have $$\label{L4bound2} \begin{array}{rcl}
\displaystyle
\int_{R_1}^{R_2}\left[\int_{|x|<R}F(t,x)\,dx\right]^4\,dR & \leq & \displaystyle \int_{t-1}^{t}\!\int_{R_1}^{R_2}\left[\int_{|x|<R}F(t,x)\,dx\right]^4\,dR\,dt\\
& \ & \displaystyle \quad + c\int_{t-1}^{t}(t-t_1)\left[\!\int_{|x|<R_2}F(t,x)\,dx\right]^3\,dt \\
&\leq & \displaystyle \int_{t-1}^{t}(R_2-R_1)\left[\int_{|x|<R_2}F(t,x)\,dx\right]^4\,dt \\
& \ & \displaystyle \quad +
c\int_{t-1}^{t}\left[\int_{|x|<R_2}F(t,x)\,dx\right]^3\,dt.
\end{array}$$ Now take $R_2=2R_1 = 2R$ say. Then the left side of (\[L4bound2\]) is bounded below by $$(R_2-R_1) \left[\int_{|x|<R_1}F(t,x)\,dx\right]^4=R
\left[\int_{|x|<R}F(t,x)\,dx\right]^4$$ and we claim that the right side tends to 0 as $t\to \infty$. This is clear for the first term on the right of (\[L4bound2\]) by use of (\[holder4\]). The second term goes to 0, as well, since $$\int_{t-1}^{t}\left[\int_{|x|<R_2}F(t,x)\,dx\right]^3\,dt \leq
\left(\int_{t-1}^{t}\left[\int_{|x|<R_2}F(t,x)\,dx\right]^4\,dt\right)^{3/4}
\cdot \left(\int_{t-1}^t\,dt\right)^{1/4}.$$ The same computation establishes the estimate for $G$, and the result now follows in the classical case.
The proof for the relativistic case is similar. From (\[L4RVP2\]) above we know that $$\int_0^\infty \left (\int F(t,x)^\frac{7}{4} \ dx
\right )^4 \ dt < \infty.$$ By the Hölder inequality $$\int_{|x|<R}F(t,x)\,dx\le c_R\left(\int
F^\frac{7}{4}(t,x)\,dx\right)^\frac{4}{7}$$ and therefore $$\int_0^{\infty}\left[\int_{|x|<R}F(t,x)\,dx\right]^7\,dt<\infty.$$ Using the Vlasov equation (\[RVP\]) for $f$ we have $$F_t=-\int (\hat vf_x+Ef_v)\,dv = -\partial_x \int \hat vf\,dv.$$ Integrate this formula in $x$ over $|x|<R$: $$\partial_t \int_{|x|<R}F(t,x)\,dx=- \int_{|x|<R}\partial_x \int \hat vf\,dv\,dx=
-\int \hat vf(t,R,v)\,dv+\int \hat vf(t,-R,v)\,dv.$$ Call $$j_f^r(t,x)=\int \hat vf(t,x,v)\,dv.$$ Then $j_f^r(t,x)$ is boundedly integrable over all $x$ by the mass bound. Next we compute $$\begin{aligned}
\partial_t \left[\int_{|x|<R}F(t,x)\,dx\right]^7 & = &
7\left[\int_{|x|<R}F(t,x)\,dx\right]^6\int_{|x|<R}F_t(t,x)\,dx\\
& = &
7\left[\int_{|x|<R}F(t,x)\,dx\right]^6\left[-j_f^r(t,R)+j_f^r(t,-R)\right].\end{aligned}$$ The proof now concludes exactly as in the classical case.
$\square$
Time Decay of Electric Field
============================
We conclude the paper with results concerning the time integrability and decay of the electric field for both the classical and relativistic systems, (\[VP\]) and (\[RVP\]).
\[E3int\] Let the assumptions of Theorem \[L4bound\] hold and consider solutions $f,\,g$ to either (\[VP\]) or (\[RVP\]). Then $$\int_0^{\infty}\Vert E(t) \Vert_{\infty}^3\,dt<\infty.$$
[**Proof:**]{} This will follow immediately from the result in Theorem \[identity\] that $$Q(t) := \int_{-\infty}^{\infty} E^2(t,x)\left[F(t,x)+G(t,x)\right]\,dx$$ is integrable in time. Indeed by the equation $E_x=\rho =
\int(f-g)\,dv=F-G$, we have $${\partial\over \partial x}E^3=3E^2\rho=3E^2(F-G).$$ Integrate in $x$ to get $$E^3(t,x)=\int_{-\infty}^x 3E^2(F-G)\,dx$$ so that $$\label{E3int} |E(t,x)|^3\le \int_{-\infty}^{\infty} 3E^2(F+G)\,dx
= 3Q(t)$$ and the result follows as claimed.
$ \square $
Our final results will show that for solutions to the classical VP system (\[VP\]) and RVP system (\[RVP\]), the electric field $E$ tends to 0 in the maximum norm.
\[EdecayVP\] Let the assumptions of Theorem \[L4bound\] hold and consider solutions $f,\,g$ to the classical VP system (\[VP\]). Then $$\lim_{t\to \infty}\|E(t)\|_{\infty}=0.$$
[**Proof:**]{} We will show that $$\lim_{t\to \infty}Q(t)=0.$$ The conclusion will then follow from (\[E3int\]). Since $Q(t)$ is integrable over $[0,\infty)$, $\liminf Q(t)=0.$ Therefore, there is a sequence $t_n$ tending to infinity such that $Q(t_n)
\to 0$ as $n \to \infty$. As above, we denote $$F(t,x)=\int f(t,x,v)\,dv, \quad G(t,x)=\int g(t,x,v)\,dv.$$ Using $E_x=\rho=F-G$ and $E_t=-j=-\int v(f-g)\,dv$ we first compute $$\begin{aligned}
\frac{dQ}{dt}& = & -2\int jE(F+G)\,dx + \int
E^2\partial_t(F+G)\,dx\\
& = & -2\int jE(F+G)\,dx - \int E^2\partial_x\int v(f+g)\,dv
\,dx\\
& = & -2\int jE(F+G)\,dx + 2\int \rho E\int v(f+g)\,dv\,dx.\end{aligned}$$ Now, $E$ is uniformly bounded because by definition in (\[VP\]), $$|E(t,x)|\le \int_{-\infty}^x (F+G)(t,x)\,dx\le
\int_{-\infty}^{\infty} (F+G)(t,x)\,dx\le \hbox{const.}$$ where the last inequality follows by conservation of mass. Therefore $$\left|{dQ\over dt}\right| \le c \int (F+G)\int |v|(f+g)\,dv \,dx.$$ Define $e$ to be the kinetic energy density, $$e(t,x) := \int v^2
(f+g)\,dv.$$ Then in the usual manner we get $$\begin{aligned}
\int |v|(f+g)dv & = & \int_{|v|<R}|v|(f+g)\,dv +
\int_{|v|>R}|v|(f+g)\,dv\\
&\leq & \Vert f+g \Vert_{\infty}\cdot R^2+R^{-1}e\\
&\leq & c(R^2+R^{-1}e).\end{aligned}$$ Choosing $R^3=e$ we find that $$\int |v|(f+g)dv \le c e^{2\over 3}(t,x)$$ and therefore $$\label{dQdt}
\left|{dQ\over dt}\right| \le c \int (F+G)e^\frac{2}{3}\,dx \le
c \Big(\int (F+G)^3 \,dx\Big)^{1\over 3}$$ by the Hölder inequality and the bound on kinetic energy from Section $2$. By interpolation, for suitable functions $w$, $$\|w\|_3 \le \|w\|_1^{\theta} \cdot \|w\|_4^{1-\theta}$$ where $${1\over 3}={\theta \over 1} + {1-\theta \over 4}.$$ Therefore $\theta = {1\over 9}$. Apply this to $w=F+G$ and use the boundedness of $F+G$ in $L^1$ to get $$\|F+G\|_3 \le c \|F+G\|_4^{8\over 9}.$$ Using this above we conclude that $$\left|{dQ\over dt}\right| \le c\|F+G\|_4^{8\over 9}.$$ From Theorem \[L4bound\] we know that $\int (F^4 + G^4)\,dx$ is integrable in time. Thus $\left|{dQ\over dt}\right|^{9\over 2}$ is integrable in time. Now, for any $0 < R_1 < R_2$ write $$Q(R_2)^{16\over 9} - Q(R_1)^{16\over 9}={16\over 9}\int_{R_1}^{R_2}Q(t)^{7\over 9}
\dot Q(t)\,dt.$$ By the Hölder inequality again, with $p={9\over
7}$ and $q={9\over 2}$, $$\Big|Q(R_2)^{16\over 9} - Q(R_1)^{16\over 9}\Big| \le c\Big(\int_{R_1}^{R_2}Q(t)\,dt\Big)^{7/9} \cdot
\Big(\int_{R_1}^{R_2}|\dot Q(t)|^{9\over 2}\,dt\Big)^{2\over 9} \to 0$$ as $R_1, \, R_2 \to \infty.$ Therefore the limit $$\lim_{R \to \infty} Q(R)^{16\over 9}$$ exists and equals $\omega$, say. By taking $R=t_n$ and letting $n \to \infty$ we get $\omega =0$. This concludes the proof.
$\square$
\[EdecayRVP\] Let the assumptions of Theorem \[identity\] hold and consider solutions $f,\,g$ to the relativistic VP system (\[RVP\]). Then, also in this case $$\lim_{t\to \infty}\|E(t)\|_{\infty}=0.$$
[**Proof:**]{} As is to be expected, the proof is similar to that of Theorem \[EdecayVP\]. From Theorem \[identity\] we have again that $Q(t)$ is integrable in time, where exactly as in the non-relativistic case $$Q(t)= \int_{-\infty}^{\infty} E^2(t,x)\left[F(t,x)+G(t,x)\right]\,dx.$$ In this situation we have $\rho=\int(f-g)\,dv$ and (with $m=1$) $j=\int \hat v(f-g)\,dv$ where $\hat v={v\over \sqrt{1+v^2}}$ so that $|\hat v|<1$. The computation of the derivative in time is now $$\begin{aligned}
\frac{dQ}{dt} &=& -2\int jE(F+G)\,dx + \int
E^2\partial_t(F+G)\,dx\\
& = & -2\int jE(F+G)\,dx - \int E^2\partial_x\int \hat v(f+g)\,dv
\,dx\\
& = & -2\int jE(F+G)\,dx + 2\int \rho E\int \hat v(f+g)\,dv\,dx.\end{aligned}$$ It follows that $$\left|{dQ\over dt}\right| \le c\int |E|(F+G)^2\,dx\le c\int (F+G)^2\,dx$$ because $E$ is uniformly bounded. Call $e$ the relativistic kinetic energy density, $$e(t,x) = \int \sqrt{1+v^2} (f+g)\,dv.$$ Then as above $$\begin{aligned}
F+G & = & \int (f+g)\,dv\\
& = & \int_{|v|<R}(f+g)\,dv + \int_{|v|>R}(f+g)\,dv\\
& \leq & \Vert f+g \Vert_{\infty}\cdot 2R+R^{-1}e\\
&\leq & c(R+R^{-1}e).\end{aligned}$$ Hence with $R^2=e$ we find that $F+G\le ce^{1\over 2}$. Thus we see that $$\left|{dQ\over dt}\right| \le c \int e\,dx \le c.$$ In view of Remark 1 below then, $Q(t)\to 0$ as $t\to \infty$ which implies the result for $E$ as in the classical case.
$ \square $
**1.** Once $Q(t)$ is integrable in time, the uniform boundedness of $\left|{dQ\over dt}\right|$ also implies that $Q(t) \to 0$ as $t \to \infty$. The estimate (\[dQdt\]) provides the desired bound in the classical case because $(F+G)^3$ is dominated by the energy integral in this situation.\
**2.** For solutions to (\[VP\]) or (\[RVP\]), using interpolation with Theorem \[EdecayVP\] or Theorem \[EdecayRVP\], and the bound on $\Vert E(t) \Vert_2$ from energy conservation we find $$\lim_{t \rightarrow \infty} \Vert E(t)
\Vert_p = 0$$ for any $p > 2$.\
**3.** We have been unable to find a rate of decay for the maximum norm of $E$. For solutions to the classical Vlasov–Poisson system in three space dimensions such a rate follows from differentiating in time an expression essentially of the form $$\iint |x-tv|^2(f+g)\,dv\,dx$$ (cf. [@IR], [@Per]). This estimate fails to imply time decay in the current one–dimensional case.\
**4.** An identity similar to that in the proof of Theorem \[identity\] holds for solutions to the “one and one–half–dimensional” Vlasov–Maxwell system. However we have been unable to show that certain terms arising from the linear parts of the differential operators have the proper sign.\
**5.** As stated in the introduction, such decay theorems should be true for several species under the hypothesis of neutrality. However we have been unable to achieve this generalization for more than two species.\
**6.** After suitable approximation, these results can be seen to be valid for weak solutions as well.
[99]{} Batt, J.; Kunze, M.; and Rein, G., On the asymptotic behavior of a one-dimensional, monocharged plasma and a rescaling method. Advances in Differential Equations [**1998**]{}, 3:271-292.
Burgan, J.R.; Feix, M.R.; Fijalkow, E.; Munier, A., Self-similar and asymptotic solutions for a one-dimensional Vlasov beam. J. Plasma Physics [**1983**]{}, 29:139-142.
Desvillettes, L. and Dolbeault, J., On long time asymptotics of the Vlasov–Poisson–Boltzmann equation. Comm. Partial Differential Equations [**1991**]{}, 16(2-3):451-489.
Dolbeault, J. Time-dependent rescalings and Lyapunov functionals for some kinetic and fluid models. Proceedings of the Fifth International Workshop on Mathematical Aspects of Fluid and Plasma Dynamics (Maui, HI, 1998). Transport Theory Statist. Phys. [**2000**]{}, 29(3-5): 537-549.
Dolbeault, J. and Rein, G. Time-dependent rescalings and Lyapunov functionals for the Vlasov-Poisson and Euler-Poisson systems, and for related models of kinetic equations, fluid dynamics and quantum physics. Math. Methods Appl. Sci. [**2001**]{}, 11(3):407-432.
Glassey, R. and Strauss, W., Remarks on collisionless plasmas. Contemporary Mathematics [**1984**]{}, 28:269-279.
Illner, R. and Rein, G., Time decay of the solutions of the Vlasov-Poisson system in the plasma physical case. Math. Methods Appl. Sci. [**1996**]{}, 19:1409-1413.
Lions, P.L. and Perthame, B. Propagation of moments and regularity for the three dimensional Vlasov-Poisson system. Invent. Math. [**1991**]{}, 105:415-430.
Perthame, B. Time decay, propagation of low moments and dispersive effects for kinetic equations. Comm. Partial Differential Equations [**1996**]{}, 21(3–4):659–686.
Pfaffelmoser, K., Global classical solutions of the Vlasov-Poisson system in three dimensions for general initial data. J. Diff. Eq. [**1992**]{}, 95(2):281-303.
Schaeffer, J. Large-time behavior of a one-dimensional monocharged plasma. Diff. and Int. Equations [**2007**]{}, 20(3):277-292.
---
abstract: 'Fibrous networks such as collagen are common in physiological systems. One important function of these networks is to provide mechanical stability for cells and tissues. At physiological levels of connectivity, such networks would be mechanically unstable with only central-force interactions. While networks can be stabilized by bending interactions, it has also been shown that they exhibit a critical transition from floppy to rigid as a function of applied strain. Beyond a certain strain threshold, it is predicted that underconstrained networks with only central-force interactions exhibit a discontinuity in the shear modulus. We study the finite-size scaling behavior of this transition and identify both the mechanical discontinuity and critical exponents in the thermodynamic limit. We find both non-mean-field behavior and evidence for a hyperscaling relation for the critical exponents, for which the network stiffness is analogous to the heat capacity for thermal phase transitions. Further evidence for this is also found in the self-averaging properties of fiber networks.'
author:
- Sadjad Arzash
- 'Jordan L. Shivers'
- 'Fred C. MacKintosh'
title: Finite size effects in critical fiber networks
---
Introduction
============
In addition to common thermal phase transitions such as melting or ferromagnetism, there are a number of athermal phase transitions such as rigidity percolation [@thorpe_continuous_1983; @feng_effective-medium_1985; @jacobs_generic_1995] and zero-temperature jamming [@cates_jamming_1998; @liu_jamming_1998; @van_hecke_jamming_2010; @bi_jamming_2011; @bi_statistical_2015]. These athermal transitions may even exhibit signatures of criticality that are similar to thermal systems. In the case of rigidity percolation, as bond probability or average connectivity $z$ increases on a random central-force network, the number of floppy modes decreases by adding constraints until the isostatic connectivity $z_c$ is reached, at which the system becomes rigid. A simple counting argument by Maxwell shows that $z_c \approx 2d$ where $d$ is dimensionality [@maxwell_i.reciprocal_1870; @calladine_buckminster_1978]. This linear rigidity transition has been studied in random network models with additional bending interactions [@feng_position-space_1985; @arbabi_elastic_1988; @sahimi_mechanics_1993]. In general, floppy subisostatic central force networks can be stabilized by various mechanisms or additional interactions such as extra springs [@wyart_elasticity_2008], bending resistance [@broedersz_criticality_2011], thermal fluctuations [@dennison_fluctuation-stabilized_2013; @dennison_critical_2016], and applied strain [@guyon_non-local_1990; @sheinman_nonlinear_2012]. Sharma et al. [@sharma_strain-controlled_2016] recently showed that networks with $z<z_c$ exhibit a line of critical floppy-to-rigid transitions under shear deformation and that this line of mechanical phase transitions can account for the nonlinear rheology of collagen networks. The corresponding phase diagram is schematically shown in Fig. \[fig:1\], where the critical strain $\gamma_c$ at the transition is a function of connectivity $z<z_c$.
Recent experiments [@lindstrom_biopolymer_2010; @licup_stress_2015; @jansen_role_2018; @burla_connectivity_2020] have shown that collagen biopolymers form networks that are in the subisostatic regime with $z<z_c$. It has also been shown that the rheology of such networks is consistent with computational fiber network models that include both strong stretching interactions and weak fiber bending rigidity [@licup_stress_2015; @sharma_strain-controlled_2016]. Although even a weak bending rigidity tends to suppress the critical signatures of the transition shown in Fig. \[fig:1\], the critical exponents can still be identified both theoretically and experimentally in a way similar, e.g., to ferromagnetism at non-zero applied field. To understand criticality and finite-size effects in the strain-controlled transition, we focus on fiber networks with purely central force interactions as a function of shear strain $\gamma$. At a critical strain $\gamma_c$, there can be a small but finite discontinuity in the differential shear modulus $K=\partial\sigma/\partial\gamma$, where $\sigma$ is the shear stress [@vermeulen_geometry_2017; @merkel_minimal-length_2019]. Figure \[fig:2\] shows the macroscopic modulus, shear stress and elastic energy of a diluted triangular network as a function of the distance above its critical strain. Although both elastic energy $E$ and shear stress $\sigma$ approach zero as $\Delta\gamma=\gamma-\gamma_c$ approaches zero from above, the stiffness $K$ exhibits a finite discontinuity $K_c$. The left inset of Fig. \[fig:2\] shows $K$ versus $|\Delta \gamma| ^f$, where $f \neq 1$ is a non-mean-field scaling exponent. The observed straight line in this linear plot illustrates the critical scaling behavior of $K$ near $\gamma_c$. Moreover, a distinct discontinuity in the modulus can be seen in the right inset of Fig. \[fig:2\], showing the region closer to $\gamma_c$. The scaling behavior of $K$ and the critical exponent $f$ are more systematically studied in the later sections, where we study the finite-size scaling of the discontinuity and its effect on the scaling exponents, which have also previously been studied using a complementary approach with the addition of small, non-zero bending rigidity [@sharma_strain-controlled_2016]. Using these modified exponents, we test scaling relations recently predicted for fiber networks [@shivers_scaling_2019].
![ \[fig:1\] Rigidity phase diagram of central force networks. [Upon increasing the average]{} connectivity $z$ at $\gamma=0$, [a network]{} passes through three distinct regimes: (i) a disconnected structure for connectivity less than the percolation connectivity $z<z_p$ (ii) a percolated but floppy network for $z_p<z<z_c \simeq 2d $ and (iii) a rigid network for connectivity greater than $z_c$. Applying a sufficiently large finite strain to an otherwise floppy network with $z_p<z<z_c$ rigidifies the system. For a given $z$ in this range, a critical transition is observed with increasing strain, as indicated by the dashed arrow. The second-order line of transitions is characterized by a critical strain $\gamma_c(z)$ that varies linearly with $z$ near $z_c$ [@wyart_elasticity_2008] (see also Fig. \[fig:A3\] in the Appendix).](fig1){width="6cm" height="6cm"}
Simulation method
=================
To investigate the stiffness discontinuity in fiber networks, we use various network models including (i) triangular, (ii) *phantomized* triangular [@broedersz_criticality_2011; @licup_stress_2015], (iii) 2D and (iv) 3D jammed-packing-derived [@wyart_elasticity_2008; @tighe_force_2010; @baumgarten_normal_2018; @merkel_minimal-length_2019; @shivers_normal_2019], (v) Mikado [@wilhelm_elasticity_2003; @head_deformation_2003], and (vi) 2D Voronoi networks [@heussinger_stiff_2006; @arzash_stress-stabilized_2019]. Triangular networks are built by depositing individual fibers of length $W$ on a periodic triangular lattice. The lattice spacing is $\ell_0=1$. A full triangular network has an average connectivity of $z=6$. In order to avoid the trivial effects of system-spanning fibers, we initially cut a single random bond from every fiber. Since the number of connections for a crosslink in real biopolymer networks is either 3 (branching point) or 4 (fiber crossing), we enforce this local connectivity in the phantomized triangular model. A single node in a full triangular network has three crossing fibers. We *phantomize* the network by detaching one of these fibers randomly for every node [@broedersz_molecular_2011; @licup_stress_2015]. Therefore, a fully phantomized triangular network has an average connectivity of $z=4$. Similar to the triangular network model, a random bond is removed from every fiber to avoid system-spanning fibers.
We note that our lattice models are not generic, i.e., the nodes are not displaced from an initial regular lattice. Although generic lattices can be important for linear elasticity [@jacobs_generic_1995; @moukarzel_elastic_2012], the nonlinear elasticity studied here is insensitive to small displacements in the initial configuration, as shown in Ref. [@rens_nonlinear_2016]. This is due to the fact that the transition we study occurs at a finite strain threshold, by which point significant nonaffine deformation has occurred. 2D (3D) packing-derived networks are generated by randomly placing $N=W^2 \; (W^3)$ disks (spheres) in a periodic box (cube) of length $W$. We use a $50/50$ bidisperse particle mixture with a radius ratio of $1.4$. These frictionless particles interact via a harmonic soft repulsive potential [@ohern_random_2002; @ohern_jamming_2003; @goodrich_jamming_2014]. The particles are uniformly expanded until the system exhibits both non-zero bulk and shear moduli, i.e., until the system is jammed, at which point a contact network excluding rattlers is derived. This contact network shows an average connectivity of $z \simeq z_c$. Mikado networks are constructed by populating a box of size $W$ with $N$ fibers of length $L$. Permanent crosslinks are introduced at the crossing points between two fibers. Because of the preparation procedure for the Mikado model, the average connectivity of the network approaches $4$ from below as the number of fibers $N$ increases. To construct Mikado networks, we choose a line density of $NL^2/W^2 \simeq 7$, which results in an average connectivity of $z \simeq 3.4$. The 2D Voronoi model is prepared by performing a Voronoi tessellation of $W^2/2$ random seeds in a periodic box with side length $W$, using the CGAL library [@the_cgal_project_cgal_2019]. A full Voronoi network has an average connectivity of $z=3$.
![ \[fig:2\] Elastic energy $E$, shear stress $\sigma$, and differential shear modulus $K$ versus excess shear strain to the critical point $\gamma-\gamma_c$ for a single realization of a subisostatic triangular network with $z=3.3$. We use the finite modulus at the critical strain $\gamma_c$ as the shear modulus discontinuity, i.e., $K_c = K(\gamma_c)$. Inset: a linear plot showing the scaling behavior of $K$ for the same sample. By zooming in this plot on the right side, we observe a distinct modulus discontinuity $K_c$.](fig2){width="7.5cm" height="7.5cm"}
For all network models, we randomly cut bonds until the desired average connectivity $z<z_c$ is reached. Any remaining dangling bonds are removed since they do not contribute to the network’s stiffness. The random dilution process not only yields a subisostatic network similar to real biopolymers but also introduces disorder in the system. All crosslinks in our computational models are permanent and freely hinged. An example image of each model is shown in Fig. \[fig:A1\] in the Appendix. Among these computational models, we note that the bond length distribution of Mikado and Voronoi models is similar to the observed filament length distribution of collagen networks [@lindstrom_biopolymer_2010].
In the above models, the bonds are treated as simple Hookean springs. Therefore, the elastic energy of the network is calculated as $$\label{eq:1}
E = \frac{\mu}{2} \sum_{ij}^{}\frac{(\ell_{ij} - \ell_{ij,0})^2}{\ell_{ij,0}},$$ in which $\mu$ (in units of energy/length) is the stretching (Young’s) modulus of individual bonds, $\ell_{ij}$ and $\ell_{ij,0}$ are the current and rest bond length between nodes $i$ and $j$ respectively. We note that the rest lengths are defined as bond lengths after constructing the networks, i.e., prior to any deformation. The sum is taken over all bonds in the network. We set $\mu=1$ in our simulations.
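As an illustration of Eq. \[eq:1\], a minimal numpy sketch of the network energy and the corresponding nodal forces is given below. This is not the code used in our simulations; node positions, the bond list and the rest lengths are assumed given, and periodic images are ignored for brevity.

```python
import numpy as np

def spring_energy_and_forces(pos, bonds, rest, mu=1.0):
    """Energy of Eq. (1) and nodal forces (-dE/dpos) for a Hookean network.

    pos   : (N, d) array of node positions
    bonds : (Nb, 2) integer array of node indices (i, j) for each bond
    rest  : (Nb,) array of rest lengths ell_ij,0
    Periodic images are ignored in this sketch.
    """
    rij = pos[bonds[:, 1]] - pos[bonds[:, 0]]      # bond vectors r_ij
    lij = np.linalg.norm(rij, axis=1)              # current lengths ell_ij
    strain = (lij - rest) / rest
    energy = 0.5 * mu * np.sum(rest * strain**2)   # (mu/2) sum (l - l0)^2 / l0

    # tension mu*strain pulls node i toward node j along the unit bond vector
    fpair = (mu * strain / lij)[:, None] * rij
    forces = np.zeros_like(pos)
    np.add.at(forces, bonds[:, 0], fpair)
    np.add.at(forces, bonds[:, 1], -fpair)
    return energy, forces
```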
We apply simple volume-preserving shear deformations in a step-wise procedure with small step size. The deformation tensors in 2D and 3D are as follow $$\label{eq:2}
\Lambda_{2\textrm{D}}(\gamma) = \begin{bmatrix}
1 & \gamma \\
0 & 1
\end{bmatrix}, \;
\Lambda_{3\textrm{D}}(\gamma) = \begin{bmatrix}
1 & 0 & \gamma \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}$$ where $\gamma$ is the shear strain and the networks are sheared in $x$-direction. Note that the 3D networks are deformed in $x-z$ plane.
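For illustration, a minimal 2D sketch of the affine part of a strain step and of a Lees-Edwards-style minimum-image convention for bond vectors is given below (a simplified sketch of ours, not the simulation code; the function names are our own).

```python
import numpy as np

def apply_affine_shear(pos, dgamma):
    """Affine guess for a simple shear increment dgamma in 2D: x -> x + dgamma * y."""
    new = pos.copy()
    new[:, 0] += dgamma * pos[:, 1]
    return new

def min_image_sheared(dr, gamma, W):
    """Minimum-image bond vectors in a 2D periodic box of side W that has been
    sheared by gamma in the x-direction (Lees-Edwards convention)."""
    dr = np.array(dr, dtype=float, copy=True)
    ny = np.round(dr[..., 1] / W)        # number of y-period crossings
    dr[..., 1] -= ny * W
    dr[..., 0] -= ny * gamma * W         # each y-image is offset by gamma*W in x
    dr[..., 0] -= np.round(dr[..., 0] / W) * W
    return dr
```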
We assume a quasi-static process, i.e., the system reaches mechanical equilibrium after each deformation step. Therefore, after each strain step, we minimize the elastic energy in Eq. \[eq:1\] using one of the multidimensional minimization algorithms such as FIRE [@bitzek_structural_2006], conjugate gradient [@press_numerical_1992], and BFGS2 method from the GSL library [@galassi_et_al_gnu_2018]. To reduce finite size effects, we utilize periodic boundary conditions in both directions. Moreover, we use Lees-Edwards boundary conditions to deform the networks [@lees_computer_1972]. After finding the mechanical equilibrium configuration at each strain step, we compute the stress components as follows [@shivers_scaling_2019] $$\label{eq:3}
\sigma_{\alpha \beta} = \frac{1}{2V} \sum_{ij}^{}f_{ij,\alpha}r_{ij,\beta},$$ in which $V$ is the volume of simulation box, $f_{ij,\alpha}$ is the $\alpha$ component of the force exerted on node $i$ by node $j$, and $r_{ij,\beta}$ is the $\beta$ component of the displacement vector connecting nodes $i$ and $j$. The differential shear modulus $K$ is calculated as $K = d\sigma_{xy}/d\gamma$ in 2D and $K = d\sigma_{xz}/d\gamma$ in 3D at each strain value. To remove any possible asymmetry in $K$, we shear each realization in both positive and negative shear strains. Unless otherwise stated, in order to obtain reliable ensemble averages, we use at least $100$ different realizations for every network model.
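As a concrete illustration of Eq. \[eq:3\] and of the numerical differentiation used to obtain $K$, a minimal sketch is given below. This is not the production code; it assumes that the per-pair forces and displacement vectors of the relaxed configuration are available, with the sum running over ordered pairs as implied by the $1/2$ prefactor.

```python
import numpy as np

def shear_stress(fij, rij, volume):
    """Virial shear stress sigma_xy of Eq. (3): (1/(2V)) * sum_ij f_ij,x * r_ij,y.

    fij, rij : (M, d) arrays of pair forces and displacement vectors, with the
    sum taken over ordered pairs (each bond appearing twice), matching the 1/2
    prefactor of Eq. (3)."""
    return 0.5 / volume * np.sum(fij[:, 0] * rij[:, 1])

def differential_modulus(gamma, sigma):
    """Finite-difference estimate of K = d(sigma)/d(gamma) along a strain sweep."""
    return np.gradient(sigma, gamma)
```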
![ \[fig:3\] (a) A small section of a triangular network with connectivity $z=3.3$ at the critical strain $\gamma = \gamma_c$. The gray bonds are those with zero force. Bonds with larger forces have a brighter color. This branch-like force chain that appears at the critical strain rigidifies the otherwise floppy network. (b) The participation ratio $\psi$, the ratio of bonds under a finite force to all present bonds, versus shear strain $\gamma$ for the network in (a). As shown, a large portion of bonds undergoes a finite force at the critical strain, i.e., $\psi_c \simeq 0.5$. Inset: the force distributions of the network in (a) at the critical strain, where $\langle |f| \rangle$ is the average of absolute values of bond forces.](fig3){width="8.5cm" height="8.5cm"}
Results
=======
By applying shear strain, the subisostatic networks with central force interactions undergo a mechanical phase transition from a floppy to a rigid state [@sharma_strain-controlled_2016; @sharma_strain-driven_2016]. In contrast to percolation- or jamming-like transitions in which the system rigidifies due to increasing number of bonds or contacts, fiber network models have static structures. Therefore, this floppy-to-rigid transition occurs because of the emergence of finite tension under deformation, here shear strain. The transition point is a function of network’s geometry as well as network’s connectivity $z$ (see the schematic phase diagram in Fig. \[fig:1\]) . As shown in Fig. \[fig:3\], a branch-like tensional structure appears at the critical strain that is responsible for the network’s rigidity. This rigidity mechanism can be understood in terms of the percolation of these tensional paths. By computing the *participation ratio* $\psi$ as the ratio of bonds with non-zero force to all present bonds in the network, we find that a large portion of the network is under a finite force at the transition point (see Fig. \[fig:3\] b). To calculate $\psi$ we use the absolute value of bond forces $|f_{ij}|$, where $f_{ij}>0$ corresponds to tension. The force distribution at the critical strain is shown in the inset of Fig. \[fig:3\] b. The behavior of this distribution is similar to (compressive) contact force distributions in particle packings [@radjai_force_1996; @ohern_random_2002; @ohern_jamming_2003; @wyart_rigidity_2005; @majmudar_contact_2005]. Here, however, the distribution shows that there are more tensile than compressive forces at the critical strain, which stabilize the network. Consistent with prior work [@shivers_normal_2019], we find that the force distribution decays exponentially at the critical strain.
To further understand this criticality in central force networks, we investigate the moments of the force distribution, defined as $$
M_k = \langle \frac{1}{N_b} \sum_{ij} |f_{ij}|^{k} \rangle ,$$ in which the angle brackets represent the ensemble average over random realizations, $N_b$ is the number of all bonds, and $|f_{ij}| =|\mu (\ell_{ij} - \ell_{ij,0})/\ell_{ij,0}|$ is the magnitude of force on bond $ij$. Similar to the behavior of percolation on elastic networks [@hansen_multifractality_1988; @hansen_universality_1989; @arbabi_mechanics_1993; @sahimi_mechanics_1993], we find that the moments $M_k$ obey a scaling law near the critical strain $$
M_k \sim |\gamma - \gamma_c|^{q_k}.$$ This scaling behavior of the first three moments is shown in Fig. \[fig:A4\] in the Appendix. For a triangular network with $z=3.3$, we find that $q_1 = 1.3 \pm 0.1$, $q_2 = 2.5 \pm 0.1$ and $q_3 = 3.7 \pm 0.1$. Interestingly, we observe that $q_k \simeq q_{k-1} +1$ for $k>1$. Note that the zeroth moment of the force distribution is the participation ratio $\psi$ shown in Fig. \[fig:3\]b. The mass fraction of the tensional backbone that appears at the critical strain is given by the participation ratio or zeroth moment at $\gamma_c$ [@hansen_universality_1989; @bunde_fractals_1995]. In plotting the mass of the tensional structure at the critical strain versus system size $W$, we find that the fractal dimension of this backbone appears to be the same as the euclidean dimension of $2$ (see Fig. \[fig:A8\] in the Appendix).
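The participation ratio $\psi$ and the moments $M_k$ are straightforward to evaluate from the relaxed bond forces; the following minimal sketch (our illustration, not the analysis code used for the figures) collects them for a single configuration and extracts $q_k$ from a log-log fit.

```python
import numpy as np

def participation_ratio(bond_forces, tol=1e-12):
    """Fraction of bonds carrying a nonzero force (psi, the zeroth moment)."""
    return np.mean(np.abs(bond_forces) > tol)

def force_moments(bond_forces, ks=(1, 2, 3)):
    """Single-configuration moments (1/N_b) * sum_ij |f_ij|^k for each k."""
    f = np.abs(bond_forces)
    return {k: np.mean(f**k) for k in ks}

def moment_exponent(dgamma, Mk):
    """Exponent q_k from a log-log fit of M_k against gamma - gamma_c."""
    slope, _ = np.polyfit(np.log(dgamma), np.log(Mk), 1)
    return slope
```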
Of particular interest are the macroscopic properties of fiber networks such as stiffness $K$ near the transition. As we approach the critical point, we find that $K$ shows a finite discontinuity $K_c$, in agreement with prior work [@vermeulen_geometry_2017; @merkel_minimal-length_2019]. Figure \[fig:2\] shows the behavior of one random realization of a diluted triangular network very close to its critical strain $\gamma - \gamma_c \simeq 10^{-4}$. In order to find the sample-specific critical point $\gamma_c(W,i)$ for a network with size $W$, we use the bisection method [@merkel_minimal-length_2019]. By performing an initial step-wise shearing simulation for every random sample, we first find a strain value $\gamma_{R,i}$ at which the network becomes rigid, i.e., at which the shear stress calculated from Eq. \[eq:3\] reaches a threshold value. Here we use $10^{-9}$ for the stress threshold. Our results, however, are insensitive to this choice as long as the threshold is sufficiently small. The strain value immediately preceding $\gamma_{R,i}$ is taken as the nearest floppy point $\gamma_{F,i}$. By refining the bracket \[$\gamma_{F,i}$, $\gamma_{R,i}$\] over at least $20$ bisection steps, we are able to accurately identify the critical point for every random sample $i$. After identifying the critical point, the network is sheared in a step-wise manner starting from $\gamma_c(W,i)$. Therefore, the final ensemble averages for a given system size are taken over random realizations at the same distance from their respective critical strains. Prior work has established that this is a suitable averaging method for finite systems with large disorder [@bernardet_disorder_2000].
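A schematic version of this bisection procedure is sketched below (illustration only; `shear_and_measure_stress` stands for a user-supplied routine that deforms the relaxed network to the given strain, minimizes the energy, and returns the shear stress).

```python
def critical_strain(shear_and_measure_stress, gamma_floppy, gamma_rigid,
                    stress_tol=1e-9, n_steps=20):
    """Bisection estimate of the sample-specific critical strain gamma_c.

    shear_and_measure_stress(gamma) is assumed to deform the network to strain
    gamma, minimize the elastic energy, and return the resulting shear stress.
    [gamma_floppy, gamma_rigid] is an initial bracket from a coarse strain sweep.
    """
    lo, hi = gamma_floppy, gamma_rigid
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        if shear_and_measure_stress(mid) > stress_tol:
            hi = mid          # network is rigid at mid: tighten from above
        else:
            lo = mid          # network is still floppy at mid: tighten from below
    return hi
```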
As shown previously [@vermeulen_geometry_2017] for purely central-force networks, the stiffness $K$ exhibits a scaling behavior with the excess shear strain $$\label{eq:Discont}
K-K_c \sim |\gamma - \gamma_c|^f,$$ in which $K_c$ represents a discontinuity in the shear modulus at the transition and $f$ is a non-mean-field exponent. Subisostatic networks with central force interactions are floppy below this transition. In order to understand the behavior of networks in $\gamma < \gamma_c$ regime, we introduce an additional bending rigidity [@broedersz_criticality_2011; @licup_stress_2015; @shivers_scaling_2019]. In the presence of a weak bending rigidity $\kappa$, the floppy-to-rigid transition in networks becomes a crossover between bend-dominated and stretch-dominated regimes [@onck_alternative_2005; @sharma_strain-controlled_2016; @sharma_strain-driven_2016; @shivers_scaling_2019]. In the small strain regime $\gamma < \gamma_c$, the shear modulus is proportional to the bending rigidity $\kappa$ and the following scaling form captures the behavior of $K$ for bend-stabilized fiber networks [@sharma_strain-controlled_2016] $$\label{eq:Widom}
K \approx |\gamma - \gamma_c|^f \mathcal{G}_\pm(\kappa/|\gamma - \gamma_c |^\phi),$$ in which $\phi$ is a scaling exponent and $\mathcal{G}_\pm$ is the scaling function for regimes above and below the critical strain. In later sections, we discuss in detail the procedure of finding these scaling exponents $f$ and $\phi$.
With the scaling exponents $f$ and $\phi$ obtained, we repeat the tests previously carried out for the scaling theory in Ref. [@shivers_scaling_2019]. Specifically, we consider the finite-size scaling of the nonaffine fluctuations of a diluted triangular network in Fig. \[fig:4\]. The nonaffine displacements are measured by the differential nonaffinity parameter defined as $$\label{eq:4}
\delta \Gamma = \frac{\langle || \delta \mathbf{u}^{\text{NA}} ||^2 \rangle}{\ell^2 \delta \gamma^2},$$ in which $\ell$ is the typical bond length of the network, and $\delta \mathbf{u}^{\text{NA}} = \mathbf{u} - \mathbf{u}^{\text{affine}}$ is the nonaffine displacement of a node that is caused by applying an infinitesimal shear strain $\delta \gamma$. To better illustrate this parameter, we show the nonaffine displacement vectors of nodes for a diluted triangular network before, at and after the critical strain in Fig. \[fig:A5\] in the Appendix [@sharma_strain-driven_2016]. The differential nonaffinity $\delta \Gamma$ diverges at the critical strain for central force networks, with a susceptibility-like exponent $\lambda = \phi - f$, i.e., $\delta \Gamma \sim |\Delta \gamma|^{-\lambda}$ [@broedersz_mechanics_2011; @sharma_strain-driven_2016; @shivers_scaling_2019]. Moreover, as the system approaches the critical strain, the correlation length diverges as $\xi \sim |\Delta \gamma|^{-\nu}$. When the correlation length is smaller than the system size $W$, i.e., $|\Delta \gamma| \times W^{1/\nu}>1$, we should find $\delta \Gamma \sim |\Delta \gamma|^{-\lambda}$. Near the critical strain, however, finite-size effects cause the fluctuations to saturate at $\delta \Gamma \sim W^{\lambda/\nu}$. Therefore, the following scaling form must capture the behavior of fluctuations [@sharma_strain-driven_2016] $$\label{FluctuationsScaling}
\delta \Gamma = W^{\lambda/\nu} \mathcal{H}(\Delta \gamma W^{1/\nu}),$$ where the scaling function $\mathcal{H}(x)$ is constant for $|x|<1$ and $|x|^{-\lambda}$ otherwise. The differential nonaffinity is shown for different system sizes of a diluted triangular network in Fig. \[fig:A5\] in the Appendix. Based on the above scaling form, we perform a finite-size scaling analysis as shown in Fig. \[fig:4\]. The correlation length exponent $\nu$ is computed from the hyperscaling relation $f=d\nu - 2$ obtained for this transition in prior work [@shivers_scaling_2019], using the exponent $f$ that is computed by considering the stiffness discontinuity. This excellent collapse of fluctuations further emphasizes the true critical nature of the transition as well as consistency with the hyperscaling relation $f=d\nu - 2$ in fiber networks, even accounting for the discontinuity in $K$. As noted before, this discontinuity has no bearing on the order of the transition, since $K$ is not the order parameter, and is more analogous to the heat capacity in a thermal phase transition [@shivers_scaling_2019]. The inset of Fig. \[fig:4\] shows the distribution of critical strains for the same networks in the main figure. As system size increases, the critical strain distribution becomes narrower. Although we focus on finite-size effects in computational fiber models primarily in order to properly identify the behavior of such networks in the thermodynamic limit, we note that experimental rheology on physical collagen networks can also be strongly affected by the sample size, e.g., in sample size dependence of the yield strain [@arevalo_size-dependent_2010]. This is likely due to the rather large mesh size of order 10 $\mu$m in many of the experimental studies.
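For reference, the rescaling that produces the collapse in Fig. \[fig:4\] according to Eq. \[FluctuationsScaling\] amounts to the short routine sketched below (an illustrative helper of ours; the measured curves and the exponents $\lambda$ and $\nu$ are inputs).

```python
def collapse_nonaffinity(curves, lam, nu):
    """Rescale measured (Delta gamma, delta Gamma) curves according to
    delta Gamma = W^(lambda/nu) * H(Delta gamma * W^(1/nu)).

    curves : dict mapping system size W -> (dgamma, dGamma) numpy arrays
    lam, nu: the exponents lambda = phi - f and nu from f = d*nu - 2
    Returns a dict of rescaled (x, y) curves; on a collapse plot these fall
    onto the single master curve H.
    """
    rescaled = {}
    for W, (dgamma, dGamma) in curves.items():
        rescaled[W] = (dgamma * W**(1.0 / nu), dGamma * W**(-lam / nu))
    return rescaled
```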
![ \[fig:4\] The finite-size collapse of nonaffine fluctuations according to Eq. \[FluctuationsScaling\]. The data are obtained for triangular networks with $z=3.3$ and different lateral size $W$ as specified in the legend. Inset: shows distributions of the critical strain for the same networks.](fig4){width="8cm" height="8cm"}
As indicated above, the exponent $f$ is analogous to the heat capacity exponent $\alpha$ in thermal critical phenomena, but with opposite sign. Based on the Harris criterion [@harris_effect_1974], a positive $f>0$ (i.e., $\alpha<0$), for which $\nu > 2/d$, implies that weak randomness does not change the behavior of critical fiber networks. Closely related to the Harris criterion is the *self-averaging* property in critical phenomena. Any observable $X=E$, $\sigma$ or $K$ has different values for different random samples. Therefore for a system with size $W$, we can define for observable $X$ a probability distribution function $P(X,W)$, which is characterized by its average $\langle X \rangle$ and variance $V(X) = \langle X^2 \rangle - \langle X \rangle ^2$. A system is self-averaging if the relative variance $R_V(X) = V(X)/\langle X \rangle^2 \rightarrow 0$ as $W \rightarrow \infty$. In other words, the ensemble average of a self-averaging system does not depend on the disorder introduced by random samples as the system size becomes infinite.
Far from the transition, where the system size $W$ is much larger than the correlation length $\xi$, the Brout argument [@brout_statistical_1959], which is based on the central limit theorem, indicates *strong* self-averaging $R_V(X) \sim W^{-d}$ where $d$ is dimensionality [@wiseman_lack_1995]. Indeed, for our 2D fiber networks away from the critical strain, we find that the relative variance of macroscopic properties decreases with system size as $W^{-2}$, i.e., fiber networks exhibit strong self-averaging off criticality (see Fig. \[fig:5\]b). Near the transition, however, the correlation length becomes larger than the system size $W \ll \xi$ and the Brout argument does not hold. Therefore, at criticality there is no reason to expect $R_V(X) \sim W^{-d}$ [@wiseman_lack_1995; @aharony_absence_1996; @wiseman_self-averaging_1998]. For example, it is established that $R_V(X)$ shows a $W$-independent behavior, i.e., no self-averaging, at the percolation transition for the mass of the spanning cluster [@stauffer_introduction_2003] and the conductance of diluted resistor networks [@harris_randomly_1987]. A *weak* self-averaging, corresponding to $R_V(X) \sim W^{-a}$ with $0<a<d$, has been identified in bond-diluted Ashkin-Teller models [@wiseman_lack_1995]. As proved by Aharony and Harris [@aharony_absence_1996], when randomness is irrelevant, i.e., $\nu > 2/d$, the system exhibits a weak self-averaging behavior where $R_V(X) \sim W^{\alpha/\nu}$ (in our fiber networks $R_V(X) \sim W^{-f/\nu}$). As shown in Fig. \[fig:5\] a, fiber networks appear to exhibit a weak self-averaging at the critical strain, with an exponent close to $f/\nu$. We note that $R_V(X)$ in Fig. \[fig:5\] a is computed in the regime where $|\Delta \gamma| \times W^{1/\nu} \approx 1$. We also find that the variance of critical strains decreases as $V(\gamma_c) \sim W^{-2}$ (see the inset of Fig. \[fig:5\] a), in accordance with the prediction of Aharony and Harris [@aharony_absence_1996].
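The relative variance $R_V(X)$ and the self-averaging exponent are obtained from the ensemble data as sketched below (illustrative helper functions, not our actual analysis script).

```python
import numpy as np

def relative_variance(samples):
    """R_V(X) = (<X^2> - <X>^2) / <X>^2 over an ensemble of realizations."""
    x = np.asarray(samples, dtype=float)
    return x.var() / x.mean()**2

def self_averaging_exponent(sizes, rel_var):
    """Exponent a in R_V ~ W^(-a) from a log-log least-squares fit."""
    slope, _ = np.polyfit(np.log(sizes), np.log(rel_var), 1)
    return -slope
```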
![ \[fig:5\] (a) The relative variance of different quantities specified in the legend at the critical strain for a triangular network with $z=3.3$ versus linear system size $W$. Inset: the scaling behavior of variance of critical strains versus system size for the same model. (b) The relative variance of the macroscopic quantities as specified in the legend for the same model in (a) away from the critical strain versus linear system size $W$.](fig5){width="12cm" height="12cm"}
As prior work showed [@wyart_elasticity_2008; @merkel_minimal-length_2019], the shear modulus discontinuity $K_c$ vanishes as the network connectivity $z$ approaches the isostatic threshold $z_c=2d$. Figure \[fig:6\] shows the behavior of $K_c$ versus network connectivity $z$. As expected, $K_c$ decreases as $z$ approaches $z_c$. Moreover, as $z$ decreases towards the connectivity percolation transition for a randomly diluted triangular network, we observe a decreasing trend in $K_c$. This regime can be explained by plotting the participation ratio at the critical strain $\psi_c$ in the inset of Fig. \[fig:6\]. As we see, $\psi_c$ has a small value for networks with $z$ close to the percolation connectivity. These small tensional patterns are responsible for the network’s rigidity at the critical strain, resulting in a lower modulus discontinuity $K_c$.
![ \[fig:6\] Shear modulus discontinuity versus connectivity $z$ for a triangular network. As connectivity $z$ approaches the isostatic point $z_c$, the jump in shear modulus vanishes $K_c \rightarrow 0$. On the other hand, for networks with low connectivity, a small tensional pattern is responsible for the rigidity of the system. Therefore, $K_c$ decreases as $z$ decreases towards the percolation connectivity. Inset: participation ratio at the critical strain versus connectivity $z$.](fig6){width="8cm" height="8cm"}
In order to understand the network behavior in the thermodynamic limit, we study the finite-size effects in more detail. One trivial finite-size effect is observed by studying the participation ratio $\psi$. For small number of random realizations, a strand-like percolated force chain, which appears at the critical strain, continues to bear tensions under deformation. This effect results in a plateau in network stiffness $K$, as shown in Fig. \[fig:A7\] in the Appendix. This plateau effect is more prevalent in network models with long, straight fibers such as the triangular model. We next explore the finite-size effects of stiffness discontinuity in fiber networks. The distributions of $K_c$ for various system size are shown in Fig. \[fig:7\] a. The mean of these distributions versus inverse system size exhibits a slow decreasing trend for all different network models (Fig. \[fig:7\] b). However, we find that this discontinuity remains finite but small (of order $0.01$) for all network models as we approach the thermodynamic limit $1/W \rightarrow 0$, consistent with findings of Ref. [@vermeulen_geometry_2017] for the Mikado model. This is similar to the behavior of the linear bulk modulus for sphere packings at the jamming transition, which exhibits a finite discontinuity in $z$ in the thermodynamic limit [@wyart_rigidity_2005; @moukarzel_elastic_2012; @goodrich_finite-size_2012; @goodrich_scaling_2016]. Vermeulen et al. [@vermeulen_geometry_2017] argued that the nonlinear shear modulus discontinuity in fiber networks is due to an emerging single state of self-stress at the network’s critical strain. Consistent with this, we find a non-fractal stress backbone at the critical strain.
![ \[fig:7\] (a) The distributions of shear modulus discontinuity $K_c$ for triangular networks with $z=3.3$ and different system sizes as specified in the legend. (b) Shear modulus discontinuity $K_c$ versus inverse system size $1/W$, for various 2D network models as specified in the legend (For Mikado model we used square root of present nodes in the network as $W$). The data are normalized with the length density $\rho$ for every model. The standard deviations are only shown for the triangular network, though the standard deviation at $W=60$ for every model is shown in the legend.](fig7){width="12cm" height="12cm"}
As mentioned above, the stiffness exponent $f$ has a non-mean-field value, i.e., $f \neq 1$. In fiber networks, the correlation length scales as $\xi \sim \Delta \gamma^{-\nu}$. True critical behavior in simulation results such as ours should only be apparent when the correlation length remains smaller than the system size, i.e., $|\Delta \gamma| \times W^{1/\nu} >1$ [@sharma_strain-controlled_2016; @shivers_scaling_2019]. Near the critical point, however, the correlation length diverges and the stiffness scales as $K-K_c \sim W^{-f/\nu}$. Therefore, the following scaling function captures the stiffness behavior $$K-K_c = W^{-f/\nu} \mathcal{F}(\Delta \gamma W^{1/\nu}),$$ in which the function $\mathcal{F}(x)$ is a constant for $x<1$ and $x^f$ for $x>1$. Note that we are only able to investigate one side of the transition $\Delta \gamma >0$ for central force networks.
![ \[fig:8\] (a) The distributions of the stiffness exponents $f$ for different system sizes for a triangular network with $z=3.3$. The exponents are obtained in the critical regime in which $|\Delta \gamma| \times W^{1/\nu} > 1.0$ for all sizes. (b) The ensemble average of $f$, which is obtained from the distributions in (a), versus inverse system size $1/W$. The error bars are showing the standard deviations of samples.](fig8){width="12cm" height="12cm"}
To obtain the stiffness exponent $f$, we perform a power-law fit of $K - K_c$ versus $\gamma - \gamma_c$ for every individual sample of different system sizes in the critical regime, where $|\Delta \gamma| \times W^{1/\nu} >1$ for every size $W$. We use sample-dependent $K_c$ and $\gamma_c$. Figure \[fig:8\] a shows the $f$ distributions for different system sizes for a triangular network with $z=3.3$. The averages of these distributions are shown in Fig. \[fig:8\] b. As can be observed, we find negligible differences in $f$ for different system sizes when the exponents are obtained in the true critical regime. If, however, the scaling exponents $f$ are instead collected in a fixed strain window for all sizes, a size-dependent behavior of $f$ is unavoidable due to finite-size effects (see Fig. \[fig:A9\] in the Appendix). We conclude that $f = 0.79 \pm 0.07$, obtained for $W=140$, for triangular networks with $z=3.3$.
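Schematically, the per-sample fit is restricted to the critical regime as in the sketch below (an illustrative helper, not the actual analysis script; it assumes $K(\gamma)$ has been measured on a strain sweep above the sample's $\gamma_c$).

```python
import numpy as np

def stiffness_exponent(gamma, K, gamma_c, K_c, nu, W):
    """Per-sample power-law fit of K - K_c ~ (gamma - gamma_c)^f, restricted to
    the critical regime |gamma - gamma_c| * W^(1/nu) > 1 discussed in the text."""
    dgamma = gamma - gamma_c
    mask = (dgamma > 0) & (K > K_c) & (dgamma * W**(1.0 / nu) > 1.0)
    f, _ = np.polyfit(np.log(dgamma[mask]), np.log(K[mask] - K_c), 1)
    return f
```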
![ \[fig:9\] (a) Finite-size scaling of $K-K_c$ for a triangular network with $z=3.3$. The inset shows the collapse of data in the critical regime with $f=0.79 \pm 0.07$. (b) A similar finite-size scaling as in (a) for a 2D jammed-packing-derived model with $z=3.3$. A distinct analytic regime, i.e., a slope of $1.0$ can be observed in this model as $\gamma - \gamma_c \rightarrow 0$. The inset, however, shows the non-mean-field exponent $f=0.85 \pm 0.05$ in the critical regime. The finite-size dominated regime is shaded in both plots.](fig9){width="12cm" height="12cm"}
By performing an extensive finite-size scaling analysis of the stiffness data for the diluted triangular model in Fig. \[fig:9\] a, we find three distinct regimes: (i) a finite-size dominated region for $|\Delta \gamma| \times W^{1/\nu} \lesssim1.0$, (ii) a true critical regime for $1\lesssim |\Delta \gamma| \times W^{1/\nu}$ and (iii) an eventual large strain regime outside of the critical regime. By using the hyperscaling relation $f = d\nu -2$, $f$ is the only remaining free parameter used for the analysis in Fig. \[fig:9\] a. As shown in the inset of Fig. \[fig:9\] a, we are able to collapse the data in the critical regime by using $f=0.79 \pm 0.07$ for a randomly diluted triangular network with $z=3.3$. A similar finite-size scaling analysis performed for randomly diluted, 2D jammed-packing-derived networks with $z=3.3$ in Fig. \[fig:9\] b results in a consistent exponent $f=0.85 \pm 0.05$. In agreement with computational studies in 3D [@sharma_strain-controlled_2016; @sharma_strain-driven_2016], we also find a non-mean-field $f <1.0$ for 3D jammed-packing-derived networks with $z=3.3$ (see Fig. \[fig:A10\] in the Appendix). This exponent, however, is obtained using only one system size $W=20$. Further work will be needed for a detailed finite-size scaling analysis in 3D similar to Fig. \[fig:9\]. Nevertheless, prior work has shown a high degree of consistency between the 2D and (the somewhat more limited) 3D simulations. Moreover, experiments on collagen networks have so far shown consistency with 2D models [@sharma_strain-controlled_2016; @jansen_role_2018]. Thus, we have good reason to believe that our conclusions are not limited to idealized 2D systems.
We note that the exponents we observe are robust to changes or errors in the value of the discontinuity $K_c$ in the critical regime (ii) (see Fig. \[fig:A11\] in the Appendix). By performing the same analysis in Fig. \[fig:9\] a, for instance, but using the modulus discontinuity in the thermodynamic limit $K_c^\infty$ instead of sample-dependent $K_c$, we obtain the same scaling exponent $f$, provided that $|\Delta \gamma| \times W^{1/\nu}\gtrsim1$ (see Fig. \[fig:A12\] in the Appendix). Thus, we limit our analysis of the critical exponents to the regime (ii) with $|\Delta \gamma| \times W^{1/\nu}\gtrsim1$, where we find consistent values of $f\simeq0.79-0.85$, as also reported for Mikado networks previously in Ref. [@vermeulen_geometry_2017]. These results are, however, inconsistent with Ref. [@merkel_minimal-length_2019], where it was argued that $f=1$ should be generic for fiber networks. We note that it is possible to observe an apparent $f=1$ regime due to finite size effects, as we clearly observe in Fig. \[fig:9\] b when the system size is smaller than of order $|\Delta\gamma|^{-\nu}$. The apparent exponent $f$ in this case, however, would then not be a critical exponent [@stauffer_introduction_2003; @binder_monte_2010]. A natural explanation for an apparent exponent of $1.0$ here can simply be the first term in a scaling function that becomes analytic (and not critical) for a finite system, as has been argued for packings of soft, frictionless particles [@goodrich_finite-size_2012]. We note that the finite-size scaling analysis studied here is a rather general technique for understanding critical phenomena in finite-size computer simulations. Hence, we expect that a similar approach in thermal gel models with intermolecular interactions [@peleg_filamentous_2007; @kroger_formation_2008; @peleg_effect_2009] will provide insights about their critical phase transition.
As mentioned before, sub-isostatic central-force networks can be stabilized by adding bending resistance to the fibers. Figure \[fig:A13\] a in the Appendix shows the shear modulus versus strain for diluted triangular networks with different bending rigidities $\kappa$. For such bend-stabilized networks, the shear modulus is captured by the scaling form of Eq. \[eq:Widom\]. To find the exponent $\phi$ in Eq. \[eq:Widom\], we fit a power law to the stiffness data in the regime $\gamma < \gamma_c$, where $K \approx \kappa |\gamma - \gamma_c|^{f-\phi}$. For individual samples, we determine $\phi$ using the corresponding $f$ exponents already obtained for central-force networks. For a triangular network with $z=3.3$, we find $\phi = 2.64 \pm 0.12$, obtained using system size $W=100$ and $\kappa = 10^{-5}$. The inset of Fig. \[fig:A13\] b in the Appendix shows the distribution of $\phi$. Using these values of $f$ and $\phi$, a Widom-like scaling collapse corresponding to Eq. \[eq:Widom\] is shown in Fig. \[fig:A13\] b and c in the Appendix, for individual samples and for the ensemble average of the data, respectively.
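A minimal sketch of this power-law fit, assuming the sub-critical stiffness data and the previously determined $f$ are at hand (the synthetic numbers below are for illustration only):

```python
import numpy as np

def fit_phi(gamma, K, gamma_c, f):
    """Estimate phi from the bend-dominated branch below gamma_c,
    where K ~ kappa * |gamma - gamma_c|**(f - phi)."""
    dg = np.abs(np.asarray(gamma) - gamma_c)
    # linear fit in log-log space: log K = const + (f - phi) * log|Delta gamma|
    slope, _ = np.polyfit(np.log(dg), np.log(K), 1)
    return f - slope

# synthetic check with a known phi
f_true, phi_true, kappa = 0.79, 2.64, 1e-5
dg = np.logspace(-2.5, -1.0, 40)
K = kappa * dg**(f_true - phi_true)
print(fit_phi(0.1 - dg, K, gamma_c=0.1, f=f_true))   # recovers ~2.64
```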
Summary and Discussion
======================
In this work, we focus on the critical signatures of mechanical phase transitions in central-force fiber networks as a function of shear strain. As the applied strain approaches a critical value $\gamma_c$ from above, the stress is borne by a sparse, branch-like structure that is responsible for network stability. By analyzing various moments of the force distributions, we identify scaling exponents for these moments near the transition, similar to prior work on rigidity percolation [@hansen_multifractality_1988; @hansen_universality_1989; @arbabi_mechanics_1993; @sahimi_mechanics_1993]. We also find that the fractal dimension of the load-bearing structure at the critical strain appears to be $2.0$ in 2D. This is consistent with a finite value of the participation ratio $\psi$, as well as a finite discontinuity in the network stiffness $K$ in the thermodynamic limit $W\rightarrow\infty$.
Further, we study the self-averaging properties of this athermal critical phase transition. We observe a strong self-averaging off criticality, i.e., with relative variance $R_V(X) \sim W^{-d}$ for $X = E$, $\sigma$ and $K$. This is consistent with what is expected for thermal systems, based on the Brout argument [@brout_statistical_1959]. At criticality, however, as the correlation length $\xi$ reaches or becomes larger than the system size $W$, we find a weak self-averaging of all macroscopic properties $E$, $\sigma$, and $K$ at the critical strain. Specifically, $R_V(X) \sim W^{-a}$ with $0<a<d$. This weak self-averaging at the critical point is in agreement with thermal systems that satisfy the Harris criterion [@harris_effect_1974], i.e., for which the heat capacity exponent $\alpha<0$. As argued in Ref. [@shivers_scaling_2019], the network stiffness is analogous to heat capacity but with the stiffness exponent $f=-\alpha$. Thus, our observations of weak self-averaging provide further evidence for this analogy and suggest that the mechanical critical behavior along the line of transitions in Fig. \[fig:1\] should be insensitive to weak disorder.
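The relative-variance analysis behind these statements reduces to a log-log fit of $R_V(X)$ against $W$. A minimal sketch, assuming one ensemble of measured values of the observable $X$ (for example $K$ at fixed strain) per system width; the toy ensembles below are illustrative only:

```python
import numpy as np

def relative_variance(samples):
    """R_V(X) = Var(X) / <X>^2 over an ensemble of random realizations."""
    x = np.asarray(samples)
    return x.var() / x.mean() ** 2

def self_averaging_exponent(widths, ensembles):
    """Fit R_V ~ W^(-a): a = d indicates strong, 0 < a < d weak self-averaging."""
    rv = [relative_variance(e) for e in ensembles]
    slope, _ = np.polyfit(np.log(widths), np.log(rv), 1)
    return -slope

# toy ensembles with fluctuations shrinking as 1/W, so that R_V ~ W^-2 (strong, d = 2)
rng = np.random.default_rng(1)
widths = np.array([20, 40, 80, 160])
ensembles = [1.0 + rng.normal(0.0, 1.0 / W, size=500) for W in widths]
print(self_averaging_exponent(widths, ensembles))   # close to 2
```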
By simulating various network models, we confirm that fiber networks exhibit a finite shear modulus discontinuity $K_c$, in agreement with Refs. [@vermeulen_geometry_2017; @merkel_minimal-length_2019]. We observe a weakly decreasing trend in $K_c$ as a function of system size, but with a non-zero value in the thermodynamic limit. This discontinuity does, however, vanish as the network connectivity $z$ approaches the isostatic point $z_c$, consistent with Refs. [@wyart_elasticity_2008; @merkel_minimal-length_2019]. We also find that this discontinuity decreases as one approaches connectivity percolation. We show that allowing for this discontinuity slightly modifies the scaling exponents obtained previously for fiber networks using other methods. The discrepancies between these methods, however, are within the estimated error bars.
Moreover, by repeating the finite-size scaling analysis of the nonaffine fluctuations from Ref. [@shivers_scaling_2019], we again find evidence for the hyperscaling relation $f=d\nu -2$ [@shivers_scaling_2019] and for the non-mean-field nature of the transition. In estimating the stiffness exponent $f$, we perform an extensive finite-size scaling analysis that reveals three distinct regimes; besides a critical region with non-mean-field exponents, we find a finite-size dominated region for $|\Delta \gamma| \times W^{1/\nu} <1.0$, as well as an off-critical regime at large strains. In the finite-size dominated regime, we show that the stiffness exponent may appear to be consistent with the mean-field value $f=1$ (Fig. 9). As noted above, however, this may simply be due to analyticity for finite systems and may have no bearing on possible mean-field behavior. This may explain some reports of mean-field behavior, such as in Ref. [@merkel_minimal-length_2019]. It is important to emphasize that the scaling exponents cannot be reliably extracted from simulations too close to the transition, i.e., for $|\Delta\gamma|$ small enough that $|\Delta \gamma| \times W^{1/\nu} \lesssim1$.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported in part by the National Science Foundation Division of Materials Research (Grant DMR1826623) and the National Science Foundation Center for Theoretical Biological Physics (Grant PHY-1427654). J.L.S. acknowledges the support of the Riki Kobayashi Fellowship in Chemical Engineering and the Ken Kennedy Institute Oil & Gas HPC Conference Fellowship. We also acknowledge useful conversations with Andrea Liu, Tom Lubensky and Lisa Manning.
Appendix {#appendix .unnumbered}
========
Network models {#network-models .unnumbered}
--------------
![ \[fig:A1\] Snapshots of various network models. (a) Randomly diluted triangular network with $z=3.3$. (b) Randomly diluted Mikado model with $z=3.3$. (c) Randomly diluted 2D Voronoi network with $z=2.6$. (d) Randomly diluted 2D jammed-packing-derived network with $z=3.3$. (e) Randomly diluted 3D jammed-packing-derived network with $z=3.3$.](figA1){width="16cm" height="16cm"}
![ \[fig:A2\] The bond length distribution of the Mikado and Voronoi models. A similar exponential-like decay of bond lengths has been identified in real collagen networks.](figA2){width="8.5cm" height="8.5cm"}
![ \[fig:A3\] The critical strain versus connectivity for a randomly diluted triangular network with size $W=80$. Near the isostatic point $z_c$, the relation appears to be linear. Note that $z_c <4.0$ is due to finite-size effects.](figA3){width="8.5cm" height="8.5cm"}
Scaling of the moments of force distributions {#scaling-of-the-moments-of-force-distributions .unnumbered}
---------------------------------------------
![ \[fig:A4\] The scaling behavior of the first three moments of the force distribution versus excess strain above the critical point for a triangular network with $z=3.3$.](figA4){width="8.5cm" height="8.5cm"}
Nonaffine displacement fluctuations {#nonaffine-displacement-fluctuations .unnumbered}
-----------------------------------
In order to find the correlation length exponent $\nu$, we compute the nonaffine fluctuations in athermal fiber networks. The differential nonaffinity parameter $\delta \Gamma$ defined in Eq. 7 measures the nonaffine part of the node displacements in response to a small additional shear strain applied to a previously relaxed state.
![ \[fig:A5\] (a) The unscaled differential nonaffinity parameter defined in Eq. 7 in the main text for diluted triangular networks with $z=3.3$ and sizes as shown in the legend. The nonaffine displacement vectors of a single sample of size $W=50$ are shown for strain values below (1), at (2), and above (3) the critical strain $\gamma_c$. (b) Coarse-grained $\delta \Gamma$, obtained by locally averaging every two adjacent data points in (a).](figA5){width="12cm" height="12cm"}
Figure \[fig:A5\] a shows the differential nonaffinity for a diluted triangular network with $z=3.3$ and different system sizes. The nonaffine displacement vectors of the network's nodes for a single sample of size $W=50$ are shown at (1) $\gamma < \gamma_c$, (2) $\gamma = \gamma_c$, and (3) $\gamma > \gamma_c$. As can be seen from the displacement field, large nonaffine node displacements are evident at the critical strain, corresponding to the peak in the differential nonaffinity parameter. In order to reduce the noise in $\delta \Gamma$ before finite-size scaling, we apply a local averaging: every two adjacent values in Fig. \[fig:A5\] a are averaged, and the result is shown in Fig. \[fig:A5\] b. The finite-size collapse shown in Fig. 4 in the main text is indeed the collapse of the coarse-grained data in Fig. \[fig:A5\] b.
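This local averaging can be written compactly; a minimal sketch (array names illustrative):

```python
import numpy as np

def pairwise_average(gamma, dGamma):
    """Average every two adjacent (strain, nonaffinity) points to produce the
    coarse-grained curves used for the finite-size collapse."""
    gamma, dGamma = np.asarray(gamma), np.asarray(dGamma)
    n = 2 * (len(gamma) // 2)                  # drop a trailing odd point, if any
    g = gamma[:n].reshape(-1, 2).mean(axis=1)
    d = dGamma[:n].reshape(-1, 2).mean(axis=1)
    return g, d
```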
Finite size analysis of the participation ratio $\psi$ {#finite-size-analysis-of-the-participation-ratio-psi .unnumbered}
------------------------------------------------------
The distribution of the participation ratio at the critical strain, $\psi_c$, is shown in Fig. \[fig:A6\] for diluted triangular networks of various sizes. The distribution appears to be bimodal: the large peak is related to the branch-like force chains in the network, similar to the structure shown in Fig. 3 a, while the small peak at low participation ratio is due to finite-size effects. Although the location of the large peak depends on the network connectivity $z$, the small peak is the result of a small number of realizations that show a tensional path at the critical strain connecting the upper and lower sides of the periodic box. This tension line yields a plateau in the stiffness of the network (see Fig. \[fig:A7\] a). As the system size increases, the number of samples with this small tensional structure decreases, and such samples disappear completely in the thermodynamic limit. This tensional pattern is shown for a single sample in Fig. \[fig:A7\] b.
![ \[fig:A6\] The distributions of critical participation ratio $\psi_c$ for different sizes of a triangular network with $z=3.3$.](figA6){width="8.5cm" height="8.5cm"}
![ \[fig:A7\] (a) The participation ratio $\psi$ and stiffness $K$ for a single random realization exhibiting a plateau, for the diluted triangular model with $z=3.3$ and $W=100$. (b) The tensional line responsible for the plateau near the critical strain in (a), shown by plotting bonds with a thickness proportional to their tension at the highlighted strain point in (a).](figA7){width="10cm" height="10cm"}
![ \[fig:A8\] The critical participation ratio times the number of bonds, which is a measure of the mass of the tensional structure at the critical point, versus network size for a triangular model with $z=3.3$.](figA8){width="8.5cm" height="8.5cm"}
Finite size effects on the scaling exponent $f$ {#finite-size-effects-on-the-scaling-exponent-f .unnumbered}
-----------------------------------------------
![ \[fig:A9\] Comparison of two methods of finding $f$ for different sizes of a triangular network with $z=3.3$. The shaded area shows the standard deviations. The red triangles correspond to exponents obtained in a strain window that is fixed for all sizes, here $\Delta \gamma = 0.055 - 1.0$. The blue circles correspond to exponents obtained in a size-dependent strain window in which $1.0 <|\Delta \gamma| \times W^{1/\nu} < 30$ for all sizes.](figA9){width="8.5cm" height="8.5cm"}
$f$ exponent for a 3D network {#f-exponent-for-a-3d-network .unnumbered}
-----------------------------
We obtain $f = 0.84 \pm 0.13$ for the 3D jammed-packing-derived model with $z=3.3$. The data are collected for a single system size $W=20$, averaging over $40$ random samples. Assuming that the hyperscaling relation $f = d\nu -2$ holds in 3D, we use $\nu = (f + 2)/3 \approx 0.95$ for the following scaling plot. This network has $\gamma_c = 0.57 \pm 0.03$ and $K_c = 0.006 \pm 0.004$. Future studies in 3D will be needed both for a detailed finite-size scaling analysis similar to Fig. 9 in the main text and for testing the hyperscaling relation $f = d\nu -2$.
![ \[fig:A10\] Finite-size effects for a 3D packing-derived network with $z=3.3$ and $W=20$. In the critical region, we find a non-mean-field exponent $f = 0.84$. The finite-size dominated region is shaded.](figA10){width="8.5cm" height="8.5cm"}
The effect of $K_c$ on the exponent $f$ {#the-effect-of-k_c-on-the-exponent-f .unnumbered}
---------------------------------------
The scaling exponent $f$, which is obtained in the critical regime, is robust to errors in the value of the discontinuity $K_c$. Figure \[fig:A11\] shows that choosing different values of $K_c$ for a triangular network has a negligible effect on $f$. Although the jammed-packing-derived model exhibits a slope of $1.0$ in the finite-size dominated region, the triangular model behaves differently (see Fig. 9). This is because, in contrast to packing-derived networks, triangular networks are likely to be rigidified in the small-strain regime by a single straight path of bonds connecting the upper and lower boundaries of the simulation box. Therefore, the $K_c$ values observed for a triangular network at small strains are the result of these strand-like tension paths. As we increase the strain, more bonds become involved, and the slope in the finite-size dominated region approaches $1.0$, similar to packing-derived networks. This is clearly observed by choosing different $K_c$ values for the finite-size scaling analysis of triangular networks (see Fig. \[fig:A11\]).
![ \[fig:A11\] (a) Differential shear modulus versus $\gamma - \gamma_c$ for triangular networks with $z=3.3$. Plots (b)-(d) show the scaling analysis of the data in (a) using $K_c$ values corresponding to $\gamma - \gamma_c$ at vertical lines (1)-(3) in plot (a).](figA11){width="14cm" height="14cm"}
By using the modulus discontinuity in the thermodynamic limit $K_c^{\infty}$, we repeat the analysis performed in Fig. 9 a in the main text. As can be observed in Fig. \[fig:A12\], we find the same non-mean-field scaling exponent $f$.
![ \[fig:A12\] Finite-size scaling of the data in Fig. 9 a in the main text, using $K_c$ in the thermodynamic limit.](figA12){width="8.5cm" height="8.5cm"}
Fiber networks with bending interactions {#fiber-networks-with-bending-interactions .unnumbered}
----------------------------------------
Using central force networks, we are only able to investigate the positive side of the transition, i.e., $\gamma-\gamma_c \rightarrow 0^{+}$. In order to understand the system’s behavior below the critical point, we stabilize the networks by introducing weak bending interactions between bonds. Therefore, the elastic energy for the network has both stretching $E_s$ and bending $E_b$ contributions
$$E = E_s + E_b = \frac{\mu}{2} \sum_{ij}\frac{(\ell_{ij} - \ell_{ij,0})^2}{\ell_{ij,0}} +
\frac{\kappa}{2} \sum_{ijk}\frac{(\theta_{ijk} - \theta_{ijk,0})^2}{\ell_{ijk,0}},$$
in which the stretching part $E_s$ is the same as in Eq. 1 in the main text, $\kappa$ is the bending stiffness of individual fibers, $\theta_{ijk,0}$ is the angle between bonds $ij$ and $jk$ in the undeformed state, $\theta_{ijk}$ is the angle between those bonds after deformation, and $\ell_{ijk,0} = \frac{1}{2} (\ell_{ij,0} + \ell_{jk,0})$. Note that the bending energy is defined for consecutive bonds along each fiber on the triangular lattice. In simulations, we set $\mu=1$ and vary the dimensionless bending stiffness $\tilde{\kappa} = \kappa/\mu\ell_0^2$, where $\ell_0$ is the typical bond length ($\ell_0=1$ in lattice models).
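For illustration, a minimal and deliberately simplified sketch of how the energy above can be evaluated for a given configuration is shown below. Node positions, bond and triplet lists are assumed to be available as arrays (names illustrative), and periodic-boundary (e.g. Lees-Edwards) corrections to the bond vectors are omitted for brevity:

```python
import numpy as np

def elastic_energy(pos, bonds, l0, triplets, theta0, lt0, mu=1.0, kappa=1e-5):
    """Stretching + bending energy of the expression above (sketch only).
    pos      : (N, 2) node positions after deformation and relaxation
    bonds    : (Nb, 2) integer node indices (i, j);  l0  : (Nb,) rest lengths
    triplets : (Nt, 3) consecutive nodes (i, j, k) along a fiber
    theta0   : (Nt,) undeformed bond-bond angles;  lt0 : (Nt,) (l_ij0 + l_jk0)/2
    Periodic-image corrections are intentionally omitted in this sketch."""
    # stretching: (mu/2) * sum (l - l0)^2 / l0
    d = pos[bonds[:, 1]] - pos[bonds[:, 0]]
    l = np.linalg.norm(d, axis=1)
    E_s = 0.5 * mu * np.sum((l - l0) ** 2 / l0)

    # bending: (kappa/2) * sum (theta - theta0)^2 / lt0, with theta the angle
    # between consecutive bond vectors ij and jk (measured as in the rest state)
    b1 = pos[triplets[:, 1]] - pos[triplets[:, 0]]
    b2 = pos[triplets[:, 2]] - pos[triplets[:, 1]]
    cos_t = np.sum(b1 * b2, axis=1) / (np.linalg.norm(b1, axis=1) * np.linalg.norm(b2, axis=1))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    E_b = 0.5 * kappa * np.sum((theta - theta0) ** 2 / lt0)
    return E_s + E_b
```

In an actual simulation, the network is typically relaxed (the energy is minimized over the nonaffine node displacements) at each strain step, and the differential modulus then follows from the strain dependence of the relaxed energy or stress.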
The simulation procedure for networks with bending interactions is essentially the same as that discussed in the main text for central-force networks. The differential shear modulus $K$ versus shear strain is shown in Fig. \[fig:A13\] a for various values of the dimensionless bending rigidity $\tilde{\kappa}$.
![ \[fig:A13\] (a) The differential shear modulus versus strain for triangular networks with $W=100$, $z=3.3$ and varying dimensionless bending rigidity $\tilde{\kappa}$. (b) The Widom-like collapse of the individual samples in (a) according to Eq. 7 in the main text, using the exponent $f$ already obtained for central-force networks. Note that the finite-size-dominated data with $|\Delta \gamma| \times W^{1/\nu} < 1.0$ are removed from this plot. Inset: the distribution of $\phi$, collected in the $\gamma < \gamma_c$ regime of Eq. 7 in the main text. The $\phi$ values here are obtained using data with $\tilde{\kappa} = 10^{-5}$. The solid symbols correspond to $\phi$ values obtained using the ensemble-averaged $f$, while the empty symbols show the distribution of $\phi$ exponents collected using sample-specific $f$. (c) The Widom-like collapse as in (b), but for the ensemble average of the data; the finite-size-dominated data with $|\Delta \gamma| \times W^{1/\nu} < 1.0$ are again removed.](figA13){width="18cm" height="18cm"}
---
author:
- Erik Andersson
- Alex Bökmark
- Riccardo Catena
- Timon Emken
- Henrik Klein Moberg
- and Emil Åstrand
title: 'Projected sensitivity to sub-GeV dark matter of next-generation semiconductor detectors'
---
Introduction
============
The presence of Dark Matter (DM) in the Universe has firmly been established through increasingly accurate cosmological observations [@Bertone:2016nfn]. Evidence has been gathered in a wide range of physical scales, from sub-galactic scales to the largest scales we can probe in the Universe [@Bertone:2004pz]. This includes data on the vertical motion of stars in the solar neighbourhood [@Kuijken:1989hu], the rotation curve of spiral galaxies [@Rubin:1970zza], the velocity dispersion of galaxies in galaxy clusters [@Zwicky], gravitational lensing events [@Kaiser:1992ps], the dynamics of colliding clusters [@Clowe:2006eq], the large-scale cosmological structures [@Blumenthal:1984bp] and anisotropies in the cosmic microwave background temperature [@Ade:2015xua]. While the evidence for DM is strong, it is entirely based on gravitational effects, directly or indirectly related to the gravitational pull that DM exerts on visible matter and light. As a result, we still do not know whether or not DM is made of particles which have so far escaped detection. One promising approach to answer this question is the so-called DM direct detection technique [@Drukier:1983gj; @Goodman:1984dc].
Direct detection experiments search for DM-nucleus or -electron scattering events in low-background detectors located deep underground [@Undagoitia:2015gya]. Next generation direct detection experiments searching for signals of DM-electron interactions with germanium and silicon semiconductor detectors of mass in the 0.1 - 1 kg range [@Agnese:2016cpb; @Crisler:2018gci; @Aguilar-Arevalo:2019wdi] are of special interest to this work. In these detectors, the energy deposited in a DM-electron scattering event can cause an observable electronic transition from the valence to the conduction band of the semiconductor target. For kinematical reasons, this detection principle can outperform methods based on nuclear recoils in the search for DM particles in the 1 MeV to 1 GeV mass range [@Essig:2011nj]. Sub-GeV DM can also be searched for with, e.g. dual-phase argon [@Agnes:2018oej] and xenon [@Essig:2012yx; @Essig:2017kqs; @Aprile:2019xxb] targets, graphene [@Hochberg:2016ntt; @Geilhufe:2018gry], 3D Dirac materials [@Hochberg:2017wce; @Geilhufe:2019ndy; @Coskuner:2019odd], polar crystals [@Knapen:2017ekk], scintillators [@Derenzo:2016fse; @Blanco:2019lrf] and superconductors [@Hochberg:2015pha; @Hochberg:2015fth; @Hochberg:2019cyy]. For a comparison of the performance of different materials in the search for sub-GeV DM, see [@Griffin:2019mvc].
For the purposes of this paper, we divide detectors based on germanium and silicon semiconductor crystals into two categories: 1) “high-voltage” (HV) detectors; and 2) “charge-coupled device” (CCD) detectors. Detectors operating in HV mode have the capability to amplify the small charge produced by DM scattering in target crystals into a large phonon signal by applying a bias of about 100 V across the detector and exploiting the so-called Neganov-Trofimov-Luke effect [@Luke1988Dec]. The SuperCDMS experiment demonstrated that this approach allows to achieve sensitivity to single-charge production [@Agnese:2018col]. Similarly, CCD sensors can achieve single-charge sensitivity by measuring the charge collected by single pixels in the CCD device exploiting ultra-low readout noise techniques, as in “Skipper” CCDs [@Abramoff:2019dfb]. Currently operating experiments belonging to the first category include the HV mode run of the silicon SuperCDMS experiment, which delivered data corresponding to an exposure of 0.49 gram-days [@Agnese:2018col]. The null result reported by this search has been used to set a 90% C.L exclusion limit of $10^{-30}$ cm$^2$ on the cross section for DM-electron scattering for DM-particle masses around 1 MeV. For the future, the SuperCDMS collaboration plans to operate in the HV mode larger germanium and silicon detectors, reaching an exposure of about 50 and 10 kg-year, respectively [@Agnese:2016cpb]. Operating DM direct detection experiments belonging to the second category include the DAMIC and SENSEI experiments, both using CCD silicon sensors. For example, the null result of the SENSEI experiment has been used to set 90% C.L exclusion limits on the DM-electron scattering cross section for DM particle masses in the 0.5 - 100 MeV range, with a minimum excluded cross section of about $5\times 10^{-35}$ cm$^2$ for a DM mass of about 10 MeV (and short-range interactions) [@Abramoff:2019dfb]. For the future, the SENSEI and DAMIC collaborations aim at building CCD silicon detectors of 0.1 kg and 1 kg target mass, respectively [@Crisler:2018gci; @Aguilar-Arevalo:2019wdi].
Predictions for the rate of DM-induced electronic transitions in semiconductor crystals depend on a number of theoretical and experimental inputs [@Essig:2011nj; @Graham:2012su; @Lee:2015qva; @Essig:2015cda; @Roberts:2016xfw; @Crisler:2018gci; @Agnese:2018col; @Abramoff:2019dfb; @Aguilar-Arevalo:2019wdi; @Catena:2019gfa]. Firstly, they depend on the assumed DM-electron interaction model, e.g. on whether the interaction is long- or short-range and on its Lorentz structure [@Catena:2019gfa]. Secondly, they depend on the semiconductor band structure, and in particular on the initial and final state electron energy and wave functions [@Essig:2015cda]. They also depend on astrophysical inputs, such as the local DM density and velocity distribution, as well as on detector characteristics such as energy threshold, efficiency, and energy deposition to number of produced electron-hole pairs conversion, just to name a few.
Motivated by the recent experimental results reviewed above and by the improved understanding of the experimental backgrounds that these results have produced, this article aims at assessing the sensitivity to DM particles in the sub-GeV mass range of future direct detection experiments using germanium and silicon semiconductor crystals as target materials. We address this problem focusing on the so-called “dark photon” model [@Holdom:1985ag; @Essig:2011nj] as a framework to describe the interactions of DM in semiconductor crystals. We compute the projected sensitivities of future germanium and silicon detectors by comparing the null, i.e. background-only hypothesis to the alternative, i.e. background plus signal hypothesis using the likelihood ratio as a test statistic [@Cowan:2010js]. Doing so, we provide a detailed description of the background models used in our analysis. We present our results in terms of the DM-electron scattering cross section required to reject the null hypothesis in favour of the alternative one with a statistical significance corresponding to 3 or 5 standard deviations. We compute the significance for DM particle discovery using asymptotic formulae for the probability density function of the likelihood ratio [@Cowan:2010js], after explicitly validating them by means of Monte Carlo simulations. We also test the stability of our results under variations in the underlying astrophysical inputs. Our sensitivity study extends previous works, e.g. [@Essig:2015cda], by: 1) adopting a refined experimental background model 2) computing the projected sensitivity by using the likelihood ratio method; and 3) exploring the dependence of our results on the DM space and velocity distribution.
This paper is organised as follows. In Sec. \[sec:theory\], we review the theory of DM-electron scattering in silicon and germanium semiconductor crystals, while in Sec. \[sec:detectors\] we describe the efficiency, energy threshold and experimental backgrounds assumed when modelling future HV germanium and silicon detectors, as well as future silicon CCD detectors. We present our methodology and projected sensitivities to sub-GeV DM particles in Sec. \[sec:results\] and conclude in Sec. \[sec:conclusions\].
Dark matter scattering in semiconductor crystals {#sec:theory}
================================================
In this section, we review the theory of DM-electron scattering in semiconductor crystals. We start by presenting a general expression for the rate of DM-induced electronic transitions in condensed matter systems (Sec. \[sec:crystals\]). We then specialise this expression to the case of electronic transitions from the valence to the conduction band of germanium and silicon semiconductors (Sec. \[sec:band\]). As we will see, the transition rate found in Sec. \[sec:band\] depends on an integral over DM particle velocities (Sec. \[sec:kin\]) and on the amplitude for DM scattering by free electrons (Sec. \[sec:free\]).
Dark matter-induced electronic transitions {#sec:crystals}
------------------------------------------
The rate of DM-induced transitions from an initial electron state $|\mathbf{e}_1\rangle$ to a final electron state $|\mathbf{e}_2\rangle$ is [@Catena:2019gfa] $$\begin{aligned}
\mathscr{R}_{1\rightarrow 2}&=\frac{n_{\chi}}{16 m^2_{\chi} m^2_e}
\int \frac{{\rm d}^3 q}{(2 \pi)^3} \int {\rm d}^3 v f_{\chi}(\mathbf{v}) (2\pi) \delta(E_f-E_i) \overline{\left| \mathcal{M}_{1\rightarrow 2}\right|^2}\, ,\label{eq:transition rate}\end{aligned}$$ where $m_\chi$ is the DM particle mass, while $n_\chi=\rho_\chi/m_\chi$ and $f_\chi(\mathbf{v})$ are the local DM number density and velocity distribution, respectively. If not otherwise specified, we set the local DM mass density $\rho_\chi$ to $0.4$ GeV/cm$^3$ [@Catena:2009mf]. In all applications, for the local DM velocity distribution we assume [@Lewin:1995rx] $$\begin{aligned}
f_\chi(\mathbf{v})&= \frac{1}{N_{\rm esc}\pi^{3/2}v_0^3}\exp\left[-\frac{(\mathbf{v}+\mathbf{v}_\oplus)^2}{v_0^2} \right]
\times\Theta\left(v_{\rm esc}-|\mathbf{v}+\mathbf{v}_\oplus|\right)\, ,
\label{eq:fv}\end{aligned}$$ that is, a truncated Maxwell-Boltzmann distribution boosted to the detector rest frame. Here $N_{\rm esc}\equiv {\mathop{\mathrm{erf}}}(v_{\rm esc}/v_0)-2 (v_{\rm esc}/v_0)\exp(-v_{\rm esc}^2/v_0^2)/\sqrt{\pi}$ implies that $f_\chi(\mathbf{v})$ is unit-normalised. In Sec. \[sec:results\], we present our results by varying most probable speed $v_0$, detector’s velocity $v_\oplus$ and galactic escape velocity $v_{\rm esc}$ within their experimental uncertainties (see Sec. \[sec:astro\] for further details). The squared electron transition amplitude, $\overline{\left| \mathcal{M}_{1\rightarrow 2}\right|^2}$, depends on the initial and final state electron wave functions, $\psi_1$ and $\psi_2$, respectively, and on the amplitude for DM scattering by free electrons, $\mathcal{M}$. Without any further restriction on the amplitude $\mathcal{M}$, it can be written as [@Catena:2019gfa] $$\begin{aligned}
\overline{\left| \mathcal{M}_{1\rightarrow 2}\right|^2}\equiv \overline{\left|\int \frac{{\rm d}^3 k}{(2 \pi)^3} \, \psi_2^*(\mathbf{k}+\mathbf{q})
\mathcal{M}(\mathbf{q},\mathbf{v}_{\rm el}^\perp)
\psi_1(\mathbf{k}) \right|^2}\, , \label{eq:transition amplitude}\end{aligned}$$ where a bar denotes an average (sum) over initial (final) spin states. Here, $\mathbf{q}=\mathbf{p}-\mathbf{p}'$, with $\mathbf{p}$ and $\mathbf{p}'$ initial and final DM particle momenta, respectively, is the momentum transfer and we introduced $$\begin{aligned}
\mathbf{v}_{\rm el}^\perp &= \frac{\left( \mathbf{p} + \mathbf{p}' \right)}{2 m_{\chi}} - \frac{\left( \mathbf{k} + \mathbf{k}' \right)}{2 m_e} =\mathbf{v} - \frac{\mathbf{q}}{2\mu_{\chi e}} - \frac{\mathbf{k}}{m_e}\, ,\end{aligned}$$ where $\mathbf{v}\equiv \mathbf{p}/m_\chi$ is the incoming DM particle velocity, $m_e$ the electron mass, and $\mu_{\chi e}$ the reduced DM-electron mass. If the DM-electron scattering were elastic, $\mathbf{v}_{\rm el}^\perp\cdot \mathbf{q}=0$ would apply, justifying the notation. The initial and final state energies in Eq. (\[eq:transition rate\]) are defined as follows, $$\begin{aligned}
E_i &= m_\chi + m_e + \frac{m_\chi}{2}v^2 + E_1\, , \label{eq: energy initial}\\
E_f &= m_\chi + m_e + \frac{|m_\chi\mathbf{v}-\mathbf{q}|^2}{2m_\chi} + E_2\,, \label{eq: energy final}\end{aligned}$$ where we denote the electron initial and final energy by $E_1$ and $E_2$, and their difference by $\Delta E_{1\rightarrow 2}=E_2-E_1$.
In the case of the “dark photon” model for DM-electron interactions (introduced below in Sec. \[sec:free\]), the free electron scattering amplitude only depends on $q=|\mathbf{q}|$, $\mathcal{M}=\mathcal{M}(q)$, and Eq. (\[eq:transition rate\]) simplifies to [@Catena:2019gfa] $$\begin{aligned}
\mathscr{R}_{1\rightarrow 2}&=\frac{n_{\chi}}{16 m^2_{\chi} m^2_e}
\int \frac{{\rm d}^3 q}{(2 \pi)^3} \int {\rm d}^3 v f_{\chi}(\mathbf{v}) (2\pi) \delta(E_f-E_i) \overline{\left| \mathcal{M}(q) \right|^2} \left| f_{1\rightarrow 2}(\mathbf{q})\right|^2 \, ,
\label{eq:transition rate2}\end{aligned}$$ where $f_{1\rightarrow 2}$ is a scalar atomic form factor measuring the initial and final state wave function overlap [^1], $$\begin{aligned}
f_{1\rightarrow 2}(\mathbf{q}) \equiv \int \frac{{\rm d}^3 k}{(2 \pi)^3} \, \psi_2^*(\mathbf{k}+\mathbf{q}) \psi_1(\mathbf{k}) \,.
\label{eq:f12}\end{aligned}$$
Crystal form factors {#sec:band}
--------------------
The above expressions refer to generic $|\mathbf{e}_1\rangle\rightarrow|\mathbf{e}_2\rangle$ electronic transitions. We now specialise them to the case of transitions from a valence to the conduction band in a semiconductor crystal. In the case of crystals, electron states are labelled by a band index “$i$” and a wavevector “$\mathbf{k}$” in the first Brillouin zone (BZ). In Bloch form, the associated wave functions can be expressed as [@Essig:2015cda] $$\begin{aligned}
\psi_{i\mathbf{k}}(\mathbf{x}) = \frac{1}{\sqrt{V}} \sum_{\mathbf{G}} u_i(\mathbf{k}+\mathbf{G}) e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{x}} \,,\end{aligned}$$ where $V$ is the volume of the crystal, i.e. $\int d^3{x} \, e^{i\mathbf{k}\cdot \mathbf{x}} = (2\pi)^3 \delta^{(3)}(\mathbf{k})$ and $V=(2\pi)^3 \delta^{(3)}(0)$, while $\mathbf{G}$ is the reciprocal lattice vector. For the wave functions to be unit-normalised, the $u_i$ coefficients must fulfil $$\begin{aligned}
\sum_{\mathbf{G}}|u_i(\mathbf{k}+\mathbf{G})|^2 = 1\,.\end{aligned}$$ With these definitions, we can now interpret the transition rate in Eq. (\[eq:transition rate\]) as the transition rate, $\mathscr{R}_{i\mathbf{k}\rightarrow i'\mathbf{k}'}$, from the valence level $\{i,\mathbf{k}\}$ to the conduction level $\{i',\mathbf{k}'\}$. Summing over all final state energy levels, and all filled initial state energy levels (while taking into account the initial state electron spin degeneracy), the resulting transition rate, $\mathscr{R}_{\rm crystal}$, reads as follows [@Essig:2015cda] $$\begin{aligned}
\label{eq:Rcrystal}
\mathscr{R}_{\rm crystal} &= 2\sum_{i}\int_{\rm BZ} \frac{V {\rm d^3} k}{(2\pi)^3}\sum_{i'}\int_{\rm BZ} \frac{V {\rm d^3} k'}{(2\pi)^3}\, \mathscr{R}_{i\mathbf{k}\rightarrow i'\mathbf{k}'} \,,\nonumber\\
&= \frac{\rho_\chi}{m_\chi} \frac{N_{\rm cell} \alpha}{16 \pi m_\chi^2} \int {\rm d}\ln E_e \int {\rm d}\ln q \left(\frac{E_e}{q} \right) \eta(v_{\rm min}(q,E_e))\overline{\left| \mathcal{M}(q) \right|^2} \left| f_{\rm crystal}(q,E_e) \right|^2 \,,\end{aligned}$$ where the deposited energy $E_e$ is defined as $E_e\equiv\Delta E_{1\rightarrow 2}=E_{i'\mathbf{k}'}-E_{i\mathbf{k}}$, the free electron scattering amplitude, $\mathcal{M}$, is assumed to be a function of $q$ only, as in the case of the dark photon model, and $V=N_{\rm cell} V_{\rm cell}$. Here $V_{\rm cell}$ is the volume of individual cells and $N_{\rm cell}=M_{\rm target}/M_{\rm cell}$ is the number of cells in the crystal, where $M_{\rm cell}=2m_{\rm Ge} = 135.33$ GeV for germanium and $M_{\rm cell}=2m_{\rm Si} = 52.33$ GeV for silicon [@Essig:2015cda], while $M_{\rm target}$ is the detector target mass. The velocity integral in Eq. (\[eq:transition rate\]) is now reabsorbed in the definition of the $\eta(v_{\rm min})$ function (given below in Sec. \[sec:kin\]), while the crystal form factor $|f_{\rm crystal}(q,E_e)|^2$ is defined as [@Essig:2015cda] $$\begin{aligned}
\left| f_{\rm crystal}(q,E_e) \right|^2 &= \frac{2 \pi^2 (\alpha m_e^2 V_{\rm cell})^{-1}}{E_e} \sum_{ii'} \int_{\rm BZ} \frac{V_{\rm cell} {\rm d}^3k}{(2\pi)^3}\int_{\rm BZ} \frac{V_{\rm cell} {\rm d}^3k'}{(2\pi)^3} \nonumber\\
&\times E_e\delta(E_e-E_{i'\mathbf{k}'}+E_{i\mathbf{k}}) \sum_{\mathbf{G}'} q\delta(q-|\mathbf{k}'-\mathbf{k}+\mathbf{G}'|) \left| f_{[i\mathbf{k},i'\mathbf{k}',\mathbf{G}]} \right|^2 \,,
\label{eq:crystalff}\end{aligned}$$ where $$\begin{aligned}
f_{[i\mathbf{k},i'\mathbf{k}',\mathbf{G}]} = \sum_{\mathbf{G}} u_{i'}^*(\mathbf{k}'+\mathbf{G}+\mathbf{G}')u_i(\mathbf{k}+\mathbf{G})\end{aligned}$$ and $\alpha$ is the fine structure constant. In the numerical applications, we use germanium and silicon crystal form factors found in [@Essig:2015cda]. Following [@Essig:2015cda], we set $2\pi^2(\alpha m_e^2V_{\rm cell})^{-1}=1.8$ eV for germanium and $2\pi^2(\alpha m_e^2V_{\rm cell})^{-1}=2.0$ eV for silicon. Finally, we rewrite Eq. (\[eq:Rcrystal\]) in differential form, obtaining the differential rate of electronic transitions in germanium and silicon crystals, $$\begin{aligned}
\frac{{\rm d}\mathscr{R}_{\rm crystal}}{{\rm d} \ln E_e} =
\frac{\rho_\chi}{m_\chi} \frac{N_{\rm cell} \alpha}{16 \pi m_\chi^2} \int {\rm d}\ln q \left(\frac{E_e}{q} \right) \eta(v_{\rm min}(q,E_e))\overline{\left| \mathcal{M}(q) \right|^2} \left| f_{\rm crystal}(q,E_e) \right|^2 \,.
\label{eq:dRcrystal}\end{aligned}$$ In order to compare Eq. (\[eq:dRcrystal\]) with observations, one has to convert $E_e$ into a number of electron-hole pairs produced in a DM-electron scattering event, $Q$. The two quantities can be related as follows [@Essig:2015cda] $$\begin{aligned}
Q(E_e) = 1 + \lfloor (E_e - E_{\rm gap})/\varepsilon \rfloor\,,
\label{eq:conv}\end{aligned}$$ where $\lfloor \cdot \rfloor$ is the floor function. The observed band-gap, $E_{\rm gap}$, and mean energy per electron-hole pair, $\varepsilon$, are $E_{\rm gap}=0.67$ eV and $\varepsilon=2.9$ eV for germanium, while $E_{\rm gap}=1.11$ eV and $\varepsilon=3.6$ eV for silicon.
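For illustration, the conversion in Eq. (\[eq:conv\]) can be coded directly; a short sketch using the material parameters quoted above:

```python
import math

def n_pairs(E_e, material="Si"):
    """Number of electron-hole pairs Q produced by a deposited energy E_e (in eV),
    following the conversion formula above with the quoted band gaps and
    mean energies per electron-hole pair."""
    E_gap, eps = {"Si": (1.11, 3.6), "Ge": (0.67, 2.9)}[material]
    if E_e < E_gap:
        return 0                      # below the band gap no pair is produced
    return 1 + math.floor((E_e - E_gap) / eps)

print(n_pairs(10.0, "Si"))   # a 10 eV deposition in silicon gives Q = 3
```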
Kinematics {#sec:kin}
----------
The $\eta(v_{\rm min})$ function in Eq. (\[eq:Rcrystal\]) depends on the velocity distribution $f_\chi$ via a three-dimensional integral, $$\begin{aligned}
\eta(v_{\rm min}(q,E_e)) = \int {\rm d}^3 v \frac{f_\chi(\mathbf{v})}{v} \Theta\left(v - v_{\rm min}(q,E_e)\right)\,,
\label{eq:eta}\end{aligned}$$ where $v=|\mathbf{v}|$, and $v_{\rm min}(q,E_e)$ is the minimum velocity required to induce a transition between two electronic states separated by the energy gap $E_e$ when the momentum transferred in the process is $q$, $$\begin{aligned}
v_{\rm min}(q,E_e) = \frac{E_e}{q} + \frac{q}{2 m_\chi}\,.
\label{eq:vmin} \end{aligned}$$ The $\Theta$ function in Eq. (\[eq:eta\]) arises from the integration over the momentum transfer in Eq. (\[eq:transition rate\]). The minimum velocity $v_{\rm min}(q,E_e)$ can also be derived from energy conservation, which implies $$\begin{aligned}
\mathbf{v}\cdot \mathbf{q} = E_e+\frac{q^2}{2m_\chi}\, .
\label{eq: energy conservation v.q}\end{aligned}$$ Maximising Eq. (\[eq: energy conservation v.q\]) with respect to the momentum transfer $\mathbf{q}$ for a given energy gap $E_e$ gives back Eq. (\[eq:vmin\]). For the velocity distribution in Eq. (\[eq:fv\]) (and for $\mathcal{M}$ depending on $q$ only), the velocity integral in Eq. (\[eq:dRcrystal\]) can be evaluated analytically. For the result of this integration, see e.g. [@Lewin:1995rx].
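For reference, a simple Monte Carlo estimate of $\eta(v_{\rm min})$ for the velocity distribution of Eq. (\[eq:fv\]) can be sketched as follows (all speeds in km s$^{-1}$; the result must be multiplied by $c\simeq3\times10^{5}$ km s$^{-1}$ if $\eta$ is needed in natural units):

```python
import numpy as np

def eta_MC(v_min, v0=230.0, v_obs=240.0, v_esc=600.0, n=200_000, seed=0):
    """Mean inverse speed eta(v_min) for the truncated, boosted Maxwell-Boltzmann
    halo defined in the text, estimated by Monte Carlo.
    Speeds in km/s; returns eta in (km/s)^-1."""
    rng = np.random.default_rng(seed)
    # galactic-frame velocities u with f(u) ~ exp(-u^2/v0^2), truncated at v_esc
    u = rng.normal(scale=v0 / np.sqrt(2.0), size=(2 * n, 3))
    u = u[np.linalg.norm(u, axis=1) < v_esc][:n]
    # boost to the detector frame, taking the detector velocity along z
    v = u - np.array([0.0, 0.0, v_obs])
    speed = np.linalg.norm(v, axis=1)
    return np.mean(np.where(speed > v_min, 1.0 / speed, 0.0))

print(eta_MC(300.0))   # example: eta at v_min = 300 km/s, in (km/s)^-1
```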
Dark matter-electron interaction model {#sec:free}
--------------------------------------
In order to evaluate Eq. (\[eq:Rcrystal\]), we need to specify a model for DM-electron interactions from which to calculate $\mathcal{M}$. In this analysis, we focus on the so-called dark photon model [@Holdom:1985ag; @Essig:2011nj], which arises as an extension of the Standard Model (SM) of particle physics. In the dark photon model, the SM is extended by one DM candidate and one additional $U(1)$ gauge group under which only the DM particle candidate is charged. The associated gauge boson, here denoted by $A'_\mu$, is the dark photon. Radiative corrections are generically expected to generate a kinetic mixing term between dark and ordinary photon, i.e. $\epsilon F_{\mu\nu} F^{'\mu\nu}$ where $F_{\mu\nu}$ ($F^{'\mu\nu}$) is the photon (dark photon) field strength tensor and $\epsilon$ is a dimensionless mixing parameter. This kinetic mixing acts as a portal between the DM and SM sectors. After a field redefinition which diagonalises the photon and dark photon kinetic terms, and assuming that the DM candidate is a Dirac fermion, the dark photon model can be formulated in terms of the following Lagrangian [@Holdom:1985ag; @Essig:2011nj] $$\begin{aligned}
\mathscr{L} &= \mathscr{L}_{\rm SM} -\frac{1}{4} F^{'}_{\mu\nu} F^{'\mu\nu} + \frac{1}{2} m_{A'}^2 A_{\mu}^{'} A^{'\mu} \nonumber\\
& + \sum_i \bar{f}_i \left( -e q_i \gamma^\mu A_\mu - \epsilon e q_i \gamma^\mu A^{'}_\mu - m_i \right) f_i \nonumber\\
& + \bar{\chi} (-g_D\gamma^\mu A_\mu^{'} - m_\chi) \chi\,,
\label{eq:LDP}\end{aligned}$$ where $g_D$ is the gauge coupling associated with the additional $U(1)$ group, $m_{A'}$ and $m_{\chi}$ are the dark photon and DM particle mass, respectively, while $\chi$ and $f_i$ are four-component Dirac spinors for the DM particle and the SM fermions, respectively. In the second line of Eq. (\[eq:LDP\]), we denote by $q_i$ ($m_i$) the electric charge (mass) of the $f_i$ SM fermion. For the purposes of this analysis, we do not need to specify a mechanism for the spontaneous breaking of the additional $U(1)$ gauge group and the generation of the dark photon and DM particle mass. Within the dark photon model the squared modulus of the amplitude for DM-electron scattering can be written as follows [@Essig:2015cda] $$\begin{aligned}
\overline{\left| \mathcal{M}(q) \right|^2} = \frac{16\pi m_\chi^2 m_e^2}{\mu^2_{\chi e}} \sigma_e \left| F_{\rm DM} (q) \right|^2 \,,\end{aligned}$$ where $\mu_{\chi e}$ is the reduced DM-electron mass, $\sigma_e$ is a reference scattering cross section setting the strength of DM-electron interactions and $F_{\rm DM}$ is the “DM form factor” which encodes the $q$-dependence of the amplitude. It reads $F_{\rm DM}(q)=1$ for $q^2\ll m^2_{A'}$ (short-range or contact interaction) and $F_{\rm DM}(q)=q_{\rm ref}^2/q^2$ for $q^2\gg m^2_{A'}$ (long-range interaction), where we set the reference momentum $q_{\rm ref}$ to the value $q_{\rm ref}=\alpha m_e$, the typical momentum transfer in DM-induced electronic transitions.
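Putting these ingredients together, the shape of the differential rate in Eq. (\[eq:dRcrystal\]) can be evaluated numerically once a tabulated crystal form factor is available (e.g. from the publicly released tables of [@Essig:2015cda]). The sketch below performs the $q$-integral on a discrete grid; it omits the overall prefactor of Eq. (\[eq:dRcrystal\]) and all unit conversions, which must be restored before comparing to data, and the array and function names are illustrative:

```python
import numpy as np

def rate_shape(E_e_grid, q_grid, f_crys2, eta_of_vmin, m_chi, F_DM="heavy"):
    """q-integral of the differential rate formula above, up to the overall prefactor.
    E_e_grid    : deposited energies (eV);   q_grid : momentum transfers (eV)
    f_crys2     : array f_crys2[i, j] = |f_crystal(q_j, E_i)|^2 (assumed input)
    eta_of_vmin : vectorized callable, eta as a function of v_min in units of c
    m_chi       : DM mass in eV;  F_DM = 'heavy' (F=1) or 'light' (F=(q_ref/q)^2)."""
    alpha, m_e = 1.0 / 137.036, 511.0e3
    q_ref = alpha * m_e
    dlnq = np.gradient(np.log(q_grid))
    shape = np.zeros(len(E_e_grid))
    for i, E_e in enumerate(E_e_grid):
        v_min = E_e / q_grid + q_grid / (2.0 * m_chi)        # in units of c
        FDM2 = np.ones_like(q_grid) if F_DM == "heavy" else (q_ref / q_grid) ** 4
        integrand = (E_e / q_grid) * eta_of_vmin(v_min) * FDM2 * f_crys2[i]
        shape[i] = np.sum(integrand * dlnq)
    return shape
```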
Detector models {#sec:detectors}
===============
In this section we specify the experimental inputs we use and the assumptions we make to compute the statistical significance for DM discovery at direct detection experiments using germanium and silicon semiconductor crystals as target materials. We consider two classes of detectors separately: germanium and silicon HV detectors, as in future runs of the SuperCDMS experiment, and silicon CCDs, as in future runs of the DAMIC and SENSEI experiments.
High voltage Si/Ge detectors
----------------------------
We refer to HV detectors as experimental devices resembling the SuperCDMS experiment in the operating mode described in the recent analysis [@Agnese:2018col]. In this configuration, SuperCDMS exploits a 0.93 g high-purity silicon crystal instrumented on one side with transition-edge sensors and on the other side with an electrode made of an aluminum-amorphous silicon bilayer. This device can achieve single-charge resolution by exploiting the Neganov-Trofimov-Luke effect [@Luke1988Dec]. It consists in the emission of phonons generated by electron-hole pairs drifting across a bias of 140 V (the high voltage defining this operating mode). This effect can amplify the small charge signal associated with DM scattering in a HV detector into a large phonon signal.
In this work, we investigate the sensitivity of next generation HV detectors, taking the expected reach of SuperCDMS as a guideline [@Agnese:2016cpb]. Doing so, we focus on germanium and silicon targets separately, in that different exposures are planned for the two targets. More specifically, in the case of germanium HV detectors, we assume an exposure of 50 kg-year. For HV detectors using silicon targets, we assume an exposure of 10 kg-year (see Tab. 4 in [@Agnese:2016cpb] for further details).
Another important experimental input to our analysis is the detection efficiency of HV detectors. In general, only a fraction of DM-induced electronic transitions is expected to be successfully recorded by detectors used in DM direct detection experiments. The fraction of events that are successfully detected is called the detection efficiency. For both silicon and germanium HV detectors, we assume an energy-independent (i.e. $E_e$-independent) detection efficiency of 90%, which is expected to be a fairly good approximation for $Q$ between 1 and 8 [@Agnese:2018col]. When computing the expected number of DM signal events in a given $Q$-bin, we then multiply Eq. (\[eq:dRcrystal\]) by a detection efficiency factor $\xi_i=0.9$ for both germanium and silicon HV detectors. Here, “$i$” is an index labelling the $Q=i$ bin (see Sec. \[sec:results\] for further details about event binning).
We now describe the experimental background model used for HV detectors in our sensitivity study. As demonstrated recently [@Agnese:2018col], charge leakage is the dominant background source in the search for DM-electron scattering events with HV detectors for values of Q less than 3. Indeed, large electric fields used in HV detectors can ionise impurities within the experimental apparatus causing charge carriers to tunnel into the crystal, producing a background event. We model the event spectrum associated with this experimental background by interpreting the events measured in [@Agnese:2018col] (orange line in Fig. 3) as due to charge leakage, as the authors suggest. For larger values of $Q$, $\beta$’s and $\gamma$’s from the decay of radioactive isotopes originating from the experimental apparatus are also important [@Agnese:2016cpb]. In our sensitivity study, we model the Compton scattering of $\gamma$-rays from the decay of heavy radioactive isotopes as described in [@Barker2018Aug]. For the deposited energy spectrum induced by Compton scattering events from radiogenic $\gamma$’s, we assume a constant function of $E_e$, i.e. $f_C(E)=$ const., in the case of silicon HV detectors, and the following combination of error functions for germanium HV detectors [@Barker2018Aug] $$\begin{aligned}
f_{C}(E_e) = \mathscr{N}\left\{0.005 + \frac{1}{N_1} \sum_{i=K,L,M.N} 0.5 A_i \left[ 1 + {\mathop{\mathrm{erf}}}\left(\frac{E_e - \mu_i}{\sqrt{2}\sigma_i}\right) \right] \right\} \,,
\label{eq:fC}\end{aligned}$$ where $\mathscr{N}$ is a normalisation constant. In principle, Eq. (\[eq:fC\]) receives contributions from the $K$, $L$, $M$, and $N$ shells of germanium. In the energy range of interest, however, only the germanium $N$ shell contribution with input parameters $A_{N}/N_1=18.70$ MeV$^{-1}$, $\mu_N=0.04$ keV, $\sigma_N=13$ eV [@Barker2018Aug] needs to be considered. Both for germanium and silicon, we conservatively normalise $f_C(E_e)$ such that when integrated over the 0 - 50 eV range it gives 0.1 counts kg$^{-1}$day$^{-1}$ [@Agnese:2016cpb].
Finally, we assume that next generation HV germanium and silicon detectors will achieve single-charge resolution, which implies a sensitivity to energy depositions as low as the crystal’s band-gap, i.e. $Q_{\rm th}=1$, where $Q_{\rm th}$ is the experimental threshold, i.e. the minimum number of detectable electron-hole pairs.
CCD detectors
-------------
For CCD detectors, DAMIC and SENSEI are our reference experiments. Both experiments exploit silicon semiconductor crystals as a target and already reported results from the run of prototype detectors [@Crisler:2018gci; @Aguilar-Arevalo:2019wdi; @Abramoff:2019dfb]. For example, SENSEI recently reported data collected by using one silicon Skipper-CCD with a total active mass (before masking) of 0.0947 gram and consisting of $\mathcal{O}(10^6)$ pixels [@Abramoff:2019dfb]. In a CCD, DM-electron scattering events can cause electronic transitions from the valence band to the conduction band of crystals in the device’s pixels. The excited electron subsequently creates additional electron-hole pairs for each 3.6 eV of excitation energy above the band gap which are then moved pixel-by-pixel to one of the CCD corners for the read-out.
Exploring the sensitivity to sub-GeV DM of CCD detectors, we focus on two benchmark values for the experimental exposure. These are: 1) 100 g-year, which is the exposure SENSEI aims at [@Crisler:2018gci]; and 2) 1 kg-year, as expected for the next version of the DAMIC experiment, DAMIC-M [@Aguilar-Arevalo:2019wdi]. As far as the detection efficiency of CCD detectors is concerned, we use the values reported in Tab. I of [@Abramoff:2019dfb] in the “DM in single pixel” line. These values are: $\xi_1=1$, $\xi_2=0.62$, $\xi_3=0.48$, $\xi_4=0.41$, $\xi_5=0.36$ for the $Q=1$, $Q=2$, $Q=3$, $Q=4$, and $Q=5$ bins, respectively. We set $\xi_i=0.36$, for larger values of $Q$ ($i>5$). Consequently, when computing the expected number of DM signal events in the $Q=i$ bin, we multiply Eq. (\[eq:dRcrystal\]) by $\xi_i$.
“Dark current” events are expected to be the dominant experimental background source for next generation CCD detectors [@Abramoff:2019dfb]. These events are evenly distributed across CCDs and are due to thermal fluctuations that excite electrons from the valence to the conduction band in crystals. As in Tab. 1 of [@Essig:2015cda], we assume that the number, $\mathcal{N}_{i}$, of dark current events generating at least $i$ electron-hole pairs in a crystal can be estimated in terms of Poisson probabilities $\mathscr{P}$, $$\begin{aligned}
\mathcal{N}_{i} = (n_{\rm ccd} M_{\rm pix}) (\Delta_T/{\rm hr}) \sum_{k\geq i}\mathscr{P}(\Gamma\times{\rm (counts/pixel/hr})^{-1}|k) \,,\end{aligned}$$ where $n_{\rm ccd}$ is the number of CCDs in the detector, $M_{\rm pix}=8\times 10^6$ is the number of pixels in a CCD, $\Delta_T$ is the time of data taking in hours and $\Gamma$ is the dark current rate in counts/pixel/hr. The number of these background events in the $Q=i$ bin is then given by $\mathcal{N}_{i}-\mathcal{N}_{i+1}$. As anticipated, we consider two benchmark cases: 1) $n_{\rm ccd}=40$, $\Delta T=24\times365$ hr, corresponding to an exposure of 100 g-year, assuming that the mass of a single CCD is $m_{\rm ccd}=$2.5 g; and 2) $n_{\rm ccd}=40$, $\Delta T=24\times365\times10$ hr, corresponding to an exposure of 1 kg-year, again with $m_{\rm ccd}=$2.5 g. In each of the two scenarios above, we present our results for two extreme values of the dark current rate, namely $\Gamma=5\times10^{-3}$ counts/pixel/day and $\Gamma=1\times10^{-7}$ counts/pixel/day.
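A short sketch of this Poisson estimate of the dark-current background is given below (illustrative; note that the benchmark rates above are quoted per day, while the counting here is done per pixel-hour):

```python
from scipy.stats import poisson

def dark_current_counts(Gamma_per_hr, n_ccd=40, M_pix=8e6, hours=24 * 365, i_max=10):
    """Expected number of dark-current events per Q-bin.
    Gamma_per_hr : dark current rate in counts/pixel/hr."""
    pixel_hours = n_ccd * M_pix * hours
    # N_i = number of pixel-hours recording at least i electron-hole pairs
    N_at_least = [pixel_hours * poisson.sf(i - 1, Gamma_per_hr) for i in range(1, i_max + 2)]
    # events with exactly i pairs: N_i - N_{i+1}
    return [N_at_least[i] - N_at_least[i + 1] for i in range(i_max)]

# benchmark: Gamma = 5e-3 counts/pixel/day and a one-year run of 40 CCDs
print(dark_current_counts(5e-3 / 24.0)[:3])
```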
Similarly to the case of HV silicon detectors, for the deposited energy spectrum induced by Compton scattering events from radiogenic $\gamma$’s, $f_C(E_e)$, we assume a constant function of $E_e$. Conservatively, we normalise $f_C(E_e)$ to 0.1 counts kg$^{-1}$day$^{-1}$ over the energy range 0 - 50 eV [@Agnese:2016cpb].
Background events due to voltage variations in the amplifiers used during read-out are assumed to be negligible, as they can be vetoed by means of a periodic read-out [@Abramoff:2019dfb]. The measured rate at SENSEI for this class of background events is of the order of $10^{-3}$ events/pixel/day and we assume that comparable rates will be achieved at next generation silicon CCD detectors.
Also in the case of silicon CCD detectors, we assume single-charge resolution, which implies $Q_{\rm th}=1$.
Projected sensitivity {#sec:results}
=====================
In Sec. \[sec:stat\], we introduce the statistical method used to compute the significance for a DM particle discovery at future germanium and silicon detectors. We present our results in Sec. \[sec:num\], and investigate their stability under variations in the underlying astrophysical parameters in Sec. \[sec:astro\].
Methodology {#sec:stat}
-----------
We compute the significance for DM particle discovery at a given experiment, $\mathcal{Z}$, using the likelihood ratio [@Cowan:2010js], $$q_0 = -2 \ln \frac{\mathscr{L}(\mathbf{d}|0)}{\mathscr{L}(\mathbf{d}|\hat{\sigma}_e)} \,,
\label{eq:q}$$ as a test statistic. In Eq. (\[eq:q\]), $\mathscr{L}(\mathbf{d}|\sigma_e)$ is the likelihood function, $\hat{\sigma}_e$ the value of $\sigma_e$ that maximises $\mathcal{L}(\mathbf{d}|\sigma_e)$ and ${\mathbf{d}}=(\mathscr{N}_1,\dots,\mathscr{N}_n$) a dataset. Here, $\mathscr{N}_i$ is the observed number of electronic transitions in the $i$-th $Q$-bin and the total number of bins in the $Q$ variable is assumed to be $n$. Notice that the larger $q_0$, the worse $\sigma_e=0$ fits the data $\mathbf{d}$ and that for $\hat{\sigma}_e=0$, $q_0$ takes its minimum value, i.e. zero. By repeatedly simulating $\mathbf{d}$ under the null hypothesis, i.e. $\sigma_e=0$ (for given $m_\chi$, $\rho_\chi$, $v_0$, $v_{\oplus}$ and $v_{\rm esc}$), we obtain a probability density function for $q_0$ denoted here by $f_0$. Similarly, by repeatedly simulating $\mathbf{d}$ under the alternative hypothesis, i.e. $\sigma_e=\bar{\sigma}_e\neq0$ (for given $m_\chi$, $\rho_\chi$, $v_0$, $v_{\oplus}$ and $v_{\rm esc}$), we obtain $f$, i.e. the probability density function of $q_0$ under the alternative hypothesis. The significance $\mathcal{Z}$ is then given by $$\mathcal{Z} = \Phi^{-1}(1-p)\,,
\label{eq:Z}$$ where $\Phi$ is the cumulative distribution function of a Gaussian probability density of mean 0 and variance 1, while $$p = \int^{\infty}_{q_{\rm med}} {\rm d}q_0\,f_0(q_0)\,,$$ and $q_{\rm med}$ is the median of $f$. Notice that the significance given in Eq. (\[eq:Z\]) depends on $\bar{\sigma}_e$ as well as on the assumed values for $m_\chi$, $\rho_\chi$, $v_0$, $v_{\oplus}$ and $v_{\rm esc}$. While both $f_0$ and $f$ can in principle be obtained via Monte Carlo simulations, asymptotically (i.e. in the large sample limit) $f_0$ is expected to obey a “half chi-square distribution” for one degree of freedom, $\frac{1}{2}\chi_1^2$ [@Cowan:2010js]. For a few benchmark values of $m_\chi$, $\rho_\chi$, $v_0$, $v_{\oplus}$ and $v_{\rm esc}$, we verified that $f_0$ is very well approximate by $\frac{1}{2}\chi_1^2$ by comparing the latter with the distribution of $q$ found from 10 million Monte Carlo simulations of $\mathbf{d}$. Specifically, we find that the relative difference between the cumulative distribution functions of (the Monte Carlo generated) $f_0$ and $\frac{1}{2}\chi_1^2$ is of the order of $10^{-7}$ around $\mathcal{Z}=5$. In order to speed up our numerical calculations, we therefore assume that the probability density function $f_0$ can be approximated by $\frac{1}{2}\chi_1^2$. At the same time, we compute the probability density function $f$ and $q_{\rm med}$ from Monte Carlo simulations of $\mathbf{d}$. For the likelihood, we assume $$\mathscr{L}(\mathbf{d}|\sigma_e) = \prod_{i=1}^n \frac{\left(\mathscr{B}_i+\mathscr{S}_i(\sigma_e)\right)^{\mathscr{N}_i}}{\mathscr{N}_i!} e^{-\left(\mathscr{B}_i+\mathscr{S}_i(\sigma_e)\right)} \,,$$ where $$\mathscr{S}_i(\sigma_e) = \mathcal{E} \xi_i \int_{Q=i}^{Q=i+1} {\rm d} Q \, \frac{{\rm d}\mathscr{R}_{\rm crystal}}{{\rm d} Q} \,,
\label{eq:S_i}$$ while $\mathcal{E}$ is the experimental exposure and, finally, $\mathscr{B}_i$ is the total number of expected background events in the $Q=i$ bin. In order to evaluate Eq. (\[eq:S\_i\]), we compute ${\rm d} Q/{\rm d} E_e$ from Eq. (\[eq:conv\]). We introduced our assumptions for $\mathcal{E}$, $\xi_i$ and $\mathscr{B}_i$ in Sec. \[sec:detectors\] focusing on CCD and HV detectors separately. When computing the joint significance for DM discovery at two experiments $A$ and $B$, we repeat the above procedure now with a likelihood function given by $\mathscr{L} = \mathscr{L}_A \mathscr{L}_B$, where $\mathscr{L}_A$ and $\mathscr{L}_B$ are the likelihood functions for the experiments $A$ and $B$, respectively.
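A compact numerical sketch of this procedure for a single experiment is given below. It assumes that the per-bin signal prediction scales linearly with $\sigma_e$, as it does in Eq. (\[eq:S\_i\]), uses the asymptotic half-chi-square form of $f_0$ to convert the median of $q_0$ into a significance, and employs purely illustrative numbers for the expected background and signal:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2, norm

def q0(data, B, S_ref):
    """Likelihood-ratio test statistic for Poisson counts per Q-bin.
    B     : expected background per bin
    S_ref : expected signal per bin at a reference cross section, so that
            S(sigma_e) = mu * S_ref with mu = sigma_e / sigma_ref >= 0."""
    def nll(mu):                               # -ln L, dropping data-only constants
        lam = B + mu * S_ref
        return np.sum(lam - data * np.log(lam))
    res = minimize_scalar(nll, bounds=(0.0, 1e4), method="bounded")
    mu_hat, nll_hat = (res.x, res.fun) if res.fun < nll(0.0) else (0.0, nll(0.0))
    return 2.0 * (nll(0.0) - nll_hat)

def median_significance(B, S, n_mc=2000, seed=0):
    """Median discovery significance: simulate data under background + signal,
    take the median q0 and convert with the asymptotic (1/2) chi^2_1 null."""
    rng = np.random.default_rng(seed)
    q_med = np.median([q0(rng.poisson(B + S), B, S) for _ in range(n_mc)])
    p = 0.5 * chi2.sf(q_med, df=1)             # p-value under the half-chi-square null
    return norm.isf(p)                         # Z = Phi^-1(1 - p), ~ sqrt(q_med)

# toy example with four Q-bins (numbers are illustrative only)
B = np.array([100.0, 10.0, 1.0, 0.1])
S = np.array([20.0, 8.0, 2.0, 0.5])
print(median_significance(B, S))
```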
![Contours of constant statistical significance, $\mathcal{Z}=3$, in the ($m_\chi$, $\bar{\sigma}_e$) plane for $\rho_\chi=0.4$ GeV cm$^{-3}$, $v_0=230$ km s$^{-1}$, $v_{\oplus}=240$ km s$^{-1}$ and $v_{\rm esc}=600$ km s$^{-1}$. Left and right panels correspond to models where $F_{\rm DM}(q^2)=1$ and $F_{\rm DM}(q^2)=q_{\rm ref}^2/q^2$, respectively. In both panels, distinct coloured lines refer to different experimental setups: 1) CCD silicon detector with $\mathcal{E}=0.1$ kg-year and a high dark current (DC) rate, $\Gamma=5\times10^{-3}$ counts/pixel/day (blue); 2) CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=5\times10^{-3}$ counts/pixel/day (red); 3) CCD silicon detector with $\mathcal{E}=0.1$ kg-year and a low DC rate of $\Gamma=1\times10^{-7}$ counts/pixel/day (orange); 4) CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day (violet); 5) HV silicon detector with $\mathcal{E}=10$ kg-year (green); 6) HV germanium detector with exposure of $\mathcal{E}=50$ kg-year (light blue); and, finally, 7) a HV germanium detector with $\mathcal{E}=50$ kg-year that has reported data together with a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day (brown). Along these contours, the null hypothesis can be rejected with a significance of 3 standard deviations by one or a combination of experiments.[]{data-label="fig:Z3"}](./figures_v2/Z3_FDM1_detreach-eps-converted-to.pdf){width="\textwidth"}
![Contours of constant statistical significance, $\mathcal{Z}=3$, in the ($m_\chi$, $\bar{\sigma}_e$) plane for $\rho_\chi=0.4$ GeV cm$^{-3}$, $v_0=230$ km s$^{-1}$, $v_{\oplus}=240$ km s$^{-1}$ and $v_{\rm esc}=600$ km s$^{-1}$. Left and right panels correspond to models where $F_{\rm DM}(q^2)=1$ and $F_{\rm DM}(q^2)=q_{\rm ref}^2/q^2$, respectively. In both panels, distinct coloured lines refer to different experimental setups: 1) CCD silicon detector with $\mathcal{E}=0.1$ kg-year and a high dark current (DC) rate, $\Gamma=5\times10^{-3}$ counts/pixel/day (blue); 2) CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=5\times10^{-3}$ counts/pixel/day (red); 3) CCD silicon detector with $\mathcal{E}=0.1$ kg-year and a low DC rate of $\Gamma=1\times10^{-7}$ counts/pixel/day (orange); 4) CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day (violet); 5) HV silicon detector with $\mathcal{E}=10$ kg-year (green); 6) HV germanium detector with exposure of $\mathcal{E}=50$ kg-year (light blue); and, finally, 7) a HV germanium detector with $\mathcal{E}=50$ kg-year that has reported data together with a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day (brown). Along these contours, the null hypothesis can be rejected with a significance of 3 standard deviations by one or a combination of experiments.[]{data-label="fig:Z3"}](./figures_v2/Z3_FDMq2_detreach-eps-converted-to.pdf){width="98.00000%"}
Numerical results {#sec:num}
-----------------
We now present the results of our sensitivity study for future DM experiments based on germanium and silicon semiconductor detectors. We focus on the HV and CCD operating modes described in Sec. \[sec:detectors\] and the dark photon model reviewed in Sec. \[sec:theory\]. We present our results in terms of DM-electron scattering cross section, $\bar{\sigma}_e$, required to reject the null, i.e. background only hypothesis with a statistical significance corresponding to 3 or 5 standard deviations as a function of the DM particle mass, and for benchmark values of $\rho_\chi$, $v_0$, $v_{\oplus}$ and $v_{\rm esc}$. We investigate the dependence of our results on the local DM density and escape velocity, most probable DM speed and detector speed in the galactic rest frame in the next subsection.
Fig. \[fig:Z3\] shows the smallest cross section value, $\bar{\sigma}_e$, required to reject the background only hypothesis with a statistical significance of at least 3 standard deviations when the DM particle mass varies in the 1 MeV - 1 GeV range. We obtain such $\mathcal{Z}=3$ contours using the likelihood ratio method described in Sec. \[sec:stat\]. Results are presented for $\rho_\chi=0.4$ GeV cm$^{-3}$, $v_0=230$ km s$^{-1}$, $v_{\oplus}=240$ km s$^{-1}$ and $v_{\rm esc}=600$ km s$^{-1}$. The left panel corresponds to the case $F_{\rm DM}(q^2)=1$, whereas in the right panel we assume $F_{\rm DM}(q^2)=q_{\rm ref}^2/q^2$. In both panels, lines with different colours correspond to distinct detectors, background assumptions or exposures. Specifically, we consider seven different experimental setups:
1. CCD silicon detector operating with an exposure of $\mathcal{E}=0.1$ kg-year and a high dark current rate of $\Gamma=5\times10^{-3}$ counts/pixel/day,
2. CCD silicon detector operating with an exposure of $\mathcal{E}=1$ kg-year and a high dark current rate of $\Gamma=5\times10^{-3}$ counts/pixel/day,
3. CCD silicon detector operating with an exposure of $\mathcal{E}=0.1$ kg-year and a low dark current rate of $\Gamma=1\times10^{-7}$ counts/pixel/day,
4. CCD silicon detector operating with an exposure of $\mathcal{E}=1$ kg-year and a low dark current rate of $\Gamma=1\times10^{-7}$ counts/pixel/day,
5. HV silicon detector operating with an exposure of $\mathcal{E}=10$ kg-year,
6. HV germanium detector operating with an exposure of $\mathcal{E}=50$ kg-year, and finally
7. the case in which two distinct experiments have simultaneously reported data, namely a HV germanium detector with $\mathcal{E}=50$ kg-year and a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day (i.e. the best-case scenario).
![Same as Fig. \[fig:Z3\] but for a statistical significance corresponding to 5 standard deviations, i.e. $\mathcal{Z}=5$.[]{data-label="fig:Z5"}](./figures_v2/Z5_FDM1_detreach-eps-converted-to.pdf){width="\textwidth"}
![Same as Fig. \[fig:Z3\] but for a statistical significance corresponding to 5 standard deviations, i.e. $\mathcal{Z}=5$.[]{data-label="fig:Z5"}](./figures_v2/Z5_FDMq2_detreach-eps-converted-to.pdf){width="98.00000%"}
For $m_\chi$ larger than about 10 MeV, HV detectors are more sensitive to DM than CCD detectors operating in the first three modes described above. In the same mass range, however, the projected sensitivity of CCD silicon detectors operating with an exposure of $\mathcal{E}=1$ kg-year and a low dark current rate of $\Gamma=1\times10^{-7}$ counts/pixel/day is comparable with the one of a HV germanium detector operating with an exposure of $\mathcal{E}=50$ kg-year. Below $m_\chi=10$ MeV, HV detectors rapidly lose sensitivity because of the large number and the energy spectrum of charge leakage background events assumed for HV experiments in this study (see Sec. \[sec:detectors\]). In this second mass range, CCD silicon detectors are found to be more sensitive to DM than germanium and silicon HV detectors (at least for $m_\chi$ as low as 2-3 MeV). Overall, the projected sensitivity of CCD silicon detectors operating with an exposure of $\mathcal{E}=1$ kg-year and a low dark current rate of $\Gamma=1\times10^{-7}$ counts/pixel/day is high (i.e. the cross section corresponding to $\mathcal{Z}=3$ is comparably small) over the whole range of DM particle masses considered here. This is especially true when $F_{\rm DM}(q^2)=q_{\rm ref}^2/q^2$, as illustrated in the right panel of Fig. \[fig:Z3\]. Finally, the $\mathcal{Z}=3$ contour that we find when a HV germanium detector with $\mathcal{E}=50$ kg-year and a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day have simultaneously reported a DM signal is comparable with the contour we obtain for a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ (for both choices of $F_{\rm DM}$).
![DM-electron scattering cross section, $\bar{\sigma}_e$, required to reject the null hypothesis with a statistical significance corresponding to $\mathcal{Z}=5$ as a function of $\rho_\chi$ for a HV germanium detector with $\mathcal{E}=50$ kg-year, $F_{\rm DM}(q^2)=1$, $m_\chi=100$ MeV, $v_0=230$ km s$^{-1}$, $v_{\oplus}=240$ km s$^{-1}$ and, finally, $v_{\rm esc}=600$ km s$^{-1}$.[]{data-label="fig:rho"}](./figures_v2/RhoHVGe100MeVFDM1-eps-converted-to.pdf){width="50.00000%"}
Similarly, Fig. \[fig:Z5\] shows the smallest cross section value required to exclude the background only hypothesis with a statistical significance corresponding to at least 5 standard deviations. We obtain such $\mathcal{Z}=5$ contours in the ($m_\chi$, $\bar{\sigma}_e$) plane for $\rho_\chi=0.4$ GeV cm$^{-3}$, $v_0=230$ km s$^{-1}$, $v_{\oplus}=240$ km s$^{-1}$ and $v_{\rm esc}=600$ km s$^{-1}$. Compared to the $\mathcal{Z}=3$ case, the required cross section values are larger at each DM particle mass, but, as expected, above and below $m_\chi=10$ MeV the relative sensitivity of HV and CCD detectors is qualitatively unchanged.
Astrophysical uncertainties {#sec:astro}
---------------------------
We conclude this section by investigating the stability of our conclusions under variations of the astrophysical parameters $\rho_\chi$, $v_0$, $v_{\oplus}$ and $v_{\rm esc}$ governing the local space and velocity distribution of DM particles. We start by focusing on the local DM density, $\rho_\chi$, upon which the rate of DM-induced electron transitions in semiconductor crystals, Eq. (\[eq:dRcrystal\]), linearly depends.
Fig. \[fig:rho\] shows the value of the DM-electron scattering cross section, $\bar{\sigma}_e$, required to reject the null hypothesis with a statistical significance corresponding to $\mathcal{Z}=5$ as a function of the local DM density for a HV germanium detector with $\mathcal{E}=50$ kg-year. Here, we assume $F_{\rm DM}(q^2)=1$ and set $m_\chi=100$ MeV, $v_0=230$ km s$^{-1}$, $v_{\oplus}=240$ km s$^{-1}$ and $v_{\rm esc}=600$ km s$^{-1}$. As expected, we find that the value of $\bar{\sigma}_e$ solving $\mathcal{Z}=5$ is inversely proportional to $\rho_\chi$.
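Because the rate in Eq. (\[eq:dRcrystal\]) is proportional to the product $\rho_\chi\,\bar{\sigma}_e$, this inverse proportionality can also be used to rescale a quoted sensitivity to a different assumed local DM density, as in the short sketch below (the reference numbers are placeholders, not values read off Fig. \[fig:rho\]).

```python
# Rescaling a quoted sensitivity to another local DM density.  Because the
# event rate scales as rho_chi * sigma_e, the cross section giving a fixed
# significance scales as 1/rho_chi.  Reference values below are hypothetical.
rho_ref, sigma_ref = 0.4, 1.0e-41   # GeV cm^-3 and cm^2 at the reference density
for rho in (0.2, 0.3, 0.4, 0.5, 0.6):
    sigma_z5 = sigma_ref * rho_ref / rho
    print(f"rho_chi = {rho:.1f} GeV cm^-3  ->  sigma_e(Z=5) ~ {sigma_z5:.2e} cm^2")
```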
Similarly, Fig. \[fig:vel\] shows the value of $\bar{\sigma}_e$ required to reject the null hypothesis with a statistical significance of 5 standard deviations as a function of the detector speed in the galactic rest frame (left panel), of the most probable DM speed (central panel) and of the local escape velocity (right panel) for a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day. In all panels, we separately consider both $F_{\rm DM}(q^2)=1$ (blue lines) and $F_{\rm DM}(q^2)=q^2_{\rm ref}/q^2$ (red lines). Finally, we assume $m_\chi=1$ MeV and set $\rho_\chi$ to $0.4$ GeV cm$^{-3}$. In all panels in Fig. \[fig:vel\], we find that the value of $\bar{\sigma}_e$ solving $\mathcal{Z}=5$ varies by a factor of a few over the range of astrophysical parameters considered here. As expected, we also find that astrophysical uncertainties have a smaller impact on our results for larger values of the DM particle mass. Notice also that in this analysis we treated $v_0$ and $v_\oplus$ as independent parameters. This approach can account for the outcome of hydrodynamical simulations [@Bozorgnia:2017brl] and generalises the so-called Standard Halo Model, where the most probable speed, $v_0$, is set to the speed of the local standard of rest [@Freese:2012xd].
![Scattering cross section, $\bar{\sigma}_e$, required to reject the null hypothesis with a statistical significance of $\mathcal{Z}=5$ as a function of $v_E\equiv v_\oplus$ (left panel), $v_0$ (central panel) and $v_{\rm esc}$ (right panel) for a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day. In all panels, we consider both $F_{\rm DM}(q^2)=1$ (blue lines) and $F_{\rm DM}(q^2)=q^2_{\rm ref}/q^2$ (red lines) and set $m_\chi$ to 1 MeV and $\rho_\chi$ to $0.4$ GeV cm$^{-3}$. When not otherwise specified, we set $v_0=230$ km s$^{-1}$, $v_{\oplus}=240$ km s$^{-1}$ and $v_{\rm esc}=600$ km s$^{-1}$.[]{data-label="fig:vel"}](./figures_v2/velocities_CCD_m_1-eps-converted-to.pdf){width="\textwidth"}
Conclusions {#sec:conclusions}
===========
We computed the sensitivity to DM particles in the sub-GeV mass range of future direct detection experiments using germanium and silicon semiconductor detectors. We addressed this problem within the dark photon model for DM-electron interactions in semiconductor crystals and computed the projected sensitivities of future germanium and silicon detectors by using the likelihood ratio as a test statistic and Monte Carlo simulations. We placed special emphasis on describing the background models used in our analysis and presented our results in terms of DM-electron scattering cross section values required to reject the background only hypothesis in favour of the background plus signal hypothesis with a statistical significance corresponding to 3 or 5 standard deviations. We also tested the stability of our results under variations in the astrophysical parameters that govern the space and velocity distribution of DM in our galaxy. Our sensitivity study extended previous works in terms of background models, statistical methods used to compute the projected sensitivities, and treatment of the underlying astrophysical uncertainties. This work is motivated by the recent experimental progress, and by the improved understanding of the experimental backgrounds that this progress produced.
For $m_\chi$ larger than about 10 MeV, HV detectors are more sensitive to DM than CCD detectors, with the exception of CCD silicon detectors operating with an exposure of $\mathcal{E}=1$ kg-year and a low dark current rate of $\Gamma=1\times10^{-7}$ counts/pixel/day, which exhibit a sensitivity comparable with the one of a HV germanium detector operating with an exposure of $\mathcal{E}=50$ kg-year. Below $m_\chi=10$ MeV, we find that CCD silicon detectors are more sensitive to DM than germanium and silicon HV detectors, at least for $m_\chi$ as low as 2-3 MeV. In the best-case scenario, when a HV germanium detector with $\mathcal{E}=50$ kg-year and a CCD silicon detector with $\mathcal{E}=1$ kg-year and $\Gamma=1\times10^{-7}$ counts/pixel/day have simultaneously reported a DM signal, we find that the smallest cross section value compatible with $\mathcal{Z}=3$ ($\mathcal{Z}=5$) is about $8\times10^{-42}$ cm$^2$ ($1\times10^{-41}$ cm$^2$) for $F_{\rm DM}(q^2)=1$, and $4\times10^{-41}$ cm$^2$ ($7\times10^{-41}$ cm$^2$) for $F_{\rm DM}(q^2)=q_{\rm ref}^2/q^2$.
We would like to thank Noah A. Kurinsky and Belina von Krosigk for useful insights into the backgrounds of SuperCDMS. During this work, RC and TE were supported by the Knut and Alice Wallenberg Foundation (PI, Jan Conrad). RC also acknowledges support from an individual research grant from the Swedish Research Council, dnr. 2018-05029. The research presented in this article made use of the computer programmes and packages WebPlotDigitizer [@webplotdigitizer], Wolfram Mathematica [@Mathematica], Matlab [@MATLAB:2019] and QEdark [@Essig:2015cda].
[10]{}
G. Bertone and D. Hooper, *[History of dark matter]{}*, [*Rev. Mod. Phys.* [**90**]{} (2018) 045002](https://doi.org/10.1103/RevModPhys.90.045002) \[[[1605.04909]{}](https://arxiv.org/abs/1605.04909)\].
G. Bertone, D. Hooper and J. Silk, *[Particle dark matter: Evidence, candidates and constraints]{}*, [*Phys. Rept.* [**405**]{} (2005) 279](https://doi.org/10.1016/j.physrep.2004.08.031) \[[[hep-ph/0404175]{}](https://arxiv.org/abs/hep-ph/0404175)\].
K. Kuijken and G. Gilmore, *[The Mass Distribution in the Galactic Disc - Part Two - Determination of the Surface Mass Density of the Galactic Disc Near the Sun]{}*, [*Mon. Not. Roy. Astron. Soc.* [**239**]{} (1989) 605]{}.
V. C. Rubin and W. K. Ford, Jr., *[Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions]{}*, [*Astrophys. J.* [**159**]{} (1970) 379](https://doi.org/10.1086/150317).
F. Zwicky, *[Die Rotverschiebung von extragalaktischen Nebeln]{}*, [*Helv. Phys. Acta* [**6**]{} (1933) 110]{}.
N. Kaiser and G. Squires, *[Mapping the dark matter with weak gravitational lensing]{}*, [*Astrophys. J.* [**404**]{} (1993) 441](https://doi.org/10.1086/172297).
D. Clowe, M. Bradac, A. H. Gonzalez, M. Markevitch, S. W. Randall, C. Jones et al., *[A direct empirical proof of the existence of dark matter]{}*, [*Astrophys. J.* [**648**]{} (2006) L109](https://doi.org/10.1086/508162) \[[[ astro-ph/0608407]{}](https://arxiv.org/abs/astro-ph/0608407)\].
G. R. Blumenthal, S. M. Faber, J. R. Primack and M. J. Rees, *[Formation of Galaxies and Large Scale Structure with Cold Dark Matter]{}*, [*Nature* [**311**]{} (1984) 517](https://doi.org/10.1038/311517a0).
Planck collaboration, P. A. R. Ade et al., *[Planck 2015 results. XIII. Cosmological parameters]{}*, [*Astron. Astrophys.* [**594**]{} (2016) A13](https://doi.org/10.1051/0004-6361/201525830) \[[[1502.01589]{}](https://arxiv.org/abs/1502.01589)\].
A. Drukier and L. Stodolsky, *[Principles and Applications of a Neutral Current Detector for Neutrino Physics and Astronomy]{}*, [*Phys. Rev.* [ **D30**]{} (1984) 2295](https://doi.org/10.1103/PhysRevD.30.2295).
M. W. Goodman and E. Witten, *[Detectability of Certain Dark Matter Candidates]{}*, [*Phys. Rev.* [**D31**]{} (1985) 3059](https://doi.org/10.1103/PhysRevD.31.3059).
T. Marrod[á]{}n Undagoitia and L. Rauch, *[Dark matter direct-detection experiments]{}*, [*J. Phys.* [**G43**]{} (2016) 013001](https://doi.org/10.1088/0954-3899/43/1/013001) \[[[1509.08767]{}](https://arxiv.org/abs/1509.08767)\].
SuperCDMS collaboration, R. Agnese et al., *[Projected Sensitivity of the SuperCDMS SNOLAB experiment]{}*, [*Phys. Rev.* [**D95**]{} (2017) 082002](https://doi.org/10.1103/PhysRevD.95.082002) \[[[1610.00006]{}](https://arxiv.org/abs/1610.00006)\].
SENSEI collaboration, M. Crisler, R. Essig, J. Estrada, G. Fernandez, J. Tiffenberg, M. Sofo Haro et al., *[SENSEI: First Direct-Detection Constraints on sub-GeV Dark Matter from a Surface Run]{}*, [*Phys. Rev. Lett.* [**121**]{} (2018) 061803](https://doi.org/10.1103/PhysRevLett.121.061803) \[[[1804.00088]{}](https://arxiv.org/abs/1804.00088)\].
DAMIC collaboration, A. Aguilar-Arevalo et al., *[Constraints on Light Dark Matter Particles Interacting with Electrons from DAMIC at SNOLAB]{}*, [*Phys. Rev. Lett.* [**123**]{} (2019) 181802](https://doi.org/10.1103/PhysRevLett.123.181802) \[[[1907.12628]{}](https://arxiv.org/abs/1907.12628)\].
R. Essig, J. Mardon and T. Volansky, *[Direct Detection of Sub-GeV Dark Matter]{}*, [*Phys. Rev.* [**D85**]{} (2012) 076007](https://doi.org/10.1103/PhysRevD.85.076007) \[[[1108.5383]{}](https://arxiv.org/abs/1108.5383)\].
DarkSide collaboration, P. Agnes et al., *[Constraints on Sub-GeV Dark Matter-Electron Scattering from the DarkSide-50 Experiment]{}*, [*Phys. Rev. Lett.* [**121**]{} (2018) 111303](https://doi.org/10.1103/PhysRevLett.121.111303) \[[[1802.06998]{}](https://arxiv.org/abs/1802.06998)\].
R. Essig, A. Manalaysay, J. Mardon, P. Sorensen and T. Volansky, *[First Direct Detection Limits on sub-GeV Dark Matter from XENON10]{}*, [*Phys. Rev. Lett.* [**109**]{} (2012) 021301](https://doi.org/10.1103/PhysRevLett.109.021301) \[[[1206.2644]{}](https://arxiv.org/abs/1206.2644)\].
R. Essig, T. Volansky and T.-T. Yu, *[New Constraints and Prospects for sub-GeV Dark Matter Scattering off Electrons in Xenon]{}*, [*Phys. Rev.* [**D96**]{} (2017) 043017](https://doi.org/10.1103/PhysRevD.96.043017) \[[[1703.00910]{}](https://arxiv.org/abs/1703.00910)\].
XENON collaboration, E. Aprile et al., *[Light Dark Matter Search with Ionization Signals in XENON1T]{}*, [*Phys. Rev. Lett.* [**123**]{} (2019) 251801](https://doi.org/10.1103/PhysRevLett.123.251801) \[[[1907.11485]{}](https://arxiv.org/abs/1907.11485)\].
Y. Hochberg, Y. Kahn, M. Lisanti, C. G. Tully and K. M. Zurek, *[Directional detection of dark matter with two-dimensional targets]{}*, [*Phys. Lett.* [**B772**]{} (2017) 239](https://doi.org/10.1016/j.physletb.2017.06.051) \[[[1606.08849]{}](https://arxiv.org/abs/1606.08849)\].
R. M. Geilhufe, B. Olsthoorn, A. Ferella, T. Koski, F. Kahlhoefer, J. Conrad et al., *[Materials Informatics for Dark Matter Detection]{}*, [[1806.06040]{}](https://arxiv.org/abs/1806.06040).
Y. Hochberg, Y. Kahn, M. Lisanti, K. M. Zurek, A. G. Grushin, R. Ilan et al., *[Detection of sub-MeV Dark Matter with Three-Dimensional Dirac Materials]{}*, [*Phys. Rev.* [**D97**]{} (2018) 015004](https://doi.org/10.1103/PhysRevD.97.015004) \[[[1708.08929]{}](https://arxiv.org/abs/1708.08929)\].
R. M. Geilhufe, F. Kahlhoefer and M. W. Winkler, *[Dirac Materials for Sub-MeV Dark Matter Detection: New Targets and Improved Formalism]{}*, [[1910.02091]{}](https://arxiv.org/abs/1910.02091).
A. Coskuner, A. Mitridate, A. Olivares and K. M. Zurek, *[Directional Dark Matter Detection in Anisotropic Dirac Materials]{}*, [[1909.09170]{}](https://arxiv.org/abs/1909.09170).
S. Knapen, T. Lin, M. Pyle and K. M. Zurek, *[Detection of Light Dark Matter With Optical Phonons in Polar Materials]{}*, [*Phys. Lett.* [**B785**]{} (2018) 386](https://doi.org/10.1016/j.physletb.2018.08.064) \[[[1712.06598]{}](https://arxiv.org/abs/1712.06598)\].
S. Derenzo, R. Essig, A. Massari, A. Soto and T.-T. Yu, *[Direct Detection of sub-GeV Dark Matter with Scintillating Targets]{}*, [*Phys. Rev.* [**D96**]{} (2017) 016026](https://doi.org/10.1103/PhysRevD.96.016026) \[[[1607.01009]{}](https://arxiv.org/abs/1607.01009)\].
C. Blanco, J. I. Collar, Y. Kahn and B. Lillard, *[Dark Matter-Electron Scattering from Aromatic Organic Targets]{}*, [[1912.02822]{}](https://arxiv.org/abs/1912.02822).
Y. Hochberg, Y. Zhao and K. M. Zurek, *[Superconducting Detectors for Superlight Dark Matter]{}*, [*Phys. Rev. Lett.* [**116**]{} (2016) 011301](https://doi.org/10.1103/PhysRevLett.116.011301) \[[[1504.07237]{}](https://arxiv.org/abs/1504.07237)\].
Y. Hochberg, M. Pyle, Y. Zhao and K. M. Zurek, *[Detecting Superlight Dark Matter with Fermi-Degenerate Materials]{}*, [*JHEP* [**08**]{} (2016) 057](https://doi.org/10.1007/JHEP08(2016)057) \[[[ 1512.04533]{}](https://arxiv.org/abs/1512.04533)\].
Y. Hochberg, I. Charaev, S.-W. Nam, V. Verma, M. Colangelo and K. K. Berggren, *[Detecting Sub-GeV Dark Matter with Superconducting Nanowires]{}*, [*Phys. Rev. Lett.* [**123**]{} (2019) 151802](https://doi.org/10.1103/PhysRevLett.123.151802) \[[[1903.05101]{}](https://arxiv.org/abs/1903.05101)\].
S. M. Griffin, K. Inzani, T. Trickle, Z. Zhang and K. M. Zurek, *[Multi-Channel Direct Detection of Light Dark Matter: Target Comparison]{}*, [[ 1910.10716]{}](https://arxiv.org/abs/1910.10716).
P. N. Luke, *[Voltage[-]{}assisted calorimetric ionization detector]{}*, [*J. Appl. Phys.* [**64**]{} (1988) 6858](https://doi.org/10.1063/1.341976).
SuperCDMS collaboration, R. Agnese et al., *[First Dark Matter Constraints from a SuperCDMS Single-Charge Sensitive Detector]{}*, [*Phys. Rev. Lett.* [**121**]{} (2018) 051301](https://doi.org/10.1103/PhysRevLett.121.051301) \[[[1804.10697]{}](https://arxiv.org/abs/1804.10697)\].
SENSEI collaboration, O. Abramoff et al., *[SENSEI: Direct-Detection Constraints on Sub-GeV Dark Matter from a Shallow Underground Run Using a Prototype Skipper-CCD]{}*, [*Phys. Rev. Lett.* [**122**]{} (2019) 161801](https://doi.org/10.1103/PhysRevLett.122.161801) \[[[1901.10478]{}](https://arxiv.org/abs/1901.10478)\].
P. W. Graham, D. E. Kaplan, S. Rajendran and M. T. Walters, *[Semiconductor Probes of Light Dark Matter]{}*, [*Phys. Dark Univ.* [**1**]{} (2012) 32](https://doi.org/10.1016/j.dark.2012.09.001) \[[[ 1203.2531]{}](https://arxiv.org/abs/1203.2531)\].
S. K. Lee, M. Lisanti, S. Mishra-Sharma and B. R. Safdi, *[Modulation Effects in Dark Matter-Electron Scattering Experiments]{}*, [*Phys. Rev.* [**D92**]{} (2015) 083517](https://doi.org/10.1103/PhysRevD.92.083517) \[[[1508.07361]{}](https://arxiv.org/abs/1508.07361)\].
R. Essig, M. Fernandez-Serra, J. Mardon, A. Soto, T. Volansky and T.-T. Yu, *[Direct Detection of sub-GeV Dark Matter with Semiconductor Targets]{}*, [*JHEP* [**05**]{} (2016) 046](https://doi.org/10.1007/JHEP05(2016)046) \[[[ 1509.01598]{}](https://arxiv.org/abs/1509.01598)\].
B. M. Roberts, V. A. Dzuba, V. V. Flambaum, M. Pospelov and Y. V. Stadnik, *[Dark matter scattering on electrons: Accurate calculations of atomic excitations and implications for the DAMA signal]{}*, [*Phys. Rev.* [**D93**]{} (2016) 115037](https://doi.org/10.1103/PhysRevD.93.115037) \[[[1604.04559]{}](https://arxiv.org/abs/1604.04559)\].
R. Catena, T. Emken, N. Spaldin and W. Tarantino, *[Atomic responses to general dark matter-electron interactions]{}*, [[1912.08204]{}](https://arxiv.org/abs/1912.08204).
B. Holdom, *[Two U(1)’s and Epsilon Charge Shifts]{}*, [*Phys. Lett.* [**166B**]{} (1986) 196](https://doi.org/10.1016/0370-2693(86)91377-8).
G. Cowan, K. Cranmer, E. Gross and O. Vitells, *[Asymptotic formulae for likelihood-based tests of new physics]{}*, [*Eur. Phys. J.* [**C71**]{} (2011) 1554](https://doi.org/10.1140/epjc/s10052-011-1554-0) \[[[1007.1727]{}](https://arxiv.org/abs/1007.1727)\].
R. Catena and P. Ullio, *[A novel determination of the local dark matter density]{}*, [*JCAP* [**1008**]{} (2010) 004](https://doi.org/10.1088/1475-7516/2010/08/004) \[[[0907.0018]{}](https://arxiv.org/abs/0907.0018)\].
J. Lewin and P. Smith, *[Review of mathematics, numerical factors, and corrections for dark matter experiments based on elastic nuclear recoil]{}*, [*Astropart. Phys.* [**6**]{} (1996) 87](https://doi.org/10.1016/S0927-6505(96)00047-3).
D. Barker, *[SuperCDMS Background Models for Low-Mass Dark Matter Searches]{}*, Ph.D. thesis, Aug, 2018.
N. Bozorgnia and G. Bertone, *[Implications of hydrodynamical simulations for the interpretation of direct dark matter searches]{}*, [*Int. J. Mod. Phys.* [**A32**]{} (2017) 1730016](https://doi.org/10.1142/S0217751X17300162) \[[[1705.05853]{}](https://arxiv.org/abs/1705.05853)\].
K. Freese, M. Lisanti and C. Savage, *[Annual Modulation of Dark Matter: A Review]{}*, [*Rev. Mod. Phys.* [**85**]{} (2013) 1561](https://doi.org/10.1103/RevModPhys.85.1561) \[[[1209.3339]{}](https://arxiv.org/abs/1209.3339)\].
A. Rohatgi, S. Rehberg and Z. Stanojevic, *[WebPlotDigitizer v4.2]{}*, 2019.
Wolfram Research, *[Mathematica v12.0]{}*, 2019.
MATLAB, *version 9.7 (R2019b)*. The MathWorks Inc., 2019.
[^1]: As shown in [@Catena:2019gfa], vectorial atomic form factors might also arise within a general treatment of DM-electron interactions.
---
abstract: 'A common-sense perception of a physical system is that it is inseparable from its physical properties. The notion of the Quantum Cheshire Cat challenges this, as far as quantum systems are concerned. It shows that a quantum system can be decoupled from its physical property under suitable pre- and postselections. However, in the Quantum Cheshire Cat setup, the decoupling is not permanent: the photon, for example, and its polarization are separated and then recombined. In this paper, we present a thought experiment where we decouple two photons from their respective polarizations and then interchange them during recombination. Thus, our proposal shows that the belongingness of a property to a physical system is very volatile in the world of quantum mechanics.'
author:
- 'Debmalya Das$^{1,2}$ and Arun Kumar Pati$^{1,2}$'
title: 'Can Two Quantum Cheshire Cats Exchange Grins?'
---
Introduction
============
It is commonly believed that the properties of a physical system cannot be separated from the system itself. This picture of the nature of physical systems, however, does not hold true in the realm of quantum mechanics. A thought experiment, known as the Quantum Cheshire Cat [@Chesire], shows that a property such as the polarization of a photon can exist in isolation from the photon itself. Based on a modified version of a Mach-Zehnder interferometer, the Quantum Cheshire Cat demonstrates that a photon and its circular polarization can be decoupled from each other and made to travel separately through the two arms. This echoes the description of certain events in the novel Alice in Wonderland where Alice remarks, “‘Well! I’ve often seen a cat without a grin,’ ... ‘but a grin without a cat! It’s the most curious thing I ever saw in my life!’” [@Carroll1865].
The Quantum Cheshire Cat has opened up a new understanding of quantum systems and attracted a lot of debate and discussion [@Bancal2013; @Duprey2018; @Correa2015; @Atherton2015]. It pertains not only to photons and their polarizations but can, in principle, be observed with any quantum system and its property, such as a neutron and its magnetic moment, an electron and its charge, and so on. Experimental verifications of the phenomenon with a neutron as the cat and its magnetic moment as the grin have been conducted [@Denkmayr2014; @Sponar2016]. The phenomenon has also been observed experimentally in the context of photon and polarization [@Ashby2016]. Further developments on the idea of the Quantum Cheshire Cat include the proposal of a complete Quantum Cheshire Cat [@CC] and twin Quantum Cheshire Cats [@Twin]. The effect has been used to realize the three box paradox [@Pan2013] and has been studied in the presence of decoherence [@Richter2018]. Recently, a protocol has been developed using which the decoupled grin of a Quantum Cheshire Cat has been teleported between two spatially separated parties without the cat [@Das2019].
The premise of the realization of the Quantum Cheshire Cat involves weak measurements and weak values [@AAV1988]. The development of the concept of weak measurements and weak values stemmed from the limitation posed by the measurement problem in quantum mechanics in acquiring knowledge about the value of an observable of a quantum system. Given a quantum system in a general pure state ${\left\vert{\Psi}\right\rangle}$, if a measurement of an observable $A$ is performed, then the outcome is an arbitrary eigenvalue $a_i$ of $A$. Measurement of $A$ on an ensemble of states shows that the measurement outcomes are indeterministic and probabilistic, the probability of an outcome $a_i$ in any given run being $|\langle a_i{\left\vert{\Psi}\right\rangle}|^2$, where ${\left\vert{a_i}\right\rangle}$ is the eigenstate corresponding to the eigenvalue $a_i$. Thus in quantum mechanics, one actually measures the expectation value $\langle A \rangle$ which is the ensemble average of the outcomes. In addition, in a run, the state of the system collapses to ${\left\vert{a_i}\right\rangle}$. As a result, there is no general consensus among physicists as to whether the measurement of an observable actually reveals a property of the system or is an artifact of the measurement process itself. The proponents of weak values tried to circumvent this problem of wavefunction collapse by limiting the disturbance caused to the initial state. This is achieved by weakly measuring the observable $A$, causing minimal disturbance to the state. We briefly recapitulate the process of weak measurement below.
Consider a quantum system preselected in the state ${\left\vert{\Psi_i}\right\rangle}$. Suppose an observable $A$ is weakly measured by introducing a small coupling between the quantum system and a suitable measurement device or a meter. A second observable $B$ is thereafter measured strongly and one of its eigenstates ${\left\vert{\Psi_f}\right\rangle}$ is postselected. For all successful postselections of the state ${\left\vert{\Psi_f}\right\rangle}$, the meter readings corresponding to the weak measurements of $A$ are taken into consideration while the others are discarded. The shift in the meter readings, on an average, for all such postselected systems is $A_w$ which is known as the weak value of $A$ [@AAV1988]. Mathematically the weak value of $A$ is defined as $$A_w=\frac{{\left\langle{\Psi_f}\right\vert}A{\left\vert{\Psi_i}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi_i\rangle}.
\label{wv_def}$$
The weak value is therefore to be interpreted as the value the observable A takes between the preselected state ${\left\vert{\Psi_i}\right\rangle}$ and the postselected state ${\left\vert{\Psi_f}\right\rangle}$. The measurement of $A$ must be weak to preserve the initial probability distribution for the final postselected state. Weak values via weak measurements have been observed experimentally [@Pryde2005]. The weak value of an observable can be complex [@Jozsa2007] or can take up large values that lie outside the eigenvalue spectrum [@AAV1988; @Duck1989; @Tollaksen2010]. The latter has led to the use of weak measurements as a tool for signal amplification [@Dixon2009; @Nishizawa2012]. A geometrical interpretation of weak values can be found in Ref. [@Cormann2016]. Among the myriad applications of weak measurements are observation of the spin Hall effect [@Hosten2008], resolving quantum paradoxes [@Hofmann2010; @Dolev2005], quantum state visualization [@Kobayashi2014], quantum state tomography [@Hofmann2010; @Wu2013], direct measurement of wavefunction [@Lundeen2011; @Lundeen2012], probing contextuality [@Tollaksen2007] , measuring the expectation value of non-Hermitian operators [@Pati2015; @Nirala2019] and quantum precision thermometry [@Pati2019].
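The weak-value formula above can be evaluated directly for a simple system. The following minimal numerical sketch (ours, not taken from the cited references) uses a spin-1/2 example; the nearly orthogonal pre- and postselected states are chosen purely for illustration and show how the weak value can lie far outside the eigenvalue spectrum.

```python
# Direct evaluation of A_w = <psi_f|A|psi_i> / <psi_f|psi_i> for spin-1/2.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 0.48 * np.pi
psi_i = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)   # preselected state
psi_f = np.array([1, -1], dtype=complex) / np.sqrt(2)                     # postselected state (nearly orthogonal to psi_i)

def weak_value(A, psi_i, psi_f):
    """Weak value of A between pre- and postselected states."""
    return (psi_f.conj() @ A @ psi_i) / (psi_f.conj() @ psi_i)

print("weak value of sigma_z:", weak_value(sz, psi_i, psi_f))   # far outside the eigenvalue range [-1, 1]
print("ordinary expectation :", (psi_i.conj() @ sz @ psi_i).real)
```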
Any probing of position or a component of polarization of the photon for the observation of the Quantum Cheshire Cat effect must necessarily be weak. This is because projective measurements tend to destroy the original state of a system while extracting information. Weak measurements, on the other hand, minimally disturb the system while gaining a small amount of information about it. The state cannot be disturbed in a Quantum Cheshire Cat setup as any alteration of the original state will lead to altered probabilities of the postselected state.
In this paper we explore yet another counterintuitive aspect of the Quantum Cheshire Cat. We design a setup where we can not only decouple the grin from the Quantum Cheshire Cat, but can replace it with a grin originally belonging to another Cheshire Cat. Our setup comprises two modified and overlapping Mach-Zehnder interferometers. We aim to show that, in the quantum domain, a property of a physical system does not uniquely belong to that system and can be replaced by the same property from another physical system. Some indications of this phenomenon were obtained in an earlier work [@Das2019] where the decoupled circular polarization was used for teleportation between two spatially separated parties who are not in possession of the photon.
The paper is arranged as follows. In Section \[recap\], we describe the Quantum Cheshire Cat protocol in some detail. Next, in Section \[swap\] we present our recipe for the exchange of the grins of two Quantum Cheshire Cats. We conclude in Section \[conc\] with a discussion of some of the implications of our findings.
The Quantum Cheshire Cat {#recap}
=======================
![The basic Quantum Cheshire Cat setup. The initial state ${\left\vert{\Psi}\right\rangle}$ is prepared by passing a photon with linear polarization ${\left\vert{H}\right\rangle}$ through a beam-splitter $BS_1$. Weak measurements of positions and photon polarizations are carried out on the two arms of the interferometer. The postselection block consists of a half-waveplate $HWP$, a phase-shifter $PS$, the beam-splitter $BS_2$, a polarization beam-splitter $PBS$ that transmits the polarization state ${\left\vert{H}\right\rangle}$ and reflects the state ${\left\vert{V}\right\rangle}$, and three detectors $D_1$, $D_2$ and $D_3$.](fig1.pdf)
The phenomenon of the Quantum Cheshire Cat can be realized by a scheme that is based on a Mach-Zehnder interferometer, first presented in Ref. [@Chesire]. A source sends a linearly polarized single photon towards a 50:50 beam-splitter $BS_1$ that channels the photon into a left and right path. Let ${\left\vert{L}\right\rangle}$ and ${\left\vert{R}\right\rangle}$ denote two orthogonal states representing the two possible paths taken by the photon, the left and the right arm, respectively. If the photon is initially in the horizontal polarization state ${\left\vert{H}\right\rangle}$, the photon after passing through the beam-splitter $BS_1$ can be prepared in the state $${\left\vert{\Psi}\right\rangle}= \frac{1}{\sqrt{2}}(i{\left\vert{L}\right\rangle}+{\left\vert{R}\right\rangle}){\left\vert{H}\right\rangle},$$ where the relative phase factor $i$ is picked up by the photon traveling through the left arm due to the reflection by the beam splitter. The postselection block, conducting the process of projective measurement and eventual postselection, comprises a half-waveplate (HWP), a phase-shifter (PS), beam-splitter $BS_2$, a polarization beam-splitter (PBS) and three detectors $D_1$, $D_2$ and $D_3$. Let the postselected state be $${\left\vert{\Psi_f}\right\rangle}=\frac{1}{\sqrt{2}}({\left\vert{L}\right\rangle}{\left\vert{H}\right\rangle}+{\left\vert{R}\right\rangle}{\left\vert{V}\right\rangle}),$$ where ${\left\vert{V}\right\rangle}$ refers to the vertical polarization state orthogonal to the initial polarization state ${\left\vert{H}\right\rangle}$. The HWP flips the polarization of the photon from ${\left\vert{H}\right\rangle}$ to ${\left\vert{V}\right\rangle}$ and vice-versa. The phase-shifter (PS) adds a phase factor of $i$ to the beam. The beam-splitter $BS_2$ is such that when a photon in the state $\frac{1}{\sqrt{2}}({\left\vert{L}\right\rangle}+i{\left\vert{R}\right\rangle})$ is incident on it, the detector $D_2$ never clicks. In other words, in such cases, the photon always emerges towards the PBS. The PBS is chosen such that it always transmits the horizontal polarization ${\left\vert{H}\right\rangle}$ and always reflects the vertical polarization ${\left\vert{V}\right\rangle}$. The above arrangement thus ensures that only a state given by ${\left\vert{\Psi_f}\right\rangle}$, before it enters the postselection block, corresponds to the click of detector $D_1$. Any clicking of the detectors $D_2$ or $D_3$ implies a different state entering the postselection block. Therefore, selecting the clicks of the detector $D_1$ alone and discarding all the others leads to the postselection onto the state ${\left\vert{\Psi_f}\right\rangle}$.
Suppose we want to know which arm a photon, prepared in the state ${\left\vert{\Psi}\right\rangle}$ and ultimately postselected in the state ${\left\vert{\Psi_f}\right\rangle}$, passed through. This can be effected by performing weak measurements of the observables $\Pi_L={\left\vert{L}\right\rangle}{\left\langle{L}\right\vert}$ and $\Pi_R={\left\vert{R}\right\rangle}{\left\langle{R}\right\vert}$ by placing weak detectors in the two arms. Similarly, the polarizations can be detected in the left and the right arms by respectively performing weak measurements of the following operators $$\begin{aligned}
\sigma^L_z=\Pi_L \otimes\sigma_z,\nonumber\\
\sigma^R_z=\Pi_R \otimes\sigma_z,\end{aligned}$$ where $$\sigma_z={\left\vert{+}\right\rangle}{\left\langle{+}\right\vert}-{\left\vert{-}\right\rangle}{\left\langle{-}\right\vert},$$ the circular polarization basis $\{{\left\vert{+}\right\rangle},{\left\vert{-}\right\rangle}\}$ itself defined as $$\begin{aligned}
{\left\vert{+}\right\rangle}&=&\frac{1}{\sqrt{2}}({\left\vert{H}\right\rangle}+ i {\left\vert{V}\right\rangle}),\nonumber\\
{\left\vert{-}\right\rangle}&=&\frac{1}{\sqrt{2}}({\left\vert{H}\right\rangle}- i {\left\vert{V}\right\rangle}).\end{aligned}$$ The weak values of the photon positions are measured to be $$(\Pi_L)^w=1 \;\mbox{and}\; (\Pi_R)^w=0
\label{position_wv}$$ which implies that the photon in question has traveled through the left arm. The measured weak values of the polarization positions, on the other hand, turn out to be $$(\sigma^L_z)^w=0\;\mbox{and}\;(\sigma^R_z)^w=1.
\label{polarization_wv}$$ Equations (\[position\_wv\]) and (\[polarization\_wv\]) together reveal that the photon traveled through the left arm but its circular polarization traveled through the right arm. This means the two degrees of freedom of a single entity can, in fact, be decoupled. That is, property of a quantum system can exist independent of its existence in that region.
Swapping of grins {#swap}
=================
In the previous section we have seen that the grin of a cat can be separated from the cat itself under suitable choices of pre- and postselection. Here we consider two such Quantum Cheshire Cats and exchange their grins. In terms of physical realization, we take two linearly polarized photons, decouple their circular polarizations and then recouple each with the other photon.
To see this effect let us consider the setup shown in Fig. \[CC\_Swap\]. There are two sources of linearly polarized photons, near the unprimed and the primed halves of the arrangement, on the left and the right, respectively. The input polarization in the unprimed half is ${\left\vert{H}\right\rangle}$ while that in the primed half is ${\left\vert{H^\prime}\right\rangle}$. The setup allows one to prepare a preselected state given by
$${\left\vert{\Psi}\right\rangle}=\frac{1}{\sqrt{2}}(i{\left\vert{L}\right\rangle}+{\left\vert{R}\right\rangle}){\left\vert{H}\right\rangle}\otimes\frac{1}{\sqrt{2}}({\left\vert{L^\prime}\right\rangle}+i{\left\vert{R^\prime}\right\rangle}){\left\vert{H^\prime}\right\rangle}.$$
where ${\left\vert{L}\right\rangle}$ and ${\left\vert{R}\right\rangle}$ indicate the states of a photon in the left and right arms of the unprimed half of the apparatus and ${\left\vert{L^\prime}\right\rangle}$ and ${\left\vert{R^\prime}\right\rangle}$ indicate the states of a photon in the left and right arms of the primed half of the apparatus. The right arm of the unprimed half is connected to the output of the primed half and the left arm of the primed half is connected to the output of the unprimed half.
The photons and their polarizations are postselected in the state as given by $$\begin{aligned}
{\left\vert{\Psi_f}\right\rangle}&=&\frac{1}{2}({\left\vert{L}\right\rangle}{\left\vert{H}\right\rangle}{\left\vert{R^\prime}\right\rangle}{\left\vert{H^\prime}\right\rangle}+{\left\vert{R}\right\rangle}{\left\vert{V}\right\rangle}{\left\vert{R^\prime}\right\rangle}{\left\vert{H^\prime}\right\rangle}+\nonumber\\
&&{\left\vert{L}\right\rangle}{\left\vert{H}\right\rangle}{\left\vert{L^\prime}\right\rangle}{\left\vert{V^\prime}\right\rangle}-{\left\vert{R}\right\rangle}{\left\vert{V}\right\rangle}{\left\vert{L^\prime}\right\rangle}{\left\vert{V^\prime}\right\rangle}).\end{aligned}$$ This is a four-qubit maximally entangled state and is one of the cluster states [@Briegel2001]. The postselection is realized using the following setup. It is clear that this postselected state ${\left\vert{\Psi_f}\right\rangle}$ demands that there is entanglement between the path degrees of freedom of the two halves of the interferometric arrangement. Suppose that the $HWP$ and the $HWP^\prime$ cause the transformations ${\left\vert{H^\prime}\right\rangle}\leftrightarrow{\left\vert{V^\prime}\right\rangle}$ and ${\left\vert{H}\right\rangle}\leftrightarrow{\left\vert{V}\right\rangle}$, respectively, and $PS$ and $PS^\prime$ add a phase-factor $i$, in continuation with the previous arrangement. Now let the beam-splitters $BS_2$ and $BS_2^\prime$ be chosen such that if a state ${\left\vert{L}\right\rangle}{\left\vert{L^\prime}\right\rangle}$ is incident on $BS_2$ or a state ${\left\vert{R}\right\rangle}{\left\vert{R^\prime}\right\rangle}$ is incident on $BS_2^\prime$, then the photons emerge towards $PBS$ and $PBS^\prime$, respectively. The $PBS$ and the $PBS^\prime$, once again, allow only polarizations ${\left\vert{H^\prime}\right\rangle}$ and ${\left\vert{H}\right\rangle}$ to be transmitted and other polarizations to be reflected. Now if any other states are incident on $BS_2$ and $BS^\prime_2$, they proceed towards $PBS_1$ or $PBS^\prime_1$. These polarization beam splitters, once again, allow polarizations ${\left\vert{H^\prime}\right\rangle}$ and ${\left\vert{H}\right\rangle}$ to be transmitted, towards $BS_3$ and $BS^\prime_3$, respectively, and reflect polarizations ${\left\vert{V^\prime}\right\rangle}$ and ${\left\vert{V}\right\rangle}$ towards $D_6$ and $D^\prime_6$, respectively. Next, the beam-splitter $BS_3$ is so chosen that the state ${\left\vert{L}\right\rangle}$ is transmitted towards the detector $D_3$ and any other state is reflected towards $BS_4$. In conjunction with this, the beam-splitter $BS^\prime_3$ is chosen to transmit the state ${\left\vert{R^\prime}\right\rangle}$ towards $D^\prime_3$ and reflect any other state towards $BS^\prime_4$. Thus, the simultaneous clicks of the detectors $D_3$ and $D^\prime_3$ would mean a postselection of the state ${\left\vert{L}\right\rangle}{\left\vert{R^\prime}\right\rangle}$. Using a similar reasoning and appropriate choice of beam-splitters, $BS_4$ and $BS^\prime_4$, the state ${\left\vert{R}\right\rangle}{\left\vert{L^\prime}\right\rangle}$ can be postselected using simultaneous clicks of the detectors $D_4$ and $D^\prime_4$.
This means any clicking of the detectors $D_2$, $D_5$, $D_6$, $D_2^\prime$, $D_5^\prime$ or $D_6^\prime$ would indicate an unsuccessful postselection. On the other hand if and only if there are simultaneous clicks of all the detectors $D_1$, $D_1^\prime$, $D_3$, $D_3^\prime$, $D_4$ and $D_4^\prime$ we get a successful postselection.
![Swapping of the grins of two Quantum Cheshire Cats. The desired postselection for observing the swapping of the grin is obtained by selecting only the cases for which there are simultaneous clicks of $D_1$, $D_3$, $D_4$, $D_1^\prime$, $D_3^\prime$ and $D_4^\prime$. The mirror $M^{\prime\prime}$ is used just to accommodate the detectors in a compact space.\[CC\_Swap\]](fig23.pdf)
To appreciate the working of this arrangement, let us define two circular polarization bases $$\begin{aligned}
{\left\vert{+}\right\rangle}&=&\frac{1}{\sqrt{2}}({\left\vert{H}\right\rangle}+ i {\left\vert{V}\right\rangle})\nonumber,\\
{\left\vert{-}\right\rangle}&=&\frac{1}{\sqrt{2}}({\left\vert{H}\right\rangle}- i {\left\vert{V}\right\rangle})\end{aligned}$$ and $$\begin{aligned}
{\left\vert{+^\prime}\right\rangle}&=&\frac{1}{\sqrt{2}}({\left\vert{H^\prime}\right\rangle}+ i {\left\vert{V^\prime}\right\rangle})\nonumber,\\
{\left\vert{-^\prime}\right\rangle}&=&\frac{1}{\sqrt{2}}({\left\vert{H^\prime}\right\rangle}- i {\left\vert{V^\prime}\right\rangle})\end{aligned}$$ and two operators $$\begin{aligned}
\sigma_z&=&{\left\vert{+}\right\rangle}{\left\langle{+}\right\vert}-{\left\vert{-}\right\rangle}{\left\langle{-}\right\vert}\nonumber,\\
\sigma_{z^\prime}&=&{\left\vert{+^\prime}\right\rangle}{\left\langle{+^\prime}\right\vert}-{\left\vert{-^\prime}\right\rangle}{\left\langle{-^\prime}\right\vert}.\end{aligned}$$
To detect a photon in an arm of the interferometric setup we need to measure $\Pi_L={\left\vert{L}\right\rangle}{\left\langle{L}\right\vert}$, $\Pi_R={\left\vert{R}\right\rangle}{\left\langle{R}\right\vert}$, $\Pi_{L^\prime}={\left\vert{L^\prime}\right\rangle}{\left\langle{L^\prime}\right\vert}$ and $\Pi_{R^\prime}={\left\vert{R^\prime}\right\rangle}{\left\langle{R^\prime}\right\vert}$. In order that the original state is not disturbed due to these measurements, we need to perform them weakly. Subject to a successful postselection of the state ${\left\vert{\Psi_f}\right\rangle}$, the corresponding weak values are measured as follows. $$\begin{aligned}
(\Pi_L)^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_L{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=1,\nonumber\\
(\Pi_R)^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_R{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=0,\nonumber\\
(\Pi_{L^\prime})^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_{L^\prime}{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=0,\nonumber\\
(\Pi_{R^\prime})^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_{R^\prime}{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=1.
\label{photon_swap}\end{aligned}$$ Thus the photon at the unprimed input port must have traveled through the left arm of the unprimed half of the setup while the photon at the primed input port must have traveled through the right arm of the primed half of the setup when the postselection of ${\left\vert{\Psi_f}\right\rangle}$ is done.
In a similar way we can perform weak measurements of the operators $\sigma_z^L=\Pi_L\otimes\sigma_z$, $\sigma_z^R=\Pi_R\otimes\sigma_z$, $\sigma_{z^\prime}^{L^\prime}=\Pi_{L^\prime}\otimes\sigma_{z^\prime}$ and $\sigma_{z^\prime}^{R^\prime}=\Pi_{R^\prime}\otimes\sigma_{z^\prime}$ to detect the path of the polarization of the photons within the arms of the arrangement.
![Two Quantum Cheshire Cats, one white with a blue grin and the other black with a red grin, enter the arrangement. The paths taken by the cats and their grins are shown. Each grin decouples from its respective cat and then recouples with the other cat. The final result is the exchange of the two grins and the formation of an entangled state, made up of a white cat with a red grin and a black cat with a blue grin, both cats being spatially separated from each other.[]{data-label="cartoon"}](cartoon.jpg)
To understand these operators, let us take the example of, say, $\sigma_z^R=\Pi_R\otimes\sigma_z$. On the unprimed side of the setup, the beam-splitter $BS_1$ sends the photon and the polarization it carries into either ${\left\vert{L}\right\rangle}$ or ${\left\vert{R}\right\rangle}$. The polarization on the unprimed side is acted upon by unprimed operators. Thus, to find the polarization on the right arm of the unprimed part, it is necessary to measure the operator $\sigma_z^R=\Pi_R\otimes\sigma_z$. A similar reasoning applies to the other choices of operators.
These weak values turn out to be $$\begin{aligned}
(\sigma_z^L)^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_L\otimes\sigma_z{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=0,\nonumber\\
(\sigma_z^R)^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_R\otimes\sigma_z{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=1,\nonumber\\
(\sigma_{z^\prime}^{L^\prime})^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_{L^\prime}\otimes\sigma_{z^\prime}{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=1,
\nonumber\\
(\sigma_{z^\prime}^{R^\prime})^w=\frac{{\left\langle{\Psi_f}\right\vert}\Pi_{R^\prime}\otimes\sigma_{z^\prime}{\left\vert{\Psi}\right\rangle}}{{\left\langle{\Psi_f}\right\vert}\Psi\rangle}=0,
\label{pol_swap}\end{aligned}$$ which means that the circular polarization of the photon at the unprimed input port must have traveled via the right arm of the unprimed half of the arrangement and the circular polarization of the photon at the primed input port must have journeyed via the left arm of the primed half of the setup for all final outcomes ${\left\vert{\Psi_f}\right\rangle}$. Equations (\[photon\_swap\]) and (\[pol\_swap\]) jointly demonstrate that, under the above postselection, the unprimed photon ended up at detector $D_1$ but its circular polarization ended up at the detector $D_1^\prime$. Similarly, the primed photon finally reaches the detector $D_1^\prime$ while its circular polarization goes to the detector $D_1$, for the postselected state ${\left\vert{\Psi_f}\right\rangle}$. Thus we have swapped the grins of two Quantum Chesire Cats.
Conclusions {#conc}
===========
We have developed a thought experiment in which the circular polarizations of two photons can be swapped using an interferometric arrangement. The arrangement for executing this process is based on the original Quantum Cheshire Cat setup, where the circular polarization can be temporarily separated from the photon for suitable postselected states. Our method strives to decouple the polarization and the photon more permanently by replacing the original polarization with another that was previously associated with a different photon. This polarization in turn associates itself with the second photon. The effect holds only for a certain postselected state.
The implications for the swapping of photon polarization are significant. Firstly, it challenges the notion that a property must faithfully ‘belong’ to a particular physical system. In the realm of quantum systems, this ‘belongingness’ is certainly very capricious, with properties belonging to independent physical systems getting interchanged. The second point to note is that entanglement plays a crucial role in the realization of this swapping process. As discussed before, the swapping is successful only when a certain outcome is attained at the end. It so happens that this outcome is an entangled state. This implies that although the photon and its original polarization are permanently separated spatially, they are held together, along with the other photon and polarization, as one global state.
Acknowledgement
===============
The authors thank Sreeparna Das for her help with Fig. (\[cartoon\]).
[99]{}
Y. Aharonov, S. Popescu, D. Rohrlich and P. Skrzypczyk, New Journal of Physics, **15**, 113015 (2013).
Carroll, L. Alice’s Adventures In Wonderland, Wisehouse Classics; 2016 ed. (1865).
J. Bancal, Nature Physics, **10**, 11 (2013).
Y. Guryanova, N. Brunner and S. Popescu, ArXiv e-prints, 1203.4215 quant-ph, (2012).
I. Ibnouhsein and A. Grinbaum, ArXiv e-prints, 1202.4894, quant-ph (2012).
A. Matzkin and A. K. Pan, Journal of Physics A: Mathematical and Theoretical, **46**, 315307 (2013).
R. Corrêa, M. F. Santos, C. H. Monken and P. L. Saldanha, New Journal of Physics, **17**, 053042 (2015).
Q. Duprey and S. Kanjilal and U. Sinha and D. Home and A. Matzkin, Annals of Physics, **391**, 1 (2018).
M. Richter, B. Dziewit and J. Dajka, Advances in Mathematical Physics, 7060586 (2018).
D. Das and A. K. Pati, arXiv:1903.01452 (2019).
T. Denkmayr, H. Geppert, S. Sponar, H. Lemmel, A. Matzkin, J. Tollaksen and Y. Hasegawa, Nature Communications, **5**, 4492 (2014).
S. Sponar, T. Denkmayr, H. Geppert and Y. Hasegawa, Atoms, **4**, 11 (2016).
J. M. Ashby, P.D. Schwarz and M. Schlosshauer, Phys. Rev. A **94,** 012102 (2016).
D. P. Atherton, G. Ranjit, A. A. Geraci and J. D. Weinstein, Opt. Lett., **40**, 879 (2015).
Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. **60**, 1351-1354 (1988).
Y. Aharonov, S. Popescu and J. Tollaksen, Physics Today **63**, 27-32 (2010).
G. J. Pryde, J. L. O’Brien, A. G. White, T. C. Ralph and H. M. Wiseman, Phys. Rev. Lett. **94**, 220405 (2005).
I. M. Duck, P. M. Stevenson, and E. C. G. Sudarshan, Phys. Rev. D, **40**, 2112–2117 (1989).
M. Cormann, M. Remy, B. Kolaric, and Y. Caudano, Phys. Rev. A **93,** [042124]{} (2016).
R. Jozsa, Phys. Rev. A, **76**, 044103 (2007).
Y. Aharonov and L. Vaidman, Phys. Rev. A, **41**, 11–20, (1990).
P. B. Dixon, D. J. Starling, A. N. Jordan, N. Andrew and J. C. Howell, Phys. Rev. Lett., **102**, 173601 (2009).
A. Nishizawa, K. Nakamura and M. Fujimoto, Phys. Rev. A **85** (2012).
D. Sokolovski, Quanta **2**, 50–57 (2013).
O. Hosten and P. Kwiat, Science **319**, 787-790 (2008).
Y. Aharonov and S. Dolev, Springer Berlin Heidelberg, 283-297 (2005).
H. Kobayashi, K. Nonaka and Y. Shikano, Phys. Rev. A, **89**, 053816 (2014).
H. F. Hofmann, Phys. Rev. A **81**, 012103 (2010).
S. Wu, Scientific Reports **3**, 1193 (2013).
J. S. Lundeen, B. Sutherland, A. Patel, C. Stewart and C. Bamber, Nature, **474**, 188-191 (2011)
J. S. Lundeen and C. Bamber, Phys. Rev. Lett. **108**, 070402 (2012).
J. Tollaksen, Journal of Physics: Conference Series **70**, 012014 (2007).
A. K. Pati, U. Singh and U. Sinha, Phys. Rev. A **92**, 052120 (2015).
G. Nirala, S. N. Sahoo,A. K. Pati and U. Sinha, Phys. Rev. A **99,** 022111 (2019).
A. K. Pati, C. Mukhopadhyay, S. Chakraborty and S. Ghosh, arXiv:1901.07415 (2019).
H. J. Briegel and R. Raussendorf, Phys. Rev. Lett. **86,** 910–913 (2001).
---
abstract: 'Avoiding collisions with vulnerable road users (VRUs) using sensor-based early recognition of critical situations is one of the manifold opportunities provided by the current development in the field of intelligent vehicles. As especially pedestrians and cyclists are very agile and have a variety of movement options, modeling their behavior in traffic scenes is a challenging task. In this article we propose movement models based on machine learning methods, in particular artificial neural networks, in order to classify the current motion state and to predict the future trajectory of VRUs. Both model types are also combined to enable the application of specifically trained motion predictors based on a continuously updated pseudo probabilistic state classification. Furthermore, the architecture is used to evaluate motion-specific physical models for starting and stopping and video-based pedestrian motion classification. A comprehensive dataset consisting of 1068 pedestrian and 494 cyclist scenes acquired at an urban intersection is used for optimization, training, and evaluation of the different models. The results show substantial higher classification rates and the ability to earlier recognize motion state changes with the machine learning approaches compared to interacting multiple model (IMM) Kalman Filtering. The trajectory prediction quality is also improved for all kinds of test scenes, especially when starting and stopping motions are included. Here, 37% and 41% lower position errors were achieved on average, respectively.'
author:
- 'Michael Goldhammer, Sebastian Köhler, Stefan Zernetsch, Konrad Doll, Bernhard Sick, and Klaus Dietmayer [^1] [^2] [^3]'
bibliography:
- 'IEEEabrv.bib'
- 'bibliography.bib'
title: 'Intentions of Vulnerable Road Users – Detection and Forecasting by Means of Machine Learning'
---
road safety, vulnerable road users, movement modeling, intention recognition, motion classification, trajectory prediction, artificial neural networks
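As a rough illustration of the combined architecture sketched in the abstract (state-specific trajectory predictors weighted by a pseudo-probabilistic state classification), the following sketch uses scikit-learn MLPs on synthetic data. The state set, feature layout, network sizes, and prediction horizon are illustrative assumptions only and do not correspond to the models actually used in the article.

```python
# Minimal sketch (illustrative only): weight state-specific trajectory predictors
# by a pseudo-probabilistic motion-state classification, as outlined in the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

STATES = ["waiting", "starting", "moving", "stopping"]  # assumed state set
N_PAST, N_FUTURE = 10, 25                               # assumed history/horizon lengths

# Hypothetical training data: past-trajectory features, state labels, future offsets.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2 * N_PAST))                  # e.g. flattened (x, y) history
y_state = rng.integers(0, len(STATES), size=500)
Y_future = rng.normal(size=(500, 2 * N_FUTURE))         # future (x, y) offsets

state_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y_state)
predictors = {
    s: MLPRegressor(hidden_layer_sizes=(64,), max_iter=300).fit(
        X[y_state == i], Y_future[y_state == i])
    for i, s in enumerate(STATES)
}

def predict_trajectory(x):
    """Blend the specialized predictors with the current state probabilities."""
    p = state_clf.predict_proba(x[None, :])[0]
    return sum(p[i] * predictors[s].predict(x[None, :])[0] for i, s in enumerate(STATES))

print(predict_trajectory(X[0]).shape)  # (2 * N_FUTURE,)
```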
Acknowledgment {#acknowledgment .unnumbered}
==============
This work partially results from the project AFUSS, supported by the German Federal Ministry of Education and Research (BMBF) under grant number 03FH021I3, and the project DeCoInt$^2$, supported by the German Research Foundation (DFG) within the priority program SPP 1835: “Kooperativ Interagierende Automobile”, grant numbers DO 1186/1-1 and SI 674/11-1.
[Michael Goldhammer]{} received the M. Eng. degree in Electrical Engineering and Information Technology from the University of Applied Sciences Aschaffenburg (Germany) in 2009 and the Dr. rer. nat. degree in computer sciences in 2016 from the University of Kassel (Germany). His PhD thesis focuses on self-learning algorithms for video-based intention detection of pedestrians. His research interests include machine vision and self-learning methods for sensor data processing and automotive safety purposes.
[Sebastian Köhler]{} received the Dipl.-Ing. (FH) degree in Mechatronics and the M. Eng. degree in Electrical Engineering and Information Technology from the University of Applied Sciences Aschaffenburg, Germany, in 2010 and 2011, respectively. Currently, he is working on his PhD thesis in cooperation with the Institute of Measurement, Control, and Microtechnology, University of Ulm, Germany, focussing on intention detection of pedestrians in urban traffic. His main research interests include stereo vision, sensor and information fusion, road user detection and short-term behavior recognition for ADAS.
[Stefan Zernetsch]{} received the B.Eng. and the M.Eng. degree in Electrical Engineering and Information Technology from the University of Applied Sciences Aschaffenburg, Germany, in 2012 and 2014, respectively. Currently, he is working on his PhD thesis in cooperation with the Faculty of Electrical Engineering and Computer Science of the University of Kassel, Germany. His research interests include cooperative sensor networks, sensor data fusion, multiple view geometry, pattern recognition and short-term behavior recognition of traffic participants.
[Konrad Doll]{} received the Diploma (Dipl.-Ing.) degree and the Dr.-Ing. degree in Electrical Engineering and Information Technology from the Technical University of Munich, Germany, in 1989 and 1994, respectively. In 1994 he joined the Semiconductor Products Sector of Motorola, Inc. (now Freescale Semiconductor, Inc.). In 1997 he was appointed to professor at the University of Applied Sciences Aschaffenburg in the field of computer science and digital systems design. His research interests include intelligent systems, their real-time implementations on platforms like CPU, GPUs and FPGAs and their applications in advanced driver assistance systems. Konrad Doll is member of the IEEE.
[Bernhard Sick]{} received the diploma, the Ph.D. degree, and the “Habilitation” degree, all in computer science, from the University of Passau, Germany, in 1992, 1999, and 2004, respectively. Currently, he is full Professor for intelligent embedded systems at the Faculty for Electrical Engineering and Computer Science of the University of Kassel, Germany. There, he is conducting research in the areas autonomic and organic computing and technical data analytics with applications in biometrics, intrusion detection, energy management, automotive engineering, and others. He authored more than 90 peer-reviewed publications in these areas.\
Dr. Sick is associate editor of the IEEE TRANSACTIONS ON CYBERNETICS. He holds one patent and received several thesis, best paper, teaching, and inventor awards. He is a member of IEEE (Systems, Man, and Cybernetics Society, Computer Society, and Computational Intelligence Society) and GI (Gesellschaft fuer Informatik).
[Klaus Dietmayer]{} was born in Celle, Germany in 1962. He received the Diploma degree (equivalent to M.Sc. degree) in electrical engineering from Braunschweig University of Technology, Braunschweig, Germany, in 1989 and the Dr.-Ing. degree (equivalent to Ph.D. degree) from the Helmut Schmidt University, Hamburg, Germany, in 1994. In 1994, he joined the Philips Semiconductors Systems Laboratory, Hamburg, as a Research Engineer. Since 1996, he has been a Manager in the field of networks and sensors for automotive applications. In 2000, he was appointed to a professorship at Ulm University, Ulm, Germany, in the field of measurement and control. He is currently a Full Professor and the Director of the Institute of Measurement, Control and Microtechnology with the School of Engineering and Computer Science, Ulm University. His research interests include information fusion, multiobject tracking, environment perception for advanced automotive driver assistance, and E-Mobility.\
Dr. Dietmayer is a member of the German Society of Engineers VDI/VDE.
[^1]: M. Goldhammer, S. Köhler, S. Zernetsch and K. Doll are with the Faculty of Engineering, University of Applied Sciences Aschaffenburg, Aschaffenburg, Germany (e-mail: {michael.goldhammer, sebastian.koehler, stefan.zernetsch, konrad.doll}@h-ab.de)
[^2]: B. Sick is with the Intelligent Embedded Systems Lab, University of Kassel, Kassel, Germany (e-mail: bsick@uni-kassel.de)
[^3]: K. Dietmayer is with the Institute of Measurement Control and Microtechnology, Ulm University, Ulm, Germany (e-mail: klaus.dietmayer@uni-ulm.de)
---
abstract: |
We examine the orbits of satellite galaxies identified in a suite of N-body/gasdynamical simulations of the formation of $L_*$ galaxies in a $\Lambda$CDM universe. The numerical resolution of the simulations allows us to track in detail the orbits of the $\sim $ ten brightest satellites around each primary. Most satellites follow conventional orbits; after turning around, they accrete into their host halo and settle on orbits whose apocentric radii are steadily eroded by dynamical friction. As a result, satellites associated with the primary are typically found within its virial radius, $r_{\rm vir}$, and have velocities consistent with a Gaussian distribution with mild radial anisotropy. However, a number of outliers are also present. We find that a surprising number (about one-third) of satellites identified at $z=0$ are on unorthodox orbits, with apocenters that exceed their turnaround radii. These include a number of objects with extreme velocities and apocentric radii at times exceeding $\sim 3.5\,
r_{\rm vir}$ (or, e.g., $\gsim \, 1$ Mpc when scaled to the Milky Way). This population of satellites on extreme orbits consists typically of the faint member of a satellite pair whose kinship is severed by the tidal field of the primary during first approach. Under the right circumstances, the heavier member of the pair remains bound to the primary, whilst the lighter companion is ejected onto a highly-energetic orbit. Since the concurrent accretion of multiple satellite systems is a defining feature of hierarchical models of galaxy formation, a fairly robust prediction of this scenario is that at least some of these extreme objects should be present in the Local Group. We speculate that this three-body ejection mechanism may be the origin of (i) some of the newly discovered high-speed satellites around M31 (such as Andromeda XIV); (ii) some of the distant fast-receding Local Group members, such as Leo I; and (iii) the oddly isolated dwarf spheroidals Cetus and Tucana in the outskirts of the Local Group. Our results suggest that care must be exercised when using the orbits of the most weakly bound satellites to place constraints on the total mass of the Local Group.
author:
- |
Laura V. Sales$^{1,2}$, Julio F. Navarro,$^{3,4}$[^1] Mario G. Abadi $^{1,2,3}$ and Matthias Steinmetz$^{5}$\
$^{1}$ Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, 5000 Córdoba, Argentina.\
$^{2}$ Instituto de Astronomía Teórica y Experimental, Conicet, Argentina.\
$^{3}$Department of Physics and Astronomy, University of Victoria, Victoria, BC V8P 5C2, Canada\
$^{4}$Max-Planck Institut für Astrophysik, Karl-Schwarzschild Strasse 1, Garching, D-85741, Germany\
$^{5}$Astrophysikalisches Institut Potsdam, An der Sternwarte 16, Potsdam 14482, Germany\
bibliography:
- 'references.bib'
title: |
Cosmic Ménage à Trois: The Origin of Satellite Galaxies\
on Extreme Orbits
---
galaxies: haloes - galaxies: formation - galaxies: evolution - galaxies: kinematics and dynamics.
Introduction {#sec:intro}
============
The study of Local Group satellite galaxies has been revolutionized by digital imaging surveys of large areas of the sky. More than a dozen new satellites have been discovered in the past couple of years [@zucker04; @zucker06; @willman05b; @martin06; @belokurov06; @belokurov07; @irwin07; @majewski07], due in large part to the completion of the Sloan Digital Sky Survey [@york00; @strauss02] and to concerted campaigns designed to image in detail the Andromeda galaxy and its immediate surroundings (@ibata01 [@ferguson02; @reitzel02; @mcconnachie03; @rich04; @guhathakurta06; @gilbert06; @chapman06], Ibata et al. 2007 submitted). The newly discovered satellites have extended the faint-end of the galaxy luminosity function down to roughly $\sim 10^3 \,
L_{\odot}$, and are likely to provide important constraints regarding the mechanisms responsible for “lighting up” the baryons in low-mass halos. These, in turn, will serve to validate (or falsify) the various theoretical models attempting to reconcile the wealth of “substructure” predicted in cold dark matter (CDM) halos with the scarcity of luminous satellites in the Local Group [see, e.g. @klypin99b; @bullock00; @benson02; @stoehr02; @kazantzidis04; @kravtsov04; @penarrubia07].
At the same time, once velocities and distances are secured for the newly-discovered satellites, dynamical studies of the total mass and spatial extent of the Local Group will gain new impetus. These studies have a long history [@littleandtremaine87; @zaritsky89; @kochanek96; @wilkinson99; @evans00; @battaglia05], but their results have traditionally been regarded as tentative rather than conclusive, particularly because of the small number of objects involved, as well as the sensitivity of the results to the inclusion (or omission) of one or two objects with large velocities and/or distances [@zaritsky89; @kochanek96; @sakamoto03]. An enlarged satellite sample will likely make the conclusions of satellite dynamical studies more compelling and robust.
To this end, most theoretical work typically assumes that satellites are in equilibrium, and uses crafty techniques to overcome the limitations of small-N statistics when applying Jeans’ equations to estimate masses [see, e.g., @littleandtremaine87; @wilkinson99; @evans00]. With increased sample size, however, come enhanced opportunities to discover satellites on unlikely orbits; i.e., dynamical “outliers” that may challenge the expectations of simple-minded models of satellite formation and evolution. It is important to clarify the origin of such systems, given their disproportionate weight in mass estimates.
One issue to consider is that the assumption of equilibrium must break down when considering outliers in phase space. This is because the finite age of the Universe places an upper limit to the orbital period of satellites observed in the Local Group; high-speed satellites have typically large apocenters and long orbital periods, implying that they cannot be dynamically well-mixed and casting doubts on the applicability of Jeans’ theorem-inspired analysis tools.
To make progress, one possibility is to explore variants of the standard secondary infall model [@gunnandgott72; @gott75; @gunn77; @fillmore84], where satellites are assumed to recede initially with the universal expansion, before turning around and collapsing onto the primary due to its gravitational pull. This is the approach adopted by @zaritskyandwhite94 in order to interpret statistically the kinematics of observed satellite samples without assuming well-mixed orbits and taking into account the proper timing and phase of the accretion process.
![image](figs/letfig1.ps){width="100mm"}
In the secondary infall accretion sequence, satellites initially farther away accrete later, after turning around from larger turnaround radii. The turn-around radius grows with time, at a rate the depends on the mass of the primary and its environment, as well as on the cosmological model. Three distinct regions surround a system formed by spherical secondary infall [see, e.g., @bertschinger85; @navarroandwhite93]: (i) an outer region beyond the current turnaround radius where satellites are still expanding away, (ii) an intermediate region containing satellites that are approaching the primary for the first time, and (iii) an inner, “virialized” region containing all satellites that have turned around at earlier times and are still orbiting around the primary. To good approximation, the latter region is delineated roughly by the conventional virial radius of a system[[^2]]{}, $r_{\rm vir}$; the turnaround radius is of order $r_{\rm ta} \sim 3 \,r_{\rm vir}$ [see, e.g. @white93].
We note a few consequences of this model. (a) Satellites outside the virial radius are on their first approach to the system and thus have not yet been inside $r_{\rm vir}$. (b) Satellites inside the virial radius have apocentric radii that typically do not exceed $r_{\rm
vir}$. (c) The farther the turnaround radius the longer it takes for a satellite to turn around and accrete and the higher its orbital energy. (d) Satellites with extreme velocities will, in general, be those completing their first orbit around the primary. Velocities will be maximal near the center, where satellites may reach speeds as high as $\sim 3 \,V_{\rm vir}$. (e) Since all satellites associated with the primary are bound (otherwise they would not have turned around and collapsed under the gravitational pull of the primary), the velocity of the highest-speed satellite may be used to estimate a lower limit to the escape velocity at its location and, thus, a lower bound to the total mass of the system.
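As a rough illustration of point (e), a bound satellite of speed $v$ at radius $r$ implies $M(<r) \ge v^{2}r/(2G)$ for a point-mass potential; more realistic profiles (e.g. NFW) change the numerical factor but not the logic. The values in the sketch below are purely illustrative and are not measurements of any particular satellite.

```python
# Lower bound on the enclosed mass implied by a bound satellite:
# v <= sqrt(2 G M / r)  =>  M >= v^2 r / (2 G).  Values are illustrative only.
G = 4.30091e-6          # gravitational constant in kpc (km/s)^2 / Msun

def mass_lower_bound(v_kms, r_kpc):
    """Minimum enclosed mass (Msun) for a satellite of speed v at radius r to be bound."""
    return v_kms**2 * r_kpc / (2.0 * G)

print(f"{mass_lower_bound(200.0, 200.0):.2e} Msun")  # ~9e11 Msun for v=200 km/s at r=200 kpc
```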
Hierarchical galaxy formation models, such as the current $\Lambda$CDM paradigm, suggest further complexity in this picture. Firstly, although numerical simulations show that the sequence of expansion, turnaround and accretion of satellites described above is more or less preserved in hierarchical models, the evolution is far from spherically symmetric [@navarro94; @ghigna98; @jing02; @bailin05; @knebe06]. Much of the mass (as well as many of the satellites) is accreted through filaments of matter embedded within sheets of matter [see, e.g., @navarro04]. The anisotropic collapse pattern onto a primary implies that the turnaround “surface” won’t be spherical and that the virial radius may not contain [*all*]{} satellites that have completed at least one orbit around the primary [see, e.g., @balogh00; @diemand07].
More importantly for the purposes of this paper, in hierarchical models galaxy systems are assembled by collecting smaller systems which themselves, in turn, were assembled out of smaller units. This implies that satellites will in general not be accreted in isolation, but frequently as part of larger structures containing multiple systems. This allows for complex many-body interactions to take place during approach to the primary that may result in substantial modification to the orbits of accreted satellites.
We address this issue in this contribution using N-body/gasdynamical simulations of galaxy formation in the current $\Lambda$CDM paradigm. We introduce briefly the simulations in § \[sec:numexp\], and analyze and discuss them in § \[sec:analysis\]. We speculate on possible applications to the Local Group satellite population in §\[sec:LG\] and conclude with a brief summary in § \[sec:conc\].
The Numerical Simulations {#sec:numexp}
=========================
We identify satellite galaxies in a suite of eight simulations of the formation of $L_*$ galaxies in the $\Lambda$CDM scenario. This series has been presented by Abadi, Navarro & Steinmetz (2006), and follows the same numerical scheme originally introduced by @steinmetzandnavarro02. The “primary” galaxies in these simulations have been analyzed in detail in several recent papers, which the interested reader may wish to consult for details [@abadi03a; @abadi03b; @meza03; @meza05; @navarro04]. We give a brief outline below for completeness.
Each simulation follows the evolution of a small region of the universe chosen so as to encompass the mass of an $L_{*}$ galaxy system. This region is chosen from a large periodic box and resimulated at higher resolution preserving the tidal fields from the whole box. The simulation includes the gravitational effects of dark matter, gas and stars, and follows the hydrodynamical evolution of the gaseous component using the Smooth Particle Hydrodynamics (SPH) technique [@steinmetz96]. We adopt the following cosmological parameters for the $\Lambda$CDM scenario: $H_0=65$ km/s/Mpc, $\sigma_8=0.9$, $\Omega_{\Lambda}=0.7$, $\Omega_{\rm CDM}=0.255$, $\Omega_{\rm b}=0.045$, with no tilt in the primordial power spectrum.
All re-simulations start at redshift $z_{\rm init}=50$, have force resolution of order $1$ kpc, and the mass resolution is chosen so that each galaxy is represented on average, at $z=0$, with $\sim 50,000$ dark matter/gas particles. Gas is turned into stars at rates consistent with the empirical Schmidt-like law of @kennicutt98. Because of this, star formation proceeds efficiently only in high-density regions at the center of dark halos, and the stellar components of primary and satellite galaxies are strongly segregated spatially from the dark matter.
Each re-simulation follows a single $\sim L_*$ galaxy in detail, and resolves as well a number of smaller, self-bound systems of stars, gas, and dark matter we shall call generically “satellites”. We shall hereafter refer to the main galaxy indistinctly as “primary” or “host”. The resolved satellites span a range of luminosities, down to about six or seven magnitudes fainter than the primary. Each primary has on average $\sim 10$ satellites within the virial radius.
Figure \[fig:xyzsat\] illustrates the $z=0$ spatial configuration of star particles in one of the simulations of our series. Only star particles are shown here, and are colored according to their age: stars younger than $\simeq 1$ Gyr are shown in blue; those older than $\simeq 10$ Gyr in red. The large box is centered on the primary and is $2\, r_{\rm vir}$ ($632$ kpc) on a side. The “primary” is situated at the center of the large box and contains most of the stars. Indeed, although not immediately apparent in this rendition, more than $85\%$ of all stars are within $\sim 20$ kpc from the center. Outside that radius most of the stars are old and belong to the stellar halo, except for a plume of younger stars stripped from a satellite that has recently merged with the primary. Satellites “associated” with the primary (see § \[ssec:convorb\] for a definition) are indicated with small boxes. Note that a few of them lie well beyond the virial radius of the primary.
A preliminary analysis of the properties of the simulated satellite population and its relation to the stellar halo and the primary galaxy has been presented in Abadi, Navarro & Steinmetz (2006) and Sales et al (2007, submitted), where the interested reader may find further details.
![ Distance to the primary as a function of time for four satellites selected in one of our simulations. The four satellites are accreted into the primary in two pairs of unequal mass. The heavier satellite of the pair, shown by solid lines, follows a “conventional” orbit: after turning around from the universal expansion, it accretes into the primary on a fairly eccentric orbit which becomes progressively more bound by the effects of dynamical friction. Note that, once accreted, these satellites on “conventional” orbits do not leave the virial radius of the primary, which is shown by a dotted line. The light member of the pair, on the other hand, is ejected from the system as a result of a three-body interaction between the pair and the primary during first approach. One of the ejected satellites shown here is the “escaping” satellite identified in Figure \[fig:rvMW\]; the other is the most distant “associated” satellite in that Figure. The latter is still moving toward apocenter at $z=0$, which we estimate to be as far as $\sim 3.5\, r_{\rm vir}$.[]{data-label="fig:orbesc"}](figs/letfig2.ps){width="84mm"}
Results and Discussion {#sec:analysis}
======================
Satellites on conventional orbits {#ssec:convorb}
---------------------------------
The evolution of satellites in our simulations follows roughly the various stages anticipated by our discussion of the secondary infall model; after initially receding with the universal expansion, satellites turn around and are accreted into the primary. Satellites massive enough to be well resolved in our simulations form stars actively before accretion and, by the time they cross the virial radius of the primary, much of their baryonic component is in a tightly bound collection of stars at the center of their own dark matter halos.
The stellar component of a satellite is thus quite resilient to the effect of tides and can survive as a self-bound entity for several orbits. This is illustrated by the [*solid lines*]{} in Figure \[fig:orbesc\], which show, for one of our simulations, the evolution of the distance to the primary of two satellites that turn around and are accreted into the primary at different times. As expected from the secondary infall model, satellites that are initially farther away turn around later; do so from larger radii; and are on more energetic orbits. After accretion (defined as the time when a satellite crosses the virial radius of the primary), their orbital energy and eccentricity are eroded by dynamical friction, and these two satellites do not leave the virial radius of the primary, shown by the dotted line in Figure \[fig:orbesc\]. Depending on their mass and orbital parameters, some of these satellites merge with the primary shortly after accretion, while others survive as self-bound entities until $z=0$. For short, we shall refer to satellites that, by $z=0$, have crossed the virial radius boundary at least once as satellites “associated” with the primary.
The ensemble of surviving satellites at $z=0$ have kinematics consistent with the evolution described above. This is illustrated in Figure \[fig:rvMW\], where we show the radial velocities of all satellites as a function of their distance to the primary, scaled to virial units. Note that the majority of “associated” satellites (shown as circles in this figure) are confined within $r_{\rm vir}$, and that their velocity distribution is reasonably symmetric and consistent with a Gaussian (Sales et al 2007). The most recently accreted satellites tend to have higher-than-average speed at all radii, as shown by the “crossed” circles, which identify all satellites accreted within the last $3$ Gyr.
Crosses (without circles) in this figure correspond to satellites that have not yet been accreted into the primary. These show a clear infall pattern outside $r_{\rm vir}$, where the mean infall velocity decreases with radius and approaches zero at the current turnaround radius, located at about $3 \, r_{\rm vir}$. All of these properties agree well with the expectations of the secondary infall model discussed above.
Three-body interactions and satellites on unorthodox orbits
-----------------------------------------------------------
Closer examination, however, shows a few surprises. To begin with, a number of “associated” satellites are found outside $r_{\rm
vir}$. As reported in previous work [see, e.g., @balogh00; @moore04; @gill05; @diemand07], these are a minority ($\sim 15\%$ in our simulation series), and have been traditionally linked to departures from spherical symmetry during the accretion process. Indeed, anisotropies in the mass distribution during expansion and recollapse may endow some objects with a slight excess acceleration or, at times, may push satellites onto rather tangential orbits that “miss” the inner regions of the primary, where satellites are typically decelerated into orbits confined within the virial radius.
These effects may account for some of the associated satellites found outside $r_{\rm vir}$ at $z=0$, but cannot explain why $\sim 33\%$ of all associated satellites are today on orbits whose apocenters exceed their turnaround radius. This is illustrated in Figure \[fig:rhist\], where we show a histogram of the ratio between apocentric radius (measured at $z=0$; $r_{\rm apo}$) and turnaround radius ($r_{\rm ta}$). The histogram highlights the presence of two distinct populations: satellites on “conventional” orbits with $r_{\rm apo}/r_{\rm ta} <1$, and satellites on orbital paths that lead them well beyond their original turnaround radius.
Intriguingly, a small but significant fraction ($\sim 6\%$) of satellites have extremely large apocentric radius, exceeding their turnaround radius by $50\%$ or more. These systems have clearly been affected by some mechanism that propelled them onto orbits substantially more energetic than the ones they had followed until turnaround. This mechanism seems to operate preferentially on low-mass satellites, as shown by the dashed histogram in Figure \[fig:rhist\], which corresponds to satellites with stellar masses less than $\sim 3\%$ that of the primary.
![Radial velocity of satellites versus distance to the primary. Velocities are scaled to the virial velocity of the system, distances to the virial radius. Circles denote “associated” satellites; i.e., those that have been [*inside*]{} the virial radius of the primary at some earlier time. Crosses indicate satellites that are on their first approach, and have never been inside $r_{\rm
vir}$. Filled circles indicate associated satellites whose apocentric radii exceed their turnaround radius by at least $25\%$, indicating that their orbital energies have been substantially altered during their evolution. “Crossed” circles correspond to associated satellites that have entered $r_{\rm vir}$ during the last $3$ Gyrs. The curves delineating the top and bottom boundaries of the distribution show the escape velocity of an NFW halo with concentration $c=10$ and $c=20$, respectively. Note that there is one satellite “escaping” the system with positive radial velocity. Solid lines show the trajectories in the $r-V_r$ plane of the two ”ejected” satellites shown in figure \[fig:orbesc\]. Filled squares correspond to the fourteen brightest Milky Way satellites, taken from @vandenbergh99 (complemented with NED data for the Phoenix, Tucana and NGC6822), and plotted assuming that $V_{\rm vir}^{\rm
MW} \sim 109$ km/s and $r_{\rm vir}^{\rm MW}=237$ kpc (see Sales et al 2007). Arrows indicate how the positions of MW satellites in this plot would be altered if our estimate of $V_{\rm vir}^{\rm MW}$ (and, consequently, $r_{\rm vir}^{\rm MW}$) is allowed to vary by $\pm 20\%$.[]{data-label="fig:rvMW"}](figs/letfig3.ps){width="84mm"}
We highlight some of these objects in Figure \[fig:rvMW\], using “filled” circles to denote “associated” satellites whose apocenters at $z=0$ exceed their turnaround radii by at least $25\%$. Two such objects are worth noting in this figure: one of them is the farthest “associated” satellite, found at more than $\sim 2.5
\, r_{\rm vir}$ from the primary; the second is an outward-moving satellite just outside the virial radius but with radial velocity approaching $\sim 2\, V_{\rm vir}$. The latter, in particular, is an extraordinary object, since its radial velocity alone exceeds the nominal escape velocity[[^3]]{} at that radius. This satellite is on a trajectory which, for all practical purposes, will remove it from the vicinity of the primary and leave it wandering through intergalactic space.
The origin of these unusual objects becomes clear when inspecting Figure \[fig:orbesc\]. The two satellites in question are shown with [*dashed lines*]{} in this figure; each is a member of a bound [*pair*]{} of satellites (the other member of the pair is shown with solid lines of the same color). During first pericentric approach, the pair is disrupted by the tidal field of the primary and, while one member of the pair remains bound and follows the kind of “conventional” orbit described in § \[ssec:convorb\], the other one is ejected from the system on an extreme orbit. The trajectories of these two “ejected” satellites in the $r$-$V_r$ plane are shown by the wiggly lines in Figure \[fig:rvMW\].
These three-body interactions typically involve the first pericentric approach of a bound pair of accreted satellites and tend to eject the lighter member of the pair: in the example of Figure \[fig:orbesc\], the “ejected” member makes up, respectively, only $3\%$ and $6\%$ of the total mass of the pair at the time of accretion. Other interaction configurations leading to ejection are possible, such as an unrelated satellite that approaches the system during the late stages of a merger event, but they are rare, at least in our simulation series. We emphasize that not all satellites that have gained energy during accretion leave the system; most are just put on orbits of unusually large apocenter but remain bound to the primary. This is shown by the filled circles in Figure \[fig:rvMW\]; many affected satellites are today completing their second or, for some, third orbit around the primary.
The ejection mechanism is perhaps best appreciated by inspecting the orbital paths of the satellite pairs. These are shown in Figure \[fig:orbxyesc\], where the top (bottom) panels correspond to the satellite pair accreted later (earlier) into the primary in Figure \[fig:orbesc\]. Note that in both cases, as the pair approaches pericenter, the lighter member (dashed lines) is also in the process of approaching the pericenter of its own orbit around the heavier member of the pair. This coincidence in orbital phase combines the gravitational attraction of the two more massive members of the trio of galaxies, leading to a substantial gain in orbital energy by the lightest satellite, effectively ejecting it from the system on an approximately radial orbit. The heavier member of the infalling pair, on the other hand, decays onto a much more tightly bound orbit.
Figure \[fig:orbxyesc\] also illustrates the complexity of orbital configurations that are possible during these three-body interactions. Although the pair depicted in the top panels approaches the primary as a cohesive unit, at pericenter each satellite circles about the primary in opposite directions: in the $y$-$z$ projection the heavier member circles the primary [*clockwise*]{} whereas the ejected companion goes about it [*counterclockwise*]{}. After pericenter, not only do the orbits of each satellite have different period and energy, but they differ even in the [*sign*]{} of their orbital angular momentum. In this case it would clearly be very difficult to link the two satellites to a previously bound pair on the basis of observations of their orbits after pericenter.
Although not all ejections are as complex as the one illustrated in the top panels of Figure \[fig:orbxyesc\], it should be clear from this figure that reconstructing the orbits of satellites that have been through pericenter is extremely difficult, both for satellites that are ejected as well as for those that remain bound. For example, the massive member of the late-accreting pair in Figure \[fig:orbesc\] sees its apocenter reduced by more than a factor of $\sim 5$ from its turnaround value in a single pericentric passage. Such dramatic variations in orbital energy are difficult to reproduce with simple analytic treatments inspired by Chandrasekhar’s dynamical friction formula (Peñarrubia 2007, private communication).
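The slingshot-like energy exchange described above can be explored with a toy point-mass integrator. The sketch below is not the SPH code used for the simulations; the units, masses, softening, and initial conditions are illustrative assumptions, and the initial conditions must be tuned (a light satellite near pericentre of its orbit about a heavier companion as the pair reaches pericentre of the primary) to actually produce an ejection.

```python
# Toy three-body (point-mass) integrator to explore slingshot-like ejections.
# Units: kpc, km/s, Msun; one time unit of kpc/(km/s) is ~0.98 Gyr.
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def accelerations(pos, mass, eps=1.0):
    """Direct-summation accelerations with Plummer softening eps (kpc)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                dr = pos[j] - pos[i]
                acc[i] += G * mass[j] * dr / (dr @ dr + eps**2) ** 1.5
    return acc

def leapfrog(pos, vel, mass, dt, nsteps):
    """Kick-drift-kick integration of the point-mass trio."""
    acc = accelerations(pos, mass)
    for _ in range(nsteps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

def specific_energy(pos, vel, mass):
    """Specific orbital energy of bodies 1 and 2 relative to the primary (body 0),
    using a point-mass potential for the primary; positive means formally unbound."""
    e = []
    for k in (1, 2):
        dr, dv = pos[k] - pos[0], vel[k] - vel[0]
        e.append(0.5 * dv @ dv - G * mass[0] / np.sqrt(dr @ dr))
    return e

# Arbitrary example setup (for testing the integrator; not tuned to produce an ejection):
pos = np.array([[0.0, 0.0], [300.0, 0.0], [310.0, 5.0]])
vel = np.array([[0.0, 0.0], [-80.0, 20.0], [-80.0, -40.0]])
mass = np.array([1e12, 1e11, 3e9])
pos, vel = leapfrog(pos, vel, mass, dt=0.001, nsteps=5000)
print(specific_energy(pos, vel, mass))
```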
![Distribution of the ratio between the apocentric radius of satellites (measured at $z=0$) and their turnaround radius, defined as the maximum distance to the primary before accretion. Note the presence of two groups. Satellites on “conventional” orbits have $r_{\rm apo}/r_{\rm ta}<1$, the rest have been catapulted into high-energy orbits by three-body interactions during first approach. The satellite marked with a rightward arrow is the “escaping” satellite identified by a dot-centered circle in Figure \[fig:rvMW\]; this system has nominally infinite $r_{\rm
apo}$. The dashed histogram highlights the population of low-mass satellites; i.e., those with stellar masses at accretion time not exceeding $2.6\%$ of the primary’s final $M_{str}$. The satellite marked with an arrow is a formal “escaper” for which $r_{\rm apo}$ cannot be computed.[]{data-label="fig:rhist"}](figs/letfig4.ps){width="84mm"}
Application to the Local Group {#sec:LG}
==============================
We may apply these results to the interpretation of kinematical outliers within the satellite population around the Milky Way (MW) and M31, the giant spirals in the Local Group. Although part of the discussion that follows is slightly speculative due to lack of suitable data on the three-dimensional orbits of nearby satellites, we feel that it is important to highlight the role that the concomitant accretion of multiple satellites may have played in shaping the dynamics of the dwarf members in the Local Group.
Milky Way satellites {#ssec:MW}
--------------------
The filled squares in Figure \[fig:rvMW\] show the galactocentric radial velocity of thirteen bright satellites around the Milky Way and compare them with the simulated satellite population. This comparison requires a choice for the virial radius and virial velocity of the Milky Way, which are observationally poorly constrained.
We follow here the approach of Sales et al (2007), and use the kinematics of the satellite population itself to set the parameters of the Milky Way halo. These authors find that simulated satellites are only mildly biased in velocity relative to the dominant dark matter component: $\sigma_{\rm r} \sim 0.9 (\pm 0.2) V_{\rm vir}$, where $\sigma_{\rm r}$ is the radial velocity dispersion of the satellite population within $r_{vir}$. Using this, we find $V_{\rm vir}^{\rm MW}=109 \pm 22$ km/s and $r_{\rm vir}^{\rm MW}=237\pm 50$ kpc from the observed radial velocity dispersion of $\sim 98$ km/s. This corresponds to $M_{\rm
vir}^{\rm MW}=7 \times 10^{11} M_{\odot}$, in reasonable agreement with the $1$-$2 \times 10^{12} M_{\odot}$ estimate of @klypin02 and with the recent findings of @smith06 based on estimates of the escape velocity in the solar neighbourhood.
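The chain from the observed dispersion to $(V_{\rm vir}, r_{\rm vir}, M_{\rm vir})$ is straightforward arithmetic once a Hubble constant and $\Delta_{\rm vir}$ are adopted. The short check below is a sketch assuming the simulation cosmology ($H_0=65$ km/s/Mpc, $\Delta_{\rm vir}\simeq 100$ at $z=0$) and is only meant to verify the quoted values.

```python
# Back-of-the-envelope check of the Milky Way virial parameters quoted in the text.
import numpy as np

G_kpc = 4.30091e-6          # G in kpc (km/s)^2 / Msun
H0 = 65.0 / 1000.0          # Hubble constant in (km/s)/kpc
Delta_vir = 100.0           # overdensity at z=0 (see footnote on Delta_vir)

sigma_r = 98.0              # observed radial velocity dispersion of MW satellites (km/s)
V_vir = sigma_r / 0.9       # sigma_r ~ 0.9 V_vir from the simulations
r_vir = V_vir / (H0 * np.sqrt(Delta_vir / 2.0))   # from V_vir^2 = (Delta/2) H^2 r_vir^2
M_vir = V_vir**2 * r_vir / G_kpc                  # V_vir^2 = G M_vir / r_vir

print(f"V_vir ~ {V_vir:.0f} km/s, r_vir ~ {r_vir:.0f} kpc, M_vir ~ {M_vir:.1e} Msun")
# -> roughly 109 km/s, 237 kpc, 7e11 Msun, matching the values quoted above
```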
Since the Leo I dwarf has the largest radial velocity of the Milky Way satellites, we have recomputed the radial velocity dispersion excluding it from the sample. We have found that $\sigma_r$ drops from 98 to 82 km/s when Leo I is not taken into account, changing our estimate of $V_{\rm vir}^{\rm MW}$ from 109 to 91 km/s, still within the errors of the value previously found. Given the recent rapid growth in the number of known Milky Way satellites, one would suspect that the velocity dispersion will significantly increase if more Leo I-like satellites are detected. However, we notice that, given their high velocities, they are not expected to remain inside the virial radius for a long time period and hence would contribute little to the $\sigma_r$ computation.
Figure \[fig:rvMW\] shows that, considering $V_{\rm vir}^{\rm MW}=109$ km/s, the velocities and positions of all MW satellites are reasonably consistent with the simulated satellite population, with the possible exception of Leo I, which is located near the virial radius and is moving outward with a velocity clearly exceeding $V_{\rm vir}$. Indeed, for $V_{\rm
vir}^{\rm MW}=109$ km/s, Leo I lies right on the escape velocity curve of an NFW profile with concentration parameter similar to those measured in the simulations. This is clearly a kinematical outlier reminiscent of the satellite expelled by three-body interactions discussed in the previous subsection and identified by a dot-centered circle in Figure \[fig:rvMW\]. This is the [*only*]{} “associated” satellite in our simulations with radial velocity exceeding $V_{\rm
vir}$ and located outside $r_{\rm vir}$.
Could Leo I be a satellite that has been propelled into a highly-energetic orbit through a three-body interaction? If so, there are a number of generic predictions that might be possible to verify observationally. One is that its orbit must be now basically radial in the rest frame of the Galaxy, although it might be some time before proper motion studies are able to falsify this prediction. A second possibility is to try and identify the second member of the pair to which it belonged. An outward moving satellite on a radial orbit takes only $\sim 2$-$3$ Gyr to reach $r_{\rm vir}$ with escape velocity. Coincidentally, this is about the time that the Magellanic Clouds pair were last at pericenter, according to the traditional orbital evolution of the Clouds [see, e.g., @gardiner96; @vandermarel02].
Could Leo I have been a Magellanic Cloud satellite ejected from the Galaxy a few Gyrs ago? Since most satellites that are ejected do so during [*first*]{} pericentric approach, this would imply that the Clouds were accreted only recently into the Galaxy, so that they reached their first pericentric approach just a few Gyr ago. This is certainly in the spirit of the re-analysis of the orbit of the Clouds presented recently by @besla07 and based on new proper motion measurements recently reported by @kallivayalil06. In this regard, the orbit of the Clouds might resemble the orbit of the companion of the “escaping” satellite located next to Leo I in Figure \[fig:rvMW\]. The companion is fairly massive and, despite a turnaround radius of almost $\sim 600$ kpc and a rather late accretion time ($t_{\rm acc}=10.5$ Gyr, see Figure \[fig:orbesc\]), it is left after pericenter on a tightly bound, short-period orbit resembling that of the Clouds today [@gardiner96; @vandermarel02]. To compound the resemblance, this satellite has, at accretion time, a total luminosity of order $\sim 10\%$ of that of the primary, again on a par with the Clouds.
We also note that an ejected satellite is likely to have picked up its extra orbital energy through a rather close pericentric passage and that this may have led to substantial tidal damage. This, indeed, has been argued recently by @sohn06 on the basis of asymmetries in the spatial and velocity distribution of Leo I giants (but see @koch07 for a radically different interpretation).
On a final note, one should not forget to mention another (less exciting!) explanation for Leo I: that our estimate of $V_{\rm
vir}^{\rm MW}$ is a substantial underestimate of the true virial velocity of the Milky Way. The arrows in Figure \[fig:rvMW\] indicate how the position of the MW satellites in this plane would change if our estimate of $V_{\rm vir}^{\rm MW}$ is varied by $\pm
20\%$. Increasing $V_{\rm vir}^{\rm MW}$ by $\sim 20\%$ or more would make Leo I’s kinematics less extreme, and closer to what would be expected for a high-speed satellite completing its first orbit. This rather more prosaic scenario certainly cannot be discounted on the basis of available data (see, e.g., Zaritsky et al 1989, Kochanek 1996, Wilkinson & Evans 1999)
![As Figure \[fig:rvMW\] but for [*line-of-sight*]{} velocities and [*projected*]{} distances. Three random orthogonal projections have been chosen for each simulated satellite system. Signs for $V_{\rm los}$ have been chosen so that it is positive if the satellite is receding away from the primary in projection, negative otherwise. The “escaping” satellite from Figure \[fig:rvMW\] is shown by a starred symbol. Filled squares correspond to the M31 satellites taken from @mcconnachieandirwin06, plus And XIV [@majewski07] and And XII (Chapman et al 2007, submitted) and assuming that $V_{\rm vir}^{\rm M31}\sim 138$ km/s and $r_{\rm
vir}^{\rm M31}=300$ kpc. Arrows indicate how the positions of M31 satellites in this plot would be altered if our estimate of $V_{\rm
vir}^{\rm M31}$ (and, consequently, $r_{\rm vir}^{\rm M31}$) is allowed to vary by $20\%$.[]{data-label="fig:rvM31"}](figs/letfig5.ps){width="84mm"}
![Orbital paths for both pairs of satellites shown in Figure \[fig:orbesc\]. The upper (bottom) panel shows the pair that accretes later (earlier) in that figure and shows the orbits in the rest frame of the primary. The coordinate system is chosen so that the angular momentum of the primary is aligned with the $z$ axis. A solid curve tracks the path of the heavier satellite; a dashed line follows the satellite that is propelled into a highly energetic orbit after the first pericentric passage.[]{data-label="fig:orbxyesc"}](figs/letfig6.ps){width="84mm"}
M31 satellites {#ssec:M31}
--------------
A similar analysis may be applied to M31 by using the projected distances and line-of-sight velocities of simulated satellites, shown in Figure \[fig:rvM31\]. Three orthogonal projections of the simulated satellites are overlapped in this figure, with symbols as defined in Figure \[fig:rvMW\]. Following the same approach as in § \[ssec:MW\], we use the fact that the line-of-sight satellite velocity dispersion is $\sigma_{\rm los} \sim 0.8 (\pm 0.2) \, V_{\rm
vir}$ in our simulations to guide our choice of virial velocity and radius for M31; $V_{\rm vir}^{\rm M31}=138 \pm 35$ km/s and $r_{\rm
vir}^{\rm M31}=300 \pm 76$ kpc. (We obtain $\sigma_{\rm los}=111$ km/s for all $17$ satellites within $300$ kpc of M31.) This compares favourably with the $V_{\rm vir}^{\rm M31}\sim 120$ km/s estimate recently obtained by @seigar06 under rather different assumptions.
With this choice, we show the $19$ satellites around M31 compiled by McConnachie & Irwin (2006), plus two recently-discovered satellites for which positions and radial velocities have become available (And XII, Chapman et al 2007, and And XIV, Majewski et al. 2007). As in Figure \[fig:rvMW\], arrows indicate how the position of M31 satellites would change in this figure if $V_{\rm vir}^{\rm M31}$ were allowed to vary by $\pm
20\%$. We notice that the exclusion of And XII and And XIV (the highest velocity satellites within 300 kpc from Andromeda) from the $V_{\rm vir}^{\rm M31}$ estimate gives $\sim 100$ km/s, consistent with the $V_{\rm vir}^{\rm M31}=138 \pm 35$ km/s previously found considering all satellites. Projected distances are as if viewed from infinity along the direction joining the Milky Way with M31, and the [*sign*]{} of the line-of-sight velocity in Figure \[fig:rvM31\] is chosen to be positive if the satellite is receding from the primary (in projection) and negative otherwise.
There are a few possible outliers in the distribution of M31 satellite velocities: And XIV (Majewski et al 2007), the Pegasus dwarf irregular (UGC 12613, @gallagher98), And XII (Chapman et al 2007), and UGCA 092 (labelled U092 in Figure \[fig:rvM31\], @mcconnachieandirwin06). And XIV and PegDIG seem likely candidates for the three-body “ejection” mechanism discussed above: they have large velocities for their position, and, most importantly, they are receding from M31; a [*requirement*]{} for an escaping satellite. Note, for example, that And XIV lies very close to the “escaping” satellite (dot-centered symbol in Figure \[fig:rvM31\]) paired to Leo I in the previous subsection. Escapers should move radially away from the primary, and they would be much harder to detect in projection as extreme velocity objects, unless they are moving preferentially along the line of sight. It is difficult to make this statement more conclusive without further knowledge of the orbital paths of these satellites. Here, we just note, in agreement with Majewski et al (2007), that whether And XIV and PegDIG are dynamical “rogues” depends not only on the (unknown) transverse velocity of these galaxies, but also on what is assumed for M31’s virial velocity. With our assumed $V_{\rm vir}^{\rm
M31}=138$ km/s, neither And XIV nor PegDIG look completely out of place in Figure \[fig:rvM31\]; had we assumed the lower value of $120$ km/s advocated by Seigar et al (2006) And XIV would be almost on the NFW escape velocity curve, and would certainly be a true outlier.
High-velocity satellites [*approaching*]{} M31 in projection are unlikely to be escapers, but rather satellites on their first approach. This interpretation is probably the most appropriate for And XII and UGCA 092. As discussed by Chapman et al (2007), And XII is almost certainly [*farther*]{} than M31 but is approaching us at much higher speed ($\sim 281$ km/s faster) than M31. This implies that And XII is actually getting closer in projection to M31 (hence the negative sign assigned to its $V_{\rm los}$ in Figure \[fig:rvM31\]), making the interpretation of this satellite as an escaping system rather unlikely.
Note, again, that although And XII (and UGCA 092) are just outside the loci delineated by simulated satellites in Figure \[fig:rvM31\], revising our assumption for $V_{\rm vir}^{\rm
M31}$ upward by $20\%$ or more would render the velocity of this satellite rather less extreme, and would make it consistent with that of a satellite on its first approach to M31. As was the case for Leo I, this more prosaic interpretation of the data is certainly consistent with available data.
Summary and Conclusions {#sec:conc}
=======================
We examine the orbits of satellite galaxies in a series of N-body/gasdynamical simulations of the formation of $L_*$ galaxies in a $\Lambda$CDM universe. Most satellites follow orbits roughly in accord with the expectations of secondary infall-motivated models. Satellites initially follow the universal expansion before being decelerated by the gravitational pull of the main galaxy, turning around and accreting onto it. Their apocentric radii decrease steadily afterwards as a result of the mixing associated with the virialization process as well as of dynamical friction. At $z=0$ most satellites associated with the primary are found within its virial radius, and show little spatial or kinematic bias relative to the dark matter component (see also Sales et al 2007).
A number of satellites, however, are on rather unorthodox orbits, with present apocentric radii exceeding their turnaround radii, at times by a large factor. The apocenters of these satellites are typically beyond the virial radius of the primary; one satellite is formally “unbound”, whereas another is on an extreme orbit and is found today more than $2.5\, r_{\rm vir}$ away, or $\gsim \, 600$ kpc when scaling this result to the Milky Way.
These satellites owe their extreme orbits to three-body interactions during first approach: they are typically the lighter member of a pair of satellites that is disrupted during their first encounter with the primary. This process has affected a significant fraction of satellites: a full one-third of the simulated satellite population identified at $z=0$ have apocentric radii exceeding their turnaround radii. These satellites make up the majority ($63\%$) of systems on orbits that venture outside the virial radius.
We speculate that some of the kinematical outliers in the Local Group may have been affected by such a process. In particular, Leo I might have been ejected $2$-$3$ Gyr ago, perhaps as a result of interactions with the Milky Way and the Magellanic Clouds. Other satellites on extreme orbits in the Local Group may have originated from such a mechanism. Cetus [@lewis07] and Tucana [@oosterloo96] —two dwarf spheroidals in the periphery of the Local Group—may owe their odd location (most dSphs are found much closer to either M31 or the Galaxy) to such an ejection mechanism.
If this is correct, the most obvious culprits for such ejection events are likely to be the largest satellites in the Local Group (M33 and the LMC/SMC), implying that their possible role in shaping the kinematics of the Local Group satellite population should be recognized and properly assessed. In this regard, the presence of kinematical oddities in the population of M31 satellites, such as the fact that the majority of them lie on “one side” of M31 and seem to be receding away from it (McConnachie & Irwin 2006), suggests the possibility that at least some of the satellites normally associated with M31 might have actually been brought into the Local Group fairly recently by M33. Note, for example, that two of the dynamical outliers singled out in our discussion above (And XII and And XIV) are close to each other in projection; have rather similar line-of-sight velocities (in the heliocentric frame And XII is approaching us at $556$ km/s, And XIV at $478$ km/s); and belong to a small subsystem of satellites located fairly close to M33.
The same mechanism might explain why the spatial distribution of at least some satellites, both around M31 and the Milky Way, seem to align themselves on a “planar” configuration [@majewski94; @libeskind05; @kochandgrebel06], as this may just reflect the orbital accretion plane of a multiple system of satellites accreted simultaneously in the recent past [@kroupa05; @metz07].
From the point of view of hierarchical galaxy formation models, it would be rather unlikely for a galaxy as bright as M33 to form in isolation and to accrete as a single entity onto M31. Therefore, the task of finding out [*which*]{} satellites (rather than [*whether*]{}) have been contributed by the lesser members of the Local Group, as well as what dynamical consequences this may entail, should be undertaken seriously, especially now, as new surveys begin to bridge our incomplete knowledge of the faint satellites orbiting our own backyard.
Acknowledgements {#acknowledgements .unnumbered}
================
LVS and MGA are grateful for the hospitality of the Max-Planck Institute for Astrophysics in Garching, Germany, where much of the work reported here was carried out. LVS acknowledges financial support from the Exchange of Astronomers Programme of the IAU and from the ALFA-LENAC network. JFN acknowledges support from Canada’s NSERC, from the Leverhulme Trust, and from the Alexander von Humboldt Foundation, as well as useful discussions with Simon White, Alan McConnachie, and Jorge Peñarrubia. MS acknowledges support by the German Science Foundation (DFG) under Grant STE 710/4-1. We thank Scott Chapman and collaborators for sharing their results on Andromeda XII in advance of publication. We also acknowledge a very useful report from an anonymous referee that helped to improve the first version.
[^1]: Fellow of the Canadian Institute for Advanced Research.
[^2]: We define the [*virial*]{} radius, $r_{\rm vir}$, of a system as the radius of a sphere of mean density $\simeq \Delta_{\rm vir}(z)$ times the critical density for closure. This implicitly defines the virial mass, $M_{\rm vir}$, as that enclosed within $r_{\rm vir}$, and the virial velocity, $V_{\rm vir}$, as the circular velocity measured at $r_{\rm vir}$. We compute $\Delta_{\rm vir}(z)$ using $\Delta_{\rm vir}(z)=18\pi^2+82f(z)-39f(z)^2$, where $f(z)=[\Omega_0(1+z)^3/(\Omega_0(1+z)^3+\Omega_\Lambda)]-1$ and $\Omega_0=\Omega_{\rm CDM}+\Omega_{\rm bar}$ [@bryanandnorman98]; $\Delta_{\rm vir}$ is $\sim 100$ at $z=0$.
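    A quick numerical check of this fitting formula, assuming the flat cosmology adopted for the simulations ($\Omega_0=0.3$, $\Omega_\Lambda=0.7$):

```python
# Evaluate the Bryan & Norman (1998) fit for the virial overdensity quoted above.
import numpy as np

def delta_vir(z, omega0=0.3, omega_lambda=0.7):
    """Delta_vir(z) = 18 pi^2 + 82 f - 39 f^2, with f = Omega(z) - 1 (flat cosmology)."""
    f = omega0 * (1 + z)**3 / (omega0 * (1 + z)**3 + omega_lambda) - 1.0
    return 18 * np.pi**2 + 82 * f - 39 * f**2

print(round(delta_vir(0.0)))  # ~101, i.e. the "~100 at z=0" used in the text
```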
[^3]: The notion of binding energy and escape velocity is ill-defined in cosmology; note, for example, that the [*whole universe*]{} may be considered formally bound to any positive overdensity in an otherwise unperturbed Einstein-de Sitter universe. We use here the nominal escape velocity of an NFW model [@nfw96; @nfw97] to guide the interpretation. This profile fits reasonably well the mass distribution of the primaries inside the virial radius, and has a finite escape velocity despite its infinite mass. Certainly satellites with velocities exceeding the NFW escape velocity are likely to move far enough from the primary to be considered true [*escapers*]{}.
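    The nominal NFW escape velocity invoked here (and used for the bounding curves in Figure \[fig:rvMW\]) follows from the NFW potential $\Phi(r) = -4\pi G \rho_s r_s^3 \ln(1+r/r_s)/r$; a minimal sketch, with the normalization written in terms of $V_{\rm vir}$ and the concentration $c$, is:

```python
# Escape velocity of an NFW halo, normalized to the virial velocity,
# as used for the bounding curves in the radius-velocity figure.
import numpy as np

def v_esc_over_vvir(x, c):
    """v_esc/V_vir at radius x = r/r_vir for an NFW halo of concentration c."""
    mu = np.log(1.0 + c) - c / (1.0 + c)          # NFW mass normalization
    return np.sqrt(2.0 * np.log(1.0 + c * x) / (x * mu))

print(v_esc_over_vvir(1.0, 10.0))   # ~1.8: a satellite at r_vir needs ~1.8 V_vir to escape
```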
---
abstract: 'Compactification of type IIB theory on a torus, in the presence of fluxes, is considered. The reduced effective action is expressed in a manifestly S-duality invariant form. Cosmological solutions of the model are discussed in several cases in the Pre-Big Bang scenario.'
---
KEK-TH-1243\
[**Compactification of IIB Theory with Fluxes and Axion-Dilaton String Cosmology**]{}
[ Eiji Konishi$^{1}$[^1] and Jnanadeva Maharana$^{2,3}$[^2]]{}
[*$^1$Faculty of Science, Kyoto University, Kyoto 606-8502, Japan\
$^{2}$National Laboratory for High Energy Physics (KEK), Tsukuba, Ibaraki 305, Japan\
$^{3}$Institute of Physics, Bhubaneswar-751005, India*]{}
Introduction
============
It is recognized that string theory offers the prospect of unifying the fundamental forces of Nature [@book]. The developments in string theory have shed light on our understanding of the physics of black holes and have addressed important problems in cosmology. Furthermore, there has been a lot of progress in establishing connections between string theory and the standard model of particle physics, which comprehensively explains a vast amount of experimental data. String theory is endowed with a rich symmetry structure. Notable among its symmetries are dualities [@r1; @r2; @r3]. The strong-weak duality, S-duality, relates strong and weak coupling phases. In some cases these two phases of the same theory may be related, and in some other cases the strong and weak coupling regimes of two different theories are S-dual to each other. Type IIB string theory is an example of the former whereas, to recall a familiar example, the heterotic string with $SO(32)$ gauge group is related by S-duality to type I theory with the same gauge group in $D=10$. The T-duality, which we mention in passing, is tested perturbatively. A simple illustration is to consider compactification of a spatial coordinate of a bosonic closed string on $S^1$ of radius $R$. The perturbative spectrum of this theory matches the one where the corresponding spatial coordinate is compactified on a circle of reciprocal radius. It is worthwhile to note that the web of dualities has provided us with powerful tools to understand string dynamics in diverse dimensions.
The purpose of this article is to envisage toroidal compactification of type IIB string theory in the presence of constant 3-form fluxes along the compact directions. Our goal is to investigate the consequences of S-duality for the reduced string effective action and to examine possible mechanisms for the breaking of S-duality. Moreover, we elaborate and sharpen a few points which were not addressed from the present perspective in earlier works in the context of a four-dimensional effective action where the symmetry is enhanced.
The vanishingly small value of the cosmological constant, $\Lambda_c$, might be understood by invoking the naturalness argument as has been advocated by us [@m1; @m2; @jim; @k1]. Although there are several proposals to explain the smallness of the cosmological constant, this issue has not been completely resolved to everyone’s satisfaction. It is to be noted that at the present juncture S-duality does not manifest as an exact symmetry of Nature. In other words, one of the consequences of S-duality would be to discover magnetically charged particles and dyons. In this respect, we may mention that we have no conclusive experimental evidence so far in favor of S-duality as a symmetry in the domain of low energy. Moreover, we have not found evidence for a massless dilaton and axion from ongoing experiments. The axion one often discusses in the string theory framework is difficult to identify with the axion of standard model and GUT phenomenology that one is looking for in various experiments [@ed; @kim]. Furthermore, it may be argued from cosmological considerations that the dilaton should acquire its vacuum expectation value rather early in the evolution of the Universe (see section 2). Thus the dilaton and axion are expected to acquire mass, and S-duality may be a broken symmetry of Nature. As we shall see, for the model at hand, the reduced action does not admit a cosmological constant term when S-duality is preserved. If S-duality is spontaneously broken, a nonzero (positive) cosmological constant appears in our model. It is of interest to explore how S-duality is broken. Recently, spontaneous breaking of S-duality has been proposed by one of us by invoking the idea of gauging the $SL(2,{\bf{R}})$ group [@k2]. Our proposal is that, if we invoke the naturalness argument due to ’t Hooft, the cosmological constant should remain small in the theory we are dealing with. We are aware that the scenario we have envisaged is not close to the Universe described by the standard model. However, we have provided a concrete example realizing the arguments of naturalness to argue qualitatively why $\Lambda_c$ could be small. We are also aware that there are subtle issues related to non-compact symmetries and their breaking. We have not addressed those problems here.
In recent years, compactification of string theories with fluxes has attracted considerable attention [@f1; @f2; @f3; @frev1; @frev2; @frev3]. In the context of the brane world scenario such compactifications have assumed special significance. There is a lot of optimism that this approach might eventually allow us to build models which realize the well tested standard model of particle physics, although this objective has not yet been achieved completely. There has been a lot of progress in the string landscape approach to study the rich vacuum structure of string theory and to construct models which are close to the phenomenological descriptions of particle physics. It has been shown that the landscape scenario admits a de Sitter phase with appropriate background configurations and a novel mechanism to break supersymmetry [@kklt]. The toroidal compactification and the symmetries of the reduced effective action have been studied extensively [@r1; @r2; @r3; @ss; @ms; @hs]. Recently, compactification of the heterotic string effective action on a $d$-dimensional torus with constant fluxes along the compact directions has been investigated in detail [@km1]. It was shown that the reduced action (with constant fluxes) can be cast in $O(d,d)$ invariant form. Moreover, the moduli acquire potentials which would otherwise not appear in the standard dimensional reduction of the 10-dimensional action of the heterotic theory.
As we shall show in the sequel, the toroidal compactification of type IIB theory in the presence of fluxes associated with NS-NS and R-R backgrounds can be expressed in an S-duality invariant form. The type IIB action can be compactified on a torus to obtain a manifestly S-duality invariant reduced effective action following the standard procedure [@jm1]. In the present context, due to the coupling of the axion-dilaton ${\cal{M}}$-matrix to the fluxes (see later), a potential term appears. We note that when we toroidally compactify the action and allow constant fluxes along the compact directions, the resulting reduced action is no longer supersymmetric. The supersymmetry can be restored by adding appropriate sources [@sources]. However, our purpose is to investigate the S-duality attributes of the reduced action. Moreover, as we shall elaborate in the following section, we argue that there might be a symmetry consideration that helps to understand the smallness of the cosmological constant. Furthermore, when we focus our attention on $D=4$, we can dualize the space-time dependent 3-form field strength and express the effective action in yet another S-duality invariant form, as has been noted earlier [@clw].
We study the cosmological solutions of the effective action [@cr; @gvr]. Since in the present scenario the axion-dilaton potential emerges due to the coupling of the doublet to the fluxes, it provides an opportunity to reexamine some of the well known issues in string cosmology. Notice that the fluxes dictate the form of the potential. Thus, unlike in the past, we have a framework to introduce a potential in the effective action whose structure follows from symmetry considerations. We study the graceful exit problem that arises in the context of Pre-Big Bang cosmology [@gvr; @gv1]. In a simple case, we truncate the action by setting some of the scalars to zero (although we retain the axion and dilaton); however, the axion-dilaton potential is kept intact while addressing the graceful exit issue [@bv]. We find that the no-go theorem of Kaloper, Madden and Olive is still valid for our model [@kmo]. We consider another form of truncated action to examine whether the Pre-Big Bang solution accompanied by scale factor duality [@v1] is admissible. Indeed, for this case we obtain a solution which satisfies scale factor duality. However, the dilaton blows up when the scale factor approaches a certain value [@gvr]. Therefore, there is an epoch where the tree level string effective action is not trustworthy and the loop corrections have to be accounted for.
The paper is organized as follows. The next section is devoted to the dimensional reduction of the ten dimensional action when fluxes are present along the compact directions. It is shown that, for a simple compactification scheme, when $D=4$, the action can be expressed in manifestly $SL(3,{\bf{R}})$ invariant form where the moduli parameterize the coset $\frac{SL(3,{\bf{R}})}{SO(3)}$; although this result has been known for a while [@clw; @jm3], we discuss this aspect from another point of view. Section III is devoted to solving the equations of motion. We consider different (truncated) versions of the 4-dimensional action to obtain cosmological solutions. It is worthwhile to point out that during the initial developments of string cosmology the potentials were introduced by hand to explore various cosmological solutions. One of our motivations to study string cosmology starting from a nonsupersymmetric model is that supersymmetry is not expected to be preserved in the early Universe. The fourth section is devoted to a discussion of the no-go theorem. A short appendix contains a detailed calculation of the axion-dilaton and scale factor evolution in the context of the graceful exit problem.
Effective Action
================
The massless excitations of type IIB string theory consist of the dilaton, $\hat{\phi}$, the axion, $\hat{\chi}$, the graviton, $\hat{g}_{MN}$, two 2-form potentials, $\hat{B}_{MN}^{(i)}$, ($i=1,2$) and a 4-form potential $\hat{C}_{MNPQ}$ with self-dual field strength. Note, however, that a covariant 10-dimensional effective action for type IIB theory cannot be written down when we want to incorporate the self-dual 5-form field strength. In what follows, we present the 10-dimensional action in the string frame metric and do not include the contribution of the field strength of $\hat{C}_{MNPQ}$. This omission does not affect our study of the symmetry properties of the action. As a notational convention we denote the fields in 10-dimensions with a hat. The action is $$\begin{aligned}
&&
\hat{S}=\frac{1}{2}\int d^{10}x\sqrt{-\hat{G}}\biggl\{e^{-2\hat{\phi}}
\biggl(\hat{{{R}}}-\frac{1}{12}\hat{{{H}}}_{MNP}^{(1)}\hat{{{H}}}^{(1)\ MNP}
+4(\partial\hat{\phi})^2\biggr)
-\frac{1}{2}(\partial \hat{\chi})^2\nonumber\\&&-\frac{1}{12}\hat{\chi}^2
\hat{{{H}}}^{(1)}_{MNP}\hat{{{H}}}^{(1)\ MNP}
-\frac{1}{6}\hat{\chi}\hat{{{H}}}_{MNP}^{(1)}\hat{{{H}}}^{(2)\ MNP}-
\frac{1}{12}\hat{{{H}}}^{(2)}_{MNP}\hat{{{H}}}^{(2)\ MNP}\biggr\}\label{eq:eq1}\;.\end{aligned}$$ It is more convenient to consider string effective action which is expressed in terms of Einstein frame metric $\hat{g}_{MN}$ while discussing S-duality transformations and the invariance properties of the action. The two metrics are related by $\hat{g}_{MN}=e^{-\frac{1}{2}\hat{\phi}}{\hat G}_{MN}$, and the corresponding action is [@hull; @jhs] $$\begin{aligned}
&&\hat{S}_E=\frac{1}{2\kappa^2}\int d^{10}x\sqrt{-\hat{g}}
\biggl\{\hat{{{R}}}_{\hat{g}}+
\frac{1}{4}{\mathrm{Tr}}\bigl(\partial_N\hat{{\cal{M}}}\partial^N\hat{{\cal{M}}}^{-1}\bigr)-
\frac{1}{12}\hat{{\bf{H}}}^T_{MNP}\hat{{\cal{M}}}\hat{{\bf{H}}}^{MNP}
\biggr\}\label{eq:action}\end{aligned}$$ where the axion-dilaton moduli matrix $\hat{{\cal{M}}}$ and the H-fields are defined as follows $$\hat{{\cal{M}}}=\left(\begin{array}{cc}\hat{\chi}^2
e^{\hat{\phi}}+e^{-\hat{\phi}}&\hat{\chi}e^{\hat{\phi}}\\\hat{\chi}
e^{\hat{\phi}}&e^{\hat{\phi}}\end{array}\right)\;,\ \ \ \hat{{\bf{H}}}_{MNP}
=\left(\begin{array}{c}\hat{{{H}}}^{(1)}\\\hat{{{H}}}^{(2)}\end{array}\right)_{MNP}\;.$$ The action Eq.(\[eq:action\]) is invariant under the transformations $$\hat{{\cal{M}}}\to \Lambda\hat{{\cal{M}}}\Lambda^T\;,\ \ \
\hat{{{H}}}\to(\Lambda^T)^{-1}\hat{{{H}}}\;,\ \ \ \hat{g}_{MN}\to\hat{g}_{MN}\;,$$ where $\Lambda\in SL(2,{\bf{R}})$ and $\Sigma$ is the metric of $SL(2,{\bf{R}})$, $$\Sigma=\left(\begin{array}{cc}0&i\\-i&0\end{array}\right)\;,$$ $$\Lambda \Sigma\Lambda^T=\Sigma\;,\ \ \
\Sigma\Lambda\Sigma=\Lambda^{-1}\;,$$ $$\hat{{\cal{M}}}\Sigma\hat{{\cal{M}}}=\Sigma\;,\ \ \
\Sigma\hat{{\cal{M}}}\Sigma=\hat{{\cal{M}}}^{-1}\;.$$ In order to facilitate toroidal compactification we choose the following upper triangular form for the vielbein $$\hat{e}_M^A=
\left(\begin{array}{cc}e_{\mu}^r&{\cal{A}}_\mu^\beta E^a_\beta\\0&E_\alpha^a
\end{array}\right)\;.$$ Here $M,N,\cdots$ denote the global indices and $A,B,\cdots$ the local Lorentz indices. The 10-dimensional metric is $\hat{g}_{MN}=\hat{e}_M^A\hat{e}_N^B\eta_{AB}$, where $\eta_{AB}$ is the 10-dimensional Lorentz metric. Here $\mu,\nu=0,1,2,\cdots,D-1$ and $\alpha,\beta=D,\cdots,9$. Note that $r,s,\cdots$ denote the $D$-dimensional local indices whereas $a,b,\cdots$ are the remaining internal local indices, taking values $D,\cdots,9$.
Thus $$g_{\mu\nu}=e_\mu^r e_\nu^s \eta_{rs}\;,\ \ \
{\cal{G}}_{\alpha\beta}=E_\alpha^aE_\beta^b\delta_{ab}\;.$$ Here $\eta_{rs}$ is the flat Lorentzian metric. With the above form of $\hat{e}_M^A$ we note that $$\sqrt{-\hat{g}}=\sqrt{-g}\sqrt{{\cal{G}}}\;.\nonumber$$ Let us denote the space-time coordinates as $\{x^\mu,\mu=0,1,\cdots,D-1\}$ and the compact coordinates of $T^d$ as $\{Y^\alpha,\alpha=D,\cdots,9\}$.
If we assume that the backgrounds do not depend on the compact coordinates $\{Y^\alpha\}$, then the 10-dimensional Einstein frame effective action reduces to [@ms; @hs; @jm1; @ferr] $$\begin{aligned}
&&S_E=\frac{1}{2}\int d^Dx \sqrt{-g}\sqrt{{\cal{G}}}
\biggl\{R+\frac{1}{4}\bigl[\partial_\mu {\cal{G}}_{\alpha\beta}
\partial^\mu{\cal{G}}^{\alpha\beta}+\partial_\mu {\rm{ln}}{\cal{G}}
\partial^\mu {\rm{ln}}{\cal{G}}
-g^{\mu\lambda}g^{\nu\rho}{\cal{G}}_{\alpha\beta}
{\cal{F}}_{\mu\nu}^\alpha{\cal{F}}_{\lambda\rho}^\beta\bigr]\nonumber\\&&-
\frac{1}{4}{\cal{G}}^{\alpha\beta}{\cal{G}}^{\gamma\delta}\partial_\mu
B_{\alpha\gamma}^{(i)}{\cal{M}}_{ij}\partial^\mu B_{\beta\delta}^{(j)}
-\frac{1}{4}{\cal{G}}^{\alpha\beta}g^{\mu\lambda}g^{\nu\rho}
H_{\mu\nu\alpha}^{(i)}{\cal{M}}_{ij}H_{\lambda \rho \beta}^{(j)}-
\frac{1}{12}H_{\mu\nu\rho}^{(i)}{\cal{M}}_{ij}H^{(j)\ \mu\nu\rho}\nonumber\\
&&+\frac{1}{4}{\rm{Tr}}(\partial_\mu{\cal{M}}\Sigma \partial^\mu{\cal{M}}
\Sigma) \biggr\}\label{eq:action7}\end{aligned}$$ by adopting the standard procedure for toroidal compactification of string effective action.
Note that the gauge fields ${\cal{A}}_\mu^\alpha$ appear due to the choice of vielbein, adopted with the dimensional reduction in mind. When the dimensional reduction is carried out, these Abelian gauge fields are associated with the $d$ isometries. Moreover, the gauge fields $A_{\mu\alpha}^{(i)},i=1,2,\alpha=D,\cdots,9$ come from the dimensional reduction of $\hat{B}_{MN}^{(i)}$, as is well known. Besides the scalars ${\cal{G}}_{\alpha\beta}$, an additional set of scalars $B_{\alpha\beta}^{(i)}$ also appears from the reduction of $\hat{B}_{MN}^{(i)}$.
The above action is expressed in the Einstein frame, ${\cal{G}}$ being the determinant of ${\cal{G}}_{\alpha\beta}$. If we demand $SL(2,{\bf{R}})$ invariance of the above action, then the backgrounds are required to satisfy the following transformation properties[^3]:
$${\cal{M}}\to\Lambda{\cal{M}}\Lambda^T\;,\ \ \
H_{\mu\nu\rho}^{(i)}\to(\Lambda^T)^{-1}_{ij}H^{(j)}_{\mu\nu\rho}\;,$$
$$A_{\mu\alpha}^{(i)}\to(\Lambda^T)^{-1}_{ij}A_{\mu\alpha}^{(j)}\;,\ \ \
B_{\alpha\beta}^{(i)}\to(\Lambda^T)^{-1}_{ij}B_{\alpha\beta}^{(j)}\;,$$
and $$g_{\mu\nu}\to g_{\mu\nu}\;,\ \ \ {\cal{A}}_\mu^\alpha\to
{\cal{A}}_\mu^\alpha\;, \ \ \ {\cal{G}}_{\alpha\beta}\to{\cal{G}}_{\alpha\beta}\;.$$
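The statement of $SL(2,{\bf{R}})$ invariance above is equivalent to the familiar Möbius action on the axion-dilaton combination $\lambda=\chi+ie^{-\phi}$, namely $\lambda\to(a\lambda+b)/(c\lambda+d)$ with $ad-bc=1$. A minimal symbolic spot-check of this equivalence (our own sketch, assuming only the explicit form of ${\cal{M}}$ given above; the sample $SL(2,{\bf{R}})$ element and variable names are ours) is:

```python
# Check that M -> Lambda M Lambda^T corresponds to the Moebius map on lambda = chi + i e^{-phi}.
import sympy as sp

def M_of(ch, em):
    # M-matrix written in terms of chi (= ch) and e^{-phi} (= em)
    return sp.Matrix([[ch**2/em + em, ch/em],
                      [ch/em,         1/em]])

a, b, c, d = 2, 3, 1, 2                             # sample SL(2,R) element, ad - bc = 1
chi0, em0 = sp.Rational(1, 3), sp.Rational(1, 5)    # sample values of chi and e^{-phi}

lam  = chi0 + sp.I*em0
lamp = sp.expand_complex((a*lam + b)/(c*lam + d))   # transformed lambda
chi1, em1 = sp.re(lamp), sp.im(lamp)                # transformed chi and e^{-phi}

L = sp.Matrix([[a, b], [c, d]])
# difference between the matrix transformation and the Moebius-transformed M-matrix:
print((L*M_of(chi0, em0)*L.T - M_of(chi1, em1)).applyfunc(sp.simplify))   # -> zero matrix
```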
We focus our attention on the $D=4$ effective action and truncate it by setting some backgrounds to zero. From now on we set ${\cal{A}}^\beta_\mu=0$. There are two 2-form fields $\hat{{{B}}}^{(i)}_{MN}$ coming from the NS-NS and R-R sectors. When we compactify to four dimensions, we keep only ${{B}}_{\mu\nu}^{(i)}$. The other field components ${{B}}_{\mu\alpha}^{(i)}=0$, and for the time being we also set ${{B}}^{(i)}_{\alpha\beta}=0$. The 4-dimensional action is $$\begin{aligned}
&&S_E^{(4)}=\frac{1}{2}\int d^4x\sqrt{-G}\sqrt{{\cal{G}}}\biggl[{{R}}+
\frac{1}{4}\bigl\{\partial_\mu{\cal{G}}_{\alpha\beta}\partial^\mu
{\cal{G}}^{\alpha\beta}
+\partial_\mu{{\mathrm{ln}}}{\cal{G}}\partial^\mu {\mathrm{ln}}{\cal{G}}\bigr\}
-\frac{1}{12}{{H}}^{(i)}_{\mu\nu\rho}{\cal{M}}^{ij}{{H}}^{(j)\ \mu\nu\rho}
\nonumber\\&&+\frac{1}{4}{\mathrm{Tr}}(\partial_\mu{\cal{M}}\Sigma\partial^\mu
{\cal{M}}\Sigma)\biggr]\;.\end{aligned}$$
According to the compactification procedure adopted above, the field strengths along the compact directions vanish. The case of constant nonzero fluxes will be considered later.
We choose a simple compactification scheme where only a single modulus, $y(x)$, appears and we adopt the following form of the metric $$ds_{10}^2=g_{\mu\nu}dx^\mu dx^\nu+e^{y(x)/\sqrt{3}}dY^\alpha
dY^\beta\delta_{\alpha\beta}\;.\label{eq:volume}$$
The resulting action is $$\begin{aligned}
S^{(4)}_E=\frac{1}{2}\int d^4x \sqrt{-g}e^{\sqrt{3}y}\biggl[{{R}}+
\frac{5}{2}\partial_\mu y\partial^\mu y -
\frac{1}{12}{{H}}_{\mu\nu\rho}^{(i)}{{\cal{M}}}_{ij}{{H}}^{(j)\mu\nu\rho}+
\frac{1}{4}{\mathrm{Tr}}(\partial_\mu{\cal{M}}\Sigma\partial^\mu
{\cal{M}}\Sigma)\biggr]\;.\label{eq:action2}\end{aligned}$$
By rescaling the space-time metric $g_{\mu\nu}$ we remove the overall factor of $e^{\sqrt{3}y}$ and bring the above action to the following form (we still denote the new space-time metric by $g_{\mu\nu}$): $$\begin{aligned}
S_E^{(4)}=\frac{1}{2}\int d^4x \sqrt{-g}\biggl[R-2(\nabla y)^2-
\frac{e^{2\sqrt{3}y}}{12}H_{\mu\nu\rho}^{(i)}{\cal{M}}_{ij}H^{(j)\mu\nu\rho}+
\frac{1}{4}{\rm{Tr}}(\partial_\mu {\cal{M}}\Sigma\partial^\mu{\cal{M}}\Sigma)
\biggr]\;.\label{eq:action5}\end{aligned}$$
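For the reader's convenience we spell out the rescaling used here (our own intermediate step, employing nothing beyond the standard four-dimensional Weyl transformation). Writing the old metric as $g_{\mu\nu}=e^{2\omega}\tilde{g}_{\mu\nu}$ with $\omega=-\frac{\sqrt{3}}{2}y$, one has $$\sqrt{-g}\,e^{\sqrt{3}y}R(g)=\sqrt{-\tilde{g}}\,\Bigl[\tilde{R}-6\tilde{\Box}\omega-6(\tilde{\partial}\omega)^2\Bigr]=\sqrt{-\tilde{g}}\,\Bigl[\tilde{R}-\frac{9}{2}(\tilde{\partial}y)^2\Bigr]+\mbox{total derivative}\;,$$ so that the $y$ kinetic term of (\[eq:action2\]) acquires the coefficient $\frac{5}{2}-\frac{9}{2}=-2$ appearing in (\[eq:action5\]), the three inverse metrics in $H^2$ produce the factor $e^{2\sqrt{3}y}$, and the ${\cal{M}}$ kinetic term remains free of $y$-dependent factors.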
Note that the 3-form field strengths $H_{\mu\nu\rho}^{(i)}$ can be dualized to trade for two pseudo scalars, $\sigma_i(x)$, $i=1,2$ in four dimensions. Moreover, the equations of motion for $H_{\mu\nu\rho}^{(i)}$ are conservation laws. Therefore, we expect that the five moduli $y(x)$, $\sigma_1(x)$, $\sigma_2(x)$, $\chi(x)$ and $\phi(x)$ will also reflect a symmetry. Indeed, they parameterize the coset $\frac{SL(3,{\bf{R}})}{SO(3)}$ [@clw]. Thus the action can be expressed in the following form $$S_E^{(4)}=\frac{1}{2}\int d^4x\sqrt{-g}\biggl[R+\frac{1}{4}{\rm{Tr}}
(\nabla U\nabla U^{-1})\biggr]\label{eq:action3}$$ using $SL(3,{\bf{R}})/SO(3)$ $U$-matrix $$U=e^{2\phi-\frac{2}{\sqrt{3}}y}
\left(\begin{array}{ccc}1&\chi&
\sigma_1-\chi\sigma_2\\\chi&\chi^2+e^{-2\phi}&\chi(\sigma_1-\chi\sigma_2)-
\sigma_2e^{-2\phi}\\\sigma_1-\chi\sigma_2&\chi(\sigma_1-\chi\sigma_2)-
\sigma_2e^{-2\phi}&(\sigma_1-\chi\sigma_2)^2+\sigma_2^2e^{-2\phi}+e^{-4\phi+2
\sqrt{3}y}\end{array}\right)$$ with $\det U=1$.
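As a quick consistency check of this parameterization (a minimal sketch; the script and its variable names are ours), one may verify $\det U=1$ symbolically:

```python
# Symbolic check that the SL(3,R)/SO(3) matrix U given above has unit determinant.
import sympy as sp

phi, chi, y, s1, s2 = sp.symbols('phi chi y sigma1 sigma2', real=True)
sq3 = sp.sqrt(3)

pref = sp.exp(2*phi - 2*y/sq3)     # overall factor in front of U
a = s1 - chi*s2                    # recurring combination sigma_1 - chi*sigma_2
U = pref*sp.Matrix([
    [1,   chi,                         a],
    [chi, chi**2 + sp.exp(-2*phi),     chi*a - s2*sp.exp(-2*phi)],
    [a,   chi*a - s2*sp.exp(-2*phi),   a**2 + s2**2*sp.exp(-2*phi) + sp.exp(-4*phi + 2*sq3*y)]])

print(sp.simplify(U.det()))        # -> simplifies to 1
```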
Note that the equation of motion for the $U$-matrix is a conservation law,
$$\partial_\mu \bigl(\sqrt{-g}U^{-1}\partial^\mu U\bigr)=0$$
representing five equations of motion, since there are only five fields parameterizing the $U$-matrix. When the action is expressed in terms of the scalars instead of the $U$-matrix, the corresponding equations of motion apparently give the impression that there are interaction potentials among the scalars.
Therefore, one is led to believe that there is an $SL(2,{\bf{R}})$ invariant potential, contradicting the result of [@jm1]. However, when suitable linear combinations of the equations of motion (derived from an action written in terms of the component fields, not in the form of the action (\[eq:action3\])) are taken, one indeed obtains five current conservation equations, which match the number of conservation laws derived from the $\frac{SL(3,{\bf{R}})}{SO(3)}$ symmetric action.
Thus far we have considered the scenario in which the backgrounds do not depend on the compact coordinates. However, it is worthwhile to note that the effective action depends on the field strengths. Therefore, if constant field strengths are added to the effective action they will not affect the equations of motion, modulo the appearance of a cosmological constant term in certain cases. The appearance of a cosmological constant term would be ruled out if additional symmetry constraints are imposed (for example, a positive cosmological constant is not admissible if supersymmetry is desired).
Now let us examine how the contribution of the fluxes appears in the effective action. They contribute as the field strengths, 3-forms, coming from the NS-NS and R-R sectors. Thus their additional contribution to equation (\[eq:action2\]) will be $$\begin{aligned}
-\frac{1}{12}\sqrt{-g}\sqrt{{\cal{G}}}{{H}}_{\alpha\beta\gamma}^T
{\cal{M}}{{H}}_{\alpha^\prime\beta^\prime\gamma^\prime}
{\cal{G}}^{\alpha\alpha^\prime}{\cal{G}}^{\beta\beta^\prime}
{\cal{G}}^{\gamma\gamma^\prime}\;.\end{aligned}$$
When we consider our simple compactification scheme, (after space-time metric has been rescaled to arrive at (\[eq:action5\])) this term is $$-\frac{e^{-2\sqrt{3}y}}{12}\sqrt{-g}H_{\alpha\beta\gamma}^{(i)}
{\cal{M}}_{ij}H^{(j)}_{\alpha^\prime \beta^\prime\gamma^\prime}
\delta^{\alpha\alpha^\prime}\delta^{\beta\beta^\prime}
\delta^{\gamma\gamma^\prime}\;.$$ With the appearance of this term, the four dimensional effective action, $S_4^{D}$ (when $H_{\mu\nu\lambda}^{(i)}$ have been dualized), takes the form $$\begin{aligned}
&&S_4^D=\frac{1}{2}\int d^4x \sqrt{-g}\biggl[R+\frac{1}{4}
{\rm{Tr}}(\partial_\mu{\cal{M}}^{-1}\partial^\mu{\cal{M}})-
2\partial_\mu y\partial^\mu y-\frac{e^{2\sqrt{3}y}}{2}\partial_\mu\sigma^T
{\cal{M}}\partial^\mu\sigma\nonumber\\&& -\frac{e^{-2\sqrt{3}y}}{12}
{\cal{H}}_{\alpha\beta\gamma}^T{\cal{M}}{\cal{H}}^{\alpha\beta\gamma}\biggr]
\label{eq:SSB}\end{aligned}$$ where $\sigma=\left(\begin{array}{c}\sigma_1\\\sigma_2\end{array}\right)$ transforms as an $SL(2,{\bf{R}})$ doublet, $\sigma\to(\Lambda^T)^{-1}\sigma$, and ${\cal{H}}_{\alpha\beta\gamma}=\left(\begin{array}{c}
{\cal{H}}_{\alpha\beta\gamma}^{(1)}\\
{\cal{H}}_{\alpha\beta\gamma}^{(2)}\end{array}\right)$ are the 3-form fluxes along compact directions.
The above action merits some discussion. We recall that in the absence of ${\cal{H}}_{\alpha\beta\gamma}$ the action (\[eq:action3\]) is expressed in manifestly $SL(3,{\bf{R}})$ invariant form. The presence of the constant flux breaks the $SL(3,{\bf{R}})$ symmetry. However, the above action is manifestly $SL(2,{\bf{R}})$ invariant. Moreover, notice the coupling of the ${\cal{M}}$-matrix to the fluxes. Thus we have a coupling of the $SL(2,{\bf{R}})$ multiplet to gravity and to a constant source term. It is well known that, due to the coupling to the fluxes, tadpoles make their appearance. Moreover, supersymmetry is not maintained in this scenario. As alluded to earlier, our goal is to explore various aspects of S-duality and we are aware that the supersymmetries are not preserved. We would like to dwell on some other aspects of the presence of fluxes in what follows.
The model under consideration is reminiscent of work on chiral dynamics; it is worthwhile to note the issues addressed in the context of the $\sigma$-model approach to pion physics. We might have a scenario where the $SL(2,{\bf{R}})$ symmetry is broken spontaneously. Alternatively, the S-duality could be broken explicitly if we assign specific configurations to the fluxes. Although the last term in the above action is $SL(2,{\bf{R}})$ invariant, once we assign a specific configuration to the fluxes the symmetry is explicitly broken, and one is still left with an axion-dilaton potential whose form depends on the choice of flux that breaks the symmetry.
We mention in passing that the dilaton is expected to settle down to its ground state value in the early epochs of the universe. It is well known that the dilaton has two roles. It belongs to the massless spectrum of string theory. More importantly, the vacuum expectation value of the dilaton determines the gauge couplings, the Yukawa couplings (hence the fermionic masses) and a number of other important parameters. To rephrase the arguments of Damour and Polyakov [@dp], the dilaton must acquire its ground state value (say $\phi_0$) much before the era of nucleosynthesis. Otherwise, the well tested results of the big bang model in that era would be affected, since the delicate nuclear reactions depend crucially on the masses of nucleons (and light nuclei) and other fermions, which in turn are determined by the fermionic Yukawa couplings; these can in principle be computed from the underlying string theory and are controlled by the vacuum expectation value of the dilaton.
Furthermore, at the present epoch of the universe there is no conclusive experimental evidence for a weak (as weak as gravity) long range repulsive universal interaction. Should the dilaton remain massless today, we would expect such a repulsive force. Moreover, there are limits requiring the dilaton mass to satisfy $m_{{\mbox{dil}}}\ge 10^{-3}$ MeV.
If we consider the scenario of spontaneous symmetry breaking, we shall get a term like the cosmological constant, as is obvious from eq.(\[eq:SSB\]). Suppose the ${\cal{M}}$-matrix assumes a nonzero vacuum expectation value. Then we may write ${\cal{M}}={\cal{M}}_0+\widetilde{{\cal{M}}}$, where ${\cal{M}}_0$ is the constant vacuum expectation value and $\widetilde{{\cal{M}}}$ is the fluctuation, $<\widetilde{{\cal{M}}}>_0=0$.
Then $$-\frac{1}{12}\sqrt{-g}{\cal{H}}_{\alpha\beta\gamma}^T{\cal{M}}_0
{\cal{H}}_{\alpha^\prime\beta^\prime\gamma^\prime}
{\delta}^{\alpha\alpha^\prime}{\delta}^{\beta\beta^\prime}{\delta}^
{\gamma\gamma^\prime}$$ is the cosmological constant term when the ten dimensional action is compactified on a torus of constant moduli, i.e. the radius of compactification is chosen to be a constant in eq. (\[eq:volume\]) to start with (see discussion before eq.(\[eq:action4\])).
Equations of Motion
===================
In this section we intend to present the equations of motion associated with the actions considered in the previous section and look for solutions. We shall envisage a cosmological scenario. It will be shown that we can obtain exact cosmological solutions for the Friedmann-Robertson-Walker (FRW) metric in certain cases. We note that some of the interesting aspects of type IIB string cosmology have been studied by several authors, and detailed references can be found in the review article of Copeland, Lidsey and Wands [@clw]. However, as mentioned above, in some of those investigations either the axion-dilaton potential was not incorporated or the potentials were introduced on an ad hoc basis. Our approach differs from earlier works in this sense. In what follows we shall consider the $k=0$ FRW metric, i.e. the spatially flat metric.
Let us first consider the equations of motion corresponding to the action (\[eq:action3\]). The matter field equation is that of the $SL(3,{\bf{R}})$ nonlinear $\sigma$-model coupled to gravity, $$\partial_\mu\bigl(\sqrt{-g}U^{-1}\partial^\mu U\bigr)=0\;.$$ The Einstein equation is $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=-\frac{1}{8}\sqrt{-g}\partial_\mu U^{-1}
\partial_\nu U\;.$$
In the cosmological scenario the spatially flat FRW metric is $$ds^2=-dt^2+a^2(t)\bigl(dr^2+r^2d\Omega^2\bigr)\;.$$ The equations of motion (using the 0-0 component of the Einstein equation and the matter equation) are $$16\dot{h}+24h^2-{\rm{Tr}}(\dot{U^{-1}}\dot{U})=0\;,$$ with $h=\dot{a}/a$ the Hubble parameter, and $$\partial_t(a^3 U^{-1}\dot{U})=0\;.$$ Thus $$a^3U^{-1}\dot{U}=A\;,$$ where $A$ is a constant $3\times 3$ matrix.
It follows from the Hamiltonian constraint that $$6h^2+\frac{1}{4}{\rm{Tr}}(\dot{U^{-1}}\dot{U})=0\;.$$
Thus combining these two equations, we solve for the scale factor and the $U$-matrix $$a(t)\sim t^{1/3}\;,\ \ \ U=e^{A{\rm{ln}}t}\;.$$ Note that in our approach we are able to present a cosmological solution for the FRW metric. This is analogous to the solution obtained in [@mvc] while dealing with the $O(d,d)$ symmetric action. Here we solve for the scale factor and the $U$-matrix parameterized in terms of the five fields, $\phi$, $\chi$, $y,$ $\sigma_1$ and $\sigma_2$. The solution involves the integration constant $A$ (now a $3\times3$ matrix with five independent components). We may recall here that, in their approach to type IIB string cosmology, Copeland, Lidsey and Wands [@clw] identified various $SL(2,{\bf{R}})$ subgroups of $SL(3,{\bf{R}})$, parameterized in terms of different combinations of the above five fields besides the space-time metric. In that formulation, interactions among these fields (always involving derivatives or currents) were identified as potentials. Thus in the cosmological scenario, one encountered potentials. These authors found several types of solutions for homogeneous cosmologies for various $SL(2,{\bf{R}})$ subgroups of the duality group $SL(3,{\bf{R}})$. We would like to point out that, from a purely group theoretic perspective, in the present approach we are able to arrive at the cosmological solution. This is an elegant and powerful method.
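The quoted solution can be checked explicitly. A minimal symbolic sketch (our own; for simplicity $A$ is taken diagonal and traceless, with ${\rm{Tr}}A^2=8/3$ as enforced by the Hamiltonian constraint for $a(t)=t^{1/3}$) is:

```python
# Check that a(t) = t^(1/3), U = exp(A ln t) solves the equations quoted above.
import sympy as sp

t = sp.symbols('t', positive=True)
d1, d2 = sp.Rational(2, 3), sp.Rational(2, 3)
d3 = -d1 - d2                                    # Tr A = 0; here Tr A^2 = 8/3
U = sp.diag(t**d1, t**d2, t**d3)                 # U = exp(A ln t) for diagonal A

kin = ((U.inv().diff(t))*U.diff(t)).trace()      # Tr( d(U^{-1})/dt * dU/dt )

a = t**sp.Rational(1, 3)
h = sp.diff(a, t)/a                              # Hubble parameter

print(sp.simplify(16*sp.diff(h, t) + 24*h**2 - kin))       # -> 0
print(sp.simplify(6*h**2 + sp.Rational(1, 4)*kin))         # -> 0 (Hamiltonian constraint)
print((a**3*U.inv()*U.diff(t)).applyfunc(sp.simplify))     # -> the constant matrix A
```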
Let us turn our attention to the effective action $S_4^D$, eq. (\[eq:SSB\]), where we have already dualized the three-forms $H_{\mu\nu\rho}^{(i)}$ to $\sigma_i$ and taken into account the presence of fluxes. In the cosmological scenario under consideration, the equations of motion cannot be solved exactly. The presence of fluxes breaks $SL(3,{\bf{R}})$ to the S-duality group $SL(2,{\bf{R}})$. If we examine the equations of motion, the variation of the modulus, $y$, couples to the ${\cal{M}}$-matrix and to $\sqrt{-g}\partial_\mu\sigma^T{\cal{M}}\partial^\mu \sigma$. The equations of motion for $\sigma_i$ are current conservation laws.
The following clarifying remark is in order at this stage. Let us consider a case where the 10-dimensional theory has been compactified to the 4-dimensional theory with a constant radius of compactification. In other words, $y$, appearing in eq. (\[eq:volume\]), carries no space-time dependence. In such a case, the scalar field $y(x)$ is constant and not dynamical to start with; however, $B_{\alpha\beta}^{(i)}(x)$ can still appear as (scalar) dynamical degrees of freedom. This situation is different from the case where the volume modulus stabilizes to some constant value due to a mechanism built into the theory. Moreover, if one retains the modulus, $y(x)$, as dynamical in the reduced action and sets it to a constant value at the level of the equations of motion, then the equation of motion for $y$ (although it is set to a constant at that stage) imposes a constraint equation on the other fields. This situation is to be contrasted with the former one, where $y$ is already taken to be a constant right from the beginning when we compactify the 10-dimensional action. To illustrate further, if we considered ${\cal{G}}_{\alpha\beta}$ to be constant, then the corresponding kinetic energy term in eq.(\[eq:action7\]) would be absent whereas all other terms would be present. In this paper, when we use the phrase “frozen modulus” or freezing of the modulus, it is to be understood that $y(x)$ is set to a constant when the $D=10$ action is compactified. Let us consider the following simple form of the action where we set $\sigma_i=0$ and $y$ is a constant radius of compactification. The resulting action is $$\begin{aligned}
S_4=\frac{1}{2}\int d^4x \sqrt{-g}\biggl[R+\frac{1}{4}{\rm{Tr}}(\partial_\mu
{\cal{M}}\Sigma\partial^\mu {\cal{M}}\Sigma)-\frac{1}{12}
{\cal{H}}_{\alpha\beta\gamma}^T{\cal{M}}{\cal{H}}^{\alpha\beta\gamma}\biggr]\;.
\label{eq:action4}\end{aligned}$$
We define the last term as $-{\rm{Tr}}(S{\cal{M}})$ to simplify the notation, where
$$S=\frac{1}{12}\left(\begin{array}{cc}{\cal{H}}_{\alpha\beta\gamma}^{(1)}
{\cal{H}}^{(1)\alpha\beta\gamma}&{\cal{H}}_{\alpha\beta\gamma}^{(1)}
{\cal{H}}^{(2)\alpha\beta\gamma}\\{\cal{H}}_{\alpha\beta\gamma}^{(1)}
{\cal{H}}^{(2)\alpha\beta\gamma}&{\cal{H}}_{\alpha\beta\gamma}^{(2)}
{\cal{H}}^{(2)\alpha\beta\gamma}\end{array}\right)\;.$$
The equation of motion associated with the ${\cal{M}}$-matrix is $$\partial_\mu \bigl(\sqrt{-g}\partial^\mu {\cal{M}}{\cal{M}}^{-1}\bigr)+
\sqrt{-g}{\cal{M}}S-\sqrt{-g}\Sigma S{\cal{M}}\Sigma=0$$ in matrix notation. The above equation is derived from the action (\[eq:action4\]) keeping in mind that the variation of ${\cal{M}}$ is constrained, since ${\cal{M}}\in SL(2,{\bf{R}})$. Furthermore, if ${\cal{M}}$ were an ordinary (unconstrained) matrix, then the variation of the last term in the action (\[eq:action4\]) would result in $S_{ij}$ in the equation of motion rather than a combination of terms like ${\cal{M}}S$ and $S{\cal{M}}$. If we consider a situation where the $SL(2, {\bf R})$ symmetry is broken spontaneously, then in this phase we may express ${\cal M}=
{\cal M}_0+{\widetilde {\cal M}}$, where ${\cal M}_0$ is the constant $SL(2,{\bf R})$ matrix and ${\widetilde {\cal M}}$ is its fluctuation. Now let us turn our attention to the Einstein equation. In the spontaneously broken phase the constant flux couples to both ${\cal M}_0$ and ${\widetilde {\cal M}}$. Notice that the former acts like a cosmological constant in the Einstein equation, whereas the latter contributes to the stress energy momentum tensor in the usual fashion.
Now let us discuss the cosmological scenario, when the metric and all other fields depend only on the cosmic time. In the presence of the flux term, we shall get a complete set of equations if we include $y(t)$, the modulus, and the other pseudo scalars, $\sigma_i(t)$, in the action. As noted earlier, the presence of flux introduces an axion-dilaton potential, call it $\Lambda(\phi,\chi)$, which is $SL(2,{\bf{R}})$ invariant. We truncate the action (still maintaining S-duality invariance) by taking the torus moduli to be constant and setting $\sigma_i=0$. Notice that the presence of the S-duality invariant potential offers the prospect of exhibiting some of the interesting aspects of string cosmology.
We recall that, in the Pre-Big Bang proposal of Gasperini and Veneziano, one encounters the problem of graceful exit. It was observed that gravi-dilaton cosmology is subject to a no-go theorem [@bv], such that classically the two regimes cannot be smoothly connected under certain conditions. It was hoped that axion-dilaton cosmology might overcome this difficulty. We shall examine the graceful exit issue in the following.
The 4-dimensional effective action we now consider (including the scalars $B^{(i)}_{\alpha\beta}$) is $$\begin{aligned}
&&S^{(4)}_E=\frac{1}{2}\int d^4x \sqrt{-g}\biggl[R+\frac{1}{4}{\rm{Tr}}(
\partial_\mu{\cal{M}}^{-1}\partial^\mu{\cal{M}})-
\frac{1}{4}\partial_\mu B_{\alpha\beta}^{(i)}{\cal{M}}_{ij}
\partial^\mu B^{(j)\alpha\beta}\nonumber\\&&-\frac{1}{2}\partial_\mu \sigma^T {\cal{M}}
\partial^\mu\sigma-\frac{1}{12}
{\cal{H}}_{\alpha\beta\gamma}^{(i)}
{\cal{M}}_{ij}{\cal{H}}^{(j)\alpha\beta\gamma} - 2(\partial y)^2\biggr]\;.\end{aligned}$$ In a cosmological scenario where the modulus is frozen, the above action goes over to $$\begin{aligned}
&&S_E ^{(4)}=\frac{1}{2}\int dt \sqrt{-g}\biggl[R-\frac{1}{4}{\rm{Tr}}
\dot{{\cal{M}}^{-1}}\dot{{\cal{M}}}+\frac{1}{4}\dot{B}_{\alpha\beta}^{(i)}
{\cal{M}}_{ij}\dot{B}^{(j)\alpha\beta}+\frac{1}{2}\dot{\sigma}^T{\cal{M}}
\dot{\sigma}\nonumber\\&&-\frac{1}{12}{\cal{H}}_{\alpha\beta\gamma}^{(i)}
{\cal{M}}_{ij}{\cal{H}}^{(j)\alpha\beta\gamma}\biggr]\;.\label{eq:action6}\end{aligned}$$
Equations of motion are $$\begin{aligned}
&&\frac{d}{dt}(\sqrt{-g}{\cal{M}}_{ij}\dot{B}^{(j)}_{\alpha\beta})=0\;,\label
{eq:EOM4}\\&&\frac{d}{dt}(\sqrt{-g}{\cal{M}}_{ij}\dot{\sigma}_j)=0\;,
\label{eq:EOM5}\\
&&\frac{d}{dt}(\sqrt{-g}(\dot{{\cal{M}}}{\cal{M}}^{-1}))+\sqrt{-g}
{\cal{M}}{\widetilde{S}}-\sqrt{-g}\Sigma {\widetilde{S}}{\cal{M}}\Sigma=0\label{eq:EOM6}\end{aligned}$$ where ${\widetilde{S}}$ now also contains terms involving the time derivatives of the scalars, $${\widetilde{S}}=\frac{1}{12}{\cal{H}}_{\alpha\beta\gamma}^{(i)}
{\cal{H}}^{(j)\alpha\beta\gamma}-\frac{1}{2}\dot{\sigma}^i\dot{\sigma}^j-
\frac{1}{4}\dot{B}^{(i)}_{\alpha\beta}\dot{B}^{(j)\alpha\beta}\;.$$
The first two equations (\[eq:EOM4\]) and (\[eq:EOM5\]) can be integrated once in time to give $$\sqrt{-g}{\cal{M}}_{ij}\dot{B}_{\alpha\beta}^{(j)}=C_{\alpha\beta}^i\;,\ \ \ \sqrt{-g}
{\cal{M}}_{ij}\dot{\sigma}_j=D_i\label{eq:EOM7}$$ where $C$ and $D$ are time independent.
Thus, when we consider the equation of motion (\[eq:EOM6\]) associated with the ${\cal M}$-matrix and utilize the relations (\[eq:EOM7\]) in the definition of ${\widetilde S}$, we notice that eliminating the time derivatives of $B^{(i)}_{\alpha\beta}$ and $\sigma ^i$ from ${\widetilde S}$ leads to additional potential terms that depend on the ${\cal M}$-matrix.
Let us discuss the presence of the axion-dilaton potential (the last term) in the action. One might hope that with the presence of a potential $\Lambda(\phi,\chi)$ it might be possible to circumvent the no-go theorem of Kaloper, Madden and Olive for graceful exit [@kmo] in the context of the Pre-Big Bang scenario. In a simple setting, where we consider the axion-dilaton system and the potential due to the fluxes (maintaining S-duality invariance), we find that the no-go theorem still holds (see the appendix for details).
Now we proceed to present an illustrative example of a cosmological solution. We compactify the action (\[eq:eq1\]) to $D=4$ on $T^6$ of constant moduli. Then we set $B_{\mu\nu}^{(1)}=0,$ $B^{(1)}_{\mu\alpha}=0,$ $B_{\mu\alpha}^{(2)}=0$, $A_{\mu}^\alpha=0$ and the flux ${\cal{H}}_{\alpha\beta\gamma}^{(1)}=0$. The resulting action is $$\begin{aligned}
&&\widetilde{S}=\frac{1}{2}\int d^4x \sqrt{-g}\biggl[e^{-2\phi}\{R
+4(\partial \phi)^2\}-\biggl(\frac{1}{2}(\partial \chi)^2
+\frac{1}{2}\partial_\mu B_{\alpha\beta}^{(2)}\partial^\mu B^{(2)\alpha\beta}
\nonumber\\&&+\frac{1}{2}\partial_\mu \sigma^{(2)}\partial^\mu\sigma^{(2)}
+\frac{1}{12}{\cal{H}}_{\alpha\beta\gamma}^{(2)}
{\cal{H}}^{(2)\alpha\beta\gamma}\biggr)\biggr]\;.\label{eq:GV}\end{aligned}$$ Here $H^{(2)}_{\mu\nu\lambda}$ has been dualized to yield $\sigma^{(2)}$. We have expressed the action (\[eq:GV\]) in such a way that the two terms in the curly bracket, multiplied by $e^{-2\phi}$, are the contributions from the NS-NS states with the 3-form set to zero. The rest of the terms come from the R-R sector, and these are not multiplied by the overall factor of $e^{-2\phi}$. Following Gasperini and Veneziano [@gvr], we denote the second piece as $S_m$, such that $\delta S_m/\delta\phi=0$. When we go to the cosmological setting all the fields depend on the cosmic time $t$ and we can define $2\varphi=2\phi
-3{\rm{ln}}{\widetilde{a}}$, with $ds_4^2=-dt^2+{\widetilde{a}}^2(t)dx_i dx_j\delta_{ij}$. ${\widetilde{a}}(t)$ is the scale factor in this string frame metric. Thus scalars coming from R-R sector, including $\sigma_2$, do not couple to dilaton. Furthermore, if we define density and pressure as $$T_0^0=\varrho\;,\ \ \ T_j^i=p\delta_j^i$$ with $2{{\delta S_m}\over {\delta g^{\mu\nu}}}=T_{\mu\nu}$
the massless scalars are pressureless fluids. However, the flux contributes a negative pressure ($({\cal{H}}^{(2)})^2$). In order to establish the correspondence with [@gvr] we redefine the shifted dilaton as $\bar{\phi}=2\varphi$; the relevant quantities are defined as follows: $\bar{\varrho}=\varrho a^3$, $\bar{p}=pa^3$. The resulting equations are $$\begin{aligned}
&&\dot{\bar{\phi}}^2-3{\widetilde{h}}^2=e^{\bar{\phi}}\bar{\varrho}\;,\\
&&\dot{{\widetilde{h}}}-{\widetilde{h}}\dot{\bar{\phi}}=\frac{1}{2}e^{\bar{\phi}}\bar{p}\;,\\
&&2\ddot{\bar{\phi}}-\dot{\bar{\phi}}^2-3{\widetilde{h}}^2=0\end{aligned}$$ where ${{\widetilde{h}}}\equiv \frac{\dot{{\widetilde{a}}}}{{\widetilde{a}}}$.
The solution for the shifted dilaton is $$\bar{\phi}={\rm{ln}}\frac{12}{k}\frac{{\widetilde{a}}^{\sqrt{3}}}{(1-{\widetilde{a}}^{\sqrt{3}})^2}$$ where $k$ is a constant of integration. The scale factor is related to the cosmic time through the relation $$\frac{t}{t_0}=\frac{1}{\sqrt{3}}\bigl({\widetilde{a}}^{\sqrt{3}}-{\widetilde{a}}^{-\sqrt{3}}\bigr)
+2{\rm{ln}}{\widetilde{a}}$$ and $t_0$ is another integration constant. The scaled density and pressure (denoted by ${\bar \varrho}$ and $\bar p$ above) satisfy the relation $${\bar \varrho}=k{\widetilde{h}}^2 ~~~~~~{\rm and }~~~~{\bar p}=
-{2\over{\sqrt 3}}{{1-{\widetilde{a}}^{\sqrt{3}}}\over{1
+{\widetilde{a}}^{\sqrt 3}}}{\bar \varrho}\;.$$
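The quoted solution can be verified directly. A minimal symbolic sketch (our own; we use the scale factor as the evolution parameter, and the numerical values substituted at the end are arbitrary sample points):

```python
# Check that the quoted solution satisfies the three cosmological equations above.
import sympy as sp

a, k, t0 = sp.symbols('a k t0', positive=True)
r3 = sp.sqrt(3)

t_of_a = t0*((a**r3 - a**(-r3))/r3 + 2*sp.log(a))   # t(a) relation
dt_da = sp.diff(t_of_a, a)
ddt = lambda f: sp.diff(f, a)/dt_da                  # d/dt = (da/dt) d/da

phibar = sp.log(12/k*a**r3/(1 - a**r3)**2)           # shifted dilaton
h = ddt(a)/a                                         # string-frame Hubble parameter
rho = k*h**2
p = -(2/r3)*(1 - a**r3)/(1 + a**r3)*rho

eqs = [ddt(phibar)**2 - 3*h**2 - sp.exp(phibar)*rho,
       ddt(h) - h*ddt(phibar) - sp.exp(phibar)*p/2,
       2*ddt(ddt(phibar)) - ddt(phibar)**2 - 3*h**2]

vals = {a: sp.Rational(17, 10), k: sp.Rational(23, 10), t0: sp.Rational(11, 10)}
print([sp.N(e.subs(vals), 30) for e in eqs])         # -> all zero up to numerical precision
```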
We remind the reader that the equations of motion for the R-R scalars lead to conservation laws (corresponding to charge conservation), as in the previous case. We note that $\{{\widetilde{a}},\bar{\phi}\}$ satisfy scale factor duality during the entire epoch: ${\widetilde{a}}(t)={\widetilde{a}}^{-1}(-t)$, $\bar{\varrho}({\widetilde{a}})=\bar{\varrho}({\widetilde{a}}^{-1})$ and $\bar{\phi}({\widetilde{a}})=\bar{\phi}({\widetilde{a}}^{-1})$. However, as ${\widetilde{a}}(t)\rightarrow 1$ the dilaton assumes large values and it blows up at ${\widetilde{a}}=1$. Therefore, during this era we cannot trust the tree level string effective action and higher order corrections to the effective action have to be taken into account. Furthermore, the truncated action Eq. (\[eq:GV\]) is no longer manifestly S-duality invariant, since we have removed from the action some of the fields associated with the $SL(2,{\bf{R}})$ transformation. This is a scenario where S-duality is broken explicitly by the specific choice of flux. It is not obvious at this stage whether one can address the graceful exit issue in this framework if all the scalars appearing in the action (\[eq:action6\]) are included; it requires an exhaustive study of that effective action.
As we have discussed in the appendix, if we take another truncated version of the action, keeping different set of fields and the full potential, it is not possible to circumvent the no-go theorem. In view of the above discussions the present work offers new opportunities to address the graceful exit problem. Furthermore, other compactification schemes might be adopted to study cosmology from some of the points of view presented here.
Summary and Discussion
======================
In this section we summarize our results and discuss some future directions of investigation. We have adopted a toroidal compactification of the type IIB string effective action and included the contribution of 3-form fluxes along the compact directions. We have imposed the requirement that the reduced action remains S-duality invariant. Thus the axion-dilaton potential generated due to the presence of fluxes is $SL(2,{\bf{R}})$ invariant. Of course such a compactification scheme inherently contains tadpoles and the supersymmetries are not preserved; we are aware of this aspect. However, our goal is to study conventional string cosmology in this approach and to address some of its issues. One of the interesting scenarios, which is a special feature of graviton-axion-dilaton (string) cosmology, is the accelerated expansion of the Universe according to the Pre-Big Bang hypothesis. One of the issues encountered in the Pre-Big Bang approach is the graceful exit problem: how does the Universe make the transition from its rapid accelerated expansion regime to the present FRW phase? It has been argued that such a transition is forbidden under certain circumstances due to no-go theorems. There were attempts to introduce phenomenological potentials to circumvent this difficulty. Moreover, it was very difficult to construct an S-duality invariant axion-dilaton potential in those days. However, with the present compactification scheme the potential is manifestly S-duality invariant. Therefore, we thought it appropriate to examine the graceful exit problem. We considered a truncated version of the reduced effective action (setting certain scalar fields to zero) in which the axion and dilaton, which parameterize $\frac{SL(2,{\bf{R}})}{SO(2)}$, were retained in the action, together with the $SL(2,{\bf{R}})$ invariant potential. We find that the no-go theorem of Kaloper, Madden and Olive is still valid.
The four dimensional action contains the axion-dilaton potential. The action is that of a non-linear sigma model. It is possible that the global $SL(2,{\bf{R}})$ symmetry might be spontaneously broken. Moreover, as alluded to in the introduction, S-duality is not realized in Nature as an exact symmetry. However, it is believed to be an exact symmetry of string theory. In this context, we may recall that when M-theory is compactified on $T^2$ to type IIB theory in 9 dimensions, the resulting action can be cast in S-duality invariant form [@jhsb]. Moreover, the “coupling constants” (the expectation values of the dilaton and axion are the coupling constants) are associated with the $T^2$ on which M-theory is compactified. Thus the two coupling constants have a geometric origin, as has been argued by Schwarz [@jhsb]. Similarly, from the F-theory point of view the axion-dilaton doublet of 10-dimensional type IIB theory also has a geometrical interpretation [@vafa].
This symmetry must be broken below the string scale. There are several reasons why the dilaton, which appears as a massless excitation along with the graviton, is not expected to remain massless at the present epoch. There are arguments from the cosmological standpoint that the dilaton must settle into its ground state well before the era of nucleosynthesis. If we accept S-duality to be an exact stringy symmetry and that the dilaton does not remain massless at lower scales, then the symmetry must be broken. Moreover, from the perspective of the standard model of particle physics the axion is expected to be a light, weakly interacting pseudoscalar particle. In a qualitative framework string theory contains many axions. However, if we identify the S-duality partner of the dilaton with the axion that cosmologists search for and that standard particle physics models introduce, then it is important to understand how S-duality is broken. If S-duality, realized non-linearly, is broken spontaneously then, in our model, it introduces a cosmological constant. We are unable to determine the symmetry breaking scale. We remind the reader that here one is discussing the breaking of the non-compact $SL(2,{\bf{R}})$ symmetry, and we are aware that in some cases one might encounter technical difficulties due to the noncompact nature of $SL(2,{\bf{R}})$. In the context of the cosmological constant problem we would like to invoke the naturalness argument advanced by 't Hooft to argue that $\Lambda_c$ is small [@gerard]. The naturalness argument, put qualitatively, says that if an exact symmetry dictates that a parameter of a theory vanishes, then that parameter will remain small when the symmetry is broken. In other words, if a parameter is forced to be zero when a symmetry is restored, the parameter will assume a small value in the broken phase. It has been pointed out by 't Hooft that if the cosmological constant, $\Lambda_c$, is set equal to zero in the Einstein-Hilbert action, then there is no enhancement of any symmetry of the action. Contrast this case with a theory which contains fermions: if the fermion mass is set to zero, the action is invariant under chiral symmetry. In a more general setting we encounter situations in which an exact symmetry requires certain parameters to have vanishing values; if some parameters assume small values, then the symmetries could be approximate.
We argue that a cosmological constant arising out of the spontaneous breaking of S-duality will remain small, since in the unbroken phase it vanishes. However, our argument does not prevent the appearance of a cosmological constant when other symmetries are broken. At every stage of symmetry breaking there is a contribution to the vacuum energy. The cosmological observations imply the presence of a vanishingly small cosmological constant. It is worth pursuing our proposal to seek a symmetry of the effective field theories that describe phenomena today, such that the naturalness argument may be invoked to understand why $\Lambda _c$ is so small. In our study of cosmological scenarios, we have dealt with four dimensional effective actions. In such cases we arrived at the reduced action by compactifying on a torus of constant modulus. Thus the volume modulus is set to a constant by hand; in other words, we have no way to stabilize the volume modulus.
It will be interesting to solve the Wheeler-De Witt equation for the cosmological case in the minisuperspace framework. It is well known that the graceful exit, forbidden in classical string cosmology due to the no-go theorems, could be achieved if one adopts the quantum version and solves the corresponding WDW equation [@gmvq]. The axion-dilaton WDW equation has some novel features [@mmp; @max], in that the S-duality symmetry may be exploited to obtain the wave function for the axion-dilaton purely from group theoretic considerations. In fact the dynamics of the two scalars is similar to the motion of a particle on the surface of a pseudosphere [@max] in the background FRW metric. We may hope that similar considerations could apply when we include the presence of fluxes. However, it is obvious that the cosmological version of eq.(34) cannot be solved when we consider the corresponding WDW equation. Intuitively, we can argue that, although the interaction term is invariant under $SL(2,{\bf R})$ rotations, the axion-dilaton wave function of the corresponding Hamiltonian cannot be obtained as representations of $SL(2,{\bf R})$, as was the case in the absence of fluxes. If we denote the generators of $SL(2,{\bf R})$ as $\Sigma ^1=\pmatrix{1 & 0 \cr 0 & -1 \cr}$, $\Sigma ^2=\pmatrix{0 & 1 \cr 1 & 0 \cr }$ and $\Sigma ^3=\pmatrix{0 & 1 \cr -1 & 0 \cr}$, then the ${\cal M}$-matrix can be expanded using the unit matrix and $\Sigma ^i$ as a basis: ${\cal M}=v_0{\bf 1}+v_1\Sigma ^1+v_2\Sigma ^2+v_3\Sigma ^3$. It turns out from the structure of ${\cal M}$ that $v_3 =0$ and that the three (time dependent) coefficients satisfy the condition $v_0^2-v_1^2-v_2^2=1$, defining the surface of the pseudosphere mentioned above. The axion-dilaton kinetic energy term expressed in terms of $\{v_i;i=0,1,2\}$ can be written as the Laplace-Beltrami operator. When we go over to polar coordinates on the pseudosphere it becomes quite transparent that this is the Casimir of $SL(2, {\bf R})$ [@max]. In the presence of the fluxes, in general, the potential will not commute with any of the generators of $SL(2, {\bf R})$, although it commutes with the Casimir. Therefore, if the flux is along a special direction (say the potential term commutes with the compact generator of $SL(2, {\bf R})$), then we can obtain wave functions (for the “angular part”) which are characterized by the Casimir and the eigenvalues of the compact generator. These representations will be infinite dimensional, as is the case with unitary representations of $SL(2,{\bf R})$. But it is not possible to solve the full WDW equation analytically in closed form for the scale factor part. However, it is worthwhile to resort to some approximation, like WKB, and to examine various solutions in order to address the graceful exit problem.
We have all along discussed $SL(2,{\bf{R}})$ duality symmetry in the context of string cosmology which has been our main focus. It is well known that, in string theory, the S-duality symmetry corresponds to the discrete $SL(2,{\bf{Z}})$ subgroup of $SL(2,{\bf{R}})$. The axion-dilaton potential will be further constrained under $SL(2,{\bf{Z}})$. It is argued that $SL(2,{\bf{Z}})$ is a robust symmetry of string theory. We have addressed the issue of S-duality symmetry breaking in this article. However, it is beyond the scope of present investigation to determine the scale of the symmetry breaking. In other words, although we argue that in the present epoch axion and dilaton are expected to acquire mass, we are unable to identify the scale of the symmetry breaking. We hope to take up some of the issues in future.
One of us (EK) would like to thank Professor Tohru Eguchi for valuable discussions and advice. JM would like to acknowledge fruitful discussions with Professors Ashok Das, Hikaru Kawai, Romesh Kaul, and P. K. Tripathy. The gracious hospitality of Professor Yoshihisa Kitazawa and the Theoretical High Energy Physics Division of KEK is gratefully acknowledged by JM.
APPENDIX: Graceful Exit Problem with Axion-Dilaton Potential
=============================================================
In this short appendix we discuss the graceful exit problem following Kaloper, Madden and Olive [@kmo]. Note that our axion comes from the R-R sector and therefore the relevant equations are modified appropriately. Our starting point is the 4-dimensional action eq. (34), written in terms of component fields in the string frame metric. We have pulled out an overall factor of $e^{-2\phi}$ and therefore the R-R field $\chi$ and the flux term get multiplied accordingly. The last term (the potential) is defined to be $2\Lambda (\phi , \chi)$. The action is $$\begin{aligned}
S^{(4)}=\frac{1}{2}\int d^4x\sqrt{-g}e^{-2\phi}\bigg[R+4(\partial \phi)^2-
\frac{1}{2}e^{2\phi}(\partial \chi)^2 -
2\Lambda(\phi ,\chi) \bigg]\;.\end{aligned}$$ In the cosmological case, all the fields are dependent on cosmic time (we consider the FRW metric with $k=0$). We denote the scale factor in the string frame as $\widetilde{a}(t)$ and corresponding Hubble parameter as $\widetilde{h}=\frac{\dot{\widetilde{a}}}{\widetilde{a}}$, then the axion equation of motion is $$\ddot{\chi}+3\widetilde{h}\dot{\chi}+2e^{-2\phi}\frac{\partial\Lambda(\phi,\chi)}{\partial \chi}=0\;.\label{eq:axion}$$ The other equations are $$\begin{aligned}
&&-3\widetilde{h}^2+6\dot{\phi}\widetilde{h}-2\dot{\phi}^2+
\frac{1}{4}e^{2\phi}\dot{\chi}^2+\Lambda (\phi , \chi)=0\;,\label{eq:EOM1}\\
&&4(\ddot{\phi}-\dot{\phi}^2+3\widetilde{h}\dot{\phi})-6\dot{\widetilde{h}}-12
\widetilde{h}^2+2\Lambda (\phi ,\chi)-
\frac{\partial \Lambda (\phi ,\chi)}{\partial \phi}=0\;,\label{eq:EOM2}\\
&&4(\ddot{\phi}-\dot{\phi}^2+2\widetilde{h}\dot{\phi})-4\dot{\widetilde{h}}-
6\widetilde{h}^2+2\Lambda (\phi ,\chi)-\frac{1}{2}
e^{2\phi}\dot{\chi}^2=0\;.\label{eq:EOM3}\end{aligned}$$ Eq. (\[eq:EOM1\]) is the Hamiltonian constraint which follows from variation of the lapse function, $N(t)$ and then setting $N(t)=1$ as is the standard practice. The other two equations are associated with variation of dilaton and the scale factor.
We recall that the Hamiltonian constraint is quadratic in $\dot{\phi}$ and it leads to $$\dot{\phi}=\frac{3\widetilde{h}\pm \sqrt{3\widetilde{h}^2+2\Lambda+\rho/2}}{2}
\label{eq:GV2}$$ with $\rho=e^{2\phi}\dot{\chi}^2$. Moreover, as is well known, the other two equations are utilized to eliminate $\ddot{\phi}$ and $\dot{\phi}^2$ (the procedure that led to the Pre-Big Bang scenario), and we are left with an equation for $\dot{\widetilde{h}}$, $$\dot{\widetilde{h}}=\pm\widetilde{h}\sqrt{3\widetilde{h}^2+2\Lambda+\rho/2}-\frac{1}{2}\frac{\partial\Lambda}{\partial \phi}+\frac{\rho}{4}\;.$$
Here the sign is chosen appropriately, in accordance with Eq. (\[eq:GV2\]). The potential, we recall, is $${\rm{Tr}}({\cal{H}}{\cal{H}}^T{\cal{M}})$$ where ${\cal{H}}{\cal{H}}^T$ is to be understood as a $2\times2$ matrix with the (internal) tensor indices of ${\cal{H}}^{(1)}$ and ${\cal{H}}^{(2)}$ appropriately contracted. In the case where the $SL(2,{\bf{R}})$ symmetry remains unbroken, the structure of the potential is such that it is positive everywhere and, therefore, the no-go theorem of Kaloper, Madden and Olive remains valid.
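For completeness, a short argument for this positivity (our own sketch): ${\cal{M}}$ is symmetric with $\det{\cal{M}}=1$ and ${\rm{Tr}}\,{\cal{M}}=e^{\phi}(1+\chi^2)+e^{-\phi}>0$, hence positive definite with a symmetric square root ${\cal{M}}^{1/2}$, so that $${\rm{Tr}}\bigl({\cal{H}}{\cal{H}}^T{\cal{M}}\bigr)={\rm{Tr}}\Bigl[\bigl({\cal{M}}^{1/2}{\cal{H}}\bigr)\bigl({\cal{M}}^{1/2}{\cal{H}}\bigr)^T\Bigr]\ge 0\;,$$ since ${\cal{H}}{\cal{H}}^T$ is a Gram matrix built with the Euclidean internal metric $\delta_{\alpha\alpha^\prime}$; the potential vanishes only for vanishing flux.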
[99]{} M. B. Green, J. H. Schwarz and E. Witten, Superstring Theory, Vol I and Vol II, Cambridge University Press, 1987.\
J. Polchinski, String Theory, Vol I and Vol II, Cambridge University Press, 1998.\
K. Becker, M. Becker and J. H. Schwarz, String Theory and M-Theory: A Modern Introduction, Cambridge University Press, 2007.\
B. Zwiebach, A First Course in String Theory, Cambridge University Press, 2004. A. Sen, Int. J. Mod. Phys. [**A 9**]{}, 3707, (1994) \[arXiv:hep-th/9402002\]. A. Giveon, M. Porrati and E. Rabinovici, Phys. Rep. [**C 244**]{}, 77 (1994) \[arXiv:hep-th/9401139\]. E. Alvarez, L. Alvarez-Gaume and Y. Lozano, Nucl. Phys. Suppl. [**41**]{}, 1 (1995) \[arXiv:hep-th/9410237\]. J. Maharana and H. Singh, Phys. Lett. [**B 368**]{}, 64 (1996) \[arXiv:hep-th/9506213\]; S. Kar, J. Maharana and H. Singh, Phys. Lett. [**B 374**]{}, 43 (1996) \[arXiv:hep-th/9507063\]. J. Maharana, Int. J. Mod. Phys. [**D 14**]{}, 2245 (2005). J. E. Lidsey, On Cosmology and Symmetry of Dilaton-Axion Cosmology, \[arXiv:gr-qc/9609063\]. E. Konishi, Axion-Dilaton Gauged S-Duality and Its Symmetry Breaking, arXiv:0710.1228 \[hep-th\]. P. Svrcek and E. Witten, JHEP [**0606**]{}, 051 ,(2006), P. Svrcek, Cosmological Constant and Axions in String Theory, \[arXiv:hep-th/0607086\]. K. E. Kim, A Review on Axions and the Strong CP Problem, arXiv:0909.2595 \[hep-th\]. E. Konishi, Prog. Theor. Phys. [**[121]{}**]{}, 1125 (2009) arXiv:0902.2565 \[hep-th\]. K. Dasgupta, G. Rajesh and S. Sethi, JHEP [**9908**]{}, 023 (1999) \[arXiv:hep-th/9908088\]. R. Bousso and J. Polchinski, JHEP, [**0006**]{}, 006 (2000) \[arXiv:hep-th/0004134\]. S. B. Giddings, S. Kachru and J. Polchinski, Phys. Rev. [**D 66**]{}, 106006 (2002) \[arXiv:hep-th/0105097\]. M. R. Douglas and S. Kachru, Rev. Mod. Phys. [**79**]{}, 733 (2007) \[arXiv:hep-th/0610102\]. M. Grana, Phys. Rep. [**C 423**]{}, 91 (2006) \[arXiv:hep-th/0509003\]. F. Denef, Constructing String Vacua (Les Houches Lecture), arXiv:08003.1194 \[hep-th\]. S. Kachru, R. Kallosh, A. Linde and S. P. Trivedi, Phys. Rev. [**D 68**]{}, 046005 (2003) \[arXiv:hep-th/0301240\]. J. Scherk and J. H. Schwarz, Nucl. Phys. [**B 153**]{}, 61 (1979). J. Maharana and J. H. Schwarz, Nucl. Phys. [**B 390**]{}, 3 (1993) \[arXiv:hep-th/9207016\]. S. F. Hassan and A. Sen, Nucl. Phys. [**B 375**]{}, 103 (1992) \[arXiv:hep-th/9109038\]. N. Kaloper and R. C. Myers, JHEP [**9905**]{}, 010 (1999) \[arXiv:hep-th/9901045\]. J. Maharana, Phys. Lett. [**B 402**]{}, 64 (1997) \[arXiv:hep-th/9703009\]. P. K. Tripathy and S. P. Trivedi, JHEP [**030**]{}, 028 (2003) \[arXiv:hep-th/0301139\]; S. Kachru, M. Schultz and S. P. Trivedi, JHEP [**0310**]{}, 007 (2003) \[arXiv:hep-th/0201028\]; J. Kumar and J. D. Wells, JHEP [**0509**]{}, 067 (2005) \[arXiv:hep-th/0506252\]. E. J. Copeland, J. E. Lidsey and D. Wands, Phys. Rev. [**D 58**]{}, 043503 (1998) \[arXiv:hep-th/9708153\]. E. J. Copeland, J. E. Lidsey and D. Wands, Phys. Rep. [**C 337**]{}, 343 (2000) \[arXiv:hep-th/9909061\]. M. Gasperini and G. Veneziano, Phys. Rep. [**C 373**]{}, 1 (2003) \[arXiv:hep-th/0207130\]. M. Gasperini and G. Veneziano, Astropart. Phys. [**1**]{}, 317 (1993) \[arXiv:hep-th/9211021\]. R. Brustein and G. Veneziano, Phys. Lett. [**B 329**]{}, 429 (1994) \[arXiv:hep-th/9403060\]; N. Kaloper, R. Madden and K. A. Olive, Nucl. Phys. [**B 452**]{}, 677 (1995) \[arXiv:hep-th/9506027\]. N. Kaloper, R. Madden and K. A. Olive, Phys. Lett. [**B 371**]{}, 34 (1996) \[arXiv:hep-th/9510117\]. G. Veneziano, Phys. Lett. [**B 265**]{}, 287(1991). J. Maharana, Phys. Lett. [**B 372**]{}, 53 (1996) \[arXiv:hep-th/9511159\]; J. Maharana (unpublished work) 1996. C. M. Hull, Phys. Lett. [**B 357**]{}, 545 (1995) \[arXiv:hep-th/9506194\]. J. H. Schwarz, Phys. Lett. [**B 360**]{}, 13 (1995) \[arXiv:hep-th/9508143\]. S. Ferrara, C. Kounnas and M. Porrati, Phys. Lett. 
[**B 181**]{}, 263 (1986). S. Roy, Int. J. Mod. Phys. [**[A 13]{}**]{}, 4445 (1998) \[arXiv:hep-th/9705016\]. T. Damour and A. M. Polyakov, Nucl. Phys. [**B 423**]{}, 532 (1994) \[arXiv:hep-th/9401069\]; also see T. Damour and A. M. Polyakov, Gen. Rel. Grav. [**26**]{}, 1171 (1994) \[arXiv:gr-qc/9411069\]. K. Meissner and G. Veneziano, Mod. Phys. Lett. [**A 6**]{}, 3397 (1991); K. Meissner and G. Veneziano, Phys. Lett. [**B 267**]{}, 33 (1991). J. H. Schwarz, Phys. Lett. [**B 367**]{}, 97 (1996) \[arXiv:hep-th/9510086\]. C. Vafa, Nucl. Phys. [**B 469**]{}, 403 (1996) \[arXiv:hep-th/9602022\]. G. ’t Hooft, Under the Spell of the Gauge Principle, World Scientific, 1994. M. Gasperini, J. Maharana and G. Veneziano, Nucl. Phys. [**B 472**]{}, 349 (1996) \[arXiv:hep-th/9602087\]. J. Maharana, S. Mukherji and S. Panda, Mod. Phys. Lett. [**A 12**]{}, 447 (1996) \[arXiv:hep-th/9701115\]. J. Maharana, Int. J. Mod. Phys. [**A 20**]{}, 1441 (2005) \[arXiv:hep-th/0405039\].
[^1]: E-mail address: konishi.eiji@s04.mbox.media.kyoto-u.ac.jp
[^2]: E-mail address: maharana@iopb.res.in
[^3]: If toroidal compactification of action eq.(\[eq:eq1\]) is carried out then reduced action is not expressible in manifestly S-duality invariant form[@sroy].
---
abstract: 'We point out a conceptual analogy between the physics of extra spatial dimensions and the physics of carbon nanotubes which arises for principle reasons, although the corresponding energy scales are at least ten orders of magnitude apart. For low energies, one can apply the Kaluza–Klein description to both types of systems, leading to two completely different but consistent interpretations of the underlying physics. In particular, we discuss in detail the Kaluza–Klein description of armchair and zig-zag carbon nanotubes. Furthermore, we describe how certain experimental results for carbon nanotubes could be re-interpreted in terms of the Kaluza–Klein description. Finally, we present ideas for new measurements that could allow to probe concepts of models with extra spatial dimensions in table-top experiments, providing further links between condensed matter and particle physics.'
author:
- 'Jonas de Woul$^a$[^1], Alexander Merle$^{a,b}$[^2], and Tommy Ohlsson$^a$[^3]'
title: Establishing Analogies between the Physics of Extra Dimensions and Carbon Nanotubes
---
Among the most fascinating ideas in physics are those that question aspects we are so much used to that we nearly take them for granted. One example, questioning the very nature of space-time, is the possible existence of extra spatial dimensions (EDs) in addition to the three we experience in our daily life. Probing the existence of EDs is one of the main tasks of the Large Hadron Collider (LHC), and the corresponding models are generally referred to as *Kaluza–Klein (KK) theories*. Depending on the nature of the additional spatial dimensions, one speaks of *universal extra dimensions* [@ACD], *large extra dimensions* [@ADD], or *warped extra dimensions* [@RS]. Indeed, there are recent limits on the energy scale of, e.g., universal extra dimensions (UEDs) from LHC [@LHC], which amount to a limit on the inverse compactification radius of roughly $\hbar c/R\gtrsim 700$ GeV, i.e., $R\lesssim 0.3\cdot 10^{-18}$ m.
The basic concept of the KK-construction generically used in models with EDs is easy to understand [@ED-Intro]: In case there exists one spatial ED, in addition to the three known ones, this ED must be *compactified* (i.e., “rolled up”) in order not to modify the law of gravity on large scales [@Hoyle:2004cw]. The easiest possibility is the compactification on a circle $S^1$, leading to periodic boundary conditions of any field along the ED. In this work, we will be interested in applying this to fermionic fields in two spatial dimensions. However, the general construction can, without loss of generality, be exemplified using the more transparent case of a bosonic field in any dimension. To this end, consider the action of a massless real scalar field $\Phi$ in $d+1$ space-time dimensions, $$S = \int {\rm d}^d x \int {\rm d}y\ \frac{1}{2} \left( \partial_A \Phi \right) \left( \partial^A \Phi \right),
\label{eq:5D-action}$$ where from now on we mostly use natural units $\hbar=c=1$. Note that the convention is to sum over repeated indices $A=0,1,2,\ldots,d$ in a ($d$+1)-dimensional Minkowski space. A generic example is 3 ordinary spatial dimensions plus 1 temporal dimension and 1 extra spatial dimension, leading to $d+1=5$. Now, the trick is to separate the dynamics in the ED, i.e., $\Phi (x^\mu, y) = \sum_n \phi_n ( x^\mu ) \chi_n (y)$, and to make use of the boundary condition to expand the $y$-dependent part in Fourier modes. While the actual form of these Fourier modes depends very much on the geometry of the ED (e.g. sine and cosine functions for compactifying a flat ED on $S^1$), the generic effect on the $d$-dimensional submanifold is the existence of a so-called *KK-tower* of states with increasing masses $m_n$ ($n = 0, 1, 2,\ldots$), as usual for a 1D-potential with certain boundary conditions, their size being characterized by the compactification scale $R$ of the ED. Integrating out the extra spatial dimension in the above example, one obtains $$S = \int {\rm d}^d x\ \frac{1}{2} \sum_n \left[ \left( \partial_\mu \phi_n \right) \left( \partial^\mu \phi_n \right) - \frac{n^2}{R^2} \phi^2_n \right],
\label{eq:4D-KK}$$ leading to the identification of masses $m_n \equiv \frac{n}{R}$. Note that, in particular, the above construction is independent of the actual number of dimensions $d+1$, so instead of using $d=4$, one could ask if there is an easily accessible system in Nature that exhibits an analogous behavior, but for a different number of dimensions.
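As a quick numerical cross-check of the numbers quoted above (this snippet is our own illustration and not part of the original text), the lines below convert the quoted UED limit $\hbar c/R\gtrsim 700$ GeV into a bound on $R$ and list the first few masses of the resulting KK-tower, $m_n = n/R$ in natural units:

```python
# Illustrative back-of-the-envelope check (not from the original text).
hbar_c_eV_m = 197.3269804e6 * 1e-15   # hbar*c ~ 197.33 MeV*fm, expressed in eV*m
bound_eV = 700e9                      # quoted UED limit: hbar*c/R >~ 700 GeV

R_max = hbar_c_eV_m / bound_eV        # corresponding upper limit on the radius
print(f"R <~ {R_max:.1e} m")          # ~2.8e-19 m, i.e. roughly 0.3e-18 m as quoted

# KK-tower for a flat S^1 compactification: m_n = n/R, i.e. m_n c^2 = n*(hbar c / R)
for n in range(1, 4):
    print(f"m_{n} c^2 >~ {n * bound_eV / 1e9:.0f} GeV")
```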
An illustrative example that has these features is a carbon nanotube (CN) [@CN-Review], which may be thought of as a two-dimensional sheet of graphite, i.e. graphene, rolled up into a thin tube. As is well-known for graphene, after taking the continuum limit, certain low-energy electronic states obey a 2D Dirac-type equation and resemble massless fermions [@Semenoff] (with the Fermi velocity $v_F$ playing the role of the speed of light $c$), similar to our starting point in Eq. (\[eq:5D-action\]). This feature makes it possible to study different fundamental concepts of relativistic quantum mechanics, such as the Klein paradox [@Klein-Paradox], using graphene. If one compactifies 2D continuous space into a cylinder and applies the KK-construction, one naturally obtains a KK-tower of 1D Dirac fermions. This suggests that the low-energy physics of a CN (for which space is instead discrete) allows for a similar interpretation. In this work, we point out that this is indeed true, although the details turn out to be much richer due to the discrete atomic lattice (see e.g. [@Dienes] for a discussion on modular symmetries in carbon nanotori). Note that a CN has a typical length scale set by its radius, which in turn is of the order of the interatomic spacing $a\simeq 2.46$ Å [@Graphene-Review], and, as we will discuss, this leads to KK-levels at eV-energies. Both CNs and graphene can be manufactured, see [@Tube_Production] for CNs and [@Graphene_Production] for graphene. The production procedure of graphene has been awarded the Nobel Prize in Physics in 2010, and fullerenes, of which a cylindrical form is simply a CN, led to the Nobel Prize in Chemistry in 1996 [@Nobel].
The goal of this work is to emphasize the analogies between KK-theories and the physics of CNs, and to exchange ideas between both fields – in the same spirit as the Higgs mechanism and the Meissner effect, mass gaps and insulators, etc. This possibility has up to now only been vaguely mentioned (see e.g. [@ModPhys]), but, to our knowledge, never been worked out in any detail. Some of our findings are well-known to condensed matter physicists, but not presented in a way accessible to particle physicists. In turn, a condensed matter physicist could benefit from adopting a new language to describe the phenomena in a CN, which allows for a different way of thinking. Finally, we want to point out possible experiments that could probe the concepts of KK-theories in CN-systems. Actually, some experiments were already performed, but, to our knowledge, have never been interpreted in the context of the KK-description.
To set the stage, we briefly discuss the transition from a graphene sheet to a CN. Extensive descriptions can, e.g., be found in the reviews [@CN-Review; @Graphene-Review], while we here just summarize the basic steps. The typical starting point for graphene is a *tight-binding model* of free electrons (see [@interaction] for cases where interactions are included) on a honeycomb lattice with each triangular sublattice generated by two lattice vectors $\vec{a}_{1,2} = \frac{a}{2} \left( \sqrt{3}, \pm 1 \right)$. The electrons are able to tunnel, or *hop*, between nearest-neighbor sites with an amplitude set by $t\simeq 2.9$ eV, the so-called *hopping strength*. The tight-binding Hamiltonian is diagonalized by transforming to Fourier space. The resulting dispersion relation (one-particle energy eigenvalues) is $$E = \pm t \sqrt{3 + 2 \cos ( a k_1 ) + 2 \cos \left[ a ( k_1 - k_2 ) \right] + 2 \cos ( a k_2 )},
\label{eq:graphene_2}$$ with lattice momenta $\vec{k}= \frac{a}{2\pi} ( k_1 \vec{G}_1 + k_2 \vec{G}_2 )$, lattice constant $a\simeq 2.46$ Å, and reciprocal lattice vectors $\vec{G}_{1,2} = \frac{2\pi}{\sqrt{3} a} \left( 1, \pm \sqrt{3} \right)$ [@CN-Review; @Graphene-Review; @Condensed-FT]. At half-filling (one electron per lattice site), the so-called *Fermi surface* in Fourier space consists of just two points $\vec{k}_F^\pm$ \[with $E({\vec{k}_F^\pm})=0$\]. Expanding Eq. (\[eq:graphene\_2\]) to linear order around these points one finds $E \simeq \pm \frac{3}{2} t |\vec{k}' |$, where $\vec{k}'=\vec{k}-\vec{k}_F^\pm$. This suggests the interpretation of one-particle states close to the Fermi surface as massless Dirac fermions (where the spin quantum number is replaced by a quantity called *quasi-spin* [@Graphene-Review]), with positive and negative energies corresponding to particle and antiparticle states.
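To make the dispersion relation concrete, the following short sketch (our own illustration; $t$ and $a$ are the values quoted above) evaluates the radicand of the dispersion relation and confirms that it vanishes at a Fermi point, e.g. $(k_1,k_2)=\frac{2\pi}{3a}(1,-1)$, so that the two bands touch there and the low-energy excitations are gapless:

```python
import numpy as np

t, a = 2.9, 2.46e-10   # hopping strength [eV] and lattice constant [m], as quoted above

def radicand(k1, k2):
    # argument of the square root in the tight-binding dispersion relation
    return 3 + 2*np.cos(a*k1) + 2*np.cos(a*(k1 - k2)) + 2*np.cos(a*k2)

def E(k1, k2):
    # clip tiny negative values produced by round-off before taking the square root
    return t*np.sqrt(np.maximum(radicand(k1, k2), 0.0))

print(E(0.0, 0.0))          # band maximum at the zone center: 3t = 8.7 eV
kF = 2*np.pi/(3*a)          # one of the two inequivalent Fermi points
print(radicand(kF, -kF))    # -> ~0, so E vanishes and the spectrum is gapless there
```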
The transition from the infinite honeycomb lattice to a general (*chiral*) carbon nanotube is achieved by imposing the periodicity condition $\Psi (\vec{r} + N_1 \vec{a}_1 + N_2 \vec{a}_2) = \Psi (\vec{r})$ on any one-particle state, with two integer numbers $(N_1, N_2)$. This leads to the quantization of the *orthogonal* momentum $k_\perp$ along $N_1 \vec{a}_1 + N_2 \vec{a}_2$, while the momentum $k_\|$ parallel to the extension of the CN can be treated as continuous. The illustrative cases, to be discussed here, are the *armchair* $(N,N)$ and *zig-zag* $(N,0)$ CNs. Now, starting from the symmetry of the state, one can derive a quantization condition for $k_\perp$ which translates into a new dispersion relation resulting from Eq. . The results are summarized in Tab. \[tab:compare\] and will be discussed in the following. We have also plotted two cases in Fig. \[fig:dispersion\], which exhibit all qualitative features that can appear.
| Properties | Armchair $(N,N)$ | Zig-zag $(N,0)$ | KK-interpretation |
|------------|------------------|-----------------|-------------------|
| Dispersion relation, $\Omega^2(k_\Vert) \equiv E^2(k_\Vert)/t^2$ | $\Omega^2(k_\Vert) = 3+4 c_{1\perp} c_{1\Vert} + 2 c_{2\Vert}$ | $\Omega^2(k_\Vert) = 3+4 c_{1\perp} c_{1\Vert} + 2 c_{2\perp}$ | Full effect of the ED |
| Quantized momentum | $k_\perp = \frac{2\pi}{a} \frac{n}{N}$ ($-N < n \leq N$) | $k_\perp = \frac{2\pi}{a} \frac{n}{N}$ ($0\leq n \leq N$) | ED-momentum |
| CN radius $r$ | $\frac{\sqrt{3} m a}{2 \pi}$ | $\frac{m a}{2 \pi}$ | Compactification radius $R$ |
| Minima of the dispersion | $m_n^2 = t^2 \sin^2 \left( \pi \frac{n}{N} \right)$ ($\vert n \vert \geq N/2$) | $m_n^2 = t^2 \left[ 1 + 2 \cos \left( \pi \frac{n}{N} \right) \right]^2$ ($n> N/2$) | KK-masses |
| Positions of the minima | $k_\Vert=\frac{2}{a} \arccos \left( - \frac{1}{2} \cos \left( \pi \frac{n}{N} \right) \right)$ | $k_\Vert=0$ | Zero-momentum states |
| Number of non-deg. minima | $\lfloor \frac{N+2}{2} \rfloor$ | $\lfloor \frac{N+1}{2} \rfloor$ | Number of KK-levels |
| Gapless at half-filling | always | only if $3\mid N$ | Existence of a massless mode |

  : Properties of armchair $(N,N)$ and zig-zag $(N,0)$ carbon nanotubes and their Kaluza–Klein interpretation.[]{data-label="tab:compare"}
After having derived the dispersion relations for the CN under consideration, one can right away observe the behavior resembling a typical KK-theory: The low energy physics is dominated by the minima of the low-lying bands. Around those minima at the positions $k_{\| n}$ one can perform Taylor expansions in $\bar{k}=k_\| - k_{\| n}$, leading to dispersion relations of the form $E^2 = m_n^2 + \bar{k}^2 + \mathcal{O}(\bar{k}^3)$, which can, for small momenta, be interpreted as free particles with masses $m_n$. For example, for a zig-zag CN, all minima are at $k_{\| n}=0$. Expanding the zig-zag dispersion relation from Tab. \[tab:compare\] leads to $$E^2 = t^2 \left[ 1 + 2 \cos \left( \pi \frac{n}{N} \right) \right]^2 - \frac{t^2}{2} \cos \left( \pi \frac{n}{N} \right) a^2 \bar{k}^2 + \mathcal{O}(\bar{k}^3).
\label{eq:ZZ-free}$$
One can immediately read off that the masses are given by $m_n = t \left[ 1 + 2 \cos \left( \pi \frac{n}{N} \right) \right]$, as long as there is indeed a minimum, which happens for $\cos \left( \pi \frac{n}{N} \right) < 0$ or, equivalently, $n > N/2$. For the example in Fig. \[fig:dispersion\], we have chosen $N=11$, and there should be minima (and hence KK-levels) at $n=6,7,8,9,10,11$ (cf. Tab. \[tab:compare\]), which is perfectly realized as can be seen from the right panel in the figure. This is particularly interesting in the light of recent LHC bounds by the ATLAS and CMS collaborations [@Nishiwaki:2011gk] limiting the number of KK modes to only “a few” in certain models [@Blennow:2011tb]. Note that the KK-masses are *not* equidistant, as is (approximately) the case for typical UED models. The reason for this is the more complicated geometry of the CNs: Space is discrete rather than continuous, so we cannot expect the same behavior as for the simplest particle physics models. In particle physics, it is also known that more complicated geometries lead to more involved KK-spectra, see e.g. [@RS; @Burdman:2005sr].[^4] However, around the regions where the sine and cosine functions in the KK-masses cross zero, one can approximate them by keeping only linear terms, in which case the masses are roughly distributed equidistantly (e.g., $m_n - m_{n+1} \approx \frac{2\pi}{N} t$ for the zig-zag case).[^5]
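As a concrete illustration of the zig-zag case just discussed (our own sketch, using the formulas from Tab. \[tab:compare\] with $t\simeq 2.9$ eV and $N=11$), the snippet below lists the KK-masses $m_n=\sqrt{m_n^2}=t\,|1+2\cos(\pi n/N)|$ for the values of $n$ that correspond to band minima:

```python
import numpy as np

t, N = 2.9, 11                           # hopping strength [eV] and zig-zag index (N, 0)

n = np.arange(0, N + 1)
has_minimum = np.cos(np.pi*n/N) < 0      # minima exist only for n > N/2, as derived above
m_n = t*np.abs(1 + 2*np.cos(np.pi*n/N))  # square root of the m_n^2 entry in the table

for ni, mi in zip(n[has_minimum], m_n[has_minimum]):
    print(f"n = {ni:2d}:  m_n ~ {mi:.2f} eV")

print(np.count_nonzero(has_minimum))     # 6 KK-levels = floor((N+1)/2), i.e. n = 6,...,11
# 3 does not divide N = 11, so the smallest mass (at n = 7) is nonzero and the tube is gapped.
```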
Note that, contrary to typical examples in particle physics, the *zero mode*, i.e., the state with $n=0$, is not the one with the lowest mass. This is a further effect of the non-trivial geometry. However, there always exists a mode with lowest mass. This smallest mass may or may not be zero (cf. Tab. \[tab:compare\]), depending on the exact values of the parameters. In case there is a massless mode, this means that there is no gap between the positive and negative energy states, which, as is well-known in condensed matter physics, signals that the CN exhibits metallic behavior. If the smallest mass is non-zero, there will be a non-zero gap, making the CN semiconductive rather than metallic (cf. [@CN-Review]). This is another example of two different, but fully equivalent, interpretations of the same physics.
![image](AC.eps){width="8.2cm"} ![image](ZZ.eps){width="8.2cm"}

\[fig:dispersion\]
An important point to observe is the order of magnitude of the KK-masses, which is proportional to the hopping strength $t\simeq 2.9$ eV: Studying Tab. \[tab:compare\], one can see that the typical size of a KK-mass in a CN is $\mathcal{O}(1)$ eV. This coincidence is particularly interesting, since this size corresponds to the energy of *visible* light, thereby enabling the investigation with ordinary lasers, which are the main tools used in experimental quantum optics. This suggests that it would indeed be possible to probe some of the concepts originally developed for models with EDs in the analogue system of a CN by a table-top experiment, without the need of constructing a complex particle physics experiment.
We want to discuss possibilities for such tests. This could go in two directions: On the one hand, several experiments have already been performed, but there is no interpretation in the language of KK-theories available. On the other hand, the two pictures could also stimulate each other, by probing concepts arising in models with EDs through testing the analogous behavior of CNs.
A key concept used for condensed matter systems is the so-called *density of states* (DOS), representing the number of one-particle states available for a given energy interval. Any points where the dispersion relation has a vanishing derivative will show up in the DOS as so-called *van Hove singularities* [@vHs]. In particular, the positions of the KK-masses will result in such singularities. Hence, by measuring the DOS, one can probe the existence of the whole KK-tower. This measurement has already been performed for certain types of CNs, e.g., using optical absorption and emission [@CN_optics] or Raman scattering techniques [@Raman]. In the measured spectra, one can identify the low-lying van Hove singularities, whose energies exactly correspond to the KK-construction, illustrating that it is indeed possible to apply the KK-picture.
One more example of an effect in the CN that could potentially have an impact on the picture of KK-theories is the following: In 1D, due to the anti-symmetry of the electromagnetic field strength tensor, there is no magnetic field but only an electric field. Now imagine a CN subject to a homogenous magnetic field $B$ (orthogonal to the CN extension), and a (hypothetical) 1D observer living on the CN and not knowing about the existence of the “extra” second and third spatial dimensions. If an electron moves along the CN, it would feel a Lorentz force perpendicular to its direction of motion. However, since the momentum in this perpendicular direction is quantized, the Lorentz force can only have an effect if it is strong enough to lift the electron into an excited KK-state: Setting the Lorentz force $e v_F B$ equal to the central force $p^2/(m_e r)$ and taking the characteristic momentum scale to be $p=\hbar/r$, we obtain for the classical magnetic field strength required to induce a KK-transition $B\sim 0.75\ {\rm T} / (r[{\rm nm}])^3$, where the radius $r$ of the CN is measured in units of nm. Since the 1D observer would have no possibility to measure this magnetic field (any charge in the CN could only move in a direction orthogonal to the Lorentz force), and hence to know of the existence of this field, such a transition to a higher KK-mode would look entirely spontaneous for the 1D observer.
Returning to particle physics, there are models which assume certain forces (e.g. gravity [@RS]) to be able to propagate in the full-dimensional *bulk*, while the standard model, or a part of it, lives on the three-dimensional *brane* (see e.g. [@Chang:1999nh] for a case where much more than gravity lives in the bulk). Having the example of the 1D observer in mind, one could ask if it might be possible to derive bounds on the strengths of external forces similar to the one described above and on the size (and maybe even on the geometry) of the bulk from the *non-observation* of spontaneous excitations in Nature. Then, using CNs, one could probe analogous effects in the laboratory in order to test the validity of these considerations. Note also the richer structure: For the CN, the “bulk” (i.e., the 2D system) is actually embedded in an even higher-dimensional space, namely our usual 3D world. In this context, it is worth mentioning recent work on the Casimir effect in CNs [@Casimir].
The CN system suggests an easy picture allowing to think about such possibilities, and to perform at least proof-of-principle experiments with low-dimensional systems. Further possible experiments could be imagined, e.g. probing the approximate conservation of KK-number for small momenta (by measuring the transition rates of electrons from higher KK-levels) or the exact contribution of the virtual states in the KK-tower to processes like KK-electron/KK-hole pair annihilation (by comparing the measured de-excitation rates with the calculated annihilation rates using Feynman diagram techniques).
In conclusion, the purpose of this work is to draw the attention of both communities, particle and condensed matter physics, to the illustrative analogies between KK-theories and CN systems. Ideally, one should be able to find more systems (like ordinary graphene or semiconductor junctions, which are close to but certainly not perfectly two-dimensional) where effects of, e.g., non-trivial space-times and/or compactifications could be probed. These considerations could then be extended to more elaborate models exploiting such effects to motivate certain properties that could be useful in particle physics, e.g. the existence of a natural cutoff for the KK-tower. Our hope is that this work will inspire subsequent studies of this interdisciplinary topic.
*Acknowledgements:* We would like to thank M. Abb, M. Dürr, and E. Langmann for useful discussions, and we are particularly grateful to E. Langmann for valuable comments on the manuscript. This work was supported by the Swedish Research Council (Vetenskapsrådet) under contract no. 621-2011-3985 (TO) and by the Göran Gustafsson Foundation (JdW & AM). AM is now supported by the Marie Curie Intra-European Fellowship within the 7th European Community Framework Programme FP7-PEOPLE-2011-IEF, contract PIEF-GA-2011-297557.
T. Appelquist, H. C. Cheng, and B. A. Dobrescu, Phys. Rev. D [**64**]{}, 035002 (2001), hep-ph/0012100; I. Antoniadis, Phys. Lett. B [**246**]{}, 377 (1990); I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, Phys. Lett. B [**436**]{}, 257 (1998), hep-ph/9804398.
N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, Phys. Rev. D [**59**]{}, 086004 (1999), hep-ph/9807344; N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, Phys. Lett. B [**429**]{}, 263 (1998), hep-ph/9803315. L. Randall and R. Sundrum, Phys. Rev. Lett. [**83**]{}, 3370 (1999), hep-ph/9905221. B. Bhattacherjee and K. Ghosh, Phys. Rev. D [**83**]{}, 034003 (2011), 1006.3043 \[hep-ph\]; H. Murayama, M. M. Nojiri, and K. Tobioka, Phys. Rev. D [**84**]{}, 094015 (2011), 1107.3369 \[hep-ph\]; A. Datta, A. Datta, and S. Poddar, 1111.2912 \[hep-ph\]. T. G. Rizzo, In the Proceedings of 32nd SLAC Summer Institute on Particle Physics (SSI 2004): Natures Greatest Puzzles, Menlo Park, California, 2-13 Aug 2004, pp L013, hep-ph/0409309; S. Dawson and R. N. Mohapatra, Proceedings, Summer School, TASI 2006, Boulder, USA, June 4-30, 2006.
C. D. Hoyle, D. J. Kapner, B. R. Heckel, E. G. Adelberger, J. H. Gundlach, U. Schmidt, and H. E. Swanson, Phys. Rev. D [**70**]{}, 042004 (2004), hep-ph/0405262. J.-C. Charlier, X. Blase, and S. Roche, Rev. Mod. Phys. [**79**]{}, 677 (2007). G. W. Semenoff, Phys. Rev. Lett. [**53**]{}, 2449 (1984).
M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nature Phys. [**2**]{}, 620 (2006). K. R. Dienes and B. Thomas, Phys. Rev. B [**84**]{}, 085444 (2011), 1005.4413 \[cond-mat\].
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. [**81**]{}, 109 (2009); N. M. R. Peres, J. Phys.: Condens. Matter [**21**]{}, 323201 (2009); A. Giuliani, 1102.3881 \[math-ph\].
T. W. Ebbesen and P. M. Ajayan, Nature [**358**]{}, 220 (1992).
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science [**306**]{}, 666 (2004); K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, and A. K. Geim, Proc. Nat. Ac. Sci. USA [**102**]{}, 10451 (2005).
The Official Web Site of the Nobel prize, [http://www.nobelprize.org/]{}.
P. L. McEuen, Physics World, June 2001, p.31.
L. Balents and M. P. A. Fisher, Phys. Rev. B [**55**]{}, R11973 (1997); C. Kane, L. Balents, and M. P. A. Fisher, Phys. Rev. Lett. [**79**]{}, 5086 (1997).
A. Altland and B. Simons, *Condensed matter field theory*, Cambridge University Press, 2006, 624p.
K. Nishiwaki, K. Y. Oda, N. Okuda, and R. Watanabe, Phys. Lett. B [**707**]{}, 506 (2012), 1108.1764 \[hep-ph\]. M. Blennow, H. Melbéus, T. Ohlsson, and H. Zhang, Phys. Lett. B [**712**]{}, 419 (2012), 1112.5339 \[hep-ph\]. G. Burdman, B. A. Dobrescu, and E. Ponton, JHEP [**0602**]{}, 033 (2006), hep-ph/0506334; K. R. Dienes, Phys. Rev. Lett. [**88**]{}, 011601 (2002), hep-ph/0108115; K. R. Dienes and A. Mafi, Phys. Rev. Lett. [**88**]{}, 111602 (2002), hep-th/0111264. N. Arkani-Hamed, A. G. Cohen, and H. Georgi, Phys. Rev. Lett. [**86**]{}, 4757 (2001), hep-th/0104005; C. T. Hill, S. Pokorski, and J. Wang, Phys. Rev. D [**64**]{}, 105005 (2001), hep-th/0104035. L. van Hove, Phys. Rev. [**89**]{}, 1189 (1953).
S. M. Bachilo, M. S. Strano, C. Kittrell, R. H. Hauge, R. E. Smalley, and R. B. Weisman, Science [**298**]{}, 2361 (2002); J. Lefebvre, Y. Homma, and P. Finnie, Phys. Rev. Lett. [**90**]{}, 217401 (2003).
A. Jorio, A. G. Souza Filho, G. Dresselhaus, M. Dresselhaus, R. Saito, J. H. Hafner, C. M. Lieber, F. M. Matinaga, M. S. S. Dantas, and M. A. Pimenta, Phys. Rev. B [**24**]{}, 245416 (2001).
S. Chang, J. Hisano, H. Nakano, N. Okada, and M. Yamaguchi, Phys. Rev. D [**62**]{}, 084025 (2000), hep-ph/9912498. E. Elizalde, S. D. Odintsov, and A. A. Saharian, Phys. Rev. D [**83**]{}, 105023 (2011), 1102.2202 \[hep-th\]; S. Bellucci, A. A. Saharian, and V. M. Bardeghyan, Phys. Rev. D [**82**]{}, 065011 (2010), 1002.1391 \[hep-th\].
[^1]: jodw02@kth.se
[^2]: [A.Merle@soton.ac.uk]{} (corresponding author)
[^3]: tommy@theophys.kth.se
[^4]: Another effect visible is that the number of KK-states is finite, due to the existence of a maximum momentum, translating into a natural cutoff scale of the theory. In this respect, note the similarities to dimensional deconstruction [@deconstruction].
[^5]: A physical interpretation of this approximation would be that the momentum is so small that it cannot “resolve” the discrete nature of space, and the compactification appears more or less like on a circle. In this limit, the states should also resemble an approximate conservation of KK-number/KK-parity.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Many problems in sequential decision making and stochastic control often have natural multiscale structure: sub-tasks are assembled together to accomplish complex goals. Systematically inferring and leveraging hierarchical structure, particularly beyond a single level of abstraction, has remained a longstanding challenge. We describe a fast multiscale procedure for repeatedly compressing, or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of sub-problems at different scales is automatically determined. Coarsened MDPs are themselves independent, deterministic MDPs, and may be solved using existing algorithms. The multiscale representation delivered by this procedure decouples sub-tasks from each other and can lead to substantial improvements in convergence rates both locally within sub-problems and globally across sub-problems, yielding significant computational savings. A second fundamental aspect of this work is that these multiscale decompositions yield new transfer opportunities across different problems, where solutions of sub-tasks at different levels of the hierarchy may be amenable to transfer to new problems. Localized transfer of policies and potential operators at arbitrary scales is emphasized. Finally, we demonstrate compression and transfer in a collection of illustrative domains, including examples involving discrete and continuous statespaces.'
author:
- |
 Jake Bouvrie[^1]jvb@math.duke.edu\
Department of Mathematics\
Duke University\
Durham, NC 27708 Mauro Maggioni mauro@math.duke.edu\
Departments of Mathematics, Computer Science, and Electrical and Computer Engineering\
Duke University\
Durham, NC 27708
bibliography:
- 'mdp\_journal.bib'
title: |
Multiscale Markov Decision Problems:\
Compression, Solution, and Transfer Learning
---
Markov decision processes, hierarchical reinforcement learning, transfer, multiscale analysis.
Introduction
============
Identifying and leveraging hierarchical structure has been a key, longstanding challenge for sequential decision making and planning research [@SuttonOptions; @DietterichHRL; @ParrRussell]. Hierarchical structure generally suggests a decomposition of a complex problem into smaller, simpler sub-tasks, which may be, ideally, considered independently [@BartoHRLReview]. One or more layers of abstraction may also provide a broad mechanism for reusing or transferring commonly occurring sub-tasks among related problems [@BarryKael:IJCAI:11; @Taylor09; @SoniSingh:06; @FergusonMahadevan:06]. These themes are restatements of the divide-and-conquer principle: it is usually dramatically cheaper to solve a collection of small problems than a single big problem, when the solution of each problem involves a number of computations super-linear in the size of the problem. Two ingredients are often sought for efficient divide-and-conquer approaches: a hierarchical subdivision of a large problem into disjoint subproblems, and a procedure merging the solution of subproblems into the solution of a larger problem.
This paper considers the discovery and use of hierarchical structure – [*multiscale*]{} structure in particular – in the context of discrete-time Markov decision problems. Fundamentally, inferring multiscale decompositions, learning abstract actions, and planning across scales are intimately related concepts, and we couple these elements tightly within a unifying framework. Two main contributions are presented:
- The first is an efficient multiscale procedure for partitioning and then repeatedly [*compressing*]{} or [*homogenizing*]{} Markov decision processes (MDPs).
- The second contribution consists of a means for identifying [*[transfer]{}*]{} opportunities, representing transferrable information, and incorporating this information into new problems, within the context of the multiscale decomposition.
Several possible approaches to multiscale partitioning are considered, in which statespace geometry, intrinsic dimension, and the reward structure play prominent roles, although a wide range of existing algorithms may be chosen. Regardless of how the partitioning is accomplished, the statespace is divided into a multiscale collection of “[clusters]{}” connected via a small set of “bottleneck” states. A key function of the partitioning step is to tie [*computational complexity*]{} to [*problem complexity*]{}. It is problem complexity, controlled for instance by the inherent geometry of a problem, and its amenability to being partitioned, that should determine the computational complexity, rather than the choice of statespace representation or sampling. The decomposition we suggest attempts to rectify this difficulty.
The multiscale compression procedure is local and efficient because only one [cluster]{}needs to be considered at a time. The result of the compression step is a representation decomposing a problem into a hierarchy of distinct sub-problems at multiple scales, each of which may be solved efficiently and independently of the others. The homogenization we propose is perfectly recursive in that a compressed MDP is again another independent, deterministic MDP, and the statespace of the compressed MDP is a (small) subset of the original problem’s statespace. Moreover, each coarse MDP in a multiscale hierarchy is “consistent in the mean” with the underlying fine scale problem. The compressed representation coarsely summarizes a problem’s statespace, reward structure and Markov transition dynamics, and may be computed either analytically or by Monte-Carlo simulations. Actions at coarser scales are typically complex, “macro” actions, and the coarsening procedure may be thought of as producing different levels of abstraction of the original problem. In an appropriate sense, optimal value functions at homogenized scales are homogenized optimal value functions at the finer scale.
Given such a hierarchy of successively coarsened representations, an MDP may be solved efficiently. We describe a family of multiscale solution algorithms which realize computational savings in two ways: (1) [*Localization*]{}: computation is restricted to small, decoupled sub-problems; and (2) [*conditioning*]{}: sub-problems are comparatively well-conditioned due to improved local mixing times at finer scales and fast mixing globally at coarse scales, and obey a form of global consistency with each other through coarser scales, which are themselves well-conditioned coarse MDPs. The key idea behind these algorithms is that sub-problems at a given scale decouple conditional on a solution at the next coarser scale, but must contribute constructively towards solving the overarching problem through the coarse solution; interleaved updates to solutions at pairs of fine and coarse scales are repeatedly applied until convergence. We present one particular algorithm in detail: a localized variant of modified asynchronous policy iteration that can achieve, under suitable geometric assumptions on the problem, a cost of ${{\mathcal O}}(n\log n)$ per iteration, if there are $n$ states. The algorithm is also shown to converge to a globally optimal solution from any initial condition.
Solutions to sub-problems may be transferred among related tasks, giving a systematic means to approach transfer learning at multiple scales in planning and reinforcement learning domains. If a learning problem can be decomposed into a hierarchy of distinct parts then there is hope that both a “meta policy” governing transitions between the parts, as well as some of the parts themselves, may be transferred when appropriate. We propose a novel form of multiscale knowledge transfer between sufficiently related MDPs that is made possible by the multiscale framework: Transfer between two hierarchies proceeds by matching sub-problems at various scales, transferring policies, value functions and/or potential operators where appropriate (and where it has been determined that transfer can help), and finally solving for the remainder of the destination problem using the transferred information. In this sense knowledge of a partial or coarse solution to one problem can be used to quickly learn another, both in terms of computation and, where applicable, exploratory experience.
The paper is organized as follows. In Section \[sec:bkgnd-prelim\] we collect preliminary definitions, and provide a brief overview of Markov Decision Processes with stochastic policies and state/action dependent rewards and discount factors. Section \[sec:mmdp-toplevel\] describes partitioning, compression and multiscale solution of MDPs. Proofs and additional comments concerning computational considerations related to this section are collected in the Appendix. In Section \[sec:transfer\] we introduce the multiscale transfer learning framework, and in Section \[sec:examples\] we provide examples demonstrating compression and transfer in the context of three different domains (discrete and continuous). We discuss and compare related work in Section \[sec:prior\_work\], and conclude with some additional observations, comments and open problems in Section \[sec:discussion\].
Background and Preliminaries {#sec:bkgnd-prelim}
============================
The following subsections provide a brief overview of Markov decision processes as well as some definitions and notation used in the paper.
Markov Decision Processes {#sec:mdp-defns}
-------------------------
Formally, a Markov decision process (MDP) (see e.g. [@Puterman], [@BertsekasVol2]) is a sequential decision problem defined by a tuple $(S,A,P,R,\Gamma)$ consisting of a statespace $S$, an action (or “control”) set $A$, and for $s,s'\in S, a\in A$, a transition probability tensor $P(s,a,s')$, reward function $R(s,a,s')$ and collection of discount factors $\Gamma(s,a,s')\in(0,1)$. We will assume that $S, A$ are finite sets, and that $R$ is bounded. The definition above is slightly more general than usual in that we allow state and action dependent rewards and discount factors; the reason for adopting this convention will be made clear in Section \[sec:compression\]. The probability $P(s,a,s')$ refers to the probability that we transition to $s'$ upon taking action $a$ in $s$, while $R(s,a,s')$ is the reward collected in the event we transition from $s$ to $s'$ [*[after]{}*]{} taking action $a$ in $s$.
### Stochastic Policies
Let ${{\mathcal P}}(A)$ denote the set of all discrete probability distributions on $A$. A [*[stationary stochastic policy]{}*]{} (simply a [*[policy]{}*]{}, from now on) is a function mapping states into distributions over the actions. Working with this more general class of policies will allow for convex combinations of policies later on. A policy $\pi$ may be thought of as a non-negative function on $S\times A$ satisfying $\sum_{a\in A}\pi(s,a)=1$ for each $s\in S$, where $\pi(s,a)$ denotes the probability that we take action $a$ in state $s$. We will often write $\pi(s)$ when referring to the [*distribution*]{} on actions associated to the (deterministic) state $s\in S$, so that $a\sim\pi(s)$ denotes the $A$-valued random variable $a$ having law $\pi(s)$. Deterministic policies can be recovered by placing unit masses on the desired actions[^2].
We may compute policy-specific Markov transition matrices and reward functions by averaging out the actions according to $\pi$:
\[eqn:PpiRpi\] $$\begin{aligned}
P^{\pi}(s,s') &= {{\mathbb E}}_{a\sim\pi(s)}[P(s,a,s')]= \sum_{a\in A}P(s, a, s')\pi(s,a) \\
R^{\pi}(s,s') &= {{\mathbb E}}_{a\sim\pi(s)}[R(s,a,s')]= \sum_{a\in A}R(s, a, s')\pi(s,a) \;.\end{aligned}$$
For any pair of tensors $X=X(s,a,s'), Y=Y(s,a,s')$ indexed by $s,s'\in S, a\in A$, we define the matrix $(X\circ Y)^{\pi}$ to be the expectation with respect to $\pi$ of the elementwise (Hadamard) product between $X$ and $Y$: $$\label{eqn:hadamard-pi}
\begin{aligned}
[(X\circ Y)^{\pi}]_{s,s'} :=&\; {{\mathbb E}}_{a\sim\pi(s)}[X(s,a,s')Y(s,a,s')]
= \sum_{a\in A} X(s,a,s')Y(s,a,s')\pi(s,a).
\end{aligned}$$ Note that $(X\circ Y)^{\pi}=(Y\circ X)^{\pi}$.
Finally, we will often make use of the uniform random or [*diffusion*]{} policy, denoted $\pi^u$, which always takes an action drawn randomly according to the uniform distribution on the feasible actions. In the case of continuous action spaces, we assume a natural choice of “uniform” measure has been made: for example the Haar measure if $A$ is a group, or the volume measure if $A$ is a Riemannian manifold.
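The policy-averaging operations above reduce to simple tensor contractions. The following NumPy sketch (ours; the index convention $(s,a,s')$ follows the text) computes $P^{\pi}$, $R^{\pi}$ and $(X\circ Y)^{\pi}$ for an arbitrary stochastic policy, here the diffusion policy $\pi^u$ on a small random example:

```python
import numpy as np

def average_over_policy(X, pi):
    """X^pi(s, s') = sum_a X(s, a, s') pi(s, a), as in the displayed equations above."""
    return np.einsum('sap,sa->sp', X, pi)

def hadamard_pi(X, Y, pi):
    """(X o Y)^pi: elementwise product over (s, a, s'), then average over a ~ pi(s)."""
    return np.einsum('sap,sap,sa->sp', X, Y, pi)

# Small random example: |S| = 4 states, |A| = 2 actions, diffusion policy pi^u.
nS, nA = 4, 2
rng = np.random.default_rng(0)
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)  # P(s, a, .) sums to one
R = rng.standard_normal((nS, nA, nS))
pi_u = np.full((nS, nA), 1.0/nA)

P_pi = average_over_policy(P, pi_u)     # a bona fide Markov transition matrix
R_pi = average_over_policy(R, pi_u)
assert np.allclose(P_pi.sum(axis=1), 1.0)
```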
### Value Functions and the Potential Operator {#sec:value_fns}
Given a policy, we may define a value function $V^{\pi}:S\to{{\mathbb R}}$ assigning to each state $s$ the expected sum of discounted rewards collected over an infinite horizon by running the policy $\pi$ starting in $s$: $$\label{eqn:pi-discounted-rewards}
V^{\pi}(s) = {{\mathbb E}}\left[R(s_0,a_1,s_1) +
\sum_{t=1}^{\infty}\left\lbrace\prod_{\tau=0}^{t-1}\Gamma(s_{\tau},a_{\tau+1},s_{\tau+1})\right\rbrace R(s_t,a_{t+1},s_{t+1}) ~\Bigl|\Bigr.~ s_0=s\right],$$ where the sequence of random variables $(s_i)_{i=1}^{\infty}$ is a Markov chain with transition probability matrix $P^{\pi}$. The expectation is taken over all sequences of state-action pairs $\{(s_t,a_t)\}_{t\geq 1}$, where $a_t$ is an $A$-valued random variable representing the action which brings the Markov chain to state $s_t$ from $s_{t-1}$: if $s_{t-1}$ is observed, then $a_t\sim\pi(s_{t-1})$. Thus, the expectation in should be interpreted as $
{{\mathbb E}}_{a_1\sim\pi(s_0)}{{\mathbb E}}_{s_1\sim P(s_0,a_1,\cdot)}{{\mathbb E}}_{a_2\sim\pi(s_1)}\cdots.
$ The state- and action-dependent discount factors accrue in a path-dependent fashion leading to the product in (\[eqn:pi-discounted-rewards\]). When the discount factors are state dependent, it is possible to define different optimization criteria; the choice (\[eqn:pi-discounted-rewards\]) is commonly selected because it defines a value function which may be computed via dynamic programming. This choice is also natural in the context of financial applications[^3]. The optimal value function $V^*$ is defined as $V^*(s)=\sup_{\pi\in\Pi}V^{\pi}(s)$ for all $s\in S$, where $\Pi$ is the set of all stationary stochastic policies, and the corresponding optimal policy $\pi^*$ is any policy achieving the optimal value function. Under the assumptions we have imposed here, a deterministic optimal policy exists whenever an optimal policy (possibly stochastic) exists [@BertsekasVol2 Sec. 1.1.4]. We will make use of stochastic policies primarily to regularize a class of MDP solution algorithms, rather than to achieve better solutions.
The process of computing $V^{\pi}$ given $\pi$ is known as [*value determination*]{}. Following the usual approach, we may solve for $V^{\pi}$ by conditioning on the first transition in (\[eqn:pi-discounted-rewards\]) and applying the Markov property. However, when $\pi$ is stochastic, the first transition also involves a randomly selected action, and when the discount factors are state/action dependent, the particular discounting seen in (\[eqn:pi-discounted-rewards\]) must be adopted in order to obtain a linear system. One may derive the following equation for $V^\pi$ (details given in the Appendix) $$\label{eqn:vpi-linsys}
V^{\pi}(s) = \sum_{s',a}P(s,a,s')\pi(s,a)\bigl[R(s,a,s') + \Gamma(s,a,s')V^{\pi}(s')\bigr],\quad s\in S.$$ In matrix-vector form this system may be written as $$V^{\pi} = \bigl(I- (\Gamma\circ P)^{\pi}\bigr)^{-1}r$$ where $r:= (P\circ R)^{\pi}{\mathbf{1}}$. The matrix $
\bigl(I- (\Gamma\circ P)^{\pi}\bigr)^{-1}
$ will be referred to as the [*potential operator*]{}, or fundamental matrix, or Green’s function, in analogy with the naming of the matrix $(I-P^{\pi})^{-1}$ for the Markov chain $P^{\pi}$.
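For concreteness, here is a minimal NumPy sketch (our own, not taken from the text) of value determination via the linear system above, i.e. applying the potential operator to $r=(P\circ R)^{\pi}\mathbf{1}$:

```python
import numpy as np

def value_determination(P, R, Gamma, pi):
    """Solve V^pi = (I - (Gamma o P)^pi)^{-1} (P o R)^pi 1.
    P, R, Gamma have shape (|S|, |A|, |S|); pi has shape (|S|, |A|)."""
    GP = np.einsum('sap,sap,sa->sp', Gamma, P, pi)   # (Gamma o P)^pi
    r = np.einsum('sap,sap,sa->s', P, R, pi)         # r = (P o R)^pi 1
    return np.linalg.solve(np.eye(P.shape[0]) - GP, r)

# With a constant discount Gamma = 0.9 this recovers the usual discounted value function.
nS, nA = 5, 3
rng = np.random.default_rng(1)
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)
R = rng.standard_normal((nS, nA, nS))
Gamma = np.full((nS, nA, nS), 0.9)
pi_u = np.full((nS, nA), 1.0/nA)
V = value_determination(P, R, Gamma, pi_u)
```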
Notation {#sec:notation}
--------
We denote by $\{X_t\}_{t=0}^{\infty}$ a Markov chain, not necessarily time-homogeneous, governed by an appropriate transition matrix $P$. For $S'\subseteq S$, we define the [*restriction*]{} of $P:S\times A\times S\to{{\mathbb R}}_{+}$ to $S'$ to be the transition tensor $P_{S'}:S'\times A\times S'\to{{\mathbb R}}_{+}$ defined by $$\label{eqn:P_cluster}
P_{S'}(s,a,s') =
\begin{cases}
P(s,a,s') & \text{if } s,s'\in S', s\neq s'\\
P(s,a,s) + \sum_{s''\notin S'}P(s,a,s'') & \text{if } s = s', s\in S' \;.
\end{cases}$$ The rewards from $R:S\times A\times S\to{{\mathbb R}}$ associated to transitions between states in the subset $S'$ remain unchanged: $$R_{S'}(s,a,s') = R(s,a,s'), \quad\text{for all } (s,a,s')\text{ such that }s,s'\in S', a\in A.$$ We will refer to this operation as [*truncation*]{}, to distinguish it from restriction as defined by . The sub-tensor $\Gamma_{S'}$ is similarly defined from $\Gamma$. Note that, by definition, $P_{S'},R_{S'},\Gamma_{S'}$ do not include t-uples which start from a state $s$ in the [cluster]{}but which end at a state $s'$ outside of the [cluster]{}.
The restriction operation introduced above does [*[not]{}*]{} commute with taking expectations with respect to a policy. The matrix $P_{S'}^{\pi}$ will be defined by [*first*]{} restricting to $S'$ by Equation (\[eqn:P\_cluster\]), and [*then*]{} averaging $P_{S'}$ with respect to $\pi$ as in Equation (\[eqn:PpiRpi\]). Truncation [*[does]{}*]{} commute with expectation over actions, so $R_{S'}^{\pi},\Gamma_{S'}^{\pi}$ may be computed by truncating or averaging in any order, although it is clearly more efficient to truncate before averaging. In fact, to define these and other related quantities, $\pi$ need only be defined locally on the [cluster]{} of interest. For quantities such as $(P_{S'}\circ R_{S'})^{\pi}$, we will always assume that truncation/restriction occurs before expectation.
Lastly, $\text{diag}(v)$ will denote a diagonal matrix with the elements of vector $v$ along the main diagonal, and $a\wedge b$ will denote the minimum of the scalars $a,b$.
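The restriction and truncation operations can be implemented directly. The sketch below (ours) assumes $P(s,a,\cdot)$ is normalized, so the probability mass of transitions leaving $S'$ is exactly the deficit of the kept rows; it is folded onto the diagonal as in the definition of $P_{S'}$:

```python
import numpy as np

def restrict(P, idx):
    """Restriction P_{S'} of a transition tensor P(s, a, s') to the states listed in idx:
    transitions leaving S' are folded onto the diagonal, so each P_{S'}(., a, .) stays
    stochastic.  Assumes sum_{s'} P(s, a, s') = 1 for every (s, a)."""
    idx = np.asarray(idx)
    acts = np.arange(P.shape[1])
    P_sub = P[np.ix_(idx, acts, idx)].copy()   # transitions that stay inside S'
    leaked = 1.0 - P_sub.sum(axis=2)           # mass of transitions exiting S', per (s, a)
    i = np.arange(len(idx))
    P_sub[i, :, i] += leaked                   # P_{S'}(s, a, s) += sum_{s'' not in S'} P(s, a, s'')
    return P_sub

def truncate(R, idx):
    """Truncation R_{S'}: keep only the entries with both endpoints in S'."""
    idx = np.asarray(idx)
    return R[np.ix_(idx, np.arange(R.shape[1]), idx)]
```

Averaging the restricted tensor over a policy then yields $P^{\pi}_{S'}$, matching the order of operations prescribed above.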
Multiscale Markov Decision Processes {#sec:mmdp-toplevel}
====================================
The high-level procedure for efficiently solving a problem with a multiscale MDP hierarchy, which we will refer to as an [*“MMDP”*]{}, consists of the following steps, to be described individually in more detail below:
1. [**[Partition]{}**]{} the statespace into subsets of states (“clusters”) connected via “bottleneck” states.
2. Given the decomposition into clusters by bottlenecks, [**[compress]{}**]{} or [**[homogenize]{}**]{} the MDP into another, smaller and [*[coarser]{}*]{} MDP, whose statespace is the set of bottlenecks, and whose actions are given by following certain policies in [clusters]{}connecting bottlenecks (“subtasks”).
Repeat the steps above with the compressed MDP as input, for a desired number of compression steps, obtaining a hierarchy of MDPs.
3. [**[Solve the hierarchy of MDPs]{}**]{} from the top-down (coarse to fine) by pushing solutions of coarse MDPs to finer MDPs, down to the finest scale.
We say that the procedure above compresses or homogenizes, in a multiscale fashion, a given MDP. The construction is perfectly recursive, in the sense that the same steps and algorithms are used to homogenize one scale to the next coarser scale, and similarly for the refinement steps of a coarse policy into a finer policy. We may, and will, therefore focus on a single compression step and in a single refinement step. The compression procedure also enjoys a form of consistency across scales: for example, optimal value functions at homogenized scales are good approximations to homogenized optimal value functions at finer scales. Moreover, actions at coarser scales are typically, as one may expect, complex, “higher-level” actions, and the above procedure may be thought of as producing different levels of “abstraction” of the original problem. While automating the process of hierarchically decomposing, in a novel fashion, large complex MDPs, the framework we propose may also yield significant computational savings: if at a scale $j$ there are $r_j$ [clusters]{}of roughly equal size, and $n_j$ states, the solution to the MDP at that scale may be computed in time ${{\mathcal O}}\bigl(r_j(n_j/r_j)^3\bigr)$. If $r_j=n_j/C$ and $n_j=n/C^j$ (with $n$ being the size of the original statespace), then the computation time across $\log n$ scales is ${{\mathcal O}}\bigl(n\log n\bigr)$. We discuss computational complexity in Section \[sec:ms-alg-complexity\], and establish convergence of a particular solution algorithm to the global optimum in Section \[sec:ms-alg-proof\]. Finally, the framework facilitates knowledge transfer between related MDPs. Sub-tasks and coarse solutions may be transferred anywhere within the hierarchies for a pair of problems, instead of mapping entire problems. We discuss transfer in Section \[sec:transfer\].
The rest of this section is devoted to providing details and analysis for Steps $(1)-(3)$ above in three subsections. Each of these subsections contains an overview of the construction needed in each step followed by a more detailed and algorithmic discussion concerning specific algorithms used to implement the construction; the latter may be skipped in a first reading, in order to initially focus on “the big picture”. Proofs of the results in these subsections are all postponed until the Appendix.
[*[Step 1:]{}*]{} Bottleneck Detection and statespace Partitioning {#sec:clustering}
------------------------------------------------------------------
The first step of the algorithm involves partitioning the MDP’s statespace $S$ by identifying a set ${{\mathcal B}}\subseteq S$ of bottlenecks. The bottlenecks induce a partitioning[^4] of $S\setminus{{\mathcal B}}$ into a family ${\ensuremath{{\mathsf{C}}}\xspace}$ of connected components. Typically ${{\mathcal B}}$ depends on a policy $\pi$, and when we want to emphasize this dependency, we will write ${{\mathcal B}^{\pi}}$. We always assume that [*[${{\mathcal B}^{\pi}}$ includes all terminal states of $P^\pi$.]{}*]{} The partitioning of $S\setminus{{\mathcal B}^{\pi}}$ induced by the bottlenecks is the set of equivalence classes $S/\mathord{\sim}$, under the relation $$s_i\sim s_j, \quad \text{if } s_i,s_j\notin{{\mathcal B}^{\pi}}\text{ and there is a path from $s_i$ to $s_j$ not passing through any ${\ensuremath{{\mathsf{b}}}\xspace}\in {{\mathcal B}^{\pi}}$}.$$ Clearly these equivalence classes yield a partitioning of $S\setminus{{\mathcal B}^{\pi}}$. The term [*cluster*]{} will refer to an equivalence class plus any bottleneck states connected to states in the class: if $[s]:=\{s'~|~s\sim s'\}$ is an equivalence class, $${\ensuremath{\mathsf{c}}\xspace}([s]) := [s]\cup \bigl\{{\ensuremath{{\mathsf{b}}}\xspace}\in{{\mathcal B}^{\pi}}~|~ P^{\pi}(s',{\ensuremath{{\mathsf{b}}}\xspace})>0 \text{ or }P^{\pi}({\ensuremath{{\mathsf{b}}}\xspace},s')>0\text{ for some }s'\in[s]\bigr\}.$$ The set of [clusters]{}is denoted by ${\ensuremath{{\mathsf{C}}}\xspace}$. If ${\ensuremath{\mathsf{c}}\xspace}={\ensuremath{\mathsf{c}}\xspace}([s])$, $[s]$ will be referred to as the [cluster]{}’s [*interior*]{}, denoted by ${\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}$, and the bottlenecks attached to $[s]$ will be referred to as the [cluster]{}’s [*boundary*]{}, denoted by ${\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}$.
To each [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, and policy $\pi$ (defined on at least ${\ensuremath{\mathsf{c}}\xspace}$), we associate the Markov process with transition matrix $P^\pi_{\ensuremath{\mathsf{c}}\xspace}$, defined according to Section \[sec:notation\].
We also assume that a set of designated policies ${\ensuremath{{\boldsymbol{\pi}}}\xspace}_{{\ensuremath{\mathsf{c}}\xspace}}$ is provided for each cluster ${\ensuremath{\mathsf{c}}\xspace}$. For example ${\ensuremath{{\boldsymbol{\pi}}}\xspace}_{{\ensuremath{\mathsf{c}}\xspace}}$ may be the singleton consisting of the diffusion policy in [$\mathsf{c}$]{}. Or ${\ensuremath{{\boldsymbol{\pi}}}\xspace}_{{\ensuremath{\mathsf{c}}\xspace}}$ could be the set of locally optimal policies in ${\ensuremath{\mathsf{c}}\xspace}$ for the family of MDPs, parametrized by $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ with reward equal to the original rewards plus an additional reward when $s'$ is reached (this approach is detailed in Section \[sec:cluster\_pols\]). Finally, we say that ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ is $\pi$-[*[reachable]{}*]{}, for a policy $\pi$, if the set ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ can be reached in a finite number of steps of $P^\pi_{\ensuremath{\mathsf{c}}\xspace}$, starting from any initial state $s\in{\ensuremath{\mathsf{c}}\xspace}$.
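The definitions above translate directly into code. The following sketch (ours) is a simplified version in which the equivalence classes are taken to be the connected components of the interior subgraph with edge directions ignored; the spectral algorithm described next adds a teleport term so that, in practice, these components are strongly connected. Cluster boundaries are the bottlenecks attached to each class:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def clusters_from_bottlenecks(P_pi, bottlenecks):
    """Given P^pi (|S| x |S|) and a set of bottleneck states B, return the clusters:
    each class of S \\ B (here: a connected component of the interior subgraph,
    ignoring edge direction) together with the bottlenecks attached to it."""
    n = P_pi.shape[0]
    B = np.asarray(sorted(bottlenecks), dtype=int)
    interior = np.setdiff1d(np.arange(n), B)
    sub = csr_matrix((P_pi[np.ix_(interior, interior)] > 0).astype(float))
    _, labels = connected_components(sub, directed=False)

    clusters = []
    for lab in np.unique(labels):
        cls = interior[labels == lab]                          # the cluster interior
        attached = (P_pi[np.ix_(cls, B)] > 0).any(axis=0) | \
                   (P_pi[np.ix_(B, cls)] > 0).any(axis=1)      # boundary bottlenecks
        clusters.append({"interior": cls, "boundary": B[attached]})
    return clusters
```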
### Algorithms for Bottleneck Detection {#sec:clustering_details}
\[sec:diffmaps\] In the discussion below we will make use of diffusion map embeddings [@Coifman:PNAS:05] as a means to cluster, visualize and compare directed, weighted statespace graphs. This is by no means the only possibility for accomplishing such tasks, and we will point out other references later. Here we focus on this choice and the details of diffusion maps and associated hierarchical clustering algorithms.
[*Diffusion maps*]{} are based on a Markov process, typically the random walk on a graph. The random walks we consider here are of the form $P^\pi$ for some policy $\pi$, and may always be made reversible (by addition of the teleport matrix adding weak edges connecting every pair of vertices, as in Step (2) of Algorithm \[alg:spectral\_clustering\]), but may still be strongly directed particularly as $\pi$ becomes more directed. In light of this directedness, we will compute diffusion map embeddings of the underlying states from normalized graph Laplacians symmetrized with respect to the underlying Markov chain’s invariant distribution, following [@ChungDirected]: $$\label{eqn:directed-lap}
{{\mathcal L}}= I - \tfrac{1}{2}\bigl(\Phi^{1/2}P^{\pi}\Phi^{-1/2} + \Phi^{-1/2}(P^{\pi})^{{\top\!}}\Phi^{1/2}\bigr)$$ where $\Phi$ is a diagonal matrix with the invariant distribution satisfying $(P^{\pi})^{{\top\!}}\mu = \mu$ placed along the main diagonal, $\Phi_{ii}=\mu_i$. One can choose an orthonormal set of eigenvectors $\{\Psi^{(k)}\}_{k\geq 0}$ with corresponding real eigenvalues $\lambda_k$ which diagonalize ${{\mathcal L}}$. If we place the eigenvalues in ascending order $0=\lambda_{0}\leq\cdots\leq\lambda_{n}$, the diffusion map embedding of the state $s_i$ is given by $$\label{eqn:diffmap-embed}
s_i\mapsto \bigl(\Psi^{(k)}_i(1-\lambda_{k})\bigr)_{k\geq 1},\qquad s_i\in S .$$ The [*diffusion distance*]{} between two states $s_i, s_j$ is given by the Euclidean distance between embeddings, $$d^2(s_i,s_j) = \sum_{k\geq 1}(1-\lambda_{k})^2\bigl|\Psi^{(k)}_i - \Psi^{(k)}_j\bigr|^2, \qquad s_i ,s_j\in S \;.$$ See [@Coifman:PNAS:05] for a detailed discussion. Often times this distance may be well approximated (depending on the decay of the spectrum of ${{\mathcal L}}$) by truncating the sum after $p<n$ terms, in which case only the top $p$ eigenvectors need to be computed.
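A minimal dense-matrix sketch of this construction (our own; for large graphs one would use sparse eigensolvers and keep only the top $p$ eigenpairs, as noted above) is:

```python
import numpy as np

def diffusion_map(P_pi, p=10):
    """Symmetrized directed-graph Laplacian and diffusion map embedding.
    Assumes P^pi is irreducible (e.g. after adding a small teleport term), so that a
    unique invariant distribution exists."""
    n = P_pi.shape[0]
    # invariant distribution: left eigenvector of P^pi for the eigenvalue 1
    w, V = np.linalg.eig(P_pi.T)
    mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    mu = np.abs(mu) / np.abs(mu).sum()
    Ph, Pih = np.diag(np.sqrt(mu)), np.diag(1.0/np.sqrt(mu))
    L = np.eye(n) - 0.5*(Ph @ P_pi @ Pih + Pih @ P_pi.T @ Ph)
    lam, Psi = np.linalg.eigh(L)                 # ascending eigenvalues, orthonormal eigenvectors
    k = slice(1, min(p, n - 1) + 1)              # drop the trivial k = 0 coordinate
    embedding = Psi[:, k] * (1.0 - lam[k])       # s_i -> (Psi^(k)_i (1 - lambda_k))_k
    return embedding, lam, Psi

def diffusion_distance(embedding, i, j):
    # Euclidean distance between embedded states, truncated to the first p coordinates
    return np.linalg.norm(embedding[i] - embedding[j])
```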
In some cases we will need to align the signs of the eigenvectors for two given Laplacians ${{\mathcal L}},\widetilde{{{\mathcal L}}}$ towards making diffusion map embeddings for different graphs more comparable. If $\{\Psi^{(k)}\}_{k\geq 0}$ and $\{\widetilde{\Psi}^{(k)}\}_{k\geq 0}$ denote the respective sets of eigenvectors, and the eigenvalues of both Laplacians are distinct[^5], we can define the sign alignment vector $\tau$ as $$\label{eqn:tau-signs}
\tau_k =
\begin{cases}
+1 & \text{if } {\operatorname{sgn}}\Psi^{(k)}_1={\operatorname{sgn}}\widetilde{\Psi}^{(k)}_1 \\
-1 & \text{otherwise}
\end{cases}.$$
Given an alignment vector $\tau$, one can extend the above diffusion distance to a distance defined on a union of statespaces. If $S,\widetilde{S}$ are statespaces with embeddings respectively defined by $\{(\Psi^{(k)},\lambda_k)\}_{k=0}^p$, $\{(\widetilde{\Psi}^{(k)},\widetilde{\lambda}_k)\}_{k=0}^p$ for some $p\geq 1$, then we can define the distance $d^2: (S\cup\widetilde{S})\times (S\cup\widetilde{S}) \to {{\mathbb R}}_{+}$ as\
$$d^2(s_i,s_j) :=
\begin{cases}
\rho(s_i,s_j) & \text{if } s_i\in S, s_j\in\widetilde{S} \\
\rho(s_j,s_i) & \text{otherwise}.
\end{cases}$$ using $$\rho(s_u,s_v)= \sum_{k\geq 1}(1-\lambda_{k})(1-\widetilde{\lambda}_k)\bigl|\tau_k\Psi^{(k)}_u - \widetilde{\Psi}^{(k)}_v\bigr|^2$$ with $\tau$ defined by .\
[*Hierarchical clustering*]{}. Given a policy $\pi$, we can construct a weighted statespace graph $G$ with vertices corresponding to states, and edge weights given by $P^{\pi}$. A policy that allows thorough exploration, such as the diffusion policy $\pi^u$, can be chosen to define the weighted statespace graph.
The hierarchical spectral clustering algorithm we will consider recursively splits the statespace graph into pieces by looking for low-conductance cuts. The spectrum of the symmetrized Laplacian for directed graphs [@ChungDirected] is used to determine the graph cuts at each step. The sequence of cuts establishes a partitioning of the statespace, and bottleneck states are states with edges that are severed by any of the cuts. Algorithm \[alg:spectral\_clustering\] describes the process. Other more sophisticated algorithms may also be used, for example Spielman’s [@Spielman:LocalClustering] and that of Anderson & Peres [@Andersen:2009:ESP; @Morris_Peres_2003]. One may also consider “model-free” versions of the algorithms above, that only have access to a “black-box” computing the results of running a process (truncated random walk, evolving sets process, respectively, for the references above), but we do not pursue this here.
\[alg:spectral\_clustering\]
A recursive application of Algorithm \[alg:spectral\_clustering\] produces a set of bottlenecks ${{\mathcal B}^{\pi}}$. Each bottleneck and partition discovered by the clustering algorithm is associated with a spatial scale determined by the recursion depth. The finest scale consists of the finest partition and includes all bottlenecks. The next coarser scale includes all the bottlenecks and partitions discovered up to but not including the deepest level of the recursion, and so on. In this manner the statespaces and actions of all the MDPs in a multi-scale hierarchy can be pre-determined, although if desired one can also apply clustering to the coarsened statespaces [*after*]{} compressing using the compressed MDP’s transition matrix as graph weights. The addition of a teleport matrix in Algorithm \[alg:spectral\_clustering\] (Step 2) guarantees that the equivalence classes partition $\{S\setminus{{\mathcal B}^{\pi}}\}$ and are strongly connected components of the weighted graph defined by $P^{\pi}_{\text{tel}}$.
Because graph weights are determined by $P^{\pi}$ in this algorithm, which bottlenecks will be identified generally depends on the policy $\pi$. In this sense there are two types of “bottlenecks”: problem bottlenecks and geometric bottlenecks. Geometric bottlenecks may be defined as interesting regions of the statespace alone, as determined by a random walk exploration if $\pi$ is a diffusion policy (e.g. $\pi^u$). Problem bottlenecks are regions of the statespace which are interesting from a geometric standpoint [*and*]{} in light of the goal structure of the MDP. If the policy is already strongly directed according to the goals defined by the rewards, then the bottlenecks can be interpreted as choke points for a random walker in the presence of a strong potential.
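The listing for Algorithm \[alg:spectral_clustering\] did not survive the conversion above, so the following is only a rough, hypothetical sketch of one recursive spectral-bisection variant consistent with the surrounding description (teleport regularization, a cut along the second eigenvector of the symmetrized Laplacian, and bottlenecks flagged as states incident to severed edges). It is not a reproduction of the authors' algorithm, and the thresholds (`max_depth`, `min_size`, `eps`) are arbitrary choices for illustration:

```python
import numpy as np

def spectral_bisect(P_pi, nodes, depth=0, max_depth=3, min_size=8, eps=1e-3, bottlenecks=None):
    """Hypothetical sketch: recursively bisect the weighted statespace graph defined by
    P^pi and collect bottleneck states.  `nodes` is an index array into the statespace."""
    if bottlenecks is None:
        bottlenecks = set()
    if depth >= max_depth or len(nodes) < min_size:
        return [nodes], bottlenecks

    n = len(nodes)
    W = P_pi[np.ix_(nodes, nodes)]
    P_tel = (1 - eps)*W + eps/n                          # teleport regularization
    P_tel = P_tel / P_tel.sum(axis=1, keepdims=True)
    w, V = np.linalg.eig(P_tel.T)                        # invariant distribution of P_tel
    mu = np.abs(np.real(V[:, np.argmin(np.abs(w - 1.0))])); mu /= mu.sum()
    D, Di = np.diag(np.sqrt(mu)), np.diag(1.0/np.sqrt(mu))
    L = np.eye(n) - 0.5*(D @ P_tel @ Di + Di @ P_tel.T @ D)
    lam, Psi = np.linalg.eigh(L)
    side = Psi[:, 1] >= np.median(Psi[:, 1])             # cut along the second eigenvector

    severed = (W > 0) & (side[:, None] != side[None, :]) # edges crossing the cut
    touched = severed.any(axis=1) | severed.any(axis=0)  # their endpoints become bottlenecks
    bottlenecks.update(int(s) for s in nodes[touched])

    parts = []
    for mask in (side, ~side):
        if mask.any():
            sub, _ = spectral_bisect(P_pi, nodes[mask], depth + 1, max_depth,
                                     min_size, eps, bottlenecks)
            parts += sub
    return parts, bottlenecks
```

A call such as `spectral_bisect(P_pi, np.arange(P_pi.shape[0]))` returns a tentative partition together with the associated bottleneck set, which can then be fed to the compression step described next.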
[*[Step 2:]{}*]{} Multiscale Compression and the Structure of Multiscale Markov Decision Problems {#sec:compression}
-------------------------------------------------------------------------------------------------
Given a set of bottlenecks ${{\mathcal B}}$ and a suitable fine scale policy, we can [*compress*]{} (or [*homogenize*]{}, or coarsen) an MDP [*into another MDP with statespace ${{\mathcal B}}$*]{}. The coarse MDP can be thought of as a low-resolution version of the original problem, where transitions [*between*]{} [clusters]{}are the events of interest, rather than what occurs within each [cluster]{}. As such, coarse MDPs may be vastly simpler: the size of the coarse statespace is on the order of the number of [clusters]{}, which may be small relative to the size of the original statespace. Indeed, [clusters]{}may be generally thought of as geometric properties of a [*problem*]{}, and are constrained by the inherent complexity of the problem, rather than the choice of statespace representation, discretization or sampling.
A solution to the coarse MDP may be viewed as a coarse solution to the original fine scale problem. An optimal coarse policy describes how to solve the original problem by specifying which sub-tasks to carry out and in which order. As we will describe in Section \[sec:mdp\_solution\], a coarse value function provides an efficient means to compute a fine scale value function and its associated policy. Coarse MDPs and their solutions also provide a framework for systematic transfer learning; these ideas are discussed in detail in Section \[sec:transfer\].
We have discussed how to identify a set of bottleneck states in Section \[sec:clustering\_details\] above. As we will explain in detail below, a policy is required to compress an MDP. This policy may encode a priori knowledge, or may be simply chosen to be the diffusion policy. In Section \[sec:cluster\_pols\] below, we suggest an algorithm for determining good local policies for compression that can be expected to produce an MDP at the coarse scale whose optimal solution is compatible with the gradient of the optimal value function at the fine scale.
A homogenized, coarse scale MDP will be denoted by the tuple $(\widetilde{S},\widetilde{A},\widetilde{P},\widetilde{R},\widetilde{\Gamma})$. We first give a brief description of the primary ingredients needed to define a coarse MDP, with a more detailed discussion to follow in the forthcoming subsections.
- [**Statespace $\widetilde{S}$**]{}: The coarse scale statespace $\widetilde{S}$ is the set of bottleneck states ${{\mathcal B}}$ for the fine scale, obtained by clustering the fine scale statespace graph, for example with the methods described in Section \[sec:clustering\]. Note that $\widetilde{S}\subset S$.
- [**Action set $\widetilde{A}$**]{}: A coarse action invoked from ${\ensuremath{{\mathsf{b}}}\xspace}\in\widetilde{S}={{\mathcal B}}$ consists of executing a given fine scale policy $\pi_{{\ensuremath{\mathsf{c}}\xspace}}\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}$ within the fine scale [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, starting from ${\ensuremath{{\mathsf{b}}}\xspace}\in{\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}$ (at a time that we may reset to $0$), until the first positive time at which a bottleneck state in ${\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}$ is hit. Recall that in each [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$ we have a set of policies ${\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}$.
- [**Coarse scale transition probabilities $\widetilde{P}(s,a,s')$**]{}: If $a\in\widetilde{A}$ is an action executing the policy $\pi_{{\ensuremath{\mathsf{c}}\xspace}}\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}$, then $\widetilde{P}(s,a,s')$ is defined as the probability that the Markov chain ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$ started from $s\in\widetilde{S}$, hits $s'\in\widetilde{S}$ before hitting any other bottleneck. In particular, $\widetilde{P}(s,a,s')$ may be nonzero only when $s,s'\in{\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}$ for some ${\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\mathsf{C}}}\xspace}$.
- [**Coarse scale rewards $\widetilde{R}(s,a,s')$**]{}: The coarse reward $\widetilde{R}(s,a,s')$ is defined to be the expected total discounted reward collected along trajectories of the Markov chain associated to action $a$ described above, which start at $s\in\widetilde{S}$ and end by hitting $s'\in\widetilde{S}$ before hitting any other bottleneck.
- [**Coarse scale discount factors $\widetilde{\Gamma}(s,a,s')$**]{}: The coarse discount factor $\widetilde{\Gamma}(s,a,s')$ is the expected product of the discounts applied to rewards along trajectories of the Markov chain ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$ associated to an action $a\in\widetilde{A}$, starting at $s\in\widetilde{S}$ and ending at $s'\in\widetilde{S}$.
One of the important consequences of these definitions is that [*the optimal fine scale value function on the bottlenecks is a good solution to the coarse MDP, compressed with respect to the optimal fine scale policy, and vice versa.*]{} It is this type of consistency across scales that will allow us to take advantage of the construction above and design efficient multiscale algorithms.
The compression process is reminiscent of other instances of conceptually related procedures: coarsening (applied to meshes and PDEs, for example), homogenization (of PDEs), and lumping (of Markov chains). The general philosophy is to reduce a large problem to a smaller one constructed by “locally averaging” parts of the original problem.
The coarsening step may always be accomplished computationally by way of Monte Carlo simulations, as it involves computing the relevant statistics of certain functionals of Markov processes in each of the [clusters]{}. As such, the computation is embarrassingly parallel[^6]. While this gives flexibility to the framework above, it is interesting to note that many of the relevant computations may in fact be carried out analytically, and that eventually they reduce to the solution of multiple independent (and therefore trivially parallelizable) small linear systems, of size comparable to the size of a [cluster]{}. In this section we develop this analytical framework in detail (with proofs collected in the Appendix), as they uncover the natural structure of the multiscale hierarchy we introduce, and lead to efficient, “explicit” algorithms for the solution of the Markov decision problems we consider. The rest of this section is somewhat technical, and on a first reading one may skip directly to Section \[sec:mdp\_solution\] where we discuss the multiscale solution of hierarchical MDPs obtained by our construction.
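As an illustration of the Monte Carlo route, the sketch below estimates the three compressed quantities for a single [cluster]{} and a single fixed policy by repeated simulation of the local MRP. The function name `compress_cluster_mc` and the dense-array interface are ours, and the routine assumes the boundary of the [cluster]{} is reachable under the supplied policy (as guaranteed, for instance, by the regularization discussed in the next subsection).

```python
import numpy as np
from collections import defaultdict

def compress_cluster_mc(P, R, G, bottlenecks, start, n_traj=10_000, seed=0):
    """P, R, G: |c| x |c| arrays for the policy-restricted MRP on one cluster
    (transitions, rewards, discount factors); `bottlenecks`: set of boundary
    state indices; `start`: the bottleneck the coarse action is invoked from."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    hits, rew_sum, disc_sum = defaultdict(int), defaultdict(float), defaultdict(float)
    for _ in range(n_traj):
        s, reward, discount = start, 0.0, 1.0
        while True:
            s_next = rng.choice(n, p=P[s])
            reward += discount * R[s, s_next]   # discounted reward accumulated along the path
            discount *= G[s, s_next]            # running product of discount factors
            s = s_next
            if s in bottlenecks:                # first positive hitting time of the boundary
                break
        hits[s] += 1
        rew_sum[s] += reward
        disc_sum[s] += discount
    P_t = {sp: hits[sp] / n_traj for sp in hits}           # estimate of P~(start, a, .)
    R_t = {sp: rew_sum[sp] / hits[sp] for sp in hits}      # estimate of R~(start, a, .)
    G_t = {sp: disc_sum[sp] / hits[sp] for sp in hits}     # estimate of Gamma~(start, a, .)
    return P_t, R_t, G_t
```

Each [cluster]{}, policy, and starting bottleneck can be simulated independently, which is the sense in which the computation is embarrassingly parallel.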
### Assumptions
We will always assume that the fine scale policy $\pi$ used to compress has been [*regularized*]{}, by blending with a small amount of the diffusion policy $\pi^u$: $$\label{eqn:generic_pol_blending}
\pi(s,\cdot)\gets\lambda\pi^u(s,\cdot) + (1-\lambda)\pi(s,\cdot),\qquad s\in S$$ for some small, positive choice of the regularization parameter $\lambda$. In particular we will assume this is the case everywhere quantities such as $P^{\pi}$ appear below. The goal of this regularization is to address, or partially address, the following issues:
- The solution process may be initially biased towards one particular (and possibly incorrect) solution, but this bias can be overcome when solving the coarse MDP as long as the regularization described above is included every time compression occurs during the iterative solution process we will describe in Section \[sec:mdp\_solution\].
- Directed policies can yield a fine scale transition matrix which, when restricted to a [cluster]{}, may render bottleneck (or other) states unreachable. We require the boundary of each [cluster]{} to be $\pi$-reachable, and this is often guaranteed by the regularization above except in rather pathological situations. If any interior states violate this condition, they can be added to the [cluster]{}’s boundary and to the global bottleneck set at the relevant scale[^7]. We note that these assumptions are significantly weaker than requiring that the subgraphs induced by the restrictions $P_{\ensuremath{\mathsf{c}}\xspace}^{\pi}$, $\pi\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}$, of $P$ to a [cluster]{} be strongly connected; the Markov chain defined by $P_{\ensuremath{\mathsf{c}}\xspace}^{\pi}$ need not be irreducible.
- Compression with respect to a deterministic and/or incorrect policy should not preclude transfer to other tasks. In the case of policy transfer, to be discussed below, errors in a policy used for compression can easily occur, and can lead to unreachable states. Policy regularization helps alleviate this problem.
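A two-line sketch of the regularization in Equation \[eqn:generic\_pol\_blending\], assuming the policy is stored as a row-stochastic $|S|\times|A|$ matrix and that the diffusion policy $\pi^u$ is uniform over the action set (names are illustrative):

```python
import numpy as np

def regularize_policy(pi, lam=0.05):
    """pi: |S| x |A| row-stochastic policy matrix; returns the blended policy."""
    pi_u = np.full_like(pi, 1.0 / pi.shape[1])   # diffusion policy pi^u (assumed uniform here)
    return lam * pi_u + (1.0 - lam) * pi
```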
### Actions
An action $\widetilde{A}$ at $s\in\widetilde{S}$ for the compressed MDP consists of executing a policy $\pi_{\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}$ at the fine scale, starting in $s$, within some [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$ having $s$ on its boundary, until hitting a bottleneck state in ${\ensuremath{\mathsf{c}}\xspace}$. The number of actions is equal to the total number of policies across [clusters]{},$|\widetilde{A}|=\sum_{{\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\mathsf{C}}}\xspace}}|{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}|$. We now fix a [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$ and a policy $\pi_{\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}$. The corresponding local Markov transition matrix is ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$, and let ${R_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$ denote the reward structure, and ${\Gamma_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$ denote the system of discount factors, following Section \[sec:notation\]. Let $((X^{\pi_{\ensuremath{\mathsf{c}}\xspace}}_{{\ensuremath{\mathsf{c}}\xspace}})_n)_{n\geq 0}$ denote the Markov chain with transition matrix ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$. If the coarse action is invoked in state $s\in\widetilde{S}$, then we set $X_0=s$. The set of actions available at $s\in\widetilde{S}$ for the compressed MDP is given by $$\widetilde{A}(s) := \bigcup_{{\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\mathsf{C}}}\xspace}:s\in{\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}} \Bigl\{\text{``run the MRP }({P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}},{R_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}},{\Gamma_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}})\text{ in } {\ensuremath{\mathsf{c}}\xspace}\text{ until the first } n>0 :(X^{\pi_{\ensuremath{\mathsf{c}}\xspace}}_{{\ensuremath{\mathsf{c}}\xspace}})_n\in{{\mathcal B}}\text{''}\Bigr\}_{\pi_{_{\ensuremath{\mathsf{c}}\xspace}}\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}} \,.$$ A Markov reward process (MRP) refers to an MDP with a fixed policy and corresponding $P, R, \Gamma$ restricted to that fixed policy. The actions above involve running an MRP because while the action is being executed the policy remains fixed. Consider the simple example shown on the left in Figure \[fig:meta\_ex\], where we graphically depict a simple coarse MDP (large circles and bold edges) superimposed upon a stylized fine scale statespace graph (light gray edges; vertices are edge intersections). Undirected edges between coarse states are bidirectional. Dark gray lines delineate four [clusters]{}, to which we have associated fine scale policies $\pi_1,\ldots,\pi_4$. The bottlenecks are the states labeled $s_1,\ldots,s_4$. If an agent is in state $s_1$, for example, the actions “Execute $\pi_1$” and “Execute $\pi_4$” are feasible. If the agent takes the coarse action $a=\text{``Execute }\pi_1\text{''}$, then it can either reach $s_2$ or come back to $s_1$, since this action forbids traveling outside of [cluster]{}1 (top right quadrant). 
On the other hand if $\pi_4$ is executed from $s_1$, then the agent can reach $s_4$ or return to $s_1$, but the probability of transitioning to $s_2$ is zero.
In general, the compressed MDP will have action and state dependent rewards and discount factors, even if the fine scale problem does not. In Figure \[fig:meta\_ex\] (left), the coarse states straddle two [clusters]{}each, and therefore have different self loops corresponding to paths which return to the starting state within one of the two [clusters]{}. So $\widetilde{R}$ and $\widetilde{\Gamma}$ apparently depend on actions. But, we may reach $s_1$ when executing $\pi_1$ starting from either $s_1$ or from $s_2$, so the compressed quantities in fact depend on both the actions and the source/destination states. Figure \[fig:meta\_ex\] (right) shows another example, where the dependence on source states is particularly clear. Even if the action corresponding to running a fine policy in the center square is the same for all states, each coarse state $s_1,\ldots,s_4$ may be reached from two neighbors as well as itself.
[Figure \[fig:meta\_ex\]. Left: a coarse MDP (bottleneck states $s_1,\ldots,s_4$ and bold edges) superimposed on a grid-like fine scale statespace divided into four [clusters]{} with associated policies $\boldsymbol{\pi}_1,\ldots,\boldsymbol{\pi}_4$. Right: a variant in which a central [cluster]{} is bordered by all four bottlenecks, so that each coarse state may be reached from two neighbors as well as from itself.]
### Transition Probabilities {#sec:trans_probs}
Consider the [cluster]{}${\ensuremath{\mathsf{c}}\xspace}$ referred to by a coarse action $a\in\widetilde{A}$. [*[The transition probability $\widetilde{P}(s,a,s')$ for $s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\subseteq\widetilde{S}$ is defined as the probability that a trajectory in ${\ensuremath{\mathsf{c}}\xspace}\subset S$ hits state $s'$ starting from $s$ before hitting any other state in ${{\mathcal B}}$ (including itself) when running the fine scale MRP restricted to ${\ensuremath{\mathsf{c}}\xspace}$ and along the policy determined by the action $a$.]{}*]{}
If $s$ is a state not in the [cluster]{} associated to $a$, then $a$ is not a feasible action when in state $s$. For the example shown in Figure \[fig:meta\_ex\] (left), for instance, the edge weight connecting $s_1$ and $s_2$ is the probability that a trajectory reaches $s_2$ before it can return to $s_1$, when executing $\pi_1$. These probabilities may either be estimated by sampling (Monte Carlo simulations) or computed analytically. The first approach is trivially implemented, with the usual advantages (e.g. parallelism; access to a black-box simulator is all that is needed) and disadvantages (slow convergence); here we develop the latter, which leads to a concise set of linear problems to be solved, and sheds light on both the mathematical and computational structure. Since the bottlenecks partition the statespace into disjoint sets, the probabilities $\widetilde{P}(s,a,s')$ can be quickly computed in each [cluster]{} separately.
Let $a$ be the action corresponding to executing a policy $\pi_{\ensuremath{\mathsf{c}}\xspace}$ in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$. Then $$\widetilde{P}(s,a,s') = H_{s,s'},\quad \text{ for all } s,s'\in {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace},$$ where $H$ is the [*minimal non-negative solution*]{}, for each $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, to the linear system $$\label{eqn:trans_probs}
H_{s,s'} = {P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s') + \sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}}{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s'')H_{s'',s'},\quad s\in{\ensuremath{\mathsf{c}}\xspace}, s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\,,$$ or in matrix-vector form, $$\label{eqn:trans_probs_matvec}
\bigl(I - {P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(I-J)\bigr)H = {P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$$ where $J$ is a matrix of all zeros except $J_{s''s''}=1$ for $s''\in {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$. \[prop:coarsePsas\]
Consider the partitioning $${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}=
\begin{pmatrix}
Q & B\\
C & D
\end{pmatrix},
\qquad
H =
\begin{pmatrix}
h_q \\
h_b
\end{pmatrix}$$ where the blocks $Q, D$ describe the interaction among non-bottleneck and bottleneck states within [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$ respectively. The compressed probabilities may be obtained by finding the minimal non-negative solution to $$\label{eqn:trprob_submtx_prop}
(I-Q)h_q = B$$ followed by computing $$h_b = D + Ch_q,$$ where $h_b$ is the transition probability matrix of the compressed MDP given the action $a$.
The proof of this Proposition and a discussion are given in the Appendix.
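In code, Proposition \[prop:coarsePsas\] amounts to one small linear solve per [cluster]{} and policy. The sketch below uses our own ordering of the [cluster]{}'s states into interior and boundary index sets and assumes the boundary is reachable, so that $I-Q$ is invertible and the solve returns the minimal non-negative solution; names are illustrative.

```python
import numpy as np

def compressed_transitions(P_c, interior, boundary):
    """P_c: |c| x |c| policy-restricted transition matrix on the cluster;
    `interior`, `boundary`: index arrays partitioning the cluster's states.
    Returns h_b, the |boundary| x |boundary| compressed matrix P~(., a, .)."""
    Q = P_c[np.ix_(interior, interior)]   # interior -> interior
    B = P_c[np.ix_(interior, boundary)]   # interior -> boundary
    C = P_c[np.ix_(boundary, interior)]   # boundary -> interior
    D = P_c[np.ix_(boundary, boundary)]   # boundary -> boundary
    # With the boundary reachable, I - Q is invertible and this solve yields
    # the minimal non-negative solution of (I - Q) h_q = B.
    h_q = np.linalg.solve(np.eye(len(interior)) - Q, B)
    return D + C @ h_q
```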
When deriving the compressed rewards and discount factors below, we will need to reference the set of all pairs of bottlenecks $s,s'$ for which the probability of reaching $s'$ starting from $s$ is positive, when executing the policy $\pi_{\ensuremath{\mathsf{c}}\xspace}$ associated to $a$. Having defined $\widetilde{P}$, this set may be easily characterized as $${\operatorname{supp}}_a(\widetilde{P}):=\{(s,s')\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\times{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}~|~ \widetilde{P}(s,a,s') > 0\}$$ where ${\ensuremath{\mathsf{c}}\xspace}$ is the [cluster]{} associated to the coarse action $a$.
### Rewards {#sec:coarse_rewards}
[*[The rewards $\widetilde{R}=\widetilde{R}(s,a,s')$, with $s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ and $a\in\widetilde{A}$, are defined to be the expected discounted rewards collected along trajectories that start from $s$ and hit $s'$ before hitting any other bottleneck state in ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, when running the fine-scale MRP restricted to the [cluster]{}${\ensuremath{\mathsf{c}}\xspace}$ associated to $a$]{}*]{}.
In general, rewards under different policies and/or in other [clusters]{}are calculated by repeating the process described below for different choices of $\pi\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}, {\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\mathsf{C}}}\xspace}$. As was the case in the examples shown in Figure \[fig:meta\_ex\], even if the fine scale MDP rewards do not depend on the source state or actions, the compressed MDP’s rewards will, in general, depend on the source, destination and action taken. However as with the coarse transition probabilities, the relevant computations involve at most a single [cluster]{}’s subgraph at a time.
Given a policy $\pi_{\ensuremath{\mathsf{c}}\xspace}$ on [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, consider the Markov chain $(X_t)_{t\geq 0}$ with transition matrix ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$. Let $T$ and $T'$ be two arbitrary stopping times satisfying $0\leq T<T'<\infty$ (a.s.). The discounted reward accumulated over the interval $T\leq t\leq T'$ is given by the random variable $$R_{T}^{T'}:= R(X_T,a_{T+1}, X_{T+1}) + \sum_{t=T+1}^{T'-1}\left[
\prod_{\tau=T}^{t-1}\Gamma\bigl(X_{\tau},a_{\tau+1}, X_{\tau+1}\bigr)\right]
R\bigl(X_t,a_{t+1}, X_{t+1}\bigr)$$ where $a_{t+1}\sim\pi_{\ensuremath{\mathsf{c}}\xspace}(X_t)$ for $t=T,\ldots,T'-1$, and we set $R_T^T \equiv 0$ for any $T$.
Next, define the hitting times of ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$: $$T_m =\inf\{t>T_{m-1} ~|~ X_t\in {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\},\qquad m=1,2,\ldots$$ with $T_0 = \inf\{t\geq 0 ~|~ X_t\in {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\}$. Note that if the chain is started in a bottleneck state $X_0={\ensuremath{{\mathsf{b}}}\xspace}\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, then clearly $T_0=0$. We will be concerned with the rewards accumulated between these successive hitting times, and by the Markovianity of $(X_t)_t$, we may, without loss of generality, consider the reward between $T_0$ and $T_1$, namely $R_{T_0}^{T_1}$.
The following proposition describes how to compute the expected discounted rewards by solving a collection of linear systems.
\[prop:coarse\_rewards\] Suppose the coarse scale action $a\in\widetilde{A}$ corresponds to executing a policy $\pi_{\ensuremath{\mathsf{c}}\xspace}$ in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, and let $(X_t)_{t\geq 0}$ denote the Markov chain with transition matrix ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$. The rewards $\widetilde{R}$ at the coarse scale may be characterized as $$\widetilde{R}(s,a,s') = {{\mathbb E}}_s[R_{0}^{T_1} ~|~ X_{T_1}=s'], \qquad (s,s')\in{\operatorname{supp}}_a(\widetilde{P})\,.$$ Moreover, for fixed $a$, $\widetilde{R}(s,a,s')=:H_{s,s'}$ may be computed by finding the (unique, bounded) solution $H$ to the linear system
$$\begin{aligned}
H_{s,s'} &= \sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace},\, a\in A} P_{h_{s'}}(s,a,s'')\Gamma(s,a,s'')H_{s'',s'} + \sum_{s''\in{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace},\, a\in A} P_{h_{s'}}(s,a,s'')R(s,a,s''), && \text{if } s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace} \label{eqn:coarse_rews_linsys}\\
H_{s,s'} &= \sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace},\, a\in A} P_{\tilde{h}_{s'}}(s,a,s'')\Gamma(s,a,s'')H_{s'',s'} + \sum_{s''\in{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace},\, a\in A} P_{\tilde{h}_{s'}}(s,a,s'')R(s,a,s''), && \text{if } (s,s')\in{\operatorname{supp}}_a(\widetilde{P}) \label{eqn:coarse_rews_sums}
\end{aligned}$$
where ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}:=\{s\in {\ensuremath{\mathsf{c}}\xspace}~|~h_{s'}(s)>0\}$; $$\label{eqn:Ph_defn_prop}
P_{h_{s'}}(s,a,s'') := \frac{P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s'')\pi_{\ensuremath{\mathsf{c}}\xspace}(s,a)h_{s'}(s'')}{h_{s'}(s)}$$ for $s\in {\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$, $a\in A$, $s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$; and $$\label{eqn:Phtilde_defn_prop}
P_{\tilde{h}_{s'}}(s,a,s'') := \frac{P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s'')\pi_{\ensuremath{\mathsf{c}}\xspace}(s,a)h_{s'}(s'')}{\widetilde{P}(s,a,s')}$$ for $(s,s')\in{\operatorname{supp}}_a(\widetilde{P})$, $a\in A$, $s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$; with $h_{s'}(s):={{\mathbb P}}_s(X_{T_0}=s'),$ for $s\in{\ensuremath{\mathsf{c}}\xspace}, s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ denoting the minimal, non-negative, harmonic function satisfying $$h_{s'}(s) =
\begin{cases}
\delta_{s,s'} & s\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\\
{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s') + \sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}}{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s'')h_{s'}(s'') & s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\,.
\end{cases}$$
Thus, for each $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, the compressed rewards $\widetilde{R}(s,a,s')$ are computed by first solving a linear system of size at most $|{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|\times |{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|$ given by Equation \[eqn:coarse\_rews\_linsys\], and then computing at most $|{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}|$ of the sums given by Equation \[eqn:coarse\_rews\_sums\] (the function $h_{s'}$ has already been computed during the course of solving for the compressed transition probabilities).
See the Appendix for a proof, a matrix formulation of this result, and a discussion concerning computational consequences.
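The following sketch follows Proposition \[prop:coarse\_rewards\] for a single [cluster]{} with the policy already applied, so that the sums over actions collapse; the explicit Doob $h$-transform, the index bookkeeping, and all names are our own simplifications. The compressed discount factors of Proposition \[prop:coarse\_discount\_factors\] are obtained from the same skeleton by replacing the expected one-step reward term with the terminal discount term $P_{h_{s'}}(s,a,s')\Gamma(s,a,s')$.

```python
import numpy as np

def compressed_rewards(P_c, R_c, G_c, interior, boundary):
    """P_c, R_c, G_c: |c| x |c| policy-restricted transitions, rewards, discounts.
    Returns (R_tilde, P_tilde): R_tilde maps boundary pairs (s, s') with
    P~(s, s') > 0 to the compressed reward; P_tilde is the compressed matrix."""
    Q = P_c[np.ix_(interior, interior)]
    B = P_c[np.ix_(interior, boundary)]
    h_q = np.linalg.solve(np.eye(len(interior)) - Q, B)   # h_{s'}(s) for interior s
    P_tilde = P_c[np.ix_(boundary, boundary)] + P_c[np.ix_(boundary, interior)] @ h_q

    R_tilde = {}
    for j, sp in enumerate(boundary):
        h = np.zeros(P_c.shape[0])                 # h_{s'} on the whole cluster:
        h[interior] = h_q[:, j]                    # hitting probabilities on the interior,
        h[sp] = 1.0                                # 1 at s', 0 at the other boundary states
        live = [i for i in interior if h[i] > 0]   # interior states in c'_{s'}
        # Doob h-transform of the cluster chain, restricted to the `live` states.
        Ph = P_c[np.ix_(live, live)] * h[live][None, :] / h[live][:, None]
        # Expected one-step reward under the conditioned chain (right-hand side).
        rhs = (P_c[live, :] * h[None, :] * R_c[live, :]).sum(axis=1) / h[live]
        A = np.eye(len(live)) - Ph * G_c[np.ix_(live, live)]
        H_int = np.linalg.solve(A, rhs)            # H_{s, s'} for interior s
        H = np.zeros(P_c.shape[0])
        H[live] = H_int
        for i, s in enumerate(boundary):
            if P_tilde[i, j] > 0:                  # (s, s') in supp_a(P~)
                w = P_c[s, :] * h                  # unnormalized conditioned step from s
                R_tilde[(s, sp)] = (w * (G_c[s, :] * H + R_c[s, :])).sum() / P_tilde[i, j]
    return R_tilde, P_tilde
```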
### Trajectory Lengths {#sec:exp_path_lens}
Assume the hitting times $(T_m)_{m\geq 0}$ are as defined in Section \[sec:coarse\_rewards\]. We note that the average path lengths (hitting times) between pairs of bottlenecks, $$L(s,a,s'):={{\mathbb E}}_s[T_1 ~|~X_{T_1}=s'], \qquad s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\text{ such that } (s,s')\in{\operatorname{supp}}_a(\widetilde{P})$$ can be computed using exactly the machinery described in Section \[sec:coarse\_rewards\] above by setting $$\Gamma(s,a,s')=1, \qquad R(s,a,s')=1,\qquad\text{for all }s,s'\in S, a\in A$$ at the fine scale, and then applying the calculations for computing expected “rewards” given by Equations \[eqn:coarse\_rews\_linsys\] and \[eqn:coarse\_rews\_sums\], subject to a non-negativity constraint. Although expected path lengths are not essential for defining a compressed MDP, these quantities can still provide additional insight into a problem. For example, when simulations are involved, expected path lengths might hint at the amount of exploration necessary under a given policy to characterize a [cluster]{}.
### Discount Factors {#sec:coarse_discounts}
When solving an MDP using the hierarchical decomposition introduced above, it is important to seek a good approximation for the discount factors at the coarse scale. In our experience, this results in improved consistency across scales, and improved accuracy of and convergence to the solution. In the preceding sections, a coarse MDP was computed by averaging over paths between bottlenecks at a finer scale. Depending on the particular source/destination pair of states, the paths will in general have different length distributions. Thus, when solving a coarse MDP, rewards collected upon transitioning between states at the coarse scale should be discounted at different, state-dependent rates. The correct discount rate is a random variable (as is the path length), and transitions at the coarse scale implicitly depend on outcomes at the fine scale. We will partially correct for differing length distributions (and avoid the need to simulate at the fine scale) by imposing a coarse non-uniform discount factor based on the cumulative fine scale discount applied on average to paths between bottlenecks. The coarse discount factors $\widetilde{\Gamma}$ are incorporated when solving the coarse MDP so that the scale of the coarse value function is more compatible with the fine problem, and convergence towards the fine-scale policy may be accelerated. The expected cumulative discounts may be computed using a procedure similar to the one given for computing expected rewards in Section \[sec:coarse\_rewards\]. As before, given a policy $\pi_{\ensuremath{\mathsf{c}}\xspace}$ on [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, consider the Markov chain $(X_n)_{n\geq 0}$ with transition matrix ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$, and let $T, T'$ be two arbitrary stopping times satisfying $0\leq T<T'<\infty$ (a.s.). The cumulative discount applied to trajectories $(X_{T},X_{T+1},\ldots,X_{T'})$ over the interval $T\leq t\leq T'$ is given by the random variable $$\Delta_{T}^{T'} := \prod_{t=T}^{T'-1}\Gamma\bigl(X_t,a_{t+1},X_{t+1}\bigr) \,,$$ where $a_{t+1}\sim\pi_{\ensuremath{\mathsf{c}}\xspace}(X_t)$ for $t=T,\ldots,T'-1$. The following proposition describes how to compute the expected discount factors by solving a collection of linear problems with non-negativity constraints.
\[prop:coarse\_discount\_factors\] Suppose the coarse scale action $a\in\widetilde{A}$ corresponds to executing the policy $\pi_{\ensuremath{\mathsf{c}}\xspace}$ in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$. Let $(X_t)_{t\geq 0}$ denote the Markov chain with transition matrix ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$, and let $(T_m)_{m\geq 0}$ denote the boundary hitting times defined in Section \[sec:coarse\_rewards\]. The discount factors $\widetilde{\Gamma}$ at the coarse scale may be characterized as $$\widetilde{\Gamma}(s,a,s') = {{\mathbb E}}_s[\Delta_{0}^{T_1} ~|~ X_{T_1}=s'], \qquad (s,s')\in{\operatorname{supp}}_a(\widetilde{P})$$ and, letting $H_{s,s'}:=\widetilde{\Gamma}(s,a,s')$, may be computed by finding the minimal non-negative solution $H$ to the linear system $$H_{s,s'} =
\begin{dcases}
\smashoperator[r]{\sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}} P_{h_{s'}}(s,a,s'')\Gamma(s,a,s'')H_{s'',s'} + \sum_{a\in A}P_{h_{s'}}(s,a,s')\Gamma(s,a,s'), & \text{if } s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}\\
\smashoperator[r]{\sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}} P_{\tilde{h}_{s'}}(s,a,s'')\Gamma(s,a,s'')H_{s'',s'} + \sum_{a\in A}P_{\tilde{h}_{s'}}(s,a,s')\Gamma(s,a,s'), & \text{if }
(s,s')\in{\operatorname{supp}}_a(\widetilde{P})
\end{dcases}$$ where $h_{s'}(s)$, ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$, $P_{h_{s'}}$, and $P_{\tilde{h}_{s'}}$ are as defined in Proposition \[prop:coarse\_rewards\].
The proof is again postponed until the Appendix.
It is worth observing that if the discount factor at the fine scale is [*uniform*]{}, $\Gamma(s,a,s')=\gamma$ with no dependence on states or actions, then the expected cumulative discounts may be related to the average path lengths $L(s,a,s')$ between pairs of bottlenecks described in Section \[sec:exp\_path\_lens\]. In particular, suppose $T(s,a,s')$ is the first passage time of a fine scale trajectory starting at $s\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ and ending at $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ following a policy determined by the coarse action $a\in\widetilde{A}$. Then $L(s,a,s')={{\mathbb E}}[T(s,a,s')]$, and Jensen’s inequality implies that $$\gamma^{{{\mathbb E}}[T]}\leq {{\mathbb E}}[\gamma^T].$$ Thus $\widetilde{\Gamma}(s,a,s') \geq \gamma^{{{\mathbb E}}[T(s,a,s')]} = \gamma^{L(s,a,s')}$, and this approximation improves as $\gamma\uparrow 1$. However, this is only true for uniform $\gamma$ at the fine scale, and even for $\gamma$ close to $1$, the relationship above may be loose. Although the connection between path lengths and discount factors is illuminating and potentially useful in the context of Monte-Carlo simulations, we suggest calculating coarse discount factors according to Proposition \[prop:coarse\_discount\_factors\] rather than through path length averages.
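As a small numerical illustration of this bound (with made-up numbers): if $\gamma=0.95$ and the conditional path length is $5$ or $15$ with equal probability, then ${{\mathbb E}}[T]=10$ and $\gamma^{{{\mathbb E}}[T]}\approx 0.599$, whereas ${{\mathbb E}}[\gamma^{T}]\approx\tfrac{1}{2}(0.95^{5}+0.95^{15})\approx 0.619$; the inequality holds, but the gap is already a few percent for a modest spread in path lengths.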
In this and previous subsections, the approach taken is in the spirit of revealing the structure of the coarsening step and how it is possible to compute many coarser variables, or approximations thereof, by solutions of linear systems. Of course one may always use Monte-Carlo methods, which in addition to estimates of the expected values, may be used to obtain more refined approximations to the law of the random variables $\Delta_{T_0}^{T_1}$ and $R_{T_0}^{T_1}$.
[Figure \[fig:soln-flow\]: flow diagram for the multiscale solution process. Starting from an initial policy $\pi_0$, the fine MDP is compressed and the coarse problem is solved to obtain $V_{\mathrm{coarse}}$; the fine scale is then updated to produce $\pi_{\mathrm{new}}$, with an inner loop around the fine scale update, an outer loop that updates boundary (bottleneck) values without re-compressing, and a longer feedback loop that re-compresses with respect to the current fine policy.]
[*[Step 3:]{}*]{} Multiscale Solution of MDPs {#sec:mdp_solution}
---------------------------------------------
Given a (fine) MDP and a coarsening as above, a solution to the fine scale MDP may be obtained by applying one of several possible algorithms suggested by the flow diagram in Figure \[fig:soln-flow\]. Solving for the finer scale’s policy involves alternating between two main computational steps: (1) updating the fine solution given the coarse solution, and (2) updating the coarse solution given the fine solution. Given a coarse solution defined on bottleneck states, the fine scale problem decomposes into a collection of smaller independent sub-problems, each of which may be solved approximately or exactly. These are iterations along the inner loop surrounding “update fine” in Figure \[fig:soln-flow\]. After the fine scale problem has been updated, the solution on the bottlenecks may be updated either with or without a re-compression step. The former is represented by the long upper feedback loop in Figure \[fig:soln-flow\], while the latter corresponds to the outer, lower loop passing through “update boundary”. Updating without re-compressing may, for instance, take the form of the updates (Bellman, averaging) appearing in any of the asynchronous policy/value iteration algorithms. Updating by re-compression consists of re-compressing with respect to the current, updated fine policy and then solving the resulting coarse MDP.
The discussion that follows considers an arbitrary pair of successive scales, a “fine scale” and a “coarse scale”, and applies to any such pair within a general hierarchy of scales. A key property of the compression step is that it yields new MDPs in the same form, and can therefore be iterated. Similarly, the process of coarsening and refining policies and value functions allow one to move from fine to coarse scales and then from coarse to fine, respectively, and therefore may be repeated through several scales, in different possible patterns.
\[alg:hierarchy-solve\]
In a problem with many scales, the hierarchy may be traversed in different ways by recursive applications of the solution steps discussed above. A particularly simple approach is to solve [*top-down*]{} (coarse to fine) and update [*bottom-up*]{}. In this case the solution to the coarsest MDP is pushed down to the previous scale, where we may solve again and push this solution downwards and so on, until reaching a solution to the bottom, original problem. It is helpful, though not essential, when solving top-down if the magnitude of coarse scale value functions are directly compatible with the optimal value function for the fine-scale MDP. What is important, however, is that there is sufficient gradient as to direct the solution along the correct path to the goal(s), stepping from [cluster]{}-to-[cluster]{}at the finest scale. In Algorithm \[alg:hierarchy-solve\], solving top-down will enforce the coarse scale value gradient. One can mitigate the possibility of errors at the coarse scale by compressing with respect to carefully chosen policies at the fine scale (see Section \[sec:cluster\_pols\]). However, to allow for recovery of the optimal policy in the case of imperfect coarse scale information, a [*bottom-up*]{} pass updating coarse scale information is generally necessary. Coarse scale information may be updated either by re-compressing or by means of other local updates we will describe below.
Although we will consider in detail the solution of a two layer hierarchy consisting of a fine scale problem and a single coarsened problem, these ideas may be readily extended to hierarchies of arbitrary depth: what is important is the handling of pairs of successive scales. The particular algorithm we describe chooses (localized) policy iteration for fine-scale policy improvement, and local averaging for updating values at bottleneck states. Algorithm \[alg:hierarchy-solve\] gives the basic steps comprising the solution process. The fine scale MDP is first compressed with respect to one or more policies. In the absence of any specific and reliable prior knowledge, a [*collection*]{} of [cluster]{}-specific stochastic policies, to be described in Section \[sec:cluster\_pols\], is suggested. This collection attempts to provide all of the [*coarse*]{} actions an agent could possibly want to take involving the given [cluster]{}. These coarse actions involve traversing a particular [cluster]{}towards each bottleneck along paths which vary in their directedness, depending on the reward structure within the [cluster]{}. The Algorithm is local to [clusters]{}, however, so computing these policies is inexpensive. Moreover, if given policies defined on one or more [clusters]{}a priori, then those policies may be added to the collection used to compress, providing additional actions from which an agent may choose at the coarse scale. Solving the coarse MDP amounts to choosing the best actions from the available pool.
The next step of Algorithm \[alg:hierarchy-solve\] is to solve the coarse MDP to convergence. Since the coarse MDP may itself be compressed and solved efficiently, this step is relatively inexpensive. The optimal value function for the coarse problem is then assigned to the set of bottleneck states for the fine problem[^8]. With bottleneck values fixed, policy iteration is invoked within each [cluster]{}independently (Steps (\[list:solve-clusters\])-(\[list:blending\])). These local policy iterations may be applied to the [clusters]{}in any order, and any number of times. The value determination step here can be thought of as a boundary value problem, in which a [cluster]{}’s boundary (bottleneck) values are interpolated over the interior of the [cluster]{}. Section \[sec:solve-local\] explains how to solve these problems as required by Step (\[list:solve-clusters\]) of Algorithm \[alg:hierarchy-solve\]. Note that only values on the interior of a [cluster]{}are needed so the policy does not need to be specified on bottlenecks for local policy iteration.
Next, a greedy fine scale policy on a [cluster]{}’s interior states is computed from the interior values (Step (\[list:interior-greedy-policy\])). The new interior policy is a blend between the greedy policy and the previous policy (Step (\[list:blending\])). Starting from an initial stochastic fine scale policy, policy blending allows one to regularize the solution and maintain a degree of stochasticity that can repair coarse scale errors.
Finally, information is exchanged between [clusters]{} by updating the policy on bottleneck states (Step (\[list:alg-bn-pol\])), and then using this (globally defined) policy in combination with the interior values to update bottleneck values by local averaging (Step (\[list:alg-bn-val\])). Both of these steps are computationally inexpensive. Alternating updates to [cluster]{} interiors and boundaries are executed until convergence. This algorithm is guaranteed to converge to an optimal value function/policy pair (it is a variant of modified asynchronous policy iteration, see [@BertsekasVol2]); however, in general convergence may not be monotonic (in any norm). Section \[sec:ms-alg-proof\] gives a proof of convergence for arbitrary initial conditions.
We note that often approximate solutions to the top level or [cluster]{}-specific problems are sufficient. Empirically we have found that single policy iterations applied to the [clusters]{} in between bottleneck updates give rapid convergence (see Section \[sec:examples\]). We emphasize that at each level of the hierarchy below the topmost level, the MDP may be decomposed into distinct pieces to be solved locally and independently of each other. Obtaining solutions at each scale is an efficient process and at no point do we solve a large, global problem. In practice, the multiscale algorithm we have discussed requires fewer iterations to converge than global, single-scale algorithms, primarily for two reasons. First, the multiscale algorithm starts with a coarse approximation of the fine solution given by the solution to the compressed MDP. This provides a good global warm start that would otherwise require many iterations of a global, single-scale algorithm. Second, the multiscale treatment can offer faster convergence since sub-problems are decoupled from each other given the bottlenecks. Convergence of local (within [cluster]{}) policy iteration is thus constrained by what are likely to be faster mixing times [*within*]{} [clusters]{}, rather than slow global mixing times across [clusters]{}, as conductances are comparatively large within [clusters]{} by construction.
\[alg:room\_comp\_pols\]
### Selecting Policies for Compression {#sec:cluster_pols}
In the context of solving an MDP hierarchy, the ideal coarse value function is one which takes on the exact values of the optimal fine value function at the bottlenecks. Such a value function clearly respects the fine scale gradient, falls within a compatible dynamic range, and can be expected to lead to rapid convergence when used in conjunction with Algorithm \[alg:hierarchy-solve\]. Indeed, the best possible coarse value function that can be realized is precisely the solution to a coarse MDP compressed with respect to the optimal fine scale policy. We propose a [*local*]{} method for selecting a collection of policies at the fine scale that can be used for compression, such that the solution to the resulting coarse MDP is likely to be close to the best possible coarse solution.
Algorithm \[alg:room\_comp\_pols\] summarizes the proposed policy selection method, and is local in that only a single [cluster]{}at a time is considered. The idea behind this algorithm is that useful coarse actions involving a given [cluster]{}generally consist of getting from one of the [cluster]{}’s bottlenecks to another. The best fine scale path from one bottleneck to another, however, depends on the reward structure within the [cluster]{}. In fact, if there are larger rewards within the [cluster]{}than outside, it may not even be advantageous to leave it. On the other hand, if only local rewards within a [cluster]{}are visible, then we cannot tell whether locally large rewards are also globally large. Thus, a collection of policies covering all the interesting alternatives is desired.
For [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, let $\bar{r}:=\max_{s\in{\ensuremath{\mathsf{c}}\xspace},a,s'\in{\ensuremath{\mathsf{c}}\xspace}}R_{\ensuremath{\mathsf{c}}\xspace}(s,a,s')$, $\underline{r}:=\min_{s\in{\ensuremath{\mathsf{c}}\xspace},a,s'\in{\ensuremath{\mathsf{c}}\xspace}}R_{\ensuremath{\mathsf{c}}\xspace}(s,a,s')$, and $\bar{\gamma}=\max_{s\in{\ensuremath{\mathsf{c}}\xspace},a,s'\in{\ensuremath{\mathsf{c}}\xspace}}\Gamma_{\ensuremath{\mathsf{c}}\xspace}(s,a,s')$. Let $\text{diam}({\ensuremath{\mathsf{c}}\xspace})$ denote the longest graph geodesic between any two states in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$. Then for each bottleneck ${\ensuremath{{\mathsf{b}}}\xspace}\in{{\mathcal B}}\cap{\ensuremath{\mathsf{c}}\xspace}$, and any choice of $r\in R\mathrm{int}_{\ensuremath{\mathsf{c}}\xspace}$, where $$R\mathrm{int}_{\ensuremath{\mathsf{c}}\xspace}=\frac{1-\bar{\gamma}^{\text{diam}({\ensuremath{\mathsf{c}}\xspace})}}{1-\bar{\gamma}}[\min\{0, \underline{r}\}, \max\{0,\bar{r}\}],$$ we consider the following $\text{MDP}_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}:=(S_{{\ensuremath{\mathsf{c}}\xspace}},A, P_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace}},R_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r},\Gamma_{{\ensuremath{\mathsf{c}}\xspace}})$:
- The statespace $S_{{\ensuremath{\mathsf{c}}\xspace}}$ is [$\mathsf{c}$]{};
- The transition probability law $P_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace}}$ is the transition law of the original MDP restricted to [$\mathsf{c}$]{}, but modified so that ${\ensuremath{{\mathsf{b}}}\xspace}$ is an absorbing state[^9] for $P_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace}}^\pi$, regardless of the policy $\pi$;
- The rewards $R_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}$, for fixed ${\ensuremath{{\mathsf{b}}}\xspace}$, are the rewards $R$ of the original MDP truncated to [$\mathsf{c}$]{}, with the modification $R_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}(s,a,{\ensuremath{{\mathsf{b}}}\xspace})=R(s,a,{\ensuremath{{\mathsf{b}}}\xspace})+\Gamma(s,a,{\ensuremath{{\mathsf{b}}}\xspace})r$, for all $s\in{\ensuremath{\mathsf{c}}\xspace}$ and $a\in A$;
- The discount factors $\Gamma_{{\ensuremath{\mathsf{c}}\xspace}}$ are the discount factors of the original MDP truncated to [$\mathsf{c}$]{}.
The optimal (or approximate) policy $\pi^*_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}$ of each $\text{MDP}_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}$ is computed. As $r$ ranges in a continuous interval, we expect to find only a small number[^10] of [*[distinct]{}*]{} optimal policies $\{\pi^*_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}\}_{r\in R\mathrm{disc}_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace}}}$, for each fixed ${\ensuremath{{\mathsf{b}}}\xspace}$, where $R\text{disc}_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace}}$ is the set of corresponding rewards placed at ${\ensuremath{{\mathsf{b}}}\xspace}$ giving rise to the distinct policies. Therefore the cardinality of this set of policies is ${{\mathcal O}}(\sum_{{\ensuremath{{\mathsf{b}}}\xspace}\in{\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}}|R\mathrm{disc}_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace}}|)$. The set of policies ${\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}:=\cup_{{\ensuremath{{\mathsf{b}}}\xspace}\in{\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}}\{\pi^*_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}\}_{r\in R\mathrm{disc}_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace}}}$ is our candidate for the set of actions available at the coarser scale when the agent is at a bottleneck adjacent to [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, and for the set of actions which was assumed and used in Sections \[sec:clustering\] and \[sec:compression\].
Finally, we note that Algorithm \[alg:room\_comp\_pols\] is trivially parallel, across [clusters]{} and across bottlenecks within [clusters]{}. In addition, solving for each policy is comparatively inexpensive because it involves a single [cluster]{}.
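The sketch below is one way to realize Algorithm \[alg:room\_comp\_pols\] for a single [cluster]{}: the continuous interval $R\mathrm{int}_{\ensuremath{\mathsf{c}}\xspace}$ is replaced by a small grid of candidate exit rewards, the [cluster]{} diameter is conservatively replaced by the number of states, and each $\text{MDP}_{{\ensuremath{\mathsf{c}}\xspace},{\ensuremath{{\mathsf{b}}}\xspace},r}$ is solved by plain value iteration. These discretization choices, the assumption that $P_{\ensuremath{\mathsf{c}}\xspace}$ is the renormalized restriction of $P$, and all names are ours.

```python
import numpy as np
from itertools import product

def cluster_policies(P_c, R_c, G_c, boundary, n_r=9, tol=1e-8, max_iter=5_000):
    """P_c, R_c, G_c: |c| x |A| x |c| arrays for the MDP restricted (and
    renormalized) to the cluster. Returns the distinct greedy policies found."""
    nS, nA, _ = P_c.shape
    g_bar = G_c.max()
    horizon = nS                                  # conservative stand-in for diam(c)
    scale = horizon if np.isclose(g_bar, 1.0) else (1 - g_bar**horizon) / (1 - g_bar)
    r_grid = np.linspace(scale * min(0.0, R_c.min()), scale * max(0.0, R_c.max()), n_r)
    policies = set()
    for b, r in product(boundary, r_grid):
        P, R = P_c.copy(), R_c.copy()
        R[:, :, b] = R_c[:, :, b] + G_c[:, :, b] * r   # candidate exit reward r placed at b
        P[b, :, :] = 0.0
        P[b, :, b] = 1.0                               # make b absorbing
        R[b, :, :] = 0.0                               # no further reward once absorbed
        V = np.zeros(nS)
        for _ in range(max_iter):                      # value iteration on the small cluster MDP
            Qsa = (P * (R + G_c * V[None, None, :])).sum(axis=2)
            V_new = Qsa.max(axis=1)
            done = np.max(np.abs(V_new - V)) < tol
            V = V_new
            if done:
                break
        policies.add(tuple(Qsa.argmax(axis=1)))        # greedy policy pi*_{c,b,r}
    return policies
```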
### Solving Localized Sub-Problems {#sec:solve-local}
Given a solution (possibly approximate) to a coarse MDP in the form of a value function $V_{\mathrm{coarse}}$, one can efficiently compute a solution to the fine-scale MDP by fixing the values at the fine scale’s bottlenecks to those given by the coarse MDP value function. The problem is partitioned into [clusters]{}where we can solve locally for a value function or policy within each [cluster]{}independently of the other [clusters]{}, using a variety of MDP solvers. Values inside a [cluster]{}are conditionally independent of values outside the [cluster]{}given the [cluster]{}’s bottleneck values.
As an illustrative example we show how policy iteration may be applied to learn the policies for each [cluster]{}. Let $\pi_{\ensuremath{\mathsf{c}}\xspace}$ be an initial policy at the fine scale defined on at least ${\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}$. Determination of the values on ${\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}$ given values on ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ amounts to solving a [*boundary value problem*]{}: a continuous-domain physical analogy to this situation is that of solving a Poisson boundary value problem with Neumann boundary conditions. The connection with Poisson problems is that if $P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}$ is the transition matrix of the Markov chain $(X_n)_{n\geq 0}$ following $\pi_{\ensuremath{\mathsf{c}}\xspace}$ in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, then we would like to compute the function $$V(s):={{\mathbb E}}\bigl[R_0^{T_0} + \Delta_0^{T_0} V_{\mathrm{coarse}}(X_{T_0}) ~|~ X_0=s\bigr], \qquad s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}},$$ where $T_0:=\inf\{n\geq 0 ~|~ X_n\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\}$ is the first passage time of the boundary of [cluster]{}[$\mathsf{c}$]{}, and $R_0^{T_0}, \Delta_0^{T_0}$ are respectively defined in Sections \[sec:coarse\_rewards\] and \[sec:coarse\_discounts\]. It can be shown that $V(s)$ is unique and bounded under our usual assumption that the boundary ${\ensuremath{{{{\partial{{\ensuremath{\mathsf{c}}\xspace}}}}}}\xspace}$ be $\pi_{\ensuremath{\mathsf{c}}\xspace}$-reachable from any interior state $s\in{\ensuremath{\mathsf{c}}\xspace}$. The value function we seek is computed by solving a linear system similar to Equation . We have, $$V(s) =
\begin{cases}
V_{\mathrm{coarse}}(s) & \text{if } s\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\\
\sum_{s'\in {\ensuremath{\mathsf{c}}\xspace}, a'\in A}P_{\ensuremath{\mathsf{c}}\xspace}(s,a,s')\pi_{{\ensuremath{\mathsf{c}}\xspace}}(s,a)\bigl[R(s,a,s') + \Gamma(s,a,s')V(s')\bigr] &
\text{if } s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\end{cases}$$ where $P_{\ensuremath{\mathsf{c}}\xspace}$ is the restriction of $P$ to ${\ensuremath{\mathsf{c}}\xspace}$ defined by Equation . It is instructive to consider a matrix-vector formulation of this system. Let $R_{\ensuremath{\mathsf{c}}\xspace}, \Gamma_{\ensuremath{\mathsf{c}}\xspace}$ denote the respective truncation of $R(s,a,s'), \Gamma(s,a,s')$ to the triples $\{(s,a,s') ~|~ s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}, s'\in {\ensuremath{\mathsf{c}}\xspace}, a\in A\}$. Defining the quantities $(P_{\ensuremath{\mathsf{c}}\xspace}\circ R_{\ensuremath{\mathsf{c}}\xspace})^{\pi_{\ensuremath{\mathsf{c}}\xspace}}, (\Gamma_{\ensuremath{\mathsf{c}}\xspace}\circ P_{\ensuremath{\mathsf{c}}\xspace})^{\pi_{\ensuremath{\mathsf{c}}\xspace}}$ using , assume the partitioning $$(\Gamma_{\ensuremath{\mathsf{c}}\xspace}\circ P_{\ensuremath{\mathsf{c}}\xspace})^{\pi_{\ensuremath{\mathsf{c}}\xspace}} =
\begin{pmatrix}
B & C\\
D & Q
\end{pmatrix},
\quad
(P_{\ensuremath{\mathsf{c}}\xspace}\circ R_{\ensuremath{\mathsf{c}}\xspace})^{\pi_{\ensuremath{\mathsf{c}}\xspace}} =
\begin{pmatrix}
B_r & C_r\\
D_r & Q_r
\end{pmatrix},
\quad
V =
\begin{pmatrix}
V_b\\ V_q
\end{pmatrix}$$ where interactions among bottlenecks attached to [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$ are captured by $B$ labeled blocks and interactions among non-bottleneck interior states by $Q$ labeled blocks. Fix $V_b=V_{\mathrm{coarse}}$, and let $V_q$ denote the (unknown) values of the [cluster]{}’s interior states. The value function on interior states $V_q$ is given by $$V_q = \bigl[D_r ~~ Q_r]{\mathbf{1}}+ [D ~~ Q]
\begin{pmatrix}
V_b \\
V_q
\end{pmatrix}$$ so that we must solve the $|{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|\times|{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|$ linear system $$\label{eqn:local-system}
(I - Q)V_q = [D_r ~~ Q_r]{\mathbf{1}}+ DV_b\,.$$
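A minimal sketch of this value-determination step, using our own index conventions for the interior and boundary of a [cluster]{} and assuming the standard policy averaging $(\Gamma\circ P)^{\pi}(s,s')=\sum_a\pi(s,a)\Gamma(s,a,s')P(s,a,s')$ (and similarly for $(P\circ R)^{\pi}$); the function name and the dense-array interface are illustrative only.

```python
import numpy as np

def interior_values(P_c, R_c, G_c, pi_c, interior, boundary, V_b):
    """P_c, R_c, G_c: |c| x |A| x |c| restricted transition/reward/discount arrays;
    pi_c: |c| x |A| policy on the cluster; V_b: coarse values fixed on `boundary`."""
    GP = np.einsum('sa,sat->st', pi_c, G_c * P_c)   # (Gamma o P)^pi on the cluster
    PR = np.einsum('sa,sat->st', pi_c, P_c * R_c)   # (P o R)^pi on the cluster
    Q = GP[np.ix_(interior, interior)]              # interior -> interior block
    D = GP[np.ix_(interior, boundary)]              # interior -> boundary block
    expected_reward = PR[interior, :].sum(axis=1)   # the term [D_r  Q_r] 1
    rhs = expected_reward + D @ V_b
    return np.linalg.solve(np.eye(len(interior)) - Q, rhs)   # V_q on the interior
```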
Given a value function for a [cluster]{}, the policy update step is unchanged from vanilla policy iteration except that we do not solve for the policy at bottleneck states: only the policy restricted to interior states is needed to update the $Q$ and $D$ blocks of the matrices above, towards carrying out another iteration of value determination inside the [cluster]{} (the blocks $B,C$ are not needed). This shows in yet another way that policy iteration within a given [cluster]{} is completely independent of other [clusters]{}. When policy iteration has converged to the desired tolerance within each [cluster]{} independently, or if the desired number of iterations has been reached, the individual [cluster]{}-specific value functions may be simply concatenated together along with the given values at the bottlenecks to obtain a globally defined value function.
As mentioned above, solving for a value function on a [cluster]{}’s interior does not require the initial policy to be defined on bottlenecks[^11], however a policy on bottleneck states can be quickly determined from the global value function. This step is computationally inexpensive when bottlenecks are few in number and have comparatively low out-degree. A policy defined on [cluster]{}interiors is obtained either from the global value function, or automatically during the solution process if, for example, a policy-iteration variant is used.
### Bottleneck Updates
Given any value function $V$ on [cluster]{}interiors and any globally defined policy $\pi$, values at bottleneck states may be updated using similar asynchronous iterative algorithms: we hold the value function fixed on all [cluster]{}interiors, and update the bottlenecks. Combined with interior updating, this step comprises the second half of the alternating solution approach outlined in Algorithm \[alg:hierarchy-solve\].
Local averaging of the values in the vicinity of a bottleneck is a particularly simple update, $$V(s) \gets
\sum_{s',a}P(s,a,s')\pi(s,a)\bigl(R(s,a,s') + \Gamma(s,a,s') V(s')\bigr), \quad s\in{{\mathcal B}}.$$ Value iteration and modified value iteration variants may be defined analogously. Value determination at the bottlenecks may be characterized as follows. Consider the (global) quantities $(P\circ R)^{\pi}, (\Gamma\circ P)^{\pi}$ (these do not need to be computed in their entirety), and the partitioning $$(\Gamma\circ P)^{\pi} =
\begin{pmatrix}
B & C\\
D & Q
\end{pmatrix},
\quad
(P\circ R)^{\pi} =
\begin{pmatrix}
B_r & C_r\\
D_r & Q_r
\end{pmatrix},
\quad
V =
\begin{pmatrix}
V_b\\ V_q
\end{pmatrix}$$ where, as before, interactions among bottlenecks are captured by $B$ labeled blocks and interactions among non-bottleneck (interior) states by $Q$ labeled blocks. Let $V_q$ be held fixed to the known interior values, and let $V_b$ denote the unknown values to be assigned to bottlenecks. Then $V_b$ is obtained by solving the $|{{\mathcal B}}|\times|{{\mathcal B}}|$ linear system $$\label{eqn:local-bn-system}
(I - B)V_b = [B_r ~~ C_r]{\mathbf{1}}+ CV_q\,.$$ In the ideal situation, by virtue of the spectral clustering Algorithm \[alg:spectral\_clustering\], the blocks $C$ and $C_r$ are likely to be sparse (bottlenecks should have low out-degree), so the matrix-vector products $CV_q$ and $C_r{\mathbf{1}}$ are inexpensive. Furthermore, even though $|{{\mathcal B}}|$ is already small, by similar arguments bottlenecks should not ordinarily have many direct connections to other bottlenecks, and $B$ is likely to be block diagonal. Thus, the cost of solving Equation \[eqn:local-bn-system\] is likely to be essentially negligible.
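For completeness, a short sketch of the bottleneck update implementing the local averaging rule displayed at the start of this subsection (the exact solve of Equation \[eqn:local-bn-system\] is analogous); names and the dense-array interface are ours, and bottlenecks are updated asynchronously, each one seeing any values already refreshed before it.

```python
import numpy as np

def update_bottleneck_values(P, R, G, pi, V, bottlenecks):
    """P, R, G: |S| x |A| x |S| global arrays; pi: |S| x |A| policy matrix;
    V: current global value function. Returns V with bottleneck entries refreshed."""
    V = V.copy()
    for s in bottlenecks:
        # V(s) <- sum_{s', a} pi(s, a) P(s, a, s') [ R(s, a, s') + Gamma(s, a, s') V(s') ]
        V[s] = np.einsum('a,at,at->', pi[s], P[s], R[s] + G[s] * V[None, :])
    return V
```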
### Proof of Convergence {#sec:ms-alg-proof}
Algorithm \[alg:hierarchy-solve\] is an instance of modified asynchronous policy iteration (see [@BertsekasVol2] for an overview), and can be shown to recover an optimal fine scale policy from any initial starting point.
Fix any initial fine-scale policy $\pi_0$, and any collection of compression policies $\{{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{{\ensuremath{\mathsf{c}}\xspace}}\}_{{\ensuremath{\mathsf{c}}\xspace}\in{\sf C}}$ such that for each ${{\ensuremath{\mathsf{c}}\xspace}\in{\sf C}}$, ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ is $\pi_{{\ensuremath{\mathsf{c}}\xspace}}$-reachable for all $\pi_{{\ensuremath{\mathsf{c}}\xspace}}\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{{\ensuremath{\mathsf{c}}\xspace}}$. Let $V^k$ denote the global fine scale value function after $k>0$ passes of Steps (\[list:solve-clusters\])-(\[list:alg-last\]) in Algorithm \[alg:hierarchy-solve\]. For an appropriate number of updates $N$ per bottleneck per algorithm iteration satisfying $$\label{eqn:num-bn-updates}
N > \log_{\bar{\gamma}}\tfrac{1}{2}$$ with $\bar{\gamma}:=\max_{s,a,s'}\bigl\{\Gamma(s,a,s') {\mathbbm{1}}_{[P(s,a,s')>0]}\bigr\}$, the sequence $V^k$ generated by the alternating interior-boundary policy iteration Algorithm \[alg:hierarchy-solve\] satisfies $$\adjustlimits\lim_{k\to\infty} \max_{s\in S}|V^*(s)-V^k(s)| = 0$$ where $V^*$ is the unique optimal value function.
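As a concrete reading of this condition (with an illustrative number): if the discount is uniform with $\bar{\gamma}=0.95$, then $\log_{\bar{\gamma}}\tfrac{1}{2}=\ln\tfrac{1}{2}/\ln 0.95\approx 13.5$, so $N=14$ value updates per bottleneck per iteration of Algorithm \[alg:hierarchy-solve\] suffice for the theorem to apply.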
We first note that the value function updates in Algorithm \[alg:hierarchy-solve\], on both interior and boundary states at the fine scale may be written as one or more applications of (locally defined) averaging operators $T_{\pi}$ of the form $$\label{eqn:Tpi_op}
(T_{\pi}V)(s) = \sum_{s',a}\pi(s,a)P(s,a,s')\bigl(R(s,a,s') + \Gamma(s,a,s')V(s')\bigr) \,.$$ Value determination is equivalent to an “infinite” number of applications of $T_{\pi}$. The main challenge is that we require convergence to optimality from any initial condition $(V^0,\pi_0)$. Under the current assumptions on the policy, modified asynchronous policy iteration is known to converge (monotonically) in the $L^{\infty}$ norm to a unique optimal $V^*$, with corresponding optimal $\pi^*$, provided the initial pair $(V^0,\pi_0)$ satisfies $T_{\pi}V^0 \geq V^0$ [@BertsekasVol2], where $T_{\pi}$ is the DP mapping defined by Equation \[eqn:Tpi\_op\]. This initial condition is [*not*]{} in general satisfied here, since $(V^0,\pi_0)$ may be set on the basis of transferred information and/or coarse scale solutions. In Algorithm \[alg:hierarchy-solve\] for instance, the initial value function $V^0$ is the initial coarse MDP solution on the bottlenecks, and zero everywhere else. Furthermore, a common fix that shifts $V^0$ by a large negative constant (depending on $(1-\gamma)^{-1}$) does not apply because it could destroy consistency across sub-problems, and moreover can make convergence extraordinarily slow.
Alternatively, Williams & Baird show that modified asynchronous policy iteration will converge to optimality from any initial condition, provided enough value updates $T_{\pi}$ are applied [@WilliamsBaird:90; @WilliamsBaird:93]. The condition in Equation \[eqn:num-bn-updates\] adapts the precise requirement found in [@WilliamsBaird:90 Thm. 8] to the present setting, where discount factors are state and action dependent. The proof follows [@WilliamsBaird:93 Thm. 4.2.10] closely, so we only highlight points where differences arise due to this state/action dependence, and due to the use of multi-step operators, $T_{\pi}^n$. We direct the reader to the references above for further details.
First note that if $(s_n)_n$ is the Markov chain with transition law $P^{\pi}$, $$(T_{\pi}^nV)(s) = {{\mathbb E}}\bigl[R_0^{n} + \Delta_0^{n}V(s_n) ~|~ s_0=s\bigr]$$ where $R_0^{n}, \Delta_0^{n}$ are as defined in Sections \[sec:coarse\_rewards\] and \[sec:coarse\_discounts\] above. One can see this by defining a recursion $V_n=T_{\pi}V_{n-1}$ with $V_0=V$, and repeated substitution of Equation \[eqn:Tpi\_op\]. Fix $\varepsilon>0$ and choose $N$ large enough so that $$\liminf_{i\to\infty} V^i(s) - \varepsilon < V^k(s) < \limsup_{i\to\infty}V^i(s) + \varepsilon$$ for all $k\geq N$ and all $s$. Let $L^*:=\max_{s}\bigl\{\limsup_{i\to\infty}V^i(s)- \liminf_{i\to\infty} V^i(s)\bigr\}$, and let $s^*$ be any state at which the maximum $L^*$ is achieved. It is enough to show that $L^*=0$ (convergence of the sequence $V^k$) to ensure convergence to optimality [@WilliamsBaird:93 Thm. 4.2.1]; however, we note that this convergence need not be monotonic in any norm. The action of $T_{\pi}^n$ after $N$ iterations can be bounded as follows: $$\begin{aligned}
\delta &= \sup_{k\geq N} (T^n_{\pi}V^k)(s^*) - \inf_{k\geq N} (T^n_{\pi}V^k)(s^*)\\
&<{{\mathbb E}}_{s^*}\Bigl[\Delta_0^{n}(\limsup_{i\to\infty}V^i(s_n) + \varepsilon)\Bigr] -
{{\mathbb E}}_{s^*}\Bigl[\Delta_0^{n}(\liminf_{i\to\infty}V^i(s_n) - \varepsilon)\Bigr]\\
&\leq 2\varepsilon{{\mathbb E}}_{s^*}[\Delta_0^{n}] + L^*{{\mathbb E}}_{s^*}[\Delta_0^{n}]\\
&\leq 2\varepsilon\bar{\gamma}^n + \bar{\gamma}^n L^* .\end{aligned}$$ Following the reasoning in [@WilliamsBaird:93 Thm. 4.2.10, pg. 27], subsequent policy improvement at state $s^*$ can at most double the length of the interval $\delta$. Hence, $L^*\leq 2\delta < 4\varepsilon\bar{\gamma}^n + 2\bar{\gamma}^n L^*$, so that $$L^* < \frac{4\varepsilon\bar{\gamma}^n}{1-2\bar{\gamma}^n}$$ for any $\varepsilon > 0$, giving that $L^*=0$ as long as $$\bar{\gamma}^n < 1/2.$$ For the interior states, the condition $\bar{\gamma}^n < 1/2$ is clearly satisfied, since we perform value determination at those states in Algorithm \[alg:hierarchy-solve\].
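As a practical aside, the condition $N > \log_{\bar{\gamma}}\tfrac{1}{2}$ translates directly into a minimal number of boundary updates per algorithm iteration. The few lines below (assuming $\bar{\gamma}$ has already been computed from the discount tensor) are only an illustration of that arithmetic.

```python
import math

def min_boundary_updates(gamma_bar):
    """Smallest integer N satisfying gamma_bar**N < 1/2, i.e. N > log_{gamma_bar}(1/2)."""
    assert 0.0 < gamma_bar < 1.0
    return math.floor(math.log(0.5, gamma_bar)) + 1

# For example, gamma_bar = 0.97 gives N = 23 (0.97**23 is about 0.496 < 1/2).
```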
Complexity Analysis {#sec:ms-alg-complexity}
-------------------
We discuss the running time complexity of each of the three steps discussed in the introduction of this section: partitioning, compression, and solving an MMDP. For the latter two steps, the computational burden is always limited to at most a single cluster at a time. In all cases, we consider worst case analyses in that we do not assume sparsity, preconditioning or parallelization, although there are ample, low-hanging opportunities to leverage all three.\
**Partitioning:** The complexity of the statespace partitioning and bottleneck identification step depends in general on the algorithm used. The local clustering algorithm of Spielman and Teng [@Spielman:LocalClustering] or Andersen and Peres [@Andersen:2009:ESP] finds an approximate cut in time “nearly” linear in the number of edges. The complexity of Algorithm \[alg:spectral\_clustering\] above is dominated by the cost of finding the stationary distribution of $P_{\text{tel}}$, and of finding a small number of eigenvectors of the directed graph Laplacian ${{\mathcal L}}$. The first iteration is the most expensive, since the computations involve the full statespace. However, the invariant distribution and eigenvectors may be computed inexpensively. $P$ is typically sparse, so $P_{\text{tel}}$ is the sum of a sparse matrix and a rank-1 matrix, and obtaining an exact solution can be expected to cost far less than that of solving a dense linear system. Approximate algorithms are often used in practice, however. For example, the algorithm of [@ChungPR:LNCS:10] computes a stationary distribution within an error bound of $\epsilon$ in time ${{\mathcal O}}\bigl(|E|\log(1/\epsilon)\bigr)$ if there are $|E|$ edges in the statespace graph given by $P$. The eigenvectors of ${{\mathcal L}}$ may also be computed efficiently using randomized approximate algorithms. The approach described in [@Halko:2011] computes $k$ eigenvectors (to high precision with high probability) in ${{\mathcal O}}\bigl(|S|^2\log k\bigr)$ time, assuming no optimizations for sparse input. Finding eigenvectors for the subsequent sub-graphs may be accelerated substantially by preconditioning using the eigenvectors found at the previous clustering iteration.\
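To illustrate why the stationary distribution of $P_{\text{tel}}$ is inexpensive, the sketch below runs power iteration directly against the sparse-plus-rank-1 structure, never forming a dense matrix. The teleportation weight `beta`, the uniform teleportation vector, and the tolerance are assumptions of this example, not values prescribed by Algorithm \[alg:spectral\_clustering\].

```python
import numpy as np
import scipy.sparse as sp

def stationary_teleport(P, beta=0.05, tol=1e-10, max_iter=10_000):
    """Approximate stationary distribution of P_tel = (1-beta) P + (beta/n) 11^T.

    P : (n, n) row-stochastic scipy.sparse matrix. The product mu P_tel is
    computed as (1-beta)(mu P) + beta/n, exploiting the sparse + rank-1 structure.
    """
    n = P.shape[0]
    PT = P.T.tocsr()                    # so that mu P is the sparse matvec PT @ mu
    mu = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        mu_next = (1.0 - beta) * (PT @ mu) + beta / n
        mu_next /= mu_next.sum()        # guard against round-off drift
        if np.abs(mu_next - mu).sum() < tol:
            break
        mu = mu_next
    return mu_next

# Example: a random sparse chain on 1000 states.
A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0) + sp.eye(1000, format="csr")
P = sp.diags(1.0 / np.asarray(A.sum(axis=1)).ravel()) @ A
mu = stationary_teleport(P)
```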
**Compression:** As discussed above, compression of an MDP involves computations which only consider one [cluster]{}at a time. This makes the compression step local, and restricts time and space requirements. But assessing the complexity is complicated by the fact that non-negative solutions are needed when finding coarse transition probabilities and discounts. Various iterative algorithms for solving non-negative least-squares (NNLS) problems exist [@BjorckLSBook; @ChenPlemmons:10]; however, guarantees cannot generally be given as to how many iterations are required to reach convergence. The recent quasi-Newton algorithm of Kim et al. [@KimNNLS:10] appears to be competitive with, or to outperform, many previously proposed methods, including the classic Lawson-Hanson method [@LawsonHansonBook:74] embedded in MATLAB’s `lsqnonneg` routine. We point out, however, that it is often the case in practice that the unique solutions to the [*unconstrained*]{} linear systems appearing in Propositions \[prop:coarsePsas\] and \[prop:coarse\_discount\_factors\] are indeed also minimal, non-negative solutions. In this case, the complexity is ${{\mathcal O}}\bigl(|{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}||{\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}|^3 + |{\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}||{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}|^2\bigr)$ per [cluster]{}, for finding the coarse transition probabilities and coarse discounts corresponding to a single fine scale policy.
Solving for the coarse rewards always involves solving a linear system (without constraints), since the rewards are not necessarily constrained to be non-negative. This step also involves ${{\mathcal O}}\bigl(|{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}||{\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}|^3 + |{\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}||{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}|^2\bigr)$ time per [cluster]{}, per fine policy.
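To make these per-[cluster]{}costs concrete, the fragment below contrasts an unconstrained solve with many right-hand sides against a column-by-column non-negative least-squares fallback, as would be needed for the coarse probabilities and discounts. The system `A`, `B` here is a synthetic stand-in for the linear systems of Propositions \[prop:coarsePsas\] and \[prop:coarse\_discount\_factors\], not the actual compressed quantities.

```python
import numpy as np
from scipy.optimize import nnls

def solve_unconstrained(A, B):
    """One factorization, |boundary| right-hand sides (columns of B)."""
    return np.linalg.solve(A, B)

def solve_nonnegative(A, B):
    """Column-by-column NNLS, used when the unconstrained solution has negative entries."""
    return np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

# Synthetic cluster-sized system: 40 interior unknowns, 6 boundary right-hand sides.
rng = np.random.default_rng(1)
A = np.eye(40) - 0.9 * rng.dirichlet(np.ones(40), size=40)   # diagonally dominant, invertible
B = rng.random((40, 6))
X = solve_unconstrained(A, B)
if (X < 0).any():                       # fall back to the constrained solve if needed
    X = solve_nonnegative(A, B)
```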
We note briefly that these complexities follow from naive implementations; many improvements are possible. First, in many cases the graphs involved are sparse, and iterative methods for the solution of linear systems would take advantage of this sparsity and dramatically reduce the computational costs; the complexities above do not reflect such savings. Second, solving for the transition probabilities involves multiple ($|{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}|$) right-hand sides, and the left-hand sides of the linear systems determining compressed rewards and discounts are the same, so a single factorization may be reused. In addition, the calculation of the compressed quantities above is embarrassingly parallel, both at the level of [clusters]{}and at the level of the bottlenecks attached to each [cluster]{}(elements of ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$). The case of compression with respect to multiple fine policies is also trivially parallelized.\
**MMDP Solution:** As above, the complexity of solving an MMDP will depend on the algorithm selected to solve coarse MDPs and local sub-problems. Here we will consider solving with the exact (dynamic programming based) policy iteration algorithm, Algorithm \[alg:hierarchy-solve\]. In the worst case, policy iteration can take $|S|$ iterations to solve an MDP with statespace $S$, but this is pathological. We will assume that the number of iterations required to solve a given MDP is much less than the size of the problem. This is not entirely unreasonable, since we can assume policies are regularized with a small amount of the diffusion policy, and moreover, if there is significant complexity at the coarse scale, then further compression steps should be applied until arriving at a simple coarse problem where it is unlikely that the worst-case number of iterations would be required. Similarly, solving for the optimal policy within [clusters]{}should take few iterations compared to the size of the [cluster]{}because, by construction of the bottleneck detection algorithm, the Markov chain is likely to be fast mixing within a [cluster]{}.
Assume the MDP has already been compressed, and consider a fine/coarse pair of successive scales. Given a coarse scale solution, the cost of solving the fine local boundary value problems (Step \[list:solve-clusters\]) is $\sum_{{\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\mathsf{C}}}\xspace}} {{\mathcal O}}\bigl(|{\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}|^3\bigr)$ (ignoring sparsity). Updating the policy everywhere on ${\accentset{\circ}{S}}$ (Step \[list:interior-greedy-policy\]) involves solving $|{\accentset{\circ}{S}}|$ maximization problems, but these problems are also local because the [cluster]{}interiors partition ${\accentset{\circ}{S}}$ by construction. The cost of updating the policy on ${\accentset{\circ}{S}}$ is therefore the sum of the costs of locally updating the policy within each [cluster]{}’s interior. The cost for each [cluster]{}${\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\mathsf{C}}}\xspace}$ is ${{\mathcal O}}\bigl(|A||{\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}||{\ensuremath{\mathsf{c}}\xspace}| +|A||{\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}|\bigr)$ time to compute the right-hand side of the greedy (policy improvement) update and search for the maximizing action. The cost of updating the policy and value function at bottleneck states is assumed to be negligible, since ordinarily $|{{\mathcal B}}|\ll|S|$. The cost of each iteration of Algorithm \[alg:hierarchy-solve\] is therefore dominated by the cost of solving the collection of boundary value problems.
The cost of solving an MMDP with more than two scales depends on just how “multiscale” the problem is. The number of possible scales, the size and number of clusters at each scale, and the number of bottlenecks at each scale, collectively determine the computational complexity. These are all strongly problem-dependent quantities so to gain some understanding as to how these factors impact cost, we proceed by considering a reasonable scenario in which the problem exhibits good multiscale structure. For ease of notation, let $n$ be the size of the original statespace. If at a scale $j$ (with $j=0$ the finest scale) there are $n_j$ states and $r_j$ [clusters]{}of roughly equal size, an iteration of Algorithm \[alg:hierarchy-solve\] at that scale has cost ${{\mathcal O}}\bigl(r_j(n_j/r_j)^3\bigr)$. If the sizes of the [clusters]{}are roughly constant across scales, then we can say that $r_j=n_j/C$ for all $j$ and some size $C$. If, in addition, the number of bottlenecks at each scale is about the number of clusters, then $n_j=n/C^j$, and the computation time across $\log n$ scales is ${{\mathcal O}}\bigl(n\log n\bigr)$ per iteration. By contrast, global DP methods typically require ${{\mathcal O}}(n^3)$ time per iteration. The assumption that there are $\log n$ scales corresponds to the assumption that we compress to the maximum number of possible levels, and each level has multiscale structure. If we adopt the assumption above that the number of iterations required to reach convergence at each scale is small relative to $n_j$, then the cost of solving the problem is ${{\mathcal O}}(n\log n)$.
Transfer Learning {#sec:transfer}
=================
Transfer learning within reinforcement learning domains has only relatively recently begun to receive significant attention, and remains a long-standing challenge with the potential for substantial impact on learning more broadly. We define transfer here as the process of transferring some aspect of a solution to one problem into another problem, such that the second problem may be solved faster or better (a better policy) than would otherwise be the case. Faster may refer to less exploration (fewer samples), fewer computations, or both.
Depending on the degree and type of relatedness among a pair of problems, transfer may entail small or large improvements, and may take on several different forms. It is therefore important to be able to [*systematically*]{}:
1. Identify transfer opportunities;
2. Encode/represent the transferrable information;
3. Incorporate transferred knowledge into new problems.
We will argue that [*a novel form of systematic knowledge transfer between sufficiently related MDPs is made possible by the multiscale framework discussed above*]{}. In particular, if a learning problem can be decomposed into a hierarchy of distinct parts then there is hope that both a “meta policy” governing transitions between the parts, as well as the parts themselves, may be transferred when appropriate. In the former setting, one can imagine transferring among problems in which a sequence of tasks must be performed, but the particular tasks or their order may differ from problem to problem. The transfer of distinct sub-problems might for instance involve a database of pre-solved tasks. A new problem is solved by decomposing it into parts, identifying which parts are already in the database, and then stitching the pre-solved components together into a global policy. Any remaining unsolved parts may be solved for independently, and learning a meta policy on sub-tasks is comparatively inexpensive.
A key conceptual distinction is [*the transfer of policies rather than value functions*]{}. Value functions reflect, in a sense, the global reward structure and transition dynamics specific to a given problem. These quantities may not easily translate or bear comparison from one task to another, while a policy may still be very much applicable and is easier to consider locally. Once a policy is transferred (Section \[sec:policy-xfer\]) we may, however, assess goodness of the transfer (Section \[sec:xfer-detect\]) by way of value functions computed with respect to the destination problem’s transition probabilities and rewards. As transfer can occur at coarse scales or within a single partition element at the finest scale, conversion between policies and value functions is inexpensive. If there are multiple policies in a database we would like to test in a [cluster]{}, it is possible to quickly compute value functions and decide which of the candidate policies should be transferred.
If the transition dynamics governing a pair of (sub)tasks are similar (in a sense to be made precise later), then one can also consider [*transferring potential operators*]{} (defined in Section \[sec:mdp-defns\]). In this case the potential operator from a source problem is applied to the reward function of a destination problem, but along a suitable pre-determined mapping between the respective statespaces and action sets. The potential operator approach also provides two advantages over value functions: reward structure independence and localization. The reward structure of the destination problem need not match that of the source problem, and a potential operator may be localized to a specific sub-problem at any scale where the local transition dynamics of the two problems are comparable, even if globally they are not compatible or a comparison would be difficult.
Both policy transfer and potential operator transfer provide a systematic means for identifying and transferring information where possible. At a high-level, the transfer framework we propose consists of the general steps given in Algorithm \[alg:transfer-general\]. Transfer between two hierarchies proceeds by matching sub-problems at various scales, testing whether transfer can actually be expected to help, transferring policies and/or potential operators where appropriate, and finally solving the unsolved problem using the transferred information. Each of these steps is discussed in detail in the sections below.
\[alg:transfer-general\]
Notation and Assumptions
------------------------
In the following, ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace},{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ will denote two distinct MDP hierarchies with underlying statespaces $S_1,S_2$ and action sets $A_1,A_2$, respectively. The notation $P_i,R_i,\Gamma_i$ for $i\in\{1,2\}$ will in this section refer to the respective transition, reward and discount tensors for problems ${\ensuremath{\mathrm{MMDP}_{(i)}}\xspace}, i\in\{1,2\}$. To simplify the notation we will not explicitly attach [cluster]{}indices to these quantities, and assume an appropriate truncation/restriction (see Section \[sec:notation\]) that will be clear from the context. The notation ${\ensuremath{\mathsf{c}}\xspace}\in {\ensuremath{\mathrm{MMDP}_{(i)}}\xspace}$ indicates that a [cluster]{} [$\mathsf{c}$]{}is a [cluster]{}at some scale of the hierarchy ${\ensuremath{\mathrm{MMDP}_{(i)}}\xspace}$. As before, ${\ensuremath{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}\xspace}}, {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ denote the interior and boundary of the [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, respectively. For all objects, the scale in question will either be arbitrary or clear from the context. Unless otherwise noted, $\pi^*$ refers to the optimal policy for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ at the appropriate scale. Throughout this section, we will assume for simplicity that optimal source problem policies $\pi^*$ are deterministic maps from states to actions. This assumption is not important for the main ideas discussed here, and is natural since transferred information is assumed to pass from a pre-solved source problem to an unsolved or partially solved destination problem. In this case the policy for the solved source problem may be chosen deterministic[^12].
At the coarsest scale in a hierarchy, there is only one “[cluster]{}” and there are no local bottlenecks. To see the coarsest scale as just a special case falling within a more general framework, we will treat the coarsest scale as a single [cluster]{}consisting of only interior states; the boundary will be the empty set. As will be explained below, partial transfer of a policy to a subset of the states in a [cluster]{} is possible, but since the coarsest scale usually involves a small statespace, full statespace graph matching between ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ and ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ should be inexpensive and potentially less error-prone. If this is the case, we may transfer into the entire scale, although the transfer will be seen as a transfer into the interior of a single [cluster]{}in order to fit within a common transfer framework.
We set some further ground rules, since the range of possible transfer scenarios is large and diverse. We will restrict our attention to transferring policies and potential operators from a [cluster]{} ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ at scale $j$ belonging to a solved source problem, to a [cluster]{} ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$, also at scale $j$, belonging to an unsolved destination problem. We assume scales have been suitably matched, and do not address transfer between different scales. We will assume that if a matching $\eta:S_2'\subset S_2\to S_1'\subset S_1$ between the statespaces of the source and destination problems is given (see Section \[sec:graph-matching\] below), it is a bijection onto its image. We do not treat degenerate situations, such as $\eta({\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace})\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}= \emptyset$, and only consider transfer between subsets of [${\ensuremath{\mathsf{c}}\xspace}_1$]{}and [${\ensuremath{\mathsf{c}}\xspace}_2$]{}for which there is a given correspondence.
Finally, when considering [cluster]{}-to-[cluster]{}policy transfers, we will focus our attention on transfer to the [*interior*]{} of a [cluster]{}only – bottlenecks on a [cluster]{}’s boundary will not receive a transferred policy. Unless prior knowledge regarding the matching between [clusters]{}is available, we do not recommend transferring a policy to bottlenecks. Bottleneck states typically play a pivotal role in transitions across [clusters]{}, and transfer errors at bottlenecks can slow down the solution process. Assessing transferability (Section \[sec:xfer-detect\]) at bottleneck states forming the boundary of a given [cluster]{}is also more involved because one has to decide how to keep the problem of determining transferability for one [cluster]{}separate from that of the other [clusters]{}. Instead, solutions on the bottlenecks at a given scale should ordinarily be computed jointly as the solution to the next higher (compressed) MDP, or one can pursue transfer of an entire coarse scale (all bottlenecks simultaneously) if possible.
Cluster Correspondence {#sec:xfer-cluster-correspond}
----------------------
A correspondence between [clusters]{}at a given scale is established by pairing [clusters]{}deemed to be closest to each other in a suitable metric on graphs. A natural distance between graphs is given by the average pairwise Euclidean distance between diffusion-map embeddings of the underlying states. Let $G,G'$ be two directed, weighted statespace graphs corresponding to a pair of [clusters]{}of size $|S|,|S'|$, and let $\{\xi\}_k, \{\xi'\}_k$ denote the respective collections of diffusion map embeddings computed according to Section \[sec:diffmaps\]. Then we define $$d(G,G') := \frac{1}{|S||S'|}\sum_{i,j}\|(\tau\circ\xi_i) - \xi'_j\|_2$$ where $\tau$ is the sign alignment vector defined previously. Given a pair of problems $({\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}, {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace})$, a [cluster]{} [${\ensuremath{\mathsf{c}}\xspace}_1$]{}in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ is matched to the [cluster]{}$${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}^* =\arg\min_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}}d(G_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}},G_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}})$$ in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$, where $G_{{\ensuremath{\mathsf{c}}\xspace}}$ denotes the weighted statespace subgraph corresponding to $P^{\pi}_{{\ensuremath{\mathsf{c}}\xspace}}$ for some choice of $\pi$ (e.g. $\pi^u$). We will only compare [clusters]{}occurring at similar scales.
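The [cluster]{}correspondence step admits a short implementation once the (sign-aligned) diffusion-map embeddings of each [cluster]{}are available as arrays with one row per state. The sketch below is a simplified illustration of the distance $d(G,G')$ and the argmin above; it assumes the sign alignment has already been applied to the embeddings.

```python
import numpy as np

def graph_distance(xi_1, xi_2):
    """Average pairwise Euclidean distance between two sets of sign-aligned
    diffusion-map embeddings (rows = states, columns = embedding coordinates)."""
    diffs = xi_1[:, None, :] - xi_2[None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean()

def match_cluster(xi_source, destination_embeddings):
    """Return the index of the destination cluster minimizing d(G_c1, G_c2)."""
    dists = [graph_distance(xi_source, xi_dest) for xi_dest in destination_embeddings]
    return int(np.argmin(dists))
```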
[*Figure \[fig:policy-xfer\] (schematic): a destination state $w\in{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}$ is mapped to $s=\eta(w)$ in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$; the state $s'$ targeted by $\pi^*(s)$ is mapped back to $w'=\eta^{-1}(s')$, and the ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ action most likely to induce the transition $w\to w'$ defines $\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}(w)$.*]{}[]{data-label="fig:policy-xfer"}
Policy Transfer {#sec:policy-xfer}
---------------
Given a pair of matched [clusters]{} ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}, {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$, we describe how the deterministic optimal policy $\pi^*$ from ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ can be transferred to some or all of ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$. Policy transfer may be carried out at any scale, and for subsets of a [cluster]{}’s interior states. For example, one may also find that there are sub-tasks which are not exactly the same as solved tasks in a database, but do nevertheless bear a strong resemblance. In these cases one may pursue partial policy transfer possibilities. Transferring via policies provides a convenient way to incorporate both full and partial knowledge: a transferred piece of the policy can serve as an initial condition for computing a value function everywhere in the [cluster]{}.
Assume we are given a bijective statespace mapping $\eta:S'_2\to S'_1$ such that $S'_1\subseteq S_1$, $S'_2\subseteq S_2$, ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}\cap S'_2 \neq\emptyset$ and $\eta({\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}\cap S'_2)\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\neq\emptyset$. That is, we require that $\eta$ matches at least part of ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$ with part of [${\ensuremath{\mathsf{c}}\xspace}_1$]{}. Statespace graph matching will be discussed in Section \[sec:graph-matching\] immediately below; here we will assume $\eta$ is either given or simply taken to be the identity map. Next, let $$\label{eqn:W_intersect}
{{\mathcal W}}_{\eta} := \eta^{-1}\bigl(\eta({\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}\cap {\operatorname{dom}}\eta) \cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\bigr)$$ denote the subset of ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$ with a correspondence in [${\ensuremath{\mathsf{c}}\xspace}_1$]{}[^13], and assume that ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace},{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}$ are both associated to scale $j$. An important aspect of policy transfer is the mapping of actions along $\pi^*$ in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. Figure \[fig:policy-xfer\] illustrates action mapping for an arbitrary state $w\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}$. If $\eta(w)=s\in S_1$, we can follow $\pi^*$ by finding the state $\pi^*(s)$ is trying to transition to, $$s'=\arg\max_{\ell\in S_1} P_1(s,\pi^*(s),\ell) .$$ Then if $s'\in S_1$ corresponds back to $w'\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}$ via $w'=\eta^{-1}(s')$, the ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ action most likely to induce a transition between $w$ and $w'$ is taken to be the transferred policy at $w$: $$\begin{aligned}
a^* &= \arg\max_{a\in A_2} P_2(w,a,w') \\
\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}(w,\cdot) &= \delta_{a^*}, \qquad w\in{{\mathcal W}}_{\eta} .\end{aligned}$$
Once each $w\in{{\mathcal W}}_{\eta}$ has been assigned an action, the remaining missing policy entries in ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$ can be set either to the uniform distribution or to a previous policy guess. Abusing notation and using $\pi^u$ to denote either the uniform random policy or the previous policy, $$\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}(s)=\pi^u(s),\qquad s\in \{{\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}\setminus{{\mathcal W}}_{\eta}\} .$$
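The action-mapping step above can be written compactly. The sketch below is a minimal illustration, assuming dense transition arrays `P1`, `P2` indexed as `[state, action, next_state]`, a deterministic source policy `pi_star` given as an array of action indices, and a correspondence `eta` given as a dict from destination states to source states; destination states outside ${{\mathcal W}}_{\eta}$ keep the default (e.g. uniform) policy, as in the last display.

```python
import numpy as np

def transfer_policy(P1, P2, pi_star, eta, interior_c2, n_actions_2):
    """Map a deterministic source policy into the interior of cluster c2.

    P1, P2      : transition arrays indexed as [state, action, next_state].
    pi_star     : pi_star[s] = optimal action index in MMDP_(1).
    eta         : dict mapping destination states w to source states s = eta[w].
    interior_c2 : list of destination states forming the cluster interior.
    Returns a (len(interior_c2), n_actions_2) stochastic policy array.
    """
    eta_inv = {s: w for w, s in eta.items()}
    pi_c2 = np.full((len(interior_c2), n_actions_2), 1.0 / n_actions_2)  # default policy
    for i, w in enumerate(interior_c2):
        if w not in eta:
            continue                                    # outside W_eta: keep the default
        s = eta[w]
        s_prime = int(np.argmax(P1[s, pi_star[s], :]))  # state pi* is steering toward
        if s_prime not in eta_inv:
            continue
        w_prime = eta_inv[s_prime]
        a_star = int(np.argmax(P2[w, :, w_prime]))      # action most likely to reach w'
        pi_c2[i, :] = 0.0
        pi_c2[i, a_star] = 1.0
    return pi_c2
```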
The resulting policy $\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}$ can then be used as a starting point for local policy iteration in [${\ensuremath{\mathsf{c}}\xspace}_2$]{}with e.g. Algorithm \[alg:hierarchy-solve\]. A high-level summary of policy transfer and the subsequent solution process is given in Algorithm \[alg:policy-xfer\]. In particular, a value function $V_j$ everywhere in [${\ensuremath{\mathsf{c}}\xspace}_2$]{}is computed by solving the local boundary value problem $$\label{eqn:xfer-values}
V_j(s) =
\begin{dcases}
{{\mathbb E}}_{a\sim \pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}(s)}\left[
\sum_{s'\in {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}P_2(s,a,s')\bigl(R_2(s,a,s') + \Gamma_2(s,a,s')V_j(s')\bigr) \right]
& \text{if } s\in{\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}, \\
V_{j+1}(s) & \text{if } s\in \partial {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}
\end{dcases}$$ where $V_{j+1}$ is the value function associated to the coarse scale $j+1$ (see Section \[sec:solve-local\]). The value function $V_j$ and its associated policy can then be propagated up or down the hierarchy using the ideas discussed in Section \[sec:mdp\_solution\]. For instance, ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ could be re-compressed from the scale at which [${\ensuremath{\mathsf{c}}\xspace}_2$]{}resides (scale $j$) upwards using an updated policy derived from $V_j$ (possibly blended with a previous policy). The value function in [${\ensuremath{\mathsf{c}}\xspace}_2$]{}can also be used to solve downwards below the current [cluster]{}by applying Algorithm \[alg:hierarchy-solve\] to the previous scale with $V_j$ serving as the initial coarse data.
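Solving the boundary value problem in Equation \[eqn:xfer-values\] amounts to one small linear solve per [cluster]{}once the boundary values are fixed. The sketch below is a direct transcription under the assumption that the [cluster]{}-restricted tensors are available as dense arrays and that the policy is given as a row-stochastic matrix; it is not the optimized solver used in the experiments.

```python
import numpy as np

def solve_cluster_bvp(P2, R2, G2, pi_c2, interior, boundary, V_coarse):
    """Value determination on a cluster interior with boundary values held fixed.

    P2, R2, G2 : (n, n_actions, n) cluster-restricted transitions, rewards, discounts.
    pi_c2      : (n, n_actions) stochastic policy on the cluster.
    V_coarse   : dict mapping each boundary state to its coarse value V_{j+1}(s).
    """
    n = P2.shape[0]
    GP = np.einsum('sa,sat->st', pi_c2, G2 * P2)   # policy-averaged discounted transitions
    r = np.einsum('sa,sat->s', pi_c2, P2 * R2)     # policy-averaged one-step rewards
    V = np.zeros(n)
    bnd = np.array(boundary, dtype=int)
    V[bnd] = [V_coarse[s] for s in boundary]
    ii = np.array(interior, dtype=int)
    A = np.eye(len(ii)) - GP[np.ix_(ii, ii)]
    b = r[ii] + GP[np.ix_(ii, bnd)] @ V[bnd]
    V[ii] = np.linalg.solve(A, b)
    return V
```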
\[alg:policy-xfer\]
Transfer of Potential Operators {#sec:potential-xfer}
-------------------------------
Suppose $S_i^j$ denotes the full statespace for problem $i$ at scale $j$. At any (coarse) scale $j>0$ above the finest scale one can consider transferring the potential operator $$\begin{aligned}
{{\mathcal G}}&: {{\mathbb R}}^{S_1^j}\to {{\mathbb R}}^{S_1^j}\\
{{\mathcal G}}&:= \bigl(I- (\Gamma_1\circ P_1)^{\pi^*}\bigr)^{-1}\end{aligned}$$ associated to the optimal policy $\pi^*$ at scale $j$ of problem ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$. Here we let $P_1, \Gamma_1$ generically denote the Markov transition tensor and compressed discount factors (respectively) at the relevant scale of problem ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$. We will consider for simplicity of the exposition the transfer of entire scales, and require for the moment that the statespace correspondence satisfy $\eta(S_2^j)=S_1^j$. In general, potential operators specific to [clusters]{}may be readily transferred by extending the development here to include the ideas discussed in Section \[sec:solve-local\] (see in particular the localized formulation given there).
Sub-problems at scales below $j$ of ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ decouple from each other given values at the states in $S_2^j$. A value function $V_j$ on $S_2^j$ may be computed from the transferred potential operator and ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ rewards as follows. Let $\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}$ denote the policy on $S_2^j$ transferred from $\pi^*$ according to Section \[sec:policy-xfer\]. Next, consider the ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ rewards $R_2$, aggregated over one step with respect to $(\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}, P^{\pi^*})$, and mapped back to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$: $$\label{eqn:potential-rewards}
R_{2,1}(s) = \sum_{s'\in S_1^j}P^{\pi^*}(s,s'){{\mathbb E}}_{a\sim \pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}(\eta^{-1}(s))}\Bigl[
R_2\bigl(\eta^{-1}(s),a,\eta^{-1}(s')\bigr)\Bigr],\quad s\in S_1^j .$$ The expectation on the right-hand side defines a system of rewards on $S_1^j$ by collecting the (one-step) rewards in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ following the policy on $S_2^j$ determined by mapping $\pi^*$ to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. Since the policy $\pi^*$ is deterministic, we do not take an expectation with respect to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ actions anywhere in Equation \[eqn:potential-rewards\]. The value function $V_j$ is computed by applying ${{\mathcal G}}$ to these rewards and mapping back to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ along $\eta$, $$\label{eqn:potential-values}
V_j(s) = \bigl({{\mathcal G}}R_{2,1}\bigr)\bigl(\eta(s)\bigr), \qquad s\in S_2^j.$$ We draw attention to the fact that if the reward system for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ depends on actions, then as shown in Equation \[eqn:potential-rewards\] computing a value function requires a set of rewards aggregated with respect to a transferred policy $\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}$. In general, transferring a potential operator therefore also entails mapping actions across problems, and transferring a policy. If the ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ rewards in [${\ensuremath{\mathsf{c}}\xspace}_2$]{}do not depend on actions, then Equation \[eqn:potential-rewards\] reduces to a simpler expression not involving $\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}$; this situation is unlikely at any coarse scale, however.
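A compact sketch of these two steps (Equations \[eqn:potential-rewards\] and \[eqn:potential-values\]) follows. The dense arrays and the representation of $\eta$ as an index array are assumptions made for illustration only.

```python
import numpy as np

def potential_operator_transfer(G, P1_pi, R2, pi_c2, eta):
    """Compute destination values from a transferred potential operator.

    G      : (n1, n1) potential operator (I - (Gamma o P)^{pi*})^{-1} of MMDP_(1).
    P1_pi  : (n1, n1) policy-averaged source transitions P^{pi*}.
    R2     : (n2, n_actions, n2) destination rewards.
    pi_c2  : (n2, n_actions) policy on S_2^j mapped from pi* (previous section).
    eta    : (n2,) array with eta[w] = corresponding source state; assumed bijective.
    Returns V, the (n2,) value function on S_2^j.
    """
    n1 = G.shape[0]
    eta_inv = np.empty(n1, dtype=int)
    eta_inv[eta] = np.arange(len(eta))          # eta_inv[s] = w with eta[w] = s
    # Expected one-step MMDP_(2) reward at eta^{-1}(s) under pi_c2, for each target.
    R2_pi = np.einsum('wa,wau->wu', pi_c2, R2)  # (n2, n2): E_a R2(w, a, u)
    # Aggregate along the source dynamics, with states mapped to MMDP_(1) indices.
    R21 = np.einsum('st,st->s', P1_pi, R2_pi[eta_inv][:, eta_inv])
    V1 = G @ R21                                # values on S_1^j
    return V1[eta]                              # V_j(w) = (G R21)(eta(w))
```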
Using a potential operator from ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ to compute values for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ sub-problems provides three major advantages. Value determination is fast, ${{\mathcal O}}(|S_1^j|^2)$ worst case, because the operator is given. The resulting value function for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ at scale $j$ also respects the specific reward structure of ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. The third advantage is more subtle: the coarse MDP initially associated to scale $j$ of ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ results from compression with respect to a stochastic policy guess, and may not be compatible with the optimal policy at scale $j$ of ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$. If we simply transferred $\pi^*$ from ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$, but used ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$’s coarse transition dynamics to perform value determination, it is likely that any improvement arising from knowledge of $\pi^*$ would be eliminated. Assuming the transfer is viable, ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$’s potential operator will determine a value function that obeys the “correct” coarse-scale Markov process and provides a warm start towards finding the optimal fine-scale policy.
Detecting Transferability {#sec:xfer-detect}
-------------------------
Given a pair of matched [clusters]{} $({\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}, {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace})$, we would like to know whether transfer of a policy or potential operator from ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ to some or all of ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$ can be reasonably expected to help solve ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. As in Section \[sec:policy-xfer\], we will restrict our attention to detecting opportunities for transfer only to the interior of [${\ensuremath{\mathsf{c}}\xspace}_2$]{}.
### Policy Transferability
One way to approximately determine transferability of a policy is to check whether running ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$’s optimal policy $\pi^*$ in ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ results in more aggregate reward on average than executing the current policy we have for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. In most cases the “current” policy will just be the diffusion policy $\pi^u$, so we will overload this notation to indicate either the current policy or the diffusion policy. If the reward collected following ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$’s policy is lower than that collected using the current policy, this suggests that $\pi^*$ does not provide a warm start, and transfer should not be attempted. In addition, if no statespace correspondence is given then this test can be used to check whether assuming the default identity correspondence can still support transfer from [${\ensuremath{\mathsf{c}}\xspace}_1$]{}to [${\ensuremath{\mathsf{c}}\xspace}_2$]{}.
The value function in [${\ensuremath{\mathsf{c}}\xspace}_2$]{}given the current policy $\pi^u$ for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ is computed in the usual way. Letting $V_j^u$ denote the desired value function and $V_{j+1}^u$ denote the current value function at the next coarser scale $j+1$, we must solve the system $$\label{eqn:vu-detect}
V_j^u(s) =
\begin{dcases}
{{\mathbb E}}_{a\sim\pi^u(s)}\left[\sum_{s'\in {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}P_2(s,a,s')\bigl(R_2(s,a,s') + \Gamma_2(s,a,s')V_j^u(s')\bigr) \right] & \text{if } s\in{\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}, \\
V_{j+1}^u(s) & \text{if } s\in \partial {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}} .
\end{dcases}$$
The test for transferability compares $V_j^u$ to the value function $V_j$ describing rewards collected in ${{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ running $\pi^*$ from ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ computed according to Equation \[eqn:xfer-values\]. In other words, we transfer following Section \[sec:policy-xfer\], and then check whether we see any improvement relative to the current policy in [${\ensuremath{\mathsf{c}}\xspace}_2$]{}. The result of the computations in Equations \[eqn:xfer-values\] and \[eqn:vu-detect\] can be reused during the first iteration of Algorithm \[alg:hierarchy-solve\] (or its variants) when solving the transfer problem. If similar underlying states of the environment play different roles in the different tasks ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace},{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$, then $V_j$ could differ significantly from $V_j^u$. The two functions should be compared on all of ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$ (not just ${{\mathcal W}}_{\eta}$ defined in Equation \[eqn:W\_intersect\]). One can take a conservative approach and only pursue transfer if $V_j(s)\geq V_j^u(s), \forall s\in{\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$. Or, if the situation is less clear, assessing the improvement may involve other heuristics. For example, a relative comparison such as $$\label{eqn:detect-test-matching}
\sum_{s\in{\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}}{\operatorname{sgn}}\bigl(V_j(s) - V_j^u(s)\bigr)\log\left(\frac{|V_j(s) - V_j^u(s)|}{|V_j^u(s)|
+ {\mathbf{1}}_{(V_j^u(s)=0)}} + 1\right) \stackrel{?}{>} 0 .
$$ This test checks whether the policy $\pi^*$ provides a “warm start” relative to $\pi^u$ given the transition dynamics and reward structure for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. If the inequality above is satisfied, then we can proceed with transferring the policy from [cluster]{} ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ to ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. Note that since interior [cluster]{}values are computed in Equations \[eqn:xfer-values\] and \[eqn:vu-detect\] with the [cluster]{}boundary fixed, the problem of assessing transferability from [${\ensuremath{\mathsf{c}}\xspace}_1$]{}to [${\ensuremath{\mathsf{c}}\xspace}_2$]{}is independent of other [clusters]{}in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ or ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$.
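The heuristic in Equation \[eqn:detect-test-matching\] is straightforward to evaluate once the two value functions restricted to ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$ are in hand; a direct transcription follows, with `V_transfer` and `V_current` playing the roles of $V_j$ and $V_j^u$.

```python
import numpy as np

def policy_transfer_helps(V_transfer, V_current):
    """Signed log-relative-improvement test: True if the transferred policy
    looks like a warm start relative to the current policy."""
    diff = V_transfer - V_current
    denom = np.abs(V_current) + (V_current == 0)       # avoid division by zero
    score = np.sum(np.sign(diff) * np.log(np.abs(diff) / denom + 1.0))
    return score > 0
```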
### Potential Operator Transferability
The process for determining whether transferring a potential operator will be helpful or not is similar to the procedure for policies. Transferring a potential operator is equivalent to assuming that the dynamics of ${{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ apply to ${{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. Thus, as with policies, it is worthwhile to consider comparing the expected discounted rewards collected while following the transition dynamics governing ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ to the no-transfer alternative. The expected reward starting from states in ${{\mathcal W}}_{\eta}$ under potential operator transfer is given by Equation \[eqn:potential-values\]. If ${{\mathcal W}}_{\eta}$ is a proper subset of ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$, then a value function everywhere on ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$ can be obtained by solving a small boundary value problem. In this case, ${{\mathcal W}}_{\eta}$ is added to the boundary (in addition to [${\ensuremath{\mathsf{c}}\xspace}_2$]{}’s bottlenecks) and the values computed by Equation \[eqn:potential-values\] serve as the boundary values for states in ${{\mathcal W}}_{\eta}$: $$\label{eqn:potfn-xfer-bvp}
V_j(s) =
\begin{dcases}
{{\mathbb E}}_{a\sim\pi^u(s)}\left[\sum_{s'\in {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}P_2(s,a,s')\bigl(R_2(s,a,s') + \Gamma_2(s,a,s')V_j(s')\bigr) \right], & \text{if } s\in{\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}\setminus{{\mathcal W}}_{\eta}, \\
V_{j+1}(s), & \text{if } s\in \partial {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}} \\
\bigl({{\mathcal G}}R_{2,1}\bigr)\bigl(\eta(s)\bigr), & \text{if } s\in {{\mathcal W}}_{\eta}
\end{dcases}$$ where $\pi^u$ is the initial policy (diffusion or otherwise), and $V_{j+1}$ is the initial coarse value function. Values for the remaining states are computed according to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$’s rewards and transition probabilities, as this is the only possibility in the absence of a wider correspondence. Analogous to the case of policy transfer, the computations in Equation \[eqn:potfn-xfer-bvp\] may be reused when solving the transfer problem.
If we do not transfer the potential operator, we would otherwise just follow the current (or uniform stochastic) policy in [${\ensuremath{\mathsf{c}}\xspace}_2$]{}with the usual ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ dynamics. The expected discounted reward when there is no transfer is determined by solving Equation \[eqn:vu-detect\] as before. The final comparison between transfer/no-transfer can be performed on all of ${\accentset{\circ}{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}}$, and may involve a heuristic such as Equation \[eqn:detect-test-matching\]. If the reward system in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ is strongly dependent on actions, then the quality of a potential operator transfer may also depend on the quality of the mapped policy $\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}$ by way of Equation \[eqn:potential-rewards\]. In such situations one can also assess the quality of $\pi_{{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}$ by using the procedure above.
Statespace Graph Matching {#sec:graph-matching}
-------------------------
Establishing a correspondence between the discrete, finite statespaces of two problems can be an important prerequisite for some, if not most, types of transfer. Recall that a problem’s statespace graph is a graph with states as its vertices and edges/weights defined by a transition probability kernel. Such a graph may, for instance, be characterized by a graph Laplacian of the type defined in Section \[sec:diffmaps\]. The goal of a statespace graph matching is to establish a correspondence between the [*roles*]{} played by states in each problem. Consider for example two related problems ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace},{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$, each with a single terminal (goal) state. It would be desirable to be able to match the terminal states as “goals”, even if the terminal states are different in the sense that they have different representations in some underlying space (e.g. as features or coordinates in a Euclidean space). The same could be true for other states that play a pivotal role, such as “gateway” states directly connected to goal states. Graph matching ultimately seeks to abstract away problem-specific roles from the underlying state representations, and then match similar roles across problems. We will further illustrate this concept by way of several examples in Section \[sec:examples\].
Although statespace matching can be important for a transfer problem, it can also be expensive computationally and imprecise in practice. For some problems defined on discrete domains, key correspondences may need to be correct; otherwise, the transferred information may actually diminish performance. For these reasons we do not require graph matching, nor do we propose a full solution to the matching problem. We will restrict our attention to transfer scenarios where:
1. It is possible to use a default “identity” correspondence, or detect that the identity is a poor choice and transfer should not be attempted (using the ideas in Section \[sec:xfer-detect\]).
2. The graph matching is relatively simple, and there is a limited potential for catastrophic errors. For example, matching at coarse scales between small collections of states.
The algorithm we will use to match statespace graphs is heuristic; a short sketch in code is given after the list below. Given two sets of states ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}, {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$,
1. Compute the pairwise diffusion distances $d(s_i,s_j)$, $s_i\in {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}}, s_j\in {{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}}$, according to Section \[sec:diffmaps\]. Note that this does not involve any particular underlying representation associated to the statespaces.
2. Build the affinity matrix $W_{ij} = \exp\bigl(-d^2(s_i,s_j)/\sigma^2\bigr)$, for some appropriate choice of $\sigma$ (e.g. the median pairwise distance in the set).
3. Apply any graph matching algorithm based on affinities.
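A minimal sketch of these three steps follows, assuming the matrix of pairwise diffusion distances `D` has already been computed; the Hungarian assignment solver from SciPy is used here simply as one instance of “any graph matching algorithm based on affinities”.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_states(D, sigma=None):
    """Match states of c1 to states of c2 from pairwise diffusion distances.

    D : (|c1|, |c2|) array of diffusion distances d(s_i, s_j).
    Returns (rows, cols): matched index pairs maximizing total affinity.
    """
    if sigma is None:
        sigma = np.median(D)                     # heuristic bandwidth choice
    W = np.exp(-D**2 / sigma**2)                 # affinity matrix
    rows, cols = linear_sum_assignment(-W)       # maximize total affinity
    return rows, cols
```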
Graph matching is itself an area of active research, and several algorithms exist [@Fremuth-Paeger99; @SanghaviMW07; @Huang:AISTATS:07; @Huang:AISTATS:11]. For a graph with $|V|$ vertices and $|E|$ edges, the min-cost flow algorithm of [@Fremuth-Paeger99] has ${{\mathcal O}}(|V||E|)$ running time. Jebara and colleagues have improved upon this with a belief-propagation algorithm giving a running time of ${{\mathcal O}}(|V|^{2.5})$ on average [@Huang:AISTATS:07; @Huang:AISTATS:11] (but ${{\mathcal O}}(|V||E|)$ in the worst case).
The diffusion map embeddings may be computed either locally within [clusters]{}if [${\ensuremath{\mathsf{c}}\xspace}_1$]{}and [${\ensuremath{\mathsf{c}}\xspace}_2$]{}are contained in [clusters]{}at a coarser scale, or approximately with a small number of eigenvectors when [${\ensuremath{\mathsf{c}}\xspace}_1$]{}and/or [${\ensuremath{\mathsf{c}}\xspace}_2$]{}is the entire problem statespace. In many cases, specific problem knowledge may guide the choice of [${\ensuremath{\mathsf{c}}\xspace}_1$]{}and [${\ensuremath{\mathsf{c}}\xspace}_2$]{}. For example, we may need to match a [cluster]{} ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in {\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ to states in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$, but might reasonably expect that [${\ensuremath{\mathsf{c}}\xspace}_2$]{}can only correspond to a small number of states in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ rather than all of $S_1$. The examples discussed in Section \[sec:examples\] below illustrate graph matching and the transfer procedures suggested above in more detail.
Experiments {#sec:examples}
===========
We will illustrate compression and transfer learning in the case of three examples: a discrete $50\times 50$ gridworld domain with multiscale structure, a 3-dimensional continuous two-task inverted pendulum problem, and a pair of problems based on the “playroom” domain of [@Singh2004; @Barto:ICDL:04]. The gridworld tasks require an agent to navigate to a goal location in a 2D environment. The inverted pendulum problem involves first moving a cart to a desired position, and then moving the cart while balancing the pendulum to another position. Finally, the playroom domain examples involve learning to carry out sequences of specific interactions with various objects and actuators in a desired order. The setup of compression, transfer and transfer detection is the focus of this section, rather than an exhaustive performance comparison with other algorithms. For this reason, most of the performance plots below show error versus the number of algorithm iterations, even if different algorithms have dramatically different computational complexity per iteration (in particular, the proposed multiscale algorithms have a cost per iteration much smaller than global algorithms such as policy iteration).
We will consider several multiscale algorithms, obtained by choosing different paths along the flow diagram in Figure \[fig:soln-flow\], and different numbers of interior or boundary update iterations. Each variant has the basic structure of Algorithm \[alg:hierarchy-solve\], however the particular updates applied may differ. The following table summarizes the multiscale algorithms we will consider:
\[tab:expt-algs\]
  Algorithm Name     Interior Update     Boundary Update
  ------------------ ------------------- ------------------------------
  `oo`               once                once
  `oc`               once                Alg. \[alg:hierarchy-solve\]
  `or`               once                recompress
  `co`               to convergence      once
  `cc`               to convergence      Alg. \[alg:hierarchy-solve\]
  `cr`               to convergence      recompress

  : Multiscale algorithms tested in the experiments.
The designations “once” and “to convergence” refer to the number of updates applied to the interior/boundary states, before updating the boundary/interior. There are two [cluster]{}interior update possibilities: either we perform policy iteration in each [cluster]{}until convergence ([*“to convergence”*]{} – when the relative error between iterates falls below 0.01), or we apply only one policy iteration per [cluster]{}([*“once”*]{}). To make comparisons fair, for algorithms iterating within [clusters]{}to convergence, each pass applying one local policy iteration update to all the [clusters]{}is counted as a single outer “algorithm iteration” in the plots[^14]. The bottlenecks (boundary) are updated either as in Algorithm \[alg:hierarchy-solve\], by way of repeated local averaging steps ([*“Alg. \[alg:hierarchy-solve\]”*]{}), or by [*recompressing*]{} the fine scale MDP and then solving the resulting coarse MDP ([*“recompress”*]{}). For accounting purposes, a boundary update, regardless of type, is considered part of the same outer algorithm iteration as the immediately preceding interior update (or pass over [cluster]{}interiors). In all experiments, the cost of initial hierarchy construction and transfer detection/policy-mapping (when applied) is not included, as they are only done once for a problem.
Which of the algorithms is best suited to a given problem strongly depends on whether the initial data can be trusted. There are three kinds of initial data in question: the initial fine scale policy, the initial coarse value function, and the policy or policies used to initially compress the fine scale MDP. Empirically, we have observed the latter two types to be the most significant. If the coarse value function is trustworthy, then iterating within [cluster]{}interiors to convergence before updating the bottlenecks is generally optimal. Initial boundary information is allowed to propagate throughout the fine scale interior, and the boundary values are modified only after the interior cannot be improved further. This situation might arise when pursuing potential operator transfer, or if the boundary value function solves a coarse MDP compressed according to a pool of policies as in Section \[sec:cluster\_pols\]. By contrast, if the initial coarse data is suspect, then we may choose to iterate the interior once or a small number of times, and then improve the boundary immediately afterwards. This may be the case if, for example, the initial coarse value function solves a coarse MDP compressed with respect to the diffusion policy. Applying many interior iterations can otherwise propagate erroneous information, and slow the solution process considerably. In short, when the initial coarse information is trustworthy, it should be leveraged as far as possible. Otherwise, if it is suspect, the coarse initial data should be imposed lightly.
For those experiments where a pool of [cluster]{}policies was used to initially compress the fine scale (following Section \[sec:cluster\_pols\]) [*and*]{} there is recompression, we will effectively [*add*]{} the current fine policy to the existing pool of initial guesses, and use the augmented pool to recompress. This allows the solution at the coarse scale to ignore actions invoking the current fine policy if the actions corresponding to the initial guess policies are better. Under these conditions, the coarse value function can only increase, since we are providing additional actions beyond those resulting from the initial compression. Since each fine scale [cluster]{}policy corresponds to a coarse action, recompression is efficient in practice, and involves compressing only with respect to the new fine policy. One can concatenate new coarse probabilities, discounts, and rewards with those resulting from the initial compression and then proceed to solve at the coarse scale. In experiments involving initial compression with respect to the diffusion policy only, we recompressed using the current fine policy, and discarded coarse actions corresponding to the diffusion policy. In all experiments, compression involved blending each [cluster]{}policy with a small amount ($\lambda=0.01$) of the diffusion policy in order to preserve the boundary reachability assumption.
![[*Output of the [cluster]{}correspondence algorithm (Section \[sec:xfer-cluster-correspond\]) applied to the gridworld transfer problem described in Section \[sec:grid\_expt\]. All [clusters]{}are correctly matched.*]{}[]{data-label="fig:grid-tr1-matching"}](figs/gridworld-tr1-matching){width="99.00000%"}
Gridworld Domain {#sec:grid_expt}
----------------
In the gridworld domain, an agent must navigate within a two-dimensional world from an arbitrary starting point to a designated goal state. The two $50\times 50$ gridworlds we will consider are shown in Figure \[fig:grid\_transfer\], where grey blocks represent immovable obstacles (walls) and large grey circles denote terminal goal states. The actions available to the agent are [`up`, `down`, `left`, `right`]{}. The four movement actions are reversible, and succeed with probability 0.9. Actions that would otherwise allow the agent to step on or through an obstacle fail with probability 1, and the agent remains in place. For all worlds, the reward function is set to $-1$ for all states except the goal state, which is assigned a reward of $+10$. We assume that these transition probabilities $P$ and rewards $R$ are given.
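The MDP just described is small enough to construct explicitly. The sketch below builds per-action transition matrices and state rewards; the outcome of an unsuccessful but unobstructed move (the agent staying in place with the remaining probability) is our assumption, since that detail is not specified above.

```python
import numpy as np

def build_gridworld_mdp(walls, goal, n=50, p_succ=0.9):
    """Build P and R for the gridworld sketch.  `walls` is a set of (row, col)
    cells, `goal` is the terminal goal cell."""
    states = [(r, c) for r in range(n) for c in range(n) if (r, c) not in walls]
    idx = {s: i for i, s in enumerate(states)}
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    P = {a: np.zeros((len(states), len(states))) for a in moves}
    R = np.full(len(states), -1.0)           # reward -1 everywhere ...
    R[idx[goal]] = 10.0                      # ... except +10 at the goal
    for s in states:
        i = idx[s]
        if s == goal:                        # goal is terminal: model it as absorbing
            for a in moves:
                P[a][i, i] = 1.0
            continue
        for a, (dr, dc) in moves.items():
            t = (s[0] + dr, s[1] + dc)
            if t in idx:                     # legal move: succeeds with prob. p_succ
                P[a][i, idx[t]] = p_succ
                P[a][i, i] = 1.0 - p_succ    # otherwise stay put (our assumption)
            else:                            # blocked by a wall or the grid edge
                P[a][i, i] = 1.0
    return states, P, R
```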
Bottleneck detection and partitioning were performed once, before compressing, to a maximum of 3 scales. We then compressed the gridworld problem 3 times using the initial local guess policies described in Section \[sec:cluster\_pols\]. Figure \[fig:grid\_bns\] shows, clockwise from top left, the detected bottlenecks (marked by ‘`x`’ characters) and statespace adjacency graphs (from the compressed transition probability matrices) superimposed on the original world for each successively coarsened MDP. The graphs are directed; however, for readability we do not show directionality in the plots. One can see that as the problem is repeatedly compressed, [clusters]{}become successively lumped together into coarser approximations to the original world. A solution to the MDP at the first compressed level, for example, determines the optimal sequence of [clusters]{}the agent should traverse to reach the goal. Figure \[fig:grid\_pols\] shows policies resulting from solving the coarse MDPs, depicted as directed arrows marking a path along bottleneck states to the goal. For this problem, solutions to the coarse problems are compatible with the optimal fine scale policy.
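Schematically, the hierarchy construction proceeds as sketched below, where `detect_structure`, `local_guess_policies`, and `compress_mdp` are hypothetical placeholders for the one-time structure detection, the initial local guess policies of Section \[sec:cluster\_pols\], and the compression step, respectively.

```python
def build_hierarchy(fine_mdp, n_scales=3):
    # Hypothetical helpers: detect_structure performs the one-time bottleneck
    # detection and recursive partitioning (down to n_scales levels),
    # local_guess_policies builds the initial cluster policies, and compress_mdp
    # produces the next coarser MDP from the current one.
    structure = detect_structure(fine_mdp, max_scales=n_scales)   # done once, up front
    hierarchy = [fine_mdp]
    for j in range(n_scales):
        guesses = local_guess_policies(hierarchy[-1], structure[j])
        hierarchy.append(compress_mdp(hierarchy[-1], structure[j], guesses))
    return hierarchy
```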
In Figure \[fig:grid\_transfer\] (right), we show a gridworld to which knowledge may be transferred given a solution at some scale to the problem on the left. In the transfer world (right) the optimal policy and state-transition behavior are significantly different from those of the original world (left). The reward function is also different, since the goal has moved; however, the multiscale structure is similar, and the goal is in the same [cluster]{}as before (though the [cluster]{}has moved). The optimal fine scale policies within [clusters]{}of similar geometry are also similar across problems. Indeed, for this world some or all of the solution at any of scales 1-4 may be transferred following the process discussed in Section \[sec:transfer\]. We will consider a simple transfer scenario in which the optimal fine scale policy is transferred wherever transferability detection (Section \[sec:xfer-detect\]) indicates it is advantageous to do so. Details discussing the application of transfer Algorithm \[alg:transfer-general\] are given below.\
[*Cluster Correspondence:*]{} For this problem, the [cluster]{}correspondence algorithm described in Section \[sec:xfer-cluster-correspond\] correctly pairs together the [clusters]{}in the source and destination problems. Figure \[fig:grid-tr1-matching\] shows the partitioning of each world into [clusters]{}identified by the recursive spectral clustering step (Algorithm \[alg:spectral\_clustering\]), as well as the correspondences returned by the [cluster]{}correspondence algorithm. Within each world, [clusters]{}are demarcated by shade of color, and across worlds, gray lines connect the centroids of [clusters]{}that have been paired together.\
[*Transfer Detection:*]{} The transfer detection algorithm described in Section \[sec:xfer-detect\] was applied to each pair of matched [clusters]{}. It is clear that with the exception of [clusters]{}1 and 8 in Figure \[fig:grid-tr1-matching\], the [clusters]{}are similarly oriented in both worlds. Thus for this problem, one should be able to skip statespace matching [*within*]{} [clusters]{}and rely on transfer detection to confirm whether this was ultimately a safe thing to do. Omitting a statespace matching at the fine scale is equivalent to assuming that paired subproblems have the same orientation with respect to the problem domain and bottlenecks. In general of course it is hard to know a priori whether identified sub-problems share the same orientation as a pre-solved problem stored in a database of solutions, and statespace matching at [*all*]{} scales involved in the transfer should be performed. Nevertheless, one can still attempt to assume the orientations are correct, and then detect whether this assumption is valid or not. This approach may be particularly fruitful whenever the fine scale statespaces are large and complex, so that graph matching is difficult and error-prone. For the present gridworld problem, the detection algorithm identifies [clusters]{} $2-7$ as policy transfer candidates, and rejects [clusters]{}1 and 8. This result coincides with our earlier visual intuition from Figure \[fig:grid-tr1-matching\].\
[*Fine Scale Policy Transfer:*]{} Within [clusters]{} $2-7$, the fine scale optimal policy for the source problem was mapped to the destination problem following the mapping procedure described in Section \[sec:policy-xfer\]. States in the destination world which did not receive a policy by transfer were given a deterministic policy that always recommends the `up` action, rather than a uniform distribution over all actions. Mapping the actions across worlds is easy in this case, since the action spaces are identical, and pieces of the optimal policy for the original problem largely transfer without modification. Comparing the two worlds in Figure \[fig:grid-tr1-matching\], however, the detected bottleneck states are not always in the same place relative to a given [cluster]{}. For example, the two bottlenecks at the top-left corner of [cluster]{}5 are one grid space to the right in the transfer world as compared to the original world. The policy transfer algorithm in Section \[sec:policy-xfer\] maps a policy between [cluster]{}interiors along the established correspondence, which in our case is simply an enumeration of the [cluster]{}’s states in column-scan order, from top-left to bottom-right. For [cluster]{}5, the difference in relative positioning of the bottlenecks creates a misalignment between the source and destination [clusters]{}’ interior states. For this particular problem, however, this misalignment imposes little error since the optimal policy is constant over large portions of the [cluster]{}. This is likely the reason why [cluster]{}5 was identified as a good candidate for transfer, despite alignment errors at the fine scale. In general, policy transfer may be relatively robust to correspondence errors at the fine scale since, by construction, the underlying Markov chain is fast mixing within [clusters]{}.\
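The interior-to-interior mapping used here amounts to copying policy entries along a common enumeration, as in the following sketch; the particular column-scan sorting convention shown is our own illustrative choice, and unmatched destination states receive the default action.

```python
def transfer_cluster_policy(src_interior, dst_interior, src_policy, default="up"):
    """Copy a source cluster's interior policy to a destination cluster along a
    shared enumeration of interior cells (sketch)."""
    # Both interiors are collections of (row, col) grid cells.  We take "column-scan
    # order, top-left to bottom-right" to mean sorting by column then row; the exact
    # convention is our assumption.
    src_order = sorted(src_interior, key=lambda rc: (rc[1], rc[0]))
    dst_order = sorted(dst_interior, key=lambda rc: (rc[1], rc[0]))
    dst_policy = {}
    for k, s in enumerate(dst_order):
        # Copy along the enumeration; fall back to the default action if the source
        # cluster has fewer interior states than the destination cluster.
        dst_policy[s] = src_policy[src_order[k]] if k < len(src_order) else default
    return dst_policy
```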
[*Transfer Problem Solution:*]{} Several multiscale algorithms listed in Table \[tab:expt-algs\] were evaluated, with the transferred (fine scale) policy serving as the initial policy for local policy iteration in the destination problem. In all cases, the initial coarse scale value function was obtained by solving the coarse MDP given by compression with respect to the pool of local policy guesses discussed in Section \[sec:cluster\_pols\], and in all experiments, the blending parameter appearing in policy updates was set to $\lambda=1$, thereby imposing a greedy policy updating convention.
Plots in Figure \[fig:gridworld-partial\] with $y$-axis labeled “Error” show the Euclidean distance between the value function after $t$ iterations ($x$-axis) of the given algorithm and the optimal value function for this problem[^15]. Inset plots detail boxed regions. See Table \[tab:expt-algs\] and surrounding discussion for a description of the algorithms and their labels. The default, fine-scale initial policy for experiments without transfer is an arbitrary deterministic policy that always chooses the `up` action.
Figures \[fig:gridworld-partial-transfer-pool\] and \[fig:gridworld-partial-notransfer-pool\] show the performance of various multiscale algorithms with and without fine scale policy transfer, respectively. Comparing the scale of the vertical axes across the two plots, transfer provides a good warm start for all algorithms. Because the initial coarse value function solves an MDP compressed with respect to a pool of initial policy guesses, we may expect that the initial coarse value function is trustworthy. For gridworld-type problems in particular, this is a reasonable expectation. Comparing curves within the figures, it is clear that the coarse initial data is indeed good: algorithms which leverage the initial condition as far as possible, and iterate inside [clusters]{}to convergence before updating the boundary (“MS-{`cc,co,cr`}” traces), perform better than the algorithms that only iterate over interiors once between boundary updates (“MS-{`oo,oc`}” traces). Algorithms updating the bottlenecks by recompression (“MS-`cr`” traces) are seen to converge to a suboptimal value function; however, the corresponding policy is optimal after iteration 29 in Figure \[fig:gridworld-partial-transfer-pool\] and after iteration 33 in Figure \[fig:gridworld-partial-notransfer-pool\]. MS-`cr` is the single best algorithm for solving the gridworld problem, both with and without transfer, and MS-`co` is the best algorithm not involving recompression.
Because we have only considered fine-scale policy transfer, we can compare to the performance of the canonical policy iteration algorithm given the transferred policy as the initial condition[^16]. Figures \[fig:gridworld-partial-compare\] and \[fig:gridworld-partial-compare-oo\] respectively compare policy iteration with and without transfer to multiscale algorithms `co` and `oo`, with and without transfer. The multiscale algorithm curves in these figures are the same as the corresponding curves in Figures \[fig:gridworld-partial-transfer-pool\] and \[fig:gridworld-partial-notransfer-pool\]. Comparing curves within plots, it is clear that the transferred policy also provides policy iteration with a helpful warm start. However, comparing the starting error of the multiscale transfer/no-transfer curves to that of the corresponding policy iteration curves, the multiscale algorithms are better able to take advantage of the transferred information. The improvement of the multiscale no-transfer curve over the policy iteration no-transfer curve reflects the improvement due both to the coarse information and to the multiscale approach. The two multiscale algorithms converge to optimality in about the same number of iterations; however, for this domain MS-`co` exhibits stronger non-monotonicity.
As discussed in Section \[sec:mdp\_solution\], there are two primary reasons why the multiscale algorithm can perform better than policy iteration: first, the multiscale algorithm starts with coarse knowledge of the fine solution, given by the solution to the compressed MDP; second, the multiscale approach can offer faster convergence, since convergence of local ([cluster]{}) policy iteration is governed by the faster mixing times within [clusters]{}rather than the slower mixing times across [clusters]{}. These are the reasons why Algorithm \[alg:hierarchy-solve\] can converge in fewer iterations. But an iteration-count comparison to vanilla policy iteration is not entirely fair because each iteration of the multiscale algorithm is significantly cheaper, as described in Section \[sec:ms-alg-complexity\]. Figure \[fig:gridworld-tr1-time\] shows elapsed wall time after $t$ iterations for policy iteration and MS-`oo` algorithms. Experiments were conducted on a dual Intel Xeon E5320 machine running 64-bit MATLAB release 2010b under Linux, with no parallelization or optimization beyond the default BLAS/Atlas multicore routines embedded into native MATLAB linear algebra calls. It is evident that the multiscale algorithm compares favorably to policy iteration, both in terms of total iterations and per-iteration scaling in time. We note that the ratio of the slopes in Figure \[fig:gridworld-tr1-time\] is specific to this example. In general this ratio is determined by the cardinality of the statespace and the number of identified [clusters]{}(statespace graph cuts), and will vary across problems.
Continuous Two-Task Cart-Pendulum Domain {#sec:pend-expt}
----------------------------------------
The cart-pendulum problem is a classic continuous control task where an inverted pendulum attached to a cart on a track must be balanced by applying force to the cart. We consider a slightly more complex domain in which there are two additions: the cart must be moved to a particular goal location, and at some portions of the track the pendulum is held fixed and does not need to be balanced, while in other regions the pendulum is free and must be balanced. In all simulations, the length of the pendulum is $0.5$m, its mass is $1$kg, and the mass of the cart is $5$kg. There are three actions, corresponding to applying horizontal forces of $-20$N, $0$N, or $+20$N to the cart. These control inputs are subsequently corrupted at each time step by i.i.d. additive, zero-mean Gaussian noise with standard deviation $\sigma=5$. Three state variables are used, $\{\theta, \dot{\theta}, x\}$, where $\theta$ is the angle of the pendulum from the vertical, $\dot{\theta}=\tfrac{d\theta}{dt}$, and $x$ is the horizontal position along a track spanning the interval $[-30\text{m},30\text{m}]$. If the pendulum falls over ($|\theta| = \pi/2$) or the end of the track is reached ($|x|=30$), a reward of $-1$ is received, and the simulation ends. Unless otherwise noted below, at any other state a reward of $0$ is received. Within this domain we will consider two different tasks:
[Default (${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$):]{}
: The goal of the default task is to move the cart to position $x=+20$ along the track, whereupon a reward of +100 is received, and the simulation ends. If the cart is at any position $x>0$, the pendulum is held fixed in the upright position $(\theta=0,\dot{\theta}=0)$, but the cart is free to move. Otherwise, if $x\leq 0$ the pendulum is able to move freely as usual and must be balanced. If a simulation is started at some initial position $x_0 < 0$, then two sub-tasks must be solved in order: (1) The pendulum must be balanced while moving right until reaching $x=0$ (“`balance`”), and (2) the pendulum is held upright but must be carried while moving right towards $x=20$ (“`carry`”).
[Transfer (${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$):]{}
:   The goal of the transfer task is the same: the cart must be moved to the position $x=+20$ along the track, whereupon a reward of $+100$ is received, and the simulation ends. The regions where the pendulum must be carried vs. balanced are swapped, however. If the cart is at any position $x<0$, the pendulum is held fixed at $(\theta=0,\dot{\theta}=0)$. Otherwise, if $x\geq 0$ the pendulum is able to move freely and must be balanced. Thus, for simulations starting at some initial position $x_0 < 0$, the two sub-tasks that must be solved occur in the opposite order (`carry`,`balance`): (1) Carry the pendulum while moving right until reaching $x=0$, and then (2) balance the pendulum while moving right.
The goal of transfer for this domain is to convey the ability to carry or balance the pendulum. The agent must still learn when to apply these skills, and in which order. Although this particular pair of problems involves only two sub-tasks, it serves to illustrate how transfer within the multiscale framework we have described can be achieved in the context of a continuous control problem. Furthermore, in contrast to the other example domains, the domain described here has a multiscale structure induced by changes in the [*intrinsic dimension*]{} of the statespace; diffusion geometry is less helpful here. In light of these differences, we will consider alternative choices for the statespace partitioning and graph matching below.\
[*Simulation and Statespace Discretization:*]{} For both problems above, fine scale MDPs were estimated from Monte-Carlo simulations. For each problem, we simulated the systems with the diffusion (uniform random) policy for 8000 episodes. The initial state[^17] for each episode was drawn according to $\theta,\dot{\theta}\sim\mathcal{U}[-0.2,0.2], x\sim\mathcal{U}[-20,20], \dot{x}\sim\mathcal{U}[-0.1,0.1]$, where $\mathcal{U}[a,b]$ denotes the uniform distribution on the interval $[a,b]$. The simulation stepsize was set to $\Delta t=0.1s$, giving a 10Hz control input.
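The equations of motion are not reproduced in the text; the sketch below assumes the standard cart-pole dynamics with Euler integration and the noisy control input described above, and omits the task-specific goal reward and the regions in which the pendulum is clamped upright.

```python
import numpy as np

G, M_CART, M_POLE, L, DT = 9.8, 5.0, 1.0, 0.5, 0.1   # parameters from the text; g and
                                                      # the use of L as the pole length
                                                      # parameter are our assumptions

def step(state, action, rng):
    """One 10 Hz simulation step under assumed standard cart-pole dynamics."""
    x, x_dot, th, th_dot = state
    force = {0: -20.0, 1: 0.0, 2: +20.0}[action] + rng.normal(0.0, 5.0)  # noisy input
    cos_t, sin_t = np.cos(th), np.sin(th)
    total_m = M_CART + M_POLE
    tmp = (force + M_POLE * L * th_dot**2 * sin_t) / total_m
    th_acc = (G * sin_t - cos_t * tmp) / (L * (4.0 / 3.0 - M_POLE * cos_t**2 / total_m))
    x_acc = tmp - M_POLE * L * th_acc * cos_t / total_m
    x, x_dot = x + DT * x_dot, x_dot + DT * x_acc
    th, th_dot = th + DT * th_dot, th_dot + DT * th_acc
    done = abs(th) >= np.pi / 2 or abs(x) >= 30.0     # fell over or ran off the track
    reward = -1.0 if done else 0.0                    # goal reward of the task not shown
    return (x, x_dot, th, th_dot), reward, done
```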
The resulting samples were normalized to have equal range in each coordinate, and clustered according to a $\delta$-net with $\delta$ chosen so that we obtained approximately 500 representative states from the pool of samples. These 500 states determined the discrete statespace on which the given MDP was defined. Absorbing states reached during the simulation were separately clustered and the resulting cluster representatives added to the previous 500 states. This ensured that the terminal boundaries of the problem were clearly represented in the discretized statespace. The above procedure was applied separately to each problem, and the final sizes of the statespaces were 502 and 515, of which 56 and 75 were terminal, for the default and transfer problems respectively. Figures \[fig:pend-statespace-default\] and \[fig:pend-statespace-transfer\] show plots of these discretized statespaces. Within each figure, the left-hand plot shows states as circles in $(x,\theta,\dot{\theta})$ coordinates, with large dark circles indicating terminal states. Right-hand plots graph states in $(x,\theta)$ coordinates.\
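A greedy $\delta$-net construction suffices for this step; the sketch below selects representatives so that every range-normalized sample lies within $\delta$ of some representative. The exact construction used for the experiments may differ.

```python
import numpy as np

def delta_net(samples, delta):
    """Greedy delta-net: every range-normalized sample ends up within `delta`
    of some representative.  `samples` is an (N, d) array; in practice delta is
    tuned so that roughly 500 representatives are returned."""
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    normed = (samples - lo) / np.where(hi > lo, hi - lo, 1.0)   # equal range per coordinate
    reps = []                                                   # indices of representatives
    for i, z in enumerate(normed):
        if all(np.linalg.norm(z - normed[j]) > delta for j in reps):
            reps.append(i)
    return samples[reps], reps
```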
[*MMDP Construction:*]{} Given the simulation samples and clusters defined above, transition and reward statistics between the $\delta$ regions claimed by the representative states were computed to estimate $P(s,a,s')$ and $R(s,a,s')$ for the respective fine scale MDPs. Absorbing boundaries were enforced by designating as absorbing any MDP state whose closest neighbor (out of all samples) was an absorbing sample. If any states were subsequently rendered unreachable, they were removed from the problem. Rewards for terminal states were similarly enforced by imposing the reward received at the neighbors nearest to the designated absorbing MDP states. Collectively these steps ensured that the absorbing rewards and states – the boundary conditions – were sufficiently captured in the translation from a continuous to a discrete problem.
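The core of this construction is a simple empirical estimate of $P(s,a,s')$ and $R(s,a,s')$ from the discretized samples, as sketched below; the absorbing-state and terminal-reward enforcement described above is applied afterwards and is not shown.

```python
import numpy as np

def estimate_mdp(transitions, assign, n_states, n_actions):
    """Empirical P(s,a,s') and R(s,a,s') from (raw_state, action, raw_next, reward)
    tuples.  `assign` maps a raw sample to the index of its representative state."""
    counts = np.zeros((n_states, n_actions, n_states))
    rew_sum = np.zeros((n_states, n_actions, n_states))
    for s_raw, a, s2_raw, r in transitions:
        s, s2 = assign(s_raw), assign(s2_raw)
        counts[s, a, s2] += 1
        rew_sum[s, a, s2] += r
    with np.errstate(invalid="ignore", divide="ignore"):
        P = counts / counts.sum(axis=2, keepdims=True)   # unvisited (s, a) rows stay zero
        R = rew_sum / counts                             # mean observed reward per (s, a, s')
    return np.nan_to_num(P), np.nan_to_num(R)
```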
To define a coarse scale MDP, we next partitioned the statespace into [clusters]{} and identified bottlenecks. For the problems described above, however, geometry is not as helpful, and we chose to pursue an approach different from the spectral clustering algorithm described in Section \[sec:clustering\]. Here, there is a natural partitioning of the statespace based on intrinsic dimension; balancing is a 3D task, while carrying (without balancing) is a 1D task. Thus, we detected where dimension changes in the statespace take place, and partitioned accordingly. The right-hand plots in Figures \[fig:pend-statespace-default\] and \[fig:pend-statespace-transfer\] respectively show the result of this partitioning for the default and transfer problems. States are colored according to membership in one of two resulting [clusters]{}. One can see that the [clusters]{}clearly correspond to sub-tasks `carry` and `balance`. Terminal states are marked by large gray circles, and non-absorbing bottlenecks are indicated with (magenta) `x`’s. As would be expected, in both problems the state just at the interface of the two [clusters]{}is a bottleneck. Some additional bottlenecks also result from the partitioning.
On the basis of the [clusters]{}and bottlenecks shown in Figures \[fig:pend-statespace-default\] and \[fig:pend-statespace-transfer\], the transfer problem was compressed once with respect to the diffusion policy[^18], assuming a fine scale uniform discount rate $\gamma=0.99$.\
[*Policy Transfer:*]{} We next established [cluster]{}and fine scale statespace graph correspondences in order to transfer the fine scale policy from [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}to [${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}. The action sets across problems are already in correspondence, so action mapping was not pursued[^19]. Since the statespaces partition into [clusters]{}on the basis of dimension, we simply matched [clusters]{}having the same dimension across problems. In this way, the `carry` and `balance` subtasks were easily placed into correspondence (respectively).
To construct a fine scale graph matching, we assumed that the statespace coordinate systems were already aligned across problems (that is, we assumed we knew which coordinate in [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}corresponds to the coordinate for $x$ in [${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}, and similarly for $\theta$ and $\dot{\theta}$). Letting $S_i$ denote the statespace of ${\ensuremath{\mathrm{MMDP}_{(i)}}\xspace}$, we then considered a state mapping $\phi:S_2\to S_1$ of the form $$\phi: (x,\theta,\dot{\theta})\mapsto \bigl(f(x),g(\theta),h(\dot{\theta})\bigr) .$$ The [cluster]{}correspondence previously established induces a natural mapping between $x$ coordinates; for instance, we simply mapped the $x$ interval $(-30,0]$ in [${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}onto the interval $[0,20)$ for [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}to define $f$ on states within the `carry` [cluster]{}of [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}. A similar mapping was constructed to define $f$ on states within the `balance` [cluster]{}. The coordinate maps $g,h$ were taken to be the identity, since $\theta,\dot{\theta}$ are directly comparable across problems. A statespace correspondence $\eta$ was then defined based on the nearest-neighbor Euclidean distance under $\phi$, assuming a neighbor search constrained to fall within matching [clusters]{}. If ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}\in{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ has been matched to ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}\in{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$, then $$\eta(s) = \arg\min_{s'\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace}} \|\mathsf{N}(s')-\mathsf{N}(\phi(s))\|_2,\qquad s\in{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace},$$ where $\mathsf{N}(\cdot)$ is the same coordinate-wise range normalization used in the state clustering steps above for [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}. Given the fine scale state mapping $\eta$, the optimal fine policy for the default problem [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}was transferred to [${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}by transferring separately within matched [clusters]{}(that is, between matched sub-tasks).\
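The correspondence $\eta$ can be realized as a constrained nearest-neighbor search, as in the following sketch; the arguments `f` (the piecewise map between $x$ intervals), `match` (the [cluster]{}correspondence), and `normalize` (the coordinate-wise range normalization) are assumed to be supplied as described above.

```python
import numpy as np

def make_eta(S1, S2, cluster_of_1, cluster_of_2, match, f, normalize):
    """Sketch of eta: S2 -> S1.  S1, S2 are lists of (x, theta, theta_dot) states;
    cluster_of_i gives each state's cluster label; match maps MMDP_(2) cluster
    labels to the corresponding MMDP_(1) labels."""
    eta = {}
    for i2, (x, th, th_dot) in enumerate(S2):
        c2 = cluster_of_2[i2]
        phi_s = np.array([f(x, c2), th, th_dot])     # phi = (f(x), g(theta), h(theta_dot)), g = h = identity
        candidates = [i1 for i1 in range(len(S1)) if cluster_of_1[i1] == match[c2]]
        dists = [np.linalg.norm(normalize(np.asarray(S1[i1])) - normalize(phi_s))
                 for i1 in candidates]
        eta[i2] = candidates[int(np.argmin(dists))]  # nearest neighbour within the matched cluster
    return eta
```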
[*Transfer Problem Solution:*]{} We applied several multiscale algorithms listed in Table \[tab:expt-algs\] (see surrounding text for details describing these algorithms and notation) to explore the impact of the transferred policy. We compare experiments with and without transfer, and also compare against ordinary policy iteration. Where transfer was considered, the initial fine scale policy was the transferred policy. The default, fine-scale initial policy for experiments without transfer is an arbitrary deterministic policy that always chooses the action which applies no force to the cart. In all multiscale experiments, the initial coarse value function was the value function solving a coarse MDP resulting from compression with respect to the diffusion policy. The blending parameter appearing in policy updates was set to $\lambda=1$.
In Figure \[fig:pendulum-iters\] we show plots detailing the Euclidean distance ($y$-axis, “Error”) between the value function after $t$ iterations ($x$-axis) of the given algorithm and the optimal fine value function for [${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}. Figures \[fig:pend-transfer\] and \[fig:pend-notransfer\] show the performance of the multiscale algorithms as well as policy iteration (“PI”), respectively with and without fine scale policy transfer. Here, error is plotted on a logarithmic scale. Comparing these two plots at $t=1$ iteration, transfer provides a warm start for all algorithms. Furthermore, the improvement seen in Figure \[fig:pend-transfer\] can be entirely attributed to transfer, since the multiscale algorithms converge as fast as or slower than policy iteration, and do not contribute any improvement in convergence rate in and of themselves for this problem. (Of course, the complexity of each iteration is still much smaller for the multiscale algorithms compared to global algorithms such as policy iteration.)
From Figures \[fig:pend-transfer\] and \[fig:pend-notransfer\] it is also evident that `MS-oc` is the single best multiscale algorithm, and has nearly the same rate of convergence as policy iteration (we note that after $t=3$ iterations the relative error has decreased to $<1\%$, and the problem is nearly solved). Figure \[fig:pend-compare-oc\] gives a more detailed view of the improvement transfer confers in the context of `MS-oc`. Error is plotted on a linear scale to make the difference more visible. Figure \[fig:pend-compare-pi\] compares policy iteration with and without the transferred policy as the starting point. Comparing Figure \[fig:pend-compare-oc\] to \[fig:pend-compare-pi\], the improvement due to transfer as well as the convergence rates are seen to be similar.
Algorithms involving recompression, `MS-cr` and `MS-or`, do not converge to optimality for this problem, and `MS-oo` and `MS-co` converge very slowly. That the `MS-oo` and `MS-co` algorithms converge slowly relative to `MS-oc` confirms the importance of the bottleneck near the origin, and by extension, the updates at this bottleneck. The fact that `MS-cc` converges more slowly than `MS-oc` suggests that either the coarse or the fine initial data contain some errors. Forcing too much of the initial coarse value function or fine scale policy results in poorer performance here. For this domain, however, we found that compression with respect to a pool of policies (Section \[sec:cluster\_pols\], simulations not shown) does not yield an initial coarse value function giving any better performance than the coarse value function derived from compression with respect to the diffusion policy. This is likely due to the fact that there are few non-absorbing bottlenecks, and only the bottleneck near the origin evidently plays a key role; the gradients obtained from either coarse value function contain comparable information in this case.
Playroom Domain {#sec:expts-playroom}
---------------
We consider a simplified version of the playroom domain introduced in [@Singh2004; @Barto:ICDL:04]. In our formulation, an agent interacts with four objects in a room: a ball, a bell, a music button and a light switch. The actions available to the agent are:
1. Look at a randomly selected object. (Succeeds with probability 1).
2. Place a marker on the object the agent is looking at.
3. Press the music button.
4. Kick the ball towards the marker.
5. Flip the light switch.
All actions except the first succeed with probability 0.75. In order to take the latter three actions, the agent must first be looking at the relevant object unless otherwise noted (see modifications below). If the ball is kicked into the bell, the bell rings for exactly one time period. If the light switch is flipped to the on position, the light stays on for exactly one time period, and then switches to the off state. The state is 5-dimensional, and consists of the following variables:
1. Object the agent is looking at.
2. Object the marker is currently placed on.
3. Music on/off.
4. Bell on/off.
5. Light on/off.
We will consider two pairs of problems within the playroom domain to illustrate compression and transfer. Each pair consists of a baseline task and a variation task. For a pair of tasks, the rules governing the environment will remain fixed; however, the goal of the tasks will change. The objective is thus to apply knowledge gained from solving one problem in the environment towards solving another problem in the same environment.
To build MDP models, the tasks were independently simulated for 1000 episodes of maximum length 1000 actions. Each episode ended upon reaching either the goal state or the maximum number of actions. Since the statespaces are small, samples were simply binned according to the underlying state variables. Transition probabilities were then estimated empirically from the samples. Rewards were set to $+10$ for transitions to the goal state, and to $-1$ for all other transitions. In all cases, we fixed the discount parameter to $\gamma=0.96$. Next, the spectral clustering procedure described in Section \[sec:clustering\] was applied, stopping after a single iteration. Note that one iteration of Algorithm \[alg:spectral\_clustering\] can potentially result in more than two parts, since a single cut can produce multiple disconnected subgraphs. We next compressed the tasks once, assuming the diffusion policy at the fine scale. For the baseline tasks from which information is transferred, the MDP hierarchies were solved to the optimal solution following Algorithm \[alg:hierarchy-solve\].
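Because the playroom statespace is a small product space, binning amounts to enumerating the possible variable combinations and indexing samples directly, as in the sketch below; note that not all enumerated combinations are necessarily reachable in a given task.

```python
from itertools import product

OBJECTS = ("ball", "bell", "music", "light")

# Enumerate all combinations of the five state variables (4 x 4 x 2 x 2 x 2 = 128)
# and index them; simulation samples are then binned by direct lookup.
STATES = list(product(OBJECTS, OBJECTS, (0, 1), (0, 1), (0, 1)))
INDEX = {s: i for i, s in enumerate(STATES)}

def bin_sample(look, marker, music, bell, light):
    """Map a raw simulated playroom state to its discrete state index (sketch)."""
    return INDEX[(look, marker, music, bell, light)]
```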
### Coarse Transfer Example {#sec:playroom-simple}
In this example we illustrate the transfer of a coarse scale potential operator from one task to another. We will refer to the problem supplying the potential operator as the [*default*]{} problem, and the problem into which we transfer information as the [*transfer*]{} problem. The default problem is assumed to be solved, in the sense that coarse MDPs have been compressed with respect to optimal policies, and the policy used to define the potential operator is optimal.
For the pair of playroom problems we will explore in this section, the light is turned on for one period by taking the “flip the light switch” action with the marker on the bell, while looking at the ball. These are the same conditions for ringing the bell, only now the agent can alternatively flip the light on. The tasks are as follows:
[Default (${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$):]{}
: The goal of the default task is to cause the bell to ring while the music is playing. Note that there is no action that will directly ring the bell. The agent must: look at the music button, press the music button, look at the bell, place the marker on the bell, look at the ball, and finally, kick the ball into the bell. Because the bell only rings for a single period, the music must be turned on before ringing the bell. In this task, [*the light switch does not play a role*]{}. Each episode begins with the agent looking at a random object, the marker on a random object, and all on/off objects in the off state.
[Transfer (${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$):]{}
: The goal is to flip the light switch to the on position while the music is playing. However, to reach the goal state the agent must still have the marker on the bell, and must be looking at the ball in order to take the “flip light switch” action. That is, the role of the light switch and ball kicking actions in solving the problem have been swapped. The agent must: look at the music button, turn on the music, look at the bell, place the marker on the bell, look at the ball, flip the light switch.
The difference between the default and transfer tasks is that the final action leading to the goal state has been switched and the underlying goal state itself has changed.
![[*Diffusion map embeddings of the statespaces for the playroom coarse transfer example of Section \[sec:playroom-simple\]: default (left), and transfer (right) tasks. See text for details.*]{}[]{data-label="fig:playroom-easy-diffmaps"}](figs/playroom-easy-diffmaps){width="\textwidth"}
Figure \[fig:playroom-easy-diffmaps\] shows a 2D diffusion map [@Coifman:PNAS:05] visualization of the statespace graphs for each of the two tasks. Even with two coordinates, the graphs appear nearly identical so that statespace graph matching should not be difficult. Bottlenecks are marked by ’`x`’, ordinary states with ’`o`’, and goal (terminal) states are boxed. Non-bottleneck states are colored according to membership in one of the two possible identified [clusters]{}. Edges connect pairs of states for which there is a non-zero transition probability given some action. Although state [*embeddings*]{} may be similar, the underlying states can be very different. As can be seen from the plots, the goal states in particular have similar diffusion map coordinates, but of course represent different underlying states of the environment. For both tasks, spectral clustering resulted in two [clusters]{}and seven bottlenecks.
To demonstrate coarse transfer from the default task to the transfer task, we will:
1. Match bottlenecks across problems by coarse scale statespace graph matching, following Section \[sec:graph-matching\].
2. Compute values for the transfer task’s bottlenecks by transferring the default task’s potential operator following Section \[sec:potential-xfer\], along the coarse statespace correspondence determined in Step (1). For this problem, the coarse action mapping determined following Section \[sec:policy-xfer\] is simply the canonical correspondence defined by executing the fine scale policy in a designated [cluster]{}. If action $a$ in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ means “execute the fine policy in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$”, then this action is mapped to an action $a'$ in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ corresponding to “execute the fine policy in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}^{\prime}$”, if [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}^{\prime}$ is matched to [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$ following Section \[sec:xfer-cluster-correspond\].
3. Push the coarse solution down to solve the transfer task at the fine scale following multiscale Algorithm \[alg:hierarchy-solve\] and variants thereof listed in Table \[tab:expt-algs\].
[*Bottleneck Matching:*]{} Any graph matching algorithm may be used. We used the procedure described in Section \[sec:graph-matching\], together with the matching algorithm of [@Huang:AISTATS:11] (using their freely available MATLAB implementation), to confirm that the matching for this problem can be done easily. The bottlenecks and their correspondences were as follows (matched bottlenecks are listed on the same row):
  Default                            Transfer
  ---------------------------------- ---------------------------------
  `(ball, bell, on, on, off)`        `(ball, bell, on, off, on)`
  `(music, ball, on, off, off)`      `(music, ball, on, off, off)`
  `(music, music, off, off, off)`    `(music, music, off, off, off)`
  `(music, bell, on, off, off)`      `(music, bell, on, off, off)`
  `(music, light, on, off, off)`     `(music, light, on, off, off)`
  `(bell, bell, off, off, off)`      `(bell, bell, off, off, off)`
  `(bell, bell, on, off, off)`       `(bell, bell, on, off, off)`

Bottleneck states are written as `(look, marker, music, bell, light)` tuples.
The goal states in each problem (top row) are successfully paired, while the remaining bottlenecks are identical across tasks.\
[*Transfer Problem Solution:*]{} We evaluated several multiscale algorithms listed in Table \[tab:expt-algs\] (see surrounding text for a description). For all algorithms, the blending parameter appearing in policy updates was set to $\lambda=1$. In all non-transfer experiments, the initial coarse scale value function was obtained by solving the coarse MDP given by compression with respect to the diffusion policy. The diffusion policy was chosen over the pool method in Section \[sec:cluster\_pols\] for simplicity, so that actions across the two problems could be placed into a natural correspondence, and so that error due to the action mapping would not be conflated with other sources of error. In all experiments, the initial fine scale policy was chosen arbitrarily to be the (deterministic) policy which always takes the `look` action.
Figure \[fig:playroom-simple\] compares performance among multiscale algorithms and to the canonical policy iteration method. As before, vertical axes labeled “Error” show the Euclidean distance between the value function after $t$ iterations ($x$-axis) of the given algorithm, and the optimal value function for this problem. Inset plots detail boxed regions. See Table \[tab:expt-algs\] for a description of the algorithms’ labels.
Figures \[fig:playroom-easy-transfer\] and \[fig:playroom-easy-notransfer\] show the performance of various multiscale algorithms with and without coarse scale (potential operator) transfer, respectively. Comparing the two plots, transfer provides a good warm start (lower starting error), and also affords faster convergence (fewer iterations) for all multiscale algorithms. For transfer experiments (Figure \[fig:playroom-easy-transfer\]), the coarse initial value function comes from transfer, and we may reasonably assume it is reliable. For this reason, algorithms which leverage the initial coarse information as far as possible, and iterate inside [clusters]{}to convergence before updating the boundary (“MS-{`cc,co,cr`}” traces), perform better than the algorithms that only iterate over interiors once between boundary updates (“MS-{`oo,oc,or`}” traces). In contrast, for the experiments which did not involve transfer (Figure \[fig:playroom-easy-notransfer\]), the MS-{`cc,co,cr`} algorithms exhibit slower convergence than the MS-{`oo,oc,or`} family. Since the initial coarse value function was derived from a fine scale diffusion policy in the no-transfer setting, we can conclude, as one would expect, that the initial coarse estimate was not entirely reliable. When there is potential operator transfer, algorithms MS-{`cc,co,cr`} are equally good for solving the problem, and MS-`cc` is the best algorithm not involving recompression. In the absence of transfer, MS-`or` performs best and converges faster than policy iteration, while MS-`oc` is the best algorithm not involving recompression. Although in this example the recompression algorithms (“MS-{`or,cr`}” traces) do not converge to the optimal value function, the sequence of policies does converge to the optimal policy. All of the multiscale algorithms reach optimal policies in fewer iterations than policy iteration in the case of transfer. When there is no transfer of information, and the initial coarse scale data is unreliable, then the multiscale algorithms not involving recompression can take more iterations to converge as compared to policy iteration. However, as mentioned previously, each iteration of the multiscale algorithms is substantially faster (involving local computations) than iterations of policy iteration, which is a global algorithm (see Section \[sec:grid\_expt\] for a discussion regarding this point).
In Figures \[fig:playroom-easy-compare-cc\] and \[fig:playroom-easy-compare-oc\] we compare algorithms MS-`cc` and MS-`oc`, with and without transfer, to the policy iteration algorithm on a linear scale. Policy iteration in all cases starts from the same initial fine scale policy as the multiscale algorithms. These plots more clearly demonstrate the advantage afforded by the coarse scale transfer: traces labeled “Transfer” confirm that there is both a warm start (lower starting error) and faster convergence (fewer iterations). The effect is also more pronounced in the case of MS-`cc`, since this algorithm maximally leverages the transferred information.
### Partial Policy Transfer Example {#sec:playroom-partial}
This example illustrates partial transfer of a policy at the fine scale of a two scale hierarchy. The pair of problems in this section differ from the previous section only in that to turn on the light, the agent must be looking at the light switch and the marker must be on the bell. Now, turning on the light differs from ringing the bell by two actions. The tasks are as follows:
[Default ([${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}):]{}
: The goal is to cause the bell to ring while the music is playing. The agent must look at the music button, press the music button, look at the bell, place the marker on the bell, look at the ball, and finally, kick the ball into the bell. The light switch does not play a role.
[Transfer ([${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}):]{}
: The goal is to flip the light switch to the on position while the music is playing. The agent must: look at the music button, turn on the music, look at the bell, place the marker on the bell, [*look at the light switch, flip the light switch*]{}.
As before, each episode begins with the agent looking at a random object, the marker on a random object, and all on/off objects in the off state.
![[*Diffusion map embeddings of the statespaces for the partial playroom transfer example of Section \[sec:playroom-partial\]: default (left), and transfer (right) tasks. See text for details.* ]{}[]{data-label="fig:playroom-partial-diffmaps"}](figs/playroom-partial-diffmaps){width="\textwidth"}
Although this pair of problems involves only one additional action change in the sequence leading up to the goal as compared to the previous section’s pair, the detected bottlenecks at the coarse scales cannot be easily matched. Figure \[fig:playroom-partial-diffmaps\] shows 2D diffusion map visualizations of the statespace graphs for the two tasks described in this section. Again, bottlenecks are marked by ‘`x`’, ordinary states with ‘`o`’, and goal states are boxed. In contrast to Figure \[fig:playroom-easy-diffmaps\], here it can be seen, by comparing the default task to the transfer task, that some bottlenecks become interior states and vice versa. Thus, direct matching of the coarse scale statespaces (assuming a two layer compression hierarchy) and subsequent coarse scale policy transfer is not an immediate possibility here.
What we might hope, however, is that we can transfer the portion of the fine scale policy dealing with states in which the music is off. It is only after the music is on that the optimal action sequences for the two tasks diverge, and when the music is off the immediate sub-goal for both tasks is to turn it on. Figure \[fig:playroom-partial-diffmaps\] confirms this possibility: interior states are colored according to the spectral partition as before, and the [clusters]{}in this case correspond to “music ON” vs. “music OFF” states[^20]. As can be seen from the plots, the sign of the (Fiedler) eigenvector $\varphi_2$ gives this partitioning. The procedures described in Section \[sec:transfer\] were next applied to the current pair of tasks in order to detect transferability and effect policy transfer.\
[*Cluster Correspondence:*]{} The [cluster]{}correspondence step (Section \[sec:xfer-cluster-correspond\]) correctly paired together “music OFF” and “music ON” [clusters]{}, respectively. The pairwise [cluster]{}distances were found to be:
                                                                 [${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}: Cluster 1   [${\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}$]{}: Cluster 2
  ------------------------------------------------------------ ------------------------------------------------------------ ------------------------------------------------------------
  [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}: Cluster 1    0.2850                                                         0.4115
  [${\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}$]{}: Cluster 2    0.3583                                                         0.3201
Here, Cluster 1 is the “music OFF” [cluster]{}and Cluster 2 is the “music ON” [cluster]{}.\
![[*Transfer detection. Value functions for Cluster 1 (left) and Cluster 2 (right), shown over the entire respective [cluster]{}interiors. States inside intersection regions are plotted with small open points, and large filled points identify all other states. The value functions were obtained using ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$’s optimal policy following Section \[sec:xfer-detect\], and are plotted in ascending sorted order according to the magnitude of $V_0$. See text for details.*]{}[]{data-label="fig:playroom-partial-detect"}](figs/playroom-partial-detect){width="\textwidth"}
[*Detecting Transferability:*]{} Next, the transfer detection algorithm described in Section \[sec:xfer-detect\] was applied separately to the pairs (Cluster 1 $\in{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$, Cluster 1 $\in{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$) and (Cluster 2 $\in{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$, Cluster 2 $\in{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$) following the [cluster]{}correspondence above. No statespace graph matching was done for this example, so we are effectively assuming that the roles of the states in each problem are the same. Assessing transferability is therefore important in order to determine if and where this assumption might hold. Figure \[fig:playroom-partial-detect\] shows the value functions calculated using ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$’s optimal policy (green box traces, labeled $V_0$ for scale $j=0$) and the diffusion policy (blue circle traces, labeled $V_0^u$). States inside ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace},{\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ [cluster]{}intersection regions are plotted with small open points, and large filled points identify all other states. The states (horizontal axis) are ordered according to the magnitude of $V_0^u$ for improved readability. The left-hand plot shows values for states in Cluster 1 (“music OFF”). The transferred policy clearly leads to more expected reward everywhere, and the two value functions follow a similar general trend. The right-hand plot in Figure \[fig:playroom-partial-detect\] shows value functions on the states in Cluster 2 (“music ON”). Here there is large disagreement on several states, suggesting that applying the optimal policy from ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ in Cluster 2 could be problematic. This is not surprising considering that the goal states for both tasks are either inside or connected to the respective problem’s Cluster 2. Indeed, the test given by Equation produces $T=-1.31$ in Cluster 2, while $T=+6.64$ in Cluster 1. We conclude that transfer in Cluster 1 should be attempted, but transfer in Cluster 2 should not be attempted in the absence of a better statespace mapping.\
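The quantities $V_0$ and $V_0^u$ are ordinary policy evaluations of the transferred and diffusion policies, respectively; the sketch below shows this standard building block. The detection statistic $T$ itself, given by the equation referenced in the text, is not reproduced here.

```python
import numpy as np

def evaluate_policy(P, R, policy, gamma=0.96):
    """Exact evaluation of a fixed deterministic policy: V = (I - gamma * P_pi)^{-1} r_pi.
    P and R have shape (n, A, n); `policy` is an integer array of length n."""
    n = P.shape[0]
    P_pi = P[np.arange(n), policy, :]                        # transition rows under pi
    r_pi = (P_pi * R[np.arange(n), policy, :]).sum(axis=1)   # expected one-step reward
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
```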
[*Policy Transfer:*]{} With confirmation that transfer within Cluster 1 could be helpful, the optimal policy from ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ was transferred to the overlapping interior ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ Cluster 1 states following the process described in Section \[sec:policy-xfer\]. As mentioned earlier, the identity correspondence between the relevant underlying states in each task was assumed, and the actions mapped accordingly. For this particular problem the actions do not change, but we followed the general mapping process anyhow since one does not generally know [*a priori*]{} whether actions need to be changed, or whether the representation of the two problems is such that actions are known by the same labels or not. The initial policy, post transfer, as well as the optimal policy on the interior of ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ Cluster 1 were as follows:
[Initial]{} 1 1 1 1 1 ? ? ? 2 2 2 1 1 1 1 1
------------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
[Optimal]{} 1 1 1 1 1 3 3 3 2 2 2 1 1 1 1 1
where [1]{} means “look at a random object”, [2]{} means “place the marker” and [3]{} means “press music button” (see action definitions at the top of Section \[sec:expts-playroom\]). Question marks in the initial policy indicate states which did not receive a policy from ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$. Cluster 1 in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$ contained 13 interior states, while Cluster 1 in ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$ contained 16; thus at most 13 policy entries were transferred to ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. Unknown states are given a default guess in the experiments below. As can be seen in the table above, for this task the transferred policy entries correctly matched the optimal policy for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$.\
[*Transfer Problem Solution:*]{} Figure \[fig:playroom-partial\] compares performance both with and without partial fine-scale policy transfer, across several different solution algorithms. The four curves in each plot correspond to different initial conditions and/or solution algorithms, and give the Euclidean distance (“Error”, vertical axis) between intermediate value functions computed after $t$ iterations (horizontal axis) of a given algorithm, and the true, optimal value function for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(2)}}\xspace}}\xspace}$. Traces labeled with the prefix “PI: No-Transfer” correspond to vanilla policy iteration on the global statespace, with no transfer information, while the traces labeled “PI: Transfer” correspond to policy iteration starting from the transferred fine scale policy. The “PI: No-Transfer” curve is the same in all plots, and the “PI: Transfer” curve is the same in all plots except Figure \[fig:play-partial-pool-full\]. Traces labeled “MS: Transfer” and ”MS: No-transfer” refer to different multiscale algorithms appearing in Table \[tab:expt-algs\], with and without initial coarse solutions obtained following Section \[sec:cluster\_pols\], and with and without fine-scale policy transfer (respectively). In the transfer experiments, the transferred policy served as the initial policy. In all cases, the blending parameter was set to $\lambda=1$ (no blending), giving a purely greedy policy update. Arrows in the plots mark the point at which the [*policy*]{} has converged to the optimal (fine) policy[^21]. The particular multiscale algorithms and conditions we tested in each plot are as follows:
Figure Initial Compression Multiscale Algorithm
----------------------------------------- --------------------- ----------------------
Figure \[fig:play-partial-pool-comp\] pool `cr`
Figure \[fig:play-partial-pool-nocomp\] pool `cc`
Figure \[fig:play-partial-pool-full\] pool `cr,cc`
Figure \[fig:play-partial-diff-comp\] diffusion `or`
Figure \[fig:play-partial-diff-nocomp\] diffusion `oc`
See Table \[tab:expt-algs\] and surrounding discussion for a description of the algorithms. The “Initial Compression” column above specifies whether the initial coarse value function solved a coarse MDP compressed with respect to a collection of policies as described in Section \[sec:cluster\_pols\] ([*“pool”*]{}), or with respect to the diffusion policy ([*“diffusion”*]{}). We assume that the initial coarse value function under the [*pool*]{} condition is trustworthy, and choose multiscale algorithms which iterate within [cluster]{}interiors to convergence before updating the boundary. For initial coarse value functions derived from the diffusion policy, we assume there could be errors and opt for multiscale algorithms which only update [cluster]{}interiors once before each boundary update.
Several conclusions (specific to this problem domain) may be drawn from these experiments:
1. The impact of the transferred policy is essentially only noticeable when used in conjunction with a good initial coarse guess (Figs. \[fig:play-partial-pool-comp\],\[fig:play-partial-pool-nocomp\]). Both algorithms (recompression/no-recompression) give similar performance. The recompression-based algorithm does not converge to the optimal value function, although the corresponding policy sequence does converge to the optimal policy.
2. For the canonical policy iteration algorithm, using the transferred policy as the initial condition gives only a slight advantage. However, policy iteration is far less robust to errors in the transferred policy than the family of multiscale algorithms. Figure \[fig:play-partial-pool-full\] shows the result of transferring the [*entire*]{} fine policy for ${\ensuremath{{\ensuremath{\mathrm{MMDP}_{(1)}}\xspace}}\xspace}$, even in the [cluster]{}where transfer detection suggested transfer could be error prone. The multiscale algorithm with recompression is labeled “`MS-c`”, and without recompression “`MS-nc`”. For this particular problem, the multiscale algorithms are tolerant of these errors, and convergence is in fact faster. (We emphasize, however, that this may not at all be true for other problems.) Policy iteration, however, suffers, and takes additional time to correct errors in the second (error-prone) [cluster]{}’s policy.
3. When the diffusion policy is used for compression (Figures \[fig:play-partial-diff-comp\],\[fig:play-partial-diff-nocomp\]), transfer has little impact. Furthermore, recompression during the solution process is necessary to quickly correct errors imposed by a poor initial coarse value function. The algorithm involving local bottleneck updates (Figure \[fig:play-partial-diff-nocomp\]) requires more iterations to converge to optimality, as compared to the other algorithms.
4. Ignoring cost per iteration, most of the improvement of the multiscale algorithms over policy iteration comes from the algorithms themselves rather than the transferred information. However, in large complex domains where [clusters]{}may themselves be complex tasks, even small transfer improvements may lead to substantial savings.
Related Work {#sec:prior_work}
============
Our work has many points of contact with the literature, and we do not attempt a comprehensive comparison. We highlight only the most important similarities and differences.
There are several overarching themes which distinguish our work from much of the literature:
- Multiscale structure: Multiscale structure is a unifying organizational principle in our work. Our approach enforces a strong multiscale decomposition of tasks into subtasks, such that each scale may be treated independently of the others. Hierarchies of arbitrary depth may be easily constructed. Many approaches ultimately require some form of “flattening” (see for instance HAMs [@ParrRussell], options [@SuttonOptions]), or do not generalize well beyond a single layer of abstraction.
- Multiscale consistency: Coarse scales are consistent with finer scales “in the mean”, and each scale is a separate MDP. Semi-Markov decision processes (SMDPs) [@Puterman], for example, do not share this notion of consistency.
- Computational efficiency: The multiscale structure we impose localizes computation and improves conditioning. The computational complexity of learning and planning can be significantly reduced, both in time and in space.
- Coupling between learning, planning, and structure discovery: Our approach combines learning of macro-actions, multiscale planning, and inference of multiscale structure in a fundamental way. Many existing approaches focus on only one of these aspects, resulting in a disconnect that leads to inefficiency and unresolved challenges.
- Transfer: MMDPs support systematic, scale-independent transfer of knowledge between tasks. Knowledge may be exchanged in the form of potential operators, policies, or value functions.
- Generality: Different statespace partitioning and bottleneck detection algorithms may be used. Compression may be carried out with respect to any policy, or collection of locally defined policies. Different value function representations and off the shelf algorithms for solving MDPs may be chosen. Key MMDP quantities may be computed analytically (if the model or an estimate of the model is known), or by Monte Carlo simulation. We do not assume specific choices of algorithms where possible. On the other hand, MMDPs are [*more constrained*]{} than SMDPs. SMDPs are very general objects, but this generality comes at the expense of conceptual and computational complexity.
A more detailed comparison to specific work in the literature follows below.
Hierarchical Reinforcement Learning
-----------------------------------
Empirically, “standard” approaches to learning within flat problem spaces are often slow, scale poorly, and do not lend themselves well to the inclusion of prior knowledge. The hierarchical reinforcement learning (HRL) literature (see [@BartoHRLReview] for a review, and [@SuttonOptions; @DietterichHRL; @ParrRussell] in particular) has long sought to address these challenges by incorporating hierarchy into the domain and into the learning process. The essential goal of the hierarchical learning literature is to divide-and-conquer, paralleling similar strategies for coping with complexity found throughout biology and neuroscience. The notion of state abstraction has been considered extensively. In most cases, coarse or “macro” actions are broadly defined as temporally extended sequences of primitive actions. The pioneering “options” framework [@SuttonOptions] proposed a means to solve reinforcement learning problems, given pre-specified collections of such macro-actions. The options framework is closely related to SMDPs, see [@Puterman; @DasMahadevan99], and SMDPs have more generally become a modeling formalism of choice for HRL. If a set of options has been pre-specified, the problem of learning an optimal policy over those options is an SMDP. Many of the hierarchy discovery algorithms surveyed below construct options, and then employ SMDP learning techniques. For this reason we devote special attention to the options framework, and discuss how it relates to the present work.
### Relation to Options and SMDPs
In the options framework [@SuttonOptions], a hierarchical value function is used to define a flat, global policy. One level of abstraction is typically considered: Options are policies accompanied by a specification as to when an option can be invoked (“initiation set”), and when it should end, once triggered (“termination condition”). Some elements of our work can be described in the language of options; however, there are important differences distinguishing our framework from that of options/SMDPs.
Options and SMDPs are general approaches, and some of our design decisions may be viewed as a specialization of certain aspects of options/SMDPs. Other aspects of our work may be viewed as more substantial departures from the options framework. These differences confer computational, transfer and multiscale advantages, and promote learning of the macro-actions themselves: MMDPs are constructed specifically with these objectives in mind. The options framework does not consider learning of the options, and is primarily designed for [*planning*]{} with pre-specified macro-actions. This disconnect between learning and planning leads to inefficiencies, in terms of both computation and exploration, when SMDPs are combined with separate schemes for learning options. How to learn macro-actions within the options framework is a major challenge that has received considerable attention in the literature, yet there is no clear consensus as to how coarse rewards and discounts should be set locally in order to efficiently learn a macro-action that can be combined with others in a consistent way. Fundamentally, we take a holistic point of view, and couple learning of macro-actions and planning with macro-actions together. The hierarchical structure we consider is also tightly coupled with computation and conditioning. Solving for macro-actions takes advantage of improved local mixing times at finer scales and fast mixing globally at coarse scales. At any scale, learning is localized, in terms of space and computation, by way of the multiscale decomposition. The local learning is fast, and consistent globally, due to information feeding in from distant parts of the statespace through coarse scale solutions. Our framework also more easily accommodates multiscale representations of arbitrary depth. Each scale is an MDP and may be solved using any algorithm. In contrast to options, the introduction of additional scales does not necessarily add complexity to the planning phase (indeed, it usually reduces it). As we will discuss below, that each scale is an MDP also supports further transfer opportunities.
We point out a few other salient commonalities and differences:
- Our coarse actions, or “macro-actions”, are temporally extended sequences of actions at the previous (finer) scale, and involve executing a policy within a particular [cluster]{}. In the language of options, the initiation set for a coarse action is any bottleneck connected to a [cluster]{} on which the action’s policy is defined, and the policy terminates whenever a bottleneck is reached. Our coarse actions are always Markov (not semi-Markov), and the termination condition depends only on the current state. Furthermore, hierarchies of coarse actions do not lead to semi-Markov options in our framework – they always remain Markov[^22].
However, options may only direct the agent in one direction, and may terminate in the initiation set of at most one other successor option. To get around these limitations, new, separate options must be defined, increasing the problem’s branching factor, and care must be taken to avoid loops (if so desired). An MMDP coarse action leaves the “direction” of the action undecided: the same fine policy may be executed starting in several bottleneck states, and may take the agent in one of several directions until arriving at one of multiple destinations from which different successor coarse actions may be taken. In the context of MMDPs, if one wanted to be able to transfer policies on the same cluster which guide an agent in different particular directions, separate local policies would need to be stored in the “database” of solved tasks. However, we would only need to transfer and plan with [*one*]{} of them.
- A strength of the options framework is that multiple related queries, or tasks, may be solved essentially within the same SMDP. However, the tasks must be closely related in specific ways (e.g. tasks differing only in the goal state), and this strength comes at the expense of ignoring problem-specific information when one only wants to solve one problem. Our approach to the construction of MMDPs differs in that while we assume a particular problem when building a decomposition, we are able to consider a broader set of transfer possibilities.
- Bottlenecks and partitioning do not explicitly enter the picture in options or SMDPs. Options may be defined on any subset of the statespace, and in applications may often take the form of a macro-action which directs the agent to an intermediate goal state starting from [*any*]{} state in a (possibly large) neighborhood. For example, an option may direct a robot to a hallway from any state in a room. We constrain our “initiation” and “termination” sets to be bottleneck states; however, this means that learning policies at coarse scales is fast, and can be carried out completely independently of other scales. Coarse scale learning involves only the bottleneck states, giving a drastically reduced computational complexity. Provided the partitioning of a scale is well chosen, this construction allows one to capitalize on improved mixing times to accelerate convergence.
- MMDPs are a representation for MDPs: we cannot solve problems that cannot be phrased as an MDP (i.e. problems whose solutions require non-Markov policies). A policy solving an MMDP, at any scale, is a Markov policy. SMDPs may in general have non-Markov solutions (for example, policies which depend on which option is currently being executed).
- Multiscaleness: The options framework is arguably, at its core, a flat method. In general, options may reference other options, but essentially any and all options may be made available in a given state. In order to plan with options, one needs to know which option is best to execute, and at which scale, for the given task. To choose an option at a given “scale”, options at other scales must be ruled out. In this sense, planning with options is “bottom-up”, while our approach may be described as “top-down”. The bottom-up approach is potentially problematic for two reasons: (1) A domain expert has to specify coarse policies. Determining a multiscale collection of policies by hand can be difficult, if not intractable. (2) To learn a fine policy solving the problem, potentially all options across all scales need to be considered at once. This is accomplished by effectively flattening the hierarchy: a state-option $Q$-function would need to have entries for every option which might be initiated from a given state, across all scales. Scale is lost in the sense that the value at a state may only be defined as the value at that state under a flat policy. Without significant user guidance and tailoring of the SMDP, this cannot be avoided. Either the burden on the user is high, or the computational burden is high.
Even if one repeatedly defines SMDPs on top of each other, a number of difficulties arise: (1) The resulting transition probability kernels and reward functions are not consistent across layers. For example, the transition kernel at a coarse scale is not the transition kernel of the embedded Markov chain observed only on initiation/termination/goal states. (2) An SMDP could have a disconnected statespace at the coarse layer, and it is not clear how this problem can be resolved. (3) The lack of isolation of scales necessarily implies an increase in the number of actions, and thus an increase in the branching factor of the problem.
For these reasons, the extension of options to multiple levels may not be easily carried out, and is not often seen in the literature. By contrast, our approach is strongly multiscale. We impose a specific, stronger form of hierarchy, that abstracts each scale away from the others. At a given scale, coarse actions may refer to fine actions, [*but only through the fixed multiscale organization*]{}, and they cannot invoke coarse actions at or above the current scale. The multiscale structure is always enforced. This leads to significant computational savings, and a form of consistency of problems and their solutions across scales.
- Options/SMDPs define a coarse scale transition probability law which [*combines*]{} transition probabilities between a macro’s starting/ending states, the trajectory (sojourn) length distributions, and uniform discounting (multi-time model). This definition suffices if the options are user-specified, and one is only interested in a single layer of abstraction. The definition is problematic, however, for multi-layer hierarchies, non-uniform discounting, transfer learning, and learning of the options themselves. In our construction, coarse transition probabilities and discount factors are computed separately. One advantageous consequence of this is that path length distributions do not need to be estimated or represented explicitly. But it is also important to keep these quantities separate for the following reasons: (1) Transfer (detection, transfer of potential operators, and partial policy transfer); (2) One needs to modify the transition probability tensor in order to restrict to a local [cluster]{}, and solve localized sub-problems efficiently; (3) It is important to preserve multiscale consistency: a compressed MDP is an MDP that is consistent with the fine scale in the mean. This is not true of the quantities defined in the context of SMDPs. If there are non-uniform discounts, then it is the product of the discounts over trajectories that must be considered rather than a constant raised to the path length (see Section \[sec:exp\_path\_lens\] for a discussion concerning this distinction).
- Options/SMDPs define a coarse reward function which does not depend on the termination state; aggregate rewards are pre-averaged over all possible ending states. In the context of learning and transfer, this choice can lead to serious errors. Consider the effect of averaging over paths ending at the starting state (small reward) with paths spanning a large [cluster]{} (large rewards). It is likely that with such a system of rewards, coarse solutions would contain little information for solving at the fine scale. In any event, this definition does not yield multiscale consistency in the sense discussed above. MMDPs keep track of the coarse reward for each possible starting and ending state, and these quantities are approximated by analytically computed moments given a model or by Monte-Carlo estimates. Space requirements are small, however, because only bottlenecks, of which there are few, can be termination states for a macro-action. This convention also ensures multiscale consistency.
### Other HRL Approaches
The MAXQ algorithm [@DietterichHRL] is a method for learning a collection of policies at each layer of a programmer-specified hierarchy of subroutines, using a form of semi-Markov Q-learning. Two types of optimality are discussed: recursive optimality, where each sub-problem’s policy is locally optimal, but the overall solution may not be optimal, and hierarchical optimality, where the global policy is optimal given the constraints the hierarchy imposes. We consider optimality with respect to the true, global optimum in the set of all stationary, Markov policies, and have discussed algorithms for solving MMDPs above which converge to optimal policies at the finest scale. More recent work [@Kaelbling:CRA:11] has begun to explore hierarchy as a means for reducing the amount of search that is required to learn and/or construct a hierarchical plan. In this paper, we do not consider using a (partial) MMDP to speed up exploration and subsequent elaboration of itself, although this is an interesting avenue we leave for future work.
Hierarchical learning in partially observable domains has also been considered [@TheoKael:NIPS:04; @HeBrunskill:JAIR:11; @Pineau:UAI:03; @Kurniawati:09], but is a less developed topic. [@HeBrunskill:JAIR:11] is noteworthy in that they consider online learning with macro-actions in POMDPs, with a particular emphasis on scalability, although macro-actions must be pre-specified by the user, and are constrained to be open-loop sequences. Our coarse actions may also be seen as open-loop controllers, but they only end when a bottleneck state is reached.
### Hierarchy Discovery
Hierarchy is important in the literature referenced above, but the meaning of the hierarchy and its geometric interpretation is often detached from the solution process. If the hierarchy must be provided by a domain expert, the solution algorithms can only make limited assumptions about what the user has provided. In this paper, [*structure discovery*]{} and [*learning*]{} are intimately connected. Hierarchies are (automatically) defined based on the geometry and goals of the problem, and this is exploited to achieve locality of the computations and scalability, and to create opportunities for transfer. Much of the early HRL research primarily sought to define algorithms for learning given user-specified hierarchies, and only later did researchers consider automatic discovery and characterization of task hierarchies. As a result, many approaches to structure discovery appear to rest on top of generic HRL frameworks (such as options), and lack synergy with the underlying learning process. Within this collection of approaches, there are however several ideas which overlap with portions of our work. The literature concerning automatic detection of macro-actions and/or hierarchies can be roughly organized into three categories: (1) Approaches which aggregate states based on statistics observed at individual states during simulation, (2) approaches involving graph-based clustering/analysis, and (3) approaches based on “experimentation” or demonstration trajectories not necessarily directly related to the task to be solved.
The work of [@StollePrecup:02] recognizes a form of “bottleneck”, there defined to be states which are visited frequently. The authors propose a heuristic algorithm which, given simulated trajectories, takes the top most frequently visited states as bottlenecks, and uses these states to define options. Others have attempted to capture similar properties. The HEXQ [@Hengst02] and VISA [@Jonsson06] algorithms group states based on the frequency with which their values change. [@Marthi07] perform a direct greedy search for hierarchical policies consistent with sample trajectories and an analysis of changes in states’ values. However, a commonly occurring problem with these approaches is that they are computationally intensive and can require large trajectory samples. Exploration is global in these approaches, which could be problematic for large, complex domains. Both VISA and the HI-MAT approach of [@Mehta08] (which automatically creates MAXQ hierarchies) require estimating and analyzing a dynamic Bayesian network in order to determine clusters of mutually relevant states. Approaches which assume DBN transition models allow for compact representations, but also lead to a solution cost that is exponential in the size of the representation unless specialized approximate algorithms are used. In addition, some of these approaches do not maintain a principled or consistent notion of scale.
Our intuitive notion of a bottleneck state in Algorithm \[alg:spectral\_clustering\] is close in spirit to several other graph-theoretic definitions appearing in the literature, although we emphasize that we have not designed our approach to HRL around any single characterization of bottlenecks. Several graph clustering algorithms are proposed in [@MenacheMannor:02; @MannorMenache:04] for identifying subgoals (online) to accelerate Q-Learning. The latter reference employs a form of local spectral clustering (local because good bottlenecks may not always be part of a global cut) and is related to the work of [@OsentoskiMahadevan:10], while the former proposes a clustering method that can take advantage of the current value function estimate. In these papers a weighted graph is periodically built from observed state transitions and then cut into clusters. Options are learned so that neighboring pieces of the graph can reach each other. Although policies on clusters (options) are computed separately, they are computed on the basis of an artificial reward, and can therefore be incorrect. The work of [@SimsekBarto:ICML:05] is similar, and considers local spectral clustering on the basis of a limited, recent collection of trajectory samples. The statespace is successively explored and bottlenecks are identified without having to perform global computations. Options corresponding to the clusters resulting from graph cuts are again learned, but may also be incorrect, so re-learning of the options is prescribed. Another approach, distinct from spectral clustering, is the identification of bottleneck states based on “betweenness”, proposed by [@SimsekBarto:NIPS:08]. The idea follows from the observation that bottlenecks may not always be identifiable based on node connectivity. Bottlenecks are defined to be states through which a large fraction of graph geodesics must pass. States within a small neighborhood with comparatively high betweenness are identified as bottlenecks, and options are defined on the basis of these subgoals. Betweenness is a natural alternative to diffusion based clustering techniques, but can give substantially different results depending on the geometry of the statespace and how the graph weights are chosen. Each of the clustering methods described may also be used within our framework to choose bottlenecks and partition the statespace, and in online exploration scenarios the current statespace graph may be re-estimated (and the MDP re-compressed) as desired.
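To make the graph-theoretic flavor of these clustering approaches concrete, the following minimal sketch performs a two-way spectral cut of a symmetric statespace adjacency matrix and flags states incident to cut-crossing edges as candidate bottlenecks. It is only an illustration of the general idea, not a reproduction of Algorithm \[alg:spectral\_clustering\] or of any of the cited methods; the function name, the median thresholding rule, and the toy graph are our own illustrative choices.

```python
import numpy as np

def spectral_bottlenecks(W):
    """Two-way spectral cut of a symmetric adjacency matrix W; returns cluster
    labels and the states incident to edges crossing the cut (illustrative)."""
    d = np.maximum(W.sum(axis=1), 1e-12)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(W.shape[0]) - D_isqrt @ W @ D_isqrt   # normalized graph Laplacian
    _, evecs = np.linalg.eigh(L)
    fiedler = evecs[:, 1]                            # eigenvector of the 2nd-smallest eigenvalue
    labels = (fiedler >= np.median(fiedler)).astype(int)
    crossing = (W > 0) & (labels[:, None] != labels[None, :])
    return labels, np.unique(np.nonzero(crossing)[0])

# Toy statespace: two 4-state cliques joined by the single edge (3, 4).
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
W[3, 4] = W[4, 3] = 1.0
labels, bottlenecks = spectral_bottlenecks(W)   # expect bottlenecks == [3, 4]
```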
The third category of hierarchy discovery research may be represented by the intrinsic motivation work of [@Singh2004; @Barto:ICDL:04], and the skill discovery method of [@Konidaris:NIPS:09]. A significant difference between these references and the work described here is that, while coarse structure is used to decompose the fine scale problem into more manageable pieces, there is no explicit independent coarse problem that is solved and pushed downwards in order to guide/accelerate the solution at the fine scale, and only one level of abstraction is considered. In [@Singh2004], an agent discovers skills by experimenting within the domain and receiving rewards for actions which lead to novel, salient, or intrinsically interesting outcomes. The learned skills then serve as options within an SMDP framework, to learn policies that can accomplish extrinsically rewarded tasks. The authors consider two-layer hierarchies (one level of abstraction), and require manual specification as to which events are salient and how they are rewarded. This work differs from ours in that the sub-tasks we learn are tailored to a particular goal. In our development, nothing is required from the user and learning may be faster, but sub-task solutions may not transfer as easily to other problems. One possible way around this would be to solve multiple MMDPs corresponding to different objectives within the same domain, and store the [cluster]{}-specific solutions in a database.
The skill chaining method proposed by [@Konidaris:NIPS:09] seeks to learn skills in continuous domains by working backwards from a goal state, and is particularly useful under a [*query paradigm*]{} in which multiple, closely related problems need to be solved (for instance, involving different goal states). The skill discovery process is local in the sense that only a neighborhood around the previous milestone is considered in defining a new skill. The notion of chaining together skills is similar to the constraint we have imposed stipulating that coarse actions can only be taken in bottleneck states: the initiation/terminal sets of a given skill can only be initiation and/or terminal sets of other skills. In a subsequent paper [@Konidaris:IJRR:12], the authors extend skill chains to trees of skills. However, the trees refer to the arrangement of skills within a single level of abstraction, and do not refer to a tree of scales. Skill chains and trees may be described as effectively imposing a localized representation for the value function, driven by the geometry of the problem. By using localized basis functions to represent the value function in a flat model [@MahadevanMaggioni:JMLR:07; @OsentoskiMahadevan:10], it is likely that similar structure could be captured. On the basis of these observations, it is possible that the recursive spectral clustering algorithm described in Section \[sec:clustering\] can lead to a hierarchy of sub-tasks respecting geometric properties similar to those of the skill trees in [@Konidaris:IJRR:12] but at more than one level of abstraction.
The references above propose different heuristics for defining how and when options (or other sub-goals) are learned and/or updated. However, a major challenge is to define an isolated problem whose solution can be obtained efficiently (i.e. locally), but is still consistent with other macro-actions. In several of the approaches above, the rewards used to learn the subtasks and the values fixed at bottleneck states may not be compatible, in which case policies can conflict (with respect to a designated goal) across subtasks. The multiscale framework described here provides a principled way to learn consistent local policies, given a partition of the statespace into subtasks. The reward function, boundary values, and discounts are determined automatically in our setting. Consistency is also maintained across scales so that the process may be readily repeated as necessary. If a single macro-action is itself a large problem, it is not clear how the methods above can be extended to another scale because of scale-dependent assumptions. The learning algorithms we have proposed are independent of the scale at which they are applied. Finally, it is often the case that non-standard algorithms are required to solve a given problem, once a set of subtasks is identified. Our approach provides a hierarchy of ordinary MDPs which can be solved using standard techniques. A more integrated approach to HRL with automatic construction of the hierarchy is the recent work of [@BarryKael:IJCAI:11]. The DetH\* algorithm proposed in [@BarryKael:IJCAI:11] shares some common themes with our work (and we like the name), although there are also pointed differences. DetH\* uses a type of coarse policy (a deterministic map between coarse states) to decompose the fine scale problem into sub-problems (extending beyond two layers is not discussed), but does not determine a coarse value function that can be used to solve the independent sub-problems in a manner which maintains global consistency. A type of local optimality is, however, guaranteed. The end product at the coarse scale is not an MDP, and the complexity of the proposed coarse solution algorithm depends on a quantity similar to the worst case fine scale cluster diameter. The multiscale MDPs proposed here are hierarchies of independent, self-contained MDPs. DetH\* clusters the statespace by trading off the size of the clusters against a reachability condition. Heuristics are applied to ensure that clusters do not become too large or too small. The heuristics indirectly attempt to settle on an appropriate scale, but it is not immediately clear how geometry enters the picture. “Too-large” appears to mainly be a computational consideration (i.e. the number of states). Our perspective is that it is perfectly fine, indeed desirable, for clusters to be large – depending on the problem geometry or other notion of intrinsic complexity – if, for example, the Markov chain associated to a policy is fast mixing within the cluster. More precisely, the number of states in a cluster is not a computational problem so much as [*conditioning*]{} is. The approach taken in our work seeks to find the right scale and partitioning based directly on local geometry, and by extension, conditioning. Another difference worth mentioning is that DetH\* defines coarse states to be representatives of entire fine scale clusters, and goal states are also lumped into a single macro goal state. In our framework the coarse states are bottlenecks, and are elements of the fine scale statespace.
Because the coarse states in DetH\* are sets, the authors define a cost function on coarse states based on averages of shortest paths between clusters. The extent to which the underlying dynamics can be captured is not clear, and the coarse quantities are not consistent with the fine scale in a precise, probabilistic sense invoking an underlying Markov chain. In addition, DetH\* determines a hierarchy on the basis of shortest paths, and cannot consider a coarsening with respect to a particular policy. Transitions between coarse states are deterministic, whereas we construct a coarse problem which is itself an MDP, having its own transition kernel. This allows for greater generality and encompasses a richer set of problems. Finally, [@BarryKael:IJCAI:11] and the other references above do not consider transfer within their respective hierarchical frameworks.
Transfer Learning {#transfer-learning}
-----------------
A good overview of the current landscape for transfer in reinforcement learning is [@Taylor09]; we provide only a brief summary of some efforts related to ours. The literature discussed up to this point has been primarily concerned with discovering state abstractions for a specific problem, and transfer may be possible only to problems in the same domain, if at all. The approach taken in [@Konidaris:JMLR:12; @Konidaris:IJCAI:07] posits the existence of shared features across related tasks, and discusses transfer of value functions defined from those features (for example, the features may be coefficients in a basis function expansion). The shared features constitute a representation which simultaneously captures relatedness among tasks and solves any relevant correspondence problems. Transferred value functions serve as shaping functions for learning new tasks. Reward systems are the same across tasks. In some instances, options are transferred [@Konidaris:IJCAI:07], again given a suitable feature space, but the option transfer is limited to a single level of abstraction. The authors do not discuss how to identify a suitable feature space (“agent space”) to carry out transfer, although this is a crucial element and determines what kind of transfer is possible, and the degree to which transfer helps in new tasks. The earlier work of [@Guestrin:ICJAI:03] is related to that of [@Konidaris:JMLR:12] in that sets of similar problems are defined from the ground-up using a common, class-based formalism. In the language of [@Konidaris:JMLR:12], [@Guestrin:ICJAI:03] specify problems with a specific, predetermined feature space in mind, so that value functions defined on the features immediately transfer to tasks within the class, by construction. The approach is not, however, hierarchical.
[@FergusonMahadevan:06] (also [@Tsao:AAMAS:08]) do not consider transfer in a hierarchical setting, but take an approach that can also be incorporated into our development. In their approach, eigenfunctions of a graph Laplacian describing the domain are transferred (the graph is constructed from trajectories). These eigenfunctions (or “proto-value functions”) serve as basis functions for defining a value function over the statespace [@MahadevanMaggioni:JMLR:07]. Transfer is considered among tasks with identical domains but varied reward functions, or among tasks with fixed reward function and geometry, but scaled statespaces. Since subtasks (coarse or fine) within an MMDP are themselves MDPs, one may also select various basis functions on which to expand the local value functions specific to [clusters]{} in our framework. This extension of the basic MMDP solution methodology we have described (Section \[sec:mdp\_solution\]) can be applied at any scale, based on graph Laplacians derived either from simulations or from a transition model $P$. The resulting proto-value functions may be stored as part of the solution to a sub-task in the library of transferable objects, and transferred when appropriate.
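As an indication of how such basis expansions could be plugged into a local solver, the sketch below computes a few low-order eigenvectors of a combinatorial graph Laplacian and fits a value function to them by least squares. The choice of the combinatorial (rather than normalized) Laplacian, the plain least-squares fit, and the function names are illustrative assumptions on our part, not prescriptions taken from the cited work.

```python
import numpy as np

def laplacian_basis(W, k):
    """First k eigenvectors of the combinatorial Laplacian L = D - W of a
    symmetric statespace graph, used as smooth basis functions (sketch)."""
    L = np.diag(W.sum(axis=1)) - W
    _, evecs = np.linalg.eigh(L)
    return evecs[:, :k]                 # shape (num_states, k)

def fit_values(Phi, observed_idx, observed_vals):
    """Least-squares fit of basis coefficients to values observed on a subset
    of states; returns the interpolated value function on all states."""
    coeffs, *_ = np.linalg.lstsq(Phi[observed_idx], observed_vals, rcond=None)
    return Phi @ coeffs
```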
Multiscale MDPs, as defined in this paper, contrast with the approaches above in that transfer may be pursued at any scale (out of many), and the procedure for carrying out transfer is the same regardless of scale. We support transfer of coarse or fine scale knowledge, or combinations thereof. In addition, we have attempted to automatically handle the problem of systematically identifying transfer opportunities and encoding the knowledge to be transferred. Although value functions (defined with basis functions or otherwise) may be transferred within our framework, transfer can also take the form of policies or potential functions, so that transfer can occur between more dissimilar tasks. We still, however, require statespace graph matchings, which may be challenging to obtain depending on the problems and scales under consideration. Finally, refinement and improvement of a transferred quantity is straightforward when working with MMDPs. Any algorithm may be used to improve the policy, since problem and sub-task representations are always independent MDPs, both within and across scales.
Discussion {#sec:discussion}
==========
We have presented a general framework for efficiently compressing Markov decision problems, and considered multiscale knowledge transfer between related MDPs. Our treatment is multiscale, and centers on a hierarchical decomposition in which coarse scale problems are independent, self-contained MDPs, and in which local sub-problems at fine scales may be decoupled given a coarse scale solution. We then argued that such multiscale representations may be used to efficiently solve a problem, and to transfer localized and/or coarse solutions rather than global solutions. The experiments we considered demonstrated computational speedups as well as the transfer of localized potential functions and policies at both coarse and fine scales across three example domains. As one would expect, problems receiving transferred information were shown to be solvable using less computational effort.
In the subsections below, we address a few generalizations and suggest outstanding directions for future consideration.
Model-based vs. Model-free Learning
-----------------------------------
The compression procedure introduced above assumes access to a pre-specified model described by $P, R$ and $\Gamma$, although it is only mildly model-dependent in the sense that compression involves averaging so that the “model” is not needed to high precision. The assumption that the model is known may be relaxed entirely, however. The general multiscale approach to compression we have described may be extended to a completely model-free setting by considering a fully empirical, Monte-Carlo based compression and bottleneck detection regime. Bottlenecks may be initially detected on the basis of a local exploration (see for example Spielman’s local heat flow algorithm [@Spielman:LocalClustering] or Peres’ evolving sets [@Andersen:2009:ESP; @Morris_Peres_2003]), so that the entire $P$ matrix is not needed. The exploration may be done starting from a goal state, for example, and would be inexpensive because only the [cluster]{} enclosing the goal state needs to be considered. Given the bottlenecks, Monte-Carlo based simulations can be used to compress the MDP locally in the vicinity of the starting state by directly estimating the coarse ingredients (transition probabilities, rewards, and discounts). The process may then be repeated starting from the detected bottleneck states, proceeding outwards, to successively build up a global picture, adding one (or a few) [clusters]{} at a time. On-policy exploration could be accelerated in previously explored regions by using the compressed model as a fast simulator. This approach could make difficult problems, where long sequences of actions are necessary to reach the goal, more tractable.
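The sketch below indicates how the empirical compression of a single macro-action might look, assuming only a fixed local policy and a simulator interface `step(s, a) -> (s_next, reward, discount)`; this interface and the function name are our own illustrative assumptions. The estimates simply average, per terminal bottleneck, the empirical frequency, the discounted reward, and the product of discounts accumulated until a bottleneck is hit.

```python
from collections import defaultdict

def mc_compress(start, bottlenecks, policy, step, n_rollouts=1000, max_len=10000):
    """Monte-Carlo estimates of the coarse ingredients for one macro-action:
    transition probabilities, expected discounted rewards, and discount factors
    from `start` to each bottleneck reached (illustrative sketch)."""
    counts, rew_sum, disc_sum = defaultdict(int), defaultdict(float), defaultdict(float)
    for _ in range(n_rollouts):
        s, total_r, total_g = start, 0.0, 1.0
        for _ in range(max_len):
            a = policy(s)
            s_next, r, g = step(s, a)
            total_r += total_g * r      # reward collected after transitioning
            total_g *= g                # running product of (possibly non-uniform) discounts
            s = s_next
            if s in bottlenecks:
                break
        counts[s] += 1
        rew_sum[s] += total_r
        disc_sum[s] += total_g
    P_hat = {sp: c / n_rollouts for sp, c in counts.items()}
    R_hat = {sp: rew_sum[sp] / counts[sp] for sp in counts}
    G_hat = {sp: disc_sum[sp] / counts[sp] for sp in counts}
    return P_hat, R_hat, G_hat
```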
Continuous Domains, Sampling, and Dictionary Expansions
-------------------------------------------------------
We have assumed discrete state and action spaces, although it should be emphasized that the multiscale development here does not critically depend on the discrete assumption, and may be adapted to continuous domains. A simple approach, which was considered in Section \[sec:pend-expt\], is to discretize and then build a model on a discrete set of representative states. The discretization is in general problem dependent, and need not be dense in order to apply the homogenization prescribed above. A continuous problem could be discretized with a coarse sampling determined by the problem’s complexity, with the expectation of good results since complexity (and geometry) largely determines the multiscale decomposition. The translation from continuous to discrete could be model-based (e.g., eigenfunctions of $P$) or model-free (e.g., eigenvectors of the graph Laplacian built from simulated trajectories). Moreover, in this case the coarse MDPs have discrete statespaces, so that handling continuous variables is only a concern at the finest scale.
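A bare-bones version of this discretization step is sketched below, under the assumption that the continuous dynamics can be sampled through a function `f(x, u)` (with optional Gaussian noise); each representative state is rolled forward and the result is snapped to its nearest representative to build an empirical transition tensor. The interface and the nearest-neighbor aggregation rule are illustrative choices, not the procedure actually used in Section \[sec:pend-expt\].

```python
import numpy as np

def discretize_dynamics(f, grid, actions, n_samples=20, noise_std=0.0, seed=0):
    """Empirical transition tensor P[i, a, j] over representative states `grid`
    (shape [N, d]) obtained by sampling x' = f(x, u) + noise and snapping x'
    to the nearest representative state (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    N, d = grid.shape
    P = np.zeros((N, len(actions), N))
    for i, x in enumerate(grid):
        for a, u in enumerate(actions):
            for _ in range(n_samples):
                x_next = f(x, u) + noise_std * rng.standard_normal(d)
                j = int(np.argmin(np.linalg.norm(grid - x_next, axis=1)))
                P[i, a, j] += 1.0 / n_samples
    return P
```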
Another approach might be to overlay a discrete coarse MDP on top of a continuous fine scale problem. In this case, the fine scale quantities (including policies) could be represented by expansions on basis function dictionaries for the statespace, or even with factored representations and neural networks. One could then consider collapsing bottleneck regions (as sets) into single coarse states, and compressing based on fine scale trajectory statistics between these regions. Depending on the form of the model, it is possible that analytical expressions for the coarse quantities can be obtained (for example, if one uses Gaussian basis functions to describe $P, R$, leading to Gaussian integrals). The local boundary value problems occurring at the fine scale could be amenable to solution with existing approximate DP algorithms.
We point out that adopting basis function representations for the model could also support a broader set of transfer possibilities. Basis functions may themselves be transferred locally (representation transfer), in addition to solutions. A careful choice of basis can also impart invariance properties, and provide a means to accommodate domain changes (e.g., scaling, by way of Nystrom extensions, and goal changes) and reward function changes across problems (see [@FergusonMahadevan:06] for a discussion related to representation transfer).
Partially Observable Domains
----------------------------
Partially observable MDPs (POMDPs) can be cast as fully observable MDPs on a continuous belief statespace, so that in theory POMDPs can be decomposed, solved and transferred using the framework discussed in this paper. In practice, solving belief MDPs exactly can be computationally prohibitive. Extending the multiscale MDP framework described above to POMDPs in a more fundamental way could lead to efficient approximate solution algorithms. For instance, solutions to more tractable coarse problems could be used to provide interpolated solutions to finer problems, where accuracy vs. complexity of the interpolation can be balanced locally.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was partially supported by DARPA grants MSEE FA8650-11-1-7150; Washington State U. DOE contract SUB\#113054 G002745; NSF grants IIS-08-03293, DMS-08-47388. The authors also gratefully acknowledge helpful discussions with, and suggestions from, Sridhar Mahadevan.
Derivation of the linear system describing a value function
===========================================================
Let $\pi$ be a policy on $S$. Here we prove equation : $$V^{\pi}(s) = \sum_{s',a}P(s,a,s')\pi(s,a)\bigl[R(s,a,s') + \Gamma(s,a,s')V^{\pi}(s')\bigr],\quad s\in S.$$ where we recall that $V^\pi$ is the value function defined in , $$V^{\pi}(s) = {{\mathbb E}}\left[R(s_0,a_1,s_1) +
\sum_{t=1}^{\infty}\left\lbrace\prod_{\tau=0}^{t-1}\Gamma(s_{\tau},a_{\tau+1},s_{\tau+1})\right\rbrace R(s_t,a_{t+1},s_{t+1}) ~\Bigl|\Bigr.~ s_0=s\right].$$ Applying the Markov property to the first expectation on the right-hand side $$\begin{aligned}
{{\mathbb E}}\bigl[R(s_0,a_1,s_1) ~|~ s_0=s\bigr] &= \sum_{s',a}{{\mathbb P}}(s_1=s', a_1=a|s_0=s)R(s,a,s')\\
&= \sum_{s',a}P(s,a,s')\pi(s,a)R(s,a,s').\end{aligned}$$ For the second term, we have $$\begin{gathered}
{{\mathbb E}}\left[\sum_{t=1}^{\infty}\prod_{\tau=0}^{t-1}\Gamma(s_{\tau},a_{\tau+1},s_{\tau+1}) R(s_t,a_{t+1},s_{t+1}) ~\Bigl|\Bigr.~ s_0=s\right] \\
\begin{aligned}
&= {{\mathbb E}}_{s_1,a_1}\left\{{{\mathbb E}}\left[\sum_{t=1}^{\infty}\prod_{\tau=0}^{t-1}\Gamma(s_{\tau},a_{\tau+1},s_{\tau+1}) R(s_t,a_{t+1},s_{t+1})
~\Bigl|\Bigr.~ s_0=s, s_1, a_1\right] ~\Bigl|\Bigr.~ s_0=s\right\}\\
& \begin{split}
&= {{\mathbb E}}_{s_1,a_1}\Biggl\{\Gamma(s_0,a_1,s_1){{\mathbb E}}\biggl[ R(s_1,a_2,s_2) \;+ \\
& \qquad\qquad
\sum_{t=2}^{\infty}\prod_{\tau=1}^{t-1}\Gamma(s_{\tau},a_{\tau+1},s_{\tau+1}) R(s_t,a_{t+1},s_{t+1})
~\Bigl|\Bigr.~ s_0=s, s_1, a_1\biggr] ~\Bigl|\Bigr.~ s_0=s\Biggr\}
\end{split} \\
&= {{\mathbb E}}_{s_1,a_1}\bigl[ \Gamma(s_0,a_1,s_1)V^{\pi}(s_1) ~|~ s_0=s\bigr]\\
&= \sum_{s'}V^{\pi}(s')\sum_a P(s,a,s')\Gamma(s,a,s')\pi(s,a).
\end{aligned}\end{gathered}$$ Putting these results together, we obtain the linear system of equations $$V^{\pi}(s) = \sum_{s',a}P(s,a,s')\pi(s,a)\bigl[R(s,a,s') + \Gamma(s,a,s')V^{\pi}(s')\bigr],\quad s\in S\,,$$ which is .
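For a small tabular problem, this linear system can be assembled and solved directly. The sketch below assumes arrays `P`, `R`, `Gamma` of shape `(S, A, S)` and a stochastic policy `pi` of shape `(S, A)`; the function name is ours, and the direct solve presumes that $I-M$ is invertible.

```python
import numpy as np

def policy_value(P, R, Gamma, pi):
    """Solve V = b + M V, where
         M[s, s'] = sum_a pi[s, a] * P[s, a, s'] * Gamma[s, a, s']
         b[s]     = sum_{a, s'} pi[s, a] * P[s, a, s'] * R[s, a, s'],
    i.e. the linear system derived above (tabular sketch)."""
    M = np.einsum('sa,sat,sat->st', pi, P, Gamma)
    b = np.einsum('sa,sat,sat->s', pi, P, R)
    return np.linalg.solve(np.eye(P.shape[0]) - M, b)
```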
Analytical results and computational considerations for the compression step
============================================================================
Compressed transition matrix $\widetilde P$: Proof of Proposition \[prop:coarsePsas\]
-------------------------------------------------------------------------------------
Let $a\in\widetilde{A}$ be the coarse action corresponding to executing a policy $\pi_{\ensuremath{\mathsf{c}}\xspace}\in{\ensuremath{{\boldsymbol{\pi}}}\xspace}_{\ensuremath{\mathsf{c}}\xspace}$ in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$, so that $(X_t)_{t\geq 0}$ is the (discrete time) Markov chain on the [cluster]{} [$\mathsf{c}$]{} with transition matrix ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$. Recall that the set of bottleneck states within the [cluster]{} is denoted ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\subseteq {{\mathcal B}}$, and the set of non-bottleneck (interior) states in the [cluster]{} is denoted ${\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}:={\ensuremath{\mathsf{c}}\xspace}\setminus{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$.
First, observe that if $s\notin{\ensuremath{\mathsf{c}}\xspace}$, then the entries $\widetilde{P}(s,a,\cdot)$ are not defined because the action is unavailable. Second, if $s'\notin{\ensuremath{\mathsf{c}}\xspace}$, then we know that $\widetilde{P}(s,a,s')=0$. Therefore we restrict our attention to pairs $s,s'\in{\ensuremath{\mathsf{c}}\xspace}$, i.e. $s,s'$ compatible with $a$. The transition probabilities among pairs of states in ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ are computed by observing the Markov chain $(X_t)_{t\ge0}$ at the hitting times of ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$: $$T_m =\inf\{t>T_{m-1} ~|~ X_t\in {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\},\qquad m=1,2,\ldots$$ with $T_0 = \inf\{t\geq 0 ~|~ X_t\in {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\}$. The hitting times are a.s. finite (${{\mathbb P}}(T_m<\infty)=1$, $\forall m\geq 0$) in light of the fact that, by construction, absorbing states are bottlenecks, and the assumption that [$\partial{\ensuremath{\mathsf{c}}\xspace}$]{}is $\pi$-reachable from any starting point. A new chain $(Y_m)_{m\geq 0}$ taking only values in ${\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ can now be defined as $Y_m=X_{T_m}$. The transition probability matrix governing $(Y_m)_m$ is computed from that of $(X_t)_t$ by solving a linear system for a few different right hand sides as follows. Let ${{\mathbb P}}_s(B) := {{\mathbb P}}(B ~|~ X_0 = s)$, for any event $B$ (measurable w.r.t. a suitable $\sigma$-algebra). Consider the hitting probabilities ${{\mathbb P}}_s(X_{T_0}=s')$. Clearly ${{\mathbb P}}_s(X_{T_0}=s') = \delta_{s,s'}$ for $s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, where $\delta$ denotes the Kronecker delta function. The strong Markov property (see for instance @Norris97 [Thm 1.4.2]) allows one to apply the Markov property at (finite) stopping times, so that a one-step analysis gives the hitting probabilities for $s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}},s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ as $$\begin{aligned}
{{\mathbb P}}_s(X_{T_0}=s') &= {{\mathbb E}}\bigl[{{\mathbb P}}_s(X_{T_0}=s'~|~X_1,a_1) ~\bigl|\bigr.~ X_0=s\bigr]\\
&=\smashoperator[r]{\sum_{s''\in{\ensuremath{\mathsf{c}}\xspace},\, a\in A}}P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s'')\pi_{{\ensuremath{\mathsf{c}}\xspace}}(s,a){{\mathbb P}}_s(X_{T_0}=s'~|~X_1=s'',a_1=a) \\
&=\sum_{a\in A}P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s')\pi_{{\ensuremath{\mathsf{c}}\xspace}}(s,a) +
\smashoperator[r]{\sum_{s''\in {\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}},\, a\in A}}P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s'')\pi_{{\ensuremath{\mathsf{c}}\xspace}}(s,a){{\mathbb P}}_{s''}(X_{T_0}=s').\end{aligned}$$ The third equality follows from the second applying the fact that $X_{T_0}$ is independent of $a_{1}$ given $X_{1}$, and the strong Markov property. Summarizing, these probabilities may be computed by solving the linear system $$\label{eqn:t0_cases}
{{\mathbb P}}_s(X_{T_0}=s') =
\begin{cases}
\delta_{s,s'} & s\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\\
{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s') + \sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}}{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s''){{\mathbb P}}_{s''}(X_{T_0}=s') & s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\,.
\end{cases}$$ By the strong Markov property, we also have for $s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, $$\begin{aligned}
\widetilde{P}(s,a,s') & = {{\mathbb P}}(Y_{m+1}=s' ~|~ Y_m=s) \\
&= {{\mathbb P}}(X_{T_{m+1}} = s' ~|~X_{T_m}=s)\\
&={{\mathbb P}}(X_{T_1} = s' ~|~X_{T_0}=s) \\
&={{\mathbb P}}(X_{T_1} = s' ~|~X_0 =s) = {{\mathbb P}}_s(X_{T_1} = s').\end{aligned}$$ The law of total probability applied to the right-hand side of the third equality gives, for $s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, $$\begin{aligned}
{{\mathbb P}}(X_{T_1} = s' ~|~X_{T_0}=s) &=
{{\mathbb E}}\bigl[{{\mathbb P}}(X_{T_1} = s' ~|~X_{T_0}=s, X_{T_0 + 1}, a_{T_0 + 1}) ~\bigl|\bigr.~ X_{T_0}=s \bigr] \nonumber\\
&= \sum_{\substack{s''\in{\ensuremath{\mathsf{c}}\xspace}\\ a\in A}}P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s'')\pi_{{\ensuremath{\mathsf{c}}\xspace}}(s,a){{\mathbb P}}(X_{T_1} = s' ~|~X_{T_0}=s, X_{T_0 + 1}=s'', a_{T_0 + 1}=a) \nonumber\\
&= \sum_{s''\in{\ensuremath{\mathsf{c}}\xspace}}{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s''){{\mathbb P}}(X_{T_1} = s' ~|~X_{T_0}=s, X_{T_0 + 1}=s'') \nonumber\\
&= {P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s') + \sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}}{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s''){{\mathbb P}}_{s''}(X_{T_0}=s'). \label{eqn:t1_sys}\end{aligned}$$ The third equality follows from the second using the fact that $X_{T_1}$ is independent of $a_{T_0+1}$ given $X_{T_0+1}$.
Noticing that $({{\mathbb P}}(X_{T_1} = s' ~|~X_{T_0}=s))_{s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}}$ depends on $({{\mathbb P}}_s(X_{T_0}=s'))_{s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}},\, s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}}$, but the latter do not depend on the former, we can combine Equations and into a single linear system for each $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$: $$\label{eqn:trans_probs_app}
H_{s,s'} = {P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s') + \sum_{s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}}{P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}(s,s'')H_{s'',s'},\quad s\in {\ensuremath{\mathsf{c}}\xspace}, s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}.$$ We then have $$\widetilde{P}(s,a,s') = H_{s,s'},\quad \text{ for all } s,s'\in {\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace},$$ assuming $H$ is the [*minimal non-negative solution*]{} to , and $a$ is the action corresponding to executing the policy $\pi_{\ensuremath{\mathsf{c}}\xspace}$ in [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$. Consider the partitioning $${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}=
\begin{pmatrix}
Q & B\\
C & D
\end{pmatrix},
\qquad
H =
\begin{pmatrix}
h_q \\
h_b
\end{pmatrix}$$ where the blocks $Q, D$ describe the interaction among non-bottleneck and bottleneck states within [cluster]{} ${\ensuremath{\mathsf{c}}\xspace}$ respectively. In matrix-vector form, we can solve for the compressed probabilities by computing the minimal non-negative solution to $$\label{eqn:trprob_submtx}
(I-Q)h_q = B$$ followed by $$h_b = D + Ch_q,$$ where $h_b$ is the desired transition probability matrix of the compressed MDP given the action $a$. If Equation has a unique solution, then the cost of these computations[^23] is at most ${{\mathcal O}}(|{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|^3 + |{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}||{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|^2)$. If solving the linear system does not produce a non-negative solution, then algorithms for non-negative least-squares must be used.
From these expressions, it is clear that the transition probabilities starting from non-bottleneck states $h_q$ do not depend on those starting from the bottleneck states or on entries of ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$ outside of the blocks $Q$ and $B$. In addition, by definition of the stopping times above, the transition probabilities enforce $\widetilde{P}(s,a,s')=\delta_{s,s'}$ whenever $s$ is absorbing.
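For concreteness, the block computation above amounts to a few lines of linear algebra once ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$ is ordered with interior states first and bottleneck states last. The sketch below assumes this ordering and that the linear system has a unique (hence minimal) non-negative solution; the function and variable names mirror the block partition used above and are otherwise our own.

```python
import numpy as np

def compress_transitions(P_pi, n_interior):
    """Compressed bottleneck-to-bottleneck transition matrix
         h_b = D + C (I - Q)^{-1} B
    for one macro-action, given the within-cluster matrix P_pi partitioned as
    [[Q, B], [C, D]] with interior states first (illustrative sketch)."""
    Q = P_pi[:n_interior, :n_interior]
    B = P_pi[:n_interior, n_interior:]
    C = P_pi[n_interior:, :n_interior]
    D = P_pi[n_interior:, n_interior:]
    h_q = np.linalg.solve(np.eye(n_interior) - Q, B)   # interior-to-bottleneck hitting probabilities
    return D + C @ h_q
```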
Compressed rewards $\widetilde R$: Proof of Proposition \[prop:coarse\_rewards\]
--------------------------------------------------------------------------------
We will first need to define a controlled Markov process conditioned on future events. The approach taken here is similar to that of the Doob $h$-transform (see [@LPWBook]) for Markov chains, but differs in that we keep track of the actions. We fix $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$. Consider the event $\{X_{T_0}=s'\}$ and define $$\label{eqn:h_function}
h_{s'}(s):={{\mathbb P}}_s(X_{T_0}=s'),\qquad s\in {\ensuremath{\mathsf{c}}\xspace},$$ with the probabilities ${{\mathbb P}}_s(X_{T_0}=s')$ given by Equation . It can be shown that the function $h_{s'}$ is ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$-harmonic. Using Bayes rule and the strong Markov property, $$\begin{aligned}
{{\mathbb P}}_s(X_1=s'', a_1=a~|~X_{T_0}=s') &= \frac{{{\mathbb P}}_s(X_{T_0}=s' ~|~ X_1=s'',a_1=a){{\mathbb P}}_s(X_1=s'',a_1=a)}{{{\mathbb P}}_s(X_{T_0}=s')} \notag \\
&= \frac{{{\mathbb P}}_{s''}(X_{T_0}=s')P_{\ensuremath{\mathsf{c}}\xspace}(s,a,s'')\pi_{\ensuremath{\mathsf{c}}\xspace}(s,a)}{{{\mathbb P}}_s(X_{T_0}=s')} \notag \\
&= \frac{P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s'')\pi_{\ensuremath{\mathsf{c}}\xspace}(s,a)h_{s'}(s'')}{h_{s'}(s)} \notag \\
& =: P_{h_{s'}}(s,a,s'')\label{eqn:Ph_defn}\end{aligned}$$ for $s''\in {\ensuremath{\mathsf{c}}\xspace}_{s'}:=\{s\in {\ensuremath{\mathsf{c}}\xspace}~|~h_{s'}(s)>0\}$, $s\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}\setminus\{s'\}$, and $a\in A$.
Similarly, for $(s,s')\in{\operatorname{supp}}_a(\widetilde{P})\subseteq{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, $s''\in {\ensuremath{\mathsf{c}}\xspace}_{s'}$, $a\in A$, $$\label{eqn:Phtilde_defn}
\begin{aligned}
{{\mathbb P}}_s & (X_{T_0+1}=s'',a_{T_0+1}=a~|~X_{T_1}=s')\\
&= \frac{{{\mathbb P}}(X_{T_1}=s' ~|~ X_{T_0}=s,X_{T_0+1}=s'',a_{T_0+1}=a){{\mathbb P}}_s(X_1=s'',a_1=a)}{{{\mathbb P}}_s(X_{T_1}=s')}\\
&= \frac{P_{{\ensuremath{\mathsf{c}}\xspace}}(s,a,s'')\pi_{\ensuremath{\mathsf{c}}\xspace}(s,a)h_{s'}(s'')}{\widetilde{P}(s,a,s')}\\
& =: P_{\tilde{h}_{s'}}(s,a,s'')
\end{aligned}$$ since $${{\mathbb P}}(X_{T_1}=s' ~|~ X_{T_0}=s,X_{T_0+1}=s'') =
\begin{cases}
\delta_{s'',s'} & \text{if } s''\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\\
{{\mathbb P}}_{s''}(X_{T_0}=s') & \text{if } s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\end{cases}$$ is equal to $h_{s'}(s'')$ defined by Equation , and for $s\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ we have ${{\mathbb P}}_s(X_{T_1}=s')=\widetilde{P}(s,a,s')$ as given by Equation . We now consider the expected rewards collected along paths between bottlenecks connected to a [cluster]{}. The process is similar to that of the transition probabilities, where we first defined hitting probabilities at time $T_0$, and from those quantities defined conditional hitting probabilities at time $T_1$. Here we use discounted expected rewards collected up to time $T_0$ to ultimately compute rewards collected between $T_0$ and $T_1$. Recall that we assume a reward is collected only after transitioning. Let $T$ and $T'$ be two arbitrary stopping times satisfying $0\leq T<T'<\infty$ (a.s.). The discounted reward accumulated over the interval $T\leq t\leq T'$ is given by the random variable $$R_{T}^{T'}:= R(X_T,a_{T+1}, X_{T+1}) + \sum_{t=T+1}^{T'-1}\left[
\prod_{\tau=T}^{t-1}\Gamma\bigl(X_{\tau},a_{\tau+1}, X_{\tau+1}\bigr)\right]
R\bigl(X_t,a_{t+1}, X_{t+1}\bigr)$$ where $a_{t+1}\sim\pi_{\ensuremath{\mathsf{c}}\xspace}(X_t)$ for $t=T,\ldots,T'-1$, and we set $R_T^T \equiv 0$ for any $T$.
Consider ${{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s']$ for some fixed $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$. We immediately have that ${{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s']=0$ if $s=s'$, and is undefined if $s\notin {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}:=\{ s~|~h_{s'}(s)>0\}$ (note that $({\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\setminus\{s'\})\subseteq ({\ensuremath{\mathsf{c}}\xspace}\setminus {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace})$ from ). We will need the following Lemma.
\[lem:one-step-rews\] For $s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}, {s''}\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A$, $${{\mathbb E}}[R_1^{T_0} ~|~ X_{T_0}=s', X_1={s''}, T_0\geq 1] = {{\mathbb E}}_{s''}[R_0^{T_0}~|~ X_{T_0}=s']$$ and therefore, $${{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s', X_1={s''}, a_1=a] = R(s,a,{s''}) + \Gamma(s,a,{s''}){{\mathbb E}}_{s''}[R_0^{T_0}~|~X_{T_0}=s'] .$$
We have $$\begin{aligned}
{{\mathbb E}}_s[R_0^{T_0} & ~|~X_{T_0}=s', X_1={s''}, a_1=a]\\
&= R(s,a,{s''}) + \Gamma(s,a,{s''}){{\mathbb E}}_s\biggl[R(X_1,a_2,X_2) \;+ \biggr. \\
&\qquad \biggl. \sum_{t=2}^{T_0-1}\left\lbrace\prod_{\tau=1}^{t-1}\Gamma\bigl(X_{\tau},a_{\tau+1}, X_{\tau+1}\bigr)\right\rbrace
R\bigl(X_t,a_{t+1}, X_{t+1}\bigr) ~\bigl|\bigr.~X_{T_0}=s', X_1={s''}, a_1=a
\biggr]\\
&= R(s,a,{s''}) + \Gamma(s,a,{s''}){{\mathbb E}}[R_1^{T_0}~|~X_{T_0}=s', X_1={s''}, T_0\geq 1].\end{aligned}$$ If ${s''}=s'$ then there is nothing more to show, since $R_1^1=0$. For ${s''}\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$, it will suffice to show that ${{\mathbb E}}[R_1^{T_0}~|~ X_1={s''}, T_0\geq 1] = {{\mathbb E}}_{s''}[R_0^{T_0}]$. Given a sequence of states $(i_n,\ldots,i_{n+p-m})$ and actions $(a_{n+1},\ldots,a_{n+p-m})$ with $0\leq n,m< p$, define the event $$B_{m,n}^p := \Biggl(\bigcap_{j=0}^{p-m} \{X_{m+j}=i_{n+j}\}\Biggr)\cap
\Biggl(\bigcap_{j=1}^{p-m} \{a_{m+j}=a_{n+j}\}\Biggr)$$ and consider the conditional probability, for $n\geq 1$, $$\begin{aligned}
{{\mathbb P}}(\{T_0=n\}\cap B_{1,1}^{T_0} ~|~ X_1={s''}, T_0\geq 1) &=
{{\mathbb P}}(B_{1,1}^{T_0} ~|~ T_0=n, X_1={s''}){{\mathbb P}}(T_0=n~|~ X_1={s''}, T_0\geq 1) \\
&= {{\mathbb P}}(B_{1,1}^{n} ~|~ X_1={s''},X_2\notin{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace},\ldots,X_{n-1}\notin{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace},X_n\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace})\\
&\qquad \times {{\mathbb P}}(T_0=n~|~ X_1={s''}, T_0\geq 1) \\
&= {{\mathbb P}}(B_{0,1}^{T_0} ~|~ T_0=n-1,X_0={s''}){{\mathbb P}}(T_0=n-1~|~ X_0={s''}) \\
&= {{\mathbb P}}(\{T_0=n-1\}\cap B_{0,1}^{T_0} ~|~ X_0={s''}),\end{aligned}$$ where we have used the fact that ${{\mathbb P}}(T_0=n~|~ X_1={s''}, T_0\geq 1) ={{\mathbb P}}(T_0=n-1~|~ X_0={s''})$. This latter equality is true, since, by homogeneity, $$\begin{aligned}
{{\mathbb P}}(T_0=n~|~ X_m={s''}, T_0\geq m) &= {{\mathbb P}}\Biggl(\bigcap_{j=m}^{n-1}\{X_j\notin{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\} \cap \{X_n\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\} ~\Bigl|\Bigr.~ X_m={s''}\Biggr) \\
&= {{\mathbb P}}\Biggl(\bigcap_{j=0}^{n-m-1}\{X_j\notin{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\} \cap \{X_{n-m}\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}\} ~\Bigl|\Bigr.~ X_0={s''}\Biggr)\\
&= {{\mathbb P}}(T_0=n-m~|~ X_0={s''})\end{aligned}$$ for $n\geq m$. Next, let $$f(i_0,\ldots,i_{n},a_1,\ldots,a_{n}):= R(i_0,a_1, i_1) + \sum_{t=1}^{n-1}
\left[\prod_{\tau=0}^{t-1}\Gamma\bigl(i_{\tau},a_{\tau+1}, i_{\tau+1}\bigr)\right]
R\bigl(i_t,a_{t+1}, i_{t+1}\bigr).$$ Then, assuming $T_0 <\infty$ a.s., we have $$\begin{aligned}
{{\mathbb E}}[R_1^{T_0} & ~|~ X_1={s''}, T_0\geq 1] \\
&=
\sum_{1\leq n <\infty}\sum_{\substack{i_1,\ldots,i_n\in {\ensuremath{\mathsf{c}}\xspace}\\ a_2,\ldots,a_n\in A}} {{\mathbb P}}(\{T_0=n\}\cap B_{1,1}^{T_0} ~|~ X_1={s''}, T_0\geq 1) f(i_1,\ldots,i_{n},a_2,\ldots,a_{n})\\
&= \sum_{1\leq n <\infty}\sum_{\substack{i_1,\ldots,i_n\in {\ensuremath{\mathsf{c}}\xspace}\\ a_2,\ldots,a_n\in A}}
{{\mathbb P}}(\{T_0=n-1\}\cap B_{0,1}^{T_0} ~|~ X_0={s''}) f(i_1,\ldots,i_{n},a_2,\ldots,a_{n}) \\
&= \sum_{0\leq n <\infty}\sum_{\substack{i_0,\ldots,i_n\in {\ensuremath{\mathsf{c}}\xspace}\\ a_1,\ldots,a_n\in A}}
{{\mathbb P}}(\{T_0=n\}\cap B_{0,0}^{T_0} ~|~ X_0={s''}) f(i_0,\ldots,i_{n},a_1,\ldots,a_{n})\\
&= {{\mathbb E}}[R_0^{T_0}~|~ X_0=s''] \,.\end{aligned}$$
With the above definitions, we turn to proving the proposition.
With the same choice of $s'$ used in ${{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s']$, define $h_{s'}$ as in Equation and let as above ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}:=\{s\in {\ensuremath{\mathsf{c}}\xspace}~|~h_{s'}(s)>0\}$. For $s\in {\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}= {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}\setminus\{s'\}$, a one-step analysis gives $$\begin{aligned}
{{\mathbb E}}_s[R_0^{T_0} & ~|~X_{T_0}=s'] \\
&= {{\mathbb E}}_s\bigl\{{{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s', X_1, a_1] ~\bigl|\bigr.~ X_{T_0}=s'\bigr\} \\
&= \sum_{s''\in {\ensuremath{\mathsf{c}}\xspace}, a\in A}{{\mathbb P}}_s(X_1=s'',a_1=a ~|~X_{T_0}=s'){{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s', X_1=s'', a_1=a] \\
&= \sum_{s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}P_{h_{s'}}(s,a,s''){{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s', X_1=s'', a_1=a] \\
&= \sum_{s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}P_{h_{s'}}(s,a,s'')\bigl(R(s,a,s'') + \Gamma(s,a,s''){{\mathbb E}}_{s''}[R_0^{T_0}~|~X_{T_0}=s'] \bigr) \\
&= \smashoperator[r]{\sum_{s''\in {\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}} P_{h_{s'}}(s,a,s'')\Gamma(s,a,s''){{\mathbb E}}_{s''}[R_0^{T_0}~|~X_{T_0}=s']
+ \sum_{s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}P_{h_{s'}}(s,a,s'')R(s,a,s'')\end{aligned}$$ where the third equality follows from the second using Equation and the fact that $P_{h_{s'}}(s,a,s'')>0$ only for $s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$, and the fourth follows from the third applying Lemma \[lem:one-step-rews\]. The last equality follows after rearranging terms.
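In practice, for a fixed destination bottleneck $s'$ the rearranged relation above is a finite linear system in the unknowns ${{\mathbb E}}_s[R_0^{T_0}~|~X_{T_0}=s']$ over the interior states. The following minimal sketch is illustrative only; the containers `P_h`, `Gamma`, `R`, keyed by state/action triples, and the state lists are placeholders assumed to be supplied by the user, not an interface defined in the text.

```python
import numpy as np

def conditional_rewards(interior, boundary_state, actions, P_h, Gamma, R):
    """Solve for E_s[R_0^{T0} | X_{T0} = s'] over interior states s,
    for one fixed destination bottleneck s' = boundary_state.
    P_h, Gamma, R are assumed to be dicts keyed by (s, a, s'')."""
    idx = {s: i for i, s in enumerate(interior)}
    n = len(interior)
    A = np.zeros((n, n))   # couples interior unknowns through P_h * Gamma
    b = np.zeros(n)        # immediate-reward right-hand side
    for s in interior:
        for a in actions:
            for s2 in interior + [boundary_state]:
                p = P_h.get((s, a, s2), 0.0)
                if p == 0.0:
                    continue
                b[idx[s]] += p * R[(s, a, s2)]
                if s2 != boundary_state:
                    A[idx[s], idx[s2]] += p * Gamma[(s, a, s2)]
    # (I - A) h = b; the solution is unique and bounded when the boundary
    # is reachable from every interior state.
    h = np.linalg.solve(np.eye(n) - A, b)
    return {s: h[idx[s]] for s in interior}
```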
With these expectations in hand, we can compute the discounted rewards between bottlenecks. Note that ${{\mathbb E}}[R_{T_0}^{T_1} ~|~ X_{T_0}=s, X_{T_1}=s'] = {{\mathbb E}}_s[R_{T_0}^{T_1} ~|~ X_{T_1}=s']$, for $s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$. By convention, we set ${{\mathbb E}}_s[R_{T_0}^{T_1} ~|~ X_{T_1}=s'] = 0$ if $\widetilde{P}(s,a,s')=0$. For $s\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$ such that $(s,s')\in{\operatorname{supp}}_a(\widetilde{P})$, we have $T_0=0$, and a one-step analysis similar to the above gives $$\begin{aligned}
{{\mathbb E}}_s & [R_{T_0}^{T_1} ~|~ X_{T_1}=s']\\
&= {{\mathbb E}}_s\bigl\{{{\mathbb E}}_s[R_{T_0}^{T_1} ~|~ X_{T_1}=s', X_{T_0+1}, a_{T_0+1}] ~\bigl|\bigr.~ X_{T_1}=s'\bigr\} \\
&=\smashoperator[r]{\sum_{s''\in {\ensuremath{\mathsf{c}}\xspace}, a\in A}}{{\mathbb P}}_s(X_{T_0+1}=s'', a_{T_0+1}=a ~|~ X_{T_1}=s')
{{\mathbb E}}_s[R_{T_0}^{T_1} ~|~ X_{T_1}=s', X_{T_0+1}=s'', a_{T_0+1}=a] \\
&= \smashoperator[r]{\sum_{s''\in {\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}} P_{\tilde{h}_{s'}}(s,a,s'')\Gamma(s,a,s''){{\mathbb E}}_{s''}[R_0^{T_0}~|~X_{T_0}=s'] + \sum_{s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}P_{\tilde{h}_{s'}}(s,a,s'')R(s,a,s'')\end{aligned}$$ where we have used the fact that $$\begin{gathered}
{{\mathbb E}}_s[R_{T_0}^{T_1} ~|~ X_{T_1}=s', X_{T_0+1}=s'', a_{T_0+1}=a]\\
=
\begin{cases}
R(s,a,s'') + \Gamma(s,a,s''){{\mathbb E}}_{s''}[R_0^{T_0}~|~X_{T_0}=s'] & \text{if } s''\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\\
R(s,a,s') & \text{if } s''=s'
\end{cases}.\end{gathered}$$ As mentioned in Section \[sec:trans\_probs\], the boundary is reachable from any state by assumption, so ${{\mathbb P}}_s(T_m < \infty)=1$ for all $s\in {\ensuremath{\mathsf{c}}\xspace}, m <\infty$. Hence, the solution to the linear system above is unique and bounded [@Norris97 Thm. 4.2.3].
We express the linear systems describing the coarse rewards above in matrix-vector form for convenience. Fix a destination bottleneck $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$. Consider $P_{h_{s'}},P_{\tilde{h}_{s'}}$ as tensors, and partition $P_{h_{s'}}$ into the pieces $(P_{h_{s'}}^{\circ})_{s,a,k} = P_{h_{s'}}(s,a,k)$ for $s,k\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A$ and $(P_{h_{s'}}^{\partial})_{s,a,k} = P_{h_{s'}}(s,a,k)$ for $s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, k=s',a\in A$. Next, partition $P_{\tilde{h}_{s'}}$ into the pieces $(P_{\tilde{h}_{s'}}^{\circ})_{s,a,k} = P_{\tilde{h}_{s'}}(s,a,k)$ for $(s,s')\in{\operatorname{supp}}_a(\widetilde{P}), k\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A$ and $(P_{\tilde{h}_{s'}}^{\partial})_{s,a,k} = P_{\tilde{h}_{s'}}(s,a,k)$ for $(s,s')\in{\operatorname{supp}}_a(\widetilde{P}), k=s',a\in A$. Similarly, partition $\Gamma_i,R_i$ into pieces $\Gamma_{h_{s'}}^{\circ},\Gamma_{h_{s'}}^{\partial},\Gamma_{\tilde{h}_{s'}}^{\circ},\Gamma_{\tilde{h}_{s'}}^{\partial}$ and $R_{h_{s'}}^{\circ},R_{h_{s'}}^{\partial},R_{\tilde{h}_{s'}}^{\circ},R_{\tilde{h}_{s'}}^{\partial}$ corresponding to the respective pieces of $P_{h_{s'}},P_{\tilde{h}_{s'}}$ mentioning the same state/action triples $(s,a,s')$. Equations and may be written, respectively, as $$\begin{aligned}
\bigl(I - (P_{h_{s'}}^{\circ}\circ\Gamma_{h_{s'}}^{\circ})^{\pi_{\ensuremath{\mathsf{c}}\xspace}}\bigr)h_q &=
\bigl[(P_{h_{s'}}^{\circ}\circ R_{h_{s'}}^{\circ})^{\pi_{\ensuremath{\mathsf{c}}\xspace}} ~~ (P_{h_{s'}}^{\partial}\circ R_{h_{s'}}^{\partial})^{\pi_{\ensuremath{\mathsf{c}}\xspace}}\bigr]{\mathbf{1}}\\
h_b &= (P_{\tilde{h}_{s'}}^{\circ}\circ\Gamma_{\tilde{h}_{s'}}^{\circ})^{\pi_{\ensuremath{\mathsf{c}}\xspace}}h_q +
\bigl[(P_{\tilde{h}_{s'}}^{\circ}\circ R_{\tilde{h}_{s'}}^{\circ})^{\pi_{\ensuremath{\mathsf{c}}\xspace}} ~~ (P_{\tilde{h}_{s'}}^{\partial}\circ R_{\tilde{h}_{s'}}^{\partial})^{\pi_{\ensuremath{\mathsf{c}}\xspace}}\bigr]{\mathbf{1}}.\end{aligned}$$
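In matrix-vector terms the computation therefore amounts to a single linear solve for the interior vector $h_q$ followed by one matrix-vector product for the boundary rows $h_b$. A minimal sketch, under the assumption that the policy-averaged, partitioned arrays have already been formed as dense matrices (the names below are placeholders):

```python
import numpy as np

def coarse_rewards(M_int, r_int, M_bnd, r_bnd):
    """M_int ~ (P_h^o o Gamma_h^o)^pi  (interior x interior),
    r_int ~ row sums of the reward blocks on the right-hand side,
    M_bnd ~ (P_th^o o Gamma_th^o)^pi   (entry rows x interior),
    r_bnd ~ row sums of the corresponding reward blocks."""
    h_q = np.linalg.solve(np.eye(M_int.shape[0]) - M_int, r_int)
    h_b = M_bnd @ h_q + r_bnd
    return h_q, h_b
```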
Two final observations of practical interest are in order. The solution of the linear systems defined above (one for each destination bottleneck $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$) can be potentially carried out efficiently by preconditioning some systems on the basis of solutions to the others. In particular, if a particular bottleneck $s'$ defining a system is close to another in the statespace graph in terms of the diffusion distance induced by ${P_{\ensuremath{\mathsf{c}}\xspace}^{\pi_{\ensuremath{\mathsf{c}}\xspace}}}$, then there is good reason to believe that the solutions will be close. Second, calculation of the rewards above is closely related computationally to calculation of the coarse discount factors. The next section derives the discount factors, and discusses when one set of quantities can be obtained from the other at essentially no cost.
Compressed discount factors $\widetilde{\Gamma}$: Proof of Proposition \[prop:coarse\_discount\_factors\]
---------------------------------------------------------------------------------------------------------
The approach is similar to that of the rewards. First note that if we set $R(s,a,s')\equiv 1$ uniformly for all $s,s'\in {\ensuremath{\mathsf{c}}\xspace}, a\in A$, then $\Delta_T^{T'} = R_T^{T'+1} - R_T^{T'}$ (and $T'+1$ is clearly a stopping time, so this quantity is well-defined). If $s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}, s''\in {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A$, then invoking Lemma \[lem:one-step-rews\], $$\label{eqn:delta_t0_onestep}
\begin{aligned}
{{\mathbb E}}_s[\Delta_{0}^{T_0} ~|~ X_{T_0}=s', X_1=s'', a_1=a] &=
\Gamma(s,a,s''){{\mathbb E}}_s[\Delta_{1}^{T_0} ~|~ X_{T_0}=s', X_1=s'']\\
&= \Gamma(s,a,s''){{\mathbb E}}_s[R_{1}^{T_0+1} - R_{1}^{T_0} ~|~ X_{T_0}=s', X_1=s'']\\
&= \Gamma(s,a,s''){{\mathbb E}}_{s''}[R_{0}^{T_0+1} - R_{0}^{T_0} ~|~ X_{T_0}=s']\\
&= \Gamma(s,a,{s''}){{\mathbb E}}_{s''}[\Delta_{0}^{T_0} ~|~ X_{T_0}=s'],
\end{aligned}$$ and ${{\mathbb E}}_s[\Delta_{0}^{T_0} ~|~ X_{T_0}=s', X_1={s''}, a_1=a] = \Gamma(s,a,s')$ if ${s''}=s'$. Thus to obtain the expectations ${{\mathbb E}}_s[\Delta_{0}^{T_0} ~|~ X_{T_0}=s'], s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$, we may solve the linear system defined by $$\begin{aligned}
{{\mathbb E}}_s[\Delta_{0}^{T_0} & ~|~ X_{T_0}=s'] \\
& = \sum_{{s''},a}P_{h_{s'}}(s,a,{s''}){{\mathbb E}}_s[\Delta_{0}^{T_0} ~|~ X_{T_0}=s', X_1={s''}, a_1=a] \\
&= \sum_{{s''}\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}P_{h_{s'}}(s,a,{s''})\Gamma(s,a,{s''}){{\mathbb E}}_{s''}[\Delta_{0}^{T_0} ~|~ X_{T_0}=s'] + \sum_{a\in A}P_{h_{s'}}(s,a,s')\Gamma(s,a,s') \end{aligned}$$ where $P_{h_{s'}}$ is given by Equation and depends on the choice of $s'$. Next, if $s,s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}, {s''}\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}$, by the strong Markov property, $$\begin{aligned}
{{\mathbb E}}_s[\Delta_{T_0}^{T_1} ~|~ X_{T_1}=s', X_{T_0+1}={s''}, a_{T_0+1}=a]
&= \Gamma(s,a,{s''}){{\mathbb E}}[\Delta_{1}^{T_1} ~|~ X_{T_1}=s', X_{1}={s''}, T_0=0]\\
&=\Gamma(s,a,{s''}){{\mathbb E}}[\Delta_{1}^{T_0} ~|~ X_{T_0}=s', X_{1}={s''}, T_0\geq 1 ] \\
&=\Gamma(s,a,{s''}){{\mathbb E}}_{s''}[\Delta_{0}^{T_0} ~|~ X_{T_0}=s'] \end{aligned}$$ where the third equality follows from the second applying Lemma \[lem:one-step-rews\] with $\Delta_1^{T_0} = R_1^{T_0+1} - R_1^{T_0}$. If ${s''}=s'$, then we simply have ${{\mathbb E}}_s[\Delta_{T_0}^{T_1} ~|~ X_{T_0}=s', X_1={s''}, a_1=a]=\Gamma(s,a,s')$.
With these facts in hand, the compressed discount factors may be found from the expectations $\{{{\mathbb E}}_s[\Delta_{0}^{T_0} ~|~ X_{T_0}=s'], s\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}\}$ computed above, as $$\begin{aligned}
{{\mathbb E}}_s[\Delta_{T_0}^{T_1} & ~|~ X_{T_1}=s'] \\
&= \sum_{{s''}\in {\ensuremath{\mathsf{c}}\xspace},a\in A}P_{\tilde{h}_{s'}}(s,a,{s''}){{\mathbb E}}_s[\Delta_{T_0}^{T_1} ~|~ X_{T_1}=s', X_{T_0+1}={s''}, a_{T_0+1}=a] \\
&= \sum_{{s''}\in{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}\cap {\ensuremath{{\ensuremath{\mathsf{c}}\xspace}'_{s'}}\xspace}, a\in A}P_{\tilde{h}_{s'}}(s,a,{s''})\Gamma(s,a,{s''}){{\mathbb E}}_{s''}[\Delta_{0}^{T_0} ~|~ X_{T_0}=s'] + \sum_{a\in A}P_{\tilde{h}_{s'}}(s,a,{s'})\Gamma(s,a,s')\end{aligned}$$ for $(s,s')\in{\operatorname{supp}}_a(\widetilde{P})$, where $P_{\tilde{h}_{s'}}$ is defined by Equation and depends on the choice of $s'$.
We note that for each destination bottleneck $s'\in{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}$, the linear system appearing in Proposition \[prop:coarse\_discount\_factors\] has the same left-hand side as the corresponding linear system for $s'$ in Section \[sec:coarse\_rewards\]: we can compute the compressed discounts essentially for free if we have previously computed the compressed rewards (or vice versa), provided the resulting solutions to the discount factor equations are non-negative. If they are not non-negative, separate non-negative least squares solutions must be computed, although there may still be helpful preconditioning possibilities.
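As a concrete illustration of this reuse (and of the LU-based strategy mentioned in the footnotes), a single factorization of the shared matrix can serve both right-hand sides. The sketch below is illustrative only, with placeholder names; the non-negativity fallback is indicated but not implemented.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rewards_and_discounts(M_int, rhs_reward, rhs_discount):
    """For one destination bottleneck: factor (I - M_int) once and
    back-substitute for both the coarse-reward and coarse-discount
    right-hand sides."""
    lu, piv = lu_factor(np.eye(M_int.shape[0]) - M_int)
    h_reward = lu_solve((lu, piv), rhs_reward)
    h_discount = lu_solve((lu, piv), rhs_discount)
    if np.any(h_discount < 0):
        # here one would fall back to a separate non-negative
        # least squares solve, as discussed above (not shown)
        pass
    return h_reward, h_discount
```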
[^1]: Corresponding author. Current address: Laboratory for Computational and Statistical Learning, Massachusetts Institute of Technology, Bldg. 46-5155, 77 Massachusetts Avenue, Cambridge, MA 02139. Email: `jvb@csail.mit.edu`
[^2]: We will allow the set of actions available in state $s$ to be limited to a nonempty state-dependent subset $\mathcal{A}(s)\subseteq A$ of [*feasible*]{} actions, but do not explicitly keep track of the sets $\mathcal{A}(s)$ to avoid cluttering the notation. As a matter of bookkeeping, we assume that these constraints are enforced as needed by setting $P(s,a,s')=0$ for all $s'$ if $a\notin\mathcal{A}(s)$, and/or by assigning zero probability to invalid actions in the case of stochastic policies (discussed below). If a stochastic policy has been restricted to the feasible actions, then it will be assumed that it has also been suitably re-normalized.
[^3]: Consider the present value of an infinite stream of future cash payments $q_0,q_1,\ldots$, paid out at discrete time instances $t=0,1,\ldots$. If the risk-free interest rate over the period $[t,t+1]$ is given by $r_t$, then the present value of the payments is given by $C=q_0 + \sum_{t=1}^{\infty}\prod_{\tau=0}^{t-1}\gamma_{\tau}q_t$, where $\gamma_t=(1+r_t)^{-1}$.
[^4]: A partitioning ${\ensuremath{{\mathsf{C}}}\xspace}=\{{\ensuremath{\mathsf{c}}\xspace}_i\}_{i=1}^C$ is a family of disjoint sets ${\ensuremath{\mathsf{c}}\xspace}_i$ such that $S=\cup_{i=1}^C {\ensuremath{\mathsf{c}}\xspace}_i$.
[^5]: The case of repeated eigenvalues may be treated similarly by generalizing the sign flipping operation $\tau$ to an orthogonal transformation of the subspace spanned by the eigenvectors sharing the repeated eigenvalue.
[^6]: Moreover, it does not require a priori knowledge of the fine details of the models in each [cluster]{}, but only requires the ability to call a “black box” which simulates the prescribed process in each [cluster]{}, and computes the corresponding functional (in this sense coarsening becomes model-free).
[^7]: In fact, if any such state is an element of a closed, communicating class, then the entire class can be lumped into a single state and treated as a single bottleneck. Thus, the bottleneck set does not need to grow with the size of the closed class from which the boundary is unreachable. For simplicity however, we will assume in the development below that no lumped states of this type exist.
[^8]: Recall that the statespace of the coarse problem is exactly the set of bottlenecks for the fine problem.
[^9]: If the [cluster]{} already contains absorbing (terminal) states, then those states remain absorbing (in addition to ${\ensuremath{{\mathsf{b}}}\xspace}$).
[^10]: which may be found by bisection search of $R_{\ensuremath{\mathsf{c}}\xspace}$
[^11]: We will discuss why this situation could arise in the context of transfer learning (Section \[sec:transfer\]).
[^12]: In any event, if the optimal policy is non-deterministic, one can still consider taking $\pi^o(s)=\arg\max_a\pi^*(s,a)$ as the optimal policy for transfer purposes.
[^13]: Recall that, by construction, any state at any level of an MDP hierarchy is a state from the finest scale. Regardless of the scale at which the [clusters]{} ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace},{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}$ occur, we may always consider subsets of ${\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_1}\xspace},{\ensuremath{{\ensuremath{\mathsf{c}}\xspace}_2}\xspace}$ as subsets of the original underlying statespaces $S_1, S_2$ (resp.).
[^14]: This is overly-conservative because, in general, convergence rates will be different across [clusters]{}. We have assumed that every [cluster]{} has the worst convergence rate.
[^15]: In this experiment and those that follow, we will use the $L_2$ norm to measure error rather than $L_{\infty}$, as it is a more revealing indicator of progress over the entire statespace in question.
[^16]: Note that in general if there is transfer at coarse scales, then such a comparison is not possible since the policy iteration algorithm cannot directly take advantage of coarse information.
[^17]: Our simulator uses the additional state variable $\dot{x}=\tfrac{dx}{dt}$ internally, however this variable is ignored externally (that is, during clustering, policy evaluation, etc.).
[^18]: In this simple example, we illustrate the core ideas by carrying out only fine scale policy transfer into the finest scale of a hierarchy; thus, only the destination problem needs to be compressed. For general transfer into arbitrary levels of a hierarchy, compression of both problems would be required.
[^19]: In more complex problems where this may not be as obvious, the procedure described in Section \[sec:policy-xfer\] may be applied.
[^20]: Of course, one does not need to know or identify what the [clusters]{} mean in order to transfer something; we provide this explanation for illustrative purposes.
[^21]: This is clearly not detectable in practice, but is informative in the context of recompression-based algorithms which do not converge to the optimal value function in this example.
[^22]: This is because the homogenization we prescribe results in deterministic quantities, and Markovianity would not necessarily be preserved if coarse variables were random.
[^23]: Using for instance, an LU factorization (${{\mathcal O}}(|{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|^3)$) to efficiently solve for $|{\ensuremath{\partial{\ensuremath{\mathsf{c}}\xspace}}\xspace}|\ll |{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|$ right-hand sides at a cost of ${{\mathcal O}}(|{\accentset{\circ}{{\ensuremath{\mathsf{c}}\xspace}}}|^2)$ each.
---
abstract: 'The longitudinal proton structure function, $F_L(x,Q^2)$, is calculated in the $k_t$ factorization formalism by using the unintegrated parton distribution functions (UPDF) which are generated through the KMR and MRW procedures. The LO UPDF of the KMR prescription is extracted by taking as input the PDF of Martin et al, i.e. MSTW2008-LO and MRST99-NLO, and the NLO UPDF of the MRW scheme is generated from the MSTW2008-NLO PDF set. The different aspects of $F_L(x,Q^2)$ in the two approaches, as well as its perturbative and non-perturbative parts, are calculated. The resulting $F_L(x,Q^2)$ is then compared with the data given by the ZEUS and H1 collaborations. It is demonstrated that the extracted $F_L(x,Q^2)$ based on the UPDF of the two schemes is consistent with the experimental data and, to a good approximation, independent of the input PDF. However, the results developed from the KMR prescription show better agreement with the data than those of MRW. As has been suggested, by lowering the factorization scale or the Bjorken variable in the related experiments, it may be possible to analyze the present theoretical approaches more accurately.'
author:
- '$M.$ $Modarres$$^\dag$'
- '$H. Hosseinkhani$$^\ddag$'
- '$N. Olanj$$^\flat$'
- '$M.R. Masouminia$$^{\dag}$'
title: A new phenomenological Investigation of $KMR$ and $MRW$ $unintegrated$ parton distribution functions
---
1 cm
Introduction
============
In recent years, the extraction of $unintegrated$ parton distribution functions ($UPDF$) has become very important, since there exist plenty of experimental data on various events, such as the exclusive and semi-inclusive processes in high energy collisions at the $LHC$, which indicate the necessity of computing these $k_t$-dependent parton distribution functions.
The $UPDF$, $f_a(x,k_t^2,\mu^{2})$, are the two-scale dependent functions, i.e. $k_t^2$ and $\mu^2$, which satisfy the $Ciafaloni$-$Catani$-$Fiorani$-$Marchesini$ ($CCFM$) equations [@4a; @4b; @4c; @4d; @4e], where $x$, $k_t$ and $\mu$ are the longitudinal momentum fraction (the $Bjorken$ variable), the transverse momentum and the factorization scale, respectively. They are $unintegrated$ over $k_t$ with respect to the conventional parton distributions ($PDF$) which satisfy the $Dokshitzer$-$Gribov$-$Lipatov$-$Altarelli$-$Parisi$ ($DGLAP$) evolution equations [@1a; @1b; @1c; @1d].
But the generation of $UPDF$ from the $CCFM$ equations is a complicated task. So, in general, the Monte Carlo event generators [@6a; @6b; @6c; @6d; @6e; @6f; @6g; @6h] are the main users of these equations. Since there is not a complete quark version of the $CCFM$ formalism, alternative prescriptions are used for producing the quark and gluon $UPDF$. Therefore, to obtain the $UPDF$, $Kimber$, $Martin$ and $Ryskin$ ($KMR$) [@8; @tkimber] proposed a different procedure based on the standard $DGLAP$ equations in the leading order ($LO$) approximation, along with a modification due to the angular ordering condition, which is the key dynamical property of the $CCFM$ formalism. Later on, $Martin$, $Ryskin$ and $Watt$ ($MRW$) extended the $KMR$ approach to the next-to-leading order ($NLO$) approximation [@watt; @watt3; @watt1], with the aim of improving the exclusive calculations. These two procedures are modifications of the standard $DGLAP$ evolution equations and can produce the $UPDF$ by using the $PDF$ as the inputs.
The general behavior and stability of the $KMR$ and $MRW$ prescriptions were investigated in the references [@mho; @mh1; @mh2; @hm1; @hm2]. Furthermore, to check the reliability of the generated $UPDF$, their relative behaviors were compared and used to calculate an observable, the deep inelastic scattering proton structure function $F_2(x,Q^2)$. The predictions of these two methods for the structure function $F_2(x,Q^2)$ were also compared to the electron-proton deep inelastic measurements of the $NMC$ [@thesis], $ZEUS$ [@thesis1] and $H1+ZEUS$ [@hera] experimental data. The results were promising [@mho1]. It was also concluded [@mho1] that, while the $MRW$ formalism is in closer compliance with the requirements of the $DGLAP$ evolution equations, in the $KMR$ case the angular ordering constraint spreads the $UPDF$ to the whole transverse momentum region and makes the results sum up the leading $DGLAP$ and $Balitski$-$Fadin$-$Kuraev$-$Lipatov$ ($BFKL$) logarithms [@1n; @2n; @3n; @4n; @5n].
Another important observable quantity in this connection is the longitudinal structure function, i.e. $F_L(x,\mu^2)$, which is proportional to the cross section of the longitudinally polarized virtual photon on the proton. Particularly at small x, it is directly sensitive to the gluon distribution through the $g\rightarrow q\bar{q}$ process. Moreover, its calculation in this region needs the $k_t$ factorization formalism [@7tkimber; @7tkimber1; @7tkimber2; @7tkimber3; @4kimberp], which goes beyond the standard collinear factorization procedure [@proton]. Recently, $Golec-Biernat$ and $Sta\acute{s}to$ [@stasto2; @stasto3] ($GS$) have used the $k_t$ and collinear factorizations [@7tkimber; @7tkimber1; @7tkimber2; @7tkimber3; @4kimberp] as well as the dipole approach to generate the longitudinal structure function, but with the $DGLAP$/$BFKL$ re-summation method, developed by $Kwiecinski$, $Martin$ and $Stasto$ ($KMS$) [@4kimber], for the calculation of the $unintegrated$ gluon density at small x. They have parameterized the input non-perturbative gluon distribution such that they could get the best fit to the experimental proton structure function data [@4kimber].
On the experimental side, the longitudinal structure function has been measured by both the $H1$ [@H10; @H101] and $ZEUS$ [@ZEUS; @ZEUS1] collaborations at the $DESY$ electron-proton collider $HERA$. The $Q^2$ ranges vary between 12 to 90 and 24 to 110 $GeV^2$ in the two experiments, respectively.
As was pointed out above, similar to our recent publication on $F_2(x,Q^2)$ [@mho1], in the present paper we intend to calculate $F_L(x,Q^2)$ by working in the $k_t$-factorization scheme. But rather than the $KMS$ re-summation method mentioned above, the $KMR$ and $MRW$ [@8; @tkimber; @watt; @watt3; @watt1] formalisms are used to predict the $UPDF$, with the input $PDF$ of the $MRST99$-$NLO$ [@MRST], $MSTW2008$-$LO$ [@10] and $MSTW2008$-$NLO$ [@10] sets, which cover a wide range of the $(x,Q^2)$ plane. Then our results can be compared both with the experimental data and with the theoretical $KMS-GS$ presentation of $F_L(x,Q^2)$. The paper is organized as follows: In section $II$ we give a brief review of the $KMR$ and the $MRW$ formalisms [@8; @tkimber; @watt; @watt3; @watt1] for the extraction of the $UPDF$ from the phenomenological $PDF$ [@MRST; @10]. The formulation of $F_L(x,Q^2)$ based on the $k_t$-factorization scheme is given in section $III$. Finally, section $IV$ is devoted to results, discussions and conclusions.
A brief review of the $KMR$ and the $MRW$ formalisms
====================================================
The $KMR$ and $MRW$ [@8; @tkimber; @watt; @watt3; @watt1; @watt2] ideas for generating the $UPDF$ work as follows: Using the given integrated $PDF$ as the inputs, the $KMR$ and $MRW$ procedures produce the $UPDF$ as their outputs. They are based on the $DGLAP$ equations along with some modifications due to the separation of virtual and real parts of the evolutions, and the choice of the splitting functions at leading order ($LO$) and the next-to-leading order ($NLO$) levels, respectively:\
\
($i$) In the $KMR$ formalism [@8; @tkimber], the $UPDF$, $f_{a}(x,k_{t}^2,\mu^{2})$ ($a=q$ and $g$), are defined in terms of the quarks and the gluons $PDF$, i.e.: $$\begin{aligned}
f_{q}(x,k_{t}^2,\mu^{2})=T_q(k_t,\mu)\frac{\alpha_s({k_t}^2)}{2\pi}
\int_x^{1-\Delta}dz\Bigg[P_{qq}(z)\frac{x}{z}\,q\left(\frac{x}{z} ,
{k_t}^2 \right)+ P_{qg}(z)\frac{x}{z}\,g\left(\frac{x}{z} , {k_t}^2
\right)\Bigg],
\label{eq:8}\end{aligned}$$ and $$\begin{aligned}
f_{g}(x,k_{t}^2,\mu^{2})=T_g(k_t,\mu)\frac{\alpha_s({k_t}^2)}{2\pi}
\int_x^{1-\Delta}dz\Bigg[\sum_q
P_{gq}(z)\frac{x}{z}\,q\left(\frac{x}{z} , {k_t}^2 \right) +
P_{gg}(z)\frac{x}{z}\,g\left(\frac{x}{z} , {k_t}^2 \right)\Bigg],
\label{eq:9}\end{aligned}$$ respectively, where, $P_{aa^{\prime}}(x)$, are the $LO$ splitting functions, and the survival probability factors, $T_a(k_t,\mu)$, are evaluated from: $$\begin{aligned}
T_a(k_t,\mu)=\exp\Bigg[-\int_{k_t^2}^{\mu^2}\frac{\alpha_s({k'_t}^2)}{2\pi}\frac{{dk'_t}^{2}}{{k'_t}^{2}}
\sum_{a'}\int_0^{1-\Delta}dz'P_{a'a}(z')\Bigg].
\label{eq:5}\end{aligned}$$ The angular ordering condition ($AOC$) [@3a; @3b], which is a consequence of the coherent emission of gluons, is imposed on the last step of the evolution process [@watt2]. The $AOC$ determines the cut off, $\Delta=1-z_{max}=\frac{k_t}{\mu+k_t}$, to prevent the $z=1$ singularities in the splitting functions, which arise from soft gluon emission. As has been pointed out in the references [@8; @tkimber], the $KMR$ approach has several main characteristics. The most important one is the existence of the cut off at the upper limit of the integrals, which makes the distributions spread smoothly into the region in which $k_t>\mu$, i.e. a characteristic of small $x$ physics, which is principally governed by the $BFKL$ evolution [@1n; @2n; @3n; @4n; @5n]. This feature of the $KMR$ approach leads to $UPDF$ with a behavior very similar to that of the unified $BFKL$+$DGLAP$ formalism [@8; @tkimber]. The $UPDF$ based on the $KMR$ formalism have been widely used in phenomenological calculations which depend on the transverse momentum [@1; @2; @3; @4; @5; @6; @7; @81; @9; @101; @11; @12].\
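To make the construction concrete, a minimal numerical sketch of the quark-level formula of equation (\[eq:8\]) is given below. The $x$-weighted input distributions `xq`, `xg`, the running coupling `alpha_s` and the survival factor `T_q` of equation (\[eq:5\]) are assumed to be supplied by the user; these names are placeholders and do not refer to any particular $PDF$ library.

```python
import numpy as np
from scipy.integrate import quad

CF, TR = 4.0/3.0, 0.5

def P_qq(z):  # LO unregularized splitting functions; the 1/(1-z)
    return CF*(1.0 + z*z)/(1.0 - z)   # singularity is removed by Delta
def P_qg(z):
    return TR*(z*z + (1.0 - z)**2)

def f_q_KMR(x, kt2, mu2, xq, xg, alpha_s, T_q):
    """Sketch of the quark UPDF f_q(x, kt^2, mu^2) in the KMR scheme.
    xq(y, Q2), xg(y, Q2) return the x-weighted PDF, i.e. y*q(y, Q2)."""
    kt, mu = np.sqrt(kt2), np.sqrt(mu2)
    delta = kt/(mu + kt)                     # angular-ordering cutoff
    if x >= 1.0 - delta:
        return 0.0
    integrand = lambda z: P_qq(z)*xq(x/z, kt2) + P_qg(z)*xg(x/z, kt2)
    val, _ = quad(integrand, x, 1.0 - delta)
    return T_q(kt, mu) * alpha_s(kt2)/(2.0*np.pi) * val
```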
($ii$) In the $MRW$ formalism [@watt; @watt3; @watt1], the similar separation of real and virtual contributions to the $DGLAP$ evolution is done, but the procedure is performed at the $NLO$ level i.e., $$\begin{aligned}
f_{a}^{NLO}(x,k_{t}^2,\mu^{2})=\int_x^{1}dz
T_a(k^{2},\mu^{2})\frac{\alpha_s({k}^2)}{2\pi}
\sum_{b=q,g}P_{ab}^{(0+1)}(z)\,b^{NLO}\left(\frac{x}{z} , {k}^2
\right)\Theta(\mu^{2}-k^{2}),
\label{eq:10}\end{aligned}$$ where $$\begin{aligned}
P_{ab}^{(0+1)}(z)=P_{ab}^{(0)}(z)+\frac{\alpha_s}{2\pi}P_{ab}^{(1)}(z),k^2=\frac{k_t^2}{1-z}
\label{eq:11}.\end{aligned}$$ In the equations (\[eq:10\]) and (\[eq:11\]) the $P_{ab}^{(0)}$ and the $P_{ab}^{(1)}$ denote the $LO$ and the $NLO$ contributions of the splitting functions, respectively. It is obvious from equation (\[eq:10\]) that in the $MRW$ formalism, the $UPDF$ are defined such that to ensure $k^{2}<\mu^2$. Also, the survival probability factor, $T_a(k^{2},\mu^{2})$, are obtained as follows: $$\begin{aligned}
T_a(k^{2},\mu^{2})=\exp\Bigg(-\int_{k^2}^{\mu^2}\frac{\alpha_s({\kappa}^2)}{2\pi}\frac{{d\kappa}^{2}}{{\kappa}^{2}}
\sum_{b=q,g}\int_0^{1}d\zeta \zeta
P_{ba}^{(0+1)}(\zeta)\Bigg),
\label{eq:12}\end{aligned}$$ where $P_{ab}^{(i)}$ (which is singular in the $z\rightarrow1$ limit) is given in the reference [@19w]. $MRW$ have demonstrated that sufficient accuracy can be obtained by keeping only the $LO$ splitting functions together with the $NLO$ integrated parton densities. So, by considering angular ordering, we can use $P^{(0)}$ instead of $P^{(0+1)}$. As mentioned above, unlike the $KMR$ formalism, where the angular ordering is imposed on all of the terms of the equations (\[eq:8\]) and (\[eq:9\]), in the $MRW$ formalism the angular ordering is imposed only on the terms in which the splitting functions are singular, i.e. the terms that include $P_{qq}$ and $P_{gg}$.
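A corresponding minimal sketch of the $MRW$ quark $UPDF$ of equation (\[eq:10\]), which differs from the $KMR$ case mainly through the scale $k^2=k_t^2/(1-z)$, the $\Theta(\mu^{2}-k^{2})$ constraint and the $NLO$ inputs, could read as follows; the callables `xb_nlo`, `P_split`, `alpha_s` and `T_q` are again user-supplied placeholders.

```python
import numpy as np
from scipy.integrate import quad

def f_q_MRW(x, kt2, mu2, xb_nlo, P_split, alpha_s, T_q):
    """Structural sketch of the NLO quark UPDF.  xb_nlo(b, y, Q2) is the
    NLO distribution of parton b, P_split('q', b, z) the corresponding
    splitting function, and T_q(k2, mu2) the survival factor."""
    zmax = 1.0 - kt2/mu2        # Theta(mu^2 - k^2) with k^2 = kt^2/(1-z)
    if zmax <= x:
        return 0.0
    def integrand(z):
        k2 = kt2/(1.0 - z)
        s = sum(P_split('q', b, z)*xb_nlo(b, x/z, k2) for b in ('q', 'g'))
        return T_q(k2, mu2) * alpha_s(k2)/(2.0*np.pi) * s
    val, _ = quad(integrand, x, zmax)
    return val
```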
The formulation of $F_L(x,Q^2)$ in the $k_t$-factorization approach
===================================================================
The $k_t$-factorization approach has been discussed in several works, i.e. references [@7tkimber; @4c; @7tkimber3; @new7; @new8]. In the following equation [@12kimber; @13kimber; @14kimber; @stasto2], the perturbative and non-perturbative contributions to $F_L(x,Q^2)$ are broken into the sum of the gluon contribution from the quark-box diagram (the first term, i.e. the $k_t$ factorization part, see figure 1 [@watt1]), the quark contribution (the second term) and the non-perturbative gluon contribution (the third term): $$F_L (x,Q^2) =\Bigg[\frac{Q^4}{\pi^2}\sum_{q} e_q^2
\int\frac{dk_t^2}{k_t^4} \Theta(k^2-k_0^2)\int_0^{1}d\beta\int
d^2\kappa_ t \alpha_s(\mu^2) \beta^2(1-\beta)^2\left
(\frac{1}{D_1}-\frac{1}{D_2}\right)^2\times$$ $$f_g\left(\frac{x}{z},k_t^2,\mu^2\right)\Theta(1-{x\over z})\Bigg] +
\Bigg[\frac{4}{3} \int_x^{1}\frac{dy}{y}
\frac{\alpha_s(Q^2)}{\pi} (\frac{x}{y})^2 F_2 (y,Q^2)\Bigg]+$$ $$\begin{aligned}
\frac{\alpha_s(Q^2)}{\pi} \Bigg[\sum_{q} e_q^2
\int_x^{1}\frac{dy}{y}(\frac{x}{y})^2 (1-\frac{x}{y}) y
g(y,k_0^2)\Bigg] \label{eq:2p},\end{aligned}$$ where the second term is (see [@stasto4; @stasto5]): $$\begin{aligned}
\sum_{i} e_i^2 \frac{\alpha_s(Q^2)}{\pi} \frac{4}{3}
\int_x^{1}\frac{dy}{y}(\frac{x}{y})^2
[q_i(y,Q^2)+\overline{q}_i(y,Q^2)]\nonumber .\end{aligned}$$ In the above equation, in which the graphical representations of $k_t$ and $\kappa_t$ have been introduced in the figure 1, the variable $\beta$ is defined as the light-cone fraction of the photon momentum carried by the internal quark [@new8]. Also, the denominator factors are: $$\begin{aligned}
D_1&=&\kappa_t^2+\beta(1-\beta)Q^2+m_q^2,\nonumber\\D_2&=&({\bf{\kappa_t}}
-{\bf{k_t}})^2 +\beta(1-\beta)Q^2+m_q^2
\label{eq:3p}.\end{aligned}$$ Then by defining ${\bf\kappa^\prime_t}={\bf\kappa_t}-(1-\beta){\bf
k}_t$, the variable y takes the following form: $$y=x(1+{{{\kappa^\prime}^2+m^2_q}\over{\beta(1-\beta)Q^2}}),$$ and $$\begin{aligned}
\frac{1}{z}=1+\frac{\kappa_{t}^{2}+m_q^2}{(1-\beta)Q^2}+\frac{k_t^2+\kappa_t^2-2{\bf{\kappa_t}}.{\bf{k_t}}+m_q^2}{\beta
Q^2}
\label{eq:a}.\end{aligned}$$ As in the reference [@4kimber], the scale $\mu$ which controls the $unintegrated$ gluon and the $QCD$ coupling constant $\alpha_s$, is chosen as follows: $$\begin{aligned}
\mu^2=k_t^2+\kappa_t^2+m_q^2
\label{eq:b}.\end{aligned}$$ One should note that the coefficients used for quark and non-perturbative gluon contributions depend on the transverse momentum. As it has been briefly explained before, the main prescription for $F_L$ consists of three terms; the first term is the $k_t$ factorization which explains the contribution of the $UPDF$ into the $F_L$. This term is derived with the use of pure gluon contribution. However, it only counts the gluon contributions coming from the perturbative region, i.e. for $k_t>1$ $GeV$ , and does not have anything to do with the non-perturbative contributions. In the reference [@stasto4], it has been shown that a proper non-perturbative term can be derived from the $k_t$ factorization term, compacting the $k_t$ dependence and the integration with the use of a variable-change, i.e. y, that carries the $k_t$ dependence. Nevertheless, there is a calculable quark contribution in the longitudinal structure function of the proton, which comes from the collinear factorization, i.e. the second term of the equation (\[eq:2p\]).
For the charm quark, the mass is taken to be $m_c=1.4$ $GeV$, and the $u$, $d$ and $s$ quark masses are neglected. To save computation time [@tkimber], we also use the same approximation as in our calculation of $F_2(x,Q^2)$ [@mho1], i.e. the representative “average” value for $\phi$, $\langle \phi \rangle
=\frac{\pi}{4}$, for the perturbative gluon contribution. This approximation has been checked in the reference [@tkimber] (page 83). The rest of the $\phi$ angular integration can be performed analytically by using a series of integral identities given in the reference [@anal]. We will also verify this approximation in the next section. The $unintegrated$ gluon distributions are not defined for $k_t$ and $\kappa_t <k_0$, i.e. in the non-$perturbative$ region. So, according to the reference [@12kimber], $k_0$ is chosen to be about one $GeV$, which is around the charm mass in the present calculation, as it should be. On the other hand, one expects that the discrepancy between the $k_t$-factorization calculation and the experimental data can be eliminated by using the $PDF$ which have been fitted to the same data for $F_2(x,Q^2)$ [@stasto1] with the re-summation method of $KMS$ [@4kimber].
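For orientation, a schematic numerical evaluation of the first ($k_t$-factorization) term of equation (\[eq:2p\]), using the $\langle \phi \rangle =\frac{\pi}{4}$ average and the infrared cutoff $k_0$, could be organized as follows. The unintegrated gluon `f_g`, the coupling `alpha_s`, the quark-charge sum `e2_sum` and the finite upper integration cutoff `cut2` are illustrative assumptions supplied by the user.

```python
import numpy as np
from scipy.integrate import quad

def FL_quark_box(x, Q2, f_g, alpha_s, e2_sum, mq=0.0, k02=1.0, cut2=1.0e4):
    """Schematic sketch of the quark-box (k_t-factorization) term of F_L.
    The phi integral is replaced by its value at phi = pi/4 times 2*pi,
    i.e. d^2 kappa_t -> pi * d kappa_t^2 at the average angle."""
    cphi = np.cos(np.pi/4.0)
    def over_kt2(kt2):
        def over_beta(beta):
            def over_kap2(kap2):
                mu2 = kt2 + kap2 + mq*mq
                dot = 2.0*np.sqrt(kap2*kt2)*cphi
                D1 = kap2 + beta*(1.0 - beta)*Q2 + mq*mq
                D2 = kap2 + kt2 - dot + beta*(1.0 - beta)*Q2 + mq*mq
                zinv = 1.0 + (kap2 + mq*mq)/((1.0 - beta)*Q2) \
                           + (kt2 + kap2 - dot + mq*mq)/(beta*Q2)
                if x*zinv >= 1.0:          # Theta(1 - x/z)
                    return 0.0
                return alpha_s(mu2)*beta**2*(1.0 - beta)**2 \
                       * (1.0/D1 - 1.0/D2)**2 * f_g(x*zinv, kt2, mu2)
            val, _ = quad(over_kap2, 0.0, cut2, limit=200)
            return np.pi * val
        val, _ = quad(over_beta, 1.0e-6, 1.0 - 1.0e-6, limit=200)
        return val
    outer, _ = quad(lambda kt2: over_kt2(kt2)/kt2**2, k02, cut2, limit=200)
    return Q2*Q2/np.pi**2 * e2_sum * outer
```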
Results, discussions and conclusions
====================================
In figure 2, the longitudinal proton structure functions in the frameworks of the $KMR$ (left panels) and $MRW$ (right panels) formalisms, by using the MRST99 [@MRST] and the $MSTW2008-NLO$ [@10] $PDF$ inputs, are plotted versus x for $Q^2$=$2, 4, 6, 12$ and $15$ $GeV^2$, respectively. The total $F_L(x,Q^2)$ and the contributions from the $k_t$ factorization scheme, the quarks and the non-perturbative parts (see the equation (\[eq:2p\])) are presented with different curve styles. The behavior of $F_L(x,Q^2)$ mostly comes from the $k_t$ factorization contribution, especially as $Q^2$ is increased, and it is more sizable in the case of the $MRW$ approach. By raising the $Q^2$ values the contribution of the $k_t$-factorization becomes dominant. Another point is the decrease of the non-perturbative parts at small x in the case of the $MRW$ scheme. As we discussed in our previous works, this is expected, since the $KMR$ constraint spreads the $UPDF$ to the whole transverse momentum region [@mho1] and it sums up both the leading $DGLAP$ and $BFKL$ logarithmic contributions. The general behavior of the two schemes in figure 2 shows some differences also at lower $Q^2$ scales: while the values and behaviors of the quark and $k_t$-factorization portions in both formalisms are almost similar, the non-perturbative contributions have rather different values and behavior around $x\simeq 0.01$. The latter point plays the main role in the discrepancies of the total $F_L(x,Q^2)$ at lower $Q^2$. On the other hand, the non-perturbative contribution in each case remains almost fixed through the variation of $Q^2$. These effects have their root in the parent PDF sets at the non-perturbative boundary, which is very sensitive to the discipline and procedure of the $PDF$ generating group. This figure can also be compared with figure 2 of $GS$ [@stasto2] at $Q^2$=$2, 4$ and $6$ $GeV^2$. There is general agreement between our approaches and those of $GS$, who have used the $DGLAP$/$BFKL$ re-summation method, developed by $Kwiecinski$, $Martin$ and $Stasto$ ($KMS$) [@4kimber], for the calculation of the $unintegrated$ gluon density at small x. This agreement is more visible at larger $Q^2$ and in the $KMR$ approach, which is expected. However, our longitudinal proton structure function results go smoothly to zero with respect to those of $GS$ as x becomes larger. The reason comes from both our input $PDF$, which are valid in the whole $(x,Q^2)$ plane, and the $UPDF$, which are calculated by using the $KMR$ and $MRW$ approaches and fulfill the $DGLAP$ requirements.
Our longitudinal proton structure function results for larger values of $Q^2$, with the different input $PDF$, i.e. $MRST99$ [@MRST], $MSTW2008-LO$ (using the $KMR$ formalism) and $MSTW2008-NLO$ [@10] (using the $MRW$ formalism), are given in the figures 3, 4 and 5, respectively. Again the total $F_L(x,Q^2)$ and the contributions from the $k_t$ factorization scheme, the quarks and the non-perturbative parts are presented with different curve styles. The results are mostly decreasing functions of x for the various values of $Q^2$. There are sizable differences between the $MRST99$ and $MSTW2008-LO$ results. On the other hand, as one should expect, for large values of $Q^2$ the results of the $KMR$ and $MRW$ approaches behave more similarly. As we pointed out before, again the $k_t$ factorization contributions are dominant. The increase in the values of $F_L(x,Q^2)$ in figure 4 is due to the increase of the input PDF at the LO approximation. The results of $F_L(x,Q^2)$ approach the same values as $x$ and $Q^2$ increase, which is a heritage of the parent $DGLAP$ evolution.
In order to analyze the above $Q^2$ dependence more clearly, in figure 6 the longitudinal proton structure functions are plotted against $Q^2$ for two different values of $x=0.001$ and $0.0001$. Note that for large $Q^2$, especially in the $MRW$ approach, a large computation time is needed, so we have stopped at $Q^2=100$ $GeV^2$ for this procedure. There are sizable differences between the two approaches and between results coming from the two different input $PDF$. But this should not be very important regarding the experimental data, as we will discuss later on. In figure 7, a comparison is made between the three different $F_L(x,Q^2)$ results, namely the $KMR$ procedure with $MRST99$ and $MSTW2008-LO$ inputs and the $MRW$ scheme with $MSTW2008-NLO$ inputs. There are especially large differences between the $KMR$ and $MRW$ approaches at large $Q^2$. The above results can be directly compared with those of $GS$ [@stasto2] (see their figure 3). Very similar behavior is observed, especially between the $k_t$ factorization approaches.
In the figures 8, 9 and 10, we present our results in the range of energy available in the $H1$ and $ZEUS$ data [@hera], respectively. Note that for $Q^2\geq 80$ $GeV^2$, because of the large computation time, we have only given four points (filled squares) for the $MRW$ case. Very good agreement is observed between our results and the experimental data at different $Q^2$ and x values. It seems that, with the presently existing data, the gluon $UPDF$ generated with different input $PDF$ and constraint procedures can reasonably explain the $H1$ and the $ZEUS$ experimental data. Even at low energies and small x values (see figure 8), we find good agreement between our calculation and the available data. However, as we mentioned before and as has been stated by several authors, $F_L$ is mainly driven by the gluon distributions, especially at low values of x. The fact that $F_2$ does not accurately fit the data (see our previous work [@mho1]), while we get good agreement between the $F_L$ calculations and the H1 and ZEUS data, could be caused by the quark-quark contributions, which contribute more to $F_2$, since $F_L$ is more sensitive to the gluon $UPDF$ than $F_2$. So one can conclude that the present calculation confirms that the $KMR$ and $MRW$ procedures (for generating the gluon $UPDF$) and the $k_t$-factorization scheme can reproduce a reasonable $F_2$ (considering our previous work [@mho1]) and the present $F_L$. On the other hand, as we stated previously: (1) the present results also show good agreement with the theoretical calculations of $GS$, which have used a more complicated approach such as $KMS$. (2) It is interesting that the $KMR$ and $MRW$ $UPDF$ can generate a reasonable $F_L$ without using any free parameter in the (x, $Q^2$)-plane, even at low $Q^2$ (regarding figure 8), especially the $UPDF$ generated for gluons.
Finally, the verification of the fact that the $\phi$ integration of the perturbative gluon contribution can be averaged by setting $<\phi>=\pi/4$, which was discussed at the end of the previous section, is presented in figure 11 for four values of $Q^2= 3.5, 12, 60$ and $110$ $GeV^2$, by using the $KMR$ formalism and the $MRST99$ $PDF$. It is clearly seen that the above approximation does work properly and one can save much computation time.
In conclusion, the longitudinal proton structure function, $F_L(x,Q^2)$, was calculated based on the $k_t$ factorization formalism, by using the $UPDF$ which are generated through the $KMR$ and $MRW$ procedures. The $LO$ $UPDF$ of the $KMR$ prescription were extracted by taking into account the $PDF$ of $MSTW2008$-$LO$ and $MRST99$-$NLO$, and the $NLO$ $UPDF$ of the $MRW$ scheme were generated by using the $MSTW2008$-$NLO$ $PDF$ set as the input. The different aspects of $F_L(x,Q^2)$ in the two approaches, as well as its perturbative and non-perturbative parts, were calculated and discussed. It was shown that our approaches are in agreement with those given by $GS$. Then the comparison of $F_L(x,Q^2)$ was made with the data given by the $ZEUS$ and $H1$ collaborations at $HERA$. It was demonstrated that the extracted longitudinal proton structure functions based on the $UPDF$ of the above two schemes are consistent with the experimental data, and to a good approximation they are independent of the input $PDF$. But, as was pointed out in our previous work [@mho1], the one developed from the $KMR$ prescription has better agreement with the data with respect to that of $MRW$. Although the $MRW$ formalism is in closer compliance with the requisites of the $DGLAP$ evolution equations, it seems that in the $KMR$ case the angular ordering constraint spreads the $UPDF$ to the whole transverse momentum region, and makes the results sum up the leading $DGLAP$ and $BFKL$ logarithms. At first, it seems that there should be theoretical support for applying the angular ordering condition only to the diagonal splitting functions, in accordance with reference [@watt1]. But, as has been mentioned in the references [@mho1; @mmho1], this phenomenological modification of the $KMR$ approach (including the application of the $AOC$ to all splitting functions) works as an “effective model” that spreads the $UPDF$ to $k_t > \mu$ (a characteristic of low x physics), which enables it to represent a good level of agreement with the data. Besides this, in our new work [@mmho1], in which we have calculated $F_L$ in the dipole approximation according to the LO prescription of reference [@watt1], it is shown that there is not much difference if one applies the $AOC$ to all splitting functions, i.e. uses the $KMR$ $UPDF$ instead of the LO prescription of reference [@watt1]. On the other hand, in this paper we have focused on the comparison of the LO and the $NLO$ calculations of $F_L$, and since the calculations are very time consuming we restricted the results to the $LO$-$KMR$ and $NLO$-$MRW$ cases.
As it has been suggested in the reference [@stasto2], by lowering the factorization scale or the $Bjorken$ variable in the experimental measurements, it may be possible to analyze the present theoretical approaches more accurately.
$MM$ would like to acknowledge the Research Council of University of Tehran and Institute for Research and Planning in Higher Education for the grants provided for him.
M. Ciafaloni, Nucl.Phys.B, [**296**]{} (1988) 49.
S. Catani, F. Fiorani, and G. Marchesini, Phys.Lett.B, [**234**]{} (1990) 339.
S. Catani, F. Fiorani, and G. Marchesini, Nucl.Phys.B, [**336**]{} (1990) 18.
G. Marchesini, Proceedings of the Workshop QCD at 200 TeV Erice, Italy, edited by L. Cifarelli and Yu.L. Dokshitzer, Plenum, New York (1992) 183.
G. Marchesini, Nucl.Phys.B, [**445**]{} (1995) 49.
V.N. Gribov and L.N. Lipatov, Yad. Fiz., [**15**]{} (1972) 781.
L.N. Lipatov, Sov.J.Nucl.Phys., [**20**]{} (1975) 94.
G. Altarelli and G. Parisi, Nucl.Phys.B, [**126**]{} (1977) 298.
Y.L. Dokshitzer, Sov.Phys.JETP, [**46**]{} (1977) 641.
H. Kharraziha and L. Lönnblad, JHEP, [**03**]{} (1998) 006.
G. Marchesini and B. Webber, Nucl.Phys.B, [**349**]{} (1991) 617.
G. Marchesini and B. Webber, Nucl.Phys.B, [**386**]{} (1992) 215.
H. Jung, Nucl.Phys.B, [**79**]{} (1999) 429.
H. Jung and G.P. Salam, Eur.Phys.J.C, [**19**]{} (2001) 351.
H Jung, J.Phys.G: Nucl.Part.Phys, [**28**]{} (2002) 971.
H. Jung, et al., Eur.Phys.J.C, **70** (2010) 1237.
H. Jung, M. Kraemer, A.V. Lipatov and N.P. Zotov, JHEP, **01** (2011) 085.
M.A. Kimber, A.D. Martin and M.G. Ryskin, Phys.Rev.D, [**63**]{} (2001) 114027.
M.A. Kimber, Unintegrated Parton Distributions, Ph.D. Thesis, University of Durham, U.K. (2001).
A.D. Martin, M.G. Ryskin, G. Watt, Eur.Phys.J.C, [**66**]{} (2010)163.
G. Watt, Parton Distributions, Ph.D. Thesis, University of Durham, U.K. (2004).
G. Watt, A.D. Martin, and M.G. Ryskin, Eur.Phys.J.C, [**31**]{} (2003) 73.
G. Watt, A.D. Martin, and M.G. Ryskin, Phys.Rev.D, [**70**]{} (2004) 014012.
M. Modarres, H. Hosseinkhani, N. Olanj, Nucl.Phys.A, [**902**]{} (2013) 21.
M. Modarres, H. Hosseinkhani, Few-Body Syst., [**47**]{} (2010) 237.
M. Modarres, H. Hosseinkhani, Nucl.Phys.A, [**815**]{} (2009) 40.
H. Hosseinkhani, M. Modarres, Phys.Lett.B, [**694**]{} (2011) 355.
H. Hosseinkhani, M. Modarres, Phys.Lett.B, [**708**]{} (2012) 75.
NMC: Arneodo et al., Nucl.Phys.B, [**483**]{} (1997) 3.
ZEUS: Derrick et al., Zeit.Phys.C, [**72**]{} (1996) 399.
H1 and ZEUS collaborations, JHEP [**01**]{} (2010) 109.
M. Modarres, H. Hosseinkhani and N. Olanj, Phys.Rev.D, [**89**]{} (2014) 034015.
M. Modarres, M.R. Masouminia, H. Hosseinkhani and N. Olanj, (2015) submitted for publication.
V.S. Fadin, E.A. Kuraev and L.N. Lipatov, Phys.Lett.B, [**60**]{} (1975) 50.
L.N. Lipatov, Sov.J.Nucl.Phys., [**23**]{} (1976) 642.
E.A. Kuraev, L.N. Lipatov and V.S. Fadin, Sov.Phys.JETP, [**44**]{} (1976) 45.
E.A. Kuraev, L.N. Lipatov and V.S. Fadin, Sov.Phys.JETP, [**45**]{} (1977) 199.
Ya.Ya. Balitsky and L.N. Lipatov, Sov.J.Nucl.Phys., [**28**]{} (1978) 822.
S. Catani, M. Ciafaloni and F. Hautmann, Phys.Lett.B, [**242**]{} (1990) 97.
S. Catani, M. Ciafaloni and F. Hautmann, Nucl.Phys.B, [**366**]{} (1991) 135.
J.C. Collins and R.K. Ellis, Nucl.Phys.B, [**360**]{} (1991) 3.
E.M. Levin, M.G. Ryskin, Yu.M. Shabelski and A.G. Shuvaev, Sov.J.Nucl.Phys., [**54**]{} (1991) 867.
J. Kwiecinski, A.D. Martin and A.M. Stasto, Acta Physica Polonica B, [**28**]{} (1997) 2577.
R.G. Robersts, “ The Structure of the proton”, Cambridge University Press (1993).
K. Golec-Biernat and A.M. Stasto, Phys.Rev.D, [**80**]{} (2009) 014006.
A.M. Stasto, Phys.Lett.B, [**679**]{} (2009) 288.
J. Kwiecinski, A.D. Martin and A.M. Stasto, Phys.Rev.D, [**56**]{} (1997) 3991.
F.D. Aaron et al (H1 Collaboration), Phys.Lett.B, [**665**]{} (2008) 139.

V. Andreev et al (H1 Collaboration), Eur.Phys.J.C, [**74**]{} (2014) 2814.

ZEUS collaboration, DESY Report No. DESY-09-045, (2009).

ZEUS collaboration, Phys.Rev.D, [**90**]{} (2014) 072002.
A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne, Eur.Phys.J.C, [**14**]{} (2000) 133.
A.D. Martin, W.J. Stirling, R.S. Thorne and G. Watt, Eur.Phys.J.C, **63** (2009) 189.
G. Marchesini and B.R. Webber, Nucl.Phys.B, **310** (1988) 461.
Yu.L. Dokshitzer, V.A. Khoze, S.I. Troyan and A.H. Mueller, Rev.Mod.Phys., **60** (1988) 373.
V.A. Saleev, Phys.Rev.D, [**80**]{} (2009) 114016.
A.V. Lipatov and N.P. Zotov, Phys.Rev.D, [**81**]{} (2010) 094027.
S.P. Baranov, A.V. Lipatov and N.P. Zotov, Phys.Rev.D, [**81**]{} (2010) 094034.
B.A. Kniehl, V.A. Saleev and A.V. Shipilova, Phys.Rev.D, [**81**]{} (2010) 094010.
H. Jung, M. Kraemer, A.V. Lipatov and N.P. Zotov, JHEP, **01** (2011) 085.
S.P. Baranov, A.V. Lipatov and N.P. Zotov, Eur.Phys.J.C, **71** (2011) 1631.
A.V. Lipatov, M.A. Malyshev and N.P. Zotov, Phys.Lett.B, **699** (2011) 93.
H. Jung, M. Kraemer, A.V. Lipatov and N.P. Zotov, arXiv:1105.5071 \[hep-ph\] (2011).
H. Jung, M. Kraemer, A.V. Lipatov and N.P. Zotov, Phys.Rev.D, **85**, 034035 (2012).
A.V. Lipatov and N.P. Zotov, Phys.Lett.B, **704** (2011) 189.
B.A. Kniehl, V.A. Saleev, A.V. Shipilova and E.V. Yatsenko, arXiv:1107.1462 \[hep-ph\] ( 2011).
H. Jung, M. Kraemer, A.V. Lipatov and N.P. Zotov, Proceedings of 19th International Workshop On Deep-Inelastic Scattering And Related Subjects (DIS 2011), arXiv:1107.4328 \[hep-ph\] (2011).
W. Furmanski, R. Petronzio, Phys.Lett.B, [**97**]{} (1980) 437.
S. Catani and F. Hautmann, Nucl.Phys.B, [**427**]{} (1994) 745.
M. Ciafaloni, Phys.lett.B, [**356**]{} (1995) 74.
A.J. Askew, J. Kwiecinski, A.D. Martin and P.J. Sutton, Phys.Rev.D, [**47**]{} (1993) 3775.
A.J. Askew, J. Kwiecinski, A.D. Martin and P.J. Sutton, Phys.Rev.D, [**49**]{} (1994) 4402.
A.J. Askew, Small x Physics, thesis presented for the degree of Doctor of Philosophy, University of Durham, (1995).
A.M. Stasto, Acta.Phys.Polo.B, [**27**]{} (1996) 1353.
A.M. Stasto, QCD Analysis of Deep Inelestic Lepton-Hadron Scattering in the Region of Small Values of the Bjorken Parameter, thesis presented for the degree of Doctor of Philosophy, University of Durham, (1999).
I.S. Gradshteyn and I.M. Ryzhik, “Table of integrals, Series, and Products, corrected and enlarged edition”, Academic Press (1980).
M.A. Kimber, J. Kwiecinski, A.D. Martin and A.M. Stasto, Phys.Rev.D, [**62**]{} (2000) 094006.
---
abstract: 'We calculate the ordered $K_0$-group of a graph $C^*$-algebra and mention applications of this result to AF-algebras, states on the $K_0$-group of a graph algebra, and tracial states of graph algebras.'
address: |
Department of Mathematics\
Dartmouth College\
Hanover\
NH 03755\
USA
author:
- Mark Tomforde
title: 'The ordered $\boldsymbol{K_0}$-group of a graph $\boldsymbol{C^*}$-algebra'
---
Preliminaries {#Pre}
=============
We provide some basic facts about graph algebras and refer the reader to [@KPR], [@BPRS], and [@RS] for more details. A (directed) graph $E=(E^0, E^1, r, s)$ consists of a countable set $E^0$ of vertices, a countable set $E^1$ of edges, and maps $r,s:E^1 \rightarrow E^0$ identifying the range and source of each edge. A vertex $v \in E^0$ is called a *sink* if $|s^{-1}(v)|=0$, and $v$ is called an *infinite emitter* if $|s^{-1}(v)|=\infty$. If $v$ is either a sink or an infinite emitter, we call $v$ a *singular vertex*. A graph $E$ is said to be *row-finite* if it has no infinite emitters. The *vertex matrix* of $E$ is the square matrix $A$ indexed by the vertices of $E$ with $A(v,w)$ equal to the number of edges from $v$ to $w$.
If $E$ is a graph we define a *Cuntz-Krieger $E$-family* to be a set of mutually orthogonal projections $\{p_v : v \in
E^0\}$ and a set of partial isometries $\{s_e : e \in E^1\}$ with orthogonal ranges which satisfy the *Cuntz-Krieger relations*:
1. $s_e^* s_e = p_{r(e)}$ for every $e \in E^1$;
2. $s_e s_e^* \leq p_{s(e)}$ for every $e \in E^1$;
3. $p_v = \sum_{\{e : s(e)=v\}} s_e s_e^*$ for every $v
\in E^0$ that is not a singular vertex.
The *graph algebra $C^*(E)$* is defined to be the $C^*$-algebra generated by a universal Cuntz-Krieger $E$-family.
The graph algebra $C^*(E)$ is unital if and only if $E$ has a finite number of vertices, cf. [@KPR Proposition 1.4], and in this case $1_{C^*(E)} = \sum_{v \in E^0} p_v$. If $E$ has an infinite number of vertices, and we list the vertices of $E$ as $E^0 = \{v_1, v_2, \ldots \}$ and define $p_n := \sum_{i=1}^n
p_{v_i}$, then $\{ p_n \}_{n=1}^\infty$ will be an approximate unit for $C^*(E)$.
The ordered $K_0$-group
=======================
If $A$ is a $C^*$-algebra let ${\mathcal{P}}(A)$ denote the set of projections in $A$. It is a fact that if $A$ is unital (or more generally, if $A \otimes {\mathcal{K}}$ admits an approximate unit consisting of projections), then $K_0(A) = \{ [p]_0-[q]_0 : p,q
\in {\mathcal{P}}(A \otimes{\mathcal{K}}) \}$. In addition, the positive cone $K_0(A)^+
= \{ [p]_0 : p \in {\mathcal{P}}(A \otimes {\mathcal{K}}) \}$ makes $K_0(A)$ a pre-ordered abelian group. If $A$ is also stably finite, then $(K_0(A),K_0(A)^+)$ will be an ordered abelian group.
Here we compute the positive cone of the $K_0$-group of a graph $C^*$-algebra. Throughout this section we let ${\mathbb{Z}}^K$ and ${\mathbb{N}}^K$ denote $\bigoplus_K {\mathbb{Z}}$ and $\bigoplus_K {\mathbb{N}}$, respectively.
\[K-theory-row-finite\] Let $E = (E^0,E^1,r,s)$ be a row-finite graph. Also let $W$ denote the set of sinks of $E$ and let $V:=E^0 \backslash W$. Then with respect to the decomposition $E^0 = V \cup W$ the vertex matrix of $E$ will have the form $$A_E = \begin{pmatrix} B & C \\
0 & 0 \end{pmatrix}.$$ For $v \in E^0$, let $\delta_v$ denote the element of ${\mathbb{Z}}^V \oplus {\mathbb{Z}}^W$ with a 1 in the $v^{\text{th}}$ entry and $0$’s elsewhere.
If we consider $\begin{pmatrix} B^t-I \\ C^t \end{pmatrix} :
{\mathbb{Z}}^V \rightarrow {\mathbb{Z}}^V \oplus {\mathbb{Z}}^W$, then $K_0(C^*(E)) \cong
{\operatorname{coker}}\begin{pmatrix} B^t-I \\ C^t \end{pmatrix}$ via an isomorphism which takes $[p_v]_0$ to $[\delta_v]$ for each $v
\in E^0$. Furthermore, this isomorphism takes $(K_0(C^*(E)))^+$ to $\{ [x] : x \in {\mathbb{N}}^V \oplus {\mathbb{N}}^W \}$, where $[x]$ denotes the class of $x$ in ${\operatorname{coker}}\begin{pmatrix} B^t-I \\ C^t
\end{pmatrix}$.
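For a finite graph this description reduces $K_0(C^*(E))$ to a finite cokernel computation, which can be carried out, for example, via the Smith normal form. The following sketch is illustrative only, for a made-up example graph, and relies on the Smith normal form routine of the `sympy` library; it plays no role in the proof below.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Example graph (not from the text): non-sink vertices V = {v1, v2} and one
# sink w, where v1 emits two edges to v2 and one edge to w, and v2 emits
# one edge back to v1.
B = Matrix([[0, 2],
            [1, 0]])      # edges among the non-sink vertices
C = Matrix([[1],
            [0]])         # edges from the non-sinks into the sink
M = Matrix.vstack(B.T - Matrix.eye(2), C.T)   # the map Z^V -> Z^V + Z^W
print(smith_normal_form(M, domain=ZZ))
# The nonzero invariant factors on the diagonal describe the image of M,
# so the cokernel -- and hence K_0(C^*(E)) -- is a direct sum of the
# corresponding cyclic groups plus a free part of rank
# (number of rows) - (number of nonzero invariant factors).
```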
The fact that $K_0(C^*(E)) \cong
{\operatorname{coker}}\begin{pmatrix} B^t-I \\ C^t \end{pmatrix}$ is shown for row-finite graphs in [@RS Theorem 3.1]. Thus all that remains to be verified in our claim is that this isomorphism identifies $(K_0(C^*(E)))^+$ with $\{ [x] : x \in {\mathbb{N}}^V \oplus {\mathbb{N}}^W \}$. To do this, we will simply examine the proof of [@RS Theorem 3.1] to determine how the isomorphism acts. We will assume that the reader is familiar with this proof, and use the notation established in it without comment.
If $E \times_1 [m,n]$ is the graph defined in [@RS Theorem 3.1], then we see that $E \times_1 [m,n]$ is a row-finite graph with no loops and in which every path has length at most $n-m$. Therefore we can use the arguments in the proofs of [@KPR Proposition 2.1], [@KPR Corollary 2.2], and [@KPR Corollary 2.3] to conclude that $C^*(E \times_1 [m,n])$ is the direct sum of copies of the compact operators (on spaces of varying dimensions), indexed by the sinks of $E \times_1 [m,n]$ and that each summand contains precisely one projection $p_{(v,k)}$ associated to a sink as a minimal projection. Thus $$K_0(C^*(E \times_1 [m,n])) \cong \bigoplus_{v \in V}
{\mathbb{Z}}[p_{(v,n)}]_0 \oplus \bigoplus_{k=0}^{n-m} \bigoplus_{v \in W} {\mathbb{Z}}[p_{(v,n-k)}]_0$$ and $K_0(C^*(E \times_1
[m,n]))^+$ is identified with $$\bigoplus_{v \in V} {\mathbb{N}}[p_{(v,n)}]_0 \oplus
\bigoplus_{k=0}^{n-m} \bigoplus_{v \in W} {\mathbb{N}}[p_{(v,n-k)}]_0.$$ By the continuity of $K$-theory, one can let $m$ tend to $-\infty$ and deduce that $$\begin{aligned}
K_0(C^*(E \times_1 [-\infty,n])) &\cong \bigoplus_{v \in V} {\mathbb{Z}}[p_{(v,n)}]_0 \oplus \bigoplus_{k=0}^{\infty} \bigoplus_{v \in W}
{\mathbb{Z}}[p_{(v,n-k)}]_0 \\
&\cong {\mathbb{Z}}^V \oplus {\mathbb{Z}}^W \oplus {\mathbb{Z}}^W \oplus \ldots.\end{aligned}$$ Furthermore, it follows from [@RLL Theorem 6.3.2(ii)] that this isomorphism identifies $K_0(C^*(E \times_1 [-\infty,n]))^+$ with ${\mathbb{N}}^V \oplus {\mathbb{N}}^W \oplus {\mathbb{N}}^W \oplus \ldots$.
This computation is used later in the proof of [@RS Theorem 3.1], where the $K_0$ functor is applied to a commutative diagram to obtain Figure (3.5) of [@RS], which we reproduce here: $$\xymatrix{ {\mathbb{Z}}^V \oplus {\mathbb{Z}}^W \oplus {\mathbb{Z}}^W \oplus \ldots \ar[d]_{1-D}
\ar[r]^D & {\mathbb{Z}}^V \oplus {\mathbb{Z}}^W \oplus {\mathbb{Z}}^W \oplus \ldots
\ar[d]_{1-D} \ar[r]^-{\iota_*^{n+1}} & K_0(C^*(E \times_1 {\mathbb{Z}}))
\ar[d]_{1-\beta_*^{-1}} \\
{\mathbb{Z}}^V \oplus {\mathbb{Z}}^W \oplus {\mathbb{Z}}^W \oplus \ldots \ar[r]^{D} & {\mathbb{Z}}^V
\oplus {\mathbb{Z}}^W \oplus {\mathbb{Z}}^W \oplus \ldots \ar[r]^-{\iota_*^{n+1}} &
K_0(C^*(E \times_1 {\mathbb{Z}})) }$$
Now it is shown in [@RS Lemma 3.3] that the homomorphism $\iota_*^1$ induces an isomorphism $\overline{\iota}_*^1$ of ${\operatorname{coker}}(1-D)$ onto ${\operatorname{coker}}(1-\beta_*^{-1}) = K_0(C^*(E))$. We shall show that $\overline{\iota}_*^1 ( {\mathbb{N}}^V \oplus {\mathbb{N}}^W \oplus
{\mathbb{N}}^W \oplus \ldots ) = K_0(C^*(E))^+$. To begin, note that it follows from [@RLL Theorem 6.3.2(ii)] that $$K_0(C^*(E
\times_1 {\mathbb{Z}}))^+ = \bigcup_{n=1}^\infty \iota_*^n ( {\mathbb{N}}^V \oplus
{\mathbb{N}}^W \oplus {\mathbb{N}}^W \oplus \ldots ).$$ Since ${\operatorname{coker}}(1-\beta_*^{-1}) = K_0(C^*(E))$, this implies that $$K_0(C^*(E))^+
= \bigcup_{n=1}^\infty \{ [\iota_*^n(y)] : y \in {\mathbb{N}}^V \oplus {\mathbb{N}}^W
\oplus {\mathbb{N}}^W \oplus \ldots \}$$ where $[\iota_*^n(y)]$ denotes the equivalence class of $\iota_*^n(y)$ in ${\operatorname{coker}}(1-\beta_*^{-1})$. We shall show that the right hand side of this equation is equal to $\{ [\iota_*^1(y)] : y \in {\mathbb{N}}^V \oplus {\mathbb{N}}^W \oplus {\mathbb{N}}^W \oplus
\ldots \}$. Let $[\iota_*^n(y)]$ be a typical element in the right hand side. Then from the commutativity of the above diagram $\iota_*^n(y) - \iota_*^n(Dy) = \iota_*^n((1-D)y) =
(1-\beta_*^{-1}) (\iota_*^n(y))$ which is $0$ in ${\operatorname{coker}}(1-\beta_*^{-1})$. But then $\iota_*^1(y) = \iota_*^n(D^{n-1}y) =
\iota_*^n(y)$ in ${\operatorname{coker}}(1-\beta_*^{-1})$. Hence $$\label{iota-map}
K_0(C^*(E))^+ = \{ [\iota_*^1(y)] : y \in {\mathbb{N}}^V \oplus {\mathbb{N}}^W \oplus
{\mathbb{N}}^W \oplus \ldots \}.$$
Next, recall that [@RS Lemma 3.4] shows that the inclusion $j
: {\mathbb{Z}}^V \oplus {\mathbb{Z}}^W \hookrightarrow {\mathbb{Z}}^V \oplus {\mathbb{Z}}^W \oplus {\mathbb{Z}}^W
\oplus \ldots$ induces an isomorphism $\overline{j}$ of ${\operatorname{coker}}K$ onto ${\operatorname{coker}}(1-D)$. We wish to show that $$\label{j-map}
\overline{j} (\{[x] : x \in {\mathbb{N}}^V \oplus {\mathbb{N}}^W \} ) = \{ [y] : y \in
{\mathbb{N}}^V \oplus {\mathbb{N}}^W \oplus {\mathbb{N}}^W \oplus \ldots \}.$$ It suffices to show that any element $(n,m_1,m_2,m_3, \ldots) \in
{\mathbb{N}}^V \oplus {\mathbb{N}}^W \oplus {\mathbb{N}}^W \oplus \ldots$ is equal to an element of the form $(a, b, 0, 0, 0, \ldots)$ in ${\operatorname{coker}}(1-D)$. But given $(n,m_1,m_2,m_3, \ldots)$ we see that since this element is in the direct sum, there exists a positive integer $k$ for which $i>k$ implies $m_i=0$. Thus $$\begin{pmatrix} n \\ m_1 + \ldots +m_k \\ 0 \\
\vdots \\ 0 \\ 0 \\ \vdots\end{pmatrix} - \begin{pmatrix} n \\ m_1 \\ m_2 \\ \vdots \\ m_k \\
0 \\ \vdots\end{pmatrix} = \begin{pmatrix} 1-B^t & 0
& 0 & 0 & .\\ -C^t & 1 & 0 & 0 & . \\ 0 & -1 & 1 & 0 & . \\ 0 &
0 & -1 & 1 & . \\ . & . & . & . & \ddots \end{pmatrix}
\begin{pmatrix} 0 \\ m_2 + \ldots +m_k \\ m_3 + \ldots + m_k \\
m_4 + \ldots + m_k \\ \vdots \\ m_k \\ 0 \\ \vdots\end{pmatrix}$$ and so $(n,m_1,m_2, \ldots)$ equals $(n,m_1+\ldots + m_k, 0, 0,
\ldots)$ in ${\operatorname{coker}}(1-D)$, and (\[j-map\]) holds.
Finally, the isomorphism between ${\operatorname{coker}}\begin{pmatrix}
B^t-I \\ C^t \end{pmatrix}$ and $K_0(C^*(E))$ is defined to be $\overline{\iota}_*^1 \circ \overline{j}$. But (\[j-map\]) and (\[iota-map\]) show that this isomorphism takes $\{ [x] : x \in
{\mathbb{N}}^V \oplus {\mathbb{N}}^W \}$ onto $(K_0(C^*(E)))^+$.
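To illustrate the theorem, consider the graph $E$ with two vertices $v$ and $w$ and a single edge from $v$ to $w$. Then $V = \{ v \}$, $W = \{ w \}$, $B = (0)$, and $C = (1)$, so $\begin{pmatrix} B^t-I \\ C^t \end{pmatrix}$ sends $n \in {\mathbb{Z}}$ to $(-n,n) \in {\mathbb{Z}}\oplus {\mathbb{Z}}$. The cokernel is isomorphic to ${\mathbb{Z}}$ via $[(a,b)] \mapsto a+b$, and the classes of elements of ${\mathbb{N}}\oplus {\mathbb{N}}$ are carried onto ${\mathbb{N}}$. This agrees with the fact that $C^*(E) \cong M_2({\mathbb{C}})$, whose ordered $K_0$-group is $({\mathbb{Z}}, {\mathbb{N}})$, with $[p_v]_0 = [p_w]_0$ the class of a rank-one projection.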
\[K-theory\] Let $E = (E^0,E^1,r,s)$ be a graph. Also let $W$ denote the set of singular vertices of $E$ and let $V:=E^0 \backslash W$. Then with respect to the decomposition $E^0 = V \cup W$ the vertex matrix of $E$ will have the form $$A_E = \begin{pmatrix} B & C \\
* & * \end{pmatrix}$$ where $B$ and $C$ have entries in ${\mathbb{Z}}$ and the $*$’s have entries in ${\mathbb{Z}}\cup \{ \infty \}$. Also for $v \in E^0$, let $\delta_v$ denote the element of ${\mathbb{Z}}^V \oplus {\mathbb{Z}}^W$ with a 1 in the $v^{\text{th}}$ entry and $0$’s elsewhere.
If we consider $\begin{pmatrix} B^t-I \\ C^t \end{pmatrix} :
{\mathbb{Z}}^V \rightarrow {\mathbb{Z}}^V \oplus {\mathbb{Z}}^W$, then $K_0(C^*(E)) \cong
{\operatorname{coker}}\begin{pmatrix} B^t-I \\ C^t \end{pmatrix}$ via an isomorphism which takes $[p_v]_0$ to $[\delta_v]$ for each $v
\in E^0$. Furthermore, this isomorphism takes $(K_0(C^*(E)))^+$ onto the semigroup generated by $\{ [\delta_v] : v \in E^0 \} \cup \{
[\delta_v] - \sum_{e \in S}[\delta_{r(e)}] : \text{$v$ is an infinite
emitter and $S$ is a finite subset of $s^{-1}(v)$} \}$.
The fact that $K_0(C^*(E)) \cong {\operatorname{coker}}\begin{pmatrix} B^t-I \\ C^t
\end{pmatrix}$ was established in [@DT2 Theorem 3.1] using the isomorphisms constructed in [@DT2 Lemma 2.3]. We shall examine the proof of [@DT2 Theorem 3.1] to determine where the positive cone of $K_0(C^*(E))$ is sent. Again, we shall assume that the reader is familiar with the proof, and use the notation established in it without comment.
We begin by letting $F$ denote a desingularization of $E$ (see [@DT2 §2]). Then [@DT1 Theorem 2.11] shows that there exists a homomorphism $\phi :C^*(E) \to C^*(F)$ which embeds $C^*(E)$ onto a full corner of $C^*(F)$ and takes each $p_v$ to the projection in $C^*(F)$ corresponding to $v$. Since $\phi$ is an embedding onto a full corner, it induces an isomorphism $\phi_* : K_0(C^*(E)) \to K_0(C^*(F))$ which takes the class of $p_v$ in $K_0(C^*(E))$ to the class of the corresponding projection in $K_0(C^*(F))$. By Theorem \[K-theory-row-finite\], if $A_F$ denotes the vertex matrix of $F$, then $K_0(C^*(E)) \cong {\operatorname{coker}}(A_F^t-I)$ and $K_0(C^*(E))^+$ is identified with $\{ [x] : x \in \bigoplus_V
{\mathbb{N}}\oplus \bigoplus_W {\mathbb{N}}\oplus \bigoplus_W Q^+ \}$, where $Q := \bigoplus_{\mathbb{N}}{\mathbb{Z}}$ and $Q^+ := \bigoplus_{\mathbb{N}}{\mathbb{N}}$. Now it is shown in the proof of [@DT2 Lemma 2.3] that the inclusion map $\rho : \bigoplus_V {\mathbb{Z}}\oplus \bigoplus_W {\mathbb{Z}}\to \bigoplus_V {\mathbb{Z}}\oplus
\bigoplus_W {\mathbb{Z}}\oplus \bigoplus_W Q$ induces an isomorphism $\overline{\rho} :
{\operatorname{coker}}\begin{pmatrix} B^t-I \\ C^t \end{pmatrix} \to {\operatorname{coker}}(A_F^t-I)$. Since this isomorphism identifies the class of $\delta_v \in \bigoplus_V {\mathbb{Z}}\oplus
\bigoplus_W {\mathbb{Z}}$ with the class of $\begin{pmatrix} \delta_v \\ 0
\end{pmatrix} \in \bigoplus_V {\mathbb{Z}}\oplus \bigoplus_W {\mathbb{Z}}\oplus
\bigoplus_W Q$, it follows that $[p_v]_0 \in K_0(C^*(E))$ is identified with $[\delta_v] \in {\operatorname{coker}}\begin{pmatrix} B^t-I \\ C^t
\end{pmatrix}$.
All that remains is to determine where this isomorphism sends the positive cone of $K_0(C^*(E))$. Let $\Gamma$ denote the semigroup of elements that $\overline{\rho}$ sends into $\{ [x] : x \in \bigoplus_V {\mathbb{N}}\oplus \bigoplus_W {\mathbb{N}}\oplus \bigoplus_W Q^+ \}$. Now certainly $\{ [\delta_v] : v \in E^0 \}$ is in $\Gamma$. Furthermore, for any infinite emitter $v$ and finite subset $S
\subseteq s^{-1}(v)$ we have that $$[p_v]_0 - \sum_{e \in S} [p_{r(e)}]_0 =
[p_v]_0 - \sum_{e \in S} [s_e^*s_e]_0 = [p_v]_0 - \sum_{e \in S} [s_es_e^*]_0
= [p_v - \sum_{e \in S} s_es_e^*]_0$$ and this element belongs to $K_0(C^*(E))^+$. Since $K_0(C^*(E))^+$ is identified with $\{ [x] : x \in \bigoplus_V {\mathbb{N}}\oplus \bigoplus_W {\mathbb{N}}\oplus \bigoplus_W Q^+ \}$, this implies that the class of $ \begin{pmatrix} \delta_v \\ 0 \end{pmatrix} - \sum_{e \in S} \begin{pmatrix} \delta_{r(e)} \\ 0 \end{pmatrix}$ is in $\{ [x] : x \in \bigoplus_V {\mathbb{N}}\oplus \bigoplus_W {\mathbb{N}}\oplus \bigoplus_W Q^+ \}$, and thus $[\delta_v] - \sum_{e \in S} [\delta_{r(e)}]$ is in $\Gamma$. On the other hand, we know that $\Gamma$ is generated by the elements that $\overline{\rho}$ sends to the classes of the generators of $\bigoplus_V {\mathbb{N}}\oplus \bigoplus_W {\mathbb{N}}\oplus \bigoplus_W Q^+$. Now certainly the inverse image under $\overline{\rho}$ of the class of $\begin{pmatrix} \delta_v \\ 0 \end{pmatrix}$ for $v \in V \cup W$ will be $[\delta_v]$. In addition, if $v_i$ is a vertex on the tail added to an infinite emitter $v$, then we see that the inverse image under $\overline{\rho}$ of the element $\begin{pmatrix} 0 \\ \delta_{v_i}
\end{pmatrix}$ will be $\begin{pmatrix} \mathbf{u} \\ \mathbf{v}
\end{pmatrix}$ where $\mathbf{u}$ and $\mathbf{v}$ are as defined in the final paragraph of [@DT2 Lemma 2.3]. However, one can verify from how $\mathbf{u}$ and $\mathbf{v}$ are defined that $\begin{pmatrix} \mathbf{u} \\
\mathbf{v} \end{pmatrix}$ will have the form $\delta_v - \sum_{e \in S}
\delta_{r(e)}$ for some finite $S \subseteq s^{-1}(v)$. Thus $\Gamma$ is generated by the elements $[\delta_v]$ and $[\delta_v] - \sum_{e \in S}
[\delta_{r(e)}]$.
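As an example, let $E$ be the graph with a single vertex $v$ and infinitely many loops, so that $C^*(E) \cong \mathcal{O}_\infty$. Here $W = \{ v \}$, $V = \emptyset$, and $\begin{pmatrix} B^t-I \\ C^t \end{pmatrix}$ is the zero map into ${\mathbb{Z}}^W \cong {\mathbb{Z}}$, so $K_0(C^*(E)) \cong {\mathbb{Z}}$ with $[p_v]_0 \mapsto 1$. Since $r(e) = v$ for every loop $e$, the semigroup described in the theorem is generated by $\{ 1 \} \cup \{ 1-k : k \geq 1 \}$, which is all of ${\mathbb{Z}}$. This is consistent with the fact that $\mathcal{O}_\infty$ is purely infinite and simple, so that $K_0(\mathcal{O}_\infty)^+ = K_0(\mathcal{O}_\infty)$.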
Applications
============
AF-algebras
-----------
The graph algebra $C^*(E)$ is an AF-algebra if and only if $E$ has no loops [@KPR Theorem 2.4]. By Elliott’s Theorem AF-algebras are classified by their ordered $K_0$-groups. Hence for two graphs containing no loops, Theorem \[K-theory\] can be used to determine if their associated $C^*$-algebras are isomorphic (as well as stably isomorphic).
States on $\boldsymbol{K_0(C^*(E))}$
------------------------------------
If $A$ is a $C^*$-algebra containing a countable approximate unit $\{ p_n
\}_{n=1}^\infty$ consisting of projections, then a *state* on $K_0(A)$ is a homomorphism $f : K_0(A) \to {\mathbb{R}}$ such that $f(K_0(A)^+) \subseteq {\mathbb{R}}^+$ and $\lim_{n \to \infty} f([p_n]_0) = 1$. The set of all states on $K_0(A)$ is denoted $S(K_0(A))$ and we make it into a topological space by giving it the weak-$*$ topology.
\[graph-trace\] If $E$ is a graph, then a *graph trace* on $E$ is a function $g : E^0 \rightarrow {\mathbb{R}}^+$ with the following two properties:
1. \[g-t-1\] For any nonsingular vertex $v \in E^0$ we have $g(v) =
\sum_{ \{e \in E^1 : s(e) = v \} } g(r(e)).$
2. \[g-t-2\] For any infinite emitter $v \in E^0$ and any finite set of edges $e_1, \ldots, e_n \in s^{-1}(v)$ we have $g(v) \geq
\sum_{i=1}^n g(r(e_i)).$
We define the norm of $g$ to be the (possibly infinite) value $\|g
\| := \sum_{v \in E^0} g(v)$, and we shall use $T(E)$ to denote the set of all graph traces on $E$ with norm 1.
\[g-t-is-state\] If $E$ is a graph, then the state space $S(K_0(C^*(E)))$ with the weak-$*$ topology is naturally isomorphic to $T(E)$ with the topology generated by the subbasis $\{ N_{v,\epsilon} (g) : v \in E^0, \epsilon > 0,
\text{ and } g \in T(E) \}$, where $N_{v,\epsilon} (g):= \{h \in
T(E) : |h(v)-g(v)| < \epsilon \}$.
We define a map ${\iota}: S(K_0(C^*(E))) \to T(E)$ by ${\iota}(f)(v) := f([p_v]_0)$. We shall show that ${\iota}$ is an affine homeomorphism. To see that ${\iota}$ is injective note that if ${\iota}(f_1)={\iota}(f_2)$, then for each $v \in E^0$ we have that $f_1([p_v]_0) = {\iota}(f_1) (v) =
{\iota}(f_2) (v) = f_2([p_v]_0)$, and since the $[p_v]_0$’s generate $K_0(C^*(E))$ it follows that $f_1 = f_2$.
To see that ${\iota}$ is surjective, let $g : E^0 \to {\mathbb{R}}^+$ be a graph trace with $\| g \| = 1$. We shall define a homomorphism $f : {\operatorname{coker}}\begin{pmatrix}
B^t-I \\ C^t \end{pmatrix} \to {\mathbb{R}}$ by setting $f ([\delta_v]) := g(v)$. Because $g$ satisfies (\[g-t-1\]) of Definition \[graph-trace\] we see that $f$ is well defined. Also, since the values of $g$ are positive and $g$ satisfies (\[g-t-2\]) of Definition \[graph-trace\] we see that $f(K_0(C^*(E))^+) \subseteq {\mathbb{R}}^+$. Finally, since $g$ has norm 1 we see that $\lim_{n \to \infty} f \left( \left[ \sum_{i=1}^n p_{v_i} \right]_0 \right) = \lim_{n \to \infty}
\sum_{i=1}^n g (v_i) = \| g \| =1$. So $f$ is a state on $K_0(C^*(E))$ and ${\iota}(f) = g$.
It is straightforward to verify that ${\iota}$ is an affine homeomorphism.
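For instance, if $E$ consists of a single vertex $v$ with a single loop, then condition (\[g-t-1\]) is automatic and $\| g \| = g(v)$, so $T(E)$ contains exactly one element, the graph trace with $g(v) = 1$; accordingly $K_0(C^*(E)) \cong {\mathbb{Z}}$ admits exactly one state. If instead $v$ emits infinitely many loops, condition (\[g-t-2\]) gives $g(v) \geq n g(v)$ for every $n$, forcing $g(v) = 0$, so that $T(E)$ is empty; this is consistent with the fact that $C^*(E) \cong \mathcal{O}_\infty$ has no tracial states.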
Tracial states on $\boldsymbol{C^*(E)}$
---------------------------------------
A *trace* on a $C^*$-algebra $A$ is a linear functional $\tau : A
\rightarrow {\mathbb{C}}$ with the property that $\tau(ab)=\tau(ba)$ for all $a,b \in
A$. We say that $\tau$ is *positive* if $\tau (a) \geq 0$ for all $a
\in A^+$. If $\tau$ is positive and $\| \tau \| = 1$ we call $\tau$ a *tracial state*. The set of all tracial states is denoted $T(A)$ and when $T(A)$ is nonempty we equip it with the weak-$*$ topology. Let $A$ be a $C^*$-algebra with a countable approximate unit consisting of projections. If $\tau$ is a tracial state on $A$, then it induces a map $K_0(\tau) : K_0(A) \to {\mathbb{R}}$ given by $K_0(\tau)([p]_0-[q]_0) = \tau(p)-\tau(q)$. The map $K_0(\tau)$ will be an element of $S(K_0(A))$ (see [@RLL §5.2] for more details) and thus there is a continuous affine map $r_A : T(A) \to S(K_0(A))$ defined by $r_A(\tau) :=
K_0(\tau)$.
It is a fact that any quasitrace on an exact $C^*$-algebra extends to a trace (this was proven by Haagerup for unital $C^*$-algebras [@Haa] and shown to hold for nonunital $C^*$-algebras by Kirchberg [@Kir2]). Furthermore, Blackadar and Rørdam showed in [@BR] that when $A$ is unital every element in $K_0(A)$ lifts to a quasitrace. It is straightforward to extend the result of Blackadar and Rørdam to $C^*$-algebras with a countable approximate unit consisting of projections. Thus when $A$ is a graph algebra we see that the map $r_A : T(A) \to S(K_0(A))$ is surjective.
If $A$ has real rank zero, then the span of the projections in $A$ is dense in $A$ and $r_A$ is injective. It was shown in [@Jeo] that a graph algebra $C^*(E)$ has real rank zero if and only if the graph $E$ satisfies Condition (K); that is, no vertex in $E$ is the base of exactly one simple loop. Therefore, when $A = C^*(E)$ and $E$ is a graph satisfying Condition (K), the map $r_A$ is a homeomorphism and Proposition \[g-t-is-state\] shows that the tracial states on $C^*(E)$ are identified in a canonical way with $T(E)$.
[99]{}
T. Bates, D. Pask, I. Raeburn, and W. Szymański, *The $C^*$-algebras of row-finite graphs*, New York J. Math. **6** (2000), 307–324.
B. Blackadar and M. Rørdam, *Extending states on preordered semigroups and the existence of quasitraces on $C^*$-algebras*, J. Algebra, **152** (1992), 240–247.
D. Drinen and M. Tomforde, *The $C^*$-algebras of arbitrary graphs*, Rocky Mountain J. Math, to appear.
D. Drinen and M. Tomforde, *Computing $K$-theory and Ext for graph $C^*$-algebras*, Illinois J. Math., to appear.
U. Haagerup, *Every quasi-trace on an exact $C^*$-algebra is a trace*, preprint (1991).
J. A. Jeong, *Real rank of generalized Cuntz-Krieger algebras*, preprint (2000).
E. Kirchberg, *On the existence of traces on exact stably projectionless simple $C^*$-algebras*, Operator Algebras and their Applications (P. A. Fillmore and J. A. Mingo, eds.) Fields Institute Communications, vol. 13, Amer. Math. Soc. 1995, p. 171–172.
A. Kumjian, D. Pask, and I. Raeburn, *Cuntz-Krieger algebras of directed graphs*, Pacific J. Math. **184** (1998), 161–174.
I. Raeburn and W. Szymański, *Cuntz-Krieger algebras of infinite graphs and matrices*, preprint (2000).
M. Rørdam, F. Larsen, and N. J. Laustsen, *An Introduction to $K$-theory for $C^*$-algebras*, London Mathematical Society Student Texts **49**, Cambridge University Press, Cambridge, 2000.
---
abstract: 'The mathematical basis for the Gaussian entanglement is discussed in detail, as well as its implications in the internal space-time structure of relativistic extended particles. It is shown that the Gaussian entanglement shares the same set of mathematical formulas with the harmonic oscillator in the Lorentz-covariant world. It is thus possible to transfer the concept of entanglement to the Lorentz-covariant picture of the bound state which requires both space and time separations between two constituent particles. These space and time variables become entangled as the bound state moves with a relativistic speed. It is shown also that our inability to measure the time-separation variable leads to an entanglement entropy together with a rise in the temperature of the bound state. As was noted by Paul A. M. Dirac in 1963, the system of two oscillators contains the symmetries of $O(3,2)$ de Sitter group containing two $O(3,1)$ Lorentz groups as its subgroups. Dirac noted also that the system contains the symmetry of the $Sp(4)$ group which serves as the basic language for two-mode squeezed states. Since the $Sp(4)$ symmetry contains both rotations and squeezes, one interesting case is the combination of rotation and squeeze resulting in a shear. While the current literature is mostly on the entanglement based on squeeze along the normal coordinates, the shear transformation is an interesting future possibility. The mathematical issues on this problem are clarified.'
---
[**Entangled Harmonic Oscillators and Space-time Entanglement**]{}
Sibel Ba[ş]{}kal [^1]\
Department of Physics, Middle East Technical University, 06531 Ankara, Turkey
Young S. Kim[^2]\
Center for Fundamental Physics, University of Maryland,\
College Park, Maryland 20742, U.S.A.
Marilyn E. Noz[^3]\
Department of Radiology,\
New York University New York, New York 10016, U.S.A.
Introduction
============
Entanglement problems deal with fundamental issues in physics. Among them, the Gaussian entanglement is of current interest not only in quantum optics [@gied03; @braun05; @kn05job; @ge15] but also in other dynamical systems [@kn05job; @ging02; @dodd04; @ferra05; @adesso07]. The underlying mathematical language for this form of entanglement is that of harmonic oscillators. In this paper, we first present the mathematical tools which are, or may become, useful in this branch of physics.
The entangled Gaussian state is based on the formula $$\label{seri00}
\frac{1}{\cosh\eta}\sum_{k} (\tanh{\eta})^k \chi_{k}(x) \chi_{k}(y) ,$$ where $\chi_{n}(x)$ is the $n^{th}$ excited-state oscillator wave function.
In Chapter 16 of their book [@walls08], Walls and Milburn discussed in detail the role of this formula in the theory of quantum information. Earlier, this formula played the pivotal role for Yuen to formulate his two-photon coherent states or two-mode squeezed states [@yuen76]. The same formula was used by Yurke and Potasek in 1987 [@yurke87] and by Ekert and Knight [@ekert89] for the two-mode squeezed state in which one of the photons is not observed. The effect of entanglement can be seen in beam-splitter experiments [@paris99; @mskim02].
In this paper, we point out first that the series of Eq.(\[seri00\]) can also be written as a squeezed Gaussian form $$\label{i01}
\frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{4}\left[e^{-2\eta}(x + y)^2 +
e^{2\eta}(x - y)^{2}\right]\right\}} ,$$ which becomes $$\label{i02}
\frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{2}\left(x^2 + y^2\right)\right\}} ,$$ when $\eta = 0$.
We can obtain the squeezed form of Eq.(\[i01\]) by replacing $x$ and $y$ by $x'$ and $y'$ respectively, where $$\label{sq01}
\pmatrix{x' \cr y'} = \pmatrix{\cosh\eta
& -\sinh\eta \cr -\sinh\eta & \cosh\eta}
\pmatrix{x \cr y} .$$ If $x$ and $y$ are replaced by $z$ and $t$, Eq.(\[sq01\]) becomes the formula for the Lorentz boost along the $z$ direction. Indeed, the Lorentz boost is a squeeze transformation [@kn05job; @hkn90].
The squeezed Gaussian form of Eq.(\[i01\]) plays the key role in studying boosted bound states in the Lorentz-covariant world [@dir27; @dir45; @dir49; @yuka53; @knp86], where $z$ and $t$ are the space and time separations between two constituent particles. Since the mathematics of this physical system is the same as the series given in Eq.(\[seri00\]), the physical concept of entanglement can be transferred to the Lorentz-covariant bound state, as illustrated in Fig. \[resonance\].
![One mathematics for two branches of physics. Let us look at Eq.(\[seri00\]) and Eq.(\[i01\]) applicable to quantum optics and special relativity respectively. They are the same formula from the Lorentz group with different variables as in the case of the LCR circuit and the mechanical oscillator sharing the same second-order differential equation.[]{data-label="resonance"}](resonance11b.eps)
We can approach this problem from the system of two harmonic oscillators. In 1963, Paul A. M. Dirac studied the symmetry of this two-oscillator system and discussed all possible transformations applicable to this oscillator [@dir63]. He concluded that there are ten possible generators of transformations satisfying a closed set of commutation relations. He then noted that this closed set corresponds to the Lie algebra of the $O(3,2)$ de Sitter group which is the Lorentz group applicable to three space-like and two time-like dimensions. This $O(3,2)$ group has two $O(3,1)$ Lorentz groups as its subgroups.
We note that the Lorentz group is the language of special relativity, while the harmonic oscillator is one of the major tools for interpreting bound states. Therefore, Dirac’s two-oscillator system can serve as a mathematical framework for understanding quantum bound systems in the Lorentz-covariant world.
Within this formalism, the series given in Eq.(\[seri00\]) can be produced from the ten-generator Dirac system. In discussing the oscillator system, the standard procedure is to use the normal coordinates defined as $$\label{norm00}
u = \frac{x + y}{\sqrt{2}}, \quad\mbox{and}\quad v = \frac{x - y}{\sqrt{2}}.$$ In terms of these variables, the transformation given in Eq.(\[sq01\]) takes the form $$\label{sq02}
\pmatrix{u' \cr v'} = \pmatrix{e^{-\eta} & 0 \cr
0 & e^{\eta}}
\pmatrix{u \cr v} ,$$ where this is a squeeze transformation along the normal coordinates. While the normal-coordinate transformation is a standard procedure, it is interesting to note that it also serves as a Lorentz boost [@dir49].
With these preparations, we shall study in Sec. \[2dim\] the system of two oscillators and the coordinate transformations of current interest. It is pointed out in Sec. \[dirac63\] that there are ten different generators for such transformations, including those discussed in Sec. \[2dim\]: Dirac derived ten generators of transformations applicable to these oscillators, and they satisfy a closed set of commutation relations which is the same as the Lie algebra of the $O(3,2)$ de Sitter group, containing two Lorentz groups among its subgroups. In Sec. \[wigf\], Dirac’s ten-generator symmetry is studied in the Wigner phase-space picture, and it is shown that Dirac’s symmetry contains both canonical and Lorentz transformations.
While the Gaussian entanglement starts from the oscillator wave function in its ground state, we study in Sec. \[excited\] the entanglements of excited oscillator states. We give a detailed explanation of how the series of Eq.(\[seri00\]) can be derived from the squeezed Gaussian function of Eq.(\[i01\]).
In Sec. \[shear\], we study in detail how the sheared state can be derived from a squeezed state. It appears to be a rotated squeezed state, but this is not the case. In Sec. \[restof\], we study what happens when one of the two entangled variables is not observed within the framework of Feynman’s rest of the universe [@fey72; @hkn99ajp].
In Sec. \[spt\], we note that most of the mathematical formulas in this paper have been used earlier for understanding relativistic extended particles in the Lorentz-covariant harmonic oscillator formalism [@kn73; @knp86; @kno79jmp; @kno79ajp; @kiwi90pl; @kn11symm]. These formulas allow us to transport the concept of entanglement from the current problem of physics to quantum bound states in the Lorentz-covariant world. The time separation between the constituent particles is not observable, and is not known in the present form of quantum mechanics. However, this variable gives its effect in the real world by entangling itself with the longitudinal variable.
Two-dimensional Harmonic Oscillators {#2dim}
====================================
The Gaussian form $$\label{wf01}
\left[\frac{1}{\sqrt{\pi}}\right]^{1/2}\exp{\left(-\frac{x^{2}}{2}\right)}$$ is used in many branches of science. For instance, we can construct this function by repeatedly throwing dice.
In physics, this is the wave function for the one-dimensional harmonic oscillator in the ground state. This function is also used for the vacuum state in quantum field theory, as well as the zero-photon state in quantum optics. For excited oscillator states, the wave function takes the form $$\label{wf02}
\chi_{n}(x) = \left[\frac{1}{\sqrt{\pi}2^n n!}\right]^{1/2}
H_{n}(x) \exp{\left(\frac{-x^{2}}{2}\right)} ,$$ where $H_{n}(x)$ is the Hermite polynomial of the $n^{th}$ degree. The properties of this wave function are well known, and it becomes the Gaussian form of Eq.(\[wf01\]) when $n = 0$.
We can now consider the two-dimensional space with the orthogonal coordinate variables $x$ and $y$, and the same wave function with the $y$ variable: $$\label{wf05}
\chi_{m}(y) = \left[\frac{1}{\sqrt{\pi}2^m m!}\right]^{1/2}
H_{m}(y) \exp{\left(\frac{-y^{2}}{2}\right)} ,$$ and construct the function $$\label{wf07}
\psi^{n,m}(x,y) =\left[\chi_{n}(x)\right]\left[\chi_{m}(y)\right].$$ This form is clearly separable in the $x$ and $y$ variables. If $n$ and $m$ are zero, the wave function becomes $$\label{gau00}
\psi^{0,0}(x,y) = \frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{2}\left(x^2 + y^2\right)\right\}}.$$ Under the coordinate rotation $$\label{trans11}
\pmatrix{x' \cr y'} = \pmatrix{\cos\theta & -\sin\theta \cr
\sin\theta & \cos\theta} \pmatrix{x \cr y}$$ this function remains separable. This rotation is illustrated in Fig. \[ctran11\]. This is a transformation very familiar to us.
We can next consider the scale transformation of the form $$\label{sq03}
\pmatrix{x' \cr y'} = \pmatrix{e^{\eta} & 0 \cr
0 & e^{-\eta}} \pmatrix{x \cr y} .$$ This scale transformation is also illustrated in Fig. \[ctran11\]. This area-preserving transformation is known as the squeeze. Under this transformation, the Gaussian function is still separable.
If the direction of the squeeze is rotated by $45^o$, the transformation becomes the diagonal transformation of Eq.(\[sq02\]). Indeed, this is a squeeze in the normal coordinate system. This form of squeeze is most commonly used for squeezed states of light as well as in the subject of entanglement. It is important to note that, in terms of the $x$ and $y$ variables, this transformation can be written as Eq.(\[sq01\]) [@dir49]. In 1905, Einstein used this form of squeeze transformation for the longitudinal and time-like variables. This is known as the Lorentz boost.
In addition, we can consider the transformation of the form $$\pmatrix{x' \cr y'} =
\pmatrix{ 1 & 2\alpha \cr 0 & 1}
\pmatrix{x \cr y} .$$ This transformation shears the system as is shown in Fig. \[ctran11\].
After the squeeze or shear transformation, the wave function of Eq.(\[wf07\]) becomes non-separable, but it can still be written as a series expansion in terms of the oscillator wave functions. It can take the form $$\label{seri22}
\psi(x, y) = \sum_{n,m} A_{n,m} \chi_{n}(x) \chi_{m}(y) ,$$ with $$\sum_{n,m} |A_{n,m}|^2 = 1 ,$$ if $\psi(x,y)$ is normalized, as was the case for the Gaussian function of Eq.(\[gau00\]).
Squeezed Gaussian Function
--------------------------
Under the squeeze along the normal coordinate, the Gaussian form of Eq.(\[gau00\]) becomes $$\label{gau03}
\psi_{\eta}(x,y) = \frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{4}\left[e^{-2\eta}(x + y)^2 +
e^{2\eta}(x - y)^{2}\right]\right\}} ,$$ which was given in Eq.(\[i01\]). This function is not separable in the $x$ and $y$ variables. These variables are now entangled. We obtain this form by replacing, in the Gaussian function of Eq.(\[gau00\]), the $x$ and $y$ variables by $x'$ and $y'$ respectively, where $$\label{trans22}
x' = (\cosh\eta) x - (\sinh\eta) y, \quad\mbox{and}\quad
y'= ( \cosh\eta) y - (\sinh\eta) x .$$ This form of squeeze is illustrated in Fig. \[sqz00\], and the expansion of this squeezed Gaussian function becomes the series given in Eq.(\[seri00\]) [@knp86; @kno79ajp]. This aspect will be discussed in detail in Sec. \[excited\].
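The substitution can be checked directly: Eq.(\[trans22\]) gives $x' + y' = e^{-\eta}(x + y)$ and $x' - y' = e^{\eta}(x - y)$, so that $$x'^2 + y'^2 = \frac{1}{2}\left[e^{-2\eta}(x + y)^2 + e^{2\eta}(x - y)^{2}\right] ,$$ and the Gaussian form of Eq.(\[gau00\]), written in the primed variables, is precisely Eq.(\[gau03\]).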
![Transformations in the two-dimensional space. The object can be rotated, squeezed, or sheared. In all three cases, the area remains invariant. []{data-label="ctran11"}](ctran11b.eps)
![Squeeze along the $45^o$ direction, discussed most frequently in the literature.[]{data-label="sqz00"}](sqz00b.eps)
In 1976 [@yuen76], Yuen discussed two-photon coherent states, often called squeezed states of light. This series expansion served as the starting point for two-mode squeezed states. More recently, in 2003, Giedke [*et al.*]{} [@gied03] used this formula to formulate the concept of the Gaussian entanglement.
There is another way to derive the series. For the harmonic oscillator wave functions, there are step-down and step-up operators [@dir45]. These are defined as $$a = \frac{1}{\sqrt{2}}\left(x + \frac{\partial}{\partial x} \right),
\quad\mbox{and}\quad
a^{\dag} = \frac{1}{\sqrt{2}} \left(x - \frac{\partial}{\partial x}\right) .$$ If they are applied to the oscillator wave function, we have $$a~\chi_n(x) = \sqrt{n}~\chi_{n-1}(x), \quad\mbox{and}\quad
a^{\dag}~\chi_n(x) = \sqrt{n+1}~ \chi_{n + 1}(x) .$$ Likewise, we can introduce $b$ and $b^{\dag} $ operators applicable to $\chi_{n}(y)$: $$b = \frac{1}{\sqrt{2}}\left( y + \frac{\partial}{\partial y}\right),
\quad\mbox{and}\quad
b^{\dag} = \frac{1}{\sqrt{2}} \left(y - \frac{\partial}{\partial y}\right) .$$ Thus $$\begin{aligned}
&{}& \left(a^{\dag}\right)^{n} \chi_{0}(x) = \sqrt{n!}~ \chi_{n}(x), \nonumber \\[1ex]
&{}& \left(b^{\dag}\right)^{n} \chi_{0}(y) = \sqrt{n!}~ \chi_{n}(y) ,\end{aligned}$$ and $$a~\chi_{0}(x) = b~\chi_{0}(y) = 0 .$$ In terms of these variables, the transformation taking the Gaussian function of Eq.(\[gau00\]) to its squeezed form of Eq.(\[gau03\]) can be written as $$\label{exp01}
\exp{\left\{\eta\left( a^{\dag}b^{\dag} - a~b \right)\right\}} ,$$ which can also be written as $$\label{exp02}
\exp{\left\{-\eta\left(x\frac{\partial}{\partial y} +
y\frac{\partial}{\partial x}\right)\right\}} .$$
Next, we can consider the exponential form $$\label{exp05}
\exp{\left\{(\tanh\eta)\, a^{\dag} b^{\dag} \right\}} ,$$ which can be expanded as $$\label{seri11}
\sum_{n} \frac{1}{n!} (\tanh\eta)^{n} \left(a^{\dag} b^{\dag}\right)^{n}.$$ If this operator is applied to the ground state of Eq.(\[gau00\]), the result is $$\label{seri12}
\sum_{n} (\tanh\eta)^{n} \chi_{n}(x) \chi_{n}(y) .$$ This form is not normalized, while the series of Eq.(\[seri00\]) is. What is the origin of this difference?
There is a similar problem with the one-photon coherent state [@klaud73; @sal07]. There, the series comes from the expansion of the exponential form $$\label{exp22}
\exp{\left\{\alpha a^{\dag}\right\}} ,$$ which can be expanded to $$\sum_{n} \frac{1}{n!} \alpha^{n} \left(a^{\dag}\right)^{n} .$$ However, this operator is not unitary. In order to make this series unitary, we consider the exponential form $$\exp{\left(\alpha a^{\dag} - \alpha^{*} a \right)}$$ which is unitary. This expression can then be written as $$e^{-\alpha\alpha^*/2}
\left[ \exp{ \left(\alpha a^{\dag}\right) } \right ]
\left[\exp{\left(-\alpha^* a \right)} \right],$$ according to the Baker-Campbell-Hausdorff (BCH) relation [@miller72; @hall03]. If this is applied to the ground state, the last bracket can be dropped, and the result is $$e^{-\alpha\alpha^*/2} \exp{\left[\alpha a^{\dag} \right]} ,$$ which is the operator of Eq.(\[exp22\]) supplied with the normalization constant $$e^{-\alpha\alpha^*/2}.$$
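Explicitly, when this operator acts on the ground state it produces the normalized coherent state $$e^{-\alpha\alpha^*/2} \sum_{n} \frac{\alpha^{n}}{\sqrt{n!}}~ \chi_{n}(x) ,$$ whose squared norm is $e^{-\alpha\alpha^*} \sum_{n} (\alpha\alpha^*)^{n}/n! = 1$, as it should be.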
Likewise, we can conclude that the series of Eq.(\[seri12\]) is different from that of Eq.(\[seri00\]) due to the difference between the unitary operator of Eq.(\[exp01\]) and the non-unitary operator of Eq.(\[exp05\]). It may be possible to derive the normalization factor using the BCH formula, but it seems to be intractable at this time. The best way to resolve this problem is to present the exact calculation of the unitary operator leading to the normalized series of Eq.(\[gau00\]). We shall return to this problem in Sec. \[excited\], where squeezed excited states are studied.
Sheared Gaussian Function
-------------------------
In addition, there is a transformation called “shear,” where only one of the two coordinates is translated as shown in Fig. \[ctran11\]. This transformation takes the form $$\label{shr01}
\pmatrix{x' \cr y'} = \pmatrix{1 & 2\alpha \cr 0 & 1} \pmatrix{x \cr y} ,$$ which leads to $$\label{shr02}
\pmatrix{x' \cr y'} = \pmatrix{x + 2\alpha y \cr y} .$$ This shear is one of the basic transformations in engineering sciences. In physics, this transformation plays the key role in understanding the internal space-time symmetry of massless particles [@wig39; @wein64; @kiwi90jmp]. This matrix plays the pivotal role during the transition from the oscillator mode to the damping mode in classical damped harmonic oscillators [@bkn14; @bkn15].
Under this transformation, the Gaussian form becomes $$\label{gau33}
\psi_{shr}(x,y) = \frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{2}\left[(x - 2\alpha y)^2 + y^2\right]\right\}}.$$ It is possible to expand this into a series of the form of Eq.(\[seri22\]) [@kimyeh92].
The transformation applicable to the Gaussian form of Eq.(\[gau00\]) is $$\label{shr05}
\exp{\left(-2\alpha y \frac{\partial}{\partial x} \right)} ,$$ and the generator is $$\label{shr07}
- i y \frac{\partial}{\partial x} .$$ It is of interest to see where this generator stands among the ten generators of Dirac.
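Since $y$ commutes with $\partial/\partial x$, the operator of Eq.(\[shr05\]) simply translates the $x$ variable by $-2\alpha y$: $$\exp{\left(-2\alpha y \frac{\partial}{\partial x} \right)} f(x,y) = f(x - 2\alpha y, y) .$$ Applied to the Gaussian form of Eq.(\[gau00\]), this reproduces the sheared Gaussian of Eq.(\[gau33\]).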
However, the most pressing problem is whether the sheared Gaussian form can be regarded as a rotated squeezed state. The basic mathematical issue is that the shear matrix of Eq.(\[shr01\]) is triangular and cannot be diagonalized. Therefore, the shear cannot be a squeeze by itself. Yet, the Gaussian form of Eq.(\[gau33\]) appears to be a rotated squeezed state, though not along the normal coordinates. We shall look at this problem in detail in Sec. \[shear\].
Dirac’s Entangled Oscillators {#dirac63}
=============================
Paul A. M. Dirac devoted much of his life-long efforts to the task of making quantum mechanics compatible with special relativity. Harmonic oscillators serve as an instrument for illustrating quantum mechanics, while special relativity is the physics of the Lorentz group. Thus, Dirac attempted to construct a representation of the Lorentz group using harmonic oscillator wave functions [@dir45; @dir63].
In his 1963 paper [@dir63], Dirac started from the two-dimensional oscillator whose wave function takes the Gaussian form given in Eq.(\[gau00\]). He then considered unitary transformations applicable to this ground-state wave function. He noted that they can be generated by the following ten Hermitian operators $$\begin{aligned}
\label{dir10}
&{}& L_{1} = {1\over 2}\left(a^{\dag} b + b^{\dag} a\right) ,\qquad
L_{2} = {1\over 2i}\left(a^{\dag} b - b^{\dag} a\right) , \nonumber \\[3mm]
&{}& L_{3} = {1\over 2}\left(a^{\dag} a - b^{\dag} b \right) , \qquad
S_{3} = {1\over 2}\left(a^{\dag}a + bb^{\dag}\right) , \nonumber \\[3mm]
&{}& K_{1} = -{1\over 4}\left(a^{\dag}a^{\dag} + aa - b^{\dag}b^{\dag} - bb\right) , \nonumber \\[3mm]
&{}& K_{2} = {i\over 4}\left(a^{\dag}a^{\dag} - aa + b^{\dag}b^{\dag} - bb\right) , \nonumber \\[3mm]
&{}& K_{3} = {1\over 2}\left(a^{\dag}b^{\dag} + ab\right) , \nonumber \\[3mm]
&{}& Q_{1} = -{i\over 4}\left(a^{\dag}a^{\dag} - aa - b^{\dag}b^{\dag} + bb \right) , \nonumber \\[3mm]
&{}& Q_{2} = -{1\over 4}\left(a^{\dag}a^{\dag} + aa + b^{\dag}b^{\dag} + bb \right) , \nonumber \\[3mm]
&{}& Q_{3} = {i\over 2}\left(a^{\dag}b^{\dag} - ab \right) .\end{aligned}$$ He then noted that these operators satisfy the following set of commutation relations. $$\begin{aligned}
\label{dir12}
&{}& [L_{i}, L_{j}] = i\epsilon _{ijk} L_{k} ,\qquad
[L_{i}, K_{j}] = i\epsilon _{ijk} K_{k} , \qquad
[L_{i}, Q_{j}] = i\epsilon _{ijk} Q_{k} , \nonumber\\[1ex]
&{}&
[K_{i}, K_{j}] = [Q_{i}, Q_{j}] = -i\epsilon _{ijk} L_{k} , \qquad
[L_{i}, S_{3}] = 0 , \nonumber\\[1ex]
&{}&
[K_{i}, Q_{j}] = -i\delta _{ij} S_{3} , \qquad
[K_{i}, S_{3}] = -iQ_{i} , \qquad [Q_{i}, S_{3}] = iK_{i} .\end{aligned}$$
Dirac then determined that these commutation relations constitute the Lie algebra for the $O(3,2)$ de Sitter group with ten generators. This de Sitter group is the Lorentz group applicable to three space coordinates and two time coordinates. Let us use the notation $(x, y, z, t, s)$, with $(x, y, z)$ as space coordinates and $(t, s)$ as two time coordinates. Then the rotation around the $z$ axis is generated by $$\label{dir15}
L_{3} = \pmatrix{0 & -i & 0 & 0 & 0 \cr i & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 } .$$ The generators $L_1$ and $L_2$ can also be constructed. The $K_3$ and $Q_3$ generators will take the form $$K_{3} = \pmatrix{0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & i & 0 \cr 0 & 0 & i & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 } , \qquad
Q_{3} =\pmatrix{0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & i \cr 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & i & 0 & 0 } .$$ From these two matrices, the generators $K_1, K_2, Q_1, Q_2$ can be constructed. The generator $S_3$ can be written as $$\label{dir18}
S_{3} = \pmatrix{0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr
0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & -i \cr 0 & 0 & 0 & i & 0 } .$$ The last five-by-five matrix generates rotations in the two-dimensional space of $(t, s)$. If we introduce these two time variables, the $O(3,2)$ group leads to two coupled Lorentz groups. The particle mass is invariant under Lorentz transformations. Thus, one Lorentz group cannot change the particle mass. However, with two coupled Lorentz groups we can describe the world with variable masses such as the neutrino oscillations.
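For instance, exponentiating $K_{3}$ gives $$\exp{\left(-i\eta K_{3}\right)} = \pmatrix{1 & 0 & 0 & 0 & 0 \cr 0 & 1 & 0 & 0 & 0 \cr 0 & 0 & \cosh\eta & \sinh\eta & 0 \cr 0 & 0 & \sinh\eta & \cosh\eta & 0 \cr 0 & 0 & 0 & 0 & 1 } ,$$ which is the Lorentz boost along the $z$ direction involving the time variable $t$, while the corresponding exponential of $Q_{3}$ boosts $z$ with the second time variable $s$.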
In Sec. \[2dim\], we used the operators $Q_{3}$ and $K_{3}$ as the generators for the squeezed Gaussian function. For the unitary transformation of Eq.(\[exp01\]), we used $$\label{exp11}
\exp{\left(-2i\eta Q_{3}\right)} .$$ However, the exponential form of Eq.(\[exp05\]) can be written as $$\label{exp12}
\exp{\left\{-i(\tanh\eta) \left(Q_{3} + iK_{3}\right)\right\}} ,$$ which is not unitary, as was seen before.
From the space-time point of view, both $K_{3}$ and $Q_{3}$ generate Lorentz boosts along the $z$ direction, with the time variables $t$ and $s$ respectively. The fact that the squeeze and Lorentz transformations share the same mathematical formula is well known. However, the non-unitary operator $iK_{3}$ does not seem to have a space-time interpretation.
As for the sheared state, the generator can be written as $$Q_{3} - L_{2} ,$$ leading to the expression given in Eq.(\[shr07\]). This is a Hermitian operator leading to the unitary transformation of Eq.(\[shr05\]).
Entangled Oscillators in the Phase-Space Picture {#wigf}
================================================
Also in his 1963 paper, Dirac states that the Lie algebra of Eq. (\[dir12\]) can serve as the Lie algebra of the four-dimensional symplectic group $Sp(4)$. This group allows us to study squeezed or entangled states in terms of the four-dimensional phase space consisting of two position and two momentum variables [@hkn90; @knp91; @kn13].
![Transformations generated by $Q_{3}$ and $K_{3}$. As the parameter $\eta$ becomes larger, both the space and momentum distributions become wider. []{data-label="ctran33"}](ctran33b.eps)
In order to study the $Sp(4)$ contents of the coupled oscillator system, let us introduce the Wigner function defined as [@wig32] $$\begin{aligned}
\label{wigf01}
\lefteqn{W(x,y; p,q) = \left({1\over \pi} \right)^{2}
\int \exp \left\{- 2i (px' + qy') \right\} } \nonumber\\[3mm]
\mbox{ } & \mbox{ } & \mbox{ }
\times \psi^{*}(x + x', y + y')
\psi (x - x', y - y') dx' dy' .\hspace*{2cm}\end{aligned}$$ If the wave function $\psi(x,y)$ is the Gaussian form of Eq.(\[gau00\]), the Wigner function becomes $$\label{wigf00}
W(x,y:p,q) = \left(\frac{1}{\pi}\right)^{2}
\exp{\left\{-\left(x^2 + p^2 + y^2 + q^2\right)\right\} } .$$
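This expression follows from a direct evaluation of Eq.(\[wigf01\]): for the Gaussian wave function, $$\psi^{*}(x + x', y + y')\, \psi(x - x', y - y') = \frac{1}{\pi} \exp{\left\{-\left(x^2 + x'^2 + y^2 + y'^2 \right)\right\}} ,$$ and the $x'$ and $y'$ integrations produce the factors $\sqrt{\pi}\, e^{-p^{2}}$ and $\sqrt{\pi}\, e^{-q^{2}}$ respectively.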
The Wigner function is defined over the four-dimensional phase space of $(x, p, y, q)$ just as in the case of classical mechanics. The unitary transformations generated by the operators of Eq.(\[dir10\]) are translated into Wigner transformations [@knp91; @hkn95jmp; @kn13]. As in the case of Dirac’s oscillators, there are ten corresponding generators applicable to the Wigner function. They are $$\begin{aligned}
\label{rotphase}
L_{1} &=& +{i\over 2}\left\{\left(x{\partial \over \partial q} -
q{\partial \over \partial x} \right) +
\left(y{\partial \over \partial p} -
p{\partial \over \partial y} \right)\right\}, \nonumber \\[3mm]
L_{2} &=& -{i\over 2}\left\{\left(x{\partial \over \partial y} -
y{\partial \over \partial x}\right) +
\left(p {\partial \over \partial q} -
q{\partial \over \partial p}\right)\right\} ,\nonumber \\[3mm]
L_{3} &=& +{i\over 2}\left\{\left(x{\partial \over \partial p} -
p{\partial \over \partial x}\right) -
\left(y{\partial \over \partial q} -
q{\partial \over \partial y}\right)\right\} , \nonumber \\[3mm]
S_{3} &=& -{i\over 2}\left\{\left(x{\partial \over \partial p} -
p{\partial \over \partial x}\right) +
\left(y{\partial \over \partial q} -
q{\partial \over \partial y}\right)\right\} ,\end{aligned}$$ and $$\begin{aligned}
\label{sqphase}
K_{1} &=& -{i\over 2}\left\{\left( x{\partial \over \partial p} +
p{\partial \over \partial x} \right) -
\left(y{\partial \over \partial q} +
q{\partial \over \partial y} \right)\right\}, \nonumber \\[3mm]
K_{2} &=& -{i\over 2}\left\{\left(x{\partial \over \partial x} +
y{\partial \over \partial y}\right) -
\left(p{\partial \over \partial p} +
q{\partial \over \partial q}\right)\right\} , \nonumber \\[3mm]
K_{3} &=& +{i\over 2}\left\{\left(x{\partial \over \partial q} +
q{\partial \over \partial x}\right) +
\left(y{\partial \over \partial p} +
p{\partial \over \partial y}\right)\right\} , \nonumber \\[3mm]
Q_{1} &=& +{i\over 2}\left\{\left(x{\partial \over \partial x} +
q{\partial \over \partial q}\right) -
\left(y{\partial \over \partial y} +
p{\partial \over \partial p}\right)\right\} ,\nonumber \\[3mm]
Q_{2} &=& -{i\over 2}\left\{\left(x{\partial \over \partial p} +
p{\partial \over \partial x}\right) +
\left(y{\partial \over \partial q} +
q{\partial \over \partial y}\right)\right\} ,\nonumber \\[3mm]
Q_{3} &=& -{i\over 2}\left\{\left(y{\partial \over \partial x} +
x{\partial \over \partial y} \right) -
\left(q{\partial \over \partial p} + p{\partial \over
\partial q}\right)\right\} .\end{aligned}$$ These generators also satisfy the Lie algebra given in Eq.(\[dir10\]) and Eq.(\[dir12\]). Transformations generated by these generators have been discussed in the literature [@hkn90; @hkn95jmp; @kn13].
As in the case of Sec. \[dirac63\], we are interested in the generators $Q_{3}$ and $K_{3}$. The transformation generated by $Q_{3}$ takes the form $$\left[\exp{\left\{\eta \left(x\frac{\partial}{\partial y} +
y\frac{\partial}{\partial x}\right)\right\} } \right]
\left[\exp{\left\{-\eta \left(p\frac{\partial}{\partial q} +
q\frac{\partial}{\partial p}\right)\right\} } \right] .$$ This exponential form squeezes the Wigner function of Eq.(\[wigf00\]) in the $x~y$ space as well as in the corresponding momentum space. However, in the momentum space, the squeeze is in the opposite direction, as illustrated in Fig. \[ctran33\]. This is what we expect from a canonical transformation in classical mechanics. Indeed, this corresponds to the unitary transformation which played the major role in Sec. \[2dim\].
Even though it appeared insignificant in Sec. \[2dim\], $K_{3}$ acquired a definite physical interpretation in Sec. \[dirac63\]. The transformation generated by $K_{3}$ takes the form $$\left[\exp{\left\{\eta \left(x\frac{\partial}{\partial q} +
q\frac{\partial}{\partial x}\right)\right\} } \right]
\left[\exp{\left\{\eta \left(y\frac{\partial}{\partial p} +
p\frac{\partial}{\partial y}\right)\right\} } \right] .$$ This performs the squeeze in the $x~q$ and $y~p$ spaces. In this case, the squeezes have the same sign, and the rate of increase is the same in all directions. We can thus have the same picture of squeeze for both $x~y$ and $p~q$ spaces as illustrated in Fig. \[ctran33\]. This parallel transformation corresponds to the Lorentz squeeze [@knp86; @kno79jmp].
As for the sheared state, the combination $$Q_{3} - L_{2} = -i\left( y\frac{\partial}{\partial x} -
p\frac{\partial}{\partial q} \right) ,$$ generates, in addition to the shear in the $x~y$ space, the corresponding shear in the $q~p$ space.
Entangled Excited States {#excited}
========================
In Sec. \[2dim\], we discussed the entangled ground state, and noted that the entangled state of Eq.(\[seri00\]) is a series expansion of the squeezed Gaussian function. In this section, we are interested in what happens when we squeeze an excited oscillator state starting from $$\label{wf10}
\chi_{n}(x)\chi_{m}(y) .$$ In order to entangle this state, we should replace $x$ and $y$ respectively by $x'$ and $y'$ given in Eq.(\[trans22\]).
The question is how the oscillator wave function is squeezed after this operation. Let us note first that the wave function of Eq.(\[wf10\]) satisfies the equation $$\label{cov01}
\frac{1}{2}\left\{\left(x^2 - \frac{\partial^2}{\partial x^2}\right)
- \left(y^2 - \frac{\partial^2}{\partial y^2}\right)\right\}
\chi_n(x) \chi_m(y) = (n - m) \chi_n(x) \chi_m(y) .$$ This equation is invariant under the squeeze transformation of Eq.(\[trans22\]), and thus the eigenvalue $(n - m)$ remains invariant. Unlike the usual two-oscillator system, the $x$ component and the $y$ component have opposite signs. This is the reason why the overall equation is squeeze-invariant [@kno79jmp; @kn05job; @fkr71].
We then have to write this squeezed oscillator in the series form of Eq.(\[seri22\]). The most interesting case is of course for $m = n =0 $, which leads to the Gaussian entangled state given in Eq.(\[gau03\]). Another interesting case is for $m = 0$ while $n$ is allowed to take all integer values. This single-excitation system has applications in the covariant oscillator formalism where no time-like excitations are allowed. The Gaussian entangled state is a special case of this single-excited oscillator system.
The most general case is for nonzero integers for both $n$ and $m$. The calculation for this case is available in the literature [@knp86; @rotbart81]. Seeing no immediate physical applications of this case, we shall not reproduce this calculation in this section.
For the single-excitation system, we write the starting wave function as $$\label{wf12}
\chi_{n}(x)\chi_{0}(y) = \left[\frac{1}{\pi~2^n n!}\right]^{1/2}
H_{n}(x) \exp{\left\{-\left(\frac{x^2 + y^2 }{2}\right)\right\}} .$$ There are no excitations along the $y$ coordinate. In order to squeeze this function, our plan is to replace $x$ and $y$ by $x'$ and $y'$ respectively and write $\chi_{n}(x')\chi_{0}(y')$ as a series in the form $$\chi_{n}(x')\chi_{0}(y') = \sum_{k',k} A_{k',k}(n) \chi_{k'}(x)\chi_{k}(y) .$$ Since $k' - k = n$ or $ k' = n + k$, according to the eigenvalue of the differential equation given in Eq.(\[cov01\]), we write this series as $$\chi_{n}(x')\chi_{0}(y') = \sum_{k',k} A_{k}(n) \chi_{(k + n)}(x)\chi_{k}(y) ,$$ with $$\sum_{k} |A_{k}(n)|^2 = 1 .$$ This coefficient is $$\label{coeff55}
A_{k}(n) = \int \chi_{k + n}(x)\chi_{k}(y)\chi_{n}(x')\chi_{0}(y')~ dx~dy .$$ This calculation was given in the literature in a fragmentary way in connection with a Lorentz-covariant description of extended particles starting from Ruiz’s 1974 paper [@ruiz74], subsequently by Kim [*et al.*]{} in 1979 [@kno79ajp] and by Rotbart in 1981 [@rotbart81]. In view of the recent developments of physics, it seems necessary to give one coherent calculation of the coefficient of Eq.(\[coeff55\]).
We are now interested in the squeezed oscillator function $$\begin{aligned}
\label{coef55}
&{}& A_{k}(n) =
\left[\frac{1}{\pi^2 ~2^{n} n! ~2^{n+k} (n + k)! ~2^{k} k!}\right]^{1/2} \nonumber \\[1ex]
&{}& \times \int H_{n+k}(x) H_{k}(y) H_{n}(x')
\exp{\left\{-\left(\frac{x^2 + y^2 + x'^2 + y'^2 }{2}\right)\right\}}dx dy .\end{aligned}$$ As was noted by Ruiz [@ruiz74], the key to the evaluation of this integral is to introduce the generating function for the Hermite polynomials [@magnus66; @doman16]: $$G(r,z) = \exp{\left(-r^2 + 2rz\right)} = \sum_{m} \frac{r^m}{m!} H_m(z) ,$$ and evaluate the integral $$I = \int G(r,x) G(s,y) G(r',x')
\exp{\left\{-\left(\frac{x^2 + y^2 + x'^2 + y'^2 }{2}\right)\right\}}dx dy .$$ The integrand becomes one exponential function, and its exponent is quadratic in $x$ and $y$. This quadratic form can be diagonalized, and the integral can be evaluated [@knp86; @kno79ajp]. The result is $$I = \left[\frac{\pi}{\cosh\eta}\right] \exp{( 2rs \tanh\eta)}
\exp{\left(\frac{2rr'}{\cosh\eta}\right) }.$$ We can now expand this expression and choose the coefficients of $r^{n+k}$, $s^{k}$, and $r'^{n}$, which accompany $H_{n+k}(x)$, $H_{k}(y)$, and $H_{n}(x')$ respectively. The result is $$A_{k}(n) = \left(\frac{1}{\cosh\eta}\right)^{(n+1)}
\left[\frac{(n+k)!}{n!k!}\right]^{1/2} (\tanh\eta)^{k}.$$ Thus, the series becomes $$\label{seri55}
\chi_{n}(x') \chi_{0}(y') = \left(\frac{1}{\cosh\eta}\right)^{(n+1)}
\sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^{1/2}
(\tanh\eta)^{k} \chi_{k+n}(x)\chi_{k}(y) .$$ If $n = 0$, it is the squeezed ground state, and this expression becomes the entangled state of Eq.(\[gau03\]).
![Shear transformation of the Gaussian form given in Eq.(\[gau00\]).[]{data-label="shear00"}](shear00b.eps)
E(2)-sheared States {#shear}
===================
Let us next consider the effect of shear on the Gaussian form. From Fig. \[shear00\] and Fig. \[sqz00\], it is clear that the sheared state is a rotated squeezed state.
In order to understand this transformation, let us note that the squeeze and rotation are generated by the two-by-two matrices $$K = \pmatrix{0 & i \cr i & 0}, \qquad J = \pmatrix{0 & -i \cr i & 0}, \qquad$$ which generate the squeeze and rotation matrices of the form $$\begin{aligned}
&{}& \exp{\left(-i\eta K\right)} =
\pmatrix{\cosh\eta & \sinh\eta \cr \sinh\eta & \cosh\eta }, \nonumber\\[1ex]
&{}& \exp{\left(-i \theta J\right)} = \pmatrix{\cos\theta & -\sin\theta \cr
\sin\theta & \cos\theta} ,\end{aligned}$$ respectively. We can then consider $$\label{shr05a}
S = K - J = \pmatrix{0 & 2i \cr 0 & 0} .$$ This matrix has the property that $S^2 = 0$. Thus the transformation matrix becomes $$\label{shr10}
\exp{(-i\alpha S)} = \pmatrix{1 & 2\alpha \cr 0 & 1} .$$ Since $S^2 = 0$, the Taylor expansion truncates, and the transformation matrix becomes the triangular matrix of Eq.(\[shr02\]), leading to the transformation $$\label{shr09}
\pmatrix{x \cr y}
\quad\rightarrow\quad \pmatrix{x + 2\alpha y \cr y} .$$
The shear generator $S$ of Eq.(\[shr05a\]) indicates that the infinitesimal transformation is a rotation followed by a squeeze. Since both rotation and squeeze are area-preserving transformations, the shear should also be an area-preserving transformations.
In view of Fig. \[shear00\], we should ask whether the triangular matrix of Eq.(\[shr10\]) can be obtained from one squeeze matrix followed by one rotation matrix. This is not possible mathematically. It can, however, be written as a squeezed rotation matrix of the form
$$\label{sqrot01}
\pmatrix{e^{\lambda/2} & 0 \cr 0 & e^{-\lambda/2}}
\pmatrix{\cos\omega & \sin\omega \cr -\sin\omega & \cos\omega}
\pmatrix{e^{-\lambda/2} & 0 \cr 0 & e^{\lambda/2}},$$
resulting in $$\label{mat63}
\pmatrix{\cos\omega & e^{\lambda}\sin\omega \cr -e^{-\lambda}\sin\omega & \cos\omega}.$$ If we let $$\sin\omega = 2\alpha e^{-\lambda} ,$$ then $$\label{tri05}
\pmatrix{\cos\omega & 2\alpha \cr -2\alpha e^{-2\lambda} & \cos\omega }.$$ If $\lambda$ becomes infinite, the angle $\omega$ becomes zero, and this matrix becomes the triangular matrix of Eq.(\[shr10\]). This is a singular process where the parameter $\lambda$ goes to infinity.
If this transformation is applied to the Gaussian form of Eq.(\[gau00\]), it becomes $$\label{gau05}
\psi(x,y) = \frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{2}\left[(x - 2\alpha y)^2 + y^2\right]\right\}}.$$ The question is whether the exponential portion of this expression can be written as $$\label{gau07}
\exp{\left\{-\frac{1}{2}\left[e^{-2\eta}(x~\cos\theta + y~\sin\theta)^2 +
e^{2\eta}(x~\sin\theta - y~\cos\theta)^2\right]\right\}} .$$ The answer is Yes. This is possible if $$\begin{aligned}
&{}& \tan(2\theta) = \frac{1}{\alpha}, \nonumber \\[1ex]
&{}& e^{2\eta} = 1 + 2\alpha^2 + 2\alpha \sqrt{\alpha^2 + 1}, \nonumber \\[1ex]
&{}& e^{-2\eta} = 1 + 2\alpha^2 - 2\alpha \sqrt{\alpha^2 + 1}.\end{aligned}$$
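These relations imply $\cosh(2\eta) = 1 + 2\alpha^2$, $\sinh(2\eta) = 2\alpha\sqrt{\alpha^2 + 1}$, and $\sin(2\theta)\sinh(2\eta) = 2\alpha$, so that the coefficients of $x^2$, $y^2$, and $xy$ in the exponent of Eq.(\[gau07\]) reduce to those of Eq.(\[gau05\]).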
In Eq.(\[tri05\]), we needed a limiting case of $\lambda$ becoming infinite. This is necessarily a singular transformation. On the other hand, the derivation of the Gaussian form of Eq.(\[gau05\]) appears to be analytic. How is it possible? In order to achieve the transformation from the Gaussian form of Eq.(\[gau00\]) to Eq.(\[gau05\]), we need the linear transformation $$\pmatrix{\cos\theta & -\sin\theta \cr \sin\theta & \cos\theta}
\pmatrix{e^{\eta} & 0 \cr 0 & e^{-\eta}}.$$ If the initial form is invariant under rotations as in the case of the Gaussian function of Eq.(\[gau00\]), we can add another rotation matrix on the right hand side. We choose that rotation matrix to be $$\pmatrix{\cos(\theta - \pi/2) & -\sin(\theta - \pi/2) \cr
\sin(\theta - \pi/2) & \cos(\theta - \pi/2)} ,$$ write the three matrices as $$\pmatrix{\cos\theta' & -\sin\theta' \cr \sin\theta' & \cos\theta'}
\pmatrix{\cosh\eta & \sinh\eta \cr \sinh\eta & \cosh\eta}
\pmatrix{\cos\theta' & -\sin\theta' \cr \sin\theta' & \cos\theta'},$$ with $$\theta' = \theta - \frac{\pi}{4} .$$ The multiplication of these three matrices leads to $$\label{mat67}
\pmatrix{(\cosh\eta)\sin(2\theta) & \sinh\eta + (\cosh\eta)\cos(2\theta) \cr
\sinh\eta - (\cosh\eta)\cos(2\theta) & (\cosh\eta)\sin(2\theta)} .$$ The lower-left element can become zero when $\sinh\eta = \cosh(\eta)\cos(2\theta)$ and consequently this matrix becomes $$\pmatrix{1 & 2\sinh\eta \cr 0 & 1} .$$ Furthermore, this matrix can be written in the form of a squeezed rotation matrix given in Eq.(\[mat63\]), with $$\begin{aligned}
&{}& \cos\omega = (\cosh\eta)\sin(2\theta), \nonumber \\[1ex]
&{}& e^{-2\lambda} = \frac{ \cos(2\theta)- \tanh\eta }
{ \cos(2\theta)+ \tanh\eta } .\end{aligned}$$ The matrices of the form of Eq.(\[mat63\]) and Eq.(\[mat67\]) are known as the Wigner and Bargmann decompositions respectively [@wig39; @barg47; @hk88; @hkn99; @bkn14].
Feynman’s Rest of the Universe {#restof}
==============================
We need the concept of entanglement in quantum systems of two variables. The issue is how the measurement of one variable affects the other variable. The simplest case is what happens to the first variable while no measurements are taken on the second variable. This problem has a long history since von Neumann introduced the concept of density matrix in 1932 [@neu32]. While there are many books and review articles on this subject, Feynman stated this problem in his own colorful way. In his book on statistical mechanics [@fey72], Feynman makes the following statement about the density matrix.
[*When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts - the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe. To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system*]{}.
Indeed, Yurke and Potasek [@yurke87] and also Ekert and Knight [@ekert89] studied this problem in the two-mode squeezed state using the entanglement formula given in Eq.(\[gau03\]). Later in 1999, Han [*et al.*]{} studied this problem with two coupled oscillators where one oscillator is observed while the other is not and thus is in the rest of the universe as defined by Feynman [@hkn99ajp].
Somewhat earlier in 1990 [@kiwi90pl], Kim and Wigner observed that there is a time separation wherever there is a space separation in the Lorentz-covariant world. The Bohr radius is a space separation. If the system is Lorentz-boosted, the time-separation becomes entangled with the space separation. But, in the present form of quantum mechanics, this time-separation variable is not measured and not understood.
This variable was mentioned in the paper of Feynman [*et al.*]{} in 1971 [@fkr71], but the authors say they would drop this variable because they do not know what to do with it. While what Feynman [*et al.*]{} did was not quite respectable from the scientific point of view, they made a contribution by pointing out the existence of the problem. In 1990, Kim and Wigner [@kiwi90pl] noted that the time-separation variable belongs to Feynman’s rest of the universe and studied its consequences in the observable world.
In this section, we first reproduce the work of Kim and Wigner using the $x$ and $y$ variables and then study its consequences. Let us introduce the notation $\psi_{\eta}^{n}(x,y)$ for the squeezed oscillator wave function given in Eq.(\[seri55\]): $$\label{wf15}
\psi_{\eta}^{n}(x,y) = \chi_{n}(x') \chi_{0}(y'),$$ with no excitations along the $y$ direction. For $\eta = 0$, this expression becomes $\chi_{n}(x) \chi_{0}(y).$
From this wave function, we can construct the pure-state density matrix as $$\label{eq12}
\rho_\eta^{n}(x,y;r,s) = \psi_{\eta}^{n}(x,y) \psi_{\eta}^{n} (r,s) ,$$ which satisfies the condition $\rho^2 = \rho, $ which means $$\rho_\eta^{n}(x,y;r,s) = \int \rho_\eta^{n}(x,y;u,v)
\rho_\eta^{n}(u,v;r,s) du dv .$$
![Feynman’s rest of the universe. As the Gaussian function is squeezed, the $x$ and $y$ variables become entangled. If the $y$ variable is not measured, it affects the quantum mechanics of the $x$ variable.[]{data-label="restof66"}](restof66b.eps)
As illustrated in Fig. \[restof66\], it is not possible to make measurements on the variable $y$. We thus have to take the trace of this density matrix along the $y$ axis, resulting in $$\begin{aligned}
\label{eq14}
&{}& \rho_\eta^{n}(x,r) = \int \psi_{\eta}^{n}(x,y)
\psi_{\eta}^{n}(r,y) dy \nonumber \\[2ex]
&{}& \hspace{8mm} = \left(\frac{1}{\cosh\eta}\right)^{2(n + 1)}
\sum_{k} \frac{(n+k)!}{n!k!}
(\tanh\eta)^{2k}\chi_{n+k}(x)\chi_{k+n}(r) .\end{aligned}$$ The trace of this density matrix is one, but the trace of $\rho^2$ is $$\begin{aligned}
&{}& Tr\left(\rho^2\right) = \int \rho_\eta^{n}(x,r)
\rho_\eta^{n}(r,x) dr dx \nonumber \\[2ex]
&{}& \hspace{10mm} = \left(\frac{1}{\cosh\eta}\right)^{4(n + 1)}
\sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^2 (\tanh\eta)^{4k} ,\end{aligned}$$ which is less than one. This is due to the fact that we are not observing the $y$ variable. Our knowledge is less than complete.
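Both traces can be verified by truncating the sums. The sketch below is an illustration we add here (the cutoff `kmax` and the sample values of $n$ and $\eta$ are arbitrary); it confirms that the trace of the density matrix of Eq.(\[eq14\]) equals one while the trace of $\rho^2$ stays below one.

``` python
from math import comb, cosh, tanh

def trace_rho(n, eta, kmax=1000):
    # Tr(rho) = (1/cosh eta)^{2(n+1)} sum_k [(n+k)!/(n!k!)] (tanh eta)^{2k}
    pref = cosh(eta) ** (-2 * (n + 1))
    return pref * sum(comb(n + k, k) * tanh(eta) ** (2 * k) for k in range(kmax))

def trace_rho_sq(n, eta, kmax=1000):
    # Tr(rho^2) = (1/cosh eta)^{4(n+1)} sum_k [(n+k)!/(n!k!)]^2 (tanh eta)^{4k}
    pref = cosh(eta) ** (-4 * (n + 1))
    return pref * sum(comb(n + k, k) ** 2 * tanh(eta) ** (4 * k) for k in range(kmax))

for n in (0, 1, 2):
    for eta in (0.5, 1.0, 1.5):
        print(n, eta, round(trace_rho(n, eta), 6), round(trace_rho_sq(n, eta), 6))
```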
The standard way to measure this incompleteness is to calculate the entropy defined as [@neu32; @fano57; @wiya63] $$S = - Tr\left(\rho(x,r) \ln[\rho(x,r)]\right) ,$$ which leads to $$\begin{aligned}
&{}& S = 2(n + 1)[(\cosh\eta)^2 \ln(\cosh\eta) -
(\sinh\eta)^2 \ln(\sinh\eta)] \nonumber \\[1ex]
&{}& \hspace{10mm} - \left(\frac{1}{\cosh\eta}\right)^{2(n + 1)}
\sum_{k} \frac{(n+k)!}{n!k!}\ln\left[\frac{(n+k)!}{n!k!}\right]
(\tanh\eta)^{2k} .\end{aligned}$$
Let us go back to the wave function given in Eq.(\[wf15\]). As is illustrated in Fig. \[restof66\], its localization property is dictated by its Gaussian factor which corresponds to the ground-state wave function. For this reason, we expect that much of the behavior of the density matrix or the entropy for the $n^{th}$ excited state will be the same as that for the ground state with n = 0. For this state, the density matrix is $$\rho_{\eta}(x,r) = \left(\frac{1}{\pi \cosh(2\eta)}\right)^{1/2}
\exp{\left\{-\frac{1}{4}\left[\frac{(x + r)^2}{\cosh(2\eta)}
+ (x - r)^2\cosh(2\eta)\right]\right\}} ,$$ and the entropy is $$\label{entro11}
S_{\eta} = 2\left[ (\cosh\eta)^2 \ln(\cosh\eta) - (\sinh\eta)^2 \ln(\sinh\eta)\right] .$$ The density distribution $\rho_{\eta}(x,x)$ becomes $$\label{eq21}
\rho_{\eta}(x,x) = \left(\frac{1}{\pi \cosh(2\eta)}\right)^{1/2}
\exp{\left(\frac{-x^2}{\cosh(2\eta)}\right) }.$$ The width of the distribution becomes $\sqrt{\cosh(2\eta)}$, and the distribution becomes wide-spread as $\eta$ becomes larger. Likewise, the momentum distribution becomes wide-spread as can be seen in Fig. \[ctran33\]. This simultaneous increase in the momentum and position distribution widths is due to our inability to measure the $y$ variable hidden in Feynman’s rest of the universe [@fey72].
In their paper of 1990 [@kiwi90pl], Kim and Wigner used the $x$ and $y$ variables as the longitudinal and time-like variables respectively in the Lorentz-covariant world. In the quantum world, it is a widely accepted view that there are no time-like excitations. Thus, it is fully justified to restrict the $y$ component to its ground state as we did in Sec. \[excited\].
Space-time Entanglement {#spt}
=======================
The series given in Eq.(\[seri00\]) plays the central role in the concept of the Gaussian or continuous-variable entanglement, where the measurement on one variable affects the quantum mechanics of the other variable. If one of the variables is not observed, it belongs to Feynman’s rest of the universe.
The series of the form of Eq.(\[seri00\]) was developed earlier for studying harmonic oscillators in moving frames [@kn73; @knp86; @kno79jmp; @kno79ajp; @kiwi90pl; @kn11symm]. Here $z$ and $t$ are the space-like and time-like separations between the two constituent particles bound together by a harmonic oscillator potential. There are excitations along the longitudinal direction. However, no excitations are allowed along the time-like direction. Dirac described this as the “c-number” time-energy uncertainty relation [@dir27]. In 1927, however, Dirac was talking about the system without special relativity. In 1945 [@dir45], Dirac attempted to construct space-time wave functions using harmonic oscillators. In 1949 [@dir49], Dirac introduced his light-cone coordinate system for Lorentz boosts, showing that the boost is a squeeze transformation. It is now possible to combine Dirac’s three observations to construct the Lorentz-covariant picture of quantum bound states, as illustrated in Fig. \[diracqm\].
If the system is at rest, we use the wave function $$\label{wf81}
\psi_{0}^{n}(z,t) = \chi_{n}(z) \chi_{0}(t),$$ which allows excitations along the $z$ axis, but no excitations along the $t$ axis, according to Dirac’s c-number time-energy uncertainty relation.
If the system is boosted, the $z $ and $t$ variables are replaced by $z'$ and $t'$ where $$z' = (\cosh\eta) z - (\sinh\eta) t , \quad\mbox{and}\quad
t' = -(\sinh\eta) z + (\cosh\eta) t .$$ This is a squeeze transformation as in the case of Eq.(\[trans22\]). In terms of these space-time variables, the wave function of Eq.(\[wf15\]) can be written as $$\label{wf82}
\psi_{\eta}^{n}(z,t) = \chi_{n}(z') \chi_{0}(t'),$$ and the series of Eq.(\[seri55\]) then becomes $$\label{seri66}
\psi_{\eta}^n(z,t) = \left(\frac{1}{\cosh\eta}\right)^{(n+1)}
\sum_{k} \left[\frac{(n+k)!}{n!k!}\right]^{1/2}
(\tanh\eta)^{k} \chi_{k+n}(z)\chi_{k}(t) .$$
![Dirac’s form of Lorentz-covariant quantum mechanics. In addition to Heisenberg’s uncertainty relation which allows excitations along the spatial direction, there is the “c-number” time-energy uncertainty without excitations. This form of quantum mechanics can be combined with Dirac’s light-cone picture of Lorentz boost, resulting in the Lorentz-covariant picture of quantum mechanics. The elliptic squeeze shown in this figure can be called the space-time entanglement.[]{data-label="diracqm"}](diracqm11b.eps)
Since the Lorentz-covariant oscillator formalism shares the same set of formulas with the Gaussian entangled states, it is possible to explain some aspects of space-time physics using the concepts and terminologies developed in quantum optics as illustrated in Fig. \[resonance\].
The time-separation variable is a case in point. The Bohr radius is a well-defined spatial separation between the proton and electron in the hydrogen atom. However, if the atom is boosted, this radius picks up its time-like separation. This time-separation variable does not exist in the Schrödinger picture of quantum mechanics. However, this variable plays the pivotal role in the covariant harmonic oscillator formalism. It is gratifying to note that this “hidden or forgotten” variable plays its role in the real world while being entangled with the observable longitudinal variable. With this point in mind, let us study some of the consequences of this space-time entanglement.
First of all, does the wave function of Eq.(\[wf82\]) carry a probability interpretation in the Lorentz-covariant world? Since $dz dt = dz' dt'$, the normalization $$\label{norm11}
\int |\psi_{\eta}^{n} (z,t)|^2 dt dz = 1 .$$ This is a Lorentz-invariant normalization. If the system is at rest, the $z$ and $t$ variables are completely dis-entangled, and the spatial component of the wave function satisfies the Schrödinger equation without the time-separation variable.
However, in the Lorentz-covariant world, we have to consider the inner product $$\label{norm22}
\left(\psi_{\eta}^{n} (z,t), \psi_{\eta'}^{m}(z,t)\right)
= \int \left[\psi_{\eta}^{n} (z,t)\right]^{*} \psi_{\eta'}^{m}(z,t) dz dt .$$ The evaluation of this integral was carried out by Michael Ruiz in 1974 [@ruiz74], and the result was $$\left(\frac{1}{|\cosh(\eta - \eta')|}\right)^{n + 1} \delta_{nm} .$$
In order to see the physical implications of this result, let us assume that one of the oscillators is at rest with $\eta'= 0$ and the other is moving with the velocity $\beta = \tanh(\eta)$. Then the result is $$\left(\psi_{\eta}^n (z,t), \psi_{0}^m(z,t)\right) =
\left(\sqrt{1 - \beta^2}\right)^{n + 1} \delta_{nm} .$$ Indeed, the wave functions are orthonormal if they are in the same Lorentz frame. If one of them is boosted, the inner product shows the effect of Lorentz contraction. We are familiar with the contraction $\sqrt{1 - \beta^2}$ for the rigid rod. The ground state of the oscillator wave function is contracted like a rigid rod.
The probability density $|\psi_{\eta}^{0}(z)|^2$ is for the oscillator in the ground state, and it has one hump. For the $n^{th}$ excited state, there are $(n + 1)$ humps. If each hump is contracted like $\sqrt{1 - \beta^2}$, the net contraction factor is $\left( \sqrt{1 - \beta^2} \right)^{n + 1}$ for the $n^{th}$ excited state. This result is illustrated in Fig. \[ortho\].
![Orthogonality relations for two covariant oscillator wave functions. The orthogonality relation is preserved in different frames. However, the inner products show the Lorentz contraction effect when the two frames are different.[]{data-label="ortho"}](ortho66b.eps)
With this understanding, let us go back to the entanglement problem. The ground state wave function takes the Gaussian form given in Eq.(\[gau00\]) $$\label{gau30}
\psi_{0}(z,t) = \frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{2}\left(z^2 + t^2\right)\right\}} ,$$ where the $x$ and $y$ variables are replaced by $z$ and $t$ respectively. If Lorentz-boosted, this Gaussian function becomes squeezed to [@kn73; @knp86; @kno79jmp] $$\label{gau35}
\psi_{\eta}^{0}(z,t) = \frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{4}\left[e^{-2\eta}(z + t)^2 +
e^{2\eta}(z - t)^2\right]\right\}},$$ leading to the series $$\label{seri30}
\frac{1}{\cosh\eta}\sum_{k} (\tanh{\eta})^k \chi_{k}(z) \chi_{k}(t) .$$ According to this formula, the $z$ and $t$ variables are entangled in the same way as the $x$ and $y$ variables are entangled.
Here the $z$ and $t$ variables are space and time separations between two particles bound together by the oscillator force. The concept of the space separation is well defined, as in the case of the Bohr radius. On the other hand, the time separation is still hidden or forgotten in the present form of quantum mechanics. In the Lorentz-covariant world, this variable affects what we observe in the real world by entangling itself with the longitudinal spatial separation.
In Chapter 16 of their book [@walls08], Walls and Milburn wrote down the series of Eq.(\[seri00\]) and discussed what would happen when the $\eta$ parameter becomes infinitely large. We note that the series given in Eq.(\[seri30\]) shares the same expression as the form given by Walls and Milburn, as well as other papers dealing with the Gaussian entanglement. As in the case of Walls and Milburn, we are interested in what happens when $\eta$ becomes very large.
As we emphasized throughout the present paper, it is possible to study the entanglement series using the squeezed Gaussian function given in Eq.(\[gau35\]). It is then possible to study this problem using the ellipse. Indeed, we can carry out the mathematics of entanglement using the ellipse shown in Fig. \[restof77\]. This figure is the same as that of Fig. \[restof66\], but it tells about the entanglement of the space and time separations, instead of the $x$ and $y$ variables. If the particle is at rest with $\eta = 0$, the Gaussian form corresponds to the circle in Fig. \[restof77\]. When the particle gains speed, this Gaussian function becomes squeezed into an ellipse. This ellipse becomes concentrated along the light cone with $t = z$, as $\eta$ becomes very large.
![Feynman’s rest of the universe. This figure is the same as that of Fig. \[restof66\]. Here, the space variable $z$ and the time variable $t$ are entangled.[]{data-label="restof77"}](restof77b.eps)
The point is that we are able to observe this effect in the real world. These days, the velocity of protons from high-energy accelerators is very close to that of light. According to Gell-Mann [@gell64], the proton is a bound state of three quarks. Since quarks are confined in the proton, they have never been observed, and the binding force must be like that of the harmonic oscillator. Furthermore, the observed mass spectra of the hadrons exhibit the degeneracy of the three-dimensional harmonic oscillator [@fkr71]. We use the word “hadron” for the bound state of the quarks. The simplest hadron is thus the bound state of two quarks.
In 1969 [@fey69], Feynman observed that the same proton, when moving with its velocity close to that of light, can be regarded as a collection of partons, with the following peculiar properties.
- The parton picture is valid only for protons moving with velocity close to that of light.
- The interaction time between the quarks becomes dilated, and partons are like free particles.
- The momentum distribution becomes wide-spread as the proton moves faster. Its width is proportional to the proton momentum.
- The number of partons is not conserved, while the proton starts with a finite number of quarks.
![The transition from the quark to the parton model through space-time entanglement. When $\eta = 0$, the system is called the quark model where the space separation and the time separation are dis-entangled. Their entanglement becomes maximum when $\eta = \infty.$ The quark model is transformed continuously to the parton model as the $\eta$ parameter increases from $0$ to $\infty$. The mathematics of this transformation is given in terms of circles and ellipses.[]{data-label="quapar"}](quapar66b.eps)
![Entropy and temperature as functions of $[\tanh(\eta)]^2 = \beta^2$. They are both zero when the hadron is at rest, but they become infinitely large when the hadronic speed becomes close to that of light. The curvature for the temperature plot changes suddenly around $[\tanh(\eta)]^2 = 0.8$ indicating a phase transition.[]{data-label="entemp"}](entemp11b.eps)
Indeed, Fig. \[quapar\] tells why the quark and parton models are two limiting cases of one Lorentz-covariant entity. In the oscillator regime, the three-particle system can be reduced to two independent two-particle systems [@fkr71]. Also in the oscillator regime, the momentum-energy wave function takes the same form as the space-time wave function, thus with the same squeeze or entanglement property as illustrated in this figure. This leads to the wide-spread momentum distribution [@knp86; @kn77; @kim89].
Also in Fig. \[quapar\], the time-separation between the quarks becomes large as $\eta$ becomes large, leading to a weaker spring constant. This is why the partons behave like free particles [@knp86; @kn77; @kim89].
As $\eta$ becomes very large, all the particles are confined into a narrow strip around the light cone. The number of particles is not constant for massless particles as in the case of black-body radiation [@knp86; @kn77; @kim89].
Indeed, the oscillator model explains the basic features of the hadronic spectra [@fkr71]. Does the oscillator model also tell us the basic features of the parton distribution observed in high-energy laboratories? The answer is YES. In his 1981 paper [@hussar81], Paul Hussar compared the parton distribution observed in high-energy laboratories with the Lorentz-boosted Gaussian distribution. They are close enough to justify the statement that the quark and parton models are two limiting cases of one Lorentz-covariant entity.
To summarize, the proton makes a phase transition from the bound state into a plasma state as it moves faster, as illustrated in Fig. \[quapar\]. The un-observed time-separation variable becomes more prominent as $\eta$ becomes larger. We can now go back to the form of this entropy given in Eq.(\[entro11\]) and calculate it numerically. It is plotted against $(\tanh\eta)^2 = \beta^2$ in Fig. \[entemp\]. The entropy is zero when the hadron is at rest, and it becomes infinite as the hadronic speed reaches the speed of light.
Let us go back to the expression given in Eq.(\[eq14\]). For the ground state, with $n=0$, the density matrix becomes $$\rho_{\eta} (z,z') =
\left( \frac{1}{\cosh\eta}\right)^2\sum_{k}
(\tanh\eta)^{2k} \chi_{k}(z)\chi_{k}(z') .$$ We can now compare this expression with the density matrix for a thermally excited oscillator state [@fey72]: $$\rho_{\eta}(z,z') =
\left(1 - e^{-1/T}\right)\sum_{k}\left[e^{-1/T}\right]^{k}
\chi_{k}(z)\chi_{k}(z') .$$ By comparing these two expressions, we arrive at $$[\tanh(\eta)]^2 = e^{-1/T} ,$$ and thus $$T = \frac{-1}{\ln\left[(\tanh\eta)^2\right]} .$$ This temperature is also plotted against $(\tanh\eta)^2$ in Fig. \[entemp\]. The temperature is zero if the hadron is at rest, but it becomes infinite when the hadronic speed becomes close to that of light. The slope of the curve changes suddenly around $(\tanh\eta)^2 = 0.8$, indicating a phase transition from the bound state to the plasma state.
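The behaviour plotted in Fig. \[entemp\] can be reproduced with a few lines of code. The following sketch is our own illustration (the sampled values of $\beta^2$ are arbitrary); it evaluates the entropy of Eq.(\[entro11\]) and the temperature $T=-1/\ln[(\tanh\eta)^2]$ as functions of $\beta^2=(\tanh\eta)^2$.

``` python
import math

def entropy(eta):
    # Eq.(entro11): S = 2[cosh^2(eta) ln cosh(eta) - sinh^2(eta) ln sinh(eta)]
    c, s = math.cosh(eta), math.sinh(eta)
    return 2.0 * (c * c * math.log(c) - s * s * math.log(s))

def temperature(eta):
    # T = -1 / ln[(tanh eta)^2]
    return -1.0 / math.log(math.tanh(eta) ** 2)

for beta2 in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    eta = math.atanh(math.sqrt(beta2))
    print(f"beta^2 = {beta2:4.2f}   S = {entropy(eta):7.4f}   T = {temperature(eta):8.4f}")
```

Both quantities vanish as $\beta^2\to 0$ and grow without bound as $\beta^2\to 1$, in agreement with the figure.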
In this section, we have shown how useful the concept of entanglement is in understanding the role of the time-separation variable in high-energy hadronic physics, including Gell-Mann’s quark model and Feynman’s parton model as two limiting cases of one Lorentz-covariant entity.
Concluding Remarks {#concluding-remarks .unnumbered}
==================
The main point of this paper is the mathematical identity $$\label{conc11}
\frac{1}{\sqrt{\pi}}
\exp{\left\{-\frac{1}{4}\left[e^{-2\eta}(x + y)^2 +
e^{2\eta}(x - y)^{2}\right]\right\}} =
\frac{1}{\cosh\eta}\sum_{k} (\tanh{\eta})^k \chi_{k}(x) \chi_{k}(y) ,$$ which says that the series of Eq.(\[seri00\]) is an expansion of the Gaussian form given in Eq.(\[i01\]).
The first derivation of this series was published in 1979 [@kno79ajp] as a formula from the Lorentz group. Since this identity is not well known, we explained in Sec. \[excited\] how this formula can be derived from the generating function of the Hermite polynomials.
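A quick numerical check of Eq.(\[conc11\]) is also instructive. The sketch below is our own illustration (the recursion used for the oscillator eigenfunctions, the cutoff `kmax` and the sample points are choices made here, not part of the original derivation); it evaluates both sides of the identity at a few arbitrary points $(x,y,\eta)$, and the two sides agree closely.

``` python
import math

def chi_values(kmax, x):
    # chi_0 ... chi_{kmax-1} at the point x, from the stable recursion
    # chi_{k+1} = sqrt(2/(k+1)) x chi_k - sqrt(k/(k+1)) chi_{k-1}
    vals = [math.pi ** -0.25 * math.exp(-x * x / 2.0),
            2.0 ** 0.5 * x * math.pi ** -0.25 * math.exp(-x * x / 2.0)]
    for k in range(1, kmax - 1):
        vals.append(math.sqrt(2.0 / (k + 1)) * x * vals[k]
                    - math.sqrt(k / (k + 1)) * vals[k - 1])
    return vals[:kmax]

def lhs(x, y, eta):
    # squeezed Gaussian, left-hand side of Eq.(conc11)
    return (1.0 / math.sqrt(math.pi)) * math.exp(
        -0.25 * (math.exp(-2 * eta) * (x + y) ** 2
                 + math.exp(2 * eta) * (x - y) ** 2))

def rhs(x, y, eta, kmax=300):
    # truncated series, right-hand side of Eq.(conc11)
    cx, cy = chi_values(kmax, x), chi_values(kmax, y)
    t = math.tanh(eta)
    return sum(t ** k * cx[k] * cy[k] for k in range(kmax)) / math.cosh(eta)

for x, y, eta in [(0.3, -0.7, 0.5), (1.2, 0.4, 1.0), (0.0, 1.5, 1.5)]:
    print(x, y, eta, round(lhs(x, y, eta), 8), round(rhs(x, y, eta), 8))
```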
While the series serves useful purposes in understanding the physics of entanglement, the Gaussian form can be used to transfer this idea to high-energy hadronic physics. A hadron, such as the proton, is a quantum bound state. As was pointed out in Sec. \[spt\], the squeezed Gaussian function of Eq.(\[conc11\]) plays the pivotal role for hadrons moving with relativistic speeds.
The Bohr radius is a very important quantity in physics. It is a spatial separation between the proton and electron in the hydrogen atom. Likewise, there is a space-like separation between constituent particles in a bound state at rest. When the bound state moves, it picks up a time-like component. However, in the present form of quantum mechanics, this time-like separation is not recognized. Indeed, this variable is hidden in Feynman’s rest of the universe. When the system is Lorentz-boosted, this variable entangles itself with the measurable longitudinal variable. Our failure to measure this entangled variable appears in the form of entropy and temperature in the real world.
While harmonic oscillators are applicable to many aspects of quantum mechanics, Paul A. M. Dirac observed in 1963 [@dir63] that the system of two oscillators also contains the symmetries of the Lorentz group. We discussed in this paper one concrete case of Dirac’s symmetry. There are different languages for harmonic oscillators, such as the Schrödinger wave function, step-up and step-down operators, and the Wigner phase-space distribution function. In this paper, we used extensively a pictorial language with circles and ellipses.
Let us go back to Eq.(\[conc11\]). This mathematical identity was published in 1979 as textbook material in the American Journal of Physics [@kno79ajp], and the same formula was later included in a textbook on the Lorentz group [@knp86]. It is gratifying to note that the same formula serves as a useful tool in the current literature on quantum information theory [@leonh10; @furu11].
[99]{}
Giedke, G.; Wolf, M. M.; Krueger, O.; Werner, R. F.; Cirac, J. J. Entanglement of formation for symmetric Gaussian states [*Phys. Rev. Lett.*]{} [**2003**]{} [*91*]{}, 107901: 1 - 4.
Braunstein, S. L.; van Loock, P. Quantum information with continuous variables [*Rev. Mod. Phys.*]{} [**2005**]{} [*77*]{} 513 - 676.
Kim, Y. S.; Noz, M. E. Coupled oscillators, entangled oscillators, and Lorentz-covariant Oscillators [*J. of Optics B: Quantum and Semiclassical*]{} [**2003**]{} [*7*]{}, s459 - s467.
Ge, W; Tasgin, M. E.; Suhail Zubairy, S. Conservation relation of nonclassicality and entanglement for Gaussian states in a beam splitter [*Phys. Rev. A*]{} [**2015**]{} [*92*]{} 052328:1 - 9.
Gingrich, R. M.; Adami, C. Quantum Entanglement of Moving Bodies [*Phys. Rev. Lett.*]{} [**2002**]{} [*89*]{}, 270402.1 - 4.
Dodd, P. J.; Halliwell, J. J. Disentanglement and decoherence by open system dynamics [*Phys. Rev. A*]{} [**2004**]{} [*69* ]{} 052105.1 - 6.
Ferraro, A; Olivares, S; Paris, M. G. A. Gaussian states in continuous variable quantum information [*EDIZIONI DI FILOSOFIA E SCIENZE (2005)*]{} http://arxiv.org/abs/quant-ph/0503237. 1 - 100.
Adesso, G.; Illuminati, F. Entanglement in continuous-variable systems: recent advances and current perspectives [*J. Phys. A* ]{} [**2007**]{} [*40*]{} 7821 - 7880
Walls, D. F.; Milburn, G. J. [*Quantum Optics.*]{} 2nd Ed.; Springer, Berlin, Germany, 2008.
Yuen, H. P. Two-photon coherent states of the radiation field [*Phys. Rev. A*]{} [**1976**]{} [*13*]{} 2226 - 2243.
Yurke, B; Potasek, M. Obtainment of Thermal Noise from a Pure State [*Phys. Rev. A*]{} [**1987**]{} [*36*]{} 3464 - 3466.
Ekert, A. K.; Knight, P. L. Correlations and squeezing of two-mode oscillations [*Am. J. Phys.*]{} [**1989**]{} [*57*]{} 692 - 697.
Paris, M. G. A. Entanglement and visibility at the output of a Mach-Zehnder interferometer [*Phys. Rev. A* ]{} [**1999**]{} [*59*]{} 1615: 1 -7.
Kim, M. S.; Son, W; Buzek, V; Knight, P. L. Entanglement by a beam splitter: Nonclassicality as a prerequisite for entanglement [*Phys. Rev. A*]{} [**2002**]{} [*65*]{} 032323: 1 - 7.
Han,D.; Kim, Y. S.; Noz, M. E. Linear Canonical Transformations of Coherent and Squeezed States in the Wigner phase Space III. Two-mode States [*Phys. Rev. A*]{} [**1990**]{} [*41*]{} 6233 - 6244.
Dirac, P. A. M. The Quantum Theory of the Emission and Absorption of Radiation [*Proc. Roy. Soc. (London)*]{} [**1927**]{} [*A114*]{} 243 - 265.
Dirac, P. A. M. Unitary Representations of the Lorentz Group. [*Proc. Roy. Soc. (London)*]{} [**1945**]{} [*A183*]{} 284 - 295.
Dirac, P. A. M. Forms of relativistic dynamics [*Rev. Mod. Phys.*]{} [**1949**]{} [*21*]{} 392 - 399.
Yukawa, H. Structure and Mass Spectrum of Elementary Particles. I. General Considerations [*Phys. Rev.*]{} [**1953**]{} [*91*]{} 415 - 416.
Kim, Y. S.; Noz, M. E. [*Theory and Applications of the Poincaré Group*]{}; Reidel: Dordrecht, Holland, 1986.
Dirac, P. A. M. A Remarkable Representation of the 3 + 2 de Sitter Group [*J. Math. Phys.*]{} [**1963**]{} [*4*]{} 901-909.
Feynman, R. P. [*Statistical Mechanics*]{}; Benjamin Cummings: Reading, Massachusetts, U.S.A., 1972.
Han, D.; Kim, Y. S.; Noz, M. E. Illustrative Example of Feynman’s Rest of the Universe [*Am. J. Phys.*]{} [**1999**]{} [*67*]{}, 61 - 66.
Kim, Y. S.; Noz, M. E. Covariant harmonic oscillators and the quark model [*Phys. Rev. D* ]{}[**1973**]{} [*8*]{}, 3521 - 3627.
Kim, Y. S.; Noz, M. E.; Oh, S. H. Representations of the Poincaré group for relativistic extended hadrons [*J. Math. Phys.*]{} [**1979**]{} [*20*]{} 1341 - 1344.
Kim, Y. S.; Noz, M. E.; Oh, S. H. A simple method for illustrating the difference between the homogeneous and inhomogeneous Lorentz groups [*Am. J. Phys.*]{} [**1979**]{} [*47*]{} 892 - 897.
Kim, Y. S.; Wigner, E. P. [*Entropy and Lorentz Transformations.*]{} Phys. Lett. A [**1990**]{} [*147*]{} 343 - 347.
Kim, Y. S.; Noz, M. E. Lorentz Harmonics, Squeeze Harmonics and Their Physical Applications [*Symmetry*]{} [**2011**]{} [*3*]{} 16 - 36.
Klauder, J. R.; Sudarshan, E. C. G. [*Fundamentals of Quantum Optics*]{}; Benjamin: New York, U.S.A., 1968.
Saleh, B. E. A.; Teich, M. C. [*Fundamentals of Photonics.*]{} 2nd Ed.; John Wiley and Sons: Hoboken, New Jersey, U.S.A, 2007.
Miller, W. [*Symmetry Groups and Their Applications*]{}; Academic Press: New York, New York, U.S.A., 1972.
Hall, B. C. [*Lie Groups, Lie Algebras, and Representations. An Elementary Introduction*]{} Second Edition, Springer International, Cham, Switzerland, 2015
Wigner, E. On Unitary Representations of the Inhomogeneous Lorentz Group [*Ann. Math.*]{} [**1939**]{} [*40*]{} 149-204.
Weinberg, S. Photons and gravitons in S-Matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass [*Phys. Rev.*]{} [**1964**]{} [*135*]{} B1049 - B1056.
Kim, Y. S.; Wigner, E. P. Space-time geometry of relativistic-particles [*J. Math. Phys.*]{} [**1990**]{} [*31*]{} 55 - 60.
Ba[ş]{}kal, S; Kim, Y. S.; Noz, M. E. Wigner’s Space-Time Symmetries based on the Two-by-Two Matrices of the Damped Harmonic Oscillators and the Poincaré Sphere [*Symmetry*]{} [**2014**]{} [*6*]{} 473 - 515.
Ba[ş]{}kal, S; Kim, Y. S.; Noz, M. E. [*Physics of the Lorentz Group.*]{} IOP Science ; Morgan & Claypool Publishers: San Rafael, California, U.S.A., 2015.
Kim, Y. S.; Yeh, Y. E(2)-symmetric two-mode sheared states [*J. Math. Phys.*]{} [**1992**]{} [*33*]{}, 1237 - 1246
Han, D.; Kim, Y. S.; Noz, M. E. $O(3,3)$-like Symmetries of Coupled Harmonic Oscillators [*J. Math. Phys.*]{} [**1995**]{} [*36*]{} 3940 - 3954.
Kim, Y. S.; Noz, M. E. [*Phase Space Picture of Quantum Mechanics*]{}; World Scientific Publishing Company: Singapore, 1991.
Kim, Y. S.; Noz, M. E. [*Dirac Matrices and Feynman’s Rest of the Universe*]{} Symmetry [**2012**]{} [*4*]{}, 626 - 643.
Wigner, E. On the Quantum Corrections for Thermodynamic Equilibrium [*Phys. Rev.*]{} [**1932**]{} [*40*]{} 749 - 759.
Feynman, R. P.; Kislinger, M; Ravndal, F. Current Matrix Elements from a Relativistic Quark Model [*Phys. Rev. D*]{} [**1971**]{} [*3*]{} 2706 - 2732.
Ruiz, M. J. Orthogonality relations for covariant harmonic oscillator wave functions [*Phys. Rev. D*]{} [**1974**]{} [*10*]{} 4306 - 4307.
Rotbart, F. C. Complete orthogonality relations for the covariant harmonic oscillator [*Phys. Rev. D*]{} [**1981**]{} [*12*]{} 3078 - 3090.
Magnus, W.; Oberhettinger, F.; Soni, R. P. [*Formulas and theorems for the special functions of mathematical physics*]{}; Springer-Verlag: Heidelberg, Germany 1966.
Doman, B. G. S. [*The Classical Orthogonal Polynomials*]{} World Scientific: Singapore 2016
Bargmann, V. Irreducible unitary representations of the Lorentz group [*Ann. Math.*]{} [**1947**]{} [*48*]{} 568 - 640.
Han, D.; Kim, Y. S. Special relativity and interferometers [*Phys. Rev. A.*]{} [**1988**]{} [*37*]{} 4494 - 4496.
Han, D.; Kim, Y. S.; Noz, M. E. Wigner rotations and Iwasawa decompositions in polarization optics [*Phys. Rev. E*]{} [**1999**]{} [*60*]{} 1036 - 1041.
von Neumann, J. [*Mathematische Grundlagen der Quantenmechanik*]{}; Springer: Berlin, Germany, 1932. See also von Neumann, J. [*Mathematical Foundations of Quantum Mechanics*]{}; Princeton University Press: Princeton, New Jersey, U.S.A., 1955.
Fano, U. Description of States in Quantum Mechanics by Density Matrix and Operator Techniques [*Rev. Mod. Phys.*]{} [**1957**]{} [*29*]{} 74 - 93.
Wigner E. P.; Yanase, M. M. Information Contents of Distributions [*Proc. National Academy of Sciences (U.S.A.)*]{} [**1963**]{} [*49*]{} 910 - 918.
Gell-Mann, M. A Schematic Model of Baryons and Mesons [*Phys. Lett.*]{} [**1964**]{} [*8*]{}, 214 - 215.
Feynman, R. P. Very High-Energy Collisions of Hadrons [*Phys. Rev. Lett.*]{} [**1969**]{} [*23*]{} 1415 - 1417.
Kim, Y. S.; Noz, M. E. Covariant harmonic oscillators and the parton picture [*Phys. Rev. D*]{} [**1977**]{} [*15*]{} 335 - 338.
Kim, Y. S. Observable gauge transformations in the parton picture [*Phys. Rev. Lett.*]{} [**1989**]{} [*63*]{} 348 - 351.
Hussar, P. E. Valons and harmonic oscillators [*Phys. Rev. D*]{} [**1981**]{} [*23*]{} 2781 - 2783.
Leonhardt, U. [*Essential Quantum Optics*]{} Cambridge University Press, London, 2010.
Furusawa, A; Loock, P. V. [*Quantum Teleportation and Entanglement: A Hybrid Approach to Optical Quantum Information Processing*]{} Wiley-VCH, Weinheim, 2010.
[^1]: electronic address: baskal@newton.physics.metu.edu.tr
[^2]: electronic address: yskim@physics.umd.edu
[^3]: electronic address: mairlyne.noz@gmail.com
---
abstract: 'The use of parity-check gates in information theory has proved to be very efficient. In particular, error correcting codes based on parity checks over low-density graphs show excellent performances. Another basic issue of information theory, namely data compression, can be addressed in a similar way by a kind of dual approach. The theoretical performance of such a Parity Source Coder can attain the optimal limit predicted by the general rate-distortion theory. However, in order to turn this approach into an efficient compression code (with fast encoding/decoding algorithms) one must depart from parity checks and use some general random gates. By taking advantage of analytical approaches from the statistical physics of disordered systems and SP-like message passing algorithms, we construct a compressor based on low-density non-linear gates with a very good theoretical [*and*]{} practical performance.'
author:
- Stefano Ciliberti
- Marc Mézard
- Riccardo Zecchina
title: 'Message passing algorithms for non-linear nodes and data compression'
---
Introduction
============
Information theory and statistical physics have common roots. In recent years, this interconnection has found further confirmation in the analysis of modern error correcting codes, known as Low Density Parity Check (LDPC) codes [@Gallager; @Spielman; @mackay], by statistical physics methods [@Sourlas; @montanari; @nishimori]. In such codes, the choice of the encoding scheme maps into a graphical model: the way LDPC codes exploit redundancy is by adding bits which are bound to satisfy a random sparse linear set of equations – the so-called parity checks. Such equations are used in the decoding phase to reconstruct the original codeword (and hence the message) from the corrupted signal. The parity checks can be represented by a graph indicating which variables participate in each check. The randomness and the sparsity of the parity checks are reflected in the characteristics of the associated graphs, a fact that makes mean-field type statistical physics methods directly applicable. Thanks to the duality between channel coding and source coding [@mackay], similar constructions can be used to perform both lossless and lossy data compression [@jelinek; @yedidia]. It must be mentioned that while there exist very efficient practical algorithms for lossless source coding [@lossless], much less work has been done so far for the lossy case [@bergergibson]. In this work we present a new coding technique which consists of a generalization of parity-check codes for lossy data compression to non-linear codes [@cimeze]. This issue has been addressed recently using methods from the statistical physics of disordered systems: a non-linear perceptron [@perceptron] or a satisfiability problem [@battaglia] has been used to develop practical source coding schemes. Other recent advances in the field can be found in [@garciazhao; @caire; @yamamoto].
We consider the following problem. We have a random string of uncorrelated bits $x_1,\ldots x_M$ (prob$(x_a=0)=$ prob$(x_a=1)=1/2$) and we want to compress it to the shorter coded sequence ${\sigma}_1,\ldots{\sigma}_N$ (encoding). The rate $R$ of the process is $R=N/M$. Once we recover the message (decoding), we may have made some errors and thus we are left in principle with a different sequence $x_1^*,\ldots x_M^*$. The number of different bits normalized by the length of the string is the distortion, $$D=\frac 1M \sum_{a=1}^M \left( 1-\delta(x_a,x^*_a)\right) \ .$$ The Shannon theorem states that the minimum rate at which we can achieve a given distortion $D$ is $R_{min}=1-H_2(D)$, where $H_2(D)=-D\log_2 D -(1-D)\log_2(1-D)$ is the binary entropy function. This $R_{min}$ is the value one has to compare to in order to measure the performance of a compression algorithm. This problem is a simple version of the lossy data compression problem. In real applications one is often more interested in compression through quantization of a signal with letters in a larger alphabet (or continuous signal [@quantization]). Here we shall keep to the memoryless case of uncorrelated unbiased binary sources, and we hope to be able to generalize the present approach to more realistic cases in the near future.
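For later comparison it is convenient to tabulate this bound. The sketch below is an illustration of ours (the chosen distortion values are arbitrary); it evaluates $R_{min}=1-H_2(D)$ for the unbiased binary source considered here.

``` python
import math

def H2(p):
    # binary entropy function, in bits
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def shannon_rate(D):
    # minimum rate achieving average distortion D for an unbiased binary source
    return 1.0 - H2(D)

for D in (0.01, 0.05, 0.11, 0.2, 0.3):
    print(f"D = {D:4.2f}   R_min = {shannon_rate(D):.4f}")
```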
Constraint Satisfaction Problems
--------------------------------
The underlying structure of the protocol that we are going to introduce is provided by a constraint satisfaction problem (CSP). A CSP with $N$ variables is defined in general by a cost function $$E[{\sigma}] = \sum_{a=1}^M {\varepsilon}_a\left(\{{\sigma}_i\in a\}\right) \ ,$$ where the [*function node*]{} ${\varepsilon}_a({\sigma}_1,\ldots {\sigma}_{K_a})$ involves a subset of $K_a$ variables randomly chosen among the whole set. The problem is fully defined by choosing some ${\varepsilon}_a$ and an instance is specified by the actual variables involved in each constraint. A useful representation for a CSP is the factor graph (Fig. \[fig:factor\], left). The two kinds of nodes in this graph are the variables and the constraints. For the sake of simplicity we shall take the functions ${\varepsilon}_a$ to have values in $\{0,2\}$ (the clause is resp. SAT or UNSAT) and the variables ${\sigma}_i$ will be Ising spins (${\sigma}_i=\pm 1$). A somewhat special case of CSP which has been studied extensively occurs when $K_a=K$ for any $a$, and we shall restrict ourselves to this case in what follows. One can then show that the degree of a given variable is Poisson distributed with mean value $KM/N$ and that the typical length of a loop is $\mathcal{O}(\log N
/\log(KM/N))$. We are interested in the limit $M,N \to \infty$, where $\alpha\equiv M/N$ is fixed and plays the role of a control parameter. Once a problem is given in terms of its graph representation, an instance corresponds to a given graph chosen among all the legal ones with uniform probability. We will go back to this problem in section \[sec:formalism\]. We first show how a CSP can be used as a tool for compressing data.
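Such random instances are straightforward to generate, and the stated degree statistics are easy to check empirically. The following sketch is our own illustration (the sizes $N$, $K$ and $\alpha$, and the seed, are arbitrary); it draws $M=\alpha N$ function nodes of fixed degree $K$ and compares the measured degree distribution of the variables with a Poisson law of mean $KM/N$.

``` python
import math, random
from collections import Counter

random.seed(0)
N, K, alpha = 3000, 4, 1.5
M = int(alpha * N)

# each function node is attached to K distinct variables chosen at random
degree = Counter()
for _ in range(M):
    for i in random.sample(range(N), K):
        degree[i] += 1

lam = K * M / N
print("mean degree:", sum(degree.values()) / N, "  expected K*M/N =", lam)
for d in range(6):
    empirical = sum(1 for i in range(N) if degree[i] == d) / N
    poisson = math.exp(-lam) * lam ** d / math.factorial(d)
    print(f"degree {d}:  measured {empirical:.3f}   Poisson {poisson:.3f}")
```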
Encoding
--------
We use the initial word $x_1,x_2,\ldots x_M$ to construct a set of $M$ constraints between $N$ boolean variables ${\sigma}_1,{\sigma}_2,\ldots {\sigma}_N$. The factor graph is supposed to be given. Each constraint $a$ involves $K_a$ variables and is defined by two complementary subsets of configurations $\mathcal{S}^a_0$ and $\mathcal{S}^a_1$. The value of $x_a$ controls the role of the subsets as follows: If $x_a=0$ then all the configurations $\{{\sigma}_1\ldots{\sigma}_{K_a}\}\in\mathcal{S}^a_0$ satisfy the constraint, [*i.e.*]{} ${\varepsilon}_a=0$; all the configurations $\{{\sigma}_1\ldots{\sigma}_{K_a}\}\in\mathcal{S}^a_1$ do not satisfy the constraint and thus ${\varepsilon}_a=2$. If $x_a=1$ the role of the two subsets is exchanged. This defines the CSP completely and we can look for a configuration of ${\sigma}_i$ that minimizes the number of violated constraints. This minimizing configuration is the encoded word and the rate of the process is $R=1/\alpha$.
![Left: Factor graph for a constraint satisfaction problem with $N=7$ variables (circles) and $M=5$ constraints (squares). In this example the connectivity of the function nodes is kept fixed, that is $K_a=K$ in the notations of the text. Right: Message passing procedure ($K=6$): The probability $q_{a\to 0}(u_{a\to 0})$ depends on all the probabilities $q_{b_i\to i}(u_{b_i\to i})$, $i=1,\ldots 5$ (see text).[]{data-label="fig:factor"}](figs/fgraph.ps "fig:"){width=".45\columnwidth"} ![Left: Factor graph for a constraint satisfaction problem with $N=7$ variables (circles) and $M=5$ constraints (squares). In this example the connectivity of the function nodes is kept fixed, that is $K_a=K$ in the notations of the text. Right: Message passing procedure ($K=6$): The probability $q_{a\to 0}(u_{a\to 0})$ depends on all the probabilities $q_{b_i\to i}(u_{b_i\to i})$, $i=1,\ldots 5$ (see text).[]{data-label="fig:factor"}](figs/factor_graph.ps "fig:"){width=".45\columnwidth"}
Decoding
--------
Given the configuration ${\sigma}_1,{\sigma}_2,\ldots {\sigma}_N$, we compute for each node $a$ whether the variables ${\sigma}_{i_1^a},\ldots {\sigma}_{i_{K_a}^a}$ are in $\mathcal{S}_0$ (in this case we set $x^*_a=0$) or in $\mathcal{S}_1$ (leading to $x^*_a=1$). Since a cost ${\varepsilon}=2$ is paid for each UNSAT clause, the energy is related to the distortion by $$\label{eq:distortion}
D=\frac E{2M} = \frac E{2\alpha N} = R \frac E{2N}\ .$$
The Parity Source Coder
-----------------------
In order to give an example, we apply these ideas to the case where the CSP is a parity check (the optimization problem is then called XORSAT). In other words, the cost of the constraint $a$ is written as $${\varepsilon}_a = 1 +(-1)^{x_a} \prod_{i\in a} {\sigma}_i \ .$$ This Parity Source Coder (PSC) is the counterpart of the LDPC codes in error-correcting codes and it thus seems natural to expect a good performance from it.
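As a concrete, if tiny, illustration of the full encode/decode cycle, the sketch below (our own toy example; all sizes and the seed are arbitrary) builds a random PSC instance, performs the encoding by exhaustive minimization, which is only feasible at toy sizes and is precisely the step that lacks an efficient algorithm, and checks that the measured distortion equals $E/(2M)$ as in Eq.(\[eq:distortion\]).

``` python
import itertools, random

random.seed(1)
N, K = 12, 3                       # toy sizes: N spins, K variables per check
alpha = 2.0                        # M/N, i.e. rate R = 1/alpha = 0.5
M = int(alpha * N)

source = [random.randint(0, 1) for _ in range(M)]         # word to be compressed
checks = [random.sample(range(N), K) for _ in range(M)]   # random K-subsets

def prod(sigma, var):
    p = 1
    for i in var:
        p *= sigma[i]
    return p

def energy(sigma):
    # eps_a = 1 + (-1)^{x_a} prod_{i in a} sigma_i, i.e. cost 2 per UNSAT check
    return sum(1 + (-1) ** x * prod(sigma, var) for x, var in zip(source, checks))

# encoding (brute force): the codeword is a minimum-energy spin configuration
codeword = min(itertools.product((-1, 1), repeat=N), key=energy)

# decoding: x*_a = 0 when the product on check a equals -1 (the SAT value for x_a = 0)
decoded = [0 if prod(codeword, var) == -1 else 1 for var in checks]

E = energy(codeword)
D = sum(x != xs for x, xs in zip(source, decoded)) / M
print("R =", 1 / alpha, "  D =", D, "  E/(2M) =", E / (2 * M))
```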
In Fig. \[fig:exor\] we show that the ground state energy of the XORSAT problem, that is the theoretical capacity of the PSC, quickly approaches the Shannon bound as $K$ increases [@cime]. A PSC with checks of degree $K\gtrsim 6$ has then a theoretical capacity close to the optimal one. The problem here is that there does not exist any fast algorithm that can encode a string (for example, the SID algorithm that we discuss in the next section is known not to converge).
We will thus investigate a new kind of constraint satisfaction problem, using non-linear nodes, with (almost) the same good theoretical performance as the XORSAT problem, but with a fast encoding algorithm. Before we do this, we introduce in the next section the general formalism used in the study of non-linear nodes.
The general formalism {#sec:formalism}
=====================
Given a CSP at some $\alpha$, we are interested in knowing whether a *typical* instance of the problem is satisfiable (i.e. all the constraints can be satisfied by one global configuration) as the number of variables goes to infinity. Generally speaking, there will be a phase transition between a SAT regime (at low $\alpha$ the problem has a small number of constraints and thus is solvable – at least in principle) and an UNSAT regime where there are too many constraints and one cannot find a zero-energy configuration. In this UNSAT regime, one wants to minimize the number of violated constraints. This kind of problem can be approached by message passing algorithms.
Due to the large length of typical loops, the local structure of a typical graph is equivalent to a tree, and this is crucial in what follows. We introduce the [*cavity bias*]{} $u_{a\to i} \in \{-1,0,+1\}$ sent from a node $a$ to a variable $i$. A non-zero message means that the variable $i$ is requested to assume the actual value of $u_{a\to i}$ in order to satisfy the clause $a$. If $u_{a\to i}=0$ the variable $i$ is free to assume any value. It is clear that this message sent to $i$ should encode the information that $a$ receives from all the other variables attached to it. In order to clarify this point, we refer to Fig. \[fig:factor\] (right), that is we focus on a small portion of the graph. As the graph is locally tree-like, the variables ${\sigma}_i$, $i=1,2,...,K-1$, are only connected through clause $a$ if $N$ is large enough. If $a$ is absent, the total energy of the system can be written as $E^{N}({\sigma}_1,...{\sigma}_{K-1}) =
A-\sum_{i=1}^{K-1} h_i{\sigma}_i$. We have used here the assumption that the probability of these ${\sigma}_i$ factorizes. This is again motivated by the local tree structure, but we shall go back to this point. After clause $a$ is added, $E^{N+1}({\sigma}_0,{\sigma}_1,...{\sigma}_{K-1}) = E^{N}({\sigma}_1,...{\sigma}_{K-1}) +
{\varepsilon}_a({\sigma}_0,{\sigma}_1,...{\sigma}_{K-1})$ and the variables rearrange in order to minimize the total cost. The minimization then defines $\tilde{\varepsilon}({\sigma}_0)$ from $$A-\sum_{i=1}^{K-1}|h_i|+\tilde{\varepsilon}({\sigma}_0)
=
\min_{{\sigma}_1,\ldots{\sigma}_{K-1}} E^{N+1}({\sigma}_0,{\sigma}_1,...{\sigma}_{K-1}) \ .
\label{eq:mini}$$ This $\tilde{\varepsilon}({\sigma}_0)$ is then the cost to be paid for adding one variable with a fixed value ${\sigma}_0$. Without losing generality, it can be written as $$\tilde{\varepsilon}({\sigma}_0) \equiv \Delta_{a\to 0} - {\sigma}_0 u_{a\to 0} \ ,$$ where $u_{a\to 0}$ is the cavity bias acting on the new variable and $\Delta_{a\to 0}$ is related to the actual energy shift by $$\Delta E \equiv
\min_{{\sigma}_0,\ldots {\sigma}_{K-1}} \big[ E^{N+1}({\sigma}_0,{\sigma}_1,\ldots {\sigma}_{K-1})
- E^{N}({\sigma}_1,\ldots {\sigma}_{K-1})\big] = \Delta_{a\to 0} -|u_{a\to 0}| \ .
\label{eq:delta}$$ Given that ${\varepsilon}_a$ can be $0$ or $2$, depending on the set of fields $h_1,\ldots h_{K-1}$ one has four possibilities: $$\begin{aligned}
\tilde{\varepsilon}(+1)=0\quad\textrm{and}\quad\tilde{\varepsilon}(-1)=0 & \Rightarrow &
u=0 \ , \ \;\,\Delta=0 \,\label{eq:1}\\
\tilde{\varepsilon}(+1)=0\quad\textrm{and}\quad\tilde{\varepsilon}(-1)=2 & \Rightarrow &
u=+1 \ , \Delta=1 \,\label{eq:2}\\
\tilde{\varepsilon}(+1)=2\quad\textrm{and}\quad\tilde{\varepsilon}(-1)=0 & \Rightarrow &
u=-1\ , \Delta=1 \, \label{eq:3} \\
\tilde{\varepsilon}(+1)=2\quad\textrm{and}\quad\tilde{\varepsilon}(-1)=2 & \Rightarrow &
u=0 \ , \ \;\,\Delta=2 \ . \label{eq:4}\end{aligned}$$ In other words, a non-zero message $u$ is sent from clause $a$ to variable ${\sigma}_0$ only if the satisfiability of clause $a$ depends on ${\sigma}_0$. A null message ($u=0$) can occur in the two distinct cases (\[eq:1\]) and (\[eq:4\]).
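The rules (\[eq:1\],\[eq:2\],\[eq:3\],\[eq:4\]) are purely local and can be coded directly. The sketch below is our own illustration (the helper name `cavity_message`, the brute-force minimization and the test fields are choices made here); it evaluates $\tilde{\varepsilon}(\pm 1)$ of Eq.(\[eq:mini\]) for an arbitrary function node and integer cavity fields, and returns the corresponding pair $(u,\Delta)$.

``` python
import itertools

def cavity_message(eps, fields):
    # eps(sigmas) -> 0 (SAT) or 2 (UNSAT), sigmas = (sigma_0, sigma_1, ..., sigma_{K-1})
    # fields      -> integer cavity fields h_1, ..., h_{K-1} acting on the other spins
    def eps_tilde(s0):
        # minimal extra cost of Eq.(mini) when the new spin is frozen to s0
        return min(eps((s0,) + rest)
                   + sum(abs(h) - h * s for h, s in zip(fields, rest))
                   for rest in itertools.product((-1, 1), repeat=len(fields)))
    ep, em = eps_tilde(+1), eps_tilde(-1)   # each equals 0 or 2 for integer fields
    u = (em - ep) // 2                      # rules (1)-(4): u = +1, -1 or 0
    delta = (em + ep) // 2                  # Delta = 0, 1 or 2
    return u, delta

# example: 3-spin parity check with x_a = 0 (SAT when the product of the spins is -1)
def xor_eps(sigmas):
    p = 1
    for s in sigmas:
        p *= s
    return 1 + p

print(cavity_message(xor_eps, (2, -1)))   # both neighbours frozen: a warning is sent
print(cavity_message(xor_eps, (0, 3)))    # one free neighbour: u = 0, Delta = 0
```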
The main hypothesis we have made so far consists in assuming that any two distant variables are uncorrelated (the energy is linear in the ${\sigma}_i$’s). It turns out that, in a large region of the parameter space [@mepaze; @biroli], including the regime we are interested in, this is not correct. This is due to the fact that the space of solutions breaks into many disconnected components if $\alpha$ is greater than a critical value. In order to deal with this case, one has to introduce, for each directed link, a probability distribution of the cavity biases, namely ${\sf
q}(u) \equiv \eta^+ \delta_{u,+1} + \eta^- \delta_{u,-1} + (1-\eta^+-\eta^-)
\delta_{u,0}$. The hypothesis of no correlation holds if the phase space is restricted to one component. The interpretation of ${\sf q}_{a\to i}(u_{a\to
i})$ is the probability that a cavity bias $u_{a\to i}$ is sent from clause $a$ to variable $i$ when one component is picked at random [@meze]. According to the rules (\[eq:1\],\[eq:2\],\[eq:3\],\[eq:4\]), and with the topology of Fig. \[fig:factor\] (right) as a reference for notations, the Survey Propagation (SP, [@sp]) equations are then $$\begin{aligned}
\eta^+_{a\to 0} & \propto & \textrm{Prob} \left[
\left\{ \left(u_{b\to i_1}\right)_{b\in i_1\backslash a },
\ldots
\left(u_{b\to i_{K-1}}\right)_{b\in i_{K-1}\backslash a } \right\}
\bigg| ~\left(\tilde{\varepsilon}_a(+1)<\tilde{\varepsilon}_a(-1)\right)
\right] e^{-y\Delta E}\ ,
\label{eq:etap}\\
\eta^-_{a\to 0} & \propto & \textrm{Prob} \left[
\left\{ \left(u_{b\to i_1}\right)_{b\in i_1\backslash a },
\ldots
\left(u_{b\to i_{K-1}}\right)_{b\in i_{K-1}\backslash a } \right\}
\bigg|~\left( \tilde{\varepsilon}_a(+1)> \tilde{\varepsilon}_a(-1)\right)
\right] e^{-y\Delta E}\ ,
\label{eq:etam}\\
\eta^0_{a\to 0} & \propto & \textrm{Prob} \left [
\left\{ \left(u_{b\to i_1}\right)_{b\in i_1\backslash a },
\ldots
\left(u_{b\to i_{K-1}}\right)_{b\in i_{K-1}\backslash a } \right\}
\bigg|~\left( \tilde{\varepsilon}_a(+1)=\tilde{\varepsilon}_a(-1)\right)
\right] e^{-y\Delta E} \ ,
\label{eq:eta0} \end{aligned}$$ where the energy shift $\Delta E$ is given in eq. (\[eq:delta\]) and is non-zero only when the constraint is UNSAT for any value of ${\sigma}_0$ (cf. eq. (\[eq:4\])). The crucial reweighting term exp$(-y\Delta E)$ thus acts as a “penalty” factor each time a clause cannot be satisfied. This term is necessary in the UNSAT regime which we explore here (while simpler equations with $y=\infty$ are enough to study the SAT phase).
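In practice, Eqs.(\[eq:etap\]), (\[eq:etam\]) and (\[eq:eta0\]) can be estimated by simple sampling: draw the incoming cavity biases from their surveys, classify the outgoing message, and reweight each sample by $e^{-y\Delta E}$. The sketch below does exactly this; it is our own illustration rather than the implementation used for the results discussed later, it assumes the `cavity_message` and `xor_eps` helpers of the previous sketch are in scope, and the number of samples is arbitrary.

``` python
import math, random

def sp_update(eps, incoming, y, n_samples=20000, rng=random):
    # incoming[j] holds the surveys (eta+, eta-, eta0) reaching neighbour j from
    # its clauses b != a; the function returns the survey that clause a sends out.
    weight = {+1: 0.0, -1: 0.0, 0: 0.0}
    for _ in range(n_samples):
        fields = []
        for surveys in incoming:
            h = 0
            for eta_p, eta_m, _ in surveys:
                r = rng.random()
                h += 1 if r < eta_p else (-1 if r < eta_p + eta_m else 0)
            fields.append(h)
        u, delta = cavity_message(eps, tuple(fields))   # helper from the sketch above
        weight[u] += math.exp(-y * (delta - abs(u)))    # reweighting exp(-y Delta E)
    total = sum(weight.values())
    return weight[+1] / total, weight[-1] / total, weight[0] / total

# example: 3-spin parity check whose two neighbours each receive one polarized survey
print(sp_update(xor_eps, [[(0.9, 0.05, 0.05)], [(0.05, 0.9, 0.05)]], y=1.0))
```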
Encoding: SP and decimation
----------------------------
For each fixed value of $y$, the iterative solution for the SP equations can be implemented on a single sample [@sp], [*i.e.*]{} on a given graph where we know all the function nodes involved. The cavity probability distributions ${\sf q}(u)$’s are updated by picking up one edge at random and using (\[eq:1\],\[eq:2\],\[eq:3\],\[eq:4\]). This procedure is iterated until convergence. This yields a set of messages $\{\eta_{a\to i}^-, \eta_{a\to i}^0, \eta_{a\to i}^+\}$ on each edge of the factor graph which is the solution of the SP equations. This solution provides very useful information about the single instance that can be used for decimation. As explained in [@spy], the finite value of $y$ used for this purpose must be properly chosen. Given the solution for this $y$, one can compute the distribution $P(H_i)$ of the total bias $H_i=\sum_{a\in i} u_{a\to i}$ on each variable. One can then fix the most biased variable, [*i.e.*]{} the one with the largest $|P(H_i=+1)-P(H_i=-1)|$, to the value suggested by $P(H_i)$ itself. This leads to a reduced problem with $N-1$ variables. After solving again the SP equations for the reduced problem, the new most-biased variable is fixed and one goes on until the problem is reduced to an “easy” instance. This can be finally solved by some conventional heuristic ([*e.g.*]{} walksat or simulated annealing). A significant improvement of the decimation performance can be obtained by using a backtracking procedure [@spy; @giorgioback]: At each step we also rank the fixed variables with a strongly opposed bias and unfix the most “unstable” variable with finite probability. The algorithm described here is called Survey Inspired Decimation (SID) and its various versions have recently been shown to be very useful for many CSP problems. Unfortunately, the basic version of SID does not work for the XORSAT problem because of the symmetric character of the function nodes, which is reflected in a large number of unbiased variables (some improvements appear to be possible [@private]).
Statistical analysis: Theoretical performance
---------------------------------------------
One can perform a statistical analysis of the solutions of the SP equations by population dynamics [@meze]. The knowledge of the function node ${\varepsilon}_a$ allows one to build up a table of values of $u$ for each configuration of the local fields $h_1,\ldots,h_{K-1}$ (this is done according to the minimization procedure described above). We then start with some initial (random) population of $\vec\eta_i\equiv\{\eta^+_i,\eta^0_i,\eta^-_i\}$, for $i=1,\ldots N$. We extract a Poisson number $p$ of neighbors and $p$ probabilities $\vec\eta_{i_1},\dots \vec\eta_{i_p}$. According to these weights, $p$ biases $u_1,\ldots u_p$ are generated and their sum computed, $h_i=\sum_{a=1}^p u_{a}$. Once we have $K-1$ of these fields we perform the minimization in (\[eq:mini\]) and compute the new probability for $u_{a}$ according to the rules (\[eq:1\],\[eq:2\],\[eq:3\],\[eq:4\]). The whole process is then iterated until a stationary distribution of the cavity biases is reached. This method for solving the SP equations is very flexible with respect to the choice of the node, because this choice enters the calculation only once, at the beginning of the algorithm, in order to initialize some tables. One can also study problems with many different types of nodes: in this case, one among them is randomly chosen each time the updating is performed. We finally stress that once the probability distribution of the cavity biases is known the ground state energy of the problem can be computed according to the formalism introduced in [@meze]. The energy is expressed in terms of the probability distributions as $$\begin{aligned}
\label{eq:energy}
E(\alpha) &=& \max_{y} \Phi(\alpha,y)\ ,\\
\Phi(\alpha,y) &=&
\frac{(-1)}{y}\left\{ [1+(K-1)\alpha] \overline{\log A_p(y)}
-(K-1)\alpha \overline{\log A_{p+1}(y)} \right\} \ ,\\
A_p(y) & \equiv & \sum_{u_1,\ldots u_p} {\sf q}_1(u_1) \cdots {\sf q}_p(u_p)
\exp\left( y\big|\sum_{a=1}^p u_a\big | -y \sum_{a=1}^p| u_a|\right) \ ,\end{aligned}$$ the overline standing for an average over the Poisson distribution and the choice of the distributions ${\sf q}_1,\ldots{\sf q}_p$ in the population. Recalling that $R=1/\alpha$, the average distortion ([*i.e.*]{}, the theoretical capacity) of a compressor based on this CSP is computed through equations (\[eq:distortion\]) and (\[eq:energy\]).
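The building block $A_p(y)$ is itself easy to estimate once a set of surveys is given. The sketch below is our own illustration (the survey values, $y$ and the sample size are arbitrary); it draws the cavity biases from their distributions ${\sf q}_a(u)$ and averages the reweighting factor.

``` python
import math, random

def sample_u(survey, rng=random):
    eta_p, eta_m, eta_0 = survey
    r = rng.random()
    return 1 if r < eta_p else (-1 if r < eta_p + eta_m else 0)

def A_p(surveys, y, n_samples=100000, rng=random):
    # Monte Carlo estimate of A_p(y) = E[ exp(y |sum_a u_a| - y sum_a |u_a|) ],
    # each u_a drawn independently from its survey q_a(u)
    acc = 0.0
    for _ in range(n_samples):
        us = [sample_u(s, rng) for s in surveys]
        acc += math.exp(y * (abs(sum(us)) - sum(abs(u) for u in us)))
    return acc / n_samples

# two strongly polarized surveys pointing in opposite directions plus a flat one
print(A_p([(0.8, 0.1, 0.1), (0.1, 0.8, 0.1), (0.3, 0.3, 0.4)], y=2.0))
```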
Let us study here as an example a family of function nodes whose energy is fully invariant under permutations of the arguments. These nodes can be classified according to the energy of the node for a given value of the “magnetization” $m\equiv\sum_{i=1}^K {\sigma}_i$. We keep to odd $K$ and label a particular node by the sequence of values $\{{\varepsilon}(m=K),{\varepsilon}(m=K-2),\ldots,{\varepsilon}(m=1)\}$. In Fig. \[fig:k7\] we report the results of the population dynamics algorithm for some types of nodes at $K=7$. According to eq. (\[eq:energy\]), the ground state energy $E(\alpha)$ corresponds to the maximum over $y$ of the free energy $\Phi(\alpha,y)$ represented in the top plot. The theoretical performance of these nodes is quite close to the PSC case (the XOR node, characterized by the sequence $\{2,0,2,0\}$).
(Fig. \[fig:k7\]: free energy $\Phi(\alpha,y)$ plotted versus $y$ for the $K=7$ nodes $\{2,2,2,0\}$, $\{0,2,2,0\}$, $\{2,0,2,0\}$, $\{0,0,2,0\}$, $\{2,2,0,0\}$ and $\{0,2,0,0\}$; only the axis and curve labels of the original figure are reproduced here.)
Non-linear nodes
================
We consider in this section the function nodes that give the best performance for compression, both from the theoretical point of view (theoretical capacity close to that of the parity-check nodes) and from the algorithmic aspect (the SID algorithm at finite $y$ is found to converge in the UNSAT regime, thus giving an explicit encoding algorithm). These non-linear function nodes are defined as follows. We recall that the output ${\varepsilon}_{XOR}$ of a $K$-XORSAT node is given by twice the sum modulo 2 of the input bits. We label each configuration $\vec {\sigma}\in \{0,1\}^K$ with an integer $l\in[1,2^K]$ and consider a random permutation $\pi$ of the vector $\{1,2,\ldots,2^K\}$. Then, we can define the output of the [*random node*]{} by letting ${\varepsilon}^{(\pi)}(l) = {\varepsilon}_{XOR}(\pi(l))$. In this way we are left with a random but balanced output which can be different from XOR. Also, it is clear that these nodes are no longer defined by a linear formula over the boolean variables.
We can take advantage of the formalism introduced in the previous section in order to study the theoretical performance of these new function nodes. In particular, we have used $10$ to $30$ different random nodes to build the factor graphs. In order to improve the performance, we have also forbidden “fully-canalizing” nodes, that is nodes whose SAT character depends on just one variable.
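The construction of these random gates can be made explicit in a few lines. The sketch below is our own illustration (the function names are ours, and the rejection test encodes our reading of the “fully-canalizing” criterion, namely a cost that is a function of a single spin only); it randomly permutes the balanced truth table of the parity check and discards canalizing outcomes.

``` python
import itertools, math, random

def parity_costs(K):
    # configurations of K spins and the corresponding XORSAT costs (0 or 2)
    configs = list(itertools.product((-1, 1), repeat=K))
    return configs, [1 + math.prod(s) for s in configs]

def depends_on_one_spin(configs, costs):
    # True if the cost is a function of a single spin only
    K = len(configs[0])
    table = dict(zip(configs, costs))
    for i in range(K):
        if all(len({table[s] for s in configs if s[i] == v}) == 1 for v in (-1, 1)):
            return True
    return False

def random_node(K, rng):
    # permute the balanced XOR output at random, rejecting canalizing tables
    configs, costs = parity_costs(K)
    while True:
        permuted = costs[:]
        rng.shuffle(permuted)
        if not depends_on_one_spin(configs, permuted):
            return dict(zip(configs, permuted))

node = random_node(4, random.Random(3))
print(sum(1 for c in node.values() if c == 0), "SAT configurations out of", len(node))
```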
In Fig. \[fig:rncfr\] we show our results. The ground state energy is shown to quickly approach the Shannon bound as $K$ increases and for any $\alpha$. As an example, at $\alpha=2$ (corresponding to a compression rate $R=1/2$) the difference between the $K=8$ value and the theoretical limit is $\simeq 2\%$. This looks very promising from the point of view of data compression. Furthermore, the results obtained from the SID (same plot) show that in this case the algorithm does converge and its performance is very good. It should be noted that when $K$ becomes large the difference between SP and Belief Propagation (BP) becomes small (this can be seen for instance in the analysis of [@cime]), so in fact BP also provides a good encoding algorithm for $K\gtrsim 6$. At fixed $K$, the time needed to solve the SP equations is $\mathcal{O}(N\log N)$. The actual computational time required by the decimation process is of the same order and depends slightly on the details of the SID algorithm ([*e.g.*]{} the number of variables fixed at each iteration, whether or not a backtracking procedure is used, which kind of heuristic is adopted to solve an “easy” reduced instance, the proper definition of the latter, etc.). The dependence on $K$ is exponential. Thus, even if increasing $K$ is good from the theoretical point of view, it turns out to be very difficult to work at high $K$. In practice, it takes a few hours to compress a string of $N=1000$ bits at $K=6$ by using our general purpose software. We think that a more specialized code would lead to better performance.
To conclude, we have shown how the methods of statistical mechanics, properly adapted to deal with a new class of constraint satisfaction problems, allow one to implement a new protocol for data compression. The new tool introduced here, non-linear gates, looks very promising for other practical applications in information theory.
Acknowledgments {#acknowledgments .unnumbered}
---------------
We thank A. Montanari, F. Ricci-Tersenghi, D. Saad, M. Wainwright and J. S. Yedidia for interesting discussions. S. C. is supported by EC through the network MTR 2002-00307, DYGLAGEMEM. This work has been supported in part by the EC through the network MTR 2002-00319 STIPCO and the FP6 IST consortium EVERGROW.
[10]{}
R. G. Gallager, Low Density Parity-Check Codes (MIT Press, Cambridge, MA, 1963)
D. A. Spielman, in Lecture Notes in Computer Science [**1279,**]{} pp. 67-84 (1997).
D. MacKay, *Information Theory, Inference, and Learning Algorithms*, Cambridge University Press, (2003).
N. Sourlas, Nature [**339,**]{} 693-694 (1989).
A. Montanari, Eur. Phys. J. [**23,**]{} 121 (2001).
H. Nishimori, *Statistical Physics of Spin Glasses and Information Processing : An Introduction*, (Oxford University Press, 2001).
F. Jelinek, IEEE Trans. Inform. Theory, [**15,**]{} No. 6, 584 (1969).
E. Martinian, J. S. Yedidia, MERL report 2003.
J. Ziv, A. Lempel, IEEE Trans. Inform. Theory [**23,**]{} 337 (1977); J. Cleary, I. Witten, IEEE Trans. Commun. [**32,**]{} 396 (1984); F. Willems, Y. Shtarkov, T. Tjalkens, IEEE Trans. Inform. Theory [**41,**]{} 653 (1995).
T. Berger, J.D. Gibson, IEEE Trans. Inform. Theory, [**44,**]{} No. 6, 2693 (1998).
Some of the results presented here have appeared in S. Ciliberti, M. Mézard, R. Zecchina, Phys. Rev. Lett. [**95,**]{} 038701 (2005).
T. Hosaka, Y. Kabashima, H. Nishimori, Phys. Rev. E [**66,**]{} 066126 (2002).
D. Battaglia, A. Braunstein, J. Chavas, R. Zecchina, Phys. Rev. E [**72,**]{} 015103 (2005).
J. Garcia-Frias, Y. Zhao, IEEE Commun. Lett., 394 (Sept 2002).
G. Caire, S. Shamai, S. Verdù, in Proc. of IEEE Inform. Theory Workshop, 291, Paris (2003);
Y. Matsunaga, H. Yamamoto, in Proc. of IEEE Intern. Symp. on Inform. Theory, 461, Lausanne (2002).
R. M. Gray, D. L. Neuhoff, IEEE Trans. Inform. Theory, [**44,**]{} 2325 (1998).
S. Ciliberti, M. Mézard, [cond-mat/0506652.]{}
M. Mézard, G. Parisi, R. Zecchina, Science [**297,**]{} 812 (2002).
G. Biroli, R. Monasson, M. Weigt, Eur. Phys. J. [**B14,**]{} 551 (2000).
M. Mézard, R. Zecchina, Phys. Rev. E [**66,**]{} 056126 (2002).
A. Braunstein, M. Mézard, R. Zecchina, in *Random Structures and Algorithms* [**27**]{}, 201-226 (2005).

D. Battaglia, M. Kolàr, R. Zecchina, Phys. Rev. E [**70,**]{} 036107 (2004).
G. Parisi, [cond-mat/0308510]{}.
M. Wainwright, private communication.
---
abstract: 'The formation of the first galaxies at redshifts $z\sim 10-15$ signaled the transition from the simple initial state of the universe to one of ever increasing complexity. We here review recent progress in understanding their assembly process with numerical simulations, starting with cosmological initial conditions and modelling the detailed physics of star formation. In this context we emphasize the importance and influence of selecting appropriate initial conditions for the star formation process. We revisit the notion of a critical metallicity resulting in the transition from primordial to present-day initial mass functions and highlight its dependence on additional cooling mechanisms and the exact initial conditions. We also review recent work on the ability of dust cooling to provide the transition to present-day low-mass star formation. In particular, we highlight the extreme conditions under which this transition mechanism occurs, with violent fragmentation in dense gas resulting in tightly packed clusters.'
author:
- |
T. H. Greif$^{1,2}$, D. R. G. Schleicher$^{1}$, J. L. Johnson$^{2}$, A.-K. Jappsen$^{3}$, R. S. Klessen$^{1}$, P. C. Clark$^{1}$, S. C. O. Glover$^{1,4}$,\
A. Stacy$^{2}$, & V. Bromm$^{2}$
title: 'The formation of the first galaxies and the transition to low-mass star formation'
---
Introduction
============
One of the key goals in modern cosmology is to study the assembly process of the first galaxies, and understand how the first stars and stellar systems formed at the end of the cosmic dark ages, a few hundred million years after the Big Bang. With the formation of the first stars, the so-called Population III (Pop III), the universe was rapidly transformed into an increasingly complex, hierarchical system, due to the energy and heavy elements they released into the intergalactic medium (IGM; for recent reviews, see Barkana & Loeb 2001; Miralda-Escud[é]{} 2003; Bromm & Larson 2004; Ciardi & Ferrara 2005; Glover 2005). Currently, we can directly probe the state of the universe a few hundred thousand years after the Big Bang by detecting the anisotropies in the cosmic microwave background (CMB), thus providing us with the initial conditions for subsequent structure formation. Complementary to the CMB observations, we can probe cosmic history all the way from the present-day universe to roughly a billion years after the Big Bang, using the best available ground- and space-based telescopes. In between lies the remaining frontier, and the first galaxies are the signposts of this early, formative epoch.
There are a number of reasons why addressing the formation of the first galaxies and understanding second-generation star formation is important. First, a rigorous connection between well-established structure formation models at high redshift and the properties of present-day galaxies is still missing. An understanding of how the first galaxies formed could be a crucial step towards understanding the formation of more massive systems. Second, the initial burst of Pop III star formation may have been rather brief due to the strong negative feedback effects that likely acted to self-limit this formation mode (Madau et al. 2001; Ricotti & Ostriker 2004; Yoshida et al. 2004; Greif & Bromm 2006). Second-generation star formation, therefore, might well have been cosmologically dominant compared to Pop III stars. Despite their importance for cosmic evolution, e.g., by possibly constituting the majority of sources for the initial stages of reionization at $z>10$, we currently do not know the properties, and most importantly the typical mass scale, of the second-generation stars that formed in the wake of the very first stars. Finally, a subset of second-generation stars, those with masses below $\simeq 1~M_{\odot}$, would have survived to the present day. Surveys of extremely metal-poor Galactic halo stars therefore provide an indirect window into the Pop III era by scrutinizing their chemical abundance patterns, which reflect the enrichment from a single, or at most a small number of, Pop III supernovae (SNe; Christlieb et al. 2002; Beers & Christlieb 2005; Frebel et al. 2005). Stellar archaeology thus provides unique empirical constraints for numerical simulations, from which one can derive theoretical abundance patterns to be compared with the data.
Focusing on numerical simulations as the key driver of structure formation theory, the best strategy is to start with cosmological initial conditions, follow the evolution up to the formation of a small number ($N<10$) of Pop III stars, and trace the ensuing expansion of SN blast waves together with the dispersal and mixing of the first heavy elements, towards the formation of second-generation stars out of enriched material (Greif et al. 2007; Wise & Abel 2007a). In this sense some of the most pressing questions are: How does radiative and mechanical feedback by the very first stars in minihalos affect the formation of the first galaxies? How and when does metal enrichment govern the transition to low-mass star formation? Is there a critical metallicity at which this transition occurs? How does turbulence affect the chemical mixing and fragmentation of the gas? These questions have been addressed with detailed numerical simulations as well as analytic arguments over the last few years, and we here review some of the most recent work. For consistency, all quoted distances are physical, unless noted otherwise.
Feedback by Population III.1 Stars in Minihalos
===============================================
Feedback by the very first stars in minihalos plays an important role for the subsequent build-up of the first galaxies. Among the most prominent mechanisms are ionizing and molecule-dissociating radiation emitted by massive Pop III.1 stars, as well as the mechanical and chemical feedback exerted by the first SNe. In the next few sections, we briefly discuss these mechanisms in turn.
Radiative Feedback
------------------
Star formation in primordial gas is believed to produce very massive stars. During their brief lifetimes of $\simeq 2-3~\rm{Myr}$, they produce $\sim 4\times 10^{4}$ ionizing photons per stellar baryon and thus have a significant impact on their environment (Bromm et al. 2001b; Schaerer 2002). Based on the large optical depth measured by the [*Wilkinson Microwave Anisotropy Probe*]{} ([*WMAP*]{}) after one year of operation, Wyithe & Loeb (2003) suggested that the universe was reionized by massive metal-free stars. Even though the optical depth to reionization decreased significantly with the [*WMAP*]{} 5 data (Komatsu et al. 2008; Nolta et al. 2008), recent reionization studies indicate that massive stars are still required (Schleicher et al. 2008a). Considering different reionization scenarios with and without additional physics such as primordial magnetic fields, these authors showed that stellar populations following a Scalo-type initial mass function (IMF; Scalo 1998) are ruled out at the $3\sigma$ level, unless very high star formation efficiencies of order $10\%$ are adopted. By contrast, populations of very massive stars, or mixed populations, can easily provide the required optical depth.
Apart from their ionizing flux, Pop III.1 stars also emit a strong flux of H$_2$-dissociating Lyman-Werner (LW) radiation (Bromm et al. 2001b; Schaerer 2002). Thus, the radiation from the first stars dramatically influences their surroundings, heating and ionizing the gas within a few kiloparsec around the progenitor, and destroying the H$_2$ and HD molecules locally within somewhat larger regions (Ferrara 1998; Kitayama et al. 2004; Whalen et al. 2004; Alvarez et al. 2006; Abel et al. 2007; Johnson et al. 2007). Additionally, the LW radiation emitted by the first stars could propagate across cosmological distances, allowing the build-up of a pervasive LW background radiation field (Haiman et al. 2000). The impact of radiation from the first stars on their local surroundings has important implications for the numbers and types of Pop III stars that form. The photoheating of gas in the minihalos hosting Pop III.1 stars drives strong outflows, lowering the density of the primordial gas and delaying subsequent star formation by up to $100~\rm{Myr}$ (Whalen et al. 2004; Johnson et al. 2007; Yoshida et al. 2007). Furthermore, neighboring minihalos may be photoevaporated, delaying star formation in such systems as well (Shapiro et al. 2004; Susa & Umemura 2006; Ahn & Shapiro 2007; Greif et al. 2007; Whalen et al. 2008a). The photodissociation of molecules by LW photons emitted from local star-forming regions will, in general, act to delay star formation by destroying the main coolants that allow the gas to collapse and form stars.
The photoionization of primordial gas, however, can ultimately lead to the production of copious amounts of molecules within the relic H [ii]{} regions surrounding the remnants of Pop III.1 stars (Figure 1; see also Ricotti et al. 2001; Oh & Haiman 2002; Nagakura & Omukai 2005; Johnson & Bromm 2007). Recent simulations tracking the formation of, and radiative feedback from, individual Pop III.1 stars in the early stages of the assembly of the first galaxies have demonstrated that the accumulation of relic H [ii]{} regions has two important effects. First, the HD abundance that develops in relic H [ii]{} regions allows the primordial gas to re-collapse and cool to the temperature of the CMB, possibly leading to the formation of Pop III.2 stars in these regions (Johnson et al. 2007; Yoshida et al. 2007; Greif et al. 2008b). Second, the molecule abundance in relic H [ii]{} regions, along with their increasing volume-filling fraction, leads to a large optical depth to LW photons over physical distances of the order of several kiloparsecs. The development of a high optical depth to LW photons over such short length-scales suggests that the optical depth to LW photons over cosmological scales may be very high, acting to suppress the build-up of a background LW radiation field, and mitigating negative feedback on star formation.
Even absent a large optical depth to LW photons, Pop III.1 stars in minihalos may readily form at $z>15$. While star formation in more massive systems may proceed relatively unimpeded through atomic line cooling, during the earliest epochs of star formation these atomic-cooling halos are rare compared to the minihalos which host individual Pop III stars. Although the process of star formation in atomic cooling halos is not well understood, for a broad range of models the dominant contribution to the LW background is from Pop III.1 stars formed in minihalos at $z\geq 15-20$. Therefore, at these redshifts the LW background radiation may be largely self-regulated, with Pop III.1 stars producing the very radiation which, in turn, suppresses their formation. Johnson et al. (2008) argue that there is a critical value for the LW flux, $J_{\rm{LW,crit}}\sim 0.04$, at which Pop III.1 star formation occurs self-consistently, with the implication that the Pop III.1 star formation rate in minihalos at $z>15$ is decreased by only a factor of a few. Simulations of the formation of the first galaxies at $z\geq 10$ which take into account the effect of a LW background at $J_{\rm{LW,crit}}$ show that Pop III.1 star formation takes place before the galaxy is fully assembled, suggesting that the formation of the first galaxies does indeed take place after chemical enrichment by the first SN explosions (Greif et al. 2007; Wise & Abel 2007a; Johnson et al. 2008; Whalen et al. 2008b).
![The chemical interplay in relic H [ii]{} regions. While all molecules are destroyed in and around active H [ii]{} regions, the high residual electron fraction in relic H [ii]{} regions catalyzes the formation of an abundance of H$_2$ and HD molecules. The light and dark shades of blue denote regions with a free electron fraction of $5\times 10^{-3}$ and $5\times 10^{-4}$, respectively, while the shades of green denote regions with an H$_2$ fraction of $10^{-4}$, $10^{-5}$, and $3\times 10^{-6}$, in order of decreasing brightness. The regions with the highest molecule abundances lie within relic H [ii]{} regions, which thus play an important role for subsequent star formation, allowing molecules to become shielded from photodissociating radiation and altering the cooling properties of the primordial gas (see Johnson et al. 2007).](f1.eps){width="12cm"}
Mechanical Feedback
-------------------
Numerical simulations have indicated that Pop III.1 stars might become as massive as $500~\rm{M}_{\odot}$ (Omukai & Palla 2003; Bromm & Loeb 2004; Yoshida et al. 2006; O’Shea & Norman 2007). After their main-sequence lifetimes of typically $2-3~\rm{Myr}$, stars with masses below $\simeq 100~\rm{M}_{\odot}$ are thought to collapse directly to black holes without significant metal ejection, while in the range $\simeq 140-260~\rm{M}_{\odot}$ a pair-instability supernova (PISN) disrupts the entire progenitor, with explosion energies ranging from $10^{51}-10^{53}~\rm{ergs}$, and yields of order $50\%$ (Heger & Woosley 2002; Heger et al. 2003). Less massive primordial stars with a high degree of angular momentum might explode with similar energies, but as jet-like hypernovae (Umeda & Nomoto 2002; Tominaga et al. 2007). The significant mechanical and chemical feedback effects exerted by such explosions have been investigated with a number of detailed calculations, but these were either performed in one dimension (Salvaterra et al. 2004; Kitayama & Yoshida 2005; Machida et al. 2005; Whalen et al. 2008b), or did not start from realistic initial conditions (Bromm et al. 2003; Norman et al. 2004). Recent work treated the full three-dimensional problem in a cosmological context at the cost of limited resolution, finding that the SN remnant propagated for a Hubble time at $z\simeq 20$ to a final mass-weighted mean shock radius of $2.5~\rm{kpc}$, roughly half the size of the H [ii]{} region (Greif et al. 2007). Due to the high explosion energy, the host halo was entirely evacuated. Additional simulations in the absence of a SN explosion were performed to investigate the effect of photoheating and the impact of the SN shock on neighboring minihalos. For the case discussed in Greif et al. (2007), the SN remnant exerted positive mechanical feedback on neighboring minihalos by shock-compressing their cores, while photoheating marginally delayed star formation. Although a viable theoretical possibility, secondary star formation in the dense shell via gravitational fragmentation (e.g. Machida et al. 2005; Mackey et al. 2003; Salvaterra et al. 2004) was not observed, primarily due to the previous photoheating by the progenitor and the rapid adiabatic expansion of the shocked gas.
Chemical Feedback
-----------------
The dispersal of metals by the first SN explosions transformed the IGM from a simple, pure H/He gas to one with ubiquitous metal enrichment. The resulting cooling ultimately enabled the formation of the first low-mass stars; the key question is then when and where this transition occurred. As indicated in the previous section, such a transition could only occur well after the explosion, as cooling by metal lines or dust requires the gas to re-collapse to high densities. Furthermore, the distribution of metals becomes highly anisotropic, since the shocked gas expands preferentially into the voids around the host halo. Due to the high temperature and low density of the shocked gas, dark matter (DM) halos with $M_{\rm{vir}}\sim 10^{8}~\rm{M}_{\odot}$ must be assembled to efficiently mix the gas. For this reason, the first galaxies likely mark the formation environments of the first low-mass stars and stellar clusters (see Section 5).
The First Galaxies and the Onset of Turbulence
==============================================
How massive were the first galaxies, and when did they emerge? Theory predicts that DM halos containing a mass of $\sim 10^8~M_{\odot}$ and collapsing at $z\sim 10$ were the hosts for the first bona fide galaxies. These systems are special in that their associated virial temperature exceeds the threshold, $\simeq 10^4~\rm{K}$, for cooling due to atomic hydrogen (Oh & Haiman 2002). These so-called ‘atomic-cooling halos’ did not rely on the presence of molecular hydrogen to enable cooling of the primordial gas. In addition, their potential wells were sufficiently deep to retain photoheated gas, in contrast to the shallow potential wells of minihalos (Madau et al. 2001; Mori et al. 2002; Dijkstra et al. 2004). These are arguably minimum requirements to set up a self-regulated process of star formation that comprises more than one generation of stars, and is embedded in a multi-phase interstellar medium. In this sense, we will term all objects with a virial temperature exceeding $10^4~\rm{K}$ a ‘first galaxy’ (see Figure 2).
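For orientation, the quoted numbers follow from the standard virial temperature-mass relation (e.g. Barkana & Loeb 2001); the prefactor below is approximate and depends on the adopted cosmology and mean molecular weight $\mu$:
$$T_{\rm{vir}} \simeq 2\times 10^{4}~{\rm K}\,\left(\frac{\mu}{0.6}\right)\left(\frac{M_{\rm{vir}}}{10^{8}\,h^{-1}\,{\rm M}_{\odot}}\right)^{2/3}\left(\frac{1+z}{10}\right),$$
so a halo of $\sim 10^{8}~{\rm M}_{\odot}$ collapsing at $z\sim 10$ indeed crosses the $\simeq 10^{4}~\rm{K}$ atomic-cooling threshold.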
An important consequence of atomic cooling is the softening of the equation of state below the virial radius, allowing a fraction of the potential energy to be converted into kinetic energy (Wise & Abel 2007b). Perturbations in the gravitational potential can then generate turbulent motions on galactic scales, which are transported to the center of the galaxy. In this context the distinction between two fundamentally different modes of accretion becomes important. Gas accreted directly from the IGM is heated to the virial temperature and comprises the sole channel of inflow until cooling in filaments becomes important. This mode is termed hot accretion, and dominates in low-mass halos at high redshift. The formation of the virial shock and the concomitant heating are visible in Figure 3, where we show the hydrogen number density and temperature of the central $\simeq 40~\rm{kpc}$ (comoving) around the center of a first galaxy (Greif et al. 2008b). This case also reveals a second mode, termed cold accretion. It becomes important as soon as filaments are massive enough to enable molecule reformation, which allows the gas to cool and flow into the nascent galaxy with high velocities. These streams create a multitude of unorganized shocks near the center of the galaxy and could trigger the gravitational collapse of individual clumps (Figure 4). In concert with metal enrichment by previous star formation in minihalos, chemical mixing might be highly efficient and could lead to the formation of the first low-mass star clusters (Clark et al. 2008), in extreme cases possibly even to metal-poor globular clusters (Bromm & Clarke 2002). Some of the extremely iron-deficient, but carbon and oxygen-enhanced stars observed in the halo of the Milky Way may thus have formed as early as redshift $z\simeq 10$.
![The DM overdensity, hydrogen number density and temperature averaged along the line of sight within the central $\simeq 150~\rm{kpc}$ (comoving) of a simulation depicting the assembly of a first galaxy, shown at three different output times. White crosses denote Pop III.1 star formation sites in minihalos, and the insets approximately delineate the boundary of the galaxy, further enlarged in Figures 3 and 4. [*Top row:*]{} The hierarchical merging of DM halos leads to the collapse of increasingly massive structures, with the least massive progenitors forming at the resolution limit of $\simeq 10^{4}~\rm{M}_{\odot} $ and ultimately merging into the first galaxy with $\simeq 5\times 10^{7}~\rm{M}_{\odot}$. The brightest regions mark halos in virial equilibrium according to the commonly used criterion $\rho/\bar{\rho}>178$. Although the resulting galaxy is not yet fully virialized and is still broken up into a number of sub-components, it shares a common potential well and the infalling gas is attracted towards its center of mass. [*Middle row:*]{} The gas generally follows the potential set by the DM, but pressure forces prevent collapse in halos below $\simeq 2\times 10^{4}~\rm{M}_{\odot}$ (cosmological Jeans criterion). Moreover, star formation only occurs in halos with virial masses above $\simeq 10^{5}~\rm{M}_{\odot}$, as densities must become high enough for molecule formation and cooling. [*Bottom row:*]{} The virial temperature of the first star-forming minihalo gradually increases from $\simeq 10^{3}~\rm{K}$ to $\simeq 10^{4}~\rm{K}$, at which point atomic cooling sets in (see Greif et al. 2008b).](f2.eps){width="12cm"}
![The central $\simeq 40~\rm{kpc}$ (comoving) of a simulation depicting the assembly of a first galaxy. Shown is the hydrogen number density ([*left-hand side*]{}) and temperature ([*right-hand side*]{}) in a slice centered on the galaxy. The dashed lines denote the virial radius at a distance of $\simeq 1~\rm{kpc}$. Hot accretion dominates where gas is accreted directly from the IGM and shock-heated to $\simeq 10^{4}~\rm{K}$. In contrast, cold accretion becomes important as soon as gas cools in filaments and flows towards the center of the galaxy, such as the streams coming from the left- and right-hand side. They drive a prodigious amount of turbulence and create transitory density perturbations that could in principle become Jeans-unstable. In contrast to minihalos, the initial conditions for second-generation star formation are highly complex (see Greif et al. 2008b).](f3.eps){width="12cm"}
![The central $\simeq 40~\rm{kpc}$ (comoving) of a simulation depicting the assembly of a first galaxy. Shown is the divergence ([*left-hand side*]{}) and z-component of the vorticity ([*right-hand side*]{}) in a slice centered on the galaxy. The dashed lines denote the virial radius at a distance of $\simeq 1~\rm{kpc}$. The most pronounced feature in the left-hand panel is the virial shock, where the ratio of infall speed to local sound speed approaches unity and the gas decelerates over a comparatively small distance. In contrast, the vorticity at the virial shock is almost negligible. The high velocity gradients at the center of the galaxy indicate the formation of a multitude of shocks where the bulk radial flows of filaments are converted into turbulent motions on small scales (see Greif et al. 2008b).](f4.eps){width="12cm"}
Importance of Initial Conditions and Metal Enrichment
=====================================================
Related to the issue of metal enrichment, an important question is what controls the transition from a population of very massive stars to a distribution biased towards low-mass stars. In a seminal paper, Bromm et al. (2001a) performed simulations of the collapse of cold gas in a top-hat potential that included the metallicity-dependent effects of atomic fine-structure cooling. In the absence of molecular cooling, they found that fragmentation suggestive of a present-day IMF only set in at metallicities above a threshold value of $Z\simeq 10^{-3.5}~\rm{Z}_{\odot}$. However, they noted that the neglect of molecular cooling could be significant. Omukai et al. (2005) argued, based on the results of their detailed one-zone models, that molecular cooling would indeed dominate the cooling over many orders of magnitude in density.
The effects of molecular cooling at densities up to $n\simeq 500~\rm{cm}^{-3}$ have been discussed by Jappsen et al. (2007a) in three-dimensional collapse simulations of warm ionized gas in minihalos for a wide range of environmental conditions. This study used a time-dependent chemical network running alongside the hydrodynamic evolution as described in Glover & Jappsen (2007). The physical motivation was to investigate whether minihalos that formed within the relic H [ii]{} regions left by neighboring Pop III stars could form subsequent generations of stars themselves, or whether the elevated temperatures and fractional ionizations found in these regions suppressed star formation until larger halos formed. In this study, it was found that molecular hydrogen dominated the cooling of the gas for abundances up to at least $10^{-2}~\rm{Z}_{\odot}$. In addition, there was no evidence for fragmentation at densities below $500~\rm{cm}^{-3}$. Jappsen et al. (2007b) showed that gas in simulations with low initial temperature, moderate initial rotation, and a top-hat DM overdensity, will readily fragment into multiple objects, regardless of metallicity, provided that enough H$_{2}$ is present to cool the gas. Rotation leads to the build-up of massive disk-like structures in these simulations, which allow smaller-scale fluctuations to grow and become gravitationally unstable. The resulting mass spectrum of fragments peaks at a few hundred solar masses, roughly corresponding to the thermal Jeans mass in the disk-like structure (see Figure 5). These results suggest that the initial conditions adopted by Bromm et al. (2001a) may have determined the result much more than might have been appreciated at the time. To make further progress in understanding the role that metal-line cooling plays in promoting fragmentation, it is paramount to develop a better understanding of how metals mix with pristine gas in the wake of the first galaxies.
![Gas temperature versus hydrogen density ([*left-hand side*]{}) and mass distribution of clumps ([*right-hand side*]{}) for gas collapsing in a typical minihalo, shown for the primordial case and pre-enriched to $Z=10^{-3}$ and $10^{-1}~\rm{Z}_{\odot}$, from top to bottom (see also Jappsen et al. 2007b). The temperature evolution of primordial gas is very similar to the $Z=10^{-3}~\rm{Z}_{\odot}$ case, showing that metal-line cooling becomes important only for very high metallicities. The fraction of low-mass fragments increases with higher metallicity, since more gas can cool to the temperature of the CMB before becoming Jeans-unstable. However, the fragments are still very massive, suggesting that metal-line cooling might not be responsible for the transition to low-mass star formation. Instead, this transition might be governed by dust cooling occurring at higher densities. For more definitive conclusions, one must perform detailed numerical simulations that follow the collapse to higher densities in a realistic cosmological environment.](f5.eps){width="12cm"}
Transition from Population III to Population II
===============================================
The discovery of extremely metal-poor subgiant stars in the Galactic halo with masses below one solar mass (Christlieb et al. 2002; Beers & Christlieb 2005; Frebel et al. 2005) indicates that the transition from primordial, high-mass star formation to the ‘normal’ mode of star formation that dominates today occurs at abundances considerably smaller than the solar value. At the extreme end, these stars have iron abundances less than $10^{-5}~\rm{Z}_{\odot}$, and carbon or oxygen abundances that are still $\leq 10^{-3}$ the solar value. These stars are thus strongly iron deficient, which could be due to unusual abundance patterns produced by enrichment from Pop III stars (Umeda & Nomoto 2002), or due to mass transfer from a close binary companion (Ryan et al. 2005; Komiya et al. 2007). Recent work has shown that there are hints for an increasing binary fraction with decreasing metallicity (Lucatello et al. 2005). However, if metal enrichment is the key to the formation of low-mass stars, then logically there must be some critical metallicity $Z_{\rm{crit}}$ at which the formation of low-mass stars first becomes possible. However, the value of $Z_{\rm{crit}}$ is a matter of ongoing debate. As discussed in the previous sections, some models suggest that low-mass star formation becomes possible only once atomic fine-structure line cooling from carbon and oxygen becomes effective (Bromm et al. 2001a; Bromm & Loeb 2003; Santoro et al. 2006; Frebel et al. 2007), setting a value for $Z_{\rm{crit}}$ at around $10^{-3.5}~\rm{Z}_{\odot}$. Another possibility is that low-mass star formation is a result of dust-induced fragmentation occurring at high densities, and thus at a very late stage in the protostellar collapse (Schneider et al. 2002; Omukai et al. 2005; Schneider et al. 2006; Tsuribe & Omukai 2006). In this model, $10^{-6}\leq Z_{\rm{crit}}\leq 10^{-4}~\rm{Z}_{\odot}$, where much of the uncertainty in the predicted value results from uncertainties in the dust composition and the degree of gas-phase depletion (Schneider et al. 2002, 2006).
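The fine-structure scenario is often phrased in terms of a ‘transition discriminant’ that combines the carbon and oxygen abundances (Frebel et al. 2007). As a rough guide (the precise coefficient and threshold should be taken from that paper), low-mass star formation is expected to become possible once
$$D_{\rm{trans}} \equiv \log_{10}\left(10^{[{\rm C/H}]} + 0.3\times 10^{[{\rm O/H}]}\right) \gtrsim -3.5,$$
which is the observationally testable form of the $Z_{\rm{crit}}\simeq 10^{-3.5}~\rm{Z}_{\odot}$ criterion quoted above.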
![Time evolution of the density distribution in the innermost $400~\rm{AU}$ of the gas cloud shortly before and shortly after the formation of the first protostar at $t_{\rm{SF}}$ (six panels, f6a–f6f, at successive output times). Only gas at densities above $10^{10}~\rm{cm}^{-3}$ is plotted. The dynamical timescale at a density of $n=10^{13}~\rm{cm}^{-3}$ is of the order of $10$ years. Dark dots indicate the location of protostars as identified by sink particles forming at $n\ge 10^{17}~\rm{cm}^{-3}$. Note that without usage of sink particles to identify collapsed protostellar cores one would not have been able to follow the build-up of the protostellar cluster beyond the formation of the first object. There are $177$ protostars when we stop the calculation at $t=t_{\rm{SF}}+420~\rm{yr}$. They occupy a region roughly a hundredth of the size of the initial cloud (see Clark et al. 2008).](f6a){width="12cm"}
In recent work, Clark et al. (2008) focused on dust-induced fragmentation in the high-density regime, with $10^5~\rm{cm}^{-3}\le n\le 10^{17}~\rm{cm}^{-3}$. They modeled star formation in the central regions of low-mass halos at high redshift adopting an equation of state (EOS) similar to Omukai et al. (2005), finding that enrichment of the gas to a metallicity of only $Z=10^{-5}~\rm{Z}_{\odot}$ dramatically enhances fragmentation. A typical time evolution is illustrated in Figure 6. It shows several stages in the collapse process, spanning a time interval from shortly before the formation of the first protostar (as identified by the formation of a sink particle in the simulation) to $420$ years afterwards. During the initial contraction, the cloud builds up a central core with a density of about $n=10^{10}~\rm{cm}^{-3}$. This core is supported by a combination of thermal pressure and rotation. Eventually, the core reaches high enough densities to go into free-fall collapse, and forms a single protostar. As more high angular momentum material falls to the center, the core evolves into a disk-like structure with density inhomogeneities caused by low levels of turbulence. As it grows in mass, its density increases. When dust-induced cooling sets in, it fragments heavily into a tightly packed protostellar cluster within only a few hundred years. One can see this behavior in particle density-position plots in Figure 7. The simulation is stopped $420$ years after the formation of the first stellar object (sink particle). At this point, the core has formed $177$ stars. The time between the formation of the first and second protostar is roughly $23$ years, which is two orders of magnitude higher than the free-fall time at the density where the sinks are formed. Note that without the inclusion of sink particles, one would only have been able to capture the formation of the first collapsing object which forms the first protostar: the formation of the accompanying cluster would have been missed entirely.
![To illustrate the onset of the fragmentation process in the $Z=10^{-5}~\rm{Z}_{\odot}$ simulation, the graphs show the densities of the particles, plotted as a function of their position (four panels, f7a–f7d). Note that for each plot, the particle data has been centered on the region of interest. Results are plotted for three different output times, ranging from the time that the first star forms ($t_{\rm{SF}}$) to $221$ years afterwards. The densities lying between the two horizontal dashed lines denote the range over which dust cooling lowers the gas temperature. We also plot the mass function for a metallicity of $Z=10^{-5}~\rm{Z}_{\odot}$ and mass resolution $0.002~\rm{M}_{\odot}$ and $0.025~\rm{M}_{\odot}$, respectively. Note the similarity between the results of the low-resolution and high-resolution simulations. The onset of dust cooling in the $Z=10^{-5}~\rm{Z}_{\odot}$ cloud results in a stellar cluster which has a mass function similar to that for present-day stars, in that the majority of the mass resides in the low-mass objects. This contrasts with the $Z=10^{-6}~\rm{Z}_{\odot}$ and primordial clouds, in which the bulk of the cluster mass is in high-mass stars (see Clark et al. 2008).](f7a){width="12cm"}
The fragmentation of low-metallicity gas in this model is the result of two key features in its thermal evolution. First, the rise in the EOS curve between densities $10^{9}~\rm{cm}^{-3}$ and $10^{11}~\rm{cm}^{-3}$ causes material to loiter at this point in the gravitational contraction. A similar behavior at densities around $n=10^3~\rm{cm}^{-3}$ is discussed by Bromm et al. (2001a). The rotationally stabilized disk-like structure, as seen in the plateau at $n\simeq 10^{10}~\rm{cm}^{-3}$ in Figure 7, is able to accumulate a significant amount of mass in this phase and only slowly increases in density. Second, once the density exceeds $n\simeq 10^{12}~\rm{cm}^{-3}$, the sudden drop in the EOS curve lowers the critical mass for gravitational collapse by two orders of magnitude. The Jeans mass in the gas at this stage is only $M_{\rm{J}}=0.01~\rm{M}_{\odot}$. The disk-like structure suddenly becomes highly unstable against gravitational collapse and fragments vigorously on timescales of several hundred years. A very dense cluster of embedded low-mass protostars builds up, and the protostars grow in mass by accretion from the available gas reservoir. The number of protostars formed by the end of the simulation is nearly two orders of magnitude larger than the initial number of Jeans masses in the cloud setup.
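As a rough consistency check (the representative temperature and density inserted below are our own choices, meant only to be typical of gas after the dust-cooling dip), the standard Jeans mass
$$M_{\rm{J}} \simeq \left(\frac{5k_{\rm{B}}T}{G\mu m_{\rm{H}}}\right)^{3/2}\left(\frac{3}{4\pi\rho}\right)^{1/2} \approx 0.03~{\rm M}_{\odot}\,\left(\frac{T}{300~{\rm K}}\right)^{3/2}\left(\frac{n}{10^{13}~{\rm cm}^{-3}}\right)^{-1/2}$$
(for $\mu\simeq 2.3$) reproduces the quoted $M_{\rm{J}}\sim 10^{-2}~\rm{M}_{\odot}$ to within a factor of a few.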
Because the evolutionary timescale of the system is extremely short – the free-fall time at a density of $n=10^{13}~\rm{cm}^{-3}$ is of the order of 10 years – none of the protostars that have formed by the time that the simulation is stopped have yet commenced hydrogen burning. This justifies neglecting the effects of protostellar feedback in this study. Heating of the dust due to the significant accretion luminosities of the newly-formed protostars will occur (Krumholz 2006), but is unlikely to be important, as the temperature of the dust at the onset of dust-induced cooling is much higher than in a typical Galactic protostellar core ($T_{\rm{dust}}\sim 100~\rm{K}$ or more, compared to $\sim 10~\rm{K}$ in the Galactic case). The rapid collapse and fragmentation of the gas also leaves no time for dynamo amplification of magnetic fields (Tan & Blackman 2004), which in any case are expected to be weak and dynamically unimportant in primordial and very low metallicity gas (Widrow 2002). However, other authors suggest that the Biermann battery effect may amplify weak initial fields such that the magneto-rotational instability can influence the further collapse of the star (Silk & Langer 2006). Simulations by Xu et al. (2008) show that this effect yields peak magnetic fields of $1~\rm{nG}$ in the center of star-forming minihalos. Jets and outflows may reduce the final stellar mass by $3-10\%$ (Machida et al. 2006). In the presence of primordial fields, the magnetic pressure may even prevent star formation in minihalos and thus increase the mass scale of star-forming objects (Schleicher et al. 2008a,b).
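The quoted timescale follows directly from the free-fall time; adopting a mean molecular weight $\mu\simeq 2.3$ (our assumption),
$$t_{\rm{ff}} = \sqrt{\frac{3\pi}{32\,G\,\rho}} \approx 10~{\rm yr}\,\left(\frac{n}{10^{13}~{\rm cm}^{-3}}\right)^{-1/2},$$
which drops to only $\sim 0.1~\rm{yr}$ at the sink-formation density of $n\geq 10^{17}~\rm{cm}^{-3}$.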
The mass functions of the protostars at the end of the $Z=10^{-5}~\rm{Z}_{\odot}$ simulations (both high and low resolution cases) are shown in Figure 7. When the simulation is terminated, collapsed cores hold $\sim 19~\rm{M}_{\odot}$ of gas in total. The mass function peaks somewhere below $0.1~\rm{M}_{\odot}$ and ranges from below $0.01~\rm{M}_{\odot}$ to about $5~\rm{M}_{\odot}$. This is not the final protostellar mass function. The continuing accretion of gas by the cluster will alter the mass function, as will mergers between the newly-formed protostars (which cannot be followed using our current sink particle implementation). Protostellar feedback in the form of winds, jets and H [ii]{} regions may also play a role in determining the shape of the final stellar mass function. However, a key point to note is that the chaotic evolution of a bound system such as this cluster ensures that a wide spread of stellar masses will persist. Some stars will enjoy favorable accretion at the expense of others that will be thrown out of the system (as can be seen in Figure 6), thus having their accretion effectively terminated (see the discussions in Bonnell & Bate 2006; Bonnell et al. 2007). The survival of some of the low-mass stars formed in the cluster is therefore inevitable.
The forming cluster represents a very extreme analogue of the clustered star formation that we know dominates in the present-day universe (Lada & Lada 2003). A mere $420$ years after the formation of the first object, the cluster has formed $177$ stars (see Figure 6). These occupy a region of only around $400~\rm{AU}$, or $2\times 10^{-3}~\rm{pc}$, in size, roughly a hundredth of the size of the initial cloud. With $\sim 19~\rm{M}_{\odot}$ accreted at this stage, the stellar density is $2.25\times 10^{9}~\rm{M}_{\odot}~\rm{pc}^{-3}$. This is about five orders of magnitude greater than the stellar density in the Trapezium cluster in Orion (Hillenbrand & Hartmann 1998) and about a thousand times greater than that in the core of 30 Doradus in the Large Magellanic Cloud (Massey & Hunter 1998). This means that dynamical encounters will be extremely important during the formation of the first star cluster. The violent environment causes stars to be thrown out of the denser regions of the cluster, slowing down their accretion. The stellar mass spectrum thus depends on both the details of the initial fragmentation process (e.g. as discussed by Clark & Bonnell 2005; Jappsen et al. 2005) as well as dynamical effects in the growing cluster (Bonnell et al. 2001, 2004). This is different to present-day star formation, where the situation is less clear-cut and the relative importance of these two processes may vary strongly from region to region (Krumholz et al. 2005; Bonnell & Bate 2006; Bonnell et al. 2007). In future work, it will be important to assess the validity of the initial conditions adopted for the present study, ideally by performing cosmological simulations that simultaneously follow the formation of the first galaxies and the metal enrichment by primordial SNe in minihalos.
Summary
=======
Understanding the formation of the first galaxies marks the frontier of high-redshift structure formation. It is crucial to predict their properties in order to develop the optimal search and survey strategies for the [*JWST*]{}. Whereas [*ab-initio*]{} simulations of the very first stars can be carried out from first principles, and with virtually no free parameters, one faces a much more daunting challenge with the first galaxies. Now, the previous history of star formation has to be considered, leading to enhanced complexity in the assembly of the first galaxies. One by one, all the complex astrophysical processes that play a role in more recent galaxy formation appear back on the scene. Among them are external radiation fields, comprising UV and X-ray photons, as well as local radiative feedback that may alter the star formation process on small scales. Perhaps the most important issue, though, is metal enrichment in the wake of the first SN explosions, which fundamentally alters the cooling and fragmentation properties of the gas. Together with the onset of turbulence (Wise & Abel 2007b; Greif et al. 2008b), chemical mixing might be highly efficient and could lead to the formation of the first low-mass stars and stellar clusters (Clark et al. 2008).
In this sense a crucial question is whether the transition from Pop III to Pop II stars is governed by atomic fine-structure or dust cooling. Theoretical work has indicated that molecular hydrogen dominates over metal-line cooling at low densities (Jappsen et al. 2007a,b), and that fragment masses below $\sim 1~\rm{M}_{\odot}$ can only be attained via dust cooling at high densities (Omukai et al. 2005; Clark et al. 2008). On the other hand, observational studies seem to favor the fine-structure based model (Frebel et al. 2007), even though existing samples of extremely metal-poor stars in the Milky Way are statistically limited (Christlieb et al. 2002; Beers & Christlieb 2005; Frebel et al. 2005). Moreover, these studies assume that their abundances are related to primordial star formation, a connection that is still debated (Lucatello et al. 2005; Ryan et al. 2005; Komiya et al. 2007). In light of these uncertainties, it is essential to push numerical simulations to ever lower redshifts and to include additional physics in the form of radiative feedback, metal dispersal (Greif et al. 2008a), chemistry and cooling (e.g. Glover & Jappsen 2007) and the effects of magnetic fields (e.g. Xu et al. 2008). We are confident that many interesting discoveries, both theoretical and observational, await us in the rapidly growing field of early galaxy formation.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank the organisers of the IAU Symposium 255 for a very enjoyable and stimulating conference. DRGS thanks the LGFG for financial support. PCC acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) under grant KL 1358/5. RSK acknowledges partial support from the Emmy Noether grant KL 1358/1. VB acknowledges support from NSF grant AST-0708795. DRGS, PCC, RSK and TG acknowledge subsidies from the DFG SFB 439, Galaxien im frühen Universum. DRGS and TG would like to thank the Heidelberg Graduate School of Fundamental Physics (HGSFP) for financial support. The HGSFP is funded by the excellence initiative of the German government (grant number GSC 129/1).
[Abel]{} T., [Wise]{} J. H., [Bryan]{} G. L., 2007, [ApJ]{}, 659, L87

[Ahn]{} K., [Shapiro]{} P. R., 2007, [MNRAS]{}, 375, 881

[Alvarez]{} M. A., [Bromm]{} V., [Shapiro]{} P. R., 2006, [ApJ]{}, 639, 621

[Barkana]{} R., [Loeb]{} A., 2001, [Phys. Rept.]{}, 349, 125

[Beers]{} T. C., [Christlieb]{} N., 2005, [ARA&A]{}, 43, 531

[Bonnell]{} I. A., [Bate]{} M. R., 2006, [MNRAS]{}, 370, 488

[Bonnell]{} I. A., [Clarke]{} C. J., [Bate]{} M. R., [Pringle]{} J. E., 2001, [MNRAS]{}, 324, 573

[Bonnell]{} I. A., [Larson]{} R. B., [Zinnecker]{} H., 2007, in [Reipurth]{} B., [Jewitt]{} D., [Keil]{} K., eds, Protostars and Planets V [The Origin of the Initial Mass Function]{}. pp 149–164

[Bonnell]{} I. A., [Vine]{} S. G., [Bate]{} M. R., 2004, [MNRAS]{}, 349, 735

[Bromm]{} V., [Clarke]{} C. J., 2002, [ApJ]{}, 566, L1

[Bromm]{} V., [Ferrara]{} A., [Coppi]{} P. S., [Larson]{} R. B., 2001a, [MNRAS]{}, 328, 969

[Bromm]{} V., [Kudritzki]{} R. P., [Loeb]{} A., 2001b, [ApJ]{}, 552, 464

[Bromm]{} V., [Larson]{} R. B., 2004, [ARA&A]{}, 42, 79

[Bromm]{} V., [Loeb]{} A., 2003, [Nature]{}, 425, 812

[Bromm]{} V., [Loeb]{} A., 2004, New Astronomy, 9, 353

[Bromm]{} V., [Yoshida]{} N., [Hernquist]{} L., 2003, [ApJ]{}, 596, L135
[Christlieb]{} N., et al., 2002, [Nature]{}, 419, 904

[Ciardi]{} B., [Ferrara]{} A., 2005, Space Science Reviews, 116, 625

[Clark]{} P. C., [Bonnell]{} I. A., 2005, [MNRAS]{}, 361, 2

[Clark]{} P. C., [Glover]{} S. C. O., [Klessen]{} R. S., 2008, [ApJ]{}, 672, 757

[Dijkstra]{} M., [Haiman]{} Z., [Rees]{} M. J., [Weinberg]{} D. H., 2004, [ApJ]{}, 601, 666

[Ferrara]{} A., 1998, [ApJ]{}, 499, L17+

[Frebel]{} A., et al., 2005, [Nature]{}, 434, 871

[Frebel]{} A., [Johnson]{} J. L., [Bromm]{} V., 2007, [MNRAS]{}, 380, L40

[Glover]{} S., 2005, Space Science Reviews, 117, 445

[Glover]{} S. C. O., [Jappsen]{} A.-K., 2007, [ApJ]{}, 666, 1

[Greif]{} T. H., [Bromm]{} V., 2006, [MNRAS]{}, 373, 128

[Greif]{} T. H., [Glover]{} S. C. O., [Bromm]{} V., [Klessen]{} R. S., 2008a, [MNRAS]{}, submitted (arXiv:0808.0843)

[Greif]{} T. H., [Johnson]{} J. L., [Bromm]{} V., [Klessen]{} R. S., 2007, [ApJ]{}, 670, 1

[Greif]{} T. H., [Johnson]{} J. L., [Klessen]{} R. S., [Bromm]{} V., 2008b, [MNRAS]{}, 387, 1021

[Haiman]{} Z., [Abel]{} T., [Rees]{} M. J., 2000, [ApJ]{}, 534, 11
[Heger]{} A., [Fryer]{} C. L., [Woosley]{} S. E., [Langer]{} N., [Hartmann]{} D. H., 2003, [ApJ]{}, 591, 288

[Heger]{} A., [Woosley]{} S. E., 2002, [ApJ]{}, 567, 532

[Hillenbrand]{} L. A., [Hartmann]{} L. W., 1998, [ApJ]{}, 492, 540

[Jappsen]{} A.-K., [Glover]{} S. C. O., [Klessen]{} R. S., [Mac Low]{} M.-M., 2007a, [ApJ]{}, 660, 1332

[Jappsen]{} A.-K., [Klessen]{} R. S., [Glover]{} S. C. O., [Mac Low]{} M.-M., 2007b, [ApJ]{}, submitted (arXiv:0709.3530)

[Jappsen]{} A.-K., [Klessen]{} R. S., [Larson]{} R. B., [Li]{} Y., [Mac Low]{} M.-M., 2005, [A&A]{}, 435, 611

[Johnson]{} J. L., [Bromm]{} V., 2007, [MNRAS]{}, 374, 1557

[Johnson]{} J. L., [Greif]{} T. H., [Bromm]{} V., 2007, [ApJ]{}, 665, 85

[Johnson]{} J. L., [Greif]{} T. H., [Bromm]{} V., 2008, [MNRAS]{}, 388, 26

[Kitayama]{} T., [Yoshida]{} N., 2005, [ApJ]{}, 630, 675

[Kitayama]{} T., [Yoshida]{} N., [Susa]{} H., [Umemura]{} M., 2004, [ApJ]{}, 613, 631

[Komatsu]{} E., et al., 2008, [ApJS]{}, submitted (arXiv:0803.0547)

[Komiya]{} Y., [Suda]{} T., [Minaguchi]{} H., [Shigeyama]{} T., [Aoki]{} W., [Fujimoto]{} M. Y., 2007, [ApJ]{}, 658, 367

[Krumholz]{} M. R., 2006, [ApJ]{}, 641, L45

[Krumholz]{} M. R., [McKee]{} C. F., [Klein]{} R. I., 2005, [Nature]{}, 438, 332
[Lada]{} C. J., [Lada]{} E. A., 2003, [ARA&A]{}, 41, 57

[Lucatello]{} S., [Tsangarides]{} S., [Beers]{} T. C., [Carretta]{} E., [Gratton]{} R. G., [Ryan]{} S. G., 2005, [ApJ]{}, 625, 825

[Machida]{} M. N., [Omukai]{} K., [Matsumoto]{} T., [Inutsuka]{} S.-i., 2006, [ApJ]{}, 647, L1

[Machida]{} M. N., [Tomisaka]{} K., [Nakamura]{} F., [Fujimoto]{} M. Y., 2005, [ApJ]{}, 622, 39

[Mackey]{} J., [Bromm]{} V., [Hernquist]{} L., 2003, [ApJ]{}, 586, 1

[Madau]{} P., [Ferrara]{} A., [Rees]{} M. J., 2001, [ApJ]{}, 555, 92

[Massey]{} P., [Hunter]{} D. A., 1998, [ApJ]{}, 493, 180

[Miralda-Escudé]{} J., 2003, Science, 300, 1904

[Mori]{} M., [Ferrara]{} A., [Madau]{} P., 2002, [ApJ]{}, 571, 40

[Nagakura]{} T., [Omukai]{} K., 2005, [MNRAS]{}, 364, 1378

[Nolta]{} M. R., et al., 2008, [ApJS]{}, submitted (arXiv:0803.0593)

[Norman]{} M. L., [O'Shea]{} B. W., [Paschos]{} P., 2004, [ApJ]{}, 601, L115

[Oh]{} S. P., [Haiman]{} Z., 2002, [ApJ]{}, 569, 558

[Omukai]{} K., [Palla]{} F., 2003, [ApJ]{}, 589, 677

[Omukai]{} K., [Tsuribe]{} T., [Schneider]{} R., [Ferrara]{} A., 2005, [ApJ]{}, 626, 627
[O'Shea]{} B. W., [Norman]{} M. L., 2007, [ApJ]{}, 654, 66

[Ricotti]{} M., [Gnedin]{} N. Y., [Shull]{} J. M., 2001, [ApJ]{}, 560, 580

[Ricotti]{} M., [Ostriker]{} J. P., 2004, [MNRAS]{}, 350, 539

[Ryan]{} S. G., [Aoki]{} W., [Norris]{} J. E., [Beers]{} T. C., 2005, [ApJ]{}, 635, 349

[Salvaterra]{} R., [Ferrara]{} A., [Schneider]{} R., 2004, New Astronomy, 10, 113

[Santoro]{} F., [Shull]{} J. M., 2006, [ApJ]{}, 643, 26

[Scalo]{} J., 1998, ASP Conference Series (arXiv:astro-ph/9712317)

[Schaerer]{} D., 2002, [A&A]{}, 382, 28

[Schleicher]{} D. R. G., [Banerjee]{} R., [Klessen]{} R. S., 2008a, Phys. Rev. D, submitted (arXiv:0807.3802)

[Schleicher]{} D. R. G., [Banerjee]{} R., [Klessen]{} R. S., 2008b, [ApJ]{}, submitted (arXiv:0808.1461)

[Schneider]{} R., [Ferrara]{} A., [Natarajan]{} P., [Omukai]{} K., 2002, [ApJ]{}, 571, 30

[Schneider]{} R., [Omukai]{} K., [Inoue]{} A. K., [Ferrara]{} A., 2006, [MNRAS]{}, 369, 1437

[Shapiro]{} P. R., [Iliev]{} I. T., [Raga]{} A. C., 2004, [MNRAS]{}, 348, 753

[Silk]{} J., [Langer]{} M., 2006, [MNRAS]{}, 371, 444

[Susa]{} H., [Umemura]{} M., 2006, [ApJ]{}, 645, L93
[Tan]{} J. C., [Blackman]{} E. G., 2004, [ApJ]{}, 603, 401

[Tominaga]{} N., [Umeda]{} H., [Nomoto]{} K., 2007, [ApJ]{}, 660, 516

[Tsuribe]{} T., [Omukai]{} K., 2006, [ApJ]{}, 642, L61

[Umeda]{} H., [Nomoto]{} K., 2002, [ApJ]{}, 565, 385

[Whalen]{} D., [Abel]{} T., [Norman]{} M. L., 2004, [ApJ]{}, 610, 14

[Whalen]{} D., [O'Shea]{} B. W., [Smidt]{} J., [Norman]{} M. L., 2008a, [ApJ]{}, 679, 925

[Whalen]{} D., [van Veelen]{} B., [O'Shea]{} B. W., [Norman]{} M. L., 2008b, [ApJ]{}, 682, 49

[Widrow]{} L. M., 2002, Reviews of Modern Physics, 74, 775

[Wise]{} J. H., [Abel]{} T., 2007a, [ApJ]{}, accepted (arXiv:0710.3160)

[Wise]{} J. H., [Abel]{} T., 2007b, [ApJ]{}, 665, 899

[Wyithe]{} J. S. B., [Loeb]{} A., 2003, [ApJ]{}, 588, L69

[Xu]{} H., [O'Shea]{} B. W., [Collins]{} D. C., [Norman]{} M. L., [Li]{} H., [Li]{} S., 2008, [ApJ]{}, submitted (arXiv:0807.2647)

[Yoshida]{} N., [Bromm]{} V., [Hernquist]{} L., 2004, [ApJ]{}, 605, 579

[Yoshida]{} N., [Oh]{} S. P., [Kitayama]{} T., [Hernquist]{} L., 2007, [ApJ]{}, 663, 687

[Yoshida]{} N., [Omukai]{} K., [Hernquist]{} L., [Abel]{} T., 2006, [ApJ]{}, 652, 6
---
author:
- |
Amnon Geifman$^1$ Abhay Yadav$^2$ Yoni Kasten$^1$ Meirav Galun$^1$ David Jacobs$^2$ Ronen Basri$^1$\
\
$^1$Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel\
$^2$Department of Computer Science, University of Maryland, College Park, MD
bibliography:
- 'citations.bib'
title: |
On the Similarity between the Laplace\
and Neural Tangent Kernels
---
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank the U.S.- Israel Binational Science Foundation, grant number 2018680, the National Science Foundation, grant no. IIS-1910132, the Quantifying Ensemble Diversity for Robust Machine Learning (QED for RML) program from DARPA and the Guaranteeing AI Robustness Against Deception (GARD) program from DARPA for their support of this project.
---
abstract: 'Thousands of complex natural language questions are submitted to community question answering websites on a daily basis, rendering them as one of the most important information sources these days. However, oftentimes submitted questions are unclear and cannot be answered without further clarification questions by expert community members. This study is the first to investigate the complex task of classifying a question as clear or unclear, i.e., if it requires further clarification. We construct a novel dataset and propose a classification approach that is based on the notion of similar questions. This approach is compared to state-of-the-art text classification baselines. Our main finding is that the similar questions approach is a viable alternative that can be used as a stepping stone towards the development of supportive user interfaces for question formulation.'
author:
- Jan Trienes
- Krisztian Balog
bibliography:
- 'bibliography.bib'
title: Identifying Unclear Questions in Community Question Answering Websites
---
### Acknowledgments.
We would like to thank Dolf Trieschnigg and Djoerd Hiemstra for their insightful comments on this paper. This work was partially funded by the University of Twente Tech4People Datagrant project.
---
abstract: 'We describe a new method that is both physically explicable and quantitatively accurate in describing the multifractal characteristics of intermittent events based on groupings of rank-ordered fluctuations. The generic nature of such a rank-ordered spectrum leads to a natural connection with the concept of one-parameter scaling for monofractals. We demonstrate this technique using results obtained from a 2D MHD simulation. The calculated spectrum suggests a crossover from the near-Gaussian characteristics of small amplitude fluctuations to the extreme intermittent state of large rare events.'
author:
- Tom Chang
- 'Cheng-chin Wu'
title: 'Rank-ordered Multifractal Spectrum for Intermittent Fluctuations'
---
Intermittent fluctuations are popularly analyzed using the structure function and/or partition function methods. These methods investigate the multifractal characteristics of intermittency based on the statistics of the full set of fluctuations. Since most of the observed or simulated intermittent fluctuations are dominated by fluctuations with small amplitudes, the subdominant fractal characteristics of the minority fluctuations – generally of larger amplitudes – are easily masked by those characterized by the dominant population. It therefore appears prudent to search for a procedure that explores the singular nature of the subdominant fluctuations by first appropriately isolating out the minority populations and then performing statistical investigations for each of the isolated populations. Using an example, we demonstrate how this idea may be achieved with a rank-order method that subdivides the fluctuations into groupings based on the range of the scaled sizes of the fluctuations.
To be more specific, we consider a generic fluctuating temporal event $X(t)$, from which we form a scale-dependent difference series $\delta X(t,\tau)=X(t+\tau)-X(t)$ for a time lag $\tau$. We now consider the probability distribution functions (PDFs) $P(\delta X,\tau)$ of $\delta X(t,\tau)$ for different time lag values $\tau$. If the phenomenon represented by the fluctuating event $X(t)$ is monofractal, i.e., self-similar, the PDFs would scale (collapse) onto one scaling function $P_s$ as follows: $$P(\delta X,\tau)\tau^s=P_s(\delta X/\tau^s)$$ where $s$ is the scaling exponent. Such one-parameter scaling has been suggested for the stock market indices \[1\], magnetic fluctuations \[2, 3\] and fluctuating events of other natural or experimental systems \[4\]. If the PDFs are Gaussian distributions for all time lags, similar to those characterizing self-similar random diffusion, the scaling exponent $s$ is equal to 0.5. For other monofractal distributions, the scaling exponent may take on any real value.
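In practice, the collapse test implied by (1) can be carried out directly on binned increments. The following minimal sketch (Python/NumPy; the trial exponent, lags and the Brownian surrogate series are illustrative choices, not data from this study) rescales the empirical PDFs for several lags so that their overlap can be inspected:

```python
import numpy as np

def scaled_pdfs(x, lags, s, bins=60):
    """Rescaled PDFs P(dX, tau) * tau**s plotted against Y = dX / tau**s."""
    curves = {}
    for tau in lags:
        dx = x[tau:] - x[:-tau]                          # increments at lag tau
        hist, edges = np.histogram(dx, bins=bins, density=True)
        centers = 0.5 * (edges[1:] + edges[:-1])
        curves[tau] = (centers / tau**s, hist * tau**s)  # (Y, P_s(Y))
    return curves

# sanity check with ordinary Brownian motion, for which s = 0.5 collapses the PDFs
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200_000))
curves = scaled_pdfs(x, lags=[8, 16, 64], s=0.5)
# for a monofractal signal the rescaled curves coincide within statistical noise
```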
In practical applications, expression (1) is sometimes approximately satisfied for the full range of the scaling variable $Y\equiv\delta X/\tau^s$ and sometimes only for a portion of the range of $Y$. If the scaling exponent $s$ is obtained by estimating the values of the PDFs at $\delta X=0$, the PDFs would generally scale at least for a range of $Y$ close to the origin. When scaling of the PDFs based on (1) is not fully satisfied or only approximately satisfied, the fluctuating phenomenon represented by $X(t)$ is multifractal. One conventional method of evaluating the degree of multifractal (intermittent) nature of $X(t)$ is to study the scaling behavior of the moments of the PDFs (conventionally called the structure functions). $$S_m(\tau)=\left<\left|\delta X(\tau)\right|^m\right>=\int_0^{\delta X_{\rm max}}
\left|\delta X(\tau)\right|^m P(\delta X,\tau) {\rm d}\delta X$$ where $<...>$ represents the ensemble average and $\delta X_{\rm max}$ is the largest value of $\delta X$ obtainable from the time series $X(t)$ for the time lag $\tau$. The choice of taking the ensemble average of the absolute values of the coarse-grained differences instead of the values of the raw differences is for the purpose of better statistical convergence \[5, 6\]. One then proceeds to search for the scaling behavior $S_m(\tau)\sim\tau^{\zeta_m}$. If such scaling is verified for a monofractal fluctuating event $X(t)$, the structure function exponents would vary linearly with the moment order as $\zeta_m=\zeta_1 m$ where $\zeta_1=s$. If the structure function exponents deviate from the above linear relationship, the fluctuating event is multifractal. There are several disadvantages of this approach. Firstly, the statistical analysis as prescribed above incorporates the full set of fluctuations represented by $X(t)$. As with most observed PDFs, the statistics are generally dominated by those fluctuations with small amplitudes. Thus, the fractal (multifractal) nature of the subdominant (larger amplitude) fluctuations is usually masked by the fractal nature of the dominant (smaller amplitude) fluctuations. Secondly, although deviations of the structure function exponents from the linear form would indicate that the fluctuating event $X(t)$ is multifractal, the physical interpretation of the multifractal nature is not easily deciphered by merely examining the curvature of the deviation from linearity. Thirdly, the structure functions are usually ill-defined for negative values of m. We therefore search for a procedure that would remedy the above defects as shown below.
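For concreteness, the conventional structure-function analysis just described can be sketched as follows (Python; the lags, moment orders and the Brownian test series are illustrative). It estimates $S_m(\tau)$ and the exponents $\zeta_m$ from a log-log fit, so that a departure of $\zeta_m$ from a straight line in $m$ signals multifractality:

```python
import numpy as np

def structure_functions(x, lags, orders):
    """S_m(tau) = <|X(t+tau) - X(t)|**m> for each requested lag and moment order."""
    S = np.empty((len(orders), len(lags)))
    for j, tau in enumerate(lags):
        adx = np.abs(x[tau:] - x[:-tau])
        for i, m in enumerate(orders):
            S[i, j] = np.mean(adx**m)
    return S

def zeta_exponents(x, lags, orders):
    """Fit S_m(tau) ~ tau**zeta_m and return one exponent per moment order."""
    S = structure_functions(x, lags, orders)
    logt = np.log(lags)
    return np.array([np.polyfit(logt, np.log(S[i]), 1)[0] for i in range(len(orders))])

lags, orders = [4, 8, 16, 32, 64], [1, 2, 3, 4]
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=200_000))
zeta = zeta_exponents(x, lags, orders)   # close to 0.5*m for this monofractal test case
```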
From the above argument, it appears prudent to perform statistical analyses individually for subsets of the fluctuations that characterize the various fractal behaviors within the full multifractal set. We recognize that such grouping of fluctuations must depend somehow on the sizes of the fluctuations. However, we also realize that groupings cannot depend merely on the raw values of the sizes of the fluctuations because the ranges will be different for different scales (time lags $\tau$). Thus, we proceed to rank-order the sizes of the fluctuations based on the ranges of the scaled variable $Y$, defined above. For each chosen range of $\Delta Y$ we shall assume that the fluctuations of all time lags will exhibit monofractal behavior and be characterized by a scaling exponent $s$. The question is then how this procedure can be accomplished. We continue by constructing the structure functions for the chosen grouping of the fluctuations by performing moment – structure function – calculations as prescribed by (2) with the limits of the integral replaced by the end points of the range of the chosen $\Delta Y$ for each time lag $\tau$. We then search for the scaling property of the structure functions whose exponent varies as $sm$. If such a scaling property exists, then we have found one region of the multifractal spectrum of the fluctuations such that the PDFs in the range of $\Delta Y$ collapse onto one scaled PDF. Continuing this procedure for all ranges of $\Delta Y$ produces the rank-ordered multifractal spectrum $s(Y)$ that we are looking for. The determined values of $s$ for each grouping should be unaffected by the statistics of other subsets of fluctuations that are not within the chosen range $\Delta Y$ and therefore should be quantitatively quite accurate. The physical meaning of this spectrum is that the PDFs for all time lags collapse onto one master multifractal scaled PDF. The spectrum is implicit since $Y$ is defined as a function of $s$.
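A schematic implementation of this rank-ordered procedure could look as follows (Python; the bin edges $(Y_1,Y_2)$, the lags and the scan grid in $s$ are arbitrary illustrative choices). For a chosen range $\Delta Y$ it scans trial exponents and retains the value of $s$ for which the range-limited structure functions scale as $\tau^{sm}$, assuming every $(\tau,s)$ bin contains enough counts:

```python
import numpy as np

def ranked_S(x, tau, m, s, Y1, Y2):
    """Range-limited structure function: contribution to <|dX|**m> at lag tau
    from fluctuations with Y1 <= |dX|/tau**s < Y2."""
    adx = np.abs(x[tau:] - x[:-tau])
    mask = (adx >= Y1 * tau**s) & (adx < Y2 * tau**s)
    return (adx[mask]**m).sum() / adx.size

def rank_ordered_exponent(x, lags, Y1, Y2, orders=(1, 2, 3),
                          s_grid=np.linspace(0.0, 0.6, 61)):
    """Trial exponent s for which the range-limited S_m(tau) scales most nearly as tau**(s*m)."""
    logt = np.log(lags)
    best_s, best_err = None, np.inf
    for s in s_grid:
        err = 0.0
        for m in orders:
            logS = np.log([ranked_S(x, tau, m, s, Y1, Y2) for tau in lags])
            slope = np.polyfit(logt, logS, 1)[0]
            err += (slope - s * m)**2
        if err < best_err:
            best_s, best_err = s, err
    return best_s

# e.g. s_hat = rank_ordered_exponent(x, lags=[8, 16, 32, 64], Y1=40, Y2=50)
```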
We demonstrate the above outlined procedure via an example. The example is based on the results of a large-scale 2D magnetohydrodynamic (MHD) simulation. In the simulation, ideal compressible MHD equations expressed in conservative forms are solved numerically with 1024 x 1024 grid points in a doubly periodic $(x,z)$ domain of length $2\pi$ in both directions using the WENO code \[7\] so that the total mass, energy, magnetic fluxes and momenta are conserved. The initial condition consists of random magnetic field and velocity with a constant total pressure for a high beta plasma. After sufficient elapsed time, the system evolves into a set of randomly interacting multiscale coherent structures exhibiting classical aspects of intermittent fluctuations. A more detailed description of the simulation was reported in one of our previous publications \[3\]. Spatial values (approximately one million data points) of the square of the strength of the magnetic field $B^2$ are collected over the entire $(x,z)$-plane at a given time and PDFs $P(|\delta B^2|, \delta)$ are constructed for the absolute values of the spatial fluctuations $|\delta B^2|$ at scale intervals $\delta$=8, 16, and 64 grid points, Fig. 1. Thus, in lieu of temporal fluctuations, the example considers spatial fluctuations. The PDFs are non-Gaussian and become more and more heavy-tailed at smaller and smaller scales.
An attempt to collapse the unscaled PDFs according to the monofractal scaling formula that is analogous to (1) indicates approximate scaling with an estimated scaling exponent $s=0.335$, Fig. 2. Structure function calculations based on the full set of simulated fluctuations showed a nonlinear relation between the exponents and the moment order \[8\]. Because for this example the PDFs exhibited approximate monofractal scaling, such structure function calculations – though indicating multifractality – are strongly masked by the population of the smaller fluctuations. Thus, this example represents an ideal candidate to test the utility of the new method described in this paper.
We now proceed to construct the rank-ordered multifractal spectrum based on the afore-mentioned procedure. Thus we sort the fluctuations into ranges of $\Delta Y$ between ($Y_1$, $Y_2$) with $Y=|\delta B^2|/\delta^s$ and evaluate the rank-ordered structure functions within each range: $$S_m(|\delta B^2|,\delta)=\int_{a_1}^{a_2}|\delta B^2|^m P(|\delta B^2|,\delta) {\rm d}(|\delta B^2|)$$ where $a_1=Y_1\delta^s$ and $a_2=Y_2\delta^s$. Expression (3) is a nonlinear function of $s$ for each moment order $m$. We now search for the value(s) of $s$ such that $S_m\sim \delta^{sm}$ within each range of the fluctuations so that the rank-ordered fluctuations would exhibit monofractal behavior. Interestingly there exists one and only one value of $s$ in each range of $\Delta Y$, that satisfies the above constraint, indicating the appropriateness of the ansatz. Unlike the structure functions defined for the full range of fluctuations, the range-limited structure functions based on (3) exists also for negative real values of $m$. Figure 3 displays the calculated rank-ordered spectrum $s(Y)$ based on eight contiguous ranges of $\Delta Y$. It is noted that the spectrum has values of $s$ ranging between 0.5 and 0.0. The spectrum can be refined by choosing more range intervals with smaller range sizes of $\Delta Y$, although in practice this procedure is limited by the availability of simulated data points. At $Y=0$, the scaling exponent appears to approach the self-similar Gaussian value of 0.5. As the value of the scaled fluctuation size $Y$ increases the scaling exponent decreases accordingly indicating the fluctuations are becoming more and more intermittent. At the extremely intermittent state, the value of the scaling exponent would asymptotically approach the value of zero. This would occur at the limit of largest and rarest scaled fluctuations.
For each range of $\Delta Y$, the PDFs would collapse according to its correspondingly calculated exponent value $s$. For example, for the range of $Y$ between (40, 50), the PDFs should collapse for the calculated value $s \approx 0.2$. This is essentially verified as shown in Fig. 4. Figure 5 shows the results of the rank-ordered structure functions for the same range of $\Delta Y$. It also indicates scaling exponent $s =0.2$.
Such an implicit multifractal spectrum has several advantages over the results obtainable using the conventional structure function and/or partition function calculations. Firstly, the utility of the spectrum is to fully collapse the unscaled PDFs. Secondly, the physical interpretation is clear. It indicates how intermittent (in terms of the value of $s$) are the scaled fluctuations once the value of $Y$ is given. Thirdly, the determination of the values of the fractal nature of the grouped fluctuations is not affected by the statistics of other fluctuations that do not exhibit the same fractal characteristics. Fourthly, it provides a natural connection between the one-parameter scaling idea (1) and the multifractal behavior of intermittency.
To summarize, we have introduced a new rank-ordered procedure based on the sizes of the scaled fluctuations to view the multifractal nature of intermittent fluctuations. The suggested implicit multifractal spectrum analysis provides a physically meaningful description of intermittency and is quantitatively accurate because of the cleanliness of the procedure of statistical sampling. The method can easily be generalized to situations of higher dimensions as well as correlation and response functions of several independent variables involving intermittency of spatiotemporal fluctuations.
This research is partially supported by the NSF and AFOSR.
\[1\] R.N. Mantegna and H.E. Stanley, Nature, 376, 46 (1995).
\[2\] B. Hnat, S.C. Chapman, G. Rowlands, N.W. Watkins, W.M. Farrell, Geophys. Res. Lett. 29, 1446, 10.1029/2001GL014587 (2002).
\[3\] T. Chang, S.W.Y. Tam and C.C. Wu, Phys. Plasmas, 11, 1287 (2004).
\[4\] D. Sornette, [*Phenomena in Natural Science; Chaos, Fractals, Selforganization and Disorder: Concepts and Tools*]{} (Springer-Verlag, Berlin, 2000).
\[5\] D. Biskamp, [*Magnetohydrodynamic Turbulence*]{} (Cambridge University Press, Cambridge, 2003).
\[6\] S. Vahnstein, K. Sreenivasan, R. Pierrehumbert, V. Kashyap, A. Juneja, Phys. Rev. E, 50, 1823 (1994).
\[7\] G.S. Jiang and C.C. Wu, J. Comp. Phys., 150, 594 (1999).
\[8\] T. Chang and C.C. Wu, in [*Turbulence and Nonlinear Processes in Astrophysical Plasmas*]{}, edited by D. Shaikh and G.P. Zank (AIP Conf. Proc., 2007) Vol. 932, p. 161.
---
abstract: 'Quantum spaces with $\frak{su}(2)$ noncommutativity can be modelled by using a family of $SO(3)$-equivariant differential $^*$-representations. The quantization maps are determined from the combination of the Wigner theorem for $SU(2)$ with the polar decomposition of the quantized plane waves. A tracial star-product, equivalent to the Kontsevich product for the Poisson manifold dual to $\mathfrak{su}(2)$ is obtained from a subfamily of differential $^*$-representations. Noncommutative (scalar) field theories free from UV/IR mixing and whose commutative limit coincides with the usual $\phi^4$ theory on $\mathbb{R}^3$ are presented. A generalization of the construction to semi-simple possibly non simply connected Lie groups based on their central extensions by suitable abelian Lie groups is discussed.'
author:
- 'Timothé Poulain, Jean-Christophe Wallet'
title: |
Quantum spaces, central extensions of Lie groups\
and related quantum field theories
---
*Laboratoire de Physique Théorique, Bât. 210\
CNRS and Université Paris-Sud 11, 91405 Orsay Cedex, France*\
[`timothe.poulain@th.u-psud.fr`](mailto:timothe.poulain@th.u-psud.fr), [`jean-christophe.wallet@th.u-psud.fr`](mailto:jean-christophe.wallet@th.u-psud.fr)\
[*[Based on a talk presented by one of us (T. Poulain) at the XXVth International Conference on Integrable Systems and Quantum symmetries (ISQS-25), Prague, June 6-10 2017.]{}*]{}\
Introduction
============
In recent years, deformations of $\mathbb{R}^3$ for which the algebra of coordinates forms a $\frak{su}(2)$ Lie algebra have received some interest. This was related either to developments in $3$-d gravity, in particular viewing $\mathbb{R}^3$ as the dual algebra of the relativity group [@ritals1; @ritals2], or to constructions of analytic formulas for the star-products [@fedele-vitale] defining these deformations as well as investigations of their structural properties [@lagraa; @selene; @KV-15; @q-grav] (see also [@galuccio; @fedele]). Field theories built on these noncommutative (quantum) spaces have been shown to have a perturbative quantum behaviour different from that of field theories built on Moyal spaces, at least regarding renormalizability as well as UV/IR mixing [@vitwal; @vit-kust; @wal-16]. For earlier works on noncommutative field theories (NCFT) on Moyal spaces, see e.g. [@Grosse:2003aj-pc; @Blaschke:2009c] and references therein.\
The purpose of this paper is to select salient features developed in our recent works [@jpw-16; @tp-jcw17] on quantum spaces with $\frak{su}(2)$ noncommutativity, hereafter denoted generically by $\mathbb{R}^3_\theta$ (see below). It appears that these latter can be modelled conveniently by exploiting a family of $SO(3)$-equivariant differential $^*$-representations as we will show in a while. It turns out that the use of differential representations [@polydiff-alg; @VK; @KV-15] may prove useful in the construction of star-products whenever the noncommutativity is of a Lie algebra type such as the case considered here. Consistency of the construction definitely requires that one works with $^*$-representations. Note that a similar construction can also be applied to the kappa-Minkowski spaces which are related to (the universal enveloping algebra of) a solvable Lie algebra. For constructions of star-products within kappa-Minkowski spaces, see e.g [@cosmogol4].\
The characterization of the related quantization maps defining the quantum spaces $\mathbb{R}^3_\theta$ can be achieved from a natural combination of the polar decomposition of the quantized plane waves with the Wigner theorem for $SU(2)$. Recall that the quantized plane waves are defined by the action of the quantization map on the usual plane waves. From this follows the characterization of the star-products. The use of star-product formulation of NCFT is convenient for fast construction of functional actions. However, it may lead to difficulties whenever the star-product is represented by a complicated formula and/or is not closed for a trace functional. In this respect, we then construct a tracial star-product with respect to the usual Lebesgue measure on $\mathbb{R}^3$, equivalent to the Kontsevich product [@deform] for the Poisson manifold dual to $\mathfrak{su}(2)$, thanks to a suitable use of the Harish-Chandra map [@harish; @duflo]. We then present noncommutative (scalar) field theories which are free from UV/IR mixing and whose commutative limit is the usual $\phi^4$ theory on $\mathbb{R}^3$. We discuss the generalization of the construction to semi-simple, possibly non simply connected, Lie groups based on their central extensions by suitable abelian Lie groups. Considering central extensions of Lie groups is the natural framework to deal with this problem, while the classification of the extensions needs to consider different types of cohomology including suitable group cohomologies. For some details on various relevant cohomologies in physics, see e.g [@brown; @stor-wal; @hoch-shap].
[**[Notations]{}**]{}: We will denote generically the involutions by the symbol $^\dag$. The actual nature of each involution should be clear from the context. $\mathcal{S}(\mathbb{R}^3)$ and $\mathcal{M}(\mathbb{R}^3)$ are respectively the algebra of Schwartz functions on $\mathbb{R}^3$ and its multiplier algebra. $\langle\cdot,\cdot\rangle$ is the hermitian product, i.e., $\langle f,g \rangle:=\int d^3x\ {\bar{f}}(x) g(x)$ for any $f,g\in\mathcal{M}(\mathbb{R}^3)$ (${\bar{f}}(x)$ is the complex conjugate of $f(x)$). $\tilde{f}(p)=\int d^3xf(x)e^{-ipx}$ is the Fourier transform of $f$. $\mathcal{L}(\mathcal{M}(\mathbb{R}^3))$ denotes the set of linear operators acting on $\mathcal{M}(\mathbb{R}^3)$.
We will deal with a family of deformations of $\mathbb{R}^3$, indexed by 3 functionals $f,\ g,\ \ell$ depending on the Laplacian of $\mathbb{R}^3$, $\Delta$, and a positive real parameter $\theta$ (the so-called deformation parameter), i.e. $\mathbb{R}^3_{\theta,f,g,\ell}:=(\mathcal{F}(\mathbb{R}^3),\star_{\theta,f,g,\ell})$ ($\mathcal{F}(\mathbb{R}^3)$ is a suitable linear space of functions to be characterized below) where $\star_{\theta,f,g,\ell}$ is the deformed product. To simplify the notations, any element of this family will be denoted by $\mathbb{R}^3_{\theta}(=(\mathbb{R}^3,\star))$; the actual nature of the objects indexing the family should be clear from the context.\
$\mathfrak{su}(2)$-noncommutativity and differential $^*$-representations. {#section2}
==========================================================================
It is convenient to represent the abstract $^*$-algebra $\mathbb{A}[\hat{X}_\mu]$ generated by the self-adjoint operator coordinates $\hat{X}_\mu$ fulfilling $[\hat{X}_\mu,\hat{X}_\nu]=i2\theta\varepsilon_{\mu \nu}^{\hspace{11pt} \rho}\hat{X}_\rho$ ($\mu,\nu,\rho=1,2,3$) by making use of the poly-differential $^*$-representation $\pi:\mathbb{A}[\hat{X}_\mu]\to\mathcal{L}(\mathcal{M}(\mathbb{R}^3))$, $$\pi:\hat{X}_\mu\mapsto\pi(\hat{X}_\mu){= \hspace{-0.5pt} :}\hat{x}_\mu(x,\partial)=x_\nu\varphi^\nu_{\hspace{3pt}\mu}(\partial)+\chi_\mu(\partial), \label{defxhat}$$ where the functionals $\varphi^\nu_{\hspace{3pt}\mu}(\partial)$ and $\chi_\mu(\partial)$ are viewed as formal expansions in the usual derivatives of $\mathbb{R}^3$, $\partial_\mu$, $\mu=1,2,3$.\
Since by assumption $\pi$ is a morphism of $^*$-algebra, one has $[\hat{x}_\mu,\hat{x}_\nu]=i2\theta\varepsilon_{\mu \nu}^{\hspace{11pt} \rho}\hat{x}_\rho$ together with $\langle f,\hat{x}_\mu g\rangle=\langle \hat{x}_\mu f,g \rangle$ for any $f,g\in\mathcal{M}(\mathbb{R}^3)$, stemming from the self-adjointness of $\hat{x}_\mu$ so that $\hat{x}_\mu^\dag=\hat{x}_\mu$. By combining these two latter conditions with and using $[x_\lambda , h(x,\partial)] = - \frac{\partial h}{\partial (\partial^\lambda)}$, which holds true for any functional $h$ of $x_\mu$ and $\partial^\mu$ together with $\partial^\dag_\mu=-\partial_\mu,\ h^\dag(\partial)={\bar{h}}(-\partial)$, a standard computation gives rise to the following functional differential equations constraining $\varphi^\nu_{\hspace{3pt}\mu}$ and $\chi_\mu$: $$\begin{aligned}
i 2\theta \varphi_{\alpha \rho} &= \varepsilon_{\rho}^{\hspace{4pt} \mu \nu} \frac{\partial \varphi_{\alpha \mu}}{\partial (\partial_\beta)} \varphi_{\beta \nu},\label{master1}\\
\varphi^\dagger_{\alpha\rho} &= \varphi_{\alpha \rho} \label{master2}\\
i 2\theta \chi_\rho &= \varepsilon_{\rho}^{\hspace{4pt} \mu \nu} \frac{\partial \chi_\mu}{\partial (\partial_\alpha)} \varphi_{\alpha \nu}, \label{master3} \\
\frac{\partial \varphi^\dagger_{\alpha \rho}}{\partial (\partial_\alpha)} &= \chi_\rho - \chi_\rho^\dagger, \label{master4}\end{aligned}$$ where use has been made of the algebraic relation $\delta_{\mu \gamma} \delta_\nu^{\hspace{4pt} \sigma} - \delta_\mu^{\hspace{4pt} \sigma} \delta_{\nu \gamma} = \varepsilon_{\mu \nu}^{\hspace{11pt} \rho} \varepsilon_{\rho \gamma}^{\hspace{11pt} \sigma}$.\
In view of $\mathbb{R}^3_\theta\subsetneq U(\mathfrak{su}(2))\cong \mathbb{A}[\hat{X}_\mu]/[\hat{X}_\mu,\hat{X}_\nu]$, see e.g [@wal-16; @jpw-16], where $U(\mathfrak{su}(2))$ is the universal enveloping algebra of $\mathfrak{su}(2)$, there is a natural action of $SU(2)/\mathbb{Z}_2\simeq SO(3)$ on any $\mathbb{R}^3_\theta$. This selects $SO(3)$-equivariant $^*$-representations among those defined by the constraint equations above. A mere application of the Schur-Weyl decomposition theorem shows that the $SO(3)$-equivariance of the representation can be achieved whenever the functionals $\varphi^\mu_\nu$ and $\chi_\mu$ have the following form: $$\varphi_{\alpha \mu} (\partial) = f(\Delta) \delta_{\alpha \mu} + g(\Delta) \partial_\alpha \partial_\mu + i h(\Delta) \varepsilon_{\alpha\mu}^{\hspace{11pt} \rho} \partial_\rho,\label{polynomial_phi}$$ $$\chi_\mu(\partial) = \ell (\Delta) \partial_\mu \ \label{polynomial_chi},$$ where the [*[real]{}*]{} $f(\Delta)$, $g(\Delta)$, $h(\Delta)$ and [*[complex]{}*]{} $\ell(\Delta)$ $SO(3)$-invariant functionals are constrained by those same equations.
By solving the constraints, one easily finds that the admissible solutions (i.e. those $\hat{x}_\mu$ admitting an expansion of the form $\hat{x}_\mu=x_\mu+\mathcal{O}(\theta)$) are such that $h=\theta$ and form a family indexed by three functionals $f$, $g$ and $\ell$ defined by $$\label{general_rep}
\hat{x}_\mu = x^\alpha \left[ f(\Delta) \delta_{\alpha \mu} + g(\Delta) \partial_\alpha \partial_\mu + i\theta \varepsilon_{\alpha \mu}^{\hspace{11pt} \rho} \partial_\rho \right] + \ell(\Delta) \partial_\mu,$$ $$\begin{aligned}
2\left[(f+g\Delta)' + g \right] &= \ell + \ell^\dagger \ , \label{1st_condition} \\
2(f+g\Delta)f' &= gf + \theta^2\label{2nd_condition}.\end{aligned}$$ An interesting subfamily of poly-differential $^*$-representations arises whenever $f+g\Delta=1$, so that, setting $g(\Delta){: \hspace{-0.5pt} =}\frac{\theta^2}{3} G(2\theta^2 \Delta)$, the above constraints reduce to $$\begin{aligned}
\ell+\ell^\dag&=&2g(\Delta) \ ,\label{kv-chi}\\
0&=&2t \frac{dG}{dt} + 3\left(G(t)+1 \right) - \frac{t}{6} G^2(t) \ ,\label{equadiff-reduced}\end{aligned}$$ for which the Riccati equation is solved by $G(t) = -6\sum_{n=1}^\infty \frac{2^n B_{2n}}{(2n)!} t^{n-1}$ where $B_n$ are Bernoulli numbers. The resulting subfamily[^1] takes the form $$\hat{x}_\mu=x^\alpha\left[(1-g(\Delta)\Delta)\delta_{\alpha\mu}+g(\Delta)\partial_\alpha\partial_\mu+
i\theta\varepsilon_{\alpha\mu}^{\hspace{11pt}\rho}\partial_\rho
\right]+g(\Delta)\partial_\mu . \label{kv-star}$$ We will derive from this subfamily a tracial star-product equivalent to the Kontsevich product for the Poisson manifold dual to the Lie algebra $\mathfrak{su}(2)$.\
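As a quick consistency check, one may verify symbolically that the Bernoulli-number series quoted for $G(t)$ solves the Riccati equation order by order; a small sketch (Python/sympy, with an arbitrary truncation order) reads:

```python
import sympy as sp

t = sp.symbols('t')
N = 8   # truncation order, illustrative

# G(t) = -6 * sum_{n>=1} 2**n B_{2n} / (2n)! * t**(n-1)
G = -6 * sum(2**n * sp.bernoulli(2*n) / sp.factorial(2*n) * t**(n - 1)
             for n in range(1, N + 1))

# left-hand side of the Riccati equation: 2t G' + 3(G+1) - (t/6) G^2
lhs = sp.expand(2*t*sp.diff(G, t) + 3*(G + 1) - t*G**2/6)

coeffs = sp.Poly(lhs, t).all_coeffs()[::-1]   # coefficients in ascending powers of t
assert all(c == 0 for c in coeffs[:N])        # vanishes through order t**(N-1)
```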
Quantization maps, deformations of $\mathbb{R}^3$ and extensions.
=================================================================
Let $Q$ denote the quantization map, i.e. an invertible $^*$-algebra morphism $$Q: (\mathcal{M}(\mathbb{R}^3),\star) \to (\mathcal{L}(\mathcal{M}(\mathbb{R}^3)),\cdot)\label{quant-map},$$ where “.” is the product between differential operators omitted from now on, such that for any $f,\ g\in\mathcal{M}(\mathbb{R}^3)$, $$f\star g {: \hspace{-0.5pt} =}Q^{-1}\left(Q(f)Q(g)\right),\ Q(1)={{ \mathbb{I}}},\ Q(\bar{f})=\left(Q(f)\right)^\dag,\label{alg-morph}$$ $$Q(f)\rhd1=f(x) \ ,\label{Q-unitaction}$$ so that $Q^{-1}\left(Q(f)\right)=Q(f) \rhd 1$, where “$\rhd$” is the left action of operators. Therefore the star-product can be expressed as $$(f\star g)(x)=\int \frac{d^3p}{(2\pi)^3}\frac{d^3q}{(2\pi)^3}\tilde{f}(p)\tilde{g}(q)Q^{-1}\left(E_p(\hat{x})E_q(\hat{x}) \right) \ , \label{star-definition}$$ for any $f,g\in\mathcal{M}(\mathbb{R}^3)$, where the quantized plane waves $E_p(\hat{x})$ are defined by $$E_p(\hat{x}) {: \hspace{-0.5pt} =}Q(e^{ipx}).\label{nc-planew}$$ The quantized plane waves define a map $E:SU(2)\to\mathcal{L}(\mathcal{M}(\mathbb{R}^3))$, $$E:g\mapsto E(g):=E_p(\hat{x}), \label{themap}$$ $$E(g^\dag)=E^\dag(g) \ , \label{themapdag}$$ for any $g\in SU(2)$. Using polar decomposition, one can write $$E(g)=U(g)|E(g)| \ , \label{polar-dec}$$ where $|E(g)|:=\sqrt{E^\dag(g)E(g)} \neq 0$ and the unitary operator $U:SU(2)\to\mathcal{L}(\mathcal{M}(\mathbb{R}^3))$ can be expressed as $$U(g)=e^{i\xi_g^\mu\hat{x}_\mu},$$ in view of Stone’s theorem, where $\xi_g^\mu\in\mathbb{R}$. Then, the Baker-Campbell-Hausdorff formula for $\mathfrak{su}(2)$ gives $$e^{i\xi_{g_1}\hat{x}}e^{i\xi_{g_2}\hat{x}}=e^{iB(\xi_{g_1},\xi_{g_2})\hat{x}} \ ,\label{expans-bak}$$ where the infinite expansion $B(\xi_{g_1},\xi_{g_2})$ fulfills $$B(\xi_{g_1},\xi_{g_2})=-B(-\xi_{g_2},-\xi_{g_1}) , \ B(\xi_g,0)=\xi_g \ .$$ Observe that $U(g)$ and $E(g)$ define representations of $SU(2)$. Then, one has for any $g_1,g_2\in SU(2)$ $$U(g_1)U(g_2)=U(g_1g_2) \ , \label{projectif-su2}$$ which holds true (up to unitary equivalence) as a mere application of the Wigner theorem to $SU(2)$, while we demand $$E(g_1)E(g_2)=\Omega(g_1,g_2)E(g_1g_2)\label{projectif-caracter}$$ where $\Omega(g_1,g_2)$ is to be determined. Combining $E(g^\dag g)=E({{ \mathbb{I}}})={{ \mathbb{I}}}$ with the multiplication rule just postulated, one obtains $E(g^\dag)E(g)=\Omega(g^\dag,g){{ \mathbb{I}}}$, for any $g\in SU(2)$. Therefore $\vert E(g)\vert=\sqrt{\Omega(g^\dag,g)}{{ \mathbb{I}}}$, so that $$\omega_g:=\sqrt{\Omega(g^\dag,g)} \in \mathbb{R},\ \omega_g>0,\label{positiv-omega}$$ together with $$[|E(g)|,U(g)]=0. \label{cond-central}$$ Using the polar decomposition together with the relations above, one gets $$E(g_1)E(g_2)=|E(g_1)||E(g_2)|U(g_1g_2)=|E(g_1)||E(g_2)||E(g_1g_2)|^{-1}E(g_1g_2) \ , \label{intermediaire}$$ where the 2nd equality stems from the polar decomposition, which combined with the expression for $|E(g)|$ yields $$E(g_1)E(g_2)=(\omega_{g_1}\omega_{g_2}\omega^{-1}_{g_1g_2})E(g_1g_2) \ , \label{planew-multiplic}$$ where $$E(g_1g_2)=\omega_{g_1g_2}e^{iB(\xi_{g_1},\xi_{g_2})\hat{x}}.$$ Note that this multiplication rule ensures the associativity of the star-product, since the 2-cocycle $\Omega(g_1,g_2):=\omega_{g_1}\omega_{g_2}\omega^{-1}_{g_1g_2}$ obeys $\Omega(g_1,g_2)\Omega(g_1g_2,g_3)=\Omega(g_1,g_2g_3) \Omega(g_2,g_3)$, for any $g_1,g_2,g_3\in SU(2)$. From this relation, one infers that unitarily equivalent representations $U$ and $U^\prime$ correspond to unitarily equivalent products. 
Indeed, one has $U^\prime(g)=e^{i\gamma(g)}U(g)=e^{i\gamma(g)}e^{i\xi_g\hat{x}} $, where $\gamma$ is a real function, which implies the following equivalence relation $T(f\star^\prime g)=Tf\star Tg$ where $T$ is defined by $E^\prime_k(\hat{x})\equiv Q^\prime(e^{ikx}):=Q\circ T(e^{ikx})=e^{i\gamma(k)}Q(e^{ikx})=e^{i\gamma(k)}E_k(\hat{x}).$ Informally speaking, the star-product which will be determined in a while is essentially unique up to unitary equivalence.\
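Since the $\hat{x}_\mu$ obey the same commutation relations as $\theta\sigma_\mu$ ($\sigma_\mu$ the Pauli matrices), the $\mathfrak{su}(2)$ BCH function $B(\xi_{g_1},\xi_{g_2})$ can be evaluated numerically in the defining $2\times 2$ representation. The sketch below (Python with NumPy/SciPy; $\theta$ and the sample momenta are arbitrary illustrative values, kept small enough to avoid branch ambiguities of the matrix logarithm) checks the two properties of $B$ quoted above:

```python
import numpy as np
from scipy.linalg import expm, logm

theta = 0.3                      # deformation parameter, illustrative
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def U(xi):
    """Group element exp(i xi.xhat), modelled by exp(i theta xi.sigma)."""
    return expm(1j * theta * sum(x * s for x, s in zip(xi, sigma)))

def bch(xi1, xi2):
    """B(xi1, xi2) defined through U(xi1) U(xi2) = U(B(xi1, xi2))."""
    M = logm(U(xi1) @ U(xi2)) / (1j * theta)
    return np.real([0.5 * np.trace(M @ s) for s in sigma])   # tr(sigma_a sigma_b) = 2 delta_ab

p = np.array([0.4, -0.1, 0.7])
q = np.array([0.2, 0.5, -0.3])
assert np.allclose(bch(p, np.zeros(3)), p, atol=1e-8)        # B(xi, 0) = xi
assert np.allclose(bch(p, q), -bch(-q, -p), atol=1e-8)       # B(xi1, xi2) = -B(-xi2, -xi1)
```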
This result can be understood [@tp-jcw17] within the more general framework of central extensions of Lie groups which in addition offers a convenient way for generalizations of the present work. In the following $\mathcal{A}$ is a 1-d abelian Lie group which we will assume to be $\mathcal{A}=\mathbb{R}/\mathcal{D}$ where $\mathcal{D}$ is a discrete group of $\mathbb{R}$, i.e. $\mathcal{D}=p\mathbb{Z}$. Let $G$ be a connected Lie group with Lie algebra $\mathfrak{g}$ whose action on $\mathcal{A}$, $\rho:G\times \mathcal{A}\to\mathcal{A}$, is assumed to be trivial ($\rho(g,a)=a$, for any $g\in G$, $a\in\mathcal{A}$). $\mathcal{E}$ is a central extension of $G$ if $\mathcal{A}$ is isomorphic to a subgroup of the center of $\mathcal{E}$, $\mathcal{Z}(\mathcal{E})$, and $G\simeq \mathcal{E}/\mathcal{A}$ as a group isomorphism. Recall that inequivalent central extensions of groups encoded by the short exact sequence ${{ \mathbb{I}}}\to\mathcal{A}\to\mathcal{E}\overset{\pi}{\to} G\to {{ \mathbb{I}}}$ (where $\pi$ is the canonical projection, with in addition $Im(\mathcal{A})\subset\mathcal{Z}(\mathcal{E})$), are classified by $H^2(G,\mathcal{A})$, the 2nd cohomology group of $G$ with values in $\mathcal{A}$. Actually, up to additional technical requirements, $\mathcal{E}$ defines a principal fiber bundle over $G$ with 1-form connection and structure group $\mathcal{A}$. For [*[simply connected]{}*]{} Lie groups $G$, one has $$H^2(G,\mathbb{R}/\mathcal{D})\simeq H^2_{alg}(\mathfrak{g},\mathbb{R}),$$ see e.g. [@hoch-shap], where $H^2_{alg}(\mathfrak{g},\mathbb{R})$ is the 2nd group of (real) cohomology of the Lie algebra $\mathfrak{g}$.\
When $G$ is in addition semi simple, which is for instance the case for $SU(2)$, one has $H^2_{alg}(\mathfrak{g},\mathbb{R})=\{0\}$ so that $H^2(G,\mathbb{R}/\mathcal{D})$ is trivial. When $\mathcal{D}=\mathbb{Z}$, one has $\mathbb{R}/\mathcal{D}=U(1)$, hence the triviality of $H^2(SU(2),U(1))$ which explains the uniqueness of the above star-products up to unitary equivalence. The above conclusion extends to the central extension of any semi-simple and simply connected Lie group by $\mathbb{R}/\mathcal{D}$ so that the extension of the present construction to spaces with noncommutativity based on the corresponding Lie algebra should produce uniqueness of the star-product (up to equivalence).\
When $G$ is semi-simple but not necessarily simply connected, its inequivalent central extensions by $\mathbb{R}/\mathcal{D}$ are classified (up to additional technical requirements) by $H^1_{\hat{C}}(G,\mathbb{R}/\mathcal{D})$, where $H^\bullet_{\hat{C}}$ denotes the Čech cohomology. The following isomorphism holds true $$H^1_{\hat{C}}(G,\mathbb{R}/\mathcal{D})\simeq Hom(\pi_1(G)\to\mathbb{R}/\mathcal{D}).$$ Now assume $G=SL(2,\mathbb{R})$ and $\mathcal{D}=\mathbb{Z}$ so that $\mathbb{R}/\mathcal{D}\simeq U(1)$. The use of the Iwasawa decomposition yields $SL(2,\mathbb{R})\simeq \mathbb{R}^2\times \mathbb{S}^1$, which combined with $\pi_1(X\times Y)=\pi_1(X)\times\pi_1(Y)$ for any topological spaces $X$ and $Y$ implies $\pi_1(SL(2,\mathbb{R}))\simeq\mathbb{Z}$. Hence, $Hom(\mathbb{Z}\to U(1))\simeq U(1)$ which classifies the inequivalent extensions of $SL(2,\mathbb{R})$ by $U(1)$. Accordingly, one expects that the extension of the present construction to $SL(2,\mathbb{R})$ gives rise to inequivalent classes of star-products.\
Quantized plane waves for $\mathbb{R}^3_\theta$
===============================================
The determination of the quantized plane waves, which take the generic form $$E_p(\hat{x})=\omega(p)e^{i\xi(p)\hat{x}} \label{generalform-ncexpo} ,$$ can be achieved through a standard albeit tedious computation whose main steps are summarized now.\
First, using $e^{i\xi(p)\hat{x}}\rhd 1 = \frac{e^{ipx}}{\omega(p)}$, one easily infers that $$e^{-i\xi(\lambda p) \hat{x}} \partial_\mu e^{i\xi(\lambda p) \hat{x}} = (i\lambda p_\mu) {{ \mathbb{I}}},\label{megaplus1}$$ which holds for any $\lambda\in\mathbb{R}$. Then, the combination of $[\partial_\mu, \hat{x}_\nu] = [\partial_\mu,x^a\varphi_{a\nu}] = \varphi_{\mu \nu}$ with the functional derivative of this relation with respect to $\lambda$ yields $$\label{diff-xi}
\varphi_{\mu\nu}(i\lambda p) \frac{d}{d\lambda} \left[ \xi^\nu(\lambda p) \right] = p_\mu.$$ From $SO(3)$-covariance requirement, it can be shown [@tp-jcw17] that this latter functional differential equation is solved by $$\begin{aligned}
\xi^\mu(p) &=& \int_0^1 d\lambda (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu,\label{solution-xi}\\
(\varphi^{-1})^{\mu\nu}(ip) &=& \frac{1}{f^2+\theta^2 p^2} \left( f \delta^{\mu\nu} + 2f'p^\mu p^\nu + \theta \varepsilon^{\mu \nu \rho} p_\rho \right) \ , \label{phi-inverse}\end{aligned}$$ where $f$ and $f^\prime$, constrained as in the previous section, depend on $(-p^2)$. $\xi_\mu$ can be verified to be an injective antisymmetric real-valued function. Finally, this expression can be recast as a Volterra integral given by $$\xi^\mu(p) = \int_{-p^2}^0 \frac{dt}{2\vert p \vert\sqrt{-t}}\ \frac{f(t)-2tf^\prime(t)}{f^2(t)-\theta^2 t}p^\mu .\label{volterra-xi}$$ It can be verified that this integral simplifies to $\xi^\mu=p^\mu$ when use is made of the subfamily of $^*$-representations singled out above.
To determine $\omega(p)$, one observes that $$\frac{d}{d \lambda} \left[ e^{i\xi(\lambda p) \hat{x}} \right]= i (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu \hat{x}_\mu e^{i\xi(\lambda p) \hat{x}} \ ,$$ where the relation derived above has been used, implying $$\frac{d}{d \lambda} \left[ e^{i\xi(\lambda p) \hat{x}} \right] \rhd 1 = i \left( x^\nu + \chi_\mu (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} \right) p_\nu \frac{e^{i\lambda px}}{\omega(\lambda p)} \ .$$ Besides, the following relation holds true $$\frac{d}{d \lambda} \left[ \frac{e^{i\lambda px}}{\omega(\lambda p)} \right] = \left( ix^\nu p_\nu - \frac{1}{\omega(\lambda p)} \frac{d}{d \lambda} \left[ \omega(\lambda p) \right] \right) \frac{e^{i\lambda px}}{\omega(\lambda p)}.$$ Now, one can verify that $\frac{d}{d\lambda} \left[ \hat{A} f(x) \right] = \frac{d\hat{A}}{d\lambda} f(x)$ where $\hat{A}$ and $f$ are any operator and function suitably chosen for the latter expression to be well defined. Hence $$\begin{aligned}
\frac{d}{d\lambda} \left[ e^{i\xi(\lambda p) \hat{x}} \rhd 1 \right] &=& \frac{d}{d\lambda} \left[ e^{i\xi(\lambda p) \hat{x}} \right] \rhd 1,\\
i \left( x^\nu + \chi_\mu (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} \right) p_\nu &=& ix^\nu p_\nu - \frac{1}{\omega(\lambda p)} \frac{d}{d \lambda} \left[ \omega(\lambda p) \right] \ ,\end{aligned}$$ implying $$\frac{1}{\omega(\lambda p)} \frac{d}{d \lambda} \left[ \omega(\lambda p) \right] = - i \chi_\mu (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu \ ,$$ which is solved by [@tp-jcw17] $\omega(p) = e^{-i \int_0^1 d\lambda \ \chi_\mu(i\lambda p) (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu}$. This latter expression can be recast as a Volterra integral given by $$\label{volterra-omega}
\omega(p) = e^{\int_{-p^2}^0 dt\ \frac{f(t)-2tf^\prime(t)}{f^2(t)-\theta^2 t}\ell(t)}.$$ Consistency with the positivity of $\omega_g$ established above requires $\omega(p)$ to be a positive real quantity, therefore constraining $\ell$ to be a real functional, $\ell^\dag=\ell$. It follows that the first of the two constraints on $f$, $g$ and $\ell$ reduces to $\ell=(f+g\Delta)^\prime+g$, which thus constrains $\ell$ once $f$ and $g$ fulfilling those constraints are obtained.
To summarize the above analysis, given a $^*$-representation belonging to the family constructed in the previous section, the two Volterra integrals for $\xi^\mu(p)$ and $\omega(p)$ fully characterize the corresponding quantization map $Q$, together with its star-product. Two remarks are in order:\
i) One can show [@tp-jcw17] that for the family of poly-differential $^*$-representations considered in this note, $Q$ [*[cannot]{}*]{} be the Weyl quantization map $W$ related to the symmetric ordering for operators. In fact, the only poly-differential representation compatible with the Weyl quantization is defined by $\hat{x}_\mu=x^\alpha\left((1-g\Delta)\delta_{\alpha\mu}+g\partial_\alpha\partial_\mu+i\theta
\varepsilon_{\alpha\mu\rho}\partial_\rho\right) $ (in particular $\chi=0$), where $g$ has been defined above, which however is not a $^*$-representation.\
ii) One can verify that the star-product used in [@lagraa] and some ensuing works does not belong to the general family of star-products constructed here. The former star-product, defining a particular deformation of $\mathbb{R}^3$ called $\mathbb{R}^3_\lambda$, can be related to the Wick-Voros product [@wick-voros; @galuccio; @fedele] stemming from a twist. So far, whether or not the present family of star-products also admits a representation in terms of a twist is not known.
Tracial star-product and related quantum field theories {#subsection34}
=======================================================
In this section, we restrict ourselves to the particular subfamily of $^*$-representations singled out above. By combining the latter with the Volterra integrals for $\xi^\mu(p)$ and $\omega(p)$, one easily finds that the quantized plane waves are defined by $Q(e^{ipx})\equiv E_p(\hat{x}) = \left( \frac{\sin(\theta |p|)}{\theta |p|} \right)^2 e^{ip\hat{x}}$, from which follows $$(f\star_Q g)(x) = \int \frac{d^3p}{(2\pi)^3}\frac{d^3q}{(2\pi)^3}\tilde{f}(p)\tilde{g}(q) \mathcal{W}^2(p,q) e^{iB(p,q)x} \ ,$$ for any $f,g \in \mathcal{M}(\mathbb{R}^3)$, with $\mathcal{W}(p,q) {: \hspace{-0.5pt} =}\frac{|B(p,q)|}{\theta |p||q|}\frac{\sin(\theta |p|)\sin(\theta |q|)}{\sin(\theta |B(p,q)|)}$ where $B(p,q)$ is the $\mathfrak{su}(2)$ Baker-Campbell-Hausdorff function introduced in the previous section. Now, introduce a new quantization map $\mathcal{K}:\mathcal{M}(\mathbb{R}^3) \to \mathcal{L}(\mathcal{M}(\mathbb{R}^3))$ through $$\mathcal{K} {: \hspace{-0.5pt} =}Q \circ H,$$ where the operator $H$ acting on $\mathcal{M}(\mathbb{R}^3)$ is given by $$\label{Kontsevich}
H {: \hspace{-0.5pt} =}\frac{\theta \sqrt{\Delta}}{\sinh(\theta \sqrt{\Delta})},$$ and such that $$H(f\star_\mathcal{K}g)=H(f)\star_QH(g),$$ for any $f,g \in \mathcal{M}(\mathbb{R}^3)$. One obtains from a standard computation $$\mathcal{K}(e^{ipx}) = \frac{\sin(\theta |p|)}{\theta |p|} e^{ip\hat{x}} \label{checkpoint2},$$ from which it is easy to obtain finally $$(f\star_\mathcal{K}g)(x)=\int \frac{d^3p}{(2\pi)^3}\frac{d^3q}{(2\pi)^3}\tilde{f}(p)\tilde{g}(q) \mathcal{W}(p,q) e^{iB(p,q)x} \ , \label{kontsev-product}$$ for any $f,g \in \mathcal{M}(\mathbb{R}^3)$, where $\mathcal{W}(p,q)$ has been given above.\
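The kernel $\mathcal{W}(p,q)$ is likewise easy to evaluate numerically: in the defining representation ${\rm tr}[U(p)U(q)]=2\cos(\theta|B(p,q)|)$, so $|B(p,q)|$ is obtained without a matrix logarithm. The following self-contained sketch (Python; $\theta$ and the momenta are illustrative, and the formula is used away from the zeros of $\sin(\theta|B|)$) also checks that $\mathcal{W}\to 1$ in the commutative limit:

```python
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def absB(p, q, theta):
    """|B(p,q)| from tr[U(p)U(q)] = 2 cos(theta |B|), valid for theta|B| < pi."""
    U = lambda k: expm(1j * theta * sum(ki * s for ki, s in zip(k, sigma)))
    c = 0.5 * np.real(np.trace(U(p) @ U(q)))
    return np.arccos(np.clip(c, -1.0, 1.0)) / theta

def W(p, q, theta):
    """Kernel W(p,q) of the closed star-product acting on plane waves."""
    np_, nq, nB = np.linalg.norm(p), np.linalg.norm(q), absB(p, q, theta)
    return (nB / (theta * np_ * nq)) * np.sin(theta * np_) * np.sin(theta * nq) / np.sin(theta * nB)

p = np.array([0.7, 0.2, -0.4])
q = np.array([-0.3, 0.5, 0.1])
assert np.isclose(W(p, q, 0.3), W(q, p, 0.3))      # W is symmetric in its arguments
assert abs(W(p, q, 1e-4) - 1.0) < 1e-6             # commutative limit: W -> 1 as theta -> 0
```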
The star-product $\star_\mathcal{K}$ is nothing but the Kontsevich product [@deform] related to the Poisson manifold dual to the finite dimensional Lie algebra $\mathfrak{su}(2)$. This can be realized [@tp-jcw17] by noticing that the quantized plane wave can be recast into the form $\mathcal{K}(e^{ipx})=W(j^{\frac{1}{2}}(\Delta)(e^{ipx}))$ where $W$ is the Weyl quantization map, $W(e^{ipx})=e^{ip\hat{x}}$ and $$j^{\frac{1}{2}}(\Delta)=\frac{\sinh(\theta \sqrt{\Delta})}{\theta \sqrt{\Delta}} \ , \label{duflo}$$ is the Harish-Chandra map [@harish], [@duflo]. Hence, $$\mathcal{K}=W\circ j^{\frac{1}{2}}(\Delta) \ , \label{deriv-konts}$$ which coincides with the Kontsevich product in the present case, see e.g. [@q-grav]. Note that the last two relations imply $$j^{\frac{1}{2}}(\Delta)=H^{-1},$$ so that $H$ is the inverse of the Harish-Chandra map. The star-product $\star_\mathcal{K}$ is tracial with respect to the trace functional which in the present case is simply defined by the Lebesgue integral on $\mathbb{R}^3$. Namely, one has $$\int d^3 x (f \star_\mathcal{K} g)(x) = \int d^3 x f(x) g(x).$$
This last interesting property can be exploited to build NCFT admitting standard (i.e. commutative) massive real or complex scalar field theories with quartic interaction as formal commutative limits [@jpw-16], [@KV-15], namely $$\begin{aligned}
S_1&=&\int d^3x\big[\frac{1}{2}\partial_\mu\phi\star_\mathcal{K}\partial_\mu\phi+\frac{1}{2}m^2\phi\star_\mathcal{K}\phi+\frac{\lambda}{4!}\phi\star_\mathcal{K}\phi\star_\mathcal{K}\phi\star_\mathcal{K}\phi\big]\label{real-clasaction},\\
S_2&=&\int d^3x\big[\partial_\mu\Phi^\dag\star_\mathcal{K}\partial_\mu\Phi+m^2\Phi^\dag\star_\mathcal{K}\Phi+
{\lambda}\Phi^\dag\star_\mathcal{K}\Phi\star_\mathcal{K}\Phi^\dag\star_\mathcal{K}\Phi\big]\label{complx-clasaction}.\end{aligned}$$ It turns out that these two actions do not have (perturbative) UV/IR mixing, as shown in [@jpw-16]. Indeed, the relevant contributions to the 2-point function can be written as $\Gamma_2^{(I)}=\int d^3x\ \phi(x)\phi(x)\omega_I$ and $\Gamma_2^{(II)}=\int \frac{d^3k_1}{(2\pi)^3}\frac{d^3k_2}{(2\pi)^3}\ \tilde{\phi}(k_1)\tilde{\phi}(k_2)\omega_{II}(k_1,k_2)$, with $$\begin{aligned}
\omega_{I}&\sim&\frac{4}{\theta^2}\int\frac{d^3p}{(2\pi)^3}\ \frac{\sin^2(\frac{\theta}{2}|p|)}{p^2(p^2+m^2)}=\frac{1-e^{-\theta m}}{2m\pi\theta^2}\\
\omega_{II}&\sim&\int d^3x\frac{d^3p}{(2\pi)^3}\ \frac{1}{p^2+m^2}(e^{ipx}\star_{\mathcal{K}}e^{ik_1x}\star_{\mathcal{K}}e^{-ipx}\star_{\mathcal{K}}e^{ik_2x}).\end{aligned}$$ When $\theta\ne0$, $\omega_I$ is obviously finite even for $m=0$ while $\omega_{II}(0,k_2)\sim\delta(k_2)\omega_I$. A similar expression holds for $\omega_{II}(k_1,0)$ so that no IR singularity occurs within the 2-point function. This signals the absence of UV/IR mixing. One can check [@jpw-16] the UV one-loop finiteness of $\omega_{II}$. Note that similar conclusions hold true for the complex scalar field case as well.\
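The closed form quoted for $\omega_I$ can be confirmed numerically: after angular integration the tadpole reduces to a one-dimensional integral that an oscillatory-weight quadrature handles well. A short sketch (Python/SciPy; the values of $\theta$ and $m$ are arbitrary) reads:

```python
import numpy as np
from scipy.integrate import quad

def omega_I(theta, m):
    """(4/theta^2) * int d^3p/(2pi)^3 sin^2(theta|p|/2) / (p^2 (p^2 + m^2)),
    reduced to a radial integral via sin^2(x) = (1 - cos(2x))/2."""
    flat = quad(lambda p: 1.0 / (p**2 + m**2), 0, np.inf)[0]          # = pi/(2m)
    osc = quad(lambda p: 1.0 / (p**2 + m**2), 0, np.inf,
               weight='cos', wvar=theta)[0]                           # QUADPACK Fourier routine
    return 4.0 / theta**2 * 0.5 * (flat - osc) / (2.0 * np.pi**2)

theta, m = 0.7, 1.3    # illustrative values
closed_form = (1.0 - np.exp(-theta * m)) / (2.0 * m * np.pi * theta**2)
print(omega_I(theta, m), closed_form)   # the two numbers agree to quadrature accuracy
```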
The origin of the absence of UV/IR mixing as well as the present mild (finite) UV behaviour (which should extend to all orders) is likely due to the Peter-Weyl decomposition[^2] of the algebra modeling the noncommutative space, i.e. $\mathbb{R}^3_\theta=\oplus_{j\in\frac{\mathbb{N}}{2}}\mathbb{M}_{2j+1}(\mathbb{C})$, which reflects the relationship between the algebra and the convolution algebra of $SU(2)$, see [@wal-16]. This is particularly apparent in a class of NCFT [@vitwal], with however kinetic operators different from the usual Laplacian used here, for which the theory splits into an infinite tower of (matrix) field theories, each on a finite geometry with the radius $\sim j$ of $\mathbb{M}_{2j+1}(\mathbb{C})$ serving as a natural cut-off. The use of the standard Laplacian in the actions above complicates the analysis of the UV behaviour to arbitrary orders, but one can reasonably conjecture that both NCFT are (UV) finite to all orders.\
An interesting issue would be to extend the above analysis to the case of noncommutative gauge theories whose commutative limit would reproduce the usual gauge (Yang-Mills) theory on $\mathbb{R}^3$. Such an extension would presumably exclude the choice of a derivation-based differential calculus [@mdv-jcw] since the latter would produce natural Laplacians without (analog of) radial dependence. Note that one proposal aiming to include radial dependence presented in [@fedele] amounts to enlarging the initial algebra by incorporating the deformation parameter itself.\
[**[Acknowledgments:]{}**]{} J.-C. Wallet is grateful to F. Besnard, N. Franco and F. Latrémolière for discussions on various topics related to the present work. T. Poulain thanks COST Action MP1405 QSPACE for partial financial support. This work is partially supported by H2020 Twinning project No. 692194, “RBI-T-WINNING”.
[10]{}
See e.g C. Guedes, D. Oriti and M. Raasakka, “[*[Quantization maps, algebra representation and non-commutative Fourier transform for Lie groups ]{}*]{}”, J. Math. Phys. [**[54]{}**]{} (2013) 083508 and references therein. See also
L. Freidel and E. Livine, “[*[Effective 3-D quantum gravity and non-commutative quantum field theory ]{}*]{}”, Phys. Rev. Lett. [**[96]{}**]{} (2006) 221301. A. Baratin and D. Oriti, “[*[ ]{}*]{}”, Phys. Rev. Lett. [**[105]{}**]{}(2010) 221302
For a recent review on star-products and their applications, see e.g F. Lizzi, P. Vitale, "[*[Matrix Bases for Star Products: a Review ]{}*]{}, SIGMA [**[10]{}**]{} (2014) 086.
A. B. Hammou, M. Lagraa and M. M. Sheikh-Jabbari, “[*[Coherent state induced star-product on R\*\*3(lambda) and the fuzzy sphere]{}*]{}”, [Phys. Rev. D**66**, 025025 (2002)]{}.
J. M. Gracia-Bondía, F. Lizzi, G. Marmo and P. Vitale, “[*[Infinitely many star-products to play with]{}*]{}”, [[JHEP]{} **04** (2002) 026]{}. V.G. Kupriyanov and P. Vitale, “[*[Noncommutative $\mathbb{R}^d$ via closed star-product ]{}*]{}”, JHEP [**[08]{}**]{} (2015) 024.
See e.g L. Freidel and S. Majid, “[*[Noncommutative harmonic analysis, sampling theory and the Duflo map in 2+1 quantum gravity ]{}*]{}”, Class. Quant. Grav. [**[25]{}**]{} (2008) 045006.
S. Galluccio, F. Lizzi, P. Vitale“[*[Twisted Noncommutative Field Theory with the Wick-Voros and Moyal Products]{}*]{}”, Phys.Rev.D[**[78]{}**]{} (2008) 085007. See also P. Aschieri, F. Lizzi, P. Vitale "[*[Twisting all the way: from Classical Mechanics to Quantum Fields]{}*]{}, Phys.Rev.D[**[77]{}**]{} (2008) 025037.
J.M. Gracia-Bondia, F. Lizzi, F. Ruiz Ruiz, P. Vitale, “[*[Noncommutative spacetime symmetries: Twist versus covariance]{}*]{}”, Phys.Rev.D[**[74]{}**]{} (2006) 025014; Erratum-ibid.D[**[74]{}**]{} (2006)029901.
P. Vitale, J.-C. Wallet, ”[*[Noncommutative field theories on $\mathbb{R}^3_\lambda$: Toward UV/IR mixing freedom]{}*]{}”, [JHEP]{} **04** (2013) 115. A. Géré, P. Vitale, J.-C. Wallet, “[*[Quantum gauge theories on noncommutative three-dimensional space]{}*]{}”, [Phys. Rev. D**90** (2014) 045019 ]{}. A. Géré, T. Jurić and J.-C. Wallet, “[*[Noncommutative gauge theories on $\mathbb{R}^3_\lambda$: Perturbatively finite models ]{}*]{}”, [[JHEP]{} **12** (2015) 045]{}. P. Vitale, “Noncommutative field theory on $\mathbb{R}^3_\lambda$”, Fortschr. Phys. (2014) DOI 10.1002/prop.201400037 \[arxiv:1406.1372\].
J.-C. Wallet, “[*[Exact Partition Functions for Gauge Theories on $\mathbb{R}^3_\lambda$]{}*]{}”, Nucl. Phys. B[**[912]{}**]{} (2016) 354. H. Grosse and R. Wulkenhaar, ”[*[Renormalisation of $\varphi^4$-theory on noncommutative $\mathbb{R}^2$ in the matrix base]{}*]{}”, JHEP [**0312**]{} (2003) 019. H. Grosse and R. Wulkenhaar, ”[*[Renormalisation of $\varphi^4$-theory on noncommutative $\mathbb{R}^4$ in the matrix base]{}*]{}”, Commun. Math. Phys. [**256**]{} (2005) 305. A. de Goursac, A. Tanasa, J.-C. Wallet, ”[ *[Vacuum configurations for renormalizable non-commutative scalar models]{}*]{}”, Eur. Phys. J. C[**[53]{}**]{} (2008) 459. A. de Goursac, J.-C. Wallet, R. Wulkenhaar, “[*[On the vacuum states for noncommutative gauge theory]{}*]{}”, [Eur. Phys. J. C**56** (2008) 293–304]{}. P. Martinetti, P. Vitale, J.-C. Wallet, ”[*[ Noncommutative gauge theories on $\mathbb{R}^2_\theta$ as matrix models]{}*]{}”, *JHEP* **09** (2013) 051. Families of star products on the Moyal space $\mathbb{R}^4_\theta$ have been constructed in A. de Goursac, J.-C. Wallet, “[ *[Symmetries of noncommutative scalar field theory]{}*]{}”, J. Phys. A: Math. Theor. [**[44]{}**]{} (2011) 055401. See also J.-C. Wallet, “[*[Noncommutative Induced Gauge Theories on Moyal Spaces]{}*]{}”, [J. Phys. Conf. Ser. **103**, 012007 (2008)]{}. A. de Goursac, J.-C. Wallet and R. Wulkenhaar, *Noncommutative induced gauge theory*, *Eur. Phys. J.* **C51** (2007) 977. T. Jurić, T. Poulain, J.-C. Wallet, “[*[Closed star-product on noncommutative $\mathbb{R}^3$ and scalar field dynamics ]{}*]{}”, JHEP [**[05]{}**]{} (2016) 146. T. Jurić, T. Poulain, J.-C. Wallet, “[*[Involutive representations of coordinate algebras and quantum spaces ]{}*]{}”, arXiv:1702.06348 (2017).
For a general construction, see N. Durov, S. Meljanac, A. Samsarov and Z. Skoda, “[*[A universal formula for representing Lie algebra generators as formal power series with coefficient in the Weyl algebra ]{}*]{}”, J. Algebra [**[309]{}**]{} (2007) 318.
V. G. Kupriyanov and D. V. Vassilevich, “[*[star-products made (somewhat) easier ]{}*]{}”, Eur. Phys. J. [**[C58]{}**]{} (2008) 627.
V.G. Kupriyanov and P. Vitale, “[*[Noncommutative $\mathbb{R}^d$ via closed star-product ]{}*]{}”, JHEP [**[08]{}**]{} (2015) 024.
S. Meljanac, A. Samsarov, M. Stojic and K. S. Gupta, “[*[Kappa-Minkowski space-time and the star-product realizations]{}*]{}”, Eur. Phys. J. C [**53**]{}, 295 (2008)\[arXiv:0705.2471 \[hep-th\]\]. S. Meljanac and M. Stojic, “[*[New realizations of Lie algebra kappa-deformed Euclidean space]{}*]{}”, Eur. Phys. J. C [**47**]{} (2006) 531. For an approach based on twists, see e.g A. Borowiec, A. Pachol, “[*[kappa-Minkowski spacetime as the result of Jordanian twist deformation]{}*]{}”, Phys.Rev.D [**[79]{}**]{} (2009) 045012.
M. Kontsevich, “[*[Deformation quantization of Poisson Manifolds ]{}*]{}”, Lett. Math. Phys. [**[66]{}**]{} (2003) 157.
Harish-Chandra, Trans. Amer. Math. Soc. [**[70]{}**]{} (1951), 28-96.
M. Duflo, “[*[Opérateurs différentiels bi-invariants sur un groupe de Lie]{}*]{}, Ann. Sc. Ec. Norm. Sup. [**[10]{}**]{} (1977) 107, ”[*[Caractères des algèbres de Lie résolubles ]{}*]{}", C. R. Acad. Sci. Paris, Série A-B 269 (1969) A437.12.
See e.g K.S. Brown, “[*[Cohomology of Groups]{}*]{}”, Springer-Verlag Berlin and Heidelberg GmbH & Co. K (1982).
R. Stora, F. Thuillier and J.-C. Wallet, *[Algebraic structure of cohomological field theory models and equivariant cohomology]{}*, [in *Infinite dimensional geometry, non commutative geometry, operator algebras, fundamental interactions*, p.266-297, Cambridge Press (1995)]{}. J.-C. Wallet, *Algebraic setup for the gauge fixing of [BF]{} and super [BF]{} systems*, *Phys. Lett.* **B235** (1990) 71. L. Baulieu, M. Bellon, S. Ouvry, J.-C. Wallet, “[*[Batalin-Vilkovisky analysis of supersymmetric systems ]{}*]{}”, Phys.Lett. B[**[252]{}**]{} (1990) 387. G. Hochschild, “[*[Group extensions of Lie groups I & II]{}*]{}”, Ann. of Math. [**[54]{}**]{} (1951) 96. A. Shapiro, "[*[Group extensions of compact Lie groups]{}*]{}, Ann. of Math. [**[50]{}**]{} (1949) 581.
A. Wick-Voros, “Wentzel-Kramers-Brillouin method in the Bargmann representation”, Phys. Rev. A[**[40]{}**]{} (1989) 6814.
M. Dubois-Violette, “[*[Lectures on graded differential algebras and noncommutative geometry]{}*]{}”, Noncommutative Differential Geometry and Its Applications to Physics, Springer Netherlands, 245–306 (2001), \[arxiv:math/9912017\]. J.-C. Wallet, “[*[Derivations of the Moyal algebra and Noncommutative gauge theories]{}*]{}”, [SIGMA **5** (2009) 013]{},\[arxiv:0811.3850\]. E. Cagnache, T. Masson and J-C. Wallet, “[*[Noncommutative Yang-Mills-Higgs actions from derivation based differential calculus]{}*]{}”, [J. Noncommut. Geom. **5**, 39–67 (2011)]{}, \[arxiv:0804.3061\].
[^1]: Notice that consistency with the positivity of $\omega(p)$ below will require $\ell$ to be real.
[^2]: This should hold provided the kinetic operator has a reasonable behaviour, i.e. has compact resolvent ensuring a sufficient decay of the propagator.
---
abstract: 'We present a detailed investigation of the NLO polarization of the top quark in $t \bar t$ production at a polarized linear $e^+e^-$ collider with longitudinally polarized beams. By appropriately tuning the polarization of the beams one can achieve close to maximal values for the top quark polarization over most of the forward hemisphere for a large range of energies. This is quite welcome since the rate is largest in the forward hemisphere. One can also tune the beam polarization to obtain close to zero polarization over most of the forward hemisphere.'
author:
- |
[*S. Groote$^1$, J.G. Körner$^2$, B. Melić$^3$, S. Prelovsek$^4$*]{}\
$^1$Loodus- ja Tehnoloogiateaduskond, Füüsika Instituut, Tartu Ülikool, Riia 142, EE–51014 Tartu, Estonia\
$^2$Institut für Physik der Johannes-Gutenberg-Universität, Staudinger Weg 7, D–55099 Mainz, Germany\
$^3$ Rudjer Bošković Institute, Theoretical Physics Division, Bijenička c. 54, HR–10000 Zagreb, Croatia\
$^4$ Physics Department at University of Ljubljana and Jozef Stefan Institute, SI–1000 Ljubljana, Slovenia
title: |
Single top quark polarization at $O(\alpha_{s})$ in $t \bar t$ production\
at a polarized linear $e^+e^-$ collider
---
Introductory remarks
============================
The top quark is so heavy that it keeps its polarization at production when it decays since $\tau_{\rm hadronization}\gg\tau_{\rm decay}$. One can test the Standard Model (SM) and/or non-SM couplings through polarization measurements involving top quark decays (mostly $t\to b+W^+$). New observables involving top quark polarization can be defined such as $\langle\vec{P}_t\cdot\vec{p}\rangle$ (see e.g. Refs. ). It is clear that the analyzing power of such observables is largest for large values of the polarization of the top quark. This calls for large top quark polarization values. One also wants a control sample with small or zero top quark polarization. Near maximal and minimal values of top quark polarization at a linear $e^+e^-$ collider can be achieved in $t \bar t$ production by appropriately tuning the longitudinal polarization of the beams [@Groote:2010zf]. At the same time one wants to keep the top quark pair production cross section large. It is a fortunate circumstance that all these goals can be realized at the same time. A polarized linear $e^+e^-$ collider may thus be viewed as a rich source of close to zero and close to $100\%$ polarized top quarks.
Let us remind the reader that the top quark is polarized even for zero beam polarization through vector–axial vector interference effects $\sim v_ea_e,\,v_ea_f,\,v_fa_e,\,v_fa_f$, where $$\begin{aligned}
\label{ewcouplings}
v_e,a_e\quad&:& \mbox{electron current coupling}\nonumber\\
v_f,a_f\quad&:& \mbox{top quark current coupling}\end{aligned}$$ In Fig. \[fig:zeropol\] we present an NLO plot of the $\cos\theta$ dependence of the zero beam polarization top quark polarization for different characteristic energies: $\sqrt{s}=360$ GeV (close to threshold), $\sqrt{s}=500$ GeV (ILC phase 1), $\sqrt{s}=1000$ GeV (ILC phase 2) and $\sqrt{s}=3000$ GeV (CLIC).
![\[fig:zeropol\]Magnitude of NLO top quark polarization for zero beam polarization](korner_jurgen.Fig1.eps){width="60.00000%"}
\[sec2\]Top quark polarization at threshold and in the high energy limit
========================================================================
The polarization of the top quark depends on the c.m. energy $\sqrt{s}$, the scattering angle $\cos\theta$, the electroweak coupling coefficients $g_{ij}$ and the effective beam polarization $P_{\rm eff}$, i.e. one has $$\label{vecpol}
\vec{P}=\vec{P}\,(\sqrt{s},\cos\theta, g_{ij},P_{\rm eff})\,,$$ where the effective beam polarization appearing in Eq. (\[vecpol\]) is given by [@MoortgatPick:2005cw] $$\label{peff}
P_{\rm eff}=\frac{h_--h_+}{1-h_-h_+}\,.$$ and where $h_-$ and $h_+$ are the longitudinal polarization of the electron and positron beams $(-1<h_{\pm}<+1)$, respectively. Instead of the nonchiral electroweak couplings $g_{ij}$ one can alternatively use the chiral electroweak couplings $f_{mm'}$ ($m,m'=L,R$) introduced in Refs. [@Parke:1996pr; @Kodaira:1998gt]. The relations between the two sets of electroweak coupling coefficients can be found in Ref. [@Groote:2010zf]. In this report we shall make use of both sets of coupling parameters.
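For orientation, Eq. (\[peff\]) is simple enough to evaluate directly; the following Python sketch (an added illustration, not part of the original analysis) computes $P_{\rm eff}$ for the two example beam settings quoted later in Sec. \[sec5\].

```python
def effective_beam_polarization(h_minus, h_plus):
    """Effective beam polarization P_eff = (h- - h+)/(1 - h- h+) of Eq. (peff)."""
    return (h_minus - h_plus) / (1.0 - h_minus * h_plus)

# Example: the two beam settings quoted in Sec. 5
print(effective_beam_polarization(-0.80, +0.625))  # ~ -0.95
print(effective_beam_polarization(+0.80, -0.625))  # ~ +0.95
```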
For general energies the functional dependence in Eq. (\[vecpol\]) is not simple. Even if the electroweak couplings $g_{ij}$ are fixed, one remains with a three-dimensional parameter space $(\sqrt{s},\cos\theta,P_{\rm eff})$. Our strategy is to discuss various limiting cases for the Born term polarization and then to investigate how the limiting values extrapolate away from these limits. In particular, we exploit the fact that, in the Born term case, angular momentum conservation (or $m$-quantum number conservation) implies $100\%$ top quark polarization at the forward and backward points for the $(e^-_Le^+_R)$ and $(e^-_Re^+_L)$ beam configurations.
In this section we discuss the behaviour of $\vec{P}$ at nominal threshold $\sqrt{s}=2m_t$ ($v=0$) and in the high energy limit $\sqrt{s}\to\infty$ ($v\to 1$). At threshold and at the Born term level one has $$\label{thresh}
\vec{P}_{\rm thresh}=\frac{P_{\rm eff}- A_{LR}}{1-P_{\rm eff}A_{LR}}
\,\,\,\hat{n}_{e^-}\,,$$ where $A_{LR}$ is the left–right beam polarization asymmetry $(\sigma_{LR}-\sigma_{RL})/(\sigma_{LR}+\sigma_{RL})$ and $\hat{n}_{e^-}$ is a unit vector pointing into the direction of the electron momentum. We use a notation where $\sigma(LR/RL)=\sigma(h_-=\mp 1;h_+=\pm 1)$. In terms of the electroweak coupling parameters $g_{ij}$, the nominal polarization asymmetry at threshold $\sqrt{s}=2m_t$ is given by $A_{LR}=-(g_{41}+g_{42})/(g_{11}+g_{12})=0.409$. Eq. (\[thresh\]) shows that, at threshold and at the Born term level, the polarization $\vec{P}$ is parallel to the beam axis irrespective of the scattering angle and has maximal values $|\vec{P}|=1$ for both $P_{\rm eff}=\pm1$ as dictated by angular momentum conservation. Zero polarization is achieved for $P_{\rm eff}=A_{LR}=0.409$.
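A minimal numerical check of Eq. (\[thresh\]), using the quoted threshold value $A_{LR}=0.409$ (illustrative sketch only):

```python
A_LR_THRESH = 0.409  # threshold value of the left-right asymmetry quoted in the text

def threshold_polarization(p_eff, a_lr=A_LR_THRESH):
    """Born-level top polarization at nominal threshold, Eq. (thresh), along the e- direction."""
    return (p_eff - a_lr) / (1.0 - p_eff * a_lr)

for p_eff in (-1.0, 0.0, A_LR_THRESH, +1.0):
    print(p_eff, threshold_polarization(p_eff))
# P_eff = -1 and +1 give |P| = 1; P_eff = 0.409 gives zero polarization
```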
In the high energy limit the polarization of the top quark is purely longitudinal, i.e. the polarization points into the direction of the top quark. At the Born term level one finds $\vec{P}(\cos\theta)=P^{(\ell)}(\cos\theta)\cdot\hat{p_t}$ with $$\label{helimit}
P^{(\ell)}(\cos\theta)
=\frac{(g_{14}+g_{41}+P_{\rm eff}(g_{11}+g_{44}))(1+\cos\theta)^2
+(g_{14}-g_{41}-P_{\rm eff}(g_{11}-g_{44}))(1-\cos\theta)^2}
{(g_{11}+g_{44}+P_{\rm eff}(g_{14}+g_{41}))(1+\cos\theta)^2
+(g_{11}-g_{44}-P_{\rm eff}(g_{14}-g_{41}))(1-\cos\theta)^2}\,.$$ In the same limit, the electroweak coupling coefficients appearing in Eq. (\[helimit\]) take the numerical values $g_{11}=0.601$, $g_{14}=-0.131$, $g_{41}=-0.201$ and $g_{44}=0.483$. For $\cos\theta=\pm1$ and $P_{\rm eff}=\pm1$ the top quark is $100\%$ polarized as again dictated by angular momentum conservation. The lesson from the threshold and high energy limits is that large values of the polarization of the top quark close to $|\vec{P}|=1$ are engendered for large values of the effective beam polarization parameter close to $P_{\rm eff}=\pm1$.
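The high-energy Born formula (\[helimit\]) with the $g_{ij}$ values quoted above can be coded up directly; the sketch below (added for illustration) checks the $100\%$ polarization at $\cos\theta=+1$, $P_{\rm eff}=\pm1$, and the forward-region polarization zero near $P_{\rm eff}\simeq0.31$ discussed below.

```python
# High-energy values of the electroweak coefficients quoted in the text
G11, G14, G41, G44 = 0.601, -0.131, -0.201, 0.483

def p_long_high_energy(cos_theta, p_eff):
    """Born-level longitudinal top polarization in the high-energy limit, Eq. (helimit)."""
    fwd = (1.0 + cos_theta) ** 2
    bwd = (1.0 - cos_theta) ** 2
    num = (G14 + G41 + p_eff * (G11 + G44)) * fwd + (G14 - G41 - p_eff * (G11 - G44)) * bwd
    den = (G11 + G44 + p_eff * (G14 + G41)) * fwd + (G11 - G44 - p_eff * (G14 - G41)) * bwd
    return num / den

print(p_long_high_energy(+1.0, -1.0))    # -1: fully polarized at the forward point
print(p_long_high_energy(+1.0, +1.0))    # +1: fully polarized at the forward point
print(p_long_high_energy(+1.0, 0.306))   # ~0: polarization zero in the forward region
```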
Take, for example, the forward–backward asymmetry which is zero at threshold, and large and positive in the high energy limit. In fact, from the numerator of the high energy formula Eq. (\[helimit\]) one calculates $$A_{FB}=\frac34\,\,\frac{g_{44}+P_{\rm eff}g_{14}}{g_{11}+P_{\rm eff}g_{41}}
\,=\,0.61\,\frac{1-0.27P_{\rm eff}}{1-0.33P_{\rm eff}}\,.$$ The forward-backward asymmetry is large and only mildly dependent on $P_{\rm eff}$. More detailed calculations show that the strong forward dominance of the rate sets in rather fast above threshold [@Groote:2010zf]. This is quite welcome since the forward region is also favoured from the polarization point of view.
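Evaluating this high-energy $A_{FB}$ expression for a few beam settings (illustrative sketch; coefficient values as quoted above):

```python
G11, G14, G41, G44 = 0.601, -0.131, -0.201, 0.483  # high-energy values quoted in the text

def a_fb_high_energy(p_eff):
    """Forward-backward asymmetry in the high-energy limit."""
    return 0.75 * (G44 + p_eff * G14) / (G11 + p_eff * G41)

for p_eff in (-1.0, 0.0, +1.0):
    print(p_eff, round(a_fb_high_energy(p_eff), 3))
# roughly 0.57, 0.60, 0.66: large and only mildly dependent on P_eff
```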
As another example take the vanishing of the polarization which, at threshold, occurs at $P_{\rm eff}=0.409$. In the high energy limit, and in the forward region where the numerator part of Eq. (\[helimit\]) proportional to $(1+\cos\theta)^2$ dominates, one finds a polarization zero at $P_{\rm eff}=-(g_{14}+g_{41})/(g_{11}+g_{44})=0.306$. The two values of $P_{\rm eff}$ do not differ much from one another.
\[sec3\]Overall rate and left-right (LR) and right-left (RL) rates
==================================================================
The overall rate $\sigma$ for partially longitudinal polarized beam production can be composed from the LR rate $\sigma_{LR}$ and the RL rate $\sigma_{RL}$ valid for $100\%$ longitudinally polarized beams. The notation is such that LR and RL refer to the $(e^-_Le^+_R)$ and $(e^-_Re^+_L)$ longitudinal polarization configurations, respectively. The relation reads [@Artru:2008cp] $$\begin{aligned}
\label{rate}
\frac{d\sigma}{d\cos\theta}
&=&\frac{1-h_-}2\frac{1+h_+}2\frac{d\sigma_{LR}}{d\cos\theta}
+\frac{1+h_-}2\frac{1-h_+}2\frac{d\sigma_{RL}}{d\cos\theta}\nonumber\\
&=&\frac14(1-h_-h_+)\Big(\frac{d\sigma_{LR}+d\sigma_{RL}}{d\cos\theta}
-P_{\rm eff}\frac{d\sigma_{LR}-d\sigma_{RL}}{d\cos\theta}\Big)\,.\end{aligned}$$ Using the left–right polarization asymmetry $$A_{LR}=\frac{d\sigma_{LR}-d\sigma_{RL}}{d\sigma_{LR}+d\sigma_{RL}}$$ one can rewrite the rate (\[rate\]) in the form $$\label{ratealr}
\frac{d\sigma}{d\cos\theta}=\frac14(1-h_-h_+)
\frac{d\sigma_{LR}+d\sigma_{RL}}{d\cos\theta}\Big(1-P_{\rm eff}A_{LR}\Big)\,.$$ The differential rate $d\sigma/d\cos\theta$ carries an overall helicity alignment factor $(1-h_-h_+)$ which enhances the rate for negative values of $h_-h_+$. Also, Fig. \[fig:asymrl\] shows that $A_{LR}$ varies in the range between $0.30$ and $0.60$ which leads to a further rate enhancement from the last factor in Eq. (\[ratealr\]) for negative values of $P_{\rm eff}$.
![\[fig:asymrl\]NLO left–right polarization asymmetry $A_{LR}$ for $\sqrt s=360$, $500$, $1000$, and $3000$GeV](korner_jurgen.Fig2.eps){width="60.00000%"}
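To make the rate argument quantitative, a small sketch of the overall scaling factor in Eq. (\[ratealr\]) is given below; the value $A_{LR}=0.45$ used here is only an assumed representative mid-range number, not read off the figure.

```python
def rate_scale(h_minus, h_plus, a_lr):
    """Beam-polarization scaling of the rate in Eq. (ratealr): (1/4)(1 - h- h+)(1 - P_eff A_LR)."""
    p_eff = (h_minus - h_plus) / (1.0 - h_minus * h_plus)
    return 0.25 * (1.0 - h_minus * h_plus) * (1.0 - p_eff * a_lr)

A_LR = 0.45  # assumed representative value within the range shown in Fig. 2
unpolarized = rate_scale(0.0, 0.0, A_LR)
for hm, hp in [(-0.80, +0.625), (+0.80, -0.625)]:
    print((hm, hp), round(rate_scale(hm, hp, A_LR) / unpolarized, 2))
# the negative-P_eff option gains roughly a factor of two over unpolarized beams
```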
Let us define reduced LR and RL rate functions $D_{LR/RL}$ by writing $$\frac{d\sigma_{LR/RL}}{d\cos\theta}
=\frac{\pi\alpha^2v}{3s^2}D_{LR/RL}(\cos\theta)$$ such that, in analogy to Eq. (\[rate\]), $$\label{ovrate}
D=\frac14(1-h_-h_+)\left(D_{LR}+D_{RL}-P_{\rm eff}(D_{LR}-D_{RL})\right)\,.$$ In the next step we express the reduced rate functions through a set of independent hadronic helicity structure functions. For the LR reduced rate function one has $$\begin{aligned}
2D_{LR}(\cos\theta)
&=&\frac38(1+\cos^2\theta)
\left((f_{LL}^2+f_{LR}^2)H_U^1+2f_{LL}f_{LR}H_U^2\right)\nonumber\\&&
+\frac34\sin^2\theta
\left((f_{LL}^2+f_{LR}^2)H_L^1+2f_{LL}f_{LR}H_L^2\right)\nonumber\\&&
+\frac34\cos\theta(f_{LL}^2-f_{LR}^2)H_F^4\end{aligned}$$ and accordingly for $D_{RL}$ with $f_{LL}\to f_{RR}$ and $f_{LR}\to f_{RL}$.
At NLO one has $H_a^j=H_a^j({\it Born\/})+H_a^j(\alpha_s)$. The radiatively corrected structure functions $H_a^j(\alpha_s)$ are listed in Ref. [@Groote:2010zf]. If needed they can be obtained from S.G. or B.M. in Mathematica format. For the non-vanishing unpolarized Born term contributions $H_a^j({\it Born\/})$ one obtains (see e.g.Ref. [@Groote:2008ux; @Groote:2010zf]) $$\begin{aligned}
\label{HUL1}
H_U^1({\it Born\/})=2N_cs(1+v^2),&&
H_L^1({\it Born\/})=\ H_L^2({\it Born\/})=N_cs(1-v^2),\nonumber\\[3pt]
H_U^2({\it Born\/})=2N_cs(1-v^2),&&
H_F^4({\it Born\/})=4N_csv.\end{aligned}$$
Following Refs. [@Parke:1996pr; @Kodaira:1998gt], $D_{LR}(\cos\theta)$ (and $D_{RL}(\cos\theta)$) can be cast into a very compact Born term form $$\label{dlr}
D_{LR}(Born)=\frac38\left(C_{LR}^2-2f_{LL}f_{LR}v^2\sin^2\theta\right)2N_cs,$$ where $$\label{clr}
C_{LR}(\cos\theta)=f_{LL}(1+v\cos\theta)+f_{LR}(1-v\cos\theta)\,.$$ The corresponding RL form $D_{RL}$ is obtained again by the substitution $(L\leftrightarrow R)$ in Eqs. (\[dlr\]) and (\[clr\]).
With the help of the compact expression in Eq. (\[dlr\]) and the translation table $2(g_{11}-g_{41})=(f_{LL}^2+f_{LR}^2)$, $2(g_{14}-g_{44})=-f_{LL}^2+f_{LR}^2$, $2(g_{11}+g_{41})=f_{RR}^2+f_{RL}^2$, $2(g_{14}+g_{44})=f_{RR}^2-f_{RL}^2$ one can easily verify the threshold value for $A_{LR}$ and the high energy limits for $A_{FB}$ discussed in Sec. \[sec2\].
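Assuming the high-energy $g_{ij}$ values quoted after Eq. (\[helimit\]), the translation table can also be inverted numerically for the squared chiral couplings; the short sketch below (illustrative only) reproduces the coupling ratios used later in Sec. \[sec7\].

```python
# High-energy values of the g_ij quoted after Eq. (helimit)
g11, g14, g41, g44 = 0.601, -0.131, -0.201, 0.483

# Invert the translation table:
#   2(g11 - g41) = fLL^2 + fLR^2,  2(g14 - g44) = -fLL^2 + fLR^2
#   2(g11 + g41) = fRR^2 + fRL^2,  2(g14 + g44) =  fRR^2 - fRL^2
fLL2 = (g11 - g41) - (g14 - g44)
fLR2 = (g11 - g41) + (g14 - g44)
fRR2 = (g11 + g41) + (g14 + g44)
fRL2 = (g11 + g41) - (g14 + g44)

print(round(fLR2 / fLL2, 3), round(fLL2 / fLR2, 2))   # ~0.13 and ~7.5
print(round(fRL2 / fRR2, 3), round(fRR2 / fRL2, 2))   # ~0.064 and ~15.7
```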
\[sec4\]Single top polarization in $e^{+}e^{-} \to t \bar t$
============================================================
The polarization components $P^{(m)}$ ($m=\ell$: longitudinal; $m=tr$: transverse) of the top quark in $e^{+}e^{-} \to t \bar t$ are obtained from (the antitop quark spin is summed over) $$\label{pol}
P^{(m)}(P_{\rm eff})=\frac{N^{(m)}(P_{\rm eff})}{D(P_{\rm eff})}\,,$$ where the dependence on $P_{\rm eff}$ is given by $$N^{(m)}(P_{\rm eff})=\frac14(1-h_-h_+)\left(N^{(m)}_{LR}+N^{(m)}_{RL}
-P_{\rm eff}(N^{(m)}_{LR}-N^{(m)}_{RL})\right)\,.$$ $P^{(tr)}$ is the transverse polarization component perpendicular to the momentum of the top quark in the scattering plane. The overall helicity alignment factor $(1-h_-h_+)$ drops out when one calculates the normalized polarization components according to Eq. (\[pol\]). This explains why the polarization depends only on $P_{\rm eff}$ and not separately on $h_-$ and $h_+$ (see Eq. (\[vecpol\])).
The numerator factors $N^{(m)}_{LR}$ and $N^{(m)}_{RL}$ in Eq. (\[pol\]) are given by $$\begin{aligned}
\label{lpol}
-2N^{(\ell)}_{LR}(\cos\theta)&=&\frac38(1+\cos^2\theta)\,
(f_{LL}^2-f_{LR}^2)H_U^{4(\ell)}
+\frac34\sin^2\theta\,(f_{LL}^2-f_{LR}^2)H_L^{4(\ell)}\nonumber\\&&
+\frac34\cos\theta\,\bigg((f_{LL}^2+f_{LR}^2)H_F^{1(\ell)}
+2f_{LL}f_{LR}H_F^{2(\ell)}\bigg)\,,\end{aligned}$$ $$\begin{aligned}
\label{trpol}
-2N^{(tr)}(\cos\theta)\!\!\!&=&\!\!\!
-\frac3{\sqrt2}\sin\theta\cos\theta\,\,(f_{LL}^2-f_{LR}^2)\,H_I^{4(tr)}
\nonumber\\&&
-\frac3{\sqrt2}\sin\theta\left((f_{LL}^2+f_{LR}^2)H_A^{1(tr)}
+2f_{LL}f_{LR}H_A^{2(tr)}\right)\,,\end{aligned}$$ and $N^{(m)}_{RL}=-N^{(m)}_{LR}(L\leftrightarrow R)$. Note the extra minus sign when relating $N^{(m)}_{LR}$ and $N^{(m)}_{RL}$.
The LO longitudinal and transverse polarization components read (see e.g. Ref. [@Groote:2008ux; @Groote:2010zf]) $$\begin{aligned}
\label{ell}
H_U^{4(\ell)}({\it Born\/})=4N_csv,&&
H_F^{1(\ell)}({\it Born\/})=2N_cs(1+v^2),\nonumber\\[3pt]
H_L^{4(\ell)}({\it Born\/})=0,&&
H_F^{2(\ell)}({\it Born\/})=2N_cs(1-v^2),\end{aligned}$$ and $$H_I^{4(tr)}({\it Born\/})=2N_cs\frac{1}{2\sqrt{2}}v\sqrt{1-v^{2}},\quad
H_A^{1(tr)}({\it Born\/})=H_A^{2(tr)}({\it Born\/})
=2N_cs\frac{1}{2\sqrt{2}}\sqrt{1-v^2}\,.$$
The LO numerators (\[lpol\]) and (\[trpol\]) can be seen to take a factorized form [@Parke:1996pr; @Kodaira:1998gt] $$\begin{aligned}
N_{LR}^{(\ell)}(\cos\theta)&=&-\frac38\bigg(f_{LL}(\cos\theta+v)
+f_{LR}(\cos\theta-v)\bigg)\,C_{LR}(\cos\theta)\,\,2N_cs\,,\label{psell}
\nonumber \\
N_{LR}^{(tr)}(\cos\theta)&=&\frac38\sin\theta\sqrt{1-v^2}\,
(f_{LL}+f_{LR})\,C_{LR}(\cos\theta)\,\,2N_cs\,,\label{pstr}\end{aligned}$$ where the common factor $C_{LR}(\cos\theta)$ has been defined in Eq. (\[clr\]).
One can then determine the angle $\alpha$ enclosing the direction of the top quark and its polarization vector by taking the ratio $N^{(tr)}_{LR}/N^{(\ell)}_{LR}$. One has $$\label{offdiag}
\tan\alpha_{LR}=\frac{N_{LR}^{(tr)}(\cos\theta)}{N_{LR}^{(\ell)}(\cos\theta)}
=-\frac{\sin\theta\sqrt{1-v^2}\,(f_{LL}+f_{LR})}{f_{LL}(\cos\theta+v)
+f_{LR}(\cos\theta-v)}\,.$$ For $v=1$ one finds $\alpha_{LR}=0$, i.e. the polarization vector is aligned with the momentum of the top quark, in agreement with what has been said before. In Ref. [@Groote:2010zf] we have shown that radiative corrections to the value of $\alpha_{LR}$ are small in the forward region but can become as large as $\Delta\alpha_{LR}=10^{\circ}$ in the backward region for large energies.
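A direct transcription of Eq. (\[offdiag\]) is given below; the coupling values used in the example are only illustrative (square roots of the high-energy squared couplings derived above, with both signs assumed positive), and the $v\to1$ check recovers $\alpha_{LR}\to0$ independently of the couplings.

```python
import math

def tan_alpha_LR(cos_theta, v, f_LL, f_LR):
    """tan of the angle between the top momentum and its polarization (LR beams), Eq. (offdiag)."""
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta ** 2))
    num = -sin_theta * math.sqrt(1.0 - v ** 2) * (f_LL + f_LR)
    den = f_LL * (cos_theta + v) + f_LR * (cos_theta - v)
    return num / den

# illustrative couplings (signs assumed positive); alpha_LR -> 0 as v -> 1
for v in (0.2, 0.6, 0.999):
    print(v, math.degrees(math.atan(tan_alpha_LR(0.5, v, 1.19, 0.43))))
```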
Eqs. (\[psell\]) and (\[pstr\]) can be used to find a very compact LO form for $|\vec{P}_{LR}|$. One obtains [@Groote:2010zf] $$\label{expanda}
|\vec{P}_{LR}|=\frac{\sqrt{N_{LR}^{(\ell)2}+N_{LR}^{(tr)2}}}{D_{LR}}
=\frac{\sqrt{1-4a_{LR}}}{1-2a_{LR}}=1-2a_{LR}^2-8a_{LR}^3-26a_{LR}^4-\ldots,$$ where the coefficient $a_{LR}$ depends on $\cos\theta$ through $$a_{LR}(\cos\theta)=\frac{f_{LL}f_{LR}}{C_{LR}^2(\cos\theta)}v^2\sin^2\theta\,.$$ Again, the corresponding expressions for $|\vec{P}_{RL}|$ and $a_{RL}$ can be found by the substitution $(L\leftrightarrow R)$.
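The closed form and the small-$a_{LR}$ expansion in Eq. (\[expanda\]) can be compared numerically (an added consistency check; the fourth-order coefficient in the series follows from expanding the closed form):

```python
def p_mag(a):
    """Closed form |P_LR| = sqrt(1 - 4a)/(1 - 2a) of Eq. (expanda)."""
    return (1.0 - 4.0 * a) ** 0.5 / (1.0 - 2.0 * a)

def p_series(a):
    """Leading terms of the small-a expansion: 1 - 2a^2 - 8a^3 - 26a^4."""
    return 1.0 - 2.0 * a ** 2 - 8.0 * a ** 3 - 26.0 * a ** 4

for a in (0.01, 0.05, 0.10):
    print(a, round(p_mag(a), 6), round(p_series(a), 6))
```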
For the fun of it we also list a compact LO form for $|\vec{P}(P_{\rm eff}=0)|$. One has $$\label{polpeff0}
|\vec{P}(P_{\rm eff}=0)|=\frac{\sqrt{(C_{LR}^2-C_{RL}^2)^2
-4v^2\sin^2\theta(C_{LR}f_{LL}-C_{RL}f_{RR})(C_{LR}f_{LR}-C_{RL}f_{RL})}}
{C_{LR}^2+C_{RL}^2-2v^2\sin^2\theta(f_{LL}f_{LR}+f_{RR}f_{RL})}\,.$$ Eq. (\[polpeff0\]) would produce a LO version of Fig \[fig:zeropol\].
\[sec5\]Effective beam polarization
===================================
As described in Sec. \[sec2\], large values of the effective beam polarization $P_{\rm eff}$ are needed to produce large polarization values of $\vec{P}$. It is a fortunate circumstance that nearly maximal values of $P_{\rm eff}$ can be achieved with non-maximal values of $(h_{-},h_{+})$. This is shown in Fig. \[fig:contour\] where we have drawn contour plots of $P_{\rm eff}={\it const}$ in the $(h_-,h_+)$ plane. The two examples shown in Fig. \[fig:contour\] refer to $$\begin{aligned}
&&(h_{-}=-0.80,\,h_{+}=+0.625)\qquad\mbox{leads to}\quad P_{\rm eff}=-0.95,
\nonumber\\
&&(h_{-}=+0.80,\,h_{+}=-0.625)\qquad\mbox{leads to}\quad P_{\rm eff}=+0.95.\end{aligned}$$ These two options are at the technical limits that can be achieved [@Alexander:2009nb]. In the next section we shall see that the choice $P_{\rm eff}\sim-0.95$ is to be preferred since the polarization is more stable against small variations of $P_{\rm eff}$. Furthermore, negative values of $P_{\rm eff}$ gives yet another rate enhancement as discussed after Eq. (\[ratealr\]).
![\[fig:contour\]Contour plots of $P_{\rm eff}={\it const}$ in the $(h_+,h_-)$ plane](korner_jurgen.Fig3.eps){width="40.00000%"}
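A contour map of this kind is easy to regenerate; the following matplotlib sketch (added for illustration, not the code used for Fig. \[fig:contour\]) maps $P_{\rm eff}$ over the $(h_+,h_-)$ plane and marks the two example settings above.

```python
import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(-0.99, 0.99, 201)
HP, HM = np.meshgrid(h, h)                 # HP: h_+, HM: h_-
P_EFF = (HM - HP) / (1.0 - HM * HP)

cs = plt.contour(HP, HM, P_EFF, levels=[-0.95, -0.8, -0.5, 0.0, 0.5, 0.8, 0.95])
plt.clabel(cs, fmt='%.2f')
plt.xlabel(r'$h_+$')
plt.ylabel(r'$h_-$')
plt.plot([+0.625, -0.625], [-0.80, +0.80], 'o')   # the two example settings of the text
plt.show()
```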
\[sec6\]Stability of polarization against variations of $P_{\rm eff}$
=====================================================================
Extrapolations of $|\vec{P}|$ away from $P_{\rm eff}=\pm 1$ are more stable for $P_{\rm eff}=-1$ than for $P_{\rm eff}=+1$. Because the derivative of the magnitude of $|\vec{P}|$ leads to rather unwieldy expressions, we demonstrate this separately for the two polarization components $P^{(\ell)}$ and $P^{(tr)}$. The polarization components are given by ($m=\ell,tr$) $$P^{(m)}=\frac{N_0^{(m)}-P_{\rm eff}N_P^{(m)}}{D_0-P_{\rm eff}D_P}\,,$$ where $N_0^{(m)}=N_{LR}^{(m)}+N_{RL}^{(m)}$ and $N_P^{(m)}=N_{LR}^{(m)}-N_{RL}^{(m)}$ and similarly for $D_0$ and $D_P$. Upon differentiation w.r.t. $P_{\rm eff}$ one obtains $$\frac{dP^{(m)}\,}{dP_{\rm eff}}=
\frac{-N_{0}^{(m)}D_{P}+N_{P}^{(m)}D_{0}}
{\left(D_{0}-P_{\rm eff}D_{P}\right)^{2}}\,.$$
For the ratios of the slopes for $P_{\rm eff}=-1$ and $P_{\rm eff}=+1$ one finds $$\frac{dP^{(m)}\,}{dP_{\rm eff}}\Big|_{P_{\rm eff}=-1}\,\bigg/\,
\frac{dP^{(m)}\,}{dP_{\rm eff}}\Big|_{P_{\rm eff}=+1}
=\left(\frac{D_0-D_P}{D_0+D_P}\right)^2
=\left(\frac{D_{RL}}{D_{LR}}\right)^2
=\left(\frac{1-A_{LR}}{1+A_{LR}}\right)^2\,.$$ Depending on the energy and the scattering angle, Fig. \[fig:asymrl\] shows that $A_{LR}$ varies between $0.3$ and $0.6$ which implies that $(D_{RL}/D_{LR})^2$ varies between $0.29$ and $0.06$, i.e. for $P_{\rm eff}=-1$ the polarization components are much more stable against variations of $P_{\rm eff}$ than for $P_{\rm eff}=+1$. At threshold the ratio of slopes of $|\vec{P}_{\rm thresh}\,|$ for $P_{\rm eff}=-1$ and $P_{\rm eff}=+1$ is given by $-(D_{RL}/D_{LR})^2=-0.18$ where the minus sign results from having taken the derivative of the magnitude $|\vec{P}\,|$ (see Eq. (\[thresh\])).
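The stability argument thus reduces to the single number $((1-A_{LR})/(1+A_{LR}))^2$; a short sketch evaluating it over the quoted range (illustration only):

```python
def slope_ratio(a_lr):
    """Ratio of dP/dP_eff at P_eff = -1 to that at P_eff = +1: (D_RL/D_LR)^2 = ((1-A_LR)/(1+A_LR))^2."""
    return ((1.0 - a_lr) / (1.0 + a_lr)) ** 2

for a_lr in (0.3, 0.409, 0.6):
    print(a_lr, round(slope_ratio(a_lr), 3))
# ~0.29, ~0.18 (threshold value), ~0.06
```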
\[sec7\]Longitudinal and transverse polarization\
$P^{(\ell)}$ vs. $P^{(tr)}$ for general angles and energies
===========================================================
In Fig. \[fig:translong1\] we plot the longitudinal component $P^{(\ell)}$ and the transverse component $P^{(tr)}$ of the top quark polarization for different scattering angles $\theta$ and energies $\sqrt{s}$ starting from threshold up to the high energy limit. The left and right panels of Fig. \[fig:translong1\] are drawn for $P_{\rm eff}=(-1,-0.95)$ and for $P_{\rm eff}=(+1,+0.95)$, respectively. The apex of the polarization vector $\vec{P}$ follows a trajectory that starts at $\vec{P}=P_{\rm thresh}(-\cos\theta,\sin\theta)$ and $\vec{P}=P_{\rm thresh}(\cos\theta,-\sin\theta)$ for negative and positive values of $P_{\rm eff}$, respectively, and ends on the line $P^{(tr)}=0$ in the high energy limit. The two $60^{\circ}$ trajectories show that large values of the size of $|\vec{P}|$ close to the maximal value of $1$ can be achieved in the forward region for both $P_{\rm eff}\sim\mp 1$ at all energies. However, the two figures also show that the option $P_{\rm eff}\sim -1$ has to be preferred since the $P_{\rm eff}\sim-1$ polarization is more stable against variations of $P_{\rm eff}$.
![\[fig:translong1\]Parametric plot of the orientation and the length of the polarization vector in dependence on the c.m. energy $\sqrt{s}$ for values $\theta=60^\circ$, $90^\circ$, $120^\circ$, and $150^\circ$ for i) (left panel) $P_{\rm eff}=-1$ (solid lines) and $P_{\rm eff}=-0.95$ (dashed lines) and ii) (right panel) $P_{\rm eff}=+1$ (solid lines) and $P_{\rm eff}=+0.95$ (dashed lines). The three ticks on the trajectories stand for $\sqrt{s}=500\,GeV$, $1000\,GeV$, and $3000\,GeV$.](korner_jurgen.Fig4a.eps "fig:"){width="40.00000%"} ![\[fig:translong1\]Parametric plot of the orientation and the length of the polarization vector in dependence on the c.m. energy $\sqrt{s}$ for values $\theta=60^\circ$, $90^\circ$, $120^\circ$, and $150^\circ$ for i) (left panel) $P_{\rm eff}=-1$ (solid lines) and $P_{\rm eff}=-0.95$ (dashed lines) and ii) (right panel) $P_{\rm eff}=+1$ (solid lines) and $P_{\rm eff}=+0.95$ (dashed lines). The three ticks on the trajectories stand for $\sqrt{s}=500\,GeV$, $1000\,GeV$, and $3000\,GeV$.](korner_jurgen.Fig4b.eps "fig:"){width="40.00000%"}
It is noteworthy that the magnitude of the polarization vector remains closer to $|\vec{P}|=1$ in the forward region than in the backward region when $\cos\theta$ is varied. Let us investigate this effect for $P_{\rm eff}=-1$ by expanding the high energy formula (\[helimit\]) in $\Delta\cos\theta$ around $\cos\theta=+1$ and $\cos\theta=-1$. Since the first derivative vanishes, one has to expand to the second order in $\Delta\cos\theta$. The result is $$\begin{aligned}
{\rm Forward}\qquad|\vec{P}_{LR}|
&=&1-\frac12\left(\frac{f_{LR}}{f_{LL}}\right)^2(\Delta\cos\theta)^2
+\ldots\nonumber\\
{\rm Backward}\qquad|\vec{P}_{LR}|
&=&1-\frac12\left(\frac{f_{LL}}{f_{LR}}\right)^2(\Delta\cos\theta)^2
+\ldots\,.\end{aligned}$$
Numerically, one has $f_{LR}^2/f_{LL}^2=0.13$ and $f_{LL}^2/f_{LR}^2=7.53$. The second derivative is very much smaller in the forward direction than in the backward direction. This tendency can be clearly discerned in Fig. \[fig:translong1\]. A similar but even stronger conclusion is reached for the second derivative of $|\vec{P}_{RL}|$ where the corresponding second order coefficients are given by $f_{RL}^2/f_{RR}^2=0.064$ for $\cos\theta=+1$, and by $f_{RR}^2/f_{RL}^2=15.67$ for $\cos\theta=-1$. Corresponding $v$-dependent expansions can be obtained from Eq. (\[expanda\]).
We mention that at NLO there is also a normal component of the top quark polarization $P^{(n)}$ generated by the one–loop contribution which, however, is quite small (of $O(3\%))$ [@Groote:2010zf].
\[sec8\]Summary
===============
The aim of our investigation was to maximize and to minimize the polarization vector of the top quark $\vec{P}\,(\sqrt{s},\cos\theta,g_{ij},P_{\rm eff})$ by tuning the beam polarization. Let us summarize our findings which have been found in NLO QCD in the context of the SM.\
[**A. Maximal polarization:**]{} Large values of $\vec{P}$ can be realized for $P_{\rm eff}\sim\pm1$ at all intermediate energies. This is particularly true in the forward hemisphere where the rate is highest. Negative large values for $P_{\rm eff}$ with aligned beam helicities ($h_-h_+$ neg.) are preferred for two reasons. First there is a further gain in rate apart from the helicity alignment factor $(1-h_-h_+)$ due to the fact that generally $\sigma_{LR}>\sigma_{RL}$ as explained after Eq. (\[rate\]). Second, the polarization is more stable against variations of $P_{\rm eff}$ away from $P_{\rm eff}=-1$. The forward region is also favoured since the $100\%$ LO polarization valid at $\cos\theta=1$ extrapolates smoothly into the forward hemisphere with small radiative corrections.\
[**B. Minimal polarization:**]{} Close to zero values of the polarization vector $\vec{P}$ can be achieved for $P_{\rm eff}\sim 0.4$. Again the forward region is favoured. In order to maximize the rate for the small polarization choice take quadrant IV in the $(h_-,\,h_+)$ plane.
Acknowledgements {#acknowledgements .unnumbered}
================
J.G.K. would like to thank X. Artru and E. Christova for discussions and G. Moortgat-Pick for encouragement. The work of S.G. is supported by the Estonian target financed project No. 0180056s09, by the Estonian Science Foundation under grant No. 8769 and by the Deutsche Forschungsgemeinschaft (DFG) under grant 436 EST 17/1/06. B.M. acknowledges support of the Ministry of Science and Technology of the Republic of Croatia under contract No. 098-0982930-2864. S.P. is supported by the Slovenian Research Agency.
[99]{}
E. Christova and D. Draganov, Phys. Lett. [**B434**]{} (1998) 373
M. Fischer, S. Groote, J.G. Körner, M.C. Mauser and B. Lampe, Phys. Lett. [**B451**]{} (1999) 406
M. Fischer, S. Groote, J.G. Körner and M.C. Mauser, Phys. Rev. [**D65**]{} (2002) 054036
S. Groote, W.S. Huo, A. Kadeer and J.G. Körner, Phys. Rev. [**D76**]{} (2007) 014012
J.A. Aguilar-Saavedra and J. Bernabeu, Nucl. Phys. [**B840**]{} (2010) 349;\
J.A. Aguilar-Saavedra and R.V. Herrero-Hahn, arXiv:1208.6006
J. Drobnak, S. Fajfer and J.F. Kamenik, Phys. Rev. [**D82**]{} (2010) 114008
S. Groote and J. G. Körner, Phys. Rev. D [**80**]{} (2009) 034001
S. Groote, J.G. Körner, B. Melic and S. Prelovsek, Phys. Rev. [**D83**]{} (2011) 054018
G.A. Moortgat-Pick [*et al.*]{}, Phys. Rept. [**460**]{} (2008) 131
X. Artru, M. Elchikh, J.M. Richard, J. Soffer and O.V. Teryaev, Phys. Rept. [**470**]{} (2009) 1
S. Parke and Y. Shadmi, Phys. Lett. [**B387**]{} (1996) 199
J. Kodaira, T. Nasuno and S.J. Parke, Phys. Rev. [**D59**]{} (1998) 014023
G. Alexander, J. Barley, Y. Batygin, S. Berridge, V. Bharadwaj, G. Bower, W. Bugg and F.J. Decker [*et al.*]{},\
Nucl. Instrum. Meth. [**A610**]{} (2009) 451
---
abstract: 'Poincaré gauge theory (PGT) is an alternative gravity theory which attempts to bring gravity into the gauge-theoretic framework, with a Lagrangian quadratic in torsion and curvature. Recently, cosmological models with torsion based on this theory have drawn much attention, as they try to explain the cosmic acceleration in a new way. Among these PGT cosmological models, the one with only the even-parity dynamical mode – the SNY model – is particularly attractive for its physical relevance. In this paper, we first analyze the past-time cosmic evolution of the SNY model analytically. Based on these results we fit the model to the most comprehensive SNeIa data (Union 2) and thus find the best-fit values of the model parameters and initial conditions, whose associated $\chi^{2}$ value is consistent with that of $\Lambda$CDM at the 1$\sigma$ level. The $\chi^{2}$ estimate also provides constraints on these parameters. Using the best-fit values for the Union 2 SNeIa dataset, we are able to predict the late-time evolution of our real universe. From this prediction we learn the fate of our universe: it would expand forever, asymptotically slowing to a halt, in accordance with earlier works.'
author:
- 'Xi-Chen Ao'
- 'Xin-Zhou Li'
bibliography:
- 'pgtc.bib'
title: Torsion Cosmology of Poincaré gauge theory and the constraints of its parameters via SNeIa data
---
\[sec:introduction\]Introduction
================================
About 12 years ago, the high-redshift SNeIa observations, suggesting that our universe is not only expanding but also accelerating, changed our basic understanding of the universe and led us into a new era of cosmology. In order to explain this unexpected phenomenon, cosmologists brought the cosmological constant back and constructed a new standard model, also known as the concordance model, which is intended to satisfy all the main observations such as type Ia supernovae (SNeIa), the cosmic microwave background radiation (CMBR) and large scale structure (LSS). Although the cosmological constant accounts for almost 74% of the total energy density in this concordance model, its value is still too small to be explained by any current fundamental theory. Lacking an underlying theoretical foundation, the particular value of the cosmological constant is simply selected phenomenologically, which means the model is highly sensitive to the value of this parameter, resulting in the so-called fine-tuning problem. This problem is considered the biggest issue for almost all cosmological models. In order to alleviate this troublesome problem, various dynamical dark energy theories, in which the energy composition depends on time, have been proposed and developed in recent years, such as quintessence [@Peebles:2002gy; @Li:2001xaa] and phantom [@Caldwell:1999ew; @Li:2003ft]. But these exotic fields are still phenomenological, lacking theoretical foundations. Besides adding unknown fields, there is another kind of theory known as modified gravity, which replaces Einstein's theory with an alternative gravity theory, such as $ f(R)$ theory [@Nojiri:2010wj; @Du:2010rv], MOND cosmology [@Zhang:2011uf], Poincaré gauge theory [@Hehl:1994ue; @Blagojevic2002], and de Sitter gauge theory [@Ao:2011dn]. Among these theories, Poincaré gauge theory (PGT), which tries to bring gravity into the gauge-theoretic framework, has a solid theoretical motivation.
The attempt to treat gravitation as a gauge interaction dates back to the 1950s. Utiyama first presented the groundwork for a gauge theory of gravitation [@Utiyama:1956sy]. Later Kibble and Sciama [@Kibble:1961ba] built on Utiyama's work and developed a gauge theory of gravitation commonly called the Einstein-Cartan-Sciama-Kibble (ECSK) theory, whose symmetry group is the Poincaré group. In the ECSK theory the Lagrangian contains only one linear curvature term, so the equation for the torsion is algebraic and the torsion is non-dynamical. Some follow-up cosmological models with torsion, based on the ECSK model, were investigated in attempts to avoid the cosmological singularity [@Kerlick:1976uz], and from these cosmological discussions torsion was imagined as playing a role only at high densities in the early universe. In order to enable torsion to propagate, one has to introduce terms quadratic in torsion and curvature, which leads to the Poincaré gauge theory proposed by Hehl in 1980 [@Hehl1980]. The Poincaré group is a semidirect product of the translation group and the Lorentz rotation group, and it is the global symmetry group of space-time in the absence of gravity. If one localizes this global symmetry as a gauge symmetry, gravity emerges naturally. Note that in this transition some independent compensating fields have to be introduced, the orthonormal coframe and the metric-compatible connection, which correspond to the translation and local rotation potentials, respectively; their corresponding field strengths are the torsion and the curvature. The early works on PGT were intended to unify gravity and the other three interactions in the framework of gauge field theory. Now, however, PGT may offer a solution to the cosmic acceleration problem, in which the dynamical torsion could play the role of dark energy and drive our universe to accelerate [@Shie:2008ms; @Chen:2009at; @Ho:2011qn; @Ho:2011xf].
In PGT, the propagating torsion field can be identified with six possible dynamical modes, carrying spin and parity $2^{\pm}, 1^{\pm},0^{\pm}$, respectively [@Hayashi:1979wj; @Hayashi:1980ir]. However, except for the two “scalar modes”, $0^{\pm}$, the other dynamical modes are not physically acceptable, being ruled out by certain theoretical constraints (e.g., “no-ghosts” and “no-tachyons”). The pseudoscalar mode with odd parity, $0^{-}$, is reflected in the axial vector torsion, which is driven by the intrinsic spin of elementary fermions. It is generally thought that the axial torsion is large and has notable effects only in the very early universe, where the spin density is large. Therefore, this part probably makes an insignificant contribution to the current evolution of our universe. The scalar mode with even parity, $0^{+}$, which is associated with the vector torsion, does not interact in any direct way with any known form of matter. Consequently, it is rational to imagine that it has a significant magnitude but has not yet been noticed. In 2008, Shie, Nester and Yo (SNY) proposed a cosmological model based on PGT, where the dynamical scalar torsion $0^{+}$ accounts for the cosmic acceleration [@Shie:2008ms; @Chen:2009at]. Later, we investigated the dynamical properties of the SNY model and applied the statefinder diagnostics to it [@Li:2009zzc; @Li:2009gj], and carried out an analytical study of its late-time evolution in [@Ao:2010mg]. In 2010, Baekler, Hehl and Nester generalized the SNY model to the BHN model, which includes the $0^{-}$ mode and even the cross-parity couplings, and gave a comprehensive picture of PGT cosmology [@Baekler:2010fr]. Some follow-up works suggest that the cosmic acceleration is still mainly due to the even mode [@Ho:2011qn; @Ho:2011xf].
In this paper, we study the evolution of our universe based on the SNY model. In the SNY model, the cosmic evolution is described in terms of three variables, ($H, \Phi, R$), by three first-order dynamical equations. We obtain three exact solutions for the scale factor at particular constant values of the affine scalar curvature, and show that these solutions are not physically realistic, which requires us to study the non-constant curvature case. We then extend the investigation to non-constant affine scalar curvature and find analytical solutions for the past-time evolution, which can be compared with observational data to constrain the parameter values and to determine the goodness of the model. We fit these theoretical results to the Union 2 supernovae dataset, and find the best-fit values of the model parameters and initial conditions by means of the $\chi^{2}$ estimate. We also plot the contours of certain confidence levels, which provide constraints on the parameters. Finally, we study the future evolution. Using these parameter constraints, we predict that the real universe would expand forever, asymptotically slowing to a halt, which is consistent with earlier works [@Li:2009gj; @Ao:2010mg].
\[sec:poinc-gauge-theory\]SNY model and some of its exact solutions
================================================================
In order to localize the global Poincaré symmetry, one has to introduce two compensating fields: coframe field $\vartheta^{\alpha}$ and metric-compatible connection field $\Gamma^{\alpha\beta}=\Gamma^{[\alpha\beta]}_{i}\mathrm{d}x^{i}$ [^1]. And their associated field strengths are the torsion and curvature 2-forms $$\begin{aligned}
\label{eq:gauge-field-strength1}
T^{\mu} &\equiv& \frac{1}{2}T^{\mu}_{\alpha\beta}~ \vartheta^{\alpha}\wedge
\vartheta ^{\beta}=\mathrm{d}\vartheta ^{\mu} +\Gamma^{\mu}_{\nu}\wedge
\vartheta ^{\nu}\\
\label{eq:gauge-field-strength2}
R^{\mu\nu}&\equiv&
\frac{1}{2}R^{\mu\nu}_{\alpha\beta}~\vartheta^{\alpha}\wedge\vartheta^{\beta}=\mathrm{d}
\Gamma^{\mu\nu}+\Gamma^{\mu}_{\rho}\wedge\Gamma^{\rho\nu}\end{aligned}$$ which satisfy the respective Bianchi identities: $$\begin{aligned}
\label{eq:bianchi}
\mathrm{D}T^{\mu}\equiv R^{\mu}_{\, \nu}\wedge \vartheta^{\nu},\quad
\mathrm{D} R^{\mu}_{\, \nu}\equiv0\end{aligned}$$ The Lagrangian density in PGT takes the standard quadratic Yang-Mills form, qualitatively, $$\begin{aligned}
\label{eq:lagrangian-1}
\mathcal{L}[\vartheta,
\Gamma]\sim \mathrm{curvature} +(\mathrm{ torsion})^{2}+(\mathrm {curvature})^{2}.\end{aligned}$$
Some early works on PGT concluded that, in the weak field approximation, the torsion field could be identified with six irreducible parts carrying certain dynamical modes, which propagate with $2^{\pm}$, $1^{\pm}$, $0^{\pm}$. Later investigations based on linearized theory and Hamiltonian analysis concluded that only the two “scalar modes”, carrying $0^{\pm}$, are physically acceptable. Furthermore, because the $0^{-}$ mode is driven by the spin density, it would not have significant effects except in the very early universe, and some numerical results demonstrate that the acceleration is mainly due to the $0^{+}$ mode. So here we only consider the simple $0^{+}$ case, i.e., the SNY model. In this case, the gravitational Lagrangian density simplifies to the specific form $$\begin{aligned}
\label{eq:lagrangian-2}
\mathcal{L}[\vartheta,
\Gamma]=\frac{1}{2\kappa}\left[-a_{0}R+\sum^{3}_{n=1}a_{n}{\buildrel
(n)\over T}{}^{2}+\frac{b}{12}R^{2}\right],\end{aligned}$$ where ${\buildrel (n)\over T}$ denotes the algebraically irreducible parts of the torsion and $R$ is the scalar curvature, the relevant non-vanishing irreducible part of the curvature [@Baekler:2010fr]. Note that $a_{0}$ and $a_{n}$ are dimensionless parameters, whereas $b$ has the same dimension as $R^{-1}$.
Since current observations favor a homogeneous, isotropic and spatially flat universe, it is reasonable to work with the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, for which the isotropic orthonormal coframe takes the form: $$\begin{aligned}
\label{eq:coframe}
\vartheta ^{0}= \mathrm{d}t, \qquad \vartheta^{i}=a(t) \mathrm{d}x^{i};\end{aligned}$$ and the only non-vanishing connection 1-form coefficients are of the form: $$\begin{aligned}
\label{eq:connection}
\Gamma^{i}_{0}=\Psi(t)\mathrm{x}^{i},\end{aligned}$$ Consequently, the nonvanishing torsion tensor components take the form $$\begin{aligned}
\label{eq:torsion-tensor}
T^{i}_{~j0}=a(t)^{-1}(\Psi(t)-\dot{a}(t))\delta^{i}_{j}\equiv \frac{-\Phi(t)}{3}\delta^{i}_{j}\end{aligned}$$ *w*here $\Phi$ represents torsion. By variation w.r.t. the coframe and connection, one could find the cosmological equations of SNY model. $$\begin{aligned}
\label{eq:motion-equation-H}
\dot H &=& \frac{\mu}{6 a_{2}}R-\frac{\kappa\;\rho_{m}}{6 a_{2}}-2
H^{2}\\
\label{eq:motion-equation-phi}
\dot \Phi &=& \frac{a_{0}}{2a_{2}}R-\frac{\kappa\;\rho_{m}}{2
a_{2}}-3H\Phi +\frac{1}{3}\Phi^{2}\\
\label{eq:motion-equation-R}
\dot R &=& -\frac{2}{3}\left(R+\frac{6\mu}{b}\right)\Phi,\end{aligned}$$ where $H=\dot{a}/a$ is the Hubble parameter, $\mu=a_{2}-a_{0}$ represents the mass of the $0^{+}$ mode, and the energy density of the matter component is $$\begin{aligned}
\label{eq:energy-density}
\kappa
\rho_{m}=\frac{b}{18}\left(R+\frac{6\mu}{b}\right)(3H-\Phi)^{2}-\frac{b}{24}R^{2}-3a_{2}H^{2}.\end{aligned}$$ Note that the universe here is assumed to be a dust universe, since the effect of radiation is almost negligible during the late-time evolution. The Newtonian limit requires $a_{0}=-1$. From Eq. , it is easy to see that the scalar affine curvature remains at the constant value $R = -6\mu/b$ forever as long as its initial data takes this special value. In this case, Eq. can be rewritten as $$2a\ddot{a}+\dot{a}^{2}+\frac{3}{2}\frac{\mu^{2}}{a_{2}b}a^{2}=0,$$ where the scale factor $a(t)$ is decoupled from the $\Phi$ and $R$ fields. Thus, we can obtain some simple exact solutions for $a(t)$.
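As an aside (an added illustration, not part of the original analysis), the three dynamical equations above, with the matter density eliminated via the constraint equation, can also be integrated numerically without assuming constant curvature. The Python sketch below simply transcribes those equations in Hubble units ($\kappa=1$, $H_{0}=1$); the parameter values and initial data are the best-fit numbers found later in the fitting section and are used here purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

a0 = -1.0               # Newtonian limit
a2, b = 1.336, 0.992    # best-fit parameters from the SNeIa fit below (illustrative choice)
mu = a2 - a0

def kappa_rho(H, Phi, R):
    """Matter density from the constraint equation (Hubble units)."""
    return b / 18.0 * (R + 6.0 * mu / b) * (3.0 * H - Phi) ** 2 - b / 24.0 * R ** 2 - 3.0 * a2 * H ** 2

def rhs(t, y):
    """Right-hand sides of the H, Phi, R equations as printed above."""
    H, Phi, R = y
    krho = kappa_rho(H, Phi, R)
    dH = mu / (6.0 * a2) * R - krho / (6.0 * a2) - 2.0 * H ** 2
    dPhi = a0 / (2.0 * a2) * R - krho / (2.0 * a2) - 3.0 * H * Phi + Phi ** 2 / 3.0
    dR = -2.0 / 3.0 * (R + 6.0 * mu / b) * Phi
    return [dH, dPhi, dR]

y0 = [1.0, 0.584, 5.839]   # (H, Phi, R) today, best-fit initial data from the fit below
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-8, atol=1e-10)
print(np.round(sol.y[:, -1], 4))   # late-time values of (H, Phi, R)
```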
The positivity of the kinetic energy requires $a_2 > 0$ and $b > 0$ [@Yo:1999ex], so we have the solution $$\label{eq:sol-1}
a(t)=a_{0}\left(\frac{\cos\left[\frac{3\zeta}{2}\left(t-t_{0}\right)-\arctan\left(\frac{H_{0}}{\zeta}\right)\right]}
{\cos\left[\arctan\left(\frac{H_{0}}{\zeta}\right)\right]} \right)^{\frac{2}{3}},$$ where $\zeta = \mu/\sqrt{2a_2b}$ and $H_0 = H(t_0)$. However, such a choice conflicts with the assumption of energy positivity in the $R
= -6\mu/b$ case.
If we audaciously relax the parameter requirement for positive kinetic energy, *i.e.*, $a_2 < -1$ and $\mu < 0$, this phantom scenario will turn out to be interesting. Now we have the solution $$\begin{aligned}
\label{eq:sol-2}
& a(t)&=a_{0}\exp\left[(\xi-2)(t-t_{0})+\frac{2}{3\xi}\ln \frac{(\xi-H_{0})+(\xi+H_{0})\exp[3\xi(t-t_{0})]}{2\xi}\right],\end{aligned}$$ where $\xi = \mu/\sqrt{-2a_2b}$. From this expression, it is obvious that the late-time behavior would be analogous with the exponential expansion of the de Sitter universe. In other words, dark energy can be mimicked in such case. Using the dynamical analysis, we have pointed out that there is a late-time de Sitter attractor [@Li:2009zzc; @Ao:2010mg]. Note that the solution is just corresponding to the de Sitter attractor.
In particular, for $a_2 = -1$, we have the solution $$a(t) = a_0\left(\frac{2+3H_0t}{2+3H_0t_0}\right)^{\frac{2}{3}}.$$ From this expression, we find that the late-time evolution is similar to the $t^{2/3}$ expansion of the matter-dominated universe.
In this constant affine scalar curvature case, none of the three exact solutions we find is physically acceptable: the first conflicts with the assumption of energy positivity, while the second and third break the positivity of the kinetic energy. For this reason, these constant-curvature solutions cannot describe our real universe, and we therefore have to study the case of non-constant scalar curvature.
Unfortunately, for such a complex nonlinear system, Eqs. -, the whole evolution of our universe is difficult to obtain, so we are forced to divide it into two separate parts, the past and the future, and discuss them separately. Of these two parts, it is the past one that can be compared with existing observational results and thus impose constraints on the values of the model parameters and initial conditions. Based on these constraints, we can then obtain the future evolution and the fate of our real universe. Therefore, we study the past-time evolution first.
The Analytical Approach for the $a(t)<a(t_{0})$ {#sec:Past}
===============================================
In Ref.[@Ao:2010mg], we analyzed the evolution when $a(t)>a(t_{0})$, i.e. the future evolution. In this section, we investigate the evolution when $a(t)<a(t_0)$, i.e. the evolution of universe at $t<t_{0}$.
For such a dust universe, the continuity equation is $$\begin{aligned}
\label{eq:continuity-equation}
\dot{\rho}=-3H\rho,\end{aligned}$$ which could also be derived directly by the dynamical equations Eqs.- and the constraint equation Eq. . This equation is easy to solve, $$\begin{aligned}
\label{eq:rho}
\rho=\frac{\rho_{0}}{a^{3}},\end{aligned}$$ where $\rho_{0}$ is the current matter density of our universe.
If we rescale the variables and parameters as $$\begin{aligned}
&&t\rightarrow t/l_0;\quad H\rightarrow l_0 H;\quad R\rightarrow l_0^{2} R;\nonumber \\
&&\Phi\rightarrow l_0 \Phi;\quad \rho \rightarrow l_{0}^{2}\kappa \rho;\quad
b\rightarrow b/l_{0}^{2},\label{transformation}\end{aligned}$$ where $l_0=c/H_0$ is the Hubble radius, these motion equations Eqs.- would be dimensionless.
Substituting Eq. back into Eqs. - and applying the rescaling transformations, one obtains a new set of equations, which is more convenient to solve, $$\begin{aligned}
\label{eq:modified-motion-equation-h}
a H H'&=&\frac{\mu}{6 a_{2}}R- \frac{\rho_{0}}{6
a_{2}}\frac{1}{a^{3}}-2H^{2}\\
a H \Phi'&=&\frac{1}{2 a_{2}}R-\frac{\rho_{0}}{2 a_{2}} \frac{1}{a^{3}}
-3H \Phi + \frac{1}{3} \Phi^{2}\\
\label{eq:modified-motion-equation-r}
aH R'&=&-\frac{2}{3}(R+\frac{6\mu}{b})\Phi,\end{aligned}$$ where the prime denotes the derivative with respect to the scale factor $a$. The current scale factor is generally taken to be unity, so it is reasonable to adopt the following ansatz, $$\begin{aligned}
\label{eq:ansatz-h}
H(a)&=&h_{0}+ \sum^{\infty}_{n=1}h_{n}(a-1)^{n},\\
\Phi(a)&=& \varphi_{0}+\sum^{\infty}_{n=1}\varphi_{n}(a-1)^{n},\\
\label{eq:ansatz-r}
R(a)&=&r_{0}+ \sum^{\infty}_{n=1}r_{n}(a-1)^{n},\end{aligned}$$ which has a good convergence when $a(t)<a(t_{0})$.[^2]
Substituting this ansatz back into Eqs. -, we find the recursion relations for the coefficients, $$\begin{aligned}
\label{eq:n=0-h}
&& h_{1}=\frac{1}{h_{0}}\left(\frac{\mu r_{0}}{6 a_{2}}-\frac{ \rho_{0}}{6 a_{2}}-2h_{0}^{2}\right),\\
&&\varphi_{1}=\frac{1}{h_{0}}\left(\frac{r_{0}}{2a_{2}}-\frac{\rho_{0}}{2a_{2}}-3h_{0}\varphi_{0}
+\frac{\varphi_{0}^{2}}{3}\right),\\
&&r_{1}=-\frac{2}{3h_{0}}\left(r_{0}+\frac{6\mu}{b}\right)\varphi_{0},\end{aligned}$$ when $n\ge 2$, $$\begin{aligned}
\label{eq:n>0-h}
h_{n}&=&\frac{1}{n h_{0} }\left[\frac{\mu r_{n-1}}{6a_{2}}-\frac{\rho_{0}}{12a_{2}}n(n+1)-2\sum^{n-1}_{i=0}h_{i}h_{n-i}-\sum^{n-1}_{i=0}(n-1-i)h_{i}h_{n-1-i}
\right.\nonumber\\
&&\left.-\sum^{n-1}_{i=1}(n-i)h_{i}h_{n-i}\right],\\
\label{eq:n>0-phi}
\varphi_{n}&=&\frac{1}{nh_{0}}\left[\frac{r_{n-1}}{2a_{2}}-\frac{\rho_{0}}{4a_{2}}n(n+1)
-3\sum^{n-1}_{i=0}h_{i}\varphi_{n-1-i}+\sum^{n-1}_{i=0}\frac{\varphi_{i}\varphi_{n-1-i}}{3}\right.\nonumber\\
&&\left.
-\sum^{n-1}_{i=0}(n-1-i)h_{i}\varphi_{n-i}
-\sum^{n-1}_{i=1}(n-i)h_{i}\varphi_{n-i}\right],\\
\label{eq:n>0-r}
r_{n}&=&\frac{1}{n h_{0}}\left[\frac{2}{3}\sum^{n-1}_{i=0}r_{i}\varphi_{n-1-i}+\frac{4\mu}{b}\varphi_{n}\sum^{n-1}_{i=1}(n-i)h_{i}r_{n-i}\right],\end{aligned}$$ with $$\begin{aligned}
\label{eq:initial-rho}
\rho_{0}=\frac{b}{18}\left(r_{0}+\frac{6\mu}{b}\right)(3h_{0}-\varphi_{0})^{2}-\frac{b}{24}r_{0}^{2}-3a_{2}h_{0}^{2},\end{aligned}$$ where $h_{0},~\varphi_{0}$ and $r_{0}$ are the initial conditions, i.e. the values of $H,~\Phi$ and $R$ at $a=1$ (the present time). Note that with the rescaling transformation Eq. , the initial value of $H$ is unity, and therefore $h_{0}=1$. We have thus obtained the analytical solution for the past evolution of the SNY model, which can be tested against observational results.
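As a concrete illustration of the leading coefficients (added here; the numerical values are the best-fit numbers from the fitting section), the short sketch below evaluates $\rho_{0}$ and the first-order coefficients $(h_{1},\varphi_{1},r_{1})$; the resulting $\rho_{0}\approx1.0$ agrees, up to rounding of the tabulated best-fit values, with the value quoted after the fit.

```python
# Evaluate the density constraint and the n = 1 series coefficients at the best-fit values.
a0 = -1.0
a2, b = 1.336, 0.992
mu = a2 - a0
h0, phi0, r0 = 1.0, 0.584, 5.839   # H, Phi, R at a = 1 in Hubble units

rho0 = b / 18.0 * (r0 + 6.0 * mu / b) * (3.0 * h0 - phi0) ** 2 - b / 24.0 * r0 ** 2 - 3.0 * a2 * h0 ** 2
h1 = (mu * r0 / (6.0 * a2) - rho0 / (6.0 * a2) - 2.0 * h0 ** 2) / h0
phi1 = (r0 / (2.0 * a2) - rho0 / (2.0 * a2) - 3.0 * h0 * phi0 + phi0 ** 2 / 3.0) / h0
r1 = -2.0 / (3.0 * h0) * (r0 + 6.0 * mu / b) * phi0

print(round(rho0, 3))                               # ~1.0, consistent with rho_0 quoted below
print(round(h1, 3), round(phi1, 3), round(r1, 3))   # first-order coefficients of H, Phi, R
```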
The Constraints of Parameters via SNeIa {#sec:fitting}
=======================================
The most common approach to test a cosmological model is the supernovae fitting. In this section we attempt to fit the model parameters and the initial values to the current type Ia supernovae data.
The supernova data we use here are the Union 2 dataset (N=557), which is the most comprehensive one to date, combining the earlier SNeIa datasets in a homogeneous manner. It consists of the distance moduli $\mu_{obs}$, which equal the difference between the apparent magnitude $m_{i}$ and the absolute magnitude $M_{i}$, the redshifts $z_{i}$ of the supernovae, and the covariance matrix $C_{SN}$ representing the statistical and systematic errors. By comparing the theoretical distance modulus $\mu_{th}$, derived from the cosmological model, with the observational data, we can obtain constraints on the model parameters and initial values as well as the goodness of the model.
As stated above, the SNY model predicts a specific form of the Hubble parameter $H(a; a_{2},b,\varphi_{0},r_{0})$ as a function of the scale factor, which is explicitly expressed in Eqs., and . Using the relation between the scale factor $a$ and the redshift $z$, it is easy to rewrite $H(a)$ in terms of redshift, $$\begin{aligned}
\label{eq:hubble-constant}
H(z; a_{2},b,\varphi_{0},r_{0})=\sum^{\infty}_{n=0}(-1)^{n}h_{n}\left(\frac{z}{1+z}\right)^{n},\end{aligned}$$ which is more convenient to compare to the supernovae data.
The theoretical distance modulus is related to the luminosity distance $d_{L}$ by $$\begin{aligned}
\label{eq:modulus}
\mu_{th}(z_{i})&=&5 \log_{10}\left(\frac{d_{L}(z_{i})}{\mathrm{Mpc}}\right)+25 \nonumber\\
&=&5 \log_{10}D_{L}(z_{i})+5\log_{10}\left( \frac{c H_{0}^{-1}}{\mathrm{Mpc}} \right)+25 \nonumber\\[0.12cm]
&=& 5 \log_{10}D_{L}(z_{i})-5\log_{10}h +42.38,\end{aligned}$$ where the $D_{L}(z)$ is the dimensionless ’Hubble-constant free’ luminosity distance defined by $D_{L}(z)=H_{0}d_{L}(z)/c$.
For the spatially flat model we consider here, the ’Hubble-constant free’ luminosity distance could be expressed in terms of Hubble parameter $H(z;a_{2},b,\phi_{0},r_{0})$, Eq. , $$\begin{aligned}
\label{eq:dL}
D_{L}(z)&=& (1+z)\int^{z}_{0} \mathrm{d}z' \frac{1}{H(z'; a_{2},b,\phi_{0},r_{0})}.\end{aligned}$$ Due to the normal distribution of errors, the $\chi^{2}$ method could be used as the maximum likelihood estimator to compare the theoretical models and the observational data, which would show us the best-fit parameters ($a_{2},
b, \varphi_{0}$ $,r_{0}$) and the goodness of model. The $\chi^{2}$ here for the SNeIa data is $$\begin{aligned}
\label{eq:chi22}
\chi^{2}(\theta)&=&\sum^{N}_{i,j}\left[\mu_{obs}(z_{i})-\mu_{th}(z_{i})\right]
(C_{SN}^{-1})_{i j}[\mu_{obs}(z_{j})-\mu_{th}(z_{j})]\nonumber \\
&=&\sum^{N}_{i,j}\left[\mu_{obs}(z_{i})-5\log_{10}D_{L}(z_{i};\theta)-\mu_{0}\right] (C_{SN}^{-1})_{i j}[\mu_{obs}(z_{j})-5\log_{10}D_{L}(z_{j};\theta)-\mu_{0}].\end{aligned}$$ where $\mu_{0}=-5\log_{10}h + 42.38$ and $\theta$ denotes the model parameters and initial values. The parameter $\mu_{0}$ is a nuisance parameter, whose contribution we are not interested in. By marginalization over this parameter $\mu_{0}$, we obtain $$\begin{aligned}
\label{eq:chi23}
\tilde{\chi}^{2}(\theta)=A(\theta)-\frac{B(\theta)^{2}}{C}+\ln\left(\frac{C}{2\pi}\right),\end{aligned}$$ where $$\begin{aligned}
\label{eq:marginalization}
A(\theta)&=&\sum^{N}_{i,j}\left[\mu_{obs}(z_{i};\theta)-5\log_{10}D_{L}(z_{i};\theta)\right](C_{SN}^{-1})_{i
j}[\mu_{obs}(z_{j};\theta)-5\log_{10}D_{L}(z_{j};\theta)],\\
B(\theta)&=&\sum^{N}_{i,j} (C_{SN}^{-1})_{i j}[\mu_{obs}(z_{j};\theta)-5\log_{10}D_{L}(z_{j};\theta)],\\
C(\theta)&=&\sum^{N}_{i,j}(C_{SN}^{-1})_{i,j}.\end{aligned}$$ By minimizing the $\chi^{2}$, we find the best-fit values of the parameters and initial conditions ($a_{2}, b, \varphi_{0}, r_{0}$) of the SNY model [^3], as shown in Tab.\[tab:best-fit\].
  $a_{2}$   $b$     $\varphi_{0}$   $r_{0}$   $\chi^{2}$
  --------- ------- --------------- --------- ------------
  1.336     0.992   0.584           5.839     535.284

  : \[tab:best-fit\]The best-fit initial data and parameters
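For completeness, a short sketch of how the flat-model luminosity distance and the marginalised $\chi^{2}$ above can be evaluated in practice is given below (illustrative only; `mu_obs`, `z` and the covariance stand in for the Union 2 arrays, which are not reproduced here).

```python
import numpy as np

def D_L(z, H_of_z, n=1000):
    """Dimensionless luminosity distance for a flat model: (1+z) * integral_0^z dz'/H(z')."""
    zs = np.linspace(0.0, z, n)
    return (1.0 + z) * np.trapz(1.0 / H_of_z(zs), zs)

def marginalised_chi2(mu_obs, DL_model, cov_inv):
    """chi^2 with the nuisance offset mu_0 marginalised out: A - B^2/C + ln(C/2pi)."""
    delta = mu_obs - 5.0 * np.log10(DL_model)
    A = delta @ cov_inv @ delta
    B = np.sum(cov_inv @ delta)
    C = np.sum(cov_inv)
    return A - B ** 2 / C + np.log(C / (2.0 * np.pi))

# toy call pattern (fake moduli, identity covariance, constant H(z)=1) just to show the interfaces
z = np.array([0.1, 0.5, 1.0])
dl = np.array([D_L(zi, lambda x: np.ones_like(x)) for zi in z])
print(marginalised_chi2(np.array([38.0, 42.0, 44.0]), dl, np.eye(3)))
```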
The minimal $\chi^{2}$ here is $535.284$, whereas the value for $\Lambda$CDM is $536.634$, with $\Omega_{m}=0.27$, $\Omega_{\Lambda}=0.73$, which implies that the torsion cosmology is consistent with $\Lambda$CDM at the 1$\sigma$ level. Substituting these best-fit values back into Eq. , we obtain the initial value $\rho_{0}=1.004$. Since the parameter space here is 4-dimensional, we cannot show the confidence-level contours in a single picture, so we analyze them in two “cases”. First, we fix the initial values at their best-fit values and obtain the corresponding contours of particular confidence levels for the model parameters, as shown in Fig.\[fig:mpcz\]. Second, we fix the model parameters at their best-fit values and obtain the confidence-level contours for the initial conditions, as shown in Fig.\[fig:ivcz\]. From these two contour plots, it is easy to see that, to some extent, the sensitivity to the model parameters and initial conditions has been lowered in torsion cosmology.
![\[fig:mpcz\] The 68.3%, 95.4% and 99.7% confidence contours with respect to model parameters $a_{2},~b$, using the Union 2 dataset. Here we assume the initial conditions are $\varphi_{0}=0.584$ and $r_{0}=5.839$. The yellow point denotes the best-fit point.](fig1-mpcz.eps){width="8.5cm"}
![\[fig:ivcz\]The 68.3%, 95.4% and 99.7% confidence contours with respect to the initial values of $\varphi_{0},~r_{0}$, using the Union 2 dataset. Here we assume the model parameters are $\varphi_{0}=0.584$ and $r_{0}=5.839$. The yellow point denotes the best-fit point.](fig2-ivcz.eps){width="8.5cm"}
Fate of the Universe {#sec:future}
====================
\[sec:analytical-late-time\]
The fate of the universe is an essential issue, discussed widely for almost every cosmological model. In PGT cosmology, many works have also addressed it. In [@Shie:2008ms; @Chen:2009at], numerical analyses showed that $H$, $\Phi$ and $R$ have an approximately periodic character at late times of the evolution for $a_{2}>0$ and $b>0$. Follow-up dynamical analysis and statefinder diagnostics in [@Li:2009zzc; @Li:2009gj] indicate that this character corresponds to an asymptotically stable focus, and the related analytical discussion in [@Ao:2010mg] confirmed this conclusion. However, the works mentioned above only presented qualitative results, in which the parameters $a_{2}$ and $b$ and the initial values were set to particular positive values by hand rather than to values constrained by observational data; quantitative analyses therefore still need to be done. Hence, in order to investigate the evolution of the real universe, it is necessary to apply the parameter constraints obtained via the SNeIa data to this model.
First we set the model parameters and initial conditions to their best-fit values and solve the evolution equations numerically. The solution for the Hubble parameter and the trajectory in $H-\Phi-R$ space are plotted in Fig. \[fig:best-fit\] and Fig. \[fig:evolution\], respectively. Then we test some other values of the model parameters within the $3\sigma$ confidence interval, with the initial conditions fixed at their best-fit values, and plot the numerical solutions of $H(t)$ in the right column of Fig. \[fig:best-fit-model-initial\]. Finally, we fix the model parameters at their best-fit values and solve the evolution equations numerically for various values of the initial conditions within the $3\sigma$ confidence interval, plotting the results in the left column of Fig. \[fig:best-fit-model-initial\]. From these numerical results, it is easy to see that ($H$, $\Phi$, $R$) tend to ($0,~0,~0$) in the far future, which indicates that our universe would expand forever, asymptotically slowing to a halt. This description of the fate of the universe is in accordance with the earlier analytical work [@Ao:2010mg].
![\[fig:evolution\]The evolution orbit of ($H,\Phi,R$) with the best-fit values. The blue point indicates the initial condition (1, 0.584, 5.839), whereas the red point (0, 0, 0) denotes the final state of our universe.](fig3-evolution.eps){width="7cm"}
Summary and Conclusion {#sec:summary}
======================
We have studied the cosmology based on PGT with the even-parity scalar dynamical connection mode, $0^{+}$, so the Lagrangian we use in this paper takes the form of the SNY model, Eq. . We rewrote the equations of motion Eqs. - as dimensionless equations with respect to the scale factor $a$ rather than cosmological time, and obtained the analytical solution for the past-time evolution of this new set of equations, as shown in Eqs. -. We then investigated the goodness of this model by comparing these theoretical results with the latest observational data, the Union 2 SNeIa dataset. We found the best-fit values of the model parameters and initial conditions ($a_{2}=1.336,~b=0.992,$ $ ~\varphi_{0}=0.584$ and $r_{0}=5.839$), whose associated minimal $\chi^{2}$ (535.284) is consistent with $\Lambda$CDM at the 1$\sigma$ level. Furthermore, from the confidence-level contours in Figs. \[fig:mpcz\] and \[fig:ivcz\], it is easy to see that, to some extent, the fine-tuning problem has been alleviated in the SNY model. Next, we extended our investigation to the future evolution. We plotted the whole evolution orbit of ($H,\Phi,R$), from the past to the future, with the best-fit values in Fig. \[fig:evolution\], which gives a rough picture of the whole evolution. Finally, we tested other values of the parameters and initial conditions and found that $H,\Phi,R$ all tend to zero in the infinite future, indicating that our universe will expand forever, asymptotically slowing to a halt. This description of the fate of our universe is consistent with the analysis conducted above and with the earlier works [@Li:2009zzc; @Li:2009gj; @Ao:2010mg]. Thus, torsion cosmology is a “competitive” model for explaining the cosmic acceleration, one that does not need to introduce any exotic matter component.
In comparison to other models of the accelerating universe, the torsion cosmology of PGT is new and still has a great number of issues to study. For instance, we could extend the research on the SNY model to the BHN model, which generalizes the SNY model to one with both $0^{\pm}$ modes and even the coupled term of these two modes. We could also investigate the effect of the $0^{-}$ mode in the very early universe, which might leave imprints in the CMBR. These issues will be considered in upcoming papers.
We wish to acknowledge the support of the SRFDP under Grant No 200931271104 and Shanghai Natural Science Foundation, China Grant No. 10ZR1422000.
[^1]: the Greek indices $\alpha,\beta, \gamma...$ denote the 4d orthonormal indices, whereas the Latin indices denote 4d coordinate (holonomic) indices and $i,j,k,...$ denotes
[^2]: Here, $a(t)$ is set to unity.
[^3]: The full numerical calculations are performed using Matlab here.
---
abstract: 'This paper presents optical($ugri$H$\alpha$)–infrared([*JHK*]{}s,3.6–8.0$\mu$m) photometry, and [*Gaia*]{} astrometry of 55 Classical T-Tauri stars (CTTS) in the star-forming region Sh2-012, and its central cluster NGC6383. The sample was identified based on photometric H$\alpha$ emission line widths, and has a median age of 2.8$\pm$1.6Myr, with a mass range between 0.3–1$M_{\odot}$. 94% of CTTS with near-infrared cross-matches fall on the near-infrared T-Tauri locus, with all stars having mid-infrared photometry exhibiting evidence for accreting circumstellar discs. CTTS are found concentrated around the central cluster NGC6383, and towards the bright rims located at the edges of Sh2-012. Stars across the region have similar ages, suggestive of a single burst of star formation. Mass accretion rates ($\dot{M}_{\rmn{acc}}$) estimated via H$\alpha$ and $u$-band line intensities show a scatter (0.3dex) similar to spectroscopic studies, indicating the suitability of H$\alpha$ photometry to estimate $\dot{M}_{\rmn{acc}}$. Examining the variation of $\dot{M}_{\rmn{acc}}$ with stellar mass ($M_{\ast}$), we find a smaller intercept in the $\dot{M}_{\rmn{acc}}$-$M_{\ast}$ relation than oft-quoted in the literature, providing evidence to discriminate between competing theories of protoplanetary disc evolution.'
author:
- |
V. M. Kalari$^{1,2}$[^1]\
$^{1}$Gemini Observatory, Southern Operations Center, c/o AURA, Casilla 603, La Serena, Chile\
$^{2}$Gemini-CONICYT, Departamento de Astronom[í]{}a, Universidad de Chile, Casilla 36-D Santiago, Chile
bibliography:
- '1.bib'
date: 16 December 2014
title: 'Classical T-Tauri stars with VPHAS+: II: NGC6383 in Sh2-012[^2][^3]'
---
\[firstpage\]
accretion, accretion discs, stars: pre-main sequence, stars: variables: TTauri, open clusters and associations: individual: Sh2-012, NGC6383
Introduction
============
In the current paradigm of pre-main sequence (PMS) evolution, optically visible PMS stars, i.e. Classical T-Tauri stars (CTTS), accrete gas from their circumstellar disc via a stellar magnetosphere [@konigl; @Calhart92], facilitated in part by the removal of angular momentum via outflows. This accretion process results in a combination of unique signatures including excess ultraviolet continuum and line emission (notably in H$\alpha$), and infrared excess (at $\lambda>$2$\mu$m) due to the circumstellar disc [@gullbring98; @muz03]. Mass accretion ($\dot{M}_{\rmn{acc}}$) results in mass gain, eventually leading to the main-sequence turn-on, while the remaining disc coagulates and forms planetary systems. How T-Tauri stars accrete mass from their disc while losing angular momentum remains an outstanding question [@Hart16]. Understanding this process is essential as it controls the evolution of protoplanetary discs and the planetary systems within [@disc]. Currently, viscous evolution [@hart98; @balbus] acting in conjunction with sources of ionisation (e.g. gamma-rays in [@gammie], or, as summarised in [@gorti; @ercolano], internal photoevaporation, UV radiation, or X-rays) is considered the chief accretion driver. Hydrodynamic turbulence [@hart18] and Bondi-Hoyle accretion [@pad06; @Hart08] have also been proposed. These competing theories predict variations in how the $\dot{M}_{\rmn{acc}}$ of PMS stars changes with respect to stellar mass and age. Hence, obtaining large samples of stellar and accretion estimates of PMS stars is key to understanding disc evolution.
The observational study of accretion in young stars has been the subject of a large body of work (see @Hart16 for a recent review). Conventionally, accretion properties of PMS stars are obtained from spectroscopic studies of a few select objects (e.g., @muz03 [@natta06; @herc08; @kalarivink; @manara17]), based either on their UV continuum excess or on emission line strengths. Recently, [@de10; @geert11] and [@Kalari15] demonstrated a method using H$\alpha$ photometry to estimate $\dot{M}_{\rmn{acc}}$, with results comparable to emission-line spectroscopy and $U$-band photometry respectively. This opens the possibility of identifying CTTS and estimating their $\dot{M}_{\rmn{acc}}$ and stellar properties in a homogeneous manner with clear detection limits, thereby populating the sample of measured accretion rates as a function of stellar mass and age uniformly. In tandem with current astrometric [@gaia] and deep infrared photometric (e.g., @glimpse [@vvv]) surveys, used to remove possible outliers and to identify discs among the PMS population respectively, we are endowed with powerful tools to study the accretion and disc properties of pre-main sequence stars homogeneously in a given star-forming region.
Work utilising large surveys to estimate accretion properties homogeneously was previously carried out by [@roman02; @de10; @geert11; @rig11; @manara12] and [@Kalari15]. In [@Kalari15], we used $ugri$H$\alpha$ optical photometry from VPHAS+ (the VST/OmegaCAM Photometric H-Alpha Survey), along with archival near- and mid-infrared data, to identify CTTS in the Lagoon Nebula star-forming region and to estimate, for the first time, $\dot{M}_{\rmn{acc}}$ from H$\alpha$ and $u$-band data, comparing our results to spectroscopic measurements (finding an almost 1:1 relation within 2$\sigma$). In this paper, we continue that work by studying Sh2-012, another star-forming region in the Sagittarius OB1 association (which also contains NGC6530, studied in [@Kalari15], and NGC6531). In addition to the optical VPHAS+ and infrared survey photometry, we utilise the recent [*Gaia*]{} data release (DR2), which provides proper motions and parallaxes of stars with limiting magnitudes similar to those of the optical and infrared data. Taken together, these data allow for a valuable insight into the accretion, disc and kinematic properties of CTTS in Sh2-012 in a homogeneous manner.
Sh2-012 and the central cluster NGC6383
---------------------------------------
Sh2-012 (also known as RCW132 or Gum67) is a prime target for this study [@sharpless]. It is a comparatively poorly studied star-forming H[II]{} region in the Carina-Sagittarius arm [@Rauw08], and forms part of the larger Sagittarius OB1 association of star-forming regions found along this section of the arm. The central open cluster NGC6383 (also Collinder 335) is surrounded by the larger H[II]{} region, which encompasses a total area of around 2 sq. deg. It is bounded by a diffuse shell-shaped structure with a radius of nearly 1$\degr$. Most likely, the diffuse shell is ionised by the central star of NGC6383, HD159176, an O7V binary with an age of 2.3–2.8Myr [@Rauw10]. The diffuse shell is resplendent with numerous bright pillars and rims, which may be the sites of current and future star formation. The region is dotted with small optically dark clouds, indicating regions of possibly higher extinction [@Rauw10]. Although approximately circular in shape, the H[II]{} region is not uniformly ionised.
The majority of the cluster population of NGC6383 has a rather low and uniform reddening of $E(B-V)$=0.32. NGC6383 is conveniently located at a distance of around 1300pc (although lower and higher distances have been reported in the review by @Rauw08) for studying the low-mass CTTS population with VPHAS+ photometry, as the dynamic magnitude range of VPHAS+ (13$<r<$21) corresponds to a mass range of 0.2–2$M_{\odot}$ at this distance and reddening. Assuming that the central cluster and the surrounding population have similar reddening and distance, Sh2-012 is an interesting and fruitful place to observe on-going star formation through the study of CTTS. The central cluster has been the subject of past studies in the literature. Briefly, early optical broadband photoelectric photometry suggested a young age of $\sim$2Myr for NGC6383 [@eggen; @fitz; @loydevans]. Follow-up optical and near-infrared photometric and spectroscopic studies found that many stars had infrared excesses resembling discs, further providing evidence for the presence of CTTS and on-going star formation [@the; @denancker; @paun]. An X-ray study by [@rauw02] found evidence for a number of candidate pre-main sequence stars. Finally, a detailed recent study using a combination of optical spectroscopy and $UBV$($RI$)$_c$H$\alpha$ photometry was conducted by [@Rauw10] over a large area around the central cluster, covering the H[II]{} region. The authors, in addition to confirming a distance of around 1300pc and a mean reddening towards cluster members of 0.32, found that X-ray detected pre-main sequence candidates had low levels of H$\alpha$ emission (likely because they are Weak Line T-Tauri stars; see e.g. @Hart16), and also found severe contamination by H$\alpha$ emitters from foreground/background populations. This is not surprising given the close location of the region to the Galactic centre, and its location along our line of sight towards the Galactic arm. Although no detailed study of the formation scenario has been put forward in the literature, [@Rauw08] have suggested that the numerous bright rims and pillars may be the sites of future triggered star formation.
Therefore, it is interesting to study the CTTS in the region both from the perspective of gaining information about pre-main sequence accretion, and from that of understanding the formation scenario of the region. By studying the complete H[II]{} region, we are able to add significantly to the literature by presenting the first detailed study of CTTS across the entire Sh2-012 region. As this study is based on identifying accretors through H$\alpha$ emission and infrared excesses, it negates the contaminants introduced by studying pre-main sequence stars identified purely from colour-magnitude diagrams. With the addition of kinematic information from the recent [*Gaia*]{} data release, we can exclude the large number of contaminants found in the H$\alpha$-based study of [@Rauw10]. With this information in hand, we can study the properties of a large number of CTTS identified in a homogeneous manner, and also study the detailed formation scenario and substructures in the Sh2-012 region.
This paper is organised as follows– in Section 2 we present the data utilised in this study. In section 3 we identify CTTS and estimate their stellar, accretion and disc properties. In Section 4, we discuss the star formation scenario in Sh2-012, and accretion properties of the identified CTTS. Finally, a summary of our results is presented in Section 5.
![($r-$H$\alpha$) vs. ($r-i$) diagram. The dashed line is the interpolated model track reddened by $E(B-V)$=0.32 assuming a standard Galactic reddening law. Grey dots are stars meeting our selection criterion, and blue crosses are stars meeting our selection criterion that are identified as candidate CTTS based on their EW$_{\rmn{H}\alpha}$. The reddening vector for $A_{V}$=1 is also shown.[]{data-label="fig:rha"}](2colour2.png){width="86mm" height="58mm"}
Data
====
VPHAS+ imaging
--------------
The VPHAS+ survey [@drew14] observed the Southern Galactic Plane and bulge in the $ugri$H$\alpha$ filters using the OmegaCAM CCD imager mounted on the 2.6metre VLT Survey Telescope (VST) located at Cerro Paranal, Chile. The OmegaCAM imager consists of 32 E2V 2k$\times$4k CCDs which capture a $1^{\circ}$ square field of view at a resolution of 0.21$\arcsec$pixel$^{-1}$. Gaps between the chips are minimised by offset exposures, and the total coverage of each field of view is greater than 98percent. The $ugri$ bandpasses are Sloan broadband filters, while the custom-built H$\alpha$ filter has a central wavelength and bandpass of 6588 and 107Å respectively. Exposure times in these filters are 150, 30, 25, 25, and 120*s* respectively, and observations reach a 5$\sigma$ depth at H$\alpha$=20.5-21.0mag and $g$=22.2-22.7mag in crowded fields. Practical constraints have meant that the blue ($ug$) and red ($ri$H$\alpha$) observations are carried out separately. Therefore, an additional $r$ observation is carried out with every blue observation to serve as a linking reference.
To identify and characterise CTTS candidates in the star-forming region Sh2-012, we define our area of study as a 2$\degr$$\times$2$\degr$ region centred on Right Ascension (J2000) 17$^{h}$34$^{m}$09$^{s}$, Declination (J2000) $-$32$\degr$32$\arcmin$10$\arcsec$. The data used in this publication were part of the VPHAS+ second data release. The reader is referred to the release document[[^4]]{} for details of the data reduction procedure. To summarise, the IRAF[[^5]]{} procedure DAOFIND is run on each $i$-band exposure to create a master list of sources, which is supplemented by $ugr$H$\alpha$ source detections to capture any faint blue or H$\alpha$-bright sources. A point spread function was fit using the DAOPHOT procedure to obtain photometry in all bands in Vega magnitudes. AAVSO Photometric All-Sky Survey[[^6]]{} (APASS) photometry is utilised to bring the survey photometry onto a global calibration scale. To select sources with reliable photometry we apply the following selection criteria:
1. $22$$>$$r$$>$13 in both the red and blue filter sets to avoid saturated and faint sources;
2. signal to noise ratio $>$10 in $ri$H$\alpha$ bands;
3. point spread function fit of $\chi$$<$1.5 to select stellar or star-like sources
In the resulting sample, 2091573 unique sources have $ri$H$\alpha$ photometry meeting our quality criteria. The saturation limit of the VPHAS+ photometry ($r$$\sim$13mag) means that the upper main sequence of the cluster is saturated.
[*Gaia*]{} DR2 astrometry
-------------------------
In addition to the VPHAS+ data, we make use of astrometric data from [*Gaia*]{}DR2 [@gaia]. These data contain parallaxes ($\pi$), and proper motion vectors in Right Ascension ($\mu_{\alpha}$) and Declination ($\mu_{\delta}$), calculated from the first two years of [*Gaia*]{} observations (2014-2016). The [*Gaia*]{} DR2 dataset has a detection threshold of $G\sim$20.5mag towards Sh2-012, although it is incomplete for very bright ($G<$7mag) and high proper-motion ($>$0.6 arcsecyr$^{-1}$) stars. Global systematic uncertainties on the parallaxes and proper motions are believed to be around $\pm$0.1mas and $\pm$0.1masyr$^{-1}$ respectively. In addition, comparison with known quasars indicates an overall negative parallax bias of 0.029mas [@gaia2], which we add to our parallax measurements. We cross-matched our VPHAS+ source list with the [*Gaia*]{}DR2 dataset within a radius of 0.1$\arcsec$ (the cross-match radius was chosen after experimenting with the astrometric fidelity of both catalogues). Finally, we applied the [@gaia2] C-1 astrometric equation to remove sources with unreliable astrometry. We impose these astrometric criteria on the VPHAS+ dataset as the cluster lies in the direction of the Galactic plane from our viewpoint, and a significant proportion of background stars would be erroneously selected using photometry alone. In total, we have 1296410 stars with high-quality astrometry and photometry. These form the source dataset from which we will identify CTTS.
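As an illustration of this step, the positional cross-match and parallax zero-point correction can be performed with standard `astropy` routines. The sketch below is not the authors' pipeline; the argument names are placeholders, and the input coordinate and parallax arrays are assumed to come from the two catalogues described above.

``` python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_to_gaia(ra_opt, dec_opt, ra_gaia, dec_gaia, parallax_gaia,
                  radius_arcsec=0.1, zeropoint_mas=0.029):
    """Match optical positions (deg) to Gaia DR2 positions (deg) within
    `radius_arcsec`, returning indices into both catalogues and the
    zero-point-corrected parallaxes (mas) of the matches."""
    opt = SkyCoord(ra=np.asarray(ra_opt) * u.deg, dec=np.asarray(dec_opt) * u.deg)
    gaia = SkyCoord(ra=np.asarray(ra_gaia) * u.deg, dec=np.asarray(dec_gaia) * u.deg)
    idx, sep2d, _ = opt.match_to_catalog_sky(gaia)
    good = sep2d < radius_arcsec * u.arcsec
    plx = np.asarray(parallax_gaia)[idx[good]] + zeropoint_mas
    return np.where(good)[0], idx[good], plx
```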
Near and mid-Infrared imaging
-----------------------------
In addition to optical photometry, we make use of near- and mid-infrared survey photometry to identify the disc stage of the CTTS. To do so, we cross-matched our catalogue with the $ZYJHK$s VVV (Vista Variables in the Via Lactea; @vvv) survey using a cross-match radius of 0.3$\arcsec$. We discarded sources having random errors $>$0.1mag in $JHK$s, or classified as having a non-stellar profile. In addition, we cross-matched our sample with the GLIMPSE (Galactic Legacy Infrared Midplane Survey Extraordinaire) catalogue, which provides photometry at 3.6, 4.5, 5.8 and 8.0$\mu$m and was obtained using the [*Spitzer*]{} space telescope. We cross-matched with our parent catalogue using a radius of 1$\arcsec$, checking the $K$s magnitudes of stars given in the GLIMPSE survey catalogue (which are taken from the 2MASS survey of [@cutri03]) against the VVV $K$s magnitudes. In addition, we discarded stars having random photometric uncertainties greater than 0.2mag in any band.
Classical TTauri stars
======================
Identifying CTTS using their H$\alpha$ photometric equivalent width
-------------------------------------------------------------------
The ($r-i$) vs. ($r-$H$\alpha$) diagram (Fig. \[fig:rha\]) is used for identifying H$\alpha$ emission line stars. The ($r-$H$\alpha$) colour measures the strength of the H$\alpha$ line relative to the $r$-band photospheric continuum. It is only slightly affected by extinction, owing to the negligible difference between the extinction coefficients of these two bands. For late-type stars with low extinction, the ($r-i$) colour is a good proxy for spectral type. Since most main-sequence stars do not have H$\alpha$ in emission, their ($r-$H$\alpha$) colour at each spectral type provides a template against which the ($r-$H$\alpha$) colour excess caused by any H$\alpha$ emission can be measured. Following [@Kalari15], it is given by ($r-$H$\alpha$)$_{\rmn{excess}}$=($r-$H$\alpha$)$_{\rmn{observed}}-$($r-$H$\alpha$)$_{\rmn{model}}$. The ($r-$H$\alpha$)$_{\rmn{excess}}$ is used to compute the H$\alpha$ equivalent width (EW$_{{\rmn{H}}\alpha}$) from the equation
$$\rmn{EW}_{\rmn{H}\alpha}= \rmn{W}\times\left[1-10^{0.4\times(r-\rmn{H}\alpha)_{\rmn{excess}}}\right]$$
following [@de10], where W is the rectangular bandwidth of the H$\alpha$ filter. The photometric EW$_{\rmn{H}\alpha}$ is thus measured for all stars in our sample having $ri$H$\alpha$ photometry.
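The conversion from colour excess to equivalent width can be written compactly; the short sketch below simply encodes the equation above and is not the released HaEW code, and the default bandwidth of 107Å (the quoted width of the VPHAS+ H$\alpha$ filter) is an assumption for the rectangular bandwidth W.

``` python
def halpha_ew(r_minus_ha_obs, r_minus_ha_model, bandwidth_aa=107.0):
    """Photometric Halpha equivalent width in Angstrom.

    r_minus_ha_obs   : observed (r - Halpha) colour
    r_minus_ha_model : template main-sequence colour at the same (r - i)
    bandwidth_aa     : rectangular bandwidth W of the Halpha filter
    Negative values correspond to emission.
    """
    excess = r_minus_ha_obs - r_minus_ha_model
    return bandwidth_aa * (1.0 - 10.0 ** (0.4 * excess))
```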
We use the spectral type–EW$_{\rmn{H}\alpha}$ criteria of [@barr03] to select CTTS from the identified H$\alpha$ emission line stars. H$\alpha$ emission line stars with spectral types earlier than K5 and EW$_{\rmn{H}\alpha}$$<$$-18$Å, K5-M2.5 and EW$_{\rmn{H}\alpha}$$<$$-25$Å, and M2.5-M6 and EW$_{\rmn{H}\alpha}$$<$$-38$Å are selected. These criteria account for the fact that the maximum errors due to a combination of the reddening uncertainty and random photometric errors are around 9–12Å depending on spectral type, and are designed to exclude possible interlopers such as chromospherically active late-type stars. We identify 156 CTTS on this basis. For a detailed comparison of the accuracy of photometric EW$_{\rmn{H}\alpha}$ against spectroscopically measured values, the reader is referred to [@geert11] and [@Kalari15]. In addition, we present in electronic form the code and stellar models utilised for calculating photometric EW$_{\rmn{H}\alpha}$$^{*}$.
![Histogram of proper motions in Right Ascension ($\mu_{\alpha}$). Overplotted as a solid line is the fit of a double-peaked Gaussian to the histogram simulating the cluster and field population. The inter-quartile range of the distribution, and the wings of the primary Gaussian coincide, and are taken as the range of $\mu_{\alpha}$ of the cluster members (blue dashed lines).[]{data-label="mur"}](fitz.pdf){width="78mm" height="60mm"}
![Histogram of proper motions in Declination ($\mu_{\delta}$). Overplotted as a solid line is the fit of a double-peaked Gaussian to the histogram simulating the cluster and field population. The inter-quartile range of the distribution, and the wings of the primary Gaussian coincide, and are taken as the range of $\mu_{\delta}$ of the cluster members (blue dashed lines).[]{data-label="mud"}](fitz2.pdf){width="78mm" height="60mm"}
![Position of stars selected using the EW$_{\rmn{H}\alpha}$ criteria (grey circles) in proper motion ($\mu_{\alpha}$-$\mu_{\delta}$) space. The limits of the cluster are shown as a solid box, with the kinematically selected members shown as blue crosses.[]{data-label="kinematic"}](pms.pdf){width="72mm" height="58mm"}
Removal of kinematic outliers
-----------------------------
A significant number of stars identified as CTTS based on their photometric EW$_{\rmn{H}\alpha}$ have kinematic properties statistically different from the mean of the population. These stars are very likely contaminant non-members found further along the spiral arm or located in front of the region [@Rauw10], although a small fraction might be runaway or dynamically ejected stars that are true members. However, we remain conservative in our choice and exclude all kinematic outliers as non-members.
To select the kinematic outliers, we first identify the cluster population in proper-motion space. We fit a double-peaked Gaussian to the distributions of $\mu_\alpha$ and $\mu_\delta$, where the broad peak identifies the background population and the narrow peak the cluster population. The resulting fits give the centre of the cluster in proper-motion space as $\mu_\alpha$=2.22$\pm$0.65masyr$^{-1}$, $\mu_\delta$=$-$1.50$\pm$0.5masyr$^{-1}$. In Fig.\[kinematic\], we show the selected CTTS in $\mu_\alpha$-$\mu_\delta$ space. The cluster exhibits a well-defined centre, corresponding to the values found from the histograms. To define the cluster members, we select all stars within the inter-quartile range of the mean (which is resistant to outliers in the distribution), indicated by the solid lines. From this selection, 55 CTTS are selected as kinematic members of Sh2-012. On the sky, the kinematically selected sub-sample is clustered, whereas the outliers are evenly distributed, further suggesting that the applied rejection criterion is useful for removing background/foreground contaminants. Finally, we calculate the median distance of the kinematically selected population as $d$=1342$\pm$130pc based on [*Gaia*]{}DR2 parallaxes. This distance is in excellent agreement (within errors) with those measured from the analysis of the eclipsing binary V701Sco [@bell87], and from spectroscopic parallaxes [@Rauw08; @Rauw10]. The [*Gaia*]{}DR2 results rule out the high and low distance estimates of 1700 and 985pc from [@paun] and [@khar] respectively. The distance we adopt for our analysis is 1340pc.
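A minimal sketch of the double-Gaussian fit to the proper-motion histograms is given below. It is illustrative only: the binning, initial guesses, and fitting details used for the paper are not specified in the text, and the sketch assumes the narrow cluster component is the first of the two Gaussians.

``` python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    """Narrow (cluster) peak plus broad (field) peak."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

def cluster_peak(pm, bins=200, p0=(100.0, 2.0, 0.6, 50.0, 0.0, 5.0)):
    """Fit the proper-motion histogram and return the centre and width
    of the narrow cluster component (mas/yr).  The initial guesses p0
    are placeholders and should be tuned to the data."""
    counts, edges = np.histogram(np.asarray(pm), bins=bins)
    centres = 0.5 * (edges[1:] + edges[:-1])
    popt, _ = curve_fit(two_gaussians, centres, counts, p0=p0)
    return popt[1], abs(popt[2])
```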
Stellar properties
------------------
![image](cmddist1700pc.pdf){width="90mm" height="75mm"}
The stellar mass ($M_{*}$) and age ($t_{*}$) of each kinematically selected CTTS are estimated by interpolating its position in the observed $r$ versus $(r-i)$ CMD with respect to PMS tracks and isochrones (Fig. \[fig:cmd\]). We employ the [@bress12] solar-metallicity single-star isochrones and tracks, and compare the estimated parameters with those obtained using the [@siess00] and [@pisa] stellar models. We assume the mean reddening $E(B-V)$=0.32 and a standard Galactic reddening law with $R_{V}$=3.1, although this may be reflective only of the bulk population rather than of the entire cluster [@Rauw10]. The [@pisa] and [@siess00] models were converted from the theoretical plane to VPHAS+ magnitudes using the ATLAS9 stellar models [@cast04]. Errors on the interpolated values were calculated by propagating the photometric uncertainties, and by including absolute extinction and distance uncertainties of 0.2mag and 100pc respectively.
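One simple way to carry out such an interpolation is a two-dimensional linear interpolation over the (colour, magnitude) plane, sketched below under the assumption that the isochrone grid has already been reddened and shifted to the adopted distance. The exact interpolation scheme used in the paper is not specified, so this is only illustrative.

``` python
import numpy as np
from scipy.interpolate import griddata

def mass_age_from_cmd(r_obs, ri_obs, iso_r, iso_ri, iso_mass, iso_age):
    """Interpolate mass and age from a star's position in the apparent
    r vs (r-i) CMD, given flattened isochrone grids (apparent magnitudes
    and colours already reddened and shifted to the cluster distance)."""
    grid = np.column_stack([np.ravel(iso_ri), np.ravel(iso_r)])
    star = np.column_stack([np.atleast_1d(ri_obs), np.atleast_1d(r_obs)])
    mass = griddata(grid, np.ravel(iso_mass), star, method='linear')
    age = griddata(grid, np.ravel(iso_age), star, method='linear')
    return mass, age
```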
The mean age of the CTTS derived from the [@bress12] isochrones is 2.8$\pm$1.6Myr, with a range of 0.4-18Myr (Fig. \[fig:comphist\]a) and an interquartile range of 1.4–3.8Myr. The error on the mean age takes into account the reddening uncertainties and photometric errors. The standard deviation of the CTTS ages is 3.4Myr, and 70% of all CTTS have ages between 1–4Myr. The individual ages of the CTTS vary with the isochrones used (see Fig. \[fig:comphist\]a). The mean ages estimated using the [@pisa] (2.9$\pm$1.7Myr) and the [@siess00] (3.5$\pm$2.4Myr) isochrones are higher than that from the [@bress12] isochrones. It is important to stress that the statistical age of a CTTS population is considered relatively accurate compared to the individual ages of CTTS, given the combination of reddening and distance uncertainties [@mayne08]. The differences in ages derived from different isochrones follow the pattern observed in [@herc15]. The authors of that paper found that for stars with spectral types earlier than M (the bulk of our sample), the [@bress12] isochrones give a smaller age than the [@siess00] isochrones.
Our results agree well with literature age estimates of stars in the central open cluster NGC6383. In particular, the age of NGC6383 X-ray emitting stars derived by [@Rauw10] is 2.8$\pm$0.5Myr. The ages of cluster members identified using photometric criteria by [@paun] are between 1–4Myr, although we caution that their distance estimate is significantly higher (1700pc). [@fitz] quote an age of 1.7$\pm$0.4Myr for bright cluster members, assuming a distance of 1500pc. The central O7V binary HD159176 has an age of 2.8Myr according to [@fitz], and of around 2.3–2.8Myr following [@Rauw10], all of which agree well with our mean ages. It is important to note that both the [@Rauw10] and [@paun] ages are derived for photometrically identified members; in the case of [@Rauw10], the ages are derived using X-ray identified cluster members. It is also generally accepted that X-ray emitting PMS stars (likely Weak Line T-Tauri stars) are older than CTTS.
The interpolated mass distribution is shown in Fig. \[fig:comphist\]b. The masses determined using different isochrones agree well within the errors for individual stars (barring a few outliers). The mass range is between 0.3–0.9$M_{\odot}$, with a median mass of 0.5$M_{\odot}$. The lower mass cut-off corresponds to the H$\alpha$-band detection limit, and we are relatively complete at masses above 0.4$M_{\odot}$ for the 2Myr isochrone.
![The distribution of masses (a), ages (b) and $\dot{M}_{\rmn{acc}}$ (c) determined using different stellar evolutionary models. Solid blue lines are results estimated using the Bressan et al. (2012) models, dashed red lines using the Tognelli et al. (2011) models and dotted green lines using the Siess et al. (2000) models.[]{data-label="fig:comphist"}](histcompare.pdf){width="80mm" height="135mm"}
Accretion properties
--------------------
It is generally accepted that the stellar magnetosphere truncates the circumstellar disc at an inner radius ($R_{\rmn{in}}$). Matter is transferred along the magnetic field lines from this truncation radius, and releases energy when impacting on the star. The energy generated by magnetospheric accretion ($L_{\rmn{acc}}$) heats and ionises the circumstellar gas, thereby causing line emission. The reradiated energy that has gone towards ionising the gas can be measured from the luminosity of the line emission, which is correlated with $L_{\rmn{acc}}$ [@Hart16]. Assuming matter is in free fall from $R_{\rmn{in}}$, the $\dot{M}_{\rmn{acc}}$ can be estimated using the free-fall equation from $R_{\rmn{in}}$ to $R_{\ast}$ if one knows $M_{\ast}$ and the energy released, i.e. $L_{\rmn{acc}}$. We can therefore measure the H$\alpha$ line luminosity (${L_{\rmn{H}\alpha}}$) from the EW$_{{\rmn{H}}\alpha}$ to estimate $L_{\rmn{acc}}$ and $\dot{M}_{\rmn{acc}}$.
To estimate ${L_{\rmn{H}\alpha}}$, we first estimate the H$\alpha$ line flux ($F_{\rmn{H}\alpha}$) by subtracting the H$\alpha$ continuum flux ($F_{\rmn{continuum}}$) from the total flux given by the unreddened H$\alpha$ magnitude. We refer the reader to [@Kalari15] for details of how $F_{\rmn{H}\alpha}$ is calculated. Following [@de10], we assume that 2.4 percent of the H$\alpha$ intensity is caused by contamination from the \[N[II]{}\] $\lambda\lambda$6548, 6584Å lines, and reduce the measured intensity by that amount.
$L_{\rmn{acc}}$ is related to $L_{\rmn{H}\alpha}$ as: $$\rmn{log}~L_{\rmn{acc}} = (1.13\pm 0.07)\rmn{log}~L_{\rmn{H}\alpha}+(1.93\pm 0.23)$$ [@geert11]. Here, $\rmn{log}~L_{\rmn{acc}}$ and $\rmn{log}~L_{\rmn{H}\alpha}$ are measured in units of solar luminosity, $L_{\odot}$. The root-mean-square scatter is 0.54dex. The scatter in the observed relationship is likely caused by a combination of circumstellar absorption, variability in accretion, and unrelated sources of line emission.
The $\dot{M}_{\rmn{acc}}$ can then be estimated from the free-fall equation: $$\dot{M}_{\rmn{acc}}= \frac{L_{\rmn{acc}}R_{\ast}}{\rmn{G}M_{\ast}}\Bigg(\frac{R_{\rmn{in}}}{R_{\rmn{in}}-R_{\ast}}\Bigg).$$ $M_{\ast}$ and $R_{\ast}$ are the stellar mass and radius respectively, and $R_{\rmn{in}}$ is the truncation radius, for which we adopt $R_{\rmn{in}}$=5$\pm$2$R_{\ast}$ [@gullbring98; @vink05]. The resultant distribution of mass accretion rates is plotted in Fig. \[fig:comphist\]c. The median log($\dot{M}_{\rmn{acc}}$/$M_{\odot}yr^{-1}$) is $-$8.5. In addition to the propagated uncertainties in the interpolated stellar masses and radii, the major error on $\dot{M}_{\rmn{acc}}$ is due to the scatter in the $\rmn{log}~L_{\rmn{acc}}$–$\rmn{log}~L_{\rmn{H}\alpha}$ relation. The estimated results are given in Table 1.
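For reference, the chain from H$\alpha$ line luminosity to mass accretion rate implied by the two relations above can be sketched as follows (cgs constants, with $R_{\rmn{in}}=5R_{\ast}$). This is an illustration rather than the code used to produce Table 1.

``` python
import numpy as np

G_CGS = 6.674e-8      # cm^3 g^-1 s^-2
LSUN = 3.828e33       # erg s^-1
MSUN = 1.989e33       # g
RSUN = 6.957e10       # cm
YEAR = 3.156e7        # s

def macc_from_lha(l_ha_lsun, m_star_msun, r_star_rsun, rin_over_rstar=5.0):
    """Mass accretion rate in Msun/yr from the Halpha line luminosity (Lsun),
    using log L_acc = 1.13 log L_Ha + 1.93 and the free-fall relation."""
    l_acc = 10.0 ** (1.13 * np.log10(l_ha_lsun) + 1.93) * LSUN
    mdot = (l_acc * r_star_rsun * RSUN / (G_CGS * m_star_msun * MSUN)
            * rin_over_rstar / (rin_over_rstar - 1.0))
    return mdot * YEAR / MSUN
```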
$u$-band accretion properties
-----------------------------
As discussed in Section 3.4, it is thought that mass infall along the magnetic field lines results in an accretion shock as matter hits the stellar surface. This phenomenon is seen not only in the reradiated energy from the ionisation lines, but also as excess continuum luminosity at short wavelengths due to the accretion shocks. Similarly to the method described in Section 3.4, $L_{\rmn{acc}}$ is correlated with the excess luminosity, but at continuum ultraviolet wavelengths.
Forty CTTS have $ugr$ photometry (i.e. the blue filter set described in Section 2). Their positions in the ($u-g$) vs. ($g-r$) diagram are shown in Fig. \[fig:ugr\]a. To measure their excess $u$-band emission, we take the $(g-r)$ colour as a proxy for spectral type. The excess emission flux ($F_{u,\rmn{excess}}$) is given by $$F_{u,\rmn{excess}}= F_{0,u}\times[10^{-u_{0}/2.5}-10^{-((u-g)_{\rmn{model}}+g_{0})/2.5}].$$ $F_{0,u}$ is the $u$-band integrated reference flux, $u_{0}$ and $g_{0}$ are the dereddened magnitudes, and $(u-g)_{\rmn{model}}$ is the corresponding model colour from [@cast04]. The excess flux is used to estimate $L_{\rmn{acc}}$ using the empirical relation [@gullbring98] $$\rmn{log}~L_{\rmn{acc}} = \rmn{log}~L_{u,\rmn{excess}}+0.98.$$ $L_{u,\rmn{excess}}$ is the $u$-band excess luminosity; $L_{\rmn{acc}}$ and $L_{u,\rmn{excess}}$ are in units of solar luminosity, $L_{\odot}$. Mass accretion rates are calculated using Eq. 2, with the results compared to the H$\alpha$-derived $\dot{M}_{\rmn{acc}}$ in Fig.\[fig:ugr\]b.
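The analogous $u$-band calculation can be sketched in the same way. The reference flux $F_{0,u}$ and the adopted distance enter as explicit inputs here, since their numerical values are not quoted in the text; the function name is a placeholder.

``` python
import numpy as np

def lacc_from_uband(u0, g0, ug_model, f0_u, dist_cm, lsun=3.828e33):
    """Accretion luminosity (Lsun) from the u-band excess.

    u0, g0   : dereddened u and g magnitudes
    ug_model : photospheric (u-g) colour expected for the star's (g-r)
    f0_u     : u-band integrated reference flux (erg s^-1 cm^-2 at mag 0)
    dist_cm  : adopted distance in cm (1340 pc here)
    """
    f_excess = f0_u * (10.0 ** (-u0 / 2.5) - 10.0 ** (-(ug_model + g0) / 2.5))
    l_u_excess = 4.0 * np.pi * dist_cm ** 2 * f_excess / lsun
    return 10.0 ** (np.log10(l_u_excess) + 0.98)
```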
On comparison, we find that the mean scatter around the 1:1 relation between the H$\alpha$- and $u$-band derived accretion rates is 0.3dex. Considering that accretion variability is around 0.5dex [@cost12], the observed scatter may be due to a combination of variability (since the optical and ultraviolet observations are non-contemporaneous; see Section 2) and contamination in the measured accretion luminosity. It lies within the scatter measured spectroscopically using a variety of contemporaneously measured line and continuum indicators [@manara17], and is approximately similar to the result found in [@Kalari15]. This indicates that accretion luminosities derived from H$\alpha$ photometry are comparable to the commonly used line luminosities from spectroscopy, or to $u$-band photometry, and are a reliable indicator of the accretion line intensity. The availability of H$\alpha$ photometry from large surveys such as VPHAS+ now allows accretion rates to be determined efficiently across the plane of the Milky Way.
![image](compareu2.png){width="175mm" height="65mm"}
Disc properties
---------------
### Near-infrared properties
CTTS display near-infrared excesses when compared to a purely stellar template due to the presence of dust in their inner circumstellar discs [@cohen79]. This allows for a simple independent check on our H$\alpha$ identified CTTS.
We identify near-infrared [*[JHK]{}*]{}s counterparts of the CTTS in the VVV survey as described in Section 2.3. The limiting magnitude of the VVV survey depends on the region of sky observed. Around Sh2-012, VVV photometry reaches a depth of $K$$\sim$18mag, which corresponds roughly to a star of spectral type K7, suggesting that a considerable number of the stars in our sample should be identifiable in the VVV survey. 51 stars in our sample of 55 were found to have VVV counterparts with high-quality photometry. In Fig. \[fig:ir\]a, the ($J-H$) vs. ($H-K$) colour-colour diagram of CTTS in Sh2-012 is plotted. The dashed line is the CTTS locus of [@meyer97], which predicts the excess emission using disc accretion models having log($\dot{M}_{\rmn{acc}}$/$M_{\odot}yr^{-1}$) between $-$6 and $-8$. The solid line is the main-sequence locus from [@bess98]. Most stars ($\sim$94percent) having near-infrared colours in our sample show ($H-K$s) excesses similar to the CTTS locus, suggesting on the basis of the near-infrared diagram alone that they are CTTS. This confirms our sample as consisting primarily of CTTS undergoing accretion. Five stars lie near the main-sequence locus, indicating that they are either weak-line T-Tauri stars, or have no excess infrared emission suggestive of a circumstellar disc, and might be interlopers.
### Mid-infrared properties
We also found mid-infrared counterparts in the [*[Spitzer]{}*]{} IRAC data described in Section 2.3. 26 cross-matches within 1$\arcsec$ having high-quality photometry were found. In Fig. \[fig:ir\]b, the 3.6$\micron-$4.5$\mu$m vs. 5.8$\mu$m$-$8$\micron$ colour-colour diagram is plotted. The solid box represents the typical colours of CTTS having mean $\dot{M}_{\rmn{acc}}$=$10^{-8}$$M_{\odot}yr^{-1}$, while the areas occupied by ClassI and ClassIII sources are also shown. Most of the cross-matched CTTS candidates in our sample lie in this box, providing an additional sanity check on the results.
The slope of the spectral energy distribution (SED) in the infrared, $$\alpha_{\rm{IR}}\,=\,\frac{d\,{\rm{log}}(\lambda F_{\lambda})}{d\,{\rm{log}}\lambda},$$ (at $\lambda$$>$3$\micron$) is used to diagnose the evolutionary stage of the disk-star system [@lada76], and provides another sanity check on the presence of a circumstellar disc around the CTTS. We adopt the classification scheme of [@greene] to distinguish between systems with protostellar disks (Class I), optically thick disks (Class II), transitional/anaemic disc (Class III), or main-sequence stars. $\alpha_{\rm{IR}}$ was derived by fitting the [*[Spitzer]{}*]{} magnitudes at 3.6, 4.5, 5.8, and 8$\micron$. The resultant $\alpha_{\rm{IR}}$ values are given in Fig.\[slope\].
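The slope can be obtained from a straight-line fit to log($\lambda F_{\lambda}$) versus log$\lambda$ over the four IRAC bands. The sketch below uses approximate IRAC zero-point fluxes, treated here as assumed values, to convert magnitudes to $F_{\nu}$; it is illustrative and not the fitting code used in the paper.

``` python
import numpy as np

IRAC_LAMBDA_UM = np.array([3.6, 4.5, 5.8, 8.0])      # micron
IRAC_F0_JY = np.array([280.9, 179.7, 115.0, 64.9])   # assumed zero points, Jy

def sed_slope(irac_mags):
    """Slope alpha_IR of log(lambda*F_lambda) vs log(lambda) over 3.6-8 micron."""
    f_nu = IRAC_F0_JY * 10.0 ** (-np.asarray(irac_mags) / 2.5)   # Jy
    lam_f_lam = f_nu / IRAC_LAMBDA_UM     # proportional to lambda*F_lambda
    return np.polyfit(np.log10(IRAC_LAMBDA_UM), np.log10(lam_f_lam), 1)[0]
```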
Of the 26 accreting PMS stars that have [*[Spitzer]{}*]{} photometry at all observed wavelengths, we find that 1 has an SED slope resembling a Flat spectrum source, and 21 have SED slopes resembling ClassII YSOs. The remaining 4 stars have SED slopes of ClassIII objects, resembling transitional/anaemic discs. Overall, most of the PMS star sample have infrared excesses akin to circumstellar discs, verifying that we are dealing with genuine accreting PMS stars, and the results from the IRAC colour-colour diagrams correlate well with the SED slopes.
![image](irac.pdf){width="175mm" height="65mm"}
![Histogram of SED slopes of CTTS. Only 26 CTTS candidates having cross matches in [*[Spitzer]{}*]{} photometry are shown, with 21 sources identified as Class II sources, 4 as Class III sources and 1 as a Flat spectrum source.[]{data-label="slope"}](sed.pdf){width="78mm" height="60mm"}
Contaminants
------------
From the infrared photometry, we find that five of our stars do not have infrared colours resembling CTTS. These five stars fall on the main-sequence locus in the ($J-H$) vs. ($H-K$s) colour-colour diagram. On inspection, their distances deviate by more than 3$\sigma$ from the mean according to the [*Gaia*]{} data. Their proper motions and spatial locations are not distinctly clustered, and they are distributed near the central locus of stars in both diagrams. Neither are their locations in the colour-magnitude diagram deviant from the main bulk of stars.
Assuming that this difference is not due to incorrect cross-matches or accretion variability, we suggest that approximately one in ten stars is a likely contaminant in our sample. This would put the total number of contaminants in our sample at around $\sim$10%, or 6 stars. Our contamination is expected to be low, as we have used strict EW$_{\rm{H\alpha}}$ criteria to remove chromospheric outliers. In addition, the utilisation of proper motions from [*Gaia*]{} DR2 was essential in removing the bulk of the foreground/background population observed towards NGC6383 by [@Rauw10]. This ensures that we have a relatively clean sample of CTTS in Sh2-012 for which we have determined the stellar and accretion properties.
Discussion
==========
$\dot{M}_{\rmn{acc}}$ and stellar properties
--------------------------------------------
The $\dot{M}_{\rmn{acc}}$ of CTTS has been observed to correlate with stellar mass, following a power law of the form $\dot{M}_{\rmn{acc}}$$\propto$$M_{\ast}^{\alpha}$, although a double power law has recently been suggested, with the break at masses of $\sim$0.3$M_{\odot}$ [@manara17]. From the literature, $\alpha$ is known to vary between 1–3 [@Hart16] in the mass range 0.1–2$M_{\odot}$. From a survey of literature results, the best-fit value of $\alpha$ across this mass range is found to be $2.1$, and the resultant fit is plotted in Fig.\[maccm\] (see @thesis for details of the literature compilation). Overplotted are the accretion rates of CTTS in Sh2-012. The best-fit slope for our sample is 1.45$\pm$0.3, calculated using a linear-regression fit following the Buckley-James method, accounting for the detection limits of our survey and for errors on both variables. The detection limits on the data were calculated assuming the lower limits of our EW$_{\rm{H\alpha}}$ criteria and a 2Myr isochrone.
For the stars in Sh2-012, the range of $\dot{M}_{\rmn{acc}}$ falls well within that observed in the literature at any given mass. Based on this, and on the detection limits of our study, it is apparent that we are unable to detect small accretion rates at the lowest masses, which would explain the shallowness of our slope compared to the literature. In addition to the observed slope, there is a scatter of around 1dex at any given mass in our sample. Much of the observed scatter in this relation is real, as the amount that can be attributed to variability is too small ($\sim$0.5dex; @cost12). Additionally, the variation in the slope cannot in our case be attributed to differences between the stellar models. At these masses (0.3–1$M_{\odot}$), differences between stellar models are smaller than at lower and higher masses [@herc15], resulting in a negligible difference in the estimated stellar masses and accretion rates (Fig.\[fig:comphist\]). The resulting values of $\alpha$ using the [@siess00] and [@pisa] models are 1.5 and 1.41 respectively. The differences between the slopes using different stellar models are much smaller than the errors on the resultant fits.
Interestingly, the observed relation, in particular the determined slope of the $\dot{M}_{\rmn{acc}}$–$M_{\ast}^{\alpha}$ relation, may indicate favoured methods of disc dispersal. The value of $\alpha$ from our study is different from that predicted by the simple viscous disc evolutionary models of [@hart06]. It is consistent with viscous models combined with X-ray induced ionisation [@ercolano], which suggest a value of 1.7. But, as seen earlier, our determined slope may be affected by detection limits. Additionally, our values fall entirely within the range of previously determined $\dot{M}_{\rmn{acc}}$ from the literature, which also indicates that the measured slope could be due to detection limits. This is also the reason we do not attempt to fit the broken power law suggested by [@manara17] (additionally, the break occurs at the lower end of our stellar mass range).
However, the difference in the intercept for the CTTS in Sh2-012 compared to younger, 1–2Myr old CTTS from the literature is around 0.3dex, which is entirely consistent with the viscous accretion model [@hart06]. In the viscous disc accretion paradigm, the higher age of the region with respect to much younger regions ($<$2Myr CTTS) would result in a lower number of accretors, and smaller accretion rates, in Sh2-012. However, the large spread, and the differences in ages between various stellar models, prevent us from exhaustively testing this scenario.
An interesting alternative explanation for the difference in intercept is the idea that circumstellar discs gain mass from the environment after the initial collapse. The discs in this scenario transport material from the environment onto the star, acting as a conveyor belt for interstellar material of the molecular cloud [@manara18]. In this scenario, both the accretion of matter onto the star, and of matter from the environment onto the disc, follow Bondi-Hoyle accretion, where $\dot{M}_{\rmn{acc}}$$\propto$$M_{\ast}^{2}$ [@throop]. However, the intercept of the relation may be higher or lower depending on the environment, and the mean $\dot{M}_{\rmn{acc}}$ is higher than the median. Given that the region is relatively old (roughly 3Myr) compared to most nearby star-forming regions of around 1Myr [@Hart08], we can surmise that the lower accretion rates, if not due to the higher age of the region, might be due to the smaller amount of surrounding material from which discs gain mass to accrete. If we assume that this value is set by the global molecular cloud density within a certain region, rather than by local overdensities, the intercept of the $\dot{M}_{\rmn{acc}}$–$M_{\ast}$ relation might vary accordingly. We therefore stress the need for sub-mm imaging of regions, alongside the growing large datasets of optical-infrared photometry and astrometry, to measure the disc and molecular cloud properties associated with CTTS, in order to understand how discs accrete and evolve.
![$\dot{M}_{\rmn{acc}}$-$M_{\ast}$ relation of our sample. CTTS in Sh2-012 are shown as blue crosses, with red arrows marking the detectability limits. The blue dashed line is the best-fit relation for only the Sh2-012 stars, following a slope of 1.45. The fit to the literature data, shown as solid circles, has a slope of 2.1 (solid line). The difference in the intercepts of the fits is 0.27dex.[]{data-label="maccm"}](maccm2a.pdf){width="78mm" height="60mm"}
![image](6383.pdf){width="170mm" height="140mm"}
Spatial distribution of CTTS candidates
---------------------------------------
Sh2-012 is a star-forming region centred on the open cluster NGC6383. The region has an approximate diameter of 35pc, with rims bright in H$\alpha$ visible towards the edges. Multiple dark cavities in the optical are noted towards the east and north of the cluster, bounded by the brightly ionised rims. We plot the distribution of CTTS on the sky overlaid on a three-colour image highlighting the morphology in H$\alpha$, Polycyclic Aromatic Hydrocarbon (PAH) emission (8$\mu$m), and dust (24$\mu$m) in Fig.\[maccplt\]. We find that the CTTS can be separated into three different sub-regions within the larger region. Firstly, the central cluster contains the bulk of the CTTS ($\sim$42%). Near the north-western edge, towards an ionised bright rim at $\alpha$ of 17$^h$32$^m$16$^s$44 and $\delta$ of $-31^d$55$'08''$, there is the other significant clustered population of CTTS ($\sim$22%). Although these regions are separated by $\sim$17pc, the CTTS within them exhibit similar ages ($\sim$2Myr), and have the highest measured accretion rates. Finally, a spread of stars towards the western edge of the nebula, and in the interface between the north-western edge and the central cluster, contains the remaining CTTS. Interestingly, we note regions devoid of CTTS corresponding to dark cavities in H$\alpha$, but not to regions of excess dust emission, highlighting that the lack of CTTS is not likely due to obscuring dust. Overall, we detect two significant clustered populations of CTTS, around the central cluster and the north-west edge, which exhibit densities of around $\sim$5 CTTSpc$^{-1}$. The remaining stars are spread out towards the region west of the central cluster in the north-south direction. Comparing the positions of the CTTS and the morphology of the region, a shell-like expansion structure is noted, with a bright rim at the extreme edge containing a significant population of CTTS. Such rims have been suggested as the locations of future star formation, likely triggered by the central cluster [@Rauw08].
A useful way to test such scenarios is by comparing the spatial distribution of accretion rates. This circumvents the uncertainties in the individual ages of stars. Underpinning this analysis is the assumption that the distribution of accretion rates reflects the star formation history within a particular region. Younger CTTS with higher accretion rates are located closer to their birth locations, while older CTTS may have had time to dynamically evolve and move away from their natal molecular cloud, leading to larger spacings and lower accretion rates. Assuming that $\dot{M}_{\rmn{acc}}$ differs by around $\sim$0.3dex across our mass range (Fig.\[maccm\]), larger variations are reflective of dependencies on stellar age. Therefore, the distribution of $\dot{M}_{\rmn{acc}}$ may be reflective of intrinsic age spreads.
Fig.\[maccplt\] shows the distribution of CTTS on the sky, with the $\dot{M}_{\rmn{acc}}$ of individual stars indicated by the hue of each symbol. From this, we find that the strongest accretors are concentrated around the central open cluster. In addition, strong accretors are also found towards the north-western edge, around a bright rim. If we consider the simple scenario in which the central cluster formed first, triggering the formation of stars at the edges, there would be a noticeable age difference between the two populations. Considering the upper limit on the age of the most massive central O7V star (2.3–2.8Myr; @Rauw10), and the ages of the accretors in the bright rims, there is a difference of around $\sim$1Myr. The distance between these regions is $\sim$17pc, so a cloud expansion speed of 17kms$^{-1}$ would be necessary to trigger the observed population of CTTS, which is higher than average sound speeds in H[II]{} regions. Based on the fact that the accretion rates and ages of the CTTS in the rims and in the central cluster are similar, we suggest that the morphology observed in Sh2-012 is not a result of triggered or sequential star formation, and that the stars across the whole region have similar ages and are likely to have formed in a single burst of star formation around 2-3Myr ago.
Kinematics
----------
![Proper motions of Class II discs (red circles, i.e. their SED slope at 8$\mu$m is between $-0.3$ and $-1.8$), and transitional discs (i.e. their SED slope at 8$\mu$m is less than $-1.8$; green circles). []{data-label="fig:cmd"}](pmplot.pdf){width="78mm" height="60mm"}
Summary
=======
Based on their H$\alpha$ excess emission, we identified 55 CTTS in the star-forming region Sh2-012. The identified CTTS have a mean age of 2.8Myr, and masses between 0.3–0.9$M_{\odot}$. From their H$\alpha$ and $u$-band excess intensities, we measured their mass accretion rates. The two estimates correlate well, with a scatter of 0.3dex, indicating the accuracy of $\dot{M}_{\rmn{acc}}$ measured using H$\alpha$ photometry, which is now accessible for the entire Galactic plane with VPHAS+ survey data. The identified CTTS correspond well with the location of circumstellar disc-bearing stars in the near- and mid-infrared colour-colour diagrams, with the infrared SED slopes of all stars having mid-infrared photometry indicative of a circumstellar disc.
When plotting the distribution of $\dot{M}_{\rmn{acc}}$ against $M_{\ast}$, we find a lower slope (although this is also partly due to our detection limits) and a lower intercept than in the literature, and propose multiple explanations for this result, in line with known protoplanetary disc evolution theories. Viscous disc accretion may explain the smaller intercept, while X-ray photoevaporation can explain the observed slope of the relation. Finally, it is intriguing that the observed result may also arise because protoplanetary discs accrete mass from the environment after collapse. The combination of VPHAS+ and current infrared sky surveys with measurements of the dust distribution at sub-mm wavelengths in multiple star-forming regions can truly identify whether such differences exist.
Finally, examining the distribution of CTTS on the sky, we find that CTTS are concentrated in the central cluster, and towards the bright rims. The stars have similar ages, and accretion rates which favour a scenario of a single burst of star formation.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the anonymous referee, J.E. Drew, and J.S. Vink for constructive comments. V. M. K. acknowledges funding from CONICYT Programa de Astronomia Fondo Gemini-Conicyt as a GEMINI-CONICYT 2018 Research Fellow (32RF180005). This research has been supported in part by the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., on behalf of the international Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. Based in part on observations collected at the European Southern Observatory Very Large Telescope in programme 177.D-3023(C) and 179.B-2002. This work presents partial results from the European Space Agency (ESA) space mission [*Gaia*]{}. [*Gaia*]{} data are being processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the [*Gaia*]{} MultiLateral Agreement (MLA). This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
\[lastpage\]
[^1]: E-mail: venukalari@gmail.com
[^2]: Source code to calculate the photometric H$\alpha$ emission line widths and associated stellar models for the VPHAS+ filters can be found at https://github.com/astroquackers/HaEW
[^3]: Table 1 is available in electronic form only
[^4]: http://www.eso.org/qi/catalogQuery/index/59
[^5]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
[^6]: https://www.aavso.org/apass
---
abstract: 'We perform a series of so-called “synthetic observations” on a set of 3D MHD jet simulations which explicitly include energy-dependent transport of relativistic electrons, as described in the companion paper by Jones, Tregillis, & Ryu. Analyzing them in light of the complex source dynamics and energetic particle distributions described in that paper, we find that the standard model for radiative aging in radio galaxies does not always adequately reflect the detailed source structure.'
author:
- 'I. L. Tregillis, T. W. Jones'
- Dongsu Ryu
title: Nonthermal Emission in Radio Galaxies from Simulated Relativistic Electron Transport in 3D MHD Flows
---
Introduction
============
Jones, Ryu, & Engel (1999) presented the first multidimensional MHD simulations to include explicitly time-dependent transport of the relativistic electrons responsible for the nonthermal emission observed from extragalactic radio sources. The companion paper by Jones, Tregillis, & Ryu (2000; paper I) in this proceedings describes results from the extension of that work to three dimensions via exploratory simulations designed to probe the relationships between the source dynamics and the spatial and energy distributions of nonthermal electrons.
We compute self-consistent emission properties for the simulated sources, thereby producing the first “synthetic observations” of their kind, including synchrotron surface brightness and spectral index maps. This paper briefly summarizes some of these synthetic observations. A full report is in preparation.
Synthetic Observation Methods
=============================
In every zone of the computational grid we compute an approximate, self-consistent synchrotron emissivity as described in Jones et al. (1999). This calculation is performed in the rest frame of the simulated radio galaxy, with the appropriate redshift correction made for the observation frame. It also explicitly takes into account the three-dimensional magnetic field geometry. Note that because we are explicitly calculating the momentum distribution of nonthermal electrons, we obtain the local synchrotron spectral index $\alpha$ directly. Previously, spectral indices in synthetic observations from purely MHD simulations had to be included in an ad hoc fashion (Matthews & Scheuer 1990; Clarke 1993).
Surface brightness maps for the optically thin emission are then produced via raytracing through the computational grid to perform line-of-sight integrations, thereby projecting the source on the plane of the sky at any arbitrary orientation. The synthetic observations can be imported into any standard image analysis package and subsequently analyzed like real observations; the analysis here was performed using the <span style="font-variant:small-caps;">MIRIAD</span> and <span style="font-variant:small-caps;">KARMA</span> (Gooch 1995) packages. For example, it is a straightforward matter to construct spectral-index maps from a set of observations over a range of frequencies. To make this exercise as realistic as possible we place the simulated object at an appropriate luminosity distance, set to $100$ Mpc for the observations included here. Because our primary interest here is in identifying general trends, the observations are presented at their full resolution with very high dynamic range, although it is straightforward to convolve the images down to lower resolution before making comparisons to true observations.
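As a schematic of this step, an optically thin surface-brightness map can be obtained by summing the emissivity cube along the chosen line of sight (simplified here to a grid axis, without the arbitrary-orientation raytracing used in the paper), and two such maps at different frequencies yield the spectral index. The snippet below is illustrative only and is not the authors' analysis code.

``` python
import numpy as np

def surface_brightness(emissivity, cell_size, axis=2):
    """Project an optically thin 3-D emissivity cube along one grid axis;
    the result is proportional to the surface brightness."""
    return np.sum(emissivity, axis=axis) * cell_size

def spectral_index(map1, map2, nu1, nu2):
    """Two-frequency spectral index alpha, defined by S_nu ~ nu^(-alpha)."""
    return -np.log(map1 / map2) / np.log(nu1 / nu2)
```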
Discussion
==========
The dynamical effects outlined in paper I have a profound impact on the nonthermal electron populations in these simulated sources. For the purpose of contrasting two extreme cases, here we will consider only the “injection” and “strong-cooling” electron-transport models (models 2 and 3 in paper I).
Figure 1 shows synthetic synchrotron surface brightness maps computed at 1.4 GHz, corresponding to a time when the jet has propagated about 30 jet radii. Consider first the “injection” transport model. A jet, hotspot complex, and lobe are all readily visible. The bright ring in the upper right is the orifice where the jet enters the computational grid.
The apparent hotspot complex consists of a bright, compact primary hotspot and a weaker, more diffuse secondary hotspot slightly below it. This secondary hotspot is a “splatter spot” (Cox, Gull, & Scheuer 1991; Lonsdale & Barthel 1986); we also see “dentist’s drill” type hotspots (Scheuer 1982) at other times.
In the case shown here, the bright primary hotspot does correspond to a strong shock at the jet terminus, although this is often not the case. The synchrotron emissivity jumps by a factor of nearly $10^{4}$ across this shock in the injection model because the model assumes almost no relativistic electrons enter with the jet flow. Rather, they are supplied through shock injection. The magnetic pressure increases by merely a factor of $2.5$. Thus, we have an example of a prominent hotspot where the brightness is primarily the result of enhanced particle populations rather than dramatic magnetic field growth.
Contrast this with the situation in the “strong cooling” transport model. There, the jet core dominates the emission, because the entire nonthermal particle population enters the grid with the jet. Although there is a minor brightness enhancement, we no longer see such a dramatic hotspot at the location of the terminal shock, despite the fact that these two models are dynamically identical. Virtually all of the brightness variations in this model are signaling variations in the underlying magnetic field structure.
These models also give rise to fascinating spectral index maps. Figure 2 shows two-frequency spectral index maps computed from surface brightness maps at 1.4 and 5.2 GHz. Once again considering the injection model first, we see that the jet, hotspot, and lobe are easily identifiable. The jet has a synchrotron spectral index of 0.7, consistent with the momentum index of 4.4 for the particles sent down the jet. The primary hotspot is slightly flatter than this, in accordance with the presence of a strong shock. The lobe spectrum is somewhat peculiar, however, with material in the head region steeper than that towards the core; this is quite the opposite from what would be expected based on the standard paradigm for radio galaxy aging. This surprising result is an excellent example of the impact of the shock-web complex described in paper I. Injection at the numerous weak shocks spread throughout the head region increases the steep-spectrum particle populations enough that they can temporarily dominate the emission, a task made easier by the relatively small flatter population sent down the jet in this model. The lobe emission becomes less steep as the two populations begin to mix in the backflow, since radiative losses are minimal here. Since injection acts to create convex spectra, the emission is flattened even further as the emitting particles diffuse into the large low-field volumes in the cocoon.
The complicated spectral index map from the strong cooling model vividly demonstrates the extreme variability of the magnetic field in these sources. Strong fluctuations in the field make it possible to create distinct regions within the same source between which the effective cooling rate differs by orders of magnitude. Also, these sources are rather young in terms of their histories, so individual events in the source evolution can still have a discernible effect upon the overall appearance at these times. For instance, the region of moderately-steep material located below the jet midway along the cocoon can be traced to a particular instance where shearing flows generated at the jet tip lead to a dramatic but transient enhancement of the local magnetic field.
This work is supported by the U. S. National Science Foundation and the University of Minnesota Supercomputing Institute, and in Korea by KOSEF.
Clarke, D. A. 1993, ‘3-D MHD simulations of extragalactic jets’, in Lecture Notes in Physics Vol. 421, Jets in Extragalactic Radio Sources, ed. H.-J. Röser & K. Meisenheimer (Berlin: Springer), 243–252
Cox, C. I., Gull, S. F., & Scheuer, P. A. G. 1991, ‘Three-dimensional simulations of the jets of extragalactic radio sources’, MNRAS, 252, 558–585
Gooch, R. E. 1995, ‘Karma: a visualisation test-bed’, in ASP Conf. Ser. Vol. 152, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes (San Francisco: ASP), 80–83
Jones, T. W., Tregillis, I. L., & Ryu, Dongsu 2001, ‘3D MHD simulations of relativistic electron acceleration and transport in radio galaxies’, these proceedings (paper I)
Jones, T. W., Ryu, Dongsu, & Engel, Andrew 1999, ‘Simulating electron transport and synchrotron emission in radio galaxies: shock acceleration and synchrotron aging in axisymmetric flows’, ApJ, 512, 105–124
Lonsdale, C. J., & Barthel, P. D. 1986, ‘Double hotspots and flow redirection in the lobes of powerful extragalactic radio sources’, AJ, 92, 12–22
Matthews, A. P., & Scheuer, P. A. G. 1990, ‘Models of radio galaxies with tangled magnetic fields – I. Calculation of magnetic field transport, Stokes parameters and synchrotron losses’, MNRAS, 242, 616–622
Scheuer, P. A. G. 1982, ‘Morphology and power of radio sources’, in IAU Symp. 97, Extragalactic Radio Sources, ed. D. S. Heeschen & C. M. Wade (Dordrecht: Reidel), 163–165
---
abstract: 'Sparse reconstruction approaches using the re-weighted $\ell_1$-penalty have been shown, both empirically and theoretically, to provide a significant improvement in recovering sparse signals in comparison to the $\ell_1$-relaxation. However, numerical optimization of such penalties involves solving problems with $\ell_1$-norms in the objective many times. Using the direct link of reweighted $\ell_1$-penalties to the concave log-regularizer for sparsity, we derive a simple prox-like algorithm for the log-regularized formulation. The proximal splitting step of the algorithm has a closed form solution, and we call the algorithm [*log-thresholding*]{} in analogy to soft thresholding for the $\ell_1$-penalty. We establish convergence results, and demonstrate that log-thresholding provides more accurate sparse reconstructions compared to both soft and hard thresholding. Furthermore, the approach can be directly extended to optimization over matrices with a penalty on rank (i.e. the nuclear norm penalty and its re-weighted version), where we suggest a [*singular-value log-thresholding*]{} approach.'
address: 'T.J. Watson IBM Research center'
bibliography:
- 'strings\_ilt.bib'
- 'refs\_ilt.bib'
title: Iterative log thresholding
---
sparsity, reweighted $\ell_1$, non-convex formulations, proximal methods
Introduction {#sec:intro}
============
We consider sparse reconstruction problems which attempt to find sparse solutions to under-determined systems of equations. A basic example of such a problem is to recover a sparse vector $\bx \in {\mathbb{R}}^N$ from measurements $\by = A \bx + \bn$, where $\by \in {\mathbb{R}}^M$ with $M < N$, and $\bn$ captures corruption by noise. Attempting to find maximally sparse solutions is known to be NP-hard, so convex relaxations involving $\ell_1$-norms have gained unprecedented popularity. Basis pursuit (or LASSO in the statistics literature) minimizes the following objective: $$\begin{aligned}
\label{eqn:LASSO}
\min \Vert \by - A \bx \Vert_2^2 + \lambda \Vert \bx \Vert_1\end{aligned}$$ Here $\lambda$ is a parameter that balances sparsity versus the norm of the residual error. There is truly a myriad of algorithms for solving (\[eqn:LASSO\]) (see e.g. [@Figueiredo2007; @EwoutvandenBerg; @Osher2010]), and for large-scale instances, variations of iterative soft thresholding have become very popular: $$\label{eqn:ista}
\bx^{(n+1)} = {\mathcal{S}}_{\lambda} \left( \bx^{(n)} + A^T(\by - A\bx^{(n)}) \right)$$ where ${\mathcal{S}}_{\lambda}(\bz)$ applies soft-thresholding for each entry: $$\label{eqn:soft_thresh}
{\mathcal{S}}_{\lambda}(z_i) = \mbox{sign}(z_i) \max(0, |z_i| - \lambda).$$ Based on operator splitting and proximal projection theories, the algorithm in (\[eqn:ista\]) converges if the spectral norm $\Vert A \Vert < 1$ [@daubechies_ISTA; @combettes]. This can be achieved simply by rescaling $A$. Accelerated versions of iterative thresholding have appeared [@fista].
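For concreteness, here is a minimal NumPy sketch of the iteration (\[eqn:ista\]) with the entry-wise operator (\[eqn:soft\_thresh\]); the function names, the iteration count, and the assumption that $A$ has already been rescaled so that $\Vert A \Vert_2 < 1$ are illustrative choices of ours.

```python
import numpy as np

def soft_threshold(z, lam):
    # Entry-wise soft thresholding: sign(z) * max(0, |z| - lam)
    return np.sign(z) * np.maximum(0.0, np.abs(z) - lam)

def ista(A, y, lam, n_iter=250):
    # Iterative soft thresholding for min ||y - A x||_2^2 + lam * ||x||_1,
    # assuming A has been rescaled so that its spectral norm is below 1.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x), lam)
    return x
```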
An exciting albeit simple improvement over $\ell_1$-norms for approximating sparsity involves weighting the $\ell_1$-norm: $\sum_i w_i |x_i|$ with $w_i > 0$. Ideal weights require knowledge of the sparse solution, but a practical idea is to use weights based on solutions of previous iterations [@fazel_log_det; @candes_rev_l1]: $$w_i^{(n+1)} = \frac{1}{\delta + |\hat{x}^{(n)}_i|}$$ This approach can be motivated as a local linearization of the $\log$-heuristic for sparsity [@fazel_log_det]. There is strong empirical [@candes_rev_l1] and recent theoretical evidence that reweighted $\ell_1$ approaches improve recovery of sparse signals, in the sense of enabling recovery from fewer measurements [@needell2009noisy; @amin2009weighted].
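A minimal sketch of the resulting re-weighting loop, using weighted soft thresholding as the inner $\ell_1$ solver; the inner solver, the iteration counts, and the default value of $\delta$ are illustrative assumptions rather than the setup of any particular reference.

```python
import numpy as np

def reweighted_l1(A, y, lam, delta=1e-3, n_outer=5, n_inner=100):
    # Each outer pass approximately solves the weighted problem
    #   min ||y - A x||_2^2 + lam * sum_i w_i |x_i|
    # by iterative weighted soft thresholding, then refreshes the weights.
    n = A.shape[1]
    x = np.zeros(n)
    w = np.ones(n)
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = x + A.T @ (y - A @ x)
            x = np.sign(z) * np.maximum(0.0, np.abs(z) - lam * w)
        w = 1.0 / (delta + np.abs(x))  # w_i^{(n+1)} = 1 / (delta + |x_i|)
    return x
```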
In this paper, we consider the log-regularized formulation that gives rise to the re-weighting schemes mentioned above, and propose a simple prox-like algorithm for its optimization. We derive a closed-form solution for the proximal step, which we call $\log$-thresholding. We establish monotone convergence of iterative $\log$-thresholding (ILT) to its fixed points, and derive conditions under which these fixed points are local minima of the log-regularized objective. Sparse recovery performance of the method on numerical examples surpasses both soft and hard iterative thresholding (IST and IHT). We also extend the approach to minimizing rank over matrices via singular value $\log$-thresholding.
To put this in the context of related work, [@chartrand_lp_thresh] has considered iterative thresholding based on non-convex $\ell_p$-norm penalties for sparsity. However, these penalties do not have a connection to re-weighted $\ell_1$ optimization. Also, [@mazumder2011sparsenet] have investigated coordinate descent based solutions for non-convex penalties including the log penalty, but their approach does not use closed-form log-thresholding. Finally, general classes of non-convex penalties, their benefits for sparse recovery, and reweighted convex-type methods for their optimization are studied in [@Lin2010]. This class of methods is different from the log-thresholding we propose.
ISTA as proximal splitting {#S:ista_prox_splitting}
==========================
We briefly review how soft-thresholding can be used to solve the sparse reconstruction problem in (\[eqn:LASSO\]). Functions of the form $f(x) = h(x) + g(x)$, where $h(x)$ is convex and differentiable with a Lipschitz gradient and $g(x)$ is a general convex function, can be minimized by a general proximal splitting method [@combettes]: $$\label{eqn:prox_split}
\hat{\bx}^{(n+1)} = \prox_g \left( \bx^{(n)} - \nabla h(\bx^{(n)}) \right).$$ The prox-operation is a generalization of projection onto a set to general convex functions: $$\prox_h(\bx) = \arg \min_{\bz} h(\bz) + \frac{1}{2} \Vert \bx - \bz \Vert_2^2.$$ If $h(\bx)$ is an indicator function for a convex set, then the prox-operation is equivalent to the projection onto the set, and ISTA itself is equivalent to the projected gradient approach.
Forward-backward splitting can be applied to the sparse recovery problem (\[eqn:LASSO\]) by deriving the proximal operator for $\ell_1$-norm, which is precisely the soft-thresholding operator in (\[eqn:soft\_thresh\]). The convergence of ISTA in (\[eqn:ista\]) thus follows directly from the theory derived for forward-backward splitting [@combettes].
Log-thresholding {#S:log_thresh}
================
The reweighted-$\ell_1$ approach can be justified as an iterative upper bounding by a linear approximation to the concave $\log$-heuristic for sparsity (here $\delta$ is a small positive constant) [@fazel_log_det]: $$\label{eqn:log_reg}
\min f(\bx) = \min \Vert \by - A \bx \Vert_2^2 + \lambda \sum_i \log(\delta + | x_i|).$$ While the $\log$-penalty is concave rather than convex, we still consider the scalar proximal objective around a fixed $x$: $$\label{eqn:log_prox}
g_\lambda (z) \triangleq (z - x)^2 + \lambda \log(\delta + |z|).$$ We note that for $\delta$ small enough, the global minimum of $g_\lambda (z)$ over $z$ (with $x$ held constant) is always at $0$. However, when $|x| > x_0 \triangleq \sqrt{2 \lambda} - \delta$, the function also exhibits a local minimum, which disappears for small $x$. We show that it is this local, rather than the global, minimum that provides the link to iteratively re-weighted $\ell_1$ minimization and is key to the log-prox algorithm we propose.
Our algorithm arises directly from first order necessary conditions for optimality. For $|x| > x_0$, we solve the equation $\nabla g_{\lambda} = 0$ to find the local minimum in closed-form. We call this operation [*log-thresholding*]{} , ${\mathcal{L}}_{\lambda}(x)$: $$\label{eqn:log_thresh}
{\mathcal{L}}_{\lambda}(x) = \begin{cases} \frac{1}{2}\left((x-\delta) + \sqrt{ (x+\delta)^2 - 2\lambda }\right),
~ x > x_0 \\
\frac{1}{2} \left( (x+\delta) - \sqrt{ (x-\delta)^2 - 2\lambda} \right),
~ x < - x_0\\
0, \mbox{ otherwise} \end{cases}$$ where $x_0 = \sqrt{2 \lambda} - \delta$. We illustrate log-thresholding in Figure \[fig:log\_local\_min\]. The left plot shows $g_{\lambda} (z)$ as a function of $z$ for several values of $x$. For large $x$ the function has a local minimum, but for small $x$ the local minimum disappears. For $\log$-thresholding we are specifically interested in the local minimum: an iterative re-weighted $\ell_1$ approach with small enough step size starting at $x$, i.e. beyond the local minimum, will converge to the local minimum, avoiding the global one. The right plot in Figure \[fig:log\_local\_min\] shows the $\log$-thresholding operation ${\mathcal{L}}_{\lambda}(x)$ with $x_0 = 1$ as a function of $x$. It can be seen as a smooth alternative falling between hard and soft thresholding.
![Illustration of log-thresholding.[]{data-label="fig:log_local_min"}](figures/log_thresh_ill.pdf){width="9.5cm"}
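The closed form (\[eqn:log\_thresh\]) is straightforward to implement; the sketch below vectorizes it over the entries of $x$ (the clipping inside the square root only guards entries that are set to zero anyway, and the helper name is ours).

```python
import numpy as np

def log_threshold(x, lam, delta):
    # Log-thresholding L_lam(x): the local minimizer of
    # (z - x)^2 + lam * log(delta + |z|), set to 0 when |x| <= x0.
    x = np.asarray(x, dtype=float)
    x0 = np.sqrt(2.0 * lam) - delta
    root = np.sqrt(np.maximum((np.abs(x) + delta) ** 2 - 2.0 * lam, 0.0))
    out = np.sign(x) * 0.5 * ((np.abs(x) - delta) + root)
    return np.where(np.abs(x) > x0, out, 0.0)
```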
In analogy to ISTA, we can now formally define the iterative $\log$-thresholding algorithm: $$\label{eqn:ILT}
\hat{\bx}^{n+1} = {\mathcal{L}}_{\lambda} \left( \hat{\bx}^{n} + A^T(\by - A\hat{\bx}^{n}) \right)$$ where ${\mathcal{L}}_{\lambda}(\bz)$ applies the element-wise $\log$-thresholding operation we obtained in (\[eqn:log\_thresh\]). We establish its convergence next.
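A sketch of the resulting iteration, reusing the `log_threshold` helper above; as in the proposition below, $A$ is assumed to have been rescaled so that $\Vert A \Vert_2 < 1$.

```python
import numpy as np

def ilt(A, y, lam, delta, n_iter=250):
    # Iterative log-thresholding: a gradient step on ||y - A x||_2^2
    # followed by entry-wise log-thresholding, as in (eqn:ILT).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = log_threshold(x + A.T @ (y - A @ x), lam, delta)
    return x
```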
Convergence of iterative log-thresholding
-----------------------------------------
The theory of forward-backward splitting does not allow an analysis of $\log$-thresholding, because the log is non-convex, and log-thresholding is not a contraction (in particular, it is not firmly non-expansive). Therefore, for the analysis we use an approach based on optimization transfer using surrogate functions [@Lange_surrogate] to prove convergence of ILT to its fixed points. At a high level, the analysis follows the program for IHT in [@blumensath2008iterative], but some of the steps are notably different, and in particular some assumptions on the operator action are necessary to establish that fixed points correspond to local minima of our formulation. In the appendix we establish:
\[thm:ilt\_convergence\] Under the assumption $\|A\|_2 < 1$, the ILT algorithm in (\[eqn:ILT\]) monotonically decreases the objective $f(\bx)$ in (\[eqn:log\_reg\]), and converges to fixed points. A sufficient condition for these fixed points to be local minima is that $A$ restricted to the non-zero coefficients is well-conditioned; specifically, that the smallest singular value of the restriction is greater than $\frac{1}{2}$.
Singular Value Log-thresholding
================================
A closely related problem to finding sparse solutions to systems of linear equations is finding low-rank matrices from sparse observations, known as matrix completion: $$\label{eqn:matrix_completion}
\min \mbox{rank}(X) \mbox{ such that} ~
X_{i,j} = Y_{i,j}, \{ (i,j) \in \Omega \}$$ Similar to sparsity, rank is a combinatorial objective which is typically intractable to optimize directly. However, the nuclear norm $\Vert X \Vert_* \triangleq \sum_i \sigma_i(X)$, where $\sigma_i(X)$ are the singular values of X, serves as the tightest convex relaxation of rank, analogous to $\ell_1$-norm being the convex relaxation of the $\ell_0$-norm. In fact, the nuclear norm is exactly the $\ell_1$-norm of the singular value spectrum of a matrix. This connection enables the application of various singular value thresholding algorithms: for instance, the SVT algorithm of [@candes_SVT] alternates soft-thresholding of the singular value spectrum with gradient descent steps. In the experimental section we investigate a simplified singular-value log-thresholding algorithm for matrix completion, where we replace soft thresholding with hard and log-thresholdings. We present very promising empirical results of singular value log-thresholding in Section \[S:experiments\], and a full convergence analysis will appear in a later publication.
![Noiseless sparse recovery: (a) average error-norm (b) probability of exact recovery after $250$ iterations over $1000$ random trials. $M = 100, N = 200$.[]{data-label="fig:results_no_noise"}](figures/ilt_ex1.pdf){width="8.5cm"}
Experiments {#S:experiments}
===========
We investigate the performance of iterative log thresholding via numerical experiments on noiseless and noisy sparse recovery. Intuitively, we expect ILT to recover sparser solutions than soft-thresholding due to the connection to re-weighted $\ell_1$ norms, and also to behave better than the non-smooth iterative hard thresholding.
First we consider sparse recovery without noise, i.e. we would like to find the sparsest solution that satisfies $\by = A \bx$ exactly. One could in principle solve a sequence of problems (\[eqn:LASSO\]) with decreasing $\lambda$, i.e. an increasing relative weight on $\Vert \by - A \bx \Vert_2^2$, via IST, IHT, or ILT. However, when we know an upper bound $K$ on the desired number of non-zero coefficients, a more successful approach is to adaptively change $\lambda$ to eliminate all except the top-$K$ coefficients in each iteration[^1], as used e.g. in [@maleki2009coherence]. We compare the performance of IST, IHT, and the proposed ILT in Figure \[fig:results\_no\_noise\]. We have $N = 200, M = 100$ and we vary $K$. Apart from changing the thresholding operator, all the algorithms are exactly the same. The top plot shows the average reconstruction error from the true sparse solution $\Vert \hbx - \bx^* \Vert_2$. It is averaged over $1000$ trials, allowing IST, IHT and ILT to run for up to $250$ iterations. The bottom plot shows the probability of recovering the true sparse solution. We can see that ILT is superior to both IST and IHT in probability of exact recovery as well as in reconstruction error.
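The adaptive choice of $\lambda$ used in these runs can be sketched as follows, transcribing the rule from the footnote; the helper name and its interface are ours.

```python
import numpy as np

def adaptive_lambda(z, K, delta=1e-3):
    # Pick lam from the sorted magnitudes of the current iterate so that
    # (roughly) only the K largest entries survive the thresholding step:
    # lam = s_{K+1} for IST/IHT, and lam = (s_{K+1} + delta)^2 / 4 for ILT,
    # following the rule stated in the footnote.
    s = np.sort(np.abs(z))[::-1]            # descending magnitudes
    s_next = s[K] if K < len(s) else 0.0    # the (K+1)-th largest entry
    return {"ist_iht": s_next, "ilt": (s_next + delta) ** 2 / 4.0}
```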
![Sparse recovery with noise. Average error vs. sparsity over $100$ trials, after $250$ iterations.[]{data-label="fig:results_noisy"}](figures/noisy_solution_path_p2.pdf){width="9.5cm"}
Our next experiment compares the three iterative thresholding algorithms on noisy data. Since regularization parameters have a different meaning for the different penalties, we plot the whole solution path of squared residual error vs. sparsity for the three algorithms in Figure \[fig:results\_noisy\]. We compute the average residual norm for a given level of sparsity for all three algorithms, averaged over $100$ runs. We have $M = 100, N = 200$, $K = 10$ and a small amount of noise is added. We can see that the iterative log thresholding consistently achieves the smallest error for each level of sparsity.
In our final experiment we consider singular value log-thresholding for matrix completion. We study a simplified algorithm that parallels the noiseless sparse recovery algorithm with a known number of nonzero elements $K$. We alternate gradient steps with steps of eliminating all but the first $K$ singular values by soft, hard and log-thresholding. We have an $N \times N$ matrix with $30\%$ observed entries, $N = 100$, and rank $K = 2$. We show the average error in Frobenius norm from the true underlying solution as a function of iteration number over $100$ random runs in Figure \[fig:results\_svlt\]. We see that the convergence of log-SV-thresholding to the correct solution is consistently faster. We expect similar improvements to hold for other algorithms involving soft-thresholding, and to other problems beyond matrix completion, e.g. robust PCA.
Convergence of ILT {#S:ILT_convergece}
==================
Here we establish Proposition \[thm:ilt\_convergence\]. We first define a surrogate function for $f(\bx)$ in (\[eqn:log\_reg\]): $$\begin{aligned}
\label{eqn:surrogate}
\nonumber Q(\bx, \bz) = \Vert \by - A \bx \Vert_2^2 + \lambda \sum_i \log(\delta + | x_i|)+ \\
\Vert \bx - \bz \Vert_2^2 - \Vert A (\bx - \bz) \Vert_2^2\end{aligned}$$ Note that $Q(\bx, \bx) = f(\bx)$. Simplifying (\[eqn:surrogate\]) we have $$\label{funcQ}
Q(\bx, \bz) =
\sum_i \left( (x_i - k_i(\bz))^2 + \lambda \log(\delta + |x_i|) \right)+ K(\bz),$$ where $k_i(\bz) = z_i + a_i^T \by - a_i^T A \bz$ and $K(\bz)$ contains terms independent of $\bx$. The optimization over $\bx$ is now separable, i.e. it can be done independently for each coordinate. We can see that finding local minima over $\bx$ of $Q(\bx, \bz)$ corresponds to iterative log-thresholding.
![Illustration of singular-value log-thresholding.[]{data-label="fig:results_svlt"}](figures/SV_thresh.pdf){width="9.5cm"}
Using this motivation for ILT, we can now prove convergence to fixed points of $f(\bx)$. First, we have:
$f(\hbx^n) = Q(\hbx^{n}, \hbx^n)$ and $Q(\hbx^{n+1}, \hbx^{n})$ are monotonically decreasing with the iteration $n$ as long as the spectral norm $\Vert A \Vert_2 < 1$.
The proof parallels the IHT proof of [@blumensath2008iterative] using the fact that $Q(\bx^{n+1}, \bx^n) = f(\bx^{n+1}) + \|\bx^{n+1} - \bx^n\|^2 - \|A(\bx^{n+1} - \bx^n)\|^2$, which is independent of the thresholding used. The main difference for ILT is that $\hbx^{n+1}$ is not the global minimum of $Q(\bx, \hbx^{n})$, but it still holds that $Q(\hbx^{n+1}, \hbx^{n}) < Q(\hbx^{n}, \hbx^{n})$. $\diamond$\
Next, we have:
\[GradAnalysis\] Any fixed point of (\[eqn:ILT\]) satisfies the following: $$\begin{cases}
\ba_i^T(\by - A\bar \bx) = \frac{\lambda}{2 (\bar x_i + \delta)} & \text{if} \quad \bar x_i > x_0\\
\ba_i^T(\by - A\bar \bx) = \frac{\lambda}{2 (\bar x_i - \delta)} & \text{if} \quad \bar x_i < -x_0\\
|\ba_i^T(\by - A\bar \bx)| \leq x_0 &\text{otherwise}
\end{cases}$$ In other words, if $|\bar x_i| > x_0$, then the corresponding gradient component satisfies the local stationarity conditions for problem (\[eqn:log\_reg\]), and if $|\bar x_i| < x_0$, the gradient is bounded.
[*Proof:*]{} Given a fixed point $\bar \bx$ of (\[eqn:ILT\]), define $$\label{si}
s_i = \ba_i^T(\by - A\bar \bx),$$ Suppose first that $\bar x_i + s_i > x_0$. Explicitly writing the fixed-point relation $\bar x_i = {\mathcal{L}}_{\lambda}(\bar x_i + s_i)$ using (\[eqn:log\_thresh\]), $$\begin{aligned}
\bar x_i -s_i + \delta &= \sqrt{ (\bar x_i + s_i+\delta)^2 - 2\lambda},
\end{aligned}$$ squaring both sides, and simplifying, we have $$\ba_i^T(\by - A\bar \bx) = \frac{\lambda}{2(\bar x_i +\delta)}\;,$$ which is precisely equivalent to local optimality of (\[eqn:log\_reg\]) with respect to the $i$th coordinate. Otherwise, suppose $0 \leq \bar x_i + s_i < x_0$. Then we have $\bar x_i = 0$, and so $s_i \leq x_0$.
For any fixed point $\bar x$ of the ILT algorithm, if $\delta$ is small enough, then for any sufficiently small perturbation $\eta$ with $\|\eta\|_{\infty}< \epsilon$ we have $$Q(\bar x + \eta, \bar x) > Q(\bar x) + \|\mathcal{P}_0\eta\|^2 + \frac{3}{4}\|\mathcal{P}_1\eta\|^2\;,$$ where $\mathcal{P}_0$ and $\mathcal{P}_1$ denote the projections onto the zero and nonzero indices of $\bar x$. The precise condition on $\delta$ is as follows: $$\label{lambdaCond}
\frac{\lambda}{\delta} + 2\delta> 2\sqrt{2\lambda},$$
[*Proof:*]{} This result follows by Proposition \[GradAnalysis\], together with the proof technique of [@blumensath2008iterative]\[Lemma 3\]. In particular, for any perturbation $\eta$, we can write $Q(\bar x + \eta, \bar x) - Q(\bar x, \bar x)$ as $$\begin{aligned}
& = \sum_i \left(-2\eta_is_i + \eta_i^2 + \lambda \log\left(\frac{|\bar x_i + \eta_i| + \delta}{|\bar x_i| + \delta}\right)\right)\\
\end{aligned}$$ The above identity is easily verified using (\[eqn:surrogate\]) and (\[si\]). Defining now $\Gamma_0 = \{i: \bar x_i = 0\}$ and $\Gamma_1 = \{i: \bar x_i \neq 0\}$, we can use the optimality properties of Proposition \[GradAnalysis\] to rewrite $Q(\bar x + \eta, \bar x) - Q(\bar x, \bar x)$:
$$\begin{aligned}
\|\eta\|^2 + \sum_{i\in \Gamma_0} \left( -2\eta_is_i + \lambda \log\left(\frac{|\eta_i| + \delta}{\delta}\right) \right) +\\
\sum_{i\in \Gamma_1} \left( -\frac{\eta_i\lambda}{\bar x_i + \delta}+ \lambda \log\left(\frac{|\bar x_i + \eta_i| + \delta}{|\bar x_i| + \delta}\right) \right)
\end{aligned}$$
We now consider lower bounds for each of these two sums, taking all $\bar x_i \geq 0$ without loss of generality:
$$\begin{aligned}
\sum_{i\in \Gamma_0} \left(-2\eta_is_i + \lambda \log\left(\frac{|\eta_i| + \delta}{\delta}\right)\right) &\geq
 \sum_{i\in \Gamma_0} \left(\lambda\log\left(1 + \frac{|\eta_i|}{\delta}\right) - 2|\eta_i|\, x_0\right)\\
& \geq \sum_{i\in \Gamma_0} \left(\lambda\frac{|\eta_i|}{\delta} - 2|\eta_i| x_0 \right)- O(\|\eta\|^2)
\end{aligned}$$ Given that (\[lambdaCond\]) holds, the quantity on the last line is positive for $\eta$ small enough. For $\Gamma_1$, we have $$\begin{aligned}
&\sum_{i\in \Gamma_1} -\frac{\eta_i\lambda}{\bar x_i + \delta}+ \lambda \log\left(\frac{|\bar x_i + \eta_i| + \delta}{|\bar x_i| + \delta}\right) \geq \\
&\sum_{i\in \Gamma_1} \frac{-\eta_i\lambda+\eta_i\lambda}{\bar x_i + \delta}-\frac{1}{4} \frac{2\lambda\eta_i^2}{(|\bar x_i|+\delta)^2} \geq -\frac{1}{4} \|\eta\|_2^2
\end{aligned}$$ where the last inequality comes from the fact that for any $i \in \Gamma_1$, we have $\frac{(\bar x_i + \delta)^2}{2\lambda} \geq 1$.
ILT converges to its fixed points if $\Vert A \Vert_2 < 1$. Moreover, if the singular values of the restriction of $A$ to the $\Gamma_1$ columns are greater than $\frac{1}{2}$, these fixed points must be local minima of (\[eqn:log\_reg\]).
The result follows from the proof technique of [@blumensath2008iterative]\[Theorem 3\]; in particular the partial sums $\sum_{n=1}^{N-1}\|\bx^{n+1} - \bx^n\|^2$ are monotonically increasing and bounded, so the iterates $\{\bx^n\}$ must converge. Finally, we have $$\begin{aligned}
Q(\bar \bx + \eta) &= Q(\bar \bx + \eta, \bar \bx) -\|\eta\|^2 + \|A\eta\|^2\\
&\geq Q(\bar \bx) - \frac{1}{4}\|\mathcal{P}_1\eta\|^2 + \|A\mathcal{P}_1\eta\|^2.
\end{aligned}$$ This quantity is non-negative provided that the singular values of the restriction of $A$ to $\Gamma_1$ are greater than $\frac{1}{2}$.
$\square$
[^1]: This is easy for IST and IHT by sorting $|\bx|$ in descending order: let $\bs = \mbox{sort}(|\bx|)$; then $\lambda = s_{K+1}$. For ILT we have $\lambda = \frac{(s_{K+1} + \delta)^2}{4}$ from (\[eqn:log\_thresh\]).
---
abstract: 'For a certain parametrized family of maps on the circle with critical points and logarithmic singularities where derivatives blow up to infinity, we construct a positive measure set of parameters corresponding to maps which exhibit nonuniformly expanding behavior. This implies the existence of “chaotic" dynamics in dissipative homoclinic tangles in periodically perturbed differential equations.'
address:
- 'Institute of Industrial Science, The University of Tokyo, Tokyo 153-8505, JAPAN'
- 'Department of Mathematics, University of Arizona, Tucson, AZ 85721'
author:
- Hiroki Takahasi and Qiudong Wang
title: Nonuniformly Expanding 1D Maps With Logarithmic Singularities
---
[^1]
Introduction
============
Let $f_{a}: {\mathbb R} \to {\mathbb R}$ be such that $$\label{f1-s1}
f_{a}\colon x\mapsto x+a+L\cdot\ln |\Phi(x)|, \ \ L>0,$$ where $a \in [0,1]$ and $\Phi\colon\mathbb R\to\mathbb R$ is $C^2$ satisfying: (i) $\Phi(x+1) = \Phi(x)$; (ii) $\Phi'(x)\neq0$ if $\Phi(x)=0$, (iii) $\Phi''(x)\neq0$ if $\Phi'(x)=0$. The family $(f_{a})$ induces a parametrized family of maps from $S^1={\mathbb R} /{\mathbb Z}$ to itself. In this paper we study the abundance of nonuniform hyperbolicity in this family of circle maps.
Our study of $(f_{a})$ is motivated by the recent studies of [@W; @WOk; @WOk11; @WO] on homoclinic tangles and strange attractors in periodically perturbed differential equations. When a homoclinic solution of a dissipative saddle is periodically perturbed, the perturbation either pulls the stable and the unstable manifold of the saddle fixed point completely apart, or it creates chaos through homoclinic intersections. In both cases, the separatrix map induced by the solutions of the perturbed equation in the extended phase space is a family of two-dimensional maps. Taking a singular limit, one obtains a family of one-dimensional maps in the form of (\[f1-s1\]) (with the absolute value sign around $\Phi(x)$ removed). Let $\mu$ be a small parameter representing the magnitude of the perturbation and $\omega$ be the forcing frequency. We have $a \sim \omega \ln
\mu^{-1}$ ([mod]{} $1$), $L \sim \omega$; and $\Phi$ is the classical Melnikov function (See [@WOk; @WOk11; @WO]).
When we start with [*two*]{} unperturbed homoclinic loops and assume symmetry, the separatrix maps are a family of annulus maps, the singular limit of which is precisely $f_{a}$ in (\[f1-s1\]) (see [@W]). If the stable and unstable manifolds of the perturbed saddle are pulled completely apart by the forcing function, then $\Phi(x) \neq 0$ for all $x$. In this case we obtain strange attractors, to which the theory of rank one maps developed in [@WY3] applies. If the stable and unstable manifolds intersect, then $\Phi(x) = 0$ is allowed and the strange attractors are associated with homoclinic intersections. For the modern theory of chaos and dynamical systems, this is a case of historical and practical importance; see [@GH; @SSTC1; @SSTC2]. To this case, unfortunately, the theory of rank one maps in [@WY3] does not apply because of the existence of the singularities of $f_{a}$. Our ultimate goal is to develop a theory that can be applied to the separatrix maps allowing $\Phi(x) = 0$. This paper is the first step, in which we develop a 1D theory.
For $f = f_{a}$, let $C(f) = \{ x\in S^1\colon f'(x) = 0 \}$ be the set of critical points and $S(f) = \{ x\in S^1\colon \Phi(x) = 0 \}$ be the set of singular points. In this paper we are interested in the case $L\gg1$. As $L$ gets larger, the contracting region gets smaller and the dynamics is more and more expanding in most of the phase space. Nevertheless, the recurrence of the critical points is inevitable, and thus infinitesimal changes of dynamics occur when $a$ is varied. In addition, the logarithmic nature of the singular set $S$ turns out to produce a new phenomenon [@T] that is not known to occur for smooth one-dimensional maps with critical points.
Our main result states that nonuniform expansion prevails for “most" parameters, provided $L\gg1$. Let $\lambda=10^{-3}$ and let $|\cdot|$ denote the one-dimensional Lebesgue measure.
For all large $L$ there exists a set $\Delta=\Delta(L) \subset [0,1)$ of $a$-values with $|\Delta|>0$ such that if $a \in
\Delta$ then for $f=f_a$ and each $c \in C$, $|(f^n)'(fc)|\geq L^{\lambda n}$ holds for every $n\geq 0$. In addition, $|\Delta|\to1$ holds as $L \to \infty$.
For the maps corresponding to the parameters in $\Delta$, our argument shows a nonuniform expansion, i.e., for Lebesgue a.e. $x\in S^1$, $\displaystyle{\varlimsup_{n\to\infty}}\frac{1}{n}\ln|(f_a^n)'x|\geq\frac{1}{3}\ln L.$ In addition, combining our argument with an argument in \[[@WY1] Sect.3\] one can construct invariant probability measures absolutely continuous with respect to Lebesgue measure (acips). The main difference from the smooth case is in bounding distortions, which can be handled with Lemma \[dist\] in this paper. A careful construction exploiting the largeness of $L$ shows the uniqueness of acips and some advanced statistical limit theorems (see [@T]).
Since the pioneering work of Jakobson [@J], there have been quite a number of papers over the last thirty years dedicated to proving the abundance of chaotic dynamics in increasingly general families of smooth one-dimensional maps [@BC1; @BC2; @R; @TTY; @T1; @T2; @WY1]. Families of maps with critical and singular sets were studied in [@LT; @LV; @PRV]. One key aspect of the singularities of our maps that has no analogue in [@LT; @LV; @PRV] is that returns to a neighborhood of the singular set can happen very frequently. The previous arguments do not seem sufficient to deal with points like this. To avoid problems arising from the logarithmic singularities, and to get the asymptotic estimate on the measure of $\Delta$, we introduce new arguments:
- our definition of bound periods (see Sect.\[s2.3\]) incorporates the recurrence pattern of the critical orbits to both $C$ and $S$. Thus, the resultant bound period partition depends on the parameter $a$, and is not a fixed partition, as is the case in [@BC1; @BC2];
- to get the asymptotic estimate $|\Delta|\to1$ as $L\to\infty$, we need to abandon starting an inductive construction with small intervals around Misiurewicz parameters. Instead we start with a large parameter set (denoted by $\Delta_{N}$), which is a union of a finite number of intervals. This necessitates additional work on establishing uniform hyperbolicity outside of a neighborhood of $C$, which is rather straightforward around Misiurewicz parameters.
The rest of this paper consists of two sections. In Sect.2 we perform phase space analyses. Building on them, in Sect.3 we construct the parameter set $\Delta$ by induction. To estimate the measure of the set of parameters excluded at each step, instead of Benedicks $\&$ Carleson [@BC1; @BC2] we elect to follow the approach of Tsujii [@T1; @T2], primarily because the partitions depend on $a$, and the extension of this approach is more transparent when dealing with the issues related to the singularities. Unlike [@BC1; @BC2], the current strategy relies on a geometric structure of the set of parameters excluded at each step. In addition, there is no longer the need for a large deviation argument, introduced originally in [@BC2] as an independent step of parameter exclusions.
Phase space analysis {#s2}
====================
In this section we carry out a phase space analysis. Elementary facts on $f_{a}$ are introduced in Sect. \[s2.1\]. In Sect.\[distortion\] we prove a statement on distortion. In Sect.\[s2.2\] we discuss an initial set-up. In Sect.\[s2.3\] we introduce three conditions, which will be taken as inductive assumptions for the construction of the parameter set $\Delta$, and develop a binding argument. In Sect.\[s2.4\] we study global dynamical properties of maps satisfying these conditions.
Elementary facts {#s2.1}
----------------
For $\varepsilon > 0$, we use $C_{\varepsilon}$ and $S_\varepsilon$ to denote the $\varepsilon$-neighborhoods of $C$ and $S$ respectively. The distances from $x \in S^1$ to $C$ and $S$ are denoted by $d_C(x)$ and $d_S(x)$ respectively. We take $L$ as the base of $\log(\cdot)$.
\[derivative\] There exist $K_0>1$ and $\varepsilon_0 >0$ such that the following holds for all sufficiently large $L$ and $f=f_{a}$:
- for all $x\in S^1$, $$K_0^{-1}L
\frac{d_C(x)}{d_S(x)} \leq|f'x| \leq K_0L
\frac{d_C(x)}{d_S(x)}, \ \ \ \ \ \ |f''x| \leq
\frac{K_0L}{d^2_S(x)};$$
- for all $\varepsilon
>0$ and $x \not \in C_{\varepsilon}$, $ |f'x|\geq K_0^{-1} L\varepsilon$;
- for all $x \in C_{\varepsilon_0}$, $K^{-1}_0 L < |f''
x| < K_0 L$.
This lemma follows immediately from $$f'=1+L \cdot \frac{\Phi'}{\Phi};
\ \ \ \ \
f''=L \cdot \frac{\Phi''\Phi-
(\Phi')^2}{\Phi^2}$$ and our assumptions on $\Phi$ in the beginning of the introduction.
Bounded distortion {#distortion}
------------------
Let $c\in C$, $c_0=fc$, and $n\geq1$. Let $$\label{Theta}
D_n(c_0)=\frac{1}{\sqrt{L}}\cdot\left[\sum_{i=0}^{n-1}
d_i^{-1}(c_0)\right]^{-1} \ \ \ \ \text{where} \ \ \ \ \
d_i(c_0)=\frac{d_C(c_i)\cdot
d_S(c_i)}{J^i(c_0)}.$$
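The quantity $D_n(c_0)$ is straightforward to compute numerically along a critical orbit; the sketch below reuses `f` and `df` from the sketch in the introduction and takes the finite sets $C$ and $S$ as explicit arrays (an interface of our own choosing), simply accumulating the sum above.

```python
import numpy as np

def D_n(c0, n, a, L, crit, sing):
    # Accumulate d_i = d_C(c_i) * d_S(c_i) / J^i(c_0) along the orbit of c_0
    # and return D_n = L^{-1/2} * (sum_{i<n} 1/d_i)^{-1}.
    def circ_dist(x, pts):
        # distance on R/Z from x to the nearest point of pts
        return np.min(np.abs((x - np.asarray(pts) + 0.5) % 1.0 - 0.5))
    x, J, total = c0, 1.0, 0.0
    for i in range(n):
        d_i = circ_dist(x, crit) * circ_dist(x, sing) / J
        total += 1.0 / d_i
        J *= abs(df(x, L))   # J^{i+1}(c_0)
        x = f(x, a, L)       # c_{i+1}
    return (1.0 / np.sqrt(L)) / total
```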
\[dist\] For all $x,y \in[c_0-D_n(c_0),c_0+D_n(c_0)]$ we have $J^n(x)\leq 2J^n(y),$ provided that $c_i\notin C\cup S$ for every $0\leq i<n$.
Write $D_n$ for $D_n(c_0)$, and let $I
=[c_0-D_n,c_0+D_n]$. Then $$\log \frac{J^n(x)}{J^n(y)}=\sum_{j=0}^{n-1}\log \frac{J(f^jx)}{J(f^jy)} \leq\sum_{j=0}^{
n-1}|f^jI| \sup_{\phi\in f^jI}\frac{|f''\phi|}{|f' \phi|}.$$ Lemma \[dist\] would hold if for all $j \leq n-1$ we have $f^jI\cap (S\cup C)=\emptyset$ and $$\label{disteq}
|f^jI|\sup_{\phi\in f^jI}\frac{|f''\phi|}{|f' \phi|}\leq
\log 2 \cdot d_j^{-1}(c_0) \left[\sum_{i=0}^{
n-1}d_i^{-1}(c_0)\right]^{-1}.$$ We prove (\[disteq\]) by induction on $j$. Assume (\[disteq\]) holds for all $j < k$. Summing (\[disteq\]) over $j=0,1,\cdots,
k-1$ implies $
\frac{1}{2} \leq \frac{J^k(\eta)}{J^k(c_0)}\leq 2$ for all $\eta \in I$. We have $$\label{derivative6}
|f^{k}I|\leq 2 J^{k}(c_0) D_n = 2d_{k}^{-1}
\cdot d_C(c_k) d_S(c_k) D_n \leq
\frac{2}{\sqrt{L}}d_C(c_k)d_S(c_k).$$ We have $f^k I \cap (C\cup S)
= \emptyset$ from (\[derivative6\]), and for $\phi\in f^k
I$, $$\begin{aligned}
|f^kI|\frac{|f''\phi|}{|f'\phi|} &\leq 2
d_k^{-1} d_C(c_k) d_S(c_k) D_n \cdot \frac{K_0^2}{d_C(\phi) d_S(\phi)} \\
& = 2K_0^2 d_k^{-1}D_n \cdot \frac{d_C(c_k)
d_S(c_k) }{d_C(\phi) d_S(\phi)}
\leq\frac{2K_0^2}{\sqrt{L}}\cdot d_{k}^{-1} \left[\sum_{i=0}^{n-1}
d_i^{-1}\right]^{-1},\end{aligned}$$ where we used Lemma \[derivative\](a) for $\frac{|f''\phi|}{|f'\phi|}$ for the first inequality. For the last inequality we observe that the second factor of the left-hand-side is $< 2$ by (\[derivative6\]).
Initial setup {#s2.2}
-------------
In one-dimensional dynamics, a general strategy for constructing positive measure sets of “good" parameters is to start an inductive construction in small parameter intervals, in which orbits of critical points are kept out of bad sets for a certain number of iterates. One way to find these intervals is to first look for Misiurewicz parameters, for which all critical orbits stay out of the bad sets under any positive iterate. We would then confine ourselves to small parameter intervals containing the Misiurewicz parameters, and would eventually prove that the Misiurewicz parameters are Lebesgue density points of the good parameter sets. This approach to initial set-ups, however, has some drawbacks. First, for a one-parameter family of maps with multiple critical points, the Misiurewicz parameters are relatively hard to find because of the need to control multiple critical orbits with one parameter. Though the argument in [@WY2] is readily extended to cover our family, we would nevertheless be off to a hard start. Second, with the rest of the study confined to a small parameter interval containing a Misiurewicz parameter, it is not clear how we could prove the global asymptotic measure estimate ($|\Delta| \to 1$ as $L \to \infty$) of the theorem.
An alternative route that is made possible by the approach of this paper is to start with a rather straightforward and relatively weak assumption. Let $\sigma = L^{-\frac16}$. Let $N$ be a large integer independent of $L$. For $0\leq n\leq N$, let $$\Delta_n =\{a\in [0,1)\colon f_a^{i+1}(C)\cap(C_\sigma\cup
S_\sigma)=\emptyset\text{ for every }0\leq i\leq n\}.$$ Observe that $\Delta_n$ is a union of intervals. We start with the following statement, the proof of which is given in Appendix.
\[initial-1\] For any large integer $N$ there exists $L_0 = L_0(N)\gg1$ such that if $L\geq L_0$, then $
|\Delta_{N}|\geq 1 - L^{-\frac19}.
$
Lemma \[initial-1\] is sufficient for us to move forward. This approach for initial setups is easier, and leads to the desired asymptotic measure estimate on $\Delta$ as $L\to\infty$.
We move to the expanding property of the maps corresponding to parameters in $\Delta_{N}$. We frequently use the following notation: for $c\in C$ and $n\geq1$, $c_0=fc$ and $c_n=f^nc_0$: for $x\in S^1$ and $n\geq1$, $J(x)=|f'x|$ and $J^n(x)=J(x)J(fx)\cdots J(f^{n-1}x)$.
Let $\alpha=10^{-6}$ and $\delta = L^{-\alpha N}$. In what follows, we suppose $N$ to be a large integer for which $\delta\ll\sigma$, and the conclusion of Lemma \[initial-1\] holds. The value of $N$ will be replaced if necessary, but only a finite number of times. The letter $K$ will be used to denote generic constants which are independent of $N$ and $L$.
The next lemma, the proof of which is given in Appendix, ensures an exponential growth of derivatives for orbit segments lying outside of $C_{\delta}$.
\[outside\] There exists $L_1\geq L_0$ such that if $L\geq L_1$ and $f = f_{a}$ is such that $a \in \Delta_{N}(L)$, then the following holds:
- if $n\geq1$ and $x$, $fx,\cdots, f^{n-1}x\notin C_{\delta}$, then $J^{n}(x)\geq \delta L^{2\lambda n}$;
- if moreover $f^nx\in C_{\delta}$, then $J^{n}(x) \geq
L^{2 \lambda n}$.
[*Standing assumption for the rest of this section: $L\geq L_1$ and $a \in \Delta_{N}$.*]{}
Recovering expansion {#s2.3}
--------------------
For $f = f_a$, $c \in C$ and $n>N$ we introduce three conditions:
- $J^{j-i}(c_i)\geq L \min\{\sigma, L^{-\alpha i}\}$ for all $0\leq i< j\leq n+1$;
- $J^i(c_0)\geq L^{\lambda i}$ for all $0< i\leq n+1$;
- $d_S(c_i)\geq L^{-4\alpha i}$ for all $N \leq i\leq n$.
We say $f$ satisfies (G1$)_n$ if (G1$)_{n,c}$ holds for every $c\in C$. The definitions of (G2)$_n$, (G3)$_n$ are analogous. These conditions are taken as inductive assumptions in the construction of the parameter set $\Delta$.
We establish a recovery estimate of expansion. Let $c\in C$, $c_0=fc$ and assume that (G1)$_{n,c}$-(G3)$_{n,c}$ hold. For $p\in[2,n]$, let $$I_p(c)=\begin{cases} &f^{-1}[c_0+D_{p-1}(c_0),c_0+D_p(c_0))\ \text{ if $c$ is a local minimum of $x\to x+a+L\cdot\ln|\Phi(x)|$};\\
& f^{-1}(c_0-D_{p}(c_0),c_0-D_{p-1}(c_0)]\ \text{ if $c$ is a local maximum of $
x\to x+a+L\cdot\ln|\Phi(x)|$}.
\end{cases}$$ By the non-degeneracy of $c$, $I_p(c)$ is the union of two intervals, one to the right of $c$ and the other to the left. According to Lemma \[dist\], if $x\in I_p(c)$ then the derivatives along the orbit of $fx$ shadow those along the orbit of $c_0$ for $p-1$ iterates. We regard the orbit of $x$ as being bound to the critical orbit of $c$ up to time $p$, and we call $p$ the [*bound period*]{} of $x$ to $c$.
\[reclem1\] If (G1)$_{n,c}$-(G3)$_{n,c}$ holds, then for $p\in[2,n]$ and $x\in
I_{p}(c)$ we have:
- $p\leq\log |c-x|^{-\frac{2}{\lambda}}$;
- if $x\in C_\delta$, then $J^p(x)\geq
|c-x|^{-1+\frac{16\alpha}{\lambda}}\geq L^{\frac{\lambda}{3}p}$.
By definition we have $$|c-x|^2\leq D_{p-1}(c_0)\leq L^{-\frac{1}{2}}d_{p-2}(c_0) <
L^{-\frac{1}{2}} J^{p-2}(c_0)^{-1}.$$ Then by (G2), $$\label{low}
|c-x|^2\leq L^{-\frac{1}{2}-\lambda(p-2)}\leq L^{-\lambda p}
,$$ from which (a) follows. The second inequality of (b) follows from (\[low\]).
\[add-lem1-s2.3B\] For $0\leq i \leq n$ we have:
- $d_C(c_i)\geq
K^{-1} \sigma L^{-\alpha i}$;
- $J^i(c_0)D_{i+1}(c_0)\geq
L^{-1-7\alpha i}$.
We finish the proof of Lemma \[reclem1\] assuming the conclusion of this sublemma. We have $$\begin{aligned}
J^{p}(x)=
J^{p-1}(fx)J(x)\geq K^{-1}J^{p-1}(c_0)
\cdot L|c-x|\geq K^{-1}J^{p-1}(c_0) \cdot |c-x|^{-1}
D_{p}(c_0),\end{aligned}$$ where for the first inequality we use Lemma \[dist\] and Lemma \[derivative\](c), and for the last inequality we use $x\in I_p(c)$. Using Sublemma \[add-lem1-s2.3B\](b) for $i = p-1$ we obtain $$J^p(x)\geq
K^{-1} L^{-1-7\alpha p} |c-x|^{-1}.$$ Substituting into this the upper estimate of $p$ in Lemma \[reclem1\](a) we obtain $$J^p(x)\geq K^{-1}
L^{-1}|c-x|^{-1+\frac{15\alpha}{\lambda}} \geq
|c-x|^{-1+\frac{16\alpha}{\lambda}}.$$ We have used $|c-x|\leq\delta=L^{-\alpha N}$ for the last inequality.
It is left to prove the sublemma. (G1) implies $|f'c_i|\geq
L\min\{\sigma,L^{-\alpha i}\}$. Then (a) follows from Lemma \[derivative\](a). As for (b), let $j\in[0,i]$. By definition, $$J^i(c_0)d_j(c_0)=
\frac{J^{i}(c_0)}{J^j(c_0)}d_C(c_j)d_S(c_j).$$ We have: $\frac{J^i(c_0)}{J^j(c_0)}\geq
L^{-\alpha j}\sigma$ from (G1); $d_C(c_j)\geq K^{-1} \sigma
L^{-\alpha j}$ from (a); $d_S(c_j)\geq \sigma
L^{-4\alpha j}$ from (G3). Hence, $J^i(c_0)d_j\geq K^{-1}
\sigma^3L^{-6\alpha j}$, and thus $$\begin{aligned}
\sum_{j=0}^{i}J^i(c_0)^{-1}d_j^{-1}(c_0) &\leq \sigma^{-3}
L^{7\alpha i}.\end{aligned}$$ Taking reciprocals implies (b).
Global dynamical properties {#s2.4}
---------------------------
At step $n$ of the induction, we wish to exclude all parameters for which one of (G1-3$)_n$ is violated for some $c\in C$, and to estimate the measure of the parameters deleted. Conditions (G1) and (G2), however, cannot be used directly as rules for exclusion, since they do not account for the cumulative effects of “shallow returns". Hence we introduce a stronger condition, based on the notion of *deep returns*, and will use it as a rule for deletion in Section \[s3\].
[*Hypothesis in Sect.\[s2.4\]:*]{} $f =
f_{a}$ is such that $a \in \Delta_{N}$, $n\geq N$, and (G1)$_{n-1}$-(G3)$_{n-1}$ hold for all $c\in C$.
For all $c\in C$ we have:
- $f^{i+1}c\notin C\cup S$ for all $0\leq i\leq n$;
- for the orbit of $c_0=fc$, the bound period initiated at all returns to $C_{\delta}$ before $n$ is $\ll n$.
([*Bound/free structure*]{}) We divide the orbit of $c_0$ into alternative bound/free segments as follows. Let $n_1$ be the smallest $j\geq0$ such that $c_j\in C_\delta$. For $k>1$, we define free return times $n_{k}$ inductively as follows. Let $p_{k-1}$ be the bound period of $c_{n_{k-1}}$, and let $n_{k}$ be the smallest $j\geq n_{k-1}+p_{k-1}$ such that $c_j\in C_\delta$. We decompose the orbit of $c_0$ into bound segments corresponding to time intervals $(n_k,n_k+p_k)$ and free segments corresponding to time intervals $[n_k+p_k,n_{k+1}]$. The times $n_k$ are the [*free return*]{} times. We have $$n_1<n_1+p_1 \leq n_2<n_2+p_2 \leq \cdots.$$
[Let $c\in C$ and assume that $c_0=fc$ makes a free return to $C_\delta$ at time $\nu\leq n$. We say $\nu$ is a [*deep return*]{} of $c_0$ if for every free return $i\in [0,\nu-1]$, $$
\sum_{\stackrel{j\in[i+1,\nu]}{free \ return}}2\log
d_C(c_{j})\leq \log d_C(c_{i}).$$]{}
We say (R$)_{n,c}$ holds if $$\prod_{i\in[0, j]: \ \text{deep
return}} d_C(c_i) \geq L^{-\frac{1}{20} \lambda \alpha j}\ \ \text{for every } N\leq j\leq n.$$ We say (R$)_n$ holds if (R$)_{n,c}$ holds for every $c\in C$.
\[derive0\] If (R$)_{n,c}$ holds, then (G1$)_{n,c}$, (G2$)_{n,c}$ hold.
The first step is to show $$\label{prop-s3.1}
\prod_{\stackrel{N < i \leq n}{\text{\rm free return }}} \ d_C(c_i)\geq L^{- \alpha n}.$$ We call a free return [*shallow*]{} if it is not a deep free return. Let $\mu\in(0,n)$ be a shallow free return time, and $i(\mu)$ be the largest deep free return time $< \mu$. We claim that $$\label{inessential}
\sum_{\stackrel{i(\mu) +1\leq j\leq\mu}{\text{free return}}} \log
d_C(c_j) \geq \log d_C(c_{i(\mu)}).$$ We finish the proof of (\[prop-s3.1\]) assuming (\[inessential\]). Let $\mu_1$ be the largest free shallow return time in $(0, n]$, and $i_1$ be the largest deep free return time $< \mu_1$. We then let $\mu_2$ be the largest shallow free return time $< i_1$, and $i_2$ be the largest deep free return time $<\mu_2$, and so on. We obtain a sequence of deep free return times $i_1 > i_2 \cdots
> i_q$, and we have $$\label{f4-s3.1}
\sum_{\stackrel{0\leq j\leq n}{\text{shallow return}}} \log
d_C(c_{j}) \geq \sum_{j=1}^q\log d_C (c_{i_j})\geq
\sum_{\stackrel{0\leq i\leq n}{\text{deep return}}}\log
d_C(c_{i})$$ where the first inequality is from (\[inessential\]). We then have $$\sum_{0\leq j\leq n\colon \text{free return}} \log d_C(c_{j}) \geq
2\sum_{\stackrel{0\leq j\leq n}{\text{deep return}}} \log
d_C(c_{j})
\geq -\alpha n,$$ where the first inequality is from (\[f4-s3.1\]) and the last inequality is from (R)$_{n,c}$. This proves (\[prop-s3.1\]).
To prove (\[inessential\]), we let $\beta_1$ be the smallest free return time $\leq \mu-1$ such that $$\label{deep-add2}
\sum_{\stackrel{\beta_1 +1\leq j\leq\mu}{\text{free return}}}2\log
d_C(c_j) > \log d_C(c_{\beta_1}).$$ We claim that no deep free return occurs during the period $[\beta_1+1,\mu]$. This is because if $i' \in [\beta_1 + 1,
\mu]$ is a deep return, then we must have $$\sum_{\stackrel{\beta_1 +1\leq j\leq\mu}{\text{free return}}}2\log
d_C(c_j) \leq \sum_{\stackrel{\beta_1 +1\leq j\leq
i'}{\text{free return}}}2\log d_C(c_j) \leq \log
d_C(c_{\beta_1}),$$ contradicting (\[deep-add2\]). If $\beta_1$ is a deep return, we are done. Otherwise we find a $\beta_2 < \beta_1$ so that $$\label{deep-add1}
\sum_{\stackrel{\beta_2+1 \leq j\leq \beta_1}{\text{free
return}}}2\log d_C(c_j) > \log d_C(c_{\beta_2}),$$ and so on. This process will end at a deep free return time, which we denote as $\beta_q : = i(\mu)$. (\[inessential\]) follows from adding (\[deep-add2\]), (\[deep-add1\]) and so on up to the time for $\beta_q = i(\mu)$.
We first prove (G1)$_{n,c}$. Let $0\leq i < j \leq
n+1$. Observe that the total length of the bound periods initiated at all returns to $C_{\delta}$ for the orbit of $c_0$ up to time $n$ is $\leq \frac{2
\alpha}{\lambda}n \ll n$. This follows from (\[prop-s3.1\]) and Lemma \[reclem1\](a). Hence, it is possible to introduce the bound-free structure starting from $c_i$ to $c_j$. We consider the following two cases separately.
[*Case I: $j$ is free.*]{} For free segments we use Lemma \[outside\], and for bound segments we use Lemma \[reclem1\](b). We obtain exponential growth of derivatives from time $i$ to $j$, which is much better than what is asserted by (G1$)_{n,c}$.
[*Case II: $j$ is bound.*]{} Let $\hat j$ denote the free return with a bound period $p$ such that $j\in[\hat j+1,\hat
j+p]$. We have $\hat j \leq n$, for otherwise $j>n+1$. Consequently, $$J^{j-i}(c_i)= \frac{J^{\hat
j}(c_0)}{J^i(c_0)} \cdot \frac{J^{\hat j+1}(c_0)}{J^{\hat
j}(c_0)} \cdot \frac{J^j(c_0)}{J^{\hat j +1}(c_0)} >
L^{\frac{1}{3} \lambda(\hat j-i)} \cdot K^{-1} Ld_C(c_{\hat j})
\cdot K^{-1} L^{\lambda(j-\hat j-1)}$$ where for the inequality we use Lemma \[reclem1\](b) combined with Lemma \[outside\] for the first factor. For the third factor we use bounded distortion and (G2)$_{n-1}$ for the binding critical orbit. It then follows that $
J^{j-i}(c_i) \geq L^{\alpha(j-i)- \alpha \hat
j}\geq L^{-\alpha i}.
$ Hence (G1$)_{n,c}$ holds.
As for (G2$)_{n,c}$, we introduce the bound-free structure starting from $c_0$ to $c_{n+1}$. Observe that the sum of the lengths of all bound periods for the orbit of $c_0$ up to time $n$ is $\leq \frac{2
\alpha}{\lambda}n \ll n$. This follows from (\[prop-s3.1\]) and Lemma \[reclem1\](a). Using Lemma \[reclem1\](b) for each bound segment and Lemma \[outside\] for each free segment in between two consecutive free returns, we have $$J^{n+1}(c_0) \geq \delta L^{2 \lambda n(1-\frac{\alpha}{\lambda})}\geq L^{ \lambda n}.$$ This completes the proof of Lemma \[derive0\].
The next expansion estimate at deep return times will be used in a crucial way in the construction of the parameter set $\Delta$.
\[exp\] If $c \in C$ and $\nu\leq n+1$ is a deep return time of $c_0=fc$, then $$J^{\nu}(c_0) \cdot D_{\nu}(c_0) \geq \sqrt{d_C(c_{\nu})}.$$
Let $0<n_1<\cdots<n_t<\nu$ denote all free returns in the first $\nu$ iterates of $c_0$, with $p_1,\cdots,p_t$ the corresponding bound periods. Let $$\Theta_{n_k}=\sum_{i=n_k}^{n_k+p_k-1}d_i^{-1}(c_0) \ \ \ \text{
and } \ \ \ \Theta_{0}=\sum_{i=0}^{\nu-1}d_i^{-1}(c_0)
-\sum_{k=1}^{t} \Theta_{n_k}.$$
[*Step 1 (Estimate for bound segments):*]{} Observe that $$\begin{aligned}
& \frac{\Theta_{n_k}}{J^{n_k+p_k}(c_0)} =\frac{1}{J^{p_k}(c_{n_k}) d_C(c_{n_k})
d_S(c_{n_k})} + \sum_{i = n_k+1}^{n_k+p_k-1}
\frac{1}{J^{n_k+p_k-i}(c_i) d_C(c_i) d_S(c_i)}.\end{aligned}$$ To estimate the first term we use Lemma \[reclem1\](b) to obtain $$\label{f1-deep}
\frac{1}{J^{p_k}(c_{n_k}) d_C(c_{n_k})
d_S(c_{n_k})}\leq \sigma^{-1}
(d_C(c_{n_k}))^{- \frac{16 \alpha}{\lambda}}.$$ To estimate the second term we let $\tilde c$ be the critical point to which $c_{n_k}$ is bound. By using Lemma \[dist\] and (\[derivative6\]) in the proof of Lemma \[dist\], which implies $d_C(c_i)>\frac{1}{2} d_C(\tilde c_{i-n_k-1})$ and $d_S(c_i)> \frac{1}{2} d_S(\tilde c_{i-n_k-1})$ for $i \in [n_k+1,
n_k+p_k-1]$, we have $$\begin{aligned}
J^{n_k+p_k-i}(c_i)d_C(c_i)d_S(c_i)
\geq K^{-1} J^{n_k+p_k-i}(\tilde c_{i-n_k-1})d_C(\tilde c_{i-n_k-1})d_S(\tilde c_{i-n_k-1}) \geq
\sigma^2L^{-5\alpha(i-n_k-1)}\end{aligned}$$ where the last inequality is obtained by using (G1), Lemma \[add-lem1-s2.3B\](a) and (G3) for $\tilde c$. Summing this estimate over all $i$ and combining the result with (\[f1-deep\]), $$\label{sublem2}
|(f^{n_k+p_k})'c_0|^{-1} \Theta_{n_k}\leq
\sigma^{-1} (d_C(c_{n_k}))^{- \frac{16 \alpha}{\lambda}} + \sigma^{-2}L^{6\alpha
p_k} \leq (d_C(c_{n_k}))^{-\frac{18 \alpha}{\lambda}}.$$ Here for the last inequality we use $\sigma \gg \delta$ and $L^{6\alpha p_k}\leq|d_C(c_{n_k})|^{-\frac{12\alpha}{\lambda}}$ from Lemma \[reclem1\](a).
[*Step 2 (Estimate for free segments):*]{} By definition, $$\frac{\Theta_0}{J^{\nu}(c_0)} = \sum_{i \in [0, \nu-1] \setminus
(\cup [n_k, n_k + p-1])}\frac{1}{J^{\nu - i}(c_i)d_C(c_i)
d_S(c_i)}.$$ Here we can not simply use (G3) for $d_S(c_i)$ in proving (\[quatro\]). We observe, instead, that either we have $v_i \not
\in S_{\sigma}$, for which $d_S(v_i)
> \sigma$; or $c_i \in S_{\sigma}$ for which we have
$$J^{\nu - i}(c_i) =J^{\nu-i-1}(c_{i+1})J(c_i)\geq
K^{-1} L^{\frac{\lambda}{3} (\nu - i + 1)} (d_S(c_i))^{-1}.$$ It then follows, by using $d_C(c_i) > \delta$, that $$\label{deep}
\frac{\Theta_0}{J^{\nu}(c_0)} \leq \sum_{i \in [0, \nu-1]
\setminus (\cup [n_k, n_k + p-1])} K L^{-\frac{\lambda}{3}
(\nu-i)} (\sigma \delta)^{-1} \leq \frac{1}{\sigma \delta}.$$ This estimate is unfortunately not good enough for (\[quatro\]). To obtain (\[quatro\]), we need to use $C_{\delta^{\frac{1}{20}}}$ in the place of $C_{\delta}$ to define a new bound/free structure for each free segment out of $C_{\delta}$. For the new free segments, we can now replace $\delta$ by $\delta^{\frac{1}{20}}$ in (\[deep\]); for the bound segments, we use (\[sublem2\]) with $d_C
> \delta$. We then obtain $$\label{quatro}
\frac{\Theta_0}{J^{\nu}(c_0)}\leq \frac{1}{\sigma
\delta^{\frac{1}{20}}} + \sum_{i \in [0, \nu-1] \setminus (\cup
[n_k, n_k + p-1])} K L^{-\frac{1}{3} \lambda (\nu - i)} \delta^{-
\frac{18 \alpha}{\lambda}} < \frac{1}{\delta^{\frac{1}{3}}}.$$
[*Step 3 (Proof of the Lemma):*]{} From the assumption that $\nu$ is a deep free return, we have $$|d_C(c_{n_k})|^{-1}\leq
|d_C(c_{\nu})|^{-2}\prod_{j\colon n_j\in(n_k,\nu)}
|d_C(c_{n_j})|^{-2}.$$ Substituting this into (\[sublem2\]) gives $$\label{plu}
J^{n_k+p_k}(c_0)^{-1}\Theta_{n_k}\leq
|d_C(c_{{\nu}})|^{-\frac{36\alpha}{\lambda}}\prod_{j\colon
n_j\in(n_k,\nu)}|d_C(c_{n_j})|^{-\frac{36\alpha}{\lambda}}.$$ Meanwhile, splitting the orbit from time $n_k+p_k+1$ to $\nu$ into bound and free segments and we have $$\label{plu2}
J^{\nu-n_k-p_k}(c_{n_k+p_k})^{-1}\leq \left(\prod_{j\colon
n_j\in(n_k,\nu)} J^{p_j}(c_{n_j})\right)^{-1}.$$ Multiplying (\[plu\]) with (\[plu2\]) gives $$\begin{aligned}
J^{\nu}(c_0)^{-1} \Theta_{n_k} \leq
|d_C(c_{\nu})|^{-\frac{36\alpha}{\lambda}}\prod_{j: \ n_j \in
(n_k, \nu)} \left(J^{p_j}(c_{n_j}) \cdot
|d_C(c_{n_j})|^{\frac{36\alpha}{\lambda}}\right)^{-1}\end{aligned}$$ $$\begin{aligned}
\leq |d_C(c_{\nu})|^{-\frac{36\alpha}{\lambda}} \prod_{j: \ n_j
\in (n_k, \nu)} (d_C(c_{n_j}))^{\frac{1}{2}} \ \leq \
\delta^{(t-k)/2}|d_C(c_{\nu})|^{-\frac{36\alpha}{\lambda}}.\end{aligned}$$ where for the second inequality we use Lemma \[reclem1\](b) for $J^{p_j}(c_{n_j})$, and for the last we use $d_C(c_{n_j}) < \delta$. Thus $$\begin{aligned}
\sum_{n_k \in [0, \nu-1]} J^{\nu}(c_{0}) ^{-1}\Theta_{n_k}&\leq
|d_C(c_{\nu})|^{-\frac{36\alpha}{\lambda}}
\sum_{k=1}^{t}\delta^{(t-k)/2}\leq
2|d_C(c_{\nu})|^{-\frac{36\alpha}{\lambda}}.\end{aligned}$$ Combining this with (\[quatro\]) we obtain $$\begin{aligned}
J^{\nu}(c_{0})^{-1}D_{\nu}^{-1}=
\sqrt {L}\left( \sum_{1 \leq k \leq t} J^{\nu}(c_{0}) ^{-1}
\Theta_{n_k}+ J^{\nu}(c_{0}) ^{-1} \Theta_0\right)\leq
\frac{1}{\sqrt{d_C(c_{\nu})}}.\end{aligned}$$ This completes the proof of Lemma \[exp\].
Measure of the set of excluded parameters {#s3}
=========================================
The rest of the proof of the theorem goes as follows. For $n > N$, let $$\Delta_n = \{ a \in \Delta_{N}: \ \ \text{(R$)_n$ and (G3$)_n$ hold}\},$$ and set $\Delta=\cap_{n\geq N}\Delta_n$. This is our parameter set in the theorem. In this section we show that $\Delta$ has positive Lebesgue measure, and $|\Delta|\to1$ as $L\to\infty$. To this end, define two parameter sets as follows: $$E_n=\{a\in\Delta_{n-1}\setminus\Delta_n\colon \text{(R$)_{n}$ fails for $f_a$}\};$$ $$E_n'=\{a\in\Delta_{n-1}\setminus\Delta_n\colon \text{(G3$)_{n}$ fails for $f_a$}\}.$$ Obviously, $\Delta_{n-1}\setminus\Delta_n\subset E_n\cup E_n'$. We show that the measures of these two sets decrease exponentially fast in $n$. Building on preliminary results in Sect.\[paradist\], in Sect.\[R1\] we estimate the measure of $E_n$. In Sect.\[R2\] we estimate the measure of $E_n'$ and complete the proof of the theorem.
\[indhyp\] [The following will be used in the argument: Let $a\in\Delta_{n-1}$. By the definition of $\Delta_{n-1}$, (R$)_{n-1}$ holds, and thus by Lemma \[derive0\], (G1$)_{n-1}$ and (G2$)_{n-1}$ hold.]{}
Equivalence of derivatives and distortion in parameter space {#paradist}
------------------------------------------------------------
For $c \in C$ and $i\geq0$, we define $\gamma_i^{(c)}: \Delta_{N} \to S^1$ by letting $\gamma_i^{(c)}(a) = f^{i+1}_a(c)$. In this subsection we denote $c_i(a)=\gamma_i^{(c)}(a)$ and $\tau_i(a) = \frac{d
c_i(a)}{da}$.
\[trans\] Let $a\in\Delta_{n-1}$. Then, for all $c\in C$ and $k \leq n$ we have $$\frac{1}{2}\leq\frac{\left|\tau_k(a)\right|}
{\left|(f_a^k)'c_0(a)\right|}\leq 2.$$
We have $$\label{f1-s3.2A}
\tau_{k}(a)=1+f_a'c_{k-1}(a)\cdot\tau_{k-1}(a).$$ Using this inductively and then dividing the result by $(f_a^{k})'c_0(a)$, which is nonzero by (G2$)_{n-1}$ we obtain $$\frac{\tau_{k}(a)}{(f_a^{k})'c_0(a)}=1+\sum_{i=1}^{k}\frac{1}
{(f^i)'c_0(a)}.$$ This lemma then follows from applying (G2)$_{n-1}$.
For $a_*\in[0,1)$, $c \in C$ and $c_0=fc$, define $$I_n(a_*,c)=[a_*-D_{n}(c_0), a_*+D_{n}(c_0)]$$ where $
D_n(c_0)$ is the same as the one in (\[Theta\]) with $f = f_{a_*}$.
\[samp\] Let $a_*\in\Delta_{n-1}$. For all $c\in C$, $a \in I_{n}(a_*,c)$ and $k \leq n$ we have $$\frac{1}{2} < \frac{|\tau_{k}(a)|}{|\tau_{k}(a_*)|}\leq 2.$$
Let $k \leq n$. To prove this lemma we inductively assume that, for all $j < k$, $$\label{samp-induct}
\frac{|\tau_j(a)|}{|\tau_j(a_*)|}\leq 2, \ \ \ \text{for all} \ a
\in I_{n}(a_*, c).$$ We then prove the same estimate for $j = k$. Write $I_{n}$ for $I_{n}(a_*,c)$. For all $a \in I_{n}$ we have $$\begin{aligned}
\left|\log\frac{|\tau_{j+1}(a)|}{|\tau_{j+1}(a_*)|}-
\log\frac{|\tau_{j}(a)|}{|\tau_{j}(a_*)|}\right| \leq&
\left|\log\frac{|\tau_{j+1}(a)|}{|\tau_j(a)|}-
\log|f_{a_*}'c_j(a_*)|\right|\\
+\left|\log\frac{|\tau_{j+1}(a_*)|}{|\tau_j(a_*)|}-
\log|f_{a_*}'c_j(a_*)|\right|&\leq (I)_a +(I)_{a_*} + (I\!I),\end{aligned}$$ where $$(I)_a = \left|\log\frac{\tau_{j+1}(a)} {\tau_j(a)}-
\log|(f_{a})'c_j(a)|\right|, \ \ \ (I\!I) = \left|\log
\frac{(f_{a})'c_j(a)}{(f_{a_*})'c_j(a_*)} \right|.$$ We claim that $$\label{twoa}
|(I\!I)|\leq 2L^{-\frac{1}{3}}\cdot d_j^{-1}\left[\sum_{i=0}^{n-1}d_i^{-1}\right]^{-1},$$ where $d_i$ is the same as the one in (\[Theta\]) with $f=f_{a_*}$.
To prove (\[twoa\]), first we use (\[samp-induct\]) and Lemma \[trans\] to obtain $$\begin{aligned}
\label{R1add}
|\gamma_j^{(c)}(I_{n})|\leq 2|\tau_j(a_*)||I_{n}|\leq
4|(f_{a_*}^j)' c_0(a_*)||I_{n}| \leq
2L^{-\frac{1}{2}}d_C(c_j(a_*))d_S(c_j(a_*)).\end{aligned}$$ This implies $d_S(c_j(a))\geq \frac{1}{2} d_S(c_j(a_*))$ for all $a\in I_{n}$. Thus from Lemma \[derivative\](a), $$|f''|\leq \frac{KL}{d_S(c_j(a_*))^{2}}\quad\text{on $\gamma_j^{(c)}(I_{n})$.}$$
\left|f_{a_*}'c_j(a)-f_{a_*}'c_j(a_*)\right| \leq
\frac{KL}{d_S(c_j(a_*))^{2}}|c_j(I_{n})| \leq
\frac{KL}{d_S(c_j(a_*))^{2}}|\tau_j(a_*)| \cdot |I_{n}|,$$ where (\[samp-induct\]) is again used for the last inequality. We have then $$\left|f_{a}'c_j(a)-f_{a_*}'c_j(a_*)\right| \leq
\frac{KL^{\frac{1}{2}}d_C(c_j(a_*))}{d_S(c_j(a_*))}d_j^{-1}
\left(\sum_{i=0}^{n-1} d_i^{-1}\right)^{-1}\leq L^{-\frac{1}{3}}|(f_{a_*}')c_j(a_*)|d_j^{-1}\left(\sum_{i=0}^{n-1} d_i^{-1}\right)^{-1},$$ where for the last inequality we used Lemma \[derivative\](a). (\[twoa\]) follows directly from the last estimate.
As for $(I)_a$, we have from (\[f1-s3.2A\]), $$\label{onea}
(I)_a\leq \log \left(1+ \frac{1}{|(f_{a})'c_j(a)| \cdot
|\tau_j(a)|}\right) < \frac{1}{|(f_{a})'c_j(a)| \cdot|\tau_j(a)|}
< L^{-\frac{\lambda}{3}j}$$ where for the last estimates we use the inductive assumption (\[samp-induct\]) and Lemma \[trans\] for $|\tau_j(a_*)|$. We also use $$|(f_{a})'c_j(a)| \geq\frac{1}{2}|(f_{a_*})'c_j(a_*)| \geq \frac{
L}{2}\cdot\min\{\sigma,L^{-\alpha j} \}$$ where the second inequality follows from (G1). The two-sided bound $\frac{1}{2}<\frac{|\tau_k(a)|}{|\tau_k(a_*)|}\leq 2$ now follows by combining (\[onea\]) and (\[twoa\]) over all $j < k$.
Exclusion on account of (R) {#R1}
---------------------------
To estimate the measure of parameters excluded due to (R$)_n$, we divide the critical orbit $\{ v_i, \ i \in [0, n] \}$ into free/bound segments for $a \in \Delta_{n-1}$, $c \in C$, and let $t_1 < t_2 < \cdots < t_{q} \leq n$ be the consecutive times of [*deep free returns*]{} to $C_{\delta}$. Let $c^{(i)}$ be the corresponding binding critical point at time $t_i$, and let $r_i$ be the unique integer such that $|v_{t_i}-c^{(i)}|\in(L^{-r_i},L^{-r_i+1}].$ We call $${\bf i}: = (t_1, r_1, c^{(1)}; \ t_2, r_2, c^{(2)}; \ \cdots; \ t_q, r_q, c^{(q)})$$ the [*itinerary*]{} of $v_0 = f_a(c)$ up to time $n$.
Let $E_n(c, {\bf i})$ denote the set of all $a\in E_n$ for which (R$)_{n,c}$ fails at time $n$, and the itinerary for $c_0(a)=f_ac$ up to time $n$ is ${\bf i}$. To estimate the measure of $E_n$, we first estimate the measure of $E_n(c, {\bf i})$. We then combine it with a bound on the number of all feasible itineraries.
\[pro-R1\] $|E_n(c, {\bf i})| \leq L^{-\frac{1}{3} R}$, where $R = r_1 + r_2 + \cdots + r_q$.
First of all, for $a_* \in E_n(c, {\bf i})$ and $1\leq k\leq q$ we define a parameter interval $I_{t_k}(a_*)$ as follows. If $|\gamma_{t_k}^{(c)}(I_{t_k}(a_*, c))|<\frac{1}{4}$, then we let $I_{t_k}(a_*)=I_{t_k}(a_*, c)$. Otherwise, take $I_{t_k}(a_*)$ to be the interval of length $\frac{1}{10|\gamma_{t_k}^{(c)} (I_{t_k}(a_*, c))|}|{I}_{t_k}(a_*, c)|$ centered at $a_*$.
The proof of Lemma \[pro-R1\] is outlined as follows. For a compact interval $I$ centered at $a$ and $r>0$, let $ r\cdot I$ denote the interval of length $r|I|$ centered at $a$. For each $k\in[1,q]$, we choose a countable subset $\{a_{k,i}\}_{i}$ of $E_n(c,\bold i)$ with the following properties:
- the intervals $\{I_{t_k}(a_{k,i})\}_{i}$ are pairwise disjoint and $E_n(c,\bold i)\subset \bigcup _{i}L^{-r_k/3}\cdot
{I}_{t_k}(a_{k,i});$
- for each $k\in[2,q]$ and $a_{k,i}$ there exists $a_{k-1,j}$ such that ${I}_{t_k}(a_{k,i})\subset
2L^{-r_{k-1}/3}\cdot{I}_{{t_{k-1}}}(a_{k-1,j})$.
Observe that the desired estimate follows from this: using (i) at step $q$ and then (ii) repeatedly from $k=q$ down to $k=2$, the set $E_n(c,\bold i)$ is covered by nested families of pairwise disjoint intervals, and summing lengths over these coverings gives $|E_n(c,\bold i)|\lesssim L^{-\frac{1}{3}(r_1+\cdots+r_q)}=L^{-\frac{1}{3}R}$.
For the definition of such a subset we need two combinatorial statements. The following elementary fact from Lemma \[exp\] is used in the proofs of these two sublemmas: if $a\in E_n(c,\bold i)$, then $(\gamma_{t_k}^{(c)}|I_{t_k}(a))^{-1}(c^{(k)})$ consists of a single point and is contained in $L^{-r_k/3}\cdot I_{t_k}(a)$.
\[lem1\] If $a$, $a'\in E_n(c,\bold i)$ and $a'\notin I_{t_k}(a)$, then $I_{t_k}(a)\cap I_{t_k}(a')=\emptyset$ .
Suppose $I_{t_k}(a)\cap I_{t_k}(a')\neq\emptyset$. Lemma \[dist\] gives $|I_{t_k}(a)|\approx |I_{t_k}(a')|$. This and $a'\notin I_{t_k}(a)$ imply $(\gamma_{t_k}^{(c)}|I_{t_k}(a))^{-1}(c^{(k)})\neq(\gamma_{t_k}^{(c)}|I_{t_k}(a'))^{-1}(c^{(k)}).$ On the other hand, by the definition of the intervals $I_{t_k}(\cdot)$, $\gamma_{t_k}^{(c)}$ is injective on $I_{t_k}(a)\cup I_{t_k}(a')$. A contradiction arises.
\[lem2\] If $a$, $a'\in E_n
(c,\bold i)$ and $a'\in L^{-r_k/3}\cdot I_{t_k}(a)$, then $I_{t_{k+1}}(a')\subset
2L^{-r_k/3}\cdot I_{t_k}(a)$.
We have $(\gamma_{t_k}^{(c)}|I_{t_k}(a))^{-1}(c^{(k)})
\notin I_{t_{k+1}}(a')$, for otherwise the distortion of $\gamma_{t_{k+1}}^{(c)}$ on $I_{t_{k+1}}(a')$ would be unbounded. This and the assumption together imply that one of the connected components of $I_{t_{k+1}}(a')\setminus\{a'\}$ is contained in $L^{-r_k/3}\cdot I_{t_k}(a)$. This implies the inclusion.
We are in a position to choose the subsets $\{a_{k,i}\}_{i}$ satisfying (i) and (ii). Lemma \[lem1\] with $k=1$ allows us to pick a subset $\{a_{1,i}\}$ such that the corresponding intervals $\{I_{t_1}(a_{1,i})\}$ are pairwise disjoint, and altogether cover $E_n(c,\bold i)$. Indeed, pick an arbitrary $a_{1,1}$. If $I_{t_1}(a_{1,1})$ covers $E_n(c,\bold i)$, then the claim holds. Otherwise, pick $a_{1,2}\in E_n(c,\bold i)\setminus{I}_{t_1}(a_{1,1})$. By Lemma \[lem1\], $I_{t_1}(a_{1,1})$ and ${I}_{t_1}(a_{1,2})$ are disjoint. Repeating this, by Lemma \[lem1\] we end up with a countable number of pairwise disjoint intervals. To check the inclusion in (i), let $a\in{I}_{t_1}(a_{1,i})\setminus L^{-r_1/3}\cdot{I}_{t_1}(a_{1,i})$. By Lemma \[exp\], $|f_a^{t_1+1}c-c^{(1)}|\gg L^{-r_1}$ holds. Hence $a\notin E_n(c,\bold i)$.
Given $\{a_{k-1,j}\}_j$, we choose $\{a_{k,i}\}_i$ as follows. For each $a_{k-1,j}$, similarly to the previous paragraph it is possible to choose parameters $\{a_{m}\}_m$ in $E_n(c,\bold i )\cap
L^{-r_{k-1}/3}\cdot{I}_{t_{k-1}}(a_{k-1,j})$ such that the corresponding intervals $\{I_{t_k}(a_m)\}_m$ are pairwise disjoint and altogether cover $E_n(c,\bold i )\cap
L^{-r_{k-1}/3}\cdot{I}_{t_{k-1}}(a_{k-1,j})$. In addition, Lemma \[lem2\] gives $\bigcup_m{I}_{t_k}(a_{m})\subset 2L^{-r_{k-1}/3}
\cdot{I}_{t_{k-1}}(a_{k-1,j}).$ Let $\{a_{k,i}\}_{i}=
\bigcup_{j}\{a_{m}\}$. This finishes the proof of Lemma \[pro-R1\].
\[boundlem2\] Assume that $f_a$ is such that $a \in \Delta_{N}$. Then the length of any bound period initiated by a return to $C_\delta$ is $\geq \frac{1}{2} \alpha N$.
From $a \in \Delta_{N}$ and Lemma \[derivative\](a) we have $J^{[\frac{1}{2} \alpha
N]}(c_0)\leq(\sigma^{-1} L)^{\frac{1}{2} \alpha N}$ where $\sigma = L^{-\frac{1}{6}}$. It then follows that $$D_{[\frac{1}{2} \alpha
N]}(c_0)\geq\frac{L^{-\frac{1}{2}}\sigma^2} {J^{[\frac{1}{2}
\alpha N]}(c_0)}\geq L^{-\frac{1}{2}}\sigma^2
(L^{-1}\sigma)^{\frac{1}{2} \alpha N}\gg \delta.$$ This implies the lemma.
Observe that $E_n= \bigcup E_n(c, {\bf i}),
$ where the union runs over all $c \in C$ and all feasible itineraries ${\bf i} = (t_1,
r^{(1)}, c^{(1)}; \cdots; t_q, r^{(q)},
c^{(q)})$. By Lemma \[boundlem2\] we have $
q \leq \frac{2 n}{\alpha N}.
$ We also have $
R = r_1 + r_2 + \cdots + r_q > \frac{\lambda \alpha n}{20}.
$ The number of choices for $(r_1, \dots, r_q)$ satisfying $r_1 + \cdots + r_q = R$ is $\binom{R+q}{q}$, and the possible number of choices for the deep free return times is $\binom{n}{q}$. By Stirling’s formula for factorials, $\binom{n}{q} \leq e^{\beta(N) n}$ and $\binom{R+q}{q} \leq e^{\beta(N) R}$, where $\beta(N) \to 0$ as $N \to \infty$. Using these and Lemma \[pro-R1\] we conclude that $$\label{Delete-R1}
|E_n| \leq \sum_{1 \leq q \leq \frac{2 n}{\alpha N}} \ \sum_{R > \frac{\lambda \alpha n}{20}}(\# C)^q \binom{n}{q} \binom{R+q}{q} L^{- \frac{R}{3}} \leq L^{-\frac{\lambda \alpha n}{100}},$$ where the last inequality holds for sufficiently large $N$.
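For the reader who wants to see the Stirling-type bounds in action, here is a minimal Python illustration (sample values only; it is not part of the proof). It compares $\frac{1}{n}\log\binom{n}{q}$, computed via the log-gamma function, with the entropy bound $H(q/n)=-t\log t-(1-t)\log(1-t)$, $t=q/n$; since $q/n\leq 2/(\alpha N)$, this exponent tends to $0$ as $N\to\infty$, which is the content of the bound $\binom{n}{q}\leq e^{\beta(N)n}$.

```python
from math import lgamma, log

def log_binom(n, q):
    """log of the binomial coefficient C(n, q), via log-gamma."""
    return lgamma(n + 1) - lgamma(q + 1) - lgamma(n - q + 1)

def entropy(t):
    """Natural-log entropy H(t) = -t log t - (1 - t) log(1 - t)."""
    return -t * log(t) - (1.0 - t) * log(1.0 - t)

# Sample values only: q/n plays the role of 2/(alpha*N), which is small when N is large.
n = 10_000
for q in (1000, 300, 100, 30, 10):
    t = q / n
    print(f"q/n = {t:6.4f}   (1/n) log C(n,q) = {log_binom(n, q)/n:8.5f}   "
          f"entropy bound H(q/n) = {entropy(t):8.5f}")
# Both columns shrink toward 0 as q/n decreases, i.e. C(n,q) <= exp(beta * n) with beta small.
```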
Exclusion on account of (G3) {#R2}
----------------------------
Estimates for exclusions due to (G3$)_n$ are much simpler.
\[wrap2\] For $a_* \in \Delta_{n-1}$, $c \in C$, $
|\gamma_{n}^{(c)}(I_{n}(a_*, c))|\geq L^{-3\alpha n}$.
Write $f=f_{a_*}$, $c_i=f^{i+1}c$. Since $a_* \in \Delta_{n-1}$, (G1)$_{n-1}$ and (G2)$_{n-1}$ hold for $c_0$ by Lemma \[derive0\]. Using Lemmas \[trans\] and \[samp\], we have $$|\gamma_n^{(c)}(I_n(a_*, c))| \geq \frac{1}{4} J^{n}(c_0) \cdot |I_n(a_*,c)| \geq \frac{1}{\sqrt{L}} \left(\sum_{i=0 }^{n-1}(J^{n}(c_0) d_i)^{-1}\right)^{-1}.$$ If $c_i\notin S_\sigma$, then $$\begin{aligned}
J^{n}(c_0)d_i&=
J^{n-i}(c_i)d_C(c_i)d_S(c_i)\geq K L^{-2\alpha i}\sigma.\end{aligned}$$ If $c_i\in S_\sigma$, then $d_S(c_i)\geq |f'c_i|^{-1}$ from Lemma \[derivative\](a), and we have $$\begin{aligned}
J^{n}(c_0)d_i&=
J^{n-i}(c_i)d_C(c_i)d_S(c_i)\geq J^{n-i-1}(c_{i+1})
d_C(c_i)\geq L^{-\alpha(i+1)}\sigma\end{aligned}$$ where (G1)$_{n-1}$ is used for the last inequality. Hence we obtain $$\begin{aligned}
|\gamma_n^{(c)}(I_n(a_*, c))| >
\frac{1}{\sqrt{L}}\left(\sum_{i=0}^{n-1}K^{-1} L^{2\alpha i} \sigma^{-1}\right)^{-1}\geq L^{- 3\alpha n}.\end{aligned}$$
We now estimate the measure of $E_n'$. For $c\in C$ and $s\in S$, let $E_n'(c,s)$ denote the set of all $a\in E_n'$ such that $d(\gamma_n^{(c)}(a),s)< L^{-4 \alpha n}$. For $a\in E_n'(c,s)$, define an interval $I_n(a)$ centered at $a$, similarly to the definition of $I_{t_k}(\cdot)$ in the beginning of the proof of Lemma \[pro-R1\] (replace $t_k$ by $n$). We claim that:
- if $a\in E_n'(c,s)$, then $I_n(a)\setminus L^{-\alpha n/2}\cdot I_n(a)$ does not intersect $E_n'(c,s)$;
- if $a,\tilde a\in E_n'(c,s)$ and $\tilde a\notin
I_n(a)$, then $I_n(a)\cap I_n(\tilde a)=\emptyset$.
The first item follows from Lemma \[wrap2\] and Lemma \[samp\]. The second follows from the injectivity argument used in the proof of Lemma \[lem1\]. It follows that $|E_n'(c, s)| \leq L^{-\frac{1}{2}\alpha n},$ and therefore $$\label{delete-D2}
|E_n'| < \# C \# S \cdot L^{-\frac{1}{2} \alpha n} < L^{- \frac{1}{3} \alpha n}.$$ Estimates (\[Delete-R1\]) and (\[delete-D2\]), together with Lemma \[initial-1\], yield $|\Delta|\to1$ as $L\to\infty$.
Proofs of Lemmas \[initial-1\] and \[outside\]
==============================================
In this appendix we prove Lemmas \[initial-1\] and \[outside\]. Since various ideas and technical tools developed in the main text are needed, the reader is advised to go through the main text before turning to the details of these two proofs.
Proof of Lemma \[initial-1\].
-----------------------------
Let $f=f_{a_*}$ and $c\in C$. Let $I_n(a_*,c)=[a_*-D_n(c_0),a_*+D_n(c_0)]$, where $D_n(c_0)$ is the same as the one in (\[Theta\]) with $f=f_{a_*}$. Let $\gamma_n^{(c)}(a)=f_a^{n+1}c$ and $\tau_n^{(c)}(a) =
\frac{d}{da} \gamma_n^{(c)}(a)$.
\[lem1-appA\] Let $1\leq n\leq N_0$, $f=f_{a_*}$, $a_* \in \Delta_{n-1}$ and let $c \in C$. We have:
- $J^{j-i}(c_i) \geq (K^{-1} L
\sigma)^{j-i} $ for all $0 \leq i < j \leq n$;
- $
J^n (x)\leq 2J^n(y)$ for all $x,y
\in[c_0-D_n(c_0),c_0+D_n(c_0)]$;
- $\frac{1}{2}\leq\frac{|\tau_n^{(c)}(a_*)|}{J^n(c_0)}\leq 2$;
- $\frac{1}{2}\leq\frac{|\tau_n^{(c)}(a)|}{|\tau_n^{(c)}(a_*)|}\leq2$ for all $a\in I_n(a_*,c)$;
- $|\gamma_n^{(c)}(I_n(a_*,c))|\geq L^{1/7}\sigma$.
Item (a) follows directly from Lemma \[derivative\]. (b) is in Lemma \[dist\], (c) is in Lemma \[trans\] and (d) is in Lemma \[samp\]. As for (e), let $i\in[0,n-1]$. (a) gives $$\begin{aligned}
J^n(c_0)d_i(c_0)=
J^{n-i}(c_i)
d_C(c_i)d_S(c_i)\geq
(K_0^{-1}L\sigma)^{n-i}\sigma^2,\end{aligned}$$ where $d_i(c_0)$ is the same as the one in (\[Theta\]) with $f=f_{a_*}$. Taking reciprocals and then summing the result over all $i\in[0,n-1]$ we have $$\sum_{i=0}^{n-1}J^n(c_0)^{-1}d_i(c_0)^{-1}
\leq\frac{K_0}{L\sigma^3}.$$ Recall that $\sigma = L^{-\frac16}$. We have $$|\gamma_n^{(c)}(I_n(a_*,c))|^{-1}\leq KJ^n(c_0)^{-1}D_n^{-1}(c_0)= K
\sqrt{L}\cdot
\sum_{i=0}^{n-1}J^n(c_0)^{-1}d_i(c_0)^{-1}\leq
\frac{1}{L^{\frac{1}{7}}\sigma},$$ where the first inequality follows from (c) and (d).
For $c \in C$, $s \in C \cup S$, let $E_n(c,s)$ denote the set of all $a\in\Delta_{n-1}\setminus\Delta_n$ so that $d(\gamma_n^{(c)}(a), s)\leq\sigma$. For $a\in E_n(c, s)$, define a parameter interval $I_n(a)$ as follows. If $|\gamma_n^{(c)}(I_n(a,
c))|<\frac{1}{4}$, then we let $I_n(a)=I_n(a, c)$. Otherwise, define $I_n(a)$ to be the interval of length $\frac{1}{10|\gamma_n^{(c)} (I_n(a, c))|}|{I}_n(a, c)|$ centered at $a$. We claim that:
- if $a\in E_n(c,s)$, then $I_n(a)\setminus L^{-1/8}\cdot I_n(a)$ does not intersect $E_n(c,s)$;
- if $a,\tilde a\in E_n(c,s)$ and $\tilde a\notin
I_n(a)$, then $I_n(a)\cap I_n(\tilde a)=\emptyset$.
The first item follows from Lemma \[lem1-appA\](c)-(e). The second follows from the fact that the map $\gamma_n^{(c)}$ is injective on $I_n(a)$. Observe that from this we have $|E_n(c, s)| < L^{-\frac{1}{8}}$, and it follows that $
|\Delta_{n-1} \setminus \Delta_n| < \#C (\#C + \#S) L^{-\frac18},
$ and we obtain $$|\Delta_{N}| \geq 1 - \sum_{n=1}^{N} |\Delta_{n-1}
\setminus \Delta_n| \geq 1 - L^{-\frac19}.$$ This completes the proof of Lemma \[initial-1\].
Proof of Lemma \[outside\]
--------------------------
We need the materials in Sect.\[s2.3\] and the definition of the bound/free structure in Sect.\[s2.4\].
Let $\delta_0 = L^{-\frac{11}{12}}$. For the part of the orbit of $x$ lying outside $C_\delta$ we first introduce the bound/free structure of Sect.\[s2.4\], using $C_{\delta_0}$ in place of $C_{\delta}$. We then prove a result similar to Lemma \[reclem1\], from which Lemma \[outside\] follows directly. To make this argument work, we first need to show that the bound periods initiated on $C_{\delta_0} \setminus
C_{\delta}$, as defined in Sect.\[s2.3\], are $\leq
N$. Let $I_{p}(c)$ and the bound period be defined the same as in Sect.\[s2.3\].
\[cover\] If $f = f_a$ is such that $a \in \Delta_{N}$, then for each $c\in C$ we have $[c-\delta_0,c-\delta]\cup [c+\delta,c+\delta_0]\subset \bigcup_{1\leq p\leq N}I_{p}(c)$.
Let $c_0 = fc$. It suffices to show that $$\label{2.3.1}
L^{-1}D_{N}(c_0)<\delta^2<\delta_0^2 < L^{-1}D_1(c_0)$$ where $D_n(c_0)$ is as in (\[Theta\]). Since $a\in\Delta_{N}$ we have $J^{N-1}(c_0)\geq
(KL\sigma)^{N-1}$ from Lemma \[derivative\]. It then follows that $$D_{N}(c_0)\leq d_{N-1}(c_0) \leq
(KL\sigma)^{-N+1}<L\delta^2.$$ On the other hand we have $$D_1(c_0)=\frac{1}{\sqrt{L}}\cdot d_C(c_0)d_S(c_0)\geq
L^{-\frac{1}{2}}\sigma^2,$$ from which the last inequality of (\[2.3.1\]) follows directly.
With the help of Lemma \[cover\], we know that the bound/free structure for the part of the orbit of $x$ outside $C_\delta$ is well-defined.
\[initial\] Let $p\geq2$ be the bound period for $y\in
C_{\delta_0}\setminus C_\delta$. Then:
- for $i\in[1,p]$, we have $d_C(f^{i}y)>\delta_0$ and $|(f^{i-1})'(c_0)| D_{i}(c_0) \geq L^{-\frac12}\sigma^{2}$;
- $J^{p}(y)\geq K^{-1} L^{-\frac12}\sigma^2
|c-y|^{-1}\geq K^{-1} L^{-\frac12} \sigma^2
(K_0^{-1}L\sigma)^{p-2}$;
- $J^{p}(y) \geq L^{\frac{1}{300}p}.$
\(a) is a version of Sublemma \[add-lem1-s2.3B\]. The estimates are better because here we have $J^{i-j}(c_j) \geq (KL\sigma)^{i-j}$, and $d_C(c_i), d_S(c_i)>
\sigma$. As for (b), we have $$J^{p}(y)\geq K^{-1} L |c-y|\,J^{p-1}(c_0) \geq
K^{-1}
|c -y|^{-1} J^{p-1}(c_0) D_{p}(c_0),$$ where for the last inequality we use $D_{p}(c_0) \leq K L |c -
y|^2$ by the definition of $p$. The first inequality of (b) then follows by using (a). The second inequality of (b) follows from $$|c-y|\leq D_{p-1}(c_0)\leq\frac{d_{p-2}(c_0)}{\sqrt{L}}
\leq\frac{(K_0^{-1}L\sigma)^{-p+2}}{\sqrt{L}}.$$ Here, the last inequality holds because $a\in\Delta_{N}$.
If $p \geq 10$, then (c) is much weaker than the second inequality of (b). If $p < 10$, then (c) follows from the first inequality of (b) and the fact that $
|c - y|^{-1} \geq \delta_0^{-1} = L^{\frac{11}{12}}.
$
We are in a position to finish the proof of Lemma \[outside\]. If $f^nx$ is free (this includes the case $f^nx \in C_\delta$), then we use Lemma \[initial\](c) for bound segments and $|f'| > K^{-1} L^{\frac{1}{12}}$ for iterates in free segments, which lie outside $C_{\delta_0}$. This proves Lemma \[outside\](b). If $f^nx$ is bound, then there is a drop by a factor $> \delta$ at the last free return that cannot be recovered. In this case we need the factor $\delta$ in Lemma \[outside\](a).
[^1]: The first named author is partially supported by a Grant-in-Aid for Young Scientists (B) of the Japan Society for the Promotion of Science (JSPS), Grant No. 23740121, and by the Aihara Project, the FIRST Program from JSPS, initiated by the Council for Science and Technology Policy. The second named author is partially supported by a grant from the NSF.
---
abstract: 'The formation of low surface brightness galaxies is an unavoidable prediction of any hierarchical clustering scenario. In these models, low surface brightness galaxies form at late times from small initial overdensities, and make up most of the faint end of the galaxy luminosity function. Because there are tremendous observational biases against finding low surface brightness galaxies, the observed faint end of the galaxy luminosity function may easily fall short of predictions, if hierarchical structure formation is correct. We calculate the number density and mass density in collapsed objects as a function of baryonic surface density and redshift, and show that the mass in recently formed low surface brightness galaxies can be comparable to the mass bound into “normal” high surface brightness galaxies. Because of their low gas surface densities, these galaxies are easily ionized by the UV background and are not expected to appear in HI surveys. Low surface brightness galaxies (LSBs) are not a special case of galaxy formation and are perhaps better viewed as a continuance of the Hubble sequence.'
author:
- 'Julianne J. Dalcanton'
- 'David N. Spergel'
- 'F. J. Summers'
title: The Formation of Low Surface Brightness Galaxies
---
Introduction
============
Over the past few decades, more and more machinery has been developed to attempt to explain the distributions of galaxy luminosities and morphologies that we see today. A standard picture has developed in which galaxies form through gravitational collapse of small density perturbations. In hierarchical models, the smallest astronomical objects form early, with progressively larger objects forming at progressively later times through merging of the earlier generations of smaller objects. Galaxies form fairly late in these models, with clusters of galaxies assembling only in the most recent times. Hundreds of analytical and numerical papers have explored the predictions of hierarchical structure formation and have been able to reproduce many of the properties of normal galaxies and clusters (see for example White & Frenk 1991, Efstathiou & Silk 1983, and the recent review by White 1994). However, as discussed in these papers, hierarchical structure formation consistently predicts many more faint galaxies than are actually observed. The faint-end of the predicted luminosity function is much steeper than luminosity functions generated from catalogs of nearby galaxies.
Simultaneously, there has been increasing attention focused on the existence of an often overlooked population of low surface brightness galaxies (LSB’s). Because of the brightness of the night sky, observers are naturally biased towards detecting high-surface brightness galaxies, whose high contrast against the background makes them easily detectable. This strong selection effect, noted by Zwicky (1957) and further explored by Disney (1976), can lead to strong correlations in the properties of observed galaxies that are not intrinsic to the objects (Disney 1976, Allen & Shu 1979), most notably the universal surface brightness of spiral disks discovered by Freeman (1970). Whenever the bias against finding low surface brightness galaxies has been reduced, however, LSB’s have appeared which violate the Freeman relation (see surveys by Impey, Bothun, & Malin 1988, Bothun, Impey, & Malin 1991, Schombert et al. 1992, Schombert & Bothun 1988, Irwin, Davies, Disney, & Phillipps 1990, Turner, Phillipps, Davies, & Disney 1993, Sprayberry et al. 1995, and references therein). These surveys show that LSB’s do exist and have been previously overlooked as a potentially significant species in the galaxy menagerie. The existence of elusive but omnipresent LSB’s implies that we have been attempting to solve the puzzle of galaxy formation with many pieces missing.
Thankfully, our ignorance is succumbing to a growing body of observations of low surface brightness galaxies. First, LSB’s exist at every surface brightness to which surveys have been sensitive. They have been detected down to central surface brightnesses of 26.5 mag/arcsec$^2$ in $V$ (Dalcanton 1995); fainter than this, it is difficult to separate LSB’s from true fluctuations in the optical extragalactic background due to distant clusters of galaxies (Shectman 1973, Dalcanton 1994). Second, LSB’s exist at every size, from minute dwarf galaxies in the Local Group, through galaxies with scale lengths typical of “normal” high surface brightness (HSB) galaxies (McGaugh & Bothun 1994), up to a handful of truly giant galaxies with scale lengths of $\approx 50$ kpc, typified by Malin I[^1] (Bothun et al. 1987). Third, LSB’s are roughly a factor of two less correlated than HSB galaxies in the CfA and IRAS catalogs on scales $>1\Mpc$ (Mo, McGaugh, & Bothun 1994), and even less correlated on smaller scales (Bothun et al. 1993). Finally, extensive studies of LSB colors (McGaugh & Bothun 1994, Knezek 1993) and spectroscopy of HII regions (McGaugh 1994, McGaugh & Bothun 1994) show that LSB’s have rather blue colors (although with large scatter) that cannot be explained by either low metallicity or high current star formation rates; the blue colors are more readily explained by LSB’s having a relatively young mean age with a relatively long time-scale for star formation.
The properties of known LSB’s fit naturally with their likely formation scenario. Their clustering properties and long formation timescales immediately suggest that LSB’s form from the collapse of smaller amplitude overdensities than normal HSB galaxies (Mo, McGaugh, & Bothun 1994), with the exception of Malin-type LSB’s which likely form from rare, isolated $3\sigma$ peaks (Hoffman, Silk, & Wyse 1992). For a simple top-hat collapse of an overdense region, small amplitude peaks in the background density take longer to reach their maximum size and longer to recollapse than higher amplitude peaks, implying that galaxies that collapse from the smaller peaks will have later formation times and longer collapse times. In any theory with Gaussian fluctuations, small amplitude galaxy-sized peaks are more likely to be found in underdense regions, where their mean level is pulled down by the large scale underdensity, suggesting that objects which collapse from small amplitude peaks will be less correlated than those that collapse from larger ones (Kaiser 1984, White et al. 1987). These are exactly the properties being uncovered for LSB galaxies.
In this paper, we expand upon this idea, showing that LSB’s are a general [*prediction*]{} of existing hierarchical theories of structure formation. LSB’s naturally make up most of the faint end of the predicted galaxy luminosity function, which, given the severe underrepresentation of these galaxies in all existing catalogs, explains how current low measurements of the faint end of the luminosity function can be reconciled with the theoretical predictions of large numbers of low luminosity galaxies. Furthermore, we show that the mass density in recently forming LSB’s can be comparable to the mass in “normal” galaxies, particularly in models of high bias. In §\[otherastro\], we conclude with a discussion of the astrophysical implications of the existence of a large population of LSBs, in particular considering HI surveys, Lyman-$\alpha$ absorbers, the faint blue galaxy excess, and the Tully-Fisher relation.
The Formation of LSB’s
======================
Theoretical Assumptions
-----------------------
In the standard gravitational instability picture for galaxy formation, initial overdensities in the distribution of mass expand more slowly than the universe as a whole, eventually separating from the global expansion and collapsing onto themselves (Lifshitz 1946). The collapsed regions still continue to grow roughly isothermally in size and mass, as successively larger shells of material themselves collapse onto the initial overdensity and virialize, or as adjacent collapsed regions merge together. It is straight-forward to track the collapse of the non-dissipative material to form a dark halo. The collapse of the baryonic matter, however, is more complicated. The baryons are dissipative, subject to pressure, heating, cooling, and feedback due to star formation. They undergo a more complicated collapse within the dark matter halo to form the stellar disks and ellipsoids.
Because the surface brightness of a galaxy will be more closely linked to the baryonic surface density than to the dark matter surface density, we must find a way to relate the easily calculated collapse of the dark matter to the collapse of the baryons. Thankfully, with a few reasonable assumptions, the structure of the dissipative baryonic matter can be simply related to the structure of the dark halo (Faber 1982). These assumptions are (i) the baryonic fraction within initial overdensities is constant and (ii) the net angular momentum that the baryonic component acquires during the collapse is ultimately responsible for halting the collapse. The first assumption is supported by the hydrodynamic simulations of Evrard, Summers, & Davis (1994), which find a roughly constant ratio between the baryonic and dark matter for all galaxies (see their Figure 12). The second assumption appears to be valid for disk systems that are supported by rotation rather than by random motions. Although we will use the term “baryonic” interchangeably with “dissipative” for the duration of this paper, we recognize that there may be baryons which have been bound into an early generation of stars (Population III) or black holes and thus will not participate in a dissipative collapse.
With these assumptions, if the non-baryonic component of a shell collapses to form an isothermal halo of radius $r_H$, then the baryonic component collapses to a radius of $r_*$ where the collapse factor $\Lambda(\lambda)$ is defined to be
$$\label{collapse}
\Lambda^{-1}(\lambda) \equiv {r_* \over r_H}
= \lambda \, [\sqrt{(F/2\lambda)^2 + 2}
- (F/2\lambda)]$$
(Faber 1982). $F$ is the ratio of the dissipative baryonic mass to the total mass within the shell, and $\lambda$ is the dimensionless spin parameter
$$\lambda = {\cal L} \, |E|^{1/2} \, G^{-1} \, M^{-5/2}$$
(Peebles 1969). Here, ${\cal L}$ is total angular momentum of the luminous and non-luminous matter, $E$ is its energy, and $M$ its mass. Because the baryonic component collapses to a fixed fraction of the size of the non-baryonic component (for a given value of $\lambda$), we may now use the properties of the dark matter halos to predict the properties of the visible matter.
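As a rough numerical illustration of equation (\[collapse\]) (not a new result; the baryon fraction $F=0.05$ and the particular spin values are assumptions for the example), the short Python sketch below evaluates the baryonic collapse factor $\Lambda(\lambda)=r_H/r_*$.

```python
from math import sqrt

def collapse_factor(lam, F=0.05):
    """Baryonic collapse factor Lambda(lambda) = r_H / r_* for dissipative collapse
    inside a fixed isothermal halo (eq. [collapse]); F is the baryon mass fraction."""
    x = F / (2.0 * lam)
    return 1.0 / (lam * (sqrt(x * x + 2.0) - x))

# Spin parameters spanning the range discussed in this paper (lambda ~ 0.05 +/- 0.05)
for lam in (0.02, 0.03, 0.06, 0.075, 0.09, 0.15):
    print(f"lambda = {lam:5.3f}  ->  Lambda = {collapse_factor(lam):5.1f}")
```

For $F=0.05$, $\lambda=0.075$ gives $\Lambda\approx12$, i.e. the baryons settle at roughly a tenth of the halo radius, while $\lambda=0.02$ gives a collapse factor closer to $10^2$.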
If we further assume that galaxies, on average, have a similar ratio between their baryonic mass and their luminosity (i.e., a similar efficiency in converting baryons to light-producing stars), then we now have a way to relate the surface density of the dark halo to the surface brightness of the luminous galaxy. While other scenarios may argue that the low surface brightness of LSB’s is a result of baryonic physics that we do not consider in this calculation (for example, low star-formation efficiencies due to low metallicity or the absence of tidal triggering, gas loss through supernova explosions or ionization by the UV background), we prefer to explore the straightforward assumption that a galaxy with fewer baryons per unit area will in general form fewer stars per unit area. For the purposes of this paper, we relegate this rich astrophysics to being only a perturbation to a global proportionality between baryonic surface density and surface brightness. The results will show that even with this naive assumption, the bulk properties of LSBs can be understood as the inevitable result of the properties of low surface density galaxies.
We would venture that our neglect of detailed physical processes within the galaxies causes us to systematically overestimate the surface brightness of LSBs; almost all mechanisms for reducing the star formation efficiency are likely to be more effective in low surface density galaxies than in high surface density ones. First, if star formation requires a large reservoir of neutral gas, then the extragalactic ultraviolet background, which will completely ionize low surface density galaxies (§\[HI\]), may suppress or shut off the formation of stars in these galaxies (Babul & Rees 1992). Second, if star formation is associated with disks becoming Toomre unstable to the formation of spiral structure (implying that the Toomre stability parameter $Q\propto \sigma\kappa / G\Sigma$ is smaller than some constant, where $\sigma$ is the velocity dispersion of the gas, $\Sigma$ is the surface density of the disk, and $\kappa$ is the epicyclic frequency, set by the disk structural parameters (see Kennicutt 1989)), then one would expect lower surface density disks to be less unstable to star formation. Observations by van der Hulst et al. (1993) find that LSB galaxies do have HI surface densities that fall below the critical density implied by $Q$ and are about a factor of 2 lower than the HI surface densities of HSB galaxies. Third, if star formation is enhanced by tidal interactions with nearby galaxies, then galaxies that have fewer close neighbors will have lower star formation efficiencies. We will argue that, as seen observationally and discussed by Mo, McGaugh, & Bothun (1994), low surface density galaxies should have lower correlation amplitudes and thus fewer close neighbors, which would once again lead to lower surface brightnesses[^2]. Fourth, low surface density, low mass galaxies are more likely to lose their gas through supernova-driven winds than high surface density, high mass galaxies. This both shuts off star formation prematurely and evolves the galaxy towards larger sizes through sudden mass loss and subsequent revirialization (Dekel & Silk 1986, DeYoung & Heckman 1994), both mechanisms which lead to lower surface brightnesses for low surface density objects.
In spite of our previous assertion to the contrary, there is one piece of baryonic physics which we cannot ignore, namely pressure. We are ultimately interested in the gravitational collapse of baryons, but a collapsing baryonic gas feels both the inward tug of gravity and a resistive force due to its own internal pressure. For sufficiently large masses, the self-gravity of the combination of baryons and dark matter is strong enough for the effects of pressure to be negligible. However, for small masses the pressure is sufficient to support the baryonic gas against collapse, in spite of the additional gravitational pull provided by the collapsed dark matter halo. Because low surface density galaxies are more likely to have lower total masses than high surface density galaxies, we must consider the mass scales on which the effects of pressure become important; we will do so in §\[jeans\].
Properties of LSBs
------------------
We now examine the conditions that lead to variations in baryonic surface density (and thus surface brightness) among galaxies. We choose to compare the properties of galaxies within a fixed radius rather than at a constant mass scale.
At constant mass, comparing LSB’s to HSB’s is a comparison between galaxies of widely different scale lengths: high surface brightness dwarf ellipticals would be compared to giant low-surface brightness galaxies. By comparing the surface brightness of galaxies at constant mass, one is saying that what distinguishes LSB’s from HSB’s is that LSB’s have abnormally large scale lengths for their mass. Instead, we will compare galaxies of different masses, but with similar scale lengths. This more closely mimics how one draws the distinction between LSB’s and HSB’s. We therefore define the effective surface density, ${\bar\Sigma}$ to be the mean surface density within some radius $r_*$,
$$\label{sigma_m}
{\bar\Sigma} = { M(r<r_*) \over {\pi\,r_*^2}}.$$
where $M(r<r_*)$ is the mass of the baryonic component within a radius $r_*$. We can relate $M(r<r_*)$ to the total mass of the halo,
$$\label{mass}
M(r<r_*) = F \, M_{tot}(r<r_H) = F \, {{4\pi}\over 3} \, \rho_0 \, r_0^3,$$
where $\rho_0$ is the current density of the universe; here we assume that the dark matter that was initially in a shell of comoving radius $r_0$ has collapsed to a virial radius $r_H$, while baryons from the same shell have collapsed and dissipated to a radius $r_*$, where they are supported by their angular momentum. Equations \[sigma\_m\] and \[mass\] imply that $r_0$ is determined by the choice of ${\bar\Sigma}$ and $r_*$:
$$\label{r0_sigma}
r_0({\bar\Sigma},r_*) = \, r_* \,
\left( {3 {\bar\Sigma} \over 4 {F\,\rho_0\,r_*}}\right)^{1/3}$$
In turn, the redshift of collapse can be determined from $r_0$ and $r_*$ as follows. In a spherically symmetric top-hat collapse (Gott and Gunn 1972, Peebles 1980), the dark matter virializes at a radius of half its size at maximum expansion, the density within a shell at maximum expansion is $(3\pi/4)^2$ times the background density, and the collapse time is twice the time of maximum expansion. Therefore, for top-hat collapse in an $\Omega=1$ universe,
$$\begin{aligned}
r_0 &=& r_H \, (18\pi^2)^{1/3} \, (1 + z_c) \\
&=& r_* \, \Lambda \, (18\pi^2)^{1/3} \, (1 + z_c) \label{r0_rH}\end{aligned}$$
where $z_c$ is the redshift at which the shell has collapsed and virialized. Equations \[mass\] and \[r0\_rH\] may be substituted into equation \[sigma\_m\] to solve for the redshift of collapse of a galaxy with mean baryonic surface density ${\bar \Sigma}$ within $r_*$:
$$\label{zc}
z_c({\bar\Sigma},r_*) =
\left({{\bar\Sigma} \over {24 \pi^2
F\,\rho_0\,r_*}}\right)^{1/3} \,
\Lambda^{-1}(\lambda) - 1.$$
Alternatively, this equation may be rearranged to express ${\bar\Sigma}$ in terms of $z_c$:
$$\label{sigma_z}
{\bar\Sigma}=24\pi^2 \, F \, \rho_0 r_* \, \Lambda^3(\lambda) \,
(1+z_c)^3,$$
which immediately implies that low surface density galaxies form at later times than high surface density galaxies with the same angular momentum. If, as we have asserted, surface density is proportional to surface brightness, then low-surface brightness galaxies have formed more recently than high-surface brightness galaxies of similar size. This also suggests that for galaxies of a given size and angular momentum, there is a minimum surface density ${\bar\Sigma}_0=24\pi^2\,F\,\rho_0\,r_* \,\Lambda^3(\lambda)$ below which few galaxies should exist.
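To attach numbers to equations (\[zc\]) and (\[sigma\_z\]), the following Python sketch is an order-of-magnitude illustration only: it assumes $\Omega=1$ with $H_0=50$ km s$^{-1}$ Mpc$^{-1}$ (so that $\rho_0$ is the critical density), $F=0.05$, $\lambda=0.075$, and Milky-Way-like values of ${\bar\Sigma}$ and $r_*$ of the kind quoted later in §\[surfbriden\]. It evaluates the collapse redshift $z_c({\bar\Sigma},r_*)$ and the minimum surface density ${\bar\Sigma}_0$ for which collapse has occurred by $z=0$.

```python
from math import pi, sqrt

G_SI = 6.674e-11           # m^3 kg^-1 s^-2
MSUN = 1.989e30            # kg
PC   = 3.086e16            # m
H0   = 50e3 / 3.086e22     # 50 km/s/Mpc in s^-1 (assumed)

# Present mean density for Omega = 1 (the critical density), in Msun / pc^3
rho0 = 3.0 * H0**2 / (8.0 * pi * G_SI) * PC**3 / MSUN

def collapse_factor_inv(lam, F=0.05):
    """Lambda^{-1}(lambda) = r_* / r_H from eq. [collapse]."""
    x = F / (2.0 * lam)
    return lam * (sqrt(x * x + 2.0) - x)

def z_collapse(sigma_bar, r_star_pc, lam, F=0.05):
    """Collapse redshift z_c(Sigma_bar, r_*) from eq. [zc];
    sigma_bar in Msun/pc^2, r_star_pc in pc."""
    cube = sigma_bar / (24.0 * pi**2 * F * rho0 * r_star_pc)
    return cube ** (1.0 / 3.0) * collapse_factor_inv(lam) - 1.0

# Milky-Way-like disk values (Sigma ~ 200-300 Msun/pc^2 within r_* ~ 7.5-10 kpc)
for sigma_bar, r_star in ((300.0, 7.5e3), (250.0, 1.0e4)):
    print(f"Sigma = {sigma_bar:5.0f} Msun/pc^2, r_* = {r_star/1e3:4.1f} kpc  ->  "
          f"z_c = {z_collapse(sigma_bar, r_star, lam=0.075):4.2f}")

# Minimum surface density Sigma_0 = 24 pi^2 F rho0 r_* Lambda^3 (i.e. z_c = 0)
r_star = 1.0e4
sigma_min = 24.0 * pi**2 * 0.05 * rho0 * r_star / collapse_factor_inv(0.075) ** 3
print(f"Sigma_0 (z_c = 0, r_* = 10 kpc, lambda = 0.075) ~ {sigma_min:5.1f} Msun/pc^2")
```

With these inputs the Milky-Way-like surface densities give $z_c\approx2$, consistent with the identification of normal galaxies with formation by $z\approx2$ made in §\[surfbriden\], while the minimum surface density ${\bar\Sigma}_0$ comes out more than an order of magnitude lower.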
To derive equation \[sigma\_z\], we have ignored infall due to the collapse of shells larger than $r_0$ after $z_c$. This is not a bad assumption for the dark matter; the larger shells virialize at larger radii, and thus the increase in mass occurs primarily at large radii. Baryons are dissipative, however, and can collapse until halted by their angular momentum. Including the effects of infall at late times would only exacerbate the surface brightness distinction between early-forming HSB’s and late-forming LSB’s, as HSB’s would have more time to accrete mass and further increase their surface brightness. Thus, late-time infall likely enhances rather than erases the correlation between formation time and surface brightness. Simulations of Evrard et al. (1994) justify our neglect of late time infall as they find that the rate at which galaxies accrete mass slows dramatically at late times.
It is revealing to note that we have implicitly made the assumption that there is a linear correspondence between a galaxy’s mean surface brightness and its luminosity (eq. \[sigma\_m\], assuming a single mass-to-light ratio). This is an artifact from associating a fixed physical scale $r_*$ with all galaxies; for example, if instead one were to consider the luminosity within some isophotal level, the linearity between surface brightness and luminosity would break down. There are also additional sources of scatter in the relationship which we have ignored: variations in angular momentum, variations in collapse time due to changes in the mean local overdensity, variations in the mass-to-light ratio with mass, and deviations from spherical collapse. Regardless of these complications, it is difficult to conceive of eq. \[sigma\_m\] not being a reasonable first-order approximation to the relationship between surface density (i.e. brightness) and mass (i.e. luminosity); certainly the trend is displayed when considering spirals and ellipticals. The relationship between surface brightness and luminosity has immediate implications for redshift surveys. If a redshift survey has a limiting surface brightness for spectroscopy, set either by design or by the limits of the spectrograph, then there will be an associated limiting [*luminosity*]{} to the galaxies that are observed. Decreasing the magnitude limit of a spectroscopic survey without decreasing the surface brightness limit will lead one to pick up galaxies of the same luminosity as were observed previously, only further away. This will inevitably lead to underestimates in the derived faint-end slope of the luminosity function, as well as variations among different surveys which have different, often unstated, surface brightness limits.
Angular Momentum and Profiles of LSBs {#profiles}
-------------------------------------
In addition to suggesting that galaxies that form late are more likely to be LSB’s, Equation \[sigma\_z\] also implies that LSBs could be galaxies whose baryons have not collapsed much further beyond the non-baryonic dark halo. Galaxies whose baryons have collapsed very little must have either acquired large amounts of angular momentum during their formation, or have been inefficient at transporting angular momentum outwards during their dissipative collapse. Both these traits can be associated with galaxies that form at lower redshifts.
First, galaxies that formed late are unlikely to have extremely low angular momenta. Analytic calculations and numerical simulations have shown that galaxies acquire angular momentum through tidal torquing. While systems that form very early, such as globular clusters and (presumably) most ellipticals, may have collapsed so quickly that they had little time to acquire angular momentum, late forming galaxies have had ample time to be torqued by nearby galaxies and by the shear in the global gravitational field. There have been suggestions of this trend in numerical work, but it has not yet been well quantified, particularly for the low amplitude peaks that are associated with LSBs. Eisenstein & Loeb (1994, private communication) find that, on a given mass scale, the value of $\lambda$ is anti-correlated with peak height, leading to a correlation between angular momentum and collapse times (see eq. \[sigma\_delta\]). They suggest that the relation arises because the separate axes of an object with a small overdensity collapse at very different times, giving a larger quadrupole moment and making the object more susceptible to external torques.
Second, the trend from ellipticals to spirals suggest that early forming galaxies have been particularly efficient at shedding angular momentum. While ellipticals appear to have collapsed dramatically, they have very low angular momenta, suggesting that large amounts were transported outwards during collapse. Spiral disks, which are in general younger systems than ellipticals, have large angular momenta, suggesting that the dramatic angular momentum transport that was operating during the epoch of elliptical formation was much less spectacular during the formation of spiral disks. This trend between angular momentum transport and formation epoch may have several different origins. It could reflect changes in the galaxy environment with time (e.g. higher external pressure, more gravitational shear, evolution in the triaxiality of halos) or it could reflect processes within the galaxy halo that depend upon the duration of collapse time (e.g. feedback from star formation, interactions between the halo and baryons). However, regardless of the origins of the trend, it can be extrapolated to the present day to suggest that any galaxies that are currently forming are more likely to have inefficient angular momentum transport. If this were not true, then one might expect a large population of very young, blue elliptical galaxies to be forming today. Except for the very smallest dwarf galaxies ($M_B > -14$), the young blue galaxies that are observed tend to be irregular galaxies and be supported by rotation (Lo, Sargent, & Freeman 1993, Gallagher & Hunter 1984, and references therein). We note that classical ellipticals are exempted from much of the discussion in this paper. Because they have obviously not conserved angular momentum during their collapse, they do not obey equation \[collapse\] or any equation that follows from it.
When angular momentum transport is inefficient, the distribution of angular momentum per unit mass is likely to be constant during collapse, which leads to the formation of exponential disks (Gunn 1982). Thus, if LSBs formed late, as suggested by equation \[sigma\_z\] and observed stellar populations (Knezek 1993, McGaugh 1994, McGaugh & Bothun 1994), then they should be disk systems with exponential profiles. For non-dwarf LSBs, exponential profiles are an excellent fit to the stellar distribution in both infrared and visible bands (Davies, Phillips, & Disney 1990, McGaugh & Bothun 1994, James 1994, McGaugh et al 1995, Dalcanton 1995). Alternatively, this argument can be inverted and the ubiquity of exponential disks in LSBs can be used to argue for inefficient angular momentum transport and large values of $\lambda$.
With this view of LSB’s, one can interpret LSB’s as being an extension of the Hubble sequence. At one end of the sequence are ellipticals, consisting entirely of highly collapsed, low angular momentum systems with short formation timescales. In the middle of the sequence are spiral galaxies, with an increasing fraction of high angular momentum disk stars, an increasing timescale for star formation, and lower masses. LSB’s could be the obvious next step in the sequence, with even longer star formation times, higher angular momenta, and smaller masses and bulges. In the absence of strong arguments for why disks should have stopped forming past the Sc end of the Hubble sequence, and in the undeniable presence of rapidly rotating low surface brightness disks (Schombert et al 1992, Sprayberry et al 1995), it is difficult to argue that the Hubble sequence truly ends at its standard terminus.
Correlations of LSBs {#correlations}
--------------------
If LSBs formed at late times from low amplitude fluctuations, then the LSBs should be less correlated than HSBs, which formed earlier from higher amplitude fluctuations. Thus, LSBs extend the morphology-density relationship seen within ellipticals and spirals: high overdensities imply early formation and high correlation, which in turn implies high surface brightness. Recasting equation \[sigma\_z\] in terms of $\delta = \delta\rho/\rho$, the initial overdensity from which the galaxy collapsed,
$$\label{sigma_delta}
{\bar\Sigma} \propto F\, r_* \, \Lambda^3(\lambda) \,\delta^3.$$
Here we assume for simplicity $\Omega = 1$, and thus $\delta \propto (1+z)^{-1}$. In a Gaussian theory, the probability of finding two peaks of amplitudes between $\nu\sigma$ and $(\nu + \epsilon)\sigma$ within a sphere of radius $r$ is:
$$P_2 = {1 \over 2 \pi} {\epsilon^2 \sigma^2 \over
\sqrt{\xi(0)^2 - \xi(r)^2}} \exp\left[-{\nu^2 \sigma^2 \over
\xi(0)+ \xi(r)}\right],$$
where $\epsilon << \nu$ and $\sigma^2$ is the variance of the Gaussian fluctuations (Kaiser 1984). Recall $\xi(0) = \sigma^2$ and that this assumption of Gaussian fluctuations ignores the non-linear evolution of the density field. As the probability of finding a single peak within the sphere, $P_1$ is $(\epsilon/\sqrt{2\pi}\sigma) \exp(-\nu^2/2)$, the correlation of peaks of amplitude $\nu$ is
$$\xi_\nu(r) = {P_2 \over P_1^2} -1 = {\xi(0)^2
\over \sqrt{\xi(0)^2 - \xi(r)^2}} \exp\left[{\nu^2 \xi(r) \over \xi(0) + \xi(r)
}\right] - 1$$
At large separation, where $\xi(r) << \xi(0)$, then
$$\label{correlation}
\xi_\nu(r) = {\nu^2 \over \sigma^2} \xi(r)$$
Thus, the proportionality between fluctuation amplitude and correlation strength is expected not only for the high amplitude peaks that form clusters (Kaiser 1984), but for all collapsed objects. Equations \[correlation\] and \[sigma\_delta\] predict that LSBs are less correlated than HSBs, consistent with the Mo, McGaugh, & Bothun (1994) analysis of observed LSB and HSB samples. Further evidence of this reduced correlation may be manifested in weak correlation of faint galaxies (Koo & Szalay 1984, Stevenson et al. 1985, Efstathiou et al. 1991, Pritchett & Infante 1992, Bernstein et al. 1993), given that the deep observations are more sensitive to LSBs than local surveys (Phillips, Davies, & Disney 1990, McGaugh 1994, Ferguson & McGaugh 1995).
The relationship between ${\bar\Sigma}$ and $\delta$ given in equation \[sigma\_delta\] also explains why differences in the correlation function of LSB’s and HSB’s have only recently become apparent with the development of new samples of truly low-surface brightness galaxies. Because surface density is a strong power of the initial overdensity, only a sample with a large range of surface densities would manifest properties that depend on a weaker power of $\delta$. The amplitude of the correlation function traced by galaxies is proportional to $\delta^2$ (eq. \[correlation\]), and thus proportional to ${\bar\Sigma}^{2/3}/\Lambda^{2}(\lambda)$. This suggests that to detect a 50% difference between the correlation amplitudes for HSB’s and for LSB’s, as was seen in Mo, McGaugh, & Bothun (1994), one needs the LSB sample to be cleanly separated in surface brightness from the HSB sample by at least one magnitude per square arcsecond, and possibly more if low surface brightness galaxies have smaller values of the collapse factor $\Lambda(\lambda)$. The required range of surface brightness did not exist in earlier work comparing the CfA galaxies to galaxies drawn from the UGC catalog (Thuan, Alimi, & Gott 1991), nor in the work of Bothun et al. (1986) where the larger range in surface brightness was swamped by large uncertainties in the measurement of the surface brightnesses. Only by pressing surveys for LSB’s to the lowest possible surface brightnesses would one hope to uncover a population of galaxies encroaching upon the voids.
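Both the large-separation limit and the surface-brightness separation just described can be checked with a few lines of Python. The sketch below is illustrative only: $\nu$ is measured in units of $\sigma$, $\xi(0)=\sigma^2$ is set to unity as an arbitrary normalization, the factor-of-two difference in correlation amplitude is the value quoted earlier for the Mo, McGaugh, & Bothun samples, and any difference in $\Lambda(\lambda)$ between the two samples is ignored.

```python
from math import sqrt, exp, log10

def xi_peak(nu, xi_r, xi0=1.0):
    """Exact peak correlation from the expression above, with xi(0) = sigma^2 = xi0."""
    return xi0**2 / sqrt(xi0**2 - xi_r**2) * exp(nu**2 * xi_r / (xi0 + xi_r)) - 1.0

# Large-separation limit: xi_nu(r) -> (nu^2 / sigma^2) xi(r) as xi(r)/xi(0) -> 0
nu = 2.0
for xi_r in (0.1, 0.03, 0.01):
    print(f"xi(r) = {xi_r:5.3f}:  exact = {xi_peak(nu, xi_r):6.4f}   "
          f"limit nu^2 xi(r) = {nu**2 * xi_r:6.4f}")

# Surface-brightness separation implied by a factor-of-two difference in correlation
# amplitude, using xi proportional to Sigma^(2/3) at fixed Lambda(lambda):
xi_ratio = 2.0
sigma_ratio = xi_ratio ** 1.5               # Sigma_HSB / Sigma_LSB
delta_mu = 2.5 * log10(sigma_ratio)         # mag / arcsec^2
print(f"Sigma ratio = {sigma_ratio:4.2f},  delta mu = {delta_mu:4.2f} mag/arcsec^2")
```

A factor of two in $\xi$ corresponds to ${\bar\Sigma}_{\rm HSB}/{\bar\Sigma}_{\rm LSB}=2^{3/2}\approx2.8$ and hence $\Delta\mu\approx1.1$ mag arcsec$^{-2}$, consistent with the requirement of a clean separation of at least one magnitude per square arcsecond.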
The Surface Brightness Distribution of Galaxies {#surfbriden}
===============================================
With the formulas developed above, we are now in a position to calculate the number density of galaxies as a function of surface density and redshift. We will use the Press-Schechter formalism, which uses linear theory to calculate the number density of regions of initial radius $r_0$ whose extrapolated linear overdensities are sufficiently large that the region has in fact undergone a non-linear collapse to form a virialized object at redshift $z_c$ (Press & Schechter 1974). The formalism assumes that the initial fluctuations are Gaussian and that the overdense regions undergo a spherical top-hat collapse which stops when the systems virialize at a radius of half the radius at maximum expansion (see Peacock & Heavens 1989, Bond et al. 1991, & Bower et al 1991 for discussions of this formalism). With these assumptions, the number density of objects with initial radius $r_0$ that have just collapsed at $z_c$ is,
$$\label{press-schechter}
n_{PS}(r_0,z_c) = -\left({2\over \pi}\right)^{1/2} \,
\left[{{\delta_c(1+z_c)}\over{\Delta^2(r_0)}} \,
{{\partial\Delta(r_0)}\over{\partial r_0}} \,
e^{-{ {\delta_c^2 (1+z_c)^2} \over
{2\Delta^2(r_0)}}} \right]
\, \frac{\rho_0}{M(r_0)},$$
where $M(r_0)$ is the mass within an initial radius $r_0$ and $\Delta(r_0)$ is the variance in density within shells of radius $r_0$ for a power spectrum of fluctuations $P(k)$, defined as
$$\Delta^2(r_0) = \int^\infty_0 4\pi k^2 \, dk P(k) W^2(kr_0)$$
where $W(x) = 3(\sin{x} - x\cos{x}) / x^3$ and $\delta_c$ is the extrapolated linear overdensity that the clump would have had at $z_c$ if it had not collapsed, taken to be 1.68 to agree with numerical simulations. For a CDM power-spectrum, we use
$$P(k) = 1.94\times10^4 \, b^{-2} \,
k(1+6.8k + 72k^{3/2} + 16k^2)^{-2} \Mpc^3,$$
for $H_0=50$ km/s/Mpc, taken from Davis et al. (1985). The bias parameter, $b$ is defined to be the ratio between the variances of the galaxy and the mass fluctuations within 16 Mpc radius spheres. This particular approximation is accurate to 10% for scales between $0.05\Mpc$ and $40\Mpc$, and is too high on small scales.
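For concreteness, the Python sketch below (a numerical illustration only, not part of the calculation presented here) evaluates $\Delta(r_0)$ from the fitting formula above and the Press–Schechter density of equation (\[press-schechter\]). The bias $b=1$, the sample radii, the collapse redshift $z_c=2$, and the value $\rho_0\simeq6.9\times10^{10}\,M_\odot\,{\rm Mpc^{-3}}$ (the $\Omega=1$ mean density for $H_0=50$ km s$^{-1}$ Mpc$^{-1}$) are assumptions made for the example.

```python
import numpy as np

b = 1.0  # bias parameter (assumed)

def P(k):
    """CDM power spectrum fit quoted above (H0 = 50 km/s/Mpc); k in Mpc^-1, P in Mpc^3."""
    return 1.94e4 / b**2 * k * (1.0 + 6.8 * k + 72.0 * k**1.5 + 16.0 * k**2) ** -2

def W(x):
    """Top-hat window function W(x) = 3 (sin x - x cos x) / x^3."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def Delta(r0):
    """rms mass fluctuation Delta(r0) within a sphere of comoving radius r0 (Mpc)."""
    k = np.logspace(-4, 2, 4000)                               # Mpc^-1
    integrand = 4.0 * np.pi * k**2 * P(k) * W(k * r0) ** 2
    var = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))  # trapezoid rule
    return np.sqrt(var)

# Sanity check: with b = 1 the rms mass fluctuation in 16 Mpc spheres should be of order unity.
print("Delta(16 Mpc) =", Delta(16.0))

def n_PS(r0, zc, delta_c=1.68, rho0=6.9e10):
    """Press-Schechter number density (eq. [press-schechter]) per unit r0,
    in Mpc^-3 per Mpc; rho0 in Msun/Mpc^3 (Omega = 1, H0 = 50 km/s/Mpc assumed)."""
    D  = Delta(r0)
    dD = (Delta(1.01 * r0) - Delta(0.99 * r0)) / (0.02 * r0)   # numerical dDelta/dr0 (< 0)
    M  = 4.0 * np.pi / 3.0 * rho0 * r0**3                      # mass inside the comoving shell r0
    nu = delta_c * (1.0 + zc) / D
    return -np.sqrt(2.0 / np.pi) * (nu / D) * dD * np.exp(-0.5 * nu**2) * rho0 / M

# Example: shells of comoving radius 0.5 and 1 Mpc that have just collapsed at z_c = 2
for r0 in (0.5, 1.0):
    print(f"r0 = {r0} Mpc:  n_PS(r0, z_c=2) = {n_PS(r0, 2.0):.3e}  per Mpc^3 per Mpc")
```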
The collapsed objects traced by equation \[press-schechter\] never stop increasing their mass; they continue to accrete matter from progressively larger shells which continue to collapse around and merge into the object. Therefore, something that we might identify as a single object is associated with a different mass and different $r_0$, depending upon the redshift at which we choose to identify it. If we associate these “collapsed objects” with the more prosaic term “galaxy”, we immediately see how difficult it is to define exactly when a galaxy has formed. In the absence of any physics beyond gravity, there is no particular scale that naturally selects the criteria for labelling a galaxy.
As motivated by our discussion of surface brightness above, we will chose a radius criterion for deciding when a galaxy forms; we will say that a galaxy has formed when the baryons originating in some particular shell collapse to a final size $r_*$. For a given surface density at the time of formation, the choice of $r_*$ fully specifies both the redshift at which the galaxy formed (eq. \[zc\]), and the initial size of the shell $r_0$ from which the galaxy formed (i.e. the shell whose baryons collapsed to size $r_*$ at $z$) (eq. \[r0\_sigma\]). Equation \[r0\_rH\] can be used with equation \[press-schechter\] to calculate the number density of galaxies that have already collapsed by redshift $z$ from an initial radius $r_0$ to a final radius $r_*$, with spin parameter $\lambda$:
$$n_r(r_0,\lambda|z) = n_{PS}(r_0,z_c(r_0,\lambda))
\, p(\lambda|z_c(r_0,\lambda))
\, \left|\frac{\partial z_c}{\partial r_0}\right|
\, \Theta(z_c(r_0,\lambda)-z),$$
where $p(\lambda|z)$ is the probability that a galaxy forming at redshift $z$ has a spin parameter $\lambda$, and where $\Theta(x)=1$ if $x>0$, and equals zero otherwise. For the sake of notational simplicity, we do not make the dependence on $r_*$ explicit.
Transferring variables from $r_0$ to ${\bar\Sigma}$ (eq. \[r0\_sigma\]), and integrating over $\lambda$ gives the total number density of galaxies that have formed by $z$ with a given surface density ${\bar\Sigma}$:
$$\label{n_sigma_z}
n({\bar\Sigma}|z) = \int^\infty_0
n_r\left(r_0({\bar\Sigma}),
\lambda |
z<z_c\left(r_0({\bar\Sigma}),\lambda\right)\right)
\left|{ {\partial r_0} \over {\partial {\bar\Sigma}}}\right|
\, d\lambda,$$
where $z_c\left(r_0({\bar\Sigma}),\lambda\right)$ reduces to equation \[zc\].
Analytical calculations and numerical simulations find that large protogalaxies typically form with values of $\lambda\sim0.05\pm0.05$ (Peebles 1969, Barnes & Efstathiou 1987, Warren et al. 1992, Steinmetz & Bartelmann 1994, Eisenstein & Loeb 1994). Instead of choosing one particular, highly uncertain model for $p(\lambda|z)$, we find it more illustrative to assume single fixed values for $\lambda$: $p(\lambda|z)=\delta(\lambda-\lambda_0)$. In §\[profiles\] we argued that LSBs can be treated as late-forming, high angular momentum extensions to the Hubble sequence, and as such we should consider values of $\lambda$ that are appropriate for the end of the Hubble sequence and beyond. Following an argument given by Faber (1982), the measured fraction of baryonic mass within the optical radius ($\equiv X$) can be used to estimate the factor by which the baryons have collapsed. For collapse within a fixed, non-dissipative, isothermal halo, the baryonic collapse factor is
$$\Lambda = \left[\frac{X}{1-X}\right] \, \left[\frac{1-F}{F}\right]$$
For the values of $X$ given in Faber’s Table 3 for Sa, Sc, and Irr galaxies[^3], the collapse factors are roughly 100 for the Sa’s, 10-15 for the Sc’s, and upper limits of 7-11 for the Irr’s, which correspond to $\lambda=0.02,\,0.06-0.09,\,>(0.09-0.12)$, assuming $F=0.05$. Based on this, we examine cases where $\lambda=0.075,0.15$, which are the 90th and 99th percentiles of the distribution of $\lambda$ found numerically by Eisenstein & Loeb (1994) for $>2.5\sigma$ peaks with $M=10^{12}\msun$. Note that types Sc and later make up 30% of the number density of galaxies with $L/L_*>0.1$ (Marzke et al. 1994), a larger fraction than would be implied by the distribution of $\lambda$ in Eisenstein & Loeb (1994). This suggests that numerical simulations may either underestimate the fraction of high-spin halos or that baryons acquire more specific angular momentum during their collapse than does the dark matter. It could also be taken as evidence that galaxies with types later than Sc form smaller overdensities than were assumed in Eisenstein & Loeb (1994), and thus form in greater numbers and with larger values of $\lambda$.
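The correspondence between collapse factor and spin parameter quoted above can be reproduced by numerically inverting equation (\[collapse\]). The short sketch below is illustrative only; it assumes $F=0.05$ and uses a simple bisection with arbitrary bracketing values.

```python
from math import sqrt

def collapse_factor(lam, F=0.05):
    """Lambda(lambda) = r_H / r_* from eq. [collapse]."""
    x = F / (2.0 * lam)
    return 1.0 / (lam * (sqrt(x * x + 2.0) - x))

def spin_from_collapse_factor(target, F=0.05):
    """Invert Lambda(lambda) by bisection; Lambda decreases monotonically with lambda."""
    lo, hi = 1.0e-4, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if collapse_factor(mid, F) > target:
            lo = mid    # collapse factor too large -> need a larger spin parameter
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Collapse factors of roughly 100 (Sa), 10-15 (Sc), and 7-11 (Irr) quoted in the text
for Lam in (100.0, 15.0, 10.0, 7.0):
    print(f"Lambda = {Lam:6.1f}  ->  lambda ~ {spin_from_collapse_factor(Lam):5.3f}")
```

This recovers spin parameters of a few hundredths for the largest collapse factors and $\lambda\approx0.06-0.12$ for $\Lambda\approx7-15$, in reasonable agreement with the values listed above.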
The resulting distributions are plotted in the first two columns of Figures 1 & 2 for $b=1$ & $2.5$, $z=0-5$ (light to dark) with $\lambda=0.03,0.075,0.15$ from the top row to the bottom. To interpret the distributions, it is necessary to first establish some tie with “normal” galaxies (by which we mean high surface brightness, nearby, cataloged galaxies with $L/L_*>0.1$) through their redshift of formation, angular momenta, surface densities, and number densities. First, we choose to associate normal galaxies with formation times of $z=2$ or earlier. The lookback time to $z=2$ is $7-8h_{75}^{-1}\Gyr$ for $\Omega=0.2-1$ and $h_{75}=H_0/75\hnot$. In the Milky Way, the ages of F and G dwarfs show that a significant population of stars in the disk were formed over $8\Gyr$ ago from high metallicity gas, suggesting that the disk of the galaxy had already settled into place to some degree by $z=2$ (Barry 1988, Carlberg et al. 1985, Twarog 1980), bolstering our identification of this epoch as being the one by which the mass of galaxy disks had been assembled. Second, we chose $\lambda=0.075$ to be the fiducial case for identifying “normal” galaxies because it roughly demarks the maximum spin angular momentum of the most numerous classes of galaxies (Sd and earlier). Galaxies that form with smaller values of $\lambda$ will all form with larger surface densities, and thus a galaxy that forms with $\lambda=0.075$ can be used to define the limit where normal surface brightnesses end, and low surface brightnesses begin. Furthermore, by choosing a value of $\lambda$ that corresponds to galaxies with very small spheroidal components, we hopefully avoid the need to determine the fraction of baryons that wind up in the disk rather than the bulge.
We use the Milky Way to estimate the surface density associated with normal galaxies. If the baryonic surface density in the solar neighborhood is $75\surfden$, the mean baryonic surface density of the Milky Way is roughly $200-300\surfden$ for $r_*=7.5-10\kpc$, including a 20% correction for the mass in the bulge. (For a disk mass-to-light ratio of 5, this gives the correct Freeman disk central surface brightness.) In Figures 1 & 2 we have chosen $r_*$ such that for $\lambda=0.075$ the integrated number density of galaxies that have collapsed by $z=2$ is roughly the total number density of normal galaxies measured today (corresponding to the horizontal dashed and dotted lines in the second columns of Figures 1 & 2). Note that this choice of $r_*$ automatically gives surface densities for the galaxies forming at $z=2$ that are in good agreement with the Milky Way value. Galaxies that form before $z=2$ with larger surface densities are only a small fraction of the number density at $z=2$; 75-99% of the galaxies that form by $z=2$ (for $b=1-2.5$) have collapsed between $z=2$ and $z=3$.
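The Milky Way estimate can be reproduced with a simple exponential-disk model. The sketch below assumes a disk scale length of 3.5 kpc and a solar radius of 8.5 kpc (neither value is specified in the text, so they should be read as assumptions), normalizes to the $75\surfden$ local value, and applies the 20% bulge correction.

```python
import numpy as np

# Mean baryonic surface density of a Milky-Way-like exponential disk.
# Assumed (not from the text): scale length h = 3.5 kpc, solar radius R0 = 8.5 kpc.
h, R0 = 3.5e3, 8.5e3                     # [pc]
sigma_local = 75.0                       # local baryonic surface density [Msun/pc^2]
sigma_0 = sigma_local * np.exp(R0 / h)   # implied central surface density

def mean_sigma(r_star):
    """Mean baryonic surface density within r_star [pc], with 20% bulge correction."""
    x = r_star / h
    m_disk = 2 * np.pi * sigma_0 * h**2 * (1 - (1 + x) * np.exp(-x))
    return 1.2 * m_disk / (np.pi * r_star**2)

for r in (7.5e3, 10.0e3):
    print(f"r_* = {r/1e3:.1f} kpc : <Sigma> ~ {mean_sigma(r):.0f} Msun/pc^2")

# Gives ~280 and ~195 Msun/pc^2, consistent with the 200-300 Msun/pc^2 quoted above.
```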
The particular form of $n({\bar\Sigma}|z)$ seen in Figures 1 & 2 arises because, first, we have assumed that the surface density of a galaxy does not change after it forms and, second, galaxies of a given surface density all form at the same redshift. With these two assumptions, the only change in the distribution of surface densities with redshift is the creation of new galaxies at increasingly lower surface brightnesses (eq. \[sigma\_z\]). Because low surface density galaxies form from low amplitude peaks, which are more common than the high amplitude peaks which are thought to form normal galaxies (Eq. \[sigma\_delta\]), there is a dramatic increase in the number density of galaxies with decreasing surface density. Depending on the choice of bias, there are $10-100$ times more young low surface density galaxies than normal galaxies, assuming that all galaxies form with $\lambda=0.075$.
If instead we had assumed that all galaxies form with a much smaller value of $\lambda$, say $\lambda=0.03$, then the total number density of galaxies formed by $z=0$ would have been a factor of 10 below the observed number density of normal galaxies. Because galaxies with low angular momenta have very large collapse factors, they must have collapsed from very large shells ($r_0\gg r_*$). Such shells are rare and have long collapse times, causing the reduced number density seen in Figures \[bias1\] & \[bias2.5\].
This points to a hidden assumption in the calculation. By using the Press-Schechter formalism, we are expressly counting galaxies at the time when their halo collapses and virializes. Thus, there is an implicit assumption that the collapse of the baryons occurs simultaneously with the collapse of the halo. For the collapse of the baryons and the dark matter to be asynchronous, the baryons must collapse significantly faster than the free-fall time from the radius of maximum expansion ($=2\Lambda r_*$). Such large amounts of dissipation could only occur after gravity had organized the baryons into a dense enough system for radiative shocks and star formation to take place, in other words, after a gravitational collapse time; this suggests that asynchronous collapse of the baryons and halo is not a problem for the Press-Schechter system of accounting. On the other hand, a fragmentary collapse, where baryons clump during infall, could speed both dissipation and angular momentum loss; this may be the formation pathway for spheroids, perhaps leaving globular clusters as debris.
While from the first two columns of Figures 1 & 2 it is clear that galaxies of very low surface densities form in large numbers, it is ambiguous whether or not the mass (or luminosity) of the galaxies has any leverage when weighed against the mass in normal galaxies. The baryonic mass density, $\rho({\bar\Sigma}|z)$, may be calculated by multiplying $n({\bar\Sigma}|z)$ by the baryonic mass of the galaxies formed with surface density ${\bar\Sigma}$ (see equation \[sigma\_m\]). Integrating $\rho({\bar\Sigma}|z)$ to get the cumulative distribution of baryonic density, $\Gamma({\bar\Sigma}|z)$:
$$\begin{aligned}
\Gamma({\bar\Sigma}|z) &\equiv& \int^{\bar\Sigma}_0 \rho({\bar\Sigma}^\prime|z) / F\rho_c \, d{\bar \Sigma}^\prime \\
&=& \int^{\bar\Sigma}_0 n({\bar\Sigma}^\prime|z) \, M({\bar\Sigma}^\prime) / F\rho_c \, d{\bar \Sigma}^\prime.\end{aligned}$$
The resulting distributions for $\rho({\bar\Sigma}|z)$ and $\Gamma({\bar\Sigma}|z)$ are plotted in the last two columns of Figures 1 & 2. Note that the cumulative density $\Gamma({\bar\Sigma}|z)$ does not always integrate to 1 because $\rho({\bar\Sigma}|z)$ does not include mass from shells that collapse to radii outside of $r_*$ or that have not yet collapsed to $r_*$. For small $\Lambda$ (short collapse times), the cumulative density is much larger, as there has been ample time for most overdensities to collapse. The small normalization problem for low $\Lambda$ models reflects the inaccuracy of the approximation to the CDM power spectrum at small $r_0$.
One may read off the density contributed by galaxies in a particular range of surface density; the change in $\Gamma({\bar\Sigma}=\infty|z)$ between any $z_1$ and $z_2$ is the mass density in galaxies between the corresponding minimum surface densities at those redshifts, ${\bar\Sigma}_0(z_1)$ and ${\bar\Sigma}_0(z_2)$. Therefore, we may read off the mass density in low surface density galaxies as $\Gamma({\bar\Sigma}=\infty|z=0)-\Gamma({\bar\Sigma}=\infty|z=2)$, and compare it to the mass density in normal galaxies, $\Gamma({\bar\Sigma}=\infty|z=2)$. The mass density of galaxies with "sub-normal" surface density is comparable to or significantly greater than the mass density in normal galaxies. In particular, for models with high bias the fraction of the total mass density that is tied up in LSB's is dramatically large. This suggests that there can easily be as much mass tied up in a population of low surface brightness galaxies as there is in normal galaxies! Thus, LSBs have all the properties that a theorist could hope for: they are almost entirely unconstrained by observations although they are known to exist, [*and*]{} they may contain a large fraction of the mass of the universe.
To develop an appreciation for the impressively low surface brightness implied by Figures 1 & 2, consider the spread in the surface brightnesses of recently formed galaxies. If $350\surfden$ is the baryonic surface density for normal spirals, then Figures 1 & 2 show that galaxies formed between $z=2$ and the present may have surface densities that are several hundred times smaller than normal galaxies. Assuming a linear relationship between surface density and surface brightness, this corresponds to a $6\surfb$ range in surface brightness, implying a cosmologically significant population of galaxies with central surface brightnesses in $V$ of $27\surfb$! This range in surface brightness is being probed in a survey which should soon yield interesting measures of the density of LSB galaxies (Dalcanton 1995). Unfortunately, at much lower surface brightnesses the signal from LSBs may easily be swamped by fluctuations from very distant cosmologically dimmed clusters of galaxies (Dalcanton 1995); this will strongly limit the ability of observations to reveal galaxies with surface brightnesses much fainter than $V=27\surfb$.
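The conversion behind these numbers is just the usual logarithmic one; a minimal check, assuming surface brightness scales linearly with surface density:

```python
import numpy as np

# Magnitude difference corresponding to a given factor in surface density,
# assuming surface brightness traces surface density linearly.
for ratio in (100, 250, 400):
    print(f"factor {ratio:3d} in surface density -> {2.5 * np.log10(ratio):.1f} mag/arcsec^2")

# A factor of a few hundred corresponds to ~6 mag/arcsec^2, which is how the
# central surface brightnesses of ~27 mag/arcsec^2 in V quoted above arise.
```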
On the Effects of Pressure {#jeans}
--------------------------
In the preceding calculation of the number density of galaxies as a function of surface density and formation time, we made an implicit assumption that the baryons will collapse along with the dark matter halos. However, unlike the dark matter (presumably), a baryonic gas experiences pressure forces in addition to gravitational forces. For small masses, internal pressure may support the gas against collapse, in spite of the inward gravitational pull of the collapsed dark matter halo (Jeans 1929).
The question arises, then, of whether or not low-surface brightness galaxies (which we postulate to have low mass) are capable of collapsing to form stars at all. This question may be addressed by considering the detailed interaction between pressure, gravitational collapse, and the numbers of collapsed objects, effectively extending the Press-Schechter formalism to include the effects of pressure (Babul 1987, Shapiro et al 1994). However, this level of complication may be avoided by rephrasing the above question to ask: what is the range of surface densities for which the gas is bound to the dark matter halo? The gas will be bound if the gravitational binding energy is greater than its thermal energy. Assuming that the gas is fully ionized, it will be gravitationally bound to the dark halo if
$$\label{thermgrav}
\frac{m_p V_c^2}{2} > k T,$$
where $V_c$ is the circular velocity of the dark matter halo, $m_p$ is the proton mass, and $T$ is the temperature of the gas. The circular velocity is independent of radius or mass scale if the dark matter collapses to form an isothermal sphere, and can be expressed as
$$\label{Vc}
V_c^2 = \frac{(18\pi^2)^{1/3}}{2}\,(1+z_c)\,H_0^2\,r_0^2$$
(Narayan & White 1988). Using Eqns. \[r0\_sigma\]&\[sigma\_z\] to eliminate $z_c$ and $r_0$ in favor of ${\bar \Sigma}$ and $r_*$, we can express the condition for collapse as a baryonic surface density threshold:
$$\label{sigthresh}
{\bar \Sigma} > \frac{2 k T F \Lambda}{\pi m_p r_* G},$$
where $G$ is the gravitational constant. Note that the resulting threshold surface density has no dependence on $z_c$ or $r_0$. This reflects that the gravitational potential well of a dark matter halo depends only on its internal size scale and mass, which are fully determined by ${\bar \Sigma}$, $\Lambda$, and $r_*$.
The temperature of the gas in the final halo depends on its ability to cool during collapse. If the cooling time of the gas is shorter than the dynamical time of the system, then the gas cools to roughly $T\approx10^4\,{\rm K}$ at which point cooling becomes highly inefficient. Using Figure 3 of Blumenthal et al (1984), we can examine where forming LSBs are likely to lie with regard to the “cooling curve” – the locus on the temperature-density plane that delineates the region of parameter space where cooling is rapid compared to the dynamical time (Rees & Ostriker 1978). Blumenthal et al plot the equilibrium positions of collapsed, non-dissipative structures (i.e. dark matter halos) on this plane, as a function of the amplitude of the initial overdensity ($0.5\sigma$ - $3\sigma$). Even for the small amplitude overdensities which are the likely LSB precursors, the gas in halos with masses between roughly a few times $10^8\msun$ and $10^{12}\msun$ is capable of rapid cooling to $T\approx10^4\,{\rm K}$. Associating normal galaxies with halo masses of $>10^{10}\msun$, and taking a naive proportionality between mass and surface brightness (Eqn. \[sigma\_m\]), there is a factor of roughly 100 in surface density over which cooling is efficient. For a direct proportionality between surface density and surface brightness, this corresponds to a disk central surface brightness of 5 magnitudes below the Freeman value, or $\mu_0(V) = 26.5\surfb$. Therefore, over an enormous range of surface brightnesses, we may assume that the baryonic gas in the halo cools to $T\approx10^4\,{\rm K}$.
Substituting this temperature into Eqn. \[sigthresh\], we find a baryonic surface density threshold of
$$\label{sigthreshnum}
{\bar \Sigma} > 0.61 \msun/\pc^2 \, \left(\frac{\Lambda}{10}\right)
\, \left(\frac{F}{0.05}\right)
\, \left(\frac{10\kpc}{r_*}\right)
\, \left(\frac{T}{10^4\,{\rm K}}\right).$$
Comparing to Figures 1 & 2, this threshold lies below the lowest surface density expected to be collapsing today, and therefore none of the baryons in these galaxies should be pressure supported against collapse. There is a slight problem with self-consistency, however, in our assumption of a $10^4\,{\rm K}$ gas temperature throughout this range of surface density. The assumption of efficient cooling breaks down for surface densities that are roughly a factor of 100 below the surface density of normal galaxies. In §\[surfbriden\] we associated normal galaxies with surface densities of a few times $10^2\msun/\pc^2$, and thus the regime of efficient cooling breaks down at a few $\msun/\pc^2$. Thus, the likely cut-off in the distribution of galaxy surface densities is somewhat higher than given in Eqn. \[sigthreshnum\], about a factor of three higher than the smallest surface density found in Figures 1 & 2. Note, however, that this slight change in cut-off hardly changes the conclusions of the previous section; there are still enormous numbers of low surface density galaxies for every normal galaxy, as well as substantial masses tied up in these galaxies.
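The numerical coefficient in Eqn. \[sigthreshnum\] can be verified directly from Eqn. \[sigthresh\]. The following sketch does the arithmetic in SI units for the fiducial parameters quoted in the text; it is only a consistency check, not part of the model.

```python
import numpy as np

# Binding threshold of Eqn. [sigthresh]:  Sigma > 2 k T F Lambda / (pi m_p r_* G)
k_B  = 1.380649e-23     # J/K
m_p  = 1.67262e-27      # kg
G    = 6.674e-11        # m^3 kg^-1 s^-2
pc   = 3.0857e16        # m
Msun = 1.989e30         # kg

T, F, Lam, r_star = 1.0e4, 0.05, 10.0, 1.0e4 * pc    # fiducial values from the text

sigma = 2 * k_B * T * F * Lam / (np.pi * m_p * r_star * G)   # [kg/m^2]
print(f"Sigma_threshold ~ {sigma / (Msun / pc**2):.2f} Msun/pc^2")   # ~0.61
```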
The Role of Dwarf Galaxies {#dwarfs}
==========================
The discussion above has focused on large galaxies. The choice of $r_*\approx10\kpc$ has led us to consider only galaxies with scales that are typical of normal spiral and elliptical galaxies. However, there are also many smaller dwarf galaxies of both high and low surface brightnesses which exist at the present day. We now turn our discussion to these galaxies. Although galaxies exist with a continuum of sizes, we will use the term "dwarf galaxy" to refer to galaxies with sizes typical of the sub-$0.1L_*$ galaxies in the Local Group, effectively the Large Magellanic Cloud and smaller.
The formation and subsequent evolution of dwarf galaxies is necessarily more complicated than for large galaxies. First, star-formation in dwarf galaxies is more subject to interruptions due to supernovae, ionization, or ram-pressure stripping than their more massive counterparts (see Dekel & Silk 1986, DeYoung & Heckman 1994, Efstathiou 1992). Secondly, there are other mechanisms besides gravitational collapse which are capable of producing dwarf galaxies, for example, clumping in tidal debris during mergers (Barnes & Hernquist 1992, Mirabel, Dottori, & Lutz 1992, Mirabel, Lutz, & Maza 1991, Elmegreen, Kaufman, Thomasson 1993), and compression of the intergalactic medium (Silk, Wyse, & Shields 1987). Finally, if structure grows hierarchically, then many of the dwarfs that exist at early times merge together and are assimilated into larger galaxies by the present time. There is a wealth of papers that treat the formation and evolution of dwarf galaxies in more detail than we are capable of doing justice to here. Recognizing our limitation in treating only the gravitational physics of dwarf galaxies, we restrict ourselves to a general discussion of how dwarf galaxies fit into our scheme of early-forming HSB’s and late-forming LSB’s.
Because dwarf galaxies in principle collapse from smaller initial radii $r_0$ than do normal galaxies, they in general collapse at earlier times. Similarly to large galaxies, dwarfs that collapse from high-amplitude peaks will collapse at earlier times and to higher surface densities than those dwarfs which collapse from low-amplitude peaks. However, while we were somewhat justified in neglecting merging of large galaxies, by no means are we justified in doing so for dwarfs. Some large fraction of the dwarfs that exist at large redshifts will merge together and have disappeared by the present day. The high surface density dwarfs have the earliest formation times, the largest amplitude correlation function and thus the largest probability of being absorbed into the large galaxy population. Late-forming, weakly correlated, low surface density dwarfs have the smallest chance of being absorbed. This will tend to deplete the distribution of dwarf surface brightnesses at the high surface brightness end. Dense dwarfs are less easily disrupted than tenuous low surface density dwarfs, however, which may help counteract the depletion of high surface density dwarfs. Obviously even the simplest treatment of dwarf galaxies is complicated, and any quantitative discussion lies far outside the scope of this paper.
At first blush it appears that our scenario as presented is overruled by the presence of some old dwarf galaxies with low surface brightnesses in the Local Group (see Ferguson & Binggeli (1994) and references therein). However, by assuming the Press-Schechter distribution function, we are implicitly concerning ourselves with the average number density of collapsed galaxies, independent of environment. A much more detailed treatment of the conditional, environmentally dependent number density by Bower (1991) shows that galaxies which exist in groups today collapse earlier than do galaxies in less dense environments. While our treatment effectively assumes perfectly synchronized formation times for galaxies of a particular surface brightness, the Bower formalism shows how this co-evality breaks down when one considers the range of environments in which galaxies form. Therefore, we are not bothered by the presence of genuinely old low surface brightness dwarfs in the Local Group. Furthermore, even in our simple picture, dwarf galaxies which are three orders of magnitude lower surface brightness than their early forming high surface brightness counterparts can still form at $z=2$, a high enough redshift for their stellar populations to label them as “old” systems.
Relevance to Other Astrophysical Issues {#otherastro}
=======================================
We have postulated that hierarchical structure formation models naturally lead to a large population of low surface brightness galaxies. Such a pervasive population of LSB’s must manifest itself in many astrophysical contexts; we consider several of these here.
HI Surveys {#HI}
----------
It has often been considered a failing of hierarchical structure formation scenarios that deep HI surveys have failed to uncover a significant population of dwarf galaxies. The few uncataloged dwarfs that are discovered are preferentially found near bright galaxies (see van Gorkum 1993 for a recent review). There does not seem to be a large highly uncorrelated population of gas rich dwarfs.
However, in light of recent work showing a sharp cutoff in HI disks at column densities of $10^{19}\cm^{-2}$ (van Gorkum et al. 1993, Corbelli, Schneider, & Salpeter 1989), the paucity of HI dwarfs is not surprising. Work by Maloney (1993) and Corbelli & Salpeter (1993) convincingly demonstrates that ionization of the HI by the UV background accounts for the sharp cutoff in HI disks. As we have shown that low-mass galaxies tend to have low surface densities, these galaxies will be prone to having their hydrogen ionized, reducing their detectable HI masses well below their total hydrogen masses. Taking Maloney's (1993) scaling for the critical column density, $N_{cr}\propto V_c^{0.5} {\bar\Sigma}_H^{-0.6}$ (where $V_c\propto {\bar\Sigma}_0^{1/3}$ is the halo circular velocity, ${\bar\Sigma}_H \propto {\bar\Sigma}_0$ is the halo surface density, and ${\bar\Sigma}_0$ is the central HI surface density), a galaxy's hydrogen will be completely ionized for ${\bar\Sigma}_0 \lta 4 \times 10^{19} \cm^{-2}$. Field spirals have central HI column densities of roughly $10^{20} - 10^{21} \cm^{-2}$ (Cayatte et al. 1993), so galaxies with surface densities roughly a factor of ten below normal are likely to be highly ionized and thus have extremely low detectable HI masses. The observable HI mass can be expressed as:
$$M_{HI} = 2\pi {\bar\Sigma}_{0} \alpha^{2} \left[1 -
0.01 \left( { {10^{21}} \over {{\bar\Sigma}_{0}}} \right)^{1.43}
\left(5.605 + 1.43 \ln{\left({{\bar\Sigma}_{0}}
\over {10^{21}} \right)}\right) \right]$$
A galaxy with an exponential scale length of $\alpha = 5 \kpc$ and a central HI surface density of $10^{21} \cm^{-2}$ has an HI mass of $10^{9} \msun$ . Another galaxy with a central surface density of $10^{20} \cm^{-2}$ has an HI mass of $4 \times 10^{7} \msun$, two and a half times lower than expected based on its total hydrogen surface density and, more importantly, well below most limits of HI surveys. If the central HI surface density were to be dropped by another factor of two, the HI mass would fall by a factor of sixteen ($\sim 3 \times 10^6 \msun$).
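The three masses quoted above follow from the truncated-exponential expression for $M_{HI}$; the short check below makes the hydrogen-atom mass conversion explicit (helium and molecular gas are ignored, as in the estimate above).

```python
import numpy as np

# Observable HI mass of an exponential disk whose outer parts are ionized,
# using the expression above with alpha = 5 kpc.
m_H, Msun, kpc = 1.6726e-24, 1.989e33, 3.0857e21    # [g], [g], [cm]
alpha = 5.0 * kpc

def M_HI(N0):
    """Observable HI mass [Msun] for a central HI column density N0 [cm^-2]."""
    bracket = 1.0 - 0.01 * (1e21 / N0)**1.43 * (5.605 + 1.43 * np.log(N0 / 1e21))
    return 2 * np.pi * N0 * m_H * alpha**2 * bracket / Msun

for N0 in (1e21, 1e20, 5e19):
    print(f"N0 = {N0:.0e} cm^-2 : M_HI ~ {M_HI(N0):.1e} Msun")

# Prints ~1.2e9, ~4.7e7 and ~2.6e6 Msun, in line with the ~1e9, 4e7 and 3e6
# quoted above.
```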
Low surface density galaxies are therefore likely to suffer from strong biases against their detection in HI surveys, not just in optical surveys. A large population of LSB’s could easily have been overlooked by existing surveys. The dwarfs that have been detected in HI surveys must have higher surface densities in general, and thus are more likely to have collapsed earlier from larger overdensities. This would explain why these dwarfs are found to trace the bright galaxy population.
The absence of neutral gas in low surface density galaxies is likely to suppress the star formation in these galaxies, or at least to drive it through different channels than in the Milky Way. It is possible that star formation in low surface density galaxies takes place within small self-shielding clumps, embedded in the diffuse ionized background; the calculation of the critical hydrogen surface density (Maloney 1993) does not take strong clumping into account.
Finally, we note two mitigating circumstances that may improve the prospects for detecting HI in LSBs. First, if the star formation efficiency is depressed in LSB galaxies, they may have a higher gas fraction than normal galaxies, and thus larger column densities than one might expect from their surface brightness alone. Second, because LSBs are more likely to be young systems, they have had less time to convert gas into stars, also increasing their gas fraction. With a higher gas fraction, LSBs would have larger hydrogen masses and lower ionization fractions than one would derive from naively scaling the properties of spiral disks to LSB disks.
Lyman-$\alpha$ Absorbers
------------------------
Given that LSBs are gaseous and numerous, they must contribute to the Lyman-$\alpha$ forest. They are similar to minihalos in that they are gravitationally-confined systems that have collapsed from small overdensities. However they differ from minihalos in many important respects. LSBs are disklike systems supported by rotation, whereas minihalos are assumed to be spherical and supported against gravitational collapse only by their thermal pressure. Press & Rybicki (1993) have shown that the observed line widths of the Lyman-$\alpha$ forest are too large to be explained in a model where the minihalos are in thermal equilibrium with the UV background. They argue that the best explanation for the large line widths is not thermal processes but bulk motion of the gas within the cloud. While other workers have considered collapse of a spherical cloud as a source of these additional velocities, we believe that internal rotation is an equally plausible, well-motivated explanation. We will be addressing this idea in a later paper (Spergel & Dalcanton 1995).
Faint Blue Galaxies
-------------------
Because of their blue colors, weak correlations, and underrepresentation in catalogs of nearby galaxies, LSBs are a natural candidate for the "excess" faint blue galaxies seen in deep galaxy surveys (McGaugh 1994; see references within for exhaustive listings of the body of work on the faint blue galaxy excess). The late formation times suggested by observations and by the arguments in this paper are not in conflict with the possibility that LSBs make up the excess galaxies seen at moderate redshifts; while LSBs as they have been defined in this paper are formed later than spirals, there are sufficient numbers of them at the moderate redshifts ($z\lta0.7$) where the brighter of the faint blue galaxies are found.
We note that a constraint exists on the amount of “missing” LSBs needed to resolve the discrepancy in the number counts. Dalcanton (1993) found that the rest frame $B$ luminosity density at $z\approx0.4$ is $3-5$ times higher than the local luminosity density as measured in surveys of nearby galaxies. If this discrepancy is due entirely to underestimating the contribution of LSBs to the local luminosity density, then there must be more than a factor of two greater luminosity density in uncataloged LSBs than in cataloged high surface brightness galaxies. The results of the over-simplified model presented in §\[surfbriden\] do not rule out this possibility.
Tully-Fisher
------------
The Tully-Fisher relationship will change as galaxy samples are extended to lower surface brightness. The Tully-Fisher relation is an artifact of the limited range of surface brightnesses sampled in large galaxy catalogs. ${\bar\Sigma} \propto M/R^2$ and $V_c \propto
M^{1/2}/R^{1/2}$ imply: ${\bar\Sigma} M \sim V^4$. This expression reduces to the Tully-Fisher relation,
$$\label{tfeqn}
L \propto \frac{V^4}{(M/L) \, {\bar\Sigma}}$$
where
$$\begin{aligned}
M/L &=& \frac{M_{tot}(r<r_*)}{M_{baryons}} \times \frac{M_{baryons}}{M_{*}}
\times
\frac{M_{*}}{L} \\\end{aligned}$$
and where $M_{baryons}$ is the total baryonic mass of the galaxy, $M_*$ is the mass converted into stars, and $M_{total}$ is the total mass within the region of HI emission (i.e. within the maximum extent of the baryons). Taking the definition of the collapse factor in Eq. \[collapse\],
$$\begin{aligned}
\label{mtot_mb}
\frac{M_{tot}(r<r_*)}{M_{baryons}} &=& \frac{1}{F\Lambda}\end{aligned}$$
for an isothermal dark matter halo. Referring to Eq. \[sigma\_z\], note that Eq. \[mtot\_mb\] slightly reduces the dependence of Eq. \[tfeqn\] on ${\bar\Sigma}$.
The equations above imply that low surface brightness galaxies will in general follow the slope of the Tully-Fisher relationship, but may be offset from the track followed by normal spirals. The lower surface density suggests that LSB’s will tend to be overluminous when compared to normal spirals with the same size and circular velocity. However, a concomitant reduction in the star formation efficiency, as might be expected with decreasing surface density, would pull the luminosity of LSB’s downwards. Variations in the baryonic collapse factor will also contribute to variations in $M/L$; galaxies which have undergone very little collapse will have a smaller baryonic mass fraction (i.e. large $M_{tot}/M_{baryons}$), and thus larger mass-to-light ratios. The combination of lower surface density and reduced star formation efficiency may conspire to leave LSB’s on the same relationship followed by normal spirals. This would be a most fortunate coincidence for extending Tully-Fisher to larger distances. Recent work by Sprayberry et al (1995a) is beginning to shed light on these issues.
We expect much more scatter in the Tully-Fisher relation for LSB galaxies. Because of their reduced halo masses and low surface densities, the disks of LSBs will be much puffier than the disks of normal spirals, which will lead to much greater uncertainty in the inclination correction, especially for smaller galaxies. The characteristic disk scale height, $z_0$, is proportional to $\sigma_{vz} V_c / (4G{\bar\Sigma})$, where $\sigma_{vz}$ is the vertical velocity dispersion, $V_c$ is the circular velocity at infinity, and ${\bar\Sigma}$ is the surface density. Because $V_c^2 \propto M$ and $M \propto {\bar\Sigma}$,
$$z_0 \sim \frac{\sigma_{vz}}{{\bar\Sigma}^{1/2}},$$
suggesting that LSBs will have larger scale heights than normal galaxies with the same radial extent (modulo the uncertainty of dependence of vertical temperature on surface density). The resulting uncertainty in the inclination correction will be largest for dwarf LSBs which, while flattened, may hardly look disk-like at all.
Summary
=======
Hierarchical models of galaxy formation provide a qualitatively and quantitatively reasonable explanation for many of the global properties of $L_*$ galaxies. Analytical models and N-body simulations can correctly predict the number and luminosity of bright galaxies as well as their kinematics. However, these models have consistently had difficulty in matching the observed faint end of the luminosity function. We have shown in this paper that low mass galaxies tend to form naturally with low surface densities. Low surface density galaxies can be assumed to have low surface brightnesses as well, unless they also have particularly high star formation efficiencies, a situation we consider unlikely. As such, the faint galaxies predicted by hierarchical clustering models are likely to have very low surface brightnesses, possibly hundreds of times fainter than the surface brightnesses of normal spiral disks. The observed distribution of central surface brightnesses for spirals suggests that there are selection effects which have led to dramatic underestimates of the numbers of low surface brightness galaxies; possibly only one tenth of the galaxies with surface brightnesses of half of "normal" have been cataloged (Disney 1976, Allen & Shu 1979). We have argued that the undercounted galaxies are preferentially low mass, and therefore the failure of models to match observations of the faint end of the luminosity function should not be surprising; low mass galaxies are hidden not only by their low total luminosity, but by their low surface brightnesses as well. Furthermore, these galaxies should be as difficult to detect in the radio as they are in the optical. The gas in galaxies with extremely low surface densities may be easily ionized by the UV background, effectively making these galaxies invisible in HI.
In the scenario which we have developed, LSB's collapse slowly and at late times from small initial overdensities (Mo et al. 1994). We argue that galaxies formed from small overdensities are likely to have larger spin parameters $\lambda$ and thus smaller collapse factors, leading to lower baryonic surface densities. Because of the longer formation timescales, increased spin parameters, decreased correlations, and decreasing surface brightness, LSBs may be interpreted as a natural extension of the Hubble sequence, from Sc's to Sd's and beyond. However, one would expect LSBs to appear scruffier than the galaxies which define the bulk of the Hubble sequence. The decreased surface density implies that LSBs will tend to have thicker disks than normal galaxies, and thus a decreased baryonic density within the galaxy. This in turn reduces the efficiency of star formation as well as the likelihood that the galaxy becomes unstable to spiral structure. The low surface density and low overall mass also increase the likelihood that feedback from star formation through supernova-driven winds can affect the apparent morphology of LSB galaxies. All of these processes may lead to the diffuse, chaotic galaxies which are becoming associated with the extremes of galaxy surface brightness (Dalcanton 1995, Schombert et al. 1992).
Although LSBs are hardly impressive members of the galactic community when viewed as individuals, their cumulative properties are impressive. Both the number density and mass density of the LSB population are substantial, with LSBs possibly contributing as much (or more) mass to the universe as normal galaxies. (Although we have derived this result specifically assuming a CDM spectrum of initial fluctuations and $\Omega=1$, our qualitative results should be little changed by assuming a different cosmological model, as long as structure formation proceeds hierarchically.) Given this possibility, an accurate measure of the number density and mass density of the LSB population becomes an important goal for the coming years. While LSBs may be overlooked as individuals, they are potentially too important to be overlooked as a class.
**Acknowledgements**
We thank Daniel Eisenstein and Avi Loeb for providing the data used to generate the distribution of $\lambda$ shown in Eisenstein & Loeb (1994) and for discussions on the origins of spin angular momentum, and Arif Babul for his insights on the effects of pressure on galaxy formation. Jim Gunn is also warmly thanked for discussions which, although typically brief, are always enlightening and enjoyable. JJD and DNS were supported by NASA and by a Hubble Fellowship. Ron Kim and Teresa Shaw are warmly thanked for assistance with typing during the Month of Pain, as is Dan Wallach for the loan of the keyboard adapter.
References
==========
[Figure 1.]{}
The number and mass density of galaxies as a function of surface brightness and redshift. The lightest line in each frame corresponds to $z=0$, with each successively darker line corresponding to $z=1$, $z=2$, etc., up to $z=5$. Each row of plots corresponds to a single value of $\lambda$: 0.15 in the top row, 0.075 in the middle, and 0.03 in the bottom row. The leftmost column shows the number density of galaxies with surface brightness ${\bar\Sigma}$ in units of $\numden$. The second column is the cumulative distribution of the number density shown in the first column. The horizontal dashed lines are the integrated number density in normal galaxies with $L/L_*>0.1$ for all galaxies (long dash), for spirals only (short dash), and for spirals Sc and later (dotted line), taken from Marzke (1994). The third column is the baryonic mass density in galaxies of surface brightness ${\bar\Sigma}$, given in units of $1/F\rho_c$, the total baryonic mass density. The rightmost column is the integrated mass density. The $1-10\%$ error in normalization in the integrated mass density for $\lambda=0.15$ reflects the failure of the approximation to the CDM power spectrum at $r_0<50\kpc$. As discussed in the text, normal disk galaxies have ${\bar\Sigma}\approx250-400\surfden$. The value of $r_*$ was chosen to give the correct number density in galaxies formed before $z=2$ for $\lambda=0.075$, the spin parameter appropriate for spiral galaxies. \[bias1\]
Same as Figure \[bias1\], except that $b=2.5$ and $r_*=7.5\kpc$. \[bias2.5\]
[^1]: The exceptional giant “Malin-type” galaxies, while being fascinatingly odd, are unlikely to be the dominant type of LSB galaxy. Their large sizes imply that 2-dimensional surveys have extremely large accessible volumes for this particular type of object. The fact that so few have been found (for example, see Figure 3 of Schombert et al. (1992)) immediately suggests that their number density is small. If these objects are the result of large fluctuations in low density regions (as postulated by Hoffman, et al 1992), then they ought to be rare.
[^2]: Note that the case for causality is a bit ambiguous. The impression that close neighbors induce higher star formation rates could be an artifact of low surface density galaxies being less correlated than high surface density ones, with no dependence of the star formation efficiency on environment.
[^3]: drawn from Faber & Gallagher (1979), Thuan & Seitzer (1979), and Roberts (1969)
---
abstract: 'We study almost symmetric numerical semigroups and semigroup rings. We describe a characteristic property of the minimal free resolution of the semigroup ring of an almost symmetric numerical semigroup. For almost symmetric semigroups generated by $4$ elements we will give a structure theorem by using the "row-factorization matrices" introduced by Moscariello. As a result, we give a simpler proof of Komeda's structure theorem of pseudo-symmetric numerical semigroups generated by $4$ elements. Row-factorization matrices are also used to study shifted families of numerical semigroups.'
address:
- 'Jürgen Herzog, Fachbereich Mathematik, Universität Duisburg-Essen, Campus Essen, 45117 Essen, Germany'
- 'Kei-ichi Watanabe, Department of Mathematics, College of Humanities and Sciences, Nihon University, Setagaya-ku, Tokyo, 156-8550'
author:
- 'Jürgen Herzog and Kei-ichi Watanabe'
title:
- Almost symmetric numerical semigroups
---
Introduction {#introduction .unnumbered}
============
Numerical semigroups and semigroup rings are very important objects in the study of singularities of dimension $1$ (see §1 for the definitions). Let $H =\langle n_1,\ldots ,n_e\rangle$ be a numerical semigroup minimally generated by $\{n_1,\ldots,n_e\}$ and let $K[H] = K[ t^{n_1}, \ldots , t^{n_e}]$ be the semigroup ring of $H$, where $t$ is a variable and $K$ is any field. We can represent $K[H]$ as a quotient ring of a polynomial ring $S=K[x_1,\ldots , x_e]$ as $K[H] = S/I_{H}$, where $I_H$ is the kernel of the $K$-algebra homomorphism which maps $x_i\to t^{n_i}$. We call $I_H$ the defining ideal of $K[H]$. The ideal $I_H$ is a binomial ideal, whose binomials correspond to pairs of factorizations of elements of $H$.
Among the numerical semigroups, almost symmetric semigroups admit very interesting properties. They are a natural generalization of symmetric numerical semigroups. It was Kunz [@Ku] who observed that $H$ is symmetric if and only if $K[H]$ is Gorenstein. Almost symmetric semigroups are distinguished by the symmetry of their pseudo-Frobenius numbers, a fact which was discovered by Nari [@N]. The history of this class of numerical semigroups begins with the work of Barucci, Dobbs and Fontana [@BDF], and the influential paper of Barucci and Fröberg [@BF]. In [@BDF] pseudo-symmetric numerical semigroups appeared for the first time, while in [@BF] almost symmetric numerical semigroups were introduced. The pseudo-symmetric semigroups are just a special class of almost symmetric numerical semigroups, namely those of type 2; see Section \[sec:1\] for details. Actually, Barucci and Fröberg introduced more generally the so-called almost Gorenstein rings, which in the case of numerical semigroup rings lead to the concept of almost symmetric numerical semigroups. Later on, Goto, Takahashi and Taniguchi [@GTT] developed the theory of almost Gorenstein rings much further, and extended this concept to rings of any Krull dimension.
The aim of this paper is to analyze the structure of an almost symmetric numerical semigroup and its semigroup ring.
In the first part of this paper, we analyze the structure of an almost symmetric semigroup by using Apery sets and pseudo-Frobenius numbers. When our semigroup $H$ is generated by $3$ elements, a characterization of when $H$ is almost symmetric is known in terms of the relation matrix of $I_H$, see [@NNW]. So in this paper, our main objective is to understand the structure of almost symmetric semigroups generated by $4$ elements. For our approach, the RF-matrix $\RF(f)$ (row-factorization matrix) attached to a pseudo-Frobenius number $f$, as introduced by A. Moscariello in [@Mo], is of particular importance. It will be shown that if $H$ is almost symmetric, then $\RF(f)$ has "special rows", as described in Corollary \[0\_in\_row\]. This concept of special rows of RF-matrices plays an essential role in our studies of almost symmetric semigroups generated by $4$ elements.
One of the applications of RF-matrices will be the classification of pseudo-symmetric numerical semigroups generated by $4$ elements. This result was already found by Komeda [@Ko], but using RF-matrices the argument becomes much simpler. Also Moscariello [@Mo] proved that if $H$ is an almost symmetric numerical semigroup generated by $4$ elements, then the type of $H$ is at most $3$. We give a new proof of this by using the special rows of RF-matrices. We also show that in this case the defining relations of the semigroup ring of $H$ are given by RF-matrices. This is not the case for arbitrary numerical semigroups.
In the second part of this paper, we observe the very peculiar structure of the minimal free resolutions of $K[H]=S/I_H$ over $S$ when $H$ is almost symmetric. By this observation, we can see that if $e=4$ and $\type(H)=t$, then the defining ideal needs at least $3(t-1)$ generators; moreover, if $I_H$ is generated by exactly $3(t-1)$ elements, then the degree of each minimal generator of $I_H$ is of the form $f + n_i+n_j$, where $f$ is a pseudo-Frobenius number different from the Frobenius number of $H$. Furthermore, we show that if $e=4$ and $H$ is almost symmetric, then $I_H$ is minimally generated by either $6$ or $7$ elements, and in the latter case one has $n_1+n_4=n_2+n_3$, if we assume $n_1<n_2<n_3<n_4$.
In the last section we consider shifted families of numerical semigroups and study periodic properties of $H$ under this shifting operation when $e=4$. Namely, if $H=\langle n_1,\ldots ,n_4\rangle$, we put $H+m=\langle n_1+m,\ldots ,n_4+m\rangle$ and ask when $H+m$ is almost symmetric for infinitely many $m$. We prove that for any $H$, $H+m$ is almost symmetric of type $2$ for only finitely many $m$. We also classify those numerical semigroups $H$ for which $H+m$ is almost symmetric of type $3$ for infinitely many $m$.
Some readers will notice that there is a considerable overlap with K. Eto's paper [@E]. To explain this, let us briefly comment on the history of the paper. This work began in September 2015, when the second-named author visited Essen. So it took several years for this work to be fully ripe. In the meantime we gave reports in [@HW16], [@HW17] in which we announced partial results of this paper without proof. Also, on several occasions Eto and Watanabe discussed this kind of problem and the use of RF-matrices. Then Eto, using approaches different from ours, more quickly completed the new proof of Theorem \[type\_le3\] and the proofs of some structure theorems on almost symmetric numerical semigroups generated by 4 elements. Nevertheless we believe that our point of view, which also includes the concept of RF-relations, and our way of proving the theorems may be useful for the future study of numerical semigroups.
Basic concepts {#sec:1}
==============
In this section we fix notation and recall the basic definitions and concepts which will be used in this paper.
[*Pseudo-Frobenius numbers and Apery sets.*]{} A submonoid $H\subset \NN$ with $0\in H$ and such that $\NN \setminus H$ is finite is called a [*numerical semigroup*]{}. Any numerical semigroup $H$ induces a partial order on $\ZZ$, namely $a\leq_H b$ if and only if $b-a\in H$.
There exist finitely many positive integers $n_1,\ldots,n_e$ belonging to $H$ such that each $h\in H$ can be written as $h=\sum_{i=1}^e\al_in_i$ with non-negative integers $\al_i$. Such a presentation of $h$ is called a [*factorization*]{} of $h$, and the set $\{n_1,\ldots,n_e\}\subset H$ is called a [*set of generators*]{} of $H$. If $\{n_1,\ldots,n_e\}$ is a set of generators of $H$, then we write $H=\langle n_1,\ldots ,n_e\rangle$. The set of generators $\{n_1,\ldots,n_e\}$ is called a [*minimal set of generators*]{} of $H$, if none of the $n_i$ can be omitted to generate $H$. A minimal set of generators of $H$ is uniquely determined.
Now let $H=\langle n_1,\ldots ,n_e\rangle$ be a numerical semigroup. We assume that $n_1,\ldots ,n_e$ are minimal generators of $H$, that $\gcd(n_1,\ldots,n_e)=1$ and that $H\neq \NN$, unless otherwise stated.
The assumptions imply that the set $G(H)=\NN\setminus H$ of [*gaps*]{} is a finite non-empty set. Its cardinality will be denoted by $g(H)$. The largest gap is called the [*Frobenius number*]{} of $H$, and denoted $\Fr(H)$.
An element $f\in \ZZ\setminus H$ is called a [*pseudo-Frobenius*]{} number, if $f+n_i\in H$ for all $i$. Of course, the Frobenius number is a pseudo-Frobenius number as well and each pseudo-Frobenius number belongs to $G(H)$. The set of pseudo-Frobenius numbers will be denoted by $\PF(H)$.
We also set $\PF'(H)=\PF(H)\setminus\{\Fr(H)\}$. The cardinality of $\PF(H)$ is called the [*type*]{} of $H$, denoted $t(H)$. Note that for any $a\in \ZZ\setminus H$, there exists $f\in \PF(H)$ such that $f-a\in H$.
Let $a\in H$. Then we let $$\Ap(a, H) = \{ h\in H\;|\; h-a \not\in H\}.$$ This set is called the [*Apery set*]{} of $a$ in $H$. It is clear that $|\Ap(a,H)|=a$ and that $0$ and all $n_i$ belong to $\Ap(a,H)$. For every $a$ the largest element in $\Ap(a,H)$ is $a + \Fr(H)$.
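For experimenting with these notions on concrete examples, a brute-force enumeration is enough. The following Python sketch is only an illustration of the definitions (the semigroup $\langle 3,5,7\rangle$ below is a toy example, not one of the semigroups studied later); it lists the gaps, the Frobenius number, $\PF(H)$ and Apery sets.

```python
# Brute-force sketch: enumerate H = <n_1,...,n_e> up to a bound that is
# comfortably larger than the Frobenius number (true for the small examples
# used in this paper) and read off the gaps, Fr(H), PF(H) and Apery sets.
def semigroup_data(gens):
    bound = min(gens) * max(gens)
    in_H = [False] * (bound + 1)
    in_H[0] = True
    for h in range(1, bound + 1):
        in_H[h] = any(h >= g and in_H[h - g] for g in gens)

    def member(x):                      # membership test for any integer x
        return x > bound or (0 <= x and in_H[x])

    gaps = [x for x in range(1, bound + 1) if not in_H[x]]
    frob = max(gaps)
    pf = [f for f in gaps if all(member(f + g) for g in gens)]

    def apery(a):                       # Ap(a, H); its largest element is a + Fr(H)
        return sorted(h for h in range(a + frob + 1) if member(h) and not member(h - a))

    return gaps, frob, pf, apery

gaps, frob, pf, apery = semigroup_data((3, 5, 7))   # a small illustrative semigroup
print(gaps, frob, pf)    # [1, 2, 4] 4 [2, 4]  so t(H) = 2 and H is pseudo-symmetric
print(apery(3))          # [0, 5, 7]  indeed |Ap(3, H)| = 3
```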
[*Symmetric, pseudo-symmetric and almost symmetric numerical semigroups*]{}. For each $h\in H$, the element $\Fr(H)-h$ does not belong to $H$. Thus the assignment $h\mapsto \Fr(H)-h$ maps each element $h\in H$ with $h<\Fr(H)$ to a gap of $H$. If each gap of $H$ is of the form $\Fr(H)-h$, then $H$ is called [*symmetric*]{}. This is the case if and only if for each $a\in \ZZ$ one has: $a\in H$ if and only if $\Fr(H)-a\not\in H$. It follows that a numerical semigroup is symmetric if and only if $g(H)=|\{h\in H\mid h<\Fr(H)\}|$, equivalently if $2g(H)=\Fr(H)+1$. A symmetric semigroup is also characterized by the property that its type is $1$. Thus we see that a symmetric semigroup satisfies $2g(H)=\Fr(H)+t(H)$, while in general $2g(H)\geq \Fr(H)+t(H)$. If equality holds, then $H$ is called [*almost symmetric*]{}. The almost symmetric semigroups of type $2$ are called [*pseudo-symmetric*]{}. It is quite obvious that a numerical semigroup is pseudo-symmetric if and only if $\PF(H)= \{\Fr(H)/2, \Fr(H)\}$. From this one easily deduces that if $H$ is pseudo-symmetric, then $a\in H$ if and only if $\Fr(H)-a\not\in H$ and $a\neq \Fr(H)/2$.
Less obvious is the following nice result of Nari [@N] which provides a certain symmetry property of the pseudo-Frobenius numbers of $H$.
\[Nari\] Let $\PF(H)=\{f_1,f_2,\ldots,f_{t-1}, \Fr(H)\}$ with $f_1<f_2<\cdots <f_{t-1}$. Then $H$ is almost symmetric if and only if $$f_i+f_{t-i}=\Fr(H) \quad \text{for} \quad i=1,\ldots,t-1.$$
[*Numerical semigroup rings*]{}. Many of the properties of a numerical semigroup are reflected by algebraic properties of the associated semigroup ring. Let $H$ be a numerical semigroup, minimally generated by $n_1,\ldots,n_e$. We fix a field $K$. The semigroup ring $K[H]$ attached to $H$ is the $K$-subalgebra of the polynomial ring $K[t]$ which is generated by the monomials $t^{n_i}$. In other words, $K[H]=K[t^{n_1},\ldots,t^{n_e}]$. Note that $K[H]$ is a $1$-dimensional Cohen-Macaulay domain. The symmetry of $H$ has a nice algebraic counterpart, as shown by Kunz [@Ku]. He has shown that $H$ is symmetric if and only if $K[H]$ is a Gorenstein ring. Recall that a positively graded Cohen–Macaulay $K$-algebra $R$ of dimension $d$ with graded maximal ideal $\mm$ is Gorenstein if and only if $\dim_K\Ext_R^d(R/\mm, R)=1$. In general, the $K$-dimension of this finite dimensional $K$-vector space is called the CM-type (Cohen-Macaulay type) of $R$. Kunz's theorem follows from the fact that the type of $H$ coincides with the CM-type of $K[H]$.
If $a\in H$, then $K[H]/(t^a)$ is a $0$-dimensional $K$-algebra with $K$-basis $t^h+(t^a)$ with $h\in \Ap(a,H)$. The elements $t^{f+a}+(t^a)$ with $f\in PF(H)$ form a $K$-basis of the socle of $K[H]/(t^a)$. This shows that indeed the type of $H$ coincides with the CM-type of $K[H]$.
The canonical module $\omega_{K[H]}$ of $K[H]$ can be identified with the fractionary ideal of $K[H]$ generated by the elements $t^{-f}\in Q(K[H])$ with $f\in \PF(H)$. Consider the exact sequence of graded $K[H]$-modules $$0\to K[H]\to \omega_{K[H]}(-\Fr(H))\to C\to 0,$$ where $K[H]\to \omega_{K[H]}(-\Fr(H))$ is the $K[H]$-module homomorphism which sends $1$ to $t^{-\Fr(H)}$ and where $C$ is the cokernel of this map. One immediately verifies that $H$ is almost symmetric if and only if $\mm C=0$, where $\mm$ denotes the graded maximal ideal of $K[H]$. Motivated by this observation Goto et al [@GTT] call a Cohen–Macaulay local ring with canonical module $\omega_R$ [*almost Gorenstein*]{}, if there exists an exact sequence $$0\to R\to \omega_R\to C \to 0$$ with $C$ an Ulrich module. If $\dim C=0$, then $C$ is an Ulrich module if and only if $\mm C=0$. Thus it can be seen that $H$ is almost symmetric if and only if $K[H]$ is almost Gorenstein (in the graded sense).
In this paper we are interested in the defining relations of $K[H]$. Let $S=K[x_1,\ldots,x_e]$ be the polynomial ring over $K$ in the indeterminates $x_1,\ldots,x_e$. Let $\pi: S\to K[H]$ be the surjective $K$-algebra homomorphism with $\pi(x_i)=t^{n_i}$ for $i=1,\ldots,e$. We denote by $I_H$ the kernel of $\pi$. If we assign to each $x_i$ the degree $n_i$, then with respect to this grading, $I_H$ is a homogeneous ideal, generated by binomials. A binomial $\phi=\prod_{i=1}^ex_i^{\al_i}-\prod_{i=1}^ex_i^{\bl_i}$ belongs to $I_H$ if and only if $\sum_{i=1}^e\al_in_i=\sum_{i=1}^e\bl_in_i$. With respect to this grading $\deg \phi=\sum_{i=1}^e\al_in_i$.
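Consider, as a toy example, the numerical semigroup $H=\langle 3,5,7\rangle$: the relations $4\cdot 3=5+7$, $2\cdot 5=3+7$ and $2\cdot 7=3\cdot 3+5$ show that the binomials $x_1^4-x_2x_3$, $x_2^2-x_1x_3$ and $x_3^2-x_1^3x_2$ belong to $I_H$; with respect to the grading just introduced they have degrees $12$, $10$ and $14$.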
On unique factorization of elements of $H$ and Factorizations of $f+n_k$ for $f \in \PF(H)$ {#sec:2}
===========================================================================================
In this section we discuss unique factorization of elements of $H$ with respect to its minimal generators. Also, we review Komeda's argument on Apery sets.
The following lemma will be essential in §4 and §5.
\[Apery\] Let $a\in H$ and $h\in\Ap(a,H)$. Then the following holds:
1. If $h,h'\in H$ and if $h+h'\in \Ap(a,H)$, then $h,h'\in \Ap(a,H)$.
2. Assume $H$ is almost symmetric. If $h\in \Ap(a,H)$, then either $$(a+\Fr(H))-h\in \Ap(a,H)\quad \text{or} \quad h - a\in \PF'(H).$$ In the latter case, $(a+\Fr(H)) - h \in \PF(H)$.
\(i) is obvious. (ii) If $(a+\Fr(H))-h\not\in H$, there are $h'\in H$ and $f\in \PF(H)$ such that $f = (a+\Fr(H))-h + h'$. If $f=\Fr(H)$, then $h-a=h'\in H$, contradicting $h\in\Ap(a,H)$; hence $f\ne \Fr(H)$. Then by Lemma \[Nari\], $f' = \Fr(H)-f = \Fr(H) - [(a+\Fr(H))-h + h'] = (h-a) -h'\in \PF'(H)$. If $h'\neq 0$, then $h-a=f'+h'\in H$, because $f'$ is a pseudo-Frobenius number and $h'\in H$ is non-zero; this contradicts $h-a\not\in H$. Hence $h'=0$, so that $h-a=f' \in \PF'(H)$ and $(a+\Fr(H))-h=f\in \PF(H)$.
Let $H=\langle n_1,\ldots , n_e\rangle$ be a numerical semigroup minimally generated by $e$ elements. For every $i$, $1\le i\le e$, we define $\al_i$ to be the minimal positive integer such that $$\begin{aligned}
\label{minimal}
\al_i n_i = \sum_{j=1, j\ne i}^e \al_{ij} n_j.\end{aligned}$$ Note that $\al_{ij}$ is in general not uniquely determined. However the minimality of $\al_i$ implies
\[al\_iAp\] For all integers $i$ and $k$ with $1\le i,k\le e$ and $i\ne k$ one has $$(\al_i -1) n_i \in \Ap(n_k, H).$$
Suppose this is not the case. Then $(\al_i -1) n_i - n_k\in H$, and we will have an equation of type $\beta_i n_i = \sum_{j=1, j\ne i}^e \beta_{ij} n_j$ with integers $0<\beta_i< \al_i$ and $\beta_{ij}\geq 0$, contradicting the minimality of $\al_i$.
Let $H$ be a numerical semigroup minimally generated by $n_1,\ldots,n_e$. An element $h\in H$ is said to have UF (Unique Factorization), if $h$ admits only one factorization. Note that $h$ does not have UF if and only if $h\ge_H \deg(\phi)$ for some $\phi \in I_H$.
The set $I=\{h\in H\mid h \text{ does not have UF} \}$ is an ideal of $H$ and is equal to $\{ \deg(\phi) \mid \phi \in I_H \text{ a nonzero binomial}\}$. Observe that if $\phi\in I_H$ and $\deg(\phi)$ is a minimal generator of $I$, then $\phi$ is a minimal generator of $I_H$. But the converse is not true in general. Hence the number of minimal generators of $I$ is less than or equal to the number of minimal generators of $I_H$.
\[deg\_phi\] Let $\phi = m_1 - m_2$ be a minimal binomial generator of $I_H$. Then the following holds:
1. There exists $f \in \PF(H)$ and integers $i\ne j$ such that $\deg(\phi)\le_H f+ n_i+n_j$.
2. Let $i$ and $j$ such that $x_i|m_1$ and $x_j|m_2$. Then $\deg \phi= f + n_i + n_j $ for some $f\in \PF' (H)$ if and only if $\Fr(H) + n_i+n_j -\deg(\phi) \not\in H$.
\(i) We choose $i$ and $j$ such that $x_i|m_1$ and $x_j|m_2$.
Suppose that $\deg \phi - n_i-n_j \in H$. Then there exists a monomial $m$ with $\deg m=\deg \phi - n_i-n_j$. Thus, $mx_j-m_1/x_i\in I_H$ and $mx_i-m_2/x_j\in I_H$, and so $\phi=x_j(mx_i-m_2/x_j)-x_i(mx_j-m_1/x_i)$ lies in $(x_1,\ldots,x_e)I_H$, contradicting the minimality of $\phi$. It follows that $\deg \phi - n_i-n_j \not\in H$. Hence there exists $f\in \PF(H)$ such that $f-(\deg \phi - n_i-n_j)\in H$, and this implies that $\deg \phi \le_H f+ n_i+n_j$.
\(ii) It follows from the proof of (i) that $\deg \phi - n_j \in \Ap(n_i, H)$. Then by Lemma \[Apery\](ii) we obtain that $\deg \phi - n_j = f + n_i$ for some $f\in \PF'(H)$ if and only if $(\Fr(H) + n_i) - (\deg \phi - n_j) \not\in H$.
The factorizations of the elements $f+n_k$ for $f\in \PF(H)$ play an important role in the understanding of the structure of $H$. We first prove
\[f + n\_k = b\_in\_i\] Let $f\in \PF(H)$. With the notation of [*(\[minimal\])*]{} the following holds:
1. If $f + n_k = \sum_{j\ne k}\beta_jn_j$ and if $\beta_i\ge \al_i$ for some $i$, then $\al_{ik}=0$.
2. If $f + n_k = b_in_i$ for some $k \ne i$. Then $b_i\ge \al_i -1$.
3. If $f+n_k\leq_H (\alpha_i - 1)n_i$ for some $k\ne i$, then $ f+n_k = (\al_i-1)n_i$.
\(i) By using equation (\[minimal\]) we can replace the summand $\beta_in_i$ on the right hand side of the equation in (i) by $\sum_{k\neq i}\al_{ik}n_k+(\beta_i-\al_i)n_i$. Thus if $\al_{ik} >0$, then $f\in H$, which is a contradiction.
\(ii) Add $n_i$ to both sides of the equality in (ii). Then we have $$(b_i+1)n_i = (f + n_i) + n_k = \sum_{j \ne i} c_j n_j+ n_k.$$ Since the right hand side does not contain $n_i$, we must have $b_i+1 \ge \al_i$.
\(iii) By assumption, $ (\alpha_i - 1)n_i = f + n_k + h$ for some $h\in H$. Write $f + n_k =\sum_j\beta_jn_j$ and $h=\sum_j\gamma_jn_j$ with non-negative integers $\beta_j$ and $\gamma_j$. Then we get $(\al_i-1-\beta_i-\gamma_i)n_i=\sum_{j\neq i}(\beta_j+\gamma_j)n_j$. The minimality of $\al_i$ implies $\al_i-1-\beta_i-\gamma_i=0$ and $\beta_j= 0$ for $j\neq i$. The second equations imply that $f+n_k=\beta_in_i$, and the first equation implies that $\beta_i\leq \al_i-1$. On the other hand, by (ii) we have $\beta_i\geq \alpha_i-1$.
In the case that $H$ is almost symmetric we can say more about the factorization of $f+n_k$.
\[UF-almost symmetric\] Suppose that $H= \langle n_1,n_2,\ldots,n_e\rangle$ is almost symmetric, and let $f \in \PF'(H)$.
1. Suppose that $f+n_k= \sum_{j\neq k}\beta_jn_j$ with $\beta_i>0$. Then there exists a factorization $\Fr(H)+n_k=\sum_{j\neq k}a_jn_j$ of $\Fr(H)+n_k$ such that $\beta_i=a_i+1$ and $a_j\geq \beta_j$ for $j\neq i$.
2. Suppose that $\Fr(H)+n_k$ has UF, say $\Fr(H)+n_k=\sum_{j\neq k}a_jn_j$. Then $f+n_k=(a_i+1)n_i$ for some $i\neq k$.
\(i) Put $h = (f+n_k) - n_i$. Since $\beta_i>0$, $h\in H$. Then we see $$\Fr(H)+ n_k = h + ((\Fr(H) - f) + n_i).$$ Since $H$ is almost symmetric, $(\Fr(H) - f) \in \PF'(H)$, so no factorization of $(\Fr(H) - f) + n_i$ contains $n_i$; it cannot contain $n_k$ either, since otherwise $\Fr(H)\in H$. Adding the factorization $h=(\beta_i-1)n_i+\sum_{j\neq i,k}\beta_jn_j$ to such a factorization of $(\Fr(H)-f)+n_i$ therefore yields a factorization $\Fr(H)+n_k=\sum_{j\ne k}a_jn_j$ with $a_i=\beta_i-1$ and $a_j\geq\beta_j$ for $j\neq i$, as claimed.
\(ii) Let $f+n_k= \sum_{j\neq k}\beta_jn_j$ with $\beta_i>0$. Assume there exists $l\neq i$ with $\beta_l>0$. Then (i) implies that $\beta_l=a_l+1$. On the other hand, since $\beta_i>0$ we also have that $a_j\geq \beta_j$ for all $j\neq i$. In particular, $a_l\geq \beta_l$, a contradiction. Thus $f+n_k=(a_i+1)n_i$.
Suppose that $H= \langle n_1,n_2,\ldots,n_e\rangle$ is almost symmetric, and let $f \in \PF'(H)$. If for some $k$, $f+n_k$ has a factorization with more than one non-zero coefficient, then $\Fr(H)+n_k$ does not have UF.
Assume that $\Fr(H) + n_k$ has UF. Then the following holds:
1. If $\Fr(H) + n_k-(\al_j -1)n_j\in H$ for every $j\ne k$, then $\Fr(H) + n_k = \sum_{j\ne k} (\al_j-1)n_j$.
2. If, moreover, $H$ is almost symmetric, then for every $f\in \PF'(H)$ one has $f +n_k=\al_in_i$ for some $i \ne k$.
\(i) Let $\Fr(H) + n_k =\sum_{j\neq k}a_jn_j$ be the unique factorization of $\Fr(H) + n_k$. Then the hypothesis in (i) implies that $a_j\geq \alpha_j-1$ for all $j\neq k$. If $a_i\geq \alpha_i$ for some $i\neq k$, then $\sum_{j\neq k}a_jn_j$ can be rewritten by using (\[minimal\]), contradicting the assumption that $\Fr(H) + n_k$ has unique factorization.
\(ii) follows from Lemma \[UF-almost symmetric\](ii).
The next two lemmata deal with the case that $e=4$, a case we are mainly interested in.
\[NU\_e=4\] Let $e(H)=4$ and let $f\in \PF(H)$. Then the following holds:
1. If $f+n_k$ does not have UF, then for some $i$, $f + n_k\ge_H \al_in_i$.
2. If $ f+n_k=\sum_{i\ne k} a_in_i$ with $a_i< \al_i$ for every $i$ and for any factorization of $f+n_k$, then $f + n_k$ has UF.
\(i) Let $f +n_k =\sum_{j\neq k}\beta_jn_j$ and $f +n_k =\sum_{j\neq k}\beta_j'n_j$ be two distinct factorizations of $f+n_k$. Then $\prod_j x_j^{\beta_j}- \prod_j x_j^{\beta_j'}$ is a non-zero binomial of $I_H$. Taking out the common monomial factor, the remaining binomial $\phi=m_1-m_2$ belongs to $I_H$, since $I_H$ is a prime ideal, and we have $f +n_k\geq_H \deg \phi$. Moreover, $\phi$ does not contain the variable $x_k$. Thus $\phi$ involves at most $3$ variables, and since $\gcd(m_1,m_2)=1$, one of the monomials $m_1,m_2$ is a power $x_i^b$ of a single variable (neither can be constant, as both have the same positive degree). Then $bn_i$ equals the degree of the other monomial, which does not involve $x_i$, so $b\ge \alpha_i$. This proves that $f +n_k\geq_H bn_i\geq_H \al_in_i$.
\(ii) follows from (i).
*(1) Let $H=\langle 22,28,47,53\rangle$. Then $\PF(H)=\{25,258,283\}$. Since $25+258=283$, $H$ is almost symmetric of type $3$. Moreover, $I_H=(xw-yz, xy^3-w^2, x^2y^2-zw, x^3y-z^2, x^{13}z- y^{10}w,
x^{14} - y^{11})$, so that $\al_1= 14, \al_2=11, \al_3=2=\al_4$.*
In this example, $f + n_i$ has UF for every $i$ and $f\in \PF'(H)$. In these cases, condition (ii) of Lemma \[NU\_e=4\] is satisfied. On the other hand, each of $283+47$ and $283+53$ has 3 factorizations.
\(2) Let $H = \langle 33, 56, 61,84\rangle$. Then $\PF(H) = \{28, 835, 863\}$. Since $28+835=863$, $H$ is almost symmetric of type $3$. Moreover, $I_H= (xw-yz, x^{27}z-y^2w^{10}, y^3-w^2, xy^2-zw, x^2y-z^2,
x^{28}-w^{11})$, so that $\al_1= 28, \al_2=3, \al_3=2=\al_4$. In this example, $835 + n_1$ and $835+n_3$ do not have UF. For example, $835 + n_3$ has 6 different factorizations, including $835 + n_3= 16 n_2$.
\[formula\_Ap\] Assume that $H$ is almost symmetric. Then the following holds:
1. We assume $e=4$. If for some $k$, $\al_{ik} \ge 1$ for all $i\ne k$, then $\Fr(H) + n_k$ has UF.
2. Assume that $\Fr(H) +n_k =
\sum_{j \ne k} \beta_jn_j$. If $\Fr(H) +n_k$ has UF, then $n_k = \prod_{j\ne k} (\beta_j+1) + \type(H) -1$.
\(i) If $\Fr(H) + n_k$ does not have UF, Lemma \[NU\_e=4\] implies that there exists a factorization $\Fr(H) + n_k= \sum_{i\ne k} \beta_in_i$ with $\beta_i \ge \al_i$ for some $i$. But since we assume that $\al_{ik}\ge 1$, this yields that $\Fr(H)\in H$, a contradiction.
\(ii) We first observe that $$\Ap(n_k, H)=\{h\in H\: \; h\leq_H F(H)+n_k\}\union\{f+n_k\:\; f\in PF'(H)\}.$$ Thus, $n_k=\type(H)-1+|\{h\in H\: \; h\leq_H F(H)+n_k\}|$.
Since by assumption $F(H)+n_k$ has UF, each $h\leq_H F(H)+n_k$ has UF, as well. Therefore, each $h\leq_H \Fr(H)+n_k$ has a unique factorization $h=\sum_{i\ne k} \gamma_in_i$ with integers $0\leq \gamma_i\leq \beta_i$, and conversely each such sum $\sum_{i\ne k} \gamma_in_i$ is $\leq_H\Fr(H)+n_k$. It follows that that $|\{h\in H\: \; h\leq_H F(H)+n_k\}| =\prod_{i\neq k}(\beta_i+1)$, as desired.
$\RF$-matrices {#sec:3}
==============
Let us recall the notion of the row-factorization matrix ($\RF$-matrix for short) introduced by Moscariello in [@Mo] for the numerical semigroup $H=\langle n_1,\ldots,n_e\rangle$. It describes for each $f\in\PF(H)$ and each $n_i$ a factorization of $f+n_i$.
\[RF-def\] Let $f\in \PF(H)$. An $e\times e$ matrix $A= (a_{ij})$ is an [*$RF$-matrix*]{} of $f$, if $a_{ii}= -1$ for every $i$, $a_{ij}\in \NN$ if $i\ne j$ and for every $i= 1,\ldots , e$, $$\sum_{j=1}^e a_{ij} n_j = f.$$
Note that an $\RF$-matrix of $f$ need not to be uniquely determined. Nevertheless, $\RF(f)$ will be the notation for one of the possible $\RF$-matrices of $f$.
Fundamental Properties
----------------------
The most important property of $\RF$-matrices is the following.
\[symm\] Let $f,f' \in \PF(H)$ with $f + f'\not\in H$. Set $\RF(f)= A=(a_{ij})$ and $\RF(f') = B = (b_{ij})$. Then either $a_{ij}= 0$ or $b_{ji} =0$ for every pair $i\ne j$. In particular, if $\RF(\Fr(H)/2) = (a_{ij})$, then either $a_{ij}$ or $a_{ji}=0$ for every $i\ne j$.
By our assumption, $f + n_i =\sum_{k\ne i} a_{ik} n_k$ and $f' + n_j = \sum_{l \ne j} b_{jl} n_l$. If $a_{ij}\ge 1$ and $b_{ji}\ge 1$, then summing up these equations, we get $$f + f'= (b_{ji} - 1)n_i + (a_{ij}-1)n_j + \sum_{s\ne i,j} (a_{is} + b_{js})n_s
\in H,$$ a contradiction.
\[symm-Rem\] [*As mentioned before, for given $f\in\PF(H)$, $\RF(f)$ is not necessarily unique. Note that in the notation of Lemma \[symm\], if $a_{ij}>0$ for [**some**]{} $\RF(f)$, then $b_{ji} = 0$ for [**any**]{} $\RF(f')$.*]{}
The rows of $\RF(f)$ produce binomials in $I_H$. We shall need the following notation. For a vector $\ab=(a_1,\ldots,a_n) \in \ZZ^n$, we let $\ab^+$ the vector whose $i$th entry is $a_i$ if $a_i\geq 0$, and is zero otherwise, and we let $\ab^-=\ab^+-\ab$. Then $\ab=\ab^+-\ab^-$ with $\ab^+,\ab^-\in \NN^n$.
\[produce\]Let $\ab_1,\ldots,\ab_e$ be the row vectors of $\RF(f)$, and set $\ab_{ij}=\ab_i-\ab_j$ for all $i,j$ with $1\leq i<j\leq e$. Then $
\phi_{ij}=\xb^{\ab_{ij}^+}-\xb^{\ab_{ij}^-}\in I_H
$ for all $i<j$. Moreover, $\deg \phi_{ij} \leq f+n_i+n_j$. Equality holds, if the vectors $\ab_i+\eb_i+\eb_j$ and $\ab_j+\eb_i+\eb_j$ have disjoint support, in which case there is no cancelation when taking the difference of these two vectors.
Let $\eb_1,\ldots,\eb_e$ be the canonical unit vectors of $\ZZ^e$. The vector $\ab_i+\eb_i+\eb_j$ as well as the vector $\ab_j+\eb_i+\eb_j$ with $i<j$ is the coefficient vector of a factorization of $f+n_i+n_j$. Thus, taking the difference of these two vectors, we obtain the vector $\ab_{ij}$ with $\sum_{k=1}^ec_kn_k=0$ where the $c_k$ are the components of $\ab_{ij}$. It follows that $\phi_{ij}=\xb^{\ab_{ij}^+}-\xb^{\ab_{ij}^-}$ belongs to $I_H$. The remaining statements are obvious.
We call a binomial relation of $\phi\in I_H$ an [*$\RF(f)$-relation*]{}, if it is of the form as in Lemma \[produce\], and we call it an [*$\RF$-relation*]{} if it is an $\RF(f)$-relation for some $f\in \PF'(H)$.
\[RF\_to\_I\_H\]
*Let $H=\langle 7,12,13, 22 \rangle$. Then $\PF(H)= \{15,30\}$. In this case, $$\RF(15) = \left( \begin{array}{cccc} -1 & 0 & 0 & 1\\2 & -1 & 1 & 0\\
4 & 0 & -1 & 0 \\ 0 & 2 & 1 & -1
\end{array}\right).$$ Taking the difference of the first and second row we get the vector $(-3,1,-1,1)$. This gives us the minimal generator $yw-x^3z$ of $I_H$, where we put $(x_1,x_2,x_3,x_4)=(x,y,z,w)$.*
Let us now consider all the choices of 2 rows and the resulting generators of $I_H$ together with their degree. $$\begin{aligned}
\ab_{12} &=& (-3,1,-1,1), \quad\hspace{1cm} yw-x^3z,
\quad \hspace{0.57cm} 34\\
\ab_{13} &=& (-5, 0,1,1), \quad \hspace{1.3cm} zw-x^5, \quad
\hspace{0.83cm} 35 \\
\ab_{14} &=& (-1,-2,-1,2), \quad \hspace{0.65cm} w^2-xy^2z, \quad \hspace{0.45cm}44 \\
\ab_{23} &=& (-2,-1,2,0), \quad \hspace{0.96cm} z^2-x^2y, \hspace{0.75cm}
\quad 26 \\
\ab_{24} &=& (2,-3,0,1), \quad \hspace{1.3cm} x^2w- y^3, \quad
\hspace{0.63cm}36 \\
\ab_{34} &=& (4,-2,-2,1), \hspace{1.4cm} x^4w-y^2z^2, \quad \hspace{0.25cm}50\end{aligned}$$
Among the above elements of $I_H$, $ x^4w-y^2z^2 = x^2(x^2w- y^3) - y^2( z^2-x^2y)$ is not a minimal generator of $I_H$, and we obtain $5$ minimal generators of $I_H$. Cancelation occurs in computing $\ab_{23}$ and $\ab_{24}$.
\[question\] [*Are all minimal minimal generators of a numerical semigroups $\RF$-relations?*]{}
The following lemma provides a condition which guarantees that $I_H$ can be generated by $\RF$-relations.
\[conditionforquestion\] Suppose $I_H$ admits a system of binomial generators $\phi_1,\ldots,\phi_m$ satisfying the following property: for each $k$ there exist $i<j$ and $f\in \PF'(H)$ such that $\deg \phi_k=f+n_i+n_j$ and $\phi_k=u-v$, where $u$ and $v$ are monomials such that $x_i|u$ and $x_j|v$. Then $I_H$ is generated by $\RF$-relations.
Let $\phi_k=u-v$ be as described in the lemma, and let $u'=u/x_i$. Then $\deg u'=f+n_j$ and $\deg u=(f+n_j)+n_i$ . Let $v'$ be a monomial with $\deg v'=f+n_i$. Then $\deg x_j v'=(f+n_i)+n_j$. By construction, $\psi_k=u-x_j v'$ is an $\RF$-relation, and we have $$\phi_k=\psi_k+x_j(v/x_j-v') \quad\text{with}\quad v/x_j-v'\in I_H.$$ It follows that $(\phi_1,\ldots,\phi_m) +\mm I_H=(\psi_1,\ldots,\psi_m) +\mm I_H$. Nakayama’s lemma implies that $I_H=(\psi_1,\ldots,\psi_m)$.
It will be shown in Theorem \[type2\] that Question \[question\] has an affirmative answer when $e=4$ and $\Fr(H)/2\in \PF(H)$. Here we show
\[3generated\] Question \[question\] has an affirmative answer if $e=3$
Let $H=\langle n_1,n_2,n_3\rangle$ be a 3-generated numerical semigroup. We first consider the case that $H$ is not symmetric and collect a few known facts.
By Herzog [@H] and Numata [@Nu Section 2.2] the following facts are known: there exist positive integers $\alpha$, $\beta$ and $\gamma$, as well $\alpha'$, $\beta'$ and $\gamma'$ such that
1. $I_H$ is minimally generated by $g_1=x^{\alpha+\alpha'}-y^{\beta'}z^\gamma$, $g_2=y^{\beta+\beta'}-x^\alpha z^{\gamma'}$ and $g_3=z^{\gamma+\gamma'}-y^\beta z^{\alpha'}$, that is, $(\alpha+\alpha')n_1=\beta' n_2+\gamma n_3$, $(\beta+\beta')n_2=\alpha n_1+\gamma' n_3$ and $(\gamma+\gamma')n_3=\alpha' n_1+\beta n_2$.
2. $n_1=(\beta+\beta')\gamma +\beta'\gamma'$, $n_2=(\gamma+\gamma')\alpha+\gamma'\alpha'$ and $n_3=(\alpha+\alpha')\beta+\alpha'\beta'$.
3. $\PF(H)=\{f,f'\}$ with $f=\alpha n_1+(\gamma-\gamma')n_3-(n_1+n_2+n_3)$ and $f'= \beta' n_2+(\gamma-\gamma')n_3-(n_1+n_2+n_3)$.
Then it is easy to see that. $$\RF(f) = \left( \begin{array}{ccc} -1 & \beta'+\beta-1 & \gamma-1\\
\alpha-1 & -1 & \gamma+\gamma'-1\\
\alpha +\alpha'-1 & \beta-1 & -1
\end{array}\right).$$ Let $\ab_1,\ab_2, \ab_3$, the first, second and third row of $\RF(f)$. Then we obtain $\ab_3-\ab_1=(\alpha+\alpha', -\beta',-\gamma)$, $\ab_1-\ab_2=(-\alpha, \beta+\beta', -\gamma')$ and $\ab_2-\ab_3=(-\alpha',-\beta, \gamma+\gamma')$.
Comparing these vectors with (1), we see that they correspond to the minimal generators of $I_H$.
We now consider the case that $H$ is a symmetric semigroup. Then it is known that there exist positive integers $a,b$ and $d$ with $\gcd(a,b)=1$ and $d>1$ such that (after a relabeling) $n_1=da$,$n_2=db$ and $n_3=\alpha a-\beta b$. In this case $I_H$ is generated by the regular sequence $g_1=x^b-y^a$ and $g_2= z^d-x^{\alpha}y^{\beta}$, that is, for the semigroup we have the generating relations $bn_1=an_2$ and $dn_3=\alpha n_1+\beta_3 n_2$.
Since $H$ is symmetric $\PF(H)=\{\Fr(H)\}$, and since $I_H$ is generated by the regular sequence $F(H)=(\deg g_1)+(\deg g_2)-n_1-n_2-n_3= bn_1+gn_3-n_1-n_2-n_3$, We claim that
$$\RF(F(H)) = \left( \begin{array}{ccc} -1 & a-1 & d-1\\
b-1 & -1 & d-1\\
b-1+\alpha_1 & \alpha_2-1 & -1
\end{array}\right),$$ if $\alpha_2>0$. On the other hand, if $\alpha_2=0$ and $\alpha_1>0$, then the last column of $\RF(F(H))$ has to be replaced by the column $(\alpha_1-1,a-1+\alpha_2,-1)$.
Let $\ab_1,\ab_2, \ab_3$, the first, second and third row of $\RF(f)$. Then, if $\alpha_2>0$ we obtain $\ab_1-\ab_2=(b,-a,0)$ and $\ab_3-\ab_2=(\alpha_1,\alpha_2, -d)$. These are the generating relations of $H$. In the case that $\alpha_1>0$ one obtains $\ab_1-\ab_3=(\alpha_1,\alpha_2, -d)$.
We will show that if $e=4$ and $H$ is pseudo-symmetric or almost symmetric, then Question \[question\] has an affirmative answer ( cf. Theorem \[Type\_even\] and Theorem \[rfrelations\] and Remark \[RF-rel-7gen\].) When $e=4$ and $H$ is symmetric, we can not determine $\RF(\Fr(H))$ in a unique manner and also we cannot get all the generators of $I_H$ from a single $\RF(\Fr(H))$ although we get the generators by selecting a suitable expression and consider linear relations of rows of $\RF(\Fr(H))$.
Moscariello proves [@Mo Lemma 6] that if $H$ is almost symmetric and $e=4$, and if for some $j$ and $a_{ij}=0$ for every $i\ne j$, then $f = \Fr(H)/2$. His result can be slightly improved.
\[a\_ij>0\] Assume that $H$ is almost symmetric and $e=4$. Let $f\in \PF(H), f\ne \Fr(H)$ and put $A=(a_{ij})=\RF(f)$. Then for every $j$, there exists $i\neq j$ such that $a_{ij}>0$. Namely, any column of $A$ should contain some positive component.
First, let us recall the proof of Moscariello. Assume, for simplicity, $a_{i1}=0$ for $i=2,3,4$. Put $d = \gcd(n_2,n_3,n_4)$. From the equation $f = -n_2 + a_{23}n_3 + a_{24}n_4$, we have $d|f$ and from $f = -n_1+a_{12}n_2+a_{13}n_3+a_{14}n_4$, we get $d| n_1$. This implies $d=1$. Hence $H_1: =\langle n_2,n_3,n_4\rangle$ is a numerical semigroup, and the last 3 columns of $A$ show us that $f\in \PF(H_1)$.
Since $H_1$ is generated by $3$ elements, $t(H_1)\le 2$ (see [@H]), and since $\Fr(H_1)\ge \Fr(H)>f$, we conclude that $\PF(H_1)=\{f, \Fr(H_1)\}$. Hence we have $\Fr(H_1) -f \not\in H_1$ and $\Fr(H_1) -f \le_{H_1} f$ or $\Fr(H_1) -f \le_{H_1} \Fr(H_1)$. The second case cannot happen, since otherwise $f\in H_1$. Thus we have $\Fr(H_1) -f \le_{H_1} f$.
On the other hand, since $\Fr(H)\not\in H_1$ and $f< \Fr(H)\le \Fr(H_1)$, we get $\Fr(H) - f \le_{H_1} \Fr(H_1) - f \le_{H_1} f$. Moreover, since $ \Fr(H) - f \in \PF(H)$, it follows that $ \Fr(H) - f =f$. (Until here, this Moscariello’s argument.)
The arguments before show that $F(H)=F(H_1)$, and that $H_1$ is pseudo-symmetric. The latter implies that $g(H_1) = \Fr(H_1)/2 +1$.Therefore, $g(H) \le g(H_1)-1 =\Fr(H_1)/2=\Fr(H)/2$, since $H \supset H_1\cup\{n_1\}$. This is a contradiction.
Combining Lemma \[symm\], Remark \[symm-Rem\] and Lemma \[a\_ij>0\], we get
\[0\_in\_row\] Assume $H$ is almost symmetric, and let $f\in \PF'(H)$. Then every row of $\RF(f)$ has at least one $0$. Moreover, for every $i$, there exists $j\ne i$ such that the $(i,j)$ component of $\RF(f)$ is $0$ for any choice of $\RF(f)$.
If $e=5$, Lemma \[a\_ij>0\] is not true.
If $H= \langle 10, 11 , 15, 16, 28 \rangle$, then $\PF(H) =\{ 5,17,29, 34\}$ and hence $H$ is almost symmetric of type $4$. Then $$\RF(5) = \left( \begin{array}{ccccc} -1 & 0 & 1 & 0 & 0\\
0 & -1 & 0 & 1 & 0\\
2 & 0 & -1 & 0 & 0\\ 1 & 1 & 0 & -1& 0\\ 0 & 3 & 0& 0 & -1
\end{array}\right), \quad {\mbox{\rm and}} \;
\RF(29) = \left( \begin{array}{ccccc} -1 & 1 & 0 & 0& 1\\1 & -1 & 2 & 0 & 0\\
2 & 2& -1 & 0 & 0\\ 0 & 0 & 3 & -1 & 0\\ 3 & 1 & 0 & 1 & -1
\end{array}\right).$$ We see that the 5th column of $\RF(5)$ has no positive entry and we can choose another expression of $\RF(29))$ whose $(5,3)$ entry is positive.
We can say that if $e=4$ and $H$ almost symmetric, then $f + n_i$ has UF in $H$ in most cases for $f\in \PF'(H)$.
\[2nj\] Assume that $H$ is almost symmetric, $e=4$. Then for any $f\in PF'(H)$ and any $n_i$, the decomposition of $f+n_i$ has at most $2$ $n_j$’s. Moreover, if $f+n_i$ does not have UF and $n_j, n_k$ appears in the decomposition of $f+n_i$, then we have $\al_jn_j= \al_kn_k$.
Let $\{i,j,k,l\}$ be a permutation of $\{1,2,3,4\}$. By Corollary \[0\_in\_row\], we may assume that the $(i.l)$ component of $\RF(f)$ is $0$ for any choice of $\RF(f)$. Thus $f+n_i$ contains only $n_j, n_k$. Assume that there are $2$ different expressions $f+n_i = an_j + bn_k = a'n_j + b'n_k$. Assuming $a>a', b< b'$, we have $(a-a')n_j = (b'-b)n_k$ and then $a\ge a-a'\ge \al_j, b'-b \ge \al_k$.
\[f + n\_k = bn\_i\] Suppose that $H$ is almost symmetric, $e=4$ and for some $f\in PF'(H)$ we have $f + n_k = bn_i$ for some $k \ne i$. Then one of the following cases occur:
1. $b= \al_i -1$ or
2. $b \ge \al_i$ and for some $j\ne i,k$, $\al_i n_i = \al_j n_j$.
This is a direct consequence of Lemma \[f + n\_k = b\_in\_i\] Corollary \[2nj\].
[*Let $H =\langle 33,56,61,84\rangle$ with $\PF(H)=\{f=28,f'=835, F(H)=863\}$ and $\al_1=28, \al_2=3, \al_3=2, \al_4=2$. In this case, $\RF(28)$ is uniquely determined, but there are several choices of $\RF(835)$. Among them we can choose the following, where $ f' + n_3= 16 n_2$ with $16> \al_2$. Note that we have $\al_2n_2= \al_4n_4$ in this case. $$\RF(28) = \left( \begin{array}{cccc} -1 & 0 & 1 & 0\\0 & -1 & 0 & 1\\
1 & 1 & -1 & 0 \\ 0 & 2 & 0 & -1
\end{array}\right)\quad \text{and}\quad
\RF(835) = \left( \begin{array}{cccc} -1 & 2 & 0 & 9\\27 & -1 & 0 & 0\\
0 & 16 & -1 & 0 \\ 26 & 0 & 1 & -1
\end{array}\right).$$*]{}
The following Proposition plays an important role in Section \[sec:komeda\].
\[f+n\_k= f’ + n\_l\] Assume $e=4$, $H$ is almost symmetric and $\{i,j,k,l\}=\{1,2,3,4\}$. Then the following statements hold:.
1. For any $f,f',f'' \in \PF'(H)$, $f+n_k = f' + n_l=f''+n_j$ does not occur.
2. Assume that for some $f\ne f'\in \PF '(H)$, we have $f+n_k = f' + n_l$ for some $1\le k,l\le 4$. Then $f+n_k = f' + n_l = (\al_i -1)n_i$ for some $i\ne j,k$.
3. Assume $f+n_k = f' + n_l = (\al_i -1)n_i$ for some $f,f'\in \PF'(H)$ with $f+f' \not\in H$. Then there are expressions $$\al_in_i = pn_j + qn_k = p'n_j + r n_l \quad {\mbox{\text with }} q,r>0.$$ Moreover, if $p\le p'$, then $q = \al_k$ and if $q\ge \al_k+1$, then $\al_kn_k=\al_jn_j$.
\(i) Assume for some $f,f',f'' \in \PF'(H), f+n_k = f' + n_l=f''+n_j$. Then these terms are equal to $bn_i$ for some integer $b>0$. Then, adding $\Fr(H)-f$, we have $\Fr(H)+ n_k = (\Fr(H)-f) + n_l+ f'= (\Fr(H)-f)+n_j+f'' =
(\Fr(H)-f) +bn_i$. This implies that $n_k$ does not appear in $(\Fr(H)-f)+n_l, (\Fr(H)-f)+n_j,
(\Fr(H)-f)+n_i$, contradicting Lemma \[a\_ij>0\].
\(ii) Assume that $f+n_k = f' + n_l = bn_i + b'n_j$. If $b,b'>0$, adding $\Fr(H) -f$ to both sides, we get $$\Fr(H) +n_k = (\Fr(H) -f) + (f' + n_l) = (\Fr(H) -f) + bn_i + b'n_j,$$ where $\{i,j,k,l\}$ is a permutation of $\{1,2,3,4\}$. Then $n_k$ does not appear in $(\Fr(H) -f)+n_i, (\Fr(H) -f)+n_j$. Moreover, since $\Fr(H) +n_k = [(\Fr(H) -f) +n_l] + f'$, $n_k$ does not appear in $(\Fr(H) -f)+n_l$, too. This contradicts Lemma \[a\_ij>0\]. Hence $b=0$ or $b'=0$. We may assume that $b'=0$, and hence $f+n_k = f' + n_l = bn_i$. But if $b\ge \al_i$, then $\al_i n_i$ cannot contain $n_k, n_l, n_j$, which is absurd. Hence by Proposition \[f + n\_k = bn\_i\], we must have $b =\al_i-1$.
\(iii) We write the $(i,j)$ component of $\RF(f)$ (resp. $\RF(f')$) by $m(i,j)$ (resp. $m'(i,j)$). We know by Lemma \[symm\] that $m(i,j) >0$ for some expression of $\RF(f)$, then $m'(j,i)=0$ for any expression of $\RF(f')$ and vice versa.
Now, adding $n_i$ to both sides of the equation, we get $$(f+n_i) +n_k = (f' +n_i) + n_l = \al_i n_i.$$
Hence $\al_{ik}>0$ and $\al_{il}>0$ in some expressions of $\al_in_i$. If we have an expression of the form $\al_i n_i = p n_j + q n_k + rn_l$ with $q,r >0$, then we have $$f + n_i = p n_j + (q-1) n_k + rn_l \quad {\mbox{\text and }} f' + n_i = p' n_j +q n_k + (r-1)n_l,$$ so $m(i,l), m'(i,k) >0$. This contradicts Lemma \[symm\] since $m(k,i), m'(l,i)>0$.
Hence we have different expressions $$\al_in_i = p n_j + q n_k = p' n_j + r n_l$$ with $q,r>0$. If $p\le p'$, then we have $qn_k = (p'-p) n_j + r n_l$. Hence we have $q\ge \al_k$.
If $q\ge \al_k+1$, then $f+n_i$ has 2 different expressions and since $p n_j + (q-1) n_k$ cannot contain $n_l$, we must have $\al_kn_k = \al_jn_j$ by Corollary \[2nj\].
[*The following example demonstrates the result of Proposition \[f+n\_k= f’ + n\_l\] (ii). Let $H= \langle 9,22,46,57 \rangle$. Then $\PF(H) = \{ f=35,f'=70,\Fr(H)=105\}$. We have $f +n_4= 92 = f' + n_2 = 2 n_3$, and $\al_1=10, \al_2=3,
\al_3=3, \al_4=2$. The $\RF$-matrices are $$\RF(35) = \left( \begin{array}{cccc} -1 & 2 & 0 & 0\\0 & -1 & 0 & 1\\
9 & 0 & -1 & 0 \\ 0 & 0 & 2 & -1
\end{array}\right), \quad {\mbox{\rm and}} \;
\RF(70) = \left( \begin{array}{cccc} -1 & 1 & 0 & 1\\0 & -1 & 2 & 0\\
8 & 2& -1 & 0 \\ 9 & 0 & 1 & -1
\end{array}\right).$$*]{}
Komeda’s structure theorem for $4$ generated pseudo-symmetric semigroups via RF-matrices. {#sec:komeda}
==========================================================================================
We will apply our results in the previous sections to give a new proof of Komeda’s structure theorem for $4$-generated pseudo-symmetric semigroups using $\RF(\Fr(H)/2)$.
We believe that our proof of the structure theorem of type $2$ almost symmetric numerical semigroups is simpler than the one in the original paper [@Ko].
In this subsection we always assume that $H=\langle n_1,n_2,n_3,n_4\rangle$ and that $\Fr(H)/2\in \PF(H)$.
First we sum up the properties of $\RF(\Fr(H)/2) = A=(a_{ij})$ given in Lemma \[symm\], \[a\_ij>0\] and Corollary \[0\_in\_row\].
\[RF\_F/2\] Let $\RF(\Fr(H)/2)=(a_{ij})$ be an $\RF$-matrix of $\Fr(H)/2)$. Then:
1. $a_{ii} = -1$ for every $i$, and $a_{ij}$ is a non-negative integer for every $i\ne j$.
2. For every pair $(i,j)$ with $i\ne j$, either $a_{ij}$ or $a_{ji}$ is $0$.
3. Every row and column of $A$ has at least one positive entry.
Since there are at most $6$ positive entries in $A$, at least $2$ rows have only one positive entry. More precisely we have
\[RF(F/2)\] After a suitable relabeling of the generators of $H$ we may assume that $$\RF( \Fr(H)/2) = \left( \begin{array}{cccc} -1 & \alpha_2-1 & 0 & 0\\
0 & -1 & \alpha_3-1 & 0 \\
a & 0 & -1 & d\\
a' & b & 0 & -1 \end{array}\right)$$ with $a', d>0$ and $a, b\ge 0$.
We simply write $F=F(H)$ and let $A=(a_{ij})= \RF(F/2)$. We will see that this matrix is uniquely determined, but at this moment, we can take any choice. Proposition \[RF\_F/2\](ii) implies that $A$ has at least $6$ entries equal to zero. Thus there must exist a row of $A$ with only one positive entry and we can assume that the first row of $A$ is $(-1, b',0,0)$, or $F/2 + n_1 = b' n_2.$ We can assume that in [*any*]{} choice of $\RF( \Fr(H)/2)$, $(1,3), (1,4)$ entries are $0$. Then, by Lemma \[f + n\_k = b\_in\_i\], $b' \ge \al_2 -1$. If $b' \ge \al_2$, we have another choice of $\RF(F/2)$ with non $0$ $(1,3)$ or $(1,4)$ entry. Hence we conclude $b' =\al_2-1$.
Now, by Lemma \[symm\], $a_{21}=0$ and $F/2+n_2= a_{23}n_3+a_{24}n_4$. If $a_{23}$ or $a_{24}=0$, then we may assume the $2$nd row of $\RF(F/2)$ is $(0,-1,c,0)$. If $a_{23}$ and $a_{24}$ are both positive, then since $a_{32}=a_{42}=0$ and $a_{34}$ or $a_{43}=0$ by Lemma \[symm\], the 3rd row (resp. 4th row) of $\RF(F/2)$ is $(a', 0,-1, 0)$ (resp. $(a', 0, 0,-1)$. In both cases, after changing the order of generators, we can assume the first $2$ rows of $\RF(F/2)$ are
$$\left( \begin{array}{cccc} -1 & \al_2-1 & 0 & 0\\
0 & -1 & c & 0 \end{array}\right).$$
or
$$\left( \begin{array}{cccc} -1 & b & 0 & 0\\
0 & -1 & \al_3-1 & 0 \end{array}\right).$$
We assume the 1st expression. We can treat the 2nd case in the same manner.
We have only to show that $c=\al_3-1$ since $a_{32}=0$ and $a_{34}>0$ by \[symm\]. Now we use $a,a',b,d$ as in the right hand side of $\RF(F/2)$ in Proposition \[RF(F/2)\]. If $c\ge \al_3$, $\al_3n_3$ can contain only $n_4$, since $a_{21}=0$. Then we must have $\al_3n_3 = \al_4 n_4$ by Corollary \[2nj\]. Then by Lemma \[symm\], $b=0$ and $F/2 + n_4 = a' n_1$. It is easy to see that $a'=\al_1-1$, since $a_{42}=a_{43}=0$ by our assumption.
Taking the difference of 3rd and 4th rows of $\RF(F/2)$, we get
$$(d+1) n_4 = (\al_1-1-a) n_1+ n_3,$$
and thus $d\ge \al_4-1$. On the other hand, from $F/2 + n_3 = an_1+ d n_4$, we see $d\le \al_4-1$ since we have seen $\al_3n_3= \al_4n_4$. Then we have
$$(\al_3 -1) n_3 = (\al_1-1-a) n_1,$$
contradicting the definition of $\al_3$.
Now we come to the main results of this section.
\[type2\] Let $H=\langle n_1,n_2,n_3,n_4\rangle$, and assume $\PF(H) =\{\Fr(H)/2, \Fr(H)\}$. Then for a suitable relabeling of the generators of $H$, $$\begin{aligned}
\label{normal}
\RF(\Fr(H)/2) = \left( \begin{array}{cccc} -1 & \alpha_2-1 & 0 & 0\\
0 & -1 & \alpha_3-1 & 0 \\
\al_1-1 & 0 & -1 & \al_4-1\\
\al_1-1 & \al_{42} & 0 & -1 \end{array}\right)\end{aligned}$$ and $\Fr(H)/2 +n_k$ has UF for every $k$, that is, $\RF(\Fr(H)/2)$ is uniquely determined.
We start with the $\RF$-matrix $A$ given in Proposition \[RF(F/2)\], and first determine the unknown values $a,a',b,d$ there. Note that until the end of (3), we only assume that $\Fr(H)/2\in \PF(H)$.
\(1) Taking the difference of the $1$st and $2$nd (resp. the $2$nd and $3$rd) row of $A$, leads to the equations $$(*2) \quad \al_2n_2 = n_1 + (\al_3 -1)n_3,$$ $$(*3) \quad \al_3 n_3 = an_1 + n_2 + d n_4.$$ (2) If $a'\ge \al_1$, since (4, 3) component of $\RF(F/2)$ should be $0$ in any expression, we must have $\al_1n_1 = \al_2n_2$. But then from (1), we have another expression of $\RF(F/2)$ with positive (4,3) component. A contradiction! Similarly, we have $b\le \al_2-1$.
Now, taking the difference of the $1$st and the $4$th row, we have $$(\al_2-1- b)n_2+n_4= (a'+1)n_1.$$
Since we have seen $a'\le \al_1 -1$, we have $a'= \al_1-1$. Also, since $n_4$ is a minimal generator of $h$, we have $b < \al_2-1$. We have $$(*1) \quad \al_1n_1 = (\al_2-1-b) n_2 + n_4$$. Then, we must have $a< \al_1$, since otherwise will have different expression of $\RF(F/2)$ with positive $(3,2)$ entry.
\(3) If $d \ge \al_4$, then we must have $\al_4n_4 = \al_1n_1$ since $(3,2)$ entry of $\RF(F/2) =0$ in any expression. Then we have a contradiction from equation $(*1)$.
Taking the difference of the $3$rd and the $4$th row, we have $$(d+1)n_4 = (\al_1 - 1 -a) n_1 + b n_2 + n_3.$$ Since $d\le \al_4-1$, we have $d= \al_4-1$ and we get $$(*4) \quad \al_4n_4 = (\al_1 - 1 -a) n_1 + b n_2 + n_3.$$ Let us sum up what we have got so far:
$$(\dagger) \quad \RF(\Fr(H)/2) = \left( \begin{array}{cccc} -1 & \alpha_2-1 & 0 & 0\\
0 & -1 & \al_3-1 & 0 \\
a & 0 & -1 & \al_4-1\\
\al_1-1 &
b=\al_{42} & 0 & -1 \end{array}\right),$$
Moreover, we have obtained an expression with $\al_{12}=\al_2-1-b >0$, $\al_{13}=0$, $\al_{14}=1$, $\al_{21}=1$, $\al_{23}=\al_3-1$, $\al_{24}=0,$ $\al_{31}=a$, $\al_{32}=1$, $\al_{34}=\al_4-1$, $\al_{41}=\al_1-1-a$, $\al_{42}=b$, $\al_{43}=1.$
\(4) To finish the proof, it suffices to show that $b>0, a>0$ and then $a= \al_1-1$. If $b=0$, then adding 2nd and 4th rows of our matrix, we have $$(\al_1-1)n_1+ (\al_3-1)n_3 = \Fr(H) + n_2+ n_4.$$ Since $(\al_i -1)n_i \in \Ap(n_k, H)$ for every $k\ne i$, and from Lemma \[Apery\], we get $$\Fr(H) +n_2 - (\al_1-1)n_1= (\al_3-1)n_3 - n_4 \in \PF'(H).$$ Which leads to $ (\al_3-1)n_3 = n_4 + \Fr(H)/2 = (\al_1-1)n_1$, a contradiction! Thus we have $b>0$. If $a=0$, then adding 2nd and 4th rows of our matrix and by the same argument as above, we have $(\al_2-1)n_2= \Fr(H)/2 + n_3= (\al_4 -1)n_4$, a contradiction.
Next we show that $a=\al_1-1$. We have seen that $a\le \al_1-1$. If $a< \al_1-1$, then we have $\al_{41}>0$ and by Lemma \[formula\_Ap\], $\Fr(H) + n_1$ has UF. We show that this leads to a contradiction.
Adding the 1st and 2nd rows of $\RF(\Fr(H)/2)$, we get $$\Fr(H) + n_1 = (\al_2-2) n_2 + (\al_3-1) n_3.$$ Since $\Fr(H) + n_1$ has UF, $(\Fr(H) + n_1) - (\al_4-1)n_4\not\in H$ and by Lemma \[Apery\] we should have $(\al_4-1)n_4 = \Fr(H)/2 +n_1$. Since we have seen $\Fr(H)/2 +n_1= (\al_2-1)n_2$, we get a contradiction. Hence we have $a= \al_1-1$.
\[Type\_even\] If $\Fr(H)/2 \in \PF(H)$ and if $\RF(\Fr(H)/2)$ is as in Theorem \[type2\], then we have:
1. $\Fr(H) +n_2$ has UF and we have $ n_2 = \al_1\al_4(\al_3-1) +1$.
2. Every generator of $I_H$ is an $\RF(\Fr(H)/2)$-relation.\
Namely, $I_H = (x_2^{\al_2}- x_1x_3^{\al_3-1},
x_1^{\al_1}- x_2^{\al_{2}-1-\al_{42}}x_4,
x_3^{\al_3}- x_1^{\al_1-1}x_2x_4^{\al_4-1},
x_3^{\al_3-1}x_4- x_1^{\al_1-1}x_2^{\al_{42}+1},
x_4^{\al_4} - x_2^{\al_{42}}x_3)$. (The difference of 1st and 3rd rows does not give a minimal generator of $I_H$.)
3. $H$ is almost symmetric and $\type(H)=2$.
We will show in Proposition \[even\_type\_le2\] that if $e=4$ and $\Fr(H)/2\in \PF(H)$, then $\RF(\Fr(H)/2)$ is as in \[normal\], showing that if $e=4$ and $H$ is almost symmetric with even $\Fr(H)$, then $\type(H)=2$.
First, note that by Lemma \[formula\_Ap\](i), $\Fr(H) + n_2$ has UF, since $\al_{i2}\neq 0$ for all $i\neq 2$, as we can read from our $\RF(\Fr(H)/2)$. Adding the 2nd and 3rd row of $\RF(\Fr(H)/2)$, we get $$\Fr(H) + n_2 = (\al_1-1)n_1+ (\al_3-2)n_3 + (\al_4-1) n_4.$$ Hence $n_2 = \al_1\al_4(\al_3-1) +\type(H)-1\ge \al_1\al_4(\al_3-1) +1$, by Lemma \[formula\_Ap\](ii). We will show that $\type(H)=2$ by showing $n_2 = \al_1\al_4(\al_3-1) +1$.
We determine the minimal generators of $I_H$. Let $I'$ be the ideal generated by the binomials $$x_2^{\al_2}- x_1x_3^{\al_3-1},
x_1^{\al_1}- x_2^{\al_{12}}x_4,
x_3^{\al_3}- x_1^{\al_1-1}x_2x_4^{\al_4-1},
x_3^{\al_3-1}x_4- x_1^{\al_1-1}x_2^{\al_2-\al_{12}},
x_4^{\al_4} - x_2^{\al_2-1-\al_{12}}x_3.$$ Since these binomials correspond to difference vectors of rows of $\RF(\Fr(H)/2)$, it is clear that $I' \subset I_H$. In order to prove that $I' = I_H$, we first show that $S/ (I' ,x_2) = S/(I_H,x_2)$. Note that $S/(I_H,x_2)\iso K[H]/(t^{n_2})$. Therefore, $\dim_KS/(I_H,x_2)=n_2$, and since $I'\subset I_H$ we see that $\dim_K S/(I',x_2)\geq n_2$. We have seen that $n_2 = \al_1\al_4(\al_3-1) +\type(H)-1$. On the other hand, $$S/(I',x_2)\iso K[x_1,x_3,x_4]/( x_1x_3^{\al_3-1}, x_1^{\al_1}, x_3^{\al_3}, x_3^{\al_3-1}x_4, x_4^{\al_4}),$$ from which we deduce that $\dim_k (S/ (I' ,x_2))= \al_1\al_4(\al_3-1) +1$. It follows that $\al_1\al_4(\al_3-1) +1\geq \al_1\al_4(\al_3-1) +\type(H)-1$. This is only possible if $\type(H)=2$ and $S/ (I' ,x_2) = S/(I_H,x_2)$.
Now consider the exact sequence $$0\to I_H/I'\to S/I'\to S/I_H\to 0.$$ Tensorizing this sequence with $S/(x_2)$ we obtain the long exact sequence $$\cdots \to \Tor_1(S/I_H, S/(x_2))\to (I_H/I')/x_2(I_H/I')\to S/(I',x_2)\to S/(I_H,x_2)\to 0.$$ Since $S/I_H$ is a domain, $x_2$ is a non-zerodivisor on $S/I_H$. Thus $\Tor_1(S/I_H, S/(x_2))=0$. Hence, since $S/ (I' ,x_2) = S/(I_H,x_2)$, we deduce from this exact sequence that $(I_H/I')/x_2(I_H/I')=0$. By Nakayama’s lemma, $I_H/I'=0$, as desired.
This finishes the proof of the structure theorem of Komeda, using $\RF(\Fr(H)/2)$.
*The generators of $I_H$ in the papers [@Ko] and [@BFS] are obtained by applying the cyclic permutation $(1,3,4,2)$. Namely, in their paper, $I_H = (x_1^{\al_1}- x_3x_4^{\al_4-1},
x_1^{\al_1}- x_2^{\al_{2}-1-\al_{42}}x_4,
x_4^{\al_4}- x_3^{\al_3-1}x_1x_2^{\al_2-1},
x_4^{\al_4-1}x_2- x_3^{\al_3-1}x_1^{\al_{21}+1},
x_2^{\al_2} - x_1^{\al_{21}}x_4)$.*
These equations are derived from the matrix obtained by the same permutation.
$$\RF(\Fr(H)/2)= \left( \begin{array}{cccc} -1 & 0 & 0 &\alpha_4-1 \\
\al_{21} & -1 & \alpha_3-1 & 0 \\
\al_1-1 & 0 & -1 & 0\\
0 & \al_2-1 & \al_3-1 & -1 \end{array}\right)$$
Some structure Theorem for $\RF$-matrices of a $4$-generated almost almost symmetric numerical semigroup $H$ and a proof that $\type(H) \le 3$. {#sec:5}
===============================================================================================================================================
We investigate the special rows" of $\RF$-matrices of a $4$-generated almost almost symmetric numerical semigroup $H$ and give another proof of the fact that a $4$-generated almost symmetric numerical semigroup has type $\le 3$, proved by A. Moscariello.
\[type\_le3\][@Mo] If $H=\langle n_1,\ldots, n_4\rangle$ is almost symmetric, then $\type(H)\le 3$.
In this section, let $H =\langle n_1,n_2,n_3,n_4\rangle$ and we assume always that $H$ is almost symmetric. Our main tool is the special row of $\RF(f)$ for $f\in \PF(H)$.
\[special\_def\] [*If a row of $\RF(f)$ is of the form $(\al_i -1) {\bld e}_i - {\bld e}_k$ is called [**a special row**]{}, where ${\bld e}_i$ the $i$-th unit vector of ${\mathbb Z}^4$.*]{}
\[special row\] We assume $e=4, \{n_1,n_2,n_3,n_4\} =
\{n_i, n_j, n_k, n_l\} $ and $H$ is almost symmetric.
1. There are $2$ rows in $\RF(\Fr(H)/2)$ of the form $(\al_i -1) {\bld e}_i - {\bld e}_k$.
2. If $f\ne f' \in \PF(H)$ with $f+f' \not\in H$, then there are $4$ rows in $\RF(f)$ and $\RF(f')$ of the form $(\al_i -1) {\bld e}_i - {\bld e}_k$.
3. For every pair $\{f,f'\}\subset \PF'(H), f\ne f', f+f'\not\in H$ and for every $j\in \{1,2,3,4\}$, there exists some $s$ such that either $(\al_j-1)n_j = f +n_s$ or $(\al_j-1)n_j = f' +n_s$.
We have proved (i) in Proposition \[RF\_F/2\].
\(ii) By Lemma \[symm\], we have at least $12$ zeroes in $\RF(f)$ and $\RF(f')$. Also, by Corollary \[0\_in\_row\], a row of $\RF(f)$ should not contain $3$ positive components. Also, we showed in \[f + n\_k = b\_in\_i\] that if the $k$-th row of $\RF(f)$ is $-{\bld e}_k + b{\bld e}_i$, then $b\ge \al_i -1$.
Now, a row of $\RF(f), \RF(f')$ is either one of the following $3$ types.
1. contains $2$ positive components,
2. $q {\bld e}_s - {\bld e}_t$ with $q\ge \al_s$,
3. $(\al_s-1){\bld e}_s - {\bld e}_t$.
Now, in case (b), some different components of $t$-th row are positive. Hence if $a,b,c$ be number of rows or type (a), (b), (c), respectively, we must have $2(a+b)+c\le 12$. Since $a+b+c=8$, we have $c\ge 4$.
\(iii) We have shown in (ii) that there are at least $4$ rows in $\RF(f)$ and $\RF(f')$ of the form $(\al_i -1) {\bld e}_i - {\bld e}_k$. So, we may assume that for some $i,k,l$ we have the relations $$n_k + f = (\al_i-1)n_i = n_l+ f'.$$ Then we have shown in Proposition \[f+n\_k= f’ + n\_l\] that $\al_in_i$ must have $2$ different expressions $$(5.7.1) \qquad \al_i n_i = p n_j + qn_k = p' n_j + r n_l$$ and we will have $$(5.7.2) \qquad f + n_i = p n_j + (q-1)n_k, f'+ n_i = p' n_j + (r-1) n_l.$$
We will write $\RF(f) = (m_{st})$ and $\RF(f') = (m'_{st})$ if $f+ n_s = \sum_t m_{st} n_t$ for some expression and we say $m_{st}=0$ for some $(s,t)$ if $m_{st} =0$ in [*any*]{} expression of $f+ n_s = \sum_t m_{st} n_t$ and likewise for $\RF(f') = (m'_{st})$.
Here, since our argument is symmetric on $k,l$ until now, we may assume that $p'\ge p$ and then from (5.7.1), we have $$qn_k = (p'-p)n_j + rn_l$$ and we must have $q \ge \al_k$ and also if $q\ge \al_k + 1$, then $\al_kn_k=\al_jn_j$ by Proposition \[f + n\_k = bn\_i\]. But then from $p n_j + qn_k
=(p+\al_j)n_j +(q-\al_k)n_k= p' n_j + r n_l$, we have $r\ge \al_l$. and this will easily lead to a contradiction. Hence we have $q=\al_k$ and $$(5.7.2') \qquad f + n_i = p n_j + (\al_k-1)n_k, f'+ n_i = p' n_j + (r-1) n_l.$$
Now, since $m_{il}=m_{kl}=0$ (resp. $m'_{ik} = m'_{lk}=0$), then $m_{jl}$ (resp. $m'_{jk}$) must be positive by Lemma \[a\_ij>0\]. Let us put $$f + n_j = s n_i + t n_k + u n_l \quad (u>0).$$
Since $f' = f + n_k - n_l$, we have $f' = s n_i + (t+1) n_k + (u-1)n_l$.
We discuss according to $s>0$ or $s=0$.
Case (a). If $s>0$, by Lemma \[a\_ij>0\], $m_{jl}=m'_{jk}=0$ since by Lemma \[symm\], $m_{ij}=m'_{ij}=0$ and hence we have $p=p'=0$ and after a short calculation we have $$(5.7.3) \qquad f + n_i = (\al_k-1)n_k, f' +n_i = (\al_l-1)n_l$$ by Lemma \[f + n\_k = b\_in\_i\] and since $m_{il}=m'{ik}=0$. Also, by Corollary \[0\_in\_row\], since $s>0$, $m_{li}=m'_{ki}=0$ and by Lemma \[symm\], either $m_{lk}$ or $m'_{lk}=0$. Hence we have $f + n_l = (\al_j-1)n_j$ (resp. $f' +n_k = (\al_j-1)n_j$) if $m_{lk}$ (resp. $m'_{lk}=0$). And then we have proved our assertion.
Case (b). $s=0$. We put $$(5.7. 4)\qquad f + n_l = an_i + bn_j + cn_k, f' + n_k = b'n_j + d' n_l$$ (since $m_{ik} =\al_k-1>0$, we have $m'_{ki}=0$ by Lemma \[symm\]) so that $$RF(f)= \left( \begin{array}{cccc} -1 & p & \al_k-1 & 0 \\
0 & -1 & t & u \\
\al_i-1 & 0 & -1 & 0\\
a & b & c & -1 \end{array}\right),\quad
\RF(f') =
\left( \begin{array}{cccc} -1 & p' & 0 & r-1 \\
0 & -1 & t+1 & u-1 \\
0 & b' & -1 & d'\\
\al_i-1 &0 & 0 & -1 \end{array}\right).$$
Note that by Lemma \[symm\], we have $$a(r-1) = b(u-1)= b't = cd' =0.$$
We then discuss the cases $c>0$ and $c-0$.
Case (b1). $c>0$. Then $d'=0, b'= \al_j -1$ and $t=0$. Then $u = \al_l-1$ and we need to show there is a special row $(\al_k -1) {\bld e}_ k - {\bld e}_s$ for some $s$. In this case, by Corollary \[0\_in\_row\], either $b=0$ or $a=0$. If $b>0$, then $u=1$ and the 2nd row of $\RF(f')$ is $(\al_k -1) {\bld e}_ k - {\bld e}_j$ with $\al_k=2$. If $a>0$, then the ist row of $\RF(f')$ is $p' {\bld e}_ j - {\bld e}_i$ and this contradicts the fact the 3rd row of $\RF(f')$ is $(\al_j-1) {\bld e}_ j - {\bld e}_k$ inducing $n_i=n_k$. If $a=b=0$, then the 4th row of $\RF(f)$ is the desired special row. Now we are reduced to the case $c=0$.
Case (b2). $c=0$ and $b>0$. Then we have $t+1= \al_k-1$. If $a>0$, then we have $r=1$ and $p'=\al_j-1$ and either $b'=0$ or $t=0$. In either case, we have enough special rows. If $a=0$, then $b=\al_j-1$ and the 4th row of $\RF(f)$ is $(\al_j-1) {\bld e}_ j - {\bld e}_l$. If, moreover, $b'=0$, then $d' = \al_l-1$ and the 3rd row of $\RF(f')$ is the desired special row. If, moreover, $b'>0$, then we have $t=0$ and the 2nd rows of $\RF(f)$ and $\RF(f')$ give enough numbers of special rows.
We investigate the semigroups which has special type of $\RF(f)$.
\[4rows\] Assume $H$ is almost symmetric with odd $\Fr(H)$ and assume for some $f\in \PF'(H)$, $\RF(f)$ has only one positive entry in each row. Then we have:
1. After suitable permutation of indices, we can assume $$\RF(f) = \left( \begin{array}{cccc} -1 & \alpha_2-1 & 0 & 0\\
0 & -1 & \al_3-1 & 0 \\
0 & 0 & -1 & \al_4-1\\
\al_1-1 & 0 & 0 & -1 \end{array}\right).$$
2. In this case, if we put $f' =\Fr(H)-f$, then $$\RF(f') = \left( \begin{array}{cccc} -1 & \alpha_2-2 & \al_3-1 & 0\\
0 & -1 & \al_3-2 & \al_4-1 \\
\al_1-1 & 0 & -1 & \al_4-2\\
\al_1-2 & \al_2-1 & 0 & -1 \end{array}\right)$$
3. We have $\type(H) = 3$ with $\PF(H)=
\{f,f', \Fr(H)\}$, $\mu(I_H)=6$ and the minimal generators of $I_H$ are obtained from taking differences of $2$ rows of $\RF(f)$, namely $I_H=(x_1^{\al_1}-x_2^{\al_2-1}x_4, x_2^{\al_2}-x_3^{\al_3-1}x_1, x_3^{\al_1}-x^{\al_4-1}x_2, x_4^{\al_4}-x_1^{\al_1-1}x_3,
x_1x_4^{\al_4-1}-x_2^{\al_2-1}x_3, x_1^{\al_1-1}x_2-x_3^{\al_3-1}x_4).$
4. We have $$n_1= (\al_2-1)(\al_3-1)\al_4 +\al_2, n_2= (\al_3-1)(\al_4-1)\al_1,$$ $$n_3= (\al_4-1)(\al_1-1)\al_2+\al_4, n_4=(\al_1-1)(\al_2-1)\al_3+\al_1,$$
Assume $(i, a_i)$ component of $\RF(f)$ is positive. Then $(a_1.a_2.a_3,a_4)$ gives a permutation of $(1,2,3,4)$ with no fixed point. So, we can assume either $(a_1.a_2.a_3,a_4)= (2,1,4,3)$ or $(2,3,4,1)$.
Since $f + n_1= (\al_2-1)n_2$, we have $$f + n_1+n_2 = \al_2 n_2 = (\al_3-1)n_3 + n_1,$$ hence $\al_{21}=1,\al_{23}=\al_3-1,\al_{24}= 0$ and likewise, we have $\al_{12}= \al_2-1, \al_{13}=0, \al_{14}=1, \al_{31}=0,\al_{32}=1, \al_{34}=\al_4-1,
\al_{41}= \al_1-1, \al_{42}=0,\al_{43}=1$.
Next, by Lemma \[symm\], $\RF(f')$ is of the form $\RF(f') = \left( \begin{array}{cccc} -1 & p_2 & p_3 & 0\\
0 & -1 & q_3& q_4 \\
r_1 & 0 & -1 & r_4\\
s_1 & s_2 & 0 & -1 \end{array}\right).$ Then, note that we should have $p_2<\al_2$ because $\al_{21}=1>0$ and $p_3<\al_4$ because $\al_{34}>0$ and likewise.
Them we compute $\Fr(H)+n_1= (\al_2-1)n_2+ f' = (\al_2-2)n_2 + q_3n_3+q_4n_4
= p_2n_2+p_3n_3+ f = (p_2-1)n_2 + (\al_3-1+p_3)n_3 = p_2n_2+(p_3-1)n_3 +
(\al_4-1)n_4$. Hnce we have $p_2=\al_2-2$ and $q_4= \al_4-1$. Repeating this process, we get $\RF(f')$ as in the statement.
Now, let’s begin our proof of Theorem \[type\_le3\]. First we treat the case with even $\Fr(H)$.
\[even\_type\_le2\] We assume $e=4$. If $\Fr(H)$ is even and if $\Fr(H)/2 \in \PF(H)$, then $\type(H)=2$. That is, if $e=4$ and $H$ is almost symmetric of even type, then $\type(H)=2$.
It suffices to show that $\RF(\Fr(H)/2)$ is as in \[normal\] in Theorem \[type2\]. Then we have seen that $\type(H)=2$ by Theorem \[Type\_even\]. We recall the proof of Theorem \[type2\] and show that $b=\al_{42}>0$ and $a =\al_1-1$ in the following matrix $(\dagger)$. $$(\dagger) \quad \RF(\Fr(H)/2) = \left( \begin{array}{cccc} -1 & \alpha_2-1 & 0 & 0\\
0 & -1 & \al_3-1 & 0 \\
a & 0 & -1 & \al_4-1\\
\al_1-1 &
b=\al_{42} & 0 & -1 \end{array}\right),$$
If we assume $b=0$, adding the 2nd and 4th rows of $(\dagger)$, we get $$\Fr(H) + n_2+n_4 = (\al_1-1)n_1+(\al_3-1)n_3.$$ Since $(\al_1-1)n_1- n_2, (\al_3-1)n_3-n_4\not\in H$, by Lemma \[Apery\], $$\begin{aligned}
(\al_1-1)n_1&=& f + n_2=\Fr(H)/2+n_4 \quad \text{ and}\\
(\al_3-1)n_3&=& f' + n_4= \Fr(H)/2+n_2
\end{aligned}$$ for some $f,f'\in \PF'(H)$ with $f+f'=\Fr(H)$. Also, we have $f + n_1= \al_1n_1 - n_2 = (\al_2-2)n_2 +n_4$ and likewise, $f' + n_3= an_1 + n_2 + (\al_4-2)n_4$. Now by Lemma \[special row\], there should be special rows $(\al_2 -1) {\bld e}_2 - {\bld e}_j, (\al_4 -1) {\bld e}_4 - {\bld e}_k$ on either $\RF(f)$ or $\RF(f')$. We write $\RF(f) = (m_{ij})$ and $\RF(f')=(m'_{ij})$. We have seen $m_{13}=m_{23}=0$. Hence we must have $m_{43}>0$ by Lemma \[a\_ij>0\]. Then we must have $m'_{34} = \al_4-2=0$, giving $\al_4=2$. Also, since $m_{21}=\al_1-1>0$, $m'_{12}=0$. Hence only possibility of $(\al_2 -1) {\bld e}_2 - {\bld e}_j$ is the 3rd row of $\RF(f)$, giving $m_{31}=m_{34}=0, m_{32}=\al_2-1$ and $(\al_2-1)n_2= f + n_3$. Since $(\al_2-1)n_2= \Fr(H)/2 + n_1$ and $(\al_1-1)n_1= f + n_2=\Fr(H)/2+n_4$, we have $n_1+n_2= n_3+n_4$. Then since $\Fr(H)/2 - f' = n_4-n_2= n_1-n_3$, we have $n_4= (\al_4-1)n_4 = f' + n_1$. We have seen $f + n_1= (\al_2-2)n_2 +n_4$ above. Substituting $n_4= (\al_4-1)n_4 = f' + n_1$ we get $f + n_1= (\al_2-2)n_2 +n_4 = (\al_2-2)n_2 + f' + n_1$ and then $f = (\al_2-2)n_2 + f'$, contradicting $f\ne f'$ and $f' \in \PF(H)$. Tuus we have showed $b=\al_{42}>0$.
We have seen at the end of the proof of Theorem \[type2\] (3), $\al_{12}=\al_2-1-b >0$, $\al_{13}=0$, $\al_{14}=1$, $\al_{21}=1$, $\al_{23}=\al_3-1$, $\al_{24}=0,$ $\al_{31}=a$, $\al_{32}=1$, $\al_{34}=\al_4-1$, $\al_{41}=\al_1-1-a$, $\al_{42}=b$, $\al_{43}=1.$
Since $b>0$, we can assert that $\Fr(H)+n_2$ has UF by Lemma \[formula\_Ap\].
Next we will show $a=\al_1-1$. We have seen that $a\le \al_1-1$ in the proof of Theorem \[type2\]. If $a< \al_1-1$ then we have $\al_{41}>0$ and hence by by Lemma \[formula\_Ap\], $\Fr(H)+n_1$ has UF. We will show that this leads to a contradiction.
Adding 1st and 2nd rows of $(\dagger)$, we have $$\Fr(H) + n_1 = (\al_2-2)n_2+ (\al_3-1)n_3.$$
Since this expression is unique, $\Fr(H) + n_1 - n_4\not\in H$ and thus $\Fr(H) + n_1 - n_4 \in \PF'(H)$ by Lemma \[Apery\]. We put $$f = \Fr(H) + n_1 - n_4, \quad f' = \Fr(H) - f = n_4 -n_1.$$ Since $H$ is almost symmetric, $f' \in \PF'(H)$. Then by Proposition \[f + n\_k = bn\_i\], $\al_4=2$. Then $$f' + n_4 = 2n_4 - n_1 = \al_4 n_4 - n_1= (\al_1-2-a)n_1+bn_2+n_3.$$ Then by Corollary \[0\_in\_row\], since $b>0$, we must have $\al_1-2-a =0$; $a=\al_1-2$.
Since $(4,2), (4,3)$ entries of $\RF(f')$ are both positive, $(2,4), (3,4)$ entries of $\RF(f)$ must be $0$ by Lemma \[symm\]. Then by Lemma \[a\_ij>0\], $(1,4)$ entry of $\RF(f)$ is positive, which induces that $f + n_1 \ge_H n_4 = f' + n_1$. Then we have $f \ge_H f'$, contradicting our assumption $f,f' \in \PF'(H)$. Thus we have shown that if $\Fr(H)/2 \in \PF(H)$, then $\RF(\Fr(H)/2)$ is as in Theorem \[type2\]. We have shown that if $H$ is almost symmetric with even $\Fr(H)$, then $\type(H)=2$.
If $H$ is almost symmetric and $\Fr(H)$ is even, then we have shown in Proposition \[even\_type\_le2\] that $\type(H)=2$. So, in the rest of this section, we will assume that $\Fr(H)$ is odd.
Let $H=\langle n_1,n_2,n_3,n_4\rangle$ be almost symmetric and let $\{i,j,k,l\}$ be a permutation of $\{1,2,3,4\}$.
Our tool is Moscariello’s $\RF$-matrices and especially the special rows of those matrices.
By Lemma \[special row\] there are at least $4$ special rows in $\RF(f)$ and $\RF(f')$ together, if $f+f'= \Fr(H)$. On the other hand, we showed in Proposition \[f+n\_k= f’ + n\_l\] that for fixed $n_i$, there exists at most $2$ relations of type $f+ n_k=(\al_i-1)n_i$. Since the possibility of $n_i$ is $4$, there exists at most $8$ special rows in $\RF(f)$ for all possibilities of $f\in \PF'(H)$. This implies the cardinality of $\PF'(H)$ is at most $4$ and we have $\type(H)\le 5$. Also, this argument shows that for every $n_i$, there are exactly $2$ special rows of type $(\al_i -1){\bld e}_i - {\bld e}_k$ in some $\RF(f), f\in \PF'(H)$.
Now, we assume $\type(H) = 5$ and $\PF(H) = \{f_1,f_2,f_2', f_1', \Fr(H)\}$ with $$f_1<f_2<f_2'< f_1' \quad {\mbox{\rm and}} \quad
f_1+f_1' = f_2+f_2'=\Fr(H).$$
We will show a contradiction.
We have seen in Lemma \[special row\] that for every pair of $f, f'\in \PF(H)$ with $f+f'=\Fr(H)$, and for every $n_i$, there is a special row of type $(\al_i-1){\bld e}_i - {\bld e}_k$ for some $k$ on either $\RF(f)$ or $\RF(f')$. Hence if we had a relation of type $$f+n_k = (\al_i-1)n_i = f' + n_l,$$ then there will be $2$ special rows $(\al_i-1){\bld e}_i - {\bld e}_k$ in $\RF(f)$ and $(\al_i-1){\bld e}_i - {\bld e}_l$ in $\RF(f')$, which leads to $5$ special rows on $\RF(f)$ and $\RF(f')$ together, which contradicts the fact there are at most $8$ special rows in total.
Hence if there is a relation of type $$f + n_k = (\al_i-1) n_i = f' + n_l,$$ then $f+f' \ne\Fr(H)$. Since there exists exactly $4$ such pairs of $\{f,f'\}$, namely, $\{f_1,f_2\}, \{f_1', f_2\}. \{f_1,f'_2\}, \{f_1', f_2'\}$, we must have the following relations for some $ n_p,\ldots , n_y\in \{n_1,n_2,n_3,n_4\}$.
Let $\{i,j,k,l\}$ be a permutation of $\{1,2,3,4\}$ and assume that $n_1<n_2<n_3<n_4$.
Now we must have the following relations
$$\begin{aligned}
f_1+ n_p = (\al_i-1)n_i = f_2+ n_q\\
f_1'+ n_r = (\al_j-1)n_j = f_2+ n_s\\
f_1+ n_t = (\al_k-1)n_k = f_2'+ n_u\\
f_1'+ n_x = (\al_l-1)n_l = f_2'+ n_y\end{aligned}$$
0.5cm
From these equations and since $f_2-f_1= f_1' - f_2'$, we have
$$\begin{aligned}
n_p-n_q= f_2-f_1= f_1'- f_2' = n_y-n_x\\
n_s-n_r= f_1'-f_2= f_2'-f_1= n_t-n_u.\end{aligned}$$
We divide the cases according to how many among $\{n_p,n_q, n_x, n_y\}$ and $\{n_t.n_u, n_x,n_y\}$ are different. 0.2cm Case 1: First, we assume that $n_p = n_y$ and $n_q= n_x$. Since we have assumed $\{i,j,k,l\}$ is a permutation of $\{1,2,3,4\}$ and since $n_i, n_l$ must be different from $n_p, n_q$, we must have $\{n_l, n_k\} = \{n_p, n_q\}$. Then from equations (4), (5), $n_r=n_u
< n_s= n_t$ and $\{n_i, n_l\} =\{n_r, n_s\}$.
Now for the moment, assume $n_i > n_l$ (hence $n_i= n_s= n_t$ and $n_l = n_r = n_u$) and we will deduce a contradiction. The assumption $n_i < n_l$ leads to a contradiction similarly.
We check $\RF$ matrices of $f_1,f_2, f_2', f_1'$ with respect to $\{n_i,n_p, n_q, n_l\}$. Equations (3), (6) show us the $p$-th row of $\RF(f_1)$ is $(\al_i-1, -1,0,0)$ and since $\RF(f'_1)_{qi}=\al_i-1$, $\RF(f_1)_{iq}=0$ by Lemma \[symm\]. Hence by Lemma \[a\_ij>0\], $\RF(f_1)_{lq}>0$. From equation (5), to have $\RF(f_1)_{lq}>0$, we must have $n_k = n_q$ and then $n_j = n_p$. Then from (3), we have $$\al_i n_i = (f_1+ n_i) + n_p = (f_2 + n_i) + n_q.$$ But from (4), (5), we know $f_1+ n_i = (\al_q-1)n_q,
f_2+n_i = (\al_p-1)n_p$ and then we have $(\al_q-1)n_q+ n_p = (\al_p-1)n_p + n_q$, getting $\al_p=\al_q=2$ and from (4), (5), $n_p = f_1' + n_l = f_2 + n_i, n_q = f_1 + n_i = f_2' + n_l$. Then from (3), (6), we get $\al_in_i = (f_1+n_i) + n_p = n_p+n_q
= (f_1'+ n_p) +n_l= \al_l n_l$.
Now we compute $f_1 + n_l$. We know that $f_1+n_l < n_p, n_q$. Then we must have $f_1 + n_l = c n_i$ for some positive integer $c$. But by Lemma \[f + n\_k = bn\_i\], $c\ge \al_i -1$. Then we have $f_1 + n_l + n_i \ge \al_in_i = n_p+n_q$, contradicting $n_p = f_1' +n_l$ and $n_q = f_1+n_i$. Thus Case 1 does not occur.
Case 2: If $\sharp \{n_p,n_q,n_x, n_y\}=3$, then either $n_p=n_x$ or $n_q=n_y$ and hence either $2 n_p = n_q + n_y$ or $2 n_q= n_p+ n_x$, having $\al_p=2$ or $\al_q=2$. For the moment we assume $2 n_q= n_p+ n_x$ and $\al_q=2$. Then from (3) - (6), for some $f,f'\in \PF'(H)$, $$(*) \quad (\al_q-1)n_q= n_q =f+n_v = f' + n_w.$$ Since $n_q$ is not the biggest among $n_1,\ldots , n_4$ and bigger than the other $2$, $n_q=n_3$. We assume $n_1<n_2 < n_3=n_q < n_4$ and again compute $n_1 + f_1.$ By (\*), $n_1+ f_1< n_3$ and we muct have $n_1+ f_1 = c n_2$ for some positive integer $c$. We have seen $c \al_2-1$ and if $n_1+ f_1 = (\al_2-1)n_2$, then by (3) - (6), $n_1+ f_1= (\al_2-1)n_2 = n_w+ f$ but that is impossible since $n_1+ f_1< n_3$. If $c\ge \al_2$ we get also a contradiction since $n_1+f_1$ cannot contain $n_1,n_3,n_4$. Thus Case 2 does not occur. It is easy to see that $\sharp\{n_r,n_s,n_t, n_u\} =3$ leads to a contradiction, either. 0.2cm
Case 3: To prove the Theorem \[type\_le3\], it suffices to get a contradiction assuming $\{n_p,n_q,n_x, n_y\}$ and $\{n_r,n_s,n_t, n_u\}$ are different elements. By (3) - (6), we may assume $n_1<n_2<n_3<n_4$ with $n_2-n_1= n_4-n_3 = f_2-f_1$ and $n_3-n_1=n_4- n_2 = f_2' - f_1$. Hence $n_4-n_1 = (f_1'-f_2) + (f_2-f_1) = f_1'-f_1$ and we have $f_1 +n_1= f_1' + n_4$. Also, since $f_1 +n_4$ is not of the type $(\al_w-1)n_w$, we have $$f_1+n_4= f_1' + n_1 = b n_2 + c n_3$$ with $c,d >0$. Then from Lemma \[symm\] we must have $\RF(f_1)_{2,1} = \RF(f_1)_{3,1}=0$. Since the 4th row of $\RF(f_1) = (0,b,c,0)$, every component of the 1st column of $\RF(f_1)$ is $0$, contradicting Lemma \[a\_ij>0\]. This finishes our proof of Theorem \[type\_le3\].
On the free resolution of $k[H]$. {#sec:6}
=================================
Let as before $H=\langle n_1,\ldots,n_e\rangle$ be a numerical semigroup and $K[H]=S/I_H$ its semigroup ring over $K$.
We are interested in the minimal graded free $S$-resolution $(\FF, d)$ of $K[H]$. For each $i$, we have $F_i = \bigoplus_j S(- \beta_{ij})$, where the $\beta_{ij}$ are the graded Betti numbers of $K[H]$. Moreover, $\beta_i = \sum_j \beta_{ij}= \rank(F_i)$ is the $i$th Betti number of $K[H]$. Note that $\projdim_SK[H]=e-1$ and that $F_{e-1} \cong \bigoplus_{f\in \PF(H)} S( - f - N)$, where we put $N=\sum_{i=1}^e n_i$
Recall from Section 1 that $H$ is almost symmetric, or, equivalently, $R$ is almost Gorenstein if the cokernel of a natural morphism $$R \to \omega_R( -\Fr(H))$$ is annihilated by the graded maximal ideal of $K[H]$. In other words, there is an exact sequence of graded $S$-modules $$0 \to R \to \omega_R( -\Fr(H)) \to \bigoplus_{f \in \PF(H), f\ne \Fr(H)} K(-f)\to 0.$$ Note that, we used the symmetry of $\PF(H)$ given in Lemma \[Nari\] when $H$ is almost symmetric.
Since $\omega_S\cong S(-N)$, the minimal free resolution of $\omega_R$ is given by the the $S$-dual $\FF^{\vee}$ of $\FF$ with respect to $S(-N)$. Now, the injection $R \to \omega_R (-\Fr(H))$ lifts to a morphism $\varphi : \FF \to \FF^{\vee}(- \Fr(H))$, and the resolution of the cokernel of $R\to
K_R(-\Fr(H))$ is given by the mapping cone $\MC(\varphi)$ of $\varphi$.
On the other hand, the free resolution of the residue field $K$ is given by the Koszul complex $\KK= \KK(x_1,\ldots , x_e;K)$. Hence we get
\[Koszul\] The mapping cone $\MC(\varphi)$ gives a (non-minimal) free $S$-resolution of $\bigoplus_{f\in \PF(H), f\ne \Fr(H)} K(-f)$. Hence, the minimal free resolution obtained from $\MC(\varphi)$ is isomorphic to $\bigoplus_{f\in \PF(H), f\ne \Fr(H)} K(-f).$
Let us discuss the case $e=4$ in more details. For $K[H]$ with $t=\type(K[H])$ we have the graded minimal free resolution $$0\to \Dirsum_{f\in \PF(H)}S(-f-N)\to \Dirsum_{i=1}^{m+t-1}S(-b_i)\to \Dirsum_{i=1}^mS(-a_i)\to S\to K[H]\to0$$ of $K[H]$. The dual with respect to $\omega_S=S(-N)$ shifted by $-F(H)$ gives the exact sequence $$\begin{aligned}
0\to S(-\Fr(H)-N)&\to & \Dirsum_{i=1}^mS(a_i-F(H)-N)\to \Dirsum_{i=1}^{m+t-1}S(b_i-F(H)-N)\\
&\to & \Dirsum_{f\in \PF(H)}S(f-\Fr(H))\to \omega_{K[H]}(-\Fr(H))\to 0.\end{aligned}$$
Considering the fact that for the map $\varphi\: \FF\to \FF^\vee$ the component $\varphi_0\: S\to \Dirsum_{f\in \PF(H)}S(f-F(H))$ maps $S$ isomorphically to $S(\Fr(H)-\Fr(H))=S$, these two terms can be canceled against each others in the mapping cone. Similarly,via $\varphi_4: \Dirsum_{f\in \PF(H)}S(-f-N)\to S(-\Fr(H)-N)$ the summands $S(-\Fr(H)-N)$ can be canceled. Observing then that $\PF'(H)=\{\Fr(H)-f\: f\in \PF'(H)\}$, we obtain the reduced mapping cone $$\begin{aligned}
0&\to& \Dirsum_{f\in \PF'(H)}S(-f-N)\to \Dirsum_{i=1}^{m+t-1}S(-b_i)\to \Dirsum_{i=1}^mS(-a_i)\dirsum \Dirsum_{i=1}^mS(a_i-F(H)-N)\\
&\to& \Dirsum_{i=1}^{m+t-1}S(b_i-F(H)-N)\to \Dirsum_{f\in \PF'(H)}S(-f)\to \Dirsum_{f\in \PF'(H)}K(-f)\to 0.\end{aligned}$$ which provides a graded free resolution of $\Dirsum_{f\in \PF'(H)}K(-f)$. Comparing this resolution with the minimal graded free resolution of $\Dirsum_{f\in \PF'(H)}K(-f)$, which is $$\begin{aligned}
0&\to &\Dirsum_{f\in \PF'(H)}S(-f-N)\to \Dirsum_{f\in \PF'(H)\atop 1\leq i \leq 4}S(-f-N+n_i)\to \Dirsum_{f\in \PF'(H)\atop 1\leq i<j\leq 4}S(-f-n_i-n_j)\\
&\to & \Dirsum_{f\in \PF'(H)\atop 1\leq i\leq 4}S(-f-n_i)\to \ \Dirsum_{f\in \PF'(H)}S(-f)\to \Dirsum_{f\in \PF'(H)}K(-f)\to 0,\end{aligned}$$ we notice that $m\geq 3(t-1)$. If $m=3(t-1)$, then reduced mapping cone provides a graded minimal free resolution of $\Dirsum_{f\in \PF'(H)}K(-f)$. Also, if $m = 3(t-1)+s$ with $s>0$, then there should occur $s$ cancellations in the mapping $\varphi : \Dirsum_{i=1}^mS(-a_i)
\to \Dirsum_{i=1}^{m+t-1}S(b_i-F(H)-N)$.
A comparison of the mapping cone with the graded minimal free resolution of $\Dirsum_{f\in \PF'(H)}K(-f)$ yields the following numerical result.
\[comparison\] Let $H$ be a $4$-generated almost symmetric numerical semigroup of type $t$. Then putting $m_0= 3(t-1)$, we have $m=\mu_R(I_H) = m_0+s$ with $s\ge 0$. Moreover, with the notation introduced, we can put $\{a_1, \ldots, a_{m_0}, \ldots , a_m=a_{m_0+s}\}$ and $\{b_1,\ldots,b_{m_0+t-1},\ldots , b_{m+t-1}\}$ so that one has the following equalities of multisets: $$\begin{aligned}
&&\{a_1,\ldots, a_m\}\union \{\Fr(H)+N-a_1,\cdots, \Fr(H)+N-a_m\}\\
&&=\{f+n_i+n_j \:\; f\in \PF'(H), 1\leq i<j\leq 4\},\end{aligned}$$ and $$\{b_1,\ldots,b_{m_0+t-1}\}=\{f+N-n_i\:\; f\in \PF'(H), 1\leq i\leq 4\}.$$ and if $s>0$, $$a_{m_0+j} = \Fr(H) - b_{m_0+ t-1 +j} , (1\leq j \leq s).$$
\[a&b\] (1) Let $H = \langle 5,6,7,9 \rangle$. Then $H$ is pseudo-symmetric with $\PF(H) = \{ f=4, \Fr(H)= 8\}$ and we see $$\{a_1,\ldots, a_5\} = \{ 15,16,18; 12, 14\}, \{b_1,\ldots , b_6\} = \{22, 24, 25, 26; 23,21\},$$ where we see $15 = f + n_1+n_2, 16= f;n_1+n_3, 16= f + n_1+n_4; \Fr(H)+N = 35$ and $35- a_4= b_5, 35 - a_5 = b_6$.
\(2) Let $H = \langle 18,21,23,26 \rangle$. Then $\PF(H) = \{ 31, 66, 97\}$ showing that $H$ is AS of type $3$. Then we see $\mu(I_H) = 7 = 3(t-1) +1$ and $$\{a_1,\ldots, a_7\} = \{ 72, 75, 78, 105, 110, 115; 44\},$$ $$\{b_1,\ldots , b_9\} = \{93, 96, 98, 101, 128, 131, 133, 136; 141 \},$$ where $141 = \Fr(H) + N - a_7$.
Now let us assume that $t=3$, and that $m=6$. An example of an almost symmetric $4$-generated numerical semigroup of type $3$ with $6$ generators for $I_H$ is the semigroup $H=\langle 5, 6, 8, 9 \rangle$. In this example $I_H$ is generated by $$\begin{aligned}
\phi_1&=&x_1^3-x_2x_4,\; \phi_2=x_2^3-x_1^2x_3,\; \phi_3=x_3^2-x_1^2x_2,\; \phi_4=x_4^2-x_2^3,\\
\phi_5&=&x_1x_2^2-x_3x_4,\; \phi_6= x_1x_4-x_2 x_3.\end{aligned}$$ We have $\PF(H)=\{3,4,7\}$, and the $\RF$-matrices of $H$ for $3$ and $4$ are $$\RF(3) = \left( \begin{array}{cccc} -1 & 0 & 1 & 0\\
0 & -1 & 0 & 1 \\
1 & 1 & -1 & 0\\
0 & 2 & 0 & -1 \end{array}\right),
\quad
and
\quad
\RF(4) = \left( \begin{array}{cccc} -1 & 0 & 0 & 1\\
2 & -1 & 0 & 0 \\
0 & 2 & -1 & 0\\
1 & 0 & 1 & -1 \end{array}\right).$$ The $\RF$-relations resulting from $\RF(3)$ (which are obtained by taking for each $i<j$ the difference of the $i$th row and the $j$th row of the matrix ) are $$\begin{aligned}
\phi_{12}&=&x_2x_3-x_1x_4,\; \phi_{13}=x_3^2-x_1^2x_2,\; \phi_{14}=x_3x_4-x_1x_2^2,\; \phi_{23}= x_3x_4-x_1x_2^2,\\
\phi_{24}&=& x_4^2-x_2^3,\; \phi_{34}=x_1x_4-x_2x_3,\end{aligned}$$ while the $\RF$-relations resulting from $\RF(4)$ are $$\begin{aligned}
\psi_{12}&=&x_2x_4-x_1^3,\; \psi_{13}=x_3x_4-x_1x_2^2,\; \psi_{14}=x_4^2-x_1^2x_3,\; \psi_{23}=x_2^3 -x_1^2x_3,\\
\psi_{24}&=& x_1x_4-x_2x_3,\; \psi_{34}=x_2^2x_4-x_3^2x_1.\end{aligned}$$ We see that $\deg \phi_{ij}=3+n_i+n_j$ for all $i<j$, except for $\phi_{34}$ for which we have $\deg \phi_{34}=14<20= 3+8+9$. Similarly, $\deg \psi_{ij}=4+n_i+n_j$ for all $i<j$, except for $\psi_{24}$ for which we have $\deg \psi_{24}=14< 19=4+6+9$.
Comparing the $\RF$-relations with the generators of $I_H$ we see that $$\begin{aligned}
&&\phi_1=-\psi_{12},\; \phi_2= \psi_{23},\; \phi_3= \phi_{13},\\
&&\phi_4= \phi_{24},\; \phi_5= -\phi_{14}=-\psi_{13} ,\; \phi_6 = -\phi_{12}=\phi_{34}=\psi_{24}.\end{aligned}$$ In this example we see that the $\RF$-relations generate $I_H$.
The next result shows that this is always the case for such kind of numerical semigroups
\[rfrelations\] Let $H$ be a $4$-generated almost symmetric numerical semigroup of type $t$ for which $I_H$ is generated by $m=3(t-1)$ elements. Then $I_H$ is generated by $\RF$-relations.
By Lemma \[conditionforquestion\] it suffices to show that $I_H$ admits a system of generators $\phi_1,\ldots, \phi_r$ such that for each $k$ there exist $i<j$ and $f\in \PF'(H)$ such that $\deg \phi_k=f+n_i+n_j$ and $\phi_k=u-v$, where $u$ and $v$ are monomials such that $x_i|u$ and $x_j|v$, or $x_j|u$ and $x_i|v$.
Consider the chain map $\varphi : \FF \to \GG$ with $\GG=\FF^{\vee}(- \Fr(H))$ resulting from the inclusion $R \to \omega_R( -\Fr(H))$: $$\begin{CD}
0@>>> F_3@>\partial_3 >> F_2@>\partial_2 >> F_1@>\partial_1 >> F_0\\
@. @VV\varphi_3 V @VV\varphi_2 V @VV\varphi_1 V @VV\varphi_0 V\\
0@>>> G_3@>d_3 >> G_2@>d_2 >> G_1@>d_1 >> G_0. \\
\end{CD}$$ The assumption of the theorem implies that the reduced mapping cone of $\varphi$ is isomorphic as a graded complex to a direct sum of Koszul comlexes with suitable shifts, as described above. Thus we obtain a commutative diagram $$\begin{CD}
K_2@>\kappa_2 >> K_1@>\kappa_1 >> K_0@.\\
@VV\alpha_2 V @VV\alpha_1 V @VV\alpha_0 V@. \\
F_1\dirsum G_2@>>(-\varphi_1,d_2) > G_1@>>\bar{d_1} > \overline{G_0}@= G_0/F_0\\
\end{CD}$$ of graded free $S$-modules, where the top complex is the begin of the direct sum of Koszul complexes, and where each $\alpha_i$ is a graded isomorphism.
We choose suitable bases for the free modules involved in this diagram. The free module $K_0$ (resp. $K_1$) admits a basis $\{ e_f \: f\in \PF'(H)\}$ (resp. $\{e_{f,i}\: f\in \PF'(H),\; i=1,\ldots,4\}$) with $\deg e_f = f$, $\deg e_{f,i}=f+n_i$ and such that $\kappa_1(e_{f,i})=x_ie_f$. Then a basis for $K_2$ is given by the wedge products $e_{f,i}\wedge e_{f,j}$ with $\deg(e_{f,i}\wedge e_{f,j})=f+n_i+n_j$.
On the other hand, $F_1$ admits a basis $\epsilon_1,\ldots, \epsilon_m$, where each $\epsilon_k$ has a degree of the form $f+n_i+n_j$ for some $f\in \PF'(H)$ and some $i<j$ since $\alpha_2$ is an isomorphism of graded complexes. Moreover, $\partial_1(\epsilon_k)=\phi_k$, where $\phi_k=u_k-v_k$ is a binomial with $\deg u_k=\deg v_k=\deg \phi_k=\deg \epsilon_k$.
Let $\alpha_2(e_{f,i}\wedge e_{f,j})=\epsilon_{f,ij}+\sigma_{f,ij}$ with $\epsilon_{f,ij}\in F_1$ and $\sigma_{f,ij}\in G_2$ for $f\in \PF'(H)$ and $i<j$. Then the elements $\epsilon_{f,ij}$ generate $F_1$. Moreover, we have $$-\varphi_1(\epsilon_{f,ij})+d_2(\sigma_{f,ij})=x_i\alpha_1(e_{f,j}) - x_j\alpha_1(e_{f,i})\subset (x_i,x_j)G_1.$$ Since $\bar{d_1}(-\varphi_1(\epsilon_{f,ij})+d_2(\sigma_{f,ij}))=0$, it follows that $$-\varphi_0\partial_1(\epsilon_{f,ij})=d_1(-\varphi_1(\epsilon_{f,ij})+d_2(\sigma_{f,ij}))
=d_1(-\varphi_1(\epsilon_{f,ij}))\in (x_i,x_j)\varphi_0(F_0).$$ It follows that $$\begin{aligned}
\label{ij}
\partial_1(\epsilon_{f,ij})\subset (x_i,x_j)F_0=(x_i,x_j),\end{aligned}$$ for all $f\in \PF'(H)$ and $i<j$.
Since the elements $\epsilon_{f,ij}$ generate $F_1$, it follows that the elements
$\partial_1(\epsilon_{f,ij})$ generate $I_H$.
To show that $I_H$ is generated by RF-relations, it suffices to show the following.
1. $I_H$ admits a system of binomial generators $\phi_1,\ldots,\phi_m$ such that for each $k$ there exist $i<j$ and $f\in \PF'(H)$ with $\deg \phi_k=f+n_i+n_j$
2. $\phi_k=u-v$, where $u$ and $v$ are monomials such that $x_i|u$ and $x_j|v$.
For given $f\in\PF'(H)$ and $i<j$, let $\epsilon_{f,ij}=\sum_{k=1}^m\lambda_k\epsilon_k$. Then $\lambda_k=0$ if $\deg \epsilon_k \neq f+n_i+n_j$, and
$\partial_1(\epsilon_{f,ij})=\sum_{k=1}^m\lambda_k \phi_k=\sum_{k=1}^m\lambda_k(u_k-v_k)$ with $\lambda_k\neq 0$ only if $\deg u_k=\deg v_k=f+n_i+n_j$. This sum can be rewritten as $\sum_{k=1}^r\mu_kw_k$ with $\sum_{k=1}^r\mu_k=0$, and pairwise distinct monomials $w_k$ with $\{w_1,\ldots,w_r\}\subset \{u_1,v_1,\ldots, u_m,v_m\}$ and $\deg w_k=f+n_i+n_j$ for $k=1,\ldots,r$. Provided $\partial_1(\epsilon_{f,ij})\neq 0$, we may assume that $\mu_k\neq 0$ for all $k$. Then $\sum_{k=1}^r\mu_kw_k=\sum_{k=2}^r\mu_k(w_k-w_1)$. Since $\deg w_k=\deg w_1$ for all $k$, it follows that $w_k-w_1\in I_H$ for all $k$. Moreover, since the $\partial_1(\epsilon_{f,ij})$ generate $I_H$, we see that the binomials $w_k-w_1$ in the various $\partial_1(\epsilon_{f,ij})$ altogether generate $I_H$. We also have that $\sum_{k=2}^r\mu_k(w_k-w_1)\subset (x_i,x_j)$. We may assume that $x_j|w_1$, and $x_j$ does not divide $w_2,\ldots,w_s$ while $x_j|w_k$ for $k=s+1,\ldots,r$. Then $x_i|w_k$ for $k= 2,\ldots,s$ and $\partial_1(\epsilon_{f,ij})+\mm I_H=\sum_{k=2}^s\mu_k(w_k-w_1)+\mm I_H$. It follows that modulo $\mm I_H$, the ideal $I_H$ is generated by binomials $\phi=u-v$ for which there exists $f\in\PF'(H)$ and $i<j$ such that $\deg \phi=f+n_i+n_j$ and $x_i|u$ and $x_j|v$. By Nakayama's lemma, the same is true for $I_H$, as desired.
$7$-th generator of $I_H$.
--------------------------
In this subsection we will show that if $H$ is almost symmetric and generated by $4$ elements, then $I_H$ is generated by $6$ or $7$ elements, and if $I_H$ is generated by $7$ elements, we can determine such $H$.
We put $H=\left< n_1,n_2,n_3,n_4\right>$ with $n_1<n_2 <n_3 <n_4$ and put $N=\sum_{i=1}^4 n_i$. Also, we always assume $H$ is almost symmetric and $\PF(H) = \{ f, f', \Fr(H)\}$ with $f+f'= \Fr(H)$.
\[7th\] If $I_H$ is minimally generated by more than $6$ elements, then $I_H$ is generated by $7$ elements and we have the relation $n_1+n_4= n_2+n_3$.
By Lemma \[Koszul\], if we need more than $6$ generators for $I_H$, then there must be a cancellation in the mapping $$\phi_2 : F_1 \to F_2^\vee.$$ Namely, there is a minimal generator $g$ of $I_H$ and a free basis element $e$ of $F_2$ with $$\deg e = \Fr(H) + N - \deg(g).$$
On the other hand, being a base of $F_2$, $e$ corresponds to a relation $$(**) \quad \sum_i g_i y_i = 0,$$ where $g_i$ are generators of $I_H$ and $\deg y_i = h_i\in H_+$. It follows that for every $i$ with $y_i\ne 0$, we have $$\deg e = \deg g_i + h_i.$$
Note that by Lemma \[Koszul\], we have $6$ minimal generators of $I_H$, whose degree is of the form $$f + n_i + n_j \quad (f\in \PF'(H)).$$
Now, by Lemma \[special row\], for every $i$, there is $f\in \PF'(H)$ and some $n_k$ such that $(\al_i -1)n_i = f + n_k$, or, $\al_in_i = f + n_i + n_k$. Since our $g$ has degree not of the form $ f + n_i + n_k$, we may assume that $$g = x_i^ax_j^b - x_k^cx_{\ell}^d$$ for some permutation $\{i,j,k,\ell\}$ of $\{1,2,3,4\}$ with positive $a,b,c,d$. Our aim is to show $a=b=c=d=1$.
We need a lemma.
\[relation\] In the relation $(**)$ above, the following holds:
1. If $y_i\ne 0$ in $(**)$, then at most $3$ of the variables $x_1,\ldots,x_4$ appear in $g_i$.
2. At least $3$ non-zero terms appear in $(**)$.
\(i) If $g_i$ contains all $4$ variables, then either $g_i = x_r^{\al_r} - x_s^{\beta}x_t^{\gamma}x_u^{\delta}$ or $g_i= x_r^{a'} x_s^{b'}- x_t^{c'}x_u^{d'}$ for some permutation of the $x_i$'s. In the first case, if $n_r \in \{n_i, n_j\}$, then we have $$\Fr(H) + N = an_i + bn_j + {\beta}n_s + {\gamma}n_t +
{\delta}n_u,$$ which will give $\Fr(H)\in H$, a contradiction! In the second case, we have $$\Fr(H) + N -\deg(g) = \deg(g_i) + h_i$$ and it is easy to see that $\deg(g) + \deg(g_i) + h_i\ge_H N$, which again yields $\Fr(H)\in H$.
\(ii) Since the minimal generators of $I_H$ are irreducible in $S$, if only $2$ of the $g_i$'s appear in $(**)$, say $y_ig_i - y_jg_j = 0$, then $y_i = g_j$ and $y_j = g_i$ up to a constant. Also, by (i), $g_i$ are of the form $g_i = x_p^{\al_p} - x_r^px_s^q$. Then for $f_1,f_2\in \PF'(H)$ and $n_p,n_q,n_t,n_u$, we have $\deg g_i = f_1+ n_p+n_q, \deg g_j = f_2+ n_r+n_s$. Since $\deg g_i = \al_p n_p$ and $\deg g_j = \al_rn_r$ for $n_p\ne n_r$, there are at least $3$ different elements among $\{ n_p, n_q,n_r, n_s\}$. Then the relation
$$\Fr(H) + N -\deg(g) = (f_1+ n_p+n_q) + (f_2+ n_t+n_u)$$ forces $\Fr(H)\in H$, a contradiction.
Now, assume that $\Fr(H) + N - \deg(g) = f + n_p + n_q + h$. Then we deduce $$f' + n_r + n_s = an_i+bn_j + h = cn_k + dn_{\ell} + h.$$
Since $\{n_i, n_j\}$ and $\{n_k, n_{\ell}\}$ are symmetric at this stage, we may assume $n_r= n_i, n_s=n_k$ and $$f' + n_k = (a-1) n_i +bn_j +h, \quad f' + n_i = (c-1)n_k +
dn_{\ell}+h.$$
Now, it is clear that $h$ should not contain $n_i$ or $n_k$. For the moment, assume that $h = m n_j$. Then we have $$f' + n_i = m n_j + (c-1)n_k + d n_{\ell}.$$ By Corollary \[0\_in\_row\], $i$-th row of $\RF(f')$ should contain $0$ and hence we should have $c=1$. Likewise, if $h = m n_j + m' n_{\ell}$ with $m,m'>0$, then we will have $a=c=1$.
Now, since at least $3$ non-zero terms appear in $(**)$, by Lemma \[relation\] we have at least $3$ relations of the type $$\Fr(H) - \deg(g) = f_i + (n_{p,i} + n_{q,i})$$ for $i= 1,2,3$. We can assume $f_1=f_2=f$, $$f' + n_i + n_k = an_i+bn_j + h = cn_k + dn_{\ell} + h$$ and also $h = m n_j$ with $m >0$. Note that by our discussion above, we have $c=1$ and $$f' + n_k = (a-1) n_i +(b+m) n_j ,
\quad f' + n_i = m n_j + dn_{\ell}.$$
Now, we have the 2nd relation $$\Fr(H) - \deg(g) = f+ (n_{p,2} + n_{q,2}) + h_2$$ and hence $$f' + n_t + n_u = an_i+bn_j + h_2 = n_k + dn_{\ell} + h_2.$$ We discuss the possibility of $n_t, n_u$ and $h_2$. Since $h_2\ne h=mn_j$, we must have $\{n_t, n_u\}
= \{ n_j, n_{\ell}\}$ and we have either
Case A: $h_2= m' n_i, d=1$, or
Case B: $h_2= m'' n_k, b=1$.
Now, by the argument above, in Case A, matrix $\RF(f')$ with respect to $\{n_i,n_j, n_k, n_{\ell}\}$ is $$\RF(f') = \left( \begin{array}{cccc} -1 & m & 0 & d\\
m' & -1 & 1 & 0 \\
a-1 & b+m & -1 & 0\\
a+m' & b-1 & 0 & -1 \end{array}\right),$$
By Lemma \[symm\], the $(i,j)$ and $(i, \ell)$ entries of $\RF(f)$ are $0$, and by Lemma \[a\_ij>0\] the $(i,k)$ entry should be $\al_k-1>0$. This implies that the $(k,i)$ entry $a-1$ of $\RF(f')$ is $0$, thus we obtain $a=1$. Likewise, since the $(j,i)$ and $(j,k)$ entries of $\RF(f)$ are $0$, the $(j, \ell)$ entry is $\al_{\ell} -1>0$, forcing the $(\ell, j)$ entry $b-1$ of $\RF(f')$ to be $0$. Thus we have $a=b=c=d=1$. We have the same conclusion in Case B, too.
\[RF-rel-7gen\] [*Since we get $a=b=1$ from the proof above, the relation $x_ix_j- x_kx_l$ is obtained by taking the difference of the 2nd and 4th rows of $\RF(f')$ above, so it is an RF-relation. Thus, combining this with Theorem \[rfrelations\] and Theorem \[7th\], we see that if $H$ is almost symmetric of type $3$, then $I_H$ is generated by RF-relations.*]{}
When is $H+m$ almost symmetric for infinitely many $m$? {#sec:7}
=======================================================
In this section we consider shifted families of numerical semigroups.
\[H+m\_def\] [*For $H =\langle n_1,\ldots ,n_e\rangle$, we put $H+m =\langle n_1+m,\ldots ,n_e+m\rangle$. When we write $H+m$, we assume that $H+m$ is a numerical semigroup, that is, $\GCD(n_1+m,\ldots , n_e+m)=1$. In this section, we always assume that $n_1<n_2 < \ldots < n_e$. We put*]{} $$s = n_e -n_1, \quad d = \GCD(n_2-n_1, \ldots , n_e - n_1) \quad \text{and}\quad s' = s/d.$$
First, we will give a lower bound for the Frobenius number of $H+m$.
For $m\gg 1$, $\Fr(H+m) \ge m^2/s $.
Note that $\Fr(H+m) \ge \Fr(H'_m)$, where we put $H'_m= \langle m+n_1,m+n_1+1,\ldots , m+n_e= m+n_1+s\rangle$, and it is easy to see that $\Fr(H'_m)\ge m^2/s$.
The following fact is trivial but very important in our argument.
\[al\_i(m)\] If $\phi = \prod_{i=1}^e x_i^{a_i} -
\prod_{i=1}^e x_i^{b_i}\in I_H$ is homogeneous, namely, if $\sum_{i=1}^e a_i =\sum_{i=1}^e b_i$, then $\phi \in I_{H+m}$ for every $m$.
We define $\alpha_i(m)$ to be the minimal positive integer such that $$\al_i(m) (n_i +m) = \sum_{j=1, j\ne i}^e \al_{ij}(m) (n_j+m).$$
\[homogeneous\] Let $H+m$ be as in Definition \[H+m\_def\]. Then, if $m$ is sufficiently large with respect to $n_1,\ldots , n_e$, the values $\al_2(m),\ldots , \al_{e-1}(m)$ are constant, $\al_1(m)\ge (m+n_1)/s'$ and $\al_4(m)\ge (m+n_1)/s'-1$. Moreover, there is a constant $C$ depending only on $H$ such that $\al_1(m)-(m+n_1)/s' \le C$ and $\al_4(m)-(m+n_1)/s' \le C$.
It is obvious that there is some [*homogeneous*]{} relation $\phi\in I_H$ of type $\phi= x_i^{a_i} - \prod_{j=1, j\ne i}^e x_j^{b_j}$ if $i \ne 1,e$. Thus for $m\gg 1$, $\al_i(m)$ is the minimal $a_i$ such that there exists a homogeneous equation of type $\phi= x_i^{a_i} - \prod_{j=1, j\ne i}^e x_j^{b_j} \in I_H$.
If $\al_1(m) (m+n_1)= \sum_{i=2}^e a_i(m+n_i)$, then obviously, $\al_1(m) \ge \sum_{i=2}^e a_i+1$. Moreover, if $d>1$, then since we should have $\GCD(m+n_1,d)=1$ to make a numerical semigroup, we have $\al_1(m) \equiv \sum_{i=2}^e a_i$ (mod $d$). Hence $\al_1(m)\ge \sum_{i=2}^e a_i+d$.
We can compute $\al_1(m)$ in the following manner. We assume that $m$ is sufficiently large and define $m'$ by the equation $$(m+n_1)=s' m' - r \quad \text{with} \quad 0\le r<s',$$ then $m' (m+ n_e) - (m' +d) (m+n_1) = dr \ge 0 $ and also, for an integer $c>0$, we have $ (m'+c) (m+ n_e) - (m' +d+c) (m+n_1)
= dr +cs.$ Take $c$ minimal so that $(m'+c) (m+ n_e)- (m' +d+c) (m+n_1) = \sum_{j=2}^{e-1} b_j (n_e-n_j)$. Since $\GCD\{n_e-n_1=s, \ldots , n_e-n_{e-1}\}=d$, such $c$ is a constant depending only on $\{n_1,\ldots , n_e\}$ and $r$, which can take only $s'$ different values. Then we have $\al_1(m) = m'+ c$, since $ (m' +d+c) (m+n_1) =
(m'+c- \sum_{j=2}^{e-1} b_j ) (m+ n_e) + \sum_{j=2}^{e-1} b_j(m+n_j)$ and the minimality of $c$.
Due to Lemma \[homogeneous\], we write simply $\al_i= \al_i(m) $ for $m\gg 1$ and $i \ne 1,e$.
[*If we assume that $H=\langle n_1,n_2,n_3,n_4\rangle$ is almost symmetric of type $3$, there are examples with $d>1$ odd, such as $H=\langle 20, 23, 44, 47\rangle$ with $d=3$ or $H= \langle 19, 24, 49, 54\rangle$ with $d=5$. But in all examples we know, at least one of the minimal generators is even. Is this true in general? Note that we have examples of $4$-generated [*symmetric*]{} semigroups all of whose minimal generators are odd.*]{}
$H+m$ is almost symmetric of type $2$ for only finitely many $m$.
----------------------------------------------------------
\[H+m\_type2\] Assume $H+m=\langle n_1+m,\ldots ,n_4+m\rangle$. Then for large enough $m$, $H+m$ is not almost symmetric of type $2$.
We assume that $m$ is sufficiently large and that $H+m$ is almost symmetric of type $2$.
Recall from Theorem \[type2\] that the RF-matrix has the following form (since we have fixed the order on $\{n_1,\ldots, n_4\}$, we replace the indices $\{1,2,3,4\}$ by $\{i,j,k,l\}$).
$$\RF(\Fr(H+m)/2) = \left( \begin{array}{cccc} -1 & \alpha_j(m)-1 & 0 & 0\\
0 & -1 & \alpha_k(m)-1 & 0 \\
\al_i(m)-1 & 0 & -1 & \al_l(m)-1\\
\al_i(m)-1 & \al_j(m)-1-\al_{ij}(m) & 0 & -1 \end{array}\right)$$
Also, we know by Lemma \[homogeneous\] that $\al_1(m)$ and $\al_4(m)$ grow linearly in $m$, while $\al_2(m)$ and $\al_3(m)$ stay constant for large $m$.
Now, the 1st and 2nd rows of $\RF(\Fr(H+m)/2)$ show that $\Fr(H+m)/2 + (n_i+m) = (\al_j(m)-1)(n_j +m)$ and $\Fr(H+m)/2 + (n_j+m) = (\al_k(m) - 1)(n_k+m)$. Since $\Fr(H+m)$ grows like $m^2$, these equations show that $\{j,k\} = \{1,4\}$. But then, looking at the 3rd row, since $\{i,l\} = \{2,3\}$, $\Fr(H+m)/2$ grows as a linear function of $m$, a contradiction!
The classification of $H$ such that $H+m$ is almost symmetric of type 3 for infinitely many $m$
-----------------------------------------------------------------------------------------------
Unlike the case of type $2$, there are infinite families of semigroups $H$ for which $H+m$ is almost symmetric of type $3$ for infinitely many $m$. The following example was given by T. Numata.
If $H= \langle 10,11,13,14\rangle$, then $H+4m$ is almost symmetric of type $3$ for every integer $m\ge 0$.
*For the following $H$, $H+m$ is almost symmetric of type $3$ if* (for a direct numerical check of these cases, see the sketch after the list)
1. $H= \langle10,11,13,14\rangle$, $m$ is a multiple of $4$.
2. $H = \langle 10, 13, 15, 18\rangle$, $m$ is a multiple of $8$.
3. $H = \langle 14, 19, 21, 26\rangle$, $m$ is a multiple of $12$.
4. $H= \langle18, 25, 27, 34\rangle$, $m$ is a multiple of $16$.
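These shifted families can be tested by brute force. The sketch below is an illustrative Python script, not part of any proof, and all function names in it are ours: it builds a membership table for the semigroup, reads off the gaps, the Frobenius number and the pseudo-Frobenius numbers, and then checks the type-$3$ almost symmetric condition $\PF(H)=\{f,f',\Fr(H)\}$ with $f+f'=\Fr(H)$ used throughout this section.

\begin{verbatim}
from math import gcd
from functools import reduce

def frobenius_and_pf(gens):
    """Frobenius number and pseudo-Frobenius numbers of H = <gens>."""
    assert reduce(gcd, gens) == 1
    bound = min(gens) * max(gens)          # generous but safe bound for Fr(H)
    member = [False] * (bound + max(gens) + 1)
    member[0] = True
    for x in range(1, len(member)):
        member[x] = any(x >= g and member[x - g] for g in gens)
    gaps = [x for x in range(1, bound + 1) if not member[x]]
    frob = max(gaps)
    pf = sorted(f for f in gaps if all(member[f + g] for g in gens))
    return frob, pf

def almost_symmetric_type3(gens):
    frob, pf = frobenius_and_pf(gens)
    return len(pf) == 3 and pf[0] + pf[1] == frob == pf[2]

# item (ii) of the list above: H = <10,13,15,18>, shifts by multiples of 8
for m in (0, 8, 16, 24):
    gens = tuple(g + m for g in (10, 13, 15, 18))
    print(gens, almost_symmetric_type3(gens))
\end{verbatim}

If the statement above is correct, every line should print True; the same routine can be pointed at the other three families (or compared against the GAP package NumericalSgps cited in the references).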
In the following, we determine the numerical semigroups $H$ generated by $4$ elements for which $H+m$ is almost symmetric of type $3$ for infinitely many $m$.
Let $H=\langle n_1,n_2,n_3 ,n_4\rangle$ with $n_1<n_2<n_3<n_4$ and we assume that $H+m$ is almost symmetric of type $3$ with $\PF(H+m) = \{ f(m), f'(m), \Fr(H+m)\}$ with $f(m) < f'(m), f(m) + f'(m)= \Fr(H+m)$. We say some invariant $\sigma(m)$ (e.g. $\Fr(H+m), f(m), f'(m)$) of $H+m$ is $O(m^2)$ (resp. $o(m)$) if there is some positive constant $c$ such that $\sigma(m) \ge cm^2$ (resp. $\sigma(m) \le cm$) for all $m$.
The invariants $\Fr(H+m)$ and $f'(m)$ are $O(m^2)$ and $f(m)$ is $o(m)$.
By Lemma \[(al\_i-1)n\_i\], we have
$$\phi(m)+ (n_k +m) = (\al_i (m)-1 )(n_i +m)$$
for every $i$ and for some $k$ and $\phi(m) \in \PF'(H+m)$. If $i=2,3$ (resp. $1,4$), then $\al_i(m)=\al_i$ is constant (resp. grows linearly on $m$) and $\phi(m)$ is $O(m)$ (resp. $O(m^2)$).
The following lemma is very important in our discussion.
\[(al\_i-1)n\_i\] Assume $H+m=\langle n_1+m,n_2+m,n_3+m ,n_4+m\rangle$ is almost symmetric of type $3$ for sufficiently large $m$. Then for every $i$, there exists $k\ne i$ and $\phi(m)\in \PF'(H+m)$ such that $(\al_i(m)-1)(n_i+m) = \phi(m)+ (m+n_k)$.
We have seen in Lemma \[special row\] (ii) that there are at least $4$ relations of type $(\al_i(m)-1)(n_i+m) = \phi(m)+ (m+n_k)$. If for some $i$, the relation $(\al_i(m)-1)(n_i+m) = \phi(m)+ (m+n_k)$ does not exist, then for some $j\ne i$, there must exist relations $$(\al_j(m)-1)(m+n_j) = f(m)+ (m+n_k)= f'(m)+(m+n_l),$$ which is absurd since $f(m)= o(m)$ and $f'(m)=O(m^2)$.
Now, we will start the classification of $H$, almost symmetric of type $3$ and $H+m$ is almost symmetric of type $3$ for infinitely many $m$.
\[AG3-fm\] Assume $H$ and $H+m$ are almost symmetric of type $3$ for infinitely many $m$. We use notation as above and we put $d=\GCD (n_2-n_1, n_3-n_2,n_4-n_3)$. If $H+m$ is almost symmetric of type $3$ for sufficiently big $m$, then the following statements hold.
1. We have $\al_2=\al_3$ and $\al_1(m)= \al_4(m)+d$. If we put $a=\al_2=\al_3$ and $b= \al_1(m)=\al_4(m)+d$, we have the following $\RF(f(m)), \RF(f'(m))$. Note that $\RF(f(m))$ does not depend on $m$ if $H+m$ is almost symmetric.
$$\RF ( f(m) ) = \left( \begin{array}{cccc} -1 & a-1 & 0 & 0\\
1 & -1 & a-2 & 0 \\
0 & a-2 & -1 & 1\\
0 & 0 & a-1 & -1 \end{array}\right),
\RF(f'(m))= \left( \begin{array}{cccc} -1 & 0 & 1 & b-d-2\\
0 & -1 & 0 & b-d-1\\
b-1 & 0 & -1 & 0\\
b-2 & 1 & 0 & -1 \end{array}\right),$$ where we put $b= \al_1(m)$ and then $\al_4(m)= b-d$.
2. The integer $a=\al_2=\al_3$ is odd and we have $n_2= n_1+(a-2)d, n_3= n_1+ ad, n_4= n_1+ (2a-2)d$.
We divide our proof into several steps.
\(1) By Lemma \[(al\_i-1)n\_i\], we have the relations $$f(m) + (m+n_k) = (\al_2-1)(m+n_2), \quad f(m) + (m+n_l) = (\al_3-1)(m+n_3)$$ in $H+m$. Taking the difference, we have $$n_l-n_k = (\al_3-\al_2)m + (\al_3-1)n_3- (\al_2-1)n_2.$$ Hence we must have $\al_2=\al_3$, since $m$ is sufficiently larger than $n_1,\ldots, n_4$. Now, we will put $\al_2=\al_3=a$.
Then we will determine $n_k,n_l$. Since we have $n_k< n_l$, there are $3$ possibilities;
$n_k=n_1$ and $n_l=n_4$, $n_k=n_1$ and $n_l=n_2$, or $n_k=n_3$ and $n_l=n_4$. If we have $n_k =n_3$ and $f(m)+ n_3 = (a-1)(m+n_2)$, $f(m) + n_1< (a-1)(m+n_2)$ and there is no way to express $f(m)+n_1$ as an element of $H+m$. Hence we have $n_k=n_1$ and in the same manner, we can show $n_l=n_4$. Thus we have obtained $$f(m) + (m+n_1 )= (a-1)(m+n_2), \quad f(m) + (m+n_4 )= (a-1)(m+n_3),$$ which are 1st and 4th rows of $\RF(f(m))$.
Now, put $f(m) = (a-2)e_m$ ($e_m\in \QQ$). Then we have $$(*) \quad (a-2)(e_m - (n_2+m))= n_2-n_1, \quad (a-2)((n_3+m)-e_m) = n_4-n_3.$$
Now, put $e_m = e+m, t= e - n_2$ and $u= n_3 - e$. Then from (\*) we have $$(**)\quad n_1= e- (a-1)t, n_2= e -t, n_3= e+u, n_4 =e + (a-1)u$$ with $n_3-n_2= t +u \in {\mathbb Z}$.
\(2) Noting $a=\al_2(m)$, we put $$\begin{aligned}
a(m+n_2) = c_1(m+n_1)+c_3(m+n_3)+c_4(m+n_4),\\
a(m+n_3)= c'_1(m+n_1)+ c'_2(m+n_2)+c'_4(m+n_4)\end{aligned}$$
Since $x_2^a - x_1^{c_1}x_3^{c_3}x_4^{c_4}$ is a homogeneous equation, we have $a= c_1+c_3+c_4$. Also, since $(a-1)(n_3-n_2)> n_2-n_1$, we have $c_1>1$ and in the same manner, we get $c'_4>1$.
\(3) By Lemma \[(al\_i-1)n\_i\], there should be relations
$$f'(m) + (m+n_k) = (\al_1(m)-1)(m+n_1), \quad f'(m) + (m+n_l)= (\al_4(m)-1)(m+n_4)$$ for some $k\ne 1$ and $l\ne 4$. By Lemma \[symm\], we must have $k=3$ or $4$, $l= 1$ or $2$. We will show that $k=3$ and $l=2$.
Assume, on the contrary, $f'(m) + (m+n_4) = (\al_1(m)-1)(m+n_1)$ and put $f'(m) + (m+n_3)= p_1(m)(m +n_1)+ p_2(m)(m+n_2)$, noting that the $(3,4)$ entry of $\RF(f'(m))$ is $0$. Then since $f'(m) + (m+n_4) = (\al_1(m)-1)(m+n_1)> f'(m) + (m+n_3)= p_1(m)(m +n_1)+ p_2(m)(m+n_2)
\ge (p_1(m)+p_2(m))(m +n_1)$,
we have $\al_1(m)-1\ge p_1(m)+p_2(m)+1$.
Taking the difference of $$\begin{aligned}
f'(m) + (m+n_4) = (\al_1(m)-1)(m+n_1),\\
f'(m) + (m+n_3)= p_1(m)(m +n_1)+ p_2(m)(m+n_2),\end{aligned}$$ we have $$(***) \quad n_4-n_3= (\al_1(m) - 1-p_1(m) -p_2(m))(m+n_1) - p_2(m)(n_2-n_1).$$
Using the definition in Lemma \[homogeneous\], since $n_4-n_3, n_2-n_1$ are divisible by $d$ and $m +n_1$ is relatively prime with $d$, $(\al_1(m) - 1-p_1(m) -p_2(m))$ should be a multiple of $d$ and thus we have $\al_1(m) - 1-p_1(m) -p_2(m)\ge d$.
Then we have $p_2(m)(n_2-n_1) \ge d (m+n_1) +C_1$, where $C_1$ is a constant not depending on $m$. On the other hand, we have seen that $p_2(m) < \al_1(m)$ and $n_2-n_1 < s$ and $|\al_1(m) - dm/ s|\le C$, by Lemma \[homogeneous\]. Then we have $d (m+n_1)+C_1\le p_2(m)(n_2-n_1) < p_2(m)s <\al_1(m) s
\le dm$, which is a contradiction!
Thus we get $(\al_1(m)-1)(m+n_1) = f'(m) + (m+n_3)$ and in the same manner, $(\al_4(m)-1)(m+n_4) = f'(m) + (m+n_2)$.
\(4) Since we have seen $c_1,c'_4>1$ in (2), by Lemma \[symm\], we have $f'(m)+ (m+ n_1) = p_3(m)(m+n_3)+p_4(m)(m+n_4)$ and $f'(m)+ (m+ n_4) = q_1(m)(m+n_1)+q_2(m)(m+n_2)$. If, moreover, we assume $p_3(m)=0$, then we will have $f'(m)+ (m+ n_1)= p_4(m)(m+n_4)$ and $f'(m) +(m+n_2) = (\al_ 4(m) -1)(m+n_4)$, which will lead to $(n_2-n_1) = (\al_4(m)-1-p_4(m))(m+n_4)$, a contradiction! Hence we have $p_3(m) >0$ and in the same manner, $q_2(m)>0$. Again by Lemma \[symm\], we have $c_4=0= c'_1$.
\(5) Let us fix $\RF(f(m))$. Using (\*\*), we compute $$\begin{aligned}
m+n_1 + (a-2) (m+n_3) - (f(m) + (m+n_2)) = (a-2) (u-t)\\
m+n_4 + (a-2) (m+n_2) - (f(m) + (m+n_3))= (a-2) (u-t) .\end{aligned}$$
This shows that if $u=t$, we get desired $\RF(f(m))$.
Now, if $u>t$, since we have $f(m) + (m+n_3)= c'_2 (m+n_2) + (c_4'-1) (m+n_4)$ with $c'_2 + c'_4=a, c'_4 \ge 2$, then $c'_2 (m+n_2) + (c_4'-1) (m+n_4) \ge m+n_4 + (a-2) (m+n_2) = (f(m) + (m+n_3)) + (a-2) (u-t) > f(m) + (m+n_3)$, a contradiction. Similarly, $t>u$ leads to a contradiction, too. So we have $u=t$ and we have proved the form of $\RF(f(m))$ in Proposition \[AG3-fm\].
\(6) Adding $n_1-n_2=n_3-n_4$ to both sides of $f'(m) + (m+n_2) = (\al_4(m)-1)(m+n_4)$, we get $f'(m) + (m+n_1) = (m+n_3) + (\al_4(m)-2)(m+n_4)$, and likewise $f'(m) + (m+n_4) = (m+n_2) + (\al_1(m)-2)(m+n_1)$.
\(7) We will show that $\al_4(m)= \al_1(m)-d$. By equations $$f'(m)+(m+n_3) =(\al_1(m) -1)(m+n_1),$$ $$f'(m)+(m+n_2) =(\al_4(m) -1)(m+n_4),$$ taking the difference, we have $$n_3-n_2 + (\al_1(m) - \al_4(m))(m+n_1) = (\al_4(m) - 1)s.$$ Since $s, n_3-n_2$ are divisible by $d$ and $\GCD(m+n_1,d)=1$, $\al_1(m)-\al_4(m)$ is also divisible by $d$ and since $(\al_4(m)-1)s$ is bounded by $(d+C')m$ for some constant $C'$ by Lemma \[homogeneous\], we have $\al_1(m)-\al_4(m)=d$.
\(8) Finally, we will show that $a$ is odd and $t=u=d$. Since $(a-2)t = n_2-n_1$ and $2t= n_3-n_2$ are integers, if we show that $a$ is odd, then $t,u\in {\mathbb Z}$. Recall that $$an_2 = 2 n_1+ (a-2) n_3$$ and $a$ is minimal so that $an_2$ is representable by $n_1,n_3,n_4$. If $a$ were even, this would contradict the minimality of $a$.
By the definition of $d$ we see that $t=u=d$.
\[H+mAG3\] Assume that $H=\langle n_1, n_2, n_3, n_4\rangle$ with $n_1<n_2<n_3<n_4$ and we assume that $H$ and $H+m$ are almost symmetric of type $3$ for infinitely many $m$. Then putting $d=\GCD (n_2-n_1,n_3-n_2,n_4-n_3)$, $a=\al_2,
b= \al_1$ and $\PF(H)= \{ f,f', \Fr(H)\}$, $H$ has the following characterization.
1. $a$ and $d$ are odd, $\GCD(a,d)=1$ and $b\ge d+2$.
2. $\RF(f)$ and $\RF(f')$ have the following form. $$\RF(f) = \left( \begin{array}{cccc} -1 & a-1 & 0 & 0\\
1 & -1 & a-2 & 0 \\
0 & a-2 & -1 & 1\\
0 & 0 & a-1 & -1 \end{array}\right),
\RF(f')= \left( \begin{array}{cccc} -1 & 0 & 1 & b-d-2\\
0 & -1 & 0 & b-d-1\\
b-1 & 0 & -1 & 0\\
b-2 & 1 & 0 & -1 \end{array}\right).$$
3. $n_1= 2a+ (b-d-2)(2a-2), n_2= n_1+(a-2)d, n_3= n_1+ad,
n_4= 2a+ (b-2)(2a-2)$.
4. If we put $H(a,b; d) = \langle n_1,n_2,n_3,n_4\rangle$, it is easy to see that $H(a,b+1; d) = H(a,b;d) + (2a-2)$. Since $H(a,b;d)$ is almost symmetric of type $3$ for every $a,d$ odd, $\GCD(a,d)=1$ and $b\ge d+2$, $H(a,b;d)+m$ is almost symmetric of type $3$ for infinitely many $m$.
5. $I_H = (xw-yz, y^a - x^2z^{a-2}, z^{a}-y^{a-2}w^2, xz^{a-1}- y^{a-1} w,
x^b -z^2w^{b-d-2}, w^{b-d}-x^{b-2}y^2, x^{b-1}y - zw^{b-d-1})$.
We have determined $\RF(f), \RF(f')$ in Proposition \[AG3-fm\] and saw that $n_2= n_1 +(a-2)d, n_3= n_1+ ad, n_4= n_1+(2a-2)d$. Here we write $n_i$ (resp. $f, f'$) instead of $n_i+m$ (resp. $f(m), f'(m)$). Then from the 1st and 3rd rows of $\RF(f')$, we get $$bn_1 = 2 (n_1+ad) + (b-d-2)(n_1+(2a-2)d)$$ hence $n_1= 2a+ (b-d-2)(2a-2)$. It is easy to see that putting $n_2= n_1+(a-2)d, n_3= n_1+ad,
n_4= n_1+(2a-2)d$, we have $f = (a-1)n_2-n_1
= (a-2)(2a+(a-1)(2b-d-4))$ and $f' = (b-1)n_1-n_3=(b-2)n_1-ad$.
Also, since $n_1$ is even, $d$ should be odd to make a numerical semigroup.
We put $I' = (xw-yz, y^a - x^2z^{a-2}, z^{a}-y^{a-2}w^2, xz^{a-1}- y^{a-1} w,
x^b -z^2w^{b-d-2}, w^{b-d}-x^{b-2}y^2, x^{b-1}y - zw^{b-d-1})$. We can easily see those equations come from $\RF(f(m))$ or $\RF(f'(m))$. Hence $I'\subset I_{H+m}$. Now we will show that $I' = I_H$ by showing $\dim_k S/ (I', x)= \dim_k S/(I_H, x)
= n_1$.
Now, we see that $$S/(I', x)\cong k[y,z,w]/( yz, y^{a}, z^{a}-y^{a-2}w^2,y^{a-1}w,
z^2w^{b-d-2}, w^{b-d}, zw^{b-d-1}),$$ a $k$-basis of which can be taken to be
$$A \cup B \cup C \cup \{y^{a-1}\}\cup \{zw^{b-d-2}\},$$ where $A = \{y^iw^j \;|\; 0\le i\le a-3, 0\le j \le b-d-1\}$,
$B = \{z^iw^j\;|\; 1\le i \le a-1, 0\le j\le b-d-3\},
C= \{ y^{a-2}w^j\;|0\le j\le b-d-1\}$. Note that $y^{a-2}w^{b-d-1}
= z^aw^{b-d-3}$.
Hence $\dim_k S/(I', x) = \sharp A +\sharp B + \sharp C + 2
= (a-2)(b-d) + (a-1)(b-d-2) + (b-d) +2 = n_1 = \dim_k S/(I_H, x )$. Thus we have shown that $I'=I_H$.
Also, it is easy to see that $\Soc( S/(I_H,x))$ is generated by $y^{a-1}, zw^{b-d-2},
y^{a-2}w^{b-d-1}= z^{a} w^{b-d-3}$ and since $xw = yz$ in $k[H]$, we have $ (y^{a-1})\cdot (zw^{b-d-2}) = x (y^{a-2}w^{b-d-1})$ in $k[H]$, which shows that $H(a,b;d)$ is almost symmetric.
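As a quick numerical sanity check on the closed formulas of item 3, one can generate $H(a,b;d)$ for a few admissible parameters and test the type-$3$ condition directly. The snippet below is illustrative only; it repeats, in compact form, the brute-force membership idea used in the sketch of the previous subsection, and all names in it are ours.

\begin{verbatim}
def H_abd(a, b, d):
    """Generators of H(a,b;d) as given in item 3 of the theorem."""
    n1 = 2 * a + (b - d - 2) * (2 * a - 2)
    return (n1, n1 + (a - 2) * d, n1 + a * d, 2 * a + (b - 2) * (2 * a - 2))

def type3_check(gens):
    """True iff PF(H) = {f, f', Fr(H)} with f + f' = Fr(H), by brute force."""
    bound = min(gens) * max(gens)                # generous bound for Fr(H)
    member = [True] + [False] * (bound + max(gens))
    for x in range(1, len(member)):
        member[x] = any(x >= g and member[x - g] for g in gens)
    gaps = [x for x in range(1, bound + 1) if not member[x]]
    pf = sorted(f for f in gaps if all(member[f + g] for g in gens))
    return len(pf) == 3 and pf[0] + pf[1] == pf[2] == max(gaps)

# a, d odd, gcd(a,d) = 1, b >= d + 2
for (a, b, d) in [(3, 3, 1), (3, 4, 1), (5, 3, 1), (5, 7, 3)]:
    gens = H_abd(a, b, d)
    print((a, b, d), gens, type3_check(gens))
\end{verbatim}

With $(a,b,d)=(3,4,1)$ this reproduces Numata's example $\langle 10,11,13,14\rangle$, and $(5,3,1)$ gives $\langle 10,13,15,18\rangle$ from the earlier list; every line should print True if the classification above is consistent.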
[NNW]{}
V. Barucci, D. E. Dobbs, M. Fontana, *Maximality properties in numerical semigroups and applications to one-dimensional analytically irreducible local domains*, Memoirs of the Amer. Math. Soc. [**598**]{} (1997).
V. Barucci, R. Fröberg, *One-dimensional almost Gorenstein rings*, J. Algebra, **188** (1997), 418-442. V. Barucci, R. Fröberg, M. Şahin, *On free resolutions of some semigroup rings*, J. Pure Appl. Alg., [**218**]{} (2014), 1107-1116. M. Delgado, P. A. García-Sánchez, J. Morais, *NumericalSgps - a GAP package*, 0.95, 2006, http://www.gap-system.org/Packages/numericalsgps.
K. Eto, *Almost Gorenstein monomial curves in affine four space*. J. Alg., **488** (2017), 362 - 387.
R. Fröberg, C. Gottlieb, R. Häggkvist, *On numerical semigroups*, Semigroup Forum **35** (1987), 63-83. S. Goto, R. Takahashi, N. Taniguchi, *Almost Gorenstein rings - towards a theory of higher dimension-*, J. Pure and Applied Algebra, [**219**]{} (2015), 2666-2712. S. Goto, K. Watanabe, *On graded rings*, J. Math. Soc. Japan [**30**]{} (1978), 172-213. J. Herzog, *Generators and relations of abelian semigroups and semigroup rings*, Manuscripta Math. **3** (1970), 175-193.
J. Herzog, K. Watanabe, *Almost symmetric numerical semigroups and almost Gorenstein semigroup rings,* RIMS Kokyuroku, Kyoto University, [**2008**]{} (2016), 107-128.
J. Herzog, K. Watanabe, *Almost symmetric numerical semigroups and almost Gorenstein semigroup rings generated by 4 elements,* RIMS Kokyuroku, Kyoto University, [**2051**]{} (2017), 120-132. J. Komeda, *On the existence of Weierstrass points with a certain semigroup generated by 4 elements*, Tsukuba J. Math. [**6**]{} (1982), 237-270. E. Kunz, *The value-semigroup of a one-dimensional Gorenstein ring*, Proc. Amer. Math. Soc. **25** (1970), 748-751.
A. Moscariello, On the type of an almost Gorenstein monomial curve, J. of Algebra, **456** (2016), 266 – 277. H. Nari, *Symmetries on almost symmetric numerical semigroups*, Semigroup Forum, **86** (2013), 140 - 154. H. Nari, T. Numata, K.-i. Watanabe, *Genus of numerical semigroups generated by three elements*, J. Algebra, **358** (2012), 67-73. T. Numata, *Almost symmetric numerical semigroups generated by four elements*, Proceedings of the Institute of Natural Sciences, Nihon University **48** (2013), 197-207.
J. C. Rosales, P. A. García-Sánchez, *Numerical semigroups*, Springer Developments in Mathematics, Volume **20**, (2009). J. C. Rosales, P. A. García-Sánchez, *Constructing almost symmetric numerical semigroups from irreducible numerical semigroups*, preprint. K. Watanabe, *Some examples of one dimensional Gorenstein domains*, Nagoya Math. J. **49** (1973), 101-109.
[**Discrete phase space - I:\
Variational formalism for classical\
relativistic wave fields**]{}\
[**A. Das**]{}
*Department of Mathematics and Statistics*
*Simon Fraser University, Burnaby, B.C., V5A 1S6, Canada*
[**Abstract**]{}
[The classical relativistic wave equations are presented as partial [*difference*]{} equations in the arena of covariant discrete phase space. These equations are also expressed as [*difference-differential*]{} equations in discrete phase space and continuous time. The relativistic invariance and covariance of the equations in [*both*]{} versions are established. The partial difference and difference-differential equations are derived as the Euler-Lagrange equations from the variational principle. The difference and difference-differential conservation equations are derived. Finally, the total momentum, energy, and charge of the relativistic classical fields satisfying [*difference-differential*]{} equations are computed.]{} 0.6cm
[*PACS:*]{} [11.10.Ef; 11.10.Qr; 11-15.Ha ]{}\
[*Keywords:*]{} [Classical fields; Relativistic; Lattice equations]{}
Introduction
============
Partial difference equations have been studied \[1\] for a long time to investigate problems in mathematical physics. Moreover, modern numerical analysis \[2\], which studies the differential equations arising out of various physical problems approximately, is based upon finite difference (ordinary or partial) equations.
Recently \[3\], we have formulated the wave mechanics in an exact fashion in the arena of discrete phase space in terms of the partial difference equations. This new formulation includes (free) classical relativistic Klein-Gordon, Dirac, and gauge field equations.
The proofs of the relativistic invariance or covariance of various partial difference and difference-differential equations are [*quite subtle*]{}. However, we have managed to provide such proofs in Section 4.
The Euler-Lagrange equations for the partial difference and difference-differential equations and Noether’s theorems for difference and difference-differential conservation laws are derived in Appendix I and Appendix II respectively.
Finally, the total momentum, energy and charge are computed for various relativistic classical fields obeying [*difference-differential*]{} equations in Section 5. (We have not computed conserved quantities for wave fields satisfying partial difference equations for a physical reason to be explained later.)
Moreover, the stress-energy-momentum tensor and the consequent total momentum-energy as computed from a general Lagrangian are somewhat incomplete in this paper. However, in the next paper, these quantities for the free Klein-Gordon, electro-magnetic, and Dirac fields are furnished with [*exact*]{} equations. 1.2cm
Notations and preliminary definitions
=====================================
There exists a characteristic length $\ell$ (which may be the Planck length) in this theory. We choose the absolute units such that $\hbar = c = \ell = 1.$ All physical quantities are expressed as dimensionless numbers. Greek indices take values from $\{1,2,3,4\},$ roman indices take values from $\{1,2,3\},$ and capital roman indices take values from $\{1,2\}.$ Einstein’s summation convention is followed in all three cases. We denote the flat space-time metric by $\eta_{\mu \nu}$ and the diagonal matrix $[\eta_{\mu \nu}] := {\mathrm{diag}} [1,1,1,-1].$ (The signature of the metric is obviously $+ 2.$) We denote the set of all non-negative integers by $\N := \{0\} \cup \{Z^+\} =
\{0,1,2,3,\ldots\}.$ An element $n \in \N\times\N\times\N\times\N$ and an element $({{\mathbf{n}}},t) \in \N^3 \times \R$ can be expressed as $$n = (n^1, n^2, n^3, n^4), \;\; n^{\mu} \in \N, \;\; \mu \in
\{1,2,3,4\}\,;
\hspace*{0.75cm}
\eqno{\rm (1A)}$$ $$\hspace*{0.25cm}({{\mathbf{n}}},t) = (n^1, n^2, n^3, t), \;\; n^j \in \N,
\;\; j \in \{1,2,3\}, \;\; t\in \R\vspace*{0.2cm}\,.
\eqno{\rm (1B)}$$ Here and elsewhere, a bold roman letter indicates a [*three-dimensional*]{} vector. In this paper, the equations in the relativistic phase space are denoted by (..A), whereas the equations in the discrete phase space and continuous time are labelled by (..B). Both formulations are presented up to the difference and difference-differential conservation laws. Subsequently, only the difference-differential equations are pursued. The physical meanings of the quantum numbers $n^\mu$ are understood from the equations $$(x^\mu )^2 + (p^\mu )^2 = 2n^\mu + 1, \quad \mu \in \{1,2,3,4\},
\eqno(2)$$ where $x^\mu $ denote the space-time coordinates and $p^\mu $ stand for four-momentum components in quantum mechanics.
$$\begin{minipage}[t]{15cm}
\beginpicture
\setcoordinatesystem units <0.65truecm,0.65truecm>
\setplotarea x from 0 to 13, y from 7 to -9
\setlinear
\plot 0 0 12 0 /
\plot 6 6 6 -6 /
\put {\vector(1,0){4}} [Bl] at 12 0
\put {\vector(0,1){4}} [Bl] at 6 6
\setquadratic
\circulararc 360 degrees from 5.35 0 center at 6 0
\circulararc 360 degrees from 4.75 0 center at 6 0
\circulararc 360 degrees from 4.15 0 center at 6 0
\setlinear
\plot 6 0 5.5 -0.3775 /
\plot 6 0 6.75 -1 /
\plot 6 0 7.075 1.5 /
\put {${\rm p}^\mu$} at 6 6.6
\put {${\rm q}^\mu$} at 12.7 0
\put {$\scriptscriptstyle 1$} at 5.8725 -0.35
\put {$\scriptscriptstyle \sqrt{3}$} at 6.75 -0.5255
\put {$\scriptscriptstyle \sqrt{5}$} at 7.05 1.075
\put {{\bf FIG. 1} \ \ One discrete phase plane.} at 6 -8.2
\endpicture
\end{minipage}$$
Therefore, $n^\mu $ gives rise to a closed phase space loop of radius $\sqrt{2n^\mu+1}$ in the $\mu$-th phase plane. Field quanta reside in such phase space loops where the measurements of angles are [*completely uncertain*]{} (see Fig.1). Phase space loops can be interpreted as degenerate phase cells.
We shall encounter three-dimensional improper integrals in computing the total conserved quantities. Those integrals are always defined to be the Cauchy principal value: $$\int\limits_{\R^3} f({{\mathbf{k}}})\,d^3 {{\mathbf{k}}} :=
\lim\limits_{L\rightarrow \infty} \left[\,\int\limits_{-L}^{L}
\int\limits_{-L}^{L} \int\limits_{-L}^{L} f(k_1, k_2,
k_3)\,dk_1dk_2dk_3\right]\vspace*{0.1cm}.
\eqno(3)$$
Let a function be defined by $f: \N^4 \rightarrow \R$ (or $f: \N^4
\rightarrow \C$). Then the right partial difference, the left partial difference, and the weighted-mean difference are defined respectively by \[3,4\]: $$\Delta _\mu f(n) := f(.., n^\mu +1,..) - f(..,n^\mu ,..)
\vspace*{0.1cm}\,,
\eqno({\rm 4i})$$ $$\Delta^{\prime}_\mu f(n) := f(.., n^\mu ,..) - f(..,n^\mu -1,..)
\vspace*{0.2cm}\,,
\eqno({\rm 4ii})$$ $$\Delta_\mu^{\scsc\#} f(n) := (1 / \sqrt{2}\,)
\left[\sqrt{n^\mu +1}\,f (..,n^\mu +1,..) -\sqrt{n^\mu}\,f(..,n^\mu
-1,..)\right] \vspace*{0.3cm}
\eqno({\rm 4iii})$$ It is clear that the partial difference operators $\Delta_\mu,
\Delta_\mu^{\prime}, \Delta_\mu^{\scsc\#}$ are all linear and the operators $-i\Delta_\mu^{\scsc\#}$ are self-adjoint \[3\]. By direct computations, we can prove the following generalizations of the Leibnitz rule: $$\Delta _\mu [f(n) g(n)] = f(..,n^\mu +1,..)
\Delta_\mu g(n) + g(n) \Delta_\mu f(n)\vspace*{0.4cm}\,,
\eqno({\rm 5i})$$ $$\Delta_\mu^{\prime} [f(n) g(n)] = f(n) \Delta_\mu^{\prime} g(n) +
g(..,n^\mu-1,..) \Delta_\mu^{\prime} f(n)\vspace*{0.6cm}\,,
\eqno({\rm 5ii})$$ $$\begin{array}{rcl}
\Delta_\mu^{\scsc\#} [f(n) g(n)] &=& f(..,n^\mu\!+\!1,..)
\Delta_\mu^{\scsc\#} g(n) + g(..,n^\mu\!-\!1,..)
\Delta_\mu^{\scsc\#} f(n) \\[0.25cm]
&& - f(..,n^\mu\!+\!1,..) g(..,n^\mu\!-\!1,..) \Delta_\mu^{\scsc\#}
(1)\,, \\[0.35cm]
&=& f(..,n^\mu\!-\!1,..) \Delta_\mu^{\scsc\#} g(n)
+ g(..,n^\mu +1,..) \Delta_\mu^{\scsc\#} f(n) \\[0.25cm]
&& - f(..,n^\mu\!-\!1,..) g(..,n^\mu\!+\!1,..)
\Delta_\mu^{\scsc\#}(1)\vspace*{0.7cm}\,,
\end{array}
\eqno({\rm 5iii})$$ $$\Delta_\mu^{\scsc\#}(1) := (1 / \sqrt{2}\,) \left[
\sqrt{n^\mu +1} - \sqrt{n^\mu}\,\right], \; \lim\limits_{n^\mu
\rightarrow \infty} \Delta_\mu^{\scsc\#}(1) = 0\vspace*{0.5cm}\,,
\eqno({\rm 5iv})$$ $$\begin{array}{l}
\Delta_\mu \left\{\sqrt{n^\mu} \,[\phi (n) \psi(..,n^\mu-1,..)
+ \phi (..,n^\mu-1,..) \psi (n)]\right\} \\[0.3cm]
= \sqrt{2}\,[\phi (n) \Delta_\mu^{\scsc\#} \psi (n) + \psi (n)
\Delta_\mu^{\scsc\#} \phi (n)].
\end{array}
\eqno({\rm 5v})$$ 0.8cm
In the left-hand side of the equation (5v), the index $\mu$ is [not]{} summed.
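The rules above are elementary but easy to mistype, so a numerical spot-check is useful. The following illustrative Python sketch (not part of the formal development; all names in it are ours) implements one-dimensional versions of the operators (4i)-(4iii) and verifies the rules (5i), (5iii) and (5v) at a few lattice points for arbitrary test functions.

\begin{verbatim}
import math

# one-dimensional versions of (4i)-(4iii) and of Delta^#(1) from (5iv)
def D(f, n):  return f(n + 1) - f(n)                 # right difference
def Dp(f, n): return f(n) - f(n - 1)                 # left difference
def Dh(f, n): return (math.sqrt(n + 1) * f(n + 1)
                      - math.sqrt(n) * f(n - 1)) / math.sqrt(2)
def Dh1(n):   return (math.sqrt(n + 1) - math.sqrt(n)) / math.sqrt(2)

f = lambda k: 1.0 / (1 + k)                 # arbitrary test functions on N
g = lambda k: math.exp(-0.3 * k)
fg = lambda k: f(k) * g(k)

for n in range(1, 8):
    # Leibnitz rule (5i)
    assert abs(D(fg, n) - (f(n + 1) * D(g, n) + g(n) * D(f, n))) < 1e-12
    # first form of (5iii)
    rhs = (f(n + 1) * Dh(g, n) + g(n - 1) * Dh(f, n)
           - f(n + 1) * g(n - 1) * Dh1(n))
    assert abs(Dh(fg, n) - rhs) < 1e-12
    # rule (5v), with the left-hand side built explicitly
    h = lambda k: math.sqrt(k) * (f(k) * g(k - 1) + f(k - 1) * g(k))
    assert abs(D(h, n)
               - math.sqrt(2) * (f(n) * Dh(g, n) + g(n) * Dh(f, n))) < 1e-12
print("rules (5i), (5iii) and (5v) hold at the sampled lattice points")
\end{verbatim}

Such checks are, of course, no substitute for the direct computations referred to above; they merely guard against sign and index slips.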
We shall furnish a few more rules involving finite difference operations in the following equations: $$\sqrt{n^\nu+1} \,\Delta_\nu \phi (n) + \sqrt{n^\nu}
\Delta_\nu^{\prime} \phi (n) = \sqrt{2}\,[\Delta_\nu^{\scsc\#}
\phi (n) - \phi (n)\cdot \Delta_\nu^{\scsc\#}(1)]\vspace*{0.7cm}\,,
\eqno({\rm 6i})$$ $$\begin{array}{l}
\sqrt{n^\nu+1} \,\Delta_\nu \Delta_\mu^{\scsc\#} \phi (n)
+ \sqrt{n^\nu} \Delta_\nu^{\prime} \Delta_\mu^{\scsc\#}
\phi (n) \\[0.3cm]
= \sqrt{2} \,[\Delta_\nu^{\scsc\#} \Delta_\mu^{\scsc\#}
\phi (n) - \Delta_\mu^{\scsc\#} \phi (n) \cdot
\Delta_\nu^{\scsc\#}(1)]\,,
\end{array} \vspace*{0.8cm}\,
\eqno({\rm 6ii})$$ $$\begin{array}{l}
\sqrt{n^\mu+1} \,[\Delta_\mu \phi (n)]^2
- \sqrt{n^\mu} \,[\Delta_\mu^{\prime} \phi (n)]^2 \\[0.3cm]
= \sqrt{2} \{\Delta_\mu^{\scsc\#} [\phi (n)]^2 - 2\phi (n)
\Delta_\mu^{\scsc\#} \phi (n) + [\phi (n)]^2 \cdot
\Delta_\mu^{\scsc\#}(1)\}\,,
\end{array} \vspace*{0.4cm}\,
\eqno({\rm 6iii})$$ $$\begin{array}{l}
\sqrt{n^\mu+1} \,(\Delta_\mu \Delta_\nu^{\scsc\#}\phi)\cdot
(\Delta_\mu \Delta_\sigma^{\scsc\#}\phi) - \sqrt{n^\mu}
\,(\Delta_\mu^{\prime} \Delta_\nu^{\scsc\#}\phi)\cdot
(\Delta_\mu^{\prime} \Delta_\sigma^{\scsc\#} \phi) \\[0.3cm]
= \sqrt{2} \,\{\Delta_\mu^{\scsc\#}(\Delta_\nu^{\scsc\#}\phi\cdot
\Delta_\sigma^{\scsc\#}\phi) - \Delta_\nu^{\scsc\#}\phi\cdot
(\Delta_\mu^{\scsc\#}\Delta_\sigma^{\scsc\#}\phi)
- \Delta_\sigma^{\scsc\#}\phi\cdot (\Delta_\mu^{\scsc\#}
\Delta_\nu^{\scsc\#}\phi) \\[0.3cm]
+ (\Delta_\nu^{\scsc\#}\phi)\cdot (\Delta_\sigma^{\scsc\#}\phi)
\Delta_\mu^{\scsc\#}(1)\}\,,
\end{array} \vspace*{0.7cm}\,
\eqno{\rm (6iv)}$$ $$\begin{array}{l}
\sqrt{n^\mu+1} \,(\Delta_\mu A_\nu) (\Delta_\mu B_\sigma)
- \sqrt{n^\mu} \,(\Delta_\mu^{\prime}A_\nu) (\Delta_\mu^{\prime}
B_\sigma) \\[0.3cm]
= \sqrt{2}\,[\Delta_\mu^{\scsc\#}(A_\nu B_\sigma) - A_\nu
\Delta_\mu^{\scsc\#}B_\sigma - (\Delta_\mu^{\scsc\#}A_\nu) B_\sigma
+ A_\nu B_\sigma\Delta_\mu^{\scsc\#}(1)]\,.
\end{array}
\eqno({\rm 6v})$$ 0.6cm
Here, neither the index $\mu$ nor the index $\nu$ is summed.
Now the rules for the summations will be listed. $$\sum\limits_{n^\mu=N_1^\mu}^{N_2^\mu} \Delta_\mu f(n)
= f(..,N_2^\mu+1,..) - f(..,N_1^\mu,..)\vspace*{0.4cm}\,,
\eqno({\rm 7i})$$ $$\sum\limits_{n^\mu=N_1^\mu}^{N_2^\mu} (\Delta_\mu^{\prime} f(n)
= f(..,N_2^\mu,..) - f(..,N_1^\mu-1,..)\vspace*{0.6cm}\,,
\eqno({\rm 7ii})$$ $$\begin{array}{l}
{\ds \sum\limits_{n^\mu=N_1^\mu}^{N_2^\mu} \Delta_\mu^{\scsc\#}f(n)
= (1 / \sqrt{2}) \Biggr[\sqrt{N_2^\mu+1} \:f(..,N_2^\mu +1,..)}
\\[0.2cm]
{\ds - \sqrt{N_1^\mu} \:f(..,N_1^\mu -1,..)
+ \sum\limits_{n^\mu = N_1^\mu+1}^{N_2^\mu} \sqrt{n^\mu}
\,\Delta_\mu^{\prime} f(n)\Biggr]\,.} \end{array}
\eqno({\rm 7iii})$$ 0.5cm
It can be noted that right-hand sides of the equations (7i) and (7ii) contain only boundary terms whereas the right-hand side of (7iii) contains many more than just boundary terms. 1.4cm
Gauss’s theorem and conservation laws in a discrete space
=========================================================
Let a domain $D$ of the four-dimensional discrete space $\N^4$ be given by (see Fig.2) $$D := \{n \in \N^4 : \,N_1^\mu < n^\mu < N_2^\mu , \; \mu \in
\{1,2,3,4\}\}\,.
\eqno(8)$$
0.3cm
$$\hspace*{-1cm}\begin{minipage}[t]{15cm}
\beginpicture
\setcoordinatesystem units <.65truecm,.65truecm>
\setplotarea x from 0 to 15, y from 7 to -7
\setsolid
\setlinear
\plot 0 0 14 0 /
\plot 6 6 6 -6 /
\put {\vector(1,0){4}} [Bl] at 14 0
\put {\vector(0,1){4}} [Bl] at 6 6
\setquadratic
\plot 7.85 1.1 7.7 0.8 8 0.5 /
\plot 8 0.5 8.25 0.2 8.3 -0.4 /
\plot 13.15 5.35 13.4 5.5 13.7 5.575 /
\setlinear
\put {\vector(1,0){4}} [Bl] at 13.7 5.575
\put {\vector(0,-1){4}} [Bl] at 8.305 -0.4
\put {${\rm n}^2$} at 6 6.7
\put {${\rm n}^1$} at 14.7 0.1
\put {$({\rm N}_1^1, {\rm N}_1^2)$} at 9.4 -0.9
\put {$({\rm N}_2^1, {\rm N}_2^2)$} at 15.1 5.6
\setplotsymbol ({\circle*{2.5}} [Bl])
\plotsymbolspacing8mm
\setdots <5mm> \plot 8 5.2 13.5 5.2 /
\plot 8 4.2 13.5 4.2 /
\plot 8 3.2 13.5 3.2 /
\plot 8 2.2 13.5 2.2 /
\plot 8 1.2 13.5 1.2 /
\setsolid
\put {{\bf FIG. 2} \ \ A two-dimensional dicrete domain.} at 7 -8.4
\endpicture
\end{minipage}$$ 1.6cm
The discrete boundary points of $D$ are taken to be $$\begin{array}{l}
\!\!\!\!\partial D = \partial_{1-}D \cup \partial_{1+}D \cup
\partial_{2-}D \cup \partial_{2+}D \cup \partial_{3-}D \cup
\partial_{3+}D \cup \partial_{4-}D \cup \partial_{4+}D , \\[0.4cm]
\!\!\!\!\partial_{\mu-}D := \{n\in \N^4 : \,n^\mu = N_1^\mu,
N_1^\sigma \leq n^\sigma \leq N_2^\sigma ,\sigma \neq \mu\} ,\\[0.4cm]
\!\!\!\!\partial_{\mu+}D := \{n\in \N^4 : \,n^\mu = N_2^\mu,
N_1^\sigma \leq n^\sigma \leq N_2^\sigma ,\sigma \neq \mu\} .
\end{array}
\eqno(9)$$ 0.7cm
We also denote the unit “normal” $\nu_\mu$ on the boundary $\partial
D$ by the following definition. $$\nu_\mu (n) := \left\{ \begin{array}{l}
\phantom{-} 1 \; \mbox{on} \; \partial_{\mu+} D\,, \\[0.2cm]
-1 \; \mbox{on} \; \partial_{\mu-} D\,.
\end{array} \right.
\eqno(10)$$ 0.8cm
We assume that a tensor field $j_{..}^{\mu ..}(n)$ is defined over $D\subset \N^4.$ (See equation (34A) for the definition of a tensor field.) Now we are ready to state and prove formally the “discrete Gauss’s theorem” \[5\]. 1.4cm
[**Theorem 3.1**]{} [(Discrete Gauss’s):]{} [*Let a tensor field $j_{..}^{\mu ..}(n)$ be defined over $D\cup \partial D \subset \N^4$, with $D$ and $\partial D$ as in (8) and (9). Then*]{} $$\sum\limits_{D\subset \N^4}^{(4)} [\Delta_\mu j_{..}^{\mu ..} (n)] = \sum\limits_{\partial D\subset \N^4}^{(3)} j_{..}^{\mu ..} (n)\, \nu_\mu (n)\,.
\eqno(11)$$
[**Proof.**]{} Using the equation (7i) four times to the left-hand side of the equation (11) we obtain $$\begin{array}{l}
{\ds \sum\limits_{n^2=N_1^2}^{N_2^2} \sum\limits_{n^3=N_1^3}^{N_2^3}
\sum\limits_{n^4=N_1^4}^{N_2^4} \left[\,\sum\limits_{n^1=N_1^1}^{N_2^1}
\Delta_1 j_{..}^{1..} (n)\right] + .. + .. + .. } \\[0.7cm]
{\ds = \sum\limits_{n^2=N_1^2}^{N_2^2} \sum\limits_{n^3=N_1^3}^{N_2^3}
\sum\limits_{n^4=N_1^4}^{N_2^4} [\,j_{..}^{1..}(N_2^1,..)
- j_{..}^{1..} (N_1^1,..)] + .. + .. + .. } \\[0.7cm]
{\ds \phantom{=\hspace*{0.04cm}} \left[\,\sum\limits_{\partial_{1+}
D}^{(3)} j_{..}^{1..}(n) - \sum\limits_{\partial_{1-}D}^{(3)}
j_{..}^{1..} (n)\right] + .. + .. + .. } \\[0.7cm]
{\ds = \left[\,\sum\limits_{\partial_{1+}D\cup \partial_{1-}D}^{(3)}
j_{..}^{1..}(n) \nu_1(n)\right] + .. + .. + .. } \\[0.7cm]
{\ds = \sum\limits_{\partial D\subset \N^4}^{(3)} j_{..}^{\mu ..}
(n) \nu_\mu (n)\,. }
\end{array}
\eqno{\raisebox{-18.25ex}{\qed}}$$ 0.4cm
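Since the theorem rests on the telescoping rule (7i) applied in each direction, it is easy to confirm numerically. The illustrative sketch below (with an arbitrary two-component test field of our choosing) does so for a two-dimensional analogue, reading the outward boundary values one step beyond the upper summation limits, exactly as repeated use of (7i) produces them.

\begin{verbatim}
import math

# arbitrary test "field" components on N x N
j1 = lambda a, b: math.sin(0.3 * a) * (b + 1)
j2 = lambda a, b: 1.0 / (1 + a + b)

(A1, A2), (B1, B2) = (2, 7), (3, 9)    # summation limits in the two directions

# sum of the right partial differences over the domain
lhs = sum((j1(a + 1, b) - j1(a, b)) + (j2(a, b + 1) - j2(a, b))
          for a in range(A1, A2 + 1) for b in range(B1, B2 + 1))

# boundary sums produced by repeated use of (7i)
rhs = (sum(j1(A2 + 1, b) - j1(A1, b) for b in range(B1, B2 + 1))
       + sum(j2(a, B2 + 1) - j2(a, B1) for a in range(A1, A2 + 1)))

assert abs(lhs - rhs) < 1e-12
\end{verbatim}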
We shall now make some comments on the preceding theorem.\
(i) An alternate theorem holds by replacing the left-hand side of the equation (11) by $\sum\limits_{D\subset \N^4}^{(4)}
{[\Delta_\mu^{\prime} j_{..}^{\mu ..}(n)]}.$ 0.4cm
\(ii) Both forms of Gauss’s theorem can be generalized to any finite-dimensional discrete space. 0.3cm
\(iii) In the finite difference representation of quantum mechanics \[3\], the four momentum operators are furnished by $P_\mu
= -i\Delta_{\mu}^{\scsc\#}.$ These are consequences of the [*relativistic*]{} representations. However, Gauss’s theorem 3.1, which uses $\Delta_\mu$ operators, is [*non-relativistic*]{}. (The relativistic Gauss’s theorem involving $\sum\limits_{D\subset
\N^4}^{(4)} [\Delta_\mu^{\scsc\#} j_{..}^{\mu ..}(n)]$ is not yet solved. See the comments at the end of this section.) The partial difference and difference-differential conservation equations (non-relativistic) are furnished by $$\Delta_\mu j_{..}^{\mu ..} (n) = 0\vspace*{0.1cm}\,,
\eqno({\rm 12A})$$ $$\Delta_b j_{..}^{b..} ({{\mathbf{n}}},t) + \partial_t j_{..}^{4..}
({{\mathbf{n}}},t) = 0\vspace*{0.3cm}\,.
\eqno({\rm 12B})$$ Difference conservation equations lead to summation conservations. We shall presently state and prove a theorem about this topic. 0.7cm
[**Theorem 3.2**]{} [(Conserved sums):]{} 0.7cm
[**Proof.**]{} Using Gauss’s theorem 3.1 and the difference conservation equation (12A), we conclude that $$\begin{array}{rcl}
0 &=& {\ds \sum\limits_{D\subset \N^4}^{(4)}\,[\Delta_\mu j_{..}^{\mu ..}
(n)] } \\[0.7cm]
&=& {\ds \sum\limits_{\partial_{1+}D\cup \partial_{1-}D}^{(3)}
j_{..}^{1..} (n) \nu_1(n) + .. + .. + \sum\limits_{\partial_{4+}D\cup
\partial_{4-}D}^{(3)} j_{..}^{4..} (n) \nu_4 (n). }
\end{array} \vspace*{0.2cm}\,$$ Assuming the boundary conditions (13), the above equation yields $$\begin{array}{rcl}
0 &=& {\ds \sum\limits_{\partial_{4+}D\cup \partial_{4-}D}^{(3)}
j_{..}^{4..} (n) \nu_4(n) } \\[0.6cm]
&=& {\ds \sum\limits_{\partial_{4+}D}^{(3)}
j_{..}^{4..} (n^1,n^2,n^3,N_2^4) - \sum\limits_{\partial_{4-}D}^{(3)}
j_{..}^{4..} (n^1,n^2,n^3,N_1^4). }
\end{array} \vspace*{0.1cm}\,$$ Considering Gauss’s theorem for a proper subset of $D,$ the equation (14) follows. 0.5cm
In the case of the difference conservation equation (12A) being valid for the denumerably infinite domain $\N^4,$ we can derive conserved sums under suitable boundary conditions. Sufficient boundary conditions are $$\begin{array}{rcl}
\partial_{a-}D &:=& \left\{n\in \N^4 : \,n^a=0, \; 0\leq
n^b < \infty, \; a \neq b\right\}\,, \\[0.2cm]
\partial_{a+}D &:=& \left\{n\in \N^4 : \,n^a=M, \; 0\leq n^b
< \infty, \; a \neq b\right\}\,, \\[0.2cm]
&& \lim\limits_{M\rightarrow \infty} \left[j_{..}^{b..}(n)
\,\nu_b(n)\right]_{|\partial_{a-}D\cup\partial_{a+}D}=0\,;
\end{array} \vspace*{0.4cm}\,
\eqno{\rm (15A)}$$ $$\begin{array}{rcl}
\partial_{a-}D &:=& \left\{({{\mathbf{n}}},t)\in \N^3 \times R :
\,n^a=0, \; 0\leq n^b < \infty, \; a \neq b\right\}\,, \\[0.2cm]
\partial_{a+}D &:=& \left\{({{\mathbf{n}}},t)\in \N^3 \times R :
\,n^a=M, \; 0\leq n^b < \infty, \; a \neq b\right\}\,, \\[0.2cm]
&& \lim\limits_{M\rightarrow \infty} \left[j_{..}^{b..}({{\mathbf{n}}},t)
\,\nu_b ({{\mathbf{n}}},t)\right]_{|\partial_{a-}D\cup\partial_{a+}D}=0\,.
\end{array}
\eqno({\rm 15B})$$ 0.4cm
Under boundary conditions (15A), the equation (14) yields the following totally conserved quantities (generalized charges!) $$\begin{array}{rll}
{\ds Q_{..}^{..} = \sum\limits_{n^1=0}^{\infty}
\sum\limits_{n^2=0}^{\infty} \sum\limits_{n^3=0}^{\infty}
j_{..}^{4..} (n^1,n^2,n^3,n^4) } &=:& {\ds \sum\limits_{{{\mathbf{n}}}
\in \N^3}^{(3)} j_{..}^{4..} ({{\mathbf{n}}},n^4) } \\[0.6cm]
&=& {\ds \sum\limits_{{{\mathbf{n}}}\in \N^3}^{(3)} j_{..}^{4..}
({{\mathbf{n}}},2)\,. }
\end{array}
\eqno({\rm 16A})$$ 0.1cm
In the case of the difference-differential conservation equations (12B) and the boundary conditions (15B), we can derive the totally conserved quantities $$Q_{..}^{..} = \sum\limits_{{{\mathbf{n}}}\in \N^3}^{(3)} j_{..}^{4..}
({{\mathbf{n}}},t) = \sum\limits_{{{\mathbf{n}}}\in \N^3}^{(3)} j_{..}^{4..}
({{\mathbf{n}}},0)\vspace*{0.1cm}\,.
\eqno({\rm 16B})$$
One may wonder why we are considering the [*non-relativistic*]{} Gauss’s theorem and the consequent conserved sums at all! In Section 5, we shall prove that [*the theorems of this section can be used tactfully to elicit relativistic conserved sums.*]{}
Relativistic covariance of partial difference and difference-differential wave equations
========================================================================================
The relativistic invariance or covariance of our partial difference or difference-differential equations is quite delicate. The criterion used in this paper has been developed through many papers \[6\] published in the last three decades. Let us start with a very simple example of Poincar[é]{} transformations and the invariance of the usual wave equation under that transformation. Consider the infinitesimal time-translation: $$\begin{array}{l}
\widehat{x}^a = x^a, \, \widehat{x}^4 = x^4 + \varepsilon^4\,,
\\[0.1cm]
x^a = \widehat{x}^a, \, x^4 = \widehat{x}^4 - \varepsilon^4\,.
\end{array}
\eqno(17)$$ There exists in the old frame a [*different*]{} event $(x^{\scsc\#})$ which has the [*same*]{} coordinates $(x)$ in the new frame. The coordinates of $(x^{\scsc\#})$ from (17) are given by $$x^{\scsc\# a} = x^a, \, x^{\scsc\# 4} =
x^4-\varepsilon^4, \, \widehat{x}^{\scsc\# 4} = x^4\vspace*{0.1cm}\,.
\eqno(18)$$ Consider the transformation rule for a scalar field $\phi (x)$ given by $$\begin{array}{rcl}
\widehat{\phi}(\widehat{x}) &=& \phi (x)\,, \\[0.1cm]
\widehat{\phi}(x) &=& \phi (x^{\scsc\#}) = \phi
(x^1,x^2,x^3,x^4-\varepsilon^4)\vspace*{0.2cm}\,.
\end{array}
\eqno(19)$$
Let $\phi (x)$ be a Taylor-expandable (or analytic) function. In that case we can express (19) by Lagrange’s formula as $$\begin{array}{rrl}
\widehat{\phi}(x) &=& \left[\exp (-\varepsilon^4 \partial_4)\right]
\phi (x) \\[0.2cm]
&=& \left[\exp (-i\varepsilon^4 (-i) \partial_4)\right]
\phi (x) \\[0.2cm]
&=& \phi (x) - \varepsilon^4 \partial_4 \phi (x)
+ 0\left[(\varepsilon^4)^2\right]\,, \\[0.2cm]
\partial_\mu &:=& {\ds \frac{\partial}{\partial x^\mu }\,.}
\end{array}
\eqno(20)$$ 0.2cm
Moreover, let $\phi (x)$ satisfy the usual wave equation $$\eta^{\mu\nu} \partial_\mu \partial_\nu \phi (x) = 0\,.
\eqno(21)$$ 0.1cm
In the new frame, $$\begin{array}{rcl}
\eta^{\mu\nu} \partial_\mu \partial_\nu \widehat{\phi}(x) &=&
\eta^{\mu\nu} \partial_\mu \partial_\nu \{[\exp (-\varepsilon^4
\partial_4)] \phi (x)\} \\[0.2cm]
&=& [\exp (-\varepsilon^4 \partial_4)]\,[\eta^{\mu\nu} \partial_\mu
\partial_\nu \phi (x)] = 0\,.
\end{array} \vspace*{0.1cm}\,$$ The above equation demonstrates in an [*unusual*]{} manner the relativistic invariance of the wave equation under an infinitesimal time translation. We shall follow similar proofs in the sequel.
Let us now re-examine the very concept of relativistic invariance or covariance. The relativistic covariance does [*not*]{} necessarily imply that the space and time coordinates must be treated on the same footing. Nor do the equations which treat space and time variables on the equal footing automatically imply the relativistic covariance. For example, let us consider the partial difference Klein-Gordon equation [\[7\]]{} in the lattice space-time as $$\eta^{\mu\nu} \Delta_\mu \Delta_\nu^{\prime} \phi (n) - m^2 \phi
(n) = 0\,.
\eqno(22)$$ This equation treats discrete space and time variables on the same footing. However, the equation (22) is certainly [*not*]{} invariant under the continuous Poincar[é]{} group ${\mathcal{I}}O(3,1)!$ (Although there exists the lattice Lorentz group \[8\], a subgroup of the Lorentz group $O(3,1),$ which leaves the lattice space-time invariant.)
Consider another example, namely the Friedmann-Robertson-Walker cosmological model of the universe. The corresponding metric and the “orthonormal” tetrad are given by \[9\]: $$\begin{array}{rcl}
ds^2 &=& [R(t)]^2 [1+K\delta_{ab} x^ax^b]^{-2} [\delta_{ij} dx^i dx^j]
- (dt)^2\,, \\[0.3cm]
e_{(j)}^\mu &=& [R(t)]^{-1} [1+K\delta_{ab} x^ax^b] \delta_{(j)}^\mu\,,
\\[0.3cm]
e_{(4)}^\mu &=& \delta_{(4)}^\mu\,.
\end{array}
\eqno(23)$$ The above metric and the corresponding tetrad do not treat the space and time variables on equal footing. (Although these are extracted as exact solutions of Einstein’s general covariant equations.) However, if we consider a suitably parametrized motion curve given by a time-like geodesic, the appropriate Lagrangian and the four-momentum are given by: $$\begin{array}{rcl}
L(..) &=& (m / 2) \left\{[R(t)]^2 [1+K\delta_{ab} x^ax^b]^{-2}
{[\delta_{ij} \dot{x}^i \dot{x}^j]} - (\dot{t}^2)\right\}\,, \\[0.3cm]
p_{(j)} &:=& {\ds \frac{\partial L(..)}{\partial \dot{x}^\mu}
\,e_{(j)}^\mu = m[R(t)]\,[1+K\delta_{ab} x^ax^b]^{-1} \delta_{ji}
\dot{x}^i\,, } \\[0.5cm]
p_{(4)} &:=& {\ds \frac{\partial L(..)}{\partial \dot{x}^\mu}
\,e_{(4)}^\mu = -m\dot{t}\,, } \\[0.4cm]
m &>& 0\,.
\end{array}
\eqno(24)$$ Recalling that $g_{\mu\nu} (x) \dot{x}^\mu \dot{x}^\nu \equiv -1$ along a time-like geodesic, we obtain from (24) that $$\eta^{(\mu)(\nu)} p_{(\mu)} p_{(\nu)} = -m^2\vspace*{0.2cm}\,.
\eqno(25)$$ Therefore, the [*special relativistic*]{} equation (25) holds among the tetrad components of the four-momentum locally. In fact, in every reasonable curved universe, with [*any*]{} admissible coordinate system, the special relativity holds [*locally*]{} whether or not the Poincar[é]{} group ${\mathcal{I}}O(3,1)$ is [*globally*]{} admitted as Killing motion.
In a flat space-time, the first quantization of the equation (25) leads to the relativistic operator equation: $${[\eta^{\mu\nu} P_\mu P_\nu + m^2 \,{\mathrm{I}}\,]}
\vec{\phi} = \vec{0}\,.
\eqno(26)$$ Here, $\vec{\phi}$ and $\vec{0}$ are vectors in the tensor-product [\[3\]]{} of the Hilbert spaces. The $P_\mu$’s represent four self-adjoint, unbounded linear operators and “I” stands for the identity operator. The equation (26) physically represents the quantum mechanics of a massive, spin-less, free particle. Let us explore the relativistic invariance of the operator equation (26). The finite and infinitesimal versions of a Poincar[é]{} transformation in the classical level are given respectively by: $$\begin{array}{l}
\widehat{x}^\mu = a^\mu + \ell_\nu^\mu x^\nu\,, \\[0.15cm]
\eta_{\mu\nu} \ell_\alpha^\mu \ell_\beta^\nu = \eta_{\alpha\beta}\;;
\\[0.35cm]
\widehat{x}^\mu = x^\mu + \varepsilon^\mu + \varepsilon_\nu^\mu x^\nu\,,
\\[0.125cm]
\varepsilon_{\mu\nu} := \eta_{\mu\sigma} \varepsilon_\nu^\sigma =
-\varepsilon_{\nu\mu} + 0(\varepsilon^2)\,.
\end{array} \vspace*{0.2cm}\,
\eqno(27)$$
However, there exist in a deeper level, the quantum Poincar[é]{} transformations on quantum mechanical operators. In fact, there are [*two*]{} possible quantum Poincar[é]{} transformations of operators which involve the [*same*]{} unitary mapping. The Heisenberg-type of Poincar[é]{} transformations \[10\] under the infinitesimal version of (27) are furnished by: $$U(\varepsilon) := \exp \left\{-i \varepsilon^\mu P_\mu + (i/4)
\varepsilon^{\mu\nu} (Q_\mu P_\nu - Q_\nu P_\mu + P_\nu Q_\mu
- P_\mu Q_\nu )\right\},
\eqno({\rm 28iH})$$ $$Q_\mu P_\nu - P_\nu Q_\mu = i\eta_{\mu\nu} {\mathrm{I}}\vspace*{0.2cm}\,,
\eqno({\rm 28iiH})$$ $$\widehat{P}_\mu = U^{\dagger} (\varepsilon ) P_\mu U(\varepsilon)
= P_\mu - \varepsilon_\mu^\rho P_\rho + 0(\varepsilon^2)\vspace*{0.2cm}\,,
\eqno({\rm 28iiiH})$$ $$\widehat{Q}^\mu = U^{\dagger} (\varepsilon) Q^\mu U(\varepsilon)
= Q^\mu + \varepsilon^\mu {\mathrm{I}} + \varepsilon_\rho^\mu Q^\rho
+ 0(\varepsilon^2)\vspace*{0.1cm}\,,
\eqno({\rm 28ivH})$$ $$\widehat{\!\!\vec{\phi}} = \vec{\phi}\vspace*{0.2cm}\,.
\eqno({\rm 28vH})$$ Here, the dagger denotes the hermitian conjugation.
In the Schroedinger-type of covariance \[10\], the abstract operators and the state vectors transform under the infinitesimal version of (27) as: $$\widehat{P}_\mu = P_\mu , \, \widehat{Q}^\mu = Q^\mu\vspace*{0.05cm}\,,
\eqno({\rm 28iS})$$ $$\widehat{\!\!\vec{\phi}} = U(\varepsilon) \vec{\phi}\vspace*{0.2cm}\,,
\eqno({\rm 28iiS})$$ $$\begin{array}{rcl}
\delta_L \vec{\phi} &:=& \widehat{\!\!\vec{\phi}} - \vec{\phi} \; = \;
\Big\{\!-i\,[\varepsilon^\mu P_\mu - (1/4) \varepsilon^{\mu\nu}
\\[0.1cm]
&& \!\!(Q_\mu P_\nu
- Q_\nu P_\mu + P_\nu Q_\mu - P_\mu Q_\nu )] + 0(\varepsilon^2)\Big\}
\vec{\phi}\,.
\end{array} \vspace*{0.3cm}\,
\eqno({\rm 28iiiS})$$ Here, $U(\varepsilon)$ is the same as in (28iH). The vector $\delta_L \vec{\phi}$ is called the Lie-variation of the vector $\vec{\phi}.$
Note that in the two types of transformations, the expectation values of a polynomial operator $F(P,Q)$ are given by: $$\begin{array}{rcl}
\langle\,\,\widehat{\!\!\vec{\phi}}_b |F(\widehat{P}, \widehat{Q})|\,\,
\widehat{\!\!\vec{\phi}}_a\rangle_H &=& \langle \vec{\phi}_b
|F[U^{\dagger}PU, U^{\dagger}QU]| \vec{\phi}_a\rangle \\[0.25cm]
&=& \langle \vec{\phi}_b| U^{\dagger} F(P,Q)U|
\vec{\phi}_a\rangle = \langle U\vec{\phi}_b| F(P,Q)|
U\vec{\phi}_a\rangle \\[0.2cm]
&=& \langle\,\,\widehat{\!\!\vec{\phi}}_b |F(\widehat{P},
\widehat{Q})|\,\,\widehat{\!\!\vec{\phi}}_a\rangle_{ _S}\vspace*{0.1cm}\,.
\end{array}$$ (Here, $\langle \vec{\phi}_b | \vec{\phi}_a\rangle$ denotes the inner-product of the two vectors $\vec{\phi}_b$ and $\vec{\phi}_a$.) Therefore, physical quantities transform exactly in the same manner under Heisenberg-type and Schroedinger-type of quantum covariance rules. We shall follow the Schroedinger-type of covariance in this paper.
It is well known \[11\] that the operator $P^\mu P_\mu,$ which is one of the Casimir operators of the Poincar[é]{} group ${\mathcal{I}} O(3,1),$ [*commutes*]{} with all the generators $P_\mu$ and $(1/4) (Q_\mu P_\nu - Q_\nu P_\mu + P_\nu Q_\mu
- P_\mu Q_\nu )$ of the Poincar[é]{} group. Therefore, we obtain from (28i-ivS), $$\begin{array}{l}
{[\widehat{P}^\sigma \widehat{P}_\sigma + m^2\,{\mathrm{I}}\,]}\,\,\,
\widehat{\!\!\vec{\phi}} = [P^\sigma P_\sigma +m^2\,{\mathrm{I}}\,]
\,[\vec{\phi}+\delta_L \vec{\phi}] \\[0.3cm]
= [P^\sigma P_\sigma + m^2\,{\mathrm{I}}\,]\,[\delta_L
\vec{\phi}] \\[0.3cm]
= -i\,[\varepsilon^\mu P_\mu\!-\!(1/4)\varepsilon^{\mu\nu}
(Q_\mu P_\nu\!-\!Q_\nu P_\mu\!+\!P_\nu Q_\mu\!-\!P_\mu Q_\nu)] \times
\\[0.2cm]
\phantom{ = } \;[P^\sigma P_\sigma
\!+\!m^2\,{\mathrm{I}}\,] \vec{\phi}\!+\!0(\varepsilon^2) \\[0.3cm]
= 0(\varepsilon^2)\,.
\end{array} \vspace*{0.2cm}\,
\eqno(29)$$ Therefore, the operator equation (29) is an actual proof of the relativistic invariance (up to the 2nd order terms) for the operator Klein-Gordon equation (26).
We can generalize the Lie-variation (28ivH) for an arbitrary relativistic tensor (or spinor) operator $\vec{\phi}^{..}$. The appropriate definition is given by: $$\begin{array}{l}
\widehat{\,\vec{\phi}^{..}} - \vec{\phi}^{..} = \delta_L
\vec{\phi}^{..} := -i\,\Big\{\varepsilon^\mu P_\mu -(1/4)
\varepsilon^{\mu\nu} \\[0.25cm]
\Big[(Q_\mu P_\nu - Q_\nu P_\mu + P_\nu
Q_\mu - P_\mu Q_\nu ) + 2S_{\mu\nu {..}}^{..}\Big]\Big\}
\vec{\phi}^{..} + 0(\varepsilon^2)\,,
\end{array} \vspace*{0.1cm}\,
\eqno(30)$$ where $S_{\mu\nu {..}}^{..} = -S_{\nu\mu {..}}^{..}$ denotes the “spin operator”.
The values of the entries for $S_{\mu\nu {..}}^{..}$ can be calculated exactly from the [*usual*]{} tensor and spinor transformation rules. For example, if we consider a vector field $\vec{\phi}^{..}$ with components $\phi^\alpha,$ then $S_{\mu\nu\beta}^\alpha =
\eta_{\nu\beta} \delta_{\mu}^\alpha - \eta_{\mu\beta}
\delta_\nu^\alpha .$
Now we shall state a necessary criterion for the relativistic invariance of an operator equation.
[*“Let an infinite-dimensional Hilbert space vector $\vec{\phi}^{..}$ represent a particle with zero or non-zero spin. Let the corresponding Lie-variation $\delta_L \vec{\phi}^{..}$ be given by the equation [(30)]{}. If the operator equation for $\vec{\phi}^{..}$ is relativistically invariant or covariant, then the Lie-variation $\delta_L \vec{\phi}^{..}$ must satisfy the same operator equation as $\vec{\phi}^{..}$ does [(]{}up to the [2]{}nd order terms[)]{}.”*]{}
Let us apply the above criterion of covariance to different first-quantized equations arising out of [*different representations*]{} of quantum mechanics. Firstly, consider the usual Schroedinger representation of quantum mechanics, namely, $P_\mu
= -i\partial_\mu$ and $Q^\mu = x^\mu.$ The operator equation (26) of the mass-shell constraint goes over into $$-\square \phi (x) + m^2 \phi (x) := -[\eta^{\mu\nu} \partial_\mu
\partial_\nu
\phi (x) - m^2 \phi (x)] = 0\,.$$ This is the usual Klein-Gordon equation. The Lie-variation under ${\mathcal{I}}O(3,1)$ is given by $$\delta_L \phi (x) = -\left[\varepsilon^\mu \partial_\mu - (1/4)
\varepsilon^{\mu\nu} (x_\mu \partial_\nu - x_\nu \partial_\mu
+ \partial_\nu x_\mu - \partial_\mu x_\nu)\right] \phi (x)
+ 0(\varepsilon^2)\,.$$ Therefore, $$\begin{array}{l}
-\square \delta_L \phi (x) + m^2 \delta_L \phi (x) \\[0.2cm]
= -[\varepsilon^\mu \partial_\mu\!-\!(1/4)\varepsilon^{\mu\nu}
(x_\mu \partial_\nu\!-\!x_\nu \partial_\mu\!+\!\partial_\nu
x_\mu\!-\!\partial_\mu x_\nu)]
\,[-\square \phi (x)\!+\!m^2 \phi (x)]\!+\!0(\varepsilon^2) \\[0.2cm]
= 0 (\varepsilon^2)\,.
\end{array} \vspace*{0.2cm}\,$$ Therefore, the relativistic invariance (up to the second order terms) is assured.
In the second example, let us consider the finite-difference representation \[3\] of quantum mechanics by putting $$P_\mu = -i\Delta_\mu^{\scsc\#} , \quad Q_\mu =
(1 / \sqrt{2}\,) (\Delta_\mu \sqrt{n^\mu} - \sqrt{n^\mu}
\Delta_\mu^{\prime} + 2\sqrt{n^\mu}\, {\mathrm {I}})\,.$$ Here, the index $\mu$ is not summed. The operator equation (26) yields $$\begin{array}{l}
\eta^{\mu\nu} \Delta_\mu^{\scsc\#} \Delta_\nu^{\scsc\#} \phi (n)
- m^2 \phi (n) = 0\,, \\[0.2cm]
n := (n^1,n^2,n^3,n^4) \in \N^4\,.
\end{array} \vspace*{0.2cm}\,
\eqno{\rm (31A)}$$ This is the finite difference version of the Klein-Gordon equation. The corresponding Lie-variation is given by: $$\begin{array}{l}
\delta_L \phi (n) = -\Big\{\varepsilon^\mu \Delta_\mu^{\scsc\#} \phi
(n) - (1/4\sqrt{2}) \varepsilon^{\mu\nu} \Big[(\Delta_\mu \sqrt{n^\mu}
- \sqrt{n^\mu} \Delta^{\prime}_{\mu} + 2\sqrt{n^{\mu}})
\Delta_\nu^{\scsc\#} \\[0.2cm]
-(\Delta_\nu \sqrt{n^\nu} - \sqrt{n^\nu} \Delta_\nu^{\prime}
+ 2\sqrt{n^\nu}) \Delta_\mu^{\scsc\#} + \Delta_\nu^{\scsc\#}
(\Delta_\mu \sqrt{n^\mu} - \sqrt{n^\mu} \Delta_\mu^{\prime} +
2\sqrt{n^\mu}) \\[0.2cm]
-\Delta_\mu^{\scsc\#} (\Delta_\nu \sqrt{n^\nu} - \sqrt{n^\nu}
\Delta_\nu^{\prime} + 2\sqrt{n^\nu})\Big] \phi (n) \Big\} +
0(\varepsilon^2)\vspace*{0.2cm}\,.
\end{array}$$ Here, the indices $\mu ,\nu$ are summed. It can be proved by direct computations that $$\eta^{\mu\nu} \Delta_\mu^{\scsc\#} \Delta_\nu^{\scsc\#} [\delta_L
\phi (n)] - m^2 [\delta_L \phi (n)] = 0(\varepsilon^2)\,.$$ Thus the partial difference equation (31A) is indeed invariant (up to the second order terms) under the continuous Poincar[é]{} group!
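It may help to record explicitly, as a side remark of ours that is not part of the original argument, how the operators of this representation act on a lattice function; the actions quoted below can be read off from the expansions employed in Appendix II (the index $\mu$ is not summed): $$\begin{array}{rcl}
(\Delta_\mu^{\scsc\#} \phi )(n) &=& (1 / \sqrt{2}\,)\left[\sqrt{n^\mu+1}\; \phi (..,n^\mu+1,..) - \sqrt{n^\mu}\; \phi (..,n^\mu-1,..)\right], \\[0.3cm]
(Q_\mu \phi )(n) &=& (1 / \sqrt{2}\,)\left[\sqrt{n^\mu+1}\; \phi (..,n^\mu+1,..) + \sqrt{n^\mu}\; \phi (..,n^\mu-1,..)\right].
\end{array}$$ A direct computation then yields $$\left[(P^\mu)^2 + (Q^\mu)^2\right] \phi (n) = (2n^\mu+1)\, \phi (n)\,,$$ so that $(P^\mu)^2 + (Q^\mu)^2$ acts in this representation simply as multiplication by $2n^\mu+1$; this is the origin of the eigenvalue $2n^\mu+1$ and of the discrete circles of radius $\sqrt{2n^\mu+1}$ in the phase plane discussed below (see Fig.1).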
Now, as the third example, let us consider a [*mixed difference-differential*]{} representation of quantum mechanics by choosing $$\begin{array}{l}
P_b = -i\Delta_b^{\scsc\#}, \, P_4 = -i\partial_4 \equiv
i\partial_t\,, \\[0.2cm]
Q_b = (1 / \sqrt{2})\,(\Delta_b \sqrt{n^b} - \sqrt{n^b}
\Delta_b^{\prime} + 2\sqrt{n^b}\,{\mathrm{I}}\,), \, Q^4
= x^4\equiv t\,.
\end{array}$$ Here, the index $b$ is not summed. The operator equation (26) yields the [*difference-differential*]{} version of the Klein-Gordon equation: $$\begin{array}{l}
\delta^{ab} \Delta_a^{\scsc\#} \Delta_b^{\scsc\#} \phi ({{\mathbf{n}}},t)
- (\partial_t)^2 \phi ({{\mathbf{n}}},t) - m^2 \phi ({{\mathbf{n}}},t) = 0\,,
\\[0.2cm]
({{\mathbf{n}}},t) \in \N^3 \times \R\,.
\end{array}
\eqno{\rm (31B)}$$ Here $\delta^{ab}$ is the Kronecker delta. We have for the Lie-variation $$\begin{array}{l}
\delta_L \phi ({{\mathbf{n}}},t) = \\[0.25cm]
-\Big\{\varepsilon^b \Delta_b^{\scsc\#}
+ \varepsilon^4 \partial_t - (1/4\sqrt{2})
\varepsilon^{ab} \Big[(\Delta_a \sqrt{n^a} -
\sqrt{n^a} \Delta_a^{\prime} + 2\sqrt{n^a})
\Delta_b^{\scsc\#}
\\[0.25cm]
-(\Delta_b \sqrt{n^b} - \sqrt{n^b} \Delta_b^{\prime} + 2\sqrt{n^b})
\Delta_a^{\scsc\#} + \Delta_b^{\scsc\#}(\Delta_a \sqrt{n^a} -
\sqrt{n^a} \Delta_a^{\prime} + 2\sqrt{n^a}) \\[0.25cm]
- \Delta_a^{\scsc\#} (\Delta_b \sqrt{n^b} - \sqrt{n^b}
\Delta_b^{\prime} + 2\sqrt{n^b})\Big] \phi ({{\mathbf{n}}},t) \\[0.25cm]
+ (1 / \sqrt{2}) \varepsilon^{a4} \Big[(\Delta_a \sqrt{n^a}
- \sqrt{n^a} \Delta_a^{\prime}) \partial_t - t\Delta_a^{\scsc\#}
\Big]\Big\} \phi ({{\mathbf{n}}},t)\,.
\end{array} \vspace*{0.2cm}\,
\eqno{\rm (32B)}$$ Here, the indices $a, b$ are summed. We can prove by a long calculation that $$\delta^{ab} \Delta_a^{\scsc\#} \Delta_b^{\scsc\#} [\delta_L \phi
({{\mathbf{n}}},t)] - (\partial_t)^2 [\delta_L \phi ({{\mathbf{n}}},t)] - m^2
[\delta_L \phi ({{\mathbf{n}}},t)] = 0(\varepsilon^2)\,.$$ In other words, the difference-differential equation (31B) is indeed invariant (up to the second order terms) under the ten-parameter continuous group ${\mathcal{I}}O(3,1)$!
As the last example, consider Schroedinger’s difference-differential equation \[3\] for a free particle (with $m>0)$ as $$(2m)^{-1} \delta^{ab} \Delta_a^{\scsc\#} \Delta_b^{\scsc\#} \psi
({{\mathbf{n}}},t) + i\partial_t \psi ({{\mathbf{n}}},t) = 0\,.$$ The Lie-variation $\delta_L \psi({{\mathbf{n}}},t)$ according to the equation (32B) does [*not*]{} satisfy the Schroedinger difference-differential equation up to the second order terms. The reason for this failure is that the operator $P_4 = i\partial_t$ does [*not*]{} commute with the three particular generators $Q_4P_b
- Q_bP_4$ of the Poincar[é]{} group. Therefore, we obtain an alternate proof of the well-known fact that the Schroedinger equation is non-relativistic. Thus, our necessary criterion involving the Lie-variation can prove or else disprove the relativistic invariance or covariance (up to the second order terms) of a quantum mechanical system in [*any*]{} representation.
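In fact, the non-commutativity invoked here can be made explicit by a one-line check of ours, using only the canonical commutation relations (so that $[P_4, Q_4]$ is a multiple of the identity, while $P_4$ commutes with $Q_b$ and $P_b$): the symmetrized generators appearing in (28iiiS) satisfy $$\big[\,P_4\,,\; Q_4P_b - Q_bP_4 + P_bQ_4 - P_4Q_b\,\big] = 2\,[P_4, Q_4]\,P_b \neq 0\,,$$ whereas, as recalled earlier, every generator of ${\mathcal{I}}O(3,1)$ commutes with the Casimir operator $P^\sigma P_\sigma$.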
We need to define the exact transformation rules for a tensor or spinor field in partial difference and difference-differential representations. Recall from (27) that the finite Poincar[é]{} transformation is given by $$\begin{array}{l}
\widehat{x}^\mu = a^\mu + \ell_\nu^\mu x^\nu \,, \\[0.1cm]
\omega^{\mu\sigma} := \eta^{\sigma\nu} (\ell_\nu^\mu -
\delta_\nu^\mu)\,, \\[0.1cm]
\omega _{\alpha\beta} + \omega_{\beta\alpha} + \eta_{\mu\nu}
\omega_\alpha^\mu \omega_\beta^\nu = 0\,.
\end{array} \vspace*{0.2cm}\,
\eqno(33)$$
The [*exact*]{} transformation rules for the tensor and spinor fields under (33) are furnished by: $$\begin{array}{rcl}
\widehat{\phi}^{..}(n) &=& \Big(\exp \Big\{-a^\mu \Delta_\mu^{\scsc\#}+
(1/8\sqrt{2}) \,(\omega^{\mu\nu}-\omega^{\nu\mu}) \\[0.35cm]
&& \Big[(\Delta_\mu\sqrt{n^\mu}-\sqrt{n^\mu}
\Delta_\mu^{\prime}+2\sqrt{n^\mu}) \Delta_\nu^{\scsc\#} \\[0.3cm]
&& - (\Delta_\nu \sqrt{n^\nu} - \sqrt{n^\nu} \Delta_\nu^{\prime} +
2\sqrt{n^\nu}) \Delta_\mu^{\scsc\#} \\[0.3cm]
&& + \Delta_\nu^{\scsc\#} (\Delta_\mu
\sqrt{n^\mu} - \sqrt{n^\mu} \Delta_\mu^{\prime} + 2\sqrt{n^\mu})
\\[0.3cm]
&& - \Delta_\mu^{\scsc\#} (\Delta_\nu \sqrt{n^\nu} - \sqrt{n^\nu}
\Delta_\nu^{\prime} + 2\sqrt{n^\nu}) + i2S_{\mu\nu ..}^{..}
\Big]\Big\}\Big) \cdot \phi^{..}(n)\,,
\end{array}
\eqno{\rm (34A)}$$ $$\begin{array}{rcl}
\widehat{\phi}^{..}({{\mathbf{n}}},t) &=& \Big(\exp \Big\{-a^b \Delta_b^{\scsc\#}
- a^4\partial_t - (1/8\sqrt{2})\,(\omega^{ab} - \omega^{ba}) \\[0.3cm]
&& \Big[(\Delta_a
\sqrt{n^a} - \sqrt{n^a} \Delta_a^{\prime} + 2\sqrt{n^a})
\Delta_b^{\scsc\#} \\[0.3cm]
&& - (\Delta_b \sqrt{n^b} - \sqrt{n^b} \Delta_b^{\prime} +
2\sqrt{n^b}) \Delta_a^{\scsc\#} \\[0.3cm]
&& + \Delta_b^{\scsc\#} (\Delta_a
\sqrt{n^a} - \sqrt{n^a} \Delta_a^{\prime} + 2\sqrt{n^a}) \\[0.3cm]
&& - \Delta_a^{\scsc\#} (\Delta_b \sqrt{n^b} - \sqrt{n^b}
\Delta_b^{\prime} + 2\sqrt{n^b}) + i2S_{ab ..}^{..}\Big] \\[0.3cm]
&& - (i/2\sqrt{2})\,
(\omega^{a4} - \omega^{4a})\Big[(\Delta_a \sqrt{n^a} - \sqrt{n^a}
\Delta_a^{\prime}) \partial_t \\[0.3cm]
&& - t\Delta_a^{\scsc\#} + iS_{a4 ..}^{..}\Big]\Big\}\Big) \cdot
\phi^{..}({{\mathbf{n}}},t)\,.
\end{array} \vspace*{0.3cm}\,
\eqno{\rm (34B)}$$ Here, indices $\mu, \nu$ and $a, b$ are summed and $S_{\mu\nu ..}^{..}$ stands for the spin-operator for the field $\phi^{..}(n)$ or $\phi^{..}({{\mathbf{n}}},t).$
An obvious question that arises is how the discrete variables $n^1,
n^2,n^3,n^4$ transform from one inertial observer to another! Recall that the integer $n^\mu$ appears through the eigenvalue $2n^\mu+1$ of the operator $(P^\mu)^2 + (Q^\mu)^2.$ A particle in the corresponding eigenstate $\vec{\psi}_{n^\mu }$ is located on a circle of radius $\sqrt{2n^\mu+1}$ in the $p^\mu\!\!-\!x^\mu$ phase plane (see Fig.1). However, for a relatively moving observer, according to the Schroedinger-type of covariance, the particle is in the state $\,\,\widehat{\!\!\vec{\psi}} = U(\varepsilon) \vec{\psi}_{n^\mu}.$ (See equations (28iS,iiS,iiiS).) But $\,\,\widehat{\!\!\vec{\psi}}$ is [*not*]{} an eigenstate of the operator $(\widehat{P}^\mu)^2 +
(\widehat{Q}^\mu)^2 = (P^\mu)^2 + (Q^\mu)^2.$ Therefore, the particle appears in a [*fuzzy domain*]{} in the phase plane for the moving observer. But for the moving observer, there exists [*another state*]{} $\,\,\widehat{\!\!\vec{\psi}}_{n^\mu} := \vec{\psi}_{n^\mu}
\neq U(\varepsilon) \vec{\psi}_{n^\mu} = \,\,\widehat{\!\!\vec{\psi}}$ such that $[(\widehat{P}^\mu)^2 + (\widehat{Q}^\mu)^2]
\,\,\widehat{\!\!\vec{\psi}}_{n^\mu} = [(P^\mu)^2 + (Q^\mu)^2]
\vec{\psi}_{n^\mu} = (2n^{\mu}+1) \,\,\widehat{\!\!\vec{\psi}}_{n^{\mu}}.$ Therefore, $\,\,\widehat{\!\!\vec{\psi}}_{n^\mu}$ is an eigenvector for the operator $(\widehat{P}^\mu)^2 +
(\widehat{Q}^\mu)^2$ in the moving frame and the corresponding eigenvalue is $2n^\mu+1.$ [*Thus both observers have exactly similar discretized phase planes [(]{}see Fig.[1),]{} although discrete circles in one frame do not transform into discrete circles in another frame!*]{}
Now, we shall prove the relativistic invariance of the summation operation $\sum\limits_{n^1=0}^{\infty} \sum\limits_{n^2=0}^{\infty}
\sum\limits_{n^3=0}^{\infty} \sum\limits_{n^4=0}^{\infty}$ and the sum-integral operation $\sum\limits_{n^1=0}^{\infty}
\sum\limits_{n^2=0}^{\infty} \sum\limits_{n^3=0}^{\infty}\,
\int\limits_{\R} dt.$ Consider the state vector $\vec{\phi}$ representing a scalar particle in the abstract equations in (26) and (28i,ii,iiiS). Mathematically speaking, $\vec{\phi} \in {\cal{H}}_1
\otimes {\cal{H}}_2 \otimes {\cal{H}}_3 \otimes {\cal{H}}_4$, the tensor product of Hilbert spaces \[3,12\]. Here, the Hilbert space ${\cal{H}}_\mu$ is acted upon by the operators $P_\mu, Q_\mu$ etc. The square of the norm of $\vec{\phi}$ is given by: $$\|\,\vec{\phi}\,\|^2 := \langle
\vec{\phi}\,|\vec{\phi}\,\rangle\vspace*{0.2cm}\,.$$ But from (28iH), (28iiS) we have $$\|\,\,\,\widehat{\!\!\vec{\phi}}\,\|^2 = \langle\vec{\phi}\,|U^{\dagger}
(\varepsilon) U(\varepsilon) \vec{\phi}\,\rangle
= \|\,\vec{\phi}\,\|^2\,,$$ where the dagger denotes the hermitian-conjugation. Thus, the norm $\|\,\vec{\phi}\,\|$ is invariant under the unitary transformation induced by an infinitesimal Poincar[é]{} transformation.\
Moreover, under a finite Poincar[é]{} transformation which induces a unitary mapping, we can conclude that $\|\,\,\,\widehat{\!\!\vec{\phi}}\,\| = \|\,\vec{\phi}\,\|.$
In the finite difference representation of the scalar field \[3\] we have $$\|\,\vec{\phi}\,\|^2 := \sum\limits_{n^1=0}^{\infty}
\sum\limits_{n^2=0}^{\infty} \sum\limits_{n^3=0}^{\infty}
\sum\limits_{n^4=0}^{\infty} |\phi (n^1,n^2,n^3,n^4)|^2\vspace*{0.1cm}\,.$$ Moreover, in the difference-differential representation $$\|\,\vec{\phi}\,\|^2 := \sum\limits_{n^1=0}^{\infty}
\sum\limits_{n^2=0}^{\infty} \sum\limits_{n^3=0}^{\infty}\,
\int\limits_{\R}\, |\phi (n^1,n^2,n^3,t)|^2 dt\,.$$ Therefore, by the invariance of $\|\vec{\phi}\|^2,$ both operations $\sum\limits_{n^1=0}^{\infty} \sum\limits_{n^2=0}^{\infty}
\sum\limits_{n^3=0}^{\infty} \sum\limits_{n^4=0}^{\infty}$ and $\sum\limits_{n^1=0}^{\infty} \sum\limits_{n^2=0}^{\infty}
\sum\limits_{n^3=0}^{\infty}\,\int\limits_{\R} dt$ must be relativistically invariant.
Variational formalism and conservation equations
================================================
Let us consider a real-valued tensor field $A^{\mu ..}(n)$ in the difference representation and $A^{\mu ..}({{\mathbf{n}}},t)$ in the difference-differential representation. The action sum and the action sum-integral for such fields are defined, respectively, by: $$\begin{array}{l}
{\mathcal{A}}(A^{\nu ..}) := {\ds \sum\limits_{n^1=N_1^1}^{N_2^1}
\sum\limits_{n^2=N_1^2}^{N_2^2} \sum\limits_{n^3=N_1^3}^{N_2^3}
\sum\limits_{n^4=N_1^4}^{N_2^4} } \\[0.6cm]
L\left(n; y^{\nu ..}\,;
y_\mu^{\nu ..}\right)_{|y^{\nu ..}=A^{\nu ..}(n), \, y_\mu^{\nu ..}
= \Delta_\mu^{{\mbox{\tts\#}}} A^{\nu ..}(n)} \:,
\end{array}
\eqno{\rm(35A)}$$ $$\begin{array}{l}
{\mathcal{A}}(A^{\nu ..}) := {\ds \sum\limits_{n^1=N_1^1}^{N_2^1}
\sum\limits_{n^2=N_1^2}^{N_2^2} \sum\limits_{n^3=N_1^3}^{N_2^3}\,
\int\limits_{t_1}^{t_2} } \\[0.6cm]
L\Big({{\mathbf{n}}},t; y^{\nu ..}\,;
y_b^{\nu ..}, y_4^{\nu ..}\Big)_{|y^{\nu ..}=A^{\nu ..}
({{\mathbf{n}}},t), \, y_b^{\nu ..} = \Delta_b^{{\mbox{\tts\#}}}
A^{\nu ..}, \, y_4^{\nu..} = \partial_t A^{\nu ..}}\,.
\end{array} \vspace*{0.6cm}\,
\eqno{\rm(35B)}$$
The Euler-Lagrange equations under [*zero*]{} boundary variations for $A^{\nu ..}(n)$ or $A^{\nu ..}({{\mathbf{n}}},t)$ are given by (see Appendix I): $$\frac{\partial L(..)}{\partial y^{\nu ..}\,_{\!|..}}\, -
\Delta_\mu^{\scsc\#} \left\{\left[ \frac{\partial L(..)}{\partial
y_\mu^{\nu ..}}\right]_{|..}\right\} = 0\vspace*{0.2cm}\,,
\eqno{\rm (36A)}$$ $$\frac{\partial L(..)}{\partial y^{\nu ..}\,_{\!|..}}\, -
\Delta_b^{\scsc\#} \left\{\left[ \frac{\partial L(..)}{\partial
y_b^{\nu ..}}\right]_{|..}\right\} - \partial_t \left\{\left[
\frac{\partial L(..)}{\partial y_4^{\nu ..}}\right]_{|..}\right\}
= 0\vspace*{0.5cm}\,.
\eqno{\rm (36B)}$$
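As a minimal illustration of the Euler-Lagrange equations (36A) and (36B), with a Lagrangian that is our own standard choice rather than one prescribed in the text, take for a real scalar field the second-degree polynomial $$L\left(n; y\,; y_\mu\right) = -\frac{1}{2}\left(\eta^{\mu\nu}\, y_\mu\, y_\nu + m^2\, y^2\right).$$ Then $\partial L/\partial y = -m^2 y$ and $\partial L/\partial y_\mu = -\eta^{\mu\nu} y_\nu ,$ so that (36A) reduces to $\eta^{\mu\nu} \Delta_\mu^{\scsc\#} \Delta_\nu^{\scsc\#} A(n) - m^2 A(n) = 0,$ the real-field form of the partial difference Klein-Gordon equation (31A). Similarly, the choice $L = -\frac{1}{2}\big[\delta^{ab}\, y_a\, y_b - (y_4)^2 + m^2\, y^2\big]$ in (36B) reproduces the difference-differential equation (31B).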
In the case of a complex-valued tensor field $\phi^{\nu ..}(n)$ and $\phi^{\nu ..}({{\mathbf{n}}},t),$ the action sum and sum-integral are defined respectively by: $$\begin{array}{l}
{\mathcal{A}} (\phi^{\nu ..}, \overline{\phi^{\nu ..}}\,) :=
{\ds \sum\limits_{n^1=N_1^1}^{N_2^1} \sum\limits_{n^2=N_1^2}^{N_2^2}
\sum\limits_{n^3=N_1^3}^{N_2^3} \sum\limits_{n^4=N_1^4}^{N_2^4} }
\\[0.6cm]
L\Big(n; \rho^{\nu ..}, \overline{\rho^{\nu ..}}, \rho_\mu^{\nu ..},
\overline{\rho_\mu^{\nu ..}}\Big)
{\hspace*{-0.2cm}\mbox{\raisebox{-2.225ex}{$
{\begin{array}{l} {\scsc |\rho^{\nu ..}
= \phi^{\nu ..}(n), \,\overline{\rho^{\nu ..}} = \overline{\phi^{\nu
..}(n)} } \\[-0.15cm] { \scsc |\rho_\mu^{\nu ..}=\Delta_\mu^{\mbox{\tts\#}}
\phi^{\nu..}, \,\overline{\rho_\mu^{\nu ..}} = \Delta_\mu^{\mbox{\tts\#}}
\overline{\phi^{\nu ..}(n)} } \end{array}}\,, $}}}
\end{array}
\eqno{\rm (37A)}$$ $$\begin{array}{l}
{\mathcal{A}} (\phi^{\nu ..}, \overline{\phi^{\nu ..}}\,) :=
{\ds \sum\limits_{n^1=N_1^1}^{N_2^1} \sum\limits_{n^2=N_1^2}^{N_2^2}
\sum\limits_{n^3=N_1^3}^{N_2^3}\, \int\limits_{t_1}^{t_2} }
\\[0.6cm]
L\Big({{\mathbf{n}}},t; \rho^{\nu ..}, \overline{\rho^{\nu ..}}\,;
\rho_b^{\nu..}, \overline{\rho_b^{\nu ..}}, \rho_4^{\nu ..},
\overline{\rho_4^{\nu..}}\Big){\hspace*{-0.2cm}\mbox{\raisebox{-3.5ex}
{${\begin{array}{l}
{\scsc |\rho^{\nu ..} = \phi^{\nu ..}({{\mathbf{n}}},t),
\,\overline{\rho^{\nu ..}} = \overline{\phi^{\nu..}({{\mathbf{n}}},t)} }
\\[-0.1cm] { \scsc |\rho_b^{\nu ..} = \Delta_b^{\mbox{\tts\#}}
\phi^{\nu..}, \,\overline{\rho_b^{\nu ..}} = \Delta_b^{\mbox{\tts\#}}
\overline{\phi^{\nu ..}} } \\[-0.1cm] { \scsc |\rho_4^{\nu ..}
= \partial_t \phi^{\nu ..}, \,\overline{\rho_4^{\nu ..}} = \partial_t
\overline{\phi^{\nu ..}} } \end{array} }
$}}} dt\,. \end{array}
\eqno{\rm (37B)}$$
Here, the bar stands for the complex-conjugation.\
The corresponding Euler-Lagrange equations are: $$\hspace*{-3.2cm}\frac{\partial L(..)}{\partial
\rho^{\nu ..} {\scs |..}}\, - \Delta_\mu^{\scsc\#}
\left\{\left[ \frac{\partial L(..)}{\partial
\rho_\mu^{\nu ..}}\right]_{|..}\right\} = 0\vspace*{0.25cm}\,,
\eqno{\rm (38A)}$$ $$\hspace*{-3.2cm}\frac{\partial L(..)}{\partial
\overline{\rho^{\nu ..}}{\scs |..}}\,
- \Delta_\mu^{\scsc\#} \left\{\left[ \frac{\partial L(..)}{\partial
\overline{\rho_\mu^{\nu ..}}}\right]_{|..}\right\} = 0\vspace*{0.4cm}\,,
\eqno(38\overline{\rm A})$$ $$\frac{\partial L(..)}{\partial \rho^{\nu ..}{\scs |..}}\, -
\Delta_b^{\scsc\#} \left\{\left[ \frac{\partial L(..)}{\partial
\rho_b^{\nu ..}}\right]_{|..}\right\} - \partial_t \left\{\left[
\frac{\partial L(..)}{\partial \rho_4^{\nu ..}}\right]_{|..}\right\}
= 0\vspace*{0.4cm}\,,
\eqno{\rm (38B)}$$ $$\hspace*{1.1cm}\frac{\partial L(..)}{\partial
\overline{\rho^{\nu ..}}{\scs |..}}\, -
\Delta_b^{\scsc\#} \left\{\left[ \frac{\partial L(..)}{\partial
\overline{\rho_b^{\nu ..}}}\right]_{|..}\right\} - \partial_t
\left\{\left[ \frac{\partial
L(..)}{\partial \overline{\rho_4^{\nu ..}}}\right]_{|..}\right\}
= 0\vspace*{0.5cm}\,.
\eqno(38\overline{B})$$
In the derivation of the above equations, the techniques of complex-conjugate coordinates are used \[13,14\].
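For orientation, the complex scalar analogue (again an illustrative choice of ours, with the tensor indices dropped) $$L = -\left(\eta^{\mu\nu}\, \overline{\rho_\mu}\, \rho_\nu + m^2\, \overline{\rho}\, \rho \right)$$ has $\partial L/\partial \overline{\rho} = -m^2 \rho$ and $\partial L/\partial \overline{\rho_\mu} = -\eta^{\mu\nu} \rho_\nu ,$ so that (38$\overline{\rm A}$) reproduces the partial difference Klein-Gordon equation (31A) for $\phi (n),$ while (38A) yields the complex-conjugate equation for $\overline{\phi (n)}$; the pair (38B), (38$\overline{\rm B}$) works out in the same manner in the difference-differential representation.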
We shall now derive the partial difference and the difference-differential conservation equations for various fields (see Appendix II). $$\Delta_\nu T_\mu^\nu + .. = 0\vspace*{0.3cm}\,,
\eqno{\rm (39Ai)}$$ $$\begin{array}{rcl}
T_\mu^\nu (n) &:=& {\ds \sqrt{\frac{n^\nu}{2}} \:\Biggr[
\frac{\partial L(..)}{\partial y_\nu^{\alpha ..}}_{|(..,n^\nu-1,..)}
\cdot \Delta_\mu^{\scsc\#} A^{\alpha ..} } \\[0.5cm]
&& {\ds + \frac{\partial L(..)}{\partial
y_\nu^{\alpha ..}\,_{|..}} \cdot (\Delta_\mu^{\scsc\#}
A^{\alpha ..})_{|(..,n^\nu-1,..)} -
\delta_\nu^\mu L(..)_{|..} \Biggr],}
\end{array} \vspace*{0.5cm}\,
\eqno{\rm (39Aii)}$$ $$\Delta_b T_a^b + \partial_t T_a^4 + .. = 0\vspace*{0.4cm}\,,
\eqno{\rm (39Bi)}$$ $$\Delta_b T_4^b + \partial_t T_4^4 = 0\vspace*{0.5cm}\,,
\eqno{\rm (39Bii)}$$ $$\begin{array}{rcl}
T_a^b ({{\mathbf{n}}},t) &:=& {\ds \sqrt{\frac{n^b}{2}} \:\Biggr[
\frac{\partial L(..)}{\partial y_b^{\alpha ..}}_{|(..,n^b-1,..)}
\cdot \Delta_a^{\scsc\#} A^{\alpha ..} } \\[0.5cm]
&& {\ds + \frac{\partial L(..)}{\partial
y_b^{\alpha ..}}_{|..} \cdot (\Delta_a^{\scsc\#}
A^{\alpha ..})_{|(..,n^b-1,..)} -
\delta_a^b L(..)_{|..} \Biggr], }
\end{array} \vspace*{0.5cm}\,
\eqno{\rm (39Biii)}$$ $$T_a^4 ({{\mathbf{n}}},t) := \frac{\partial L(..)}{\partial
y_{4|..}^{\alpha..}} \cdot \Delta_a^{\scsc\#}
A^{\alpha..}\vspace*{0.5cm}\,,
\eqno{\rm (39Biv)}$$ $$T_4^b ({{\mathbf{n}}},t) := \sqrt{\frac{n^b}{2}} \:\Biggr[
\frac{\partial L(..)}{\partial y^{\alpha ..}_{b|(..,n^b-1,..)}}
\cdot \partial_t A^{\alpha ..}
+ \frac{\partial L(..)}{\partial
y_{b|..}^{\alpha ..}} \cdot (\partial_t A^{\alpha
..})_{|(..,n^b-1,..)} \Biggr],
\vspace*{0.5cm}\,
\eqno{\rm (39Bv)}$$ $$T_4^4 ({{\mathbf{n}}},t) := \frac{\partial L(..)}{\partial y_{4|..}^{\alpha..}}
\cdot \partial_t A^{\alpha ..} - L(..)_{|..}\vspace*{0.5cm} \,.
\eqno{\rm (39Bvi)}$$ We can sum or sum-integrate the relativistic conservation equations over an appropriate domain of $\N^4$ or $\N^3 \times \R.$ Using Gauss’s theorem 3.1 and [*assuming*]{} boundary conditions similar to the equations (15A,B), we obtain the total conserved four-momentum: $$-P_\mu = \sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)} [T_\mu^4 ({{\mathbf{n}}},n^4)
+ ..] = \sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)} [T_\mu^4 ({{\mathbf{n}}},2)
+ ..]\vspace*{0.5cm}\,,
\eqno{\rm (40A)}$$ $$-P_b = \sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)} [T_b^4 ({{\mathbf{n}}},t)
+ ..] = \sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)} [T_b^4 ({{\mathbf{n}}},0)
+ ..]\vspace*{0.5cm}\,,
\eqno{\rm (40Bi)}$$ $$H := -P_4 = \sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)} [T_4^4 ({{\mathbf{n}}},t)
+ ..] = \sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)} [T_4^4 ({{\mathbf{n}}},0)
+ ..] \vspace*{0.5cm}\,,
\eqno{\rm (40Bii)}$$
Here, $\sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)} := \sum\limits_{n^1=0}^{\infty}
\sum\limits_{n^2=0}^{\infty} \sum\limits_{n^3=0}^{\infty}.$
The total energy-momentum components $P_\mu$ are given by the incomplete equations (40A), (40Bi,ii) for a general Lagrangian. However, in the following paper, we shall derive [*exact*]{} equations for the Klein-Gordon, electromagnetic, and Dirac fields.
In the case of a complex-valued field $\phi^{\alpha..},$ the difference and difference-differential conservation equations for the charge-current vector $j^\mu$ are given by equations (A.II.9A,B) as: $$\Delta_{\mu} j^\mu (n) = 0\vspace*{0.4cm}\,,
\eqno{\rm (41Ai)}$$ $$\begin{array}{rcl}
j^\mu (n) &:=& {\ds ie \sqrt{\frac{n^\mu}{2}} \:\Biggr\{ \Biggr[
\frac{\partial L(..)}{\partial \rho_{\mu|(..,n^\mu-1,..)}^{\alpha ..}}
\cdot \phi^{\alpha ..}(n) } \\[0.5cm]
&& {\ds + \frac{\partial L(..)}{\partial
\rho_{\mu |..}^{\alpha ..}} \cdot \phi^{\alpha ..} (..,n^\mu-1,..)
\Biggr]\Biggr\} + {\rm (c.c.)} }\vspace*{0.6cm}\,,
\end{array}
\eqno{\rm (41Aii)}$$ $$\Delta_b j^b ({{\mathbf{n}}},t) + \partial_t j^4 ({{\mathbf{n}}},t) =
0\vspace*{0.5cm}\,,
\eqno{\rm (41Bi)}$$ $$\begin{array}{rcl}
j^b ({{\mathbf{n}}},t) &:=& {\ds ie \sqrt{\frac{n^b}{2}} \:\Biggr\{ \Biggr[
\frac{\partial L(..)}{\partial \rho_b^{\alpha ..}}_{|(..,n^b-1,..)} \cdot
\phi^{\alpha ..}({{\mathbf{n}}},t) } \\[0.5cm]
&& {\ds + \frac{\partial L(..)}{\partial
\rho_{b|..}^{\alpha ..}} \cdot \phi^{\alpha ..} (..,n^b-1,..)
\Biggr]\Biggr\} + {\rm (c.c.)}\,,}
\end{array} \vspace*{0.5cm}\,
\eqno{\rm (41Bii)}$$ $$j^4 ({{\mathbf{n}}},t) := ie \Biggr[\frac{\partial L(..)}{\partial
\rho_{4|..}^{\alpha ..}} \cdot \phi^{\alpha ..} ({{\mathbf{n}}},t)
- \frac{\partial L(..)}{\partial\,\overline{\rho}_{4|..}^{\alpha ..}}
\cdot \overline{\phi^{\alpha ..}({{\mathbf{n}}},t)}\Biggr]\vspace*{0.5cm}.
\eqno{\rm (41Biii)}$$ Here, (c.c) stands for the complex-conjugation of the [*preceding*]{} terms and $e = \sqrt{4\pi / 137}$ is the charge parameter.
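For the illustrative complex scalar Lagrangian quoted above (our example, not one fixed by the text), one has $\partial L/\partial \rho_\mu = -\eta^{\mu\nu}\, \overline{\rho_\nu},$ and substitution into (41Aii) gives $$j^\mu (n) = -ie\, \eta^{\mu\nu} \sqrt{\frac{n^\mu}{2}}\, \Big[ (\Delta_\nu^{\scsc\#} \overline{\phi}\,)_{|(..,n^\mu-1,..)}\; \phi (n) + (\Delta_\nu^{\scsc\#} \overline{\phi}\,)_{|..}\; \phi (..,n^\mu-1,..) \Big] + {\rm (c.c.)}\,,$$ where the index $\nu$ is summed while the free index $\mu$ is not (also inside the square root).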
Under appropriate boundary conditions (15A,B), we can derive the conserved total charge: $$Q\!=\!-\frac{ie}{\sqrt{2}}
\sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)}
\left\{\!\left[\frac{\partial L(..)}{\partial
\rho_{4|({{\mathbf{n}}},1)}^{\alpha ..}} \cdot \phi^{\alpha ..}
({{\mathbf{n}}},2)\!+\!\frac{\partial L(..)}{\partial
\rho_{4|({{\mathbf{n}}},2)}^{\alpha ..}} \cdot \phi^{\alpha ..}
({{\mathbf{n}}},1) \right]\!\right\}\!+\!{\rm (c.c.)}\vspace*{0.6cm},\;
\eqno{\rm (42A)}$$ $$Q = \Biggr[-ie \sum\limits_{{{\mathbf{n}}}=0}^{\infty (3)}
\frac{\partial L(..)}{\partial \rho_{4|({{\mathbf{n}}},0)}^{\alpha ..}}
\cdot \phi^{\alpha ..} ({{\mathbf{n}}},0)\Biggr]
+ {\rm (c.c.)}\vspace*{0.6cm}\,.
\eqno{\rm (42B)}$$
Note that equations (42A,B) are [*already exact*]{}.
We shall now make some comments about the total conserved quantities. In the preceding section, we have deduced that the four-fold summation $\sum\limits_{n^1=0}^{\infty} \sum\limits_{n^2=0}^{\infty}
\sum\limits_{n^3=0}^{\infty} \sum\limits_{n^4=0}^{\infty}$ and the summation-integration $\sum\limits_{n^1=0}^{\infty}
\sum\limits_{n^2=0}^{\infty} \sum\limits_{n^3=0}^{\infty}
\,\int\limits_{\R} dt$ are [*relativistically invariant*]{} operations. We have obtained the total four-momentum $P_\mu$ in the equations (40A), (40Bi,ii) by the four-fold summation and the summation-integration of the [*relativistic*]{} conservation equations (A.II.4A) and (A.II.4Bi,ii) respectively. Therefore, we claim that (the complete) $P_\mu$’s are [*components of a relativistic four-vector*]{}. Similarly, the total charge in equations (42A) or (42B) is [*relativistically invariant*]{}.
The physical contents of the partial difference conservation equation (39Ai) and the difference-differential conservation equation (39Bi) are [*identical*]{}. However, the total four-momentum components $P_\mu$ in (40A) and (40Bi,ii) are [*quite different*]{} in spite of the same notation! The reason for this distinction is that in equation (40A), the summation of the field is over an $n^4 = {\mathrm{const.}}$ “hypersurface”. In covariant phase space, an $n^4={\mathrm{const.}}$ “hypersurface” implies that $(t)^2 + (p_4)^2 = (2n^4+1) =
{\mathrm{const.}}$ Therefore, in the $t$-$p^4$ phase plane, the time coordinate $t$ oscillates between the values $-\sqrt{2n^4+1}$ and $\sqrt{2n^4+1}.$ It is certainly [*not*]{} a $t={\mathrm{const.}}$ “slice”. However, in the case of equations (40Bi,ii), the [*usual*]{} $t = {\mathrm{const.}}$ “hypersurface” of the discrete phase space and continuous time is used.
Appendix I: Euler-Lagrange equations {#appendix-i-euler-lagrange-equations .unnumbered}
====================================
We shall consider a [*two-dimensional*]{} lattice function $f$ for the sake of simplicity. We firstly choose a two-dimensional discrete domain (see Fig.2) $$D\!:=\!\{n\!\in\!\N^2\!\!: 0\!<\!N_1^A\!\!<\!N_2^A,
2\!\leq\!N_2^A\!\!-\!N_1^A,
N_1^A\!\!<\!n^A\!<\!N_2^A, A\!\in\!\{1,2\}\}.
\eqno{\rm (A.I.1)}$$ We shall follow the summation convention on capital indices which take values from $\{1,2\}.$
A real-valued lattice function $f: \,D\subset \N^2 \rightarrow \R$ is considered. The Lagrangian function $L: \,\tilde{D} \subset \N^2
\times \R \times \R^2 \rightarrow \R$ is assumed to be Taylor-expandable (or real analytic). (For all practical purposes, $L$ is a polynomial function.) The action functional ${\mathcal{A}}$ is defined to be the double-sum $${\mathcal{A}} (f) := {\ds \sum\limits_{n^1=N_1^1}^{N_2^1}
\sum\limits_{n^2=N_1^2}^{N_2^2} L(n; y; y_A)_{|y=f(n),
\, y_A=\Delta_A^{\mbox{\tts\#}} f(n)} \;, }
\quad n := (n^1, n^2)\vspace*{0.3cm}\,.
\eqno{\rm (A.I.2)}$$
Now we define the variations of the lattice function $f$ by $$\begin{array}{rcl}
\delta f(n) &:=& \varepsilon h(n)\,, \\[0.3cm]
\delta [\Delta_A^{\scsc\#} f(n)] &:=& \varepsilon \Delta_A^{\scsc\#}
h(n) = \Delta_A^{\scsc\#}[\delta f(n)]\,.
\end{array} \vspace*{0.2cm}\,
\eqno{\rm (A.I.3)}$$ Here, $\varepsilon >0$ is an arbitrary, small, positive number. Therefore, the variation of the action functional ${\mathcal{A}}$ is furnished by: $$\begin{array}{rcl}
\delta {\mathcal{A}} (f) &=& {\ds \sum\limits_{n^1=N_1^1}^{N_2^1}
\sum\limits_{n^2=N_1^2}^{N_2^2} \Big\{ L(n;y;y_A)_{|y=f+\delta f,
\, y_A=\Delta_A^{\mbox{\tts\#}} f +\delta (\Delta_A^{\mbox{\tts\#}}
f)} } \\[0.6cm]
&& - L(n;y;y_A)_{|y=f(n), \, y_A=\Delta_A^{\mbox{\tts\#}}f(n)}\Big\}
\\[0.6cm]
&=& {\ds \varepsilon \sum\limits_{n^1=N_1^1}^{N_2^1}
\sum\limits_{n^2=N_1^2}^{N_2^2} \Biggr\{\Biggr[\frac{\partial
L(..)}{\partial y}\Biggr]_{|y=f(n),
\, y_A=\Delta _A^{\mbox{\tts\#}}f} } \\[0.7cm]
&& {\ds \cdot \,h(n) + \Biggr[ \frac{\partial L(..)}{\partial y_A}
\Biggr]_{|y=f(n), \, y_A=\Delta _A^{\mbox{\tts\#}}f} \cdot
\Delta_A^{\scsc\#} h(n) \Biggr\} + 0(\varepsilon^2)\,. }
\end{array} \vspace*{0.3cm}\,
\eqno{\rm (A.I.4)}$$
The variational principle, which selects the stationary (or critical) values of the action functional, can be stated succinctly as $$\lim\limits_{\varepsilon\rightarrow 0_+} \frac{\delta
{\mathcal{A}}(f)}{\varepsilon} = 0\,.$$ The above equation yields from (A.I.4) that $$\sum\limits_{n^1=N_1^1}^{N_2^1} \sum\limits_{n^2=N_1^2}^{N_2^2}
\,\Biggr\{\Biggr[\frac{\partial L(..)}{\partial y}\Biggr]_{|(n^1,
n^2)} \!\!\cdot h(n)+\Biggr[\frac{\partial L(..)}{\partial
y_A}\Biggr]_{|(n^1, n^2)} \!\!\cdot \Delta_A^{\scsc\#} h(n)
\Biggr\} = 0\vspace*{0.2cm}.
\eqno{\rm (A.I.5)}$$ (Here, we have simplified the notation by putting $|(n^1, n^2)$ in place of $|y=f(n^1, n^2), \; y_A=\Delta_A^{\scsc\#} f(n^1, n^2)$.) Using the definition (4iii), opening up the double-sum in (A.I.5), and rearranging terms, we obtain (after a very long calculation) $$\!\begin{array}{l}
{\ds\sum\limits_{n^1=N_1^1+1}^{N_2^1-1}\,\sum\limits_{n^2=N_1^2
+1}^{N_2^2-1} \,\Biggr\{\Biggr[\frac{\partial L(..)}{\partial y}
\Biggr]_{|(n^1, n^2)} \!\!\!-\!\Delta_A^{\scsc\#}\Biggr[\frac{\partial
L(..)}{\partial y_A}\Biggr]_{|(n^1, n^2)}\Biggr\}
\cdot h(n^1, n^2) } \\[0.8cm]
+ \mbox{(Boundary terms)} \; = 0\,.
\end{array}\!\! \vspace*{0.4cm}\,
\eqno{\rm (A.I.6)}$$ Here, the boundary terms are given by: $$\begin{array}{l}
\mbox{(Boundary terms)} := \\[0.4cm]
{\ds -\sqrt{\frac{N_1^2}{2}} \,\sum\limits_{k=0}^{N_2^1-N_1^1}
\frac{\partial L(..)}{\partial y_2}_{|(N_1^1+k, N_1^2)} \cdot
h(N_1^1 + k, N_1^2-1) } \\[0.7cm]
{\ds +\sqrt{\frac{N_2^1+1}{2}} \,\sum\limits_{k=0}^{N_2^2-N_1^2}
\frac{\partial L(..)}{\partial y_1}_{|(N_2^1+1, N_1^2+k)} \cdot
h(N_2^1 + 1, N_1^2 + k) } \\[0.7cm]
{\ds +\sqrt{\frac{N_2^2+1}{2}} \,\sum\limits_{k=0}^{N_2^1-N_1^1}
\frac{\partial L(..)}{\partial y_2}_{|(N_1^1+k, N_2^2)} \cdot
h(N_1^1 + k, N_2^2 + 1) } \\[0.7cm]
{\ds -\sqrt{\frac{N_1^1}{2}} \,\sum\limits_{k=0}^{N_2^2-N_1^2}
\frac{\partial L(..)}{\partial y_1}_{|(N_1^1, N_1^2+k)} \cdot
h(N_1^1 - 1, N_1^2 + k) } \\[0.7cm]
{\ds +\sum\limits_{j=1}^{N_2^1-N_1^1-1}
\left\{\frac{\partial L(..)}{\partial y}_{|(N_1^1+j, N_1^2)}
- \left[ \Delta_1^{\scsc\#}\left(\frac{\partial L(..)}{\partial
y_1} \right)_{|..}\right]_{|(N_1^1+j,N_1^2)} \right. } \\[0.7cm]
{\ds \left. -\sqrt{\frac{N_1^2+1}{2}}\; \frac{\partial L(..)}{\partial
y_2}_{|(N_1^1+j, N_1^2+1)}\right\} \cdot h(N_1^1 +j, N_1^2) }
\end{array}$$
$$\begin{array}{l}
{\ds +\sum\limits_{j=1}^{N_2^2-N_1^2-1}
\left\{\frac{\partial L(..)}{\partial y}_{|(N_2^1, N_1^2+j)}
+ \sqrt{\frac{N_2^1}{2}}\; \frac{\partial L(..)}{\partial
y_1}_{|(N_2^1-1,N_1^2+j)} \right. } \\[0.6cm]
{\ds \left. - \left[\Delta_2^{\scsc\#}\left(\frac{\partial
L(..)}{\partial y_2}\right)_{|..}\right]_{|(N_2^1,N_1^2+j)}
\right\} \cdot h(N_2^1, N_1^2 + j) } \\[0.7cm]
{\ds +\sum\limits_{j=1}^{N_2^1-N_1^1-1}
\left\{\frac{\partial L(..)}{\partial y}_{|(N_1^1+j, N_2^2)}
- \left[\Delta_1^{\scsc\#}\left(\frac{\partial L(..)}{\partial
y_1}_{|..}\right)\right]_{|(N_1^1+j,N_2^2)} \right. } \\[0.7cm]
{\ds \left.+\sqrt{\frac{N_2^2}{2}}\; \frac{\partial L(..)}{\partial
y_2}_{|(N_1^1+j, N_2^2)}\right\} \cdot
h(N_1^1 +j, N_2^2) } \\[0.7cm]
{\ds +\sum\limits_{j=1}^{N_2^2-N_1^2-1}
\left\{\frac{\partial L(..)}{\partial y}_{|(N_1^1, N_1^2+j)}
-\sqrt{\frac{N_1^1+1}{2}}\; \frac{\partial L(..)}{\partial
y_1}_{|(N_1^1+1, N_1^2+j)} \right. } \\[0.7cm]
{\ds \left.- \left[\Delta_2^{\scsc\#}\left(\frac{\partial
L(..)}{\partial y_2}\right)_{|..}\right]_{|(N_1^1,N_1^2+j)}
\right\} \cdot h(N_1^1, N_1^2 + j) } \\[0.7cm]
{\ds + \left\{ \frac{\partial L(..)}{\partial y}_{|(N_1^1,N_1^2)}
- \frac{1}{\sqrt{2}} \left[\sqrt{N_1^1+1}\; \frac{\partial
L(..)}{\partial y_1}_{|(N_1^1+1,N_1^2)} \right.\right.}\\[0.7cm]
{\ds\left.\left.+ \sqrt{N_1^2+1}\; {\frac{\partial L(..)}{\partial
y_2}}_{|(N_1^1,N_1^2+1)}\right]\right\}\cdot h(N_1^1, N_1^2)}\\[0.7cm]
{\ds + \left\{ \frac{\partial L(..)}{\partial y}_{|(N_2^1,N_1^2)}
- \frac{1}{\sqrt{2}} \left[\sqrt{N_2^1} \frac{\partial
L(..)}{\partial y_{1}}_{|(N_2^1-1,N_1^2)} \right.\right.} \\[0.7cm]
{\ds \left.\left.- \sqrt{N_1^2+1}\; \frac{\partial L(..)}{\partial
y_{2}}_{|(N_2^1,N_1^2+1)}\right]\right\}\cdot h(N_2^1,N_1^2)}\\[0.7cm]
{\ds + \left\{ \frac{\partial L(..)}{\partial y}_{|(N_2^1,N_2^2)}
+ \frac{1}{\sqrt{2}} \left[\sqrt{N_2^1}\; \frac{\partial
L(..)}{\partial y_1}_{|(N_2^1-1,N_2^2)} \right.\right. }\\[0.7cm]
{\ds\left.\left.+ \sqrt{N_2^2}\; \frac{\partial L(..)}{\partial
y_2}_{|(N_2^1,N_2^2-1)}\right]\right\}\cdot h(N_2^1,N_2^2)}\\[0.7cm]
{\ds + \left\{ \frac{\partial L(..)}{\partial y}_{|(N_1^1,N_2^2)}
- \frac{1}{\sqrt{2}} \left[\sqrt{N_1^1+1}\; \frac{\partial
L(..)}{\partial y_{1}}_{|(N_1^1+1,N_2^2)} \right.\right.}\\[0.7cm]
{\ds\left.\left.- \sqrt{N_2^2}\; \frac{\partial L(..)}{\partial
y_{2}}_{|(N_1^1,N_2^2-1)}\right]\right\}\cdot h(N_1^1, N_2^2)\,.}
\end{array}
\eqno\raisebox{-57.5ex}{{\rm (A.I.7)}}$$
The [*imposed plus natural*]{} boundary conditions \[13\] imply that the boundary terms in (A.I.7) add up to zero. There exist many possible boundary conditions. We shall adopt the simplest of all, namely $$\begin{array}{rcl}
h(n^1, n^2)_{|{\mathrm{Boundary}}} &=& 0\,, \\[0.2cm]
\delta f(n^1, n^2)_{|{\mathrm{Boundary}}} &=& 0\,.
\end{array}
\eqno{\rm (A.I.8)}$$
Note that the discrete boundary points in equation (A.I.7) are more numerous than those in equation (9). We now have to prove the analogue of the Dubois-Reymond lemma \[14\].
[**Lemma:**]{} [*Let the double sum $$\sum\limits_{n^1=N_1^1+1}^{N_2^1-1}\: \sum\limits_{n^2=N_1^2
+1}^{N_2^2-1}g (n^1,n^2) \,h(n^1, n^2) = 0$$ for a function $g$ and an arbitrary function $h$ over the domain $D\subset \N^2$ defined in equation [(A.I.1)]{}. Then $g(n^1,n^2)
\equiv 0$ in $D.$* ]{}
[**Proof.**]{} Choose $h(n^1,n^2) := \delta_{n^1}^{m^1}
\delta_{n^2}^{m^2}$ for some $(m^1,m^2) \in D.$ (The $\delta_n^m$ denotes the Kronecker delta.) Then $$\begin{array}{l}
{\ds \sum\limits_{n^1=N_1^1+1}^{N_2^1-1}\: \sum\limits_{n^2
=N_1^2+1}^{N_2^2-1} g (n^1,n^2) \,h(n^1, n^2) } \\[0.7cm]
{\ds \quad \; \, = \sum\limits_{n^1=N_1^1+1}^{N_2^1-1}\:
\sum\limits_{n^2=N_1^2+1}^{N_2^2-1}
g (n^1,n^2) \,\delta_{n^1}^{m^1} \delta_{n^2}^{m^2}
= g (m^1,m^2) = 0\vspace*{0.2cm}\,. }
\end{array}$$ Since the choice of $(m^1,m^2) \in D$ is arbitrary, it follows that $g(n^1,n^2) \equiv 0$ for all $(n^1,n^2) \in D.$
At this stage we have essentially furnished the proof of the following generalization of the Euler-Lagrange theorem.
[**Theorem (A.I.1):**]{} [*Let $f: \,D\subset \N^2
\rightarrow \R$ be a lattice function, where $D$ is defined in [(A.I.1).]{} Let an action functional ${\mathcal{A}}(f)$ be defined by [(A.I.2).]{} The stationary values of ${\mathcal{A}},$ under the boundary variation $\delta f(n)_{|..} = 0,$ are given by the solutions of the partial difference equation: $$\frac{\partial L(..)}{\partial y}_{|y=f(n), \,
y_A=\Delta_A^{\mbox{\tts\#}}f(n)} - \Delta_A^{\scsc\#} \left\{
\left[ \frac{\partial L(..)}{\partial y_A}\right]_{|y=f(n), \,
y_A=\Delta_A^{\mbox{\tts\#}}f(n)} \right\} = 0\vspace*{0.2cm}\,.
\eqno{\rm (A.I.9)}$$* ]{} (Here, the index $A$ is summed over $\{1,2\}.$)
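To see Theorem (A.I.1) at work on the simplest possible example (ours, not taken from the text), choose the second-degree Lagrangian $$L(n; y; y_A) = \frac{1}{2}\, \delta^{AB}\, y_A\, y_B - \frac{1}{2}\, k\, y^2\,, \qquad k \in \R\,,$$ for which $\partial L/\partial y = -k\, y$ and $\partial L/\partial y_A = \delta^{AB} y_B .$ Equation (A.I.9) then reduces to the partial difference equation $\delta^{AB} \Delta_A^{\scsc\#} \Delta_B^{\scsc\#} f(n) + k\, f(n) = 0$ on the domain $D,$ a two-dimensional Helmholtz-type analogue of equation (31A).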
Appendix II: Partial difference and difference-differential conservation equations {#appendix-ii-partial-difference-and-difference-differential-conservation-equations .unnumbered}
==================================================================================
Consider the discrete domain $D\subset \N^4$ given by the four-dimensional analogue of equation (A.I.1). Let $A^{\alpha ..} (n)$ be a real-valued $(r\!+\!s)$-th order tensor field over $D\subset \N^4.$ (See equations (34A,B).) The Lagrangian $L(y^{\alpha ..}\,;
y_\nu^{\alpha ..})$ is a real-valued, analytic, and relativistically invariant function over a domain of the continuum $R^{4^{r+s}}\times
R^{4^{r+s+1}}.$ The action functional ${\mathcal{A}}(A^{\alpha ..})$ is given by the four-fold sum (compare equation (A.I.2)) $$\begin{array}{rcl}
{\mathcal{A}} (A^{\alpha ..}) &:=& {\ds \sum\limits_{n^1
=N_1^1}^{N_2^1} \sum\limits_{n^2=N_1^2}^{N_2^2} \sum\limits_{n^3
=N_1^3}^{N_2^3} \sum\limits_{n^4=N_1^4}^{N_2^4} } \\[0.7cm]
&& L(y^{\alpha ..}\,; y_\nu^{\alpha ..})_{|y^{\alpha ..}
=A^{\alpha ..}(n), \, y_\nu^{\alpha ..}=
\Delta_\nu^{\mbox{\tts\#}}A^{\alpha ..}}\vspace*{0.2cm}\:.
\end{array}
\eqno{\rm (A.II.1)}$$ In the usual case, $L$ is a polynomial function of the $4^{2(r+s)+1}$ real variables.
The Taylor expansion of $L$ is given by $$\begin{array}{l}
L(y^{\alpha ..} + \Delta y^{\alpha ..}, y_\nu^{\alpha ..}
+ \Delta y_\nu^{\alpha ..}) \\[0.4cm]
= L(y^{\alpha ..}, y_\nu^{\alpha ..})
+ {\ds \left[\frac{\partial L(..)}{\partial
y^{\alpha ..}}\Delta y^{\alpha ..} + \frac{\partial L(..)}{\partial
y_\nu^{\alpha ..}}\Delta y_\nu^{\alpha ..}\right] } \\[0.6cm]
+ {\ds \frac{1}{2} \left[\frac{\partial^2 L(..)}{\partial
y^{\alpha ..} \partial y^{\beta ..}} \:(\Delta y^{\alpha ..}) (\Delta
y^{\beta ..}) + \frac{\partial^2 L(..)}{\partial y^{\alpha ..}
\partial y_\nu^{\beta..}} \:(\Delta y^{\alpha ..}) (\Delta
y_\nu^{\beta..}) \right. } \\[0.6cm]
+ {\ds\left. \frac{\partial^2 L(..)}{\partial
y_\nu^{\alpha ..}\partial y_\sigma^{\beta..}} \:(\Delta
y_\nu^{\alpha ..}) (\Delta y_\sigma^{\beta..})\right]+ \ldots\;.}
\end{array} \vspace*{0.4cm}\,
\eqno{\rm (A.II.2)}$$ Here we have followed the summation convention on every repeated Greek index. Moreover, for a [*second-degree*]{} polynomial function $L,$ [*the additional terms denoted by $\ldots$ exactly vanish.*]{}
Now, we investigate the partial difference operations on $L(..).$ Using equations (4iii), (A.II.2), (5iv), (6i,ii,iii,iv) and [*not*]{} summing the index $\mu ,$ we derive that $$\begin{array}{l}
\sqrt{2} \,\Delta_\mu^{\scsc\#} \Big[L(y^{\alpha ..}\,;y_\nu^{\alpha
..})_{|y^{\alpha ..}=A^{\alpha ..}(n), \, y_\nu^{\alpha
..}=\Delta_\nu^{\mbox{\tts\#}}A^{\alpha ..}}\Big] \\[0.3cm]
= (\sqrt{n^\mu\!+\!1}\,) \cdot L\Big\{A^{\alpha ..}
(..,n^\mu\!+\!1,..);\:2^{-1/2}\!\Big[\sqrt{n^\nu\!+\!1}\,
A^{\alpha ..}(..,n^\mu\!+\!1,..,n^\nu\!+\!1,..) \\[0.3cm]
\qquad - \sqrt{n^\nu} A^{\alpha ..} (..,n^\mu +1,..,n^\nu -1,..)
\Big]\Big\} \\[0.3cm]
- (\sqrt{n^\mu}\,) \cdot L\Big\{A^{\alpha ..}(..,n^\mu -1,..);
\:2^{-1/2}\Big[\sqrt{n^\nu+1}\,A^{\alpha ..}(..,n^\mu-1,..,n^\nu
+1,..) \\[0.3cm]
\qquad - \sqrt{n^\nu} A^{\alpha ..} (..,n^\mu -1,..,n^\nu
-1,..)\Big]\Big\}
\end{array}$$ $$\begin{array}{l}
= (\sqrt{n^\mu+1}\,) \cdot L\Big[A^{\alpha ..}(n) + \Delta_\mu
A^{\alpha ..}\,;\:\Delta_\nu^{\scsc\#} A^{\alpha ..}(n)+\Delta_\mu
\Delta_\nu^{\scsc\#} A^{\alpha ..}\Big] \\[0.3cm]
\qquad - (\sqrt{n^\mu}\,) \cdot L\Big[A^{\alpha ..}(n) -
\Delta_\mu^{\prime} A^{\alpha ..}\,;\:\Delta_\nu^{\scsc\#}
A^{\alpha ..}(n) - \Delta_\mu^{\prime}
\Delta_\nu^{\scsc\#} A^{\alpha ..}\Big] \\[0.3cm]
{\ds = \sqrt{n^\mu\!+\!1}\,\Biggr\{\!L(..)\!+
\!\!\left[\frac{\partial
L(..)}{\partial y^{\alpha ..}}_{|..}\!\cdot\!\Delta_\mu
A^{\alpha ..}\!+\!\frac{1}{2}\,\frac{\partial^2 L(..)}{\partial
y^{\alpha ..}\partial y^{\beta..}}_{|..}\!\!\cdot\!(\Delta_\mu
A^{\alpha ..}) (\Delta_\mu A^{\beta..})\!+\!..\right] }
\\[0.6cm]
{\ds + \Biggr[ \frac{\partial L(..)}{\partial
y_{\nu}^{\alpha ..}}_{|..} \cdot \Delta_\mu \Delta_\nu^{\scsc\#}
A^{\alpha ..} + \frac{1}{2}\,\frac{\partial^2 L(..)}{\partial
y_\nu^{\alpha ..} \partial y_{\sigma}^{\beta ..}}_{|..}\cdot
(\Delta_\mu \Delta_\nu^{\scsc\#} A^{\alpha ..})\,(\Delta_\mu
\Delta_\sigma^{\scsc\#} A^{\beta..})\Biggr] + ..\Biggr\} }\\[0.6cm]
{\ds - \sqrt{n^\mu}\,\Biggr\{\!L(..)+\Biggr[\!-\frac{\partial
L(..)}{\partial y^{\alpha ..}}_{|..}\!\cdot\!\Delta_\mu^{\prime}
A^{\alpha ..}\!+\!\frac{1}{2}\,\frac{\partial^2 L(..)}{\partial
y^{\alpha ..}\partial y^{\beta..}}_{|..}\!\cdot\!(\Delta_\mu^{\prime}
A^{\alpha..})(\Delta_\mu^{\prime} A^{\beta..})-..\Biggr] } \\[0.6cm]
{\ds + \Biggr[\!-\frac{\partial L(..)}{\partial
y_{\nu}^{\alpha ..}}_{|..}\!\cdot\!\Delta_\mu^{\prime}
\Delta_\sigma^{\scsc\#} A^{\alpha ..}\!+\!\frac{1}{2}\,
\frac{\partial^2 L(..)}{\partial y_\nu^{\alpha ..}
\partial y_{\sigma}^{\beta ..}}_{|..}\!\cdot\!(\Delta_\mu^{\prime}
\Delta_\nu^{\scsc\#} A^{\alpha ..})(\Delta_\mu^{\prime}
\Delta_\sigma^{\scsc\#} A^{\beta..})-..\Biggr]\Biggr\} }\\[0.6cm]
{\ds = \sqrt{2} \,[L(..)] \,\Delta_\mu^{\scsc\#}(1) + \frac{\partial
L(..)}{\partial y^{\alpha ..}}_{|..}\cdot \Big[\sqrt{n^\mu+1}
\,(\Delta_\mu A^{\alpha ..}) + \sqrt{n^\mu} \,(\Delta_\mu^{\prime}
A^{\alpha ..})\Big] } \\[0.5cm]
{\ds + \frac{\partial L(..)}{\partial y_{\nu}^{\alpha ..}}_{|..}
\cdot \Big[\sqrt{n^\mu +1} \,(\Delta_\mu \Delta_\nu^{\scsc\#}
A^{\alpha ..}) + \sqrt{n^\mu } \,(\Delta_\mu^{\prime}
\Delta_\nu^{\scsc\#} A^{\alpha ..})\Big] } \\[0.6cm]
{\ds + \frac{1}{2}\,\frac{\partial^2 L(..)}{\partial y^{\alpha ..}
\partial y^{\beta ..}}_{|..} \cdot \Big[\sqrt{n^\mu+1}
\,(\Delta_\mu A^{\alpha ..})(\Delta_\mu A^{\beta ..}) } \\[0.5cm]
\qquad - \sqrt{n^\mu } (\Delta_\mu^{\prime} A^{\alpha ..})
(\Delta_\mu^{\prime} A^{\beta ..})\Big] \\[0.4cm]
{\ds + \frac{1}{2}\,\frac{\partial^2 L(..)}{(\partial
y_\nu^{\alpha..})\,(\partial y_{\sigma}^{\beta ..})}\cdot
\Big[\sqrt{n^\mu +1}\,(\Delta_\mu \Delta_\nu^{\scsc\#}
A^{\alpha ..})(\Delta_\mu \Delta_\sigma^{\scsc\#}
A^{\beta ..}) } \\[0.5cm]
\qquad - \sqrt{n^\mu } \,(\Delta_\mu^{\prime} \Delta_\nu^{\scsc\#}
A^{\alpha ..})\,(\Delta_\mu^{\prime} \Delta_\sigma^{\scsc\#}
A^{\beta ..})\Big] + .. \\[0.4cm]
{\ds = \sqrt{2} \,\Biggr\{[L(..)] \,\Delta_\mu^{\scsc\#}(1)
+ \frac{\partial L(..)}{\partial y^{\alpha ..}}_{|..}
\cdot [\Delta_\mu^{\scsc\#} A^{\alpha ..} - A^{\alpha ..}(n)
\Delta_\mu^{\scsc\#}(1)] } \\[0.6cm]
\qquad{\ds + \frac{\partial L(..)}{\partial
y_{\nu}^{\alpha ..}}_{|..} \cdot [\Delta_\mu^{\scsc\#}
\Delta_\nu^{\scsc\#} A^{\alpha ..} - (\Delta_\nu^{\scsc\#}
A^{\alpha ..}) \Delta_\mu^{\scsc\#}(1)] } \\[0.7cm]
{\ds + \frac{1}{2} \, \frac{\partial^2 L(..)}{\partial y^{\alpha..}
\partial y^{\beta ..}}_{|..}\cdot \big[\Delta_\mu^{\scsc\#}
(A^{\alpha ..}(n))\,(A^{\beta ..}(n)) - A^{\alpha ..}(n)
\Delta_\mu^{\scsc\#} A^{\beta ..} } \\[0.6cm]
\qquad - A^{\beta ..}(n) \Delta_\mu^{\scsc\#} A^{\alpha ..}
+ A^{\alpha ..}(n) A^{\beta ..}(n) \Delta_\mu^{\scsc\#}(1)\big]
\\[0.4cm]
{\ds + \frac{1}{2}\,\frac{\partial^2 L(..)}{\partial y_\nu^{\alpha..}
\partial y_{\sigma}^{\beta ..}}_{|..}\!\cdot\!\big[\Delta_\mu^{\scsc\#}
(\Delta_\nu^{\scsc\#} A^{\alpha ..}\!\cdot \Delta_\sigma^{\scsc\#}
A^{\beta ..})\!-\!(\Delta_\nu^{\scsc\#} A^{\alpha ..})
(\Delta_\mu^{\scsc\#}\Delta_\sigma^{\scsc\#} A^{\beta ..}) }\\[0.5cm]
- (\Delta_\sigma^{\scsc\#} A^{\beta ..})\,(\Delta_\mu^{\scsc\#}
\Delta_\nu^{\scsc\#} A^{\alpha..}) + (\Delta_\nu^{\scsc\#}
A^{\alpha..})\,(\Delta_\sigma^{\scsc\#} A^{\beta ..})
\Delta_\mu^{\scsc\#}(1)\big] + .. \Biggr\}.
\end{array}
\hspace*{\fill}\raisebox{-57ex}{\rm (A.II.3)}$$
Dividing this equation by $\sqrt{2},$ bringing the left-hand side term to the right-hand side, and equating $\frac{\partial
L(..)}{\partial y^{\alpha ..}}_{|..} = \Delta_\nu^{\scsc\#} \Big[
\frac{\partial L(..)}{\partial y_\nu^{\alpha ..}}\Big]_{|..}$ by the Euler-Lagrange equation (36A), we finally obtain: $$\begin{array}{l}
{\ds \left\{\left[\Delta_\nu^{\scsc\#}\!\left(\frac{\partial
L(..)}{\partial y_\nu^{\alpha ..}}\right)_{\!|..}\right]\!\!\cdot\!
\Delta_\mu^{\scsc\#} A^{\alpha ..}\!+\!\frac{\partial L(..)}{\partial
y_\nu^{\alpha ..}}_{|..} \Delta_\mu^{\scsc\#} \Delta_\nu^{\scsc\#}
A^{\alpha ..}\!-\!\Delta_\mu^{\scsc\#} [L(..)_{|..}]\right\}
} \\[0.7cm]
{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y^{\alpha..}
\partial y^{\beta ..}}_{|..} \cdot \big[\Delta_\mu^{\scsc\#}
(A^{\alpha ..}(n) \cdot A^{\beta ..}(n)) } \\[0.7cm]
\qquad - A^{\alpha ..} (n) \Delta_\mu^{\scsc\#} A^{\beta ..}
- (\Delta_\mu^{\scsc\#} A^{\alpha ..})\,(A^{\beta ..}(n))\big]
\\[0.45cm]
{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y_\nu^{\alpha..}
\partial y_\sigma^{\beta ..}}_{|..} \cdot \big[\Delta_\mu^{\scsc\#}
(\Delta_\nu^{\scsc\#} A^{\alpha ..} \cdot \Delta_\sigma^{\scsc\#}
A^{\beta ..}) } \\[0.75cm]
\qquad - (\Delta_\nu^{\scsc\#} A^{\alpha ..})\, (\Delta_\mu^{\scsc\#}
\Delta_\sigma^{\scsc\#} A^{\beta ..}) - (\Delta_\mu^{\scsc\#}
\Delta_\nu^{\scsc\#} A^{\alpha ..})\,(\Delta_\sigma^{\scsc\#}
A^{\beta ..})\big] \\[0.5cm]
{\ds + [\Delta_\mu^{\scsc\#}(1)] \Biggr[L(..)_{|..}-\frac{\partial
L(..)}{\partial y^{\alpha ..}}_{|..} \cdot A^{\alpha ..}(n)
- \frac{\partial L(..)}{\partial y_\nu^{\alpha ..}}_{|..}
\cdot \Delta_\nu^{\scsc\#}A^{\alpha ..} } \\[0.7cm]
\qquad{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y^{\alpha
..} \partial y^{\beta ..}}_{|..} \cdot A^{\alpha ..}(n) A^{\beta
..}(n) } \\[0.6cm]
{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y_\nu^{\alpha..}
\partial {y_\sigma^{\beta ..}}}_{|..} \cdot (\Delta_\nu^{\scsc\#}
A^{\alpha ..})\,(\Delta_\sigma^{\scsc\#} A^{\beta ..})\Biggr]
+ ... = 0\,. }
\end{array} \vspace*{0.5cm}\,
\eqno\raisebox{-25.5ex}{\rm (A.II.4A)}$$
Here, the indices $\alpha ,\beta ,\nu ,\sigma $ are summed. We claim that the above equations involving the relativistic operator $\Delta_\mu^{\scsc\#} = iP_\mu$ constitute [*relativistic*]{} conservation equations in the finite difference form. The corresponding relativistic difference-differential conservation equations are $$\begin{array}{l}
{\ds \Biggr\{\Biggr[\Delta_b^{\scsc\#}\!\left(\frac{\partial
L(..)}{\partial y_b^{\alpha ..}}\right)_{|..}\!+\!\partial_t
\left(\frac{\partial L(..)}{\partial y_4^{\alpha ..}}\right)_{|..}
\Biggr]\!\cdot \Delta_a^{\scsc\#} A^{\alpha ..} + \frac{\partial
L(..)}{\partial y_{b}^{\alpha ..}}_{|..}\!\cdot \Delta_a^{\scsc\#}
\Delta_b^{\scsc\#} A^{\alpha ..} } \\[0.7cm]
\qquad{\ds + \frac{\partial L(..)}{\partial y_{4}^{\alpha ..}}_{|..}
\,\Delta_a^{\scsc\#} \partial_t A^{\alpha ..}
- \Delta_a^{\scsc\#} [L(..)_{|..}\,]\Biggr\} } \\[0.7cm]
{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y^{\alpha ..}
\partial y^{\beta ..}}_{|..} \cdot \big[ \Delta_a^{\scsc\#}
(A^{\alpha ..} \cdot A^{\beta ..}) } \\[0.7cm]
\qquad - A^{\alpha ..} ({{\mathbf{n}}},t)
\Delta_a^{\scsc\#} A^{\beta ..} - A^{\beta ..} ({{\mathbf{n}}},t)
\Delta_a^{\scsc\#} A^{\alpha ..}\big]
\end{array}$$ $$\begin{array}{l}
{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y_b^{\alpha ..}
\partial y_c^{\beta ..}}_{|..} \cdot \big[ \Delta_a^{\scsc\#}
(\Delta_b^{\scsc\#} A^{\alpha ..} \cdot \Delta_c^{\scsc\#}
A^{\beta ..}) } \\[0.7cm]
\qquad - (\Delta_b^{\scsc\#} A^{\alpha ..})\,
(\Delta_a^{\scsc\#} \Delta_c^{\scsc\#} A^{\beta ..}) -
(\Delta_c^{\scsc\#} A^{\beta ..})\,(\Delta_a^{\scsc\#}
\Delta_b^{\scsc\#} A^{\alpha ..})\big] \\[0.5cm]
{\ds + \frac{\partial^2 L(..)}{\partial y_b^{\alpha ..}
\partial y_4^{\beta ..}}_{|..} \cdot \big[ \Delta_a^{\scsc\#}
(\Delta_b^{\scsc\#} A^{\alpha ..} \cdot \partial_t
A^{\beta ..}) } \\[0.75cm]
\qquad - (\Delta_b^{\scsc\#} A^{\alpha ..})\,
(\Delta_a^{\scsc\#} \partial_t A^{\beta ..}) -
(\partial_t A^{\beta ..})\,(\Delta_a^{\scsc\#}
\Delta_b^{\scsc\#} A^{\alpha ..})\big] \\[0.55cm]
{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y_4^{\alpha ..}
\partial y_4^{\beta ..}}_{|..} \cdot \big[ \Delta_a^{\scsc\#}
(\partial_t A^{\alpha ..} \cdot \partial_t
A^{\beta ..}) } \\[0.7cm]
\qquad - (\partial_t A^{\alpha ..})\,
(\Delta_a^{\scsc\#} \partial_t A^{\alpha ..}) -
(\partial_t A^{\beta ..})\,(\Delta_a^{\scsc\#}
\partial_t A^{\alpha ..})\big] \\[0.45cm]
{\ds + \Big[\Delta_a^{\scsc\#}(1)]\,[L(..)_{|..} - \frac{\partial
L(..)}{\partial y^{\alpha ..}}_{|..} \cdot A^{\alpha ..}
({{\mathbf{n}}},t) } \\[0.65cm]
\qquad{\ds - \frac{\partial L(..)}{\partial y_b^{\alpha ..}}_{|..}
\cdot \Delta_b^{\scsc\#} A^{\alpha ..} - \frac{\partial
L(..)}{\partial y_4^{\alpha ..}}_{|..} \cdot \partial_t A^{\alpha
..} } \\[0.75cm]
{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial y^{\alpha ..}
\partial y^{\beta ..}}_{|..} \cdot A^{\alpha ..} ({{\mathbf{n}}},t)
A^{\beta ..} ({{\mathbf{n}}},t) } \\[0.75cm]
\qquad{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial
y_b^{\alpha..} \partial y_c^{\beta ..}}_{|..}\cdot (\Delta_b^{\scsc\#}
A^{\alpha ..})\,(\Delta_c^{\scsc\#} A^{\beta ..}) } \\[0.65cm]
{\ds + \frac{\partial^2 L(..)}{\partial y_b^{\alpha ..}
\partial y_4^{\beta ..}}_{|..} \cdot (\Delta_b^{\scsc\#}
A^{\alpha ..})\,(\partial_t A^{\beta ..}) } \\[0.7cm]
\qquad{\ds + \frac{1}{2}\, \frac{\partial^2 L(..)}{\partial
y_4^{\alpha ..} \partial y_4^{\beta ..}}_{|..}\cdot (\partial_t
A^{\alpha ..})\,(\partial_t A^{\beta ..})\Big] + \ldots = 0\,, }
\end{array}
\eqno\raisebox{-40.25ex}{\rm (A.II.4Bi)}$$ $$\!\begin{array}{l}
{\ds \Biggr\{\Biggr[\Delta_b^{\scsc\#}\!\Biggr(\frac{\partial
L(..)}{\partial y_b^{\alpha ..}}\Biggr)_{\!|..}\!+\partial_t\!
\Biggr(\frac{\partial L(..)}{\partial y_4^{\alpha ..}}\Biggr)_{\!|..}
\!\!\cdot\partial_t A^{\alpha ..}\!+\!\frac{\partial L(..)}{\partial
y_b^{\alpha ..}}_{|..}\Biggr]\!\!\cdot\partial_t \Delta_b^{\scsc\#}
A^{\alpha ..} } \\[0.7cm]
\qquad{\ds + \frac{\partial L(..)}{\partial y_4^{\alpha ..}}_{|..}
\cdot (\partial_t)^2 A^{\alpha ..} - \partial_t [L(..)_{|..}]\Biggr\}
+ \ldots = 0\,. }
\end{array} \vspace*{0.2cm}\,
\eqno{\rm (A.II.4Bii)}$$ The above equations containing operators $\Delta_j^{\scsc\#} = iP_j$ and $\partial_t = -iP_4$ on the same footing are [*relativistic*]{}. It is extremely hard to put (A.II.4A) and (A.II.4Bi,ii) into the relativistic conservation equations: $$\begin{array}{c}
\Delta_\nu^{\scsc\#} T_\mu^\nu = 0\,, \\[0.3cm]
\Delta_b^{\scsc\#} T_a^b + \partial_t T_a^4 = 0\,,
\quad \Delta_b^{\scsc\#} T_4^b + \partial_t T_4^4 = 0\vspace*{0.2cm}\,.
\end{array}$$ However, using equation (5v), the relativistic conservation equation (A.II.4A) can be cast into the form: $$\begin{array}{l}
{\ds (1 / \sqrt{2}) \Delta_\nu \Biggr\{ \sqrt{n^\nu} \Biggr[
\frac{\partial L(..)}{\partial {y^{\alpha
..}_{\nu}}_{|(..,n^\nu-1,..)}}
\cdot \Delta_\mu^{\scsc\#} A^{\alpha ..} } \\[0.6cm]
{\ds + \frac{\partial L(..)}{\partial {y^{\alpha ..}_{\nu}}_{|..}}
\cdot (\Delta_\mu^{\scsc\#} A^{\alpha ..})_{|(..,n^{\nu}-1,..)} }
\\[0.6cm]
- \delta_\mu^\nu L(..)_{|..}\Biggr]\Biggr\}
+ .. =: \Delta_\nu [T_\mu^\nu(n)] + .. = 0\,.
\end{array} \vspace*{0.3cm}\,
\eqno{\rm (A.II.5A)}$$ Here, the $T_\mu^\nu (n)$ are not relativistic tensor components, nor is $\Delta_\nu$ a relativistic difference-operator. [*However, the combinations $\Delta_\nu T_\mu^\nu + ..$ are components of a relativistic covariant vector!*]{}
Similarly, from the relativistic difference-differential conservation equations (A.II.4Bi,ii) we derive $$\begin{array}{l}
{\ds (1 / \sqrt{2}) \Delta_b \Biggr\{ \sqrt{n^b}\Biggr[\frac{\partial
L(..)}{\partial {y^{\alpha ..}_{b}}_{|(..,n^b-1,..)}} \cdot \Delta_a^{\scsc\#} A^{\alpha ..} } \\[0.55cm]
{\ds + \frac{\partial L(..)}{\partial y_{b|..}^{\alpha ..}}
\cdot (\Delta_a^{\scsc\#} A^{\alpha ..})_{|(..,n^b-1,..)}
- \delta_b^a L(..)_{|..}\Biggr]\Biggr\} } \\[0.7cm]
{\ds + \partial_t \Biggr\{\frac{\partial L(..)}{\partial {y_4^{\alpha
..}}_{|..}} \cdot \Delta_a^{\scsc\#} A^{\alpha ..}\Biggr\} + .. =:
\Delta_b [T_a^b] + \partial_t [T_a^4] + .. = 0\,, }
\end{array} \vspace*{0.5cm}\,
\eqno{\rm (A.II.5Bi)}$$ $$\begin{array}{l}
{\ds (1 / \sqrt{2}) \Delta_b \Biggr\{\sqrt{n^b}\Biggr[\frac{\partial
L(..)}{\partial {y^{\alpha ..}_{b}}_{|(..,n^b-1,..)}}\cdot\partial_t
A^{\alpha ..} } \\[0.6cm]
{\ds + \frac{\partial L(..)}{\partial {y_{b}^{\alpha ..}}_{|..}}
\cdot (\partial_t A^{\alpha ..})_{|(..,n^b-1,..)}\Biggr]\Biggr\}
} \\[0.7cm]
{\ds + \partial_t \Biggr\{\frac{\partial L(..)}{\partial
y_{4|..}^{\alpha..}}\cdot \partial_t A^{\alpha ..}
- L(..)_{|..}\Biggr\} =:
\Delta_b T_4^b + \partial_t T_4^4 = 0\,.}
\end{array} \vspace*{0.4cm}\,
\eqno\raisebox{1ex}{\rm (A.II.5Bii)}$$
We can generalize conservation equations (A.II.5A) and (A.II.5Bi,ii) for a complex-valued tensor or spinor field $\phi^{..}(n)$ and $\phi^{..} ({{\mathbf{n}}},t)$ to the following equations: $$\begin{array}{l}
{\ds (1 / \sqrt{2}) \Delta_\nu \Biggr\{ \sqrt{n^\nu} \Biggr[
\frac{\partial L(..)}{\partial \rho_\nu^{..}}_{|(..,n^\nu-1,..)}
\cdot \Delta_\mu^{\scsc\#} \phi^{..} } \\[0.55cm]
{\ds + \frac{\partial L(..)}{\partial \rho_{\nu|..}^{..}}
\cdot (\Delta_\mu^{\scsc\#} \phi^{..})_{|(..,n^\nu-1,..)} + {\rm
(c.c.)} -\delta_\mu^\nu [L(..)]_{|..}\Biggr]\Biggr\} } \\[0.7cm]
{\ds - \sqrt{\frac{n^\mu}{2}} \Delta_\mu^{\prime} [L(..)]_{|..}
+ .. =: \Delta_\nu T_\mu^\nu + .. = 0\,,}
\end{array}
\eqno{\rm (A.II.6A)}$$ $$\begin{array}{l}
{\ds (1 / \sqrt{2}) \Delta_b \Biggr\{ \sqrt{n^b} \Biggr[\frac{\partial
L(..)}{\partial \rho_{b|(..,n^b-1,..)}^{..}} \cdot
\Delta_a^{\scsc\#} \phi^{..}
+ \frac{\partial L(..)}{\partial \rho_{b}^{..}}_{|..} \,\cdot
} \\[0.7cm]
{\ds \cdot\, (\Delta_a^{\scsc\#} \phi^{..})_{|(..,n^b-1,..)} +
{\rm (c.c.)} -\delta_a^b [L(..)]_{|..}\Biggr]\Biggr\} }
{\ds - \sqrt{\frac{n^a}{2}} \Delta_a^{\prime} [L(..)]_{|..}
} \\[0.7cm]
{\ds + \partial_t \Biggr\{\Biggr[ \frac{\partial L(..)}{\partial
\rho_{4}^{..}}_{|..} \cdot \Delta_a^{\scsc\#} \phi^{..}
+ \frac{\partial L(..)}{\partial \overline{\rho_{4|..}^{..}}}
\cdot \Delta_a^{\scsc\#} \overline{\phi^{..}}\Biggr]\Biggr\} }
+ .. \\[0.75cm]
= \Delta_b T_a^b + \partial_t T_a^4 + .. = 0\,,
\end{array}
\eqno{\rm (A.II.6Bi)}$$ $$\begin{array}{l}
{\ds (1 / \sqrt{2}) \Delta_b \Biggr\{ \sqrt{n^b} \Biggr[
\frac{\partial L(..)}{\partial \rho_{b |(..,n^b-1,..)}^{..}}
\cdot \partial_t \phi^{..} } \\[0.6cm]
{\ds + \frac{\partial L(..)}{\partial \rho_{b |..}^{..}}
\cdot \partial_t \phi^{..}_{|(..,n^b-1,..)} + {\rm
(c.c.)} \Biggr]\Biggr\} } \\[0.7cm]
{\ds + \partial_t \Biggr\{ \frac{\partial L(..)}{\partial
\rho_{4|..}^{..}} \cdot \partial_t \phi^{..} +
\frac{\partial L(..)}{\partial \overline{\rho}_{4|..}^{..}} \cdot
\partial_t \overline{\phi^{..}} - [L(..)_{|..}]\Biggr\} } \\[0.75cm]
=: \Delta_b T_4^b + \partial_t T_4^4 = 0\,.
\end{array}
\eqno{\rm (A.II.6Bii)}$$
Here, (c.c.) indicates the complex-conjugation of the [*preceding*]{} terms.
Now, we shall investigate the gauge invariance of the Lagrangian for a complex-valued tensor or spinor field $\phi^{\alpha ..}(n)$ and $\phi^{\alpha ..}({{\mathbf{n}}},t)$ and the corresponding difference and difference-differential conservation equations. A global, infinitesimal gauge transformation is characterized by:
$$\widehat{\phi}^{\alpha ..}(n) = [\exp (i\varepsilon)] \phi^{\alpha ..}
(n) = \phi^{\alpha ..}(n) + (i\varepsilon ) \phi^{\alpha..} (n) + 0
(\varepsilon^2)\vspace*{0.15cm},
\eqno{\rm (A.II.7A)}$$ $$\widehat{\phi}^{\alpha ..}({{\mathbf{n}}},t) = [\exp (i\varepsilon)]
\phi^{\alpha ..}({{\mathbf{n}}},t)
= \phi^{\alpha ..}({{\mathbf{n}}},t) + (i\varepsilon) \phi^{\alpha ..}
({{\mathbf{n}}},t) + 0(\varepsilon^2)\vspace*{0.4cm}.
\eqno{\rm (A.II.7B)}$$ The invariance of the Lagrangian function under (A.II.7A) implies that $$\hspace*{-0.1cm}\begin{array}{rcl}
0 &=& L(\rho^{..}, \overline{\rho}^{..}\,; \rho_\mu^{..},
\overline{\rho}_\mu^{..})_{|\rho^{..} = \widehat{\phi}^{..}(n),
\,\overline{\rho}^{..} =
\overline{\widehat{\phi}}^{..}(n), \,\rho_\mu^{..} =
\Delta_\mu^{\mbox{\tts\#}} \widehat{\phi}^{..}, \,\overline{\rho}_\mu^{..}
= \Delta_\mu^{\mbox{\tts\#}} \overline{\widehat{\phi}}^{..}} \\[0.5cm]
&-& L(\rho^{..}, \overline{\rho}^{..}\,; \rho_\mu^{..},
\overline{\rho}_\mu^{..})_{|\rho^{..} = \phi^{..}(n),
\,\overline{\rho}^{..} =
\overline{\phi}^{..}(n), \,\rho_\mu^{..} =
\Delta_\mu^{\mbox{\tts\#}} \phi^{..}, \,\overline{\rho}_\mu^{..}
= \Delta_\mu^{\mbox{\tts\#}} \overline{\phi}^{..}} \\[0.5cm]
&=& {\ds (i \varepsilon) \Biggr[ \frac{\partial L(..)}{\partial
\rho^{..}} \rho^{..}\!-\!\frac{\partial L(..)}{\partial
\overline{\rho}^{..}} \overline{\rho}^{..}\!+\!\frac{\partial
L(..)}{\partial \rho_\mu^{..}} \rho_\mu ^{..}\!-\!\frac{\partial
L(..)}{\partial \overline{\rho}_\mu^{..}} \overline{\rho}_\mu^{..}
\Biggr]_{|..}\!+\!0 (\varepsilon^2). }
\end{array}
\eqno{\rm (A.II.8A)}$$
Dividing by $\varepsilon > 0,$ taking the limit $\varepsilon
\rightarrow 0_+$, and equating $\frac{\partial L(..)}{\partial
{\rho^{..}}}_{|..} =\linebreak \Delta_\mu^{\scsc\#} \Big[
\frac{\partial L(..)}{\partial \rho_\mu^{..}}\Big]_{|..},$ we obtain that $$0 = i\Biggr\{\Biggr[ \Delta_\mu^{\scsc\#} \Biggr(\frac{\partial
L(..)}{\partial \rho_\mu^{\alpha ..}}\Biggr)_{|..}\Biggr] \cdot
\phi^{\alpha ..}(n) + \frac{\partial L(..)}{\partial {\rho_\mu^{\alpha
..}}_{|..}} \cdot \Delta_\mu^{\scsc\#} \phi^{\alpha ..}\Biggr\}+{\rm
(c.c.)}\vspace*{0.2cm}\,.
\eqno{\rm (A.II.9A)}$$ Since $\Delta_\mu^{\scsc\#} = iP_\mu$ involves the relativistic four-momentum operators, the equation (A.II.9A) incorporates the [*relativistic*]{} partial difference equation for the charge-current conservation. The difference-differential version of the [*relativistic*]{} charge-current conservation is furnished by: $$\begin{array}{l}
{\ds i\Biggr\{\Biggr[ \Delta_b^{\scsc\#} \Biggr(\frac{\partial
L(..)}{\partial \rho_b^{\alpha ..}}\Biggr)_{|..}\Biggr] \cdot
\phi^{\alpha ..}({{\mathbf{n}}},t) + \Biggr[ \partial_t \Biggr(\frac{\partial
L(..)}{\partial \rho_4^{\alpha..}}\Biggr)_{|..}\Biggr]\cdot \phi^{\alpha ..}
({{\mathbf{n}}},t) } \\[0.7cm]
{\ds + \frac{\partial L(..)}{\partial {\rho_b^{\alpha ..}}_{|..}} \cdot
\Delta_b^{\scsc\#} \phi^{\alpha ..} ({{\mathbf{n}}},t) + \frac{\partial
L(..)}{\partial {\rho_4^{\alpha ..}}_{|..}} \cdot \partial_t
\phi^{\alpha ..} ({{\mathbf{n}}},t)\Biggr\} + {\rm (c.c.)} = 0\,. }
\end{array}
\eqno{\rm (A.II.9B)}$$
W.R. Hamilton, Collected Mathematical Papers, Vol.2, (Cambridge U. Press, 1940), 451;\
R. Courant, K. Friedrichs, and H. Lewy, Math Ann. 100 (1928) 32;\
B.T. Darling, Phys. Rev. 80 (1951) 460; 92 (1953) 1547;\
R.J. Duffin, Duke Math. Jour. 20 (1953) 234; 23 (1955) 335.
A.J. Pettofrezzo, Introductory Numerical Analysis (D.C. Heath & Co. Boston, 1966);\
L.V. Kantorovich and V.I. Krylov, Approximate Methods of Higher Analysis (translated by C.D. Benster, Interscience Publ. Inc., New York etc., 1964).
A. Das and P. Smoczynski, Found. Phys. Letters 7 (1994) 21, 127.
C. Jordan, Calculus of Finite Differences (Chelsea, New York, 1965) 21.
P.R. Garabedian, Partial Differential Equations (John Wiley & Sons Inc., New York etc., 1964), 458.
A. Das, J. Math. Phys. 7 (1966) 52; Prog. Theor. Phys. 68 (1982) 336, 341; 70 (1983) 1660; Nucl. Phys. B (Proc. Suppl.) 6 (1989) 249.
A. Das, Nuovo Cimento 18 (1960) 482.
A. Schild, Phys. Rev. 73 (1948) 414; Canad. Jour. Math. 1 (1949) 29.
S.W. Hawking and G.F.R. Ellis, The Large Scale Structure of Space-time (Cambridge Univ. Press, 1973), 134.
J. Leite Lopes, Inversion Operators in Quantum Field Theory (Universidad de Buenos Aires, 1960), 148;\
S. Weinberg, The Quantum Theory of Fields, Vol.I (Cambridge Univ. Press, 1995), 59;\
M.E. Peskin and D.V. Schroeder, An Introduction to Quantum Field Theory (Addison-Wesley Publ. Co., New York etc., 1995), 36.
bibitem\[11\][11]{} E.P. Wigner, Ann. Math. 40 (1939) 149.
M. Jauch, Foundations of Quantum Mechanics (Addison-Wesley Publ. Co., Reading, Mass. etc., 1968), 273.
Z. Nehari, Introduction to Complex Analysis (Allyn & Bacon Inc., Boston, 1968), 20.
A. Das, The Special Theory of Relativity: A Mathematical Exposition (Springer-Verlag, New York, Heidelberg etc., 1993), 123.
R. Courant and D. Hilbert, Methods of Mathematical Physics, Vol.I (Interscience Publ. Inc., New York, 1966), 208;\
C. Lanczos, The Variational Principles of Mechanics (Univ. of Toronto Press, Toronto, 1977), 70.
I.M. Gelfand and S.V. Fomin, Calculus of Variations (translated by R.A. Silverman, Prentice-Hall Inc., New Jersey, 1963), 9.
\
$$\begin{minipage}[t]{15cm}
\beginpicture
\setcoordinatesystem units <1truecm,1truecm>
\setplotarea x from 0 to 13, y from 10 to -10
\setsolid
\setlinear
\plot 0 0 12 0 /
\plot 6 6 6 -6 /
\put {\vector(1,0){4}} [Bl] at 12 0
\put {\vector(0,1){4}} [Bl] at 6 6
\setquadratic
\circulararc 360 degrees from 5.35 0 center at 6 0
\circulararc 360 degrees from 4.75 0 center at 6 0
\circulararc 360 degrees from 4.15 0 center at 6 0
\setlinear
\plot 6 0 5.5 -0.3775 /
\plot 6 0 6.75 -1 /
\plot 6 0 7.075 1.5 /
\put {${\rm p}^\mu$} at 6 6.4
\put {${\rm q}^\mu$} at 12.5 0
\put {$\scriptstyle 1$} at 5.8725 -0.35
\put {$\scriptstyle \sqrt{3}$} at 6.75 -0.55
\put {$\scriptstyle \sqrt{5}$} at 7.05 1.075
\put {{\bf FIG. 1} \ \ One discrete phase plane.} at 6 -9
\endpicture
\end{minipage}$$
\
$$\hspace*{-1cm}\begin{minipage}[t]{15cm}
\beginpicture
\setcoordinatesystem units <0.8truecm,0.8truecm>
\setplotarea x from 0 to 15, y from 10 to -10
\setsolid
\setlinear
\plot 0 0 14 0 /
\plot 6 6 6 -6 /
\put {\vector(1,0){4}} [Bl] at 14 0
\put {\vector(0,1){4}} [Bl] at 6 6
\setquadratic
\plot 7.85 1.1 7.8 0.8 8 0.5 /
\plot 8 0.5 8.15 0.2 8.15 -0.4 /
\plot 13.05 5.35 13.4 5.5 13.7 5.575 /
\setlinear
\put {\vector(1,0){4}} [Bl] at 13.7 5.575
\put {\vector(0,-1){4}} [Bl] at 8.15 -0.4
\put {${\rm n}^2$} at 6 6.5
\put {${\rm n}^1$} at 14.5 0.1
\put {$({\rm N}_1^1, {\rm N}_1^2)$} at 9.25 -0.9
\put {$({\rm N}_2^1, {\rm N}_2^2)$} at 14.8 5.575
\setplotsymbol ({\circle*{3.5}} [Bl])
\plotsymbolspacing13mm
\setdots <5mm> \plot 8 5.2 13.5 5.2 /
\plot 8 4.2 13.5 4.2 /
\plot 8 3.2 13.5 3.2 /
\plot 8 2.2 13.5 2.2 /
\plot 8 1.2 13.5 1.2 /
\setsolid
\put {{\bf FIG. 2} \ \ A two-dimensional discrete domain.} at 8 -9
\endpicture
\end{minipage}$$
---
abstract: 'The representations of dimension vector $\alpha$ of the quiver $Q$ can be parametrised by a vector space $R(Q,\alpha)$ on which an algebraic group $\operatorname{Gl}(\alpha)$ acts so that the set of orbits is bijective with the set of isomorphism classes of representations of the quiver. We describe the semi–invariant polynomial functions on this vector space in terms of the category of representations. More precisely, we associate to a suitable map between projective representations a semi–invariant polynomial function that describes when this map is inverted on the representation and we show that these semi–invariant polynomial functions form a spanning set of all semi–invariant polynomial functions in characteristic $0$. If the quiver has no oriented cycles, we may replace consideration of inverting maps between projective representations by consideration of representations that are left perpendicular to some representation of dimension vector $\alpha$. These left perpendicular representations are just the cokernels of the maps between projective representations that we consider.'
address:
- |
Department of Mathematics\
University of Bristol Senate House\
Tyndall Avenue\
Bristol BS8 1TH
- |
Departement WNI\
Limburgs Universitair Centrum\
Universitaire Campus\
Building D\
3590 Diepenbeek\
Belgium
author:
- Aidan Schofield
- Michel Van den Bergh
title: 'Semi–invariants of quivers for arbitrary dimension vectors'
---
Notation and Introduction {#s0}
=========================
In the sequel $k$ will be an algebraically closed field. For our main result, Theorem \[t2\] below, $k$ will have characteristic zero.
Let $Q$ be a quiver with finite vertex set $V$, finite arrow set $A$ and two functions $i,t:A\rightarrow V$ where for an arrow $a$ we shall usually write $ia$ in place of $i(a)$, the initial vertex, and $ta=t(a)$, the terminal vertex. A representation, $R$, of the quiver $Q$ associates a $k$-vector space $R(v)$ to each vertex $v$ of the quiver and a linear map $R(a):R(ia)\rightarrow R(ta)$ to each arrow $a$. A homomorphism $\phi$ of representations from $R$ to $S$ is given by a collection of linear maps for each vertex $\phi(v):R(v)\rightarrow
S(v)$ such that for each arrow $a$, $R(a)\phi(ta)=\phi(ia)S(a)$. The category of representations of the quiver $\operatorname{Rep}(Q)$ is an abelian category as is the full subcategory of finite dimensional representations. We shall usually be interested in finite dimensional representations in which case each representation has a dimension vector $\operatorname{\underline{dim}}R$, which is a function from the set of vertices $V$ to the natural numbers $\mathbb{N}$ defined by $(\operatorname{\underline{dim}}R)(v) = \dim
R(v)$. Now let $\alpha$ be a dimension vector for $Q$. The representations of dimension vector $\alpha$ are parametrised by the vector space $$R(Q,\alpha) = \times_{a \in A}{^{\alpha(ia)}k^{\alpha(ta)}}$$ where ${^{m}k^{n}}$ is the vector space of $m$ by $n$ matrices over $k$ (and ${^{m}k}$ and $k^{n}$ are shorthand for ${^{m}k^{1}}$ and ${^{1}k^{n}}$ respectively). Given a point $p \in R(Q,\alpha)$, we denote the corresponding representation by $R_{p}$. The isomorphism classes of representations of dimension vector $\alpha$ are in $1$ to $1$ correspondence with the orbits of the algebraic group $$\operatorname{Gl}(\alpha) = \times_{v \in V} \operatorname{Gl}_{\alpha(v)}(k)$$ We consider the action of $\operatorname{Gl}(\alpha)$ on the co–ordinate ring $S(Q,\alpha)$ of $R(Q,\alpha)$; $f \in S(Q,\alpha)$ is said to be semi–invariant of weight $\psi$ where $\psi$ is a character of $\operatorname{Gl}(\alpha)$ if $g(f) = \psi(g)f, \; \forall g \in \operatorname{Gl}(\alpha)$.
The invariants and semi–invariants for this action are of importance for the description of the moduli spaces of representations of the quiver for fixed dimension vector. In characteristic $0$ we may apply Weyl’s theory of invariants for $\operatorname{Sl}_{n}(k)$ to give an explicit description of all such semi–invariants. We shall find a set of semi–invariants that span all semi–invariants as a vector space in characteristic $0$. It seems likely that the result we obtain here should hold in arbitrary characteristic and that this would follow from Donkin [@donkin; @donkin1]. However, we restrict ourselves in this paper to characteristic $0$. In [@schofield], the first author described all the semi–invariants when $\operatorname{Gl}(\alpha)$ has an open orbit on $R(Q,\alpha)$ in terms of certain polynomial functions naturally associated to the representation theory of the quiver. We shall begin by recalling the definition of these semi–invariants and some related theory.
Given the quiver $Q$, let $\operatorname{add}(Q)$ be the additive $k$–category generated by $Q$. To describe this more precisely, we define a path of length $n>0$ from the vertex $v$ to the vertex $w$ to be a monomial in the arrows $p=a_{1}\dots a_{n}$ such that $ta_{i}=ia_{i+1}$ for $0<i<n$ and $ia_{1}=v$, $ta_{n}=w$. We define $ip=v$ and $tp=w$. For each vertex $v$ there is also the trivial path of length $0$, $e_{v}$, from $v$ to $v$.
For each vertex $v$, we have an object $O(v)$ in $\operatorname{add}(Q)$, and $\operatorname{Hom}(O(v),O(w)) = P(v,w)$ where $P(v,w)$ is the vector space with basis the paths from $v$ to $w$ including the trivial path if $v = w$; finally, $$\operatorname{Hom}(\bigoplus_{v}O(v)^{a(v)}, \bigoplus_{v}O(v)^{b(v)})$$ is defined in the usual way for an additive category, where composition arises via matrix multiplication.
Any representation $R$ of $Q$ extends uniquely to a covariant functor from $\operatorname{add}(Q)$ to $\operatorname{Mod}(k)$ which we shall continue to denote by $R$; thus given $\phi$ a map in $\operatorname{add}(Q)$, its image under the functor induced by $R$ is $R(\phi)$. Let $\alpha$ be some dimension vector, and $\phi$ a map in $\operatorname{add}(Q)$ $$\phi: \bigoplus_{v\in V} O(v)^{a(v)}
\rightarrow \bigoplus_{v\in V} O(v)^{b(v)}$$ then for any representation $R$ of dimension vector $\alpha$, $R(\phi)$ is a ${\sum_{v\in V} a(v)\alpha(v)}$ by ${\sum_{v\in V}b(v)\alpha(v)}$ matrix. If ${\sum_{v\in V}a(v)\alpha(v) = \sum_{v\in V}b(v)\alpha(v)}$, we define a semi–invariant polynomial function $P_{\phi,\alpha}$ on $R(Q,\alpha)$, by $$P_{\phi,\alpha}(p) = \det R_{p}(\phi).$$ We shall refer to these semi–invariants as determinantal semi–invariants in future. We will show that the determinantal semi–invariants span all semi–invariants (Theorem \[t2\] below). To prove Theorem \[t2\] we will use the classical symbolic method which was also used by Procesi to show that the invariants of matrices under conjugation are generated by traces [@Procesi]. Procesi’s result is generalized in [@LBP1] where it is shown that invariants of quiver-representations are generated by traces of oriented cycles. Subsequently Donkin showed that suitable analogues of these results were valid in characteristic $p$ [@donkin; @donkin1].
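By way of illustration (a standard special case, added here for orientation and not needed in the sequel): let $Q$ be the Kronecker quiver with two vertices $1,2$ and two arrows $a,b$ from $1$ to $2$, and take $\alpha = (n,n)$. For $\phi = \lambda a + \mu b \in \operatorname{Hom}(O(1),O(2))$ and $p = (A,B) \in R(Q,\alpha) = {^{n}k^{n}} \times {^{n}k^{n}}$ one finds $$P_{\phi,\alpha}(p) = \det(\lambda A + \mu B),$$ the classical semi–invariants of a pencil of two $n$ by $n$ matrices; taking for $\phi$ a $d$ by $d$ matrix with entries in the span of $a$ and $b$ yields instead the determinant of a $dn$ by $dn$ block matrix.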
It was shown in [@schofield] that the determinantal semi-invariants can be defined in terms of the representation theory of $Q$. This is reviewed in Section §\[representationtheory\].
One corollary worth stating of this representation theoretic interpretation is the following: define a point $p$ of $R(Q,\beta)$ to be semistable if some non–constant semi–invariant polynomial function does not vanish at $p$. Then we have:
\[interestingcorollary\] In characteristic zero the point $p$ of $R(Q,\beta)$ is semistable if and only there is some non–trivial (possibly infinite dimensional) representation $T$ of $Q$ such that $\operatorname{Hom}(T,R_p)=\operatorname{Ext}(T,R_p)=0$.
For the proof we refer to the end of Section \[representationtheory\].
In the sequel we will frequently change the quiver $Q=(V,A)$ to another quiver $Q'=(V',A')$ which is connected to $Q$ through an additive functor $$s:\operatorname{add}(Q')\rightarrow \operatorname{add}(Q).$$ Through functoriality $s$ will act on various objects associated to $Q$ and $Q'$. We will list these derived actions below. In order to avoid having to introduce a multitude of ad hoc notations we denote each of the derived actions also by $s$.
To start, there is an associated functor $$s:\operatorname{Rep}(Q)\rightarrow \operatorname{Rep}(Q'):R\mapsto R\circ s$$ If $R$ has dimension vector $\alpha$ then $s(R)$ has dimension vector $s(\alpha)\overset{\mathrm{def}}{=}\alpha\circ s$. Put $\alpha'=s(\alpha)$. It follows that $s$ defines a map $$s:R(Q,\alpha)\rightarrow R(Q',\alpha')$$ such that $s(R_p)=R_{s(p)}$. As usual there is a corresponding $k$-algebra homomorphism $$s:S(Q',\alpha')\rightarrow S(Q,\alpha)$$ given by $s(f)=f\circ s$. Writing out the definitions one obtains: $$\label{relationwithdeterminantalinvariants}
s(P_{\phi',\alpha'})=P_{s(\phi'),\alpha}$$ Finally $s$ defines a homomorphism $$s:\operatorname{Gl}(\alpha)\rightarrow \operatorname{Gl}(\alpha')$$ which follows from functoriality by considering $\operatorname{Gl}(\alpha)$ as the automorphism group of $R_0(\alpha)$ in $\operatorname{Rep}(Q)$. One checks that for $g\in
\operatorname{Gl}(\alpha)$, $p\in R(Q,\alpha)$ one has $s(g\cdot p)=s(g)\cdot
s(p)$. It follows in particular that if $f$ is a semi–invariant in $S(Q',\alpha')$ with character $\psi'$ then $s(f)$ is a semi-invariant with character $s(\psi')\overset{\mathrm{def}}{=}\psi'\circ s$.
Semi–invariant polynomial functions {#s1}
===================================
Next, we discuss the ring $S(Q,\alpha)$. $S(Q,\alpha)$ has two gradings, one of which is finer than the other. First of all, $S(Q,\alpha)$ may be graded by $\mathbb{Z}^{A}$ in the natural way since $R(Q,\alpha) = \times_{a\in A}{^{\alpha(ia)}k^{\alpha(ta)}}$. We call this the $A$–grading. On the other hand, $\operatorname{Gl}(\alpha)$ acts on $R(Q,\alpha)$ and hence $S(Q,\alpha)$ and so $\times_{v\in V}
k^{*}$ acts on $S(Q,\alpha)$; we may therefore decompose $S(Q,\alpha)$ as a direct sum of weight spaces for the action of $\times_{v\in V}
k^{*}$ which gives a grading by $\mathbb{Z}^{V}$. We call this the $V$–grading. The semi–invariants $P_{\phi,\alpha}$ are homogeneous with respect to the second grading though not the first. The first grading is induced by the natural action of $\times_{a\in A} k^{*}$ on $\operatorname{add}(Q)$. $\times_{a\in A} k^{*}$ acts on $\operatorname{add}(Q)$ by $(\dots,\lambda_{a},\dots)(a) = \lambda_{a}a$ and according to $g P_{\phi,\alpha} = P_{g \phi, \alpha}$ for $g \in \times_{a\in
A} k^{*}$.
A standard Van der Monde determinant argument implies that the vector subspace spanned by determinantal semi–invariants in $S(Q,\alpha)$ is also the space spanned by the homogeneous components of the determinantal semi–invariants with respect to the $\mathbb{Z}^{A}$–grading. Thus it is enough to find the latter subspace. Given a character $\chi$ of $\times_{a\in A} k^{*}$, we define $P_{\phi,\alpha,\chi}$ to be the $\chi$–component of $P_{\phi,\alpha}$.
We begin by describing the semi–invariants which are homogeneous with respect to the $A$–grading and linear in each component of $R(Q,\alpha) = \times_{a\in
A}{^{\alpha(ia)}k^{\alpha(ta)}}$. We call these the homogeneous multilinear semi–invariants. We need temporarily another kind of semi–invariant. A path $l$ in the quiver is an oriented cycle if $il =
tl$. Associated to such an oriented cycle is an invariant $\operatorname{Tr}_{l}$ for the action of $\operatorname{Gl}(\alpha)$ on $R(Q,\alpha)$ defined by $\operatorname{Tr}_{l}(p) = \operatorname{Tr}(R_{p}(l))$ where $\operatorname{Tr}$ is the trace function.
\[t1\] The homogeneous multilinear semi–invariants of $R(Q,\alpha)$ are spanned by semi–invariants of the form $P_{\phi,\alpha,\chi}
\prod_{i} \operatorname{Tr}_{l_{i}}$ where $l_{i}$ are oriented cycles in the quiver.
The semi–invariants are invariants for $\operatorname{Sl}(\alpha) = \times_{v \in V} \operatorname{Sl}_{\alpha(v)}(k)$ and conversely, $\operatorname{Sl}(\alpha)$–invariant polynomials that are homogeneous with respect to the $V$–grading are also semi–invariant for $\operatorname{Gl}(\alpha)$. We may therefore use Weyl’s description of the homogeneous multilinear invariants for $\operatorname{Sl}_{n}(k)$ and hence for $\operatorname{Sl}(\alpha)$.
Given a homogeneous multilinear $\operatorname{Sl}(\alpha)$–invariant $f: \times_{a\in A}{^{\alpha(ia)}k^{\alpha(ta)}} \rightarrow k,$ it factors as $$\times_{a\in A}{^{\alpha(ia)}k^{\alpha(ta)}} \rightarrow
\otimes_{a\in A} {^{\alpha(ia)}k^{\alpha(ta)}}
\xrightarrow{\tilde{f}} k$$ for a suitable linear map $\tilde{f}$. Since $${^{\alpha(ia)}k^{\alpha(ta)}} \cong
{^{\alpha(ia)}k}\otimes k^{\alpha(ta)}$$ as $\operatorname{Sl}(\alpha)$–representation, we may write $$\tilde{f}: \otimes_{a\in A}{^{\alpha(ia)}k} \otimes k^{\alpha(ta)}
\rightarrow k.$$ Moreover, $\otimes_{a\in A}{^{\alpha(ia)}k} \otimes k^{\alpha(ta)}$ as $\operatorname{Sl}(\alpha)$–representation is a tensor product of covariant and contravariant vectors for $\operatorname{Sl}(\alpha)$. Thus we may re–write $$\bigotimes_{a\in A} {^{\alpha(ia)}k} \otimes k^{\alpha(ta)} =
\bigotimes_{v\in V} \left(\bigotimes_{a,ia=v} {^{\alpha(v)}k}
\otimes \bigotimes_{a,ta=v} k^{\alpha(v)}\right)$$ and $\tilde{f} = \prod_{v\in V} \tilde{f}_{v}$ for $\operatorname{Sl}_{\alpha(v)}(k)$–invariant linear maps: $$\tilde{f}_{v}: \bigotimes_{a,ia=v} {^{\alpha(v)}k}
\otimes \bigotimes_{a,ta=v} k^{\alpha(v)} \rightarrow k.$$ Roughly speaking Weyl [@weyl] showed that there are $3$ basic linear semi–invariant functions on tensor products of covariant and contravariant vectors for $\operatorname{Sl}_{m}(k)$. Firstly, there is the linear map from ${^{m}k}\otimes k^{m}$ to $k$ given by $f(x\otimes y) = yx$; this is just the trace function on ${^{m}k}\otimes k^{m} \cong
M_{m}(k)$. Secondly, there is the linear map from $\otimes_{i=1}^{m}{^{m}k}$ to $k$ determined by $f(x_{1}\otimes \dots
\otimes x_{m}) = \det\left(x_1|\dots|x_m\right)$. The third case is similar to the second; there is a linear map from $\otimes_{i=1}^{m}k^{m}$ to $k$ again given by the determinant. A spanning set for the linear semi–invariant functions on a general tensor product of covariant and contravariant vectors is constructed from these next. A spanning set for the $\operatorname{Sl}_{m}(k)$–invariant linear maps from $\otimes_{B} {^{
m}k} \otimes \otimes_{C} k^{m}$ to $k$ is obtained in the following way. We take three disjoint indexing sets $I,J,K$: we have surjective functions $\mu: B \rightarrow I \cup K, \:
\nu: C \rightarrow J \cup K$ such that $\mu^{-1}(k)$ and $\nu^{-1}(k)$ have one element each for $k \in K$, and $\mu^{-1}(i)$ and $\nu^{-1}(j)$ have $m$ elements each for $i \in I, \: j \in J$. We label this data by $\gamma = (\mu, \nu, I, J, K)$. To $\gamma$, we associate an $\operatorname{Sl}_{m}(k)$–invariant linear map $$f_{\gamma}\left(\bigotimes_{b \in B} x_{b} \otimes\bigotimes_{C} y_{c}\right)
= \prod_{k\in K} y_{\nu^{-1}(k)}x_{\mu^{-1}(k)}
\prod_{i\in I} \det\left(x_{b_{1}}|\dots|x_{b_{m}}\right)
\prod_{j\in J} \det
\begin{pmatrix}
y_{c_{1}}\\
\vdots\\
y_{c_{m}}
\end{pmatrix}$$ where $\{b_{1},\dots,b_{m}\} = \mu^{-1}(i), \;
\{c_{1},\dots,c_{m}\} = \nu^{-1}(j)$. Note that $f_{\gamma}$ is determined only up to sign since we have not specified an ordering of $\mu^{-1}(i)$ or $\nu^{-1}(j)$.
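For instance (an illustrative special case, added for orientation): take $m = 2$, $B = \{b_{1},b_{2},b_{3}\}$, $C = \{c_{1}\}$, $I = \{i\}$, $J = \emptyset$ and $K = \{k\}$, with $\mu(b_{1}) = \mu(b_{2}) = i$, $\mu(b_{3}) = k$ and $\nu(c_{1}) = k$. Then $$f_{\gamma}(x_{1}\otimes x_{2}\otimes x_{3}\otimes y_{1}) = (y_{1}x_{3})\,\det\left(x_{1}|x_{2}\right),$$ a product of one trace–type contraction and one $2$ by $2$ determinant.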
A spanning set for $\operatorname{Sl}(\alpha)$–invariant linear maps from $$\bigotimes_{v\in V} \left(\bigotimes_{a,ia = v}{^{\alpha(v)}k}
\otimes \bigotimes_{a,ta=v} k^{\alpha(v)}\right)$$ is therefore determined by giving quintuples $(\mu,\nu,I,J,K) = \Gamma$ where $$\begin{aligned}
I &= \bigcup_{v\in V}^{\bullet} I_{v}\\
J &= \bigcup_{v\in V}^{\bullet} J_{v}\\
K &= \bigcup_{v\in V}^{\bullet} K_{v}\end{aligned}$$ and surjective maps $$\begin{aligned}
\mu &: A \rightarrow I \cup K,\\
\nu &: A \rightarrow J \cup K\end{aligned}$$ where $$\begin{aligned}
\mu(a) &\in I_{ia} \cup K_{ia},\\
\nu(a) &\in J_{ta} \cup K_{ta},\end{aligned}$$ $\mu^{-1}(k)$ and $\nu^{-1}(k)$ have one element each, $\mu^{-1}(i)$ and $\nu^{-1}(j)$ have $\alpha(v)$ elements each for $i \in I_{v}\text{ and } j\in J_{v}$. Then $\Gamma$ determines data $\gamma_{v}$ for each $v \in
V$ and we define $$f_{\Gamma} = \prod_{v\in V} f_{\gamma_{v}}.$$
We show that these specific semi–invariants lie in the linear span of the homogeneous components of determinantal semi–invariants. First, we treat the case where $K$ is empty. Let $n = |A|$. We have two expressions for $n$: $$n = \sum_{v\in V} |I_{v}| \alpha(v) = \sum_{v\in V} |J_{v}| \alpha(v).$$ To each arrow $a$, we have a pair $(\mu(a), \nu(a))$ associated. To this data, we associate a map in $\operatorname{add}(Q)$ in the following way. We consider a map $$\Phi_{\Gamma}: \bigoplus_{v\in V} O(v)^{I_{v}} \rightarrow
\bigoplus_{v\in V} O(v)^{J_{v}}$$ whose $(i,j)$–entry is $$\sum_{a,(\mu(a),\nu(a))=(i,j)}a.$$ Given $p \in R(Q,\alpha), \; R_{p}(\Phi_{\Gamma})$ is an $n$ by $n$ matrix which we may regard as a partitioned matrix where the rows are indexed by $I$ and the columns by $J$; there are $\alpha(v)$ rows having index $i \in I_{v}$, $\alpha(v)$ columns having index $j \in J_{v}$ and the block having index $(i,j)$ is $$\sum_{a:(\mu(a),\nu(a))=(i,j)} R_{p}(a).$$ We claim that $f_{\Gamma} = P_{\Phi_{\Gamma},\alpha,\chi}$ up to sign where $\chi((\lambda_a)_a)=\prod_a \lambda_a$. To prove this it will be convenient to define a new quiver $Q'=(V',A')$ whose vertices are given by $I\cup J$ and whose edges are the same as those of $Q$. The initial and terminal vertex of $a\in A'=A$ are given by $(\mu(a),\nu(a))$. On $Q'$ we define data $\Gamma'$, given by the same quintuple $(\mu,\nu,I,J,K)$ as $\Gamma$, but with different decompositions $I=\bigcup_{i\in V'} I_{i}$, $J=\bigcup_{j \in V'} J_{j}$. In fact $$I_{i}=\begin{cases}
\{i\}&\text{if $i\in I$}\\
\emptyset&\text{otherwise}
\end{cases}
\qquad
\text{and}
\qquad
J_{j}=\begin{cases}
\{j\}&\text{if $j\in J$}\\
\emptyset&\text{otherwise}
\end{cases}$$ We define a functor $s:\operatorname{add}(Q')\rightarrow \operatorname{add}(Q)$ by $s(a)=a$, $s(O(i))=O(v)$, $s(O(j))=O(w)$ where $a\in A'=A$, $i\in I_v$, $j\in
J_w$.
Since $Q$ and $Q'$ have the same edges, the action of $\times_{a\in
A}k^\ast$ on $\operatorname{add}(Q)$ lifts canonically to an action of $\times_{a\in A}k^\ast$ on $\operatorname{add}(Q')$. Put $\alpha'=s(\alpha)$. Using \eqref{relationwithdeterminantalinvariants} we then find $s(P_{\Phi_{\Gamma'},\alpha'})=P_{s(\Phi_{\Gamma'}),\alpha}
=P_{\Phi_\Gamma,\alpha}$ and similarly $s(P_{\Phi_{\Gamma'},\alpha',\chi})=P_{\Phi_\Gamma,\alpha,\chi}$. Finally one also verifies that $s(f_{\Gamma'})=f_\Gamma$.
Hence to prove that $f_\Gamma=\pm P_{\Phi_\Gamma,\alpha,\chi}$ we may replace the triple $(Q,\alpha,\Gamma)$ by $(Q',\alpha',\Gamma')$. We do this now.
In order to prove that $f_\Gamma=\pm P_{\Phi_\Gamma,\alpha,\chi}$, we need only check that the two functions agree on the image of $W=\times_{a\in A}{^{\alpha(ia)}k} \times k^{\alpha(ta)}$ in $\otimes_{a\in A} {^{\alpha(ia)}}k \otimes k^{\alpha(ta)}$.
Let $\psi:\operatorname{Gl}(\alpha)\rightarrow k^\ast$ be the character given by $$\psi((A_v)_{v\in V})=\prod_{i\in I}\det A_i \cdot
\prod_{j\in J} (\det A_j)^{-1}$$ Then one checks that both $f_\Gamma$ and $P_{\Phi_\Gamma,\alpha}$ are semi-invariants on $W$ with character $\psi$. Now we claim that on $W$ we have $f_\Gamma=P_{\Phi_\Gamma,\alpha}$, up to sign. To prove this we use the fact that $\operatorname{Gl}(\alpha)$ has an open orbit on $W$.
For vertices $i \in I$ and $j \in J$, we let $\{a_{i,1},\dots,a_{i,\alpha(i)}\}$ and $\{a_{j,1},\dots,a_{j,\alpha(j)}\}$ be the sets of arrows incident with $i$ and $j$ respectively. So $$\times_{a\in A}{^{\alpha(ia)}k} \times k^{\alpha(ta)} =
\times_{i\in I} \left(\times_{l=1}^{\alpha(i)}{^{\alpha(i)}k}\right) \times
\times_{j\in J} \left(\times_{m=1}^{\alpha(j)} k^{\alpha(j)}\right).$$ We take the point $p$ whose $(i,l)$–th entry is the $l$–th standard column vector in $^{\alpha(i)}k$; that is its $n$th entry is $\delta_{nl}$ and whose $(j,m)$–th entry is the $m$–th standard row vector in $k^{\alpha(j)}$. The $\operatorname{Gl}(\alpha)$-orbit of this point is open in $W$. Hence to show that $f_\Gamma=\pm P_{\Phi_\Gamma,\alpha}$ on $W$, it suffices to do this in the point $p$.
Now $f_{\Gamma}(p)=\pm 1$. Furthermore we have $$R_{p}(\Phi_{\Gamma})_{(i,l),(j,m)}=
\begin{cases}
1&\text{if $a_{il}=a_{jm}$}\\
0&\text{otherwise}
\end{cases}$$ In particular $R_p(\Phi_{\Gamma})$ is a permutation matrix and thus $P_{\Phi_\Gamma,\alpha}(p)=\pm 1$. Hence indeed $f_{\Gamma} = \pm P_{\Phi_{\Gamma},\alpha}$ (on $W$). Note that the non-zero entries of $R_p(\Phi_{\Gamma})$ are naturally indexed by $A$.
To prove that $f_\Gamma=\pm P_{\Phi_\Gamma,\alpha,\chi}$ on $W$ it is now sufficient to prove that $P_{\Phi_\Gamma,\alpha}=P_{\Phi_\Gamma,\alpha,\chi}$ on $W$. To this end we lift the $\times_{a\in A} k^\ast$ action on $R(Q,\alpha)$ to $W$ by defining $(\lambda_a)_a\cdot (x_{ia},y_{ta})_a=
(\lambda_a x_{ia},y_{ta})_a$ (this is just some convenient choice).
Now we have to show that $P_{\Phi_\Gamma,\alpha}$ is itself homogeneous with character $\chi$ when restricted to $W$. Since the action of $\times_{a\in A} k^\ast$ commutes with the $\operatorname{Gl}(\alpha)$-action it suffices to do this in the point $p$. Now if we put $q=(\lambda_a)_a\cdot
p$ then $R_q(\Phi_\Gamma)$ is obtained from $R_p(\Phi_\Gamma)$ by multiplying by $\lambda_a$ for all $a\in A$ the non-zero entry in $R_p(\Phi_\Gamma)$ indexed by $a$. Therefore $P_{\Phi_\Gamma,
\alpha}(p)$ is multiplied by $\prod_a\lambda_a$. This proves what we want.
It remains to deal with the case where $K$ is non–empty. Roughly speaking one of two things happens here; if we have two distinct arrows $a=\nu^{-1}(k)$ and $b=\mu^{-1}(k)$ for some $k\in K$ then this element of $K$ corresponds to replacing $a$ and $b$ by their composition; if on the other hand $a=\nu^{-1}(k)=\mu^{-1}(k)$ then $ia=ta$ and this element of $K$ corresponds to taking the trace of $a$.
We associate a quiver $Q(A,K)$ with vertex set $A$ and arrow set $K$; given $k \in K$, $ik = \nu^{-1}(k), \; tk = \mu^{-1}(k)$. This quiver has very little to do with the quiver $Q$; it is a temporary notational convenience. The connected components of $Q(A,K)$ are of three types: either they are oriented cycles, open paths or isolated points. The vertices of components of the first type are arrows of $Q$ that also form an oriented cycle; those of the second type are arrows that compose to a path in $Q$ (which can in fact also be an oriented cycle); the isolated points we shall treat in the same way as the second type.
We label the oriented cycles $\{L_{l}\}$ and the open paths $\{M_{m}\}$. To an oriented cycle $L$ in $Q(A,K)$, we associate the invariant $\operatorname{Tr}_{p_{L}}$ where $p_{L}$ is the path around the oriented cycle in $Q$. This is independent of our choice of starting point on the loop. To the open path $M$ in $Q(A,K)$, we associate the path $p_{M}$, the corresponding path in $Q$.
We consider the adjusted quiver $Q_{K}$ with vertex set $V$ and arrow set $\{p_{M_{m}}\}$ where $i{p_{M}}$ and $t{p_{M}}$ are defined as usual. Define the functor $s:\operatorname{add}(Q_K)\rightarrow \operatorname{add}(Q)$ as follows: on vertices $s$ is the identity, and on edges $s(p_{M_m})=p_{M_m}$.
If $p_{M_{m}} = a_{m,1} \dots a_{m,d}$, we define $$\begin{aligned}
\mu(p_{M_{m}}) &= \mu(a_{m,1})\\
\nu(p_{M_{m}}) &= \nu(a_{m,d}).\end{aligned}$$
Then $$\begin{aligned}
\mu &: \{p_{M_{m}}\} \rightarrow I\\
\nu &: \{p_{M_{m}}\} \rightarrow J\end{aligned}$$ are surjective functions giving data $\Gamma_{K}$ on $Q_{K}$. One checks directly that $$\begin{aligned}
f_{\Gamma} = s(f_{\Gamma_{K}}) \prod_{L} \operatorname{Tr}_{p_{L}}
&=s(P_{\phi_{\Gamma_{K}},\alpha,\chi_{K}})\prod_{L} \operatorname{Tr}_{p_{L}}\\
&=P_{s(\phi_{\Gamma_K}),\alpha,\chi}\prod_{L} \operatorname{Tr}_{p_{L}}\end{aligned}$$ for $\chi_{K}$ and $\chi$ of weight $1$ on each arrow for the quivers $Q_{K}$ and $Q$ respectively, which completes our proof.
If $l$ is an oriented cycle in the quiver then $\operatorname{Tr}(R_{p}(l))$ lies in the linear span of $\det(I + \lambda R_{p}(l))$ for varying $\lambda \in k$, and $\det(I + \lambda R_{p}(l)) = P_{I + \lambda l,\alpha}(p)$. Also $$P_{\phi,\alpha} P_{\mu,\alpha} =
P_{\bigl(\begin{smallmatrix}
\phi & 0\\
0 & \mu
\end{smallmatrix}\bigr),\alpha}$$ so we may deduce the following corollary.
\[t3\] Homogeneous multilinear semi–invariants lie in the linear span of determinantal semi–invariants.
Since we have dealt with the multilinear case, standard arguments in characteristic $0$ apply to the general case.
\[t2\] In characteristic $0$, the semi–invariant polynomial functions for the action of $\operatorname{Gl}(\alpha)$ on $R(Q,\alpha)$ are spanned by determinantal semi–invariants.
It is enough to consider semi–invariants homogeneous with respect to the $A$–grading of weight $\chi$ where $\chi((\lambda_a)_a) = \prod_a \lambda_a^{m_{a}}$.
We may also assume that no component of the $A$–grade on the semi–invariant is $0$ since we may always restrict to a smaller quiver. To $\chi$, we associate a new quiver $Q_{\chi}$ with vertex set $V$ and arrow set $A_{\chi}$ where $A_{\chi} = \bigcup_{a\in A} \{a_{1},\dots,a_{m_{a}}\}$ where $ia_{i} = ia, \; ta_{i} = ta$.
We have functors $$\begin{gathered}
\sigma: \operatorname{add}(Q) \rightarrow \operatorname{add}(Q_{\chi})\\
\sigma(a) = \sum a_{i}\\
\pi: \operatorname{add}(Q_{\chi}) \rightarrow \operatorname{add}(Q)\\
\pi (a_{i}) = a.\end{gathered}$$
Given a semi–invariant $f$ for $R(Q,\alpha)$, we define $\tilde{f}$ to be the $\chi'$–component of $\sigma(f)$ where $\chi'((\lambda_{a_i})_{a,i}) = \prod_{a,i} \lambda_{a_i}$. So $\tilde{f}$ is homogeneous multilinear. Then one checks that $\pi(\tilde{f})=\prod_a m_{a}!\cdot f$.
However, by the previous corollary, $\tilde{f}$ lies in the linear span of determinantal semi–invariants; therefore $$\tilde{f}=\sum_{i}\lambda_{i}P_{\phi_{i},\alpha}$$ and so $$\begin{aligned}
\prod_{a\in A} m_{a}! \cdot f &=
\pi(\tilde{f})\\
&=\sum_{i}\lambda_{i}\pi(P_{\phi_{i},\alpha})\\
&=\sum_{i}\lambda_{i}P_{\pi(\phi_{i}),\alpha}\qquad
\text{(by \eqref{relationwithdeterminantalinvariants})} \end{aligned}$$ Hence, in characteristic 0, semi–invariants homogeneous with respect to the $A$–grading and hence all semi–invariants lie in the linear span of the determinantal semi–invariants as required.
To finish this section we will give a slightly more combinatorial interpretation of our proof of Theorem \[t2\].
Let us say that a pair $(Q,\alpha)$ is *standard* if one of the following holds.
1. $Q$ consists of one vertex with one loop.
2. $Q$ is bipartite with vertex set $I\cup J$ with all initial vertices in $I$ and all terminal vertices in $J$. Furthermore if $v\in V=I\cup J$ then $v$ belongs to exactly $\alpha(v)$ edges.
To a standard pair $(Q,\alpha)$ we associate a standard semi–invariant as follows. If $Q$ is a loop then we take $\operatorname{Tr}(R_p(a))$ where $a$ is the loop. If $Q$ is bipartite then we take $P_{\phi,\alpha}$ where $\phi$ is given by $$\phi:\oplus_{i\in I} O(i)\rightarrow \oplus_{j\in J} O(j)$$ such that the $(i,j)$ component of $\phi$ is given by the sum of arrows from $i$ to $j$.
If $(Q,\alpha)$ is arbitrary then we define the standard semi–invariants of $S(Q,\alpha)$ as those semi–invariants which are of the form $s(f)$ for some functor $$s:\operatorname{add}(Q')\rightarrow \operatorname{add}(Q)$$ such that $(Q',s(\alpha))$ is standard and such that $f$ is the corresponding standard semi–invariant.
The following corollary can now easily be obtained from our proof of Theorem \[t2\].
In characteristic zero the semi–invariants in $S(Q,\alpha)$ are generated by the standard semi–invariants.
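As an elementary illustration (a numerical sanity check added here, not part of the original argument; the left–right action convention written in the code is an assumption and may differ from the one used above by an inverse): for the Kronecker quiver with two arrows $a,b$ from $1$ to $2$ and $\alpha = (2,2)$, the pair $(Q,\alpha)$ is standard and the corresponding standard semi–invariant is $\det\bigl(R(a)+R(b)\bigr)$.

\begin{verbatim}
# Illustrative numerical check (not part of the original text).
# Kronecker quiver, two arrows 1 -> 2, dimension vector alpha = (2,2).
# Claim checked: f(A,B) = det(A+B) is a semi-invariant of weight
# psi(g1,g2) = det(g2)/det(g1) for the action
# (A,B) -> (g1^{-1} A g2, g1^{-1} B g2)
# (this action convention is an assumption; others differ by inverses).
import numpy as np

rng = np.random.default_rng(0)

def f(A, B):
    return np.linalg.det(A + B)

for _ in range(5):
    A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
    g1, g2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
    transformed = f(np.linalg.inv(g1) @ A @ g2,
                    np.linalg.inv(g1) @ B @ g2)
    expected = np.linalg.det(g2) / np.linalg.det(g1) * f(A, B)
    assert np.isclose(transformed, expected)
print("det(R(a)+R(b)) transforms with character det(g2)/det(g1)")
\end{verbatim}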
Interpretation in terms of representation theory {#representationtheory}
================================================
This section is mainly a review of some results of [@schofield] which provided the motivation for the current paper. We prove Corollary \[interestingcorollary\].
It is standard that the category $\operatorname{Rep}(Q)$ is equivalent to the category of right modules over $kQ$, the *path algebra* of $Q$. We will identify a representation with its corresponding $kQ$-module.
To every vertex $v\in V$ there corresponds canonically an idempotent $e_v$ in $kQ$, given by the trivial path at $v$. We denote by $P_v$ the representation of $Q$ associated to $e_v kQ$. Clearly this is a projective object in $\operatorname{Rep}(Q)$.
For any representation $R$ we have $\operatorname{Hom}(P_{v},R)\cong R e_v=R(v)$ and furthermore $\operatorname{Hom}(P_v,P_w)=e_w kQ e_v$. Now $e_w kQ e_v$ is a vector space spanned by the paths having initial vertex $w$ and terminal vertex $v$. Hence in fact $$\operatorname{Hom}_{\operatorname{Rep}(Q)}(P_v,P_w)=\operatorname{Hom}_{\operatorname{add}(Q)}(O(w),O(v))$$ In other words if we denote by $\operatorname{proj}(Q)$ the additive category generated by the $(P_v)_{v\in V}$ then $\operatorname{proj}(Q)$ is equivalent to the opposite category of $\operatorname{add}(Q)$.
Now recall the following:
\[projectiveslemma\] $\operatorname{proj}(Q)$ is equivalent to the category of finitely generated projective $kQ$-modules.
Only in the case of quivers with oriented cycles is there something to prove here; one must show that the finitely generated projective $kQ$-modules are all direct sums of $P_v$’s. This may be proved in much the same way as the fact that over a free algebra all projective modules are free [@cohn1].
Given a map $$\gamma: \bigoplus_{v\in V} P_{v}^{b(v)}
\rightarrow \bigoplus_{v\in V} P_{v}^{a(v)},$$ we denote by $$\hat{\gamma}: \bigoplus_{v\in V} O(v)^{a(v)}
\rightarrow \bigoplus_{v\in V} O(v)^{b(v)}$$ the corresponding map in $\operatorname{add}(Q)$; similarly, for $\mu$ a map in $\operatorname{add}(Q)$, $\hat{\mu}$ is the corresponding map in $\operatorname{proj}(Q)$.
In order to link the semi–invariant polynomial functions $P_{\phi,\alpha}$ with the representation theory of $Q$, we recall some facts and definitions about the category of representations of a quiver. First of all, $\operatorname{Ext}^{n}$ vanishes for $n>1$. Given dimension vectors $\alpha$ and $\beta$, we define the Euler inner product by $$\langle \alpha,\beta\rangle =
\sum_{v\in V}\alpha(v)\beta(v)-\sum_{a\in
A}\alpha(ia)\beta(ta).$$ If $R$ and $S$ are representations of dimension vector $\alpha$ and $\beta$ respectively, then $$\label{eulerformula}
\dim\operatorname{Hom}(R,S)-\dim\operatorname{Ext}(R,S)=\langle\alpha,\beta\rangle.$$
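As a small worked example (ours, for orientation): for the quiver with two vertices $1,2$ and $r$ arrows from $1$ to $2$ one has $$\langle\alpha,\beta\rangle = \alpha(1)\beta(1) + \alpha(2)\beta(2) - r\,\alpha(1)\beta(2),$$ so that for $r = 2$, $\alpha = (1,0)$ and $\beta = (2m,m)$ one gets $\langle\alpha,\beta\rangle = 2m - 2m = 0$.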
Given representations $R$ and $S$, we say that $R$ is left perpendicular to $S$ and that $S$ is right perpendicular to $R$ if and only if $\operatorname{Hom}(R,S)=0=\operatorname{Ext}(R,S)$; Given a representation $R$ we define the right perpendicular category to $R$, $R^{\perp}$, to be the full subcategory of representations that are right perpendicular to $R$ and the left perpendicular category to $R$, ${^{\perp}R}$ is defined to be the full subcategory of representations that are left perpendicular to $R$. It is not hard to show that $R^{\perp}$ and ${}^\perp R$ are exact hereditary abelian subcategories of $\operatorname{Rep}(Q)$. In [@schofield] the first author even shows that if $Q$ has no oriented cycles and $R$ has an open orbit in $R(Q,\alpha)$ then $R^{\perp}$ is given by the representations of a quiver without oriented cycles and with $|V|-s$ vertices where $s$ is the number of non-isomorphic indecomposable summands of $R$. A similar result holds for ${}^\perp R$.
If $R$ and $S$ are finite dimensional and $R$ is left perpendicular to $S$ then it follows from \eqref{eulerformula} that $\langle\operatorname{\underline{dim}}R,\operatorname{\underline{dim}}S\rangle = 0$. The converse problem is interesting: suppose that we have a representation $R$ of dimension vector $\alpha$ and a dimension vector $\beta$ such that $\langle\alpha,
\beta\rangle =0$, what are the conditions on a point $p\in R(Q,\beta)$ in order that $R_p$ lies in $R^{\perp}$?
This problem was discussed and solved by the first author in [@schofield] in the case that $Q$ has no oriented cycles. Let us assume this for a moment. We start with a minimal projective resolution of $R$: $$0\rightarrow \bigoplus_v P_v ^{b(v)}\xrightarrow{\theta} \bigoplus_v
P_v^{a(v)}
\rightarrow R\rightarrow 0.$$ By the above discussion $\theta=\hat{\phi}$ for some map $\phi:\bigoplus_v
O(v)^{a(v)}\rightarrow \bigoplus_v O(v)^{b(v)}$ in $\operatorname{add}(Q)$. Applying $\operatorname{Hom}(-,R_p)$ yields a long exact sequence $$\label{longexactsequence}
0\rightarrow \operatorname{Hom}(R,R_p)\rightarrow \operatorname{Hom}(\oplus_v
P_v^{a(v)},R)\xrightarrow{\operatorname{Hom}(\hat{\phi},R_p)} \operatorname{Hom}(\oplus_v
P_v^{b(v)},R)\rightarrow
\operatorname{Ext}(R,R_p)\rightarrow 0$$ and clearly $\operatorname{Hom}(\hat{\phi},R_p)=R_p(\phi)$. The condition $\langle\alpha,\beta\rangle=0$ translates into $\sum_{v\in V}
(a(v)-b(v))\beta_v=0$, so that $R_p(\phi)$ is in fact represented by a square matrix.
In [@schofield] the first author defined $P_{R,\beta}(p)=\det R_p(\phi)$. It is not hard to see that $P_{R,\beta}$ is independent of the choice of $\theta$ and furthermore that it is a polynomial on $R(Q,\beta)$.
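To make this concrete (our illustration; the exact matrix layout depends on the conventions of Section §\[s0\]): take the Kronecker quiver with two arrows $a,b$ from $1$ to $2$ and let $R = S_{1}$ be the simple representation of dimension vector $(1,0)$. Its minimal projective resolution is $$0\rightarrow P_{2}^{2}\xrightarrow{\;\theta\;} P_{1}\rightarrow S_{1}\rightarrow 0,$$ with $\theta = \hat{\phi}$ for the map $\phi: O(1)\rightarrow O(2)^{2}$ whose two components are $a$ and $b$. For $\beta = (2m,m)$ one has $\langle (1,0),\beta\rangle = 0$, and $R_{p}(\phi)$ is the $2m$ by $2m$ matrix obtained by juxtaposing the two blocks $R_{p}(a), R_{p}(b)\in {^{2m}k^{m}}$, so that $$P_{S_{1},\beta}(p) = \det\bigl(R_{p}(a)\,|\,R_{p}(b)\bigr),$$ which vanishes precisely when $\ker R_{p}(a)\cap\ker R_{p}(b)\neq 0$, i.e. when $\operatorname{Hom}(S_{1},R_{p})\neq 0$, in accordance with \eqref{longexactsequence}.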
Hence if we define $$V(R,\beta)=\{p\in R(Q,\beta)\mid R\perp R_p\}$$ then it follows from \eqref{longexactsequence} that $$V(R,\beta)=\{p\in R(Q,\beta)\mid P_{R,\beta}(p)\neq 0\}$$ In other words, $V(R,\beta)$ is either trivial or the complement of a hypersurface.
If there is an exact sequence in $\operatorname{Rep}(Q)$ $$0\rightarrow R_1\rightarrow R \rightarrow R_2\rightarrow 0$$ with $\langle\operatorname{\underline{dim}}R_1,\beta\rangle=\langle \operatorname{\underline{dim}}R_2,\beta\rangle=0$ then clearly $P_{R,\beta}=P_{R_1,\beta}P_{R_2,\beta}$. This provides some motivation for the main result of [@schofield] which we state below.
Assume that $Q$ has no oriented cycles and let $S$ be a representation with dimension vector $\beta$ which has an open orbit in $R(Q,\beta)$. Let $R_1,\ldots,R_t$ be the simple objects of ${}^\perp S$. Then the ring of semi–invariants for $R(Q,\beta)$ is a polynomial ring in the generators $P_{R_i,\beta}$.
Now let us go back to the general case. Thus we allow that $Q$ has oriented cycles. In this case $kQ$ may be infinite dimensional, so that it is not clear how to define the minimal resolution of a representation $R$. Therefore we will take the map $\phi:\bigoplus_v
O(v)^{a(v)}\rightarrow \bigoplus_v O(v)^{b(v)}$ as our fundamental object and we put $P_{\phi,\beta}(p)=\det R_p(\phi)$ (provided $\sum_{v\in V}
(a(v)-b(v))\beta_v=0$).
To make the link with the discussion above we need that $\hat{\phi}$ is injective. What happens if $\hat{\phi}$ is not injective? Then there is a non-trival kernel $$0\rightarrow P\rightarrow \bigoplus_v P_v ^{b(v)}\xrightarrow{\hat{\phi}} \bigoplus_v P_v^{a(v)}$$ By the fact that $kQ$ is hereditary and lemma \[projectiveslemma\] $P\cong\oplus_v P_v^{c(v)}$. Furthermore, again because $kQ$ is hereditary $P$ is a direct summand of $\oplus_v P_v ^{b(v)}$. Now using the equivalence with $\operatorname{add}(Q)$ we find that $\hat{\phi}$ is not injective if and only if $\oplus_v O(v) ^{b(v)}$ has a direct summand in $\operatorname{add}(Q)$ which is in the kernel of $\phi$. Let $\phi'$ be the restriction of $\phi$ to the complementary summand. Then we find $$P_{\phi,\beta}=
\begin{cases}
P_{\phi',\beta}&\text{if $c(v)=0$ for all $v\in\operatorname{Supp}\beta$}\\
0&\text{otherwise}
\end{cases}$$
So for the purposes of semi-invariants we may assume that $\hat{\phi}$ is injective, which is what we will do below. If $Q$ has no oriented cycles then it follows that $P_{\phi,\beta}=P_{\operatorname{cok}\hat{\phi},\beta}$. In general we find (using \eqref{longexactsequence}): $$\label{property}
P_{\phi,\beta}(p) \neq 0 \Leftrightarrow
\det R_{p}(\phi) \neq 0 \Leftrightarrow
R_p \in {\operatorname{cok}\hat{\phi}}^{\perp}$$ Note however that $\operatorname{cok}\hat{\phi}$ may be infinite dimensional.
Since the determinantal semi–invariant polynomial functions span all semi–invariant polynomial functions, there must exist some $\phi\in
\operatorname{add}(Q)$ with the properties $P_{\phi,\beta}(p)\neq 0$, $P_{\phi,\beta}$ is not constant and $\hat{\phi}$ is injective. Then $\operatorname{cok}\hat{\phi}\in{^{\perp}R_{p}}$. If $\operatorname{cok}\hat{\phi}=0$ then by \eqref{property} $P_{\phi,\beta}$ is nowhere vanishing, and hence constant by the Nullstellensatz. This is a contradiction, whence we may take $T=\operatorname{cok}\hat{\phi}$.
[1]{}
P. M. Cohn, [*Free rings and their relations.*]{}, London Mathematical Society Monographs, vol. 19, Academic Press, Inc., 1985.
S. Donkin, [*Invariants of several matrices*]{}, Invent. Math. [**110**]{} (1992), no. 2, 389–401.
[to3em]{}, [*Polynomial invariants of representations of quivers*]{}, Comment. Math. Helv. [**69**]{} (1994), no. 1, 137–141.
L. Le Bruyn and C. Procesi, [*Semisimple representations of quivers*]{}, Trans. Amer. Math. Soc. [**317**]{} (1990), no. 2, 585–598.
C. Procesi, [*Invariant theory of [$n\times n$]{}-matrices*]{}, Adv. in Math. [**19**]{} (1976), 306–381.
A. Schofield, [*Semi-invariants of quivers*]{}, J. London Math. Soc. (2) [ **43**]{} (1991), no. 3, 385–395.
H. Weyl, [*The classical groups*]{}, Princeton University Press, 1946.
Physique Nucléaire Théorique et Physique Mathématique, Université Libre de Bruxelles, Campus de la Plaine CP229, Boulevard du Triomphe, B-1050 Brussels, Belgium
Short title: *Generalized $q$-oscillators*
PACS numbers: 02.20.+b, 03.65.Fd
Submitted to: *J. Phys. A: Math. Gen.*
Date:
In a recent paper (1994 [*J. Phys. A: Math. Gen. *]{}[**27**]{} 5907), Oh and Singh determined a Hopf structure for a generalized $q$-oscillator algebra. We prove that under some general assumptions, the latter is, apart from some algebras isomorphic to su$_q$(2), su$_q$(1,1), or their undeformed counterparts, the only generalized deformed oscillator algebra that supports a Hopf structure. We show in addition that the latter can be equipped with a universal ${\mbox{$\mathcal{R}$}}$-matrix, thereby making it into a quasitriangular Hopf algebra.
In a recent paper (henceforth referred to as I and whose equations will be quoted by their number preceded by I), Oh and Singh [@oh] studied the relationships among various forms of the $q$-oscillator algebra and considered the conditions under which it supports a Hopf structure. They also presented a generalization of this algebra, together with its corresponding Hopf structure.
In the present comment, our purpose will be twofold. First, we plan to show that under some general assumptions about the coalgebra structure and the antipode map, the generalized $q$-oscillator algebra considered by Oh and Singh is, apart from some algebras isomorphic to su$_q$(2), su$_q$(1,1), or their undeformed counterparts, the only generalized deformed oscillator algebra (GDOA) that supports a Hopf structure. Second, we shall provide the universal ${\mbox{$\mathcal{R}$}}$-matrix for this deformed algebra and prove that the corresponding Hopf algebra is quasitriangular.
Let us introduce GDOA’s as follows:
[*Definition.*]{} Let ${{\cal A}\bigl(G(N)\bigr)}$ be the associative algebra generated by the operators $\bigl\{{\mbox{\bf 1}},a,{a^{\dagger}},N\bigr\}$ and the function $G(N)$, satisfying the commutation relations $$\left[N,{a^{\dagger}}\right] = {a^{\dagger}}\qquad \left[N,a\right] = - a \qquad
\left[a,{a^{\dagger}}\right] =
G(N) \label{eq:algebra}$$ and the Hermiticity conditions $$(a)^{\dagger} = {a^{\dagger}}\qquad \left({a^{\dagger}}\right)^{\dagger} = a \qquad N^{\dagger}
= N
\qquad \left(G(N)\right)^{\dagger} = G(N) \label{eq:hermiticity}$$ where $G(z)$ is assumed to be an analytic function, which does not vanish identically.
For $$G(N) = [\alpha N + \beta + 1]_q - [\alpha N + \beta]_q
= \frac{\cosh(\varepsilon(\alpha N + \beta + 1/2))}{\cosh(\varepsilon/2)}
\label{eq:ohalgebra}$$ where $\alpha$ and $\beta$ are some real parameters, and $q = \exp \varepsilon
\in \R^+$, ${{\cal A}\bigl(G(N)\bigr)}$ reduces to the generalization of the $q$-oscillator algebra considered by Oh and Singh [@oh].
Note that the definition of ${{\cal A}\bigl(G(N)\bigr)}$ differs from the usual definition of GDOA’s [@daska], wherein both a commutation and an anticommutation relation $$\left[a,{a^{\dagger}}\right] = F(N+1) - F(N) \qquad \left\{a,{a^{\dagger}}\right\} = F(N+1) +
F(N)
\label{eq:daska}$$ are imposed in terms of some structure function $F(z)$, assumed to be an analytic function, positive on some interval $[0,a)$ (where $a \in \R^+$ may be finite or infinite), and such that $F(0) = 0$. As in I, the reason for considering only the first relation in (\[eq:daska\]) is that the two relations do not prove compatible with a coalgebra structure.
As a consequence of its definition, the algebra ${{\cal A}\bigl(G(N)\bigr)}$ has a Casimir operator defined by $C = F(N) - {a^{\dagger}}a$, where $F(N)$ is the solution of the difference equation $F(N+1) - F(N) = G(N)$, such that $F(0) = 0$. The present definition of GDOA’s is therefore equivalent to the usual one [@daska] only in the representation wherein $C=0$, i.e., in a Fock-type representation.
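For orientation (two elementary special cases, added here): the choice $G(N) = {\mbox{\bf 1}}$ gives $F(N) = N$ and $C = N - {a^{\dagger}}a$, i.e. the ordinary boson oscillator, while (\[eq:ohalgebra\]) with $\alpha = 1$ and $\beta = 0$ gives $G(N) = [N+1]_q - [N]_q$, hence $F(N) = [N]_q$ and $C = [N]_q - {a^{\dagger}}a$, the usual $q$-oscillator.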
Let us now try to endow some of the algebras ${{\cal A}\bigl(G(N)\bigr)}$ with a coalgebra structure and an antipode map, making them into Hopf algebras ${\mathcal{H}}$. For the coproduct, counit and antipode, let us postulate the following expressions: $$\begin{aligned}
\Delta \left({a^{\dagger}}\right) & = & {a^{\dagger}}\otimes c_1(N) + c_2(N) \otimes {a^{\dagger}}\qquad
\Delta (a) = a \otimes c_3(N) + c_4(N) \otimes a \label{eq:deltaa}
\\[0.1cm]
\Delta (N) & = & c_5 N \otimes {\bf 1} + c_6 {\bf 1} \otimes N
+ \gamma {\bf 1} \otimes {\bf 1} \label{eq:deltan} \\[0.1cm]
\epsilon \left({a^{\dagger}}\right) & = & c_7 \qquad \epsilon (a) = c_8 \qquad
\epsilon (N) = c_9 \label{eq:epsilon} \\[0.1cm]
S\left({a^{\dagger}}\right) & = & - c_{10}(N) {a^{\dagger}}\qquad S(a) = - c_{11}(N) a \qquad
S(N) = - c_{12} N + c_{13} {\bf 1} \label{eq:antipode}\end{aligned}$$ where $ c_i(N)$, $i=1$, $\ldots$ , 4, 10, 11, are functions of $N$, and $c_i$, $i=5$, $\ldots$, 9, 12 , 13, and $\gamma$ are constants to be determined. Such expressions generalize those found in I for $G(N)$ given by (\[eq:ohalgebra\]), which correspond to $$\begin{aligned}
c_1(N) & = & \left( c_2(N) \right)^{-1} = c_3(N) = \left( c_4(N)
\right)^{-1} =
q^{\alpha (N + \gamma ) / 2 } \nonumber \\[0.1cm]
c_5 & = & c_6 = c_{12} = 1 \qquad c_7 = c_8 = 0 \nonumber \\[0.1cm]
c_9 & = & \case{1}{2} c_{13} = - \gamma \qquad c_{10}(N) = \left( c_{11}(N)
\right)^{-1} = q^{\alpha /2 } \nonumber \\[0.1cm]
\gamma & = & \frac {2 \beta + 1 }{2 \alpha } - {\rm i}\,
\frac{(2 k + 1) \pi }{2 \alpha \varepsilon} \qquad k \in \Z.
\label{eq:ohhopf}\end{aligned}$$
To remain as general as possible, we shall make no specific assumption about $G(N)$ at the outset, except that it satisfies eqs. (\[eq:algebra\]) and (\[eq:hermiticity\]). For the moment, we shall also disregard the Hermiticity conditions (\[eq:hermiticity\]) and work with complex algebras. Only at the end will conditions (\[eq:hermiticity\]) be imposed.
In order that equations (\[eq:deltaa\])–(\[eq:antipode\]) define a Hopf structure, the so-far undetermined functions and parameters must be chosen in such a way that $\Delta$, $\epsilon$, and $S$ satisfy the coassociativity, counit and antipode axioms, given in (I35), and that in addition, $\Delta$ and $\epsilon$ be algebra homomorphisms.
In accordance with eq. (\[eq:ohhopf\]), we shall start by assuming that in eq. (\[eq:deltan\]), $\gamma$ takes a nonvanishing value. By substituting eq. (\[eq:deltan\]) into the coassociativity axiom (I35a), and taking into account that $\Delta$ must be an algebra homomorphism, we directly obtain $$c_5 = c_6 =1. \label{eq:c56}$$
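To spell out the computation behind (\[eq:c56\]) (our reconstruction of a step left implicit): applying the algebra homomorphism $\Delta$ to $[N,{a^{\dagger}}] = {a^{\dagger}}$ and using (\[eq:deltaa\]) and (\[eq:deltan\]) gives $$[\Delta(N),\Delta({a^{\dagger}})] = c_5\, [N,{a^{\dagger}}]\otimes c_1(N) + c_6\, c_2(N)\otimes [N,{a^{\dagger}}] = c_5\, {a^{\dagger}}\otimes c_1(N) + c_6\, c_2(N)\otimes {a^{\dagger}}$$ since $N$ commutes with any function of $N$; equating this with $\Delta({a^{\dagger}}) = {a^{\dagger}}\otimes c_1(N) + c_2(N)\otimes {a^{\dagger}}$ forces $c_5 = c_6 = 1$, and one checks that these values are consistent with the coassociativity of (\[eq:deltan\]).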
To derive the corresponding conditions for ${a^{\dagger}}$ and $a$, it is useful to expand the functions $c_i(N)$, $i = 1$, $\ldots$, 4, of eq. (\[eq:deltaa\]) into power series $$c_i(N) = \sum_{A=0}^{\infty} \frac{1}{A!}\, c_i^{(A)}(0) N^A
\label{eq:series}$$ where $c_i^{(A)}(N)$ denotes the $A$th derivative of $c_i(N)$, and to apply the relation $$\Delta c_i(N) = \sum_{A,B=0}^{\infty} \frac{1}{A! \, B!}\,
c_i^{(A+B)}(\gamma)
N^A \otimes N^B \qquad \mbox{\rm if} \qquad \Delta(N) = N \otimes {\bf 1} +
{\bf1} \otimes N + \gamma {\bf 1} \otimes {\bf 1}. \label{eq:deltac}$$ We then obtain in a straightforward way that $c_i(N)$, $i = 1$, $\ldots$, 4, must satisfy the equations $$c_i^{(A)}(0) c_i^{(B)}(0) = c_i^{(A+B)}(\gamma) \qquad i = 1, \ldots ,4
\qquad
A,B = 0,1,2, \ldots . \label{eq:cond1}$$
By substituting now eqs. (\[eq:deltaa\])–(\[eq:antipode\]) into the counit and antipode axioms (I35b) and (I35c), we easily get $$\begin{aligned}
c_i(- \gamma ) & = & 1 \qquad i = 1, \ldots ,4 \label{eq:cond2}\\[0.1cm]
c_7 & = & c_8 = 0 \qquad c_9 = - \gamma \label{eq:c789}\end{aligned}$$ and $$\begin{aligned}
c_1( - N +1-2 \gamma ) & = & c_2(N)\, c_{10}(N) \qquad c_2 ( - N -2 \gamma
) =
c_1(N-1)\, c_{10}(N) \label{eq:cond3} \\[0.1cm]
c_3 ( - N -1-2 \gamma ) & = & c_4(N)\, c_{11}(N) \qquad c_4 ( - N -2 \gamma
)
= c_3(N +1)\, c_{11}(N) \label{eq:cond4} \\[0.1cm]
c_{12} & = & 1 \qquad c_{13} = - 2 \gamma \label{eq:c1213}\end{aligned}$$ respectively.
It remains to impose that the algebra and coalgebra structures are compatible. By applying $\Delta$ or $\epsilon$ to both sides of the first two equations contained in (\[eq:algebra\]), we obtain identities, while by doing the same with the third one and using equations similar to (\[eq:series\]) and (\[eq:deltac\]) for $G(N)$, we are led to the conditions $$c_2(N +1) \otimes c_3(N) = c_2(N) \otimes c_3(N-1) \qquad
c_4 (N) \otimes c_1(N+1) = c_4(N-1) \otimes c_1(N) \label{eq:cond5}$$ $$\begin{aligned}
& & G^{(A-B)}(0) (c_1 c_3 )^{(B)}(0) + (c_2 c_4 )^{(A-B)}(0) G^{(B)}(0)
= G^{(A )}(\gamma) \nonumber \\[0.1cm]
& & \qquad A = 0,1,2, \ldots \qquad B= 0,1, \ldots A \label{eq:cond6}\end{aligned}$$ and $$G(- \gamma ) = 0 . \label{eq:cond7}$$
We note that the Hopf axioms directly fix the values of all the constants $c_i$, $i =
5$, $\ldots$, 9, 12, 13, in terms of the remaining one $\gamma$, but that the seven functions $c_i(N)$, $i = 1$, $\ldots$, 4, 10, 11, and $G(N)$ are only implicitly determined by eqs. (\[eq:cond1\]), (\[eq:cond2\]), (\[eq:cond3\]), (\[eq:cond4\]), (\[eq:cond5\]), (\[eq:cond6\]), and (\[eq:cond7\]). We shall now proceed to show that the latter can be solved to provide explicit expressions for the yet unknown functions of $N$ in terms of $\gamma$ and of some additional parameters.
Considering first the two conditions in (\[eq:cond5\]), we immediately see that they can only be satisfied if there exist some complex constants $k_1$, $k_2$, such that $$\begin{aligned}
c_1(N+1) & = & k_1 c_1(N) \qquad c_4(N ) = k_1^{-1} c_4(N-1) \nonumber
\\[0.1cm]
c_2(N+1) & = & k_2 c_2(N) \qquad c_3(N ) = k_2^{-1} c_3(N-1).
\label{eq:solcond5}\end{aligned}$$ These relations in turn imply that $$c_1(N) = \alpha_1 \mbox{\rm e}^{\kappa_1 N } \qquad
c_2(N) = \alpha_2 \mbox{\rm e}^{\kappa_2 N } \qquad
c_3(N) = \alpha_3 \mbox{\rm e}^{-\kappa_2 N } \qquad
c_4(N) = \alpha_4 \mbox{\rm e}^{-\kappa_1 N } \label{eq:c1234}$$ where $\kappa_1 = \ln k_1$, $\kappa_2 = \ln k_2$, and $\alpha_i$, $i = 1$, $\ldots$, 4, are some complex parameters. The latter are determined by condition (\[eq:cond2\]) as $$\alpha_1 = \mbox{\rm e}^{\kappa_1 \gamma} \qquad
\alpha_2 = \mbox{\rm e}^{\kappa_2 \gamma} \qquad
\alpha_3 = \mbox{\rm e}^{-\kappa_2 \gamma} \qquad
\alpha_4 = \mbox{\rm e}^{-\kappa_1 \gamma}. \label{eq:alpha}$$ It is then straightforward to check that the functions $c_i(N)$, $i= 1$, $\ldots$, 4, defined by (\[eq:c1234\]) and (\[eq:alpha\]), automatically satisfy condition (\[eq:cond1\]).
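Indeed (a one–line check, written out for convenience): by (\[eq:c1234\]) and (\[eq:alpha\]) one has $c_1(N) = \mbox{\rm e}^{\kappa_1 (N+\gamma)}$, so that $$c_1^{(A)}(0)\, c_1^{(B)}(0) = \kappa_1^{A+B}\, \mbox{\rm e}^{2\kappa_1\gamma} = c_1^{(A+B)}(\gamma)$$ as required by (\[eq:cond1\]); the verification for $c_2$, $c_3$ and $c_4$ is identical.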
By inserting now eqs. (\[eq:c1234\]) and (\[eq:alpha\]) into conditions (\[eq:cond3\]) and (\[eq:cond4\]), we directly obtain the following explicit expressions for $c_{10}(N)$ and $c_{11}(N)$, $$c_{10}(N) = \mbox{\rm e}^{-(\kappa_1 + \kappa_2 )( N + \gamma ) + \kappa_1}
\qquad c_{11}(N) = \mbox{\rm e}^{(\kappa_1 + \kappa_2 )( N + \gamma ) +
\kappa_2 }. \label{eq:c1011}$$
The same substitution performed in condition (\[eq:cond6\]) transforms the latter into $$\begin{aligned}
& & (\kappa_1 - \kappa_2 )^B\, e^{(\kappa_1 - \kappa_2) \gamma} \,
G^{(A-B)}(0) + (-1)^{A-B} (\kappa_1 - \kappa_2)^{A-B} \,
e^{-(\kappa_1 - \kappa_2) \gamma}\, G^{(B)}(0) = G^{(A)}(\gamma)
\nonumber
\\[0.1cm]
& & \qquad A = 0,1,2, \ldots \qquad B= 0,1, \ldots ,A. \label{eq:condG}\end{aligned}$$ It can be easily shown by induction over $A$ that whenever $\kappa_1 \ne
\kappa_2$, the solution of recursion relation (\[eq:condG\]) is given by $$\begin{array}[b]{llll}
G^{(A)}(0) &= &(\kappa_1 - \kappa_2)^A\, G(0) &\qquad\mbox{\rm if $A$ is
even}
\\[0.3cm]
&= & (\kappa_1 - \kappa_2)^A \coth \bigl((\kappa_1 - \kappa_2) \gamma
\bigr) G(0) &\qquad\mbox{\rm if $A$ is odd}
\end{array} \label{eq:GAzero}$$ and $$\begin{array}[b]{llll}
G^{(A)}(\gamma) & = &(\kappa_1 - \kappa_2)^A\, G(\gamma) &\qquad
\mbox{\rm if $A$ is even} \\[0.3cm]
& = & (\kappa_1 - \kappa_2)^A \coth \bigl(2(\kappa_1 - \kappa_2) \gamma
\bigr) G(\gamma) &\qquad \mbox{\rm if $A$ is odd}
\end{array}\label{eq:GAgamma}$$ where $$G(\gamma) = 2 \cosh \bigl((\kappa_1 - \kappa_2) \gamma \bigr) G(0).
\label{eq:Ggamma}$$ From (\[eq:GAzero\]) and the Taylor expansion of $G(N)$, we then obtain $$G(N) = G(0)\, \frac{\sinh \bigl((\kappa_1 - \kappa_2) (N + \gamma)
\bigr)}{\sinh
\bigl((\kappa_1 - \kappa_2) \gamma \bigr)} \qquad \kappa_1 \ne \kappa_2 .
\label{eq:GN}$$ Such a function also satisfies (\[eq:GAgamma\]) and (\[eq:Ggamma\]), as well as the remaining condition (\[eq:cond7\]). Equations (\[eq:GAzero\])–(\[eq:GN\]) remain valid for $\kappa_1 = \kappa_2$ provided appropriate limits are taken. In such a case, function (\[eq:GN\]) becomes $$G(N) = G(0) \left({\bf 1} + \frac{N}{\gamma }\right) \qquad \kappa_1 =
\kappa_2 .
\label{eq:limitGN}$$
Had we taken $\gamma = 0$ instead of $\gamma \ne 0$ in (\[eq:deltan\]), a similar analysis would have led to $$\begin{array}[b]{llll}
G(N) & = & G^{(1)}(0) \,\frac{\displaystyle\sinh \bigl((\kappa_1 -
\kappa_2) N
\bigr)}{\displaystyle\kappa_1 - \kappa_2} &\qquad \mbox{\rm if
$\kappa_1
\ne \kappa_2$} \\[0.3cm]
& = & G^{(1)}(0) \, N &\qquad \mbox{\rm if $\kappa_1 = \kappa_2$}
\end{array} \label{eq:GNzero}$$ and a Hopf structure given by (\[eq:c56\]), (\[eq:c789\]), (\[eq:c1213\]), (\[eq:c1234\]), (\[eq:alpha\]), and (\[eq:c1011\]), but where $\gamma$ is set equal to $0$. For an appropriate choice of $G^{(1)}(0)$ (obtained by renormalizing ${a^{\dagger}}$ and $a$ if necessary), such a form of $G(N)$ corresponds to the complex $q$-algebra sl$_q$(2) if $\kappa_1 \ne
\kappa_2$, and to sl(2) if $\kappa_1 = \kappa_2$ [@majid].
The remaining step in the construction of algebras ${{\cal A}\bigl(G(N)\bigr)}$ with a Hopf structure consists in imposing the Hermiticity conditions (\[eq:hermiticity\]) on the algebraic structure. They require that the function $G(N)$, defined in (\[eq:GN\]), (\[eq:limitGN\]), or (\[eq:GNzero\]), be a real function of $N$. For the latter choice, we obtain the real forms of sl$_q$(2) or sl(2), namely su$_q$(2) and su$_q$(1,1), or su(2) and su(1,1) [@majid]. It remains to consider the former choices for $\gamma$ non real, since the real $\gamma$ case comes down to the $\gamma = 0$ one by changing $N$ into $N + \gamma$. For such $\gamma$ values, function (\[eq:limitGN\]) cannot be Hermitian. It therefore only remains to consider the case where $G(N)$ is given by (\[eq:GN\]).
In such a case, the discussion of the hermiticity conditions is rather involved as $G(N)$ depends upon two complex parameters $\kappa_1 - \kappa_2$, and $\gamma$, in addition to the nonvanishing real parameter $G(0)$. By setting $$\kappa_1 = \xi_1 + \mathrm{i} \eta_1 \qquad
\kappa_2 = \xi_2 + \mathrm{i} \eta_2 \qquad
\kappa \equiv \kappa_1 - \kappa_2 = \xi + \mathrm{i} \eta \qquad
\gamma = \gamma_1 + \mathrm{i} \gamma_2 \label{eq:xieta}$$ where $\xi_1$, $\eta_1$, $\xi_2$, $\eta_2$, $\xi$, $\eta$, $\gamma_1$, $\gamma_2 \in \R$, the function $G(N)$, defined in (\[eq:GN\]), can be rewritten as $$G(N) = G(0) \left(\alpha(N) + \mathrm{i} \beta(N) \right)
\label{eq:decompGN}$$ where $$\begin{aligned}
\alpha(N) & = & \frac{a(N) c+b(N) d}{c^2+d^2} \qquad \beta(N) = \frac{b(N)
c-a(N)
d}{c^2+d^2} \nonumber \\[0.1cm]
a(N) & = & \sinh\left(A(N)\right) \cos\left(B(N)\right) \qquad b(N) =
\cosh
\left(A(N)\right) \sin\left(B(N)\right) \nonumber \\[0.1cm]
c & = & \sinh C \cos D \qquad d = \cosh C \sin D \nonumber \\[0.1cm]
A(N) & = & \xi(N+ \gamma_1) - \eta \gamma_2 \qquad B(N) = \xi \gamma_2
+ \eta (N+\gamma_1) \nonumber \\[0.1cm]
C & = & \xi \gamma_1 - \eta \gamma_2 \qquad D = \xi \gamma_2 +\eta
\gamma_1. \label{eq:decomp}\end{aligned}$$ Hence, $G(N)$ is a real function of $N$ if and only if $$\beta(N) = 0. \label{eq:condhermite}$$ Note that from the expressions of $\alpha(N)$ and $\beta(N)$ given in (\[eq:decomp\]), it is clear that the parameter values for which $c$ and $d$ simultaneously vanish should be discarded.
Condition (\[eq:condhermite\]) has now to be worked out by successively combining the cases where $\gamma_1 = 0$ and $\gamma_2\ne 0$, or $\gamma_1
\ne 0 $ and $\gamma_2\neq 0$, with those where $\xi\ne 0$ and $\eta=0$, $\xi = 0$ and $ \eta \ne 0$, or $\xi\ne 0$ and $\eta\ne 0$. For instance, if $\gamma_1$, $\gamma_2$, $\xi \ne 0$, and $\eta=0$, equation (\[eq:condhermite\]) can be written as $$\cosh \bigl(\xi(N + \gamma_1)\bigr) \sin(\xi\gamma_2)
\sinh(\xi\gamma_1) \cos(\xi\gamma_2 )
= \sinh\bigl(\xi(N + \gamma_1)\bigr) \cos(\xi\gamma_2)
\cosh(\xi\gamma_1) \sin(\xi\gamma_2). \label{eq:case1}$$ As both sides of this relation have a different dependence on $N$, they must identically vanish. Since $\xi\ne 0$ by hypothesis, we must therefore have either $\sin(\xi\gamma_2) = 0$ or $\cos(\xi\gamma_2 ) =0$. The first condition leads to $\gamma_2 = k \pi / \xi$, $k \in \Z_0$, while the second one gives rise to $\gamma_2 = (2 k + 1) \pi / (2 \xi)$, $ k \in \Z $.
Similarly, if we assume that $\gamma_1$, $\gamma_2$, $\eta \ne 0$, and $\xi=0$, we obtain that equation (\[eq:condhermite\]) is equivalent to $$\sin\bigl(\eta(N + \gamma_1)\bigr) \cos(\eta\gamma_1) =
\cos\bigl(\eta(N + \gamma_1)\bigr) \sin(\eta\gamma_1) \label{eq:case2}$$ or, by using some trigonometric identities, $$\sin(\eta N) = 0. \label{eq:case2a}$$ As $\eta\ne 0$, this relation cannot be satisfied as an operator identity.
By proceeding in this way, one can easily show the following result:
[*Proposition 1.*]{} The algebras ${{\cal A}\bigl(G(N)\bigr)}$ that support a Hopf structure of type (\[eq:deltaa\])–(\[eq:antipode\]) and are not isomorphic to su$_q$(2), su$_q$(1,1), su(2), su(1,1), are determined by eqs. (\[eq:c56\]), (\[eq:c789\]), (\[eq:c1213\]), (\[eq:c1234\]), (\[eq:alpha\]), (\[eq:c1011\]), and the following conditions $$\begin{aligned}
G(N) &=& G(0)\, \frac{\cosh\bigl(\xi(N +
\gamma_1)\bigr)}{\cosh(\xi\gamma_1)}
\qquad G(0), \xi \in \R_0 \qquad \gamma_1 \in \R \nonumber \\[0.1cm]
\kappa_1 - \kappa_2 &=& \xi \qquad \gamma = \gamma_1 + i\,
\frac{(2k+1)\pi}{2\xi} \qquad k \in \Z. \label{eq:prop1}\end{aligned}$$
[*Remark.*]{} The isomorphism referred to in the proposition is an algebra (not a Hopf algebra) isomorphism. One can indeed obtain algebras ${{\cal A}\bigl(G(N)\bigr)}$ that have the commutation relations and Hermiticity conditions of su$_q$(2), su$_q$(1,1), su(2), su(1,1), but more general expressions for the coproduct, the counit, and the antipode.
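The content of Proposition 1 is also easy to check numerically. The short sketch below (an illustration of ours, not part of the original derivation) draws random real parameters, builds $G(N)$ from (\[eq:GN\]) with $\gamma$ chosen as in (\[eq:prop1\]), and verifies that the result is real and coincides with the cosh form quoted above.

```python
import numpy as np

# numerical check of Proposition 1: for kappa_1 - kappa_2 = xi (real) and
# gamma = gamma_1 + i (2k+1) pi / (2 xi), the function G(N) of eq. (GN) is
# real and equals G(0) cosh(xi (N + gamma_1)) / cosh(xi gamma_1)
rng = np.random.default_rng(1)
for _ in range(5):
    G0, g1 = rng.normal(size=2)
    xi = 0.5 + rng.random()              # keep xi safely away from zero
    k = int(rng.integers(-3, 4))
    gamma = g1 + 1j * (2 * k + 1) * np.pi / (2 * xi)
    N = rng.normal(size=10)              # sample eigenvalues of the number operator
    G = G0 * np.sinh(xi * (N + gamma)) / np.sinh(xi * gamma)
    expected = G0 * np.cosh(xi * (N + g1)) / np.cosh(xi * g1)
    assert np.allclose(G.imag, 0.0, atol=1e-9)
    assert np.allclose(G.real, expected)
print("G(N) of Proposition 1 is real and equals the cosh form")
```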
Comparing the results of Proposition 1 with eqs. (\[eq:ohalgebra\]) and (\[eq:ohhopf\]), we notice that provided we set $\kappa_1 = - \kappa_2$, the Hopf algebra so obtained does coincide with that derived by Oh and Singh, the relations between the two sets of parameters being given by $$G(0) = \frac{\cosh \left( \varepsilon (2 \beta + 1)/2 \right)}
{\cosh (\varepsilon/2)} \qquad \xi = \alpha \varepsilon \qquad
\gamma_1 = \frac{2\beta + 1}{2\alpha} . \label{eq:equivalence}$$ Hence, we have:
[*Corollary 2.*]{} The only algebras ${{\cal A}\bigl(G(N)\bigr)}$ that support a Hopf structure of type (\[eq:deltaa\])–(\[eq:antipode\]) and are not isomorphic to su$_q$(2), su$_q$(1,1), su(2), su(1,1), are isomorphic to those considered in I.
[*Remarks.*]{} (1) The Hopf algebra obtained here is slightly more general than that constructed by Oh and Singh, as it contains the additional parameter $\kappa_1 + \kappa_2$.
(2) Some further generalizations of the coproduct given in eqs. (\[eq:deltaa\]) and (\[eq:deltan\]), obtained by introducing additional functions of $N$, fail to provide new Hopf algebras.
Let us now turn to the second point of this comment, namely the construction of the universal ${\mbox{$\mathcal{R}$}}$-matrix for the Oh and Singh Hopf algebra.
By first omitting the Hermiticity conditions, one obtains the following result:
[*Lemma 3.*]{} The complex Hopf algebras ${\mathcal{H}}$, defined by eqs. (\[eq:algebra\]), (\[eq:deltaa\])–(\[eq:antipode\]), (\[eq:c56\]), (\[eq:c789\]), (\[eq:c1213\]), (\[eq:c1234\])–(\[eq:c1011\]), and (\[eq:GN\]), can be made into quasitriangular Hopf algebras by considering the element ${\mbox{$\mathcal{R}$}}\in {\mathcal{H}}\otimes {\mathcal{H}}$, given by $$\begin{aligned}
{\mbox{$\mathcal{R}$}}& = & X^{-2(N + \gamma {\mbox{$\mathbf{\scriptstyle1}$}}) \otimes (N + \gamma {\mbox{$\mathbf{\scriptstyle1}$}})}
\sum_{n=0}^{\infty} \frac{(1-X^2)^n}{[n]_X!} X^{-n(n-1)/2} Y^n
\lambda^{-2n}
\nonumber \\
& & \mbox{} \times \left((XY)^{(N + \gamma {\mbox{$\mathbf{\scriptstyle1}$}})} a\right)^n \otimes
\left((XY)^{-(N + \gamma {\mbox{$\mathbf{\scriptstyle1}$}})} {a^{\dagger}}\right)^n
\label{eq:complexR}\end{aligned}$$ where $$\begin{aligned}
X & = & e^{(\kappa_1 - \kappa_2)/2} \qquad Y = e^{(\kappa_1 + \kappa_2)/2}
\qquad \lambda^2 = - G(0) \frac{\sinh \left((\kappa_1 -
\kappa_2)/2\right)}{\sinh \left((\kappa_1 - \kappa_2) \gamma\right)}
\nonumber \\[0pt]
[n]_X & = & \frac{X^n - X^{-n}}{X - X^{-1}} \qquad [n]_X! = [n]_X [n-1]_X
\ldots [1]_X
\qquad [0]_X! = 1. \label{eq:XY}\end{aligned}$$ [*Proof.*]{} By direct substitution, one finds that ${\mbox{$\mathcal{R}$}}$, defined by (\[eq:complexR\]) and (\[eq:XY\]), satisfies the relations $$\begin{aligned}
\left(\Delta \otimes \mbox{\rm id}\right) {\mbox{$\mathcal{R}$}}& = & {\mbox{$\mathcal{R}$}}_{13} {\mbox{$\mathcal{R}$}}_{23} \qquad
\left(\mbox{\rm id} \otimes \Delta\right) {\mbox{$\mathcal{R}$}}= {\mbox{$\mathcal{R}$}}_{13} {\mbox{$\mathcal{R}$}}_{12}
\nonumber \\[0.1cm]
\tau \circ \Delta(h) & = & {\mbox{$\mathcal{R}$}}\Delta(h) {\mbox{$\mathcal{R}$}}^{-1} \label{eq:qt}\end{aligned}$$ where ${\mbox{$\mathcal{R}$}}_{12}$, ${\mbox{$\mathcal{R}$}}_{13}$, ${\mbox{$\mathcal{R}$}}_{23} \in {\mathcal{H}}\otimes {\mathcal{H}}\otimes {\mathcal{H}}$, and for instance ${\mbox{$\mathcal{R}$}}_{12} = {\mbox{$\mathcal{R}$}}\otimes I$, while $\tau$ is the twist operator, $\tau(a \otimes b) = b \otimes a$.
By introducing now the additional conditions (\[eq:prop1\]) and $\kappa_1 +
\kappa_2 = 0$, and changing to Oh and Singh’s notations (\[eq:equivalence\]), we obtain the final result:
[*Proposition 4.*]{} The Oh and Singh Hopf algebra, defined by eqs. (\[eq:algebra\])–(\[eq:ohalgebra\]), (\[eq:deltaa\])–(\[eq:ohhopf\]), is quasitriangular, with the ${\mbox{$\mathcal{R}$}}$-matrix given by $$\begin{aligned}
{\mbox{$\mathcal{R}$}}& = & q^{- \frac{1}{\alpha} \left[\left(\beta + \frac{1}{2}\right)^2 -
\left(\frac{2k+1}{2\ln q}\pi\right)^2 + i \frac{(2\beta+1) (2k+1)}
{2\ln q} \pi\right]} q^{-\alpha N \otimes N} \nonumber \\[0.1cm]
& & \mbox{}\times \left(q^{- \left(\beta + \frac{1}{2} + i
\frac{2k+1}{2\ln q} \pi\right) N} \otimes q^{- \left(\beta +
\frac{1}{2}
+ i \frac{2k+1}{2\ln q} \pi \right) N} \right) \nonumber \\[0.1cm]
& & \mbox{}\times \sum_{n=0}^{\infty} \frac{\left[i\, (-1)^k \left(q^{1/2} +
q^{-1/2}\right)\right]^n} {[n]_{q^{\alpha/2}}!} q^{-\alpha n(n-3)/4}
\left(
\left(q^{\alpha N/2} a\right)^n \otimes \left(q^{-\alpha N/2}
{a^{\dagger}}\right)^n
\right). \label{eq:realR}\end{aligned}$$
[99]{}

Oh C H and Singh K 1994 [*J. Phys. A: Math. Gen.*]{} [**27**]{} 5907

Daskaloyannis C 1991 [*J. Phys. A: Math. Gen.*]{} [**24**]{} L789\
Bonatsos D and Daskaloyannis C 1993 [*Phys. Lett.*]{} [**307B**]{} 100

Majid S 1990 [*Int. J. Mod. Phys.*]{} A [**5**]{} 1
---
abstract: 'We present a theory for the photon energy and polarization dependence of ARPES intensities from the CuO$_2$ plane in the framework of strong correlation models. We show that for an electric field vector in the CuO$_2$ plane the ‘radiation characteristics’ of the $O$ $2p_\sigma$ and $Cu$ $3d_{x^2-y^2}$ orbitals are strongly peaked along the CuO$_2$ plane, i.e. most photoelectrons are emitted at grazing angles. This suggests that surface states play an important role in the observed ARPES spectra, consistent with recent data from Sr$_2$CuCl$_2$O$_2$. We show that a combination of surface state dispersion and Fano resonance between the surface state and the continuum of LEED-states may produce a precipitous drop in the observed photoelectron current as a function of in-plane momentum, which may well mimic a Fermi-surface crossing. This effect may explain the simultaneous ‘observation’ of a hole-like and an electron-like Fermi surface in Bi2212 at different photon energies. We show that by suitable choice of photon polarization one can, on the one hand, ‘focus’ the radiation characteristics of the in-plane orbitals towards the detector and, on the other hand, make the interference between partial waves from different orbitals ‘more constructive’.'
address: 'Institut für Theoretische Physik, Universität Würzburg, Am Hubland, 97074 Würzburg, Germany'
author:
- 'C. Dahnken and R. Eder'
title: 'Theory of ARPES intensities from the CuO$_2$ plane'
---
Introduction
============
Their quasi-2D nature makes cuprate superconductors ideal materials for angle resolved photoemission spectroscopy (ARPES) studies, and by now a wealth of experimental data is available[@ShenDessau]. On the other hand, it does not seem as if these data are really well-understood, the major reason being that we still lack even a rudimentary understanding of the matrix element effects present in these materials. It has recently turned out that matrix element effects are (or rather: should be) the central issue in the discussion of ARPES data.\
ARPES is generally believed to measure the single particle spectral function, which near the chemical potential $\mu$ (and neglecting the finite lifetime) can be written as $$\begin{aligned}
A(\bbox{k},\omega) &=&
| \langle \Psi_{QP}(\bbox{k})| c_{\bbox{k},\sigma} | \Psi_0 \rangle |^2\\
&& \Theta(E_{QP}(\bbox{k}) - \mu)\;\delta(\omega-(E_{QP}(\bbox{k}) - \mu))
\end{aligned}$$ Here $E_{QP}$ denotes the dispersion of the ‘quasiparticle band’, $ |
\Psi_0 \rangle$ and $| \Psi_{QP}(\bbox{k})\rangle$ denote the ground state and quasiparticle state, respectively. In other words, the experiment gives a ‘peak’ whose dispersion follows the quasiparticle band $E_{QP}(\bbox{k})$, with the total intensity of the peak being given by the so-called quasiparticle weight $Z(\bbox{k})= | \langle
\Psi_{QP}(\bbox{k})| c_{\bbox{k},\sigma} | \Psi_0 \rangle |^2$. For free particles we have $Z(\bbox{k})=1$, whence the only reason for a sudden vanishing of the peak with changing $\bbox{k}$ can be the Fermi factor $\Theta(E_{QP}(\bbox{k}) - \mu)$, i.e. the crossing of the quasiparticle band through the Fermi energy. Under these circumstances, it would be very easy to infer the Fermi surface geometry from the measured photoelectron spectra, and indeed this very assumption, namely that a sudden drop of the photoemission intensity automatically indicates a Fermi level crossing, has long been made in the interpretation of all experimental spectra on metallic cuprates.\
Several experimental findings have shown, however, that this assumption is not tenable in the cuprates. The first indication comes from the study of the insulating compounds Sr$_2$CuCl$_2$O$_2$[@Wells] and Ca$_2$CuO$_2$Cl$_2$[@Ronning]. Although these insulators cannot have any Fermi surface in the usual sense, which means that the factor $\Theta(E_{QP}(\bbox{k}) - \mu)$ is always equal to unity, the experiments show that also in these compounds the quasiparticle peak disappears as one passes from inside the noninteracting Fermi surface to outside. Thereby a particularly striking feature of the experimental data is the sharpness of the drop in spectral weight[@Wells], which is for example along $(1,1)$, quite comparable to the drops seen at the ‘Fermi level crossings’ in the metallic compounds. The only possible explanation for this phenomenon is a quite dramatic $\bbox{k}$-dependence of the quasiparticle weight, $Z(\bbox{k})$. Apparently in these compounds we have very nearly $Z(\bbox{k}) \propto \Theta(E_{free}(\bbox{k}) -
\mu)$, i.e. the $\bbox{k}$ dependence of $Z(\bbox{k})$ resembles that of the noninteracting system.\
This result immediately raises the question as to how significant the ‘Fermi level crossings’ observed in the metallic compounds really are. That they may, in some cases, have little or no significance for the true Fermi surface topology has been demonstrated by the recent controversy as to whether the Fermi surface in the most exhaustively studied compound, Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$, is hole-like (as inferred from a large number of studies[@ShenDessau; @Golden; @Fretwell; @Mesot] with photon energy $22\;eV$) or electron-like (as concluded by several recent studies[@Dessau; @Feng; @Gromko; @Bogdanov] at photon energy $33\;eV$). Here it should be noted that the true Fermi surface topology is an intrinsic property of the material which can under no circumstances change with the photon energy. It follows from these considerations that to extract any meaningful information from angle-resolved photoemission we need an understanding of the matrix elements and their $\bbox{k}$-dependence, as well as other effects which might possibly influence the intensity of the ARPES signal. Motivated by these considerations, we have performed a theoretical analysis of the spectral weight of strongly correlated electron models. In section II we derive a simple expression for the photoelectron current, which can be applied e.g. in numerical calculations for strong correlation models. In section III we specialize this to the CuO$_2$ plane, and in section IV we discuss the angular ‘radiation characteristics’ of a Zhang-Rice singlet and how these could be exploited to optimize the photoemission intensity from the respective states. We also show that photoelectrons are emitted predominantly at small angles with respect to the CuO$_2$ plane. In section V we point out that this may lead to the injection of these photoelectrons into surface resonance states. Fano-resonance between the surface resonance and the continuum of LEED states then leads to a strong energy dependence of the ARPES signal and we show that already a very simple free-electron model can explain the experimental energy dependence of the first ionization states in Sr$_2$CuCl$_2$O$_2$ measured recently by Dürr [*et al.*]{}[@duerr] surprisingly well. In section VI we describe how the interplay between surface state dispersion and Fano-resonance between the processes of direct emission and emission via a surface state can mimic a Fermi level crossing where none exists, and suggest that the apparent change in Fermi surface topology with photon energy may be due to such ‘apparent Fermi surfaces’. Section VII contains our conclusions.
Photoemission intensities for strong correlation models
=======================================================
Photoelectrons with a kinetic energy in the range $10-100eV$ have wavelengths comparable to the distances between individual atoms in the CuO$_2$ plane. It follows that whereas the [*eigenvalue spectrum*]{} of the plane probably can be described well by an effective single band model[@ZhangRice], this is not possible for the [*matrix elements*]{}. We necessarily have to discuss (at least) the full three-band model.\
Our first goal therefore is to derive a representation of the photoemission process in terms of the electron annihilation operators for the Cu $3d_{x^2-y^2}$ and O $2p\sigma$ orbitals in the CuO$_2$ plane. In other words, we seek an operator of the form $$\tilde{c}_{\bbox{k},\sigma} = m(d_{x^2-y^2}) d_{\bbox{k},\sigma} +
m(p_x) p_{x,\bbox{k},\sigma} + m(p_y) p_{y,\bbox{k},\sigma}$$ such that the single particle spectral density of this operator $$\tilde{A}(\bbox{k},\omega) = \frac{1}{\pi} \langle 0|
\tilde{c}_{\bbox{k},\sigma}^\dagger \frac{1}{\omega - (H-E_0) - i0^+}
\tilde{c}_{\bbox{k},\sigma}|0 \rangle$$ evaluated for the correlated electron model in question reproduces the experimental photoemission intensities. Here $p_{x,\bbox{k},\sigma}$ and $p_{y,\bbox{k},\sigma}$ are the annihilation operators for an electron in an $x$- and $y$-directed $\sigma$-bonding oxygen orbital, and $d_{\bbox{k},\sigma}$ annihilates an electron in the $d_{x^2-y^2}$ orbital.\
To that end, let us first consider the problem of a single atom (which may be either Cu or O). The calculation is similar to that outlined in Refs. [@Matsushita] and [@Moskvin]. We want to study photoionization, i.e. an optical transition from a localized valence orbital into a scattering state with energy $E$. The dipole matrix element for light polarized along the unit vector $\bbox{\epsilon}$ reads: $$m_{i\rightarrow f} = \int d\bbox{r}\; \Psi_f^*( \bbox{r})\;(
\bbox{\epsilon} \cdot \bbox{r})\; \Psi_i( \bbox{r})$$ Here the initial state is taken to be a CEF state with angular momentum $l'=1,2$ and crystal-field label $\alpha=p_x,p_y,d_{x^2-y^2}\dots$: $$\Psi_{i}( \bbox{r}) = \sum_{m'} c_{\alpha m'} Y_{l',m'}(\bbox{r}^0)
\;R_{nl'}(r).$$ We consider this state to be ‘localized’, that means the radial wave function $R_{nl'}(r)$ is zero outside the atomic radius $r_0$. For the wave function of the final (=scattering) state we choose: $$\Psi_f( \bbox{r}) = \left\{
\begin{array}{l c}
Y_{lm}(\bbox{r}^0)\; \frac{2}{r}\sin(kr+\delta_{l}-\frac{l\pi}{2}) & r > r_0 \\
\; \\
Y_{lm}(\bbox{r}^0) \;\frac{1}{n} R_{E,l}(r) & r < r_0
\end{array} \right.$$ The real functions $R_{nl'}(r)$ and $R_{E,l}(r)$ are both solutions of the radial Schrödinger equation for a suitably chosen atomic potential. The scattering phase $\delta_{l}$ and the prefactor $\frac{1}{n}$ are determined by the condition that the wave function $\Psi_f( \bbox{r})$ and its derivative be continuous at $r=r_0$. Details are given in Appendix I. We note that the scattering phase $\delta_{l}$ also plays an important role in the interpretation of EXAFS spectra (where it is usually called the central atom phase shift) and thus could in principle be determined experimentally (although the relatively high photoelectron kinetic energies important for EXAFS are not ideal for ARPES).\
Representing the dipole operator as $$\bbox{\epsilon} \cdot \bbox{r} = \frac{4\pi r}{3} \sum_{\mu=-1}^1
Y_{1,\mu}^*(\bbox{\epsilon})\; Y_{1,\mu}(\bbox{r}^0),$$ we can rewrite the dipole matrix element as $$\begin{aligned}
m_{i\rightarrow f} &=& \sum_l d_{l,m} \; R_{l,l'}(E) \nonumber \\
d_{l,m} &=& \sqrt{ \frac{4\pi}{3} }
\sum_{m'} \; c_{\alpha m'} \;c^1(l,m;l',m') \;
Y_{1,m-m'}^*(\bbox{\epsilon}).
\end{aligned}$$ Here the radial integral is given by $$R_{l,l'}(E) = \frac{1}{n} \int_0^{r_0} dr \; r^3\; R_{E,l}(r)\;
R_{nl'}(r)$$ and the following abbreviation for the angular integrals of three spherical harmonics has been introduced: $$\begin{aligned}
\int d \Omega\;&&
Y_{lm}^* \;(\bbox{r}^0)\; Y_{1,\mu}(\bbox{r}^0)\;
Y_{l'm'}(\bbox{r}^0) = \\
&& \;\;\;\;\delta_{m,m'+\mu}\;
\sqrt{ \frac{3}{4\pi} }\; c^1(l,m;l',m').
\end{aligned}$$ The constants $c^1(l,m;l',m')$ are well-known in the theory of atomic multiplets and tabulated for example in Slater’s book[@Slater]. Knowing the atomic potential the radial integral and thus the entire matrix element could now in principle be calculated.\
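As a cross-check on the angular algebra (an illustration of ours, not part of the original calculation), the coefficients $c^1(l,m;l',m')$ can be generated directly from Gaunt integrals with sympy. The sketch below prints the non-vanishing coefficients for emission from an O $2p$ ($l'=1$) and a Cu $3d$ ($l'=2$) initial state, recovering the dipole selection rule $l = l' \pm 1$ that underlies the partial waves used further below.

```python
import sympy as sp
from sympy.physics.wigner import gaunt

def c1(l, m, lp, mp):
    """Slater coefficient c^1(l,m;l',m') defined through the integral of three
    spherical harmonics quoted above; Y_{lm}^* = (-1)^m Y_{l,-m} is used."""
    mu = m - mp
    if abs(mu) > 1 or abs(m) > l or abs(mp) > lp:
        return sp.S.Zero
    return sp.sqrt(4 * sp.pi / 3) * sp.Integer(-1) ** m * gaunt(l, 1, lp, -m, mu, mp)

# partial waves reached from O 2p (l'=1) and Cu 3d (l'=2) initial states
for lp, label in [(1, "O 2p"), (2, "Cu 3d")]:
    for l in range(5):
        vals = [c1(l, m, lp, 0) for m in (-1, 0, 1)]
        if any(v != 0 for v in vals):
            print(f"{label} (l'={lp}) -> l={l}: c^1(l,m;l',0) =", vals)
```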
Next, we assume that the radial part of $\Psi_f(\bbox{r})$ at large distances is decomposed into outgoing and incoming spherical waves. Observing the outgoing spherical wave at large distance under a direction defined by the polar angles $\Theta_k$ and $\Phi_k$, it may locally be approximated by a phase shifted plane wave: $$\Psi_f( \bbox{r}) \approx \left( \frac{e^{i \bbox{k}\cdot
\bbox{r}}}{r}\right)\;
Y_{l,m}(\bbox{k}^0)\;(-i)^{l+1}\;e^{i\delta_{l}},$$ where $\bbox{k} = \sqrt{\frac{2mE}{\hbar^2}} \bbox{r}^0$.\
Let us next consider an array of identical atoms in the $(x,y)$-plane of our coordinate system (which we take to coincide with the CuO$_2$ plane in all that follows). To describe the final state after ejection of an electron from the atom at site $j$ we would simply have to replace throughout $\bbox{r} \rightarrow \bbox{r} - \bbox{R}_j$ in the above calculation, where $\bbox{R}_j$ denotes the position of the atom. If we want to give the created photohole a definite in-plane momentum $\bbox{k}_{\|}$, however, we have to form a coherent superposition of such states, that means weighted by the Bloch factors $\frac{1}{\sqrt{N}}e^{- i \bbox{k}_{\|} \cdot \bbox{R}_j}$, where $N$ denotes the number of atoms in the plane. At the remote distance $\bbox{r}$ we consequently replace $\bbox{r} \rightarrow \bbox{r} -
\bbox{R}_j$, leaving the polar angles $\Theta_k$ and $\Phi_k$ unchanged, multiply by $\frac{1}{\sqrt{N}}e^{- i \bbox{k}_{\|} \cdot
\bbox{R}_j}$ and sum over $j$.\
The photoelectron wave function at $\bbox{r}$ then becomes $$\begin{aligned}
\Psi_f( \bbox{r}) &\rightarrow&\left( \frac{e^{i \bbox{k}\cdot \bbox{r}} }{r}\right)
Y_{l,m}(\bbox{k}^0)\;(-i)^{l+1}\;e^{i\delta_{l}}
\nonumber \\
&&\;\;\;\;\;\;\;\;
\frac{1}{\sqrt{N}} \sum_j e^{i (\bbox{k}-\bbox{k}_{\|}) \cdot \bbox{R}_j}
\nonumber \\
&=&\left( \frac{e^{i \bbox{k}\cdot \bbox{r}} }{r}\right)
Y_{l,m}(\bbox{k}^0)\;(-i)^{l+1}\;e^{i\delta_{l}}\nonumber \\
&&\;\;\;\;\;\;\;\;\;
\sqrt{N} \delta_{\bbox{k}_{\|} + \bbox{G}_{\|},\bbox{k}_{x,y}}.
\end{aligned}$$ Here $\bbox{k}_{x,y}$ denotes the projection of $\bbox{k}$ onto the $(x,y)$-plane and $\bbox{G}_{\|}$ is a 2D reciprocal lattice vector. An important feature of this result is the fact that by creating a photohole with momentum $\bbox{k}_{\|}$ (which must belong to the first Brillouin zone) the photoelectrons may well be emitted with the 3D momentum $(\bbox{k}_{\|}+\bbox{G}_{\|}, k_\perp)$, that means the parallel momentum component of the photoelectrons need not be equal to $\bbox{k}_{\|}$.\
Summing over all possible partial waves $(l,m)$ and introducing the abbreviation $\tilde{R}_{l,l'}(E)= R_{l,l'}(E) e^{i\delta_l}$ the electron current per solid angle at $\bbox{r}$ due to the in-plane orbitals of the type $(l',\alpha)$ finally becomes $${\bf j} = \frac{4N \hbar {\bf k}}{m}\; | \sum_{l,m}\; d_{lm}\;
\tilde{R}_{l,l'}(E)\; Y_{lm}(\bbox{k}^0)\;(-i)^{l+1} |^2 .$$ Here $N$ should be taken equal to the number of unit cells within the sample area illuminated by the photon beam.\
After some algebra (Appendix II) this can be brought to the form $${\bf j} = \frac{4N \hbar {\bf k}}{m}\;
|\sum_l \tilde{R}_{l,l'}(E)\;(\bbox{v}_{l,\alpha}(\bbox{k}^0) \cdot
\bbox{\epsilon})\; |^2.
\label{sing}$$ Here all the angular dependence on the shape of the original CEF-level has been collected in the vectors $\bbox{v}_{l,\alpha}$ - it is important to note, that these vectors are obtained by standard angular-momentum recoupling so that the resulting angular dependence is [*exact*]{}. It is only the ‘radial matrix elements’ $\tilde{R}_{l,l'}(E)$ which have to be calculated approximately and thus may be prone to inaccuracies.\
So far we have limited ourselves to one specific type of orbital in the plane. If we allow photoemission from any type of orbital we simply have to add up the prefactors of the plane-wave states [*before*]{} squaring to compute the photocurrent.\
Summarizing the preceding discussion we can conclude that the proper electron annihilation operator to describe the photoemission process would be $$\begin{aligned}
\tilde{c}_{\bbox{k}_{\|},\sigma} &=& \sum_{l,\alpha}
e^{ -i \bbox{k}_{\|}\cdot \bbox{R}_\alpha}\;
\left( \tilde{R}_{l,l'(\alpha)}(E)\;
( {\bbox v}_{l,\alpha}(\bbox{k}^0) \cdot
\bbox{\epsilon})\; \right)\; c_{\alpha,\bbox{k}_{\|},\sigma}
\nonumber \\
&=& \sum_{\alpha} \tilde{\bf v}(\alpha)\cdot
\bbox{\epsilon}\; c_{\alpha,\bbox{k}_{\|},\sigma} .
\end{aligned}$$ Here $\bbox{R}_\alpha$ denotes the position of the orbital $\alpha$ within the unit cell. Suppressing the spin and momentum index for simplicity, the total ARPES intensity then would be given (up to an energy and momentum independent prefactor) by $$\begin{aligned}
I(\omega) &=& \sum_{\alpha,\beta}
\left( \tilde{\bf v}(\alpha) \cdot \bbox{\epsilon}\right)^*
\left( \tilde{\bf v}(\beta) \cdot \bbox{\epsilon}\right)\;
\Im R_{\alpha,\beta}(\omega -i 0^+) \\
\label{intens}
R_{\alpha,\beta}(z) &=&
\frac{1}{\pi}
\langle 0| c_\alpha^\dagger\; \frac{1}{\omega - (H-E_0) -i0^+}\; c_\beta |0\rangle.
\label{resolv}
\end{aligned}$$ Whereas the scalar products $(\tilde{\bf v}(\alpha) \cdot
\bbox{\epsilon})$ take into account the interplay between the real-space shape of the orbitals, the polarization of the incident light and the direction of electron emission, the spectral densities $R_{\alpha,\beta}(z)$ incorporate the possible many-body effects in the CuO$_2$ planes - we thus have the desired recipe for studying photoemission intensities in the framework of a strong correlation model.\
To conclude this section we note that we have actually performed only the first stage of the calculation within the so-called three-step model. We give a brief list of complications that we have neglected: the emission from lower planes than the first and the extinction of the respective photoelectron intensity, any diffraction of the outgoing electron wave function from the surrounding atoms, the refraction of the photoelectrons as they pass the potential step at the surface of the solid. It is thus quite obvious that our theory is strongly simplified.
Application to the three-band model
===================================
We now want to discuss some consequences of the results in the preceding section, thereby using mainly the standard three-band Hubbard-model to describe the CuO$_2$ plane. We first consider the noninteracting limit $U=0$. Here we use a $4$-band model which was introduced by Andersen [*et al.*]{}[@andersen] to describe the LDA bandstructure of YBa$_2$Cu$_3$O$_7$. In addition to the Cu 3$d_{x^2-y^2}$ orbital and the two $\sigma$-bonding O $2p$ orbitals this model includes a Cu $4s$ orbital, which produces the so-called $t'$ and $t''$ terms in the single-band model - which are essential to obtain the correct Fermi surface topology. Suppressing the spin index the creation operators for single [*electron*]{} eigenstates read $$\gamma_{\bbox{k},\nu}^\dagger = \alpha_{1,\nu} p_{\bbox{k},x}^\dagger
+ \alpha_{2,\nu} p_{\bbox{k},y}^\dagger + \alpha_{3,\nu}
d_{\bbox{k}}^\dagger + \alpha_{4,\nu} s_{\bbox{k}}^\dagger$$ where $\bbox{\alpha}_\nu$ denotes the $\nu^{th}$ eigenvector of the matrix
$$H = \left(
\begin{array}{c c c c}
0 & 0 & 2i t_{pd} \sin(\frac{k_x a}{2}) & 2i t_{sp} \sin(\frac{k_x a}{2}) \\
0 & 0 & -2i t_{pd} \sin(\frac{k_y a}{2}) & 2i t_{sp} \sin(\frac{k_y a}{2}) \\
-2i t_{pd} \sin(\frac{k_x a}{2}) & 2i t_{pd} \sin(\frac{k_y a}{2}) & \Delta & 0 \\
-2i t_{sp} \sin(\frac{k_x a}{2}) & -2i t_{sp} \sin(\frac{k_y a}{2}) & 0 & \Delta_s \\
\end{array} \right)$$
Values of the parameters $t_{pd}$, $t_{sp}$, and $\Delta$ are given in Ref. [@andersen].\
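For orientation, the following sketch diagonalizes this $4\times 4$ Hamiltonian numerically; the parameter values are placeholders chosen only for illustration (they are not the values of Ref. [@andersen]). At $(\pi,0)$ the output shows explicitly that the $p_y$ orbital decouples, a fact used in the discussion below.

```python
import numpy as np

# illustrative diagonalization of the 4-band Hamiltonian above;
# the parameters below are placeholder values, not those of Ref. [andersen]
t_pd, t_sp, Delta, Delta_s, a = 1.6, 2.0, 2.7, 6.5, 1.0

def H4(kx, ky):
    sx, sy = np.sin(kx * a / 2), np.sin(ky * a / 2)
    return np.array([
        [0,                0,                2j * t_pd * sx,  2j * t_sp * sx],
        [0,                0,               -2j * t_pd * sy,  2j * t_sp * sy],
        [-2j * t_pd * sx,  2j * t_pd * sy,   Delta,           0             ],
        [-2j * t_sp * sx, -2j * t_sp * sy,   0,               Delta_s       ]])

kx, ky = np.pi, 0.0                      # the (pi,0) point discussed below
E, alpha = np.linalg.eigh(H4(kx, ky))    # columns of alpha = (alpha_1,...,alpha_4)
for nu in range(4):
    px, py, d, s = np.abs(alpha[:, nu]) ** 2
    print(f"band {nu}: E = {E[nu]:+6.2f}   |p_x|^2={px:.2f}  |p_y|^2={py:.2f}"
          f"  |d|^2={d:.2f}  |s|^2={s:.2f}")
```

The weights $|\alpha_{i,\nu}|^2$ printed here are the quantities that enter the intensity formula (\[tbmat\]) below.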
If we consider the planar momentum $\bbox{k}_{\|} = \bbox{G} + \bbox{k}$, where $\bbox{k}$ is in the first BZ and $\bbox{G}$ is a reciprocal lattice vector, the photoemission intensity for the $\nu^{th}$ band becomes $$\begin{aligned}
I &\propto& |
(\bbox{v}(p_x) \cdot \bbox{\epsilon})\; e^{-i G_x/2} \alpha_1^*
+ (\bbox{v}(p_y) \cdot \bbox{\epsilon})\; e^{-i G_y/2}\alpha_2^*\nonumber \\
&& + (\bbox{v}(d_{x^2-y^2}) \cdot \bbox{\epsilon})\; \alpha_3^*
+ (\bbox{v}(s) \cdot \bbox{\epsilon})\; \alpha_4^* |^2.
\label{tbmat}
\end{aligned}$$ Using this we proceed to a detailed comparison with the work of Bansil and Lindroos[@banroos]. These authors have performed an extensive first-principles study of the ARPES intensities in Bi2212, thereby using the more realistic one-step model of photoemission and a complete surface band-structure, both for the initial and final state wave function. Amongst others, Bansil and Lindroos studied the variation of the peak-weight along the Fermi surface for given direction of the polarization vector $\bbox{\epsilon}$. Their results are shown in Figure \[bansilcomp\] and compared to the above theory. Obviously the overall trends seen by Bansil and Lindroos are reproduced reasonably well by the present theory and we want to give a brief discussion of the mechanisms which lead to these trends.\
To begin with, to a first approximation it is the ‘oxygen content’ of the wave function which determines the ARPES intensity. This is not so much due to the smallness of the radial matrix elements for Cu (which are quite comparable to the ones for oxygen, see the Appendix), but rather to the fact that the partial waves emitted by a $d_{x^2-y^2}$ orbital produce virtually no intensity close to the surface normal (as will be shown below).
For an electron energy of $\approx 20 \;eV$ the detector would have to be placed at $\approx 20^o$ from the surface normal (neglecting the refraction by the potential step at the surface), and the $d_{x^2-y^2}$ orbital emits practically no electrons into this direction.\
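This detection geometry follows from simple free-electron kinematics. The sketch below (with an assumed in-plane lattice constant of about $3.8$ Å, our own illustrative value for Bi2212) gives the polar angle at which a photoelectron carrying the in-plane momentum $(\pi,0)$ leaves the surface for a few kinetic energies, reproducing the $\approx 20^o$ figure quoted above.

```python
import numpy as np

HBAR2_2M = 3.81               # hbar^2 / (2 m_e) in eV * Angstrom^2
a = 3.8                       # assumed in-plane lattice constant in Angstrom
k_par = np.pi / a             # |k_parallel| at the (pi,0) point

for T in (20.0, 26.0, 34.0):  # photoelectron kinetic energies in eV
    theta = np.degrees(np.arcsin(k_par / np.sqrt(T / HBAR2_2M)))
    print(f"T = {T:4.0f} eV  ->  emission angle ~{theta:5.1f} deg from the surface normal")
```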
Next we note that light polarized along the $(1,0)$ direction can only excite electrons from $p_x$-type orbitals, light polarized along the $(1,1)$ direction will excite the combination $p_x +p_y$, whereas light polarized along $(1,-1)$ excites $p_x -p_y$. Finally one has to bear in mind that near $(\pi,0)$ the tight-binding wave function contains only $p_x$-type orbitals (the mixing between $p_y(\pi,0)$ and $d_{x^2-y^2}(\pi,0)$ being exactly zero). Therefore the polarizations $(1,1)$ and $(1,-1)$ have equal intensity, whereas $(1,0)$ gives maximum intensity. Near $(0,\pi)$, on the other hand, the tight-binding wave function contains only $p_y$, whence the polarizations $(1,1)$ and $(1,-1)$ again have equal intensity, whereas $(1,0)$ this time gives no intensity. For $\bbox{k}\; {\|}\; (1,1)$ exciting $p_x +p_y$ gives no intensity, because this combination does not mix with $d_{x^2-y^2}$ and hence is not contained in the tight-binding wave function, whereas exciting $p_x -p_y$ gives high intensity. The intensity for polarization $(1,0)$ at these momenta is approximately $1/2$ of that for $(\pi,0)$. These simple considerations obviously explain the overall shape of the curves in Figure \[bansilcomp\] quite well. The main discrepancy between the Bansil-Lindroos theory and our calculation is the behaviour of the curve for polarization $(1,-1)$. Actually, this is the only curve which is not determined by symmetry alone, and subtle details of the wave function become important. We believe that the parameterization of Andersen [*et al.*]{} [@andersen] does give the correct dispersion, but not necessarily the right wave functions. Next we consider the intensity variation along various high-symmetry lines in the Brillouin zone, shown in Figure \[bansilcomp2\].
Again, there is reasonable agreement between Bansil and Lindroos and our theory. All in all the comparison shows that also in the results of Bansil and Lindroos the behaviour of the intensity is to a considerable extent determined by the ‘radiation characteristics’ of the orbitals in the CuO$_2$ plane and the relative weight of the $p_x$ and $p_y$ orbitals. Additional complications like multiple scattering of the photoelectrons, Bragg scattering from the BiO surface layer etc. do not seem to have a very strong impact on the intensity variations, not even at the relatively low photon energy of $22\;eV$. Despite its simplicity we therefore believe that our theory has some merit, particularly so because it allows one (unlike the single particle calculation of Bansil and Lindroos) to incorporate the effects of strong correlations. One weak point of all single-particle-like calculations for the CuO$_2$-plane is the following: since (in electron language) the band which forms the Fermi surface is the topmost one obtained by mixing the energetically higher Cu 3$d_{x^2-y^2}$-orbital with the energetically lower O 2p$\sigma$ orbitals it is clear that the respective wave functions have predominant Cu 3$d_{x^2-y^2}$ character. This is exactly opposite to the actual situation in the cuprates, where the first ionization states in the doped and undoped case are known to have predominant O 2p character. We now consider the case of large $U$. An exact calculation of the single-particle spectra (\[resolv\]) is no longer possible in this case and we have to use various approximations. First we study an isolated Zhang-Rice singlet (ZRS) in a single CuO$_4$ plaquette, see Figure \[plaqu\]. The bonding combination of $O$ $2p\sigma$-orbitals is $$p_{b,\sigma}^\dagger = \frac{1}{2}( p_{1,\sigma}^\dagger
- p_{2,\sigma}^\dagger - p_{3,\sigma}^\dagger + p_{4,\sigma}^\dagger )$$ and the single-hole basis states (relevant at half-filling) can be written as $$\begin{aligned}
\label{eq:1holestates}
|1\rangle &=& p_{b,\sigma} |full\rangle, \nonumber \\
|2\rangle &=& d_{\sigma} |full\rangle.
\end{aligned}$$ Here $|full\rangle$ denotes the $Cu3d^{10} \otimes 4\; O2p^6$ state. The single hole ground state then is $|\Psi_0^{(1h)}\rangle=\alpha |1\rangle + \beta |2\rangle$ where $(\alpha,\beta)$ is the normalized ground state eigenvector of the matrix $$H = \left( \begin{array}{c c}
0 & 2t_{pd} \\
2 t_{pd} & -\Delta
\end{array} \right)
\label{singleham}$$ Two-hole states are obtained by starting from the basis states $$\begin{aligned}
\label{eq:2holestates}
|1'\rangle &=& p_{b,\uparrow} p_{b,\downarrow} |full\rangle \nonumber \\
|2'\rangle &=& \frac{1}{\sqrt{2}}( p_{b,\uparrow} d_{\downarrow} +
d_{\uparrow} p_{b,\downarrow}) |full\rangle \nonumber \\
|3'\rangle &=& d_{\uparrow} d_{\downarrow} |full\rangle
\end{aligned}$$ The two-hole ground state of the plaquette reads $|\Psi_0^{(2h)}\rangle=\alpha' |1'\rangle + \beta' |2'\rangle + \gamma' |3'\rangle$, where $(\alpha',\beta',\gamma')$ is an eigenvector of $$H = \left( \begin{array}{c c c}
0 & 2\sqrt{2}t_{pd} & 0\\
2\sqrt{2} t_{pd} & -\Delta & 2\sqrt{2}t_{pd} \\
0 & 2\sqrt{2}t_{pd} & -2\Delta +U
\end{array} \right).
\label{doubleham}$$ If we want to make contact with the $t-J$ model, where a ‘hole’ at site $i$ stands for a ZRS in the plaquette centered on the copper site $i$, we have to incorporate a phase factor of $e^{-i \bbox{k}_{\|} \cdot \bbox{R}_i}$ into the definition of $|\Psi_0^{(2h)}\rangle$, where $\bbox{R}_i$ is the position of the central Cu orbital. In other words, the matrix element for the creation of a ‘hole’ in the $t-J$ model is $$m_{ZRS} = e^{i \bbox{k}_{\|} \cdot \bbox{R}_i}
\langle \Psi_0^{(2h)} | \tilde{c}_{\bbox{k},\sigma} |\Psi_0^{(1h)}\rangle .
\label{mzrs}$$ The matrix elements for the creation of $e^{-i \bbox{k}_{\|} \cdot \bbox{R}_j} p_{b,\sigma}|full\rangle$ and $e^{-i \bbox{k}_{\|} \cdot \bbox{R}_j} d_{\sigma}|full\rangle$ are $$\begin{aligned}
m_b &=& i \left( \bbox{v}(p_y) \sin(\frac{k_y}{2}) -
\bbox{v}(p_x) \sin(\frac{k_x}{2} )\right)\cdot \bbox{\epsilon},\nonumber \\
m_d &=& \bbox{v}(d_{x^2-y^2}) \cdot \bbox{\epsilon}.
\label{ZRSform}
\end{aligned}$$ Finally, the matrix element for creation of a ZRS from the single-hole ground state becomes $$\begin{aligned}
m_{ZRS} &=& \alpha'^* \alpha m_b + \frac{\beta'^*}{\sqrt{2}}(
\alpha m_d + \beta m_b ) \\
&& + \gamma'^* \beta m_d.
\end{aligned}$$ Using this expression, the intensity expected for a ZRS with momentum $\bbox{k}$ can be calculated as a function of photon polarization and energy. This will be discussed in the next sections.\
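As a concrete illustration (a sketch of ours, not the authors' code), the ground-state amplitudes $(\alpha,\beta)$ and $(\alpha',\beta',\gamma')$ entering $m_{ZRS}$ follow from a direct diagonalization of (\[singleham\]) and (\[doubleham\]); the parameter values used below are the ‘standard values’ quoted later in the text.

```python
import numpy as np

t_pd, Delta, U = -1.3, 3.6, 10.5          # 'standard values' (eV) quoted in the text
r8 = 2 * np.sqrt(2) * t_pd

H1 = np.array([[0.0,      2 * t_pd],
               [2 * t_pd, -Delta  ]])                        # eq. (singleham), basis (|1>, |2>)
H2 = np.array([[0.0,  r8,    0.0           ],
               [r8,  -Delta, r8            ],
               [0.0,  r8,   -2 * Delta + U ]])               # eq. (doubleham), basis (|1'>, |2'>, |3'>)

E1, v1 = np.linalg.eigh(H1)               # ascending hole energies; index 0 = ground state
E2, v2 = np.linalg.eigh(H2)
alpha, beta = v1[:, 0]                    # overall signs of the eigenvectors are arbitrary
ap, bp, gp = v2[:, 0]
print(f"one-hole ground state:  (alpha, beta) = ({alpha:+.3f}, {beta:+.3f})")
print(f"two-hole ground state:  (alpha', beta', gamma') = ({ap:+.3f}, {bp:+.3f}, {gp:+.3f})")
```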
In comparing the intensities calculated from our above theory for a single ZRS to experiment one would be implicitly assuming that the first ionization states can be described completely by a coherent superposition of ZRS [*in a single plaquette*]{}. This need not be the case and in order to get at least a rough feeling for the effects of ‘embedding the plaquette in a lattice’ we use the technique of cluster perturbation theory (CPT) to study the extended system. CPT, which was first suggested by Senechal [*et al.*]{}[@Senechal], is a technique to ‘extrapolate’ the single-electron Green’s function calculated on a finite cluster to an infinite periodic system. It is based on a perturbative treatment of the intercluster hopping. The cluster we used for the present calculation is a square arrangement of four unit cells, each containing one Cu d$_{x^2-y^2}$ and two O p$_{x,y}$ orbitals, as shown in figure \[fig1\]. The Green’s function $$G_{ij\sigma}=\left< \Phi_0\right|
c_{i\sigma}\frac{1}{z-H}c^\dagger_{j\sigma}\left| \Phi_0\right>
+\left< \Phi_0\right| c^\dagger_{j\sigma}\frac{1}{z-H} c_{i\sigma}\left|
\Phi_0\right>$$ of this cluster with open boundary conditions is calculated by the Lanczos method. The boundary orbitals of the cluster are connected to the adjacent cluster by the Fourier transform of the intercluster hopping $V({\bf Q})$, where ${\bf Q}$ is a “superlattice” wave-vector restricted to the smaller Brillouin zone formed by the now enlarged lattice of the clusters. The perturbative treatment of the intercluster hopping in lowest order yields an RPA like expression for the approximate Green’s function. $${\cal G}_{ij\sigma}\left( {\bf Q},\omega\right)=\left(
\frac{1}{G^{-1}(\omega)_{\sigma}-V({\bf Q})}\right)_{ij}
.$$ This mixed representation can then be transformed into momentum space by a Fourier transformation thereby taking into account the geometry of the cluster: $$G_{CPT}\left({\bf k},\omega\right)= \frac{1}{N} \sum_{ij}
e^{i{\bf k}\left( {\bf r}_i-{\bf r}_j\right)}
{\cal G}_{ij\sigma}\left( {\bf N}{\bf k},\omega\right).$$ This result is exact for $U=0$ and has been shown[@Senechal] to produce good results also for large and intermediate $U$. In the following sections we will use this technique for the approximate calculation of the single-particle spectral densities as a valuable cross-check for the calculations based on a single ZRS. While CPT still is far from being rigorous, it includes some effects which result from embedding the ZRS in a lattice and, as we will in fact see, it always gives results which are very similar to those for a ZRS.\
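The periodization step is easiest to see in a case where CPT can be checked against an exact result. The sketch below (our own toy example, not the CuO$_2$ cluster of figure \[fig1\]) applies the CPT construction to a non-interacting one-dimensional chain with a two-site cluster; since CPT is exact at $U=0$, the periodized Green’s function reproduces the tight-binding result $1/(\omega+2t\cos k)$. The CuO$_2$ cluster is treated in exactly the same way, with the Lanczos cluster Green’s function taking the place of the simple resolvent used here.

```python
import numpy as np

t, L = 1.0, 2                            # hopping and cluster size (sites at r = 0, 1)
r = np.arange(L)
Hc = np.array([[0.0, -t], [-t, 0.0]])    # open two-site cluster

def G_cpt(k, w):
    Gc = np.linalg.inv(w * np.eye(L) - Hc)        # cluster Green's function (U = 0)
    V = np.zeros((L, L), dtype=complex)           # intercluster hopping V(Q) at Q = k
    V[1, 0] = -t * np.exp(1j * k * L)             # last site of one cluster -> first of the next
    V[0, 1] = -t * np.exp(-1j * k * L)
    G = np.linalg.inv(np.linalg.inv(Gc) - V)      # RPA-like CPT expression
    phase = np.exp(-1j * k * (r[:, None] - r[None, :]))
    return (phase * G).sum() / L                  # periodization to momentum k

w = 0.3 + 0.05j
for k in np.linspace(-np.pi, np.pi, 7):
    assert np.allclose(G_cpt(k, w), 1.0 / (w + 2 * t * np.cos(k)))
print("CPT reproduces the exact U=0 Green's function of the 1D chain")
```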
To conclude this section we want to give a simple estimate for the photoelectron kinetic energy $T$ as a function of photon energy $h\nu$. It is easy to see that in the absence of any potential step at the surface of the solid we would have $$T = h\nu + E_0^{(N)} - E_{\bbox{k}}^{(N-1)}$$ where $E_0^{(N)}$ is the energy of the ground state of the solid with $N$ electrons and $E_{\bbox{k}}^{(N-1)}$ the energy of the final state of the solid. We approximate $$E_0^{(N)} - E_{\bbox{k}}^{(N-1)} \approx
( E_0^{1h} - E_0^{2h} ) - E_{2p}$$ Here $E_0^{nh}$ are the ground state energies of a CuO$_4$ plaquette with $n$ holes, obtained by diagonalizing the matrices (\[singleham\]) and (\[doubleham\]). In these matrices the energy of the $O2p$ level, $E_{2p}$, has been chosen as the zero of energy, whence it has to be taken into account separately. For the ‘standard values’ of the parameters $t_{pd}=-1.3\;eV$, $\Delta=3.6\;eV$ and $U=10.5\;eV$ this gives $E_0^{1h} - E_0^{2h} = 1.93\;eV$. Using the estimate $E_{2p} \approx -10\;eV$ from our atomic LDA calculation we find $$T \approx h\nu - 8\; eV,$$ which we will use in all that follows.\
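Reusing the plaquette matrices from the sketch above, the quoted number is easily reproduced (again only an illustrative sketch of ours):

```python
import numpy as np

t_pd, Delta, U = -1.3, 3.6, 10.5
r8 = 2 * np.sqrt(2) * t_pd
E1h = np.linalg.eigvalsh([[0.0, 2 * t_pd], [2 * t_pd, -Delta]])[0]
E2h = np.linalg.eigvalsh([[0.0, r8, 0.0],
                          [r8, -Delta, r8],
                          [0.0, r8, -2 * Delta + U]])[0]
print(f"E_0^(1h) - E_0^(2h) = {E1h - E2h:.2f} eV")   # ~1.93 eV, as quoted above
# combined with the atomic estimate E_2p ~ -10 eV this yields the
# T ~ h*nu - 8 eV rule used in the remainder of the text
```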
As a first application, Figure \[bansilcomp4\] then shows the intensity of the topmost ARPES peak calculated by CPT and compares this to experimental data. Reasonable qualitative agreement can be found, especially for the CPT calculation, although the shift of the spectral weight maximum towards $k=(\pi,0)$ for $22\;eV$ is not reproduced. The much better agreement for $E_{photon}=34eV$ suggests that for the lower photon energy of $22\;eV$ additional effects (such as multiple scattering corrections or Bragg-scattering from the BiO top-layer) are more important.
Application to experiment
=========================
Coming back to the theory for the ZRS we can already draw conclusions of some importance. By combining the expressions for the vectors $\bbox{v}_{l,\alpha}$ from Appendix II with the ‘form factor’ of a ZRS, (\[ZRSform\]), the expression for the photocurrent can be brought to the form $$\bbox{j} \propto | \sum_{l,l'} \tilde{R}_{l,l'}(E)
\;(\bbox{v}_{l,l'}(\bbox{k}_{\|},\bbox{k})\cdot{\bf \epsilon})\; |^2.
\label{total}$$ We remember that $\bbox{k}_{\|}$ is the momentum of the photohole (which is within the first Brillouin zone of the CuO$_2$ plane), whereas $\bbox{k}$ is the momentum of the escaping photoelectron. Using the expressions from Appendix II it is now a matter of straightforward algebra to derive the following expressions for $\bbox{k}_{\|}=(k,0)$, and the ‘optimal’ photon polarization for momenta along $(1,0)$, $\bbox{\epsilon}=(1,0,0)$: $$\begin{aligned}
\bbox{v}_{0,1}\cdot\bbox{\epsilon} &=& \frac{\sin(k/2)}{\sqrt{12\pi}} ,\\
\bbox{v}_{1,2}\cdot\bbox{\epsilon} &=& -i \sqrt{\frac{3}{20\pi}} \;\sin(\Theta),\\
\bbox{v}_{2,1}\cdot\bbox{\epsilon} &=& \frac{\sin(k/2)}{\sqrt{12\pi}}
\left( \frac{3}{2}\cos(2\Theta) -\frac{1}{2}\right),\\
\bbox{v}_{3,2}\cdot\bbox{\epsilon} &=& -i \sqrt{\frac{3}{20\pi}}\; \sin(\Theta)\;
(\frac{5}{4}\cos(2\Theta) - \frac{1}{4}).
\end{aligned}$$ Figure \[char\] shows the $\Theta$-dependence of these expressions.
Obviously a ZRS with momentum near $(\pi,0)$ emits photoelectrons predominantly parallel to the CuO$_2$ plane if it is excited with light polarized along $(1,0)$ within the CuO$_2$ plane. The situation is similar, though not as pronounced, for other momenta, such as $(\frac{\pi}{2},\frac{\pi}{2})$. It should be noted that the vectors $\bbox{v}_{l,l'}(\bbox{k}_{\|},\bbox{k})$ are computed [*exactly*]{}, namely by elementary angular-momentum recoupling. The fact that a ZRS with momentum near $(\pi,0)$ emits photoelectrons predominantly at small angles with respect to the CuO$_2$ plane therefore is a rigorous result.\
The radial matrix elements $\tilde{R}_{l,l'}(E)$ which also enter in (\[total\]) actually tend to suppress the emission close to the surface normal even more. Namely one finds (Appendix I) that $R_{0,1} \approx 0.2\; R_{2,1}$, that means the $s$-like partial wave (which would contribute strongly to emission at near perpendicular directions, see Figure \[char\]) has a very small weight due to the radial matrix elements. We note in passing that the smallness of the ratio $R_{0,1}/R_{2,1}$ is well-known in the EXAFS literature[@exafs].\
Combining the partial waves in Figure \[char\] with the proper radial matrix elements $\tilde{R}_{l,l'}(E)$ as in (\[total\]) we expect to obtain a curve with a minimum for some finite $\Theta$ (mainly due to the node in the dominant $d$-like $l=2$ partial wave emitted by the 2p $l'=1$ orbital, see Figure \[char\]). This may in fact explain a well-known[@Dessau] effect in ARPES, namely the relatively strong asymmetry with respect to $(\pi,0)$ of the ARPES intensity for $\bbox{k}_{\|}$ along $(1,0)$ which
is seen at $22\;eV$ photon energy but not at $34\;eV$. Figure \[ZRS-Along\_1\_0\_comb.eps\] shows the calculated intensities as a function of in-plane momentum. For $h\nu \approx 22\;eV$ the angles $\Theta$ which would be appropriate to observe $\bbox{k}_{\|}$ slightly beyond $(\pi,0)$ (i.e. in the second zone) are such that one is ‘looking into the node’ of the radiation characteristics of the ZRS, whereas for smaller $\bbox{k}_{\|}$ one is looking at the maximum (see the inset). Hence there is a strong asymmetry around $(\pi,0)$. Increasing the energy to $\ge 30\;eV$ the angles $\Theta$ become smaller, one is no longer sampling the node, and the intensity is much more symmetric around $(\pi,0)$.\
As shown in the preceding discussion, our theory reproduced, despite its simplicity, some experimental features seen in the cuprates. We therefore proceed to address potential applications in experiment. Thereby the main goal is to find experimental conditions under which the ZRS-derived state at a given momentum ${\bf k}_{\|}$ can be observed with the highest intensity. To that end, we can vary different experimental parameters, mainly the direction of the photon polarization and the Brillouin zone in which we are measuring.\
We first consider the case of normal incidence of the light, that means the electric field vector $\bbox{\epsilon}$ is in the CuO$_2$ plane. Rotating $\bbox{\epsilon}$ in the plane will change the intensity from the ZRS-derived states in a systematic way, and for some angle $\Phi_E$ it will be maximum. Along high symmetry directions like $(1,1)$ or $(1,0)$ the optimal polarization can be deduced by symmetry considerations, because the ZRS has a definite parity under reflections by these directions. For momenta $\bbox{k}$ along $(1,1)$ $\bbox{\epsilon}$ has to be perpendicular to $\bbox{k}$, whereas along $(1,0)$ it must be parallel[@ShenDessau]. For nonsymmetric momenta, however, this symmetry analysis is not possible and one has to calculate the optimum direction. We note that apart from merely enhancing the intensity of the ZRS, knowledge of the polarization dependence of the ARPES intensity of the ‘ideal’ ZRS would also allow one to separate the original signal from ZRS-derived states from any ‘background’, that means photoelectrons which have undergone inelastic scattering on their way to the analyzer. Since it is plausible that these background electrons have more or less lost the information about the polarization of the incoming light, their contribution should be insensitive to polarization. Taking the spectrum at the ‘optimal angle’ for the ZRS and perpendicular to it would then allow one to remove the background by merely subtracting the two spectra (provided one can measure the absolute intensity). Along the high symmetry directions $(1,0)$ and $(1,1)$ the feasibility of this procedure has recently been demonstrated by Manzke [*et al.*]{} [@Manzke] and using the calculated optimal polarizations this analysis could be extended to any point in the BZ.\
For the special case of the ZRS, and neglecting the contribution of copper altogether, it is possible to give a rather simple expression for the optimal angle. The matrix element for creating the bonding combination of O 2p$\sigma$ orbitals is $$\begin{aligned}
m_{ZRS} &\propto& i
\left(\sin(\Phi_E)\sin(\frac{k_y}{2})-\cos(\Phi_E)\sin(\frac{k_x}{2})\right).
\end{aligned}$$ We thus find the angles which give minimum and maximum intensity: $$\begin{aligned}
\sin(\Phi_{E,min})\sin(\frac{k_y}{2})-\cos(\Phi_{E,min})\sin(\frac{k_x}{2})
&=& 0 \nonumber \\
\cos(\Phi_{E,max})\sin(\frac{k_y}{2})+\sin(\Phi_{E,max})\sin(\frac{k_x}{2})
&=& 0
\end{aligned}$$ We see that always $\Phi_{E,min} = \Phi_{E,max} + \frac{\pi}{2}$. Moreover, in this approximation the intensity is exactly zero for $\Phi_{E,min}$, and the angle for maximum intensity is $$\Phi_{E,max} = - \arctan\left(
\frac{\sin(\frac{k_y}{2})}{\sin(\frac{k_x}{2})} \right).
\label{ZRSsimp}$$ This expression takes a particularly simple form along the line $(\pi,0) \rightarrow (0,\pi)$, where $\Phi_{\max} = -k_y/2$. This reproduces the known values $\Phi_{\max} = 0$ along $(1,0)$, $\Phi_{\max} = -\frac{\pi}{4}$ along $(1,1)$ and $\Phi_{\max} = -\frac{\pi}{2}$ along $(0,1)$. Despite its simplicity formula (\[ZRSsimp\]) gives a quite good estimate for the optimum polarization. Numerical evaluation shows that the lines of constant $\Phi_{\max} $ in $\bbox{k}$-space are to very good approximation straight lines through the center of the Brillouin zone. This means that in practice one could adjust the polarization angle $\Phi_E$ once to either $\Phi_{E,max}$ or $\Phi_{E,min}$, and then scan an entire straight line through the origin by varying the emission angle $\Theta_k$ without having to change the polarization along the way.
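As a quick illustration of formula (\[ZRSsimp\]) (again a sketch of ours), the snippet below evaluates the optimal polarization angle along the line $(\pi,0)\rightarrow(0,\pi)$ and confirms the rule $\Phi_{E,max}=-k_y/2$ quoted above; arctan2 is used only to handle the $k_x=0$ endpoint gracefully.

```python
import numpy as np

def phi_max(kx, ky):
    # optimal in-plane polarization angle for the ZRS, formula (ZRSsimp)
    return -np.arctan2(np.sin(ky / 2), np.sin(kx / 2))

for s in np.linspace(0.0, 1.0, 5):        # walk along (pi,0) -> (0,pi)
    kx, ky = np.pi * (1 - s), np.pi * s
    print(f"k = ({kx:4.2f}, {ky:4.2f}):  Phi_max = {np.degrees(phi_max(kx, ky)):6.1f} deg,"
          f"  -k_y/2 = {np.degrees(-ky / 2):6.1f} deg")
```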
Figures \[max-zrs\] and \[max-cpt\] then show the optimal angle for observation of the first ionization state for momenta in the first Brillouin zone. For perpendicular polarization the intensity practically vanishes. In the Figure we compare the ‘isolated ZRS’ and the ‘embedded ZRS’ whose spectra are obtained by CPT. Both methods of calculation give very similar results for the optimal angle, which in turn agrees very well with the simple estimate \[ZRSsimp\]. Since along the high symmetry lines $(1,0)$ and $(1,1)$ the optimal $\Phi_E$ is determined by symmetry alone there is obviously not so much freedom to ‘interpolate’ smoothly between these values.\
Next, moving to a higher Brillouin zone makes it possible to enhance the intensity of the ZRS, as can be seen from Tables \[table1\] and \[table2\]. Table \[table1\] shows the ratio $I/I_0$ of the intensity obtainable by measuring the ZRS at $(\frac{\pi}{2},\frac{\pi}{2})$ at the ‘original position’ (there the polarization has to be perpendicular to (1,1) by symmetry) and the intensity that can be obtained by measuring in a higher zone with optimized polarization, both within the CuO$_2$ plane and, for later reference, also out of plane.
The same information is given for $(\pi,0)$ in Table \[table2\]. Obviously, an enhancement by a factor of $2$ or more can be achieved by proper choice of the experimental conditions. In view of the large ‘background’ in the experimental spectra, even such a moderate enhancement may be quite important.\
We note that the physical origin of the variation of intensity with $\Phi_E$ and order of the Brillouin zone
is the interference between the processes of creating a photohole in $p_x$-like and $p_y$-like $O$ $2p\sigma$ orbitals. By changing $\Phi_E$ the relative intensity and phase between photoholes in the two types of orbitals can be tuned. Thus, one can always find an angle $\Phi_{E,max}$ where the interference between these two types of orbital is maximally constructive. This angle depends on the relative phase between the two orbitals in the wave function and thus is specific for the state in question. Next, we study a quite different effect, namely what happens if we tilt the polarization vector $\bbox{\epsilon}$ out of the CuO$_2$ plane. To illustrate the usefulness of this, we again consider a momentum along the high symmetry line $(0,0)\rightarrow (\pi,0)$ and assume that the polarization vector $\bbox{\epsilon}$ is within the $x$-$z$ plane. It follows from symmetry considerations that any component of the light perpendicular to this plane (‘s-polarization’) cannot excite any ZRS-derived states. In other words we assume that $\Phi_E = 0$ but that $\Theta_E$ is variable. The contribution from the bonding combination of $O$ $2p\sigma$ orbitals then becomes
$$\begin{aligned}
m_{ZRS}&=& -i \xi_1 \Bigg[
\left(
\frac{1}{\sqrt{3}} \tilde R_{01}s
+\frac{1}{\sqrt{15}} \tilde R_{21}d_{3z^2-r^2}
-\frac{1}{\sqrt{5}} \tilde R_{21}d_{x^2-y^2}
\right) \sin(\Theta)
+\frac{1}{\sqrt{5}} \tilde R_{21}d_{xz}\cos(\Theta)
\Bigg] \cr
&&+i\xi_2 \Bigg[
\left(
-\frac{1}{\sqrt{5}} \tilde R_{12} p_x
-\frac{1}{\sqrt{70}} \tilde R_{32} f_{x\left( 5z^2-1\right)}
+\sqrt{\frac{3}{70}} \tilde R_{32} f_{ x^3-3xy^2}
\right)\sin(\Theta)
+\frac{1}{\sqrt{7}} \tilde R_{32} f_{z\left( x^2-y^2\right)} \cos(\Theta)
\Bigg], \cr\end{aligned}$$
where $\xi_1=\alpha'^* \alpha+\frac{\beta'^* \beta}{\sqrt{2}}$ and $\xi_2=\gamma'^* \beta+\frac{\beta'^* \alpha}{\sqrt{2}}$. Tilting the electric field vector out of the plane thus admixes into the radiation characteristics the $d_{xz}$ harmonic, which has its maximum intensity at an angle of $45^o$ with respect to the CuO$_2$ plane. Clearly, this enhances the intensity at intermediate $\Theta_k$, particularly so if constructive interference with the $d_{x^2-y^2}$ and $d_{3z^2-r^2}$ harmonics (which are excited by the $x$-component of $\bbox{\epsilon}$) occurs. Since the $d_{xz}$ on one hand and the $d_{x^2-y^2}$ and $d_{3z^2-r^2}$ on the other have opposite parity under reflection by the $y-z$ plane, constructive interference for some momentum $(k,0)$ automatically implies destructive interference for momentum $(-k,0)$. Tilting the electric field out of the plane thus amounts to ‘focusing’ the photoelectrons towards the detector. This effect is illustrated in Figure \[focus2\], which shows the angular variation within the $(x-z)$ plane of the photocurrent radiated by a ZRS.
Figure \[thetascan\] then shows the intensity as a function of $\Theta_E$ for various momenta along $(0,0)\rightarrow (\pi,0)$. Remarkably enough, the enhancement of current towards the detector may be up to a factor of $3$ as compared to polarization in the plane. Exploiting this effect could enhance the photoemission intensity considerably in the region around $(\pi,0)$ - which might be important, because this is precisely the ‘controversial’ region in $\bbox{k}$-space which makes the difference between a hole-like[@ShenDessau; @Golden; @Fretwell; @Mesot] and an electron-like[@Dessau; @Feng; @Gromko; @Bogdanov] Fermi surface in Bi2212 and where bilayer-splitting[@Armitage; @Chuang] should be observable. One [*caveat*]{} is the fact that for a polarization which is not in the plane one has to compute the actual field direction by use of the Fresnel formulae[@Matzdorf]. The value of the dielectric constant, however, which has to be chosen for such a calculation, is very close to $1$. Tables I and II also give the optimal angles $\Theta_E$ and $\Phi_E$ for observing $(\frac{\pi}{2},\frac{\pi}{2})$ and $(\pi,0)$ in higher Brillouin zones. Obviously, by choosing the right zone and the right polarization a considerable enhancement of the intensity can be achieved.
Surface resonances and the energy dependence of the intensity
=============================================================
We have seen in the preceding sections that the orbitals from which the ZRS is built emit photoelectrons predominantly at grazing angles with respect to the CuO$_2$ plane (see Figure \[char\]). In most cases electrons emitted with momentum $(\bbox{k}_{\|}+\bbox{G}_{\|},k_\perp')$ will be simply ‘lost’ because they will not reach a detector positioned to collect electrons with momentum $(\bbox{k}_{\|},k_\perp)$ (it might happen that the photoelectron ‘gets rid of its $\bbox{G}_{\|}$’ by Bragg-scattering at the BiO surface layer - a possibility which we are neglecting here). It may happen, however, that the motion with momentum $\bbox{k}_{\|}+\bbox{G}_{\|}$ parallel to the surface ‘consumes so much energy’ that there is hardly any energy left for the motion perpendicular to the surface. This will happen whenever the kinetic energy $T$ is just above the so-called emergence condition for the reciprocal lattice vector $\bbox{G}_{\|}$: $$T = \frac{\hbar^2}{2m}(\bbox{k}_{\|} + \bbox{G}_{\|})^2.
\label{emergence}$$ In other words: the kinetic energy is ‘just about sufficient’ to create a free electron state with 3D momentum $(\bbox{k}_{\|} + \bbox{G},0)$. In this case the energy available for motion perpendicular to the surface is not sufficient for the photoelectron to surmount the energy barrier at the surface and it will be reflected back into the solid.\
On the other hand, if the ‘perpendicular energy’ is below the energy of the Hubbard gap as well, the electron cannot re-enter the solid either, because there are no single-particle states available in the solid with the proper energy. The situation thus is quite analogous to the case of the so-called Shockley state seen on the Cu surface: a combination of surface potential and a gap in the single-particle DOS causes the electron to be trapped at the surface of the solid. For an extensive review of the properties of such so-called surface resonances see Ref.[@McRae].\
The existence of such surface resonances also in a cuprate-related material has been established by Pothuizen[@hans_thesis]. In an extensive EELS study of Sr$_2$CuCl$_2$O$_2$, Pothuizen could in fact identify not just one, but a total of $3$ such states (in some cases with clearly resolved dispersion) in the energy window $0-30\;eV$. His results show very clearly that such surface states do exist in Sr$_2$CuCl$_2$O$_2$ - whether this holds true for other cuprate materials has not been established yet, but for the moment we will take their existence for granted.\
To proceed in at least a semiquantitative way we use a very simple approximation[@McRae]. We decompose the potential felt by an electron at the surface as $$\begin{aligned}
V(\bbox{r}) &=& V_{av}(z) + V_1(\bbox{r}),\\
V_{av}(z) &=& \frac{1}{a^2}\int_{0}^a dx \int_{0}^a dy\; V(x,y,z),\\
V_1(\bbox{r}) &=& V(\bbox{r}) - V_{av}(z),\end{aligned}$$ ($a$ denotes the planar lattice constant) and assume that $V_1$ can be treated as a perturbation. The eigenstates of an electron in the potential $ V_{av}(z)$ can be factorized: $$\Psi_{\bbox{G}_{\|},\mu}(\bbox{r}) = e^{i (\bbox{k}_{\|} + \bbox{G}_{\|})\cdot
\bbox{r}}\; \Psi_{\mu}(z)$$ with corresponding energy $$E(\bbox{G}_{\|}) = E_{\mu} +
\frac{ \hbar^2 (\bbox{k}_{\|} + \bbox{G}_{\|})^2}{2m}.$$ A surface resonance would correspond to a $\Psi_{\mu}(z)$ which is localized around the surface.\
Let us now consider the subspace of eigenstates of $V_{av}(z)$ which comprises\
a) the (discrete) surface resonance state $\Psi_{\bbox{G}_{\|},\mu}$ and\
b) the continuum of states $\Psi(\bbox{k}_{\|},k_\perp)$ which evolve into plane waves with momentum $(\bbox{k}_{\|}, k_\perp)$ asymptotically far away from the surface. The periodic surface potential (i.e. the term $V_1$) then provides a mechanism for mixing $\Psi_{\bbox{G}_{\|},\mu}$ and the continuum. Moreover, dipole transitions of an electron from the valence band into both a continuum state and into the surface resonance (whereby the latter alternative is possibly even more probable due to the special radiation characteristics of a ZRS) are possible. We thus have all ingredients for the standard Fano-type resonance (see Figure \[fano\]) and we would thus expect a pronounced peak
in an intensity-versus-photon-energy curve whenever the kinetic energy of the photoelectrons approximately matches the energy of a surface resonance. Neglecting the (presumably small) energy $E_{\mu}$ this will happen when the kinetic energy $T$ obeys the emergence condition (\[emergence\]) for some $\bbox{G}_{\|}$.\
A strong oscillation of the intensity of the ZRS-derived first ionization states at $(\pi/2,\pi/2)$ and $(0.7\pi,0)$ in Sr$_2$CuCl$_2$O$_2$ as a function of photon energy has indeed been found recently by Dürr [*et al.*]{}[@duerr]. Their data show a total of four maxima of intensity in the photon-energy range $10-70\;eV$. Much unlike the absorption cross section oscillations seen in EXAFS, these maxima are separated by near-zeroes of the intensity for certain photon energies.\
One interpretation which might come to mind would be interference between the directly emitted electron wave and partial waves which have been reflected from neighboring atoms, similar to the variations in absorption cross section seen in EXAFS. If we neglect the energy dependence of the scattering phase shift at the hypothetical scatterers, the difference $d$ in pathlength between the interfering partial waves would then obey $$d(k^{max}_{\nu+1} - k^{max}_{\nu})=2\pi$$ where $k^{max}_{\nu}$ is the free electron wavevector at the $\nu^{th}$ maximum of intensity. From the four maxima observed by Dürr [*et al.*]{} at $(0.7\pi,0)$, one obtains three estimates for $d$: $10.6$, $12.6$, $9.5$ $\AA$. This is in any case much longer than the Cu-O bond length of $2$ $\AA$. The only ‘natural length’ in Sr$_2$CuCl$_2$O$_2$ which would give a comparable distance is the distance between the two inequivalent CuO$_2$ planes, which is $7$ $\AA$. One might therefore be tempted to explain the oscillation as being due to electrons from the topmost CuO$_2$ plane being reflected at the first CuO$_2$ plane below. This would give a difference in path length of approximately $14$ $\AA$; the discrepancy with the values given above could possibly be explained by an energy dependence of the phase shift upon reflection. On the other hand it is quite obvious that the reflected wave, having to travel a total of $14$ $\AA$ through the interior of the solid and having undergone one reflection, would have a considerably smaller amplitude than the primary wave. In this picture it is then hard to explain why the intensity at the minima is so close to zero. We therefore do not believe that interference is the correct explanation of the strong oscillations.\
Let us assume on the other hand, that the maxima are due to Fano-resonances of the type discussed above. As already mentioned above, we would expect a surface resonance state to form whenever the emergence condition (\[emergence\]) is fulfilled approximately for some reciprocal lattice vector $\bbox{G}_{\|}$.
As a rough approximation, we moreover assume that the probability for the dipole transition from the CuO$_2$ plane into the surface resonance state is proportional to the ‘transition matrix element’ $m_{ZRS}$ into a plane wave state with momentum $(\bbox{k}_{\|}+\bbox{G}_{\|}, k_\perp=0)$. Neglecting the form of the Fano lineshape (which would be hard to compute anyway because we do not know the Fourier coefficients of the potential $V_1$) we then expect the energy dependence of the intensity to be roughly given by $$I(h\nu) \approx \sum_{\bbox{G}_{\|}} |m_{ZRS}(\bbox{k}_{\|}+\bbox{G}_{\|})|^2
\delta(T - \frac{\hbar^2}{2m}(\bbox{k}_{\|} + \bbox{G}_{\|})^2).
\label{theoint}$$ Actually, injection into the surface state should be possible if the kinetic energy is in a narrow window above the emergence condition, whence for simplicity we replace the $\delta$-functions by Lorentzians. Figure \[duerrcomp\] then compares this simple estimate to the data of Dürr [*et al.*]{}[@duerr]. Thereby we have again assumed that $T=h\nu - 8\;eV$. The agreement with experiment is satisfactory, given the rather crude nature of our estimate for the intensity. Given the fact that surface resonances in Sr$_2$CuCl$_2$O$_2$ have been established conclusively by the EELS-work of Pothuizen[@hans_thesis] we believe that the present interpretation of the intensity variations is the most plausible one. The question of whether such surface resonance states exist also in metallic Bi2212 must be clarified by experiment.
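For concreteness, the following minimal numerical sketch evaluates Eq. (\[theoint\]) with the $\delta$-functions replaced by Lorentzians, again assuming $T=h\nu-8\;eV$. The weights standing in for $|m_{ZRS}|^2$, the Lorentzian width and the planar lattice constant are illustrative assumptions rather than fitted values.

```python
import numpy as np

# Minimal sketch of Eq. (theoint): intensity vs. photon energy from
# Lorentzian-broadened emergence conditions.  The weights w_G stand in for
# |m_ZRS(k_par + G_par)|^2 and, like gamma and a, are placeholder values.
HBAR2_2M = 3.81   # hbar^2/(2 m_e) in eV*Angstrom^2
a = 3.97          # assumed planar lattice constant (Angstrom)

def emergence_energy(k_par, G_par):
    """Kinetic energy (eV) at which the emergence condition is met."""
    q = np.asarray(k_par) + np.asarray(G_par)
    return HBAR2_2M * np.dot(q, q)

def intensity(hnu, k_par, G_list, weights, gamma=1.0, work=8.0):
    """Eq. (theoint) with each delta-function replaced by a Lorentzian."""
    T = hnu - work                          # T = h*nu - 8 eV, as in the text
    return sum(w * gamma / ((T - emergence_energy(k_par, G)) ** 2 + gamma ** 2)
               for G, w in zip(G_list, weights))

k_par = np.array([0.7 * np.pi / a, 0.0])    # the (0.7*pi, 0) point
G_list = [2 * np.pi / a * np.array(g)
          for g in [(0, 0), (-1, 0), (0, -1), (-1, -1), (1, 0)]]
weights = [1.0, 0.8, 0.8, 0.5, 0.3]         # placeholder matrix elements

for hnu in np.arange(10.0, 71.0, 10.0):
    print(f"hnu = {hnu:5.1f} eV   I = {intensity(hnu, k_par, G_list, weights):.3f}")
```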
Apparent Fermi surfaces
=======================
Let us now consider what happens in the above picture if we vary the in-plane momentum $\bbox{k}_{\|}$. It is important to notice from the outset that the group velocity of the surface state, ${\bf v}_{\|}= \frac{\hbar}{m}(\bbox{k}_{\|}+\bbox{G}_{\|})$, is much larger than that of the valence band. Thus, if we shift the valence band ‘upwards’ by the photon energy $h\nu$, this replica may intersect the surface resonance dispersion at some momentum (see Figure \[pseudo\]). Note that here the binding energy of the O2p level must be incorporated into the valence band energy. For each $\bbox{k}_{\|}$ we now assume that the Fano-type interference between the surface resonance state and the continuum occurs. The maximum of the resonance curve will thereby roughly follow the dispersion of the surface state. In an ARPES experiment we then probe the intensity along the Fano-curve at an energy which corresponds to the shifted valence band (see Figure \[pseudo\]). Obviously this leads to a drastic variation of the observed photoelectron intensity: at the $\bbox{k}_{\|}$ labeled 1, the point where the resonance curve is probed is on the ascending side of the resonance, whence we observe a moderately large
intensity. At the second momentum, we are right at the maximum of the respective Fano-resonance, so the intensity will be high. At the momentum labeled 3, however, we are already at the minimum of the Fano-resonance, where the intensity is approximately zero. We thus would observe a dramatic drop in the observed intensity over a relatively small distance $\Delta k$ in the $\bbox{k}_{\|}$-plane. More precisely, the sharpness of the drop is determined by the width $W$ of the Fano-curve and the difference $\Delta v_g$ in group velocity between the valence band and the surface resonance state: $\Delta k\approx W/\Delta v_g$.\
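A rough order-of-magnitude estimate of this width, with an assumed Fano width and valence-band velocity, illustrates how sharp such a drop can be (all numbers below are illustrative assumptions):

```python
import numpy as np

# Order-of-magnitude estimate of Delta_k ~ W / Delta_v_g.  The Fano width W
# and the valence-band velocity are assumed values, chosen only to illustrate
# the scale of the momentum width of the intensity drop.
HBAR2_2M = 3.81                 # hbar^2/(2 m_e) in eV*Angstrom^2
T = 26.0                        # kinetic energy in eV (e.g. hnu = 34 eV, T = hnu - 8 eV)
k = np.sqrt(T / HBAR2_2M)       # free-electron momentum, 1/Angstrom
hv_surf = 2.0 * HBAR2_2M * k    # hbar*v = dT/dk for the surface resonance (eV*Angstrom)
hv_band = 1.0                   # assumed valence-band velocity (eV*Angstrom)
W = 1.0                         # assumed Fano width (eV)

print(f"Delta_k ~ {W / (hv_surf - hv_band):.3f} 1/Angstrom  (k = {k:.2f} 1/Angstrom)")
```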
To be more quantitative, we present a simple model calculation of the ARPES spectra to be expected in the presence of a free-electron-like surface resonance. We label the surface resonance state as $|0\rangle$ and the continuum of LEED states by their ‘perpendicular’ momentum: $|k_\perp\rangle$. The Hamiltonian then reads $$\begin{aligned}
H_0 &=& |0\rangle E(\bbox{k}_{\|},\bbox{G}_{\|}) \langle 0|
+ \sum_{k_\perp} |k_\perp\rangle \frac{\hbar^2 k_\perp^2}{2m}
\langle k_\perp |\\
H_1 &=& \sum_{k_\perp} ( |k_\perp\rangle \frac{V}{\sqrt{L}}
\langle 0| + H.c.),\nonumber \\
E(\bbox{k}_{\|},\bbox{G}_{\|}) &=& \frac{ \hbar^2 (\bbox{k}_{\|} + \bbox{G}_{\|})^2}{2m}.\end{aligned}$$ We assume that the vacuum consists of a volume spanned by $N\times N$ planar unit cells and thickness $L$ perpendicular to the surface. We write the mixing matrix element as $ V/\sqrt{L}$ so as to explicitly isolate the scaling of the matrix elements with $L$. $V$ should be independent of $L$ and since we assume it to be independent of $k_\perp$ we may also assume it to be real.\
We then obtain the resolvent operator $$\begin{aligned}
R_{0,0}(\omega) &=& \langle 0| (\omega -i0^+ - H)^{-1} |0\rangle \nonumber \\
&=& \frac{1}{\hbar \tilde{\omega} - E_0(\bbox{k}_{\|}) -
\Sigma(\tilde{\omega})} \nonumber \\
\hbar\tilde{\omega} &=& \hbar \omega - \frac{\hbar^2}{2m}\bbox{k}_{\|}^2 \\
\Sigma(\omega) &=& i \frac{V^2}{4\pi^2}
\sqrt{\frac{2m}{\hbar^3 \omega}} \nonumber \\
&=& i\;v_0 \sqrt{ \frac{\omega_0}{\omega}} \end{aligned}$$ Here $v_0$ and $\omega_0$ have the dimension of energy and frequency, respectively; only one of them can be chosen independently, and we choose $\omega_0 = 1\;eV/\hbar$, whence $v_0$ is a measure of the strength of the mixing between the surface resonance and the continuum.\
A possible final state obtained by dipole transition of an electron from the valence band then is $$|\Phi\rangle = T_0\; |0\rangle + \frac{T_1}{\sqrt{L}}\; \sum_{k_\perp} |k_\perp\rangle$$ where we have again explicitly isolated the scaling of the transition matrix elements with $L$. Both matrix elements are assumed to be real. The measured intensity then becomes $$\begin{aligned}
A(\omega) &=& \frac{1}{\pi} \Im \;R(\omega)\nonumber \\
R(\omega) &=& \langle \Phi| (\omega -i0^+ - H)^{-1} |\Phi\rangle
\nonumber \\
&=&
\frac{ \left(T_0 + \tilde{T}_1\Sigma(\tilde{\omega})\right)^2}
{\hbar \tilde{\omega} -
E(\bbox{k}_{\|},\bbox{G}_{\|}) -
\Sigma(\tilde{\omega})} + \tilde{T}_1^2 \Sigma(\tilde{\omega})\end{aligned}$$ where we have introduced $\tilde{T}_1= T_1/V$. Taking into account the dispersion of the initial state as well as its finite lifetime (which we describe by a Lorentzian broadening $\Gamma$) we approximate the measured spectrum for some momentum $\bbox{k}_{\|}$ as $$I(\omega) = \frac{\Gamma}{(\omega- \epsilon(\bbox{k}_{\|}))^2 + \Gamma^2}\cdot
A(\omega+h\nu)
\label{arpesspec}$$ A simulated ARPES spectrum is then shown in Figure \[fanodisp\]. For simplicity we have chosen $\epsilon(\bbox{k}_{\|})$ to be the simple SDW-like dispersion $$\begin{aligned}
\epsilon(\bbox{k}_{\|}) &=& -t_{eff} (\cos(k_x) + \cos(k_y))^2 \end{aligned}$$ with $t_{eff}=0.25\;eV$. It has been assumed that this band is completely filled, which means we would not have any true Fermi surface at all. Despite this, the ARPES spectrum, which is simulated using (\[arpesspec\]), shows a sharp drop in intensity at the intersection with the surface state dispersion $E(\bbox{k}_{\|},\bbox{G}_{\|})$ shifted downward by the photon energy. Here $\bbox{G}_{\|}=(0,2\pi)$.
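The following short sketch implements this model spectrum: the Fano spectral function $A(\omega)=\frac{1}{\pi}\Im R(\omega)$ with $\Sigma(\tilde{\omega})=i\,v_0\sqrt{\omega_0/\tilde{\omega}}$, multiplied by the Lorentzian initial state of (\[arpesspec\]). The parameter values and the lattice constant used here are illustrative choices and not necessarily those used for the figures.

```python
import numpy as np

# Sketch of the model ARPES intensity, Eq. (arpesspec), along (0,0) -> (0,pi).
# hbar = 1, energies in eV.  v0, T0, T1, Gamma, hnu and a are illustrative.
HBAR2_2M = 3.81                 # hbar^2/(2 m_e), eV*Angstrom^2
a = 2.82                        # assumed planar lattice constant (Angstrom)
omega0 = 1.0                    # reference frequency (times hbar), eV
v0, T0, T1t = 0.5, 1.0, 0.3     # mixing strength and transition amplitudes (T1t = T1/V)
Gamma, t_eff, hnu = 0.05, 0.25, 22.0
G = np.array([0.0, 2 * np.pi])  # surface-resonance reciprocal lattice vector

def eps_valence(kx, ky):
    return -t_eff * (np.cos(kx) + np.cos(ky)) ** 2

def A_fano(omega, kx, ky):
    """Fano spectral function A(omega) = (1/pi) Im R(omega)."""
    wt = omega - HBAR2_2M * (kx**2 + ky**2) / a**2           # hbar*omega_tilde
    E_sr = HBAR2_2M * ((kx + G[0])**2 + (ky + G[1])**2) / a**2
    Sigma = 1j * v0 * np.sqrt(omega0 / (wt + 0j))            # Sigma = i v0 sqrt(w0/wt)
    R = (T0 + T1t * Sigma) ** 2 / (wt - E_sr - Sigma) + T1t**2 * Sigma
    return R.imag / np.pi

def I_arpes(omega, kx, ky):
    """Lorentzian-broadened initial state times the Fano curve, Eq. (arpesspec)."""
    lor = Gamma / ((omega - eps_valence(kx, ky)) ** 2 + Gamma ** 2)
    return lor * A_fano(omega + hnu, kx, ky)

kys = np.linspace(0.0, np.pi, 61)
omegas = np.linspace(-1.5, 0.1, 81)
Imap = np.array([[I_arpes(w, 0.0, ky) for ky in kys] for w in omegas])
print("intensity map shape:", Imap.shape, " max:", round(float(Imap.max()), 3))
```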
The photon energy in the above example has been chosen deliberately such that the surface state dispersion cuts through the quasiparticle dispersion near the top of the latter, so as to produce the impression of a Fermi surface. Usually this would happen only by coincidence for very few photon energies. At this point it has to be remembered, however, that the dispersion of the valence band in cuprate superconductors seems to show an extended region with practically no dispersion around $(\pi,0)$. This band portion moreover is energetically immediately below the chemical potential. The almost complete lack of dispersion in this region then makes it necessary to rely mostly on ‘intensity drops’ of the valence band in assigning a Fermi surface, and it is quite obvious that whenever the surface state dispersion, shifted downward by the photon energy, happens to cut through the flat-band region for the photon energy in question, the resulting drop in intensity will be almost indistinguishable from a true Fermi surface. To illustrate this effect we have calculated the intensity within a window of $10\;meV$ below $E_F$ for the ‘standard’ dispersion with an extended van-Hove singularity[@Radtke] $$\begin{aligned}
\epsilon_{QP}(\bbox{k}_{\|}) &=& c
-\frac{t}{2} \left( \cos(k_x) + \cos(k_y) \right)
\nonumber \\
&+& t' \cos(k_x)\cos(k_y) \nonumber \\
&+& \frac{t''}{2} \left( \cos(2k_x) + \cos(2k_y) \right)\end{aligned}$$ which (for $t=0.5\;eV$, $t'=0.15\;eV$ and $t''=-0.05\;eV$) produces the well-known ‘$22\;eV$-Fermi surface’, together with the ‘flat bands’ around $(\pi,0)$. The constant $c$ incorporates a shift due to the binding energy of the O 2p orbitals. Figure \[denplo\] then shows the integrated intensity in a window of $10\;meV$ below the Fermi energy - a plot which is by now a standard method to discuss Fermi surfaces. In Figure \[denplo\]a the Fano-curve $A(\omega)$ in (\[arpesspec\]) is replaced by unity, and we see the expected spectral weight map in which the Fermi surface can be clearly identified.
This Fermi surface is shown in Figure \[denplo\]b. In Figure \[denplo\]c, on the other hand, we have included the Fano curve $A(\omega)$ for the surface state with $\bbox{G}_{\|}=(-2\pi,-2\pi)$ (assuming a free 2D-electron dispersion with lattice constant $2.82\;\AA$) and a kinetic energy of $T=33\;eV$. The corresponding constant energy-contour $\frac{4\pi^2\hbar^2}{2ma^2}( \bbox{k}+\bbox{G})^2 =33\;eV$ is also shown in Figure \[circles\]. One can see quite clearly that the contour attenuates the true Fermi surface on its ‘backside’ and strongly enhances the intensity at its ‘frontside’, thus creating the rather perfect impression of a Fermi surface arc which intersects the $(1,0)$ direction at approximately $(0.8\pi,0)$. On the other hand in our model the true Fermi surface is the one seen in Figure \[denplo\]a. The combination of Fano resonance and flat band portion near $(\pi,0)$ thus produces an ‘apparent Fermi surface’ which is entirely artificial.\
To compare with experimental data in more detail, we recall that upon neglecting the dispersion of the valence band state as compared to that of the surface resonance, the momenta in the $\bbox{k}_{\|}$-plane where intersections such as the one in Figure \[pseudo\] occur have to obey the equation $$T = \frac{ \hbar^2 (\bbox{k}_{\|} + \bbox{G})^2}{2m},$$ where $T$ denotes the kinetic energy of the ejected electron. In other words: the resonant enhancement of the ARPES intensity occurs along a constant energy contour of the surface resonance dispersion. If we stick to our free-electron approximation, these are simply circles in the $\bbox{k}_{\|}$-plane centered on $-\bbox{G}$. Figure \[circles\] then shows these contours for two different photon energies, namely $22\;eV$ and $34\;eV$. Thereby we have again assumed that $T=h\nu-8\;eV$. Also shown is the standard ‘Fermi surface’, determined at $22\;eV$. It is then obvious that the ‘$22\;eV$-Fermi surface’ is always close to some portion of a surface resonance contour. This would imply that for this photon energy most portions of the Fermi surface could be enhanced - it might also be taken to suggest, however, that some Fermi surface portions near $(\pi,0)$ are actually artificial. It is hard to give a more detailed discussion, because even slight deviations from the free electron dispersion for the surface resonance will have a major impact on the energy contours. For $34\;eV$, on the other hand, the Fermi surface obviously is ‘cut off’ near $(\pi,0)$ by the contour for ${\bf G}=2\pi\;(-2,0)$. The Fermi surface seen at $34\;eV$ then might be obtained by following the true Fermi contour (dashed line) until the intersection with the $(-2,0)$-contour, and then following the latter (where the ARPES intensity is high due to the resonance effect and the fact that there are states immediately below $\mu$ due to the flat band around $(\pi,0)$). In this way one would obtain a ‘Fermi contour’ which is quite similar to the one actually observed at $34\;eV$. For even higher photon energy there are too many $\bbox{G}$’s which contribute their own circles, so that one cannot give a meaningful discussion.
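As a simple check of this geometry, the free-electron contours can be generated directly; the sketch below prints, for an assumed in-plane lattice constant, the radius of each circle and where the ${\bf G}=2\pi\;(-2,0)$ contour crosses the $k_x$ axis. The numbers are only indicative, since the lattice constant and the free-electron assumption are approximations.

```python
import numpy as np

# Surface-resonance constant-energy contours: circles of radius sqrt(2mT)/hbar
# centred on -G, with T = hnu - 8 eV.  The lattice constant is an assumed value.
HBAR2_2M = 3.81     # hbar^2/(2 m_e), eV*Angstrom^2
a = 3.83            # assumed in-plane lattice constant (Angstrom)

def radius(hnu, work=8.0):
    """Contour radius in units of pi/a."""
    return np.sqrt((hnu - work) / HBAR2_2M) / (np.pi / a)

for hnu in (22.0, 34.0):
    r = radius(hnu)
    # the G = 2*pi/a*(-2,0) contour is centred at kx = -4 (units of pi/a);
    # its crossing of the kx axis closest to the first zone lies at -4 + r
    print(f"hnu = {hnu:.0f} eV: r = {r:.2f} pi/a, "
          f"(-2,0)-contour crosses the kx axis at {-4.0 + r:+.2f} pi/a")
```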
The question of which of the different Fermi surface portions observed in Bi2212 are artificial and which are real clearly needs a detailed experimental study. A very simple way would be to vary the photon energy in small steps of (e.g.) $1\;eV$, in which case one should observe a continuous drift of the ‘Fermi surface’ for those parts which are artificial. Unfortunately the present understanding of surface resonances in cuprate materials is much too rudimentary to make any theoretical prediction. Not even knowing any details of their dispersion, we have to content ourselves with the very much oversimplified model calculations outlined above to show what might happen. Even these simple model calculations do show rather clearly, however, that simply plotting intensity maps near $E_F$ or identifying ‘Fermi surface crossings’ near $(\pi,0)$ by drops of spectral weight is not really an adequate means to clarify this issue.
Conclusion
==========
In summary, we have presented a discussion of ARPES intensities from the CuO$_2$ plane. Thereby we could actually address only a few (but hopefully the most important) aspects of the problem.\
To begin with, the geometry of the orbitals, which form the first ionization states of the CuO$_2$ plane, leads to a pronounced anisotropy of the photoelectron current. This makes itself felt in a strong dependence of the intensity on the polarization of the incoming light. This may be exploited to enhance the intensity seen in an ARPES experiment by choosing appropriate photon polarization or going to a higher Brillouin zone. We have presented a simple model to guess ‘optimized’ values of the polarization and the Brillouin zone to obtain maximum intensity for a given ${\bf k}$-point.\
Moreover there is a strong preference for a Zhang-Rice singlet to emit photoelectrons at small angles relative to the plane. This in turn may lead to an injection of the photoelectrons into states located at the surface of the sample, so-called surface resonances. Such surface resonances are well-established in at least one cuprate-related material, namely Sr$_2$CuCl$_2$O$_2$. We have proposed to explain the pronounced photon-energy dependence of the ARPES intensities in this material by these surface resonances. Next, we have presented a simple model calculation which shows how such surface resonances may lead to apparent Fermi surfaces in the flat-band region around $(\pi,0)$. This may be one explanation for the apparently different Fermi surface topology seen in Bi2212 at $22\;eV$ and at $34\;eV$, possibly also in an interplay with bilayer-splitting[@Armitage; @Chuang]. In any case the existence or non-existence of surface-resonances should be clarified experimentally, since the identification of any major Fermi surface portion near $(\pi,0)$ as being ‘artificial’ may have major implications for our understanding of cuprate materials.\
Acknowledgment: Instructive discussions with V. Borisenko, J. Fink, M. Golden and A. Fleszar are most gratefully acknowledged. This work is supported by the projects DFG HA1537/20-2, KONWIHR OOPCV and BMBF 05SB8WWA1. Support by computational facilities of HLRS Stuttgart and LRZ Munich is acknowledged.
Z.-X. Shen and D. S. Dessau, Phys. Rep. [**253**]{}, 1 (1995).
B. O. Wells, Z.-X. Shen, A. Matsuura, D. King, M. Kastner, M. Greven, and R. J. Birgenau, Phys. Rev. Lett. [**74**]{}, 964 (1995).
F. Ronning, C. Kim, D.L. Feng, D.S. Marshall, A.G. Loeser, L.L. Miller, J.N. Eckstein, I. Bozovic, and Z.-X. Shen, Science [**282**]{}, 2067 (1998).
S. V. Borisenko, M. S. Golden, S. Legner, T. Pichler, C. Duerr, M. Knupfer, J. Fink, G. Yang, S. Abell, and H. Berger, Phys. Rev. Lett. [**83**]{}, 3717 (1999).
H.M. Fretwell, A. Kaminski, J. Mesot, J.C. Campuzano, M.R. Norman, M. Randeria, T. Sato, R. Gatt, T. Takahashi, and K. Kadowaki, Phys. Rev. Lett. [**84**]{}, 4449 (2000).
J. Mesot [*et al.*]{}, Phys. Rev. B [**63**]{}, 224516-1 (2001).
Y. D. Chuang, A. D. Gromko, D. S. Dessau, Y. Aiura, Y. Yamaguchi, K. Oka, A. J. Arko, J. Joyce, H. Eisaki, S.I. Uchida, K. Nakamura, and Yoichi Ando, Phys. Rev. Lett. [**83**]{}, 3717 (1999).
D.L. Feng, W.J. Zheng, K.M. Shen, D.H. Lu, F. Ronning, J.-I. Shimoyama, K. Kishio, G. Gu, D. Van der Marel, and Z.-X. Shen, cond-mat/9908056.
A.D. Gromko, Y.-D. Chuang, D.S. Dessau, K. Nakamura, and Yoichi Ando, cond-mat/0003017.
P. V. Bogdanov, A. Lanzara, X. J. Zhou, S. A. Kellar, D.L. Feng, E. D. Lu, J.-I. Shimoyama, K. Kishio, Z. Hussain, and Z.-X. Shen, cond-mat/0005394. Phys. Rev. B [**63**]{}, 14505 (2000).
F. C. Zhang and T. M. Rice, Phys. Rev. B [**37**]{}, 3759 (1988).
T. Matsushita, S. Imada, H. Daimon, T. Okuda, K. Yamaguchi, H. Miyagi, and S. Suga, Phys. Rev. B [**56**]{}, 7687 (1997).
A. S. Moskvin, E. N. Kondrashov, and V. I. Cherepanov, cond-mat/0007470 (2001).
J. C. Slater, [*Quantum Theory of Atomic Structure*]{}, McGraw-Hill Book Company, New York Toronto London (1960); see in particular Appendix 20 of Volume II.
O. K. Andersen, A. I. Liechtenstein, O. Jepsen, and F. Paulsen, J. Phys. Chem. Solids [**56**]{}, 1573 (1995).
A. Bansil and M. Lindroos, Phys. Rev. Lett. [**83**]{}, 5154 (1999).
D. S[é]{}n[é]{}chal, D. Perez and M. Pioro-Ladri[è]{}re, Phys. Rev. Lett. [**84**]{}, 522 (2000).
B. K. Theo, [*EXAFS: Basic Principles and Data Analysis*]{}, Springer, Berlin (1986).
R. Manzke [*et al.*]{}, Phys. Rev. B [**63**]{}, 100504 (2001).
D. L. Feng, N. P. Armitage, D. H. Lu, A. Damascelli, J. P. Hu, P. Bogdanov, A. Lanzara, F. Ronning, K. M. Shen, H. Eisaki, C. Kim, Z. X. Shen, Phys. Rev. Lett. [**86**]{}, 5550 (2001).
Y.-D. Chuang, A. D. Gromko, A. V. Fedorov, Y. Aiura, K. Oka, Yoichi Ando, D. S. Dessau, cond-mat/0107002 (2001).
A. Gerlach, R. Matzdorf and A. Goldmann, Phys. Rev. B [**58**]{}, 10969 (1998).
E. G. McRae, Rev. Mod. Phys. [**51**]{}, 541 (1979).
J. J. M. Pothuizen, [*Electrons in and close to Correlated Systems*]{}, Thesis, Rijksuniversiteit Groningen (1998). The parts about the surface resonance are unpublished, but can be downloaded from http://www.ub.rug.nl/eldoc/dis/science/j.j.m.pothuizen/
R. J. Radtke and M. R. Norman, cond-mat/9404081.
J. P. Perdew and A. Zunger, Phys. Rev. B [**23**]{}, 5048 (1981).
Appendix I: Calculation of Radial Matrix Elements
================================================
The actual calculation of the radial matrix elements $R_{l,l'}$ proves to be one of the most crucial points when making contact with experimental results. Functional forms of effective potentials for the CuO$_2$ compound are not known. A simple approach with a hydrogen-like potential seems doubtful if one attempts to capture at least a certain amount of the physical reality of the photoemission process. A good starting point for an at least qualitative derivation of the matrix elements is the set of radial wave functions that can be obtained from density functional calculations, which include information about the electron-electron exchange correlation and the screening of the nuclear charge. The radial matrix elements used in this paper are calculated in the local-density approximation, where one solves the effective one-electron Schrödinger equation $$\label{eq:sic1}
\left(\frac{1}{2} \nabla^2 + v^{\alpha,\sigma}_{\mbox{eff}} \left( {\bf r} \right) \right) \psi_{\alpha,\sigma}
=\epsilon^{LDA}_{\alpha,\sigma}\psi_{\alpha,\sigma}\left( {\bf r}\right).$$ The potential $v^{\alpha,\sigma}_{\mbox{eff}} \left( {\bf r}\right)$ is a functional of the electron density, given by $$v^{\alpha,\sigma}_{\mbox{eff}} \left( {\bf r}\right)
=\left(
v\left( {\bf r}\right) + u\left([n]; {\bf r}\right)
+v^{\sigma}_{\mbox{xc}} \left(\left[ n_\uparrow,n_\downarrow \right];{\bf r} \right)
\right).$$ $v({\bf r})$ is the full nuclear potential and $u\left([n]; {\bf r}\right)$ the direct Coulomb-potential $\int d{\bf r}' n\left({\bf r}' \right)/\left| {\bf r}-{\bf r}' \right|$. The exchange-correlation functional $v^{\sigma}_{\mbox{xc}} \left(\left[ n_\uparrow,n_\downarrow \right];{\bf r} \right)$ is parameterized following Ref. [@perdew]. Equation (\[eq:sic1\]) is solved selfconsistently to convergence of the total energy, resulting in a set of energy eigenvalues $\epsilon_n$ and radial wave functions $R_{n,l}(r)$, which we used to calculate $$R_{ll'}(E)=\int dr r^3 R_{E,l}(r) R_{n,l}(r).$$ Since we assume emitted electrons to be approximated by plane waves at the detection point, additional information on the outgoing wave function is needed in the form of the phase shift relative to the asymptotic behaviour $$R^{asym}_{El}(r) \approx 2 \frac{\sin\left(kr-\frac{l\pi}{2}+\delta_l \right)}{r},
\qquad r \rightarrow \infty,$$ which determines the interference of the wave functions emitted by the O p$_{x,y}$ and Cu d$_{x^2-y^2}$ orbitals. The phase shift can easily be found by comparing the logarithmic derivatives $$\left(\frac{\partial}{\partial r}R\left( r \right) \right)/R\left( r \right)
=\left(\frac{\partial}{\partial r}R^{asym}_{El}\left( r \right) \right)/R^{asym}_{El}\left( r \right)$$ at a sufficiently large radius where $$\frac{\partial}{\partial r} v^{\alpha,\sigma}_{\mbox{eff}} \left( {\bf r} \right) \approx 0.$$
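A minimal numerical sketch of this matching procedure is given below; the continuum radial function is replaced by a toy free wave with a known shift, so that the recovery of $\delta_l$ from the logarithmic derivative can be checked.

```python
import numpy as np

# Sketch of the phase-shift matching described above.  R_num(r) stands for the
# numerically integrated continuum radial function R_{E l}(r); here a toy free
# wave with a known shift is used so the procedure can be verified.
HBAR2_2M = 3.81                    # eV*Angstrom^2
E, l = 20.0, 2                     # kinetic energy (eV) and partial wave
k = np.sqrt(E / HBAR2_2M)          # 1/Angstrom
delta_in = 0.4                     # 'unknown' phase shift of the toy function

r = np.linspace(8.0, 12.0, 2001)   # matching region, far outside the ion
R_num = 2.0 * np.sin(k * r - l * np.pi / 2 + delta_in) / r

i = len(r) // 2                    # matching radius r_m = r[i]
L_num = np.gradient(R_num, r)[i] / R_num[i]

# matching to R_asym = 2 sin(k r - l pi/2 + delta)/r gives
#   L_asym = k cot(k r - l pi/2 + delta) - 1/r,
# which can be inverted for delta (defined modulo pi):
delta = np.arctan(k / (L_num + 1.0 / r[i])) - (k * r[i] - l * np.pi / 2)
print(f"recovered delta_l = {np.mod(delta, np.pi):.3f}  (input {delta_in:.3f})")
```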
Appendix II
===========
Here we present in more detail the calculation of the vectors $\bbox{v}$. For the CEF initial states of interest, such as $p_x$, $p_y$ and $d_{x^2-y^2}$, the coefficients $c_{\alpha m'}$ take the form $$c_{\alpha m'}= \frac{\zeta}{\sqrt{2}}\;
( \delta_{m',\nu} \pm \delta_{m',-\nu}),$$ (see Table I) whence we find that $$\begin{aligned}
\sum_{m}\; d_{lm}\; Y_{lm}(\bbox{k}^0)
&=& \sqrt{\frac{4\pi}{3}}
\sum_{\mu=-1}^1\; Y_{1,\mu}^*(\bbox{\epsilon})\; A_{l,\mu}(\bbox{k}^0)
\nonumber \\
A_{l,\mu}(\bbox{k}^0) &=&
\frac{\zeta}{\sqrt{2}} \;
\left( Y_{l,\mu+\nu}(\bbox{k}^0)\; c^1(l, \mu+\nu; l', \nu) \pm
Y_{l,\mu-\nu}(\bbox{k}^0)\; c^1(l, \mu-\nu; l',-\nu) \right)
\label{prod}\end{aligned}$$ We now rewrite the scalar product $$\sqrt{\frac{4\pi}{3}}\sum_{\mu=-1}^1\; Y_{1,\mu}^*(\bbox{\epsilon})\; A_\mu^l
= \bbox{ \epsilon} \cdot \bbox{A}_l$$ and by use of the property $c^1(l,-m; l',-m') = c^1(l,m; l',m')$ we obtain the components of $\bbox{A}^l$: $$\begin{aligned}
A_{l,x} &=& \frac{1}{\sqrt{2}}
\;[\; c^1(l,\nu-1;l',\nu)\; \frac{\zeta}{\sqrt{2}}\;
(Y_{l,\nu-1} \mp Y_{l,-(\nu-1)})
- c^1(l,\nu+1;l',\nu)\; \frac{\zeta}{\sqrt{2}}\;
(Y_{l,\nu+1} \mp Y_{l,-(\nu+1)}) \; ],
\nonumber \\
A_{l,y} &=& \frac{i}{\sqrt{2}}
\; [\; c^1(l,\nu-1;l',\nu)\; \frac{\zeta}{\sqrt{2}}\;
(Y_{l,\nu-1} \pm Y_{l,-(\nu-1)})
+ c^1(l,\nu+1;l',\nu)\; \frac{\zeta}{\sqrt{2}}\;
(Y_{l,\nu+1} \pm Y_{l,-(\nu+1)}) \; ],
\nonumber \\
A_{l,z} &=& c^1(l\nu;l'\nu)\; \frac{\zeta}{\sqrt{2}}
(Y_{l,\nu} \pm Y_{l,-\nu}).\end{aligned}$$ Defining $\tilde{R}_{l,l'}(E) = \;e^{i\delta_{l}} R_{l,l'}(E)$ we now readily arrive at the following expressions for photocurrent: $$\begin{aligned}
{\bf j} &=& \frac{4N \hbar {\bf k}}{m}\; |\bbox{v} \cdot \bbox{\epsilon} |^2,
\nonumber \\
\bbox{v} &=& \sum_{l=l'\pm1} (-i)^{l+1}\;\tilde{R}_{l,l'}(E)\;\bbox{A}_l.\end{aligned}$$ Here the vector $\bbox{v}$ depends on the type of initial orbital. Straightforward algebra then yields the following expressions for the vectors $\bbox{v}$: $$\begin{aligned}
\bbox{v}(p_x) &=&
\sqrt{ \frac{1}{15} } \tilde{R}_{21} \left(
\begin{array}{c}
\sqrt{\frac{5}{4\pi}} \frac{\tilde{R}_{01}}{\tilde{R}_{21}} +
d_{3z^2-r^2}(\bbox{k}^0) - \sqrt{3} d_{x^2-y^2}(\bbox{k}^0) \\
-\sqrt{3} d_{xy}(\bbox{k}^0) \\
-\sqrt{3} d_{xz}(\bbox{k}^0)
\end{array} \right)
\nonumber \\
\bbox{v}(p_y) &=&
\sqrt{ \frac{1}{15} } \tilde{R}_{21} \left(
\begin{array}{c}
-\sqrt{3} d_{xy}(\bbox{k}^0) \\
\sqrt{\frac{5}{4\pi}}\frac{\tilde{R}_{01}}{\tilde{R}_{21}} +
d_{3z^2-r^2}(\bbox{k}^0) + \sqrt{3} d_{x^2-y^2}(\bbox{k}^0) \\
-\sqrt{3} d_{yz}(\bbox{k}^0)
\end{array} \right)
\nonumber \\
\bbox{v}(p_z) &=&
\sqrt{ \frac{1}{15} } \tilde{R}_{21} \left(
\begin{array}{c}
-\sqrt{3} d_{xz}(\bbox{k}^0) \\
-\sqrt{3} d_{yz}(\bbox{k}^0) \\
\sqrt{\frac{5}{4\pi}}\frac{\tilde{R}_{01}}{\tilde{R}_{21}} +
\sqrt{4} d_{3z^2-r^2}(\bbox{k}^0)
\end{array} \right)\end{aligned}$$ With the exception of the numerical prefactors, most of the above formulas could have been guessed from general principles: there are only transitions into $s$-like and $d$-like partial waves (i.e. the dipole selection rule $\Delta L = \pm 1$!) and acting e.g. with an electric field in $x$ direction onto a $p_x$ -orbital produces only $s$-like, $d_{x^2-y^2}$-like and $d_{3z^2-r^2}$-like partial waves (which have even parity under reflection by the $z-y$-plane), whereas acting with a field in $y$ direction on $p_x$ can only produce the $d_{xy}$ partial wave. We therefore believe that much of the formula remains true even if more realistic wave functions for the final states are chosen.\
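As an illustration of how these expressions translate into a polarization dependence, the sketch below evaluates $|\bbox{v}(p_x)\cdot\bbox{\epsilon}|^2$ for a grazing emission direction, taking the $d_{\dots}(\bbox{k}^0)$ to be the usual real $\ell=2$ spherical harmonics and treating the ratio $\tilde{R}_{01}/\tilde{R}_{21}$ (which contains the radial matrix elements and phase shifts) as a free complex parameter; both choices are assumptions of this illustration.

```python
import numpy as np

# Photocurrent j ~ |v.eps|^2 for a p_x initial state, using v(p_x) above.
# The d_lm(k0) are taken as the standard real l=2 spherical harmonics and the
# ratio R01/R21 (with phase shifts) is treated as a free complex parameter.
def d_harm(theta, phi):
    st, ct = np.sin(theta), np.cos(theta)
    return {"3z2-r2": np.sqrt(5 / (16 * np.pi)) * (3 * ct**2 - 1),
            "x2-y2":  np.sqrt(15 / (16 * np.pi)) * st**2 * np.cos(2 * phi),
            "xy":     np.sqrt(15 / (16 * np.pi)) * st**2 * np.sin(2 * phi),
            "xz":     np.sqrt(15 / (4 * np.pi)) * st * ct * np.cos(phi)}

def v_px(theta, phi, ratio):
    """v(p_x), up to the overall factor sqrt(1/15)*R21 (irrelevant for ratios)."""
    d = d_harm(theta, phi)
    return np.array([np.sqrt(5 / (4 * np.pi)) * ratio + d["3z2-r2"]
                     - np.sqrt(3) * d["x2-y2"],
                     -np.sqrt(3) * d["xy"],
                     -np.sqrt(3) * d["xz"]])

theta, ratio = 0.45 * np.pi, 0.8 * np.exp(0.3j)   # grazing emission, assumed ratio
for phi_E in np.linspace(0.0, np.pi, 7):          # scan in-plane polarization angle
    eps = np.array([np.cos(phi_E), np.sin(phi_E), 0.0])
    print(f"phi_E = {phi_E:.2f} rad  ->  |v.eps|^2 = "
          f"{abs(v_px(theta, 0.0, ratio) @ eps)**2:.3f}")
```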
Similarly, we find for the matrix element of the $d$-like orbitals: $$\begin{aligned}
\label{eq:d0}
{\bf v}\left(d_{3z^2-r^2} \right)&=& i\left( \matrix{
\frac{1}{\sqrt{30}} \tilde{R}_{12} p_x
+ \sqrt{\frac{3}{35}}\tilde{R}_{32} f_{5x(z^2-r^2)}
\cr
\frac{1}{\sqrt{30}} \tilde{R}_{12} p_y
+ \sqrt{\frac{3}{35}}\tilde{R}_{32} f_{5y(z^2-r^2)}
\cr
-\frac{2}{\sqrt{15}} \tilde{R}_{12} p_z
+\frac{3}{\sqrt{35}}\tilde{R}_{32} f_{z(5z^2-3r^2)} }\right)\end{aligned}$$
$$\begin{aligned}
\label{eq:dyz}
{\bf v}\left(d_{yz} \right)&=& i
\left( \matrix{
\frac{1}{\sqrt{7}} \tilde{R}_{32} f_{xyz} \cr
-\frac{1}{\sqrt{5}} \tilde{R}_{12} p_z - \sqrt{\frac{3}{35}} \tilde{R}_{32} f_{5z^3-3z} -\frac{1}{\sqrt{7}} \tilde{R}_{32} f_{z(x^2-y^2)} \cr
-\frac{1}{\sqrt{5}} \tilde{R}_{12} p_y +\sqrt{\frac{8}{35}} \tilde{R}_{32} f_{y(5z^2-1)}} \right)\end{aligned}$$
$$\begin{aligned}
\label{eq:dxz}
{\bf v}\left(d_{xz} \right)&=& i
\left( \matrix{
-\frac{1}{\sqrt{5}} \tilde{R}_{12} p_z-\sqrt{\frac{3}{35}} \tilde{R}_{32} f_{5z^2-3z} +\frac{1}{\sqrt{7}} \tilde{R}_{32} f_{z(x^2-y^2)} \cr
\frac{1}{\sqrt{7}} \tilde{R}_{32} f_{xyz} \cr
-\frac{1}{\sqrt{5}} \tilde{R}_{12} p_x +\sqrt{\frac{8}{35}} \tilde{R}_{32} f_{x(5z^2-1)}
} \right)\end{aligned}$$
$$\begin{aligned}
\label{eq:dx2-y2}
{\bf v}\left(d_{x^2-y^2} \right)&=&i
\left( \matrix{
-\frac{1}{\sqrt{5}} \tilde{R}_{12} p_x -\frac{1}{\sqrt{70}} \tilde{R}_{32} f_{x(5z^2-1)}
+\sqrt{\frac{3}{14}} \tilde{R}_{32} f_{x^3-3xy^2} \cr
\frac{1}{\sqrt{5}} \tilde{R}_{12} p_y +\frac{1}{\sqrt{70}} \tilde{R}_{32} f_{y(5z^2-1)}
+\sqrt{\frac{3}{14}} \tilde{R}_{32} f_{3yx^2-y^3} \cr
\frac{1}{\sqrt{7}} \tilde{R}_{32} f_{z(x^2-y^2)}
} \right)\end{aligned}$$
$$\begin{aligned}
\label{eq:dxy}
{\bf v}\left(d_{xy} \right)&=&i
\left( \matrix{
-\frac{1}{\sqrt{5}} \tilde{R}_{12} p_y -\frac{1}{\sqrt{70}} \tilde{R}_{32} f_{y(5z^2-1)}
+\sqrt{\frac{3}{14}} \tilde{R}_{32} f_{yx^2-y^3} \cr
-\frac{1}{\sqrt{5}} \tilde{R}_{12} p_x -\frac{1}{\sqrt{70}} \tilde{R}_{32} f_{x(5z^2-1)}
-\sqrt{\frac{3}{14}} \tilde{R}_{32} f_{x^3-xy^2} \cr
\frac{1}{\sqrt{7}} \tilde{R}_{32} f_{xyz}
} \right)\end{aligned}$$
[rr|rr|rrr|r]{} G$_x$/$2\pi$ & G$_y$/$2\pi$ & $\phi^{(opt)}_E$ & $I/I_0$ & $\phi^{(opt)}_E$ & $\theta^{(opt)}_E$ & $I/I_0$ & $h\nu$\
-1 & -1 & 2.356 & 1.908 & 2.356 & 1.571 & 1.908 & 22eV\
-1 & 0 & 1.275 & 0.958 & 1.275 & 1.005 & 1.345 & 22eV\
0 & -1 & 0.295 & 0.958 & 0.295 & 1.005 & 1.345 & 22eV\
0 & 0 & 2.356 & 1.000 & 2.356 & 1.571 & 1.000 & 22eV\
-1 & -1 & 2.356 & 2.011 & 2.356 & 1.571 & 2.011 & 34eV\
-1 & 0 & 0.974 & 1.053 & 0.974 & 1.040 & 1.417 & 34eV\
-1 & 1 & 1.426 & 2.255 & 1.426 & 0.964 & 3.342 & 34eV\
0 & -1 & 0.597 & 1.053 & 0.597 & 1.040 & 1.417 & 34eV\
0 & 0 & 2.356 & 1.000 & 2.356 & 1.571 & 1.000 & 34eV\
0 & 1 & 2.218 & 0.913 & 2.190 & 0.754 & 1.922 & 34eV\
1 & -1 & 0.145 & 2.255 & 0.145 & 0.964 & 3.342 & 34eV\
1 & 0 & 2.494 & 0.913 & 2.523 & 2.388 & 1.922 & 34eV
[rr|rr|rrr|r]{} G$_x$/$2\pi$ & G$_y$/$2\pi$ & $\phi^{(opt)}_E$ & $I/I_0$ & $\phi^{(opt)}_E$ & $\theta^{(opt)}_E$ & $I/I_0$ & $h\nu$\
-1 & -1 & 2.422 & 4.313 & 2.419 & 1.806 & 4.563 & 22eV\
-1 & 0 & 0.000 & 1.000 & 0.000 & 0.556 & 3.436 & 22eV\
-1 & 1 & 0.719 & 4.313 & 0.723 & 1.335 & 4.563 & 22eV\
0 & -1 & 0.719 & 4.313 & 0.723 & 1.806 & 4.563 & 22eV\
0 & 0 & 0.000 & 1.000 & 0.000 & 2.586 & 3.436 & 22eV\
0 & 1 & 2.422 & 4.313 & 2.419 & 1.335 & 4.563 & 22eV\
-2 & 0 & 0.000 & 1.645 & 0.000 & 2.318 & 3.058 & 34eV\
-1 & -1 & 2.472 & 2.588 & 2.466 & 1.963 & 3.019 & 34eV\
-1 & 0 & 0.000 & 1.000 & 0.000 & 0.751 & 2.133 & 34eV\
-1 & 1 & 0.669 & 2.588 & 0.675 & 1.178 & 3.019 & 34eV\
0 & -1 & 0.669 & 2.588 & 0.675 & 1.963 & 3.019 & 34eV\
0 & 0 & 0.000 & 1.000 & 0.000 & 2.391 & 2.133 & 34eV\
0 & 1 & 2.472 & 2.588 & 2.466 & 1.178 & 3.019 & 34eV\
1 & 0 & 0.000 & 1.645 & 0.000 & 0.823 & 3.058 & 34eV
---
author:
- |
G. Iorio,$^{1,2}$[^1] F. Fraternali,$^{1,3}$ C. Nipoti,$^{1}$ E. Di Teodoro,$^{4}$ J. I. Read$^{5}$ and G. Battaglia$^{6,7}$\
$^{1}$Dipartimento di Fisica e Astronomia, Università di Bologna, Viale Berti Pichat 6/2, I-40127, Bologna, Italy\
$^{2}$INAF - Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127, Bologna, Italy\
$^{3}$Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen, The Netherlands\
$^{4}$Research School of Astronomy and Astrophysics - The Australian National University, Canberra, ACT, 2611, Australia\
$^{5}$Department of Physics, University of Surrey, Guildford, GU2 7XH, Surrey, UK\
$^{6}$Instituto de Astrofisica de Canarias, calle Via Lactea s/n, E-38205 La Laguna, Tenerife, Spain\
$^{7}$Universidad de La Laguna, Dpto. Astrofisica, E-38206 La Laguna, Tenerife, Spain
bibliography:
- 'references.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: 'LITTLE THINGS in 3D: robust determination of the circular velocity of dwarf irregular galaxies.'
---
\[firstpage\]
galaxies: dwarf – galaxies: kinematics – galaxies: ISM – galaxies: structure
Introduction
============
Sample and Data {#sec:sample}
===============
The Method {#sec:method}
==========
Data analysis {#sec:datan}
=============
Results {#sec:gls}
=======
Application: test of the Baryonic Tully-Fisher relation {#sec:scientific}
=======================================================
Discussion {#sec:disc}
==========
Summary {#sec:sum}
=======
\[lastpage\]
[^1]: giuliano.iorio@unibo.it
---
abstract: 'Cubic Pr-based compounds with $\Gamma_3$ non-Kramers doublet ground states can realize a novel heavy Fermi liquid with spinorial hybridization (‘hastatic’ order) that breaks time reversal symmetry. Several Pr-“1-2-20” materials exhibit a suggestive heavy Fermi liquid stabilized in intermediate magnetic fields; these provide key insight into the quadrupolar Kondo lattice. We develop a simple, yet realistic microscopic model of ferrohastatic order, and elaborate its experimental signatures and behavior in field, where it is a good candidate to explain the observed heavy Fermi liquids at intermediate fields in Pr(Ir,Rh)$_2$Zn$_{20}$. In addition, we develop the Landau theory of ferrohastatic order, which allows us to understand its behavior close to the transition and explore thermodynamic signatures from magnetic susceptibility to thermal expansion.'
author:
- 'John S. Van Dyke'
- Guanghua Zhang
- Rebecca Flint
title: 'Field-induced Ferrohastatic Order in Cubic Non-Kramers Doublet Systems'
---
Introduction
============
The complex interplay of spin and orbital degrees of freedom underlies many unusual properties of correlated electron systems. This interplay is especially relevant in heavy fermion materials, where it leads to exotic phenomena from unconventional superconductivity[@Stewart2017] and quantum criticality [@Coleman2005; @Gegenwart2008] to topological insulators[@Dzero2016], spin liquids[@Lucas2017] and hidden orders[@Mydosh2014]. Heavy fermion research has mostly focused on Ce- or Yb-based compounds, where the $4f$ orbital is singly occupied, and its interaction with conduction electrons is well described by the single-channel Kondo effect. However, there are also Pr and U-based heavy fermion materials that contain two localized $f$ electrons and whose many-body ground state is a non-Kramers doublet protected by crystal, not time-reversal, symmetry. These materials offer up a whole new host of behaviors driven by quadrupolar degrees of freedom and the *two-channel Kondo effect*. While these systems may form magnetic or quadrupolar order, become superconducting [@Flint2008; @Hoshino2013] or realize a non-Fermi liquid [@Cox1987; @Cox1998], their Kondo physics is particularly interesting. Here the doubly-occupied ground state fluctuates to a singly (or triply) occupied excited state. As the excited state is Kramers degenerate, there are two distinct channels in which valence fluctuations may occur. Heavy fermions may still form, but now require breaking the channel symmetry. This symmetry-broken heavy Fermi liquid has been termed either “diagonal composite order” [@Hoshino2011; @Hoshino2013; @Kuramoto2014] or, to emphasize its novel spinorial nature, “hastatic order” [@Chandra2013; @Chandra2015; @Zhang2018]. Hastatic order is a *fractionalized* order [@Komijani2018], with a spinorial hybridization.
Cubic materials provide a particularly simple setting in which to study this physics, as here the two-channel Kondo effect is a Kondo effect for the local quadrupolar moments, which are screened by conduction quadrupolar moments in two different spin channels. Indeed, the physics of the quadrupolar Kondo lattice is a long standing problem. In particular, its two-channel nature and relevance to the non-Fermi liquid and unconventional superconductivity in UBe$_{13}$ [@Cox1987; @Cox1998] are not fully understood. However, discerning its role in actinide materials is challenging due to difficulties in resolving the valence and crystal field ground states. The recently discovered cubic Pr-based 1-2-20 materials provide an important opportunity to study these phenomena in a simpler system. These materials exhibit Kondo physics at high temperatures [@Sakai2011; @Onimaru2011; @Matsunami2011; @Onimaru2012; @Tsujimoto2014; @Shimura2015; @Onimaru2016a], along with quadrupolar [@Sakai2011; @Onimaru2011; @Ito2011; @Onimaru2012; @Sato2012; @Ishii2013; @Taniguchi2016; @Iwasa2017], superconducting [@Onimaru2010; @Sakai2012; @Matsubayashi2012; @Onimaru2012; @Tsujimoto2014], non-Fermi liquid [@Matsubayashi2012; @Shimura2015] and unidentified low temperature phases [@Tsujimoto2014; @Tsujimoto2015; @Onimaru2016b; @Yoshida2017]. Unlike in the actinides, the ground state is known to be the $4f^2$ $\Gamma_3$ [@Sakai2011; @Onimaru2011; @Onimaru2012], which imposes two-channel Kondo physics. These materials provide an ideal setting to resolve the role of the quadrupolar Kondo effect and explore hastatic order within a simpler setting. Several exhibit a dome of heavy Fermi liquid at finite magnetic fields [@Onimaru2016b; @Yoshida2017] that is consistent with field-induced hastatic order.
While our previous work has explored cubic hastatic order in a simple two-channel Kondo model [@Zhang2018], comparison to experiment requires a more realistic model. To this end, we treat uniform or “ferrohastatic” (FH) order in the realistic cubic two-channel Anderson model with a combination of a microscopically motivated mean-field theory (justified within large-$N$ and expected to work well at low temperatures) and a phenomenological Landau theory (which can capture the nature of the phase transition). Neither approach captures the whole story, but together they give significant insight. Finally, we argue that the intermediate field regions in Pr(Ir,Rh)$_2$Zn$_{20}$ are ferrohastatic, and give concrete experimental tests, including induced dipole moments in magnetic field, signatures in magnetostriction and thermal expansion, spin-resolved spectroscopies, and novel symmetry-breaking hybridization gaps.
Pr-based 1-2-20 materials details
---------------------------------
Kondo physics in Pr-based materials is rare, but the 1-2-20 materials, Pr$T_2X_{20}$, have atypically strong $c$–$f$ hybridization [@Sakai2011; @Tokunaga2013], as the Pr ions sit within Frank-Kasper cages of 16 X=Al or Zn atoms. Cubic crystal fields select a $\Gamma_3$ ground state doublet [@Sakai2011; @Onimaru2011; @Onimaru2012], and there is considerable evidence for Kondo physics: at high temperatures, experiment shows partial quenching of the $R\ln 2$ entropy [@Sakai2011], logarithmic scattering terms in the resistivity [@Tsujimoto2014], large hyperfine coupling due to $c$–$f$ hybridization [@Tokunaga2013], enhanced effective masses [@Shimura2015], and a Kondo resonance in photoemission [@Matsunami2011]. At low temperatures, all of these materials order in some fashion and become superconducting at very low temperatures: PrTi$_2$Al$_{20}$ and PrIr$_2$Zn$_{20}$ order ferro- (FQ) and antiferroquadrupolarly (AFQ) at $T_Q = 2$K and $0.11$K, respectively [@Sakai2011; @Onimaru2011], while the ordering in PrV$_2$Al$_{20}$ [@Sakai2011] and PrRh$_2$Zn$_{20}$ [@Onimaru2012] is still undetermined, although octupolar order seems likely in PrV$_2$Al$_{20}$ [@Freyer2018; @Lee2018; @Patri2018]. Quadrupolar order can be suppressed both with pressure (PrTi$_2$Al$_{20}$ [@Matsubayashi2012]) and with field \[Pr(Ir,Rh)$_2$Zn$_{20}$ [@Onimaru2011; @Onimaru2012] and PrV$_2$Al$_{20}$ [@Sakai2011]\], leading to an extended non-Fermi liquid region at higher temperatures. Pressure enhances the superconductivity in PrTi$_2$Al$_{20}$ [@Matsubayashi2012], which is likely unconventional. The in-field phase diagrams are even more interesting, as there is a heavy Fermi liquid region sandwiched between the zero-field order and a polarized high field state where Kondo physics is lost[@Onimaru2016b; @Yoshida2017].
Structure of the paper
----------------------
Ferrohastatic order and the generic infinite-$U$ two-channel Anderson model is introduced in section \[sec:FHorder\]. Section \[sec:microscopics\] fleshes out the details of the microscopic Anderson model and solves it within a large-$N$ mean-field theory for both FH and the competing antiferrohastatic (AFH) orders, and also considers interactions with the competing antiferroquadrupolar (AFQ) order. In section \[sec:landau\], we develop the Landau theory of cubic ferrohastatic order, examine its interactions with field, strain and AFQ order and discuss the thermodynamic signatures. Finally, in Section \[sec:signatures\] we summarize and expand upon the experimental signatures of FH order and how it may be distinguished from quadrupolar orders, before concluding in Section \[sec:conclusion\].
Ferrohastatic order \[sec:FHorder\]
===================================
Hastatic order is a natural candidate for materials with an even number of $f$ electrons and doublet crystal-field ground states. The cubic $\Gamma_3$ doublet is the simplest of these, with no dipole moments, $\langle \vec{J} \rangle = 0$, but finite quadrupolar ($O_{x^2-y^2}$, $O_{3z^2-r^2}$) and octupolar ($T_{xyz}$) moments [@Cox1998], and it is protected by cubic, not time-reversal, symmetry. Overlap between the non-Kramers $\Gamma_3$ states and conduction electrons leads to valence fluctuations, shown in Fig. \[fig:leveldiagram\], in which a $4f$ electron escapes into the conduction sea, leaving an excited $4f^1$ state, here the $\Gamma_7$ Kramers doublet [^1]. These valence fluctuations are mediated by a $\Gamma_8$ quartet of conduction electrons, and thus two conduction channels screen a single $f$-moment. We consider a simple cubic lattice with two $e_g$ conduction bands that have the required $\Gamma_8=e_g\otimes\sfrac{1}{2}$ symmetry, from the orbital and spin degrees of freedom, respectively [^2]. The $\Gamma_3$ states are labeled by their quadrupole moments, $\alpha$, while the excited $\Gamma_7$ are labeled by their dipole moments, $\mu$. $\mu$ is the channel index in a two channel Anderson lattice model [@Cox1998], while $\alpha$ represents the screened pseudospin.
![Atomic level diagram illustrating valence fluctuations out of a $4f^2$ non-Kramers ($\Gamma_3$) doublet to a $4f^1$ excited $\Gamma_7$ Kramers doublet via the conduction electron quartet, $\Gamma_8$. The form-factors of each state are shown, with the $\Gamma_7$ and $\Gamma_8$ orbitals possessing an extra dipolar moment (indicated by the arrows).\[fig:leveldiagram\]](level_diagram.png "fig:")-0.2cm
We consider an infinite-$U$ two channel Anderson model with the Hamiltonian: $$\begin{aligned}
\mathcal{H} &= \mathcal{H}_c + \mathcal{H}_f + \mathcal{H}_{VF}.\end{aligned}$$ The valence fluctuation Hamiltonian is $$\begin{aligned}
\mathcal{H}_{VF} = V \sum_{j\mu\alpha} \tilde{\mu} |j, \Gamma_3, \alpha \rangle \langle j, \Gamma_7, -\mspace{2mu} \mu | \psi_{j, \Gamma_8 \mu \alpha} + H.c. \label{eq:VF}\end{aligned}$$ The Hubbard operators $| j,\Gamma_3,\alpha\rangle \langle j, \Gamma_7,-\mspace{2mu}\mu |$ transition the $f$-electron system between the ground and excited states, while $\psi_{j,\Gamma_8 \mu \alpha}$ annihilates a $\Gamma_8$ conduction electron. $V$ is the bare hybridization strength and $\tilde{\mu} = \mathrm{sgn}(\mu)$ imposes a singlet state of the conduction and $f$ electrons. The conduction and $f$-electron terms are $$\begin{aligned}
\mathcal{H}_c &= \sum_{{\boldsymbol{\mathbf{k}}}\sigma\alpha\beta} \epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta} c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\beta} \label{eq:Hc} \\
\mathcal{H}_f &= \sum_{j\mu} \Delta E | j,\Gamma_7,\mu\rangle \langle j, \Gamma_7,\mu |, \label{eq:Hf}\end{aligned}$$ where $c_{{\boldsymbol{\mathbf{k}}}\sigma\alpha}$ annihilates a conduction electron in channel $\sigma$ with pseudospin $\alpha=\{+,-\}$ ($\epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta}$ is the conduction electron dispersion). Here $\Delta E > 0$ is the energy of the excited $4f^1$ state and $ | j,\Gamma_7,\mu\rangle \langle j, \Gamma_7,\mu |$ is the projector onto this state.
To proceed, we replace the Hubbard operators with slave bosons $b_{j\mu}$ and fermions $f_{j\alpha}$ [@Coleman1984; @Read1983]. $b_{j\mu}$ represents the excited doublet and $f_{j\alpha}$ the non-Kramers doublet. Other states are forbidden, imposed by the constraint $f^\dagger_{j\alpha} f_{j\alpha} + b_{j\mu}^\dagger b_{j\mu} = 1$, where we introduce Einstein summation notation. The Hubbard operators become $$\begin{aligned}
& |j,\Gamma_7,\mu\rangle \langle j,\Gamma_7,\mu | \rightarrow b^\dagger_{j\mu} b_{j\mu}, \cr
&|j,\Gamma_3,\alpha\rangle \langle j,\Gamma_3,\alpha | \rightarrow f^\dagger_{j\alpha} f_{j\alpha},\cr
& |j,\Gamma_3,\alpha\rangle \langle j,\Gamma_7,\mu | \rightarrow f^\dagger_{j\alpha} b_{j\mu},\end{aligned}$$ In this representation, the Hamiltonian becomes $$\begin{aligned}
H = & \sum_{{\boldsymbol{\mathbf{k}}}\sigma} \epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta} c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\beta} +\! V\! \sum_{j}\! \left( \tilde{\mu} f^\dagger_{j\alpha} b_{j-\mu} \psi_{j,\Gamma_8 \mu \alpha} + H.c. \right)\notag \\
& + \sum_j \left( \left[\lambda_j + \Delta E \right] b_{j\mu}^\dagger b_{j\mu} + \lambda_j \left[f^\dagger_{j\alpha} f_{j\alpha} - 1\right] \right), \end{aligned}$$ where the Lagrange multipliers $\lambda_j$ enforce the constraint.
This model can be solved exactly within an $SU(N)$ large-$N$ limit, where $\alpha = \pm 1, \ldots ,N$, while $\mu = \uparrow,\downarrow$ remains $SU(2)$. In this mean-field limit, the slave bosons condense, $\langle b_{j\mu} \rangle \neq 0$ below the transition temperature $T_K$. On account of the two degenerate excited levels (corresponding to the channels labeled by $\mu$), the hastatic order parameter $b$ forms a spinor $$\begin{aligned}
b = \begin{pmatrix}
b_\uparrow \\
b_\downarrow
\end{pmatrix} \label{eq:bspinor}\end{aligned}$$ As $b_\uparrow$ and $b_\downarrow$ assume definite values in the hastatic state, the system necessarily breaks time reversal and spin rotation symmetry. While these are also broken in an ordinary magnetic system, hastatic order additionally breaks double time reversal symmetry, due to the spinorial nature of the order parameter. Microscopically, we may think of hastatic order as consisting in a choice of hybridization spinor (magnitude and direction) for each site in the lattice. This leads to various realizations of hastatic order similar to the forms of magnetic order (ferro-, antiferro-, etc.) determined by the arrangement of spins in a magnetic system. We term the simplest possibility, namely a uniform magnitude and direction of the spinor at each site, *ferrohastatic order* (FH), in analogy with the magnetic case. A particular FH ansatz, in which the $f$ electrons exclusively hybridize with spin-$\uparrow$ conduction electrons, is shown in Fig. \[fig:FHansatz\].
![Schematic of ferrohastatic (FH) order in which the $f$-electron hybridizes exclusively with the spin-$\uparrow$ conduction electrons. $b$ represents the occupation of the excited state, or in the Kondo limit, the Kondo singlet formed between local moment and conduction electrons.\[fig:FHansatz\]](FH_ansatz2.png "fig:")-0.2cm
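One way to make the double time reversal statement explicit is to note that, assuming the hastatic spinor transforms as a Kramers spinor under the time reversal operator $\Theta$, $$\Theta\, b = i\sigma_2\, b^{*}, \qquad \Theta^2\, b = i\sigma_2\left(i\sigma_2\, b^{*}\right)^{*} = -\,b,$$ whereas any vectorial (dipolar or quadrupolar) order parameter $\vec{m}$ satisfies $\Theta^2\,\vec{m} = +\vec{m}$; this relative sign is what distinguishes the spinorial order parameter from conventional magnetic or quadrupolar order.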
The large-$N$ phase diagram of the two-channel Kondo limit shows that FH order is favored in a range around half-filling, with antiferrohastatic (AFH) order favored for smaller fillings [@Zhang2018]. Strong coupling analysis yields a similar picture, where the strong coupling limit of our model is the two-channel Kondo lattice with $J_K = \frac{V^2}{\Delta E}$. As $J_K \rightarrow \infty$, conduction electrons added to the system form Kondo singlets until all of the local moments are screened, which occurs at quarter-filling. These Kondo singlets carry the channel (physical spin) index and can be treated as hard-core bosons [@Schauerte2005]. Exactly at quarter-filling, these spinful Kondo singlets are the only degree of freedom and order antiferrohastatically due to superexchange ($\sim t^2/J_K$) from virtual hopping ($t$) of the conduction electrons. Adding a single conduction electron forces the Kondo singlets to be FH in order to maximize kinetic energy ($\sim t$), in analogy with the infinite-U Hubbard model [@Nagaoka1966]; as in the Hubbard model, we expect the AFH region to extend some distance above quarter-filling for finite $J_K$. FH order also wins at half-filling, as it again maximizes the kinetic energy. Note that hastatic order is always stabilized over quadrupolar order at strong coupling, as the local Kondo singlet lowers its energy via quantum fluctuations, while AFQ order freezes the local $f$-moment. On site, the energy of the Kondo singlet is $-J_K S(S+1) = -3J_K/4$, while the frozen $f$-moment minimizes its energy with two conduction electrons per site: $ c^\dagger_{\uparrow +} f^\dagger_- c^\dagger_{\downarrow +}\vert 0\rangle$, with energy $-2 J_K S^2 = -J_K/2$. In section \[sec:microscopics\], we solve our Anderson model within the $SU(N)$ large-$N$ limit and find that FH order is found in a large region around half-filling of the conduction electrons, similar to what is expected from this strong coupling analysis and what was found in the Kondo limit. Additionally, we shall see that as FH order contains small magnetic moments, it is favored by magnetic field, as is also the case in the Kondo limit [@Zhang2018].
Microscopic model and phase diagrams \[sec:microscopics\]
=========================================================
Now we return to our microscopic Anderson model to flesh out the details and solve it within the large-$N$ limit. As we are particularly interested in the effect of magnetic field, we first examine this coupling in detail.
Magnetic field affects Kramers and non-Kramers components differently, coupling linearly to the conduction electrons and the excited $\Gamma_7$ state, as shown in the Hamiltonian below, where the magnetic field $\vec{B} = B \hat z$. $\Gamma_3$ does not couple to $B$ directly, but acquires a small moment linear in $B$ due to virtual transitions to the excited triplet states at energy $\Delta$, leading to an $O(B^2/\Delta)$ splitting [@Zhang2018]. For simplicity, we consider transitions only to the excited $\Gamma_4$ triplet [@Onimaru2016a], which affects only $\vert\Gamma_3, +\rangle$. The magnetic field part of the Hamiltonian is therefore $$\begin{aligned}
H_B = &-\sum_{{\boldsymbol{\mathbf{k}}}\sigma\alpha\beta} \tilde{\sigma} \mu_B B c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\beta} -\sum_j \Big( \gamma B^2 \delta_{\alpha,+} f^\dagger_{j\alpha} f_{j\alpha} \notag \\
& \hspace{20pt} + \mu_B g_L B \langle J_z \rangle_{\Gamma_7} \tilde{\mu} b_{j\mu}^\dagger b_{j\mu} \Big).\end{aligned}$$ $\mu_B$ is the Bohr magneton, $g_L$ the Landé $g$-factor and $\langle J_z \rangle_{\Gamma_7}$ the $J_z$ angular momentum of the $\Gamma_7$ state. $\gamma = 6$ gives the nonlinear coupling of $|\Gamma_3,+\rangle$ to $B^2$. In finite field, the $\vert\Gamma_3, +\rangle$ state develops a dipole moment linear in field, $$\!m_f = \mu_B \!\! \left[ 12 \frac{\mu_B B}{\Delta}\! -\! 130 \left(\frac{\mu_B B}{\Delta}\right)^3\!\right]\! \langle n_{g} \rangle + O[\left(\frac{\mu_B B}{\Delta}\right)^5\!],\!$$ for $\mu_B B \ll \Delta$, where $\langle n_{g} \rangle$ is the ground state ($\Gamma_3+$) occupation.
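For orientation, the size of this induced moment can be evaluated directly; in the sketch below the splitting $\Delta$ to the $\Gamma_4$ triplet is set to an assumed representative value of $30$ K and $\langle n_g \rangle$ to unity, so the numbers are only indicative.

```python
# Numerical illustration of the field-induced Gamma_3+ dipole moment above.
# Delta (the Gamma_3 -> Gamma_4 splitting) and <n_g> = 1 are assumed values.
muB_over_kB = 0.6717      # Kelvin per Tesla
Delta = 30.0              # assumed splitting, Kelvin
n_g = 1.0                 # ground-state (Gamma_3+) occupation

def m_f(B_tesla):
    """Induced moment in units of mu_B; valid only for mu_B*B << Delta."""
    x = muB_over_kB * B_tesla / Delta
    return (12.0 * x - 130.0 * x**3) * n_g

for B in (0.5, 1.0, 2.0):
    print(f"B = {B:3.1f} T  ->  m_f ~ {m_f(B):.3f} mu_B")
```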
The slave boson Hamiltonian is then, $$\begin{aligned}
H = & \sum_{{\boldsymbol{\mathbf{k}}}\sigma} \left( \epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta} -\mu_B B \tilde{\sigma} \right) c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\beta} \notag \\
& + V \sum_{j} \left( \tilde{\mu} f^\dagger_{j\alpha} b_{j-\mu} \psi_{j,\Gamma_8 \mu \alpha} + H.c. \right)\notag \\
& + \sum_j \left( \left[\lambda_j + \Delta E -\mu_B B g_L \langle J_z \rangle_{\Gamma_7} \tilde{\mu} \right] b_{j\mu}^\dagger b_{j\mu} \right. \notag \\
& \hspace{20pt} + \left. \left[ \lambda_j - \gamma B^2 \delta_{\alpha,+} \right] f^\dagger_{j\alpha} f_{j\alpha} - \lambda_j \right), \end{aligned}$$
The $\Gamma_8$-symmetry electrons, $\psi_{j,\Gamma_8 \mu \alpha}$ appearing above are $f$-states, but they have a finite overlap with the conduction electron bands of whatever type, which can be incorporated via a Wannier form factor $\Phi$; here this describes the overlap between the odd-parity $\Gamma_8$ and the even-parity $d$ states at neighboring sites: $$\begin{aligned}
\psi_{j, \Gamma_8 \mu \alpha} &= \sum_{{\boldsymbol{\mathbf{k}}} \sigma \alpha'} \mathrm{e}^{i {\boldsymbol{\mathbf{k}}} \cdot {\boldsymbol{\mathbf{R}}}_j} \Phi^{\sigma \alpha'}_{\mu \alpha}({\boldsymbol{\mathbf{k}}})
c_{{\boldsymbol{\mathbf{k}}} \sigma \alpha'}.\end{aligned}$$ For simplicity we consider Pr $5d$ states; $f$-electrons in PrT$_2$(Al,Zn)$_{20}$ are more likely to hybridize with Al or Zn $p$-states, leading to different form factors but qualitatively similar physics. Our bands mirror those of SmB$_6$, where the $\Gamma_8$ ground state couples to $e_g$ conduction electrons [@Alexandrov2013; @Baruselli2014], although the nature of our (spinorial) hybridization is clearly different. We consider generic nearest-neighbor conduction electron dispersions $\epsilon_{{\boldsymbol{\mathbf{k}}}}$ and hybridization form factors $\Phi$ with cubic symmetry. Both of these are matrices: $\epsilon_{{\boldsymbol{\mathbf{k}}}}$ is a matrix in $\alpha$ and $\sigma$ space, and is derived similarly to the hybridization form factor, shown in Appendix \[sec:slaterkoster\], $$\begin{aligned}
\mathcal\epsilon_{{\boldsymbol{\mathbf{k}}}} = &
- t[(c_x+c_y)(\frac{1}{2} + \frac{3}{2}\eta_c)+2c_z] (\sigma_0 \otimes\alpha_0) \cr
& -\frac{\sqrt{3}}{2}t(c_x-c_y)(1-\eta_c)(\sigma_0 \otimes\alpha_1)\cr
& -\mu_B B (\sigma_3 \otimes \alpha_0) \label{eq:Hd}\end{aligned}$$ where $c_i \equiv \cos (k_i a)$ $(i=x,y,z)$ and $\mu$ is the chemical potential. For our numerical calculations we set the nearest neighbor spacing $a = 1$ and the overall hopping magnitude $t=1$, effectively measuring everything else in units of $t$. The conduction electron band width $D = 12t$. There is a single free parameter, $\eta_c$ that tunes the degeneracies and anisotropies of the bands. We fix the number of conduction electrons above the transition, $n_{c0}$ and allow $\mu$ to vary to preserve the total charge. The hybridization form factors $\Phi$ are given below, but are similarly described by an overall magnitude $V$ and free parameter $\eta_V$.
Slave boson theory for ferrohastatic order
------------------------------------------
In this section, we give the full detailed mean-field Hamiltonian for the FH ansatz with the hastatic spinor oriented along $\hat{z}$, $\hat{b}_j = (b, 0)^T$, $$\begin{aligned}
H & = \sum_{{\boldsymbol{\mathbf{k}}}} \epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta} c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\beta} + \mathcal{N}\left(\Delta E -\mu_B B g_L \langle J_z \rangle_{\Gamma_7}\right)|b|^2 \cr
&\!\! - Vb\! \sum_{j}\! \left( f^\dagger_{j\alpha} \psi_{j,\Gamma_8 \downarrow \alpha} + H.c. \right)\! +\! \lambda \sum_{j} (f^\dagger_{j\alpha} f_{j\alpha} + |b|^2\! -\! 1) \notag \\
&
\!\!+ \mu \sum_j(c^\dagger_{j\sigma\alpha} c_{j\sigma\alpha}\! -\! |b|^2\! -\! n_{c,0})\!-\! \gamma B^2 \sum_{j}\! f^\dagger_{j+} f_{j+} \end{aligned}$$ Here, $\epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta}$ is the conduction electron dispersion matrix given in eqn. (\[eq:Hd\]). There are two Lagrange multipliers, $\lambda$ and $\mu$. The first enforces the average local constraint on the occupations of the localized $f$-electron orbitals, while the second enforces the global conservation of charge. The magnetic field lies solely along the direction of the hastatic spinor, $\hat{z}$. In momentum space, the Hamiltonian is $$\begin{aligned}
H &= \sum_{{\boldsymbol{\mathbf{k}}},\sigma} \epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta} c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\beta} + \mathcal{N}\Delta E |b|^2 + \lambda \mathcal{N} (|b|^2 - 1) \notag \\
&- Vb \sum_{{\boldsymbol{\mathbf{k}}}\sigma\alpha\alpha'} \Big( f^\dagger_{{\boldsymbol{\mathbf{k}}},\alpha} c_{{\boldsymbol{\mathbf{k}}}, \sigma \alpha'}\Phi^{\sigma \alpha'}_{\downarrow \alpha}({\boldsymbol{\mathbf{k}}})
+ H.c. \Big) \notag \\
&+ \mu \left[ \sum_{{\boldsymbol{\mathbf{k}}}} c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} - \mathcal{N} (|b|^2 - n_{c,0}) \right]+\lambda \sum_{{\boldsymbol{\mathbf{k}}}}f^\dagger_{{\boldsymbol{\mathbf{k}}}\alpha}f_{{\boldsymbol{\mathbf{k}}}\alpha} \notag \\
&- \! \gamma B^2\! \sum_{{\boldsymbol{\mathbf{k}}}} f^\dagger_{{\boldsymbol{\mathbf{k}}}+} f_{{\boldsymbol{\mathbf{k}}}+} - \mathcal{N} g_L \langle J_z \rangle_{\Gamma_7} \mu_B B |b|^2 \end{aligned}$$ where $\mathcal{N}$ is the number of sites. In a path integral approach, the saddle-point approximation (exact in the $SU(N)$ large-$N$ limit) leads to the self-consistency equations: $$\begin{aligned}
\frac{\partial \mathcal{F}}{\partial b} &= 0; &
\frac{\partial \mathcal{F}}{\partial \lambda} &= 0; &
\frac{\partial \mathcal{F}}{\partial \mu} &= 0. &
\label{eq:selfcons}\end{aligned}$$ The resulting mean field Hamiltonian can be written as a matrix $$\begin{aligned}
H &= \sum_{{\boldsymbol{\mathbf{k}}}} \Psi_{{\boldsymbol{\mathbf{k}}}}^\dagger \begin{pmatrix}
\mathcal{H}_c({\boldsymbol{\mathbf{k}}}) & \mathcal{V}_z({\boldsymbol{\mathbf{k}}})^\dagger \\
\mathcal{V}_z({\boldsymbol{\mathbf{k}}}) & \mathcal{H}_f({\boldsymbol{\mathbf{k}}})
\end{pmatrix}\Psi_{{\boldsymbol{\mathbf{k}}}} + const. \notag \\
&\equiv \sum_{{\boldsymbol{\mathbf{k}}}} \Psi_{{\boldsymbol{\mathbf{k}}}}^\dagger \mathcal{H}_{{\boldsymbol{\mathbf{k}}}}'
\Psi_{{\boldsymbol{\mathbf{k}}}} + const.\label{eq:HMF}\end{aligned}$$ with spinor $\Psi = (
c_{\uparrow +} \; c_{\uparrow -} \; c_{\downarrow +} \; c_{\downarrow -} \; f_{+} \; f_{-})^T$. Here $\mathcal{H}_c = \epsilon_{{\boldsymbol{\mathbf{k}}}} + \mu \sigma_0 \alpha_0$ is a $4 \times 4$ matrix, $\mathcal{H}_f = \lambda \alpha_0-\gamma B^2 (1+\alpha_3)/2$ is a $2 \times 2$ matrix (we use two types of Pauli matrices, $\sigma_\lambda$ and $\alpha_\lambda$ ($\lambda = 0,1,2,3$), to represent the spin and pseudospin degrees of freedom), and $\mathcal{V}_z$ is the $2 \times 4$ hybridization matrix, $$\begin{aligned}
\mathcal{V}({\boldsymbol{\mathbf{k}}}) = -V\sum_{\mu} \tilde{\mu} b_{-\mu} \Phi^{\sigma \alpha'}_{\mu \alpha}({\boldsymbol{\mathbf{k}}}),\end{aligned}$$ which, for $\hat b \parallel \hat z$, takes the form
$$\begin{aligned}
&\mathcal{V}_z({\boldsymbol{\mathbf{k}}}) = -Vb\begin{pmatrix}
\Phi^{\uparrow +}_{\downarrow +}({\boldsymbol{\mathbf{k}}}) & \Phi^{\uparrow -}_{\downarrow +}({\boldsymbol{\mathbf{k}}}) & \Phi^{\downarrow +}_{\downarrow +}({\boldsymbol{\mathbf{k}}}) & \Phi^{\downarrow -}_{\downarrow +}({\boldsymbol{\mathbf{k}}}) \\
\Phi^{\uparrow +}_{\downarrow -}({\boldsymbol{\mathbf{k}}}) & \Phi^{\uparrow -}_{\downarrow -}({\boldsymbol{\mathbf{k}}}) & \Phi^{\downarrow +}_{\downarrow -}({\boldsymbol{\mathbf{k}}}) & \Phi^{\downarrow -}_{\downarrow -}({\boldsymbol{\mathbf{k}}})
\end{pmatrix}\\
&= -Vb \begin{pmatrix}
-\frac{i(1\!+\!3\eta_v)}{2}s_+ & \frac{i\sqrt{3}(-1\!+\!\eta_v)}{2}s_- & -2i s_z & 0 \\
\frac{i\sqrt{3}(-1\!+\!\eta_v)}{2}s_- & -\frac{i(3\!+\!\eta_v)}{2}s_+ & 0 & -2i \eta_v s_z
\end{pmatrix}.\end{aligned}$$
Here $s_\pm = s_x \pm i s_y$, where $s_i = \sin (k_i a)$ $(i = x,y,z)$. The free energy density $\mathcal{F}$ is obtained by integrating out the fermions, leading to $$\begin{aligned}
\mathcal{F} =& -T\sum_{{\boldsymbol{\mathbf{k}}}\zeta} \ln (1 + e^{-E_{{\boldsymbol{\mathbf{k}}}\zeta}/T}) + (\lambda+\Delta E)|b|^2 - \lambda \notag \\
&- \mu (|b|^2 + n_{c,0})- g_L \langle J_z \rangle_{\Gamma_7} \mu_B B|b|^2\end{aligned}$$ where $E_{{\boldsymbol{\mathbf{k}}}\zeta}$ are the six eigenvalues of the Hamiltonian matrix $\mathcal{H}_{{\boldsymbol{\mathbf{k}}}}'$. Within this formalism, we can treat both FH order ($b \neq 0$) and paraquadrupolar order ($b = 0$, $\lambda = 0$). From the Kondo limit and strong coupling analysis, we expect FH order to be favored near half-filling; moreover, since the system gains energy by aligning the spinor with the external field, uniform FH order is also favored in field over competing states with non-uniform arrangements of the hastatic spinor. In section \[sec:FHvsAFQ\], we consider the competition between the FH and the AFQ ansatzes.
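As an illustration of how eqs. (\[eq:HMF\]) and (\[eq:selfcons\]) can be handled numerically, the sketch below assembles the $6\times6$ Bloch matrix from the blocks $\mathcal{H}_c$, $\mathcal{H}_f$ and $\mathcal{V}_z$ defined above, evaluates the free energy density on a uniform $k$-grid (normalized per site), and searches for the stationary point in $(b,\lambda,\mu)$ with a finite-difference gradient and a root solver. The parameter values loosely follow those quoted in the figure captions; the grid size, temperature, and initial guess are illustrative only and will generally need tuning and convergence checks in a production calculation.

```python
import numpy as np
from scipy.optimize import root

# Illustrative parameters (loosely following values quoted in the figure captions)
t, eta_c, V, eta_v = 1.0, 1.0, 0.8, 1.0
dE, nc0, T, B, gJB, gamma = 5.5, 1.6, 0.05, 0.0, 0.0, 0.0   # gJB = g_L <J_z> mu_B B

s0, s3 = np.eye(2), np.diag([1.0, -1.0])
a0, a1, a3 = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0])

def Hk(kx, ky, kz, b, lam, mu):
    """6x6 Bloch matrix of eq. (HMF) in the basis (c_up+, c_up-, c_dn+, c_dn-, f+, f-)."""
    cx, cy, cz = np.cos([kx, ky, kz])
    sx, sy, sz = np.sin([kx, ky, kz])
    sp, sm = sx + 1j*sy, sx - 1j*sy
    eps = (np.kron(s0, -t*(1+eta_c)*(cx+cy+cz)*a0
                       - np.sqrt(3)/2*t*(1-eta_c)*(cx-cy)*a1
                       + t/2*(1-eta_c)*(cx+cy-2*cz)*a3)
           - B*np.kron(s3, a0))
    Hc = eps + mu*np.eye(4)
    Hf = lam*a0 - gamma*B**2*(a0 + a3)/2
    Vz = -V*b*np.array([[-0.5j*(1+3*eta_v)*sp, 0.5j*np.sqrt(3)*(eta_v-1)*sm, -2j*sz, 0],
                        [0.5j*np.sqrt(3)*(eta_v-1)*sm, -0.5j*(3+eta_v)*sp, 0, -2j*eta_v*sz]])
    return np.block([[Hc, Vz.conj().T], [Vz, Hf]])

def free_energy(b, lam, mu, Nk=8):
    """Mean-field free energy density, transcribing the expression above."""
    ks = 2*np.pi*np.arange(Nk)/Nk
    F = 0.0
    for kx in ks:
        for ky in ks:
            for kz in ks:
                E = np.linalg.eigvalsh(Hk(kx, ky, kz, b, lam, mu))
                F += -T*np.sum(np.logaddexp(0.0, -E/T))
    return F/Nk**3 + (lam + dE)*b**2 - lam - mu*(b**2 + nc0) - gJB*b**2

def saddle(x, h=1e-4):
    """Finite-difference gradient of F in (b, lambda, mu); it vanishes at the saddle point."""
    b, lam, mu = x
    return [(free_energy(b+h, lam, mu) - free_energy(b-h, lam, mu))/(2*h),
            (free_energy(b, lam+h, mu) - free_energy(b, lam-h, mu))/(2*h),
            (free_energy(b, lam, mu+h) - free_energy(b, lam, mu-h))/(2*h)]

sol = root(saddle, x0=[0.5, 3.0, -1.0])
print("b, lambda, mu =", sol.x)
```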
![Phase diagram of ferrohastatic (FH) and antiferrohastatic (AFH) orders at $T=0$ as a function of conduction electron filling. FH order is favored near half-filling, while AFH order is favored near quarter-filling, as expected from the strong coupling approach. Note that particle-hole symmetry is absent in the Anderson model approach due to the finite occupation of the excited $f$ state. \[fig:AFHPD\]](phasediagram_FH_vs_AFH.pdf)
Antiferrohastatic mean-field theory
-----------------------------------
In Fig. \[fig:AFHPD\] we show the mean-field $T = 0$ phase diagram as a function of conduction electron filling $n_{c0}$ for our model on the simple cubic lattice. As expected from the strong coupling analysis, FH order appears near half-filling, while AFH is found around quarter-filling. Here we used the mean-field theory of the Néel staggered AFH ansatz to compute the phase diagram. The two Néel sublattices have the hastatic spinor oriented oppositely, e.g. $\hat{b}_A = (b, 0)^T$, $\hat{b}_B = (0, b)^T$. A one dimensional version of this AFH order is shown in the cartoon of Fig. \[fig:AFHcartoon\].
![Cartoon of AFH order in 1D, where the hastatic spinor $\hat{b}$ is alternately aligned with the $+\hat{z}$ and $-\hat{z}$ axes. \[fig:AFHcartoon\]](AFHcartoon.png)
The slave boson expectation value at site $j$ may be written as $$\begin{aligned}
b_j &= \frac{b_A}{2} \left(1 + e^{i {\boldsymbol{\mathbf{Q}}} \cdot {\boldsymbol{\mathbf{R}}}} \right) + \frac{b_B}{2} \left(1 - e^{i {\boldsymbol{\mathbf{Q}}} \cdot {\boldsymbol{\mathbf{R}}}} \right) \notag \\
&= \frac{1}{2}\left[
\begin{pmatrix}
b\\
b
\end{pmatrix}
+e^{i {\boldsymbol{\mathbf{Q}}} \cdot {\boldsymbol{\mathbf{R}}}}
\begin{pmatrix}
b\\
-b
\end{pmatrix}
\right],\end{aligned}$$ where ${\boldsymbol{\mathbf{Q}}} = (\pi,\pi,\pi)$. The mean-field Hamiltonian for the AFH case can then be written, $$\begin{aligned}
H =& \sum_{{\boldsymbol{\mathbf{k}}},\sigma} \epsilon_{{\boldsymbol{\mathbf{k}}}\alpha\beta} c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\beta} \cr
& + \frac{Vb}{2}\! \sum_{{\boldsymbol{\mathbf{k}}}}\! \left( [\Phi_{\sigma \alpha'}^{\uparrow \alpha}({\boldsymbol{\mathbf{k}}}) - \Phi_{\sigma \alpha'}^{\downarrow \alpha}({\boldsymbol{\mathbf{k}}})]c^\dagger_{{\boldsymbol{\mathbf{k}}} \sigma \alpha'} f_{{\boldsymbol{\mathbf{k}}},\alpha} \right. \cr
& \hspace{1em} \left. - [\Phi_{\sigma \alpha'}^{\uparrow \alpha}({\boldsymbol{\mathbf{k}}}) + \Phi_{\sigma \alpha'}^{\downarrow \alpha}({\boldsymbol{\mathbf{k}}})]c^\dagger_{{\boldsymbol{\mathbf{k}}} \sigma \alpha'} f_{{\boldsymbol{\mathbf{k}}}+{\boldsymbol{\mathbf{Q}}},\alpha} + H.c. \right) \notag \\
& + \lambda \left[ \sum_{{\boldsymbol{\mathbf{k}}}} f^\dagger_{{\boldsymbol{\mathbf{k}}}\alpha} f_{{\boldsymbol{\mathbf{k}}}\alpha} + \mathcal{N} (|b|^2 - 1) \right] + \mu \sum_{{\boldsymbol{\mathbf{k}}}} c^\dagger_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} c_{{\boldsymbol{\mathbf{k}}}\sigma\alpha} \cr
&- \mu \mathcal{N} (|b|^2 - n_{c,0}) + \mathcal{N}\Delta E |b|^2\end{aligned}$$ where ${\boldsymbol{\mathbf{k}}}$ ranges over the original Brillouin zone. The free energy is obtained in a similar fashion as the FH case, $$\begin{aligned}
\mathcal{F} =& -T\sum_{{\boldsymbol{\mathbf{k}}}\zeta} \ln (1 + e^{-E_{{\boldsymbol{\mathbf{k}}}\zeta}/T}) + \mathcal{N}(\lambda+\Delta E)|b|^2 \cr &- \mathcal{N}\lambda
-\mu \mathcal{N}(|b|^2 + n_{c,0}),\end{aligned}$$ where $E_{{\boldsymbol{\mathbf{k}}}\zeta}$ now ranges over the twelve AFH bands. The free energy is minimized by solving the saddle point equations, $\partial \mathcal{F}/\partial b = 0$, $\partial \mathcal{F}/\partial \lambda = 0$, and $\partial \mathcal{F}/\partial \mu = 0$.
The AFH phase has staggered magnetic moments, but no uniform moments (magnetic or multipolar), and we find that FH order is quickly favored over AFH in finite magnetic field. The relative stability of AFH order will be more materials-dependent than FH order, as it depends more strongly on the details of the crystal structure. Here we analyze the simple cubic case for simplicity, but the qualitative features are expected to hold for the diamond lattice applicable to the Pr-1-2-20 materials.
Competition with antiferroquadrupolar order \[sec:FHvsAFQ\]
-----------------------------------------------------------
To capture the competing AFQ orders observed in PrT$_2$X$_{20}$ materials, we introduce a quadrupolar Heisenberg term to the Hamiltonian: $$\begin{aligned}
\mathcal{H}_{\mathcal{Q}} = J_{\mathcal{Q}} \sum_{\langle i j \rangle} \vec{\tau}_{f,i} \cdot \vec{\tau}_{f,j}\end{aligned}$$ where $\vec{\tau}_{f,i} = \frac{1}{2} f^\dagger_{i\alpha}\vec{\tau}_{\alpha \beta} f_{i\beta}$ is the $\Gamma_3$ pseudospin and $\vec{\tau}$ is a vector of Pauli matrices. Within the present mean-field theory, the decouplings into the two different quadrupolar moments $Q_\mu \propto 3z^2 - r^2$ and $Q_\nu \propto x^2 - y^2$ are degenerate at $\vec{B}=0$. In finite field $B \parallel z$, $Q_\mu$ is favored, and so we use it here to examine the state which competes most strongly with hastatic order. We therefore choose the specific mean-field decoupling of this interaction to yield an AFQ order parameter $\mathcal{Q}$ along the $z$ axis, with the resultant mean-field Hamiltonian $$\begin{aligned}
\mathcal{H}_{\mathcal{Q},MF} = &-\mathcal{Q} \sum_{k} \Big[ (f^\dagger_{k+Q,+} f_{k+}-f^\dagger_{k+Q,-} f_{k-}) + H. c. \Big] \notag \\
&+ \frac{ 3\mathcal{N} \mathcal{Q}^2}{J_Q}.\end{aligned}$$ Experiments on PrIr$_2$Zn$_{20}$ have instead detected $Q_\nu$-type AFQ order, but we expect the qualitative features of our calculated phase diagrams to remain the same for that choice as well. Adding this $\mathcal{H}_{\mathcal{Q},MF}$ to our FH mean-field Hamiltonian, we obtain the free energy and solve the saddle point equations, $\partial \mathcal{F}/\partial \lambda = \partial \mathcal{F}/\partial \mu = \partial \mathcal{F}/\partial b = \partial \mathcal{F}/\partial \mathcal{Q} =0$; note that the AFQ mean-field decoupling is not justified within the $SU(N)$ large-$N$ limit, and this mean-field theory is therefore not controlled.
At zero and small fields, the Pr(Ir,Rh)$_2$Zn$_{20}$ compounds order quadrupolarly, giving way to heavy Fermi liquid behavior at intermediate fields [@Onimaru2016b; @Yoshida2017]. We qualitatively reproduce this behavior in our self-consistently calculated mean-field phase diagrams for the FH and AFQ ansatzes in magnetic field, shown in Fig. \[fig:PD\] for three different sets of $J_Q$ and $n_{c,0}$ parameters that capture three possible behaviors. Near half-filling, for small $J_Q$, there is a large coexistence region with FH order at low fields \[Fig. \[fig:PD\](a)\]. By increasing $J_Q$ and moving away from half-filling, the coexistence region can be made to disappear, replaced by a direct transition between the FH and AFQ phases \[Fig. \[fig:PD\](b,c)\], which is reminiscent of the experimental result, with FH order explaining the heavy Fermi liquid region in intermediate fields.
![Three example mean-field phase diagrams for FH (red) and AFQ (blue) orders in magnetic field, for three different strengths of quadrupolar exchange coupling: (a) $J_Q\!=\!0.38t$, (b) $J_Q\!=\!1.6t$, (c) $J_Q\!=\!2t$. The FH transition temperature $T_K$ initially rises with magnetic field and is then suppressed, ultimately becoming a first-order transition in field. Other parameters for (a): $t\!=\!1$, $V\!=\!0.8$, $\Delta E\!=\!5.5$, $\Delta\!=\!4.8$, $n_{c,0}\!=\!1.9$; other parameters for (b),(c): $t\!=\!1$, $V\!=\!0.9$, $\Delta E\!=\!5.5$, $\Delta\!=\!20$, $n_{c,0}\!=\!1.6$. \[fig:PD\]](plot_allPDvert.pdf "fig:")-0.2cm
Due to the linear coupling of the magnetic field to the FH moments, we expect magnetic field to initially favor FH order, leading to increased hybridization and higher $T_K$. This increase is seen in all three subplots of Fig. \[fig:PD\], most noticeably in part (b). However, as stronger fields split the $\Gamma_3$ doublet, the Kondo screening and thus FH order are eventually destroyed, as seen in the mean-field calculation. AFQ order is also suppressed as a function of $B$, leading to a first-order transition between it and FH order at a critical field $B_c$. Note that while the mean-field theory always finds a first order transition between FH and PQ states at low temperatures and high fields, we do not expect this to necessarily hold beyond the mean-field level.
Landau theory \[sec:landau\]
============================
Large-$N$ theories have been extremely useful in understanding Kondo physics at low temperatures, where they work well [@hewson]. However, these theories are known to have issues near the Kondo temperature – most notably in predicting a spurious phase transition in the single-channel Kondo effect. As the single-channel “order parameter” $\langle b\rangle$ breaks only the emergent gauge symmetry, Elitzur’s theorem prevents it from ordering; indeed, $1/N$ corrections show that the bosonic expectation value is not long-range ordered and wash out the phase transition [@Read1985]. In the two-channel case, our bosonic order parameter $\langle b_\mu \rangle$ breaks a real symmetry in addition to the gauge symmetry, and so the transition must survive.
The key question here is exactly how the hastatic order parameter breaks the symmetry. Is the order parameter really spinorial, that is, described by a spinor (double group) representation? Or is it vectorial, like most known order parameters? We know the answer in the large-$N$ limit, where our mean-field theory is strictly correct. The infinite-$N$ order parameter is spinorial, in the $\Gamma_7$ double group irreducible representation. In this limit, the order parameter does not couple linearly to the magnetic field, which has a $\Gamma_4$ symmetry, and there is always a phase transition into the FH phase at $T_K$, even in finite field. Note that the usual Kondo “order parameter” also survives in the large-$N$ limit, but is washed out with $1/N$ corrections. There are strong reasons to believe that the $\Gamma_7$ nature of the order parameter does not survive the $1/N$ corrections that wash out $\langle b\rangle$ in the single channel case. The first is that the conjugate field to the FH order parameter along $\hat z$ is the breaking of the channel symmetry $\delta J = \frac{J_\uparrow - J_\downarrow}{2}$, where $J_\sigma$ is the Kondo coupling in each conduction electron channel. As soon as the channel symmetry is broken, a heavy Fermi liquid develops in the strongest channel below a crossover scale $T_K$; this heavy Fermi liquid *is* FH order. We can see that $\delta J$ is the conjugate field to the composite order parameter by rewriting the two-channel Kondo coupling in terms of $\Psi_z = \langle \tilde{\sigma}c{^{\dagger }}_{\sigma}\vec{\tau}c_\sigma \cdot \vec{\tau}_f \rangle$ [@Komijani2018]: $$H_K = J \sum_\sigma c{^{\dagger }}_{\sigma}\vec{\tau}c_\sigma \cdot \vec{\tau}_f + \delta J \sum_\sigma \tilde{\sigma} c{^{\dagger }}_{\sigma}\vec{\tau}c_\sigma \cdot \vec{\tau}_f.$$ Of course, we can also write down conjugate fields that couple to composite orders in the $x$ and $y$ directions, for $\vec{\Psi} = \langle c{^{\dagger }}\vec{\sigma} [\vec{\tau} \cdot \vec{\tau}_f] c\rangle$ and in fact, overall the composite order parameter forms an $SO(5)$ order parameter that includes composite pairing, $\Phi = \langle c{^{\dagger }}i\sigma_2 [(i\tau_2 \vec{\tau})\cdot \vec{\tau}_f]c{^{\dagger }}\rangle$ and $\Phi{^{\dagger }}$ order parameters [@Hoshino2014; @Flint2008], although this $SO(5)$ symmetry is broken in the cubic Anderson model. In cubic symmetry, $\vec{\Psi}$ belongs to the $\Gamma_{4}$ representation, and so the conjugate field $\vec{\delta J}$ will also have $\Gamma_{4}$ symmetry. The Landau order parameter at the phase transition must therefore be the $\Gamma_{4}$ composite order parameter, $\vec{\Psi}$ which behaves like $\langle b{^{\dagger }}\vec{\sigma} b\rangle$, and not the $\Gamma_7$ spinorial order parameter $\langle b_\mu \rangle$. We can also understand the vector nature of the order parameter by appealing to the Higgs mechanism in the Kondo effect that locks together the internal and external gauge fields and gives charge to the composite fermions. For the single-channel Kondo effect, the phase of the hybridization plays the role of the Goldstone boson, and is absorbed to make the difference between internal and external gauge fields heavy. 
For the two-channel Kondo effect, we have an $SU(2)$ spinor, $$b_j = |b_j|\mathrm{e}^{i\chi_j}\left( \begin{array}{c} \cos \theta_j \mathrm{e}^{i\phi_j/2} \\ \sin \theta_j \mathrm{e}^{-i\phi_j/2}\end{array}\right).$$ Here, the Higgs mechanism absorbs the overall phase $\chi_j$, leaving an $SO(3)$ order parameter defined by two angles (and the overall amplitude)[@piers].
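A quick numerical check that only the $SO(3)$ content of the spinor survives is to verify that the composite vector $b^\dagger\vec\sigma b$ is independent of the absorbed overall phase $\chi_j$; the short, purely illustrative sketch below does exactly that for the parametrization above.

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

def spinor(amp, chi, theta, phi):
    """|b| e^{i chi} (cos(theta) e^{i phi/2}, sin(theta) e^{-i phi/2})^T"""
    return amp * np.exp(1j*chi) * np.array([np.cos(theta)*np.exp(1j*phi/2),
                                            np.sin(theta)*np.exp(-1j*phi/2)])

def composite(b):
    """The vector b^dagger sigma b defining the SO(3) (vectorial) order parameter."""
    return np.real([b.conj() @ s @ b for s in pauli])

b1 = spinor(0.7, 0.0, 0.4, 1.1)
b2 = spinor(0.7, 2.5, 0.4, 1.1)      # same spinor, different overall (Higgsed) phase chi
print(composite(b1), composite(b2))  # identical: chi drops out of b^dag sigma b
```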
The correct FH order parameter is then $\vec{\Psi}$, which has $\Gamma_4$ symmetry and couples linearly to the magnetic field, like a ferromagnet. There are, however, several key differences between FH and ferromagnetic orders that are important to keep in mind:
- The composite order parameter $\vec{\Psi}$ and the mixed-valent moment $\langle b{^{\dagger }}\vec{\sigma}b\rangle$ have the same symmetry, but will typically have very different magnitudes. In the Kondo limit of integer valence, $\langle b\rangle = 0$, and yet $|\Psi|$ will still be large. Therefore, the coupling of $\vec{\Psi}$ to the external field $\vec{h}$ will typically be quite small. In this sense, very close to the phase transition FH order will look like a ferromagnet, with a diverging susceptibility, while further from the transition it will look like the spinorial order parameter, with, for example, a magnetization that grows linearly rather than with the square-root onset of a ferromagnet.
- The composite order parameter does not commute with the Hamiltonian, although the associated Goldstone modes have been shown to remain quadratic [@Hoshino2015; @piers].
- The development of a hybridization gap is associated with hastatic order, with the gap magnitude growing as $\sqrt{|\Psi|}$. Additionally, the originally neutral pseudofermions $f_\alpha$ pick up electric charge via a Higgs mechanism and become part of the Fermi surface, and so a discrete change in Fermi surface volume is expected across the FH transition.
If we want to understand the behavior of FH order near the phase transition, we need to examine the Landau theory of the composite order parameter $\vec{\Psi}$, keeping in mind the weak linear coupling to external field. The Landau theory should really be thought of as capturing how $1/N$ corrections will modify the behavior near the transition, while the low temperature physics is still expected to be well-described by our mean-field theory.
To explore these consequences in detail, and the effect on the thermodynamic responses, we consider a simple Landau theory. As AFQ order is a natural competitor for FH order, we will compare the behavior of FH and AFQ orders. As the theory is complex, we introduce it in stages, but our goal is a full theory of the interplay of FH and AFQ orders in both magnetic field and strain. Here, we neglect the possible octupolar order of the $\Gamma_3$ doublet; a Landau theory of quadrupolar and octupolar orders, and their interaction with external field and strain was recently developed [@Lee2018; @Patri2018].
Ferrohastatic order
-------------------
The allowed terms in a Landau theory are found by considering products of the representations of the various order parameters, and taking all the invariant ($\Gamma_1$) terms of each order. The local site symmetry of the Pr atoms is known to be $T_d$ [@Niemann1995], and so the appropriate group for the composite order parameter $\vec{\Psi}$ is $T_d \times \tau$, where $\tau$ is time-reversal symmetry. $\vec{\Psi}$ is described by the same $\Gamma_{4u}$ irreducible representation as the external magnetic field $\vec{h}$, and so a magnetic field is expected to smear the phase transition into a crossover. Here we use $u/g$ to indicate odd/even behavior under time-reversal. We first find all quadratic terms in both $\vec{\Psi}$ and $\vec{h}$: $$\begin{aligned}
\Gamma_{4u} \otimes \Gamma_{4u} & = \Gamma_{1g} \oplus \Gamma_{3g} \oplus \Gamma_{4g} \oplus \Gamma_{5g}\cr
& = |\Psi|^2 \oplus \overrightarrow{\Psi^2}_{\Gamma_3} \oplus \cdot \oplus \overrightarrow{\Psi^2}_{\Gamma_5}\cr
& \qquad \mathrm{or}\cr
& = |h|^2 \oplus \overrightarrow{h^2}_{\Gamma_3} \oplus \cdot \oplus \overrightarrow{h^2}_{\Gamma_5}\cr
& \qquad \mathrm{or}\cr
& = \vec{h} \cdot \vec{\Psi} \oplus \overrightarrow{h \Psi}_{\Gamma_3} \oplus \vec{h} \times \vec{\Psi} \oplus \overrightarrow{h \Psi}_{\Gamma_5}\end{aligned}$$ Here $\overrightarrow{\phi^2}_{\Gamma_3}= (\frac{1}{\sqrt{3}}\left[3\phi_z^2-|\phi|^2\right],\phi_x^2-\phi_y^2)$ and $\overrightarrow{\phi^2}_{\Gamma_5}=(\phi_x \phi_y,\phi_y \phi_z,\phi_z \phi_x)$, for $\phi = h$ or $\Psi$; the antisymmetric $\Gamma_{4g}$ combination $\vec{\phi}\times\vec{\phi}$ vanishes identically. We also have the mixed terms $\overrightarrow{h \Psi}_{\Gamma_3} = (\frac{1}{\sqrt{3}}\left[3h_z \Psi_z-\vec{h}\cdot\vec{\Psi}\right],h_x\Psi_x-h_y \Psi_y)$ and $\overrightarrow{h \Psi}_{\Gamma_5} =(h_x \Psi_y + h_y \Psi_x,h_y \Psi_z + h_z \Psi_y,h_z \Psi_x +h_x \Psi_z)$. These terms can be used to construct the allowed fourth order terms with $\Gamma_1$ symmetry; there are no allowed third order terms due to the time-reversal symmetry breaking nature of the order parameter. We thus construct the Landau theory, $$F_\Psi = \alpha_\Psi |\Psi|^2 + \frac{u_\Psi}{2} |\Psi|^4 - \lambda \vec{h}\cdot \vec{\Psi} + u_{h\Psi}|h|^2|\Psi|^2.$$ Here, we neglect several fourth order terms: there are terms that pin the hastatic spinor, $\overrightarrow{\Psi^2}_{\Gamma_3} \cdot \overrightarrow{\Psi^2}_{\Gamma_3}$ and $\overrightarrow{\Psi^2}_{\Gamma_5} \cdot \overrightarrow{\Psi^2}_{\Gamma_5}$ – either to the $[111]$ or $[100]$ directions, respectively. Microscopic calculations show these pinning terms to be quite weak, and the $\lambda$ magnetic field coupling will quickly overwhelm them to pin the hastatic order parameter along the external field direction. We also drop several second order terms in both $h$ and $\Psi$ for the same reason. Recall that the composite order parameter $\vec{\Psi}$ is nonzero even in the Kondo limit where the mixed valent moment $b{^{\dagger }}\vec{\sigma} b$ vanishes, and so the coupling to external field will be extremely small when the materials are near integral valence, as is likely the case for the Pr-based materials.

From this Landau theory, we can already see that FH order is a type of ferromagnetism, but with a peculiarly weak linear coupling to the external field. We expect a divergence in the magnetic susceptibility at $T_K$ in zero field, although the coefficient \[$\lambda^2/(2 \alpha_\Psi)$\] is small. For finite fields, the susceptibility is nearly constant above the transition and then develops a linear component, $d\chi/dT \approx 2 u_{h \Psi} \alpha_\Psi/u_\Psi$, below the transition; this relation becomes exact if $\lambda = 0$. For $\lambda = 0$, the magnetic moment grows linearly in temperature below $T_K$, while finite $\lambda$ leads to the typical square root behavior in zero field. For small $\lambda$ and finite $h$, the linear behavior is still evident slightly away from the transition and the field smears out the kink. All signatures of the phase transition, like the specific heat jump, will be similarly smeared out, governed by $\lambda$.
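The statement that this relation becomes exact at $\lambda = 0$ can be checked symbolically. The short sketch below, assuming the quadratic coefficient carries the temperature dependence $\alpha_\Psi (T - T_K)$ (as in the explicit parameter set used later), minimizes $F_\Psi$ over $|\Psi|^2$ and differentiates the resulting magnetization; the variable names are illustrative.

```python
import sympy as sp

T, TK, aP, uP, uhP, h = sp.symbols('T T_K alpha_Psi u_Psi u_hPsi h')
X = sp.symbols('X')   # X stands for |Psi|^2 in the ordered phase

# F_Psi at lambda = 0, with alpha_Psi -> alpha_Psi*(T - T_K)
F = aP*(T - TK)*X + uP/2*X**2 + uhP*h**2*X

Xmin = sp.solve(sp.diff(F, X), X)[0]   # stationary |Psi|^2
Fmin = F.subs(X, Xmin)

m = -sp.diff(Fmin, h)                  # magnetization below the transition
chi = sp.diff(m, h)                    # susceptibility
print(sp.simplify(sp.diff(chi, T)))    # -> 2*alpha_Psi*u_hPsi/u_Psi
```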
Ferrohastatic order and coupling to strain
------------------------------------------
As the $\Gamma_3$ doublet has quadrupolar components $O_2^0$ and $O_2^2$, including coupling to strain is extremely important. There are five strain components, $$\begin{aligned}
\overrightarrow{\epsilon}_{\Gamma_{3g}}& = (\epsilon_\mu, \epsilon_\nu) = \left(\frac{1}{\sqrt{3}}[2\epsilon_{zz} - \epsilon_{xx}-\epsilon_{yy}], \epsilon_{xx} - \epsilon_{yy}\right)\cr
\overrightarrow{\epsilon}_{\Gamma_{5g}} & = (\epsilon_{xy},\epsilon_{yz},\epsilon_{zx}).\end{aligned}$$ The first two components, $\overrightarrow{\epsilon}_{\Gamma_{3g}}$, couple linearly to the possible ferroquadrupolar (FQ) orderings of the $\Gamma_3$ doublet, $\vec{R} = (R_\mu, R_\nu)$. Here $R_\mu = \langle O_2^0 \rangle$ and $R_\nu = \langle O_2^2\rangle$ are the two possible FQ orders. The $\Gamma_5$ strain components couple to the $\Gamma_5$ combinations of $\Psi$ and $h$. However, $\overrightarrow{h\Psi}_{\Gamma_5}$ requires a component of $\vec{\Psi}$ perpendicular to $\vec{h}$, which is forbidden by the pinning of the hastatic spinor to the external field direction, and so we neglect these strain components entirely.
The elastic free energy for $\overrightarrow{\epsilon}_{\Gamma_{3g}}$ is, $$\begin{aligned}
F_{el} = \frac{c_{11}-c_{12}}{2} (\epsilon_\mu^2+\epsilon_\nu^2) - g_3 \vec{R} \cdot \overrightarrow{\epsilon}_{\Gamma_{3g}},\end{aligned}$$ where $c_{11}$ and $c_{12}$ are elastic coefficients, and we take $g_3 < 0$, as in PrIr$_2$Zn$_{20}$ [@Worl2018]. We can then integrate out the strain, $$\epsilon_{\mu,\nu} = \frac{g_3}{c_{11}-c_{12}} R_{\mu,\nu},$$ and work directly with the ferroquadrupolar order parameters. We can again use group theory to determine the symmetries of different combinations of $\vec{R}$: $$\begin{aligned}
\Gamma_{3g} \otimes \Gamma_{3g} & = \Gamma_{1g} \oplus \Gamma_{2u} \oplus \Gamma_{3g} \cr
& = |R|^2 \oplus \cdot \oplus \overrightarrow{R^2}_{\Gamma_3} \cr
&= |R|^2 \oplus \cdot \oplus (R_\nu^2-R_\mu^2,2 R_\mu R_\nu).\end{aligned}$$ Note that there is no $\Gamma_{2u}$ term – the original $\Gamma_3$ multiplets have $\Gamma_{3g}$ quadrupolar orders and $\Gamma_{2u}$ octupolar order, but this octupolar order cannot be constructed from the time-reversal invariant quantities here and must be treated independently, as has been done recently [@Patri2018].
The Landau theory for FQ order is, $$F_R = \alpha_R |R|^2 + \frac{u_R}{2} |R|^4 + v_R \vec{R} \cdot \overrightarrow{R^2}_{\Gamma_3} - \gamma_R \vec{R}\cdot \overrightarrow{h^2}_{\Gamma_3}$$ Here, we assume $\alpha_R > 0$ to forbid intrinsic FQ order; it will be induced by both FH and AFQ orders, as well as finite field. The third order clock term pins the FQ order parameter to the lattice. We know from single-ion physics that magnetic field favors FQ order via induced magnetic moments, and so $\gamma_R > 0$.
Finally, we can couple the two orders: $$F_{\Psi R} = \kappa_R \vec{R} \cdot \overrightarrow{h \Psi}_{\Gamma_3}+ \nu_R \vec{R} \cdot \overrightarrow{\Psi^2}_{\Gamma_3} + u_{\Psi R}|\Psi|^2|R|^2.$$ Note that the $\nu_R$ term would make FQ order develop immediately with FH order; however, $\nu_{R}$ vanishes in our microscopic theory due to the nodal structure of the hybridization. $\kappa_R$ induces FQ order whenever both $\Psi$ and $h$ are nonzero, and this term is nonzero in the microscopic theory, although likely to be small, just as $\lambda$ is. The thermal expansion $\alpha$ and magnetostriction $\lambda$ (not to be confused with the field coupling $\lambda$ above) are defined in terms of the fractional change in length $\Delta L/L$ along some direction, which is in turn proportional to strain, $$\begin{aligned}
\frac{\Delta L}{L} \biggr \rvert_{z} & = \frac{1}{3} \epsilon_B + \frac{1}{\sqrt{3}} \epsilon_\mu\cr
\frac{\Delta L}{L} \biggr \rvert_{x} & = \frac{1}{3} \epsilon_B - \frac{1}{2\sqrt{3}} \epsilon_\mu + \frac{1}{2} \epsilon_\nu,\end{aligned}$$ where $\epsilon_B$ is the symmetric volume strain. For specificity, we consider the magnetic field to be along $\hat z$, and define, $$\begin{aligned}
\alpha_{\parallel} & = \frac{1}{L}\frac{d\Delta L}{dT} \biggr \rvert_{z}; \quad \alpha_{\perp} = \frac{1}{L}\frac{d\Delta L}{dT} \biggr \rvert_{x} \cr
\lambda_{\parallel} & = \frac{1}{L}\frac{d\Delta L}{dh} \biggr \rvert_{z}; \quad \lambda_{\perp} = \frac{1}{L}\frac{d\Delta L}{dh} \biggr \rvert_{x}.\end{aligned}$$
None of the order parameters couple linearly to the bulk strain $\epsilon_B$, and since we consider $R_\mu = \langle O_2^0 \rangle$ FQ order (so $\epsilon_\nu = 0$), motivated by experiments on PrIr$_2$Zn$_{20}$ [@Worl2018], the relationships $\alpha_\parallel + 2 \alpha_\perp = 0$ and $\lambda_\parallel + 2 \lambda_\perp = 0$ always hold. Note that these are likely violated beyond Landau theory, where changes in $f$-electron valence typically result in volume changes. Here, we focus on the parallel components, with the perpendicular components understood to be given by these relations.
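These relations follow directly from the length-change expressions above; a short symbolic check (purely illustrative) is:

```python
import sympy as sp

eps_B, eps_mu, eps_nu = sp.symbols('epsilon_B epsilon_mu epsilon_nu')

# fractional length changes along z and x from the expressions above
dL_z = eps_B/3 + eps_mu/sp.sqrt(3)
dL_x = eps_B/3 - eps_mu/(2*sp.sqrt(3)) + eps_nu/2

# with eps_B = eps_nu = 0 (no bulk strain, R_mu-type FQ order) the combination vanishes,
# which is why alpha_par + 2 alpha_perp = 0 and lambda_par + 2 lambda_perp = 0
print(sp.simplify((dL_z + 2*dL_x).subs({eps_B: 0, eps_nu: 0})))  # -> 0
```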
The coupling of FH and FQ orders leads to negative jumps in both the thermal expansion and magnetostriction at the transition into hastatic order if either $h = 0$ or the couplings $\lambda$ and $\kappa_R$ are zero. If $\lambda$ and $\kappa_R$ are nonzero and $h$ is finite, as is expected experimentally, the jumps are slightly smeared. We show some examples in Fig. \[fig:Landau\], where we consider both AFQ and FH orders.
![Signatures of ferrohastatic and antiferroquadrupolar orders. (a) Example phase diagram in temperature (T) and magnetic field along $\hat z$ (B). $T_Q$ indicates the phase transition into AFQ order, while $T^*$ is the crossover scale for FH order; $T^*$ becomes a phase transition only for $B = 0$. (b) Thermal expansion ($\alpha_\parallel$) as a function of temperature for three different fields. There is a sharp negative jump upon entering the AFQ phase, and a smeared out jump upon cooling through $T^*$ which is negative for $T^*(B) > T_Q(B)$ and positive otherwise. Note that the Landau theory captures $\alpha_\parallel$ only near the transition - far from the transition, the microscopic physics will lead to non-monotonic behavior and eventually $\alpha_\parallel$ goes to zero as $T {\rightarrow}0$. (c) Magnetostriction ($\lambda_\parallel$) as a function of magnetic field for three different temperatures. There is a negative jump upon exiting the AFQ order that grows with decreasing temperature, but the signature at $B^*$ is completely smeared out. Parameters are given in the text. \[fig:Landau\]](landau.pdf)
Comparison with antiferroquadrupolar order
------------------------------------------
A similar analysis for the $\Gamma_3$ AFQ order in magnetic field yields [@Lee2018], $$\begin{aligned}
F_Q & = \alpha_Q (T-T_Q)|Q|^2 + \frac{u_Q}{2} |Q|^4 + w_Q |Q|^6\cr
& +v_Q (-Q_\nu^3+3 Q_\mu^2 Q_\nu)^2+u_{Qh} |Q|^2|h|^2\cr
& + v_{Qh} \overrightarrow{Q^2}_{\Gamma_3} \cdot \overrightarrow{h^2}_{\Gamma_3} \label{eq:landauFS1}\end{aligned}$$ where $Q_\mu = \langle O^0_2 \rangle_A - \langle O^0_2\rangle_B$ and $Q_\nu = \langle O^2_2 \rangle_A - \langle O^2_2 \rangle_B $ are the AFQ order parameters comprising the $\Gamma_3$ doublet, and we keep only the lowest-order symmetry-breaking terms. $A$ and $B$ are the two sublattices of the diamond structure, which describes the arrangement of Pr ions. The sixth-order $v_Q$ term is the square of the third-order-in-$Q$ term with $\Gamma_2$ symmetry. The coupling to FQ order is given by $$\begin{aligned}
F_{QR} & = \rho \vec{R} \cdot \overrightarrow{Q^2}_{\Gamma_3} +u_{QR} |R|^2|Q|^2\label{eq:landauFS2}\end{aligned}$$ $\overrightarrow{Q^2}_{\Gamma_3}$ is defined identically to $\overrightarrow{R^2}_{\Gamma_3}$, and couples linearly to the FQ order parameter $\vec{R}$. We neglect higher order terms as subdominant.
Here, $\rho$ leads to jumps in thermal expansion and magnetostriction that can take either sign, although $\rho < 0$ is indicated by the experimental results, wherein $R_\mu$ is induced by $Q_\nu$ [@Iwasa2017; @Worl2018].
Coupling ferrohastatic and antiferroquadrupolar orders
------------------------------------------------------
For completeness, the interactions between FH and AFQ order are captured in, $$F_{\Psi Q} = u_{\Psi Q}|Q|^2|\Psi|^2 + \kappa_Q \overrightarrow{h \Psi}_{\Gamma_3} \cdot \overrightarrow{Q^2}_{\Gamma_3} + \nu_Q \overrightarrow{\Psi^2}_{\Gamma_3} \cdot \overrightarrow{Q^2}_{\Gamma_3}$$ Note that $\nu_Q$ also vanishes in our microscopic theory. AFQ and FH orders suppress one another, and can either coexist (for sufficiently small $u_{\Psi Q}$) or phase separate via a first order transition (for larger $u_{\Psi Q}$). We find both cases in our microscopic theory above, for different values of the AFQ coupling, and show a Landau theory example in the next section.
Example phase diagram and thermodynamics
----------------------------------------
In Fig. \[fig:Landau\] (a), we show one possible phase diagram in temperature and field. Here, we choose our Landau parameters to roughly reproduce the experimental phase diagram. The AFQ order parameter is $Q_\nu$, while the FQ order parameter (not shown) is $R_\mu$, and the FH order parameter, $\vec{\Psi}$ points along $\hat z$. We similarly choose the magnetic field $\vec{h} = B \hat z$. The Landau parameters are: $$\begin{aligned}
T_K & = 1.1, \alpha_\Psi = 1, u_\Psi = 7, \lambda = 0.05, u_{h\Psi} = 1\cr
\alpha_R & = 1, u_R = 3, v_R = 0, \gamma_R = 1 \cr
\kappa_R & = -0.05, \nu_R = -0.25, u_{\Psi R}=0.5 \cr
T_Q & = 1.2, \alpha_Q = 1, u_Q = 3.5, v_Q = 0, u_{Qh} = 4\cr
\rho & = -0.5, u_{QR} = 0, u_{\Psi Q} =6, \kappa_Q = 0, \nu_Q = 0\end{aligned}$$ Note that $\lambda$ and $\kappa_R$ are nonzero but small, to reflect the smallness of the moment relative to the magnitude of the composite order parameter $\vec{\Psi}$ for nearly integral valence. For any finite $B$, the FH phase transition is smeared out by these parameters. FQ order only turns on via interactions with the other order parameters and the magnetic field. The parameters here were chosen to roughly reproduce the single-ion behavior of the magnetostriction in magnetic field. The signs of $\rho$ and $\nu_R$ are chosen to reproduce the negative jump in the thermal expansion seen in PrIr$_2$Zn$_{20}$ [@Worl2018]. $\nu_R$ is zero in our microscopic theory, but is generically allowed to be nonzero by symmetry; we take it to be small but negative to match the sign of the experimental thermal expansion jump. FH and AFQ orders have similar zero field transition temperatures (which requires fine-tuning, of course), and they strongly repel one another via $u_{\Psi Q}$. We otherwise set $\kappa_Q$ and $\nu_Q$ to zero for simplicity.
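For reference, a minimal numerical sketch of this Landau theory is given below. It keeps only the terms whose couplings are listed above (all omitted couplings set to zero), assumes the quadratic FH coefficient is $\alpha_\Psi(T-T_K)$ as for $F_Q$, restricts to $\vec{\Psi}\parallel\vec{h}\parallel\hat z$, $\vec R=(R_\mu,0)$ and $\vec Q=(0,Q_\nu)$ as in the text, and minimizes over $(\Psi,Q_\nu,R_\mu)$ at each $(T,B)$; the function names and minimizer settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Landau couplings from the list above (couplings not listed there are set to zero)
TK, aP, uP, lamP, uhP = 1.1, 1.0, 7.0, 0.05, 1.0
aR, uR, gR = 1.0, 3.0, 1.0
kR, nR, uPR = -0.05, -0.25, 0.5
TQ, aQ, uQ, uQh = 1.2, 1.0, 3.5, 4.0
rho, uPQ = -0.5, 6.0
fac = 2/np.sqrt(3)   # recurring factor from the Gamma_3 dot products with h, Psi || z

def F(x, T, B):
    Psi, Qnu, Rmu = x
    return (aP*(T - TK)*Psi**2 + uP/2*Psi**4 - lamP*B*Psi + uhP*B**2*Psi**2    # F_Psi
            + aR*Rmu**2 + uR/2*Rmu**4 - gR*fac*B**2*Rmu                        # F_R
            + kR*fac*B*Psi*Rmu + nR*fac*Psi**2*Rmu + uPR*Psi**2*Rmu**2         # F_{Psi R}
            + aQ*(T - TQ)*Qnu**2 + uQ/2*Qnu**4 + uQh*B**2*Qnu**2               # F_Q
            + rho*Rmu*Qnu**2                                                   # F_{QR}
            + uPQ*Qnu**2*Psi**2)                                               # F_{Psi Q}

def order_parameters(T, B):
    """Minimize over (Psi, Q_nu, R_mu), trying a few starts to handle competing minima."""
    best = None
    for x0 in ([0.01, 0.01, 0.0], [0.5, 0.01, 0.0], [0.01, 0.5, 0.0]):
        res = minimize(F, x0, args=(T, B), method='Nelder-Mead')
        if best is None or res.fun < best.fun:
            best = res
    return best.x

for T in (1.3, 1.0, 0.7):
    print(T, order_parameters(T, B=0.3))
```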
Signatures of ferrohastatic order \[sec:signatures\]
====================================================
Fundamentally, FH order is a heavy Fermi liquid with a spinorial hybridization that breaks the channel symmetry. As such, it has two types of signatures: heavy Fermi liquid behavior, in which half of the conduction electrons hybridize with the local moments and half remain unhybridized, and symmetry-breaking signatures, including magnetic moments and thermodynamic responses.
Heavy Fermi liquid behavior
---------------------------
In the simplest cases, FH order is a half-heavy Fermi liquid – one band of conduction electrons hybridizes and becomes heavy, while the other remains light. Along high symmetry lines, or for the simple case of $\eta_c = 1, \eta_V = 1$, this is true: one light band remains completely unmodified, while the other hybridizes and becomes heavy. In more generic cases, however, both bands become hybridized due to the strong spin-orbit coupling, although one is more strongly modified \[Fig. \[fig:dispersions\](a)\]. In the simple two-channel Kondo model, for example, the spin-up conduction electrons hybridize and form a heavy band, with a hybridization gap, while the spin-down conduction electrons remain unhybridized and ungapped to form a light band[@Zhang2018]. In the more realistic model considered here, the spin-orbit coupled hybridization means that the spin structure of the heavy band varies throughout the Brillouin zone, as shown in Fig. \[fig:dispersions\](b).
![Two example FH dispersions to illustrate the nature of the hybridization. (a) Example dispersion of heavy quasiparticles in which all conduction bands are hybridized, as seen from the $X--R$ cut, which is not a high symmetry line. (b) Example dispersion in which one conduction electron band always remains unhybridized; here, we also use the color scale to show the spin polarization of the heavy bands. The unhybridized $c$- and $f$-bands (dashed black lines) and hybridized bands (thick solid lines) are plotted along high symmetry lines in the cubic Brillouin zone, near $E_F=0$. The color indicates the projection of spin along the z-axis; note that $\langle S_z\rangle = 0$ merely implies that the spins lie in the $xy$-plane. Parameters for (a): $t\!=\!1$, $\mu\!=\!0$, $\eta_c\!=\!0.2$, $\lambda\!=\!0$, $V\!=\!\eta_v\!=\!0.1$, $b_\mu\!=\!(1,0)$; (b): $t\!\!=\!\!\eta_c\!\!=\!\!V\!\!=\!\!1$, $\eta_v \!\!=\!\! -\sfrac{1}{3}$, $\Delta E\!\!=\!\! 5.5$, $n_{c,0}\!\!=\!\!1.6$ \[$\lambda$, $b_\mu$, $\mu$ determined self-consistently for (b)\]. \[fig:dispersions\]](plot_bothdispersions.png "fig:")-0.2cm
The heavy band dominates the thermodynamic properties, and FH order has all the traditional signatures of heavy Fermi liquids, including a large Sommerfeld coefficient and $A T^2$ resistivity. The two bands will have very different effective masses, which can be probed by quantum oscillations. The resulting “half”-hybridization gap should be observed in angle-resolved photoemission spectroscopy (ARPES), scanning-tunneling microscopy (STM), and optical conductivity measurements. The optical conductivity sum-rule function, $n(\omega)=\frac{m}{e^2}\int_0^{\omega} \frac{d\omega'}{\pi} \sigma_1 (\omega')$, will have a kink at approximately half of the total weight, as a direct consequence of the half-hybridization gap pushing spectral weight of the initial Drude peak above the direct gap. As the $\Gamma_8$ form-factors mix spin and orbital angular momentum, the physical spin structure of both hybridized and unhybridized bands varies throughout momentum space, as shown in Fig. \[fig:dispersions\](b); this structure could be detected in spin-resolved ARPES.
In zero magnetic field, there are generically two types of hybridization gaps: symmetric gaps that depend only on the amplitude of the hastatic spinor, $\operatorname{Tr}[\hat{b}^\dagger \hat{b}]$, and symmetry-breaking gaps that depend on its direction, $\operatorname{Tr}[\hat{b}^\dagger \vec{\mu} \hat{b}]$, where $\vec{\mu}$ is a vector of channel Pauli matrices. Spin-orbit coupling implies that the symmetry-breaking gaps break both $SU(2)$ and cubic symmetries, which we show in Fig. \[fig:symmbreakgap\](b) for special parameters that allow an analytic form of the dispersion. Here, $\hat b \parallel \hat z$ and the gap has $\Gamma_3 +$ symmetry (see Appendix \[sec:gaps\] for details). Note that in mean-field theory, both gaps develop via a phase transition at $T_K$, but fluctuations will allow the non-symmetry-breaking gap to develop as a crossover at a higher $T^*$, along with the heavy Fermi liquid signatures.
![The symmetry breaking hybridization gap has a $\Gamma_3 +$ angular dependence for $\hat b \parallel \hat z$. Blue and red indicate positive and negative values, respectively.\[fig:symmbreakgap\]](plot_hybgap_gwave4.png "fig:")-0.2cm
Symmetry breaking signatures
----------------------------
Broken time-reversal symmetry manifests as magnetic moments for both the conduction electrons, $\vec{m}_c$, and the excited doublet, $\vec{m}_{b}$. These are strictly parallel to the hastatic spinor $\hat b$, and are calculated as, $$\begin{aligned}
m_c &= -\left.\frac{\partial \mathcal{F}}{\partial B_c}\right|_{B_c \rightarrow 0}; & m_b &= -\mu_B g_L \langle J_z \rangle_{\Gamma_7} |b|^2\end{aligned}$$ where $B_c$ is conjugate to $m_c$, coupling only to the conduction electrons. Both magnetizations turn on linearly below $T_K$, as shown in Fig. \[fig:magneticmoments\](a); this follows from the BCS-like temperature dependence of $\langle b \rangle$, since $|b| \propto \sqrt{T_K - T}$ near the transition implies $|b|^2 \propto T_K - T$, and the linear onset also persists in the Landau theory for small moments and finite $B$. The total magnitude is small, $O(T_K/D)$, with $D$ the conduction electron bandwidth. While the FH moments are quite small in zero field, they will grow fairly quickly in finite fields (Fig. \[fig:magneticmoments\](b)). The most straightforward way to positively identify FH order over the competing quadrupolar orders is to examine the field-dependence of the magnetic moments – in particular, their direction. FH moments will always be pinned to the field, and so all moments will align with the external field; by contrast, FQ order induces magnetic moments in field with a significant perpendicular component for some field directions [@Ito2011; @Sato2012; @Taniguchi2016]. Similarly, AFQ order generically induces FQ order and will have the same field dependence of the uniform moments. As quadrupolar order is difficult to detect directly, due to its weak coupling to the lattice [@Devishvili2008], measuring the in-field moments along several directions is essential to distinguish between hastatic and quadrupolar orders.
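In practice, $m_c$ can be obtained from a central finite difference of the fermionic free energy with respect to a field $B_c$ that enters only the conduction block (the $-B_c\,\sigma_3\otimes\alpha_0$ term of $\epsilon_{\boldsymbol{\mathbf{k}}}$); at the saddle point the mean-field parameters may be held fixed to first order. The self-contained sketch below freezes $(b,\lambda,\mu)$ at illustrative values rather than taking them from the self-consistent solution, and sets $\mu_B = 1$.

```python
import numpy as np

# Frozen mean-field values (illustrative only; in practice b, lambda, mu come from
# the self-consistent solution at the chosen temperature), with mu_B = 1
t, eta_c, V, eta_v = 1.0, 1.0, 0.8, 1.0
b, lam, mu, T = 0.55, 0.3, -1.0, 0.05
s0, s3 = np.eye(2), np.diag([1.0, -1.0])
a0, a1, a3 = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0])

def Hk(kx, ky, kz, Bc):
    cx, cy, cz = np.cos([kx, ky, kz])
    sx, sy, sz = np.sin([kx, ky, kz])
    sp, sm = sx + 1j*sy, sx - 1j*sy
    eps = (np.kron(s0, -t*(1+eta_c)*(cx+cy+cz)*a0
                       - np.sqrt(3)/2*t*(1-eta_c)*(cx-cy)*a1
                       + t/2*(1-eta_c)*(cx+cy-2*cz)*a3)
           - Bc*np.kron(s3, a0))                       # B_c couples only to conduction spins
    Vz = -V*b*np.array([[-0.5j*(1+3*eta_v)*sp, 0.5j*np.sqrt(3)*(eta_v-1)*sm, -2j*sz, 0],
                        [0.5j*np.sqrt(3)*(eta_v-1)*sm, -0.5j*(3+eta_v)*sp, 0, -2j*eta_v*sz]])
    return np.block([[eps + mu*np.eye(4), Vz.conj().T], [Vz, lam*a0]])

def Fc(Bc, Nk=8):
    """Fermionic free energy per site as a function of the conduction-only field B_c."""
    ks = 2*np.pi*np.arange(Nk)/Nk
    E = np.concatenate([np.linalg.eigvalsh(Hk(kx, ky, kz, Bc))
                        for kx in ks for ky in ks for kz in ks])
    return -T*np.sum(np.logaddexp(0.0, -E/T))/Nk**3

h = 1e-3
print("m_c =", -(Fc(h) - Fc(-h))/(2*h))   # central difference at B_c -> 0
```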
![(a) Behavior of conduction electron (red) and $\Gamma_7$ (blue) moments with temperature; note that each turns on linearly in temperature, in contrast to ferromagnetic order. (b) Conduction electron (red), 4f$^3$ $\Gamma_7$ (blue) and 4f$^2$ $\Gamma_3$ (orange) moments as a function of field; in FH order, all moments are strictly parallel to the applied field. Parameters are $t=1$, $\eta_c=1$, $V=0.8$, $\eta_V=1$, $n_{c,0}=1.6$, $\Delta E=5.5$, with $B = 0$ for (a) and $T=0.005$ for (b). Note that the ground state mixed valency for these parameters is $\langle b \rangle^2 \approx 0.3$ at $B=0$, and the real materials likely have significantly smaller zero field moments. \[fig:magneticmoments\]](plot_magtwopanel2.pdf)
Thermodynamic signatures
------------------------
Broken time-reversal symmetry is also apparent in the development of a finite magnetostriction in the FH state. For the hastatic spinor aligned along the $z$ axis, the corresponding susceptibility, $\chi_{ms} \equiv \partial^2 \mathcal{F}/\partial B_z \partial \epsilon_\mu \propto \lambda_\parallel$, is nonzero. Susceptibilities involving $B$ and strain derivatives along $x,y$ vanish, so we do expect a small zero field volume magnetostriction that increases with decreasing temperature in the FH state. A zero field magnetostriction has been observed in PrV$_2$Al$_{20}$ [@Patri2018] preferentially along the $[111]$ direction, as one would expect for octupolar order; the hastatic magnetostriction would be relatively independent of the direction of field, as long as field and strain components are aligned. As we believe FH might explain the heavy Fermi liquids in Pr(Ir,Rh)$_2$Zn$_{20}$ at *finite* fields, this signature is not practical, as the transition will be smeared out.
The magnetostriction and thermal expansion expected for FH order can also be calculated more generally within the microscopic mean-field model. While we expect that the behavior near the transition will be modified as indicated by the Landau theory, the microscopic calculation allows us to access the behavior away from the phase transition. Fig. \[fig:MSandTE\](a) shows the magnetostriction at fixed temperature $T=1.2T_{K,B=0}$ as a function of field using the parameters of Fig. \[fig:PD\](b). The self-consistently calculated result exhibits jumps at the transitions into and out of the FH phase, but otherwise mostly follows the single ion physics. Fig. \[fig:MSandTE\](b) compares the thermal expansion calculated in the FH and PQ phases (using a different set of parameters than the magnetostriction calculation). As the temperature is lowered, the thermal expansion exhibits a sharp downward jump upon entering the FH phase, followed by a superlinear rise. This jump will again be somewhat smeared out, looking similar to the downturns seen in the Landau theory. Similar behavior has been observed in experiment [@Worl2018], with a dip followed by a steep rise. The peak in $\alpha_\parallel$ is much sharper and shifted to low temperatures compared to the corresponding result for the PQ phase, which can explain why the experiments have only measured an increasing $\alpha_\parallel$ as the temperature is reduced, having not reached low enough temperatures to see the inevitable downturn. Note that the valence change associated with hastatic order will also have a small contribution to the volume magnetostriction $\lambda = \lambda_{\parallel} + 2 \lambda_{\perp}$, in addition to the symmetry breaking contribution discussed above. Recent magnetostriction measurements on PrIr$_2$Zn$_{20}$ suggest relatively small changes in valence as a function of field, in the heavy Fermi liquid region [@Worl2018], consistent with the relatively flat $T_K$ seen in Fig. \[fig:PD\](c).
![Mean-field calculations of magnetostriction and thermal expansion. (a) Magnetostriction versus field for the self-consistent mean-field solution (green) and for the paraquadrupolar state (dashed orange); this calculation was done at a relatively large fraction of $T/T_K(B)$. Between $B_z \approx 0.4 B_c$ and $B_z=B_c$ the system is FH. There are jumps in $\lambda_{\parallel}$ when entering the phase, which show a clear increase with increasing field; these are expected to be smeared out by $1/N$ corrections. (b) Thermal expansion versus temperature for the FH (blue) and paraquadrupolar (dashed orange) phases at intermediate fields. The jump at $T=T_K$ is a signature of the onset of FH order, while the narrow peak at low $T$ contrasts with the broader one of the PQ state. Again, the jump is expected to be smeared out by $1/N$ corrections. Parameters for (a): $t=1$, $\eta_c=1$, $V=0.9$, $\eta_V=1$, $\Delta=20$, $n_{c,0}=1.6$, $T=0.168$; (b): $t=1$, $\eta_c=1$, $V=0.8$, $\eta_V=1$, $\Delta=10$, $n_{c,0}=1.9$, $B_z=0.8$. The vertical axes of (a) and (b) are scaled such that the minimum value of the calculated PQ magnetostriction in (a) roughly matches the minimum experimentally determined value at the lowest measured temperature [@Worl2018]. \[fig:MSandTE\]](plot_magstrict_thermexpanv2.pdf)
The magnetic field phase diagrams of PrT$_2$Zn$_{20}$ (T=Ir,Rh), with their intermediate field heavy Fermi liquid regions, are consistent with our model. PrIr$_2$Zn$_{20}$ orders antiferroquadrupolarly (with $O_2^2$-type moments) for $B = 0$ [@Iwasa2017], but has a finite field region between 4 and 5 T for $B \parallel [100]$ with enhanced $C/T$ and $A$ [@Onimaru2016b], while PrRh$_2$Zn$_{20}$ has a similar heavy Fermi liquid region between 3.5 and 6.7 T for $B \parallel [100]$ [@Yoshida2017]. Note that an earlier review suggested that these regions could be composite order [@Onimaru2016a]; here we propose specifically that these are FH, justified within a microscopic model with concrete predictions. Measuring the magnetic moments for multiple field directions is the best way to differentiate FH and quadrupolar orders. FH order will additionally exhibit a half-hybridization gap in optical conductivity or STM measurements, and Raman measurements of the symmetry breaking hybridization gap are another intriguing possibility. PrV$_2$Al$_{20}$ also exhibits an intermediate in-field region [@Shimura2015], but more experiments are needed. Other $\Gamma_3$ materials, like PrInAg$_2$ [@Yatskar1996] and PrPb$_3$ [@Sato2010] with its high field phases, also merit further study.
Conclusions \[sec:conclusion\]
==============================
To summarize, we have investigated ferrohastatic order in cubic systems via a realistic two-channel Anderson lattice model, in combination with a phenomenological Landau theory that accounts for the effect of fluctuations. The development of a heavy Fermi liquid necessarily breaks channel symmetry, including time-reversal and spin rotation symmetries. For FH order, this heavy Fermi liquid includes spin-textured dispersions, symmetry-breaking hybridization gaps, and small magnetic moments for both the conduction electrons and excited $f$-states. Several materials may realize FH order in finite magnetic field \[Pr$T_2$Zn$_{20}$ (T=Ir, Rh)\], and it is also a possible candidate for PrTi$_2$Al$_{20}$ once the FQ order is suppressed under pressure.
The nature of two-channel Kondo lattice superconductivity is an open question; thus far research has focused on composite pairing [@Flint2008; @Hoshino2014; @Kusunose2016]. Quadrupolarly mediated superconductivity arising out of the FH state leads to Cooper pairs that are orbital singlets and spin triplets, at least in the resonating valence bond limit where the orbital singlets first form among the spinless $f$-“electrons”, and are transmitted to the spinful conduction electrons via the hastatic spinor [@Andrei1989]. The resulting triplet state is reminiscent of the $A_1$ phase of He-3, due to the asymmetry between $\uparrow \uparrow$ and $\downarrow \downarrow$ pairs[@Vollhardt2013]. Further exploration of AFH order and superconductivity in the hastatic state is left for future work.
We thank Piers Coleman, Premala Chandra, Peter Orth, and Arun Paramekanti for helpful discussions. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0015891. R.F. acknowledges the hospitality of the Aspen Center for Physics, supported by National Science Foundation Grant No. PHYS-1066293.
Spin-orbit-coupled Hybridization \[sec:slaterkoster\]
=====================================================
Here we give further details regarding the spin-orbit coupled hybridization in our model for FH order. As mentioned in the main text, the valence fluctuation term $$\begin{aligned}
H_{VF} = V \sum_{j\mu\alpha} \Big[ \tilde{\mu} |j \Gamma_3, \alpha \rangle \langle j \Gamma_7, -\mu | \psi_{j, \Gamma_8 \mu \alpha} + H.c. \Big]\end{aligned}$$ involves the creation or annihilation of conduction electrons projected onto the $\Gamma_8$ symmetry channel of the localized $f$ electrons. Explicitly, we consider the overlap of a $d$-band conduction electron of $e_g$ symmetry with the $\Gamma_8$ $f$-electron orbital. Using angular momentum eigenstates $| l, m_l, s, m_s \rangle$ for the $l=2$ $d$ electrons, the $e_g$ states are $$\begin{aligned}
|e_g,\sigma, + \rangle &= |2, 0, 1/2, \sigma \rangle \\
|e_g,\sigma, - \rangle &= \frac{1}{\sqrt{2}}(|2, 2, 1/2, \sigma \rangle+|2, -2, 1/2, \sigma \rangle ) \end{aligned}$$ with $\sigma = \pm \frac{1}{2} = \uparrow,\downarrow$. On the other hand, the $J=5/2$ $\Gamma_8$ quartet states are expressed using total angular momentum eigenstates $| j, m_j \rangle$: $$\begin{aligned}
|\Gamma_8, \mu, + \rangle &= | 5/2, \tilde{\mu} (1/2) \rangle \\
|\Gamma_8, \mu, - \rangle &= \sqrt{\frac{5}{6}}| 5/2, \tilde{\mu} (5/2) \rangle + \sqrt{\frac{1}{6}}| 5/2, -\tilde{\mu} (3/2) \rangle\end{aligned}$$ with $\tilde{\mu} = \mathrm{sgn} (\mu)$. Since the $\Gamma_8$ electrons in our model arise from the overlap of the $e_g$ conduction electrons with $f$ electron states, the annihilation operators of the former can be written as $$\begin{aligned}
\psi_{\Gamma_8, j, \mu, \alpha} &= \sum_{j',\sigma,\alpha'} \langle \Gamma_8, j, \mu, \alpha | e_g, j', \sigma, \alpha' \rangle c_{j',\sigma, \alpha'}\end{aligned}$$ Here the conduction electron state sits at a distinct site, $j'$, as generically the overlap between d- and f-electrons at the same site is zero; we assume that the $f$-electron is located at the origin, ${\bf R}_{j} ={\bf 0}$. The wave function overlaps are $$\begin{aligned}
\langle \Gamma_8, j, \mu, \alpha | e_g, j', \sigma, \alpha' \rangle = \!\int\! \mathrm{d}{\boldsymbol{\mathbf{r}}} \langle \Gamma_8, j, \mu, \alpha | {\boldsymbol{\mathbf{r}}} \rangle \langle {\boldsymbol{\mathbf{r}}} | e_g, j', \sigma, \alpha' \rangle\end{aligned}$$ The $e_g$ wave functions are sums of spherical harmonics via $$\begin{aligned}
\langle {\boldsymbol{\mathbf{r}}} | e_g, j', \sigma, \alpha' \rangle = \sum_m &\langle {\boldsymbol{\mathbf{r}}} - {\bf R}_{j'} | 2, m,\frac{1}{2}, \sigma \rangle \notag \\
&\times \langle 2, m, \frac{1}{2}, \sigma | e_g, j, \sigma, \alpha' \rangle \\
\langle {\boldsymbol{\mathbf{r}}} - {\bf R}_{j'} | 2, m, \frac{1}{2}, \sigma \rangle &= Y^m_2({\boldsymbol{\mathbf{r}}} - {\bf R}_{j'}),\end{aligned}$$ while the spin-orbit-coupled $\Gamma_8$ expressions contain additional Clebsch-Gordan coefficients, $$\begin{aligned}
\langle \Gamma_8, j, \mu, \alpha | {\boldsymbol{\mathbf{r}}} \rangle = \sum_m &\langle \Gamma_8, j, \mu, \alpha | j, m \rangle \langle j, m | 3, m - \sigma, \frac{1}{2},\sigma \rangle \notag \\
&\times \langle 3, m - \sigma, \frac{1}{2},\sigma | {\boldsymbol{\mathbf{r}}} \rangle, \\
\langle j, m | 3, m - \sigma,\frac{1}{2},\sigma \rangle &= -2\sigma \sqrt{\frac{7/2-2m\sigma}{7}},\\
\langle 3,m-\sigma,\frac{1}{2},\sigma|{\boldsymbol{\mathbf{r}}}\rangle &= [Y^m_3({\boldsymbol{\mathbf{r}}})]^*.\end{aligned}$$ Following the Slater-Koster method [@Slater1954; @Alexandrov2013; @Baruselli2014], we numerically calculate the overlaps of wave functions on neighboring sites and determine how they are related by symmetry. The following overlaps between neighboring $\Gamma_8$ orbitals at position $(r_x,r_y,r_z)$ and $e_g$ orbitals at $(r_x,r_y,r_z + \delta)$ in the $z$ direction are found to be nonzero and generically distinct, with their proportionality captured by the factor $\eta_v$: $$\begin{aligned}
&\langle e_g, \uparrow, + | \Gamma_8, \uparrow, + \rangle = -\langle e_g, \downarrow, + | \Gamma_8, \downarrow, + \rangle = \tilde{V} \\
&\langle e_g, \uparrow, - | \Gamma_8, \uparrow, - \rangle = -\langle e_g, \downarrow, - | \Gamma_8, \downarrow, - \rangle = \tilde{V}\eta_v . \end{aligned}$$ For $e_g$ orbitals located at $(r_x,r_y,r_z - \delta)$, the signs of each overlap are reversed. This leads to an odd-parity hybridization term along the $z$ direction after the Fourier transform to momentum space \[basis: $(\uparrow +, \uparrow -, \downarrow +, \downarrow -)$\] $$\begin{aligned}
H_{e_g - \Gamma_8}^z = i \tilde{V}
\begin{pmatrix}
-2 s_z & 0 & 0 & 0 \\
0 & - 2 \eta_v s_z & 0 & 0\\
0 & 0 & 2 s_z & 0\\
0 & 0 & 0 & 2 \eta_v s_z
\end{pmatrix}\end{aligned}$$ with $s_z = \sin(k_z)$. Here $\eta_v$ tunes the relative overlap integrals between the $e_g$ and $\Gamma_8$ orbitals. The 3D hybridization term with cubic symmetry is then obtained from the 1D $H_{e_g - \Gamma_8}^z$ by applying $2\pi/3$ rotations around a cubic body diagonal, transforming $\hat{z} \rightarrow \hat{x} \rightarrow \hat{y} \rightarrow \hat{z}$; in the $(\uparrow +, \uparrow -, \downarrow +, \downarrow -)$ basis this rotation is $$\begin{aligned}
\mathcal{R}_{2\pi/3} = e^{-i \pi/4} \frac{1}{2 \sqrt{2}} \begin{pmatrix}
-1 & \sqrt{3} & -i & i\sqrt{3} \\
-\sqrt{3} & -1 & -i\sqrt{3} & -i \\
1 & -\sqrt{3} & -i & i\sqrt{3} \\
\sqrt{3} & 1 & -i\sqrt{3} & -i
\end{pmatrix}\end{aligned}$$ Then $H_{3D} = H_{e_g - \Gamma_8}^z + \mathcal{R}_{2\pi/3} H_{e_g - \Gamma_8}^x (\mathcal{R}_{2\pi/3})^\dagger + (\mathcal{R}_{2\pi/3})^2 H_{e_g - \Gamma_8}^y [(\mathcal{R}_{2\pi/3})^\dagger]^2$, leading to the form factor $\Phi^{\sigma \alpha'}_{\mu \alpha}({\boldsymbol{\mathbf{k}}})$ expressing the $\Gamma_8$ annihilation operator in terms of the $e_g$ conduction electron operators, $$\begin{aligned}
&\psi_{\Gamma_8, j, \mu \alpha} = \sum_{{\boldsymbol{\mathbf{k}}} \sigma \alpha'} \mathrm{e}^{i {\boldsymbol{\mathbf{k}}} \cdot {\boldsymbol{\mathbf{R}}}_j} \Phi^{\sigma \alpha'}_{\mu \alpha}({\boldsymbol{\mathbf{k}}})
c_{{\boldsymbol{\mathbf{k}}} \sigma \alpha'} \\
&\Phi^{\sigma \alpha'}_{\mu \alpha}({\boldsymbol{\mathbf{k}}})
= \begin{pmatrix}
\hat{A} & \hat{B} \\
\hat{B}' & -\hat{A}
\end{pmatrix} \\
&\hat{A} = \begin{pmatrix}
-2i s_z & 0 \\
0 & -2i \eta_v s_z
\end{pmatrix}\\
&\hat{B} = \begin{pmatrix}
\frac{i}{2}(1+3\eta_v)s_+ & -\frac{i\sqrt{3}}{2}(-1+\eta_v)s_- \\
-\frac{i\sqrt{3}}{2}(-1+\eta_v)s_- & \frac{i}{2}(3+\eta_v)s_+
\end{pmatrix}\\
&\hat{B}' = \hat{B} \, (s_+ \leftrightarrow s_-)\end{aligned}$$
where $s_{\pm} \equiv \sin(k_x) \pm i\sin(k_y)$ and $s_z \equiv \sin(k_z)$. $\Phi$ possesses an overall amplitude $\tilde V$, which we set to one, as its effect is already captured by the overall strength of the hybridization $V$. Here the matrix is written in the basis $(\uparrow +, \uparrow -, \downarrow +, \downarrow -)$ for $\sigma=\uparrow,\downarrow$ and $\alpha=+,-$. Similar techniques, applied to the $e_g$–$e_g$ hoppings, are used to obtain the conduction electron dispersion $\epsilon_{{\boldsymbol{\mathbf{k}}}}$ used in the main text [@Alexandrov2013; @Baruselli2014] (see eq. \[eq:Hd\]).
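The block form of $\Phi^{\sigma\alpha'}_{\mu\alpha}({\boldsymbol{\mathbf{k}}})$ translates directly into code; the short sketch below (illustrative only, with $\eta_v$ set to an arbitrary example value) assembles the $4\times4$ matrix in the $(\uparrow+,\uparrow-,\downarrow+,\downarrow-)$ basis and checks the odd parity $\Phi(-{\boldsymbol{\mathbf{k}}})=-\Phi({\boldsymbol{\mathbf{k}}})$ inherited from the $f$–$d$ overlap.

```python
import numpy as np

def Phi(k, eta_v=-1/3):
    """Gamma_8 -- e_g form factor in the (up+, up-, dn+, dn-) basis, as written above."""
    sx, sy, sz = np.sin(k)
    sp, sm = sx + 1j*sy, sx - 1j*sy
    A = np.diag([-2j*sz, -2j*eta_v*sz])
    B = np.array([[0.5j*(1+3*eta_v)*sp, -0.5j*np.sqrt(3)*(-1+eta_v)*sm],
                  [-0.5j*np.sqrt(3)*(-1+eta_v)*sm, 0.5j*(3+eta_v)*sp]])
    Bp = np.array([[0.5j*(1+3*eta_v)*sm, -0.5j*np.sqrt(3)*(-1+eta_v)*sp],
                   [-0.5j*np.sqrt(3)*(-1+eta_v)*sp, 0.5j*(3+eta_v)*sm]])   # B with s+ <-> s-
    return np.block([[A, B], [Bp, -A]])

k = np.array([0.7, -1.2, 0.4])
print(np.allclose(Phi(-k), -Phi(k)))   # odd parity, as required for an f-d overlap: True
```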
Ferrohastatic hybridization gaps\[sec:gaps\]
============================================
As discussed in the main text, cubic FH order typically possesses a “half”-hybridization gap, realized by both symmetry-breaking and non-symmetry-breaking hybridization gaps. As the Hamiltonian cannot generically be diagonalized analytically, analyzing the gaps is complicated. We first discuss the general case, where we keep the direction of the hastatic spinor general. Here, we examine the full $f$-electron Green’s function, where the symmetry-breaking terms of the $f$-electron self-energy can be isolated; these allow us to clearly discuss the terms entering into the dispersion. Next, we discuss a special case, where some hybridization matrix elements vanish from the dispersion and the problem is analytically tractable. Finally, we discuss how the half-hybridization gap affects the density of states.
$f$-electron self-energy
------------------------
The heavy quasiparticle band structure is given by the solutions of $\det ( \omega - H_{{\boldsymbol{\mathbf{k}}}}' ) = 0$, which do not generically have a closed form. Still, one may gain insight into the symmetry-breaking properties of FH order by factorizing the determinant as $$\begin{aligned}
\det ( i\omega_n - H_{{\boldsymbol{\mathbf{k}}}}' ) & = \det [g^c({\boldsymbol{\mathbf{k}}},i\omega_n)]^{-1} \cr
& \!\! \times \det \left[ i\omega_n \alpha_0 - \Sigma_f ({\boldsymbol{\mathbf{k}}},i\omega_n) \right]\end{aligned}$$ and examining the $f$-electron self energy, $\Sigma_f ({\boldsymbol{\mathbf{k}}},i\omega_n) = \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}^\dagger g^c({\boldsymbol{\mathbf{k}}},i\omega_n) \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}$. In particular, we may determine which components of $\Sigma_f$ break symmetries of the underlying cubic lattice, and how these couple to the $SU(2)$ symmetry breaking of the hastatic spinor. The full hybridized $f$-electron Green’s function can be obtained from $$\begin{aligned}
[\mathcal{G}^f({\boldsymbol{\mathbf{k}}},i\omega_n)]^{-1} = i\omega_n\alpha_0 - \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}^\dagger g^c({\boldsymbol{\mathbf{k}}},i\omega_n) \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}\end{aligned}$$ where the generic hybridization term is the $4 \times 2$ matrix $$\begin{aligned}
\mathcal{V}_{{\boldsymbol{\mathbf{k}}}} = V\sum_{\mu} \tilde{\mu} b_{-\mu} \Phi^{\sigma \alpha'}_{\mu \alpha}({\boldsymbol{\mathbf{k}}}) \end{aligned}$$ and the bare unhybridized conduction electron Green’s function is obtained from $$\begin{aligned}
[g^c({\boldsymbol{\mathbf{k}}},i\omega_n)]^{-1} = \sigma_0 \otimes [ (i\omega_n -\psi_{00}) \alpha_0 - \psi_{01} \alpha_1 - \psi_{03} \alpha_3]\end{aligned}$$ with coefficients given by $$\begin{aligned}
\psi_{00} &= \frac{1}{4}\operatorname{Tr}[\mathcal{H}_c \sigma_0 \otimes \alpha_0] = \mu - t(1+\eta_c)(c_x + c_y + c_z)\\
\psi_{01} &= \frac{1}{4}\operatorname{Tr}[\mathcal{H}_c \sigma_0 \otimes \alpha_1] = \frac{\sqrt{3}}{2} t(\eta_c-1)(c_x - c_y)\\
\psi_{03} &= \frac{1}{4}\operatorname{Tr}[\mathcal{H}_c \sigma_0 \otimes \alpha_3] = \frac{t}{2}(1-\eta_c)(c_x+c_y-2c_z).\end{aligned}$$ Again, we use the Pauli matrices, $\sigma_\lambda$ and $\alpha_\lambda$ ($\lambda = 0,1,2,3$) to represent the spin and pseudospin degrees of freedom. Inverting, $$\begin{aligned}
g^c({\boldsymbol{\mathbf{k}}},i\omega_n) &= \left( \frac{1}{(i \omega_n - \psi_{00})^2 - \psi_{01}^2-\psi_{03}^2} \right) \times \notag \\
&\sigma_0 \otimes [ (i\omega_n -\psi_{00}) \alpha_0 + \psi_{01} \alpha_1 + \psi_{03} \alpha_3]\end{aligned}$$ Thus we find three non-zero components $\sigma_0 \otimes \alpha_i$ ($i=0,1,3$) for $g^c({\boldsymbol{\mathbf{k}}},i\omega_n)$, and hence also for the $f$-electron self energy $\Sigma_f ({\boldsymbol{\mathbf{k}}},i\omega_n)$, which has the matrix structure $$\begin{aligned}
\mathcal{V}_{{\boldsymbol{\mathbf{k}}}}^\dagger \sigma_0 \otimes \alpha_i \mathcal{V}_{{\boldsymbol{\mathbf{k}}}} &= \sum_{j}V_{2}^{ij}({\boldsymbol{\mathbf{k}}}) \alpha_j\end{aligned}$$ with
$$\begin{aligned}
V_{2}^{ij}({\boldsymbol{\mathbf{k}}}) &= \frac{1}{2}\operatorname{Tr}[ \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}^\dagger \sigma_0 \otimes \alpha_i \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}\alpha_j ] = \begin{cases} \frac{V}{4} \operatorname{Tr}[\Phi({\boldsymbol{\mathbf{k}}})^\dagger \sigma_0 \otimes \alpha_i \Phi({\boldsymbol{\mathbf{k}}}) \sigma_0 \otimes \alpha_j] \operatorname{Tr}[ \hat{b}^\dagger \mu_0 \hat{b}],& \;\; j = 0,1,3 \\
-\frac{V}{4} \sum_k \operatorname{Tr}[\Phi({\boldsymbol{\mathbf{k}}})^\dagger \sigma_0 \otimes \alpha_i \Phi({\boldsymbol{\mathbf{k}}}) \sigma_k \otimes \alpha_2] \operatorname{Tr}[ \hat{b}^\dagger \mu_k \hat{b}],& \;\; j = 2 \end{cases}\end{aligned}$$
where $\mu_k$ is one of the Pauli matrices representing the excited Kramers doublet pseudospin. These self-energy terms form a non-trivial matrix in $\alpha$ space, and so the full dispersion will contain not only these terms, but quartic traces of the form, $$V_{4}^{ij}({\boldsymbol{\mathbf{k}}})= \operatorname{Tr}[ \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}^\dagger \sigma_0 \otimes \alpha_i \mathcal{V}_{{\boldsymbol{\mathbf{k}}}} \sigma_k \otimes \alpha_j \mathcal{V}_{{\boldsymbol{\mathbf{k}}}}^\dagger \sigma_0 \otimes \alpha_i \mathcal{V}_{{\boldsymbol{\mathbf{k}}}} \sigma_k \otimes \alpha_j], $$ where we keep only the nonzero terms. We note that out of the above terms, only $V_{2}^{00}({\boldsymbol{\mathbf{k}}})$ explicitly preserves the cubic symmetry of the underlying lattice for generic values of $\eta_v$. However, only the terms that also break $SU(2)$ symmetry lead to cubic symmetry breaking in the full dispersion. The other terms should be considered similarly to the $\eta_c \neq 1$ terms in the conduction electron dispersion: they lead to band splitting, but the overall dispersion satisfies cubic symmetry. We have checked this explicitly by setting the $b^\dagger \vec{\mu} b$ terms to zero, while keeping the $b^\dagger b$ terms, and have found that cubic symmetry is always preserved. In order to analyze the symmetry-breaking nature of the hybridization gaps that depend on $b^\dagger \vec{\mu} b$, we now turn to an analytically tractable special case.
Analytic expression for $\eta_c = 1$, $\eta_v=\!\sfrac{-1}{3}$, $\hat{b} = (1,0)^T$
-----------------------------------------------------------------------------------
In this special case, the mean-field Hamiltonian may then be diagonalized to obtain two degenerate unhybridized conduction electron bands and the following four hybridized bands: $$\begin{aligned}
E_{{\boldsymbol{\mathbf{k}}}} = \! \left[\! \frac{(E_{c{\boldsymbol{\mathbf{k}}}} + \lambda)}{2}\! \pm \! \sqrt{\frac{[ (E_{c{\boldsymbol{\mathbf{k}}}} -\lambda)^2 + V_{1{\boldsymbol{\mathbf{k}}}}^2 ]}{4} \pm \frac{16 \sqrt{\gamma_{{\boldsymbol{\mathbf{k}}}} + \delta_{{\boldsymbol{\mathbf{k}}}}}}{9}} \right]\end{aligned}$$ where we define $$\begin{aligned}
E_{c{\boldsymbol{\mathbf{k}}}} &= -2t(c_x+c_y+c_z)+\mu \\
|V_{1{\boldsymbol{\mathbf{k}}}}|^2 &= 4 V_{2}^{00}({\boldsymbol{\mathbf{k}}}) = \frac{80}{9} V^2b^2 (s_x^2 + s_y^2 + s_z^2) \\
\gamma_{{\boldsymbol{\mathbf{k}}}} &= V^4 b^4(s_x^4 + s_y^4 + s_z^4)\\
\delta_{{\boldsymbol{\mathbf{k}}}} &= V^4 b^4 (2s_x^2s_y^2-(s_x^2+s_y^2)s_z^2).\end{aligned}$$ Here, $\gamma_{{\boldsymbol{\mathbf{k}}}}$ has the full ($\Gamma_1$) symmetry of the lattice, as does $V_{1{\boldsymbol{\mathbf{k}}}}$. $\delta_{{\boldsymbol{\mathbf{k}}}}$ breaks the cubic symmetry, and has the symmetry of $|\Gamma_3,+\rangle$, mixing both $g$-wave ($7[2z^4-x^4-y^4]-6[3z^2-r^2]r^2$) and $d$-wave ($3z^2-r^2$) components of the same symmetry; it is plotted in Fig. 2(b) of the main text. $\delta_{{\boldsymbol{\mathbf{k}}}}$ is the only term that depends on $b^\dagger \vec{\mu} b$, and is proportional to $(b^\dagger \mu_3 b)^2$. It is written in terms of the above traces as $$\delta_{{\boldsymbol{\mathbf{k}}}} = 4 \left(V_{4}^{33} +V_{4}^{11} -V_{4}^{00} -V_{4}^{22}\right) -4 (V_{2}^{32})^2,$$ where we have suppressed the ${\boldsymbol{\mathbf{k}}}$ dependence on the right hand side. Note that individually, each $V_4$ or $V_2^2$ is positive definite, and each has a different symmetry, none of which is $|\Gamma_3,+\rangle$; it is only the combination of these that gives the nodal gap. Rotation of the hastatic spinor to $\hat x$ or $\hat y$ maintains the same shape of the symmetry-breaking gap component, but with a rotated quantization axis. Thus, for $\hat{b}$ along the $\hat{x}$ axis ($\hat{b} = (1,1)^T/\sqrt{2}$), the analogous component has the form of Fig. 2(b) of the main text, but now oriented along the $\hat{x}$ axis. Essentially, rotating the hybridization spinor away from $\hat z$ mixes the $\Gamma_3 \pm$ states.
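As a numerical cross-check of these expressions, the sketch below (Python/NumPy; the parameter values are illustrative placeholders, not those used in the main text) transcribes $E_{c{\boldsymbol{\mathbf{k}}}}$, $|V_{1{\boldsymbol{\mathbf{k}}}}|^2$, $\gamma_{{\boldsymbol{\mathbf{k}}}}$ and $\delta_{{\boldsymbol{\mathbf{k}}}}$ and evaluates the four hybridized bands along a few directions. It illustrates, for example, that $\delta_{{\boldsymbol{\mathbf{k}}}}$ vanishes along the cubic body diagonal and changes sign between the $(k,k,0)$ and $(k,0,k)$ directions, consistent with its $|\Gamma_3,+\rangle$ form.

```python
import numpy as np

# Illustrative placeholder parameters (not values used in the main text)
t, mu, lam, V, b = 1.0, 0.0, -0.5, 0.4, 1.0

def bands_special_case(k):
    """Four hybridized bands for eta_c = 1, eta_v = -1/3, b_hat = (1,0)^T,
    transcribed from the expressions above; also returns delta_k."""
    sx, sy, sz = np.sin(k)
    cx, cy, cz = np.cos(k)
    Eck   = -2.0 * t * (cx + cy + cz) + mu
    V1sq  = (80.0 / 9.0) * V**2 * b**2 * (sx**2 + sy**2 + sz**2)
    gamma = V**4 * b**4 * (sx**4 + sy**4 + sz**4)
    delta = V**4 * b**4 * (2.0 * sx**2 * sy**2 - (sx**2 + sy**2) * sz**2)
    E = [(Eck + lam) / 2.0
         + s1 * np.sqrt(((Eck - lam)**2 + V1sq) / 4.0
                        + s2 * 16.0 * np.sqrt(gamma + delta) / 9.0)
         for s1 in (+1, -1) for s2 in (+1, -1)]
    return np.sort(E), delta

for label, k in [("(k,k,k)", [0.5, 0.5, 0.5]),
                 ("(k,k,0)", [0.5, 0.5, 0.0]),
                 ("(k,0,k)", [0.5, 0.0, 0.5])]:
    E, d = bands_special_case(np.array(k))
    print(label, " delta_k =", round(float(d), 6), " bands =", np.round(E, 3))
```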
References (only the DOI and arXiv identifiers are recoverable from the bibliography; entries are listed in their original order):

doi:10.1080/00018732.2017.1331615; doi:10.1038/nature03279; doi:10.1038/nphys892; doi:10.1146/annurev-conmatphys-031214-014749; doi:10.1103/PhysRevLett.118.107204; doi:10.1080/14786435.2014.916428; doi:10.1038/nphys1024; doi:10.7566/JPSJ.82.044707; doi:10.1103/PhysRevLett.59.1240; doi:10.1080/000187398243500; doi:10.1103/PhysRevLett.107.247202; doi:10.7566/JPSJ.83.061007; doi:10.1038/nature11820; doi:10.1103/PhysRevB.91.205103; doi:10.1103/PhysRevB.98.235143; arXiv:1811.11115; doi:10.1143/JPSJ.80.063701; doi:10.1103/PhysRevLett.106.177001; doi:10.1103/PhysRevB.84.193101; doi:10.1103/PhysRevB.86.184426; doi:10.1103/PhysRevLett.113.267001; doi:10.1103/PhysRevB.91.241102; doi:10.7566/JPSJ.85.082002; doi:10.1143/JPSJ.80.113703; doi:10.1103/PhysRevB.86.184419; doi:10.1103/PhysRevB.87.205106; doi:10.7566/JPSJ.85.113703; doi:10.1103/PhysRevB.95.155106; doi:10.1143/JPSJ.79.033704; doi:10.1143/JPSJ.81.083702; doi:10.1103/PhysRevLett.109.187004; doi:10.1088/1742-6596/592/1/012023; doi:10.1103/PhysRevB.94.075134; doi:10.7566/JPSJ.86.044711; doi:10.1103/PhysRevB.88.085124; doi:10.1103/PhysRevB.97.115111; doi:10.1103/PhysRevB.98.134447; arXiv:1901.00012; doi:10.1103/PhysRevB.29.3035; doi:10.1088/0022-3719/16/29/007; doi:10.1103/PhysRevLett.94.147201; doi:10.1103/PhysRev.147.392; doi:10.1103/PhysRevLett.111.226403; doi:10.1103/PhysRevB.90.201106; doi:10.1017/CBO9780511470752 (Cambridge Studies in Magnetism); doi:10.1088/0022-3719/18/13/012; doi:10.1103/PhysRevLett.112.167204; [entry not recoverable]; doi:10.1088/1742-6596/592/1/012098; doi:10.1006/jssc.1995.1052; arXiv:1801.01042; doi:10.1088/0953-8984/20/10/104218; doi:10.1103/PhysRevLett.77.3637; doi:10.1143/JPSJ.79.093708; doi:10.7566/JPSJ.85.113701; doi:10.1103/PhysRevLett.62.595; [book entry not recoverable]; doi:10.1103/PhysRev.94.1498.
[^1]: The transition to $4f^3$ is more likely in the real materials, but yields the same physics as the $4f^1$ transition considered here
[^2]: The 1-2-20 materials have an fcc structure, but the nature of FH order is independent of the structural details.
---
abstract: 'We present a first study of the channel $H, A$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^{\pm}+ h^{\mp} + X$ in CMS at high $m_A$ values where no triggering difficulties are expected with QCD jets. At present the $\tau$ selection is based solely on the presence of a hard isolated track in the “$\tau$” jet, but further refinements based on calorimeter collimation or impact parameter selections are obviously possible. The main irreducible background in these conditions is due to QCD jets with hard fragmentations. A large reduction of this background and improvement in the expected signal to background ratio is provided by $E_t^{miss}$ cuts. The expected high-mass reach in the $m_A$, $tan\beta$ parameter space for $3 \times 10^4pb^{-1}$ is shown. This $H$ [$\;\rightarrow$]{}$~\tau\tau$ channel provides the highest mass reach and the best mass resolution when compared to $\tau\tau$ [$\;\rightarrow$]{}$~l^{\pm}+h^{\mp} + X$ and $\tau\tau$ [$\;\rightarrow$]{}$~e^{\pm}+\mu^{\mp} + X$ final states. To the extent that with further calorimetric and impact parameter based selection criteria the QCD background can be kept under control, i.e. below the irreducible $Z,\gamma*$ [$\;\rightarrow$]{}$~\tau\tau$ background, we should strive to have a first level trigger allowing to explore the mass range down to $\sim$150 - 200 GeV.'
---
[**July 7, 1999**]{}
[**The $H_{SUSY}$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^{\pm}+h^{\mp} + X$ channel, its advantages and potential instrumental drawbacks**]{}
R. Kinnunen $^{a)}$\
Helsinki Institute of Physics, Helsinki, Finland\
D. Denegri $^{b)}$\
DAPNIA/SPP, CEN Saclay, France\
\
$^{a)}$ Email: ritva.kinnunen@cern.ch\
$^{b)}$ Email: daniel.denegri@cern.ch
Introduction
============
We present here the first study of the purely hadronic $\tau$ decay modes in our systematic investigation of $h,H,A(H_{SUSY}$) [$\;\rightarrow$]{}$~\tau\tau$ channels. The one lepton plus one charged hadron [@tausel],[@tauhlep] and the one electron plus one muon [@tauemu] final states have been shown to cover a significant region of the $m_A$, $tan\beta$ parameter space at high $tan\beta$ values ($tan\beta >$ 10 and $m_A>$ 150 GeV). These channels have been studied in both the inclusive $H_{SUSY}$ production mode, and in $b\overline{b}H_{SUSY}$ in association with b-jets. For Higgs masses from 200 to 800 GeV the fraction of events produced in association with b-jets varies from 75% to 80% for $tan\beta$ = 10 and from 75% to 96% for $tan\beta$ = 45. The associated production channels provide much better signal to background ratios, but the statistics are much reduced (by a factor of $\sim$20) and these channels are very sensitive to the tracker performance through the b-tagging procedure, the associated b-jets tending to be rather soft and uniformly distributed between barrel and endcap pixels [@btag]. Studies of these channels are continuing, implementing detailed pattern recognition and track finding in the CMS tracker [@tdr:tracker].
For these $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^+$ + $h^-$ final states with two isolated hard hadrons (plus $\pi^0$’s) we start with high masses $m_A >$ 300 GeV, as for the low mass range ($\sim$ 150 GeV) these purely hadronic final states competing with QCD jets obviously run into difficulties, first in triggering at an acceptable rate and, second, in the off-line analysis, where the background from QCD jets with hard fragmentation is large. These channels can however provide a spectacular signature for a sufficiently heavy Higgs and should obviously extend significantly the upper $H_{SUSY}$ mass reach due to favourable branching ratios. The branching ratio of $\tau$ to a single hadron plus any number of neutrals is about 50%. The combined branching ratio for $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^+ + h^- + X$ is 2.8% compared to about 3.7% for $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~l^{\pm} + h^{\mp} + X$, assuming BR($A, H$ [$\;\rightarrow$]{}$~\tau\tau$) = 11%, which is roughly valid for the full $m_A$ and $tan\beta$ ($>$10) range studied here. The fractional momentum taken away by the neutrinos is significantly smaller in the $h^+$ + $h^-$ final state compared to the $l^{\pm} + h^{\mp}$ and especially the $e^{\pm} + \mu^{\mp}$ final states, thus an improved $H_{SUSY}$ [$\;\rightarrow$]{}$~\tau\tau$ mass resolution can be expected, as well as an improved signal/background ratio if the QCD background can be reduced below the irreducible $\tau\tau$ background.
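For orientation, these combined branching ratios follow from simple counting. Taking BR($\tau$ [$\;\rightarrow$]{} hadron(s) + neutrals $+ \nu$) $\simeq$ 50% as above, and assuming a leptonic fraction BR($\tau$ [$\;\rightarrow$]{}$~l\nu\bar{\nu}$) $\simeq$ 35% ($l = e,\mu$; this number is our assumption and is not taken from the text), one obtains approximately $$\begin{aligned}
\mathrm{BR}(A,H \rightarrow \tau\tau \rightarrow h^{+} h^{-} + X) &\simeq 0.11 \times (0.50)^2 \simeq 2.8\% \, ,\\
\mathrm{BR}(A,H \rightarrow \tau\tau \rightarrow l^{\pm} h^{\mp} + X) &\simeq 0.11 \times 2 \times 0.35 \times 0.50 \simeq 3.9\% \, ,\end{aligned}$$ in agreement with the 2.8% quoted above; the small difference with respect to the quoted 3.7% in the second case simply reflects the precise leptonic branching fraction that is used.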
The main reducible background sources are the QCD jets and $W+jet$ events with $W$ [$\;\rightarrow$]{}$~\tau\nu$, and the irreducible ones are $Z, \gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$ and $t\overline{t}$ with $W$ [$\;\rightarrow$]{}$~\tau\nu$. The rate for QCD jets is very large; about $3 \times 10^{11}$ two-jet events with $E_t^{jet} >$ 60 GeV are expected for $3 \times 10^4pb^{-1}$. Therefore a rejection factor of at least $10^4$ per jet is needed for a useful signal (further rejection can be provided by $E_t^{miss}$ cuts). In this study the $\tau$ identification is based solely on the presence of a single hard isolated charged hadron in the jet using tracker information. Indeed a large rejection factor is obtained from the tracker alone, a result expected to be confirmed with the detailed simulation and track reconstruction currently under way. The use of calorimeter information exploiting the collimation of a $\tau$ jet to improve $\tau$ identification is also presently in progress, with GEANT calorimeter simulation [@tdr:ecal][@tdr:hcal]. We are also investigating the possibility to use the impact parameter measurement of the hard hadron from $\tau$ to further reduce the QCD background. All these improvements will thus further reinforce and extend the reach of the $H_{SUSY}$ [$\;\rightarrow$]{}$~\tau\tau$ channel, the only channel which until now gives us access to the theoretically favoured $H_{SUSY}>$ 500 GeV mass range. However, this channel should also play an important (decisive?) role in the intermediate mass range 150 GeV $< m_H <$ 300 GeV, which is the most critical one from the point of view of the MSSM $tan\beta$, $m_A$ parameter space coverage, provided we can trigger on it efficiently, which requires a dedicated study and is not discussed here.
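As a rough consistency check of the QCD jet rate quoted above (our estimate, based only on the numbers given in this note), the expected yield corresponds to an effective two-jet cross section of $$\sigma_{2jet} \simeq \frac{3 \times 10^{11}}{3 \times 10^{4}~pb^{-1}} = 10^{7}~pb = 10~\mu b \, ,$$ close to the 11 $\mu$b used for the QCD sample in Table 2. Even with a rejection factor of $10^{4}$ per jet, of order $3 \times 10^{11} \times (10^{-4})^{2} = 3 \times 10^{3}$ di-jet events would survive, which is why the additional $E_t^{miss}$ rejection mentioned above is important.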
Event simulation
================
Events are generated with PYTHIA [@PYTHIA]. Masses and couplings of SUSY Higgses are calculated using two-loop/RGE-improved radiative corrections [@gunion]. The branching ratios and cross sections are normalized using the HDECAY program [@HDECAY]. No stop mixing is included and the SUSY particles are first assumed to be heavy enough not to contribute to the SUSY Higgs decays. The decays to neutralinos and charginos can however reduce strongly the $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ branching ratio at low $tan\beta$, and by a factor of two even at $tan\beta$ = 30 [@HDECAY]. For the high $m_A$ and $tan\beta$ region investigated here the effects of sparticle decay modes are nevertheless relatively modest, as mentioned later on. In the case of stop mixing, if the light stop $\tilde{t_1}$ is lighter than or comparable in mass to the top quark, the squark loop effects are expected to suppress significantly the $gg$ [$\;\rightarrow$]{}$~h, A, H$ production processes [@djouadi]. For the large $m_A$ and $tan\beta$ values discussed here this would mean at most a 20% reduction in the production rate as we are dominated by $gg$ [$\;\rightarrow$]{}$~b\overline{b}H_{SUSY}$ tree-diagram production which is not affected by $\tilde{t}_1$ - $t$ interference effects. For the QCD background, PYTHIA two-jet events with initial and final state QCD radiation are generated. A cut $p_t^{q,g} >$ 50 GeV is applied for the hard process at the generation level. The default Lund fragmentation is used.
The CMS detector response is simulated and the jets and the missing transverse energy are reconstructed with the fast simulation package CMSJET [@cmsjet]. The loss of reconstruction efficiency for the hard track due to secondary interactions in the tracker material ($\sim$ 20% of $\lambda_{int}$), as well as the degradation of the track isolation due to conversions of accompanying $\pi^0$ [$\;\rightarrow$]{}$~\gamma\gamma$ in the tracker, is taken into account only on average, which is sufficient at this stage.
Selection of events
===================
$p_t^h$ threshold and track isolation
-------------------------------------
Hard $\tau$ jets are expected in the $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^+ + h^- + X$ events at large $m_A$, as can be seen from Fig. 1, which shows the $E_t$ distribution of hadronic $\tau$ jets for $m_A$ = 300, 500 and 800 GeV and the $E_t$ distribution of the two hardest jets in QCD jet events (Fig. 1c). Events are required to have at least two calorimetric jets with $E_t >$ 60 GeV within $|\eta| <$ 2.5. The $\tau$ jet candidates are chosen to be the $\tau$ jets for the signal and for the backgrounds with real $\tau$’s, i.e. from $Z,\gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$ and $t\overline{t}$ events. For the QCD background the $\tau$ jet candidates are taken to be the jets with highest $E_t$. Events are assumed to be triggered with a two-jet trigger with full trigger efficiency for $E_t^{jet} >$ 60 GeV at this stage. For $m_A >$ 300 GeV and a threshold $E_t^{jet} >$ 60 GeV the trigger efficiency is high and the trigger rate acceptable at $L \sim 10^{33}cm^{-2}s^{-1}$ [@trigger]. For high luminosity running, and in particular for the lower mass range 150 GeV $< m_A <$ 300 GeV, a better understanding of possible $1^{st}$ level triggers and of the trigger efficiency versus $E_t$ is needed and should then be implemented in this type of study.
A jet is defined to be a “$\tau$” jet if it contains one isolated hard hadron within $\Delta R <$ 0.1 from the calorimeter jet axis. A powerful tracker isolation can be implemented in CMS thanks to efficient track finding down to low $p_t$ values (0.9 GeV) [@tdr:tracker]. Here we require that there is no track other than the hard track with $p_t >$ 1 GeV in a larger cone of $\Delta R <$ 0.4 around the calorimeter jet axis. This isolation criterion is adequate for running at the luminosity of about $10^{33}cm^{-2}s^{-1}$, i.e. without significant event pile-up. Figure 2 shows the transverse momentum for the isolated single track in a jet with $E_t >$ 60 GeV and $|\eta|<$ 2.5 for signal events at $m_A$ = 300, 500 and 800 GeV, for the $Z,\gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$, $t\overline{t}$ events with $W$ [$\;\rightarrow$]{}$\tau\nu$ and for QCD jet events. The $\tau$ selection efficiencies per jet, due to both the track isolation and the hard track $p_t$ threshold cut, are shown in Table 1 as a function of the $p_t$ threshold for the signal and backgrounds. As can be seen from the table, a rejection factor against the QCD jets varying from 700 to about 1500 is obtained from tracker momenta alone. In the following, $p_t^h >$ 40 GeV is chosen, with an efficiency of 50% for the signal at $m_A$ = 500 GeV and 22% and 32% for the $Z, \gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$ background in the low and high mass range, respectively. The rejection factor against QCD jets is then about 1500.
The rejection factor against the QCD jets is obviously sensitive to the fragmentation function used to simulate the jets. On this point a more detailed study is needed to understand the hard fragmentation effects in jets. The rejection factor shown in Table 1 is an average factor for $E_t^{jet} >$ 60 GeV. As the simulation of even one QCD jet event with two jets with $E_t >$ 60 GeV both containing one isolated hard hadron with $p_t >$ 40 GeV is not possible, we have evaluated the rejection factor as a function of $E_t^{jet}$ generating large numbers of QCD jets in several $E_t$ bins. This is shown in Fig. 3 and is used in the simulation and evaluation of the QCD background assuming also factorisation of the two jet fragmentations. For a QCD jet with $E_t$ values between 300 and 350 GeV, for instance, the probability to fluctuate into a “$\tau$” jet with one isolated charged hadron with $p_t >$ 40 GeV is about $2\times 10^{-4}$.
The two hard tracks from $\tau^+$ and $\tau^-$ in the signal events have opposite signs, while no such strong charge correlation is expected for the QCD jet events. As the charge assignment in CMS is correct for almost 100% of the tracks with $p_t <$ 1 TeV [@tdr:tracker], about half of the QCD background can be removed by requiring opposite charges for the two isolated hadrons.
Process $p_t^h >$ 20 GeV $p_t^h >$ 30 GeV $p_t^h >$ 40 GeV
-------------------------------------------------------------------------------- -------------------- -------------------- --------------------
$A,H$ [$\;\rightarrow$]{} $~\tau \tau$, $m_A$=300 GeV 38.1 % 32.5% 27.3%
$A,H$ [$\;\rightarrow$]{} $~\tau \tau$, $m_A$=500 GeV 61.7 % 55.7% 50.0%
$A,H$ [$\;\rightarrow$]{} $~\tau \tau$, $m_A$=800 GeV 65.7 % 61.2% 56.9%
$Z, \gamma^*$ [$\;\rightarrow$]{} $~\tau \tau$, 130 GeV$<m_{\tau\tau}<$300 GeV 30.9% 25.0% 22.2%
$Z, \gamma^*$ [$\;\rightarrow$]{} $~\tau \tau$, $m_{\tau\tau}>$300 GeV 41.7% 36.7% 32.1%
$t\overline{t}$, $W$ [$\;\rightarrow$]{}$~\tau\nu$ 25.0% 20.5% 16.6%
QCD jets, $p_t^{q,g} >$ 60 GeV 1.4$\times10^{-3}$ 9.4$\times10^{-4}$ 6.7$\times10^{-4}$
: $\tau$ selection efficiency per jet using just tracker momentum measurement for $A,H$ [$\;\rightarrow$]{} $~\tau \tau$ with $\tau$ [$\;\rightarrow$]{}$~h^{\pm}+X$ at $m_A$ = 300, 500 and 800 GeV, for $Z, \gamma^*$ [$\;\rightarrow$]{} $~\tau \tau$ in the low and high mass range and for QCD jet background, with $E_t^{jet} >$ 60 GeV and $|\eta^{jet}| <$ 2.4. The hard track is required to be within $\Delta R$ = 0.1 from the calorimeter jet axis and isolated in the tracker with respect to tracks with a $p_t$ threshold of 1 GeV in a cone of $\Delta R$ = 0.4 around the calorimeter jet axis.
\[table1\]
$E_t^{miss}$ cuts
-----------------
Due to the large Higgs masses considered here the missing transverse energy $E_t^{miss}$ from the neutrinos is expected to be significant, although the two $\tau$’s tend to be in back-to-back configuration resulting in a partial compensation. To perform $H$ [$\;\rightarrow$]{}$~\tau\tau$ mass reconstruction, non-back-to-back $\tau\tau$ pairs are required. Figure \[fig:4\] shows the expected $E_t^{miss}$ from the simulation of the CMS calorimeter response with the CMSJET program for a signal with $m_A$ = 300, 500 and 800 GeV and for the QCD jet background, in a data taking regime without any event pile-up. After requiring two $\tau$ jets, a cut $E_t^{miss} > $ 40 GeV reduces the QCD background by a factor of about 100, while the efficiency for the signal is 59% at $m_A$ = 500 GeV and 33% at $m_A$ = 300 GeV. For the largest masses the cut can be increased to $E_t^{miss} > $ 60 GeV with a rejection factor against QCD of 180 and an efficiency of 58% for the signal. These numbers are evidently rather sensitive to the CMSJET modelling of the detector response and to the no pile-up hypothesis, but also to the exact $H_{SUSY}$ production mechanism etc. The presence of the forward calorimetry (VFCAL) is particularly important for the $E_t^{miss}$ measurement in this channel as illustrated in Fig. \[fig:4\]b, showing the $E_t^{miss}$ distributions for QCD events with two jets with $E_t>$ 60 GeV for the full $\eta$ range and for the central calorimeters only ($|\eta|<$ 3). The $E_t^{miss}$ distributions are also significantly modified in the presence of event pile-up in the $E_t^{miss}\sim$ 10 - 30 GeV range, which is discussed in a separate note [@higlum], but a cut at $E_t^{miss}>$ 40 GeV is rather safe.
Relative azimuthal distributions
--------------------------------
The two $\tau$ jets in the signal events are predominantly in the back-to-back configuration, especially at high masses, as can be seen from Fig. 5, which shows the $\Delta\phi$ angle in the transverse plane between the two $\tau$ jets in the signal at $m_A$ = 300, 500 and 800 GeV (Figs. 5a, b, c). The $\Delta\phi$ angle for the $Z, \gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$, $t\overline{t}$ and QCD backgrounds is shown in Figs. 5d, e and f. Only the $t\overline{t}$ background component is significantly less back-to-back correlated.
Process $\sigma$ \* BR “$\tau$” jets $E_t^{miss}>$40 GeV $\Delta\phi<175^0$ mass rec.
----------------------------------------------------------------------- ---------------- --------------- --------------------- -------------------- ----------- --
$A,H$ [$\;\rightarrow$]{} $~\tau \tau$, $m_A$=300 GeV 215 fb 574 188 133 70
$tan\beta$ = 15
$A,H$ [$\;\rightarrow$]{} $~\tau \tau$, $m_A$=500 GeV 44.4 fb 329 195 108 66
$tan\beta$ = 20
$Z,\gamma^*$ [$\;\rightarrow$]{} $~\tau \tau$, $m_{\tau\tau}>$130 GeV 3.08 pb 818 258 124 80
QCD jets, $p_t^{q,g} >$ 60 GeV 11.0 $\mu$b 6579 53 46 8
$t\overline{t}$, $W$ [$\;\rightarrow$]{}$~\tau\nu$ 1.74 pb 73 52 46 20
$W+jet$, $W$ [$\;\rightarrow$]{}$~\tau\nu$ 949 pb 410 240 212 20
: Cross section times branching ratio and number of events for $A,H$ [$\;\rightarrow$]{} $~\tau \tau$ at $m_A$ = 300 and 500 GeV and for the background processes for $3 \times 10^4pb^{-1}$. Shown are the number of events after selection of two jets with $E_t>$ 60 GeV and $\tau$ identification in the tracker, the $\Delta\phi$ cut, the $E_t^{miss}$ cut and the reconstruction of the Higgs mass. The two hadrons in the $\tau$ jets are required to be of opposite sign. A reconstruction efficiency of 85% per jet is assumed for $\tau$ jet reconstruction, consistent with the CMSIM evaluation.
\[table2\]
Process $\sigma$ \* BR “$\tau$” jets $E_t^{miss}>$60 GeV $\Delta\phi<175^0$ mass rec.
------------------------------------------------------------------------ ---------------- --------------- --------------------- -------------------- ----------- --
$A,H$ [$\;\rightarrow$]{} $~\tau \tau$, $m_A$=800 GeV 32.8 fb 213 124 57 37
$tan\beta$ = 45
$Z, \gamma^*$ [$\;\rightarrow$]{} $~\tau \tau$, $m_{\tau\tau}>$130 GeV 3.08 pb 286 85 42 30
QCD jets, $p_t^{q,g} >$ 60 GeV 11.0 $\mu$b 1158 5 5 1
$t\overline{t}$, $W$ [$\;\rightarrow$]{}$~\tau\nu$ 2.90 pb 22 13 12 7
$W+jet$, $W$ [$\;\rightarrow$]{}$~\tau\nu$ 949 pb 70 54 48 6
: The same as in Table 2 but for $A,H$ [$\;\rightarrow$]{} $~\tau \tau$ at $m_A$ = 800 GeV and $tan\beta$ = 45 with $E_t^{jet} >$ 100 GeV and $E_t^{miss}>$ 60 GeV.
\[table3\]
------------------------------------------------ -------------------------------- -------------------------------- --------------------------------
$m_A$=300 GeV, $tan\beta$ = 20 $m_A$=500 GeV, $tan\beta$ = 25 $m_A$=800 GeV, $tan\beta$ = 45
230 GeV$<m_{\tau\tau}<$350 GeV 400 GeV$<m_{\tau\tau}<$700 GeV 600 GeV$<m_{\tau\tau}<$ 1 TeV
Signal 67 58 33
$Z, \gamma^*$ [$\;\rightarrow$]{} $~\tau \tau$ 39 49 16
QCD jets 4 4 0.1
$t\overline{t}$ 8 10 4
$W+jet$ 5 9 2
S / B 1.1 0.7 1.4
$S/\sqrt{S+B}$ 5.9 5.0 4.4
$S/\sqrt{B}$ 8.6 6.6 6.7
------------------------------------------------ -------------------------------- -------------------------------- --------------------------------
: Number of events after selection within a mass window cut and statistical significance for $A,H$ [$\;\rightarrow$]{} $~\tau \tau$ at $m_A$ = 300, 500 and 800 GeV and for the background processes for $3 \times 10^4pb^{-1}$. In all cases $tan\beta$ is chosen to be near the limit of observability.
\[table4\]
Higgs mass reconstruction
-------------------------
As is well known, despite at least two escaping neutrinos, the Higgs mass can be approximately reconstructed from the two $\tau$ jets measured in calorimetry and from the $E_t^{miss}$, when the two $\tau$’s are not exactly back-to-back, by projecting the $E_t^{miss}$ vector on the directions of the reconstructed $\tau$ jets. This has been shown in connection with the $A,H$ [$\;\rightarrow$]{} $~\tau \tau$ [$\;\rightarrow$]{}$~l^{\pm} + h^{\mp}$ [@tauhlep] [@hmass] and $A,H$ [$\;\rightarrow$]{} $~\tau \tau$ [$\;\rightarrow$]{}$~e + \mu$ [@tauemu] channels. Figure 6a shows, for example, the reconstructed Higgs mass for $m_A$ = 500 GeV ($tan\beta$ = 20) with the cuts discussed above and without pile-up. An upper limit of $\Delta \phi(\tau-jet_1,\tau-jet_2) <$ 175$^o$ is applied. A Gaussian fit to the mass distribution for A and H (which are almost mass degenerate) yields 58 GeV for the resolution ($\sigma$). The mass resolution can be significantly improved by reducing the upper limit in $\Delta \phi$ between the two $\tau$ jets, i.e. making them less back-to-back, as is shown in Fig. 6b for the same mass. For $\Delta \phi <$ 160$^o$ compared to $\Delta \phi <$ 175$^o$ the resolution improves from 58 to 42 GeV and the tail is reduced. This would however lead to a 55% loss of signal statistics and is not used here, but it shows the potential gain in resolution in conditions where statistics would not be the limiting factor. Figures 6c and 6d show the reconstructed Higgs masses for $m_A$ = 300 GeV ($tan\beta$ = 15) and for $m_A$ = 800 GeV ($tan\beta$ = 45), all for $\Delta \phi <$ 175$^o$. The fitted mass resolutions are 33 GeV and 92 GeV, respectively. For $m_A$ = 800 GeV and $tan\beta$ = 45 the natural Higgs width is 43 GeV, thus instrumental effects still dominate the observed signal width.
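The reconstruction just described can be illustrated with a short numerical sketch. The code below (Python/NumPy) implements one common variant of the procedure, the collinear approximation, in which each neutrino system is taken to be collinear with its visible $\tau$ jet and the two neutrino transverse momenta are obtained by solving a $2\times2$ linear system for the projections of $E_t^{miss}$; it is meant only as an illustration of the idea and is not necessarily the exact prescription used to produce the figures of this note.

```python
import numpy as np

def m_tautau_collinear(jet1, jet2, met):
    """Collinear-approximation mass from two visible tau-jet four-vectors
    (E, px, py, pz) and the missing transverse momentum (METx, METy)."""
    pt1, pt2 = np.hypot(*jet1[1:3]), np.hypot(*jet2[1:3])
    t1, t2 = jet1[1:3] / pt1, jet2[1:3] / pt2
    # Decompose MET along the two tau-jet directions in the transverse plane
    nu1, nu2 = np.linalg.solve(np.column_stack([t1, t2]), met)
    x1, x2 = pt1 / (pt1 + nu1), pt2 / (pt2 + nu2)          # visible momentum fractions
    if x1 <= 0 or x2 <= 0:
        return None                                        # unphysical configuration
    vis = jet1 + jet2
    m_vis = np.sqrt(max(vis[0]**2 - np.dot(vis[1:], vis[1:]), 0.0))
    return m_vis / np.sqrt(x1 * x2)

# Toy example with made-up four-vectors and MET (GeV)
jet1 = np.array([210.0, 150.0, 100.0, 80.0])
jet2 = np.array([190.0, -120.0, -110.0, -60.0])
met  = np.array([40.0, 15.0])
print("reconstructed m_tautau ~", round(m_tautau_collinear(jet1, jet2, met), 1), "GeV")
```

The reconstruction degenerates when the two jets are exactly back-to-back (the $2\times2$ system becomes singular), which is the reason for the $\Delta \phi <$ 175$^o$ requirement applied above.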
Figure 7 summarizes our results on the relative resolution ($\sigma^{gauss}$/$m_H$) for the reconstructed Higgs mass in the $e^{\pm} + \mu^{\mp}$ [@tauemu], $l^{\pm} + h^{\mp}$ [@tauhlep] and $h^+ + h^-$ channels. The resolution is best in the $h^+ + h^-$ channel ($\sim$ 10%) and worst in the $e^{\pm} + \mu^{\mp}$ channel ($\sim$ 25%). More exactly, for $m_A$ = 500 GeV and $tan\beta$ = 20, for instance, the mass resolution is $\simeq$11% for $h^+ + h^-$, $\simeq$16% for $l^{\pm} + h^{\mp}$ and $\simeq$23% for $e^{\pm} + \mu^{\mp}$, always with the same upper limit in $\Delta \phi$. This is due to the fact that the mass resolution is dominated by the precision of the $E_t^{miss}$ measurement, and the resolution is better in the channels with a smaller fraction of energy carried away by neutrinos, i.e. in the case of two hadronic $\tau$ decays. This significantly better mass resolution, and thus possibly the best signal/background ratio (provided the QCD background is reducible below the irreducible $\tau\tau$ background), which was not fully appreciated before, also speaks in favour of these double hadronic final states.
Results
=======
Mass spectra, effects of $E_t^{miss}$ cuts
------------------------------------------
Figure 8a shows the distribution of the reconstructed Higgs mass for $A,H$ [$\;\rightarrow$]{}$~\tau\tau$ at $m_A$ = 500 GeV for $tan\beta$ = 20 above the total background, before any explicit $E_t^{miss}$ cut is applied at this stage. The QCD background dominates and the signal peak is not even visible over the background distribution. Figure 8b shows the same distribution with $E_t^{miss} >$ 40 GeV. The importance of the $E_t^{miss}$ selection, and thus of detector hermeticity in general and the VFCAL in this search, is evident. The remaining background is due to $Z, \gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$, $t\overline{t}$ and $W$+jet events. The potential backgrounds from $b\overline{b}$ events and from WW production with $W$ [$\;\rightarrow$]{}$~\tau\nu$ are found to be negligible. The statistical fluctuations correspond to the expected statistics for $3 \times 10^4pb^{-1}$. The mass distributions before and after the $E_t^{miss}$ cut for a lighter Higgs with $m_A$ = 300 GeV and $tan\beta$ = 15 are shown in Figs. \[fig:9\]a and 9b, and those for a very heavy Higgs with $m_A$ = 800 GeV and $tan\beta$ = 45 are shown in Figs. \[fig:9\]c and 9d. In all of the figures 8a, 8b and 9 we are showing $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ signals close to the observability limit at 3$\times$10$^4$ pb$^{-1}$ (see Table 4 for the exact signal significances). Further away from the observation limit the signals are very spectacular, as visible in fig. 8c for $m_A$ = 500 GeV and $tan\beta$ = 30 and in fig. 8d for $tan\beta$ = 40.
Event rates and signal significance
-----------------------------------
Table 2 gives the summary of the cross section times the branching ratio and the expected numbers of events for $3 \times 10^4pb^{-1}$ for the signal at $m_A$ = 300, 500 GeV and for the background processes, after selection of two $\tau$ jets with $E_t>$60 GeV, with the cuts $E_t^{miss}>$ 40 GeV and $\Delta \phi<$ 175$^0$ and after Higgs mass reconstruction. Table 3 shows the same for $m_A$ = 800 GeV with the harder cuts $E_t^{jet}>$ 100 GeV and $E_t^{miss}>$ 60 GeV. Table 4 summarizes the expected number of events and the statistical significance for $3 \times 10^4pb^{-1}$ for the signal at $m_A$ = 300, 500 and 800 GeV and for the background processes after all the kinematical cuts and applying a window in the reconstructed Higgs mass. An overall reconstruction efficiency of 85% per $\tau$ jet is assumed. This number is confirmed by reconstructing a sample of signal events with the CMSIM package. No possible $1^{st}$ level trigger losses have been included, and they may not be negligible in the $m_A$ = 300 GeV case. For the high mass range investigated here and after the $E_t^{miss}$ cuts, the QCD jet background is well below the irreducible $\tau\tau$ background. This suggests that extending this channel to lower Higgs masses is very promising.
Conclusions and future prospects
================================
We have investigated the possibility to look for the neutral SUSY Higgses A and H in $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^{\pm}$ + $h^{\mp}$ + X with hadronic $\tau$ decay modes. Despite the simplifying assumptions made concerning both the production dynamics (in particular for the QCD background) and the approximations made in the fast simulation of the detector response, the discovery potential for large $A,H$ masses in this channel is clear. Of course, more detailed studies are needed to confirm and consolidate these preliminary results. Let us recall that the $A,H$ [$\;\rightarrow$]{}$~\tau\tau$ channels are the only ones up to now giving access to the $m_{A,H} \sim$ 0.5 - 1 TeV mass range, which is considered the most plausible one for the $A$ and $H$. The $A,H$ [$\;\rightarrow$]{}$~b\overline{b}$ modes may allow it too, but this is even less well understood at present than the $A,H$ [$\;\rightarrow$]{}$~\tau\tau$ channels in CMS.
To summarize, we require the presence of two calorimetric jets of $E_t >$ 60 - 100 GeV in $|\eta| <$ 2.4 for trigger. These thresholds are about adequate for running at 10$^{33}$ cm$^{-2}s^{-1}$ but are too low for 10$^{34}$ cm$^{-2}s^{-1}$, which requires further study. The $\tau$ selection is based here largely on the tracker, requiring a hard isolated single track with $p_t >$ 40 GeV in the jet. This already provides a large rejection factor against QCD jets, allowing a major suppression of this background. The QCD background can be further reduced by a cut on the missing transverse energy. A rejection factor of about 100 is obtained with a cut $E_t^{miss} >$ 40 - 60 GeV thus suppressing the QCD background much below the irreducible $Z, \gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$ background. The effectiveness of these $E_t^{miss}$ cuts at rather modest $E_t^{miss}$ values has however still to be checked with full CMSIM simulations for full reliability. It has also to be evaluated in case of running at 10$^{34}$ cm$^{-2}s^{-1}$ i.e. in presence of pile-up. Figure 10 shows the 5$\sigma$ significance discovery contours for SUSY Higgses as a function of $m_A$ and $tan\beta$ for 3$\times$10$^4$ pb$^{-1}$ (without pile-up) assuming no stop mixing. The discovery range for the $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^{\pm}$ + $h^{\mp}$ + X channel extends down to $tan\beta \sim$ 15 at $m_A$ = 300 GeV and down to $tan\beta \sim$20 at $m_A$ = 500 GeV. For very heavy Higgs at $m_A$ = 800 GeV, $tan\beta$ values of $\sim$ 45 can be probed. Even for these boundary $tan\beta$ values the mass peak may be recognized above the background distribution (figs. 8 and 9); away from the boundary the mass peak can be rather spectacular. These $\tau\tau$ [$\;\rightarrow$]{}$~h^{\pm}$ + $h^{\mp}$ modes provide the best mass resolution at same $m_H$ and $tan\beta$ when compared to other $\tau\tau$ final states (figs. 8c and 8d).
The 5$\sigma$ discovery limit of Fig. 10 is a preliminary result, optimized for high masses, and based on a fast simulation of the CMS tracker and calorimeter and on a $\tau$ selection exploiting only momentum measurements in the tracker. An improvement in the QCD background rejection in the low mass range ($m_H <$ 300 GeV) can be expected by including the calorimeter $\tau$ selection developed in [@tausel], exploiting the collimation of the hadronic $\tau$ signal in the ECAL and the $\tau$ isolation in the ECAL+HCAL. A study of the possibility to use impact parameter $\tau$-tagging for the two high $p_t$ tracks to reduce the QCD background is in progress, applying a full track finding procedure. The $\tau$-tagging is not as easy as b-tagging; however, the impact parameter selection is here to be applied on fast tracks where multiple scattering is minimal and where the best impact parameter resolution is expected, with an asymptotic $\sigma_{ip} \sim$ 20 $\mu m$ [@tdr:tracker] whilst $<c\tau > \sim$ 87 $\mu m$ for $\tau$. These improved $\tau$ selection criteria, i.e. QCD jet suppression methods, are surely necessary to extend the low-mass reach of this channel towards $m_H \sim$ 150 GeV, but they can also be regarded as alternatives or complements to the $E_t^{miss}$ selections, if it turned out that these were not as effective in the final detector as expected here.
Exploiting the $b\overline{b}H_{SUSY}$ associated production channels with b-tagging opens still other possibilities, as it is the way to reduce efficiently the $Z, \gamma^*$ [$\;\rightarrow$]{}$~\tau\tau$ background. B-tagging will further reduce the QCD background as well. Extending the Higgs search in this channel to high luminosity running, and in particular the b-tagging [@tdr:tracker] in the associated production channels, requires a separate study.
Let us stress again that the $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ modes are the only ones (until now) which give access to the most difficult region of the $m_A$, $tan\beta$ plot (150 GeV $< m_A <$ 300 GeV, $tan\beta <$ 10). It is clear that the low-mass, low-$tan\beta$ reach in this channel will ultimately be trigger limited. Every effort should nonetheless be made so that the $m_A <$ 300 GeV region could be explored at $L >$ 10$^{33}$ cm$^{-2}s^{-1}$ with this $A, H$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~h^{\pm}$ + $h^{\mp}$ + X channel and the possible trigger rate and $E_t^{jet}$ threshold limitations properly assessed.
R. Kinnunen and A. Nikitenko, “Study of $\tau$ jet identification in CMS”.
R. Kinnunen and A. Nikitenko, “Study of $H_{SUSY}$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~l^{\pm}+h^{\mp} + E_t^{miss}$ in CMS”.
S. Lehti, R. Kinnunen and J. Tuominiemi, “Study of $h, H, A$ [$\;\rightarrow$]{}$~\tau\tau$ [$\;\rightarrow$]{}$~e\mu$ in the CMS detector”.
R. Kinnunen and D. Denegri, “B-tagging with Impact Parameter in the CMS Tracker”.
T. Sjostrand, CERN-TH.6488/92; CERN-TH.7112/93.
J.F. Gunion, A. Stange and S. Willenbrock, “Weakly-coupled Higgs Bosons”.
A. Djouadi, J. Kalinowski and M. Spira, “HDECAY: a Program for Higgs Boson Decays in the Standard Model and its Supersymmetric Extension”.
A. Djouadi.
S. Abdullin, A. Khanov and N. Stepanov, “CMSJET”.
A. Nikitenko et al., “Simulation of the CMS Level-1 Calorimeter Trigger”.
R. Kinnunen and A. Nikitenko.
R. Kinnunen, “Reconstruction of $m_A$ in $A,H,h$ [$\;\rightarrow$]{}$~l + \tau jet + E_t^{miss}$”.
---
abstract: 'Corrections to scaling in the 3D Ising model are studied based on non–perturbative analytical arguments and Monte Carlo (MC) simulation data for different lattice sizes $L$. Analytical arguments show the existence of corrections with the exponent $(\gamma-1)/\nu \approx 0.38$, the leading correction–to–scaling exponent being $\omega \le (\gamma-1)/\nu$. A numerical estimation of $\omega$ from the susceptibility data within $40 \le L \le 2048$ yields $\omega=0.25(33)$. It is consistent with the statement $\omega \le (\gamma-1)/\nu$, as well as with the value $\omega = 1/8$ of the GFD theory. We reconsider the MC estimation of $\omega$ from smaller lattice sizes to show that it does not lead to conclusive results, since the obtained values of $\omega$ depend on the particular method chosen. In particular, estimates ranging from $\omega =1.274(72)$ to $\omega=0.18(37)$ are obtained by four different finite–size scaling methods, using MC data for thermodynamic average quantities, as well as for partition function zeros. We discuss the influence of $\omega$ on the estimation of exponents $\eta$ and $\nu$.'
author:
- |
J. Kaupužs$^{1,2}$ [^1] , R. V. N. Melnik$^3$, J. Rimšāns$^{1,2,3}$\
$^1$Institute of Mathematics and Computer Science, University of Latvia\
29 Raina Boulevard, LV–1459 Riga, Latvia\
$^2$ Institute of Mathematical Sciences and Information Technologies,\
University of Liepaja, 14 Liela Street, Liepaja LV–3401, Latvia\
$^3$ The MS2 Discovery Interdisciplinary Research Institute,\
Wilfrid Laurier University, Waterloo, Ontario, Canada, N2L 3C5
title: |
**Corrections to finite–size scaling in the 3D Ising model based on non–perturbative approaches\
and Monte Carlo simulations**
---
**Keywords:** Ising model, corrections to scaling, non–perturbative methods, Feynman diagrams, Monte Carlo simulation
Introduction {#intro}
============
The critical exponents of the three–dimensional (3D) Ising universality class have been a subject of extensive analytical as well as Monte Carlo (MC) studies during many years. The results of the standard perturbative renormalization group (RG) methods are well known [@Amit; @Ma; @Justin; @Kleinert; @PV]. An alternative analytical approach has been proposed in [@K_Ann01] and further analyzed in [@K2012], where this approach is called the GFD (Grouping of Feynman Diagrams) theory. A review of MC work till 2001 is provided in [@HasRev]. More recent papers are [@Has1; @Has2; @GKR11; @KMR_2011; @KMR_2013].
In this paper we will focus on the exponent $\omega$, which describes the leading corrections to scaling. A particular interest in this subject is caused by recent challenging non-perturbative results reported in [@K2012], showing that $\omega \le (\gamma-1)/\nu$ holds in the $\varphi^4$ model, based on a rigorous proof of a certain theorem. The scalar 3D $\varphi^4$ model belongs to the 3D Ising universality class with $(\gamma-1)/\nu \approx 0.38$. Therefore, $\omega$ is expected to be essentially smaller than the values of about $0.8$ predicted by standard perturbative methods and currently available MC estimations. The results in [@K2012] are fully consistent with the predictions of the alternative theoretical approach of [@K_Ann01], from which $\omega=1/8$ is expected. We have performed a Monte Carlo analysis of the standard 3D Ising model, using our data for very large lattice sizes $L$ up to $L=2048$, to clarify whether $\omega$, extracted from such data, can be consistent with the results of [@K2012] and [@K_Ann01]. Since our analysis supports this possibility, we have further addressed the related question of how a decrease in $\omega$ influences the MC estimation of the critical exponents $\eta$ and $\nu$. We have also tested different finite–size scaling methods of estimating $\omega$ from smaller lattice sizes to check whether such methods always give $\omega$ consistent with $0.832(6)$, as one could expect from the references in [@Has1].
Models with the so-called improved Hamiltonians are often considered instead of the standard Ising model for a better estimation of the critical exponents [@Has1; @Has2]. The basic idea of this approach is to find such Hamiltonian parameters, for which the leading correction to scaling vanishes. However, this correction term has to be large enough and well detectable for the estimation of $\omega$. So, this idea is not very useful in our case.
Analytical arguments {#sec:analytical}
====================
In [@K2012], the $\varphi^4$ model in the thermodynamic limit has been considered, for which the leading singular part of specific heat $C_V^{sing}$ can be expressed as $$C_V^{sing} \propto \xi^{1/\nu} \left( \int_{k<\Lambda'} [G({\bf k})- G^*({\bf k})] d {\bf k} \right)^{sing} \;,
\label{eq:CVsing}$$ assuming the power–law singularity $\xi \sim t^{-\nu}$ of the correlation length $\xi$ at small reduced temperature $t \to 0$. Here $G({\bf k})$ is the Fourier–transformed two–point correlation function, and $G^*({\bf k})$ is its value at the critical point. This expression is valid for any positive $\Lambda' < \Lambda$, where $\Lambda$ is the upper cut-off parameter of the model, since the leading singularity is provided by small wave vectors with the magnitude $k = \mid {\bf k} \mid \to 0$ and not by the region $\Lambda' \le k \le \Lambda$. In other words, $C_V^{sing}$ is independent of the constant $\Lambda'$.
The leading singularity of specific heat in the form of $C_V^{sing} \propto (\ln \xi)^{\lambda} \xi^{\alpha/\nu}$ and the two–point correlation function in the asymptotic form of $G({\bf k}) = \sum_{\ell \ge 0} \xi^{(\gamma - \theta_{\ell})/\nu} g_{\ell}(k \xi)$, $G^*({\bf k}) = \sum_{\ell \ge 0} b_{\ell} k^{(-\gamma + \theta_{\ell})/\nu}$ with $\theta_0=0$ and $\theta_{\ell}>0$ for $\ell \ge 1$ have been considered in [@K2012]. These expressions are consistent with the conventional scaling hypothesis, $g_{\ell}(k \xi)$ being the scaling functions. The exponent $\lambda$ is responsible for possible logarithmic correction in specific heat, whereas the usual power–law singularity is recovered at $\lambda=0$.
According to the theorem proven in [@K2012], the two–point correlation function of the $\varphi^4$ model contains a correction with the exponent $\theta_{\ell} = \gamma + 1 -\alpha - d \nu$, if $C_V^{sing}$ can be calculated from (\[eq:CVsing\]), applying the scaling forms considered here, if the result is $\Lambda'$–independent, and if the condition $\gamma + 1 -\alpha - d \nu >0$ is satisfied for the critical exponents. Applying the known hyperscaling hypothesis $\alpha + d \nu = 2$, it yields $\theta_{\ell} = \gamma -1$ for $\gamma >1$. Apparently, the conditions of the theorem listed here are satisfied for the scalar 3D $\varphi^4$ model. Since the critical singularities are provided by long–wave fluctuations, the condition of $\Lambda'$–independence is generally meaningful. The assumption $\xi \sim t^{-\nu}$ (with no logarithmic correction) and the scaling forms considered here (with $\lambda =0$), as well as the relation $\gamma + 1 -\alpha - d \nu >0$ (or $\gamma >1$ according to the hyperscaling hypothesis), are correct for the scalar 3D $\varphi^4$ model, according to the current knowledge about the critical phenomena.
The correction with the exponent $\theta_{\ell}$ corresponds to the one with $\omega_{\ell} =\theta_{\ell}/\nu$ in the finite–size scaling. In such a way, the analytical arguments discussed here predict the existence of a finite–size correction with the exponent $(\gamma-1)/\nu$ in the scalar 3D $\varphi^4$ model. As discussed in [@K2012], nontrivial corrections tend to be cancelled in the 2D Ising model, in such a way that only trivial ones with integer $\theta_{\ell}$ are usually observed. However, there is no reason to assume such a scenario in the 3D case. Therefore, the existence of corrections with the exponent $(\gamma-1)/\nu$ is expected in the 3D Ising model, since it belongs to the same universality class as the 3D $\varphi^4$ model. Because this correction is not necessarily the leading one, the prediction is $\omega \le \omega_{\mathrm{max}}$, where $\omega_{\mathrm{max}} =(\gamma-1)/\nu$ is the upper bound for the leading correction–to–scaling exponent $\omega$. Using the widely accepted estimates $\gamma \approx 1.24$ and $\nu \approx 0.63$ [@Justin] for the 3D Ising model, we obtain $\omega_{\mathrm{max}} \approx 0.38$. The prediction of the GFD theory [@K_Ann01] is $\gamma=5/4$, $\nu =2/3$ and, therefore, $\omega_{\mathrm{max}} = 0.375$. Thus, we can state that in any case $\omega_{\mathrm{max}}$ is about $0.38$. The value of $\omega$ is expected to be $1/8$ according to the GFD theory considered in [@K_Ann01; @K2012].
MC estimation of $\omega$ from finite–size scaling
==================================================
The case of very large lattice sizes $L \le 2048$ {#sec:large}
-------------------------------------------------
We have simulated the 3D Ising model on a simple cubic lattice with periodic boundary conditions. The Hamiltonian $H$ of the model is given by $$H/T = - \beta \sum_{\langle ij \rangle} \sigma_i \sigma_j \;,$$ where $T$ is the temperature measured in energy units, $\beta$ is the coupling constant and $\langle ij \rangle$ denotes the pairs of neighboring spins $\sigma_i = \pm 1$. The MC simulations have been performed with the Wolff single cluster algorithm [@Wolff], using its parallel implementation described in [@KMR_2010]. An iterative method, introduced in [@KMR_2010], has been used here to find the pseudocritical couplings $\widetilde{\beta}_c(L)$ corresponding to a certain value $U=1.6$ of the ratio $U=\langle m^4 \rangle / \langle m^2 \rangle^2$, where $m$ is the magnetization per spin. We have evaluated by this method the susceptibility $\chi = L^3 \langle m^2 \rangle$ and the derivative $\partial Q /\partial \beta$ at $\beta = \widetilde{\beta}_c(L)$, where $Q=1/U$. The results for $16 \le L \le 1536$ are already reported in Tab. 1 of our earlier paper [@KMR_2011]. We have extended the simulations to lattice sizes $L=1728$ and $L=2048$, using approximately the same number of MC sweeps as for $L=1536$ in [@KMR_2011]. Thus, Tab. 1 of [@KMR_2011] can now be completed with the new results presented in Tab. \[tab1\] here.
  L      $\widetilde \beta_c$   $\chi/L^2$   $10^{-3} \partial Q /\partial \beta$
  ------ ---------------------- ------------ --------------------------------------
  2048   0.2216546252(66)       1.1741(27)   151.1(1.1)
  1728   0.2216546269(94)       1.1882(20)   116.98(87)
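For completeness, the cluster update underlying these simulations can be summarized in a few lines. The sketch below (Python/NumPy) is a minimal serial implementation of the Wolff single-cluster update for the 3D Ising model with periodic boundary conditions; it is only an illustration and not the parallel implementation of [@KMR_2010], and the small lattice size and the coupling value in the usage example are chosen purely for demonstration.

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff single-cluster update of a periodic L^3 Ising configuration."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)                  # bond-activation probability
    seed = tuple(rng.integers(0, L, size=3))
    cluster_spin = spins[seed]
    stack, cluster = [seed], {seed}
    while stack:
        x, y, z = stack.pop()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
            if nb not in cluster and spins[nb] == cluster_spin and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:                               # flip the whole cluster
        spins[site] = -spins[site]
    return len(cluster)

# Toy usage on a small lattice, with beta near the quoted pseudocritical couplings
rng = np.random.default_rng(1)
L, beta = 16, 0.2216546
spins = rng.choice([-1, 1], size=(L, L, L))
for _ in range(200):
    wolff_update(spins, beta, rng)
print("magnetization per spin:", spins.mean())
```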
The exponent $\omega$ describes corrections to the asymptotic finite–size scaling. In particular, for the susceptibility at $\beta = \widetilde{\beta}_c(L)$ we have $$\chi \propto L^{2-\eta} \left( 1 + a L^{-\omega} + o \left( L^{-\omega} \right) \right) \;.
\label{eq:chi}$$ We define the effective exponent $\eta_{\mathrm{eff}}(L)$ as the mean slope of the $-\ln \chi$ vs $\ln L$ plot, evaluated by fitting the data within $[L/2,2L]$. It behaves asymptotically as $\eta_{\mathrm{eff}}(L)=\eta + \mathcal{O} \left( L^{-\omega} \right)$. It has been mentioned in [@KMR_2011] that $\omega$ might be as small as $1/8$, since the plot of the effective exponent $\eta_{\mathrm{eff}}$ vs $L^{-1/8}$ looks rather linear for large lattice sizes (see Fig. 6 in [@KMR_2011]). This observation is also confirmed by the extended data presented here, as can be seen from Fig. \[fig1\] (left).
![The $\eta_{\mathrm{eff}}$ vs $L^{-1/8}$ (left) and the $\Phi_2(L)$ vs $L^{-1/8}$ (right) plots. Straight lines show the linear fits for large enough lattice sizes $L$.[]{data-label="fig1"}](eta_eff.eps "fig:"){width="48.50000%"} ![The $\eta_{\mathrm{eff}}$ vs $L^{-1/8}$ (left) and the $\Phi_2(L)$ vs $L^{-1/8}$ (right) plots. Straight lines show the linear fits for large enough lattice sizes $L$.[]{data-label="fig1"}](chi_ratio.eps "fig:"){width="48.50000%"}
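In practice, $\eta_{\mathrm{eff}}(L)$ is obtained from a local log-log fit. A minimal sketch of this estimator (Python/NumPy, shown here with made-up susceptibility values rather than the measured data) is the following.

```python
import numpy as np

def eta_eff(L_all, chi_all, L):
    """Effective exponent from the sizes falling within [L/2, 2L]:
    chi ~ L^(2 - eta)  =>  eta_eff = 2 - d ln(chi) / d ln(L)."""
    mask = (L_all >= L / 2) & (L_all <= 2 * L)
    slope = np.polyfit(np.log(L_all[mask]), np.log(chi_all[mask]), 1)[0]
    return 2.0 - slope

# Made-up data mimicking chi ~ L^(2-eta) * (1 + a L^(-1/8)) with eta = 0.036, a = 0.3
L_all = np.array([40, 48, 64, 80, 96, 128, 160, 192, 256, 384, 512, 768, 1024], float)
chi_all = L_all**(2 - 0.036) * (1.0 + 0.3 * L_all**(-0.125))
for L in (80, 160, 320, 512):
    print(L, round(eta_eff(L_all, chi_all, L), 4))
```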
An estimate of $\omega$ can be obtained by fitting the $\eta_{\mathrm{eff}}(L)$ data. Here we use a more direct method, which gives similar, but slightly more accurate results. We consider the ratio $\Phi_b(L) = b^{-4} \chi(bL)/\chi(L/b)$ at $\beta = \widetilde{\beta}_c(L)$, where $b$ is a constant. According to (\[eq:chi\]), $\Phi_b(L)$ behaves as $$\Phi_b(L) = A + B L^{-\omega}
\label{eq:phi}$$ at $L \to \infty$, where $A = b^{-2 \eta}$ and $B = a b^{-2 \eta} \left( b^{-\omega} -b^{\omega} \right)$. The correction amplitude $B$ is larger for a larger $b$ value, whereas a smaller $b$ value allows us to obtain more data points for $\Phi_b(L)$. The actual choice $b=2$ is found to be optimal for our data. Like the $\eta_{\mathrm{eff}}(L)$ vs $L^{-1/8}$ plot, also the $\Phi_2(L)$ vs $L^{-1/8}$ plot can be well approximated by a straight line for large enough lattice sizes, as shown in Fig. \[fig1\]. Thus, $\omega$ could be as small as $1/8$.
  $L_{\mathrm{min}}$   $\omega$    $\chi^2/\mathrm{d.o.f.}$
  -------------------- ----------- --------------------------
  32                   1.055(76)   1.07
  40                   0.99(11)    1.09
  48                   0.99(16)    1.16
  54                   1.02(22)    1.23
  64                   0.76(29)    1.14
  80                   0.25(33)    0.76
  96                   0.06(38)    0.74
  108                  0.27(46)    0.70
  128                  0.11(59)    0.75
We have fit the quantity $\Phi_2(L)$ to (\[eq:phi\]) within $L \in [L_{\mathrm{min}},1024]$ (estimated from the $\chi/L^2$ data within $L \in [L_{\mathrm{min}}/2,2048]$) to evaluate $\omega$. The results are collected in Tab. \[tab2\]. The estimated $\omega$ values are essentially decreased for $L_{\mathrm{min}} \ge 80$ as compared to smaller $L_{\mathrm{min}}$ values. Moreover, the quality of fits is remarkably improved in this case, i. e., the values of $\chi^2$ of the fit per degree of freedom ($\chi^2/\mathrm{d.o.f.}$) become smaller. Note that $L_{\mathrm{min}} = 80$ corresponds to the fit interval for $\Phi_2(L)$ in Fig. \[fig1\], where the data are well consistent with $\omega=1/8$. From a formal point of view, $\omega = 0.25(33)$ at $L_{\mathrm{min}} = 80$ can be considered as the best estimate from our data, since it perfectly agrees with the results for $L_{\mathrm{min}} > 80$ and has the minimal statistical error within $L_{\mathrm{min}} \ge 80$. The estimate $\omega = 0.06(38)$ at $L_{\mathrm{min}} =96$ most clearly shows the deviation below the usually accepted values at about $0.8$, e. g., $\omega = 0.832(6)$ reported in [@Has1]. Our estimation is fully consistent with the analytical arguments in Sec. \[sec:analytical\], since all our $\omega$ values for $L_{\mathrm{min}} \ge 80$ are smaller than $\omega_{\mathrm{max}} \approx 0.38$ and also well agree with $1/8$.
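The fits reported in Tab. \[tab2\] amount to a three-parameter nonlinear least-squares problem. A minimal sketch of such a fit (Python/SciPy, with synthetic $\Phi_2(L)$ values generated only for illustration; the actual analysis uses the measured data and their statistical errors) is the following.

```python
import numpy as np
from scipy.optimize import curve_fit

def phi_model(L, A, B, omega):
    return A + B * L**(-omega)

# Synthetic Phi_2(L) values mimicking A + B L^(-omega) with omega = 1/8 plus noise
rng = np.random.default_rng(0)
L = np.array([80, 96, 108, 128, 160, 192, 216, 256, 320, 384, 432, 512, 768, 1024], float)
sigma = np.full_like(L, 0.002)
phi = phi_model(L, 0.95, 0.10, 0.125) + rng.normal(0.0, sigma)

popt, pcov = curve_fit(phi_model, L, phi, p0=[1.0, 0.1, 0.3],
                       sigma=sigma, absolute_sigma=True)
omega, omega_err = popt[2], np.sqrt(pcov[2, 2])
chi2_dof = np.sum(((phi - phi_model(L, *popt)) / sigma)**2) / (len(L) - 3)
print("omega = %.3f +/- %.3f,  chi^2/d.o.f. = %.2f" % (omega, omega_err, chi2_dof))
```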
Unfortunately, the statistical accuracy of this estimation is too low to rule out the possibility that the drop of $\omega$ to smaller values at $L_{\mathrm{min}} \ge 80$ is caused by statistical errors in the data. However, the decrease in $\omega$ for large enough lattice sizes is strongly supported by the theorem discussed in Sec. \[sec:analytical\]. Note also that the recent MC analysis of the 2D $\varphi^4$ model [@KMR_14] is consistent with this theorem. These facts make our MC estimation plausible.
Note that there exist many quantities that scale asymptotically as $A + B L^{-\omega}$ with different values of the coefficients $A$ and $B$ — see, e. g., [@Hasenbusch; @Has1], as well as the examples in the next section. In principle, all of them can be used to estimate $\omega$. However, it is possible that the leading correction term $B L^{-\omega}$ for a subset of such quantities is too small compared to the statistical errors and is therefore not well detectable at large lattice sizes. This means that a correction with a small $\omega$ of about $1/8$ will probably not be detected by MC analysis in many cases, but this still does not imply that such a correction does not exist. Thus, it is sufficient to demonstrate clearly that such a correction exists in one of the cases. Our MC analysis shows that $\Phi_2(L)$ is an appropriate quantity, for which corrections, i. e., variations in $\Phi_2(L)$, are well detectable even for very large values of $L$. Moreover, it suggests that a correction with an $\omega$ as small as $1/8$ very likely exists here.
Different estimates from the data for smaller lattice sizes {#sec:difomega}
-----------------------------------------------------------
The quality of the fits with $L_{\mathrm{min}} < 80$ is considerably improved if the 5 data points for the largest lattice sizes are discarded, i. e., if $\Phi_2(L)$ is fit within $L \in [L_{\mathrm{min}},432]$. Choosing, in addition, not too large values of $L_{\mathrm{min}}$, we obtain formally good (i. e., provided by fits with sufficiently small $\chi^2/\mathrm{d.o.f.}$ values) and stable estimates from considerably smaller lattice sizes than those in Sec. \[sec:large\]. These are presented in Tab. \[tab3\]. The estimate $\omega = 1.171(96)$ at $L_{\mathrm{min}}=32$ is accepted as the best one from this reduced data set, since it agrees perfectly with the results for $L_{\mathrm{min}}>32$ and has the smallest statistical error.
  $L_{\mathrm{min}}$   $\omega$    $\chi^2/\mathrm{d.o.f.}$
  -------------------- ----------- --------------------------
  32                   1.171(96)   0.77
  40                   1.17(14)    0.83
  48                   1.29(22)    0.84

  : Estimates of $\omega$ from fits of $\Phi_2(L)$ to (\[eq:phi\]) within the reduced range $L \in [L_{\mathrm{min}},432]$.[]{data-label="tab3"}
We have tested another finite–size scaling method. Based on our simulations discussed in [@KMR_2013], we have evaluated $U=U(L)$ at the pseudocritical coupling $\hat{\beta}_c(L)$, corresponding to the maximum of specific heat $C_V$. It scales as $$U(L) = {\cal A} + {\cal B} L^{-\omega}
\label{eq:UL}$$ at large $L$. This method is similar in spirit to the one used by Hasenbusch for the 3D $\varphi^4$ model in [@Hasenbusch]. The only difference is that another pseudocritical coupling (corresponding to a certain value of $Z_a/Z_p$, where $Z_p$ and $Z_a$ are the partition functions for lattices with periodic and antiperiodic boundary conditions, respectively) has been used in [@Hasenbusch]. We have found that our $U(L)$ data provide a good fit to (\[eq:UL\]) within $8 \le L \le 384$, where $L=384$ is similar to the maximal size $L=360$ simulated in [@Has1]. These data are listed in Tab. \[tab4\], and the fit results are presented in Tab. \[tab5\].
  $L$    $\hat{\beta}_c$    $U$
  ------ ------------------ -------------
  384    0.22167526(52)     1.1884(62)
  320    0.22168192(69)     1.1901(63)
  256    0.22169312(76)     1.1937(50)
  192    0.2217149(10)      1.1940(42)
  160    0.2217347(14)      1.1951(44)
  128    0.2217742(16)      1.1831(32)
  96     0.2218366(24)      1.1917(33)
  80     0.2219002(32)      1.1885(32)
  64     0.2220057(42)      1.1888(30)
  48     0.2221987(58)      1.1930(27)
  40     0.2223761(76)      1.1933(26)
  32     0.222659(10)       1.1983(26)
  24     0.223195(12)       1.2035(19)
  20     0.223686(13)       1.2051(16)
  16     0.224443(15)       1.2121(13)
  12     0.225813(16)       1.22147(93)
  10     0.226903(18)       1.23159(86)
  8      0.228567(20)       1.24474(64)

  : The pseudocritical coupling $\hat{\beta}_c(L)$, corresponding to the maximum of the specific heat, and the values of $U(L)$ evaluated at this coupling.[]{data-label="tab4"}
  $L_{\mathrm{min}}$   $\omega$    $\chi^2/\mathrm{d.o.f.}$
  -------------------- ----------- --------------------------
  8                    1.247(73)   0.87
  10                   1.31(12)    0.90
  12                   1.24(16)    0.94
  16                   1.46(29)    0.95

  : Estimates of $\omega$ from fits of $U(L)$ to (\[eq:UL\]) within $L \in [L_{\mathrm{min}},384]$.[]{data-label="tab5"}
The estimate $\omega=1.247(73)$ at $L_{\mathrm{min}}=8$ seems to be the best one, as it has the smallest statistical error, a good fit quality, and it agrees perfectly with the results for $L_{\mathrm{min}}>8$. This value disagrees (the discrepancy is $5.7$ standard deviations) with the best estimate $\omega = 0.832(6)$ of [@Has1], obtained by a different finite–size scaling method. It agrees well with the other value $\omega = 1.171(96)$ reported here.
Searching for a different method, we have evaluated the Fisher zeros of the partition function from MC simulations with the Wolff single cluster algorithm, following the method described in [@GKR11]. The results for $4 \le L \le 72$ have been reported in [@GKR11]. We have performed high statistics simulations (with MC measurements after every $\max\{2,L/4\}$ Wolff clusters, omitting $10^6$ measurements from the beginning of each simulation run, and with a total of $5 \times 10^8$ measurements used in the analysis for each $L$) for $4 \le L \le 128$. Two different pseudo-random number generators, discussed and tested in [@KMR_2011], have been used to verify that the results agree within error bars of about one or, sometimes, two standard deviations. Considering $\beta = \eta + i \xi$ as a complex number, the results for the first Fisher zero $\mathrm{Re} \, u^{(1)} + i \, \mathrm{Im} \, u^{(1)}$ in terms of $u=\exp(-4 \beta)$ are reported in Tab. \[tabzero1\]. Our values are obtained by evaluating $R = \langle \cos(\xi E) \rangle_{\eta} + i \langle \sin(\xi E) \rangle_{\eta}$ (where $E$ is the energy) with the histogram reweighting method and minimizing $\mid R \mid$ (see [@GKR11]). Reliable results are ensured by the fact that, for each $L$, the simulation is performed at a coupling $\beta_{\mathrm{sim}}$ which is close to $\mathrm{Re} \beta^{(1)}$ – see Tab. \[tabzero1\]. We achieved this by using the results of [@GKR11] and finite–size extrapolations.
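The essential step — reweighting the energy histogram to a complex coupling and minimizing $\mid R \mid$ — can be sketched in Python as follows. The energy samples and the sign convention of the Boltzmann weight are assumptions of this illustration and have to be adapted to the actual simulation setup.

``` python
import numpy as np
from scipy.optimize import minimize

def first_fisher_zero(E_samples, beta_sim, beta_guess):
    """Locate a zero of the partition function in the complex beta plane,
    beta = eta + i*xi, by minimizing |R(eta, xi)| with
    R = <cos(xi*E)>_eta + i <sin(xi*E)>_eta, where <.>_eta is obtained by
    reweighting the samples measured at beta_sim to the real coupling eta.
    A Boltzmann weight proportional to exp(beta*E) is assumed here; flip the
    sign of dE if the opposite convention is used."""
    E = np.asarray(E_samples, dtype=float)

    def absR(p):
        eta, xi = p
        dE = (eta - beta_sim) * E
        w = np.exp(dE - dE.max())          # reweighting factors (overflow-safe)
        re = np.sum(w * np.cos(xi * E)) / np.sum(w)
        im = np.sum(w * np.sin(xi * E)) / np.sum(w)
        return np.hypot(re, im)

    res = minimize(absR, x0=np.asarray(beta_guess, dtype=float),
                   method="Nelder-Mead")
    return complex(res.x[0], res.x[1])
```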
  $L$    $\beta_{\mathrm{sim}}$   $\mathrm{Re} \beta^{(1)}$   $\mathrm{Re} \, u^{(1)}$   $\mathrm{Im} \, u^{(1)}$
  ------ ------------------------ --------------------------- -------------------------- --------------------------
  4      0.2327517                0.2327392(37)               0.3842870(59)              -0.0877415(55)
  6      0.228982187              0.2289856(28)               0.3975550(44)              -0.0454038(44)
  8      0.22674832               0.2267531(27)               0.4027150(44)              -0.0285905(42)
  12     0.224558048              0.2245557(17)               0.4070191(28)              -0.0149314(25)
  16     0.223560276              0.2235605(12)               0.4088085(19)              -0.0094349(17)
  24     0.22268819               0.22268780(72)              0.4103176(12)              -0.0049422(11)
  32     0.222317896              0.22231846(49)              0.41094218(81)             -0.00312478(84)
  48     0.22200815               0.22200835(26)              0.41146087(43)             -0.00163982(49)
  64     0.221880569              0.22188039(17)              0.41167349(29)             -0.00103825(37)
  96     0.2217737                0.22177375(12)              0.41185008(21)             -0.00054521(19)
  128    0.22173025               0.221730228(83)             0.41192200(14)             -0.00034552(14)

  : The first Fisher zero in terms of $u=\exp(-4 \beta)$, together with the simulation coupling $\beta_{\mathrm{sim}}$ and $\mathrm{Re} \beta^{(1)}$.[]{data-label="tabzero1"}
We have also estimated the second zeros for $L=4, 32, 64$ from different simulation runs – see Tab. \[tabzero2\].
  $L$    $\beta_{\mathrm{sim}}$   $\mathrm{Re} \beta^{(2)}$   $\mathrm{Re} \, u^{(2)}$   $\mathrm{Im} \, u^{(2)}$
  ------ ------------------------ --------------------------- -------------------------- --------------------------
  4      0.2464072                0.246484(18)                0.344470(27)               -0.143307(23)
  32     0.22313686               0.223169(15)                0.409529(25)               -0.004891(25)
  64     0.222166355              0.2221781(85)               0.411182(14)               -0.001616(13)

  : The second Fisher zero for $L=4, 32, 64$.[]{data-label="tabzero2"}
Our results in Tabs. \[tabzero1\] and \[tabzero2\] are reasonably consistent with those of [@GKR11], but are more accurate and include larger lattice sizes. As in [@GKR11], the results for the second zeros are much less accurate than those for the first zeros. Therefore only the latter are used here in the analysis, in terms of the ratios $\Psi_1(L) = \mathrm{Im} \, u^{(1)}(L) / (\mathrm{Re} \, u^{(1)}(L) - u_c )$ and $\Psi_2(L) = \mathrm{Im} \, u^{(1)}(L) / \mathrm{Im} \, u^{(1)}(L/2)$, which behave asymptotically as $A + B L^{-\omega}$ at $L \to \infty$. Here $u_c = \exp(-4 \beta_c)$ is the critical $u$ value, corresponding to the critical coupling $\beta_c$. The estimation of the correction–to–scaling exponent $\omega$ from fits of $\Psi_1(L)$ to $A + B L^{-\omega}$ has been considered in [@GKR11], assuming the known approximate value $0.2216546$ of $\beta_c$. The use of $\Psi_2(L)$ instead of $\Psi_1(L)$ is another method, which has the advantage that it does not require knowledge of the critical coupling $\beta_c$. However, a disadvantage is that the data for two sizes, $L$ and $L/2$, are necessary for one value of $\Psi_2(L)$. The values of $\Psi_1(L)$ and $\Psi_2(L)$ are listed in Tab. \[tabPsi\]. The standard errors of $\Psi_1(L)$ are calculated by the jackknife method [@MC], thus taking into account the statistical correlations between $\mathrm{Re} \, u^{(1)}$ and $\mathrm{Im} \, u^{(1)}$. As in [@GKR11], the errors due to the uncertainty in $\beta_c$ are ignored, assuming that $\beta_c = 0.2216546$ holds with a high enough accuracy. According to [@KMR_2011], this $\beta_c$ value is likely correct within error bars of about $\pm 3 \times 10^{-8}$, which justifies the present estimation.
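The ratio $\Psi_1$ and its jackknife error, with the correlations between $\mathrm{Re} \, u^{(1)}$ and $\mathrm{Im} \, u^{(1)}$ taken into account, can be computed along the lines of the following sketch; the per-block values below are random placeholders centered at the $L=128$ entries of Tab. \[tabzero1\], used purely for illustration.

``` python
import numpy as np

def psi1_jackknife(re_blocks, im_blocks, u_c):
    """Psi_1 = Im u^(1) / (Re u^(1) - u_c) with a jackknife error that keeps
    the statistical correlation between Re u^(1) and Im u^(1): both block
    averages are recomputed on each leave-one-out sample."""
    re = np.asarray(re_blocks, dtype=float)
    im = np.asarray(im_blocks, dtype=float)
    n = len(re)
    psi_full = im.mean() / (re.mean() - u_c)
    loo = np.array([np.delete(im, i).mean() / (np.delete(re, i).mean() - u_c)
                    for i in range(n)])
    err = np.sqrt((n - 1) / n * np.sum((loo - loo.mean())**2))
    return psi_full, err

# Placeholder per-block estimates, centered at the L = 128 values of Tab. tabzero1:
rng = np.random.default_rng(0)
u_c = np.exp(-4 * 0.2216546)
re_blocks = rng.normal(0.41192200, 1.0e-6, size=100)
im_blocks = rng.normal(-0.00034552, 1.0e-6, size=100)
print(psi1_jackknife(re_blocks, im_blocks, u_c))
```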
  $L$    $\Psi_1$     $\Psi_2$
  ------ ------------ --------------
  8      3.0638(15)   0.325829(52)
  12     2.9698(17)   0.328858(64)
  16     2.9136(18)   0.330001(77)
  24     2.8581(21)   0.330994(92)
  32     2.8289(22)   0.33119(11)
  48     2.7988(23)   0.33180(12)
  64     2.7814(24)   0.33226(15)
  96     2.7718(31)   0.33248(15)
  128    2.7691(33)   0.33279(18)

  : The values of the ratios $\Psi_1(L)$ and $\Psi_2(L)$ obtained from the first Fisher zeros.[]{data-label="tabPsi"}
  $L_{\mathrm{max}}$   $L_{\mathrm{min}}$   $\omega$    $\chi^2/\mathrm{d.o.f.}$
  -------------------- -------------------- ----------- --------------------------
  64                   8                    0.807(27)   1.01
  64                   12                   0.903(60)   0.22
  64                   16                   0.84(10)    0.06
  128                  8                    0.872(21)   3.12
  128                  12                   0.997(42)   1.21
  128                  16                   1.026(65)   1.43

  : Estimates of $\omega$ from fits of $\Psi_1(L)$ to $A + B L^{-\omega}$ within $L \in [L_{\mathrm{min}},L_{\mathrm{max}}]$.[]{data-label="tabPsi1fit"}
The values of the exponent $\omega$, extracted from the fits of $\Psi_1(L)$ to $A + B L^{-\omega}$ within $L \in [L_{\mathrm{min}},L_{\mathrm{max}}]$, are collected in Tab. \[tabPsi1fit\]. The results $\omega=0.903(60)$ for $L \in [12,64]$ and $\omega=0.84(10)$ for $L \in [16,64]$ agree within error bars with the results for similar fit intervals in [@GKR11], i. e., $\omega = 0.77(9)$ for $L \in [12,72]$ and $\omega = 0.63(16)$ for $L \in [16,72]$. However, the fits with $L_{\mathrm{max}}=128$ are preferable for a reasonable estimation of the asymptotic exponent $\omega$. The best estimate with $L_{\mathrm{max}}=128$ is $\omega = 0.997(42)$, obtained at $L_{\mathrm{min}}=12$. Indeed, this fit has an acceptable $\chi^2/\mathrm{d.o.f.}$ value and the result is well consistent with that for $L_{\mathrm{min}}=16$, where the statistical error is larger. It turns out that the estimated value of $\omega$ becomes larger when $L_{\mathrm{max}}$ is increased from $64$ to $128$. One possible explanation, consistent with the data in Tab. \[tabPsi\], is that the $\Psi_1(L)$ plot has a minimum near $L=128$ or at somewhat larger $L$ values. In this case the present method is really valid only for considerably larger lattice sizes.
The results of the other method, using the ratio $\Psi_2(L)$ instead of $\Psi_1(L)$, are collected in Tab. \[tabPsi2fit\]. Here $L_{\mathrm{max}}=128$ is fixed and only $L_{\mathrm{min}}$ is varied. The standard errors of $\omega$ are calculated taking into account that the fluctuations in $\mathrm{Im} \, u^{(1)}$ are statistically independent quantities. As we can see, the estimated exponent $\omega$ decreases with increasing $L_{\mathrm{min}}$ in the considered range. Since $\chi^2/\mathrm{d.o.f.}$ is about unity for moderately good fits, the estimates $\omega=0.61(19)$ at $L_{\mathrm{min}}=16$ and $\omega=0.18(37)$ at $L_{\mathrm{min}}=24$ are acceptable.
  $L_{\mathrm{min}}$   $\omega$    $\chi^2/\mathrm{d.o.f.}$
  -------------------- ----------- --------------------------
  8                    1.400(57)   4.12
  12                   0.96(13)    2.02
  16                   0.61(19)    1.20
  24                   0.18(37)    0.88

  : Estimates of $\omega$ from fits of $\Psi_2(L)$ to $A + B L^{-\omega}$ within $L \in [L_{\mathrm{min}},128]$.[]{data-label="tabPsi2fit"}
Summarizing the results of this section, we conclude that three of the methods considered here give larger values of $\omega$ ($1.171(96)$, $1.247(73)$ and $0.997(42)$) than $\omega=0.832(6)$ reported in [@Has1], whereas the fourth method tends to give smaller values ($0.61(19)$ and $0.18(37)$). Thus, it is evident that the estimation of $\omega$ from finite–size scaling, using data for not too large lattice sizes (comparable with $L \le 360$ in [@Has1] or $L \le 72$ in [@GKR11]), does not lead to conclusive results. Indeed, the obtained values depend on the particular method chosen and vary from $1.247(73)$ to $0.18(37)$ in our examples.
Influence of $\omega$ on the estimation of exponents $\eta$ and $\nu$ {#sec:influence}
=====================================================================
Allowing for the possibility that the correction–to–scaling exponent $\omega$ of the 3D Ising model is indeed substantially smaller than the commonly accepted values of about $0.8$, we have tested the influence of $\omega$ on the estimation of the critical exponents $\eta$ and $\nu$ (or $1/\nu$). We have fit our susceptibility data at $\beta = \widetilde{\beta}_c(L)$ to the ansatz $$\chi = L^{2-\eta} \left( a_0 + \sum\limits_{k=1}^m a_k L^{-k \omega} \right)
\label{eq:chiansatz}$$ with $m=1$ and $m=2$ to estimate $\eta$ at three fixed values of the exponent $\omega$, i. e., $\omega=0.8$, $\omega=0.38$ and $\omega=1/8$. The first one is very close to the known RG value $\omega=0.799 \pm 0.011$ [@Justin] and is also quite similar to a more recent RG estimate $\omega=0.782(5)$ [@PS08] and the MC estimate $\omega=0.832(6)$ of [@Has1]. The second value corresponds to the upper bound $\omega_{\mathrm{max}} \approx 0.38$ stated in Sec. \[sec:analytical\], and the third value $1/8$ is extracted from the GFD theory [@K_Ann01; @K2012]. There exist different corrections to scaling, but the two correction terms in (\[eq:chiansatz\]) are the most relevant ones at $L \to \infty$, as can be seen from the analysis in [@KMR_2013; @Has1]. The results of the fit within $L \in [L_{\mathrm{min}},2048]$, depending on $\omega$, $m$ and $L_{\mathrm{min}}$, are shown in Tab. \[tab6\]. Similarly, we have fit our $\partial Q/\partial \beta$ data at $\beta = \widetilde{\beta}_c(L)$ to the ansatz $$\frac{\partial Q}{\partial \beta} = L^{1/\nu} \left( b_0 + \sum\limits_{k=1}^m b_k L^{-k \omega} \right)
\label{eq:Qansatz}$$ and have presented the results in Tab. \[tab7\].
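A sketch of such a fit at fixed $\omega$ (here with $m=2$) is given below; as before, the data arrays are hypothetical placeholders standing for the simulated $\chi$ values and their errors.

``` python
import numpy as np
from scipy.optimize import curve_fit

def fit_eta(L_vals, chi_vals, chi_errs, omega, L_min, L_max=2048):
    """Fit chi = L^(2-eta) * (a0 + a1*L^(-omega) + a2*L^(-2*omega)),
    i.e. the ansatz (eq:chiansatz) with m = 2 at a fixed omega,
    and return eta with its standard error."""
    L_vals = np.asarray(L_vals, dtype=float)
    y = np.asarray(chi_vals, dtype=float)
    s = np.asarray(chi_errs, dtype=float)
    sel = (L_vals >= L_min) & (L_vals <= L_max)

    def model(L, eta, a0, a1, a2):
        return L**(2 - eta) * (a0 + a1 * L**(-omega) + a2 * L**(-2 * omega))

    popt, pcov = curve_fit(model, L_vals[sel], y[sel],
                           p0=(0.036, 1.0, 0.1, 0.0),
                           sigma=s[sel], absolute_sigma=True)
    return popt[0], np.sqrt(pcov[0, 0])
```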
  $\omega$   $m$   $L_{\mathrm{min}}$   $\eta$        $\chi^2/\mathrm{d.o.f.}$
  ---------- ----- -------------------- ------------- --------------------------
  0.8        1     32                   0.03617(45)   0.93
  0.8        1     48                   0.03562(59)   0.89
  0.8        1     64                   0.03563(76)   0.97
  0.8        2     32                   0.03521(94)   0.91
  0.8        2     48                   0.0366(14)    0.90
  0.8        2     64                   0.0384(18)    0.86
  0.38       1     32                   0.04387(78)   1.40
  0.38       1     48                   0.0414(11)    0.82
  0.38       1     64                   0.0407(14)    0.84
  0.38       2     32                   0.0342(32)    0.97
  0.38       2     48                   0.0408(45)    0.86
  0.38       2     64                   0.0465(60)    0.84
  $1/8$      1     32                   0.0656(15)    1.90
  $1/8$      1     48                   0.0589(22)    0.85
  $1/8$      1     64                   0.0562(30)    0.80
  $1/8$      2     32                   0.0106(16)    1.51
  $1/8$      2     48                   0.031(54)     0.87
  $1/8$      2     64                   0.075(30)     0.83

  : Estimates of $\eta$ from fits of the susceptibility data to (\[eq:chiansatz\]) within $L \in [L_{\mathrm{min}},2048]$ at fixed values of $\omega$ and $m$.[]{data-label="tab6"}
  $\omega$   $m$   $L_{\mathrm{min}}$   $1/\nu$      $\chi^2/\mathrm{d.o.f.}$
  ---------- ----- -------------------- ------------ --------------------------
  0.8        1     32                   1.5872(16)   0.63
  0.8        1     48                   1.5895(22)   0.48
  0.8        1     64                   1.5880(27)   0.47
  0.8        2     32                   1.5914(34)   0.56
  0.8        2     48                   1.5854(49)   0.46
  0.8        2     64                   1.5869(64)   0.50
  0.38       1     32                   1.5873(29)   0.63
  0.38       1     48                   1.5913(40)   0.49
  0.38       1     64                   1.5880(52)   0.47
  0.38       2     32                   1.598(12)    0.61
  0.38       2     48                   1.576(15)    0.47
  0.38       2     64                   1.584(22)    0.50
  $1/8$      1     32                   1.5878(84)   0.63
  $1/8$      1     48                   1.599(14)    0.50
  $1/8$      1     64                   1.588(15)    0.47
  $1/8$      2     32                   1.636(21)    0.64
  $1/8$      2     48                   1.525(50)    0.47
  $1/8$      2     64                   1.56(37)     0.50

  : Estimates of $1/\nu$ from fits of the $\partial Q/\partial \beta$ data to (\[eq:Qansatz\]) within $L \in [L_{\mathrm{min}},2048]$ at fixed values of $\omega$ and $m$.[]{data-label="tab7"}
Considering the fits with only the leading correction to scaling included ($m=1$), one can conclude from Tab. \[tab6\] that the estimated critical exponent $\eta$ increases with decreasing $\omega$, whereas the exponent $1/\nu$ in Tab. \[tab7\] is rather stable. The sub-leading correction to scaling ($m=2$) makes the estimated exponents $\eta$ and $1/\nu$ considerably less stable for small $\omega$ values, such as $\omega=1/8$. The latter value is expected from the GFD theory [@K_Ann01; @K2012], so that the estimation at $\omega=1/8$ is self-consistent within this approach. In this case, the estimation of $\eta$ appears to be compatible with the theoretical value $\eta=1/8=0.125$ of [@K_Ann01], taking into account that the evaluated $\eta$ increases with $L_{\mathrm{min}}$. Moreover, the self-consistent estimation of $1/\nu$ is even very well consistent with $\nu = 2/3$ predicted in [@K_Ann01]. In particular, $1/\nu = 1.525(50)$ can be considered the best estimate at $\omega=1/8$ and $m=2$ (it has the smallest $\chi^2/\mathrm{d.o.f.}$ value and a much smaller statistical error than the estimate at $L_{\mathrm{min}}=64$), which agrees well with $1.5$. Thus, contrary to the statements in [@GKR11], the value $\nu=2/3$ of the GFD theory is not ruled out, since it is possible that $\omega$ has a much smaller value than $0.832(6)$ assumed in [@GKR11].
A question can arise about the influence of the $\omega$ value on the estimation of critical exponents in the case of improved Hamiltonians [@Has1; @Has2]. It is expected that the leading corrections to scaling vanish in this case, and therefore the influence of $\omega$ is small. However, if the asymptotic corrections to scaling are described by an exponent $\omega \le \omega_{\mathrm{max}} \approx 0.38$, as is strongly suggested by the theorem discussed in Sec. \[sec:analytical\], then the vanishing of the leading corrections cannot be supported by the existing MC analyses of such models. Indeed, in these analyses the asymptotic corrections to scaling are not correctly identified (probably because of too small lattice sizes) if $\omega \le \omega_{\mathrm{max}} \approx 0.38$, since one finds that $\omega \approx 0.8$.
Comparison of recent results {#sec:compare}
============================
It is interesting to compare our MC estimates and those of [@Has1] with the most recent RG (3D expansion) values of [@PS08] cited in [@Has1]. Note that the estimates of $\omega$ in [@Has1] and [@PS08], i. e., $\omega=0.832(6)$ and $\omega=0.782(5)$, are clearly inconsistent within the claimed error bars. This discrepancy, however, can be understood from the point of view of our MC analysis, suggesting that the real uncertainty in the MC estimation of $\omega$ can be rather large.
source method $\eta$ $\nu$
-------------- -------- ------------- -------------
this work MC 0.0384(18) 0.6302(25)
Ref. [@Has1] MC 0.03627(10) 0.63002(10)
Ref. [@PS08] 3D exp 0.0318(3) 0.6306(5)
: Recent estimates of the critical exponents $\eta$ and $\nu$ from different sources. Our values correspond to $\omega=0.8$, $m=2$ and $L_{\mathrm{min}}=64$.[]{data-label="tab8"}
The comparison of the critical exponents $\eta$ and $\nu$ is provided in Tab. \[tab8\]. This comparison includes only some recent or relatively new results, since older ones are extensively discussed in [@Has1; @Has2; @HasRev]. Our values correspond to the fits within $L \in [64,2048]$ at $m=2$ and $\omega=0.8$. The choice of $\omega=0.8$ is reasonable here, since this value is close enough to the above-mentioned estimates $\omega=0.832(6)$ and $\omega=0.782(5)$, and practically the same results are obtained if $\omega=0.8$ is replaced by either of these two values. According to the claimed statistical error bars, the estimates of [@Has1] seem to be extremely accurate. Note, however, that these estimates are extracted from much smaller lattice sizes ($L \le 360$) compared to ours ($L \le 2048$).
The values of $\nu$ in Tab. \[tab8\] are consistent with each other. The MC estimates of $\eta$ are consistent as well. However, the recent RG value of [@PS08] appears to be somewhat smaller and is not consistent within the error bars with the present MC estimations, even if the assumed values of $\omega$ are about $0.8$, as predicted by the perturbative RG theory. In particular, the discrepancy with the MC value of [@Has1] is about $45$ standard deviations of the MC estimation, or about $15$ error bars of the RG estimation.
Recently, conformal field theory (CFT) has been applied to the 3D Ising model [@Showk] to obtain very accurate values of the critical exponents, using the numerical conformal bootstrap method. The conformal–symmetry relations for the correlation functions, like (2.1) in [@Showk], are known to hold asymptotically in two dimensions, whereas their validity in the 3D case can be questioned. Here “asymptotically” means that the limit $L/x \to \infty$, $\xi/x \to \infty$ and $a/x \to 0$ is considered, where $x$ is the distance considered, $a$ is the lattice spacing and $\xi$ is the correlation length. In other words, the conformal symmetry is expected to hold exactly for the asymptotic correlation functions on an infinite lattice ($L = \infty$) at the critical point ($\beta = \beta_c$). These asymptotic correlation functions are obtained by subtracting from the exact correlation functions (at $L=\infty$ and $\beta=\beta_c$) the corrections to scaling, containing powers of $a/x$. The existence of the conformal symmetry in the 3D Ising model has been supported by a non–trivial MC test in [@MCconf].
Apart from the assumption of the validity of (2.1) in [@Showk] for the 3D Ising model, the following hypotheses have been proposed:
1. There exists a sharp kink on the border of the two–dimensional region of the allowed values of the operator dimensions $\Delta_{\sigma} = (1+\eta)/2$ and $\Delta_{\epsilon} = 3 - 1/\nu$;
2. Critical exponents of the 3D Ising model correspond just to this kink.
These hypotheses have been supported by the MC estimates of the exponents $\eta$, $\nu$ and $\omega$ in [@Has1]. However, the exponent $\omega=0.8303(18)$ obtained in [@Showk] is not supported by our MC value $\omega = 0.25(33)$, obtained from the susceptibility data for very large lattice sizes $L \le 2048$. Moreover, it does not satisfy the inequality $\omega \le (\gamma-1)/\nu$ following from the theorem discussed in Sec. \[sec:analytical\]. This apparent contradiction can be understood from the point of view that corrections to scaling are not fully controlled in the CFT. Indeed, the prediction for $\omega$ in this CFT is based on the assumption that $\omega = \Delta_{\epsilon'} -3$ holds, where $\Delta_{\epsilon'}$ is the dimension of an irrelevant operator in the conformal analysis of the asymptotic four–point correlation function. This means that corrections to scaling of the exact four–point correlation function are discarded (to obtain the asymptotic correlation function, as discussed before) and not included in the analysis.
More recently, a modified conformal bootstrap analysis has been performed in [@xx], where the two hypotheses mentioned above have been replaced with the hypothesis that the operator product expansion (OPE) contains only two relevant scalar operators. The results for the exponents $\Delta_{\sigma}$ and $\Delta_{\epsilon}$ (or $\eta$ and $\nu$) are consistent with those of [@Showk]. This consistency is not surprising, since both methods agree with the idea that the operator spectrum of the 3D Ising model is relatively simple, so that the true values of $\Delta_{\sigma}$ and $\Delta_{\epsilon}$ are located inside a certain narrow region (as in [@xx]) or on its border (as in [@Showk]), where many operators are decoupled from the spectrum. Apparently, the analysis in [@xx] does not lead to a contradiction with the two relations $\omega \le (\gamma-1)/\nu$ (the theorem) and $\omega = \Delta_{\epsilon'} -3$, since only $\Delta_{\epsilon'}>3$ is assumed for the dimension $\Delta_{\epsilon'}$. Thus, both relations can be satisfied simultaneously if the 3D Ising point in Fig. 1 of [@Showk] is located inside the allowed region, rather than on its border. This possibility is supported by the behavior of the effective exponent $\eta_{\mathrm{eff}}$ in our Fig. \[fig1\], which suggests that the asymptotic exponent $\eta$ could be larger than usually expected from MC simulations for relatively small lattice sizes, as in [@Has1]. Thus, it is important to make further refined estimations, based on MC data for very large lattice sizes, in order to verify the hypotheses proposed in [@Showk; @xx].
If the hypothesis (i) about the existence of a sharp kink is true, then this kink probably has a special meaning for the 3D Ising model. Its existence, however, is not evident.
![The values of $\Delta(\sigma)$, corresponding to the minimum (solid circles) and the “kink” (empty circles) in the plots of Fig. 7 in [@Showk] depending on $N^{-3}$. The dashed lines show linear extrapolations.[]{data-label="fig2"}](conf.eps){width="60.00000%"}
According to the conjectures of [@Showk], such a kink is formed at $N \to \infty$, where $N$ is the number of derivatives included in the analysis. As discussed in [@Showk], this implies that the minimum in the plots of Fig. 7 in [@Showk] should merge with the apparent “kink” at $N \to \infty$. This “kink” is not really sharp at finite $N$. Nevertheless, its location can be identified with the value of $\Delta(\sigma)$ at which the second derivative of the plot has a local maximum. The minimum of the plot varies slightly with $N$, whereas the “kink” barely moves [@Showk]. Apparently, the convergence to a certain asymptotic curve is considerably faster than $1/N$, as can be expected from Fig. 7 and other similar figures in [@Showk]. In particular, we have found that the location of the minimum in Fig. 7 of [@Showk] varies almost linearly with $N^{-3}$. We show this in Fig. \[fig2\] by solid circles, the position of the “kink” being indicated by empty circles. The error bars of $\pm 0.000001$ correspond to the symbol size. The results for $N=153, 190, 231$ are presented, skipping the estimate for the location of the “kink” at $N=153$, which cannot be well determined from the corresponding plot in Fig. 7 of [@Showk]. The linear extrapolations (dashed lines) suggest that the minimum very likely moves only slightly closer to the “kink” when $N$ is varied from $N=231$ to $N = \infty$. The linear extrapolation might, of course, be too inaccurate; only in that case could a refined numerical analysis for larger $N$ values possibly confirm the hypothesis about the formation of a sharp kink at $N \to \infty$.
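The extrapolation itself is elementary; a sketch is shown below, where the $\Delta(\sigma)$ values of the minima are illustrative placeholders read off plots like Fig. 7 of [@Showk] rather than the actual numbers.

``` python
import numpy as np

# Linear extrapolation of the minimum location versus x = N^(-3), as in Fig. 2.
N = np.array([153.0, 190.0, 231.0])
delta_min = np.array([0.5181515, 0.5181495, 0.5181489])   # hypothetical values

x = N**(-3)
slope, intercept = np.polyfit(x, delta_min, 1)
print("extrapolated value at N -> infinity:", intercept)
```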
The results of both [@Showk] and [@xx] strongly support the commonly accepted 3D Ising values of the critical exponents $\eta$ and $\nu$. In particular, the estimates $\eta=0.03631(3)$ and $\nu = 0.62999(5)$ have been reported in [@Showk]. However, these estimates are obtained based on certain hypotheses. If these hypotheses are not used, then the conformal bootstrap analysis appears to be consistent even with the GFD values $\eta=1/8$ and $\nu =2/3$ discussed here. Indeed, the corresponding operator dimensions $\Delta_{\sigma} = (1+\eta)/2$ and $\Delta_{\epsilon} = 3 - 1/\nu$ lie inside the allowed region in Fig. 1 of [@Showk].
The hypotheses (i) and (ii) can be questioned in view of the observations summarized in Fig. \[fig2\]. The hypothesis of [@xx] about the existence of just two relevant scalar operators might be supported by some physically intuitive arguments. In particular, one needs to adjust two scalar parameters, $P$ (pressure) and $T$ (temperature), to reach the critical point of a liquid–vapor system. Real support for this hypothesis is provided by the already known estimations of the critical exponents. Taking into account the non–perturbative nature of critical phenomena, the most reliable estimates are based on non–perturbative methods, such as Monte Carlo simulation. An essential point in this discussion is that the MC estimates can change considerably if unusually large lattices are considered, as shown in our current study.
Summary and conclusions
=======================
Analytical as well as Monte Carlo arguments are provided in this paper, showing that corrections to scaling in the 3D Ising model are described by a considerably smaller exponent $\omega$ than the usually accepted values of about $0.8$. The analytical arguments in Sec. \[sec:analytical\], which are based on a rigorous proof of a certain theorem, suggest that $\omega \le (\gamma -1)/\nu$ holds, implying that $\omega$ cannot be larger than $\omega_{\mathrm{max}}=(\gamma -1)/\nu \approx 0.38$ in the 3D Ising model. The analytical prediction of the GFD theory [@K_Ann01; @K2012] is $\omega=1/8$ in this case. Our MC estimation of $\omega$ from the susceptibility ($\chi$) data of very large lattices (Sec. \[sec:large\]) is well consistent with these analytical results. The numerical values, extracted from the $\chi$ data within $40 \le L \le 2048$ and $48 \le L \le 2048$ (or the $\Phi_2(L)=2^{-4}\chi(2L)/\chi(L/2)$ data within $80 \le L \le 1024$ and $96 \le L \le 1024$), are $\omega=0.25(33)$ and $\omega = 0.06(38)$, respectively. Unfortunately, the statistical errors in $\omega$ are rather large.
As discussed in [@KMR_14], our analytical predictions generally refer to a subset of $n$-vector models, where the spin is an $n$-component vector with $n=1$ in two dimensions and $n \ge 1$ in three dimensions. Our recent MC analysis agrees with these predictions for the scalar ($n=1$) 2D $\varphi^4$ model [@KMR_14], where the statistical errors are small enough. The 3D case with $n=2$ has been tested in [@K2012], based on accurate experimental data for the specific heat in zero-gravity conditions very close to the $\lambda$–transition point in liquid helium. The test in Sec. 4 of [@K2012] reveals some inconsistency of the data with the corrections to scaling proposed by the perturbative RG treatments, indicating that these corrections decay more slowly, i. e., that $\theta=\nu \omega$ is smaller than usually expected. This finding is consistent with the theorem discussed in Sec. \[sec:analytical\]. These facts emphasize the importance of our MC analysis.
Our proposed values of $\omega$ may seem implausible in view of a series of known results yielding $\omega$ of about $0.8$ for the 3D Ising model. However, it is worth reconsidering these results from several points of view.
- First of all, they disagree with non–perturbative arguments in the form of the rigorously proven theorem, discussed in Sec. \[sec:analytical\].
- This theorem states that $\omega \le (\gamma-1)/\nu$, whereas the perturbative RG estimates are substantially larger. In view of the recent analysis in [@K2012x] (see also the discussions in [@K2012]), this discrepancy can be understood as a failure of the standard perturbative RG methods. The discrepancy between the recent RG and MC estimates of the critical exponent $\eta$, discussed in Sec. \[sec:compare\], also points to problems in the perturbative approach. Moreover, any perturbative method, including the high- and low-temperature series expansions, can fail to give correct results for critical phenomena, since these are not the natural domain of validity of perturbation theory.
- The previous MC estimations of $\omega$ are based on simulations of lattices not larger than $L \le 360$ in [@Has1]. We have clearly demonstrated in Sec. \[sec:difomega\] that the values obtained from finite–size scaling with such relatively small (as compared to $L \le 2048$ in our study) lattice sizes depend on the particular method chosen. For example, different estimates ranging from $\omega =1.247(73)$ to $\omega=0.18(37)$ are obtained here, which substantially deviate from the usually reported values between $0.82$ and $0.87$ (see [@Has1; @HasRev]).
- Although the recent estimate $\omega=0.8303(18)$ of the conformal bootstrap method [@Showk] is inconsistent with $\omega \le (\gamma-1)/\nu$, the apparent contradiction can be understood and resolved, as discussed in Sec. \[sec:compare\].
Taking into account the possibility that the correction–to–scaling exponent $\omega$ can be considerably smaller than the usually accepted values of about $0.8$, we have tested in Sec. \[sec:influence\] the influence of $\omega$ on the estimation of the critical exponents $\eta$ and $\nu$. We have concluded that the effect is significant if $\omega$ is changed from $0.8$ to a much smaller value, such as $\omega=1/8$ of the GFD theory. In this case, the error bars increase strongly, and the estimation becomes compatible, or even well consistent, with the predictions of the GFD theory. In particular, the estimate $1/\nu = 1.525(50)$ agrees with the GFD theoretical value $1.5$.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to Slava Rychkov for the useful communications concerning the conformal bootstrap method. This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET:www.sharcnet.ca). The authors acknowledge the use of resources provided by the Latvian Grid Infrastructure. For more information, please reference the Latvian Grid website (http://grid.lumii.lv). R. M. acknowledges the support from the NSERC and CRC program.
[100]{}
D. J. Amit, *Field Theory, the Renormalization Group, and Critical Phenomena*, World Scientific, Singapore, 1984.
S. K. Ma, *Modern Theory of Critical Phenomena*, W. A. Benjamin, Inc., New York, 1976.
J. Zinn-Justin, *Quantum Field Theory and Critical Phenomena*, Clarendon Press, Oxford, 1996.
H. Kleinert, V. Schulte-Frohlinde, *Critical Properties of $\phi^4$ Theories*, World Scientific, Singapore, 2001.
A. Pelissetto, E. Vicari, Phys. Rep. 368 (2002) 549–727.
J. Kaupužs, Canadian J. Phys. **9**, 373 (2012).
M. Hasenbusch, Int. J. Mod. Phys. C **12**, 911 (2001).
M. Hasenbusch, Phys. Rev. B 82 (2010) 174433. M. Hasenbusch, Phys. Rev. B 82 (2010) 174434.
A. Gordillo-Guerro, R. Kenna, J. J. Ruiz-Lorenzo, J. Stat. Mech., P09019 (2011).
J. Kaupužs, J. Rimšāns, R. V. N. Melnik, Ukr. J. Phys. 56, 845 (2011).
J. Kaupužs, R. V. N. Melnik, J. Rimšāns, Commun. Comp. Phys. **14**, 355 (2013).
J. Kaupužs, Ann. Phys. (Berlin) 10 (2001) 299–331.
U. Wolff, Phys. Rev. Lett. 62 (1989) 361.
J. Kaupužs, J. Rimšāns, R. V. N. Melnik, Phys. Rev. E **81**, 026701 (2010).
J. Kaupužs, R. V. N. Melnik, J. Rimšāns, arXiv:1406.7491 \[cond-mat.stat-mech\] (2014)
M. Hasenbusch, J. Phys. A: Math. Gen. **32**, 4851 (1999).
M. E. J. Newman, G. T. Barkema, *Monte Carlo Methods in Statistical Physics* (Clarendon Press, Oxford, 1999).
A. A. Pogorelov, I. M. Suslov, J. Exp. Theor. Phys. **106**, 1118 (2008).
S. El-Showk, M. F. Paulos, D. Poland, S. Rychkov, D. Simmons-Duffin, arXiv:1403.4542 \[hep-th\] (2014), to appear in a special issue of J. Stat. Phys.
M. Billó, M. Caselle, D. Gaiotto, F. Gliozzi, M. Meineri, R. Pelligrini, arXiv:1304.4110 \[hep-th\] (2013)
F. Kos, D. Poland, D. Simmons-Duffin, arXiv:1406.4858 \[hep-th\] (2014)
J. Kaupužs, Int. J. Mod. Phys. A 27, 1250114 (2012)
[^1]: E–mail: `kaupuzs@latnet.lv`
---
abstract: 'We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to extend significantly the maximum context size used in Transformer, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character level language modeling, where we achieve state-of-the-art performances on `text8` and `enwiki8` by using a maximum context of $8$k characters.'
author:
- |
Sainbayar Sukhbaatar Edouard Grave Piotr Bojanowski Armand Joulin\
Facebook AI Research\
`{sainbar,egrave,bojanowski,ajoulin}@fb.com`\
bibliography:
- 'acl2019.bib'
title: Adaptive Attention Span in Transformers
---
Conclusion
==========
In this work, we present a novel self-attention layer with an adaptive span. This mechanism allows for models with longer context, and thus with the capability to capture longer dependencies. We have shown the importance of this feature in the context of character level language modeling, where information is spread over great distances.
---
abstract: 'We have studied collective modes of quasi-2D Bose-Einstein condensates with multiply-charged vortices using a variational approach. Two of the four collective modes considered exhibit coupling between the vortex dynamics and the large-scale motion of the cloud. The presence of the vortex shifts the frequencies of all collective modes, even those that do not couple dynamically with the vortex-core. The coupling between the vortex and the large-scale collective excitations can induce the multi-charged vortex to decay into singly-charged vortices, with the quadrupole mode being one possible channel for such a decay. We therefore carried out a thorough study of the possibility of preventing the vortex decay by applying a Gaussian potential with a width proportional to the vortex-core radius and a variable height. In this way, we created a stability diagram of height versus interaction strength which exhibits stable regions due to the static Gaussian potential. Furthermore, by using a sinusoidal time-modulation around the average height of the Gaussian potential, we have obtained a parametric resonance diagram showing where the modulation can prevent the vortex decay in regions where the static potential cannot.'
author:
- 'Rafael Poliseli Teles$^{1}$, F. E. A. dos Santos$^{2}$ and V. S. Bagnato$^{1}$'
bibliography:
- 'manuscript.bib'
title: 'Stability change of a multi-charged vortex due to coupling with quadrupole mode'
---
Introduction
============
The dynamics of a trapped Bose-Einstein condensate (BEC) containing a vortex line at its center has been the object of our studies. We have studied the effects of a multi-charged vortex on the free expansion dynamics: these central vortices contribute to the quantum pressure (kinetic energy), which increases the expansion velocity of the condensate [@rpteles1]. Our work has also described the collective excitations of a vortex state, in which the vortex-core dynamics couples with the well-known collective modes [@rpteles2]. Furthermore, we have shown that it is possible to excite these modes using a modulation of the s-wave scattering length. Such a technique has already been applied to excite the lowest-lying quadrupole mode in a lithium experiment [@cm1]. The motivation for these works is the possibility of experimental realization. Our focus now is on the anisotropic oscillations of the vortex-core, in other words, oscillations that change the vortex shape from circular to elliptical. Such a deformation breaks the symmetry of the vortex state and can result in changes of its dynamical stability.
The presence of vortices in condensates can also shift the frequencies of collective excitations. The frequency shift of quadrupole oscillations has been analytically explored for positive scattering lengths using the sum-rule approach [@smv2], and the effects of a lower-dimensional geometry on the frequency splitting of quadrupole oscillations have also been studied [@smv3].
First of all, multi-charged vortices in trapped ultracold Bose gases are thermodynamically unstable, which means that a single $\ell$-charged vortex tends to decay into $\ell$ singly quantized vortices. The configuration of separated singly-charged vortices has a lower energy than a single vortex with the same angular momentum. However, such a state with multiple singly-charged vortices is also thermodynamically unstable when compared with a vortex-free condensate: the vortices spiral outward from the condensate until only the ground state remains.
The vortex dynamical instability has so far been studied in the context of Bogoliubov excitations [@mcv06; @mcv07; @stab03]. Indeed, the vortex state possesses certain Bogoliubov eigenmodes which grow exponentially, becoming unstable against infinitesimal perturbations [@mcv03]. These vortices present several unstable modes, with a quadrupole mode being the most unstable. For instance, let us consider the work in Ref. [@mcv03], where the authors studied the modes of a quadruply-charged vortex. Among them, only three modes are unstable. These unstable modes have complex eigenfrequencies (CE) and are associated with $l$-fold symmetries. These symmetries are:
- Two-fold symmetry; the quadruply-charged vortex splits into four single vortices arranged in a straight line configuration.
- Three-fold symmetry; the quadruply-charged vortex splits into four single vortices arranged in a triangular configuration, i.e., there are three vortices forming a triangle with each vortex representing a vertex. The fourth one is at center.
- Four-fold symmetry; the quadruply-charged vortex splits into four single vortices arranged in a square configuration with each vortex placed in a vertex.
Our goal is to describe them as a result of the coupling between the vortex-core dynamics and the collective modes of the condensate. In order to achieve this goal we have used variational calculations focusing on the description of only one of the unstable modes (specifically, the two-fold symmetry). The variational description becomes very complicated as we increase the number of parameters. Fortunately, the most relevant unstable mode is also the easiest one to calculate within the variational approximation.
Furthermore, there are works which add a static Gaussian potential centered on the core of a vortex with a large circulation, which results in a stable configuration for the multi-charged vortex [@mcv05; @mcv09]. Based on these works, we checked the dynamical stability for a static as well as a dynamic potential, due to a Gaussian laser beam placed at the vortex-core, when compared with the multiple-vortex state.
This paper is organized as follows: In section \[sec:quasi2D\], the quasi-2D approach is introduced. We discuss the wave-function used with the variational method in section \[sec:breakingS\] and detail the calculation of the Lagrangian in section \[sec:lagrangian\]. Section \[sec:collectiveM\] contains the equations of motion and their solutions, i.e. the stationary solution, the collective modes, and the fully numerical solution of the Gross-Pitaevskii equation (GPE). In section \[sec:staticLG\], we present a dynamical stability diagram for a static Gaussian potential, while in section \[sec:dynamicLG\] we present a parametric resonance diagram for a dynamical Gaussian potential whose height is sinusoidally time dependent.
Quasi-2d condensate
===================
\[sec:quasi2D\]
The presence of a large number of atoms in the ground state allows us to use a classical field description [@pit-str]. We consider a non-uniform Bose gas of atomic mass $m$ and s-wave scattering length $a_{s}$, with the scattering length smaller than the average inter-particle distance, at absolute zero temperature. Its dynamics is given by the Gross-Pitaevskii equation [@pethick]: $$i\hbar\frac{\partial\Psi\left(\mathbf{r},t\right)}{\partial t}=\left[-\frac{\hbar^{2}}{2m}\nabla^{2}+V\left(\mathbf{r}\right)+U_{0}\left|\Psi\left(\mathbf{r},t\right)\right|^{2}\right]\Psi\left(\mathbf{r},t\right),\label{eq:GPE}$$ where the interaction strength between two atoms is $$U_{0}=\frac{4\pi\hbar^{2}a_{s}}{m}.$$
In order to suppress possible effects due to motions along the axial direction, we consider a highly anisotropic harmonic confinement of the form $$\begin{aligned}
V\left(\mathbf{r}\right) & = & V_{\bot}\left(\mathbf{r}_{\bot}\right)+V_{z}\left(z\right)\nonumber \\
& = & \frac{1}{2}m\omega_{\rho}^{2}\rho^{2}+\frac{1}{2}m\omega_{z}^{2}z^{2},\label{eq:2d-1}\end{aligned}$$ with $\omega_{z}\gg\omega_{\rho}$. With this condition the condensate wave-function can be separated as a product of radial and axial functions, which are entirely independent. This yields a quasi-2D Bose-Einstein condensate [@quasi-2D.collisions; @2Dtempvortex; @smv3], and leads to $$\Psi\left(\mathbf{r},t\right)=N\Phi\left(\mathbf{r}_{\bot},t\right)W\left(z,t\right),\label{eq:2d-2}$$ where $$W\left(z,t\right)=\frac{1}{d_{z}\sqrt{\pi}}\exp\left(-\frac{z^{2}}{2d_{z}^{2}}-\frac{i\omega_{z}t}{2}\right).\label{eq:2d-3}$$ By replacing (\[eq:2d-1\]) and (\[eq:2d-2\]) with (\[eq:2d-3\]) into the Gross-Pitaevskii equation (\[eq:GPE\]), we obtain $$\begin{aligned}
i\hbar W\frac{\partial\Phi}{\partial t}+\frac{\hbar\omega_{z}}{2}\Phi W & = & \left[-\frac{\hbar^{2}}{2m}\nabla_{\bot}^{2}+\frac{\hbar^{2}}{2md_{z}^{2}}-\frac{\hbar^{2}z^{2}}{2md_{z}^{4}}+V\left(\mathbf{r}\right)+NU_{0}\left|\Phi W\right|^{2}\right]\Phi W.\label{eq:2d-4}\end{aligned}$$ The product $\Phi\left(\mathbf{r}_{\bot},t\right)W\left(z,t\right)$ is normalized to unity, thus the number of atoms appears multiplying the coupling constant $U_{0}$. Now we can multiply Eq. (\[eq:2d-4\]) by $W^{*}\left(z,t\right)$ and integrate this equation over the entire z domain. Since $\hbar^{2}/2md_{z}^{2}=\hbar\omega_{z}/2$, and $\hbar^{2}/2md_{z}^{4}=m\omega_{z}^{2}/2$ we obtain the following simplified equation $$i\hbar\frac{\partial\Phi\left(\mathbf{r}_{\bot},t\right)}{\partial t}=\left[-\frac{\hbar^{2}}{2m}\nabla_{\bot}^{2}+V_{\bot}\left(\mathbf{r}_{\bot}\right)+NU_{2D}\left|\Phi\left(\mathbf{r}_{\bot},t\right)\right|^{2}\right]\Phi\left(\mathbf{r}_{\bot},t\right),\label{eq:2D-GPE}$$ where $$U_{2D}=\frac{U_{0}}{d_{z}\sqrt{2\pi}}=2\sqrt{2\pi}\frac{\hbar^{2}a_{s}}{md_{z}}.$$ Let us then write the Lagrangian density which leads to quasi-2D Gross-Pitaevskii equation (\[eq:2D-GPE\]) for a complex field $\Phi\left(\mathbf{r}_{\bot},t\right)$ normalized to unity. So the Lagrangian is given by $$\begin{aligned}
\mathcal{L}_{2D} & = & -\frac{i\hbar}{2}\left[\Phi^{*}\left(\mathbf{r}_{\bot},t\right)\frac{\partial\Phi\left(\mathbf{r}_{\bot},t\right)}{\partial t}-\Phi\left(\mathbf{r}_{\bot},t\right)\frac{\partial\Phi^{*}\left(\mathbf{r}_{\bot},t\right)}{\partial t}\right]\nonumber \\
& & +\frac{\hbar^{2}}{2m}\left|\nabla_{\bot}\Phi\left(\mathbf{r}_{\bot},t\right)\right|^{2}+V_{\bot}\left(\mathbf{r}_{\bot}\right)\left|\Phi\left(\mathbf{r}_{\bot},t\right)\right|^{2}+\frac{NU_{2D}}{2}\left|\Phi\left(\mathbf{r}_{\bot},t\right)\right|^{4}.\label{eq:dL2D}\end{aligned}$$
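For orientation, the magnitude of the effective 2D coupling can be evaluated as in the short sketch below; the atomic parameters (roughly those of $^{87}$Rb) and the trap frequency are illustrative assumptions, not values used elsewhere in this paper.

``` python
import numpy as np

# U_2D = 2*sqrt(2*pi)*hbar^2*a_s/(m*d_z), with d_z = sqrt(hbar/(m*omega_z))
# the axial harmonic-oscillator length.
hbar = 1.054571817e-34        # J s
m = 1.443e-25                 # kg, approximately the mass of 87Rb
a_s = 5.3e-9                  # m, approximate s-wave scattering length of 87Rb
omega_z = 2 * np.pi * 1.0e3   # rad/s, tight axial trapping frequency (assumed)

d_z = np.sqrt(hbar / (m * omega_z))
U_2D = 2 * np.sqrt(2 * np.pi) * hbar**2 * a_s / (m * d_z)
print("d_z =", d_z, "m;  U_2D =", U_2D, "J m^2")
```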
Breaking wave-function symmetry {#sec:breakingS}
===============================
In order to examine the coupling between the vortex-core dynamics and the collective modes as well as their stability, we choose the situation where a multi-charged vortex is created at the center of a condensate. Its wave-function can be written in cartesian coordinates as $$\Phi_{\ell}\left(\mathbf{r}_{\bot},t\right)\propto\left\{ 1-\frac{1}{\left[x/\xi_{x}\left(t\right)\right]^{2}+2xy/\xi_{xy}\left(t\right)+\left[y/\xi_{y}\left(t\right)\right]^{2}+1}\right\} ^{\ell/2}\sqrt{1-\left[\frac{x}{R_{x}\left(t\right)}\right]^{2}-\left[\frac{y}{R_{y}\left(t\right)}\right]^{2}}e^{iS\left(\mathbf{r}_{\bot},t\right)}.\label{eq:wfA}$$ The sizes in each direction are given by $R_{i}\left(t\right)$. They are known as Thomas-Fermi radii, since the wave-function vanishes for $x>R_{x}$ and $y>R_{y}$. The vortex-core sizes are given by $\xi_{i}\left(t\right)$. They are of the order of the healing length for a singly charged vortex. The parameter $\xi_{xy}\left(t\right)$ is responsible for a complete description of the quadrupole symmetries between vortex-core and condensate. The wave-function phase $S\left(\mathbf{r},t\right)$ must be carefully chosen within the context of the variational method. Because the phase must contain the same number of degrees of freedom as the wave-function amplitude. Since we have one pair of variational parameters for each direction in the wave-function amplitude ($\xi_{i}$ and $R_{i}$), we also need a pair of variational parameter in the wave-function phase ($B_{i}$ and $C_{i}$): $$S\left(\mathbf{r}_{\bot},t\right)=\ell\arctan\left(\frac{y}{x}\right)+B_{x}\left(t\right)\frac{x^{2}}{2}+B_{xy}\left(t\right)xy+B_{y}\left(t\right)\frac{y^{2}}{2}+C_{x}\left(t\right)\frac{x^{4}}{4}+C_{y}\left(t\right)\frac{y^{4}}{4}.\label{eq:wfP}$$ Thus $B_{i}\left(t\right)$ and $C_{i}\left(t\right)$ compose the variations of the condensate velocity field allowing the components $\xi_{i}\left(t\right)$ and $R_{i}\left(t\right)$ to oscillate with opposite directions. While $B_{xy}\left(t\right)$ gives us the contribution of the distortion $\xi_{xy}\left(t\right)$ for velocity field which changes the axis of the quadrupole oscillation. Note that we are not using a parameter which yields a scissor motion to the external components of the condensate, since it has already been shown that such a motion is not coupled with neither breathing nor quadrupole modes [@scissors1; @scissor2].
This choice for our wave-function implies that the vortex-core may have an elliptical shape. This is enough to destabilize a multi-charged vortex and allow it to decay by splitting into several vortices, each one with unit charge.
Following the variational method used in Refs. [@perez1; @perez2; @rpteles1; @rpteles2], we substituted (\[eq:wfA\]) into (\[eq:dL2D\]) and performed the integration over the spatial coordinates, $L_{2D}=\int\mathcal{L}_{2D}d\mathbf{r}_{\bot}$. However, the Lagrangian density (\[eq:dL2D\]) cannot be integrated analytically, since it does not retain the polar symmetry. One way to proceed is to introduce small fluctuations around the polar-symmetric solution into the wave-function and then perform a Taylor expansion. In this way we can take advantage of the approximate polar symmetry of the vortex-core, while the fluctuations act to break this symmetry. These calculations are discussed in detail in the next section.
Expanding the Lagrangian around the polar-symmetry solution
===========================================================
\[sec:lagrangian\]
Within the Thomas-Fermi approximation the trapping potential shape determines the condensate dimensions. The wave-function (\[eq:wfA\]) is approximated by an inverted parabola except near the central vortex, so that its integration domain is defined by $1-x^{2}/R_{x}^{2}-y^{2}/R_{y}^{2}\geq0$. Some care should be taken when calculating the kinetic energy $\left|\nabla_{\bot}\Phi\left(\mathbf{r}_{\bot},t\right)\right|^{2}$ before integrating. The presence of the vortex introduces an important term in the gradient, while the rest of the gradient is neglected in the Thomas-Fermi approximation; that is, the density varies smoothly across the condensate except in the vortex region.
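The integrals over this domain can be cross-checked numerically, as in the simple sketch below, which verifies the elementary case of the ellipse area; the grid resolution is an arbitrary choice.

``` python
import numpy as np

def integrate_over_tf_domain(f, Rx, Ry, n=2001):
    """Integrate f(x, y) over the Thomas-Fermi ellipse
    1 - x^2/Rx^2 - y^2/Ry^2 >= 0 on a Cartesian grid; a crude but transparent
    check of the analytical integrals entering the variational Lagrangian."""
    x = np.linspace(-Rx, Rx, n)
    y = np.linspace(-Ry, Ry, n)
    X, Y = np.meshgrid(x, y)
    inside = 1.0 - (X / Rx)**2 - (Y / Ry)**2 >= 0.0
    dx, dy = x[1] - x[0], y[1] - y[0]
    return np.sum(np.where(inside, f(X, Y), 0.0)) * dx * dy

# Example: the area of the domain, which should approach pi * Rx * Ry.
area = integrate_over_tf_domain(lambda X, Y: np.ones_like(X), Rx=1.0, Ry=0.8)
print(area, np.pi * 1.0 * 0.8)
```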
By introducing deviations from the equilibrium position in our parameters $$\begin{aligned}
\xi_{j}\left(t\right) & \approx & \xi_{0}+\delta\xi_{j}\left(t\right),\\
R_{j}\left(t\right) & \approx & R_{0}+\delta R_{j}\left(t\right),\end{aligned}$$ we can expand the Lagrangian in a Taylor series in these deviations. In this way we have $$L=L^{\left(0\right)}+L^{\left(1\right)}+L^{\left(2\right)}+...\label{eq:L-1}$$ The linearized approximation is obtained by truncating the series at second order, which leads to:
- The zeroth-order term $L^{\left(0\right)}$, which gives the equilibrium energy per atom.
- The first-order terms $L^{\left(1\right)}$, which vanish for the stationary solution of the Euler-Lagrange equations. The equilibrium configuration has polar symmetry.
- The second-order terms $L^{\left(2\right)}$, which carry the collective excitations. Their Euler-Lagrange equations result in an eigensystem whose eigenvectors are composed of the deviations ($\delta R_{j}$ and $\delta\xi_{j}$).
Notice that $B_{i}$ and $C_{i}$ from the phase (\[eq:wfP\]) must also be considered as first-order terms, since they lead to deviations in the velocity field. In order to evaluate all the necessary integrals in Eq. (\[eq:L-1\]), it is convenient to use $\xi_{i}\left(t\right)/R_{i}\left(t\right)=\alpha_{i}\left(t\right)$ instead of $\xi_{i}\left(t\right)$. This change is motivated by the fact that all these integrals result in functions of $\alpha_{0}=\xi_{0}/R_{0}$. Thus, we use $\alpha_{i}\left(t\right)\approx\alpha_{0}+\delta\alpha_{i}\left(t\right)$ instead of $\xi_{i}\left(t\right)\approx\xi_{0}+\delta\xi_{i}\left(t\right)$. The same happens for $\alpha_{xy}\left(t\right)=R_{x}\left(t\right)R_{y}\left(t\right)/\xi_{xy}\left(t\right)$, where $\alpha_{xy}\left(t\right)\approx\delta\alpha_{xy}\left(t\right)$. Hereafter we omit the time dependences for simplicity, and we denote the zeroth-order functions by $A_{i}\equiv A_{i}\left(\ell,\alpha_{0}\right)$ and the other integrated results by $I_{i}\equiv I_{i}\left(\ell,\alpha_{0}\right)$. These functions are described in Appendix \[sec:A\].
The proportionality constant in wave-function (\[eq:wfA\]) is found through normalization, being $$\begin{aligned}
N_{0} & = & R_{x}^{-1}R_{y}^{-1}\left[A_{0}+I_{1}\left(\delta\alpha_{x}+\delta\alpha_{y}\right)+I_{2}\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)+I_{3}\delta\alpha_{x}\delta\alpha_{y}+I_{4}\delta\alpha_{xy}^{2}\right]^{-1}\nonumber \\
& \approx & R_{x}^{-1}R_{y}^{-1}A_{0}^{-1}\left[1-\frac{I_{1}}{A_{0}}\left(\delta\alpha_{x}+\delta\alpha_{y}\right)+\left(\frac{I_{1}^{2}}{A_{0}^{2}}-\frac{I_{2}}{A_{0}}\right)\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)\right.\nonumber \\
& & \left.+\left(\frac{2I_{1}^{2}}{A_{0}^{2}}-\frac{I_{3}}{A_{0}}\right)\delta\alpha_{x}\delta\alpha_{y}-\frac{I_{4}}{A_{0}}\delta\alpha_{xy}^{2}\right].\end{aligned}$$
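The second-order expansion of the normalization factor quoted above can be verified symbolically; the sketch below (using SymPy, with a bookkeeping parameter $\epsilon$ used only to truncate at second order) checks the expression in square brackets, without the $R_{x}^{-1}R_{y}^{-1}$ prefactor.

``` python
import sympy as sp

A0, I1, I2, I3, I4 = sp.symbols('A0 I1 I2 I3 I4', positive=True)
dax, day, daxy, eps = sp.symbols('dax day daxy eps')

# Denominator of N0 (without the Rx*Ry prefactor), with every deviation
# multiplied by eps so that a series in eps truncates at second order.
denom = A0 + I1*eps*(dax + day) + I2*eps**2*(dax**2 + day**2) \
        + I3*eps**2*dax*day + I4*eps**2*daxy**2
expansion = sp.series(1/denom, eps, 0, 3).removeO().subs(eps, 1)

# Expression quoted in the text for the expansion of 1/denom.
quoted = (1/A0)*(1 - (I1/A0)*(dax + day)
                 + (I1**2/A0**2 - I2/A0)*(dax**2 + day**2)
                 + (2*I1**2/A0**2 - I3/A0)*dax*day
                 - (I4/A0)*daxy**2)

print(sp.simplify(sp.expand(expansion - quoted)))   # prints 0
```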
By calculating the Lagrangian integrals we obtain $$\begin{aligned}
\int\rho^{2}\left|\Phi\right|^{2}d\mathbf{r}_{\bot} & = & N_{0}R_{x}R_{y}\left\{ R_{x}^{2}\left[A_{1}+I_{5}\left(\delta\alpha_{x}+\frac{1}{3}\delta\alpha_{y}\right)+I_{6}\delta\alpha_{x}^{2}+I_{7}\delta\alpha_{y}^{2}+I_{8}\delta\alpha_{x}\delta\alpha_{y}+I_{9}\delta\alpha_{xy}^{2}\right]\right.\nonumber \\
& & \left.R_{y}^{2}\left[A_{1}+I_{5}\left(\frac{1}{3}\delta\alpha_{x}+\delta\alpha_{y}\right)+I_{7}\delta\alpha_{x}^{2}+I_{6}\delta\alpha_{y}^{2}+I_{8}\delta\alpha_{x}\delta\alpha_{y}+I_{9}\delta\alpha_{xy}^{2}\right]\right\} ,\label{eq:Etrap}\end{aligned}$$ $$\begin{aligned}
-i\int\left[\Phi^{*}\frac{\partial\Phi}{\partial t}-\Phi\frac{\partial\Phi^{*}}{\partial t}\right]d\mathbf{r}_{\bot} & = & N_{0}R_{x}R_{y}\left\{ R_{x}^{2}\dot{B_{x}}\left[A_{1}+I_{5}\left(\delta\alpha_{x}+\frac{1}{3}\delta\alpha_{y}\right)\right]\right.\nonumber \\
& & +2\dot{B_{xy}}I_{10}\delta\alpha_{xy}+R_{y}^{2}\dot{B_{y}}\left[A_{1}+I_{5}\left(\frac{1}{3}\delta\alpha_{x}+\delta\alpha_{y}\right)\right]\nonumber \\
& & +\frac{1}{2}R_{x}^{4}\dot{C_{x}}\left[A_{2}+I_{11}\left(\delta\alpha_{x}+\frac{1}{5}\delta\alpha_{y}\right)\right]\nonumber \\
& & \left.+\frac{1}{2}R_{y}^{4}\dot{C_{y}}\left[A_{2}+I_{11}\left(\frac{1}{5}\delta\alpha_{x}+\delta\alpha_{y}\right)\right]\right\} ,\end{aligned}$$ $$\begin{aligned}
\int\left|\nabla_{\bot}\Phi\right|^{2}d\mathbf{r}_{\bot} & = & N_{0}R_{x}R_{y}\left\{ A_{1}R_{0}^{2}\left(B_{x}^{2}+2B_{xy}^{2}+B_{y}^{2}\right)+2A_{2}R_{0}^{4}\left(B_{x}C_{x}+B_{y}C_{y}\right)+A_{3}R_{0}^{6}\left(C_{x}^{2}+C_{y}^{2}\right)\right.\nonumber \\
& & +\frac{\ell^{2}}{R_{x}^{2}}\left[A_{4}+I_{12}\delta\alpha_{x}+I_{13}\delta\alpha_{y}+I_{14}\delta\alpha_{x}^{2}+I_{15}\delta\alpha_{y}^{2}+I_{16}\delta\alpha_{x}\delta\alpha_{y}+I_{17}\delta\alpha_{xy}^{2}\right]\nonumber \\
& & +\frac{\ell^{2}}{R_{y}^{2}}\left[A_{4}+I_{13}\delta\alpha_{x}+I_{12}\delta\alpha_{y}+I_{15}\delta\alpha_{x}^{2}+I_{14}\delta\alpha_{y}^{2}+I_{16}\delta\alpha_{x}\delta\alpha_{y}+I_{17}\delta\alpha_{xy}^{2}\right]\nonumber \\
& & +\frac{\ell^{2}}{R_{0}^{2}}\left[A_{5}-\frac{A_{5}}{R_{0}}\left(\delta R_{x}+\delta R_{y}\right)+\frac{A_{5}}{R_{0}^{2}}\left(\delta R_{x}^{2}+\delta R_{y}^{2}+\delta R_{x}\delta R_{y}\right)\right.\nonumber \\
& & +\frac{I_{18}}{R_{0}}\left(\delta R_{x}\delta\alpha_{x}+\frac{1}{3}\delta R_{x}\delta\alpha_{y}+\frac{1}{3}\delta R_{y}\delta\alpha_{x}+\delta R_{y}\delta\alpha_{y}\right)\nonumber \\
& & \left.\left.-\frac{2}{3}I_{18}\left(\delta\alpha_{x}+\delta\alpha_{y}\right)+I_{19}\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)+I_{20}\delta\alpha_{x}\delta\alpha_{y}+I_{21}\delta\alpha_{xy}^{2}\right]\right\} ,\label{eq:Ekin}\end{aligned}$$ and $$\int\left|\Phi\right|^{4}d\mathbf{r}_{\bot}=N_{0}^{2}R_{x}R_{y}\left[A_{6}+I_{22}\left(\delta\alpha_{x}+\delta\alpha_{y}\right)+I_{23}\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)+I_{24}\delta\alpha_{x}\delta\alpha_{y}+I_{25}\delta\alpha_{xy}^{2}\right].\label{eq:Eint}$$
By scaling according to Table \[tab:scale2D\], the first three terms of (\[eq:L-1\]) are given by $$L^{\left(0\right)}=A_{0}^{-1}\left[A_{1}r_{\rho0}^{2}+\frac{\ell^{2}}{r_{\rho0}^{2}}\left(A_{4}+\frac{1}{2}A_{5}\right)+\frac{\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{2}A_{0}}\right],$$ $$\begin{aligned}
L^{\left(1\right)} & = & \frac{1}{2}A_{0}^{-1}\left[A_{1}r_{\rho0}^{2}\left(\dot{B_{x}}+\dot{B_{y}}\right)+\frac{1}{2}A_{2}r_{\rho0}^{4}\left(\dot{C_{x}}+\dot{C_{y}}\right)\right.\nonumber \\
& & \left.+S_{\rho}\left(\delta R_{x}+\delta R_{y}\right)+S_{\alpha}\left(\delta\alpha_{x}+\delta\alpha_{y}\right)\right],\end{aligned}$$ $$\begin{aligned}
L^{\left(2\right)} & = & \frac{1}{2}A_{0}^{-1}\left[A_{1}r_{\rho0}^{2}\dot{\beta}_{x}\left(2\frac{\delta r_{x}}{r_{\rho0}}+F_{1}\delta\alpha_{x}+F_{2}\delta\alpha_{y}\right)+A_{1}r_{\rho0}^{2}\dot{\beta}_{y}\left(2\frac{\delta r_{y}}{r_{\rho0}}+F_{2}\delta\alpha_{x}+F_{1}\delta\alpha_{y}\right)\right.\nonumber \\
& & +\frac{1}{2}A_{2}r_{\rho0}^{4}\dot{\zeta}_{x}\left(4\frac{\delta r_{x}}{r_{\rho0}}+F_{3}\delta\alpha_{x}+F_{4}\delta\alpha_{y}\right)+\frac{1}{2}A_{2}r_{\rho0}^{4}\dot{\zeta}_{y}\left(4\frac{\delta r_{y}}{r_{\rho0}}+F_{4}\delta\alpha_{x}+F_{3}\delta\alpha_{y}\right)\nonumber \\
& & +A_{1}r_{\rho0}^{2}\left(\beta_{x}^{2}+\beta_{y}^{2}\right)+2A_{2}r_{\rho0}^{2}\left(\beta_{x}\zeta_{x}+\beta_{y}\zeta_{y}\right)+A_{3}r_{\rho0}^{6}\left(\zeta_{x}^{2}+\zeta_{y}^{2}\right)+V_{\rho}\left(\delta r_{x}^{2}+\delta r_{y}^{2}\right)\nonumber \\
& & +V_{\rho\rho}\delta r_{x}\delta r_{y}+V_{\alpha}\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)+V_{\alpha\alpha}\delta\alpha_{x}\delta\alpha_{y}+V_{\rho\alpha}\left(\delta r_{x}\delta\alpha_{x}+\delta r_{y}\delta\alpha_{y}\right)\nonumber \\
& & \left.+V_{\alpha\rho}\left(\delta r_{x}\delta\alpha_{y}+\delta r_{y}\delta\alpha_{x}\right)+2r_{\rho0}^{2}\left(A_{1}\beta_{xy}^{2}+I_{10}\dot{\beta}_{xy}\delta\alpha_{xy}\right)+V_{xy}\delta\alpha_{xy}^{2}\right],\end{aligned}$$ where $$S_{\rho}=2A_{1}r_{\rho0}-\frac{\ell^{2}}{r_{\rho0}^{3}}\left(2A_{4}+A_{5}\right)-\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{3}A_{0}},$$ $$S_{\alpha}=A_{1}r_{\rho0}^{2}\left(F_{1}+F_{2}\right)+\frac{\ell^{2}}{r_{\rho0}^{2}}\left[A_{4}\left(F_{5}+F_{6}\right)-A_{5}\left(F_{7}+F_{8}\right)\right]+\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{2}A_{0}}F_{9},$$ $$V_{\rho}=A_{1}+\frac{\ell^{2}}{r_{\rho0}^{4}}\left(3A_{4}+A_{5}\right)+\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{4}A_{0}},$$ $$V_{\rho\rho}=\frac{\ell^{2}}{r_{\rho0}^{4}}A_{5}+\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{4}A_{0}},$$ $$\begin{aligned}
V_{\alpha} & = & A_{1}r_{\rho0}^{2}\left[\frac{I_{6}+I_{7}}{A_{1}}-2\frac{I_{2}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(F_{1}+F_{2}\right)\right]\nonumber \\
& & +\frac{A_{4}\ell^{2}}{r_{\rho0}^{2}}\left[\frac{I_{14}+I_{15}}{A_{4}}-2\frac{I_{2}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(F_{5}+F_{6}\right)\right]\nonumber \\
& & +\frac{A_{5}\ell^{2}}{r_{\rho0}^{2}}\left[\frac{I_{19}}{A_{5}}-\frac{I_{2}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(F_{7}+F_{8}\right)\right]\nonumber \\
& & +\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{2}A_{0}}\left[\frac{I_{23}}{A_{6}}-2\frac{I_{2}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(2F_{9}+\frac{I_{1}}{A_{0}}\right)\right],\end{aligned}$$ $$\begin{aligned}
V_{\alpha\alpha} & = & 2A_{1}r_{\rho0}^{2}\left[\frac{I_{8}}{A_{1}}-\frac{I_{3}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(F_{1}+F_{2}\right)\right]\nonumber \\
& & +2\frac{A_{4}\ell^{2}}{r_{\rho0}^{2}}\left[\frac{I_{16}}{A_{4}}-\frac{I_{3}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(F_{5}+F_{6}\right)\right]\nonumber \\
& & +\frac{A_{5}\ell^{2}}{r_{\rho0}^{2}}\left[\frac{I_{20}}{A_{5}}-\frac{I_{3}}{A_{0}}+\frac{I_{1}}{A_{0}}\left(F_{7}+F_{8}\right)\right]\nonumber \\
& & +\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{2}A_{0}}\left[\frac{I_{24}}{A_{6}}-2\frac{I_{3}}{A_{0}}-2\frac{I_{1}}{A_{0}}\left(2F_{9}+\frac{I_{1}}{A_{0}}\right)\right],\end{aligned}$$ $$V_{\rho\alpha}=2A_{1}r_{\rho0}F_{1}-\frac{\ell^{2}}{r_{\rho0}^{3}}\left(2A_{4}F_{5}-A_{5}F_{7}\right)-\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{3}A_{0}}F_{9},$$ $$V_{\alpha\rho}=2A_{1}r_{\rho0}F_{2}-\frac{\ell^{2}}{r_{\rho0}^{3}}\left(2A_{4}F_{6}-A_{5}F_{8}\right)-\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{3}A_{0}}F_{9},$$ $$\begin{aligned}
V_{xy} & = & 2A_{1}r_{\rho0}^{2}\left(\frac{I_{9}}{A_{1}}-\frac{I_{4}}{A_{0}}\right)+2\frac{A_{4}\ell^{2}}{r_{\rho0}^{2}}\left(\frac{I_{17}}{A_{4}}-\frac{I_{4}}{A_{0}}\right)\\
& & +\frac{A_{5}\ell^{2}}{r_{\rho0}^{2}}\left(\frac{I_{21}}{A_{5}}-\frac{I_{4}}{A_{0}}\right)+\frac{2\sqrt{2\pi}\gamma A_{6}}{r_{\rho0}^{2}A_{0}}\left(\frac{I_{25}}{A_{6}}-2\frac{I_{4}}{A_{0}}\right),\end{aligned}$$ with $$\begin{aligned}
F_{1} & = & \frac{I_{5}}{A_{1}}-\frac{I_{1}}{A_{0}},\\
F_{2} & = & \frac{I_{5}}{3A_{1}}-\frac{I_{1}}{A_{0}},\\
F_{3} & = & \frac{I_{11}}{A_{2}}-\frac{I_{1}}{A_{0}},\\
F_{4} & = & \frac{I_{11}}{5A_{2}}-\frac{I_{1}}{A_{0}},\\
F_{5} & = & \frac{I_{12}}{A_{4}}-\frac{I_{1}}{A_{0}},\\
F_{6} & = & \frac{I_{13}}{A_{4}}-\frac{I_{1}}{A_{0}},\\
F_{7} & = & \frac{I_{18}}{A_{5}}-\frac{I_{1}}{A_{0}},\\
F_{8} & = & \frac{I_{18}}{3A_{5}}-\frac{I_{1}}{A_{0}},\\
F_{9} & = & \frac{I_{22}}{A_{6}}-2\frac{I_{1}}{A_{0}}.\end{aligned}$$
The terms proportional to $\ell^{2}/r_{\rho0}^{2}$ and $\ell^{2}/r_{\rho0}^{3}$ are due to the centrifugal energy added by the multi-charged vortex, and the interaction parameter in dimensionless units is given by $\gamma=Na_{s}/d_{z}$. In the next section we discuss the Euler-Lagrange equations for the deviations, which lead to the four collective modes, one of which is dynamically unstable.
Quantity          Dimensionless scale
----------------- ---------------------------------
$t$ $\omega_{\rho}^{-1}\tilde{t}$
$\mu$ $\hbar\omega_{\rho}\tilde{\mu}$
$\Omega$ $\omega_{\rho}\tilde{\Omega}$
$R_{0}$ $d_{\rho}r_{\rho0}$
$\delta R_{j}$ $d_{\rho}\delta r_{j}$
$\xi_{0}$ $d_{\rho}r_{\xi0}$
$\delta\xi_{j}$ $d_{\rho}\delta r_{\xi j}$
$B_{j}$ $d_{\rho}^{-2}\beta_{j}$
$C_{j}$ $d_{\rho}^{-4}\zeta_{j}$
$\varpi$ $\omega_{\rho}\tilde{\varpi}$
: Scale table.[]{data-label="tab:scale2D"}
Dimensionless parameters
--------------------------
$\gamma=Na_{s}/d_{z}$
$\alpha=\xi/R$
: Dimensionless parameters.[]{data-label="tab:param2D"}
Energy per atom, collective modes, and instability of a quadrupole mode
========================================================================
\[sec:collectiveM\]
First, in order to calculate both the energy per atom and the collective modes, we need to know the equilibrium points $r_{\rho0}$ and $\alpha_{0}$. They are obtained from the Euler-Lagrange equations for $\delta r_{i}$ and $\delta\alpha_{i}$, resulting in $$S_{\rho}=0,\text{ and }S_{\alpha}=0.\label{eq:se}$$ Thus we have a different pair of $r_{\rho0}$ and $\alpha_{0}$ for each value of $\ell$ and $\gamma$, obtained by applying Newton’s method to these coupled stationary equations (\[eq:se\]). Note that for $\ell=0$ the solution is trivial, given by $$r_{\rho0}=2\left(2/\pi\right)^{1/8}\gamma^{1/4}.$$ Equations (\[eq:se\]) do not have physically consistent solutions for low values of $\gamma$, depending on the value of $\ell$, as can be seen in fig.\[fig:equilibrium points\]. We have evaluated the pair $r_{\rho0}$ and $\alpha_{0}$ for the vortex states with $\ell=2,4,7$, for which the lowest values of the interaction are around $\gamma\equiv Na_{s}/d_{z}=29,76,125$, respectively.
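As an illustration of this step, the sketch below (our addition, not part of the original work) solves the coupled conditions $S_{\rho}=0$, $S_{\alpha}=0$ with a damped Newton iteration and a finite-difference Jacobian. The callables `S_rho` and `S_alpha` are assumed to be implemented from the expressions above; here they are replaced by a simple stand-in system with a known root, only so that the snippet runs as written.

```python
import numpy as np

def newton2d(S_rho, S_alpha, x0, tol=1e-10, max_iter=100, h=1e-6):
    """Damped Newton iteration for the coupled stationary equations
    S_rho(r, a) = 0 and S_alpha(r, a) = 0, using a finite-difference Jacobian."""
    x = np.array(x0, dtype=float)
    F = lambda v: np.array([S_rho(*v), S_alpha(*v)])
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            return x
        # numerical Jacobian by central differences
        J = np.empty((2, 2))
        for j in range(2):
            dx = np.zeros(2); dx[j] = h
            J[:, j] = (F(x + dx) - F(x - dx)) / (2 * h)
        step = np.linalg.solve(J, -f)
        lam = 1.0
        # simple backtracking damping to keep the iteration from overshooting
        while np.linalg.norm(F(x + lam * step)) > np.linalg.norm(f) and lam > 1e-4:
            lam *= 0.5
        x = x + lam * step
    raise RuntimeError("Newton iteration did not converge")

# Stand-in system (NOT the real S_rho, S_alpha of the text): root at r = 2, a = 0.5
S_rho_demo   = lambda r, a: r**4 - 16.0 + a - 0.5
S_alpha_demo = lambda r, a: a * r - 1.0

print(newton2d(S_rho_demo, S_alpha_demo, x0=(1.5, 0.3)))  # -> approx [2.0, 0.5]
```

In practice one would pass the real $S_{\rho}$ and $S_{\alpha}$ built from the $A_{i}$ and $I_{i}$ coefficients, and repeat the solve over a grid of $(\ell,\gamma)$ values to trace the equilibrium curves of fig.\[fig:equilibrium points\].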
![(Color online) Energy per atom as a function of the interaction parameter.[]{data-label="fig:energia"}](energia)
The energy per atom $L^{\left(0\right)}$ increases proportionally to $\gamma^{1/2}$; this is most explicit for the vortex-free state ($\ell=0$), where $$L^{\left(0\right)}=4\left(2/\pi\right)^{1/4}\gamma^{1/2}/3.$$ We show this behavior for other values of $\ell$ in fig.\[fig:energia\]. The energy gap between the vortex-free state and the remaining states corresponds to the amount of energy needed to create the $\ell$-charged vortex states. For instance, if a focused laser beam is used to stir a Bose-Einstein condensate in order to nucleate vortices, the stirring frequency must exceed a critical value [@vf1], which is set by the energy difference between the vortex-free state and the singly charged vortex state.
From the Euler-Lagrange equations of $L^{\left(2\right)}$ we obtain ten coupled equations: five equations for the phase parameters $$\begin{aligned}
\frac{\dot{\delta r_{x}}}{r_{\rho0}}+\frac{F_{1}}{2}\dot{\delta\alpha_{x}}+\frac{F_{2}}{2}\dot{\delta\alpha_{y}} & = & \beta_{x}+\frac{A_{2}}{A_{1}}r_{\rho0}^{2}\zeta_{x},\\
\frac{\dot{\delta r_{y}}}{r_{\rho0}}+\frac{F_{2}}{2}\dot{\delta\alpha_{x}}+\frac{F_{1}}{2}\dot{\delta\alpha_{y}} & = & \beta_{y}+\frac{A_{2}}{A_{1}}r_{\rho0}^{2}\zeta_{y},\\
\frac{\dot{\delta r_{x}}}{r_{\rho0}}+\frac{F_{3}}{4}\dot{\delta\alpha_{x}}+\frac{F_{4}}{4}\dot{\delta\alpha_{y}} & = & \beta_{x}+\frac{A_{3}}{A_{2}}r_{\rho0}^{2}\zeta_{x},\\
\frac{\dot{\delta r_{y}}}{r_{\rho0}}+\frac{F_{4}}{4}\dot{\delta\alpha_{x}}+\frac{F_{3}}{4}\dot{\delta\alpha_{y}} & = & \beta_{y}+\frac{A_{3}}{A_{2}}r_{\rho0}^{2}\zeta_{y},\\
I_{10}\dot{\delta\alpha_{xy}} & = & 2A_{1}\beta_{xy},\end{aligned}$$ and five equations for the variational parameters in the amplitude $$\begin{aligned}
A_{1}r_{\rho0}\dot{\beta}_{x}+A_{2}r_{\rho0}^{3}\dot{\zeta}_{x}+2V_{\rho}\delta r_{x}+V_{\rho\rho}\delta r_{y}+V_{\rho\alpha}\delta\alpha_{x}+V_{\alpha\rho}\delta\alpha_{y} & \!\!\!\!=\!\!\!\! & 0,\\
A_{1}r_{\rho0}\dot{\beta}_{y}+A_{2}r_{\rho0}^{3}\dot{\zeta}_{y}+V_{\rho\rho}\delta r_{x}+2V_{\rho}\delta r_{y}+V_{\alpha\rho}\delta\alpha_{x}+V_{\rho\alpha}\delta\alpha_{y} & \!\!\!\!=\!\!\!\! & 0,\\
A_{1}r_{\rho0}^{2}\!\left(\!\dot{\beta}_{x}F_{1}\!+\!\dot{\beta}_{y}F_{2}\!\right)\!+\!\frac{1}{2}A_{2}r_{\rho0}^{4}\!\left(\!\dot{\zeta}_{x}F_{3}\!+\!\dot{\zeta}_{y}F_{4}\!\right)\!+\! V_{\rho\alpha}\delta r_{x}\!+\! V_{\alpha\rho}\delta r_{y}\!+\!2V_{\alpha}\delta\alpha_{x}\!+\! V_{\alpha\alpha}\delta\alpha_{y} & \!\!\!\!=\!\!\!\! & 0,\\
A_{1}r_{\rho0}^{2}\!\left(\!\dot{\beta}_{x}F_{2}\!+\!\dot{\beta}_{y}F_{1}\!\right)\!+\!\frac{1}{2}A_{2}r_{\rho0}^{4}\!\left(\!\dot{\zeta}_{x}F_{4}\!+\!\dot{\zeta}_{y}F_{3}\!\right)\!+\! V_{\alpha\rho}\delta r_{x}\!+\! V_{\rho\alpha}\delta r_{y}\!+\! V_{\alpha\alpha}\delta\alpha_{x}\!+\!2V_{\alpha}\delta\alpha_{y} & \!\!\!\!=\!\!\!\! & 0,\\
r_{\rho0}^{2}I_{10}\dot{\beta}_{xy}+V_{xy}\delta\alpha_{xy} & \!\!\!\!=\!\!\!\! & 0.\end{aligned}$$ These ten equations can be reduced to four coupled equations plus one uncoupled equation. The equation for $\delta\alpha_{xy}$ decouples from the others, $$\ddot{\delta\alpha_{xy}}+\frac{2A_{1}V_{xy}}{I_{10}^{2}r_{\rho0}^{2}}\delta\alpha_{xy}=0,$$ i.e., the motion represented by the deviation $\delta\alpha_{xy}$ is independent of the other collective modes. The remaining four equations lead to the linearized matrix equation $$M\ddot{\delta}+V\delta=0,$$ $$\begin{pmatrix}M_{\rho} & 0 & M_{\rho\alpha} & M_{\alpha\rho}\\
0 & M_{\rho} & M_{\alpha\rho} & M_{\rho\alpha}\\
M_{\rho\alpha} & M_{\alpha\rho} & M_{\alpha} & M_{\alpha\alpha}\\
M_{\alpha\rho} & M_{\rho\alpha} & M_{\alpha\alpha} & M_{\alpha}
\end{pmatrix}\begin{pmatrix}\ddot{\delta r_{x}}\\
\ddot{\delta r_{y}}\\
\ddot{\delta\alpha_{x}}\\
\ddot{\delta\alpha_{y}}
\end{pmatrix}+\begin{pmatrix}2V_{\rho} & V_{\rho\rho} & V_{\rho\alpha} & V_{\alpha\rho}\\
V_{\rho\rho} & 2V_{\rho} & V_{\alpha\rho} & V_{\rho\alpha}\\
V_{\rho\alpha} & V_{\alpha\rho} & 2V_{\alpha} & V_{\alpha\alpha}\\
V_{\alpha\rho} & V_{\rho\alpha} & V_{\alpha\alpha} & 2V_{\alpha}
\end{pmatrix}\begin{pmatrix}\delta r_{x}\\
\delta r_{y}\\
\delta\alpha_{x}\\
\delta\alpha_{y}
\end{pmatrix}=0,\label{eq:eigensystem}$$ where the entries in the matrix $M$ are given by $$\begin{aligned}
M_{\rho} & = & 2A_{1},\\
M_{\alpha} & = & \frac{A_{1}A_{2}^{2}r_{\rho0}^{2}}{2\left(A_{2}^{2}-A_{1}A_{3}\right)}\left[F_{1}F_{3}+F_{2}F_{4}-\frac{F_{3}^{2}}{4}-\frac{F_{4}^{2}}{4}-\frac{A_{1}A_{3}}{A_{2}^{2}}\left(F_{1}^{2}+F_{2}^{2}\right)\right],\\
M_{\alpha\alpha} & = & \frac{A_{1}A_{2}^{2}r_{\rho0}^{2}}{2\left(A_{2}^{2}-A_{1}A_{3}\right)}\left[F_{1}F_{4}+F_{2}F_{3}-\frac{F_{3}F_{4}}{2}-\frac{2A_{1}A_{3}}{A_{2}^{2}}F_{1}F_{2}\right],\\
M_{\rho\alpha} & = & A_{1}F_{1}r_{\rho0},\\
M_{\alpha\rho} & = & A_{1}F_{2}r_{\rho0}.\end{aligned}$$ Matrix $V$ results from the energy part of the Lagrangian, i.e. from Eqs. (\[eq:Etrap\]), (\[eq:Ekin\]), and (\[eq:Eint\]); its determinant may be either positive or negative, reflecting the stability of the system. On the other hand, the determinant of $M$ cannot be negative or zero, since it results from our choice for the wave-function phase. Equation (\[eq:eigensystem\]) resembles Newton's equation, so matrix $M$ plays the role of a mass while matrix $V$ acts as a potential [@peristalticmode]. Solving the characteristic equation, $$\det\left(M^{-1}V-\varpi^{2}I\right)=0,\label{eq:ce}$$ yields the frequencies of the collective modes of oscillation. Eq.(\[eq:ce\]) is a quartic equation in $\varpi^{2}$, so we obtain four pairs of frequencies $\pm\varpi_{n}$, one pair for each oscillatory mode. Among these four modes, two leave the vortex static and represent the collective modes of the cloud: the breathing mode $B_{c}$ and the quadrupole mode $Q_{c}$. In other words, these modes are similar to the collective oscillations of the vortex-free state, the difference being a small shift in their frequencies that depends on the charge of the vortex, as shown in Fig.\[fig:MC\]: the $B_{c}$ frequency is shifted downwards, while $Q_{c}$ is shifted to a higher frequency. Note that for a vortex-free condensate, $\ell=0$, Eq.(\[eq:ce\]) is a quadratic equation in $\varpi^{2}$. This means the system presents only two modes in the absence of a vortex ($B_{c}$ with $\varpi=2$, and $Q_{c}$ with $\varpi=\sqrt{2}$), whose frequencies are constant with respect to the interaction parameter $\gamma$. The other two modes couple the vortex dynamics with the collective modes: a second breathing mode $B_{v}$ and a second quadrupole mode $Q_{v}$. In the breathing mode $B_{v}$, the vortex-core sizes oscillate out of phase with the cloud radii, while $\delta r_{x}$ ($\delta\alpha_{x}$) and $\delta r_{y}$ ($\delta\alpha_{y}$) are in phase. In the quadrupole mode $Q_{v}$, the sizes $\delta\alpha_{i}$ and $\delta R_{i}$ oscillate in phase while $\delta r_{x}$ ($\delta\alpha_{x}$) and $\delta r_{y}$ ($\delta\alpha_{y}$) have a $\pi$-phase difference between their oscillations. These modes are sketched in fig.\[fig:MV\]. The second quadrupole mode $Q_{v}$ has an imaginary frequency (fig.\[fig.2DcmQV\]), i.e. the $Q_{v}$ mode is one possible channel for a multi-charged vortex to decay into singly charged vortices. The multi-charged vortex decay can therefore be explained by the appearance and growth of this unstable quadrupole mode due to quantum or thermal fluctuations. These fluctuations induce collective modes, which couple to the vortex dynamics through sound waves.
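Numerically, once $M$ and $V$ have been assembled for given $(\ell,\gamma)$, the four mode frequencies and the instability follow from a small eigenvalue problem. The sketch below is our own illustration of the procedure; the $4\times4$ matrices shown are placeholder numbers, not the actual $M$ and $V$ of (\[eq:eigensystem\]). A negative eigenvalue of $M^{-1}V$ corresponds to an imaginary frequency, i.e. an unstable mode of the $Q_{v}$ type.

```python
import numpy as np

def mode_frequencies(M, V):
    """Solve det(M^{-1} V - w^2 I) = 0: the squared frequencies w^2 are the
    eigenvalues of M^{-1} V. Negative (or complex) values signal a dynamically
    unstable mode with imaginary frequency."""
    w2 = np.linalg.eigvals(np.linalg.solve(M, V))
    return np.sort_complex(w2)

# Placeholder matrices with the symmetry structure of the eigensystem;
# the numbers are illustrative only.
M = np.array([[2.0, 0.0, 0.3, 0.1],
              [0.0, 2.0, 0.1, 0.3],
              [0.3, 0.1, 0.8, 0.2],
              [0.1, 0.3, 0.2, 0.8]])
V = np.array([[ 4.0, 0.5,  0.2,  0.1],
              [ 0.5, 4.0,  0.1,  0.2],
              [ 0.2, 0.1, -0.5,  0.3],
              [ 0.1, 0.2,  0.3, -0.5]])

w2 = mode_frequencies(M, V)
freqs = np.sqrt(w2.astype(complex))
for w2_i, w_i in zip(w2, freqs):
    tag = "unstable" if abs(w_i.imag) > 1e-12 else "stable"
    print("w^2 = %s, w = %s (%s)" % (np.round(w2_i, 4), np.round(w_i, 4), tag))
```

The same test, with $V$ replaced by $V+V_{G}$, is what underlies the pinning-potential stability diagrams discussed in the following sections.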
This model is fully consistent with the CE Bogoliubov modes for $\ell=2$, which consist of only the CE mode associated with two-fold symmetry, i.e. our quadrupole mode $Q_{v}$ [@stab03]. For $\ell>2$, however, this calculation is incomplete, since we considered only breathing and quadrupole modes. A complete description would require adding other symmetries for each higher value of $\ell$, which is not a trivial task: the Ansatz would require more degrees of freedom, i.e. a larger number of variational parameters.
In order to check our results we perform a full numerical calculation of the Gross-Pitaevskii equation (with the usual phenomenological dissipation $\epsilon$ used since Ref.[@vortexLatticeFormation]). The purpose of this dissipative description is to prevent non-physical waves created at the grid edge. The initial state is calculated by evolving a trial function in imaginary time with the parameters given by the equilibrium point from Eq. (\[eq:se\]), to which we add the eigenvector from Eq. (\[eq:eigensystem\]) corresponding to the unstable quadrupole mode ($Q_{v}$). This trial function is given by $$\Phi_{\ell}\propto\left\{ \frac{\left[x/\left(\xi_{0}+\delta\xi_{x}\right)\right]+i\left[y/\left(\xi_{0}+\delta\xi_{y}\right)\right]}{\sqrt{\left[x/\left(\xi_{0}+\delta\xi_{x}\right)\right]^{2}+\left[y/\left(\xi_{0}+\delta\xi_{y}\right)\right]^{2}+1}}\right\} ^{\ell}\sqrt{1-\left[\frac{x}{\left(R_{0}+\delta R_{x}\right)}\right]^{2}-\left[\frac{y}{\left(R_{0}+\delta R_{y}\right)}\right]^{2}}.\label{eq:numTrial}$$ We then evolve in real time an initial state containing only the $Q_{v}$-mode deviations and verify that the multi-charged vortex decays. Figure \[fig:2vD\] shows the real-time evolution of the condensate for a doubly-charged vortex, which starts to split around $\omega_{\rho}t=20.2$. In figure \[fig:4vD\], we see that the lifetime of the quadruply-charged vortex is around $\omega_{\rho}t=22$. Note that these lifetimes depend on the amplitude of the deviations and on the imaginary-time evolution. It is also possible to induce the decay by shaping an anisotropic trap; however, our semi-analytic approach is valid only for an isotropic trap.
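For readers who want to reproduce this kind of check, the snippet below is a minimal 2D split-step Fourier integrator for a dissipative Gross-Pitaevskii equation written in oscillator units, $(i-\epsilon)\,\partial_t\psi = [-\tfrac{1}{2}\nabla^2 + \tfrac{1}{2}\rho^2 + g|\psi|^2]\psi$. It is our own sketch: the grid size, time step, $g$, $\epsilon$, the per-step renormalization and the crude initial vortex are illustrative choices, and the initial state should be replaced by the perturbed profile of Eq. (\[eq:numTrial\]).

```python
import numpy as np

# --- grid (oscillator units: lengths in d_rho, times in 1/omega_rho) ---
N, L = 256, 16.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

g, eps, dt, nsteps = 500.0, 0.03, 1e-3, 5000     # illustrative values
Vtrap = 0.5*(X**2 + Y**2)
c = (1j + eps)/(1.0 + eps**2)                    # from (i - eps) d_t psi = H psi

# initial state: replace by the perturbed ell-charged vortex of Eq. (numTrial);
# here, a crude doubly charged vortex imprinted on a Gaussian cloud
rho2 = X**2 + Y**2
psi = (X + 1j*Y)**2 * np.exp(-rho2/8.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2))

kin = np.exp(-c*dt*0.5*K2)                       # full kinetic step in k-space

for n in range(nsteps):                          # evolve to t = nsteps*dt
    # half potential + interaction step
    psi *= np.exp(-0.5*c*dt*(Vtrap + g*np.abs(psi)**2))
    # full kinetic step
    psi = np.fft.ifft2(kin*np.fft.fft2(psi))
    # second half potential + interaction step
    psi *= np.exp(-0.5*c*dt*(Vtrap + g*np.abs(psi)**2))
    # renormalize: the dissipation removes atoms; fixing the norm each step
    # mimics re-adjusting the chemical potential
    psi /= np.sqrt(np.sum(np.abs(psi)**2))

print("peak density:", np.abs(psi).max()**2)
```

Increasing `nsteps` allows one to follow the splitting of the vortex at later times, and an imaginary-time version of the same loop (replace `c` by `1.0`) provides the stationary state used as the starting point.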
It is interesting to observe how multi-charged vortices decay through $Q_{v}$-mode excitations: they split into a straight line of vortices with unit angular momentum. For instance, in figure \[fig:4vD\] the quadruply-charged vortex splits into four vortices forming a straight line, which then evolve through their interaction with the velocity field towards the final configuration.
Stability diagram due to a static Gaussian potential
====================================================
\[sec:staticLG\]
Several numerical studies propose to stabilize a multi-charged vortex by turning on a Gaussian laser beam at the center of the vortex core. In practice this means adding an external potential of Gaussian shape to the harmonic potential, i.e. $$\begin{aligned}
V_{\bot}\left(\mathbf{r}_{\bot}\right) & = & V_{trap}\left(\mathbf{r}_{\bot}\right)+V_{G}\left(\mathbf{r}_{\bot}\right)\nonumber \\
& = & \frac{1}{2}m\omega_{\rho}^{2}\rho^{2}+\frac{1}{2}V_{0}e^{-\rho^{2}/\xi_{0}^{2}},\end{aligned}$$ where the Gaussian width must be proportional to the vortex-core radius ($w=\sqrt{2}\xi_{0}$). An apparent objection to our approach could lie in the fact that the optical resolution limit of a laser beam is of the order of a few microns, while a singly-charged vortex core is usually smaller than $0.5\text{\ensuremath{\mu}m}$. However, multi-charged vortices may attain much larger sizes depending on their charge, the trap anisotropy, the number of atoms, and the atomic species. For instance, a quadruply-charged vortex in a $^{85}Rb$ condensate ($N=10^{5}$, $a_{s}=100a_{0}$, $\omega_{\rho}=10\text{Hz}$, and $\omega_{z}=100\text{Hz}$) has a core radius of $5.9\mu m$. By applying a Gaussian beam with $w=10\mu m$ inside this vortex, its radius grows to $7.1\mu m$. We use this procedure in our semi-analytical method to draw a stability diagram, and show that it is enough to stabilize the quadrupole mode $Q_{v}$. We therefore need the part of the Lagrangian corresponding to the Gaussian potential, $$L_{G}=\int V_{G}\left(\mathbf{r}_{\bot}\right)\left|\Phi\left(\mathbf{r}_{\bot}\right)\right|^{2}d\mathbf{r}_{\bot}.$$ Expanding the integral of the Gaussian potential $V_{G}\left(\mathbf{r}_{\bot}\right)$ in a Taylor series, we have $$\begin{aligned}
\int e^{-\rho^{2}/\xi_{0}^{2}}\left(\Phi^{*}\Phi\right)d\mathbf{r}_{\bot} & = & N_{0}R_{x}R_{y}\left[A_{7}+\frac{I_{26}}{R_{0}}\left(\delta R_{x}+\delta R_{y}\right)+\frac{I_{27}}{R_{0}^{2}}\left(\delta R_{x}^{2}+\delta R_{y}^{2}\right)+\frac{I_{28}}{R_{0}^{2}}\delta R_{x}\delta R_{y}\right.\nonumber \\
& & +I_{29}\left(\delta\alpha_{x}+\delta\alpha_{y}\right)+I_{30}\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)+I_{31}\delta\alpha_{x}\delta\alpha_{y}\nonumber \\
& & \left.\!\!\!\!\!\!+\frac{I_{32}}{R_{0}}\left(\delta R_{x}\delta\alpha_{x}+\delta R_{y}\delta\alpha_{y}\right)+\frac{I_{33}}{R_{0}}\left(\delta R_{x}\delta\alpha_{y}+\delta R_{y}\delta\alpha_{x}\right)+I_{34}\delta\alpha_{xy}^{2}\right],\end{aligned}$$ where Lagrangian part becomes $$\begin{aligned}
L_{G} & = & -\frac{\tilde{V}_{0}A_{7}}{2A_{0}}\left\{ 1+\frac{I_{26}}{A_{7}R_{0}}\left(\delta R_{x}+\delta R_{y}\right)+\frac{I_{27}}{A_{7}R_{0}^{2}}\left(\delta R_{x}^{2}+\delta R_{y}^{2}\right)\right.\nonumber \\
& & +\frac{I_{28}}{A_{7}R_{0}^{2}}\delta R_{x}\delta R_{y}+\left(\frac{I_{29}}{A_{7}}-\frac{I_{1}}{A_{0}}\right)\left(\delta\alpha_{x}+\delta\alpha_{y}\right)\nonumber \\
& & +\left[\frac{I_{30}}{A_{7}}-\frac{I_{2}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(\frac{I_{29}}{A_{7}}-\frac{I_{1}}{A_{0}}\right)\right]\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)\nonumber \\
& & +\left[\frac{I_{31}}{A_{7}}-\frac{I_{3}}{A_{0}}-2\frac{I_{1}}{A_{0}}\left(\frac{I_{29}}{A_{7}}-\frac{I_{1}}{A_{0}}\right)\right]\delta\alpha_{x}\delta\alpha_{y}\nonumber \\
& & +\frac{1}{R_{0}}\left(\frac{I_{32}}{A_{7}}-\frac{I_{26}I_{1}}{A_{7}A_{0}}\right)\left(\delta R_{x}\delta\alpha_{x}+\delta R_{y}\delta\alpha_{y}\right)\nonumber \\
& & \left.+\frac{1}{R_{0}}\left(\frac{I_{33}}{A_{7}}-\frac{I_{26}I_{1}}{A_{7}A_{0}}\right)\left(\delta R_{x}\delta\alpha_{y}+\delta R_{y}\delta\alpha_{x}\right)+\left(\frac{I_{34}}{A_{7}}-\frac{I_{4}}{A_{0}}\right)\delta\alpha_{xy}^{2}\right\} .\label{eq:Lpin}\end{aligned}$$ Notice that Eq. (\[eq:Lpin\]) contains terms of first order in the deviations, which means that the stationary solution is modified when the condensate is under the influence of the Gaussian potential. The first-order contribution in (\[eq:Lpin\]) becomes $$L_{G}^{\left(1\right)}=-\frac{\tilde{V}_{0}A_{7}}{2A_{0}}\left\{ \overset{s_{\rho}}{\overbrace{\frac{I_{26}}{A_{7}R_{0}}}}\left(\delta R_{x}+\delta R_{y}\right)+\overset{s_{\alpha}}{\overbrace{\left(\frac{I_{29}}{A_{7}}-\frac{I_{1}}{A_{0}}\right)}}\left(\delta\alpha_{x}+\delta\alpha_{y}\right)\right\} ,$$ while the second-order terms are $$\begin{aligned}
L_{G}^{\left(2\right)} & = & -\frac{\tilde{V}_{0}A_{7}}{2A_{0}}\left\{ \stackrel{p_{\rho}}{\overbrace{\frac{I_{27}}{A_{7}R_{0}^{2}}}}\left(\delta R_{x}^{2}+\delta R_{y}^{2}\right)+\stackrel{p_{\rho\rho}}{\overbrace{\frac{I_{28}}{A_{7}R_{0}^{2}}}}\delta R_{x}\delta R_{y}\right.\nonumber \\
& & +\stackrel{p_{\alpha}}{\overbrace{\left[\frac{I_{30}}{A_{7}}-\frac{I_{2}}{A_{0}}-\frac{I_{1}}{A_{0}}\left(\frac{I_{29}}{A_{7}}-\frac{I_{1}}{A_{0}}\right)\right]}}\left(\delta\alpha_{x}^{2}+\delta\alpha_{y}^{2}\right)\nonumber \\
& & +\stackrel{p_{\alpha\alpha}}{\overbrace{\left[\frac{I_{31}}{A_{7}}-\frac{I_{3}}{A_{0}}-2\frac{I_{1}}{A_{0}}\left(\frac{I_{29}}{A_{7}}-\frac{I_{1}}{A_{0}}\right)\right]}}\delta\alpha_{x}\delta\alpha_{y}\nonumber \\
& & +\stackrel{p_{\rho\alpha}}{\overbrace{\frac{1}{R_{0}}\left(\frac{I_{32}}{A_{7}}-\frac{I_{26}I_{1}}{A_{7}A_{0}}\right)}}\left(\delta R_{x}\delta\alpha_{x}+\delta R_{y}\delta\alpha_{y}\right)\nonumber \\
& & \left.+\stackrel{p_{\alpha\rho}}{\overbrace{\frac{1}{R_{0}}\left(\frac{I_{33}}{A_{7}}-\frac{I_{26}I_{1}}{A_{7}A_{0}}\right)}}\left(\delta R_{x}\delta\alpha_{y}+\delta R_{y}\delta\alpha_{x}\right)\right\} .\label{eq:Lpin2}\end{aligned}$$ The equilibrium points are changed to $$S_{\rho}+s_{\rho}=0,\text{ and }S_{\alpha}+s_{\alpha}=0.$$ Each term in (\[eq:Lpin2\]) adds a contribution to a different element of the matrix $V$ in the linearized Euler-Lagrange equation (\[eq:eigensystem\]), which then becomes $$M\ddot{\delta}+\left(V+V_{G}\right)\delta=0,$$ where $$V_{G}=\frac{\tilde{V}_{0}A_{7}}{2A_{0}}\begin{pmatrix}2p_{\rho} & p_{\rho\rho} & p_{\rho\alpha} & p_{\alpha\rho}\\
p_{\rho\rho} & 2p_{\rho} & p_{\alpha\rho} & p_{\rho\alpha}\\
p_{\rho\alpha} & p_{\alpha\rho} & 2p_{\alpha} & p_{\alpha\alpha}\\
p_{\alpha\rho} & p_{\rho\alpha} & p_{\alpha\alpha} & 2p_{\alpha}
\end{pmatrix}.\label{eq:VLG}$$
Since the stability of the eigensystem depends only on the $Q_{v}$ frequency, we can build a stability diagram of $V_{0}/\hbar\omega_{\rho}$ versus $Na_{s}/d_{z}$. This diagram is shown in fig.\[fig:PinDiagram\] for two cases, $\ell=2$ and $\ell=4$. As the angular momentum $\ell$ gets larger, the stable region shrinks. Hence the pinning potential can prevent the vortices from splitting for some values of $V_{0}/\hbar\omega_{\rho}$, depending on $Na_{s}/d_{z}$.
In order to validate these stability diagrams, we perform a numerical simulation of the Gross-Pitaevskii equation. When the Gaussian potential is turned on, it excites phonon waves on the condensate surface and slightly increases the vortex-core size, besides preventing the vortex decay. Figure \[fig:2vDpin\] shows phonon waves appearing and then vanishing due to dissipation. The same phenomenon can be seen in figure \[fig:4vDpin\].
The vortex decay happens when the sound waves couple the quadrupole mode from the edge of the condensate with the vortex core, which breaks the polar symmetry of the vortex. The pinning potential acts as a wall that reflects these sound waves, preventing the vortex symmetry from breaking.
Stability diagram due to a dynamic Gaussian potential
========================================================
\[sec:dynamicLG\]
![(Color online) Diagram of amplitude versus frequency, where the hatched regions indicate stability, for a condensate containing a triply-charged vortex subjected to **height modulation**. We used $Na_{s}/d_{z}=125$. Convergence is already obtained with two iterations.[]{data-label="fig:pD"}](mPin-L3)
In section \[sec:staticLG\], we have seen that it is possible to make a multi-charged vortex stable using a static Gaussian potential. In addition, we calculated a diagram of height versus interaction strength which shows the stable region. Here we propose to stabilize a multi-charged vortex with a sinusoidal modulation of the height of the Gaussian potential, with an amplitude given by $\delta V$, $$V_{0}\left(t\right)=V_{0}-\delta V\cos\left(\Omega t\right),$$ in the specific region of interaction strength where the static potential is not capable of stabilizing the vortex, i.e. $0<Na_{s}/d_{z}\leq160$. The equation for this case is given by $$M\ddot{\delta}+\left\{ V+V_{G}\left[1-\frac{\delta V}{V_{0}}\cos\left(\tilde{\Omega}\tilde{t}\right)\right]\right\} \delta=0,$$ where the matrices $M$ and $V$ are given in Eq.(\[eq:eigensystem\]), and $V_{G}$ is given by Eq.(\[eq:VLG\]). By scaling the time as $$\frac{\Omega}{\omega_{\rho}}\tilde{t}\rightarrow2\tau,$$ we obtain the Mathieu equation $$\frac{\tilde{\Omega}^{2}}{4}\ddot{\delta}+\left\{ A-2\frac{\delta V}{V_{0}}Q\cos\left(2\tau\right)\right\} \delta=0,\label{eq:Mathieu}$$ where $A=M^{-1}\left(V+V_{G}\right)$ and $Q=\left(1/2\right)M^{-1}V_{G}$ are constant matrices. This equation can be solved using Floquet theory [@mathieu1; @mathieu2; @floquet; @Floquet1; @Floquet2]. The basic idea of this theory is that if a linear differential equation has periodic coefficients, its solutions are periodic functions multiplied by exponentially increasing (or decreasing) factors. Thus linearly independent solutions of the Mathieu equation for any pair of $A$ and $Q$ can be expressed as $$\delta\left(\tau\right)=e^{\pm\eta\tau}P\left(\pm\tau\right),$$ where $\eta$ is the characteristic exponent, a constant depending on both $A$ and $Q$, and $P\left(\tau\right)$ is $\pi$-periodic in $\tau$, so that the solution can be written as an infinite series $$\delta\left(\tau\right)=e^{\eta\tau}\sum_{n=-\infty}^{\infty}b_{2n}e^{2ni\tau},\label{eq:FAnsatz}$$ with $b_{2n}$ the Fourier components. Substituting (\[eq:FAnsatz\]) into (\[eq:Mathieu\]), we obtain $$\left[A+\frac{\tilde{\Omega}^{2}}{4}\left(\eta+2ni\right)^{2}I\right]b_{2n}-Q\left(b_{2n+2}+b_{2n-2}\right)=0.\label{eq:M1}$$ At this point it is convenient to define ladder operators $L_{2n}^{\pm}b_{2n}=b_{2n\pm2}$, which yield $$L_{2n}^{\pm}=\left\{ A+\frac{\tilde{\Omega}^{2}}{4}\left[\eta+2i\left(n\pm1\right)\right]^{2}I-QL_{2n\pm2}^{\pm}\right\} ^{-1}Q.\label{eq:Ladder}$$ Using (\[eq:Ladder\]) to write (\[eq:M1\]) in terms of $b_{0}$ only, we obtain an iterative algorithm in which the ladder operator is repeatedly substituted into itself, giving $$\begin{aligned}
\left(A+\frac{\tilde{\Omega}^{2}}{4}\eta^{2}I-Q\left\{ \left[A+\frac{\tilde{\Omega}^{2}}{4}\left(\eta+2i\right)^{2}I-\cdots\right]^{-1}\right.\right.\nonumber \\
\left.\left.+\left[A+\frac{\tilde{\Omega}^{2}}{4}\left(\eta-2i\right)^{2}I-\cdots\right]^{-1}\right\} Q\right)b_{0} & = & 0.\label{eq:FloquetDet}\end{aligned}$$ Since we are not interested in trivial solutions for $b_{0}$, the determinant of the matrix in (\[eq:FloquetDet\]) must vanish. The resulting stability diagram for the modulated Gaussian potential, as a function of its frequency and amplitude, is presented in fig. \[fig:pD\]; this resonant behavior does not depend on the initial conditions [@PR1].
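As a cross-check of this continued-fraction construction (our addition, not part of the original analysis), the stability of Eq. (\[eq:Mathieu\]) can also be assessed by integrating the system over one period of the drive and examining the Floquet multipliers, i.e. the eigenvalues of the monodromy matrix. In the sketch below the matrices `A` and `Q` are illustrative placeholders for the actual $A=M^{-1}(V+V_{G})$ and $Q=\tfrac{1}{2}M^{-1}V_{G}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_stable(A, Q, Omega, dV_over_V0, tol=1e-7):
    """Stability of (Omega^2/4) delta'' + [A - 2(dV/V0) Q cos(2 tau)] delta = 0.
    Integrate y = (delta, delta') over one period tau in [0, pi] for a basis of
    initial conditions; the motion is stable iff all multipliers satisfy |mu| <= 1."""
    n = A.shape[0]
    pref = 4.0 / Omega**2

    def rhs(tau, y):
        d, v = y[:n], y[n:]
        acc = -pref * ((A - 2*dV_over_V0*np.cos(2*tau)*Q) @ d)
        return np.concatenate([v, acc])

    monodromy = np.empty((2*n, 2*n))
    for j in range(2*n):
        y0 = np.zeros(2*n); y0[j] = 1.0
        sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        monodromy[:, j] = sol.y[:, -1]
    mults = np.linalg.eigvals(monodromy)
    return np.all(np.abs(mults) <= 1.0 + tol), mults

# illustrative 2x2 placeholders (NOT the actual matrices of the text)
A = np.array([[2.0, 0.3],
              [0.3, -0.4]])   # one negative eigenvalue: unstable without drive
Q = np.array([[0.5, 0.1],
              [0.1, 0.5]])

for dV in (0.0, 0.6):
    stable, _ = floquet_stable(A, Q, Omega=2.0, dV_over_V0=dV)
    print("dV/V0 = %.1f: %s" % (dV, "stable" if stable else "unstable"))
```

Scanning `Omega` and `dV_over_V0` with this test reproduces the qualitative structure of resonance tongues expected from a Mathieu-type equation, and provides an independent check of the Floquet-fringe boundaries obtained from $\eta=0$.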
The edges between stable and unstable domains (also called Floquet fringes) were calculated by setting $\eta=0$. Since the equilibrium configuration rarely has a solution for $V_{0}/\hbar\omega_{\rho}\geq Na_{s}/d_{z}$, we only build the stability diagram for $V_{0}/\hbar\omega_{\rho}<Na_{s}/d_{z}$. The iterative algorithm converges very quickly, requiring no more than two iterations.
The stable regions, also called resonance regions, can lead the system to lose coherence if the excitation time is long enough (hundreds of milliseconds, depending on the number of atoms), which leads to the destruction of the condensate state.
The dynamical mechanism works by exciting the resonant mode with the oscillatory potential placed at the center of the condensate, which, for the correct frequency and amplitude, completely suppresses the $Q_{v}$ mode. Since this mode no longer exists, the vortex becomes stable (Fig.\[fig.mPin\]). This is what happens when the static potential cannot stabilize the vortex by itself. On the other hand, when the static potential is enough to prevent the vortex decay, the modulation of the height plays the opposite role, inducing the vortex decay in the resonance regions.
Conclusions
===========
In this paper we have studied the stability of the collective modes, as well as the dynamical stability, of a quasi-2D Bose-Einstein condensate with a multi-charged vortex. The presence of an $\ell$-charged vortex causes a shift in the frequencies of the cloud collective modes, but such changes are not substantial. The vortex rotational mode is an independent degree of freedom and does not affect the vortex stability. The vortex dynamics couples with the collective excitations, and this coupling can be the cause of the $\ell$-charged vortex decay. The decay is driven by the quadrupole oscillation $Q_{v}$, which is one channel through which the $\ell$-charged vortex decays into $\ell$ singly charged vortices, and is the main channel for a doubly-charged vortex to decay into two singly charged vortices. By applying a static Gaussian potential we can prevent the decay of a vortex for specific potential amplitudes, while other regions of the parameter space can be stabilized by a time-periodic modulation of the laser potential.
We acknowledge financial support from the National Council for the Improvement of Higher Education (CAPES) and from the State of São Paulo Foundation for Research Support (FAPESP).
Functions $A_{i}\left(\ell,\alpha_{0}\right)$ and $I_{i}\left(\ell,\alpha_{0}\right)$ {#sec:A}
=====================================================================================
Similar functions to $A_{i}\left(\ell,\alpha_{0}\right)$ for a 3D case have been calculated in Ref.[@rpteles2]. Since it is a Thomas-Fermi wave-function the procedure to evaluate each integral is the same, where we start changing the scale of both $x$ and $y$ coordinates according to $x\rightarrow R_{x}x$ and $y\rightarrow R_{y}y$. By doing this $\xi_{i}$ becomes $\alpha_{i}=\xi_{i}/R_{i}$, i.e. the integral becomes dimensionless. Now it is convenient to change the coordinates from cartesians to polar ($x=\rho\cos\phi$ and $y=\rho\sin\phi$) where the integration domains are $0\leq\rho\leq1$ and $0\leq\phi\leq2\pi$, in this way we have $$A_{0}\left(\ell,\alpha_{0}\right)=\frac{\pi}{\alpha_{0}^{2\ell}}\left[\frac{_{2}F_{1}\left(\ell,\ell+1;\ell+2;-\alpha_{0}^{-2}\right)}{\ell+1}-\frac{_{2}F_{1}\left(\ell,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right],$$ $$A_{1}\left(\ell,\alpha_{0}\right)=\frac{1}{2}\left(-1\right)^{\ell}\pi\alpha_{0}^{4}\left[B\left(-\alpha_{0}^{-2};\ell+2,\ell-1\right)+\alpha_{0}^{2}B\left(-\alpha_{0}^{-2};\ell+3,\ell-1\right)\right],$$ $$A_{2}\left(\ell,\alpha_{0}\right)=\frac{3}{8}\pi\alpha_{0}^{-2\ell}\left[\frac{_{2}F_{1}\left(\ell,\ell+3;\ell+4;-\alpha_{0}^{-2}\right)}{\ell+3}-\frac{_{2}F_{1}\left(\ell,\ell+4;\ell+5;-\alpha_{0}^{-2}\right)}{\ell+4}\right],$$ $$A_{3}\left(\ell,\alpha_{0}\right)=\frac{5}{16}\left(-1\right)^{\ell}\pi\alpha_{0}^{8}\left[B\left(-\alpha_{0}^{-2};\ell+4,\ell-1\right)+\alpha_{0}^{2}B\left(-\alpha_{0}^{-2};\ell+5,\ell-1\right)\right],$$ $$A_{4}\left(\ell,\alpha_{0}\right)=\frac{\pi\left(1+\alpha_{0}^{2}\right)^{-\ell}}{2\ell\left(1+\ell\right)},$$ $$A_{5}\left(\ell,\alpha_{0}\right)=\frac{\pi}{\alpha_{0}^{2\ell}}\left[\frac{_{2}F_{1}\left(\ell,\ell;\ell+1;-\alpha_{0}^{-2}\right)}{\ell}-\frac{_{2}F_{1}\left(\ell,\ell+1;\ell+2;-\alpha_{0}^{-2}\right)}{\ell+1}\right],$$ $$A_{6}\left(\ell,\alpha_{0}\right)=\frac{\pi}{\alpha_{0}^{4\ell}}\left[\frac{_{2}F_{1}\left(2\ell,2\ell+1;2\ell+2;-\alpha_{0}^{-2}\right)}{2\ell+1}-\frac{_{2}F_{1}\left(2\ell,2\ell+2;2\ell+3;-\alpha_{0}^{-2}\right)}{2\ell+2}\right],$$ $$I_{1}\left(\ell,\alpha_{0}\right)=\pi\ell\alpha_{0}\left\{ \left(1+\alpha_{0}^{2}\right)^{-\ell}+\left(-1\right)^{\ell}\left[1+\left(1+\ell\right)\alpha_{0}^{2}\right]B\left(-\alpha_{0}^{-2};\ell+1,-\ell\right)\right\} ,$$ $$I_{2}\left(\ell,\alpha_{0}\right)=\frac{3}{4}\pi\ell\left[\left(1+\alpha_{0}^{2}\right)^{-\ell-1}-\left(\frac{\ell+1}{\alpha_{0}^{2\ell+2}}\right)\frac{_{2}F_{1}\left(\ell+2,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right],$$ $$\begin{aligned}
I_{3}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{2\alpha_{0}^{2\ell+2}}\left[\left(\frac{1+3\ell}{\ell+1}\right)\left(1+\alpha_{0}^{-2}\right)^{-\ell-1}\right.\nonumber \\
& & \left.-\left(\frac{2}{\alpha_{0}^{2}}+3\ell+3\right)\frac{_{2}F_{1}\left(\ell+2,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right],\end{aligned}$$ $$\begin{aligned}
I_{4}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{4\alpha_{0}^{2\ell-4}}\left[\left(\frac{3\ell+1}{\ell+1}\right)\left(1+\alpha_{0}^{-2}\right)^{-\ell-1}\right.\nonumber \\
& & \left.-\left(\frac{2}{\alpha_{0}^{2}}+3\ell+3\right)\frac{_{2}F_{1}\left(\ell+2,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right],\end{aligned}$$ $$I_{5}\left(\ell,\alpha_{0}\right)=-\frac{3}{4}\left(-1\right)^{\ell}\pi\alpha_{0}^{3}\left[B\left(-\alpha_{0}^{-2};\ell+2,-\ell\right)+\alpha_{0}^{2}B\left(-\alpha_{0}^{-2};\ell+3,-\ell\right)\right]$$ $$\begin{aligned}
I_{6}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{16\alpha_{0}^{2\ell}}\left\{ \left[\frac{11\left(2\ell+1\right)+12\alpha_{0}^{-2}}{\left(\ell+1\right)^{-1}}\right]\frac{_{2}F_{1}\left(\ell+2,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right.\nonumber \\
& & \left.-\frac{\alpha_{0}^{-2}+11\left(\ell+1\right)}{\left(1+\alpha_{0}^{-2}\right)^{\ell+1}}\right\} ,\end{aligned}$$ $$I_{7}\left(\ell,\alpha_{0}\right)=\frac{\pi\ell}{16\alpha_{0}^{2\ell}}\left[\frac{_{2}F_{1}\left(\ell+2,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\left(\ell+1\right)^{-1}}-\frac{\ell+1-\alpha_{0}^{-2}}{\left(1+\alpha_{0}^{-2}\right)^{\ell+1}}\right],$$ $$\begin{aligned}
I_{8}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{4\alpha_{0}^{2\ell}}\left\{ \left[\frac{2\ell+4+3\alpha_{0}^{-2}}{\left(\ell+1\right)^{-1}}\right]\frac{_{2}F_{1}\left(\ell+2,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right.\nonumber \\
& & \left.-\frac{\alpha_{0}^{-2}+2\ell+2}{\left(1+\alpha_{0}^{-2}\right)^{\ell+1}}\right\} ,\end{aligned}$$ $$\begin{aligned}
I_{9}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{8\alpha_{0}^{2\ell-6}}\left[\left(\ell+1\right)\left(\frac{3}{\alpha_{0}^{2}}+4+2\ell\right)\frac{_{2}F_{1}\left(\ell+2,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right.\nonumber \\
& & \left.-\frac{\alpha_{0}^{-2}+2\ell+2}{\left(1+\alpha_{0}^{-2}\right)^{\ell+1}}\right],\end{aligned}$$ $$I_{10}\left(\ell,\alpha_{0}\right)=\!\frac{\pi\ell}{4\alpha_{0}^{2\ell-2}}\!\!\left[\!\frac{_{2}F_{1}\left(\ell+1,\ell+2;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\!-\!\frac{_{2}F_{1}\left(\ell+1,\ell+3;\ell+4;-\alpha_{0}^{-2}\right)}{\ell+3}\!\right]\!\!,$$ $$I_{11}\left(\ell,\alpha_{0}\right)=\frac{5}{8}\left(-1\right)^{\ell}\pi\ell\alpha_{0}^{5}\left[B\left(-\alpha_{0}^{-2};\ell+3,-\ell\right)+\alpha_{0}^{2}B\left(-\alpha_{0}^{-2};\ell+4,-\ell\right)\right],$$ $$I_{12}\left(\ell,\alpha_{0}\right)=\frac{\pi}{4}\frac{2\alpha_{0}^{-1}+\left(3\ell+2\right)\alpha_{0}}{\ell\left(\ell+1\right)\left(1+\alpha_{0}^{2}\right)^{\ell+1}},$$ $$I_{13}\left(\ell,\alpha_{0}\right)=\frac{\pi}{4}\frac{2\alpha_{0}^{-1}-\left(\ell-2\right)\alpha_{0}}{\ell\left(\ell+1\right)\left(1+\alpha_{0}^{2}\right)^{\ell+1}},$$ $$I_{14}\left(\ell,\alpha_{0}\right)=\frac{\pi}{8}\frac{3\ell+8+4\alpha_{0}^{-2}+\left[4+\ell\left(5\ell+8\right)\right]\alpha_{0}^{2}}{\ell\left(\ell+1\right)\left(1+\alpha_{0}^{2}\right)^{\ell+2}},$$ $$I_{15}\left(\ell,\alpha_{0}\right)=\frac{\pi}{8}\frac{\left(\ell-2\right)\alpha_{0}^{2}-3}{\left(\ell+1\right)\left(1+\alpha_{0}^{2}\right)^{\ell+2}},$$ $$I_{16}\left(\ell,\alpha_{0}\right)=\frac{\pi}{4}\frac{\left(\ell^{2}-\ell+2\right)\alpha_{0}^{2}-2\alpha_{0}^{-2}-2\ell-4}{\ell\left(\ell+1\right)\left(1+\alpha_{0}^{2}\right)^{\ell+2}},$$ $$I_{17}\left(\ell,\alpha_{0}\right)=\frac{\pi}{8}\frac{2\alpha_{0}^{4}-\left(2\ell-2\right)\alpha_{0}^{6}+\left(\ell^{2}-\ell+2\right)\alpha_{0}^{8}}{\ell\left(\ell+1\right)\left(1+\alpha_{0}^{-2}\right)^{\ell+2}},$$ $$I_{18}\left(\ell,\alpha_{0}\right)=\frac{3\pi}{2\alpha_{0}}\left(1+\alpha_{0}^{2}\right)^{-\ell}\left[1-\ell\frac{_{2}F_{1}\left(1,1;\ell+2;-\alpha_{0}^{-2}\right)}{\ell+1}\right],$$ $$I_{19}\left(\ell,\alpha_{0}\right)=\frac{3\pi}{4\alpha_{0}^{2}}\left(1+\alpha_{0}^{2}\right)^{-\ell},$$ $$I_{20}\left(\ell,\alpha_{0}\right)=\frac{\pi}{2\alpha_{0}^{2}}\left(1+\alpha_{0}^{2}\right)^{-\ell-1}\left[\left(\frac{\ell-1}{\ell+1}\right)\alpha_{0}^{2}-1+2\ell\frac{_{2}F_{1}\left(1,1;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right],$$ $$I_{21}\left(\ell,\alpha_{0}\right)=\frac{\pi\alpha_{0}^{4}}{4}\left(1+\alpha_{0}^{2}\right)^{-\ell-1}\left[\left(\frac{\ell-1}{\ell+1}\right)\alpha_{0}^{2}-1+2\ell\frac{_{2}F_{1}\left(1,1;\ell+3;-\alpha_{0}^{-2}\right)}{\ell+2}\right],$$ $$\begin{aligned}
I_{22}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{\alpha_{0}^{4\ell-1}}\left[\frac{_{2}F_{1}\left(2\ell+2,2\ell+1;2\ell+3;-\alpha_{0}^{-2}\right)}{\ell+1}\right.\nonumber \\
& & \left.-2\frac{_{2}F_{1}\left(2\ell+1,2\ell+1;2\ell+2;-\alpha_{0}^{-2}\right)}{2\ell+1}\right],\end{aligned}$$ $$I_{23}\left(\ell,\alpha_{0}\right)=\frac{3}{4}\pi\ell\left[\frac{2}{\left(1+\alpha_{0}^{2}\right)^{2\ell+1}}-\left(\frac{2\ell+1}{\alpha_{0}^{4\ell+2}}\right)\frac{_{2}F_{1}\left(2\ell+2,2\ell+2;2\ell+3;-\alpha_{0}^{-2}\right)}{\ell+1}\right],$$ $$\begin{aligned}
I_{24}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{\alpha_{0}^{4\ell}}\left[\left(\frac{2\ell-1}{2\ell+1}\right)\alpha_{0}^{-2}\left(1+\alpha_{0}^{-2}\right)^{-2\ell-1}\right.\nonumber \\
& & +\frac{4}{\alpha_{0}^{4}}\left(\frac{\ell+1}{2\ell+2}\right)\frac{_{2}F_{1}\left(2\ell+2,2\ell+3;2\ell+4;-\alpha_{0}^{-2}\right)}{2\ell+3}\nonumber \\
& & \left.-\left(\frac{2\ell-1}{\alpha_{0}^{2}}+\frac{2}{\alpha_{0}^{4}}\right)\frac{_{2}F_{1}\left(2\ell+2,2\ell+2;2\ell+3;-\alpha_{0}^{-2}\right)}{2\ell+2}\right],\end{aligned}$$ $$\begin{aligned}
I_{25}\left(\ell,\alpha_{0}\right) & = & \frac{\pi\ell}{2\alpha_{0}^{4\ell}}\left\{ \left(\frac{2\ell-1}{2\ell+1}\right)\alpha_{0}^{4}\left(1+\alpha_{0}^{-2}\right)^{-2\ell-1}\right.\nonumber \\
& & -\left[2+\left(2\ell-1\right)\alpha_{0}^{4}\right]\frac{_{2}F_{1}\left(2\ell+2,2\ell+2;2\ell+3;-\alpha_{0}^{-2}\right)}{2\ell+2}\nonumber \\
& & \left.+4\alpha_{0}^{2}\left(\frac{\ell+1}{2\ell+2}\right)\frac{_{2}F_{1}\left(2\ell+2,2\ell+3;2\ell+4;-\alpha_{0}^{-2}\right)}{2\ell+3}\right\} .\end{aligned}$$ Where $_{p}F_{q}\left(a_{1},\ldots,a_{p};b_{1},\ldots,b_{q};x\right)$ are the hypergeometric functions, and $B\left(x;a,b\right)$ are beta functions. The functions derived from Gaussian potential have not an easy general form, then we write them in integral form: $$A_{7}\left(\ell,\alpha_{0}\right)=2\pi\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell}\left(1-\rho^{2}\right)\rho d\rho,$$ $$I_{26}\left(\ell,\alpha_{0}\right)=-\frac{2\pi}{\alpha_{0}^{2}}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell}\left(1-\rho^{2}\right)\rho^{3}d\rho,$$ $$I_{27}\left(\ell,\alpha_{0}\right)=\frac{\pi}{\alpha_{0}^{4}}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell}\left(\frac{3}{2}\rho^{2}-\alpha_{0}^{2}\right)\left(1-\rho^{2}\right)\rho^{3}d\rho,$$ $$I_{28}\left(\ell,\alpha_{0}\right)=\frac{\pi}{\alpha_{0}^{4}}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell}\left(1-\rho^{2}\right)\rho^{5}d\rho,$$ $$I_{29}\left(\ell.\alpha_{0}\right)=-2\pi\ell\alpha_{0}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell+1}\left(1-\rho^{2}\right)d\rho,$$ $$I_{30}\left(\ell,\alpha_{0}\right)=\frac{3}{2}\pi\ell\left(\ell+1\right)\alpha_{0}^{2}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell}\frac{\left(1-\rho^{2}\right)}{\left(\rho^{2}+\alpha_{0}^{2}\right)^{2}}\rho d\rho,$$ $$I_{31}\left(\ell,\alpha_{0}\right)=\pi\ell\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell}\left[\frac{\left(\ell-1\right)\alpha_{0}^{2}-2\rho^{2}}{\left(\rho^{2}+\alpha_{0}^{2}\right)^{2}}\right]\left(1-\rho^{2}\right)\rho d\rho,$$ $$I_{32}\left(\ell,\alpha_{0}\right)=\frac{3\pi\ell}{\alpha_{0}}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell+1}\left(1-\rho^{2}\right)\rho d\rho,$$ $$I_{33}\left(\ell,\alpha_{0}\right)=\frac{\pi\ell}{\alpha_{0}}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell+1}\left(1-\rho^{2}\right)\rho d\rho,$$ $$I_{34}\left(\ell,\alpha_{0}\right)=\frac{1}{2}\pi\ell\alpha_{0}^{6}\int_{0}^{1}e^{-\rho^{2}/\alpha_{0}^{2}}\left(\frac{\rho^{2}}{\rho^{2}+\alpha_{0}^{2}}\right)^{\ell}\left[\frac{\left(\ell-1\right)\alpha_{0}^{2}-2\rho^{2}}{\left(\alpha_{0}+\rho^{2}\right)^{2}}\right]\left(1-\rho^{2}\right)\rho d\rho.$$
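As a practical note (our addition), several of the closed-form coefficients above can be evaluated directly with standard special-function libraries. The sketch below evaluates $A_{0}$, $A_{4}$, $A_{5}$ and $A_{6}$ with `scipy.special.hyp2f1`, following the expressions of this appendix; the coefficients involving incomplete beta functions of negative argument require analytic continuation and are not included here.

```python
import numpy as np
from scipy.special import hyp2f1

def A0(l, a0):
    z = -a0**-2
    return np.pi / a0**(2*l) * (hyp2f1(l, l+1, l+2, z)/(l+1)
                                - hyp2f1(l, l+2, l+3, z)/(l+2))

def A4(l, a0):
    return np.pi * (1 + a0**2)**(-l) / (2*l*(1+l))

def A5(l, a0):
    z = -a0**-2
    return np.pi / a0**(2*l) * (hyp2f1(l, l, l+1, z)/l
                                - hyp2f1(l, l+1, l+2, z)/(l+1))

def A6(l, a0):
    z = -a0**-2
    return np.pi / a0**(4*l) * (hyp2f1(2*l, 2*l+1, 2*l+2, z)/(2*l+1)
                                - hyp2f1(2*l, 2*l+2, 2*l+3, z)/(2*l+2))

# example: doubly charged vortex with core-to-radius ratio alpha_0 = 0.2
for f in (A0, A4, A5, A6):
    print(f.__name__, f(2, 0.2))
```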
---
abstract: 'In [@us] we proposed a generalization of the BMS group ${\cal G}$ which is a semi-direct product of supertranslations and smooth diffeomorphisms of the conformal sphere. Although an extension of BMS, ${\cal G}$ is a symmetry group of asymptotically flat space times. By taking ${\cal G}$ as a candidate symmetry group of the quantum gravity S-matrix, we argued that the Ward identities associated to the generators of $\textrm{Diff}(S^{2})$ were equivalent to the Cachazo-Strominger subleading soft graviton theorem. Our argument however was based on a proposed definition of the $\textrm{Diff}(S^{2})$ charges which we could not derive from first principles as ${\cal G}$ does not have a well defined action on the radiative phase space of gravity. Here we fill this gap and provide a first principles derivation of the $\textrm{Diff}(S^{2})$ charges. The result of this paper, in conjunction with the results of [@supertranslation; @us] prove that the leading and subleading soft theorems are equivalent to the Ward identities associated to ${\cal G}$.'
author:
- Miguel Campiglia
- Alok Laddha
title: 'New symmetries for the Gravitational S-matrix'
---
Introduction
============
The space of asymptotically flat spacetimes which satisfy Einstein’s equations is very rich. For example, there is an infinite dimensional symmetry group associated to this space which is known as BMS group [@bms; @sachs]. BMS group is intrinsically tied to the null boundary of any asymptotically flat spacetime, which in turn has a topology of $S^{2}\times{\bf R}$. The group is a semidirect product of an abelian group of angle-dependent translations along the null direction (referred to as ‘supertranslations’) times the group of global conformal transformations of the 2-sphere, the Lorentz group. As asymptotically flat spacetimes have future as well as past null infinities, the complete group which can be associated to such spaces is a direct product $\textrm{BMS}^{+}\times\textrm{BMS}^{-}$. In a beautiful piece of work [@strominger0] Strominger introduced a remarkable notion of “energy conserving" diagonal subgroup $\textrm{BMS}^{0}$. It was then shown in [@supertranslation] that if we assume $\textrm{BMS}^{0}$ is a symmetry group of the (perturbative) quantum gravity S-matrix, then the Ward identities associated to supertranslations are in a precise sense equivalent to Weinberg’s soft graviton theorem [@weinberg] which relates $n$-particle scattering amplitude with $(n-1)$-particle scattering amplitude when one of the external particles is a graviton of vanishing energy. We refer the reader to [@strominger0; @supertranslation] for more details and for the precise definition of $\textrm{BMS}^{0}$.\
A natural question then arises, namely if the subleading soft graviton theorem conjectured by Strominger and proved in [@cs; @plefka; @bern] [^1] is also a manifestation of Ward identities associated to some symmetry of the perturbative S matrix. In [@virasoro] it was argued that the subleading soft theorem can yield Ward identities associated to ‘extended BMS’ symmetries [@bt0; @bt]. The extended BMS group is a semidirect product of local (as opposed to global in the BMS case) conformal group of the two-sphere, also known as Virasoro group, and supertranslations. However due to difficulties associated to the singular nature of local conformal Killing vector fields (CKVs) on the conformal sphere, it was not clear how to obtain the subleading soft theorem from Ward identities.\
In [@us], motivated by the precise equivalence between supertranslation Ward Identities and Weinberg’s soft graviton theorem, we argued that the Cachazo-Strominger (CS) theorem[^2] was in fact equivalent to Ward identities associated not to the Virasoro group but to the group of sphere diffeomorphisms $\diff(S^{2})$ which belongs not to the ‘extended BMS’ group but to what we called ‘generalized BMS’ group $\G$. This group has the same structure as the (extended) BMS group, but instead of (local) CKVs of the conformal sphere one allows for arbitrary smooth sphere vector fields. That is, $\G$ is a semidirect product of diffeomorphisms of the conformal sphere and supertranslations. Based on earlier literature on radiative phase space and asymptotic symmetries [@aaprl; @as; @aajmp; @aaoxf; @aabook; @bt0; @bt] it was argued in [@us] that $\G$ is a symmetry of Einstein’s equations (with zero cosmological constant) if one allows for arbitrary metrics on the conformal sphere. We then showed that if the charges[^3] of the $\diff(S^2)$ generators were exactly equal to the charges of Virasoro generators given in [@virasoro] then the Cachazo-Strominger theorem was equivalent to the Ward identities associated to $\diff(S^2)$\
A key question left unanswered in [@us] was whether the proposed charges could be derived from canonical methods. The difficulty stems from the fact that the action of $\G$ does not preserve Ashtekar’s radiative phase space (see [@aa2014] for a recent review). In more detail, the radiative phase space $\Gamma^{q}$ depends on a given choice of sphere 2-metric $q_{AB}$ or ‘frame’[^4] at null infinity. In contrast to the BMS group, $\G$ does not preserve $\Gamma^{q}$ since the $\diff(S^2)$ factor does not preserve the given frame $q_{AB}$. Thus, the strategy that had successfully led to BMS charges [@as] could not be applied here.\
It is then natural to attempt to work in the space $\Gamma \sim \cup_{\{ q \}} \Gamma^{q}$ of *all* radiative phase spaces, on which $\G$ acts in a well defined manner. However, so far we lacked a symplectic structure on $\Gamma$. It is here that we turn to covariant phase space methods [@abr; @lw].
It is well known [@am] that the symplectic structure on $\Gamma^{q}$ corresponds to the GR covariant phase space symplectic structure $\Ocov$ evaluated at null infinity. Here we will show that $\Ocov$ naturally defines a symplectic structure on (a suitable subspace of) $\Gamma$. By realizing $\Gamma^{\qo}$ as a symplectic subspace of $\Gamma$, we will be able to derive the $\G$-charges that were postulated in [@us].\
The outline of the paper is as follows. Section \[sec2\] provides the background material for our discussion. In \[sec2.1\] we describe the class of spacetimes under consideration, following closely reference [@bt]. In \[sec2.2\] we review the definition of the generalized BMS group $\G$ as a symmetry group of such spacetimes. In \[sec2.3\] we recall the definition of the radiative phase space associated to an arbitrary ‘frame’ and introduce the total space $\Gamma$ of all such radiative phase spaces. We also introduce certain subspaces with stronger fall-offs in $u$ that play a crucial role in the later discussion.
Section \[sec3\] is the main part of the paper. \[sec3.1\] describes the general idea behind our computation. In \[sec3.2\] we show that the covariant phase space symplectic structure induces a symplectic structure on (suitable subspace of) $\Gamma$. In section \[sec3.3\] we use this symplectic structure to derive the charges associated to the generators of $\diff(S^2)$. This represents the main result of the paper. In \[sec3.4\] we summarize the analogue results at past null infinity (our detailed calculations take place in future null infinity). Finally in \[sec3.5\] we give a brief summary of the results presented in [@us] on the equivalence between the $\diff(S^2)$ Ward identities and the CS theorem.
In section \[sec4\] we argue that subleading soft gravitons can be thought of as Goldstone modes of a spontaneous symmetry breaking $\G \to \textrm{BMS}$, in complete parallel to how leading soft gravitons are thought as Goldstone modes of a spontaneous symmetry breaking from supertranslations to translations [@supertranslation].
We end with the conclusions in section \[sec5\].\
Preliminaries {#sec2}
=============
Spacetimes under consideration {#sec2.1}
------------------------------
As in [@strominger0; @supertranslation; @virasoro] we are interested in spacetimes that are asymptotically flat at both future and past null infinity. For concreteness we focus on the description from future null infinity; similar considerations apply to the description from past null infinity. We follow closely Reference [@bt]. In Bondi coordinates $(u,r,x^A)$ the 4-metric is parameterized as $$ds^2 = (V/r)\, e^{2\beta} du^2 - 2 e^{2\beta} du\, dr + g_{AB}(dx^A-U^A du)(dx^B - U^B du), \label{4metric}$$ with $\beta,V/r,U^A$ and $g_{AB}$ satisfying the $r\to \infty$ fall-offs $$\beta = \hat{\beta}\, r^{-2}+ O(r^{-3}), \quad V/r= \hat{V} + r^{-1}\, 2 M +O(r^{-2}), \quad U^A= r^{-2}\, \hat{U}^A +O(r^{-3}), \label{falloffs}$$ $$g_{AB}= r^{2}q_{AB} + r\, C_{AB} + \tfrac{1}{4}\, q_{AB}\, C^2 +O(r^{-1}). \label{gAB}$$ Here $C^2 \equiv C_{AB} C^{AB}$ (sphere indices are raised and lowered with $q_{AB}$) and the coefficients of the $1/r$ expansion in (\[falloffs\]) and (\[gAB\]) are functions of $u$ and $x^A$ except for $q_{AB}$ which, in contrast to [@bt], we assume to be $u$-independent. There is an additional gauge fixing condition $\det(g_{AB})= r^{4}\det(q_{AB})$ which in particular implies that $q^{AB}C_{AB}=0$ and that the trace part of the $O(1)$ term of $g_{AB}$ has the form given in (\[gAB\]). We take the trace-free $O(1)$ part of $g_{AB}$ to be zero as in the original treatment by Sachs (see discussion following Eq. (4.38) in [@bt]).
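For completeness, we add here a short check (ours, not part of the original derivation) of how the determinant condition fixes the trace part quoted in (\[gAB\]). Writing the $O(1)$ part of $g_{AB}$ as $D_{AB}$ and using $q^{AB}C_{AB}=0$ together with $\det(1+A)=1+{\rm tr}A+\tfrac{1}{2}[({\rm tr}A)^{2}-{\rm tr}(A^{2})]+\ldots$, one finds $$\det(g_{AB}) = r^{4}\det(q_{AB})\left[1+\frac{1}{r^{2}}\left(q^{AB}D_{AB}-\tfrac{1}{2}\, C_{AB}C^{AB}\right)+O(r^{-3})\right],$$ so that $\det(g_{AB})=r^{4}\det(q_{AB})$ requires $q^{AB}D_{AB}=\tfrac{1}{2}C^{2}$, i.e. the pure-trace choice $D_{AB}=\tfrac{1}{4}q_{AB}C^{2}$ used in (\[gAB\]).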
An important difference with the treatment of [@bt] is that we do not demand $q_{AB}$ to be proportional to the unit round metric $\qo_{AB}$; so far it can be *any* sphere metric. One can verify that Einstein equations still imply the relations given in Equations (4.42), (4.36) and (4.37) of [@bt]: $$\hat{V} = - \frac{\R}{2}, \qquad \hat{\beta} = - \frac{1}{32}\, C^2, \qquad \hat{U}^A= - \frac{1}{2}\, D_B C^{AB},$$ where $\R$ and $D_A$ are respectively the scalar curvature and covariant derivative of $q_{AB}$. Finally, as it will become clear in the next subsection, the natural space of metrics to consider from the point of view of the generalized BMS group $\G$ is one where the area form of $q_{AB}$ is fixed: $\sqrt{q}= \sqrt{\qo}$.
To summarize, we will be interested in spacetime metrics of the form (\[4metric\]) parametrized by ‘free data’ $(q_{AB},C_{AB})$ [^5] satisfying $$\partial_u q_{AB}=0, \qquad \sqrt{q}= \sqrt{\qo}, \qquad q^{AB} C_{AB} =0. \label{fdspace}$$ In section \[fallC\] we describe conditions on $C_{AB}$ as $u \to \pm \infty$.
Definition of $\G$ {#sec2.2}
------------------
As described in [@us], from a spacetime perspective $\G$ can be characterized as the group of diffeomorphisms generated by (non-trivial at null infinity) vector fields $\xi^a$ preserving the form of the metric (\[4metric\]) and such that they are asymptotically divergence-free (instead of asymptotically Killing as in the BMS case). Such vector fields are parametrized by a sphere function $f(\xh)$ (supertranslation) and sphere vector field $V^A(\xh)$ according to [@bt; @us]: $$\xi^a_f = f\, \partial_u + \ldots, \qquad \xi_V^{a} = V^{A} \partial_A + u\,\alpha\, \partial_u - r\,\alpha\, \partial_r + \ldots \label{xiVf}$$ where $\alpha = (D_C V^C)/2$ and the dots indicate subleading terms in the $1/r$ expansion that depend on $f$, $V$ and on the 4-metric ‘free data’. The relations defining the algebra $\lie(\G)$ are obtained by computing the leading terms of the Lie brackets of the vector fields (\[xiVf\]). One finds: $$[\xi_{f_1},\xi_{f_2}]=0, \qquad [\xi_{V_1},\xi_{V_2}] = \xi_{[V_1,V_2]}, \qquad [\xi_{V},\xi_{f}] = \xi_{\pounds_V f- \alpha f}. \label{algebra}$$ Thus as in the (extended) BMS case, $\lie(\G)$ has a semidirect sum algebra structure, where supertranslations form an abelian ideal $\lie(\st)$ and $\lie(\G)/\lie(\st)$ is the algebra of sphere vector fields. Similarly to the BMS case, one can also characterize the group $\G$ as diffeomorphisms of an abstract $\scri$ preserving certain structure (see section 4.1 of [@us]).
By computing the Lie derivative of the metric (\[4metric\]) along the vector fields (\[xiVf\]) one obtains the following action of $\lie(\G)$ on the free data [@bt; @us]: $$\delta_f q_{AB} = 0, \qquad \delta_f C_{AB} = f\, \dot{C}_{AB} - 2 (D_A D_B f)^{\tf},$$ $$\delta_V q_{AB} = \pounds_V q_{AB}-2\,\alpha\, q_{AB}, \qquad \delta_V C_{AB} = \pounds_V C_{AB} - \alpha\, C_{AB} + u\,\alpha\, \dot{C}_{AB} - 2 u\, (D_A D_B \alpha)^{\tf}, \label{deltaV}$$ where $\dot{C}_{AB} \equiv \partial_u C_{AB}$ and ‘$\tf$’ denotes trace-free part with respect to $q_{AB}$. In appendix \[appclosure\] we verify this action indeed reproduces the algebra (\[algebra\]): $$[\delta_{f_1},\delta_{f_2}]=0, \qquad [\delta_{V_1},\delta_{V_2}] = - \delta_{[V_1,V_2]}, \qquad [\delta_{V},\delta_{f}] =- \delta_{\pounds_V f- \alpha f}. \label{algebrafd}$$ Here $\delta_f$ and $\delta_V$ are understood as vector fields on the space of free data $\{ (q_{AB},C_{AB}) \}$ satisfying (\[fdspace\]) (strictly speaking $-\delta_f$ and $-\delta_V$ are the vector fields that provide the representation of the algebra (\[algebra\])).
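As a quick illustration (our sketch; the full closure computation, including the action on $C_{AB}$, is the subject of the appendix), the second relation in (\[algebrafd\]) can be checked directly on $q_{AB}$. Since the area form $\sqrt{q}$ is fixed, $\alpha=\tfrac{1}{2}D_{A}V^{A}$ is field-independent, and $$[\delta_{V_1},\delta_{V_2}]\, q_{AB} = \left(\pounds_{V_2}\pounds_{V_1}-\pounds_{V_1}\pounds_{V_2}\right) q_{AB} - 2\left(\pounds_{V_2}\alpha_1-\pounds_{V_1}\alpha_2\right) q_{AB} = -\pounds_{[V_1,V_2]}\, q_{AB} + 2\,\alpha_{[V_1,V_2]}\, q_{AB} = -\delta_{[V_1,V_2]}\, q_{AB},$$ where the last equality uses $\alpha_{[V_1,V_2]}=\pounds_{V_1}\alpha_2-\pounds_{V_2}\alpha_1$, which follows from the identity $D_{A}[V_1,V_2]^{A}=V_{1}^{B}\partial_{B}(D_{A}V_{2}^{A})-V_{2}^{B}\partial_{B}(D_{A}V_{1}^{A})$.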
Similar analysis on past null infinity yields a generalized BMS group associated to $\scri^-$. Thus the total group acting on the spacetimes we are interested in is $\G^+ \times \G^-$, ($\G^+$ is what we have been calling $\G$). The proposed symmetry group of the gravitational S matrix is the ‘diagonal’ subgroup $\G^0 \subset \G^+ \times \G^-$ defined in analogy to Strominger’s $\textrm{BMS}^0$ [@us].
Radiative phase spaces {#fallC}
----------------------
\[sec2.3\]
We first recall the asymptotic conditions on $C_{AB}$ that ensure well-definedness of the radiative phase spaces [@as]. The radiative phase space $\Gamma^{q}$ associated to a sphere metric $q_{AB}$ is given by tensors $C_{AB}$ on $\scri$ satisfying: $$\Gamma^{q}\ :=\ \{ C_{AB} : \;\ q^{AB} C_{AB} =0, \quad C_{AB}(u,\xh) = u (\rho_{AB})^{\tf}+ C^{\pm}_{AB}(\xh) + O(u^{-\epsilon}) \}, \label{defGq}$$ where $\epsilon>0$. Here $\rho_{AB}$ is a fixed tensor that depends on $q_{AB}$; its definition is reviewed in section \[ssb1\]. The radiative phase space traditionally used in the literature is the one associated to the unit round metric $\qo_{AB}$ on which $(\rho_{AB})^\tf = 0$.\
We define $\Gamma$ as the union of all $\Gamma^q$ spaces with given area element $\sqrt{q}=\sqrt{\qo}$: $$\Gamma := \bigcup_{\sqrt{q}=\sqrt{\qo}} \Gamma^q. \label{defGamma}$$ The properties of $\rho_{AB}$ in (\[defGq\]) ensure that the action (\[deltaV\]) preserves the form of the linear in $u$ term in (\[defGq\]), so that indeed $\G$ has a well defined action on $\Gamma$. The precise mechanism by which this occurs is described in section \[ssb1\].\
As in [@us], due to infrared issues, the charges associated to (extended and) generalized BMS group will be defined on the following subspace of $\Gamma^{\qo}$: $$\begin{array}{lll}
\Gamma^{\qo}_0 := \{ C_{AB} \in \Gamma^{\qo} : C_{AB}(u,\xh)= O(u^{-1-\epsilon}) \quad \text{as } \; u \to \pm \infty \}. \label{defGqoo}
\end{array}$$
We similarly define a subspace $\Gamma_0 \subset \Gamma$ on which the covariant phase space symplectic structure will turn out to be well-defined: $$\Gamma_0:=\left\{ (q_{AB},C_{AB}) :\ \partial_u q_{AB}=0,\ \sqrt{q} = \sqrt{\qo},\ q^{AB} C_{AB} =0,\ C_{AB}(u,\xh) = O(u^{-1-\epsilon}) \right\} . \label{defGo}$$
The spaces (\[defGqoo\]) and (\[defGo\]) will play an essential role in ensuring that integrals in $u$ are finite. We would like to emphasize, however, that this way of avoiding IR divergences is not entirely satisfactory: $\Gamma_0$ is not preserved by $\G$ and $\Gamma^{\qo}_0$ is not preserved by supertranslations[^6]. There may be better ways of dealing with these IR issues, for instance by introducing appropriate counterterms (see footnote \[countertermfn\]). We hope to return to this point in future investigations.
Main section {#sec3}
============
General idea {#sec3.1}
------------
In this section we show that, starting from the covariant phase space derived from the Einstein–Hilbert action, one can obtain a phase space at null infinity which is coordinatized not only by the radiative degrees of freedom $C_{AB}$ but also by the 2-metric $q_{AB}$ on the conformal sphere. It turns out that the radiative phase space is a symplectic subspace of this larger space. This will allow us to compute the corresponding charge, which is well defined on a (suitable) subspace of $\Gamma^{\qo}$, where $\qo_{AB}$ is the unit round metric on the 2-sphere. This is the main result of the paper, which, combined with the result of [@us], shows that the Ward identities associated to $\G$ are equivalent to the CS soft theorem.\
The main idea can be summarized as follows. Let $$\O_{t,g}(\delta,\delta' ) := \int_{\Sigma_t} dS_a\, \omega^a_g(\delta,\delta'), \label{Ot}$$ be the standard covariant phase space symplectic form [@abr; @lw] evaluated on a $t := r+ u=$constant slice $\Sigma_t$. If we characterize 4-metrics $g_{ab}$ by the free data $(q_{AB},C_{AB})$, the $t \to \infty$ limit of (\[Ot\]) could correspond to a symplectic product defined at $\scri^+$. However, one needs to impose conditions on the given fields $(q_{AB},C_{AB}, \delta, \delta')$ in order for this limit to be well defined. For instance, for variations $\delta,\delta'$ such that $\delta q_{AB}=\delta' q_{AB}=0$ and such that $\delta C_{AB}, \delta' C_{AB}$ satisfy appropriate fall-offs in $u$, this procedure reproduces Ashtekar’s radiative phase space symplectic structure [@am]. In section \[sstr\] we show that if $C_{AB}$ and its variation are taken to be $O(u^{-1-\epsilon})$ and $q_{AB}$ is allowed to vary, then (\[Ot\]) also has a well defined $t \to \infty$ limit. In other words, the covariant phase space symplectic form induces a well defined symplectic form on the space $\Gamma_0$ defined in Eq. (\[defGo\]).
This is not quite yet what we need, since $\delta_V \notin T \Gamma_0$ due to the linear in $u$ term in (\[deltaV\]). We now explain the computation we are really interested in. Given the phase space $\Gamma^{\qo}_{0}$ and given a symmetry generator $V \in \lie(\G)$, we would like to compute its associated charge $H_{V}$ as a function on $\Gamma^{\qo}_{0}$. Now in general, given a Hamiltonian vector field on any phase space, we first compute the differential of the corresponding Hamiltonian function, which upon integration yields the corresponding Hamiltonian. This implies that in our case, given the action of $\delta_{V}$, what we would really like to compute is $$\O(\delta_{V},\delta)\ =:\ \delta H_{V} \label{defHV}$$ where the variation $\delta$ must be along $\Gamma^{\qo}_{0}$. As we will see, the symplectic product needed in (\[defHV\]) is still well defined. In other words, Eq. (\[Ot\]) also has a well defined $t \to \infty$ limit when $\delta =\delta_V$ and $\delta' \in T \Gamma^{\qo}_{0}$.
Symplectic structure on $\Gamma_0$ {#sstr}
----------------------------------
\[sec3.2\] In this section we evaluate the $t \to \infty$ limit of (\[Ot\]) on $\Go$. The computation is simplified by working with the symplectic potential $$\Theta_{t,(q,C)}(\delta) := \int_{\Sigma_t} dS_a\, \theta^a, \qquad \theta^a := \frac{\sqrt{-g}}{2}\left( g^{bc} \delta\Gamma^a_{bc} - g^{a b} \delta\Gamma^c_{c b} \right).$$ Recall $t=r+u$ so that the relevant component will be $\theta^t = \theta^r+\theta^u$. The limit is taken $t \to \infty$ with $u$ constant. The variable $r$ will be understood as given by $r=t-u$.\
Before proceeding with the details, we summarize certain salient aspects of the computation which critically use the fall-off conditions on $C_{AB}$.\
[**(a)**]{} We will see that in the limit $t \to \infty$, conditions $C_{AB}=O(u^{-1-\epsilon})$ and $\delta \in T\Gamma_0$ will ensure finiteness of the integrals as well as cancellation of various boundary terms.[^7] We will discard total variation terms, since they do not contribute to the symplectic form.
[**(b)**]{} We will find that $\theta^t$ has a $1/r$ expansion of the form: $$\theta^t = r\, \theta^t_1 + \theta^t_0 + O(r^{-1}),$$ which gives the following $1/t$ expansion: $$\theta^t = t\, \theta^t_1 + (\theta^t_0 - u\, \theta^t_1) + O(t^{-1}).$$ The would-be divergent term $t\, \theta^t_1$ will turn out to integrate to zero on the space of $C$’s we are restricting attention to.\
We now proceed with the details of the computation.
The form of the metric (\[4metric\]) implies that the non-zero terms appearing in $\theta^t$ are: $$\begin{gathered}
2 \theta^t = \sqrt{q} e^{2 \beta} r^{2} \big( 2 g^{ur} \delta \Gamma^r_{ur} + g^{rr}\delta \Gamma^r_{ur}+ 2 g^{Ar}\delta \Gamma^r_{A r} + g^{AB}(\delta \Gamma^r_{AB}+ \delta \Gamma^u_{AB}) \\ - g^{ur}(\delta \Gamma^c_{cu}+ \delta \Gamma^c_{c r}) -g^{Ar} \delta \Gamma^c_{cA} \big). \label{thetat}\end{gathered}$$ There are 6 terms in (\[thetat\]); let us call them (1)...(6). Setting to zero terms that are $O(r^{-1})$ we get: (1)= -2 M , (2)= (3)=0, (5)= 2 , (6) = \^A \_A () For the space we are interested in, where $\sqrt{q}$ is fixed, the term (6) vanishes and the terms (1) and (5) are total variations which do not contribute to the symplectic structure. The only nonzero term is the fourth one in (\[thetat\]), so we rewrite $\theta^t$ as:
\^t = e\^[2 ]{} r\^[2]{} g\^[AB]{}(\^r\_[AB]{}+ \^u\_[AB]{}) . \[thetat2\] The relevant Christoffel symbols are (see [@bt]): \_[AB]{}\^r & = & D\_[(A]{} \_[B)]{} + \_[AB]{} +q\_[AB]{} \_u (C\^2)+ r q\_[AB]{} + C\_[AB]{}+ 2 M q\_[AB]{} +O(r\^[-1]{})\[GammarAB\]\
\^[u]{}\_[AB]{} &= & - g\^[ur]{} \_r g\_[AB]{} = r q\_[AB]{} + C\_[AB]{} +O(r\^[-1]{}). We now evaluate the first term in (\[thetat2\]): r\^[2]{}e\^[2]{}g\^[AB]{} \^[r]{}\_[AB]{} = q\^[AB]{}\^[r]{}\_[AB]{} -r\^[-1]{} C\^[AB]{} \^[r]{}\_[AB]{} +O(r\^[-1]{}). \[thetar\] In evaluating (\[thetar\]) we will discard terms of the form $q^{AB} \delta (f q_{AB})$ for any quantity $f$ since they give a total variation by the identity q\^[AB]{} (f q\_[AB]{}) = ( f) that follows from $q^{AB}q_{AB}=2$ and $2 \delta \sqrt{q}= \sqrt{q} q^{AB} \delta q_{AB}$. Substituting (\[GammarAB\]) in (\[thetar\]) we get r\^[2]{}e\^[2]{}g\^[AB]{} \^[r]{}\_[AB]{} = q\^[AB]{} ( D\_A \_B) - C\^[AB]{}\_[AB]{} - C\^[AB]{}q\_[AB]{} + q\^[AB]{} \_[AB]{} + ( ) +O(r\^[-1]{}) \[thetar2\] where $\delta()$ indicates a total variation term. When we substitute $r=t-u$, the linear in $r$ term in (\[thetar2\]) gives a potential diverging linear in $t$ term and a finite term - q\^[AB]{} \_[AB]{} = C\^[AB]{} q\_[AB]{} - \_u(C\_[AB]{})q\^[AB]{} + () \[uqC\] which we rewrote up to total derivative in $u$ and a total variation. *Now, the condition $C_{AB}=O(u^{-1-\epsilon})$ implies the total derivative in (\[uqC\]) as well as the potential diverging term $\frac{t}{2}q^{AB} \delta \dot{C}_{AB}$ give a vanishing contribution upon integration.*
The second term in (\[thetat2\]) gives: r\^[2]{}e\^[2]{}g\^[AB]{} \^[u]{}\_[AB]{} & = & q\^[AB]{}\^[u]{}\_[AB]{} -r\^[-1]{} C\^[AB]{} \^[u]{}\_[AB]{} +O(r\^[-1]{})\
& = & - C\^[AB]{}q\_[AB]{} + ()+O(r\^[-1]{}) Note that this term cancels the term in (\[uqC\]). Collecting all terms and writing for later convenience q\^[AB]{} ( D\_A \_B) = D\^A \^B q\_[AB]{} + () we obtain the following expression for the symplectic potential $\Theta(\delta):= \lim_{t \to \infty} \Theta_{t}(\delta)$ at $\scri^+$: ()= \_ du (- C\^[AB]{} \_[AB]{} + q\_[AB]{} ). The corresponding symplectic form at $\scri^+$ is then: $$\O(\delta,\delta') = \frac{1}{4}\int_{\scri} du \left(\delta C^{AB}\, \delta' \dot{C}_{AB} - \delta\big(2 D^{(A} \Uo^{B)} - \vo\, C^{AB} \big)\, \delta' q_{AB} \right) - \delta \leftrightarrow \delta'. \label{sf}$$ We have thus obtained a symplectic form on the space $\Gamma_0$ defined in Eq. (\[defGo\]). Clearly, the radiative phase space $\Gamma^{\qo}_0$ is a symplectic subspace of $\Gamma_0$.\
We conclude with the observation that (\[sf\]) can actually be used for the evaluation of the symplectic product between $\delta_V \in T \Gamma$ and $\delta_0 \in T \Gamma^{\qo}_0$.
By introducing a second variation in all steps above, one can verify that: $$\lim_{t \to \infty} \O_t(\delta_V,\delta_0) = \O(\delta_V,\delta_0),$$ with $\O$ given in (\[sf\]). Indeed, there are only two potentially problematic terms in the computation that are described after Eq. (\[uqC\]). Their contribution to the density $\omega^t(\delta_V,\delta_0)$ is: -\_V q\^[AB]{} \_0 \_[AB]{} + \_u(\_0 C\_[AB]{}) \_V q\^[AB]{},\[wdiv\] where we used that $\delta_0 q_{AB}=0$. The condition $\delta_0 C_{AB}= O(u^{-1-\epsilon})$ implies that both terms in (\[wdiv\]) integrate to zero. In summary, the symplectic product between $\delta_V$ and $\delta_0 \in T \Gamma^{\qo}_0$ is well defined and given by evaluation on the form (\[sf\]). This evaluation is used in the next section to obtain the charge $H_V$.
$\diff(S^{2})$ charges {#sec3.3}
----------------------
We now apply the above results to find the charge $H_V$ satisfying $$\delta H_V = \O(\delta_V, \delta) ,\label{hvfV}$$ for $\delta_V$ given in Eq. (\[deltaV\]) and for $\delta \in T \Gamma^{\qo}_0$, i.e. $\delta C_{AB}=O(u^{-1-\epsilon})$ and $\delta q_{AB}=0$. Since we already have a candidate for $H_V$, namely the one postulated in [@us], we will just verify that such $H_V$ indeed satisfies (\[hvfV\]).
$H_V$ is a sum of a ‘hard’ quadratic in $C_{AB}$ term and a ‘soft’ linear in $C_{AB}$ term [@us]: H\_V = \_V + \_V ,\[Hhs\] \_V & := & du \^[AB]{}(Ł\_V C\_[AB]{} - C\_[AB]{} + u \_[AB]{}) \[Hhard\]\
\_V & := & du C\^[AB]{} s\_[AB]{}, \[Hsoft\] with $s_{AB}$ a symmetric trace-free tensor such that its components in $(z,\zb)$ coordinates are given by: s\_[zz]{} := D\^3\_z V\^z \[szz\], and corresponding complex conjugated expression (the trace-free condition sets $s_{z \zb}=0$). We now verify that $H_V$ satisfies (\[hvfV\]).\
For the RHS of (\[hvfV\]) we have Ø(\_V,) = du (\_V C\^[AB]{} \_[AB]{} - C\^[AB]{} \_u (\_V C\_[AB]{}) + (2 D\^[A]{} \^B + C\^[AB]{} ) \_V q\_[AB]{} ) , \[OddV\] where we used that $\delta q_{AB}=0$ and $\vo = -\R/2=-1$ since we are at $q_{AB}=\qo_{AB}$. Using (\[deltaV\]) and the corresponding transformations: \_V C\^[AB]{} = Ł\_V C\^[AB]{} + 4 C\^[AB]{} + u \^[AB]{} - 2 u (D\^A D\^B )\^ \[delVCcont\] \_V q\^[AB]{} = Ł\_V q\^[AB]{}+2 q\_[AB]{} \[delVqinv\], one verifies that the ‘hard’ terms in (\[OddV\]) combine to give $\delta \Hhard_V$. By integration by parts one can bring all ‘soft’ terms in a form that is proportional to $\delta C^{AB}$. The end result is: Ø(\_V,) = \_V + du C\^[AB]{} s’\_[AB]{} where s’\_[AB]{} := ( 2 D\_A D\_B - D\_[(A]{} D\^M \_V q\_[B) M]{} + D\_[(A]{} V\_[B)]{})\^. \[sp\] We finally show that $s'_{AB}=s_{AB}$ from which (\[hvfV\]) follows.
From (\[deltaV\]) and using the identity $D^M D_B X_M = X_B + 2 D_B \alpha$ one finds D\^M \_V q\_[B M]{} = V\_B + V\_B. Using this we can rewrite (\[sp\]) as s’\_[AB]{} = (D\_[(A]{} s’\_[B)]{})\^ \[Dsp\] with s’\_A := D\_A D\_M V\^M - D\_M D\^M V\_A + V\_A. \[spA\] Finally, writing (\[spA\]) in $(z,\zb)$ coordinates and using $\qo^{z \zb} [D_{\zb},D_z] V_z = V_z$ one finds s’\_z = D\^2\_z V\^z . Going back to (\[Dsp\]) we conclude that $s'_{AB}=s_{AB}$ as desired.\
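As a sanity check of the sphere-curvature identity $\qo^{z \zb} [D_{\zb},D_z] V_z = V_z$ used in the last step, the following short sympy sketch (ours, not part of the original derivation) verifies it explicitly for the unit round metric $\qo_{z\zb}=2/(1+z\zb)^2$ in stereographic coordinates; the test component $V_z$ is an arbitrary choice.

```python
# Independent check (not from the paper) of qo^{z zb}[D_zb, D_z] V_z = V_z
# on the unit round sphere, qo_{z zb} = 2/(1 + z zb)^2, in stereographic coordinates.
import sympy as sp

z, zb = sp.symbols('z zb')
q = 2/(1 + z*zb)**2                        # qo_{z zb}
Gz = sp.simplify(sp.diff(sp.log(q), z))    # Gamma^z_{zz} = d_z ln qo_{z zb}

Vz = (1 + z**2*zb)/(1 + z*zb)              # arbitrary test component V_z

DzVz  = sp.diff(Vz, z) - Gz*Vz             # D_z V_z   (only Gamma^z_{zz} is nonzero)
DzbVz = sp.diff(Vz, zb)                    # D_zb V_z  (Gamma^w_{zb z} = 0)

Dzb_Dz = sp.diff(DzVz, zb)                 # D_zb (D_z V_z): no Christoffel term needed
Dz_Dzb = sp.diff(DzbVz, z) - Gz*DzbVz      # D_z (D_zb V_z): one -Gamma^z_{zz} term

lhs = sp.simplify((1/q)*(Dzb_Dz - Dz_Dzb)) # qo^{z zb} [D_zb, D_z] V_z
print(sp.simplify(lhs - Vz))               # -> 0
```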
*Comment:* In order to highlight the role played by the ‘extra’ terms in the symplectic structure (\[sf\]), it is interesting to repeat the computation by writing the symplectic structure as $\O(\delta,\delta') = \frac{1}{4}\int \left( \delta C^{AB} \delta' \dot{C}_{AB} + \delta(2 a \,D^{(A} \Uo^{B)} + b \, C^{AB} ) \delta' q_{AB} \right) - \delta \leftrightarrow \delta'$ and setting the correct values $a=-1,b=\vo=-1$ at the end of the computation (for the $\delta$ considered here $\delta \vo=0$ and so $\vo$ can be treated as a constant). Doing so one obtains: $s'_z = D^2_z V^z + (1+ a) D_z D_{\zb} V^z + (a-b) V_z$.\
Past null infinity {#sec3.4}
------------------
A similar analysis to the one given in the previous two subsections goes through for past null infinity. The form of the metric in that case can be obtained by doing the substitution $v = -u$ in (\[4metric\]). The relevant component of the symplectic potential density is now $\theta^t= -(\theta^r - \theta^v)$. Thus, the symplectic structure at past null infinity can be obtained by the replacement $u \to -v$ (up to an overall sign). The result is: $$\O^-(\delta,\delta') = \frac{1}{4}\int_{\scri^-} dv \left(\delta C^{- AB}\, \delta' \dot{C}^-_{AB} + \delta\big(2 D^{(A} \Uo^{- B)} - \vo^- C^{- AB} \big)\, \delta' q^-_{AB} \right) - \delta \leftrightarrow \delta'. \label{sfm}$$ On the other hand, the transformation rule for $C^-_{AB}$ is the same as for $C^+_{AB}$ except that the soft factor comes with opposite sign: $$\delta_V C^-_{AB} = \L_V C^-_{AB} - \alpha C^-_{AB} + \alpha v\, \dot{C}^-_{AB} + 2 v (D_A D_B \alpha)^{\tf}.$$ The corresponding charge $H^-_V$ thus has the same form as (\[Hhs\]), (\[Hhard\]), (\[Hsoft\]), with an opposite sign in the soft term: $$s^-_{zz}= -D^3_z V^z .$$
$\diff(S^2)$ Ward identities and CS soft theorem {#sec3.5}
------------------------------------------------
We sketch here how the new symmetry relates to CS soft theorem. We refer to [@virasoro; @us] for further details.
Given a vector field $V^A$ and corresponding charges $H^\pm_V$ at future and past null infinity, the proposed Ward identities arise from assuming the S matrix satisfies: $$H^+_V\, S = S\, H^-_V ,\label{hssh}$$ or equivalently: $$H^{{\rm soft}\, +}_V S - S\, H^{{\rm soft}\, -}_V = - H^{{\rm hard}\, +}_V S + S\, H^{{\rm hard}\, -}_V. \label{wardid}$$
When one takes the matrix element of (\[wardid\]) between a $n^+$ particle state $\bra {\rm out} |$ and a $n^-$ particle state $| {\rm in} \ket$, the RHS of (\[wardid\]) becomes an operator on the scattering amplitude $\bra {\rm out} | S | {\rm in} \ket$ that consists of a sum of differential operators acting on the individual particle labels (momentum and helicity). On the other hand, the LHS of (\[wardid\]) can be realized as creation operators of gravitons with vanishing energy, where the helicity and smeared momentum direction is determined by $V^A$.
The choice $$V^A(z,\zb)\, \partial_A = K^A_{(z_s, \zb_s)}(z,\zb)\, \partial_A := (\zb-\zb_s)^{-1}(z-z_s)^2\,\partial_{z} \label{VK}$$ gives, in the first term of the LHS of (\[wardid\]), the insertion of a negative helicity outgoing soft graviton with momentum pointing in the direction determined by $(z_s,\zb_s)$. By crossing symmetry the second term in the LHS of (\[wardid\]) can be shown to be equal to the first one. Now, the differential operators arising on the RHS of (\[wardid\]) for the choice (\[VK\]) reproduce those of the CS theorem. In short, for $V^A=K^A_{(z_s, \zb_s)}$, Eq. (\[wardid\]) reproduces the CS soft theorem for a negative helicity graviton (the positive helicity case is obtained by choosing the complex conjugated vector, $V^A= \bar{K}^A_{(z_s,\zb_s)}$).
Conversely, the Ward identities associated to the vector fields (\[VK\]) and its complex conjugate (which we just argued are equivalent to CS theorem), can be shown to imply the Ward identity (\[wardid\]) for *any* vector field $V^A$. Essentially the vectors $K^A_{(z_s, \zb_s)}(z,\zb)$ have the role of elementary kernels, and by appropriate smearing in the $(z_s,\zb_s)$ variables one can reproduce any desired vector field.
Goldstone modes of ${\cal G}$ {#ssb}
=============================
\[sec4\] In the case of supertranslation symmetry, it was argued in [@supertranslation] that as supertranslations map an asymptotic configuration $C_{AB}$ with zero news $N_{AB}\ =\ 0$ to a distinct configuration with zero news (by creating a soft graviton), the choice of a particular vacuum implies a spontaneous breaking of supertranslation symmetry with soft gravitons playing the role of Goldstone modes.\
In this section we argue that one can interpret the subleading soft gravitons in a similar manner and that they can be thought of as Goldstone modes associated to spontaneous breaking of ${\cal G}$ to $\textrm{BMS}$. At first sight this statement looks obviously wrong as for a given choice of the sphere metric, such subleading changes in $C_{AB}$ are not gapless. This can be seen as follows. Given $(C_{AB}, q_{CD})\ \in \Gamma$, a vector field $V\ \in \textrm{Lie}({\cal G})$ maps it to $$\begin{array}{lll}
\delta_V q_{AB} = \L_V q_{AB}-2 \alpha q_{AB} , \quad \delta_V C_{AB} = \L_V C_{AB} - \alpha C_{AB} + \alpha u \dot{C}_{AB} - 2 u (D_A D_B \alpha)^{\tf}
\end{array}$$ Whence it naively appears as if a configuration $C^{(0)}$ which has zero news in say Bondi frame (where the associated $q_{AB}\ =\ \qo_{AB}$) goes to a new configuration $C^{(0)}+\delta_{V}C^{(0)}$ whose news is given by $\delta N_{AB}\ =\ 2 (D_{A}D_{B}\alpha)^{\tf}$. However the above assertion is wrong as it relies upon the definition of news given by $$N_{AB}(u,\hat{x})\ =\ -\partial_{u}C_{AB}(u,\hat{x}).$$ This definition of news is only valid when the metric on $S^{2}$ is the unit metric $\qo_{AB}$. In a generic case there is a slight technicality regarding the news tensor.\
Given an arbitrary sphere metric $q_{AB}$ there exists a unique symmetric tensor $\rho_{AB}[q]$ which is implicitly defined via [@geroch]: $$\rho_{AB}\, q^{AB} = \R[q] , \qquad D_{[A}\rho_{B]C} = 0 .$$ It can be split into a trace-free part and the trace part as $$\rho_{AB}\ =\ \rho^{(0)}_{AB} +\ \frac{1}{2}\R[q]q_{AB},$$ and as shown in [@geroch] $\rho^{(0)}[\qo] =\ 0$.\
The news tensor associated to a configuration $C_{AB}\ \in\ \Gamma^{q}$ is then defined as (see for instance Eq. (23) of [@newman]) $$\label{newsdef}
N_{AB}(u,\hat{x})\ :=\ -\partial_{u}C_{AB}(u,\hat{x})\ - \rho^{(0)}_{AB}(\hat{x}) .$$ We thus see that as $V\ \in\ \textrm{Lie}({\cal G})$ change $C_{AB}$ as well as $q_{AB}$, the corresponding change in news is given by $$\delta_{V}N_{AB}(u,\hat{x})\ =\ -\partial_{u}\delta_{V}C_{AB}(u,\hat{x})\ -\ \delta_{V}\rho^{(0)}_{AB}(\xh).$$ In section \[ssb1\] we show that $\delta_{V}\rho^{(0)}_{AB}(\xh)$ is precisely such that a zero news configuration is mapped into a distinct zero news configuration. Whence the corresponding change in the news vanishes. That is *any* element of $V\ \in\ \textrm{Lie}({\cal G})$ maps a configuration with zero news to a configuration with zero news because the definition of the news before and after the action of $V$ refer to different frames. Hence choosing a $q_{AB}$ (and working with $\Gamma^{q}$) implies breaking the ${\cal G}$ symmetry spontaneously to $\textrm{BMS}$ and the subleading soft gravitons can be thought of as goldstone modes associated to this symmetry breaking as they map one family of vacua (associated to a given $q_{AB}$) to a distinct family of vacua associated to a different $q_{AB}$.
Evaluating $\delta_{V}N_{AB}$ {#ssb1}
-----------------------------
From (\[deltaV\]) and the definition of the news tensor (\[newsdef\]) we have $$\begin{aligned}
\delta_V N_{AB} & =& - \partial_u \delta_V C_{AB}- \delta_V \rho^{(0)}_{AB}\\
& =& -\L_V \dot{C}_{AB} - \alpha u \ddot{C}_{AB} + 2 (D_A D_B \alpha)^{\tf} - \delta_V \rho^{(0)}_{AB}. \label{delVN2}\end{aligned}$$ We now evaluate the last term in (\[delVN2\]). From $\delta_V q_{AB}= \L_Vq_{AB}- 2 \alpha q_{AB}$ we have that $\delta_V \rho_{AB}$ is a sum of a Lie derivative term plus a scale transformation term. Since the behaviour of $\rho_{AB}$ under scale transformations is known [@geroch] the effect of the second term can be obtained explicitly. The total change is found to be: $$\delta_{V}\rho_{AB}= \L_V \rho_{AB} + 2 D_A D_B \alpha, \label{delrho}$$ which in turn implies $$\delta_{V}\rho^{(0)}_{AB}= \L_V \rho^{(0)}_{AB} + 2 (D_A D_B \alpha)^{\tf}.\label{delrho0}$$ When substituting (\[delrho0\]) in (\[delVN2\]) the ‘soft’ factors cancel out and one obtains: $$\begin{aligned}
\delta_V N_{AB} & =& - \L_V \dot{C}_{AB} - \alpha u \ddot{C}_{AB} - \L_V \rho^{(0)}_{AB}\\
&=& \L_V N_{AB} + \alpha u \dot{N}_{AB},\end{aligned}$$ where in the second line we used the definition of the news tensor (\[newsdef\]) and the fact that $-\ddot{C}_{AB}= \dot{N}_{AB}$ since $\partial_u \rho^{(0)}_{AB}=0$.
Thus the news tensor transforms homogeneously. In particular if $N_{AB}=0$ then $\delta_V N_{AB}=0$.
Conclusions {#sec5}
===========
Analyzing the symmetry structure of the quantum gravity S-matrix is of paramount importance. It has been well known since the 1960s that (at least at the semiclassical level) this symmetry group contains an infinite dimensional group known as the BMS group. The relationship of BMS symmetry to infrared issues in quantum gravity (for instance the existence of various superselection sectors) has been rigorously studied by Ashtekar et al. in the beautiful framework of Asymptotic Quantization [@aaprl; @as; @aajmp; @aaoxf; @aabook]. This relationship (of the BMS group to infrared issues in quantum gravity in asymptotically flat spacetimes) got a new lease of life due to the seminal work of Strominger et al. [@strominger0; @supertranslation; @virasoro]. One of the outcomes of this recent study is the universality of the subleading corrections to soft graviton amplitudes, referred to in this paper as the Cachazo-Strominger (CS) soft theorem [@cs].\
A natural question first posed in [@virasoro] was whether the CS soft theorem could be understood as Ward identities associated to certain symmetries of the semi-classical S matrix. It was shown in [@virasoro] that the Ward identities associated to Virasoro symmetries contained in the so-called extended BMS group can be derived from the CS soft theorem. However, the question of how to go in the reverse direction and derive the CS soft theorem from the Virasoro Ward identities remained unanswered. In [@us] we argued in favor of a different possibility: if a different generalization of the BMS group (referred to, unimaginatively, as generalized BMS) ${\cal G}$ was a symmetry of the gravitational S matrix, then the Ward identities associated to the $\diff(S^{2})$ contained in ${\cal G}$ were shown to be equivalent to the CS soft theorem. Our argument, however, relied on an ad hoc assumption that the charges associated to such symmetries had the same form as the charges associated to Virasoro symmetries, which (modulo certain IR issues) could be derived from first principles. The main obstacle to computing such charges was the lack of a suitable phase space on which ${\cal G}$ acted in a well-defined manner and whose corresponding charges were finite.\
We have filled these gaps in the current paper. Starting from the covariant phase space associated to the Einstein–Hilbert action, we derive a phase space at null infinity which is coordinatized by the well known radiative degrees of freedom as well as by the space of metrics on the conformal sphere. The symplectic structure on this phase space can be used to compute the charges associated to $\diff(S^{2})$, which are, rather remarkably, well defined on an appropriate subspace of the radiative phase space. Surprisingly, *these charges turn out to be exactly equal to the charges corresponding to the Virasoro symmetries computed in [@virasoro].* This proves the key assumption that we made in [@us] and hence completes the proof of the equivalence between the Ward identities associated to the generators of $\diff(S^{2})$ and the CS soft theorem.\
One of the nice corollaries of our analysis is the representation of ${\cal G}$ on $\Gamma$. However, the symplectic structure arising from the Einstein–Hilbert action is only well defined on the stronger fall-off subspace $\Gamma_0 \subset \Gamma$, which unfortunately is not preserved under the action of $\G$. We believe, however, that the inclusion of appropriate counterterms to the action at $i^{0}, i^{\pm}$ could yield a well-defined symplectic structure on $\Gamma$. If this were to be the case, we could hope for the action of $\G$ to be symplectic on $\Gamma$. This would solve another issue which arose in [@us], namely that the charges on the radiative phase space which correspond to subleading soft factors do not close to form an algebra. We hope to come back to this point in the near future. We finally wish to emphasize that the physical phase space of the theory really is the radiative phase space (or an appropriate subspace thereof), and the bigger phase space $\Gamma$ is an “auxiliary" arena which is nevertheless an indispensable tool to implement ${\cal G}$ in the classical as well as the quantum theory.\
[**Acknowledgements**]{}\
We are indebted to Abhay Ashtekar for crucial discussions in the initial stages of this work and for his constant encouragement and interest. We thank Vyacheslav Lysov, Michael Reisenberger and Andrew Strominger for their comments on the manuscript. MC is supported by Anii and Pedeciba. AL is supported by Ramanujan Fellowship of the Department of Science and Technology.
Closure of generalized BMS action {#appclosure}
=================================
The first relation (\[algebrafd\]) is easily verified. We now show the second and third relations.\
Let $V_3:=[V_1,V_2]$. To shorten notation we will omit ‘$V$’ labels and use only subscripts $1,2,3$. Thus the second equation in (\[algebrafd\]) reads: $$[\delta_2,\delta_1] = \delta_3 , \label{123}$$ and Equations (\[deltaV\]) for $V_1$ become: $$\begin{aligned}
\delta_1 C_{AB} & =& \L_1 C_{AB} - \alpha_1 C_{AB} + \alpha_1 u \dot{C}_{AB} - 2 u (D_A D_B \alpha_1)^{\tf} \label{del1C}\\
\delta_1 q_{AB} & = & \L_1 q_{AB}-2 \alpha_1 q_{AB} \label{del1q},\end{aligned}$$ and similarly for $V_2$ and $V_3$. For the computation it is important to keep in mind that the $\alpha$’s are in fact independent of the 2-metric $q_{AB}$ due to the condition $\sqrt{q}= \sqrt{\qo}$. This can be seen explicitly by defining $\alpha$ purely in terms of $\sqrt{q}$ according to: $$\L_V \sqrt{q} = 2 \alpha \sqrt{q}. \label{defalpha}$$ We first verify (\[123\]) along the $\delta q_{AB}$ direction: $$\begin{aligned}
[\delta_2,\delta_1] q_{AB} & = & \delta_2 (\L_1 q_{AB} - 2 \alpha_1 q_{AB} ) - 1 \leftrightarrow 2\\
&=& \L_1 \delta_2 q_{AB} - 2 (\delta_2 \alpha_1) q_{AB} - 2 \alpha_1 \delta_2 q_{AB} - 1 \leftrightarrow 2\\
&= & \L_3 q_{AB} - 2( \L_1 \alpha_2 - \L_2 \alpha_1) q_{AB}\\
&=& \delta_3 q_{AB}. \label{delta3q}\end{aligned}$$ Here we used the fact that $\delta_2 \alpha_1=0$ since $\alpha_1$ is independent of $q_{AB}$ as mentioned above Eq. (\[defalpha\]). In the last equality we used $$\alpha_3 = \L_1 \alpha_2 - \L_2 \alpha_1, \label{alpha3}$$ which directly follows from the definition of $\alpha$ given in (\[defalpha\]): $$2 \alpha_3 \sqrt{q} = \L_3 \sqrt{q} = (\L_1 \L_2 - \L_2 \L_1) \sqrt{q} = 2 \L_1 ( \alpha_2 \sqrt{q}) - 2 \L_2(\alpha_1 \sqrt{q}) =2 (\L_1 \alpha_2 - \L_2 \alpha_1) \sqrt{q} .$$ For the $\delta C_{AB}$ direction, one finds $$\begin{array}{lll}
[\delta_2, \delta_1] C_{AB} = \L_1 \delta_2 C_{AB} - \alpha_1 \delta_2 C_{AB} + \alpha_1 u \partial_u \delta_2 C_{AB} - 2 u \delta_2 (D_A D_B \alpha_1)^{\tf} - 1 \leftrightarrow 2 \\
= \L_3 C_{AB} - \alpha_3 C_{AB} + u \alpha_3 \dot{C}_{AB} - 2u \L_1(D_A D_B \alpha_2)^\tf - 2 u \delta_2 (D_A D_B \alpha_1)^\tf-\\
\hspace*{4.5in}1 \leftrightarrow 2 . \label{del3C}
\end{array}$$
where we used similar simplifications as when getting (\[delta3q\]) above. The ‘hard’ term in (\[del3C\]) corresponds to the hard term of $\delta_3 C_{AB}$. We now show that the ‘soft’ term also matches. This amounts to show the equality: \_2 (D\_A D\_B \_1)\^- Ł\_2(D\_A D\_B \_1)\^- 1 2 = (D\_A D\_B \_3)\^.\[soft123\] The variation $\delta_2$ in (\[soft123\]) only involve variations along $\delta q_{AB}$. It is convenient to write them as an explicit sum of ‘Lie derivative’ and ‘scale’ terms: \_2 q\_[AB]{} = \^L\_2 q\_[AB]{} + \^S\_2 q\_[AB]{} ; \^L\_2 q\_[AB]{} := Ł\_2 q\_[AB]{}, \^S\_2 q\_[AB]{}:= -2 \_2 q\_[AB]{}. \[deltaLS\] In this way, the first term in (\[soft123\]) takes the form: \_2(D\_A D\_B \_1)\^= \^L\_[21 AB]{}+ \^S\_[21 AB]{} where: \^L\_[21 AB]{} & := & \^L\_2 (D\_A) \_B \_1 - \^L\_2() \_1 q\_[AB]{} - \_1 \^L\_2 q\_[AB]{} \[delLAB\]\
\^S\_[21 AB]{} & := & \^S\_2 (D\_A) \_B \_1 - \^S\_2() \_1 q\_[AB]{} - \_1 \^S\_2 q\_[AB]{} \[delSAB\] For the $\delta^S$ term one obtains: \^S\_[21 AB]{} = 2 D\_[(A]{} \_1 D\_[B)]{} \_2 - q\_[AB]{} D\_C \_1 D\^C \_2. \[delS21\] Since it is symmetric under $1 \leftrightarrow 2$ it does not contribute to the LHS of (\[soft123\]). For the $\delta^L$ term, we notice that from the definition of $\delta^L_2$ one has: \^L\_2 D\_A = \[Ł\_2, D\_A\], \^L\_2 = \[Ł\_2, \], from which it follows that (\[delLAB\]) can be written as: \^L\_[21 AB]{} := Ł\_2 (D\_A D\_B \_1)\^-D\_A \_B (Ł\_2 \_1)+ (Ł\_2 \_1) q\_[AB]{}. \[delLAB2\] The first term in (\[delLAB2\]) cancels the Lie derivative term in (\[soft123\]). Including the $1 \leftrightarrow 2$ term one recovers Equation (\[soft123\]) with $\alpha_3$ given in (\[alpha3\]). This concludes the proof of Eq. (\[123\]).\
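Eq. (\[alpha3\]) can also be checked directly with computer algebra. The following sympy sketch (ours; the particular vector fields are arbitrary choices, not taken from the paper) verifies $\alpha_3 = \L_1\alpha_2 - \L_2\alpha_1$ for $\alpha_i = (D_AV_i^A)/2$ computed with the fixed round-sphere area element and $V_3=[V_1,V_2]$.

```python
# Check (ours) of Eq. (alpha3): alpha_3 = L_1 alpha_2 - L_2 alpha_1 with V_3 = [V_1, V_2],
# where alpha_i = (D_A V_i^A)/2 is computed with the fixed area element sqrt(q) = sin(theta).
import sympy as sp

th, ph = sp.symbols('theta phi')
sqrtq = sp.sin(th)

def alpha(V):
    # alpha = (1/(2 sqrt(q))) d_A (sqrt(q) V^A)
    return (sp.diff(sqrtq*V[0], th) + sp.diff(sqrtq*V[1], ph))/(2*sqrtq)

def bracket(V, W):
    # [V, W]^A = V^B d_B W^A - W^B d_B V^A
    return [V[0]*sp.diff(W[a], th) + V[1]*sp.diff(W[a], ph)
            - W[0]*sp.diff(V[a], th) - W[1]*sp.diff(V[a], ph) for a in (0, 1)]

def lie_scalar(V, f):
    return V[0]*sp.diff(f, th) + V[1]*sp.diff(f, ph)

V1 = [sp.sin(th)*sp.cos(ph), sp.cos(th)]     # two arbitrary sphere vector fields
V2 = [sp.cos(th), sp.sin(ph)]
V3 = bracket(V1, V2)

print(sp.simplify(alpha(V3) - lie_scalar(V1, alpha(V2)) + lie_scalar(V2, alpha(V1))))  # -> 0
```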
We finally show the last relation in (\[algebrafd\]): =- \_[Ł\_V f- f]{} . Along $\delta q_{AB}$ direction this relation trivializes to $0=0$. Evaluating the commutator along $\delta C_{AB}$ one finds $$\begin{gathered}
[\delta_f,\delta_V] C_{AB} = (\L_V f - \alpha f ) \dot{C}_{AB} \\+2 \alpha(D_A D_B f)^\tf +2 f (D_A D_B \alpha)^\tf + 2\delta_V(D_A D_B f)^\tf -2 \L_V(D_A D_B f)^\tf. \label{delfv}\end{gathered}$$ The ‘hard’ term in (\[delfv\]) matches the hard term of $\delta_{\L_V f - \alpha f}$. That the ‘soft’ term (displayed in the second line) also matches can be shown along similar lines as for the soft term of $[\delta_{V_1}, \delta_{V_2}]$ computed above. Writing $\delta_V q_{AB} = \delta^L_V q_{AB} +\delta^S_V q_{AB}$ as in Eq. (\[deltaLS\]) and using relations as those given in Eqns. (\[delS21\]) and (\[delLAB2\]) and finds that the last two terms in (\[delfv\]) combine to 2\_V(D\_A D\_B f)\^-2 Ł\_V(D\_A D\_B f)\^= -2 (D\_A D\_B Ł\_V f)\^+ 4 D\_[(A]{} D\_[B)]{} f -2 q\_[AB]{} D\_C D\^C f. \[df2\] The first term in the RHS of (\[df2\]) is the soft factor of $\delta_{\L_V f}$. The remaining terms combine to give the soft factor of $\delta_{-\alpha f}$ due to the identity: (D\_A D\_B( f))\^ = (D\_A D\_B f)\^+ f (D\_A D\_B )\^+ (2 D\_[(A]{} D\_[B)]{}f)\^.
[99]{}
H. Bondi, M. G. J. van der Burg and A. W. K. Metzner, “Gravitational waves in general relativity. 7. Waves from axisymmetric isolated systems,” Proc. Roy. Soc. Lond. A [**269**]{}, 21 (1962).
R. K. Sachs, “Gravitational waves in general relativity. 8. Waves in asymptotically flat space-times,” Proc. Roy. Soc. Lond. A [**270**]{}, 103 (1962).
A. Strominger, “On BMS Invariance of Gravitational Scattering,” JHEP [**1407**]{}, 152 (2014) \[arXiv:1312.2229 \[hep-th\]\]
T. He, V. Lysov, P. Mitra and A. Strominger, “BMS supertranslations and Weinberg’s soft graviton theorem,” arXiv:1401.7026 \[hep-th\]
S. Weinberg, “Infrared photons and gravitons,” Phys. Rev. [**140**]{}, B516 (1965)
F. Cachazo and A. Strominger, “Evidence for a New Soft Graviton Theorem,” arXiv:1404.4091 \[hep-th\].
J. Broedel, M. de Leeuw, J. Plefka and M. Rosso, “Constraining subleading soft gluon and graviton theorems,” Phys. Rev. D [**90**]{}, 065024 (2014) \[arXiv:1406.6574 \[hep-th\]\]
Z. Bern, S. Davies, P. Di Vecchia and J. Nohle, “Low-Energy Behavior of Gluons and Gravitons from Gauge Invariance,” Phys. Rev. D [**90**]{}, no. 8, 084035 (2014) \[arXiv:1406.6987 \[hep-th\]\]
D. J. Gross and R. Jackiw, “Low-Energy Theorem for Graviton Scattering,” Phys. Rev. [**166**]{}, 1287 (1968)
S. G. Naculich and H. J. Schnitzer, “Eikonal methods applied to gravitational scattering amplitudes,” JHEP [**1105**]{}, 087 (2011) \[arXiv:1101.1524 \[hep-th\]\].
C. D. White, “Factorization Properties of Soft Graviton Amplitudes,” JHEP [**1105**]{}, 060 (2011) \[arXiv:1103.2981 \[hep-th\]\]
D. Kapec, V. Lysov, S. Pasterski and A. Strominger, “Semiclassical Virasoro symmetry of the quantum gravity $ \mathcal{S}$-matrix,” JHEP [**1408**]{}, 058 (2014) \[arXiv:1406.3312 \[hep-th\]\].
G. Barnich and C. Troessaert, “Symmetries of asymptotically flat 4 dimensional spacetimes at null infinity revisited,” Phys. Rev. Lett. [**105**]{}, 111103 (2010) \[arXiv:0909.2617 \[gr-qc\]
G. Barnich and C. Troessaert, “Aspects of the BMS/CFT correspondence,” JHEP [**1005**]{}, 062 (2010) \[arXiv:1001.1541 \[hep-th\]\]
M. Campiglia and A. Laddha, “Asymptotic symmetries and subleading soft graviton theorem,” Phys. Rev. D [**90**]{}, no. 12, 124028 (2014) \[arXiv:1408.2228 \[hep-th\]\]
A. Ashtekar, “Asymptotic Quantization of the Gravitational Field,” Phys. Rev. Lett. [**46**]{}, 573 (1981)
A. Ashtekar and M. Streubel, “Symplectic Geometry of Radiative Modes and Conserved Quantities at Null Infinity,” Proc. Roy. Soc. Lond. A [**376**]{}, 585 (1981)
A. Ashtekar, “Radiative Degrees of Freedom of the Gravitational Field in Exact General Relativity,” J. Math. Phys. [**22**]{}, 2885 (1981)
A. Ashtekar, “Quantization of the Radiative Modes of the Gravitational Field”, In: Quantum Gravity 2; Edited by C. J. Isham, R. Penrose, and D. W. Sciama (Oxford University Press, Oxford, 1981).
A. Ashtekar, “Asymptotic Quantization”, Naples, Italy: Bibliopolis (1987)
A. Ashtekar, “Geometry and Physics of Null Infinity,” arXiv:1409.1800 \[gr-qc\]
Ashtekar, A., L. Bombelli, and O. Reula. "The covariant phase space of asymptotically flat gravitational fields”, in *Analysis, Geometry and Mechanics: 200 Years After Lagrange*, ed. M Francaviglia, North-Holland (1991).
J. Lee and R. M. Wald, “Local symmetries and constraints,” J. Math. Phys. [**31**]{}, 725 (1990)
A. Ashtekar and A. Magnon-Ashtekar “On the symplectic structure of general relativity” Commun. Math. Phys. [**86**]{}, 55 (1982).
R. Geroch, “Asymptotic structure of space-time,” in *Asymptotic structure of space-time*, ed. L. Witten, Plenum, New York (1976)
C. Kozameh and E. T. Newman, “A note on asymptotically flat spaces. II.” , General Relativity and Gravitation 15.5 (1983): 475-487.
[^1]: In [@cs], this theorem was proved in the holomorphic limit. More general proofs were later given in [@plefka; @bern]. See [@gj; @naculich; @white] for earlier work on soft graviton amplitudes.
[^2]: Based on earlier papers, we refer to the subleading soft theorem as Cachazo-Strominger theorem.
[^3]: Here ‘charge’ refers to what is called ‘flux’ in the radiative phase space literature, i.e. it involves a three dimensional integral over null infinity.
[^4]: We are deviating from the standard radiative phase space terminology in which ‘frame’ denotes a conformal class of $[(q_{ab},n^a)]$ of metric *and* null normal [@as].
[^5]: This actually is not the totality of free data since there are additional $u$-independent sphere functions that arise as integration ‘constants’ [@bt]. These however play no role in our analysis.
[^6]: The analogue of the space $\Gamma^{\qo}_0$ that was used in [@us] actually allows for a $u$-independent term and hence is invariant under supertranslations. The stronger condition (\[defGqoo\]) is used here in order to allow for certain integration by parts in $u$
[^7]: For slower fall-offs of the type which define $\Gamma$ (Eqns. (\[defGq\]), (\[defGamma\])) there may be a possibility of obtaining a finite symplectic structure by supplementing the action with counterterms at $i^{0}, i^{\pm}$. We have not pursued this direction here as it is not needed for our analysis. \[countertermfn\]
Recently, there has been great interest in four-fermion contact interactions between quarks and leptons, as such terms might account for the reported excess of high $Q^2$ events in the HERA experiments [@hone; @zeus; @alt; @babu; @barger; @kalinowski; @GGN]. The 8 relevant terms are usually written in the form $$\label{contact}
{\cal L} = \sum_{i,j=L,R;\, q=u,d}{4 \pi \eta^q_{ij}\over(\Lambda_{ij}^q)^2}\, \bar e_i\gamma_\mu e_i\, \bar q_j\gamma^\mu q_j$$ where $\eta^q_{ij}=\pm1$.
It is just possible to find such terms which can account for the HERA excess while still satisfying the constraints deduced from studies of $e^+ e^-\rightarrow hadrons$ [@opal] and $p\bar p\rightarrow e^+e^-X$ [@tevatron], for $\Lambda\sim 3$ TeV [@alt; @babu; @barger].
Stronger limits on such contact terms arise from atomic parity violation (APV) measurements [@langacker; @leurer; @davidson]. A contact interaction apparently shifts the nuclear weak charge $Q_W$ by an amount $$\delta Q_W=-2 \left[\delta C_{1u}(2Z+N)+\delta C_{1d}(2N+Z)\right]$$ where $$\label{weakcharge}\delta C_{1q}={\sqrt{2}\pi\over G_F}\left( {\eta^q_{RL}\over(\Lambda^q_{RL})^2 }-{\eta^q_{LR}\over(\Lambda^q_{LR})^2 }+{\eta^q_{RR}\over(\Lambda^q_{RR})^2 }-{\eta^q_{LL}\over(\Lambda^q_{LL})^2 }\right).$$
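For orientation, a rough numerical illustration (ours, not from the original text): with the standard value $G_F \simeq 1.166\times10^{-5}\,{\rm GeV}^{-2}$ and a single $\eta^q_{LL}=+1$ term at $\Lambda = 3$ TeV, the induced shift of the Cesium weak charge ($Z=55$, $N=78$) is of order ten, far larger than the experimental uncertainty; this is the origin of the $\Lambda \gtrsim 10$ TeV bounds quoted below.

```python
# Rough illustration (ours): APV shift from a single eta^q_LL = +1 contact term
# with Lambda = 3 TeV, evaluated for 133Cs (Z = 55, N = 78).
import math

G_F = 1.166e-5            # Fermi constant [GeV^-2]
Lam = 3000.0              # Lambda [GeV]
Z, N = 55, 78

dC1 = -math.sqrt(2)*math.pi/(G_F*Lam**2)   # only the -eta_LL/Lambda_LL^2 term contributes
dQW_u = -2*dC1*(2*Z + N)                   # if the term couples to u quarks
dQW_d = -2*dC1*(2*N + Z)                   # if it couples to d quarks

print(f"delta C_1q   = {dC1:+.3f}")
print(f"delta Q_W(u) = {dQW_u:+.1f},  delta Q_W(d) = {dQW_d:+.1f}")
# |delta Q_W| ~ 16-18, while Q_W(Cs) ~ -73 is measured at the percent level;
# scaling as 1/Lambda^2, suppressing this below the uncertainty needs Lambda >~ 10 TeV.
```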
If no cancellations in eq. \[weakcharge\] occur amongst the various terms, measurements of the weak charge of Cesium [@cesium] imply $\Lambda$’s$\gtap 10$ TeV. Thus the bounds from atomic parity violation on quark-lepton contact terms appear to be much stronger than those from any collider experiments. Several authors [@babu; @barger] have invoked a new parity conserving contact interaction in order to explain the HERA data while avoiding the APV constraint. They therefore assume that $$\begin{aligned}
\label{parity}
{\eta^q_{RL}\over (\Lambda^q_{RL})^2
}&=&{\eta^q_{LR}\over (\Lambda^q_{LR})^2}\\
{\eta^q_{RR}\over (\Lambda^q_{RR})^2
}&=&{\eta^q_{LL}\over (\Lambda^q_{LL})^2}\ .\end{aligned}$$ The theoretical motivation for imposing the restrictions of eq. \[parity\] is unclear. An awkward feature of eq. \[parity\] is that SU(2) gauge symmetry makes it necessary to introduce a right handed neutrino in order to have parity invariant and gauge invariant contact terms involving leptons.
One interesting class of models which will lead to contact terms at low energies is theories of composite quarks and leptons. In such theories there are new strong confining dynamics at a scale $\Lambda$. Unbroken chiral global symmetries of the strong dynamics explain why the quark and lepton bound states are much lighter than $\Lambda$ [@thooft]. Any contact terms produced by the strong dynamics will respect its global symmetries. These chiral symmetries may be explicitly broken by small effects, [*e.g.*]{} by weak gauge interactions, however small symmetry breaking terms do not affect the conclusions of this note.
It is an easy matter to find plausible approximate global symmetries, other than parity, which will ensure cancellations in eq. \[weakcharge\] [@chivran]. For instance, consider an approximate global SU(12) acting on all left handed first generation quark states. The left chiral fields $$(u_L,\ d_L,\ u^c_L,\ d^c_L)$$ (with three colors each) transform as a 12-plet $\psi_L$.
Assuming that the new strong dynamics respects such a symmetry, it could generate only an SU(12) singlet combination of the operators in eq. \[contact\], which can be written in the form $$\begin{aligned}
&& \sum_{i=L,R}{4 \pi\eta_{i}\over(\Lambda_{i})^2}\bar e_i\gamma_\mu
e_i
\bar\psi_L\gamma^\mu\psi_L
\\&&=\sum_{i=L,R;q=u,d}{4 \pi\eta_{i}\over(\Lambda_{i})^2}\bar e_i\gamma_\mu
e_i(\bar q_L\gamma^\mu q_L-\bar q_R\gamma^\mu q_R)\ .\end{aligned}$$ Thus the SU(12) symmetry guarantees that $${\eta^q_{iL}\over (\Lambda^q_{iL})^2 }=-{\eta^q_{iR}\over (\Lambda^q_{iR})^2}\ , \label{cancel}$$ and so there is a cancellation in the contribution to $Q_W$.
The SU(12) symmetry still allows for a non-zero contribution to the parity violating weak coefficient $C_{2q}$ [@pdg]; however, the experimental constraints on this term are less severe. In any case, an SU(3) symmetry acting on all the left handed first generation leptons $$(\nu^e_L,\ e_L,\ e^c_L)$$ would eliminate this contribution as well.
Much stronger constraints on contact terms can be obtained by considering flavor changing neutral current decays and muon number violation. However such constraints can be satisfied by contact terms which respect a horizontal flavor symmetry, such as an $SU(2)\times SU(2)$, where one SU(2) acts on the first two quark generations and the other on the first two lepton generations.
In summary, I have shown that composite models of quarks and leptons could contain approximate global symmetries, other than parity, which would prevent four-fermion contact terms from contributing to atomic parity violation. It would be interesting to reanalyze the effects of contact terms on physics at the various colliders, assuming the relations of eq. \[cancel\] are satisfied. With SU(2) gauge invariance $${\eta^u_{iL}\over (\Lambda^u_{iL})^2}={\eta^d_{iL}\over (\Lambda^d_{iL})^2}$$ (neglecting quark CKM mixing) and so only two independent contact terms need be considered.
This work was supported in part by the DOE under grant \#DE-FG03-96ER40956.
H1 Collaboration, DESY-97-024, hep-ex/9702012 Zeus collaboration, DESY-97-25, hep-ex/9702015 G. Altarelli et al., CERN-TH/97-40, hep-ph/9703276 K. S. Babu, C. Kolda, J. March-Russell and Frank Wilczek, IASSNS-HEP-97-04, hep-ph/9703299 V. Barger et al., MadPH-97-991, hep-ph/9703311 J. Kalinowski, R. Ruckl, H. Spiesberger and P.M.Zerwas, BI-TP-97/07, hep-ph/9703288 M. C. Gonzalez-Garcia and S.F. Novaes, IFT-P.024/97, hep-ph/9703346 S. Komamiya for the OPAL collaboration, CERN talk on 2/25/97, [http://www1.cern.ch/Opal/plots/ komamiya/koma.html]{} A. Bodek for the CDF collaboration, preprint Fermilab-Conf-96/341-E (1996). P. Lanngacker, Phys. Lett. [**B256**]{} 277, (1991) M. Leurer, Phys. Rev. [**D49**]{}, 333 (1994) S. Davidson, D. Bailey and B. Campbell, Zeit. fur Phys. [**C61**]{}, 613 (1994) M.C. Noecker et al., Phys. Rev. Lett. [**61**]{}, 310 (1988) G. ’t Hooft, 1979 Cargese Lectures, in [*Recent Developments in Gauge Theories,*]{} (New York, Plenum, 1980) An SU(36) symmetry which suppressed quark flavor violation and APV was considered in R.S. Chivukula and L. Randall, Nucl. Phys. [ **B326**]{} (1989) 1 See the [*Review of Particle Physics*]{}, Phys. Rev. [**D54**]{}, pp. 85-93 for definitions of $C_{iq}$
---
abstract: 'Topological insulator surfaces in proximity to superconductors have been proposed as a way to produce Majorana fermions in condensed matter physics. One of the simplest proposed experiments with such a system is Majorana interferometry. Here, we consider two possibly conflicting constraints on the size of such an interferometer. Coupling of a Majorana mode from the edge (the arms) of the interferometer to vortices in the center of the device sets a lower bound on the size of the device. On the other hand, scattering to the usually imperfectly insulating bulk sets an upper bound. From estimates of experimental parameters, we find that typical samples may have no size window in which the Majorana interferometer can operate, implying that a new generation of more highly insulating samples must be explored.'
author:
- 'Henrik Schou R[ø]{}ising'
- 'Steven H. Simon'
bibliography:
- 'ReferencesMajorana.bib'
title: |
Size Constraints on Majorana Beamsplitter Interferometer:\
Majorana Coupling and Surface-Bulk Scattering
---
Introduction
============
There has been an ongoing search for Majorana fermions in condensed matter systems, which has intensified over the last several years. [@fermMajo12] Vortices in a spinless $p$-wave superconductor have long been known to bind zero energy Majorana modes.[@ReadGreen; @Kopnin; @Volovik1999; @VorticesPWave] With $p$-wave superconductors being very rare in nature, no experiment has convincingly observed such Majoranas yet.[@Kallin] More recently it was predicted that Majorana bound states also exist in vortices in a proximity-induced superconductor on the surface of a topological insulator (TI).[@FuKane_proximity] A number of recent experiments on TIs in proximity to superconductors[@TopSuper; @RobustFabryPerotBiSE; @ProximityIndGap2ZBCP; @ProximityArtificialSC; @ProximityIndGapZBCP; @Nature_proximityBi2Se3] and other similar experimental systems[@Mourik1003; @NadjPerge602] have increased the interest in this possibility.
![The Majorana beamsplitter interferometer proposed by Fu and Kane, and Akhmerov et al.[@InterferometryBeenakker; @FuKane_interferometer] It consists of a 3D strong TI in proximity to a superconductor and magnets of opposite polarization.[]{data-label="fig:Interferometer3D"}](FigI1){width="\linewidth"}
The surfaces of TIs support gapless excitations with the dispersion of a Dirac cone[@TopInsulators] (in principle, TIs can have any odd number of Dirac cones in the Brillouin zone but we consider the simplest case of a single cone for simplicity). The spectrum can be gapped either by applying a magnetic field to give the Dirac fermion either a positive or negative mass, or by placing a superconductor in proximity to the surface. At interfaces between different gapped regions, gapless one-dimensional fermionic channels can develop. For example, an interface between two magnetically gapped regions with opposite mass signs will contain a gapless and chiral one-dimensional Dirac fermion mode. An interface between a magnetically gapped region and a superconducting region will contain a gapless chiral Majorana mode.[@FuKane_proximity; @InterferometryBeenakker; @FuKane_interferometer] It is the physics of these modes that we are exploring in the current paper.
A very elegant experiment, building an interferometer out of these gapless chiral modes, was proposed by Fu and Kane[@FuKane_interferometer] and simultaneously by Akhmerov et al.[@InterferometryBeenakker] The device is depicted schematically in Fig. \[fig:Interferometer3D\]. Incoming particles or holes, biased at a low voltage, flow into a Dirac channel between two oppositely polarized magnetic regions. The Dirac fermion is split into two Majorana fermions upon hitting the superconductor, one flowing in each direction around the superconducting region (drawn as a disk in Fig. \[fig:Interferometer3D\]). At the other end of the superconducting region the two Majorana modes are re-combined into a Dirac mode. The differential conductance of this device was predicted to take the values $0$ or $2e^2/h$ depending on whether the number of $\Phi_0 = h/(2e)$ vortices in the superconductor is even or odd, respectively. With the exception of quantum Hall systems, this was the first proposed realistic Majorana interferometry experiment, and it remains a good candidate for the first experiment to successfully establish the existence of chiral Majorana modes (although, we note that promising evidence of chiral Majorana modes was reported very recently[@HeChiralMajorana17]).
The most experimentally explored TI materials are the Bismuth based compounds,[@AndoReview; @TopInsFundPersp; @TopInsulators; @TopInsulators] including (among many others) Bi$_2$Se$_3$, Bi$_2$Te$_3$, and Bi$_2$Te$_2$Se. Within this class of materials, several experiments have successfully formed some sort of superconducting interfaces.[@TopSuper; @RobustFabryPerotBiSE; @ProximityIndGap2ZBCP; @ProximityArtificialSC; @ProximityIndGapZBCP; @Nature_proximityBi2Se3] In this paper we mainly have this type of material in mind. However, we note that the material SmB$_6$,[@SmB6_PRB; @SmB6_PhysRevX] which is possibly a topological Kondo-insulator, will be discussed in the conclusion.
In this paper we study two effects that restrict the size of the interferometer device, as summarised in Fig. \[fig:VortexEdgeCouplingSchematic\]. On the one hand, we consider coupling of the Majorana edge mode to the Majorana modes trapped in the core of the vortex in the center of the interferometer. When this coupling is sufficiently strong, i.e. when the interferometer is sufficiently small, the conductance signal will be distorted, rendering the interpretation of the experiment difficult. The coupling is generally strong on the scale of the coherence length $\xi=\hbar v_F/\Delta_0$ where $v_F$ is the Fermi velocity and $\Delta_0$ is the proximity gap. This length scale can be on the order of a micron (we discuss materials parameters in section \[sec:LowerBoundSize\], \[sec:SBRate\], and \[sec:ContradictingRequirements\]).
Next, we consider surface-bulk scattering due to the unintentionally doped and poorly insulating bulk of TIs.[@PRLNonmetallicBismuthSelenide; @AndoReview; @Skinner2013] We find that when current leaks from the Majorana edge channel to the ground, the signal obtained at the end of the interferometer drops exponentially with a length scale set by the surface-to-bulk scattering length from disorder. For typical samples this length scale can be shorter than a micron. Thus we have two potentially conflicting constraints on the size of the proposed interferometer. We will discuss the possible directions forward in the conclusion.
The outline of this paper is as follows. In section \[sec:Background\] we review the formalism of Majorana interferometry and the characteristic conductance signal of the experiment. In section \[sec:MajoranaCoupling\] we consider the impact of coupling between the chiral Majorana and a bound Majorana mode in a vortex core. We study how this limits the source-drain voltage and sets a lower bound on the size of the system. In section \[sec:SBScattering\] we consider the leakage of current from the device to the TI substrate and we establish an upper limit on the system size due to this leakage. We conclude with numerical estimates and an outlook for Majorana interferometry on the surface of TIs. Appendix \[sec:AverageCurrentDerivation\] contains a derivation of the average currents. In Appendix \[sec:GeneralizedMajoranaCouplings\] we provide generalizations of the single-point Majorana coupling displayed in Fig. \[fig:VortexEdgeCouplingSchematic\] (a). Vortex-bound Majorana fermions in a TI/SC hybrid structure and the energy splitting between zero modes is discussed in Appendix \[sec:MajoranaBoundStates\]. In Appendix \[sec:SBScatteringAppendix\] we present details of our estimate of the surface-bulk scattering rate based on Fermi’s Golden rule. Finally, in Appendix \[sec:AcousticPhonons\] signal loss due to scattering with acoustic phonons is briefly discussed.
Background on Interferometry with Majorana Fermions {#sec:Background}
===================================================
In this section, we first review the formalism needed to calculate the conductance and interferometry current.[@FuKane_interferometer; @InterferometryBeenakker; @MajoResonantAndreevReflection; @ScatteringChiralMajorana] In Fig. \[fig:VortexEdgeCouplingSchematic\] (a) we show a top view of the interferometer that was described in the introduction. For the moment we ignore the Majorana coupled to the top arm (marked as an “X") at position 0. Charge transport from the source to the drain is computed by using a transfer matrix which describes transport of particles and holes from the source on the left ($L$), via the perimeter of the superconductor, to the drain on the right ($R$), $[ \psi_e, \psi_h]_R^T = \pazocal{T}[\psi_e, \psi_h]_L^T$ where ${}^T$ here means transpose. The matrix $\pazocal{T}$ can be decomposed into three pieces corresponding to the three key steps between the source and the drain $$\label{eq:TransferMatrix}
\pazocal{T} = S^{\dagger}
\pazocal{P} S = \begin{pmatrix}
\pazocal{T}_{ee} & \pazocal{T}_{eh} \\
\pazocal{T}_{he} & \pazocal{T}_{hh}
\end{pmatrix}.$$ The unitary matrix $S$ relates the Majorana states $[ \xi_1, \xi_2]$ running along the upper and lower edge of the superconducting disk to the electron and hole states $[\psi_e, \psi_h]$ that enter via the leads, $[ \xi_1, \xi_2]^T = S[\psi_e, \psi_h]^T$. The matrix $\pazocal{P}$ contains plane wave phases that the low energy chiral Majorana modes accumulate as they move along the edge of the superconducting disk. Finally, the matrix $S^\dagger$ reassembles the two Majoranas into outgoing electron and hole states that enter the drain.
The matrices $S$ and $\pazocal{T}$ are functions of energy. Due to the particle-hole symmetry, we must have $S(E) = S^{\ast}(-E) \tau_x$, where $\tau_x$ is a Pauli matrix in particle-hole space. At $E = 0$ these constraints fix $S(0)$ up to an overall phase that observables do not depend on,[@InterferometryBeenakker] $$S(0) = {\frac{1}{\sqrt{2}}}\begin{pmatrix}
1 & 1 \\
i & - i
\end{pmatrix} \begin{pmatrix}
e^{i\alpha} & 0 \\ 0 & e^{-i\alpha}
\end{pmatrix}.
\label{eq:SzeroEnergy}$$ We will apply $S(0)$ with $\alpha = 0$ for convenience. As discussed in Ref. this form is exact when the system has a left-right symmetry. Even in cases where the system breaks this symmetry, corrections are $\pazocal{O}(E^2)$ and can thus be ignored at low temperature and low voltage (see Appendix \[sec:SmatrixCorrections\]). We study the symmetric situation where the magnetization is $ M_0 \equiv M_{\uparrow} = -M_{\downarrow}$ throughout the main text. The magnetization enters the model Hamiltonian given in Eq. .
The transfer matrix is used to calculate the average current in the drain as the difference between the electron and hole current, with the source biased at voltage $V$ (see Appendix \[sec:AverageCurrentDerivation\] for a derivation), $$I_D = {\frac{e}{h}} \int_0^{\infty} {\mathrm{d}}E \hspace{1mm} \delta f(E) \left( \lvert \pazocal{T}_{ee}\rvert^2 - \lvert \pazocal{T}_{eh}\rvert^2 \right).
\label{eq:DrainCurrent}$$ Here, $\delta f = f_e - f_h$ with $f_{e/h}(E) = f(E\mp eV)$, and $f(E) = (1+e^{\beta E})^{-1}$ is the Fermi-Dirac distribution with $\beta = 1/(k_B T)$ and $E$ measured relative to the Dirac point. The incoming current is $$I_S = {\frac{e}{h}} \int_0^{\infty} {\mathrm{d}}E \hspace{1mm} \delta f(E).
\label{eq:SourceCurrent}$$ By current conservation, a net current of $I_{\mathrm{SC}} = I_S - I_D$ is absorbed by the grounded superconductor. As follows from Eq. , , and unitarity of $\pazocal{T}$ the differential conductance measured in the grounded superconductor at zero temperature is: $$G_{\mathrm{SC}}(V) = {\frac{{\mathrm{d}}I_{SC}}{{\mathrm{d}}V}} \Big\lvert_{T = 0} = {\frac{2e^2}{h}} \lvert \pazocal{T}_{eh}(eV) \rvert^2.
\label{eq:DifferentialConductance0}$$ In order to calculate $G_{\mathrm{SC}}(V)$ we need only establish the properties of the propagation matrix $\pazocal{P}$. If the arms of the interferometer are of length $l_1$ and $l_2$ we have $$\pazocal{P}(E) =
\begin{pmatrix}
e^{ik(E) l_1 + 2i\phi(E)} & 0 \\
0 & e^{ik(E) l_2}
\end{pmatrix}.
\label{eq:PhaseMatrix}$$ Above the wavevector is of the form $k(E) = E/v_m$ due to the linear dispersion of the Majorana modes[@FuKane_interferometer] where $v_m$ is the Majorana velocity. We have included an additional phase shift $\phi$ which may come from a number of sources. In Refs. the possibility of $n$ vortices being added in the center of the superconducting region was considered. In this case the additional phase is $e^{2 i\phi}=(-1)^n$. Inserting $\pazocal{P}$ into Eq. and at zero temperature yields the conductance $$G_{\mathrm{SC},0}(V) = \frac{2e^2}{h}\sin^2\left( {\frac{n\pi}{2}} + {\frac{eV \delta L}{2\hbar v_m}} \right) .
\label{eq:DifferentialConductanceOnePrime}$$ Here, $\delta L = l_1 - l_2$ is the difference between the lengths of the two arms. At $\delta L = 0$, $G_{\mathrm{SC},0}(V=0) = 2 e^2/h$ for $n$ odd, and is zero for $n$ even. This would be a rather clear experimental signature. For the same phase matrix $\pazocal{P}$, the drain current in Eq. evaluates to $$I_{D,0}(V) = (-1)^n \pi k_B T {\frac{e }{h}} {\frac{\sin{({\frac{eV \delta L}{\hbar v_m}} )}}{\sinh{({\frac{\pi \delta L k_B T}{\hbar v_m}} )}}} ,
\label{eq:CurrentZerothOrder}$$ which holds in the low temperature and low voltage limit (compared to the bulk gap). We emphasise that these results are derived with zero coupling to the central Majorana and to any other degrees of freedom (e.g. phonons or conducting bulk states), hence the subscript $0$.
Effect of Majorana Coupling {#sec:MajoranaCoupling}
===========================
We now consider the effect of coupling the Majoranas trapped in the cores of vortices in the superconductor to the chiral edge states. Each vortex traps a single Majorana mode. If a vortex is close to the edge (roughly within the coherence length) there will be tunnelling coupling as shown in Fig. \[fig:VortexEdgeCouplingSchematic\] (a), where the Majorana zero mode is marked as an “X” and the (tunnelling) coupling matrix element is of magnitude $\lambda$. The magnitude of the coupling drops exponentially with the distance between the vortex and the edge, see Appendix \[sec:VortexBoundStates\] and \[sec:Energysplittingpplusip\].
We very generally describe the chiral Majorana on the upper interferometer arm, $\xi_1$, by a Lagrangian density $\pazocal{L}_1 = i\xi_1(\partial_t + v_m \partial_x) \xi_1$ where $x$ is the spatial coordinate along the upper edge. A similar description holds for the state on the lower edge, $\xi_2$. The vortex bound Majorana, $\xi_0$, is described by $\pazocal{L}_0 = i \xi_0 \partial_t \xi_0 $. We add the coupling term between the central bound state and the chiral mode $$\pazocal{L}_{\mathrm{bulk}-\mathrm{edge}} = 2 i \lambda \xi_1(x=0) \xi_0.
\label{eq:CouplingLagrangian}$$ The equations of motion, following from the full Lagrangian $\pazocal{L}_0 + \pazocal{L}_1 + \pazocal{L}_{\mathrm{bulk}-\mathrm{edge}}$, are given by [@Fendley2009; @Bishara2009] $$\begin{aligned}
2\partial_t \xi_0 &= \lambda \left[ \xi_{1R} + \xi_{1L} \right], \\
v_m \xi_{1R} &= v_m \xi_{1L} + \lambda \xi_0.
\label{eq:EquationsOfMotion0}\end{aligned}$$ Here, the notation $\xi_{1R} = \xi_1(x = 0^+)$ and $\xi_{1L} = \xi_1(x = 0^-)$ was introduced. A Fourier transformation yields a phase shift across the coupling point, $$\xi_{1R}(\omega) = {\frac{\omega + i\nu}{\omega - i\nu}} \hspace{1mm} \xi_{1L}(\omega) = e^{2i\phi} \hspace{1mm} \xi_{1L}(\omega),
\label{eq:FrequencyShift}$$ with $\omega$ the frequency, $\nu \equiv \lambda^2/(2\hbar v_m)$, and $\phi(\omega) = \arctan(\nu/\omega)$. This energy-dependent phase shift is inserted into Eq. and we obtain the zero-temperature result $$G_{\mathrm{SC}}(V) = {\frac{2e^2}{h}}\sin^2\left( {\frac{n\pi}{2}} + {\frac{eV \delta L}{2\hbar v_m}} + \arctan{\left({\frac{\nu}{eV}} \right) } \right).
\label{eq:DifferentialConductanceOne}$$ Observe that the even-odd effect undergoes a crossover when the coupling strength is of the order $\lambda \approx \sqrt{2\hbar v_m eV}$. At zero voltage, or at infinite coupling strength, the even-odd effect is reversed from its value at high voltage or low coupling strength. This crossover is equivalent to shifting $n$ by one in $G_{\mathrm{SC,0}}$, causing $\xi_1$ to acquire a phase shift of $\pi$ at low energy, and it is interpreted as the vortex Majorana effectively being absorbed by the edge.[@Fendley2009] Similar results are known from quantum Hall interferometers at filling fraction $5/2$. [@QHI52PRL2008; @QHInterferometers52; @Bishara2009] The original conductance is recovered at high voltage (see Fig. \[fig:DiffConductance\]).
![The differential conductance from Eq. as a function of the voltage. Here, $\nu = 1$ $\mu$eV and $\hbar v_m / \delta L = 2$ $\mu$eV. The dotted lines show the conductance with $\nu = 0$. At low voltage the even-odd effect is reversed.[]{data-label="fig:DiffConductance"}](FigIII1){width="0.8\linewidth"}
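The reversal of the even-odd effect at low voltage can be reproduced with a minimal numerical sketch of Eq. , using the same parameters as in the figure ($\nu = 1$ $\mu$eV, $\hbar v_m/\delta L = 2$ $\mu$eV):

```python
# Sketch of G_SC(V) including the arctan(nu/eV) shift; parameters as in the
# figure above (nu = 1 ueV, hbar*v_m/delta L = 2 ueV).
import numpy as np

nu = 1.0     # nu = lambda^2/(2 hbar v_m) [ueV]
E0 = 2.0     # hbar v_m / delta L [ueV]

def G(eV, n):
    """Differential conductance in units of 2e^2/h (eV in ueV, eV > 0)."""
    return np.sin(n * np.pi / 2 + eV / (2 * E0) + np.arctan(nu / eV))**2

for eV in [0.01, 0.5, 1.0, 5.0, 20.0]:
    print(f"eV = {eV:5.2f} ueV:  n even -> {G(eV, 0):.3f},  n odd -> {G(eV, 1):.3f}")
# For eV << nu the even and odd values are interchanged relative to eV >> nu,
# which is the crossover discussed in the text.
```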
The above result applies when there is no position dependence in the coupling to the edge. If we instead consider a continuous bulk-edge coupling $$\pazocal{L}_{\mathrm{bulk}-\mathrm{edge}} = 2 i \int {\mathrm{d}}x \hspace{1mm} \lambda(x) \xi_1(x) \xi_0,
\label{eq:CouplingContinuous}$$ then the total phase shift is again given by Eq. with $$\lambda^2 \to 2 \int {\mathrm{d}}x \hspace{1mm} \lambda( x ) e^{i k x} \int_{x} {\mathrm{d}}x' \lambda(x') e^{- i k x'}
\label{eq:CouplingReplacement}$$ in the numerator and a similar replacement in the denominator, see Appendix \[sec:SmearedCoupling\]. We note that if $x_c \ll 1/k$, where $x_c$ is defined such that $\lambda( \lvert x \rvert > x_c ) \approx 0$, then the effective $\lambda^2$ above becomes $ \langle \lambda(x) \rangle^2$ with $\langle \lambda(x) \rangle = \int {\mathrm{d}}x \lambda(x)$ up to corrections of order $kx_c$.
The scheme above can also be generalized to include more complicated couplings to multiple (vortex) Majorana modes. So long as these modes are coupled to only a single edge, and not to each other, each coupling contributes a phase shift of $\arctan(\nu/(eV))$ to the propagation phase along the edge; see Appendix \[sec:CouplingMultipleVortices\]. If multiple vortex Majoranas are instead coupled to each other, an even number of Majoranas will gap out, whereas an odd number will leave a single effective Majorana zero mode.
In the above calculation we assumed Majorana coupling to one edge only. Since the coupling varies exponentially with the distance to the edge, it is not unreasonable that this will effectively be the case. However, it is also realistic that a vortex will be roughly equidistant from, and hence equally coupled to, both edges. This case is discussed in detail in Appendix \[sec:CouplingBothEdges\]. While the general result is complicated, at least for equal couplings to two edges of equal length the physics of the even-odd crossover found in this section remains unchanged. Finally, coupling between a single vortex Majorana and the edge at finite temperature is discussed in Appendix \[sec:CouplingFiniteTemperature\].
Lower Bound on Size and Voltage Constraints {#sec:LowerBoundSize}
-------------------------------------------
The crossover derived above makes the observation of the even-odd conductance effect impossible at low voltage. The proposed experiment is to add a single vortex and observe a change in conductance (say, from zero to $2e^2/h$). However, if the Majorana is then effectively absorbed into the edge, the conductance remains zero even once the vortex is added, destroying the predicted effect. This will occur for interferometers of size comparable to the coherence length, i.e. the length scale of an order-parameter deformation. Assuming that $\Delta_0 = 0.1$ meV is an achievable proximity-induced gap in e.g. Bi$_2$Se$_3$,[@ProximityIndGapZBCP; @ProximityIndGap2ZBCP] the naive lower size bound for the disk is $\xi = \hbar v_F/\Delta_0 \simeq 4$ $\mu$m.[@NatureBismuthFermiVel] The tunnelling coupling is expected to decay like $\lambda \propto \mu \exp(- R/\xi)$ when $R\gg \xi$ (see Eq. ) where $\mu$ is the chemical potential relative to the Dirac point.[@MajoranaSupercondIslands] For disks of size $R \simeq \xi$ the energy splitting is comparable to the energy gap and the notion of stable edge/vortex states breaks down.
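A back-of-the-envelope check of this size bound is sketched below, assuming a Bi$_2$Se$_3$-like surface Fermi velocity of order $5\times 10^5$ m/s (the precise value is an assumption here):

```python
# Back-of-the-envelope estimate of the lower size bound xi = hbar v_F / Delta_0.
hbar = 6.582119569e-16   # eV s
v_F = 5.0e5              # surface Fermi velocity [m/s] (assumed round number)
Delta0 = 1.0e-4          # proximity-induced gap, 0.1 meV, in eV

xi = hbar * v_F / Delta0
print(f"xi ~ {xi * 1e6:.1f} um")   # a few micrometres, as quoted in the text
```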
We note, however, that the Majorana coupling term vanishes identically at $\mu = 0$ in the topological insulator superconductor hybrid structure.[@TunnelingMajoranamodes; @StronglyInteractingMajoranas] This is related to the appearance of an additional symmetry at the Dirac point, bringing the Hamiltonian (Eq. in the absence of a magnetic field) from symmetry class D to class BDI in the Altland-Zirnbauer classification. If there is disorder inducing local fluctuations in the chemical potential (see section \[sec:ContradictingRequirements\]),[@BandFluctuationsPuddlesNat] say on some scale $\delta \mu$, a random coupling term of the type in Eq. will be present and cause energy splitting. Still, the scenario of having the average $\langle \lambda(x) \rangle \sim \langle \mu(x) e^{-r(x)/\xi} \rangle \approx 0$, which would make the parasitic phase shift vanish, is possible but becomes extremely geometry sensitive (e.g. sensitive to the vortex position) if $R \simeq \xi$. Moreover, we should expect $\langle \lambda(x) \rangle $ to be on the scale of $\delta \mu \sqrt{l/d}$, which in general will not be small. Here, $d$ is the length scale associated with the energy fluctuations. Assuming that we cannot control the disorder, the only way to assure suppression of unwanted phase shifts and energy splitting is to increase $R$. Thus, we will use $R = \xi$ as a strict lower size bound on the device.
As far as chiral transport on the interferometer arms is concerned, it seems at this stage that one can operate at high voltage, $eV \gtrsim \nu$ to avoid the coupling. Naturally, the voltage is constrained from above by the global bulk gap, $eV \lesssim \min \lbrace M_0, \Delta_0 \rbrace$, to avoid excitation of non-topological states. Thus, if $R \simeq \xi$ the remaining voltage range, $\nu \lesssim eV \lesssim \min \lbrace M_0, \Delta_0 \rbrace$, might be too limited to have a clear experimental signature. Increasing the disk radius to suppress the coupling (and therefore lower $\nu$) induces the problem of signal leakage to conducting bulk states, which we estimate below.
Finally, we note that the upper voltage bound in practice can be lower. This is due to the Caroli-de Gennes-Matricon excited states of the vortex, characterised by a minigap, $\sim \Delta_0^2/\mu$,[@CdGM; @CdGMstatesPwaveFluids; @VortexCdGMstates] which can be less than one mK. The excited vortex bound states can be activated thermally or by tunnelling from the edge states, in analogy to the zero mode tunnelling in the beginning of this section. By pinning the vortex to a hole in the superconductor, the minigap can be increased to a substantial fraction of $\Delta_0$. [@PinnedVortex; @RobustMajoranaSCislands] Although the details of such a tunnelling process are outside the scope of this paper, additional resonances and conductance phase shifts are expected as $eV$ hits the bound state energies.
Surface-Bulk Scattering {#sec:SBScattering}
=======================
Topological qubits are intrinsically protected from decoherence. [@AnyonQuantumComputation] In protocols based on braiding with topological qubits, leakage of current is harmful since it generically causes entanglement with the environment that potentially corrupts the qubits.[@FuKane_proximity] Although the experiment we study here does not probe a topological qubit, both types of experiment are sensitive to bulk leakage. Since most TIs are poorly insulating,[@TopInsFundPersp] bulk leakage is a relevant problem to consider.
In this section we model leakage of Majorana modes from the surface of the TI to its poorly insulating bulk. This is done by coupling the interferometer arm first to a single metallic lead. Then, we take multiple weakly coupled metallic leads to represent a continuously leaking environment. We combine the result of this scattering process with an estimate of the surface-to-bulk scattering rate from disorder in doped TIs. Our results suggest an upper size bound that potentially coincides with the lower bound discussed in section \[sec:MajoranaCoupling\] for many unintentionally doped TIs.
Scattering on Conducting Leads
------------------------------
Let the upper interferometer arm be coupled to a metallic lead as depicted in Fig. \[fig:VortexEdgeCouplingSchematic\] (b). Referring to the formalism of Ref. , the lead fermions are transformed to a Majorana basis, $[\eta_1^{(\pm)}, \eta_2^{(\pm)}]^T = S[\psi_e^{(\pm)}, \psi_h^{(\pm)}]^T$ with the superscript indicating incoming ($-$) or outgoing ($+$) states. The $S$ matrix here may differ from the one in Eq. at low energy only by having a different phase $\alpha$, which is irrelevant for observables. The scattering process in the Majorana basis is denoted by $\pazocal{A}(E)$, $$\begin{pmatrix}
\xi_{1R}, & \eta_1^{(+)}, & \eta_2^{(+)}
\end{pmatrix}^T
= \pazocal{A}(E) \begin{pmatrix}
\xi_{1L}, & \eta_1^{(-)}, & \eta_2^{(-)}
\end{pmatrix}^T.
\label{eq:ScatteringSingleLead}$$ By rotating the lead particles and holes into the appropriate Majorana basis, the chiral Majorana mode can be shown to decouple from one of the (artificial) lead Majoranas.[@ScatteringChiralMajorana] In the low energy limit, the scattering matrix is a real rotation matrix, $ \pazocal{A}(E) \in \mathrm{SO}(3)$. If we denote the local reflection amplitude of $\eta_1$ ($\eta_2$) by $r_1$ ($r_2 = 1$), this means that the scattering matrix can be parametrized by $r_1$ at the junction only (the transmission amplitude is $t_1 = \sqrt{1-r_1^2}$), $$\pazocal{A}(E) = \begin{pmatrix}
r_1 & - t_1 \\
t_1 & r_1
\end{pmatrix} \oplus r_2.
\label{eq:RelationWandW_M}$$ Including the state $\xi_2$ on the lower arm and the plane wave phases acquired across the interferometer, we obtain the matrix $\pazocal{P}$ acting on $[\xi_{1L},\xi_{2},\eta_1^{(-)}, \eta_2^{(-)}]^T$, $$\pazocal{P} = \begin{pmatrix}
r_1 e^{ikl_1 + in\pi} & 0 & - t_1 e^{i{\frac{kl_1}{2}}} \\
0 & e^{ikl_2} & 0 \\
t_1 e^{i {\frac{kl_1}{2}}+ in\pi} & 0 & r_1 \end{pmatrix} \oplus 1.
\label{eq:PmatrixScattering}$$ Here, $r_2 = 1$ was used. The transfer matrix can be computed, and it has the $2\times 2$ sub block structure $$\pazocal{T} = \big( S^{\dagger}
\oplus S^{\dagger} \big)
\pazocal{P} \big( S
\oplus S \big)
= \begin{pmatrix}[c | c]
\pazocal{T}_{S \to D} & \pazocal{T}_{\ell_{\mathrm{in}} \to D} \\ \hline
\pazocal{T}_{S \to \ell_{\mathrm{out}}} & \pazocal{T}_{\ell_{\mathrm{in}} \to \ell_{\mathrm{out}}}
\end{pmatrix}.
\label{eq:SingleLeadScatteringT}$$ Above, the subscripts indicate the result of transport from/to the source ($S$), the drain ($D$), or the incoming/outgoing lead ($\ell_{\mathrm{in/out}}$). The blocks in this transfer matrix are used to find the average current contribution as measured in contact $\beta$, with $$\begin{split}
&\lvert (\pazocal{T}_{\alpha \to \beta})_{ee} \rvert^2 - \lvert (\pazocal{T}_{\alpha \to \beta})_{eh} \rvert^2 \\
&= \begin{cases}
r_1(-1)^n\cos{\left( {\frac{E\delta L}{v_m}}\right)} & (\alpha, \beta) = (S, D) \\
r_1 & (\alpha, \beta) = (\ell_{\mathrm{in}}, \ell_{\mathrm{out}}) \\
0 & (\alpha, \beta) = (S, \ell_{\mathrm{out}}), (\ell_{\mathrm{in}}, D)
\end{cases}.
\end{split}
\label{eq:SingleLeadScatteringCurrentKernel}$$ Here, the first line gives the contribution from combining Majoranas $\xi_1$ and $\xi_2$, the second line from combining $\eta_1$ and $\eta_2$, and the third line from combining $\xi_1$ and $\eta_1$. Combining Majoranas from different sources always gives a vanishing contribution to the average current.[@ScatteringChiralMajorana]
The drain current is thus $I_{D}(V) = r_1 I_{D,0}(V)$, where $I_{D,0}$ is defined in Eq. ; the visibility is coherently reduced by $r_1$. The current in the outgoing lead is $I_{\ell_{\mathrm{out}}} = r_1 e^2 V'/h$ at $T= 0$ when the lead is biased at $V'$. Subtracting the incoming current $I_{\ell_{\mathrm{in}}} = e^2V'/h$ yields a net current of $I_{\ell_{\mathrm{out}}} - I_{\ell_{\mathrm{in}}} = (r_1 - 1) e^2 V'/h$ in the conducting lead. When decoupling the lead completely, $r_1 = 1$, no net current goes in the lead.
The matrix $\pazocal{P}$ can be trivially extended to include scattering on many single-mode leads. Repeating the calculation above with two scattering leads of the same reflection amplitude $r_1$ gives the same reduction of current in both leads, independent of which arms the leads are coupled to. The drain current is reduced by $r_1^2$. Generalising these statements by induction,[^1] with $N$ identical scatterers of reflection amplitude $r_1$, the total conductance from the collection of leads (representing the bulk of the TI) is $G_{\mathrm{leads}} = e^2 N(1-r_1)/h$. Furthermore, the $N$ scatterers reduce the visibility of the drain current multiplicatively as $I_{D}(V) = r_1^N I_{D, 0}(V)$.
If we let $l_S$ denote the average length a chiral Majorana travels before it scatters into the bulk, we may by definition express the reflection amplitude as $r_1 = 1 - {\frac{1}{N}} {\frac{ l }{ l_S }}$, which is equivalent to defining the leakage conductance into the collection of leads by $G_{\mathrm{leads}} = e^2 l/(h l_S)$. Taking the continuum limit of infinitely many weak scatterers means that $\lim_{ N\to \infty} r_1^N = \exp(- l/l_S)$, and the current is exponentially suppressed in the drain, $$I_{D}(V) = I_{D,0}(V) e^{ - l/l_S }.
\label{eq:ReducedDrainCurrent}$$ As for the differential conductance measured in the grounded superconductor, the amplitude of the oscillations are suppressed $$G_{\mathrm{SC}}(V) = G_{\mathrm{SC},0}(V) e^{ - l/l_S } + {\frac{e^2}{h}} (1- e^{ - l/l_S } ),
\label{eq:ReducedDiffConductance}$$ and distinguishing even from odd $n$ is rendered difficult as $l$ surpasses $l_S$. Thus, $l_S$ acts as an upper bound on the circumference of the length of the interferometer arms.
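The approach of $r_1^N$ to the continuum limit $e^{-l/l_S}$, and hence the suppression of the interference visibility, can be illustrated with a few lines of code (the value of $l/l_S$ below is an arbitrary assumption):

```python
# Sketch: N identical weak scatterers with r_1 = 1 - (l/l_S)/N suppress the
# coherent signal as r_1^N -> exp(-l/l_S) in the continuum limit.
import numpy as np

l_over_lS = 1.5   # arm length in units of l_S (an arbitrary assumed value)
for N in [10, 100, 1000, 10000]:
    r1 = 1.0 - l_over_lS / N
    print(f"N = {N:6d}:  r1^N = {r1**N:.4f}")
print(f"exp(-l/l_S)        = {np.exp(-l_over_lS):.4f}")
# The oscillating part of G_SC is multiplied by this factor, while an
# incoherent background (e^2/h)(1 - exp(-l/l_S)) builds up.
```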
The Surface-Bulk Scattering Rate {#sec:SBRate}
--------------------------------
The remaining step of our argument is to estimate the scattering length $l_S$. We start by calculating the surface-bulk (SB) scattering length for a TI surface electron ($l_S^{(e)}$) in the absence of any surface superconductor or magnet. Then, we use a similar calculation to estimate the Majorana scattering length ($l_S^{(m)}$). We restrict ourselves to elastic (zero temperature) surface-bulk scattering, where the electron-phonon coupling is irrelevant (see subsection \[sec:OtherLimitations\] and Appendix \[sec:AcousticPhonons\]). Building on the formalism developed in Ref. we consider scattering via screened charge impurities. We note that the consideration in Ref. is restricted to point scatterers where the overall scattering strength is a priori unknown, whereas in our approach the effective potential strength is fixed by the screened Coulomb potential and the dopant concentration.
Specifically, we consider a surface state ($S$) with initial wave vector ${\boldsymbol{k}} = ({\boldsymbol{k}}_{\parallel}, 0)$ that scatters to a lowered conduction band (a charge puddle) $n'$ in the bulk ($B$) with final wave vector ${\boldsymbol{k}}' = ({\boldsymbol{k}}_{\parallel}', k_z')$. The incoming surface state has energy $\epsilon_F = v_F k_{\parallel}$. For numerical purposes we work with a TI slab of thickness $L = 40$ nm. In Fig. \[fig:WavefuncsAndRate\] (a) we show the form of $\lvert \Psi(z) \rvert^2$ at zero parallel momentum for the lowest three eigenstates (derived in Ref. ). One of the TI surface states, penetrating tens of Ås into the bulk, is seen in purple. Assuming randomly distributed screened (dopant) charges with average concentration $n_{\mathrm{3D}}$ we get the scattering rate from Fermi’s Golden rule (cf. Appendix \[sec:SBScatteringAppendix\]): $$\begin{aligned}
\Gamma^{\mathrm{imp}}_{\mathrm{SB}}(\epsilon_F) &= \label{eq:ScatteringRateLong} \\
& \hspace{-40pt} n_{\mathrm{3D}} \left( {\frac{e^2 }{2\pi \epsilon_0 \epsilon_r k_{\parallel}^2}} \right)^2 \sum_{n' \in B} {\frac{ k_{\parallel}' }{\lvert \nabla \xi_{B,n',k_{\parallel}' } \rvert}} \int_0^{\infty} {\mathrm{d}}k_z' {\frac{{\mathrm{d}}\sigma_{\mathrm{long}}^{(n')}(\epsilon_F)}{{\mathrm{d}}k_z'}} \nonumber.\end{aligned}$$ Here, $\xi_{B,n',k_{\parallel}'}$ is the dispersion of bulk band $n'$ defining the outgoing wave vector $k_{\parallel}'$ by $\xi_{B,n',k_{\parallel}'} = \epsilon_F$. The differential scattering rate ${\mathrm{d}}\sigma_{\mathrm{long}}^{(n')}(\epsilon_F)/ {\mathrm{d}}k_z'$ is the screened Coulomb coupling convoluted with the overlap between the bulk and surface states (cf. Eq ), see Fig. \[fig:WavefuncsAndRate\] (c).
In most topological insulators the Fermi level resides in the bottom of the conduction band or the top of the valence band, and a significant doping of acceptors/donors is needed to reach a bulk insulating state.[@Nature09; @Skinner2013] For Bi$_2$Se$_3$ films (typically being $n$-doped) we use the dopant concentration $n_{\mathrm{3D}} = 10^{19}$ cm$^{-3}$. [@GapsHybridStruct17; @Skinner2013; @ChargePuddlesPRB16] Such doping causes fluctuations in the average (doping) concentration, making the conduction band bend and induce electron and hole puddles,[@Skinner2013; @BandFluctuationsPuddlesNat] i.e. effectively filled pockets from the conduction band at the Fermi level. As a simple model of a typical situation we imagine that chemical doping of the TI bulk lowers the bulk Fermi level to lie in some region $0.10$ eV $ \lesssim \epsilon_F \lesssim 0.18$ eV relative to the Dirac point. This would naively be bulk insulating since the Dirac point in Bi$_2$Se$_3$ is separated from the conduction band by a gap of $0.28$ eV.[@NatureBi2Se3] Puddles coming from stretching the bottom of the conduction band are simply modelled by setting the TI gap to $\Delta_{\mathrm{TI}} = 0.1$ eV, making elastic scattering to the puddles (i.e. the lowered conduction bands) allowed at the Fermi level, see Fig. \[fig:WavefuncsAndRate\] (b). This captures the generic features in unintentionally doped TIs.
In order to consider scattering from the surface state into the bulk, we need to know how close the surface Fermi energy is to the Dirac point. Note that, as we will see below, the scattering rate increases strongly if we are not very close (compared to $\Delta_0$). Let us therefore assume that the surface Fermi energy is tuned near the Dirac point. The electronic scattering rates as a function of the bulk Fermi energy are shown in Fig. \[fig:WavefuncsAndRate\] (d). We note that the rate stays similar in magnitude for a large TI film thickness range. When the film is made thinner, there are fewer bulk states to scatter into, but this is compensated by an increased surface-bulk overlap. Moreover, the rates in Fig. \[fig:WavefuncsAndRate\] (d) have the same qualitative features as seen for point source scattering.[@SBcouopling14]
The (elastic) scattering lifetime of the surface electrons is $\tau_{\mathrm{SB}}^{(e)} = 1/\Gamma^{\mathrm{imp}}_{\mathrm{SB}}$, and the scattering length is given by $l_S^{(e)} = \tau_{\mathrm{SB}}^{(e)} v_{F}$. As a typical value of the rates in Fig. \[fig:WavefuncsAndRate\] (d) we use $\Gamma^{\mathrm{imp}}_{\mathrm{SB}} = 3$ $\mu$eV, and we arrive at the unusually long scattering time $\tau_{\mathrm{SB}}^{(e)} \approx 0.2$ ns ($l_S^{(e)} \approx 0.1$ mm). This is typically a factor of $10^2-10^3$ longer than the bulk scattering lifetimes seen in experiment.[@ChargePuddlesPRB16] Yet, our results could be consistent with the unusually long surface lifetimes reported after optically exciting bulk states.[@PRLElectronlifetimeBiSe; @PRLElectroniclifetime2] In Ref. a lower bound $\tau_{\mathrm{BS}}^{(e)}>10$ ps was deduced from observing a stable population of the surface state, fed by elastic bulk-surface scattering in Bi$_2$Se$_3$ (the samples were kept at $T = 70$ K), after the much faster inelastic decays had taken place. It would be desirable to see similar experiments, engineered to measure the elastic surface-bulk scattering rate in doped compounds, conducted at lower temperatures.
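For reference, the conversion from the typical rate quoted above to a lifetime and a mean free path is sketched below (restoring $\hbar$; the Fermi velocity is an assumed round number):

```python
# Converting the typical rate Gamma ~ 3 ueV into a lifetime and a mean free
# path (restoring hbar; v_F is an assumed round number for Bi2Se3).
hbar = 6.582119569e-16   # eV s
Gamma = 3.0e-6           # eV
v_F = 5.0e5              # m/s (assumed)

tau = hbar / Gamma
l_S = tau * v_F
print(f"tau_SB ~ {tau * 1e9:.2f} ns,   l_S ~ {l_S * 1e3:.2f} mm")
# ~0.2 ns and ~0.1 mm, the numbers quoted above.
```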
Velocity Suppression and Contradicting Requirements {#sec:ContradictingRequirements}
---------------------------------------------------
We now use the same expression in Eq. to extract information about the Majorana lifetime and scattering rate. In doing so we ignore the (confined) transverse profile of the Majorana surface state. This is a valid approximation because the reciprocal coherence length is far less than the maximal transverse scattering momentum in the studied energy regime, see Appendix \[sec:ConfinedWF\]. Moreover, we assume that the induced superconductivity does not gap the TI bulk, so that elastic surface-bulk scattering is not precluded. The Majorana scattering rate will typically be larger than or equal to the electronic scattering rate due to a suppression of the Majorana velocity relative to the Fermi velocity.
The Majorana velocity is calculated from the low-energy chiral solution of the Hamiltonian in Eq. : $v_m/v_F = \sqrt{1-(\mu/M_0)^2} [1+(\mu/\Delta_0)^{2} ]^{-1}$.[@FuKane_interferometer] Here, $\mu$ is the surface chemical potential relative to the Dirac point. From the $k_{\parallel} \propto 1/v_F$ dependency of $\Gamma^{\mathrm{imp}}_{\mathrm{SB}}$ in Eq. we deduce that $l_S^{(m)} = \tau_{\mathrm{SB}}^{(m)} v_m $ is suppressed approximately as $l_S^{(m)} \sim (v_m/v_F)^{\alpha} \hspace{1mm} l_S^{(e)}$ with $\alpha \approx 4$ (up to some prefactor of order $1$).[^2] Correspondingly, $\tau_{\mathrm{SB}}^{(m)} \sim (v_m/v_F)^{\alpha-1} \tau_{\mathrm{SB}}^{(e)}$. A non-zero window for the interferometer size is obtained by imposing $l_S^{(m)} \gtrsim \xi$, which in turn means that $$v_m/v_F \gtrsim \left( \xi / l_S^{(e)} \right)^{1/4}.
\label{eq:LengthCriterion}$$ Inserting the electronic scattering length $l_S^{(e)}$ from the end of the last subsection and $\xi \simeq 4$ $\mu$m into this criterion yields $v_m/v_F \gtrsim 0.4$. Hence, we must require a very fine tuning of $\mu$, $\lvert \mu \rvert \lesssim 1.2 \Delta_0 \simeq 0.1$ meV, given that one can achieve $M_0 \gg \mu$.
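The quoted tuning requirement follows from Eq. in the limit $M_0 \gg \mu$, where $v_m/v_F = [1+(\mu/\Delta_0)^2]^{-1}$, as the following minimal sketch shows (the values of $\xi$ and $l_S^{(e)}$ are the estimates used above):

```python
# Sketch of the fine-tuning implied by the length criterion for M_0 >> mu,
# where v_m/v_F = [1 + (mu/Delta_0)^2]^(-1).
import numpy as np

xi = 4.0e-6      # coherence length [m]
l_Se = 1.0e-4    # electronic surface-bulk scattering length [m]
Delta0 = 0.1     # proximity gap [meV]

c = (xi / l_Se)**0.25                  # required v_m/v_F
mu_max = Delta0 * np.sqrt(1.0 / c - 1.0)
print(f"v_m/v_F >= {c:.2f}  =>  |mu| <~ {mu_max:.2f} meV (of order 1.2*Delta_0)")
```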
When the TI bulk is increasingly doped with screened Coulomb charges to make the Fermi energy approach the Dirac point, fluctuations in the surface electrical potential energy are enlarged. The spatial dependence of the local density of states broadens the Fermi energy into a Gaussian distribution of finite width. The standard deviation of this energy smearing has been estimated in theory[@Skinner2013] and measured in experiment[@BandFluctuationsPuddlesNat] to be $\delta \mu \simeq 10 - 20$ meV within some $0.1$ eV of the average Dirac point ($\delta \mu$ decays in inverse proportion to the average chemical potential further away). For Fermi energies close to the average Dirac point the notion of a uniform local density of states breaks down. Tuning of the surface chemical potential is consequently not globally possible within an energy resolution set by the distribution width.[@ChargePuddlesPRB16] Importantly, the smearing greatly exceeds the tuning required above, $\lvert \mu \rvert \sim \delta \mu \gg \Delta_0$, even if we relax the assumption of a very small $\Delta_0$ by increasing it by one order of magnitude.
Finally, the spatial scale of the chemical potential fluctuations is $n_{\mathrm{3D}}^{-1/3} \approx 5$ nm. Hence, a precise tuning of the chemical potential might be possible within local regions of this size. Still, this would not help in probing the experiment since $n_{\mathrm{3D}}^{-1/3} \ll \xi$. Thus, unless these spatial fluctuations in the local density of states can be brought under control, unintentionally doped TIs are left unsuited for Majorana interferometry.
Other Limitations {#sec:OtherLimitations}
-----------------
Majorana interactions[@StronglyInteractingMajoranas] and coupling between Majorana modes and other degrees of freedom, such as phonons, are other potential sources of decoherence. Local interactions between chiral Majorana fermions are expected to be heavily suppressed at low temperatures and momenta with the leading order term going like $\pazocal{O}(k^6)$. [@FuKane_interferometer] The chiral Majoranas on the interferometer arms can also excite phonons if an electron-phonon coupling is present; see Appendix \[sec:AcousticPhonons\] for a short discussion. For spatial inversion symmetric materials, e.g. Bi$_2$Se$_3$ and Bi$_2$Te$_3$, where this coupling is dictated by a deformation potential, the decay rate of quasiparticles due to scattering on acoustic phonons at $T=0$ exhibits a $\Gamma^{\mathrm{ph}} \sim (eV)^3$ behaviour. This is a posteriori expected to be small compared to bulk leakage at low voltage. Without spatial inversion symmetry, a piezoelectric interaction can cause the electron-phonon coupling to follow a reciprocal power law in $q$, making scattering an increasing problem for small momenta.
Conclusions
===========
Two limiting effects in Majorana interferometers have been considered. We include a previously neglected coupling between chiral Majoranas and vortex-pinned modes. At low voltage this coupling yields a crossover in the conductance even-odd effect, distorting the interpretation of the experiment. We also find that surface-to-bulk scattering in the TI sets an upper bound on the size of the device. With a proximity gap of $\Delta_0 = 0.1$ meV, the lower bound (the coherence length) is about $\xi = \hbar v_F/\Delta_0 \simeq 4$ $\mu$m for Bi$_2$Se$_3$. Due to the doping needed to reach a bulk insulating state in many TIs, conduction band puddles lead to large fluctuations in the surface Fermi energy. In turn, this is in conflict with the fine tuning of the surface chemical potential required to have a non-zero size window for the interferometer. This potentially makes the experiment extremely difficult to realise in many poorly bulk-insulating Bismuth compounds.
The most natural ways to overcome the restrictions considered here are (i) to find superconductors with excellent contact to the TI such that $\Delta_0$ can be made (ideally orders of magnitude) larger, or (ii) to pursue a search for TIs with a highly insulating bulk. Most Bismuth based TIs are poorly insulating and are unintentionally doped,[@AndoReview] which results in them being unsuited for Majorana interferometry. However, recently reported mixed Bismuth compounds have shown evidence of an appreciably insulating bulk,[@ExcellentBulkResistivity] which could potentially open a window of opportunity for this material. We note that the recommendation (i) above is similar to the need for high-quality interfaces in superconductor-semiconductor nanowire devices.[@NanoWireReview]
Another possible material system to consider is the putative topological Kondo insulator SmB$_6$, which shows strong signs of surface conduction.[@SmB6_PhysRevX; @SmB6_SciRep] The bulk resistivity of this material can reach several $\Omega$cm at temperatures below a few Kelvin.[@SmB6_PhysRevX] Very recently, evidence of the superconducting proximity effect in Nb/SmB$_6$ bilayers was reported,[@SmB6_ProximityEffect] hence providing a possible platform for this experiment. However, the physics of strongly interacting topological Kondo insulators is still poorly understood, [@SmB6_Nature; @SmB6_PRB] and it is unclear how many of the details of the simple non-interacting TI surface physics will carry through to this more complicated case.
We thank Mats Horsdal, Dmitry Kovrizhin, and Ramil Akzyanov for useful discussions and Kush Saha for feedback on the numerics. H. S. R. acknowledges support from the Aker Scholarship. S. H. S. is supported by EPSRC grant numbers EP/I031014/1 and EP/N01930X/1.
Derivation of the Average Current {#sec:AverageCurrentDerivation}
=================================
The derivation of Eq. and goes along the lines of Ref. . The scattering states of the interferometer are (chiral) plane waves in the one-dimensional channel convoluted with a transverse wavefunction (we let $k>0$ by definition below), $$\begin{aligned}
\psi_L^e &= e^{ik_L x} \varphi_L^e(y), \label{eq:Scatter1} \\
\psi_L^h &= e^{-ik_L x} \varphi_L^h(y), \label{eq:Scatter2} \\
\psi_R^e &= \pazocal{T}_{ee} e^{ik_R x} \varphi_R^e(y) + \pazocal{T}_{eh} e^{-ik_R x} \varphi_R^h(y), \label{eq:Scatter3} \\
\psi_R^h &= \pazocal{T}_{hh} e^{-ik_R x} \varphi_R^h(y) + \pazocal{T}_{he} e^{ik_R x} \varphi_R^e(y). \label{eq:Scatter4}\end{aligned}$$ We have assumed that the contacts $\alpha \in \lbrace L, R \rbrace$ contain only single mode states and that the incident state in the $L$ contact is either an electron or a hole.
![The scattering states at the two contact points. []{data-label="fig:Blackbox"}](FigA1){width="0.9\linewidth"}
One can construct arbitrary states by expanding in the scattering states above. This is incorporated in the (second quantized) field operator $$\hat{\Psi}_{\alpha,\sigma}({\boldsymbol{r}},t) = {\frac{1}{\sqrt{2\pi}}} \int {\frac{{\mathrm{d}}E_{\alpha}}{\sqrt{\hbar v_{\alpha}}}} \hspace{1mm} \psi_{\alpha}^{\sigma}(E_{\alpha},{\boldsymbol{r}}) \hat{a}_{\alpha,\sigma}(E_{\alpha}) e^{-i\omega_{\alpha} t},
\label{eq:FieldOperator}$$ where we introduced $\omega_{\alpha} = E_{\alpha}/\hbar = v_{\alpha} k_{\alpha}$ and the annihilation operator $\hat{a}_{\alpha,\sigma}(E_{\alpha})$ of type $\sigma \in \lbrace e, h \rbrace$ satisfying $$\big\lbrace \hat{a}_{\alpha,\sigma}^{\dagger}(E),\hat{a}_{\beta,\sigma'}(E') \big\rbrace = \delta_{\alpha,\beta} \delta_{\sigma, \sigma'} \delta(E-E').
\label{eq:CommutationsAs}$$ The current operator of type $\sigma$ in contact $\alpha$ is defined by $$\hat{I}_{\alpha,\sigma} = {\frac{\hbar e}{2im}} \int {\mathrm{d}}{\boldsymbol{r}}_{\perp, \alpha} \hspace{1mm} \left[ \hat{\Psi}_{\alpha,\sigma}^{\dagger} \partial_x \hat{\Psi}_{\alpha,\sigma} - (\partial_x \hat{\Psi}_{\alpha,\sigma}^{\dagger}) \hat{\Psi}_{\alpha,\sigma} \right].
\label{eq:CurrentOperator}$$ Here, $m$ is the effective mass, $m v_{\alpha} = \hbar k_{\alpha}$. Assuming further that the contacts act as thermal reservoirs kept at equal temperature, we can average the density operator by $$\big\langle \hat{a}_{\alpha,\sigma}^{\dagger}(E) \hat{a}_{\alpha',\sigma'}(E') \big\rangle = \delta_{\alpha,\alpha'} \delta_{\sigma,\sigma'} \delta(E - E') f_{\sigma}(E),
\label{eq:ThermalAverage}$$ where $f_{e/h}(E) = [\exp(\beta [E \mp \mu])+1]^{-1}$ is the Fermi function for particles and holes. Finally, we assume the transverse wavefunctions to be orthonormal, $$\int {\mathrm{d}}y \hspace{1mm} (\varphi_{\alpha}^{\sigma}(y))^{\ast} \varphi_{\alpha'}^{\sigma'}(y) = \delta_{\alpha,\alpha'} \delta_{\sigma,\sigma'}.
\label{eq:OrthonormalWFs}$$ The source and the drain current are defined by the average particle minus hole current, $I_D \equiv \big< \hat{I}_{D,e} - \hat{I}_{D,h} \big>$ and $I_S \equiv \big< \hat{I}_{S,e} - \hat{I}_{S,h} \big>$. Calculating these explicitly by using Eq. , , , and unitarity of $\pazocal{T}$ leads exactly to the expressions in Eq. and .
Charge Transport with Majorana Coupling Disorder {#sec:GeneralizedMajoranaCouplings}
================================================
One Majorana with Smeared Coupling to the Edge {#sec:SmearedCoupling}
----------------------------------------------
Consider the case where the edge Majorana is coupled continuously to the bound state as in Eq. . We use the ansatz $\xi_1 (x) = f(x) e^{i (k x - \omega t) } $, with $f(-\infty) $ the Majorana (fermion) field on the far left and $\omega = v_m k$ for the linearly dispersing modes. The equations of motion can then be combined to give the continuum version of Eq. : $$- iv_m \omega \partial_x f(x) = \lambda(x) e^{- i k x} \int {\mathrm{d}}x' \hspace{1mm} \lambda(x') f(x') e^{ i k x'}.
\label{eq:SmearedCoupling}$$ Here we include the plane-wave phase contribution, which can be neglected in the discrete case $\lambda(x) = \lambda \delta(x)$. The equation above can be solved as follows. Integrating both sides gives the solution implicitly by $$f(x) = f(-\infty) - {\frac{\zeta}{i\omega v_m}} \int^{x} {\mathrm{d}}x' \hspace{1mm} \lambda(x') e^{-i k x'},
\label{eq:Functionf}$$ with $\zeta = \int {\mathrm{d}}x \lambda(x) f(x) e^{ i k x }$. Inserting this back into yields $$\zeta = {\frac{ f( -\infty) \int {\mathrm{d}}x \hspace{1mm} \lambda(x) e^{ i k x } }{ 1+ {\frac{1}{ i\omega v_m }} \int {\mathrm{d}}x \hspace{1mm} \lambda( x ) e^{i k x} \int^{x} {\mathrm{d}}x' \lambda(x') e^{- i k x'} }},
\label{eq:FunctionZeta}$$ Finally, this expression is used in , and we obtain $$\begin{aligned}
\label{eq:PhaseshiftContinuous}
f(+\infty) &= \\
&\hspace{-20pt} {\frac{\omega + {\frac{i}{\hbar v_m}} \int {\mathrm{d}}x \hspace{1mm} \lambda( x ) e^{i k x} \int_{x} {\mathrm{d}}x' \lambda(x') e^{- i k x'} }{\omega - {\frac{i}{\hbar v_m}} \int {\mathrm{d}}x \hspace{1mm} \lambda( x ) e^{i k x} \int^{x} {\mathrm{d}}x' \lambda(x') e^{- i k x'} }} f(-\infty) \nonumber.\end{aligned}$$ Comparing this to Eq. proves the statement in Eq. .
Multiple Majoranas Coupled to Each Edge {#sec:CouplingMultipleVortices}
---------------------------------------
In the main text, we obtained a phase contribution of $\phi = \arctan({\frac{\nu}{eV}})$ in the differential conductance when the edge was coupled to one vortex (Eq. ). This can be generalized if the chiral Majoranas have single-point couplings to several vortices. In that case $\phi$ is replaced by $\sum_i \phi_i - \sum_j \phi_j$, where $\phi_i = \arctan({\frac{\nu_i}{eV}})$ comes from vortex-edge-coupling with the upper edge and $\phi_j$ from coupling with the lower edge. Hence, multiple phase crossovers will occur if several vortices are located close to the edge.
One Majorana Coupled to Both Edges {#sec:CouplingBothEdges}
----------------------------------
Another generalization is to study point-couplings to both the lower and the upper arm, in which case the coupling term in the Lagrangian is $ \pazocal{L}_{\mathrm{bulk}-\mathrm{edge}} = 2 i \lambda_1 \xi_1(x=0) \xi_0 + 2 i \lambda_2 \xi_2(x=0)\xi_0$. The corresponding equations of motion are $$\begin{aligned}
2\partial_t \xi_0 &= \lambda_1\left[ \xi_{1R} + \xi_{1L} \right] + \lambda_2\left[ \xi_{2R} + \xi_{2L} \right], \\
v_m \xi_{1R} &= v_m \xi_{1L} + \lambda_1 \xi_0, \\
v_m \xi_{2R} &= v_m \xi_{2L} + \lambda_2 \xi_0.
\label{eq:EquationsOfMotion}\end{aligned}$$ Here, we again use the notation $\xi_{1R} = \xi_1(x=0^+)$, $\xi_{2R} = \xi_2(x=0^+)$, $\xi_{1L} = \xi_1(x=0^-)$, and $\xi_{2L} = \xi_2(x=0^-)$. In frequency space we find a relation between $\xi_1$ and $\xi_2$ across the coupling points, $[\xi_{1}, \xi_{2}]_R^T = U(\omega)[\xi_{1}, \xi_{2}]_L^T$, where $U(\omega)$ is the unitary matrix $$\label{eq:UnitaryMat}
U(\omega) =
{\frac{1}{W(\omega)}}\begin{pmatrix}
-\nu_1 + \nu_2 + i\omega & -2\sqrt{\nu_1\nu_2} \\
-2\sqrt{\nu_1\nu_2} & \nu_1 - \nu_2 + i\omega
\end{pmatrix}.$$ Above, $\nu_i \equiv \lambda_i^2/(2\hbar v_m)$ and $W(\omega) = \nu_1 + \nu_2 + i\omega$. Notice how $U(\omega)$ is off-diagonal in the low-energy limit for the configuration $\nu_1 = \nu_2$. The chiral Majoranas on the perimeter therefore switch place by tunnelling across the vortex Majorana in this case. The phase matrix is given by $$\pazocal{P} =
\begin{pmatrix}
e^{i{\frac{kl_1}{2}}} & 0 \\
0 & e^{i {\frac{k l_2}{2}}}
\end{pmatrix}
U(\omega)
\begin{pmatrix}
e^{i {\frac{k l_1}{2}}+ in \pi} & 0 \\
0 & e^{i{\frac{k l_2}{2}}}
\end{pmatrix},
\label{eq:PhaseMatrixTwoCoupling}$$ which leads to the differential conductance $$\begin{aligned}
\label{eq:DifferentialConductanceTwo}
G_{\mathrm{SC}}(V) &= {\frac{2e^2}{h}} {\frac{1}{(eV)^2 + (\nu_1 + \nu_2)^2}} \Big[ (\nu_1 - \nu_2)^2 \nonumber \\
& \hspace{10pt} + \left[(eV)^2 - (\nu_1 - \nu_2)^2 \right] \sin^2\Big( {\frac{n\pi}{2}} + {\frac{eV \delta L}{2\hbar v_m}} \Big) \nonumber \\
& \hspace{10pt} + eV(\nu_1-\nu_2)\sin\Big( n\pi + {\frac{eV \delta L}{\hbar v_m}} \Big) \\
& \hspace{10pt} + 2\nu_1\nu_2(1+(-1)^n) \Big] \nonumber.\end{aligned}$$ This reduces to Eq. when $\nu_2 = 0$. For $\delta L = 0$ the expression above is identical to Eq. with the replacement $\nu \to \nu_1 + \nu_2$.
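These two limiting cases can be verified numerically. The following minimal sketch (in arbitrary energy units) checks the two-coupling conductance above against the single-coupling result of Eq. for random parameters:

```python
# Numerical consistency checks (a sketch, energies in arbitrary units):
# the two-coupling conductance should reduce to the single-coupling result
# for nu_2 = 0, and for delta L = 0 it should equal the single-coupling
# result with nu -> nu_1 + nu_2.
import numpy as np

def G_single(eV, nu, n, phase):          # units of 2e^2/h; phase = eV*dL/(hbar*v_m)
    return np.sin(n * np.pi / 2 + phase / 2 + np.arctan2(nu, eV))**2

def G_double(eV, nu1, nu2, n, phase):    # units of 2e^2/h
    th = n * np.pi / 2 + phase / 2
    return ((nu1 - nu2)**2
            + (eV**2 - (nu1 - nu2)**2) * np.sin(th)**2
            + eV * (nu1 - nu2) * np.sin(2 * th)
            + 2 * nu1 * nu2 * (1 + (-1)**n)) / (eV**2 + (nu1 + nu2)**2)

rng = np.random.default_rng(1)
for _ in range(4):
    eV, nu1, nu2, phase = rng.uniform(0.1, 5.0, 4)
    n = int(rng.integers(0, 4))
    print(f"nu2=0 :  {G_single(eV, nu1, n, phase):.6f}  vs  "
          f"{G_double(eV, nu1, 0.0, n, phase):.6f}")
    print(f"dL=0  :  {G_single(eV, nu1 + nu2, n, 0.0):.6f}  vs  "
          f"{G_double(eV, nu1, nu2, n, 0.0):.6f}")
```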
One Majorana Coupled to One Edge at Finite Temperature {#sec:CouplingFiniteTemperature}
------------------------------------------------------
We return to the case with a one-point Majorana coupling (Fig. \[fig:VortexEdgeCouplingSchematic\] (a)). The drain current can generally be re-expressed as a residue sum over poles $z_j^{+}$ in the upper complex half-plane. Rewriting Eq. , $$I_D(\nu) = (-1)^n {\frac{e}{4}}i\sinh(\beta eV) \sum_{j} \mathrm{Res
}\lbrace h(z), z_j^{+} \rbrace,
\label{eq:ResidueSum}$$ where $h(z)$ is the function $$h(z) = {\frac{1}{z^2+\nu^2}} {\frac{(z^2-\nu^2)\cos({\frac{\delta L z}{v_m}}) -2z\nu \sin({\frac{\delta L z}{v_m}})}{\cosh[{\frac{\beta}{2}}(z-eV)] \cosh[{\frac{\beta}{2}}(z+eV)]}}.
\label{eq:ResidueSumFunction}$$ The set of simple poles of $h(z)$ in the upper half-plane is $z_j^{+} \in \lbrace i\nu \rbrace \cup \lbrace i\pi(2m-1)/\beta + eV \rbrace_{m=1}^{\infty}$. For $\nu = 0$, the sum is obtainable in closed form and stated in Eq. .
For weak coupling at finite temperature, $\nu \ll k_B T, eV$, we expand $h(z)$ in powers of $\beta \nu$, with $\beta^{-1} = k_B T$. For convenience we define the reduced current as $R \equiv I_D(\nu)/I_{D,0}$. To second order in $\beta \nu$ with $\delta L = 0$ we find $$\begin{aligned}
R &= 1 - {\frac{\nu \pi}{eV}} \tanh\left( \beta eV/2 \right) \label{eq:ReducedCurrentSecondOrder} \\
& + {\frac{ \beta \nu^2 }{\pi eV}} \Im\Big\lbrace \psi^{(1)}\Big({\frac{1}{2}} - i {\frac{\beta \nu}{2\pi}} \Big) \Big\rbrace + \pazocal{O}\left( \beta \nu \right)^3. \nonumber\end{aligned}$$
Above, $\psi^{(1)}(z) = {\frac{{\mathrm{d}}^2}{{\mathrm{d}}z^2}} \log \Gamma(z)$ is the trigamma function. In the infinite coupling limit the sign of the drain current is flipped, $R \to -1$, which can be seen from Eq. . The reduced current decreases monotonically with coupling strength (a feature of setting $\delta L = 0$) until the vortex Majorana is fully absorbed by the edge.
![The reduced drain current $R$ as function of $\beta \nu$ for symmetric arms $\delta L = 0$. The dotted lines represent the weak coupling result from Eq. and the full lines are numerical results for $\beta eV$ being $5$ (purple), $10$ (orange), and $25$ (green).[]{data-label="fig:RedCurrent"}](FigB3){width="0.70\linewidth"}
At $T = 0$ the current is found to be (with $\hbar = 1$ here): $$\begin{aligned}
(-1)^n{\frac{2\pi}{e}} I_D &= {\frac{v_m}{\delta L}} \sin\Big( {\frac{\delta L eV}{v_m}}\Big) \nonumber \\
&\hspace{-20pt} + 2\nu \Big[ \cosh\Big( {\frac{\delta L eV}{v_m}} \Big) - \sinh\Big( {\frac{\delta L eV}{v_m}} \Big) \Big] \\
&\hspace{-50pt} \times \Big[ \Im \Big\lbrace \mathrm{Ci}\Big( {\frac{\delta L}{v_m}}(eV + i \nu) \Big) \Big\rbrace + \Im \Big\lbrace \mathrm{Ci}\Big( -i \nu {\frac{\delta L}{v_m}}\Big) \Big\rbrace \nonumber \\
&\hspace{-20pt} - \Re \Big\lbrace \mathrm{Si}\Big( {\frac{\delta L}{v_m}}(eV + i \nu) \Big) \Big\rbrace \Big] \nonumber.
\label{eq:CurrentatzeroTemp}\end{aligned}$$ Here, $\mathrm{Ci}$ and $\mathrm{Si}$ are trigonometric integral functions. For symmetric arms the above result simplifies to give the reduced current $R = 1+ 2{\frac{\nu}{eV}}[\arctan{({\frac{\nu}{eV}})} -{\frac{\pi}{2}}]$. From this, the aforementioned result $\lim_{\nu \to \infty} R = -1$ follows trivially.
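A short sketch of the symmetric-arm reduced current confirms the limiting behaviour $R \to -1$ for $\nu \gg eV$:

```python
# Sketch of the zero-temperature reduced current for symmetric arms,
# R = 1 + 2(nu/eV)[arctan(nu/eV) - pi/2], which tends to -1 for nu >> eV.
import numpy as np

def R(x):                       # x = nu/eV
    return 1.0 + 2.0 * x * (np.arctan(x) - np.pi / 2)

for x in [0.0, 0.1, 1.0, 10.0, 1000.0]:
    print(f"nu/eV = {x:7.1f}:  R = {R(x):+.4f}")
```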
Majorana Fermions in a TI/SC Hybrid Structure {#sec:MajoranaBoundStates}
=============================================
Superconductivity induced on the surface of a strong TI supports chiral Majorana fermions on domain walls and Majorana bound states localized in vortices. The Fu and Kane Hamiltonian of a TI/SC hybrid structure with a single Dirac-like dispersion is given by [@FuKane_proximity; @FuKane_interferometer] $$\pazocal{H} = (v_F \boldsymbol{\sigma} \cdot \boldsymbol{p}-\mu) \tau_z + M({\boldsymbol{r}}) \sigma_z + \Delta({\boldsymbol{r}}) \tau_{+} + \Delta^{\ast}({\boldsymbol{r}}) \tau_{-},
\label{eq:Model_Hamiltonian}$$ where $\tau_j$ and $\sigma_j$ are Pauli matrices acting in particle-hole and spin space, respectively. Moreover, $\tau_{\pm} = (\tau_{x} \pm i \tau_y)/2$ and $M$ is the Zeeman energy associated with a magnetic field applied in the $z$ direction. The standard procedure is to introduce $u_{\sigma}(\boldsymbol{r})$ and $v_{\sigma}(\boldsymbol{r})$ that define the quasiparticles, conveniently arranged in $\boldsymbol{\Psi} = [u_{\uparrow}, u_{\downarrow}, v_{\uparrow}, -v_{\downarrow}]^T$ satisfying the BdG equations $\pazocal{H} \boldsymbol{\Psi} = E \boldsymbol{\Psi}$.
Corrections to the $S$ Matrix {#sec:SmatrixCorrections}
-----------------------------
One may solve Eq. on a magnetic domain wall in the absence of a superconductor. There are then *two* chiral solutions: one in the particle sector, $\ket{\tau_z = +1}$, and one in the hole sector, $\ket{\tau_z = -1}$, both localized at the interface.[@FuKane_interferometer] On a magnet-superconductor interface there exists a single chiral Majorana solution with linear dispersion. By the definition of the $S$ matrix, $$\begin{pmatrix}
\xi_1 \\ \xi_2
\end{pmatrix} = \begin{pmatrix}
S_{ee} & S_{eh} \\
S_{he} & S_{hh}
\end{pmatrix} \begin{pmatrix}
\psi_e \\ \psi_h
\end{pmatrix},
\label{eq:SmatrixDef}$$ one can estimate the scattering elements at the trijunction as overlaps between the chiral states, $S_{ee} = \braket{\tau_z = +1}{\xi_1}$, $S_{eh} = \braket{\tau_z = -1}{\xi_1}$, etc. Here, $\xi_1$ ($\xi_2$) is the chiral Majorana fermion on the upper (lower) interferometer arm. The Majorana wavefunction decays exponentially with the length scale $v_F/\sqrt{M^2-\mu^2}$ ($v_F/\Delta_0$) in the magnetic (superconducting) region (inset of Fig. \[fig:WavefuncsAndRate\] (a)). Interestingly, if the magnetizations on the two sides of the Dirac channel are of equal magnitude (with opposite polarization), compactly expressed in terms of the symmetry $\pazocal{H}(-y) = \pazocal{M}^{-1}\pazocal{H}(y) \pazocal{M}$ where $\pazocal{M} = i\sigma_y$,[@FuKane_interferometer] the zero energy result in Eq. is obtained at all energies. Experimentally, this is likely a weak assumption, and we can therefore safely apply $S(E=0)$ throughout. If the two magnetic fields in the Dirac region are of different strengths there will be corrections to $S$ that are small when $E \ll {\frac{v_m}{v_F}}\min{\lbrace M_0, \Delta_0 \rbrace}$. One can heuristically add a correction term to $S(E)$ proportional to $E/\Lambda$ (where $\Lambda \sim {\frac{v_m}{v_F}}\min{\lbrace M_0, \Delta_0 \rbrace}$ is some phenomenological scale) and impose unitarity to order $\pazocal{O}(E^2)$ and particle-hole symmetry. This leads to order $\pazocal{O}(E^2)$ corrections to the drain current.
Majorana Bound States and Energy Splitting {#sec:VortexBoundStates}
------------------------------------------
If a vortex is present in the system described by Eq. , $\Delta({\boldsymbol{r}}) = \Delta(r) e^{i\ell \theta}$, a single Majorana bound state will be localized in the vortex core when the vorticity $\ell$ is odd.[@TunnelingMajoranamodes; @MajoranaSupercondIslands] Following the procedure in Ref. , the zero energy BdG equations for the model in Eq. with a central vortex of vorticity $\ell = 1$ are expressed in terms of two real and coupled radial equations, $$\begin{pmatrix}
- \mu & \eta \Delta(r) + v_F\left( \partial_r + {\frac{1}{r}} \right) \\
-\eta \Delta(r) - v_F\partial_r & - \mu
\end{pmatrix}
\begin{pmatrix}
u_{\uparrow}(r) \\
u_{\downarrow}(r)
\end{pmatrix} = 0.
\label{eq:BdGZeroRadial}$$ Here, $v_{\sigma}(r) = \eta u_{\sigma}(r)$ with $\eta = \pm 1$ dictating two possible solution channels. Modelling the vortex by a hard step of size $R_1 \approx \xi$, $\Delta(r) = \Delta_0 \Theta(r - R_1)$, the resulting zero mode is expressed in terms of Bessel functions, $$\begin{aligned}
\begin{pmatrix}
u_{\uparrow}(r),
u_{\downarrow}(r)
\end{pmatrix}^T &= \\
&\hspace{-60pt} \pazocal{N} \begin{pmatrix}
J_0({\frac{\mu r}{v_F}}) \\ J_1({\frac{\mu r}{v_F}})
\end{pmatrix} \left[ \Theta(R_1-r) + e^{-{\frac{\Delta_0}{v_F}}(r-R_1)} \Theta(r-R_1) \right] \nonumber.
\label{eq:SolutionVortexCore}\end{aligned}$$ Above, $\pazocal{N}$ is given by normalization of the two-component spinor. Two vortex bound Majoranas separated by a distance $R$ will generally be split away from zero energy by an amount exponentially small in $R/\xi$ when $R \gg \xi$. If $\mu$ is tuned close to the Dirac point (typically achieved by bulk doping of screened charges[@BandFluctuationsPuddlesNat]), this splitting has previously been estimated to asymptotically approach $\varepsilon_+ \sim - \mu (R/\xi)^{3/2} \exp(-R/\xi)$ up to some prefactor of order one.[@TunnelingMajoranamodes]
Similarly, a superconducting disk of radius $R$ (with a central vortex) deposited on a TI supports a Majorana edge state located exponentially close to the edge. For weak magnetic fields the splitting between the central bound state and the edge state has been estimated to decay like [@MajoranaSupercondIslands] $$\varepsilon_{+} \sim - \mu l_B e^{-{\frac{R}{\xi}} } / \xi,
\label{eq:EnergySplit}$$ with $l_B$ the magnetic length.
Toy Model Calculation: Energy Splitting in a Spinless $p$-wave Superconductor {#sec:Energysplittingpplusip}
-----------------------------------------------------------------------------
For completeness, and serving as an illuminating example not found elsewhere, we explain how the energy splitting between edge states in a Corbino geometry can be calculated in the spinless $p+ip$ superconductor. See e.g. Ref. for general aspects of this system and Ref. for a presentation of the similar intervortex splitting. Finally, the problem of the Majorana energy hybridization in superconductor-semiconductor hybrid structures is addressed in Ref. .
The spinless $p$-wave superconductor realises a topological phase for $\mu > 0$ and the trivial phase for $\mu < 0$. The model has the corresponding zero energy BdG equations[@AnyonQuantumComputation] $$\begin{pmatrix} -{\frac{1}{2m}}\nabla^2 - \mu & {\frac{1}{2 p_F}}\{\Delta({\boldsymbol{r}}),\partial_{z^{\ast}} \} \\ -{\frac{1}{2 p_F}}\{\Delta^{\ast}({\boldsymbol{r}}),\partial_{z} \} & {\frac{1}{2m}}\nabla^2 + \mu \end{pmatrix}\begin{pmatrix} u({\boldsymbol{r}}) \\ v({\boldsymbol{r}}) \end{pmatrix} = 0,
\label{eq:BdGSpinlessp+ip}$$ with the operator $\partial_{z^{\ast}} = e^{i\theta}(\partial_r + {\frac{i}{r}}\partial_{\theta})$. We let a radial annulus geometry between $R_1$ and $R_2$ be superconducting with a vortex located in the center hole, $\Delta({\boldsymbol{r}}) = \Delta_0 e^{i\theta} \Theta(r-R_1) \Theta(R_2-r)$, with a chemical potential satisfying $2m v_F^2 \mu > \Delta_0^2$ (causing the wavefunctions to oscillate) and $\mu < 0$ outside the disk. When $R \equiv R_2 - R_1 \gg \xi$, we treat the two single-edged systems separately and construct the ground state candidates ${\boldsymbol{\phi}}_{\pm} = {\frac{1}{\sqrt{2}}}({\boldsymbol{\psi}}_1\pm i{\boldsymbol{\psi}}_2)$, where ${\boldsymbol{\psi}}_1$ (${\boldsymbol{\psi}}_2$) is the solution localized on the inner (outer) edge.[@PRLMajoranaSplitting] The two solutions are exponentially damped away from their respective edges and they oscillate with wave number $k = \sqrt{2m\mu - (\Delta_0/v_F)^2}$. Moreover, if both $R_1$, $R_2 \gg 1/k$, the Bessel functions that appear as radial solutions can be expanded asymptotically. The splitting integrals can then be evaluated, sending $\mu \to -\infty$ outside the annulus at the end, and we obtain $$\varepsilon_{\pm} = \expval{\pazocal{H}}{{\boldsymbol{\phi}}_{\pm}} \approx \mp {\frac{4\Delta_0 \mu}{v_F k}} \sin \left( k R \right) e^{- R/\xi }.
\label{eq:Corbino_E+}$$ The splitting is zero for particular values of the domain separation. Whenever $R = \pi n /k$ for $n \in \mathbb{N}$, there are two degenerate ground states. The zero-mode condition is reminiscent of an interference phenomenon: the oscillating edge modes overlap destructively for these values of the separation. The expression in Eq. agrees with numerical diagonalization already when $R$ is a small multiple of $\xi$. The Corbino geometry has been studied for small disk size, in which case the same zero energy criterion is found in the limit $2m v_F^2 \mu \gg \Delta_0^2$ by imposing Dirichlet boundary conditions on the wavefunctions directly.[@Majoranamodes_disk]
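The zeros at $R = n\pi/k$ can be seen directly by evaluating the splitting formula; the following minimal sketch does so in dimensionless units ($\hbar = m = v_F = 1$; the parameter values are assumptions chosen only for illustration):

```python
# Sketch of the Corbino splitting in dimensionless units (hbar = m = v_F = 1);
# the parameter values are arbitrary assumptions, chosen to exhibit the zeros
# of the splitting at R = n*pi/k.
import numpy as np

Delta0, mu = 0.5, 2.0                # must satisfy 2*mu > Delta0^2
k = np.sqrt(2.0 * mu - Delta0**2)    # oscillation wave number of the edge modes
xi = 1.0 / Delta0                    # coherence length in these units

def eps_plus(R):
    return -(4.0 * Delta0 * mu / k) * np.sin(k * R) * np.exp(-R / xi)

for n in range(3, 6):
    R0 = n * np.pi / k
    print(f"R = {n}*pi/k = {R0:6.3f}:  eps_+ = {eps_plus(R0):+.2e}")
print(f"generic R = 7.000:       eps_+ = {eps_plus(7.0):+.2e}")
```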
Surface-Bulk Scattering with Screened Disorder {#sec:SBScatteringAppendix}
==============================================
In Ref. Fermi’s Golden rule is used to find the scattering rate from the surface ($S$) to the bulk ($B$) of a TI in the presence of static and dilute point impurities. Here, we take Ref. as a starting point (the reader is referred to this reference for further details) and apply the formalism to the case of long range scatterers. As described in the main text we study scattering from surface initial wave vector ${\boldsymbol{k}} = ({\boldsymbol{k}}_{\parallel}, 0)$ (in practice we let ${\boldsymbol{k}}_{\parallel} = (k_x, 0)$) to the final bulk wave vector ${\boldsymbol{k}}' = ({\boldsymbol{k}}_{\parallel}', k_z')$. The energy of the incoming surface state determines the Fermi energy, $\xi_{S,{\boldsymbol{k}}_{\parallel} } = v_F k_{\parallel} \equiv \epsilon_F$. Assuming low energy elastic scattering, and making use of the continuum limit $\sum_{{\boldsymbol{k}}'} \to {\frac{V}{(2\pi)^3}} \int {\mathrm{d}}^3{\boldsymbol{k}}'$, we find the scattering rate
$$\begin{aligned}
\Gamma^{\mathrm{imp}}_{\mathrm{SB}}(\epsilon_F) &= 2\pi \sum_{{\boldsymbol{k}}', n'} \lvert g^{\mathrm{imp}}_{{\boldsymbol{k}}-{\boldsymbol{k}}'} \rvert^2 \big( \lvert F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}',1n' } \rvert^2 + \lvert F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}', 2n' } \rvert^2 \big) \delta( \xi_{B,n', k_{\parallel}' } - \epsilon_F ) \label{eq:ScatteringRateFull} \\
&\approx \frac{ V }{(2\pi)^2} \sum_{\substack{n' : \\ \min \lbrace \xi_{B,n', k_{\parallel}' } \rbrace < \epsilon_F } } {\frac{ k_{\parallel}' }{\lvert \nabla \xi_{B,n', k_{\parallel}' } \rvert}} \int_0^{\infty} {\mathrm{d}}k_z' \int_0^{2\pi} {\mathrm{d}}\varphi \hspace{1mm} \lvert g^{\mathrm{imp}}_{ k_{\parallel} - k_{\parallel}', k_z', \varphi} \rvert^2 \big( \lvert F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}',1n' } \rvert^2 + \lvert F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}', 2n' } \rvert^2 \big). \nonumber\end{aligned}$$
In going from the first to the second line we integrated over $k_{\parallel}'$, which in the last line is then defined implicitly by $\xi_{B,n', k_{\parallel}' } = \epsilon_F$. Above, $\varphi$ is the polar angle between the incoming and the outgoing momentum projected onto the $k_x k_y$-plane, i.e. $k_{x}' = k_{\parallel}' \cos{\varphi}$ and $k_{y}' = k_{\parallel}' \sin{\varphi}$. The $F$’s are defined as convoluted overlaps between the surface and the bulk states, $$\begin{aligned}
F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}',1n' } &= \big<{\boldsymbol{\Psi}}_{S, {\boldsymbol{k}}_{ \parallel}} \big| e^{ik_z' z} \big| {\boldsymbol{\Psi}}_{B, {\boldsymbol{k}}', 1n'} \big>, \label{eq:Overlap1} \\
F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}',2n' } &= \big<{\boldsymbol{\Psi}}_{S, {\boldsymbol{k}}_{ \parallel}} \big| e^{ik_z' z} \big| {\boldsymbol{\Psi}}_{B, {\boldsymbol{k}}', 2n'} \big>, \label{eq:Overlap2}\end{aligned}$$ where two bulk bands $1n'$ and $2n'$ are degenerate. The four-component wavefunctions in these expression are found by solving the BdG equations for the 3D TI exactly at $k_{\parallel} = 0$ and then applying perturbation theory to leading order in the wave vector (see Appendix C in Ref. where the full expressions are listed in Eq. (C1)–(C11)). This procedure also yields the dispersion relations to leading order in the parallel wave vector. In Fig. \[fig:WavefuncsAndRate\] (a) and (b) the surface and some of the bulk states are visualised in a thin film with model parameters as for Bi$_2$Se$_3$,[@HamiltonianSurface10] except for the bulk gap which is set to $\Delta_{\mathrm{TI}} = 0.1$ eV as motivated in the main text.
The coupling $g^{\mathrm{imp}}_{{\boldsymbol{k}}-{\boldsymbol{k}}'}$ is here an ensemble averaged Coulomb potential due to screened charge impurities. Assuming randomly distributed impurities with zero mean, the coupling should be proportional to $\lvert g^{\mathrm{imp}}_{{\boldsymbol{k}}-{\boldsymbol{k}}'} \rvert^2 \propto n_{\mathrm{3D}} ( Ze^2/(\epsilon_0 \epsilon_r) )^2 \left( \lvert {\boldsymbol{k}}-{\boldsymbol{k}}' \rvert^2 + k_{TF}^2 \right)^{-2}$,[@ElectronsAndPhonons] where $\epsilon_r$ is the relative permittivity, $k_{TF}$ is the Thomas-Fermi wave vector, and $n_{\mathrm{3D}}$ is the average dopant concentration. In cylindrical coordinates the coupling is expressed as $$\begin{aligned}
\label{eq:ScatteringPotentialScreened}
\lvert g^{\mathrm{imp}}_{k_{\parallel} - k_{\parallel}', k_z',\varphi} \rvert^2 &= {\frac{n_{\mathrm{3D}}}{V}} \left( {\frac{Ze^2}{\epsilon_0 \epsilon_r k_{\parallel}^2}} \right)^2 \\
& \hspace{-50pt} \times \left[ \left(1- k_{\parallel}' / k_{\parallel} \right)^2 + \left( k_z' / k_{\parallel} \right)^2 + r_s^2 + 4 {\frac{k_{\parallel}'}{k_{\parallel}}} \sin^2{{\frac{\varphi}{2}}} \right]^{-2}, \nonumber\end{aligned}$$ where $r_s = k_{TF}/k_{\parallel}$ was introduced. In the main text we assume that the screened charges have $Z = 1$. With this coupling established we define the differential cross section for the screened Coulomb scattering as $$\begin{aligned}
{\frac{{\mathrm{d}}\sigma_{\mathrm{long}}^{(n')}(\epsilon_F) }{{\mathrm{d}}k_z'}} &\equiv \label{eq:DifferentialCrossectionLong} \\
& \hspace{-40pt} \int_0^{2\pi} {\mathrm{d}}\varphi \hspace{1mm} {\frac{ \lvert F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}',1n' } \rvert^2 + \lvert F_{S,{\boldsymbol{k}}_{\parallel}; B, {\boldsymbol{k}}', 2n' } \rvert^2 }{\left[ \left(1- {\frac{k_{\parallel}'}{ k_{\parallel} }} \right)^2 + \left( {\frac{k_z'}{ k_{\parallel} }} \right)^2 + r_s^2 + 4 {\frac{k_{\parallel}'}{k_{\parallel}}} \sin^2{{\frac{\varphi}{2}}} \right]^{2}}} \nonumber.\end{aligned}$$ This function is shown in Fig. \[fig:WavefuncsAndRate\] (c) for $r_s = 0.035$, which corresponds to $\epsilon_r = 100$.[@SurfImpScat10] Note that in the case of point scatterers, the differential cross section is obtained simply by replacing the denominator in Eq. by $1$.
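As a simple illustration of the forward-scattering enhancement contained in this angular integral, the sketch below evaluates it with the bulk-surface overlaps $\lvert F \rvert^2$ set to one (an assumption made purely for illustration; the full calculation uses the wavefunctions of Ref. ):

```python
# Illustration only: the angular integral of the screened-Coulomb kernel with
# the bulk-surface overlaps |F|^2 set to one (an assumption), showing the
# forward-scattering enhancement for small r_s.
import numpy as np

def angular_factor(kpar_ratio, kz_ratio, r_s, n_phi=4000):
    """Integrate the screened-Coulomb denominator^(-2) over the polar angle."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    denom = ((1.0 - kpar_ratio)**2 + kz_ratio**2 + r_s**2
             + 4.0 * kpar_ratio * np.sin(phi / 2.0)**2)
    return np.sum(denom**-2) * (2.0 * np.pi / n_phi)

for r_s in [0.035, 0.35, 3.5]:
    val = angular_factor(kpar_ratio=0.9, kz_ratio=0.1, r_s=r_s)
    print(f"r_s = {r_s:5.3f}:  angular integral = {val:10.3f}")
```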
Using Eq. and in Eq. assembles the expression for the scattering rate in the main text, Eq. . We note that the rates shown in Fig. \[fig:WavefuncsAndRate\] (d) are in good agreement with Ref. .
Effect of Spatially Confined Transverse Wavefunction {#sec:ConfinedWF}
----------------------------------------------------
When applying the formula of the electronic scattering rate in Eq. to the case of the surface Majorana we neglect the transverse profile of the surface wavefunction (inset of Fig. \[fig:WavefuncsAndRate\] (a)). Here, we argue why this is a valid approximation in our parameter regime.
Assume for simplicity that the two surface gaps are of similar magnitude $\Delta_0 \approx M_0 \gg \mu$, so that the transverse wavefunction is confined over the scale $\sigma_y \sim \xi = v_F/\Delta_0$. By the uncertainty principle, this confinement leads to an uncertainty in $k_y'$ of $\sigma_{k_y'} \sim \xi^{-1}$. To simulate this uncertainty we draw random additions to $k_y' = k_{\parallel}' \sin \varphi$ from a normal distribution with the standard deviation above and zero mean for each scattering direction $\varphi$. The resulting scattering rates display an average absolute deviation $\langle \lvert \varepsilon \rvert \rangle$ (averaged over the Fermi energy range in Fig. \[fig:WavefuncsAndRate\] (d)) from the sharp $k_y'$ curve of roughly $\langle \lvert \varepsilon \rvert \rangle \approx (\sigma_{k_y'}/\mathrm{max}\lbrace k_y' \rbrace )^2$. Since $ \mathrm{max}\lbrace k_y' \rbrace \gg \xi^{-1}$ in the energy range and with the proximity gaps we consider, the effect of uncertainty in $k_y'$, and hence confinement in the $y$-direction, can be ignored.
Scattering with Acoustic Phonons {#sec:AcousticPhonons}
================================
Consider the toy model coupling electrons to (acoustic) phonons through a deformation potential, $$H_{\mathrm{ep}} = \sum_{{\boldsymbol{q}}_1,{\boldsymbol{q}}_2,s, s'} M_{{\boldsymbol{q}}_1,{\boldsymbol{q}}_2}^{s,s'} c_{{\boldsymbol{q}}_1+{\boldsymbol{q}}_2,s}^{\dagger} c_{{\boldsymbol{q}}_1,s'}\left( a_{{\boldsymbol{q}}_2} + a_{-{\boldsymbol{q}}_2}^{\dagger} \right).
\label{eq:ElectronPhononCoupling}$$ For small momenta, the electron-phonon coupling goes as $\lvert M(q) \rvert \sim q$ for surface phonons.[@ElectronPhononScatteringTI; @PhononScatteringvonOppen] Acoustic phonons have linear dispersion, $\omega(q) = c_R \lvert {\boldsymbol{q}} \rvert$, and are expected to dominate the coupling Hamiltonian at low temperatures.[@ElectronPhononScatteringTI] The scattering rate follows once again from Fermi’s Golden rule, $\Gamma_{i\to f}^{\mathrm{ph}} = 2\pi \nu_f(E) \lvert \mel{f}{H_{\mathrm{ep}}}{i} \rvert^2$, where $\nu_f(E)$ is the final density of states. Recall also that states with a linear dispersion in two dimensions have $\nu(E) \propto E$. In a superconducting system, there will be a non-zero amplitude for the creation of a phonon with the cost of annihilating two quasiparticles. For illustrative purposes, we consider quasiparticle excitations of the ($s$-wave) Bardeen-Cooper-Schrieffer (BCS) ground state $\ket{\Omega}$, $$\begin{aligned}
\ket{i;\sigma_1, \sigma_2} &= \gamma_{{\boldsymbol{k}}_1,\sigma_1}^{\dagger} \gamma_{{\boldsymbol{k}}_2,\sigma_2}^{\dagger} \ket{\Omega}, \label{eq:InitialState} \\
\ket{f} &= a_{{\boldsymbol{q}}}^{\dagger} \ket{\Omega}, \label{eq:FinalState} \\
\ket{\Omega} &= \prod_{{\boldsymbol{k}}} (u_{{\boldsymbol{k}}} + v_{{\boldsymbol{k}}}c_{{\boldsymbol{k}},\uparrow}^{\dagger}c_{-{\boldsymbol{k}},\downarrow}^{\dagger})\ket{0}.
\label{eq:BCSGroundstate}\end{aligned}$$ Above, the quasiparticle creation operators are $\gamma_{{\boldsymbol{k}},+}^{\dagger} = u_{{\boldsymbol{k}}}^{\ast} c_{{\boldsymbol{k}},\uparrow}^{\dagger} - v_{{\boldsymbol{k}}}^{\ast} c_{-{\boldsymbol{k}},\downarrow}$ and $\gamma_{{\boldsymbol{k}},-}^{\dagger} = u_{{\boldsymbol{k}}}^{\ast} c_{-{\boldsymbol{k}},\downarrow}^{\dagger} + v_{{\boldsymbol{k}}}^{\ast} c_{{\boldsymbol{k}},\uparrow}$. The scattering element is found to be
$$\begin{aligned}
\mel{f}{H_{\mathrm{ep}}}{i;\sigma_1, \sigma_2} &= \Big( M_{\sigma_1 {\boldsymbol{k}}_1, -\sigma_1{\boldsymbol{k}}_1 - {\boldsymbol{k}}_2}^{\uparrow, s(\sigma_1)} \delta_{{\boldsymbol{q}},\sigma_1 {\boldsymbol{k}}_1 + {\boldsymbol{k}}_2} - M_{\sigma_1 {\boldsymbol{k}}_1,-\sigma_1{\boldsymbol{k}}_1 + {\boldsymbol{k}}_2}^{\downarrow, s(\sigma_1)} \delta_{{\boldsymbol{q}},\sigma_1 {\boldsymbol{k}}_1 - {\boldsymbol{k}}_2} \Big) v_{-{\boldsymbol{k}}_2}^{\ast} u_{{\boldsymbol{k}}_1}^{\ast} \\
&\hspace{10pt} - \Big( M_{\sigma_2 {\boldsymbol{k}}_2, -\sigma_2{\boldsymbol{k}}_2 - {\boldsymbol{k}}_1}^{\uparrow, s(\sigma_2)} \delta_{{\boldsymbol{q}},\sigma_2 {\boldsymbol{k}}_2 + {\boldsymbol{k}}_1} - M_{\sigma_2 {\boldsymbol{k}}_2,-\sigma_2{\boldsymbol{k}}_2 + {\boldsymbol{k}}_1}^{\downarrow, s(\sigma_2)} \delta_{{\boldsymbol{q}},\sigma_2 {\boldsymbol{k}}_2 - {\boldsymbol{k}}_1} \Big) v_{-{\boldsymbol{k}}_1}^{\ast} u_{{\boldsymbol{k}}_2}^{\ast}.
\end{aligned}
\label{eq:Scattering_Majorana_withSpin}$$
Above, we introduced the symbol $s(+) = \hspace{1mm} \uparrow$ and $s(-) = \hspace{1mm} \downarrow$. The scattering amplitude depends only on the momentum transfer for small energies, in which case the coupling above vanishes identically. This is presumably because the average charge of the quasiparticles becomes zero in the low energy limit.
A careful analysis for the electron decay rate alone yields a $\Gamma^{\mathrm{ph}} \sim T^3$ law well below the Bloch-Grüneisen temperature at the Fermi surface.[@ElectronPhononScatteringTI] Away from the Fermi surface, by biasing the electrons at a small voltage $V$, the decay rate is finite at $T= 0$ and goes as $\Gamma^{\mathrm{ph}} \sim (eV)^3$. Inserting the exact prefactor (for Bi$_2$Te$_3$), by following the steps in Ref. , we find a decay rate in the peV range when $V \simeq 1$ $\mu$V. This means that the electron lifetime due to acoustic phonon scattering $\tau_{\mathrm{ph}} = 1/\Gamma^{\mathrm{ph}}$ is in the ms range, making this effect negligible at $T = 0$ at low voltage.
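As a quick numerical check of these orders of magnitude (a sketch with round, illustrative inputs only), a decay rate in the peV range indeed corresponds to a lifetime in the ms range via $\tau_{\mathrm{ph}} = \hbar/\Gamma^{\mathrm{ph}}$:

```python
hbar_eV_s = 6.582e-16   # reduced Planck constant in eV s
Gamma_eV = 1.0e-12      # decay rate of order 1 peV at V ~ 1 microvolt (illustrative)
tau_s = hbar_eV_s / Gamma_eV
print(f"tau_ph ~ {tau_s:.1e} s")   # ~7e-4 s, i.e. in the ms range
```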
[^1]: This can be proven formally; the proof is a straightforward generalization of Eq. and when identifying the structure of the sub blocks in $\pazocal{T}$. Blocks characterizing transport between different leads always give a vanishing contribution to the average current, related to the observation right below Eq. . If two leads are not causally connected by the chiral arm, the corresponding block in the $\pazocal{T}$ matrix is zero. Finally, no matter which arms the $N$ leads are coupled to, the drain current is reduced by the factor $\prod_{j=1}^N r_j$, where $r_j$ is the local reflection amplitude of lead $j$.
[^2]: The prefactor of $\Gamma^{\mathrm{imp}}_{\mathrm{SB}}$ in Eq. has a $k_{\parallel}^{-4}$ dependency. There is, however, also a less significant dependency on $k_{\parallel}$ in ${\mathrm{d}}\sigma_{\mathrm{long}}^{(n')}(\epsilon_F) / {\mathrm{d}}k_z' $, which can be inferred from Fig. \[fig:WavefuncsAndRate\] (b) (recall that $\epsilon_F \propto k_{\parallel}$) to be very close to linear.
---
abstract: 'The observed strong phase difference of 30$^{\text{o}}$ between $I=\frac{3}{2}$ and $I=\frac{1}{2}$ final states for the decay $B\rightarrow D\pi $ is analyzed in terms of rescattering like $D^{\ast }\pi \rightarrow D\pi ,$ etc. It is concluded that for the decay $B^{o}\rightarrow D^{+}\pi ^{-}$ the strong phase is only about 10$^{\text{o}}$. Implications for the determination of $\sin \left( 2\beta +\gamma \right) $ are discussed.'
author:
- |
[Lincoln Wolfenstein]{}\
[Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213]{}
title: '[Strong Phases in the Decays ]{}$B$[ to ]{}$D\pi $'
---
The weak decay amplitude to a specific final state can be written as $Ae^{i\delta }$ where $A$ is the decay amplitude, in general complex, and $\delta $ is the “strong phase”. For a final eigenstate $\delta $ is simply the elastic scattering phase in accordance with the Watson theorem. For the case of $B$ decays the final scattering is primarily inelastic; $\delta $ arises from the absorptive part of the decay amplitude corresponding to a weak decay to intermediate states followed by a strong scattering to the final state. Many papers have discussed the expected size of $\delta $ [@Beneke].
Recently data has suggested a significant non-zero phase for the decays $\overline{B}$ to $D\pi $ [@Ahmed]. These decays can be analyzed in terms of two amplitudes $A_{3}\ e^{i\delta _{3}}$ and $A_{1}\
e^{i\delta _{1}}$ corresponding to the final isospin states $\dfrac{3}{2}$ and $\dfrac{1}{2}.$ The amplitudes for the three decays of interest are the following:
$$\begin{aligned}
A_{o-} &=&A\left( D^{\text{o}}\ \pi ^{-}\right) =A_{3}e^{i\delta _{3}}
\TCItag*{(1a)} \\
A_{+-} &=&A\left( D^{+}\ \pi ^{-}\right) =\frac{1}{3}A_{3}\ e^{i\delta _{3}}+\frac{2}{3}A_{1}\ e^{i\delta _{1}} \TCItag*{(1b)} \\
A_{oo} &=&A\left( D^{\text{o}}\pi ^{\text{o}}\right) =\frac{\sqrt{2}}{3}\left( A_{3}e^{i\delta _{3}}-A_{1}\ e^{i\delta _{1}}\right) \TCItag*{(1c)}\end{aligned}$$
The experimental result gives the ratio of decay probabilities
$$D^{\text{o}}\pi ^{-}\colon D^{+}\pi ^{-}\colon D^{\text{o}}\pi ^{\text{o}}=46\colon 27\colon 2.9$$
From this one deduces
$$\frac{A_{1}}{A_{3}}=0.69,\quad \cos \left( \delta _{3}-\delta _{1}\right)
=0.86 \tag*{(2)}$$
Thus we find approximately a 30 degree phase difference, significantly different from zero. We discuss here the implications of such a phase difference; there remain, of course, sizable errors on this value.
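For completeness, Eq. (2) follows directly from Eqs. (1a)-(1c) and the measured ratios, neglecting phase-space and lifetime differences between the channels; the short numerical sketch below reproduces the quoted values and is illustrative only.

```python
import numpy as np

# Measured ratios of decay probabilities relative to D0 pi-
R_pm = 27.0 / 46.0     # |A_{+-}|^2 / |A_{0-}|^2
R_00 = 2.9 / 46.0      # |A_{00}|^2 / |A_{0-}|^2

# From Eqs. (1a)-(1c), with r = A1/A3 and x = cos(delta_3 - delta_1):
#   9 R_pm = 1 + 4 r^2 + 4 r x,   (9/2) R_00 = 1 + r^2 - 2 r x.
r = np.sqrt((9.0 * R_pm + 9.0 * R_00 - 3.0) / 6.0)
x = (1.0 + r**2 - 4.5 * R_00) / (2.0 * r)

print(f"A1/A3 = {r:.2f}")                                        # ~0.69
print(f"cos(delta3 - delta1) = {x:.2f}")                         # ~0.86
print(f"delta3 - delta1 = {np.degrees(np.arccos(x)):.0f} deg")   # ~30 deg
```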
We now make the assumption that the major rescattering comes from states of the form $D_{i}^{+}\pi _{j}^{-}$ such as $D^{\ast +}\pi ^{-},\ D^{+}\rho
^{-},\ $etc.[@Chuva] [@Falk] Such states are expected in factorization and about 10% of the $b$ to $u\bar{c}\ d$ transitions have been identified to be of this type. It is much less likely that complicated many-particle states should rescatter to $D\pi .$
We consider first the simple factorization (large $N_{c}$) limit:
$$A_{+-}=A_{o-}\qquad A_{oo}=0$$
Here the $\pi ^{-}$ is assumed to come directly from the $\bar{u}\ d$ current, the amplitudes are real and there is no $D^{\text{o}}\pi ^{\text{o}} $ decay. This corresponds to $A_{3}=A_{1}=A.$ We now add rescattering from states of the form $D_{i}^{+}\pi _{j}^{-}.$ We label these amplitudes $X_{3}\ $and $X_{1}$ and again $X_{3}=X_{1}.$ The most obvious rescattering occurs via the exchange of an isospin 1 particle, either $\pi $ or $\rho .$ As a result the rescattering amplitude is proportional to $\tau _{i}\cdot
T_{j}$ with the values $\left( \frac{1}{2},\ -1\right) $ for $I=\left( \frac{3}{2},\frac{1}{2}\right) .$ The resultant imaginary amplitudes
$$\frac{\func{Im}\ A_{3}}{\func{Im}\ A_{1}}=\frac{X_{3}\ \left( \frac{1}{2}\right) }{X_{1}\left( -1\right) }=-\frac{1}{2} \tag*{(3)}$$
Considering the $\func{Im}\ A_{i}$ as fairly small, this means
$$\delta _{1}=-2\delta _{3} \tag*{(4a)}$$
If we use the empirical value $\left( \delta _{3}-\delta _{1}\right) =30^{\text{o}}$
$$\delta _{3}=10^{\text{o}}\qquad \delta _{1}=-20^{\text{o}} \tag*{(4b)}$$
and to lowest order in $\delta _{1}$ and $\delta _{3}$
$$\begin{aligned}
A_{o-} &=&Ae^{i\delta _{3}}\quad ,\qquad A_{+-}=Ae^{-i\delta _{3}} \\
A_{oo} &=&\sqrt{2}\quad iA\ \delta _{3}\end{aligned}$$
Thus the phase difference of 30$^{\text{o}}$ corresponds to a fairly small phase of magnitude 10$^{\text{o}}$ for both of the favored decays. Of course in this approximation the $D^{\text{o}}\pi ^{\text{o}}$ decay is purely imaginary entirely due to rescattering.
If we now use the empirical value $A_{1}=0.7A_{3}=0.7A$ but still assume $X_{3}=X_{1},$ we have, using Eq. (1) to lowest order in $\delta _{1}$ and $\delta _{3}$:
$$\begin{aligned}
A_{3} &=&A+i\ \delta _{3}A \\
A_{1} &=&0.7A-2i\ \delta _{3}A \\
\delta _{1} &=&\frac{-2\delta _{3}}{0.7} \\
A_{+-} &=&0.8A-iA\ \delta _{3}\end{aligned}$$
and with $\delta _{3}-\delta _{1}=30^{\text{o}}$ we get $\delta _{1}=-22^{\text{o}}$, $\delta _{3}=8^{\text{o}}$ and the phase for $\bar{B}\rightarrow D^{+}\pi ^{-}\ $is again $10^{\text{o}}$.
Finally if we also assume$\ X_{1}=0.7\ X_{3}$ we get Eqs. (4) again and
$$A_{+-}=0.8A-i0.6A\ \delta _{3}$$and the phase for $\bar{B}\rightarrow D^{+}\pi ^{-}\ $is $7.5^{\text{o}}.$
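These small-angle results are simple to check by evaluating Eq. (1b) directly; the sketch below reproduces the quoted phases (up to an overall sign, since the text quotes magnitudes) for the three cases considered above.

```python
import numpy as np

def dpi_phases(A1_over_A3, X1_over_X3, d3_minus_d1_deg=30.0):
    """Combine delta_1/delta_3 = -2 (X1/X3)/(A1/A3) with the constraint
    delta_3 - delta_1 = 30 deg, then evaluate Eq. (1b) with A3 = 1:
    A_{+-} = (A3 e^{i d3} + 2 A1 e^{i d1}) / 3."""
    k = 2.0 * X1_over_X3 / A1_over_A3
    d3 = np.radians(d3_minus_d1_deg) / (1.0 + k)
    d1 = -k * d3
    A_pm = (np.exp(1j * d3) + 2.0 * A1_over_A3 * np.exp(1j * d1)) / 3.0
    return np.degrees(d3), np.degrees(d1), np.degrees(np.angle(A_pm))

for A1, X1 in [(1.0, 1.0), (0.7, 1.0), (0.7, 0.7)]:
    d3, d1, ph = dpi_phases(A1, X1)
    print(f"A1/A3={A1}, X1/X3={X1}: d3={d3:5.1f}, d1={d1:6.1f}, arg(A+-)={ph:5.1f} deg")
```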
Thus we conclude that $\left( \delta _{3}-\delta _{1}\right) =30^{\text{o}}$ corresponds to a small phase of order $10^{\text{o}}$ for $\bar{B}\rightarrow D^{+}\pi ^{-}.$ In contrast the phase for the unfavored decay $\bar{B}\rightarrow D^{\text{o}}\pi ^{\text{o}}$ is greater than $45^{\text{o}}.$ Similar results are implied by more detailed analysis [@Chuva].
One reason for the interest in the strong phase for $\bar{B}\rightarrow D^{+}\pi ^{-}$ is the possible use of this decay or the related decay $\bar{B}\rightarrow D^{\ast +}\ \pi ^{-}$ in the determination of the phase $\gamma $ in the CKM matrix. One can look at the time-dependence of the decay due to interference with the double-Cabibbo suppressed decay $B\rightarrow D^{+}\pi ^{-}$ which corresponds to $\bar{b}\rightarrow \bar{u}+c+\bar{d}.$ The time-dependent term can be used [@Dunietz] to find $\sin \left( 2\beta +\gamma \right) .$ The detailed analysis [@Suprun] [@Silva] involves the strong phase $\Delta ,$ which is the difference between the strong phase for $\bar{B}\rightarrow D^{+}\pi ^{-}$ and that for $B\rightarrow D^{+}\pi ^{-}.$ There is an ambiguity in the result unless one can assume $\Delta $ is small.
The same isospin analysis given for $\bar{B}\rightarrow D^{+}\pi ^{-}$ can be applied to $B\rightarrow D^{+}\pi ^{-}$ and one expects again that the final state phase is due to the same rescattering from states like $D^{\ast
}\pi ,\ D\rho ,\ $etc. The relative importance will be different for the case of $B$ as compared to $\bar{B},$ but theoretical estimates [@Suprun] indicate the difference is not large. Thus $\Delta $ is expected to involve a cancellation between the two strong phases and thus be smaller than either one. Given our conclusion that the phase for $\bar{B}\rightarrow D^{+}\pi ^{-}$ is of order $10^{\text{o}}$ we conclude that $\Delta $ is very small.
All the analysis here holds equally well for the decays to $D^{\ast }\pi .$ In fact the experimental results [@Coan] for the decays to $D^{\ast
}\pi $ are the same within errors as for $D\pi $ and give essentially the same strong phase shift Eq. (2).
Decays in which the final $\pi ^{-}$ is replaced by a $\rho ^{-}$ are found to have a branching ratio 2 to 3 times as large as those with a $\pi ^{-}.$ Thus it may be expected that rescattering from states $D_{i}\rho $ to $D_{i}\pi $ may have a larger effect than $D_{i}\pi $ to $D_{i}\rho .$ While our general analysis might be applicable to $D_{i}\rho $ we expect the magnitude of the strong phase shifts to be smaller. This seems to be borne out by the first data on the $D^{\text{o}}\rho ^{\text{o}}$ decay [@Sapathy].
This research was supported in part by the Department of Energy under grant no. DE-FG02-91ER40682. Part of this work was carried out at the Aspen Center for Physics.
[9]{} M. Beneke et al., *Nucl. Phys.* B **591**, 313 (2000); M. Suzuki and L. Wolfenstein, *Phys. Rev.* D **60**, 74019 (1999).
S. Ahmed et al \[CLEO Collaboration\], *Phys Rev.* D **66**, 031101 (R) (2002).
C.K. Chuva, W.S. Hou and K.C. Yang, *Phys. Rev.* D **65,** 09600 (2002).
The importance of such rescattering in $B\rightarrow K\pi $ is discussed by A.F. Falk et. al, *Phys. Rev*. D **57,** 4290 (1998) and D. Atwood and A. Soni, *Phys. Rev*. D **58,** 036005 (1998).
I. Dunietz, *Phys. Lett.* **B 427,** 179 (1998) and references therein.
D.A. Suprun, C.W. Chiang and J.L. Rosner, *Phys. Rev.* D **65,** 054025 (2002).
J.P. Silva, A. Soffer, F. Wu, and L. Wolfenstein, *Phys. Rev* D. **67,** 036004 (2003).
T.E. Coan et al \[CLEO Collaboration\], *Phys. Rev. Lett.* 88, 062001 (2002); K. Abe et al \[BELLE Collaboration\], *Phys. Rev. Lett.* 88, 052002 (2002).
A. Sapathy et al. \[BELLE Collaboration\], *Phys. Lett.* **B 553**, 159 (2003).
---
abstract: |
Numerical simulations and ‘proof of principle’ experiments have clearly shown the interest of using crystals as photon generators dedicated to intense positron sources for linear colliders. An experimental investigation, using a 10 GeV secondary electron beam of the SPS-CERN impinging on an axially oriented thick tungsten crystal, has been prepared and operated between May and August 2000.
After a short recall of the main features of positron sources using channeling in oriented crystals, the experimental set-up is described. A particular emphasis is put on the positron detector, made of a drift chamber partially immersed in a magnetic field. The enhancement in photon and positron production in the aligned crystal has been observed in the energy range 5 to 40 GeV for the incident electrons, in crystals of 4 and 8 mm thickness as well as in a hybrid target. The first results concerning this experiment are presented hereafter.
author:
- |
R.Chehab, R.Cizeron, C.Sylvia (LAL-IN2P3, Orsay, France)\
V.Baier, K.Beloborodov, A.Bukin, S.Burdin, T.Dimova, A.Drozdetsky, V.Druzhinin, M.Dubrovin[^1],\
V.Golubev, S.Serednyakov, V.Shary, V.Strakhovenko (BINP, Novosibirsk, Russia)\
X.Artru, M.Chevallier, D.Dauvergne, R.Kirsch, Ph.Lautesse, J-C.Poizat,\
J.Remillieux (IPNL-IN2P3, Villeurbanne, France)\
A.Jejcic (LMD-Universite, Paris, France)\
P.Keppler, J.Major (Max-Planck Institute, Stuttgart, Germany)\
L.Gatignon (CERN, Geneva, Switzerland)\
G.Bochek, V.Kulibaba, N.Maslov (KIPT, Kharkov, Ukraine)\
A.Bogdanov, A.Potylitsin, I.Vnukov (NPI-TPU, Tomsk, Russia)
title: 'Experimental Determination of the Characteristics of a Positron Source Using Channeling[^2].'
---
Introduction
============
The enhancement of radiation observed in channeling conditions in a crystal, with respect to bremsstrahlung, makes crystal targets interesting for obtaining large positron yields: the high rate of photons generated along a crystal axis produces a correspondingly high positron yield in the same crystal target [@b1]. The association of crystals, where the intense radiation takes place, with amorphous targets, where the photons are materialized into e+e- pairs, has also been investigated [@b2]. The insertion of this kind of target in a typical scheme for a positron facility of a linear collider has also been considered [@b3]. The high current of the incident electron beam leads to important energy deposition in the target and to possible crystal damage, mainly caused by Coulomb scattering of the electron beam on the nuclei; both aspects have been studied. Simulations of warm crystals, heated by the deposited energy, were carried out under actual working conditions using the JLC (Japanese Linear Collider) parameters [@b3]. Concerning radiation damage, a beam test has been performed with the SLC beam; fluences as high as $2\cdot10^{18}$ mm$^{-2}$, corresponding to a hundred hours of continuous operation of a linear collider such as JLC, appeared harmless [@b3.1]. Proof of principle experiments (1992-93, Orsay) [@b4] and (1996, Tokyo) [@b5] showed enhancements in photon and positron yields. An experimental verification of the yield, energy spectrum and transverse emittance of a crystal positron source should give definite answers on the usable positron yield for a LC (Linear Collider) [@b6]. A beam test on the transfer lines of the SPS-CERN started this spring. After a description of the experimental set-up, with some emphasis on the positron detector, the first results are provided.
The experimental set-up
=======================
The experiment uses multi-GeV electron beams of the SPS. The electrons, after passing through profile monitors and counters (trigger), impinge on the targets with energies from 5 to 40 GeV (mainly at 10 GeV). Photons, as well as e+e- pairs, are produced in the target. These particles are emitted mainly in the forward direction and travel across the magnetic spectrometer, consisting of the drift chamber and positron counters inserted between the poles of a spectrometer magnet (MBPS). The most energetic photons and charged particles come out nearly in the forward direction. The charged particles are swept away by a second magnet (MBPL), after which the photons reach the photon detector, made of preshowers and a calorimeter (see figure \[exp\]).
![Experimental set-up[]{data-label="exp"}](pict/plan1.eps){width="80mm"}
The main elements of the set-up
===============================
**The beam:** the SPS bursts are made of pulses of 3.2 seconds duration with a 14.4 second period. $10^4$ electrons/burst are usually obtained at 10 GeV electron energy.
**The trigger:** the channeling condition requires that the angle of the incident electron direction, with respect to the crystal axis, be smaller than the Lindhard critical angle, $\Psi_c~=~\sqrt{2\cdot U/E}$, where $U$ represents the potential well of an atomic row and $E$ the incident electron energy. In order to fulfil this condition we installed a trigger system made of scintillator counters. For 10 GeV electrons and the $<\!111\!>$ axis of the tungsten crystal, $\Psi_c=0.45
mrad$. Taking into account the presence of crystal effects at angles slightly larger than the critical angle, the acceptance angle for the trigger has been chosen at $0.75 mrad$. That gives typically a rate of “good events” in the experimental conditions of 1%.
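For orientation, the quoted critical angle is easy to reproduce. In the sketch below the depth of the $<\!111\!>$ axial potential well in tungsten, $U\approx 1$ keV, is not taken from the text but inferred from the quoted $\Psi_c=0.45$ mrad at 10 GeV, so it should be read as indicative only.

```python
import numpy as np

U_eV = 1.0e3   # <111> axial potential well in W, inferred from Psi_c(10 GeV) = 0.45 mrad
for E_GeV in (5, 10, 20, 40):
    psi_c_mrad = np.sqrt(2.0 * U_eV / (E_GeV * 1.0e9)) * 1.0e3   # Lindhard critical angle
    print(f"E = {E_GeV:2d} GeV: Psi_c = {psi_c_mrad:.2f} mrad")
# 10 GeV gives ~0.45 mrad; the trigger acceptance was set somewhat wider, at 0.75 mrad.
```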
**The target:** two kinds of tungsten targets have been installed on a 0.001 degree precision goniometer: an 8 mm thick tungsten crystal and a hybrid target made of 4 mm crystal and 4 mm amorphous tungsten. The mosaic spreads of both crystals are less than $0.5$ mrad.
**The positron detector:** the drift chamber is made of hexagonal cells, filled with a gas mixture $He(90\%) CH_4(10\%)$, and comprises two parts:\
**the first part (DC1)**, with a cell radius of 0.9 cm, lies mostly outside the magnetic field of the bending magnet MBPS. It allows the measurement of the position and exit angle of the emitted pairs.\
**the second part (DC2)**, with a cell radius of 1.6 cm, is immersed in the magnetic field. It allows the measurement of the positron momentum. Two values of the magnetic field are used, 1 and 4 kGauss, to investigate two momentum regions: from 5 to $20 MeV/c$ and from 20 to $80 MeV/c$.
Signal and field wires are short (6 cm) and made of gold-plated tungsten and titanium, respectively. Counters (scintillators) are placed on the lateral walls in order to define the useful region. The drift chamber has 21 layers and the resolution is 300 microns. The maximum accepted horizontal angle is 30 degrees. The limited vertical size sets the overall acceptance of the chamber to 6%. The choice of helium keeps multiple scattering small ($0.001 X_0$).
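As a consistency check of the two field settings (a sketch with illustrative numbers only), the standard relation $p\,[\mathrm{GeV}/c] \simeq 0.3\, B\,[\mathrm{T}]\, \rho\,[\mathrm{m}]$ shows that the quoted momentum windows at 1 and 4 kGauss correspond to the same range of bending radii, i.e. to the same chamber geometry.

```python
def bending_radius_m(p_MeV, B_T):
    """Radius of curvature of a relativistic positron of momentum p in a field B."""
    return (p_MeV / 1.0e3) / (0.3 * B_T)

for B_T, (p_lo, p_hi) in [(0.1, (5, 20)), (0.4, (20, 80))]:
    r_lo, r_hi = bending_radius_m(p_lo, B_T), bending_radius_m(p_hi, B_T)
    print(f"B = {B_T:.1f} T: bending radius {r_lo:.2f} to {r_hi:.2f} m")
# Both settings give radii of roughly 0.17 to 0.67 m.
```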
**The electronics:** for drift time measurements the electronics detects the leading edge of the signal coming from the sense wire and digitizes the time with 3 ns resolution. The front-end electronics consists of a preamplifier, a shaper and an ECL discriminator. The TDC (Time to Digital Converter) has a scale range of 1.5 microseconds. A common stop is used.
**The photon detector:** the photon multiplicity is rather high: about 200 photons/event for an 8 mm thick tungsten crystal oriented along its $<\!111\!>$ axis. The angular acceptance of the photon detector is 5.5 mrad (half cone angle). That gives 2000 triggered photons in an SPS burst. The photon detector is made of:\
2 preshowers giving the relative photon multiplicity (one with nominal 5.5 mrad acceptance and the other with reduced 1.5 mrad acceptance),\
a NaI calorimeter providing the radiated energy. The role of the photon detector is essential for monitoring the crystal orientation.
First results
=============
![Rocking curve measured on preshower for the hybrid target.[]{data-label="rock"}](pict/pres10_8.eps){width="60mm"}
**Enhancement in photon production.** On the $<\!111\!>$ axis of the tungsten crystals, the ultrarelativistic electrons radiate more photons than by classical bremsstrahlung. The preshower provides the relative photon multiplicity as a function of the crystal orientation. We report, in figure \[rock\], the associated rocking curve
![Rocking curves measured on DC counter 1 for the hybrid target with two values of the magnetic field.[]{data-label="DCcounters"}](pict/pc1_10_8mm.eps){width="60mm"}
for the 8 mm hybrid target, for an incident energy of 10 GeV. The enhancement is about 1.8.
![Rocking curve measured with number of wires in DC for the hybrid target.[]{data-label="DCwires"}](pict/nwr_10_8mm.eps){width="60mm"}
**Enhancement in positron production.** A corresponding enhancement in positron production is observed in channeling conditions, as can be seen in figure \[DCcounters\]. Rocking curves for the positrons use the information from the two counters placed on the lateral walls of the Drift Chamber. Counter 1 receives low energy positrons strongly deflected by the magnetic field (1 and 4 kGauss) and counter 2 the high energy positrons (and the electrons), which are weakly deflected. It can be seen that the ratio aligned/random is about 2, for an incident energy of 10 GeV, in agreement with the simulations (Counter 1). The enhancement in positron production has also been observed with the number of hit wires in the Drift Chamber (see figure \[DCwires\]).
![Typical event for the hybrid target. Field $0.4T$[]{data-label="event"}](pict/xc8x402.eps){width="85mm"}
**Positron tracks.** Positron tracks have been reconstructed for different running conditions (beam energy, crystal thickness, magnetic field). We present in figure \[event\] an example of these tracks, reconstructed separately in the two parts of the chamber on the basis of the signals delivered by the Drift Chamber.
![Positron momentum distribution measured by the DC with the hybrid target. Field 1 kGauss. Incident energy is 10 GeV. The unfilled histogram represents the oriented crystal, the dark one the target in random position.[]{data-label="momentum"}](pict/xc8101p.eps){width="60mm"}
**Momentum distribution.** A preliminary momentum distribution has been determined for the accepted positrons in the Drift Chamber (see for example figure \[momentum\]). Such distributions have also been obtained for the two values of the magnetic field, 1 and 4 kGauss, and for several energies (from 5 GeV to 40 GeV).
**Incident energy dependence.** Varying the incident energy from 5 to 40 GeV clearly shows the enhancement in photon as well as positron production growing with energy; at 40 GeV, the ratio axis/random is more than 4.
Summary and Conclusions
=======================
These first results on positron production in channeling conditions with high energy electrons clearly show an enhancement in photon as well as positron production, in agreement with the simulations. For the positrons the enhancement takes place mainly at low energy. This aspect is of particular importance for the positron capture in the accelerator channel. The Drift Chamber operates properly to determine the positron track and hence to provide the exit angle and positron momentum. These results, which give a clear confirmation of the interest of using this kind of source, will be complemented by a detailed analysis in the coming months.
[9]{}
X.Artru et al. Nucl.Instr.Methods A 344 (1994) 443
X.Artru et al. Nucl.Instr.Methods B 119 (1996) 246
X.Artru et al. Particle Accelerators Vol.59 (1998) 19
R.Chehab et al. Proceedings of the 1999 EPAC, Stockholm, June 1999 and LAL-RT 98-02
R.Chehab et al. Proceedings of the 1993 PAC, Washington DC, May 1993 and LAL-RT 93-05
B.N.Kalinin et al Nucl.Instr.Methods B 145 (1998) 209
V.N.Baier et al Nucl.Instr.Methods B 145 (1998) 221
[^1]: presently at Wayne State University at Cornell, Ithaca NY, USA
[^2]: Research made in the framework of INTAS Contract 97-562
---
abstract: 'Dark-matter halos grown in cosmological simulations appear to have central NFW-like density cusps with mean values of $d\log\rho/d\log r \approx -1$, and some dispersion, which is generally parametrized by the varying index $\alpha$ in the Einasto density profile fitting function. Non-universality in profile shapes is also seen in observed galaxy clusters and possibly dwarf galaxies. Here we show that non-universality, at any given mass scale, is an intrinsic property of DARKexp, a theoretically derived model for collisionless self-gravitating systems. We demonstrate that DARKexp—which has only one shape parameter, $\phi_0$—fits the dispersion in profile shapes of massive simulated halos as well as observed clusters very well. DARKexp also allows for cored dark-matter profiles, such as those found for dwarf spheroidal galaxies. We provide approximate analytical relations between DARKexp $\phi_0$, Einasto $\alpha$, or the central logarithmic slope in the Dehnen–Tremaine analytical $\gamma$-models. The range in halo parameters reflects a substantial variation in the binding energies per unit mass of dark-matter halos.'
author:
- 'Jens Hjorth, Liliya L. R. Williams, Rados[ł]{}aw Wojtak, and Michael McLaughlin'
title: 'Non-universality of dark-matter halos: cusps, cores, and the central potential'
---
Introduction
============
The basic shape of N-body simulated cold dark matter halos is now well established. The NFW density profile [@1997ApJ...490..493N] is a remarkably good representation [e.g., @2004ApJ...607..125T; @2004MNRAS.353..624D; @2005Natur.433..389D; @2005MNRAS.364..665D; @2010MNRAS.402...21N; @2011ASL.....4..297D; @2011MNRAS.415.3895L], especially given its simplicity: it has no shape parameters. However, a closer examination of the profile shapes of relaxed halos shows that NFW does not fully capture the density run. Most importantly, while the NFW profile has a central density cusp, with logarithmic slope $d\log \rho(r)/d\log r = -1$, recent high resolution simulations have shown that the central cusp tends to become somewhat shallower at smaller and smaller radii [e.g., @2010MNRAS.402...21N], prompting the use of an Einasto $\alpha$-model [@2004MNRAS.349.1039N; @1965TrAlm...5...87E]. The Einasto profile fits halos better at most radii, not just in the center, suggesting that the additional parameter $\alpha$ is a real characteristic of dark matter halos [@2006AJ....132.2685M; @2014arXiv1411.4001K].
@2013MNRAS.432.1103L find that the mean value of $\alpha$ ranges from about 0.17 for $M_{200}=10^{11}$ $h^{-1}$ M$_\odot$ halos to $0.21$ at $M_{200}=10^{15}$ $h^{-1}$ M$_\odot$, which is consistent with @2008MNRAS.387..536G who find a relation between Einasto’s $\alpha$ and the ratio of the linear density threshold for collapse to the rms linear density fluctuation at a given mass scale. In addition to the mass-dependent $\alpha$, there is significant scatter of around $0.05$ in $\alpha$. Moreover, halos with $\alpha$ in the range $0.05$ to $0.45$ are also realized in simulations [@2013MNRAS.432.1103L; @2014arXiv1411.4001K]. These works show that the shape of the density profile, including the central logarithmic slope, is tied to the initial conditions and evolutionary history of the halos in simulations. @2013MNRAS.432.1103L suggest that the mass profiles of halos reflect their accretion history.
The spread in $\alpha$ index is observed not just for main halos, but also subhalos. @2008MNRAS.391.1685S find that the profiles of subhalos do not converge to a central power law but exhibit curving shapes well described by the Einasto form with $\alpha$ in the range $0.15$ to $0.28$. This is consistent with the findings of @2013MNRAS.428.1696V, who note that simulated satellite dark matter halos around galaxies are better fit with Einasto profiles of index $\alpha=0.2-0.5$. @2013MNRAS.431.1220D find a correlation between $\alpha^{-1}$ (ranging from 0.1 to 1) and satellite dark matter halo mass, i.e., with higher masses corresponding to smaller values of $\alpha$, contrary to the trend found for more massive halos. The diversity in the density profiles of these satellites is attributed to tidal stripping, which acts mainly in the outer parts of the subhaloes.
The conclusion is that simulated dark matter halos do not form a strictly self-similar family of systems as was originally suggested, but display non-universality [@2006AJ....132.2685M].
Neither NFW nor Einasto have any theoretical underpinnings; they are purely empirical. In a series of papers [@2010ApJ...722..851H; @2010ApJ...722..856W; @2010ApJ...725..282W; @2014ApJ...783...13W] we have proposed a theoretically based model, DARKexp, which has been shown to allow good fits to a few simulated halos [@2010ApJ...725..282W; @2014ApJ...783...13W] and observed galaxy clusters [@2013MNRAS.436.2616B; @2015arXiv150704385U].
DARKexp has one shape parameter, and its density profiles are very similar to Einasto within a limited range of shape parameters. Taking the broader range of achievable values into account, DARKexp suggests that there should be a dispersion in the shapes of the density profiles between systems, parametrized by the shape parameter. At small radii, but sufficiently above the resolution limit achieved in cosmological simulations, these density profiles differ significantly. Some have nearly flat cores, while others are steep, $d\log \rho(r)/d\log r \approx -2$. In other words, unless halos have identical normalized central potentials, DARKexp predicts that the density profiles should not be universal, and that a logarithmic slope of $-1$ at small radii robustly resolved in simulations is not a universal attractor of halos with different formation histories. Testing this would be important because DARKexp is not a fitting function but is derived from basic physics.
We summarize the rationale and properties of DARKexp in Section \[darkexp\]. While DARKexp has the advantage of being derived from first principles, the resulting density profiles unfortunately cannot be expressed analytically. We therefore also relate the properties of DARKexp to those of phenomenological fitting functions, i.e., generalized Hernquist, Dehnen–Tremaine, and Einasto models. We discuss non-universality of simulated and observed massive dark-matter halos in Section 3 and show that, in cases where exact fits are not required, DARKexp, Einasto, or Dehnen–Tremaine profiles provide equally good representations, with considerable dispersion in their respective shape parameters. We relate observed, simulated and theoretical properties of lower mass halos in Section 4 and conclude in Section 5.
DARKexp and the central potential–density slope relation {#darkexp}
=====================================================
DARKexp is a theoretical model for the differential energy distribution in a self-gravitating collisionless system. Specifically, defining $N(\epsilon) = dM/d\epsilon$, where $\epsilon$ is the normalized absolute particle energy per unit mass, a statistical mechanical analysis leads to $$N(\epsilon) \propto \exp(\phi_0-\epsilon)-1$$ [@2010ApJ...722..851H]. The central dimensionless potential, $\phi_0$, is the only free shape parameter of the model. Some of the global properties of the model are analytic (see Appendix A), while the phase space distribution function and derived properties, such as the density profile, must be obtained numerically using an iterative procedure [@2010ApJ...725..282W]. Apart from the assumption of isotropy (which leads to spherical structures), there are no additional assumptions involved in this computation. The scale parameters of the corresponding density profile follow from the resulting profiles themselves. DARKexp is a maximum entropy theory and as such describes only the final, equilibrium states of systems, and not the evolutionary paths that brought them to the final state. The density profiles apply to isolated systems, i.e., there is no cosmological context and so no concept of a virial radius or formation redshift.
While the asymptotic logarithmic density slope is $-1$ for $r\to 0$ [@2010ApJ...722..851H], in reality the central logarithmic slope robustly resolved in simulations depends on $\phi_0$: $\phi_0\lesssim 4$ systems have inner logarithmic slopes shallower than $-1$, while $\phi_0\gtrsim 5$ have inner logarithmic slopes between $-1$ and $-2$ [@2010ApJ...722..856W]. DARKexp was not designed to fit the properties of dark-matter structures. Nevertheless, the energy distributions resulting from this first principles derivation are excellent representations of numerical simulations and the density profiles are reminiscent of Einasto $\alpha$ profiles for values of $\phi_0$ around 4 [@2010ApJ...725..282W]. Note that in the @2014ApJ...783...13W formalism, the DARKexp density run is independent of any velocity anisotropy. DARKexp also provides very good fits to observed galaxy clusters [@2013MNRAS.436.2616B; @2015arXiv150704385U], and globular clusters [@2012MNRAS.423.3589W].
We here compare DARKexp to phenomenological fitting functions, notably the generalized Hernquist, the special case of the Dehnen–Tremaine $\gamma-$model, and the Einasto profile (see Appendix B for details).[^1] We provide the correspondence between the shape parameters of these analytic fitting functions and the dimensionless parameter of DARKexp, $\phi_0$. These results can be used in any situation where fitting with DARKexp is required.
By construction, the DARKexp energy distribution is discontinuous at the escape energy, and as a consequence the density falls off as $r^{-4}$ at large radii [@1987IAUS..127..511J; @1991MNRAS.253..703H]. When searching for phenomenological or analytical profiles that may be useful approximations to DARKexp, we recall that the generalized @1990ApJ...356..359H model has the same outer logarithmic slope of $-4$ as DARKexp, and also allows for a range of central logarithmic slopes, $$\label{hernmod}
\rho \propto r^{-\gamma}(1+r^\beta)^{(\gamma-4)/\beta} ,$$ where $\beta$ governs the sharpness of the transition between the asymptotic inner logarithmic slope of $-\gamma$ and the outer logarithmic slope of $-4$. For $\beta=1$ the resulting $\gamma$-models, $$\label{gamod}
\rho \propto r^{-\gamma}(1+r)^{\gamma-4},$$ are largely analytical [@1993MNRAS.265..250D; @1994AJ....107..634T].
Another phenomenological model, which provides an excellent fit to numerical simulations, is the Einasto model [@2004MNRAS.349.1039N]; it is controlled by the shape parameter $\alpha$, $$\label{einmod}
\rho \propto \exp\left [-\frac{2}{\alpha}(r^\alpha-1)\right].
$$ As shown by @2010ApJ...722..856W and @2010ApJ...725..282W, DARKexp halos have a range of logarithmic slopes at small radii robustly resolved in simulations depending on $\phi_0$; models in a range around $\phi_0\approx4$ are well represented by Einasto models with $\alpha$ in the range 0.15–0.2.
To investigate the relations between the shape parameters describing DARKexp and phenomenological models we now turn to a detailed comparison of their density profiles (see Appendix B). For DARKexp of a given $\phi_0$ we find an Einasto model or $\gamma$-model that matches DARKexp (i.e., has the same value as DARKexp) in either density or log–log density slope. The anchor radius at which the matching is done is chosen to be $\log(r_a/r_{-2}) = -1.5$, which is the typical resolution limit in cosmological simulations [e.g., @2010MNRAS.402...21N]. We choose to anchor the profiles at a fixed radius rather than fitting the (log) density or slope as a function of (log) radius, over some range, for visual clarity.
Figure \[phi0-gamma\] summarizes the results of the matching between DARKexp and phenomenological models. It shows the relations between $\phi_0$ and either $\gamma$ or $\alpha$, matched in either density or logarithmic slope. We also provide analytical approximations to these relations. For the $\gamma$-model $$\label{gamma-approx}
\gamma\approx 3\log \phi_0 -0.65; \ \ \ \ \ \ \ \ \ \ 1.7\le\phi_0\le 6,$$ which suggests $\phi_0\approx 3.5$ for $\gamma=1$. For the Einasto model $$\label{alpha-approx}
\alpha\approx 1.8\exp(-\phi_0/1.6)$$ and $\phi_0\approx 3.8$ for $\alpha=0.17$.
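For convenience, the fitting functions and the approximate mappings above are transcribed in the short Python sketch below; it is not a substitute for the full DARKexp profiles, which must be computed numerically [@2010ApJ...725..282W], and the mappings are only the approximations of Equations \[gamma-approx\] and \[alpha-approx\].

```python
import numpy as np

def rho_gamma_model(r, gamma):
    """Dehnen-Tremaine gamma-model density (arbitrary normalization)."""
    return r**(-gamma) * (1.0 + r)**(gamma - 4.0)

def rho_einasto(r, alpha):
    """Einasto density, with r in units of r_-2 (arbitrary normalization)."""
    return np.exp(-2.0 / alpha * (r**alpha - 1.0))

def gamma_from_phi0(phi0):
    """Approximate matching relation, valid for 1.7 <= phi0 <= 6."""
    return 3.0 * np.log10(phi0) - 0.65

def alpha_from_phi0(phi0):
    """Approximate matching relation between phi0 and the Einasto index."""
    return 1.8 * np.exp(-phi0 / 1.6)

print(round(gamma_from_phi0(3.5), 2))   # ~1.0, an NFW-like inner slope
print(round(alpha_from_phi0(3.8), 2))   # ~0.17, a typical Einasto index
```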
Comparisons between DARKexp and the empirical models, using the above analytical relations, are shown in Figure \[analytical\]. We will return to the central potential–density slope relation in the following sections.
Non-universality of massive halos
=================================
Simulated dark matter halos\[DMsim\]
------------------------------------
Until about 5–10 years ago simulation results were consistent with halos being universal in shape. Now, there is considerable evidence to the contrary. How does the DARKexp expectation of non-universality compare to simulations? We here analyze recent simulations, show that DARKexp fits the density profiles at least as well as Einasto, and quantify the non-universality.
We use cluster halos identified in the Bolshoi simulation[^2], a dark matter-only simulation run in a $250{{\ifmmode{h^{-1}{\rm Mpc}}\else{$h^{-1}$Mpc }\fi}}$ box (WMAP5 cosmological parameters) with $2048^{3}$ particles, which corresponds to a mass resolution of $1.35\times10^{8}{{\ifmmode{h^{-1}{\rm {M_{\odot}}}}\else{$h^{-1}{\rm{M_{\odot}}}$}\fi}}$ [for more details, see @2011ApJ...740..102K]. For the purpose of our analysis, we select well-resolved equilibrium halos with at least $5\times10^{5}$ particles within the virial radius $r_{\rm v}$ (defined as the radius of a sphere whose enclosed density is $360$ times larger than the background density). The state of equilibrium is assessed using two diagnostics: the virial ratio and the relative offset between the minimum of the potential and the halo mass center. We find $261$ equilibrium halos defined by the virial ratio less than $1.3$ and the mass center offset less than $0.05r_{\rm v}$, which are two commonly used relaxation criteria proposed by @2007MNRAS.381.1450N. This selection is done in order to make a meaningful comparison with DARKexp, which by definition only applies to equilibrium systems.
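In code, this selection amounts to a simple cut on the halo catalogue; the sketch below uses placeholder field names, which are not those of the actual Bolshoi data products.

```python
def select_relaxed(halos):
    """Resolution and relaxation cuts used here: at least 5e5 particles within
    r_v, virial ratio < 1.3, and mass-center offset < 0.05 r_v (the Neto et al.
    2007 criteria). `halos` is assumed to be a structured numpy array with the
    corresponding (placeholder) fields."""
    keep = ((halos["n_particles"] >= 5.0e5)
            & (halos["virial_ratio"] < 1.3)
            & (halos["center_offset"] < 0.05 * halos["r_vir"]))
    return halos[keep]
```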
We calculate the density profiles in $30$ spherical shells equally spaced in a logarithmic scale of radius spanning the range from $r_{\rm min}$ to $r_{\rm v}$, where $r_{\rm min}=0.015r_{\rm v}$ is the convergence radius defined by @2003MNRAS.338...14P and estimated for the minimum number of particles per halo. The halo centers are given by the minimum of the potential. DARKexp, Einasto, and Dehnen–Tremaine models are fitted to the density profiles of the halos by minimizing the following function $$\chi=\sum_{i=1}^{n_{\rm shell}}[\ln\rho_{i}-\ln\rho_{\rm model}(r_{i})]^{2},$$ where $n_{\rm shell}$ is the number of shells and $r_{i}$ is the mean radius of particles inside the $i$-th shell. Following @2011MNRAS.415.3895L, we restrict all fits to the radial range $r_{\rm min}<r<3r_{-2}$, which prevents including dynamically less relaxed parts of the halos. Here, $r_{-2}$ is defined as the radius at which $d\log \rho(r)/d\log r = -2$, and $\rho_{-2}=\rho(r_{-2})$. We first estimate $r_{-2}$ by fitting an Einasto profile in all $30$ shells, i.e., within the radial range $r_{\rm min}<r<r_{\rm v}$. This sets the restricted fitting range to be the same in both cases. Next, $r_{-2}$, $\rho_{-2}$, and $\gamma$, $\alpha$ or $\phi_{0}$ are obtained from a restricted fit to either model. In fitting DARKexp, we used density profiles computed on a grid of $\phi_{0}$, ranging from 2.0 to 8.0 with an interval of 0.01.
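A schematic version of this procedure, written out for the Einasto case, is sketched below; the density arrays and initial guesses are placeholders, and the DARKexp fit proceeds in the same way except that the model is interpolated from the precomputed grid of $\phi_{0}$ profiles rather than evaluated analytically.

```python
import numpy as np
from scipy.optimize import minimize

def fit_einasto(r, rho, r_min, r_max):
    """Minimize chi = sum_i [ln rho_i - ln rho_model(r_i)]^2 over the shells
    with r_min < r < r_max (e.g. r_max = 3 r_-2 from a first, full-range fit).
    Returns (ln rho_-2, r_-2, alpha). Schematic sketch only."""
    mask = (r > r_min) & (r < r_max)
    def chi(p):
        ln_rho2, r2, alpha = p
        model = ln_rho2 - 2.0 / alpha * ((r[mask] / r2)**alpha - 1.0)
        return np.sum((np.log(rho[mask]) - model)**2)
    p0 = [np.log(rho[mask]).mean(), np.median(r[mask]), 0.18]
    return minimize(chi, p0, method="Nelder-Mead").x
```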
Figure \[density-sim\] shows the density profiles of all halos selected for the analysis, in three ranges of the shape parameter, i.e., $\alpha$ for the Einasto model, $\gamma$ for the Dehnen–Tremaine model, and $\phi_{0}$ for DARKexp. The ranges were chosen such that there are equal numbers of halos in each of the three bins. The mean value of $\gamma$ is 1.36 with an rms scatter of 0.18 while the mean value of $\alpha$ is 0.19 (rms scatter 0.05). For $\phi_0$ the mean value is 3.9 (rms scatter 0.6). There is no significant correlation in these parameters with the mass over the narrow range of masses considered ($10^{14}$ – $10^{14.8}$ M$_\odot$). The lack of any difference between residuals from the best-fitting Einasto, Dehnen–Tremaine, and DARKexp profile leads to the conclusion that *these models recover spherically averaged density profiles of simulated cosmological halos equally well.*
The second important conclusion is that, in the narrow mass range considered, *there is a range of density profile shapes among simulated systems, and that the DARKexp family of models captures that spread very well*. Figure \[phi0-gamma\] (top) illustrates this pictorially; it shows the best-fit shape parameters of the Einasto and DARKexp fits to the simulated halos; $\phi_0$ and $\alpha$ range between 2.5–6 and 0.05–0.35, respectively ($\gamma$ ranges from 0.8 to 1.7). The scatter in $\alpha$ is about 0.05. As expected (Section 2), the data show a tight relation between the central potential $\phi_{0}$ and the shape parameter $\alpha$ of the Einasto profile, which traces the $\alpha$–$\phi_{0}$ relations obtained from matching Einasto and DARKexp density profiles.
Observed clusters of galaxies\[DMobs\]
--------------------------------------
have emphasized the wide range of central logarithmic slopes inferred observationally for different clusters of galaxies. DARKexp applies to equilibrium collisionless systems, so in addition to dark matter halos it can also be applied to galaxies and clusters that host baryons, regardless of the physical processes that led to the formation of the final equilibrium systems, provided that the evolution and the present state of these systems are not dominated by two-body encounters, and the systems are not far from being spherical. We here show that relaxed galaxy clusters exhibit non-universality that can be described by the DARKexp model.
@2013ApJ...765...25N constructed spherically averaged density profiles of seven massive relaxed clusters, using central galaxy kinematics on the smallest scales, strong lensing on intermediate scales and weak lensing on large scales. To separate out dark matter density profiles they subtracted the stellar contribution of the central galaxy assuming constant mass-to-light ratio for the stellar population. The remaining dark matter profiles were fit with cored NFW and generalized NFW (gNFW) forms.
The $\gamma$-model we use here and the gNFW differ in the asymptotic outer logarithmic slope ($-4$ vs. $-3$), so the differences are confined mostly to radii outside $r_{-2}$. At $r\lesssim r_{-2}$ the two models are parametrized similarly, therefore one can compare the two models, if one leaves out the outermost radii. The @2013ApJ...765...25N cluster profiles extend about 2 decades in radius interior to $r_{-2}$, and about 0.75 of a decade exterior to $r_{-2}$, so the bulk of the radial range is usable in the comparison.
We took the @2013ApJ...765...25N inner logarithmic slope for each cluster and set it equal to $\gamma$. This supplies the vertical axis coordinate in Figure \[phi0-gamma\] (bottom). We then fit DARKexp directly to @2013ApJ...765...25N gNFW fits, excluding the outer radii beyond which the logarithmic slope is steeper than $-2.7$ (the fits are insensitive to the truncation logarithmic slope for values between $-2$ and $-2.8$). This gave us $\phi_0$, plotted along the horizontal axis. The seven galaxy clusters are plotted as solid points in Figure \[phi0-gamma\] (bottom). They fit well the $\phi_0$–$\gamma$ relation matched in slope and are consistent with our proposed analytical approximation (the dotted red line).
From Figure \[phi0-gamma\] we conclude that clusters are fit with a range of $\phi_0$’s, suggesting non-universality. Observed relaxed clusters of galaxies appear to follow DARKexp; and seem to prefer $\phi_0$ values about 2–3.5, on average smaller than cluster-sized dark matter halos in pure dark matter simulations, which are fitted with values of $\phi_0$ in the range 2–6.
Dwarf galaxies\[dwarfs\]
========================
So far we discussed dispersion in density profiles among massive pure dark matter halos. We next consider the differences between pure dark matter halos and observed low mass systems.
The central density profiles of observed galaxy clusters appear to be, on average, shallower than those of clusters in pure dark matter simulations. Likewise, the central logarithmic slopes of observed dwarf galaxies are shallower than simulated dark-matter subhalos. In this section we discuss the relative energies between the steeper pure dark matter and shallower observed structures. We will show that the differences amount to a large fraction of the total binding energies of the haloes, demonstrating that the differences in central logarithmic slopes or $\phi_0$ represent significant differences between the halos in terms of global energetics. Moreover, assuming the difference between two states or halos is brought about by some dynamical or baryonic proces, we compute what it would take to induce a transformation between the two, such as the cusp–core transition. Calculating relative (fractional) energetics allows us to bypass the uncertainties and unknowns associated with the specific physical processes that would bring about the transformation.
Lack of cusps in dwarf galaxies
-------------------------------
The differences between the NFW and Einasto descriptions of pure dark matter halos are small in comparison to the differences between dark matter profiles of subhalos/satellite halos and observed dwarf galaxies which may have central cores. Most dwarf galaxies appear to exhibit significant central density cores [@1994ApJ...427L...1F; @1994Natur.370..629M; @2006ApJ...652..306S; @2007ApJ...663..948G; @2009ApJ...704.1274W; @2011MNRAS.415L..40B; @2011ApJ...742...20W; @2012MNRAS.419..184A; @2013MNRAS.429L..89A; @2013arXiv1309.2637J; @2014ApJ...783....7C]. Whether all dwarfs have a universal profile or not is still unclear because of the large uncertainties associated with the measurement of the central logarithmic density slopes [e.g., @2014arXiv1406.6079S]; conclusions seem to depend on the sample and the methods used. For example, @2014ApJ...789...63A claim that dwarfs are consistent with having a universal central logarithmic density slope of $-0.63$, with a scatter of 0.28, while @2013arXiv1309.2637J claim that individual dwarf spheroidals show considerable scatter around an average logarithmic slope of $-1$.
For reviews of this small scale controversy, see @2013arXiv1306.0913W and @2014Natur.506..171P. The suggested solutions to this set of problems are either baryonic or particle in nature. Warm dark matter does not seem to be the solution: @2012MNRAS.424.1105M [@2013MNRAS.428.3715M] and @2013MNRAS.430.2346S find that warm dark matter leads to cusps just like cold dark matter does, although with very small central cores which however are astrophysically uninteresting and much smaller than what is claimed to be present in dwarf satellites. Indeed, @2014MNRAS.439..300L conclude that warm dark matter halos and subhalos have cuspy density distributions that are well described by NFW or Einasto profiles [see also @2012MNRAS.420.2318L]. @2012MNRAS.421.3464P and @2014ApJ...792...99S argue that baryonic physics [@2012MNRAS.422.1231G], via a process similar to violent relaxation, can lead to changes in the potential, specifically, a more shallow potential and hence a core [see also @2012MNRAS.424.1275B; @2012ApJ...761...71Z]. Alternatively, self-interacting dark matter can produce large cores [@2012MNRAS.423.3740V; @2013MNRAS.430...81R; @2013MNRAS.431L..20Z; @2014MNRAS.444.3684V].
Cusp–core transition
--------------------
Real systems like dwarf spheroidals may have shallower central logarithmic slopes and larger central regions of near constant density than pure dark matter halos. How does one make the central potential less deep than found in numerical simulations? The change is likely brought about by baryonic physics [@1996MNRAS.283L..72N; @2012MNRAS.422.1231G; @2015arXiv150202036O] which may induce a transformation from a typical $-1$ logarithmic slope to a cored halo [@2012ApJ...759L..42P]. In our picture this would correspond to a change in the central potential, through some energy injection process. For DARKexp to be valid, this energy injection would have to be collisionless to preserve the assumptions behind the DARKexp derivation. It is interesting to note that the baryonic feedback process proposed by @2012MNRAS.421.3464P is not only collisionless but also dissipationless.
We here illustrate the cusp-core transition as a transformation between two DARKexp halos. We can use DARKexp as a starting point for modeling baryonic effects if they are mostly non-collisional, in which case DARKexp is also a good descriptor of the final state. Note, however, that we do not intend to model the details of the transition in terms of detailed physics [e.g., @2015arXiv150202036O] such as the mass accretion history or star-formation history of the system, the supernova energy deposition efficiency, or the stellar initial mass function.
Cusps or cores of dark matter halos are governed by the normalized dimensionless depth of the central potential well, to the extent that their potentials are shaped by collective relaxation. How different are halos with different normalized central potentials, apart from their central logarithmic density slopes? We obtain analytical expressions for their total potential energies in Appendix A. We can also express the total potential energy of a halo as $$\label{zeta}
W=-\zeta \frac{GM^2}{r_h}$$ with $\zeta\approx 0.4$ [@1969ApJ...158L.139S] (see Figure \[w\]). Here $M$ is the total mass (which is finite for DARKexp and the $\gamma$-models) and $r_h$ is the half-mass radius.
For a $\gamma$-model, the halo binding energy relative to an NFW cusp model, i.e., $\gamma_{\rm cusp}=1$, is $| \Delta W/2 W_{\rm core} | = 1/3(1-\gamma_{\rm core})$ (Equation \[Wgamma\]). This relation is plotted in Figure \[w\] along with the corresponding relation based on DARKexp. The binding energy of a cored halo ($\gamma_{\rm core}=0$) is $\sim 30\%$ less than that of a cuspy halo ($\gamma_{\rm cusp}=1$), relative to the binding energy of the cored halo. For $\gamma_{\rm core}=0.5$, $\gamma_{\rm cusp}=1.5$ the difference is $\sim 25\%$. These differences are significant, i.e., halos do not have universal structures or relative binding energies if they have different central cusp strengths.
For a dwarf galaxy with virial mass $M=3\times10^9$ M$_\odot$, $r_h=10$ kpc, the total potential energy is $W\approx3\times10^{55}$ erg. The cusp–core transition therefore requires $\sim 10^{55}$ erg or $\sim 10^{4}$ supernovae (for a supernova energy input of $\sim 10^{51}$ erg) for $\gamma_{\rm core}=0$ (higher values of $\gamma_{\rm core}$ require less energy input). This corresponds to a total stellar mass produced of $\sim 10^{6}$ M$_\odot$, with an uncertainty of about a factor of 2 depending on the initial mass function. These estimates are consistent with those of @2012ApJ...759L..42P and @2013MNRAS.433.3539G. Because supernovae do not transfer all their kinetic energy into the dark matter, the energy injection is less efficient and more supernovae are needed. On the other hand, the energy input needed is diminished when considering the actual star formation and mass assembly history of dwarf spheroidals [@2014ApJ...782L..39A; @2014ApJ...789L..17M]. Finally, observations do not strictly require $\gamma=0$, further reducing the energy requirements.
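The numbers in this estimate follow directly from Equation \[zeta\] together with the $\sim30\%$ binding-energy difference quoted above; in the short sketch below the supernova energy and the assumed $\sim100$ M$_\odot$ of stars formed per supernova are the round values implied by the text.

```python
G, Msun, kpc = 6.674e-8, 1.989e33, 3.086e21     # cgs units

M, r_h, zeta = 3e9 * Msun, 10.0 * kpc, 0.4
W = zeta * G * M**2 / r_h      # |W| ~ 3e55 erg
dW = W / 3.0                   # ~30% difference for a gamma = 1 -> 0 transition
N_SN = dW / 1e51               # number of supernovae at 1e51 erg each
M_star = N_SN * 100.0          # stellar mass in Msun, for ~1 SN per 100 Msun formed
print(f"|W| ~ {W:.1e} erg, dW ~ {dW:.1e} erg, N_SN ~ {N_SN:.0e}, M* ~ {M_star:.0e} Msun")
```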
As discussed in Section 3.2, dark matter halos of clusters of galaxies also appear to have central logarithmic slopes that are more shallow than $-1$ [@2004ApJ...604...88S; @2009ApJ...703L.132Z; @2013ApJ...765...24N; @2013ApJ...765...25N]. The effect seems to be generic [@2013ApJ...765...25N], giving central logarithmic slopes in the range $0.2$–0.8. While some of this may be due to substructure, line of sight effects, or elongated structures (e.g., due to merging), it does appear that central profiles more shallow than NFW are observed in clusters of galaxies. However, for a massive cluster of galaxies with $M=1\times10^{15}$ M$_\odot$ and $r_h=500$ kpc, the energy input required is $\sim 10^{64}$ erg, corresponding to an unrealistic total stellar mass of $10^{15}$ M$_\odot$, if supernovae were to provide the energy input. It is therefore unlikely that supernova feedback is the origin of cores in clusters of galaxies. It is possible that dynamical friction [@2001ApJ...560..636E] of infalling satellites or black holes, active galactic nuclei [@2012MNRAS.422.3081M] or cluster mergers [@2013ApJ...765...25N] [although see @2005MNRAS.360..892D] may provide the energy injection needed to erase the central cusp. Some clusters may also be born with a shallow potential, although it is noteworthy that a fair fraction of simulated cluster dark halos have central logarithmic slopes steeper than $-1$.
Conclusions\[conc\]
===================
It has become standard practice to fit spherically averaged density profiles of dark matter halos, galaxies and clusters with phenomenological models, such as the NFW or its variants, or the Einasto $\alpha$-model. We suggest that the reason these functions fit well is because they happen to resemble DARKexp, which is a theoretically derived model for collisionless self-gravitating systems, based on statistical mechanical arguments.
Moreover, in this paper we have argued that non-universality of dark-matter halos is a natural expectation of DARKexp. If dark-matter halos would have logarithmic slopes at small radii robustly resolved in simulations equal to $-1$, this would require some fine tuning of the model. The expectation of a range of properties appears to be borne out in simulations, and in observations of dwarf galaxies, and clusters of galaxies. In this picture, the range of central logarithmic slopes inferred can be seen as a reflection of a range in their central normalized potentials, or, equivalently, their total normalized binding energies. This insight is useful when quantifying the extent to which halos differ as a result of their formation history and also can be used to analytically relate halos with different cusps to each other, such as required in the cusp-core transition. We do not suggest that the central potential is set up by some external process which then in turn causally sets the density profile. Rather, the central potential and the logarithmic slope of the density profiles are both signatures of an underlying formation process which determines the final equilibrium state. Future data on both dwarfs and clusters will be necessary to more fully examine the applicability of DARKexp to the whole range of systems.
Because DARKexp does not have an analytical density profile, and because using tables of DARKexp $\rho(r)$ may not always be an option, we have identified phenomenological fitting functions that match DARKexp reasonably well, and have presented relations between DARKexp’s $\phi_0$ and the best fitting $\gamma$ of the $\gamma$-model, and $\alpha$ of the Einasto model. The resulting DARKexp [@2010ApJ...722..851H] density profiles are well represented by Dehnen–Tremaine $\gamma$-models [@1993MNRAS.265..250D; @1994AJ....107..634T]. The main advantage of these models is that they allow for a range of values of the central cusp logarithmic slope and so can be used to model both the standard NFW-type halos with $d\log \rho(r)/d\log r \approx -1$ and shallower (or steeper) halos. As shown in this paper (Figures \[phi0-gamma\] and \[density-sim\]) and in other recent work [@2008MNRAS.387..536G; @2008MNRAS.391.1685S; @2013MNRAS.428.1696V; @2013MNRAS.432.1103L], cold dark matter numerical simulations by themselves generate a significant dispersion in $\alpha$ of simulated halos, reflecting different formation, accretion, or tidal stripping histories. DARKexp naturally accounts for such non-universality in profile shapes. In Figure \[density-sim\] we demonstrate the non-universality of the pure dark matter halos. Both the shallower and the steeper systems are well fit by DARKexp. While we have demonstrated significant non-universality in simulations of massive dark-matter halos, with a similar dispersion noticeable in simulations of lower mass halos [e.g., @2013MNRAS.432.1103L], it would be important to quantify the degree of non-universality and the range of central potentials realized as a function of halo mass [see also @2014MNRAS.441.2986D].
In Section 4 we compare the potential energies associated with the core and cusp systems and discuss the required amount of energy and the possible injection mechanisms to bring about the transformation from a steeper pure dark matter system to a shallower observed system. The cusp–core transition cannot be simply modeled as a ‘one-size-fits-all’ transition from an NFW halo (or a DARKexp with $\phi_0\sim 3-4$) to a cored halo (or a DARKexp with $\phi_0<2$), because for both dwarf galaxies [@2014ApJ...783....7C] and clusters [@2013ApJ...765...25N] there is a real dispersion in their observed properties. As highlighted in this paper the same is true for simulated massive dark-matter halos [see @2013MNRAS.432.1103L; @2013MNRAS.431.1220D for other mass ranges]. When modeling any transition of the central logarithmic slope properties as a result of some energy injection process, it is important to consider the significant dispersion in both the initial dark matter halos and those observed or affected by energy injection processes.
Finally, we note that the $\phi_0$–$\gamma$ relation (Figure \[phi0-gamma\]) can be directly tested in numerical simulations, e.g., dissipationless collapse or cosmological simulations. Halos with steeper profiles should have deeper central normalized potentials (possibly due to different formation times and accretion histories). For clusters of galaxies the $\phi_0$–$\gamma$ relation can be probed using gravitational redshift measurements [@2011Natur.477..567W] of the central potential and gravitational lensing measurements of the density profiles [@2011ApJ...738...41U; @2013MNRAS.436.2616B; @2015arXiv150704385U].
We would like to thank Drew Newman for kindly providing us with their galaxy cluster fits and Nicola Amorisco, Arianna Di Cintio, Marceau Limousin, Keiichi Umetsu, and Jesús Zavala for comments. The Dark Cosmology Centre (DARK) is funded by the Danish National Research Foundation. L.L.R.W. would like to thank DARK for their hospitality.
A. Potential energy of cored and cuspy DARKexp and $\gamma$-models
===============================================================
For DARKexp, $N(\epsilon) = \exp(\phi_0-\epsilon)-1$, $$M(\phi_0)=\int_0^{\phi_0} N(\epsilon) d\epsilon = \exp{\phi_0}-(\phi_0+1)$$ and $$K+2W=-\int_0^{\phi_0} \epsilon N(\epsilon) d\epsilon ,$$ where $K$ is the total kinetic energy and $W$ is the total potential energy. Using the virial theorem, $W=-2K$, $$\frac{3}{2} W=-M(\phi_0)+\frac{\phi_0^2}{2}.$$ Assuming unit-mass DARKexp models, the energy difference between a cusp and a core relative to the current core is $$\label{WDARK}
\left | \frac{\Delta W}{2 W_{\rm core}} \right | = \frac{1}{2} \left ( \frac{2-\left. \frac{\phi_0^2}{M(\phi_0)}\right |_{\rm cusp}}{2-\left. \frac{\phi_0^2}{M(\phi_0)}\right |_{\rm core}} -1 \right ).$$
For a $\gamma$-model normalized to unit mass, $\rho = \frac{3-\gamma}{4\pi}r^{-\gamma}(1+r)^{\gamma-4}$, $$W=-(10-4\gamma)^{-1}$$ and $$\label{Wgamma}
\left | \frac{\Delta W}{2 W_{\rm core}} \right | =
\frac{1}{2}
\left ( \frac{5-2\gamma_{\rm core}}{5-2\gamma_{\rm cusp}} -1 \right ).$$
For the unit-mass $\gamma$-models, the relation between central potential and inner logarithmic slope is $$\phi_\gamma=(2-\gamma)^{-1}.$$ The relevant radii are also directly related to $\gamma$ through $$r_{-2}=1-\gamma/2$$ and the half-mass radius, $$r_h= (2^{1/(3-\gamma)}-1)^{-1}$$ [@1993MNRAS.265..250D]. Note that $\phi_\gamma r_{-2} = 0.5$ for $\gamma$-models; $\phi_0 r_{-2} \approx 0.5$ also holds for a unit mass DARKexp, further demonstrating that the two models are similar.
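The formulas above lend themselves to a quick numerical check. The sketch below evaluates Equations (\[WDARK\]) and (\[Wgamma\]) for unit-mass models; the particular values of $\phi_0$ and $\gamma$ for the cusp and core configurations are illustrative choices. For instance, going from an NFW-like $\gamma_{\rm cusp}=1$ to $\gamma_{\rm core}=0$ gives $|\Delta W / (2 W_{\rm core})| = 1/3$.

```python
import numpy as np

# Evaluate the unit-mass energy-difference formulas of Appendix A.
# The phi_0 and gamma values passed below are illustrative examples.

def M_darkexp(phi0):
    """DARKexp integrated N(eps): M(phi0) = exp(phi0) - (phi0 + 1)."""
    return np.exp(phi0) - (phi0 + 1.0)

def dW_darkexp(phi0_cusp, phi0_core):
    """|Delta W / (2 W_core)| for unit-mass DARKexp models (Eq. WDARK)."""
    f = lambda p: 2.0 - p**2 / M_darkexp(p)
    return 0.5 * (f(phi0_cusp) / f(phi0_core) - 1.0)

def dW_gamma(gamma_cusp, gamma_core):
    """|Delta W / (2 W_core)| for unit-mass gamma-models (Eq. Wgamma)."""
    return 0.5 * ((5.0 - 2.0 * gamma_core) / (5.0 - 2.0 * gamma_cusp) - 1.0)

print(dW_darkexp(4.0, 1.5))   # deep (cuspy) vs shallow (cored) DARKexp
print(dW_gamma(1.0, 0.0))     # gamma = 1 cusp -> gamma = 0 core: 1/3
```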
B. Phenomenological fits to DARKexp
================================
Figure \[einasto\] shows a comparison between DARKexp and Einasto profiles. The correspondence is very good at small radii but poor at large radii. The poorer fit at larger radii is because the Einasto profile does not have a ‘built-in’ outer logarithmic slope of $-4$, except for values around the canonical $\alpha=0.17$. The profiles were matched in density, so the corresponding blue and green curves intersect each other at $r_a$. The dashed curves in the same plot are the corresponding logarithmic density slopes; here too Einasto models track the respective DARKexp very well, although the blue and green dashed curves do not intersect each other at $r_a$ because the models can be matched either in density, or logarithmic density slope, not both.
The left panel of Figure \[benchmark\] shows DARKexp of $\phi_0=2,4,6$ as solid blue curves, and the matched $\gamma$-models (Equation \[gamod\]) as red solid curves. These were matched in density. The $\gamma$-models approximate DARKexp very well over nearly 4 decades in radius. The right panel of Figure \[benchmark\] shows the same DARKexp density profiles with the $\gamma$-models that were matched in logarithmic density slope, at $r_a$. These too provide very good fits to DARKexp.
Figure \[beta\] shows the effect of varying $\beta$ in the generalized Hernquist model (Equation \[hernmod\]), between $0.5$ and $2.0$ (here we show model pairs matched only in density, not logarithmic slope). A value of $\beta=0.7$ appears to provide a slightly better fit than $\beta=1$ but the improvement is not sufficiently significant to justify the introduction of another parameter. Note, however, that in the inner parts the correspondence with DARKexp is inferior to that of the Einasto profile, despite the fact that the generalized Hernquist model has an extra shape parameter. Overall, $\gamma$-models appear to be useful representations of DARKexp.
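For readers who want to reproduce this kind of comparison, the sketch below implements the unit-mass $\gamma$-model in the form used in Appendix A and the standard Einasto profile, together with a numerical logarithmic density slope. The chosen values of $\gamma$, $\alpha$ and the Einasto normalization are illustrative, and matching to tabulated DARKexp $\rho(r)$ at a radius $r_a$ (in density or in slope) is left to the reader.

```python
import numpy as np

# Sketch of two of the phenomenological profiles discussed above: the
# unit-mass gamma-model (Appendix A) and the standard Einasto profile.
# Normalizations and the example parameter values are illustrative only.

def rho_gamma(r, gamma):
    """Unit-mass gamma-model: rho = (3-gamma)/(4 pi) r^-gamma (1+r)^(gamma-4)."""
    return (3.0 - gamma) / (4.0 * np.pi) * r**(-gamma) * (1.0 + r)**(gamma - 4.0)

def rho_einasto(r, alpha, rho_m2=1.0, r_m2=1.0):
    """Einasto profile, with logarithmic slope -2 at r = r_m2."""
    return rho_m2 * np.exp(-2.0 / alpha * ((r / r_m2)**alpha - 1.0))

def logslope(rho, r, **kw):
    """Numerical logarithmic density slope dlog(rho)/dlog(r)."""
    return np.gradient(np.log(rho(r, **kw)), np.log(r))

r = np.logspace(-3, 2, 500)
print(logslope(rho_gamma, r, gamma=1.0)[[0, -1]])     # ~ -1 inside, -> -4 outside
print(logslope(rho_einasto, r, alpha=0.17)[[0, -1]])  # shallower centre, steeper falloff
```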
References
==========
, J. J., [et al.]{} 2014, , 789, 63
, N. C., [Agnello]{}, A., & [Evans]{}, N. W. 2013, , 429, L89
, N. C., & [Evans]{}, N. W. 2012, , 419, 184
, N. C., [Zavala]{}, J., & [de Boer]{}, T. J. L. 2014, , 782, L39
, L. J., [Lima]{}, M., & [Sodr[é]{}]{}, L. 2013, , 436, 2616
, M., [Bullock]{}, J. S., & [Kaplinghat]{}, M. 2011, , 415, L40
, C. B., [Stinson]{}, G., [Gibson]{}, B. K., [Wadsley]{}, J., & [Quinn]{}, T. 2012, , 424, 1275
, M. L. M., [et al.]{} 2014, , 783, 7
, W. 1993, , 265, 250
—. 2005, , 360, 892
, A., [Brook]{}, C. B., [Dutton]{}, A. A., [Macci[ò]{}]{}, A. V., [Stinson]{}, G. S., & [Knebe]{}, A. 2014, , 441, 2986
, A., [Knebe]{}, A., [Libeskind]{}, N. I., [Brook]{}, C., [Yepes]{}, G., [Gottl[ö]{}ber]{}, S., & [Hoffman]{}, Y. 2013, , 431, 1220
, J., & [Moore]{}, B. 2011, Advanced Science Letters, 4, 297
, J., [Moore]{}, B., & [Stadel]{}, J. 2004, , 353, 624
—. 2005, , 433, 389
, J., [Zemp]{}, M., [Moore]{}, B., [Stadel]{}, J., & [Carollo]{}, C. M. 2005, , 364, 665
, J. 1965, Trudy Astrofizicheskogo Instituta Alma-Ata, 5, 87
, A., [Shlosman]{}, I., & [Hoffman]{}, Y. 2001, , 560, 636
, R. A., & [Primack]{}, J. R. 1994, , 427, L1
, L., [Navarro]{}, J. F., [Cole]{}, S., [Frenk]{}, C. S., [White]{}, S. D. M., [Springel]{}, V., [Jenkins]{}, A., & [Neto]{}, A. F. 2008, , 387, 536
, S., [Rocha]{}, M., [Boylan-Kolchin]{}, M., [Bullock]{}, J. S., & [Lally]{}, J. 2013, , 433, 3539
, G., [Wilkinson]{}, M. I., [Wyse]{}, R. F. G., [Kleyna]{}, J. T., [Koch]{}, A., [Evans]{}, N. W., & [Grebel]{}, E. K. 2007, , 663, 948
, F., [et al.]{} 2012, , 422, 1231
, L. 1990, , 356, 359
, J., & [Madsen]{}, J. 1991, , 253, 703
, J., & [Williams]{}, L. L. R. 2010, , 722, 851
, W. 1987, in IAU Symposium, Vol. 127, Structure and Dynamics of Elliptical Galaxies, ed. P. T. [de Zeeuw]{}, 511
, J. R., & [Gebhardt]{}, K. 2013, , 775, L30
, A., [Yepes]{}, G., [Gottlober]{}, S., [Prada]{}, F., & [Hess]{}, S. 2014, arXiv:1411.4001
, A. A., [Trujillo-Gomez]{}, S., & [Primack]{}, J. 2011, , 740, 102
, M., [et al.]{} 2008, , 489, 23
, M. R., [Frenk]{}, C. S., [Eke]{}, V. R., [Jenkins]{}, A., [Gao]{}, L., & [Theuns]{}, T. 2014, , 439, 300
, M. R., [et al.]{} 2012, , 420, 2318
, A. D., [Navarro]{}, J. F., [White]{}, S. D. M., [Boylan-Kolchin]{}, M., [Springel]{}, V., [Jenkins]{}, A., & [Frenk]{}, C. S. 2011, , 415, 3895
, A. D., [et al.]{} 2013, , 432, 1103
, A. V., [Paduroiu]{}, S., [Anderhalden]{}, D., [Schneider]{}, A., & [Moore]{}, B. 2012, , 424, 1105
—. 2013, , 428, 3715
, P., [Shen]{}, S., & [Governato]{}, F. 2014, , 789, L17
, D., [Teyssier]{}, R., [Moore]{}, B., & [Wentz]{}, T. 2012, , 422, 3081
, D., [Graham]{}, A. W., [Moore]{}, B., [Diemand]{}, J., & [Terzi[ć]{}]{}, B. 2006, , 132, 2685
, B. 1994, , 370, 629
, J. F., [Eke]{}, V. R., & [Frenk]{}, C. S. 1996, , 283, L72
, J. F., [Frenk]{}, C. S., & [White]{}, S. D. M. 1997, , 490, 493
, J. F., [et al.]{} 2004, , 349, 1039
—. 2010, , 402, 21
, A. F., [et al.]{} 2007, , 381, 1450
, A. B., [Treu]{}, T., [Ellis]{}, R. S., & [Sand]{}, D. J. 2013, , 765, 25
, A. B., [Treu]{}, T., [Ellis]{}, R. S., [Sand]{}, D. J., [Nipoti]{}, C., [Richard]{}, J., & [Jullo]{}, E. 2013, , 765, 24
, J., [Boylan-Kolchin]{}, M., [Bullock]{}, J. S., [Hopkins]{}, P. F., [Ker[ě]{}s]{}, D., [Faucher-Gigu[è]{}re]{}, C.-A., [Quataert]{}, E., & [Murray]{}, N. 2015, arXiv:1502.02036
, J., [Pontzen]{}, A., [Walker]{}, M. G., & [Koposov]{}, S. E. 2012, , 759, L42
, A., & [Governato]{}, F. 2012, , 421, 3464
—. 2014, , 506, 171
, C., [Navarro]{}, J. F., [Jenkins]{}, A., [Frenk]{}, C. S., [White]{}, S. D. M., [Springel]{}, V., [Stadel]{}, J., & [Quinn]{}, T. 2003, , 338, 14
, K., [et al.]{} 2013, Astronomische Nachrichten, 334, 691
, M., [Peter]{}, A. H. G., [Bullock]{}, J. S., [Kaplinghat]{}, M., [Garrison-Kimmel]{}, S., [O[ñ]{}orbe]{}, J., & [Moustakas]{}, L. A. 2013, , 430, 81
, D. J., [Treu]{}, T., [Smith]{}, G. P., & [Ellis]{}, R. S. 2004, , 604, 88
, S., [Gao]{}, L., [Theuns]{}, T., & [Frenk]{}, C. S. 2013, , 430, 2346
, S., [Madau]{}, P., [Conroy]{}, C., [Governato]{}, F., & [Mayer]{}, L. 2014, , 792, 99
, Jr., L. 1969, , 158, L139
, V., [et al.]{} 2008, , 391, 1685
, L. E., [Bullock]{}, J. S., [Kaplinghat]{}, M., [Kravtsov]{}, A. V., [Gnedin]{}, O. Y., [Abazajian]{}, K., & [Klypin]{}, A. A. 2006, , 652, 306
, L. E., [Frenk]{}, C. S., & [White]{}, S. D. M. 2014, arXiv:1406.6079
, A., [Kravtsov]{}, A. V., [Gottl[ö]{}ber]{}, S., & [Klypin]{}, A. A. 2004, , 607, 125
, S., [Richstone]{}, D. O., [Byun]{}, Y.-I., [Dressler]{}, A., [Faber]{}, S. M., [Grillmair]{}, C., [Kormendy]{}, J., & [Lauer]{}, T. R. 1994, , 107, 634
, K., [Broadhurst]{}, T., [Zitrin]{}, A., [Medezinski]{}, E., [Coe]{}, D., & [Postman]{}, M. 2011, , 738, 41
, K., [Zitrin]{}, A., [Gruen]{}, D., [Merten]{}, J., [Donahue]{}, M., & [Postman]{}, M. 2015, arXiv:1507.04385
, C. A., [Helmi]{}, A., [Starkenburg]{}, E., & [Breddels]{}, M. A. 2013, , 428, 1696
, M., [Zavala]{}, J., & [Loeb]{}, A. 2012, , 423, 3740
, M., [Zavala]{}, J., [Simpson]{}, C., & [Jenkins]{}, A. 2014, , 444, 3684
, M. G., [Mateo]{}, M., [Olszewski]{}, E. W., [Pe[ñ]{}arrubia]{}, J., [Wyn Evans]{}, N., & [Gilmore]{}, G. 2009, , 704, 1274
, M. G., & [Pe[ñ]{}arrubia]{}, J. 2011, , 742, 20
, D. H., [Bullock]{}, J. S., [Governato]{}, F., [Kuzio de Naray]{}, R., & [Peter]{}, A. H. G. 2013, arXiv:1306.0913
, L. L. R., [Barnes]{}, E. I., & [Hjorth]{}, J. 2012, , 423, 3589
, L. L. R., & [Hjorth]{}, J. 2010, , 722, 856
, L. L. R., [Hjorth]{}, J., & [Wojtak]{}, R. 2010, , 725, 282
—. 2014, , 783, 13
, R., [Hansen]{}, S. H., & [Hjorth]{}, J. 2011, , 477, 567
, J., [Vogelsberger]{}, M., & [Walker]{}, M. G. 2013, , 431, L20
, A., & [Broadhurst]{}, T. 2009, , 703, L132
, A., [et al.]{} 2012, , 761, 71
[^1]: For convenience we are making numerically generated models for $\phi_0 = 1$–10 available for download at the DARKexp website http://www.dark-cosmology.dk/DARKexp/.
[^2]: The simulation is publicly available through the MultiDark database (http://www.multidark.org). See @2013AN....334..691R for details of the database.
---
abstract: 'Axisymmetric, orbit-based dynamical models are used to derive dark matter scaling relations for Coma early-type galaxies. From faint to bright galaxies halo core-radii and asymptotic circular velocities increase. Compared to spirals of the same brightness, the majority of Coma early-types – those with old stellar populations – have similar halo core-radii but more than 2 times larger asymptotic halo velocities. The average dark matter density inside $2 \, {r_\mathrm{eff}}$ decreases with increasing luminosity and is $6.8$ times larger than in disk galaxies of the same $B$-band luminosity. Compared at the same stellar mass, dark matter densities in ellipticals are $13.5$ times higher than in spirals. Different baryon concentrations in ellipticals and spirals cannot explain the higher dark matter density in ellipticals. Instead, the assembly redshift (1+z) of Coma early-type halos is likely about two times larger than of comparably bright spirals. Assuming that local spirals typically assemble at a redshift of one, the majority of bright Coma early-type galaxy halos must have formed around $z \approx 2-3$. For about half of our Coma galaxies the assembly redshifts match with constraints derived from stellar populations. We find dark matter densities and estimated assembly redshifts of our observed Coma galaxies in reasonable agreement with recent semi-analytic galaxy formation models.'
author:
- 'J. Thomas, R. P. Saglia, R. Bender'
- 'D. Thomas'
- 'K. Gebhardt'
- 'J. Magorrian'
- 'E. M. Corsini'
- 'G. Wegner'
title: 'Dark matter scaling relations and the assembly epoch of Coma early-type galaxies'
---
Introduction {#sec:intro}
============
Present-day elliptical galaxies are known to host mostly old stellar populations (@Tra00, @Ter02, @DTho05). Whether their stars have formed in situ or whether ellipticals assembled their present-day morphology only over time (for example by mergers) is less clear. An important clue on the assembly redshift of a galaxy is provided by its dark matter density. For example, in the simple spherical collapse model [@Gun72] the average density of virialized halos is proportional to the mean density of the universe at the formation epoch: halos which form earlier become denser. Similarly, in cosmological $N$-body simulations, the concentration (and, thus, the inner density) is found to be higher in halos that have assembled earlier (e.g. @Nav96, @Wec02). In addition to this connection between formation epoch and halo density, the final halo mass distribution also depends on the interplay between dark matter and baryons during the actual galaxy formation process (e.g. @Blu86, @Bin01). Then, the properties of galaxy halos provide valuable information about when and how a galaxy has assembled its baryons.
Despite its cosmological relevance, the radial distribution of dark (and luminous) mass in early-type galaxies is not well known: because of the lack of cold gas as a dynamical tracer, masses are difficult to determine. Stellar dynamical models require the exploration of a galaxy’s orbital structure and have only recently become available for axisymmetric or more general systems (@Cre99, @Geb00, @Tho04, @Val04, @Cap05, @deLor07, @Cha08, @vdB08). Scaling relations for the inner dark matter distribution in early-types have by now only been reported for round and non-rotating galaxies (@Kr00, @G01) and spirals (@PSS1 [@PSS2] and @Kor04). The aim of the present paper is to provide empirical scaling relations for generic cluster early-types (flattened, with different degrees of rotation). In particular, this paper is focussed on the inner dark matter density and its implications on the assembly redshift of elliptical galaxy halos.
The paper is organized as follows. In Sec. \[sec:data\], we review the galaxy sample and its modelling. Dark matter scaling relations are presented in Sec. \[sec:scaling\]. Sec. \[sec:density\] is dedicated to the dark matter density. The effect of baryons on the dark matter density is discussed in Sec. \[sec:contraction\], while Sec. \[sec:assembly\] deals with the halo assembly redshift. In Sec. \[sec:semi-analytic\] our results are compared to semi-analytic galaxy formation models. A summary is given in Sec. \[sec:summary\]. In the following, we assume that the Coma cluster is at a distance of $d = 100 \, {\mathrm{Mpc}}$.
Galaxy sample, models and basic definitions {#sec:data}
===========================================
The dark halo parameters discussed in this paper are derived from the axisymmetric, orbit-based dynamical models of bright Coma galaxies presented in @Tho07. The original sample comprises two cD galaxies, nine ordinary giant ellipticals and six lenticular/intermediate type galaxies with luminosities between $M_B = -18.79$ and $M_B = - 22.56$. The spectroscopic and photometric observations are discussed in @Jor96, @Meh00, @Weg02 and @Cor08. Our implementation of Schwarzschild’s (1979) orbit superposition technique for axisymmetric potentials is described in @Tho04 [@Tho05]. For a detailed discussion of all the galaxy models the reader is referred to @Tho07.
Three of the 17 galaxies from @Tho07 are excluded from the analysis below. Firstly, we do not consider the two central cD galaxies (GMP2921 and GMP3329; GMP numbers from @GMP), because their dark matter profiles may be affected by the cluster halo. Secondly, we omit the E/S0 galaxy GMP1990, whose mass-to-light ratio is constant out to $3 \, {r_\mathrm{eff}}$. The galaxy either has no dark matter within this radius, or its dark matter density follows closer the stellar light profile than in any other Coma galaxy. In either case, the mass structure of this object is distinct from the rest of the sample galaxies. In addition to the remaining 14 galaxies we consider two further Coma galaxies for which we collected data recently (GMP3414, GMP4822). The models of these galaxies are summarized in App. \[app:3414\].
Dynamical models similar to those used here have been applied to the inner regions of ellipticals, where it has been assumed that mass follows light (e.g. @Geb03, @Cap05). In contrast, our models explicitly include a dark matter component (cf. @Tho07). We probed two parametric profiles. Firstly, logarithmic halos $$\label{logdef}
\rho_\mathrm{DM}(r) = \frac{{v_h}^2}{4 \pi G} \frac{3 {r_h}^2+r^2}{({r_h}^2+r^2)^2},$$ which possess a constant-density core of size ${r_h}$ and have an asymptotically constant circular velocity ${v_h}$. The central density of these halos reads $$\label{logrho0def}
{\rho_h}= \frac{3 {v_h}^2}{4 \pi G {r_h}^2}.$$ Secondly, NFW-profiles $$\label{nfwdef}
\rho_\mathrm{DM}(r) \propto \frac{1}{r(r+r_s)^2},$$ which are found in cosmological $N$-body simulations [@Nav96]. The majority of Coma galaxies are better fit with logarithmic halos, but the significance over NFW halo profiles is marginal. Even when the best fit is obtained with an NFW halo, the inner regions are still dominated by stellar mass (cf. @Tho07). In this sense, our models maximize the (inner) stellar mass.
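As a quick reference, the sketch below writes out both halo families in code, together with the circular velocity of the logarithmic halo (which tends to ${v_h}$ for $r \gg {r_h}$) and its central density from equation (\[logrho0def\]). The NFW normalization $\rho_s$ and the numerical parameter values are illustrative, not fitted quantities.

```python
import numpy as np

# The two dark halo profiles probed in the models: the cored logarithmic
# halo (Eq. logdef) and the NFW profile (Eq. nfwdef). G is chosen so that
# v is in km/s for M in Msun and r in kpc. Parameter values are illustrative.
G = 4.301e-6   # kpc (km/s)^2 / Msun

def rho_log(r, v_h, r_h):
    """Logarithmic-halo density; its central value is 3 v_h^2 / (4 pi G r_h^2)."""
    return v_h**2 / (4.0 * np.pi * G) * (3.0 * r_h**2 + r**2) / (r_h**2 + r**2)**2

def vcirc_log(r, v_h, r_h):
    """Circular velocity of the logarithmic halo: v_h r / sqrt(r_h^2 + r^2)."""
    return v_h * r / np.sqrt(r_h**2 + r**2)

def rho_nfw(r, rho_s, r_s):
    """NFW density with an explicit normalization rho_s."""
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

r = np.logspace(-1, 2, 4)                  # kpc
print(vcirc_log(r, 500.0, 20.0))           # approaches v_h = 500 km/s
print(rho_log(0.0, 500.0, 20.0))           # central density, Msun / kpc^3
```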
In Sec. \[sec:scaling\] we will only discuss results based on logarithmic halo fits (i.e. we use the halo parameters from columns 5 and 6 in Tab. 2 of @Tho07 and columns 3 and 4 in Tab. \[tab:3414\] of App. \[app:3414\], respectively). While these are not necessarily the more realistic profiles, they minimize systematics in the comparison with published scaling relations for spirals that were performed using cored profiles similar to our logarithmic halos. The NFW fits are used in Sec. \[sec:contraction\].
The $B$-band luminosities of Coma galaxies used in this paper are taken from Hyperleda. We adopt a standard uncertainty of $\Delta M_B = 0.3$ to account for zero-point uncertainties, systematic errors in the sky subtraction, seeing convolution, profile extrapolation and others. Effective radii are taken from @Jor95 and @Meh00. Here we estimate the errors to be $\Delta \log {r_\mathrm{eff}}= 0.1$. This is slightly higher than the uncertainties given in @Jor95, but accounts for possible systematic errors [@Sag97]. Stellar masses were computed from our best-fit stellar mass-to-light ratios $\Upsilon$ and $R$-band luminosities of @Meh00. In case the best fit is obtained with a logarithmic halo, $\Upsilon$ is taken from column 4 of Tab. 2 in @Tho07. In case of an NFW fit, $\Upsilon$ comes from column 8 of the same table. The best-fit stellar mass-to-light ratios of GMP3414 and GMP4822 are given in column 1 of Tab. \[tab:3414\] (cf. App. \[app:3414\]).
[lcccc]{} GMP & $\log L_B/L_\odot$ & $\log {M_\ast}/M_\odot$ & $\log {r_h}/{\mathrm{kpc}}$ & $\log {v_h}/({\mathrm{km \, s^{-1}}})$\
0144 & $10.61$ & $11.56 \pm 0.12$ & $ 0.64 \pm 0.31 $ & $ 2.33 \pm 0.10$\
0282 & $10.46$ & $11.60 \pm 0.12$ & $ 1.23 \pm 0.24 $ & $ 2.70 \pm 0.10$\
0756 & $10.89$ & $11.13 \pm 0.12$ & $ 1.10 \pm 0.09 $ & $ 2.33 \pm 0.10$\
1176 & $10.31$ & $10.73 \pm 0.13$ & $ 0.53 \pm 0.18 $ & $ 2.30 \pm 0.11$\
1750 & $10.75$ & $11.58 \pm 0.12$ & $ 1.27 \pm 0.95 $ & $ 2.70 \pm 0.23$\
2417 & $10.60$ & $11.43 \pm 0.12$ & $ 1.38 \pm 0.59 $ & $ 2.70 \pm 0.38$\
2440 & $10.30$ & $11.23 \pm 0.12$ & $ 1.04 \pm 0.15 $ & $ 2.68 \pm 0.13$\
3414 & $10.13$ & $11.02 \pm 0.15$ & $ 0.99 \pm 0.54 $ & $ 2.55 \pm 0.26$\
3510 & $10.34$ & $11.28 \pm 0.13$ & $ 1.07 \pm 0.39 $ & $ 2.46 \pm 0.21$\
3792 & $10.58$ & $11.56 \pm 0.13$ & $ 1.18 \pm 0.35 $ & $ 2.74 \pm 0.22$\
3958 & $ 9.70$ & $10.81 \pm 0.13$ & $ 0.83 \pm 0.35 $ & $ 2.44 \pm 0.29$\
4822 & $10.70$ & $11.69 \pm 0.16$ & $ 1.11 \pm 0.50 $ & $ 2.74 \pm 0.37$\
4928 & $11.08$ & $12.06 \pm 0.14$ & $ 1.46 \pm 0.39 $ & $ 2.71 \pm 0.19$\
5279 & $10.72$ & $11.59 \pm 0.12$ & $ 1.45 \pm 0.48 $ & $ 2.68 \pm 0.28$\
5568 & $10.79$ & $11.89 \pm 0.12$ & $ 1.82 \pm 0.39 $ & $ 2.81 \pm 0.20$\
5975 & $10.47$ & $11.04 \pm 0.12$ & $ 0.23 \pm 0.30 $ & $ 2.30 \pm 0.11$
Dark matter scaling relations {#sec:scaling}
=============================
![image](f1.eps){width="135mm"}
Fig. \[fig:rhvh\] shows the scalings of dark halo core-radii ${r_h}$ and halo asymptotic circular velocities ${v_h}$ with $B$-band luminosity $L_B$ and stellar mass ${M_\ast}$ (the corresponding galaxy parameters with errors are listed in Tab. \[tab:gals1\]). Both halo core-sizes and halo circular velocities tend to increase with luminosity and mass. The case for a correlation between ${v_h}$ and $L_B$ is weak if the sample as a whole is considered (cf. column 8 of Tab. \[tab:fits\]). However, four galaxies (GMP0144, GMP0756, GMP1176 and GMP5975) separate from the rest of the sample galaxies in having both noticeably smaller ${r_h}$ and ${v_h}$. These galaxies are shown in light color in Fig. \[fig:rhvh\]. As a general trend, halo parameters tend to scale more tightly with luminosity and mass when these galaxies are omitted. The solid lines in Fig. \[fig:rhvh\] show corresponding log-linear fits[^1]. For comparison, in Tab. \[tab:fits\] we give both fits to all Coma galaxies and fits to the subsample without the four galaxies offset in Fig. \[fig:rhvh\]. The difference between these four galaxies and the rest of the sample is further discussed below.
The logarithmic halos of equation (\[logdef\]) have two free parameters. Any pair of ${r_h}$, ${v_h}$ or ${\rho_h}$ characterizes a specific halo. Fig. \[fig:rhoh\_rh\] shows a plot of ${\rho_h}$ versus ${r_h}$. Both halo parameters are clearly correlated. A linear relation fits the points with a minimum $\chi^2_\mathrm{red} = 0.41$ (per degree of freedom; cf. Tab. \[tab:fits\]). This rather low value partly derives from a degeneracy between the halo parameters in the dynamical modeling (e.g. @Ger98, @Tho04) which correlates the errors in both quantities. In Tab. \[tab:fits\], such a correlation between the errors is not taken into account and the $\chi^2_\mathrm{red}$ might thus be underestimated. A $\chi^2_\mathrm{red}$ much larger than unity would indicate some intrinsic scatter in Fig. \[fig:rhoh\_rh\], whereas the low $\chi^2_\mathrm{red}$ quoted in Tab. \[tab:fits\] formally rules out any intrinsic scatter. Note that dark matter halos in cosmological $N$-body simulations can be approximated by a two-parameter family of halo models, where the parameters are correlated qualitatively in a similar way to that revealed by Fig. \[fig:rhoh\_rh\] (e.g. @Nav96, @Wec02), but with some intrinsic scatter.
The four galaxies offset in Fig. \[fig:rhvh\] are also slightly offset in Fig. \[fig:rhoh\_rh\]. However, given the large uncertainties, this is not significant and Fig. \[fig:rhoh\_rh\] is consistent with the halos of the four galaxies belonging to the same one-parameter family as established by the remaining Coma galaxies. This implies that the four galaxies primarily differ in the amount of stellar light (and stellar mass, respectively) that is associated with a given halo. Notably, the four galaxies have stellar ages $\tau_0 < 6 \, {\mathrm{Gyr}}$ [@Meh03], while all other Coma galaxies are significantly older (mostly $\tau_0 \ga 10 \, {\mathrm{Gyr}}$). A mere stellar population effect, however, is unlikely to explain the offset of the four galaxies. In that case, differences from other Coma galaxies should vanish when galaxies are compared at the same stellar mass, which is not consistent with Fig. \[fig:rhvh\]b/d. It should be noted, though, that the stellar masses used here are taken from the dynamical models. A detailed comparison with mass-to-light ratios from stellar population synthesis models is planned for a future publication (Thomas et al., in preparation).
In Fig. \[fig:rh\_reff\]a we plot ${r_h}$ against ${r_\mathrm{eff}}$. Larger core-radii are found in more extended galaxies. Three of the four galaxies with young cores are again offset. These three galaxies have ${r_\mathrm{eff}}$ similar to other Coma galaxies of the same luminosity (cf. Fig. \[fig:rh\_reff\]b). This makes a higher baryon concentration unlikely to be the cause of their small halo core-radii. An exceptional case is GMP0756: it has a small core-radius, a ratio ${r_h}/{r_\mathrm{eff}}$ which is typical for Coma galaxies with old stellar populations, and a relatively small ${r_\mathrm{eff}}$. The scalings of ${r_\mathrm{eff}}$ and ${r_h}$ with luminosity in Coma galaxies with old stellar populations imply a roughly constant ratio ${r_h}/{r_\mathrm{eff}}\approx 3$. Moreover, disk galaxies of the same luminosity show a similar ratio.
The fact that all four galaxies offset in Figs. \[fig:rhvh\] and \[fig:rh\_reff\] appear at projected cluster-centric distances $D > 1 \, \mathrm{Mpc}$ and have young stellar populations suggests that they may have entered the Coma cluster only recently. We will come back to this point in the next Sec. \[subsec:round\].
Comparison with round and non-rotating early-types {#subsec:round}
--------------------------------------------------
@Kr00 and @G01 studied dark matter halos of 21 nearly round (E0-E2) and non-rotating galaxies ([K2000 ]{}in the following). [K2000 ]{}galaxies have luminosities $M_B$ and half-light radii ${r_\mathrm{eff}}$ similar to ours, but the [K2000 ]{}sample contains a mixture of field ellipticals and galaxies in the Virgo and Fornax clusters. The dynamical models of @Kr00 differ in some respects from the ones described in Sec. \[sec:data\], and this will be further discussed below. However, in their mass decomposition @Kr00 assumed the same halo profile as in equation (\[logdef\]), such that we can directly compare their halo parameters to ours (cf. small black dots in Figs. \[fig:rhvh\] - \[fig:rh\_reff\]).
We find halo parameters of both samples in the same range, but Coma galaxies of the same $L_B$ have on average larger halo core-radii than [K2000 ]{}galaxies. However, the halos of the [K2000 ]{}galaxies themselves are not different from the ones around Coma early-types, as both belong to the same one-parameter family (cf. Fig. \[fig:rhoh\_rh\]). The main difference is that [K2000 ]{}galaxies are brighter (and have higher stellar mass) than Coma early-types with a similar halo. Can this be an artifact related to differences in the dynamical models?
Many of the [K2000 ]{}models are based on $B$-band photometry, while we used $R_C$-band images for the Coma galaxies. Elliptical galaxies become bluer towards the outer parts and $B$-band light profiles are slightly shallower than $R$-band profiles. Likewise, mass profiles of galaxies are generally shallower than their light profiles, such that there might be less need for dark matter in $B$-band models than in $R$-band models. @Kr00 checked for this by modelling one galaxy (NGC3379) in both bands and found comparable results. The photometric data is therefore unlikely to cause the differences between the two samples.
[K2000 ]{}galaxies were modelled assuming spherical symmetry. Not all apparently round galaxies need to be intrinsically spherical. Neglecting the flattening along the line-of-sight can result in an underestimation of a galaxy’s mass (e.g. @Tho07b). Based on the average intrinsic flattening of ellipticals in the luminosity interval of interest here, @Kr00 estimated that the assumption of spherical symmetry should affect mass-to-light ratios only at the 10 percent level. We expect the effect on ${v_h}$ to be correspondingly small. In addition, it is not obvious why spherical symmetry should enforce systematically small ${r_h}$, such that the different symmetry assumptions are also unlikely to explain the more extended cores and higher circular velocities in Coma galaxy halos.
Since the shape of a galaxy is related to its evolutionary history, the round and non-rotating [K2000 ]{}galaxies could be intrinsically different from the mostly flattened and rotating Coma galaxies. Structural differences could also be related to the fact that [K2000 ]{}galaxies are located in a variety of environments, with fewer galaxies in high-density regions like Coma. For example, stellar population models indicate that field ellipticals are on average younger and have more extended star formation histories than cluster galaxies [@DTho05]. But a mere difference in the stellar populations cannot explain the difference between [K2000 ]{}and Coma galaxies, as in this case the scalings with ${M_\ast}$ should be similar in both samples. This is ruled out by Fig. \[fig:rhvh\]b/d.
Notably, most of the [K2000 ]{}galaxies in Figs. \[fig:rhvh\] - \[fig:rh\_reff\] appear similar to the four Coma galaxies with distinctly small ${r_h}$ and ${v_h}$. As discussed above, these galaxies may have entered the Coma cluster only recently and – in this respect – are more representative of a field galaxy population than of genuine old cluster galaxies. A consistent explanation for both the offset between young and old Coma galaxies on the one hand and the difference between old Coma galaxies and [K2000 ]{}galaxies on the other would then be that field galaxies have lower ${r_h}$ and lower ${v_h}$ than cluster galaxies of the same stellar mass. Because there are more field galaxies in the [K2000 ]{}sample than in Coma, Coma galaxy halos would be expected to have on average larger cores and to be more massive (consistent with Fig. \[fig:rhvh\]). A larger comparison sample of field elliptical halos is required to settle this point.
![image](f4.eps){width="135mm"}
Comparison with spiral galaxies
-------------------------------
Two independent derivations of dark matter scaling relations for spiral galaxies are included in Figs. \[fig:rhvh\] - \[fig:rh\_reff\] through the dotted and dashed lines. Dotted lines show scaling relations from @PSS1 [@PSS2]. They are based on maximum-disk rotation-curve decompositions with the halo density from equation (\[logdef\]). @PSS1 [@PSS2] give halo core-radii scaled by the optical disk radius. To reconstruct the underlying relationship between ${r_h}$ and $L_B$, we follow @G01 and assume exponential disks with $$\label{spireff}
\left( \frac{{r_\mathrm{eff}}{^{\mathrm{S}}}}{{\mathrm{kpc}}} \right) = 8.4 \left( \frac{L_B}{10^{11} L_\odot} \right)^{0.53}$$ (the empirical fit in @G01 has been transformed to our distance scale).
Dashed lines in Figs. \[fig:rhvh\] - \[fig:rh\_reff\] fit the combined sample of 55 rotation curve decompositions of @Kor04. These authors discuss various halo profiles, but here we only consider non-singular isothermal dark matter halos. Though these are most similar to equation (\[logdef\]), the isothermal core radii and circular velocities are not exactly identical to those of logarithmic halos. To account for the difference, we fitted a logarithmic halo to a non-singular isothermal density profile (cf. Tab. 4.1 of @Bin87). The fit was restricted to the region with kinematic data (typically inside two core-radii). We found that the logarithmic halo fit yields a 3 percent larger core-radius ${r_h}$ and a 10 percent smaller circular velocity ${v_h}$. The central halo density is reproduced to 0.001 dex (not surprising given that the cores of the two profiles were matched). Thus, under the assumption that fits performed with the two profiles indeed match inside two core-radii, a correction of the derived halo parameters is not needed. Scaling relations from @Kor04 are shown in Figs. \[fig:rhvh\] - \[fig:rhoh\_rh\] without any correction.
@PSS1 [@PSS2] and @Kor04 discuss the scaling of disk galaxy halos with $B$-band luminosity. In order to compare early-types and spirals also at the same stellar masses, we used $$\label{eq:mlspir}
\left( \frac{M_\ast}{L_B} \right){^{\mathrm{S}}}= 2 \times \left( \frac{L_B}{10^{11} \, L_\odot} \right)^{0.33}$$ for the stellar mass-to-light ratios of disks. Equation (\[eq:mlspir\]) is derived from the Tully-Fisher and stellar-mass Tully-Fisher relations of @Bel01.
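In code, the two empirical disk relations used for this comparison read as below; $L_B$ is in solar $B$-band luminosities, and the example luminosity is arbitrary.

```python
# Empirical spiral-galaxy relations used for the comparison with the
# Coma early-types: Eq. (spireff) for the effective radius and
# Eq. (eq:mlspir) for the stellar mass-to-light ratio (solar units).
def reff_spiral_kpc(L_B):
    return 8.4 * (L_B / 1e11)**0.53

def ml_spiral(L_B):
    return 2.0 * (L_B / 1e11)**0.33

L_B = 5e10                                  # example B-band luminosity [Lsun]
print(reff_spiral_kpc(L_B))                 # disk effective radius [kpc]
print(ml_spiral(L_B) * L_B)                 # implied stellar mass [Msun]
```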
Both in luminosity and in stellar mass, spirals and ellipticals follow similar global trends. However, while old Coma early-types have halo core-radii of similar size as spirals with the same $B$-band luminosity, the asymptotic halo circular velocities are $2.4$ times higher than in corresponding spirals. In contrast, early-types with young central stellar populations have about 4 times smaller core-radii than spirals, but similar asymptotic halo velocities. When galaxies are compared at the same stellar mass, then differences between ellipticals and spirals become larger (60 percent smaller ${r_h}$ and $1.8$ times higher ${v_h}$ in old Coma early-types compared to spirals; 90 percent smaller ${r_h}$ and 20 percent smaller ${v_h}$ in Coma galaxies with young central stellar populations). In addition, the halos of early-types and spirals do not belong to the same one-parameter family (cf. Fig. \[fig:rhoh\_rh\]). At a given ${r_h}$ dark matter densities in ellipticals are about $0.5$ dex higher than in spirals.
The dark matter density {#sec:density}
=======================
Fig. \[fig:rho\_l\] shows scaling laws for dark matter densities. The central dark matter density ${\rho_h}$ (cf. equation \[logrho0def\]) of the logarithmic halo fits is plotted in panels (a) and (b) versus luminosity and stellar mass. For panels (c) and (d), the best-fit dark matter halo of each galaxy (being either logarithmic or NFW) is averaged within $2 \, {r_\mathrm{eff}}$: $$\label{avrhodef}
{\left< \rho_\mathrm{DM}\right>}\equiv \frac{3}{4 \pi} \, \frac{M_\mathrm{DM} (2 \, {r_\mathrm{eff}})}{(2 \, {r_\mathrm{eff}})^3}$$ Here, $M_\mathrm{DM}(r)$ equals the cumulative dark mass inside a sphere with radius $r$. Average dark matter densities are quoted in column (2) of Tab. \[tab:gals2\].
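The averaging in equation (\[avrhodef\]) is straightforward to write down for either halo family; the sketch below does so using the cumulative masses of the logarithmic and NFW halos, with illustrative parameter values that are not fits to any particular Coma galaxy.

```python
import numpy as np

# <rho_DM> = 3 M_DM(2 r_eff) / (4 pi (2 r_eff)^3), Eq. (avrhodef), evaluated
# for the two halo families. Units: Msun, kpc, km/s; values are illustrative.
G = 4.301e-6   # kpc (km/s)^2 / Msun

def mass_log(r, v_h, r_h):
    """Cumulative mass of the logarithmic halo: v_h^2 r^3 / (G (r_h^2 + r^2))."""
    return v_h**2 * r**3 / (G * (r_h**2 + r**2))

def mass_nfw(r, rho_s, r_s):
    """Cumulative mass of an NFW halo with normalization rho_s."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

def mean_rho_dm(mass_func, r_eff, *params):
    """Average dark matter density inside a sphere of radius 2 r_eff."""
    R = 2.0 * r_eff
    return 3.0 * mass_func(R, *params) / (4.0 * np.pi * R**3)

print(mean_rho_dm(mass_log, 8.0, 500.0, 20.0))   # v_h = 500 km/s, r_h = 20 kpc
print(mean_rho_dm(mass_nfw, 8.0, 1.0e7, 30.0))   # rho_s = 1e7 Msun/kpc^3, r_s = 30 kpc
```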
Both densities tend to decrease with increasing galaxy luminosity, with the central densities ${\rho_h}$ scattering more than the averaged ${\left< \rho_\mathrm{DM}\right>}$. For two reasons, ${\left< \rho_\mathrm{DM}\right>}$ quantifies the actual dark matter density more robustly than ${\rho_h}$. Firstly, our estimate of the very central dark matter density depends strongly on the assumed halo profile. In contrast, when averaged over $2 \, {r_\mathrm{eff}}$, the differences between logarithmic halo fits and fits with NFW-profiles are small compared to the statistical errors (averaged over the Coma sample, NFW-fits yield $0.1 - 0.2$ dex higher ${\left< \rho_\mathrm{DM}\right>}$ than fits with logarithmic halos).
Secondly, the most significant differences between ${\rho_h}$ and ${\left< \rho_\mathrm{DM}\right>}$ occur in the four Coma galaxies with distinct halos discussed in Sec. \[sec:scaling\]. These galaxies have young central stellar populations. If the bulk of stars in these galaxies is old, however, then the related radial increase of the stellar mass-to-light ratio could contribute to their small ${r_h}$ and large ${\rho_h}$. This is because our models assume a radially constant stellar mass-to-light ratio. By construction then, any increase in the mass-to-light ratio with radius (whether due to a stellar population gradient or to dark matter) is attributed to the halo component. In galaxies with a significant increase of the stellar $M/L$ with radius, the ‘halo’ component of the model then has to account for both additional stellar mass and possible dark mass [@Tho06]. Any contamination with stellar mass will be largest at small radii, where the increase in the stellar $M/L$ dominates the shape of the mass profile. The averaging radius in equation (\[avrhodef\]) is therefore chosen as large as possible. The value of $2 \, {r_\mathrm{eff}}$ is a compromise for the whole sample, because the kinematic data extend to $1-3 \, {r_\mathrm{eff}}$ and the averaging should not go much beyond the last data point.
Compared to the Coma galaxies, the majority of [K2000 ]{}galaxies have larger ${\rho_h}$. After averaging inside $2 \, {r_\mathrm{eff}}$, the halo densities in both samples become comparable, however. In this respect, the [K2000 ]{}galaxies again resemble the four Coma galaxies with young stellar cores.
Dotted and dashed lines in Fig. \[fig:rho\_l\] show spiral galaxies. Their halo densities need not be averaged before comparison, because core sizes of spirals (in the considered luminosity interval) are larger than $2 \, {r_\mathrm{eff}}$. Averaged over the whole sample, we find dark matter densities in Coma early-types a factor of $6.8$ higher than in spirals of the same luminosity. If early-types are compared to spirals of the same stellar mass, then the overdensity amounts to a factor of $13.5$. Does this imply that spirals and ellipticals of the same luminosity have formed in different dark matter halos?
Baryonic contraction {#sec:contraction}
====================
Even if ellipticals and spirals would have formed in similar halos, then the final dark matter densities after the actual galaxy formation process could be different, since the baryons in ellipticals and spirals are not distributed in the same way. This effect can be approximated as follows: assume that (a spherical) baryonic mass distribution ${M_\ast}(r)$ condenses slowly out of an original halo+baryon distribution $M_i(r)$. The halo responds adiabatically and contracts into the mass distribution $M_\mathrm{DM}(r)$. If the original particles move on circular orbits then $$\label{adiaeq}
r \left[ {M_\ast}(r) + M_\mathrm{DM}(r) \right] = r_i M_i (r_i)$$ turns out to be an adiabatic invariant [@Blu86].
In the case of the Coma galaxies, ${M_\ast}$ and $M_\mathrm{DM}$ are known from the dynamical modeling and equation (\[adiaeq\]) can be solved for $M_i$[^2]. The latter characterizes the original halo mass distribution before the actual galaxy formation. Had a disk with baryonic mass ${M_\ast}{^{\mathrm{D}}}$ grown in this original halo – instead of an early-type – then the halo contraction would have been different such that in general $M_\mathrm{DM}{^{\mathrm{D}}}\ne M_\mathrm{DM}$. The difference between $M_\mathrm{DM}{^{\mathrm{D}}}$ and $M_\mathrm{DM}$ actually determines how much the dark matter densities of ellipticals and spirals would differ if both had formed in the same original halos. To quantify this further, let’s consider the spherically averaged mass distribution of a thin exponential disk for ${M_\ast}{^{\mathrm{D}}}$ [e.g. @Blu86]. It is fully determined by a scale-radius and a mass. For a given Coma elliptical with luminosity $L_B$, the scale-radius and the mass of a realistic disk with the same luminosity can be taken from equations (\[spireff\]) and (\[eq:mlspir\]). Then, given ${M_\ast}{^{\mathrm{D}}}$ and $M_i$ (the reconstructed original Coma galaxy halo), equation (\[adiaeq\]) can be solved for the baryon-contracted halo $M_\mathrm{DM}{^{\mathrm{D}}}$ around the comparison disk (cf. @Blu86). Once $M_\mathrm{DM}{^{\mathrm{D}}}$ is known, the average ${\left< \rho_\mathrm{DM}\right>}{^{\mathrm{D}}}$ follows directly.
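A minimal numerical sketch of this contraction step, in the forward direction of @Blu86, is given below: for a shell initially at $r_i$, conserve its enclosed dark mass and solve the invariant of equation (\[adiaeq\]) for the final radius $r_f$. The initial NFW-like mass profile, the baryon fraction $f_b$, and the Hernquist-like stand-in for the spherically averaged final baryon profile are all illustrative assumptions, not the profiles used for the Coma galaxies.

```python
import numpy as np
from scipy.optimize import brentq

# Forward adiabatic-contraction sketch (Blumenthal et al. scheme): dark shells
# conserve their enclosed dark mass, and r_f [M_b(r_f) + M_dm] = r_i M_i(r_i).
# Initial profile, baryon fraction and final baryon profile are illustrative.

def M_initial(r, M_vir=1e13, r_vir=300.0, c=8.0):
    """Initial NFW-like total mass profile (Msun; r, r_vir in kpc)."""
    g = lambda y: np.log(1.0 + y) - y / (1.0 + y)
    return M_vir * g(r * c / r_vir) / g(c)

def M_baryon(r, M_star=1e11, a=5.0):
    """Final baryons: Hernquist-like cumulative mass as a simple stand-in."""
    return M_star * r**2 / (r + a)**2

def contract(r_i, f_b=0.16):
    """Final radius and enclosed dark mass of the shell initially at r_i."""
    M_i = M_initial(r_i)
    M_dm = (1.0 - f_b) * M_i                   # enclosed dark mass, conserved
    eq = lambda r_f: r_f * (M_baryon(r_f) + M_dm) - r_i * M_i
    # eq is monotonic in r_f: shells contract where baryons pile up centrally
    # and can expand slightly where the final baryon mass falls short of f_b M_i.
    return brentq(eq, 1e-4 * r_i, 1e3 * r_i), M_dm

for r_i in (5.0, 20.0, 100.0):
    r_f, M_dm = contract(r_i)
    print(f"r_i = {r_i:6.1f} kpc  ->  r_f = {r_f:7.2f} kpc,  M_dm = {M_dm:.2e} Msun")
```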
![image](f5.eps){width="135mm"}
If ellipticals and spirals (of the same $L_B$) had formed in the same halos, then $$\label{dbar}
{\delta_\mathrm{bar}}\equiv \frac{{\left< \rho_\mathrm{DM}\right>}}{{\left< \rho_\mathrm{DM}\right>}{^{\mathrm{D}}}}$$ should fully account for the observed ratio of elliptical to spiral dark matter densities. However, averaged over the Coma sample we find ${\delta_\mathrm{bar}}\approx 2$ and, thus, that the higher baryon concentration in early-types is not sufficient to explain the factor of $6.8$ between the dark matter densities of ellipticals and spirals at constant luminosity.
In general, the observed dark matter density ratio ${\delta_\mathrm{obs}}$ between ellipticals and spirals will be a combination of a difference in the halo densities before baryon infall and a factor that comes from the baryons. Let ${\delta_\mathrm{halo}}$ denote the baryon-corrected dark matter density ratio, then the simplest assumption is $$\label{eq:bcor}
{\delta_\mathrm{obs}}= {\delta_\mathrm{bar}}\times {\delta_\mathrm{halo}},$$ with ${\delta_\mathrm{bar}}$ from equation (\[dbar\]). After applying this approximate baryon correction, dark matter densities in Coma ellipticals are still a factor of ${\delta_\mathrm{halo}}= 3.4$ higher than in spirals of the same luminosity. If the comparison is made at the same stellar mass, then ${\delta_\mathrm{halo}}= 6.4$. Note that our baryonic contraction corrections are likely upper limits, because in equation (\[eq:mlspir\]) we only account for the stellar mass in the disk. In the presence of gas, the baryonic disk mass will be larger and so will be the halo contraction. The dark matter density contrast relative to the original elliptical will therefore be smaller.
In conclusion, the differences between the baryon distributions of ellipticals and spirals are not sufficient to explain the overdensity of dark matter in ellipticals relative to spirals of the same luminosity or stellar mass. Ellipticals and spirals have not formed in the same halos. Instead, the higher dark matter density in ellipticals points to an earlier assembly redshift.
The dark-halo assembly epoch of Coma early-types {#sec:assembly}
================================================
In order to evaluate the difference between elliptical and spiral galaxy assembly redshifts quantitatively, let’s assume that dark matter densities scale with the mean density of the universe at the assembly epoch, i.e. ${\rho_\mathrm{DM}}\propto (1+{z_\mathrm{form}})^3$ (we will discuss this assumption in Sec. \[subsec:sams\_zzz\]). Let ${z_\mathrm{form}}{^{\mathrm{E}}}$ and ${z_\mathrm{form}}{^{\mathrm{S}}}$ denote the formation redshifts of ellipticals and spirals, respectively, then $$\label{zscale}
\frac{1+{z_\mathrm{form}}{^{\mathrm{E}}}(L_B)}{1+{z_\mathrm{form}}{^{\mathrm{S}}}(L_B)} = \left( \frac{{\rho_\mathrm{DM}}{^{\mathrm{E}}}(L_B)}{{\rho_\mathrm{DM}}{^{\mathrm{S}}}(L_B)} \right)^{1/3}$$ [@G01], with ${\rho_\mathrm{DM}}$ some measure of the dark matter density, e.g. ${\rho_\mathrm{DM}}= {\left< \rho_\mathrm{DM}\right>}$. Equation (\[zscale\]) can be solved for $$\label{zformdef}
{z_\mathrm{form}}{^{\mathrm{E}}}= \left[ 1+{z_\mathrm{form}}{^{\mathrm{S}}}\right] \times \delta^{1/3} - 1,$$ where we have omitted the dependency of dark matter densities and formation redshifts on $L_B$ and defined $\delta = {\rho_\mathrm{DM}}{^{\mathrm{E}}}/{\rho_\mathrm{DM}}{^{\mathrm{S}}}$. Equation (\[zformdef\]) allows to calculate formation redshifts of Coma ellipticals from ${z_\mathrm{form}}{^{\mathrm{S}}}$ and the observed $\delta$. Two estimates based on different assumptions about $\delta$ and ${z_\mathrm{form}}{^{\mathrm{S}}}$ are shown in Fig. \[fig:zl\] and are further discussed below. For each case, formation redshifts were calculated with both disk halo scaling laws shown in Fig. \[fig:rho\_l\] and the two results were averaged.
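Equation (\[zformdef\]) is a one-liner in code; the sketch below evaluates it for the sample-averaged density ratios quoted in this paper and for a few assumed spiral formation redshifts, with ${z_\mathrm{form}}{^{\mathrm{S}}}=1$ as the fiducial case.

```python
# Elliptical assembly redshift from Eq. (zformdef), for a measured
# elliptical-to-spiral dark matter density ratio delta and an assumed
# spiral formation redshift. The delta values are the sample averages
# quoted in the text (without and with the baryon correction).
def zform_elliptical(delta, zform_spiral=1.0):
    return (1.0 + zform_spiral) * delta**(1.0 / 3.0) - 1.0

for label, delta in [("no baryon correction", 6.8), ("baryon corrected", 3.4)]:
    zs = [zform_elliptical(delta, z_s) for z_s in (0.5, 1.0, 2.0)]
    print(label, ["%.1f" % z for z in zs])
```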
![image](f6.eps){width="135mm"}
Raw formation redshifts without any baryon correction ($\delta = {\delta_\mathrm{obs}}$) are shown in Fig. \[fig:zl\]a (and listed in column 3 of Tab. \[tab:gals2\]). We considered a wide range of spiral galaxy formation redshifts ${z_\mathrm{form}}{^{\mathrm{S}}}\in [0.5,2]$ and the related uncertainty in ${z_\mathrm{form}}{^{\mathrm{E}}}$ is indicated by vertical bars. Our fiducial value is ${z_\mathrm{form}}{^{\mathrm{S}}}\equiv 1$, because regular disks become rare beyond $z \ga 1$ [@Con05]. We did not allow for a luminosity dependence of spiral galaxy formation times. For the Coma ellipticals we then find ${z_\mathrm{form}}{^{\mathrm{E}}}$ ranging from ${z_\mathrm{form}}{^{\mathrm{E}}}\approx 0.5$ to ${z_\mathrm{form}}{^{\mathrm{E}}}\approx 5$, with the majority of galaxies having formed around ${z_\mathrm{form}}{^{\mathrm{E}}}\approx 3$. Brighter galaxies have assembled later than fainter galaxies.
Coma galaxy assembly redshifts shown in Fig. \[fig:zl\]b (see also column 5 of Tab. \[tab:gals2\]) include the baryon correction of Sec. \[sec:contraction\], because we used $\delta={\delta_\mathrm{halo}}$ in equation (\[zformdef\]). Moreover, because fainter spirals have denser halos than brighter ones, we allowed for a luminosity dependent ${z_\mathrm{form}}{^{\mathrm{S}}}(L)$. In analogy to equation (\[zscale\]) assume $$\label{zfspir}
\frac{1+{z_\mathrm{form}}{^{\mathrm{S}}}(L)}{1+{z_\mathrm{form}}{^{\mathrm{S}}}(L_0)} = \left( \frac{{\rho_\mathrm{DM}}{^{\mathrm{S}}}(L)}{{\rho_\mathrm{DM}}{^{\mathrm{S}}}(L_0)} \right)^{1/3}$$ and $z{^{\mathrm{S}}}_0 = 1$ for a reference luminosity $\log L_0/L_\odot = 10.5$ (the dashed line in Fig. \[fig:z\_sam\]c illustrates the resulting ${z_\mathrm{form}}{^{\mathrm{S}}}$ as a function of $L$). Because ${\delta_\mathrm{halo}}\la {\delta_\mathrm{obs}}$, the baryon corrected ${z_\mathrm{form}}{^{\mathrm{E}}}$ in Fig. \[fig:zl\]b are lower than the uncorrected ones in Fig. \[fig:zl\]a. The typical assembly redshift reduces to ${z_\mathrm{form}}{^{\mathrm{E}}}\approx 2$, as compared to ${z_\mathrm{form}}{^{\mathrm{E}}}\approx 3$ without the correction. The baryon correction is mostly smaller than the uncertainty related to our ignorance about ${z_\mathrm{form}}{^{\mathrm{S}}}$ (vertical bars in Fig. \[fig:zl\]b correspond to $z{^{\mathrm{S}}}_0 \in [0.5,2]$). The trend for lower ${z_\mathrm{form}}{^{\mathrm{E}}}$ in brighter galaxies is slightly diminished by the baryon correction such that the dependency of ${z_\mathrm{form}}{^{\mathrm{E}}}$ on $L$ in Fig. \[fig:zl\]b mainly reflects the luminosity dependence of spiral galaxy assembly redshifts.
Fig. \[fig:zz\] compares halo assembly redshifts with central stellar population ages $\tau_0$ from @Meh03. (We use $H_0 = 70 \, {\mathrm{km \, s^{-1}}}{\mathrm{Mpc}}^{-1}$, $\Omega_\Lambda = 0.75$ and $\Omega_m = 0.25$ to transform ages into redshifts.) Largely independently of whether the baryon correction is applied, the agreement between the two redshifts is fairly good for about half of our sample. Among the remaining galaxies, some have halos which appear younger than their central stellar populations. This could indicate that the stellar ages are overestimated (they are sometimes larger than the age of the universe in the adopted cosmology; cf. Tab. \[tab:gals2\]). It could also point to these galaxies having grown by dry merging. In a dry merger, the dark matter density can drop, but the stellar ages stay constant. In Coma galaxies with young stellar cores, the halo assembly redshifts are instead larger than the central stellar ages. This indicates some secondary star-formation after the main epoch of halo assembly.
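For reference, the conversion from a central stellar age $\tau_0$ to a formation redshift in the adopted cosmology ($H_0 = 70 \, {\mathrm{km \, s^{-1}}}{\mathrm{Mpc}}^{-1}$, $\Omega_m = 0.25$, $\Omega_\Lambda = 0.75$) can be sketched as below; the example ages are illustrative, and the integration neglects radiation, which is irrelevant at these redshifts.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Convert a stellar age (= lookback time to the formation epoch) into a
# redshift for a flat H0 = 70, Omega_m = 0.25, Omega_Lambda = 0.75 cosmology.
H0 = 70.0 * 1.0227e-3          # km/s/Mpc -> Gyr^-1
OM, OL = 0.25, 0.75

def lookback_gyr(z):
    """Lookback time to redshift z in Gyr."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * H0 * np.sqrt(OM * (1.0 + zp)**3 + OL))
    return quad(integrand, 0.0, z)[0]

def z_of_age(tau0_gyr):
    """Redshift at which a population of age tau0_gyr (in Gyr) formed."""
    return brentq(lambda z: lookback_gyr(z) - tau0_gyr, 1e-4, 30.0)

for tau0 in (5.0, 8.0, 10.0, 12.0):
    print(f"tau_0 = {tau0:4.1f} Gyr  ->  z ~ {z_of_age(tau0):.2f}")
```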
Comparison with semi-analytic galaxy formation models {#sec:semi-analytic}
=====================================================
In the following we will compare our results to semi-analytic galaxy formation models. To this end we have constructed a comparison sample of synthetic ellipticals and spirals using the models of @deL07, which are based on the Millennium simulation [@Spr05]. Comparison ellipticals are selected to reside in dark matter cluster structures with virial masses $M_\mathrm{vir} > 10^{15} M_\odot$ and to obey $M_{B,\mathrm{bulge}}-M_B<0.4$ [@Sim86]. We ignore galaxies at the centers of simulated clusters since we have omitted the two central Coma galaxies from the analysis in this paper. Likewise, we exclude from the comparison galaxies that have been stripped of their entire halo, because the only Coma galaxy that possibly lacks dark matter inside $3\, {r_\mathrm{eff}}$ has been excluded from the analysis in this paper as well (cf. Sec. \[sec:data\]). Isolated field spirals are drawn from objects with $M_{B,\mathrm{bulge}}-M_B>1.56$ in the semi-analytic models [@Sim86].
Simulated galaxies were chosen randomly from the catalogue of @deL07 in a way such that each of six luminosity intervals (between $M_B = -17$ and $M_B = -23$; width $\Delta M_B = 1.0$) contains roughly 50 galaxies. We use dust-corrected luminosities $M_B$ of the semi-analytic models.
Dark matter density
-------------------
Dark matter halos of simulated galaxies are reconstructed from tabulated virial velocities $v_\mathrm{vir}$, virial radii $r_\mathrm{vir}$, and maximum circular velocities $v_\mathrm{max}$ as follows. It is assumed that the halos can be approximated by an NFW-profile (cf. equation \[nfwdef\]), in which case the circular velocity profile reads $$\label{nfwcirc}
\left( \frac{v_\mathrm{circ}(r)}{v_\mathrm{vir}} \right)^2 =
\frac{1}{x} \frac{\ln(1+cx)-cx/(1+cx)}{\ln(1+c) - c/(1+c)}.$$ Here $x=r/r_\mathrm{vir}$ and the halo concentration is defined by $c=r_\mathrm{vir}/r_s$. The maximum circular velocity $v_\mathrm{max}$ of an NFW halo occurs at $r \approx 2 r_\mathrm{vir}/c$ [@Nav96], such that (with equation \[nfwcirc\]) $$4.63 \left( \frac{v_\mathrm{max}}{v_\mathrm{vir}} \right)^2 =
\frac{c}{\ln(1+c)-c/(1+c)}.$$ Using the tabulated $v_\mathrm{vir}$ and $v_\mathrm{max}$ this equation can be numerically solved for the halo concentration $c$, which in turn determines $r_s = r_\mathrm{vir}/c$ and, thus, the entire NFW profile of the halo.
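The inversion for the concentration described here is a simple root-finding problem; a sketch is given below, with illustrative values for $v_\mathrm{vir}$, $v_\mathrm{max}$ and $r_\mathrm{vir}$. The lower bracket is placed at $c \approx 2.163$, where $v_\mathrm{max}=v_\mathrm{vir}$, so that the physical branch of the solution is selected.

```python
import numpy as np
from scipy.optimize import brentq

# Solve 4.63 (v_max / v_vir)^2 = c / [ln(1+c) - c/(1+c)] for the NFW
# concentration c, then r_s = r_vir / c. Input values are illustrative.

def mu(c):
    return np.log(1.0 + c) - c / (1.0 + c)

def concentration(v_max, v_vir):
    ratio = 4.63 * (v_max / v_vir)**2
    # c/mu(c) has its minimum (~4.63) at c ~ 2.163; bracket the rising branch.
    return brentq(lambda c: c / mu(c) - ratio, 2.163, 1000.0)

v_vir, v_max, r_vir = 300.0, 350.0, 500.0   # km/s, km/s, kpc
c = concentration(v_max, v_vir)
print(f"c = {c:.2f}, r_s = {r_vir / c:.1f} kpc")
```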
Before being compared to the Coma galaxy models, halo densities are averaged within $2 \, {r_\mathrm{eff}}$ (cf. equation \[avrhodef\]). In the case of simulated spirals we use effective radii from the empirical relation (\[spireff\]). For ellipticals, we assume $$\label{ellreff}
\left( \frac{{r_\mathrm{eff}}}{{\mathrm{kpc}}} \right) = 15.34 \left( \frac{L_B}{10^{11} \, {L_\odot}} \right)^{1.02},$$ which is a fit to the Coma data.
![image](f7.eps){width="135mm"}
Fig. \[fig:dm\_sam\] shows that the average dark matter densities ${\left< \rho_\mathrm{DM}\right>}$ of the Coma early-types match fairly well with semi-analytical models. This is remarkable, because the simulations do not take into account the halo response during baryon infall. Therefore, either the net effect of the baryons on the dark matter distribution is small in the analyzed population of galaxies or there is actually a mismatch between the halos of observed galaxies and the $N$-body models. It may also be that real galaxies do not have maximum stellar masses. This can be checked by the comparison of dynamically derived stellar mass-to-light ratios with independent stellar population synthesis models (Thomas et al., in preparation).
Similarly to what is found in real galaxies, the dark matter densities of spirals are lower than in ellipticals in semi-analytic models (cf. Fig. \[fig:dm\_sam\]), but the density contrast in observed galaxies is larger. Again, a major uncertainty here is that the simulations do not take into account the gravitational effect of the baryons.
Assembly redshift {#subsec:sams_zzz}
-----------------
Formation redshifts of simulated and observed galaxies are compared in Fig. \[fig:z\_sam\]. Coma galaxy ${z_\mathrm{form}}$ are from Sec. \[sec:assembly\] and both cases discussed there – without and with the baryon correction – are shown separately in panels (a) and (b), respectively. Formation redshifts of simulated galaxies are defined as the earliest redshift at which a halo has assembled 50 percent of its mass. Since we are mainly interested in cluster ellipticals, we need to take into account that interactions between the cluster halo and a galaxy’s subhalo cause a mass-loss in the latter. Although cluster-galaxy interactions happen in both simulated and observed galaxies, the mass-loss in the simulations may be overestimated because of the finite numerical resolution and the neglect of the baryon potential. In particular, for simulated subhalos with very low masses at $z=0$ the derived formation redshifts may be artificially high when defined according to the assembly of half of the final mass. To avoid such artificially large assembly redshifts, we define ${z_\mathrm{form}}$ of simulated galaxies as the earliest time at which a single progenitor in the merger tree of a given galaxy had assembled half of the maximum mass that this progenitor reached at any redshift. Our assumption is that even if dynamical interactions between cluster and galaxy halos take place, they do not significantly affect the very inner regions $<2 \, {r_\mathrm{eff}}$ of interest here. In the case of field spirals, formation redshifts defined either from the final or from the maximum mass are very similar.
Without a baryon correction, our estimates of Coma galaxy formation redshifts are on average higher than in the semi-analytic models (Fig. \[fig:z\_sam\]a). This is the case even though (1) the dark matter densities of ellipticals match the simulations and (2) our assumption about the formation redshifts of spirals (${z_\mathrm{form}}{^{\mathrm{S}}}\approx 1$) is consistent with the simulations. The origin of the offset between Coma galaxies and semi-analytic models in Fig. \[fig:z\_sam\]a is that the density contrast between halos of ellipticals and spirals is larger in observed galaxies than in the simulations. After applying the baryon correction, the Coma galaxy formation redshifts become consistent with the simulations (Fig. \[fig:z\_sam\]b). This result indicates that the discrepancy between the measured and the simulated density ratio ${\rho_\mathrm{DM}}{^{\mathrm{E}}}/{\rho_\mathrm{DM}}{^{\mathrm{S}}}$ is due to baryon effects.
Our Coma galaxy formation redshifts are based on the assumption that ${\left< \rho_\mathrm{DM}\right>}\propto (1+{z_\mathrm{form}})^3$. Fig. \[fig:rho\_zf\_sam\] shows ${\left< \rho_\mathrm{DM}\right>}$ versus $(1+{z_\mathrm{form}})$ explicitly. Independently of whether a baryon correction is included, the slope of the relationship between ${\left< \rho_\mathrm{DM}\right>}$ and $(1+{z_\mathrm{form}})$ in the Coma galaxies is roughly parallel to that of simulated $N$-body halos. This confirms that our assumption for the scaling between ${\left< \rho_\mathrm{DM}\right>}$ and ${z_\mathrm{form}}$ is approximately consistent with the cosmological simulations.
Concerning the absolute values of the dark matter densities, it has already been stated above that they are consistent with the simulations only if either the net effect of the baryons is zero in the case of ellipticals or galaxies do not have maximum stellar masses. The former case would imply that halos of spiral galaxies experience a net expansion during the baryon infall (several processes have been proposed for this, e.g. @Bin01).
![image](f8.eps){width="155mm"}
@deL06 quote a stellar assembly redshift below $z=1$ for simulated ellipticals with $M_\ast > 10^{11} \, {M_\odot}$. The halo assembly redshifts in Fig. \[fig:z\_sam\] are mostly above $z=1$. In part, this is due to the fact that we only consider semi-analytic galaxies in high-density environments similar to Coma. In addition, formation redshifts defined according to the stellar mass assembly and the halo assembly, respectively, are not always equal. For example, in our comparison sample of simulated cluster ellipticals we find an average dark halo assembly redshift $\langle {z_\mathrm{form}}\rangle = 1.50$ for galaxies with $M_\ast > 10^{11} \, {M_\odot}$. Evaluating for the same galaxies the redshift ${z_\mathrm{form}}^\ast$ (when half the stellar mass is assembled) yields $\langle{z_\mathrm{form}}^\ast \rangle = 1.07$. That ${z_\mathrm{form}}\ga {z_\mathrm{form}}^\ast$ is plausible if some star formation goes on in the range $0 \le z \le {z_\mathrm{form}}$ in the progenitor and/or in the subunits that are to be accreted after ${z_\mathrm{form}}$. It should also be noted that the simulations do not take into account stellar mass-loss due to tidal interactions.
Summary {#sec:summary}
=======
We have presented dark matter scaling relations derived from axisymmetric, orbit-based dynamical models of flattened and rotating as well as non-rotating Coma early-type galaxies. Dark matter halos in these galaxies follow similar trends with luminosity as in spirals. Specifically, the majority of Coma early-types – those with old stellar populations – have halo core-radii ${r_h}$ similar to those of spirals with the same $B$-band luminosity, but their asymptotic halo velocities are about $2.4$ times higher. In contrast, four Coma early-types – with young central stellar populations – have halo velocities of the same order as in comparably bright spirals, but their core-radii are smaller by a factor of 4. Differences between spirals and ellipticals increase when the comparison is made at the same stellar mass. The average halo density inside $2 \, {r_\mathrm{eff}}$ exceeds that of comparably bright spirals by about a factor of $6.8$. If the higher baryon concentration in ellipticals is taken into account, the excess density reduces to about a factor of 3, but if ellipticals and spirals are compared at the same stellar mass, then it is again of the order of $6.5$.
Our measured dark matter densities match those of a comparison sample of simulated cluster ellipticals constructed from the semi-analytic galaxy formation models of @deL07. These synthetic ellipticals have ${z_\mathrm{form}}\approx 0.5-4$ and higher dark matter densities than simulated field spirals, which appear on average around ${z_\mathrm{form}}{^{\mathrm{S}}}\approx 1$.
Assuming for local spirals ${z_\mathrm{form}}{^{\mathrm{S}}}= 1$ as well, and assuming further that the inner dark matter density scales with the formation redshift like $(1+{z_\mathrm{form}})^3$, our results imply that ellipticals have formed $\Delta {z_\mathrm{form}}\approx 1-2$ earlier than spirals. Without baryon correction, we find an average formation redshift around ${z_\mathrm{form}}\approx 3$, which is slightly larger than in semi-analytic galaxy formation models. Accounting for the more concentrated baryons in ellipticals, the average formation redshift drops to ${z_\mathrm{form}}\approx 2$.
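As a quick numerical illustration of this scaling argument, the following sketch (plain Python) recovers the quoted formation redshifts; it assumes ${\left< \rho_\mathrm{DM}\right>}\propto (1+{z_\mathrm{form}})^3$ and ${z_\mathrm{form}}{^{\mathrm{S}}}=1$, and uses the density excess factors 6.8 and 3 given in the summary above.

```python
# Sketch of the (1 + z_form)^3 scaling argument.
# Assumes <rho_DM> ~ (1 + z_form)^3 and z_form(spirals) = 1.

def z_form_elliptical(density_excess, z_form_spiral=1.0):
    """Formation redshift implied by a dark matter density excess
    relative to spirals of the same B-band luminosity."""
    return (1.0 + z_form_spiral) * density_excess ** (1.0 / 3.0) - 1.0

for label, excess in [("no baryon correction", 6.8),
                      ("with baryon correction", 3.0)]:
    print(label, round(z_form_elliptical(excess), 1))
# -> roughly 2.8 without and 1.9 with the baryon correction,
#    consistent with the z_form ~ 3 and ~ 2 quoted above.
```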
![image](f9.eps){width="135mm"}
For about half of our sample, dark halo formation redshifts match the constraints derived from stellar populations [@Meh03]: the assembly epoch of these (old) early-types coincides with the epoch of formation of their stellar components.
We thank Ortwin Gerhard and the anonymous referee for comments and suggestions that helped to improve the manuscript. JT acknowledges financial support by the Sonderforschungsbereich 375 ’Astro-Teilchenphysik’ of the Deutsche Forschungsgemeinschaft. EMC receives support from grant CPDA068415/06 by Padua University. The Millennium Simulation databases used in this paper and the web application providing online access to them were constructed as part of the activities of the German Astrophysical Virtual Observatory.
GMP3414 and GMP4822 {#app:3414}
===================
The best-fit model parameters for the galaxies GMP3414 and GMP4822 (which were not included in the original sample of @Tho07) are given in Tab. \[tab:3414\]. The table is similar to Tab. 2 of @Tho07 and we refer the reader to that paper in case more detailed information about the parameter definitions is required. The best-fit models with and without dark matter halo are compared to the observations in Figs. \[fig:3414\] and \[fig:4822\]. In both galaxies the best-fit inclination is $i=90\degr$, but the 68 percent confidence regions include models at $i\ge70\degr$ (GMP3414) and $i\ge50\degr$ (GMP4822).
GMP3414 and GMP4822 were observed with the Wide Field Planetary Camera 2 (WFPC2) on board the [*HST*]{} as part of the [*HST*]{} proposal 10844 (PI: G. Wegner). For each galaxy, two exposures of 300 s each were taken with the F622W filter. Four other objects were previously observed as part of this proposal, and a full description of the respective observational parameters and the data analysis is given in @Cor08. A table with the final photometric parameters will be published in the journal version of this paper.
![image](f10a.eps){width="95mm"} ![image](f10b.eps){width="95mm"}
![image](f11a.eps){width="95mm"} ![image](f11b.eps){width="95mm"}
[99]{} Bell, E. F., & De Jong, R. S. 2001, , 550, 212 Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton University Press) Binney, J., Gerhard, O., & Silk, J. 2001, , 321, 471 Blumenthal, G. R., Faber, S. M., Flores, R., Primack, J. R. 1986, , 301, 27 Cappellari, M., et al. 2006, , 366, 1126 Chanam[é]{}, J., Kleyna, J., & van der Marel, R. 2008, , 682, 841 Conselice, C. J., Blackburne, J. A., & Papovich, C. 2005, , 620, 564 Corsini, E. M., Wegner, G., Saglia, R. P., Thomas, J., Bender, R., Thomas, D. 2008, , 175, 462 Cretton, N., de Zeeuw, P. T., van der Marel, R. P., & Rix, H.-W. 1999, , 124, 383 De Lorenzi, F., Debattista, V. P., Gerhard, O., & Sambhus, N. 2007, , 376, 71 De Lucia, G., Springel, V., White, S. D. M., Croton, D., Kauffmann, G. 2006, , 366, 499 De Lucia, G., & Blaizot, J. 2007, , 375, 2 Gebhardt, K., et al. 2000, , 119, 1157 Gebhardt, K., et al. 2003, , 583, 92 Gerhard, O., Jeske, G., Saglia, R. P., & Bender, R. 1998, , 295, 197 Gerhard, O. E., Kronawitter, A., Saglia, R. P., Bender, R. 2001, , 121, 1936 Godwin, J. G., Metcalfe, N., & Peach, J. V. 1983, , 202, 113 Gunn, J. E., & Gott, J. R. I. 1972, , 176, 1 Jørgensen, I., Franx, M., & Kjærgard, P. 1995, , 273, 1097 Jørgensen, I., Franx, M., & Kjærgard, P. 1996, , 280, 167 Kormendy, J., & Freeman, K. C. 2004, in IAU Symp. 220, Dark Matter in Galaxies, ed. S. D. Ryder et al. (San Francisco: ASP), 377 Kronawitter, A., Saglia, R. P., Gerhard, O. E., Bender, R. 2000, , 144, 53 Mehlert, D., Saglia, R. P., Bender, R., Wegner, G. 2000, , 141, 449 Mehlert, D., Thomas, D., Saglia, R. P., Bender, R., Wegner, G. 2003, , 407, 423 Navarro, J. F., Frenk, C. S., & White, S. D. M. 1996, , 462, 563 Persic, M., Salucci, P., & Stel, F. 1996a, , 281, 27 Persic, M., Salucci, P., & Stel, F. 1996b, , 283, 1102 Press, W. H., Teukoslky, S. A., Vetterling, W. T., Flannery, B. P. 1992, Numerical Recipes in FORTRAN 77, 2nd edn. Cambridge Univ. Press, Cambridge Saglia, R. P., Burstein, D., Baggley, G., Davies, R. L., Bertschinger, E., Colless, M. M., McMahan, R. K., Jr., & Wegner, G. 1997, , 292, 499 Schwarzschild, M. 1979, , 232, 236 Simien, F., & De Vaucouleurs, G. 1986, , 302, 564 Springel, V., et al. 2005, , 435, 629 Terlevich, A. I., & Forbes, D. A. 2002, , 330, 547 Thomas, D., Maraston, C., Bender, R., Mendes de Oliveira, C. 2005a, , 621, 673 Thomas, J., Saglia, R. P., Bender, R., Thomas, D., Gebhardt, K., Magorrian, J., Richstone, D. 2004, , 353, 391 Thomas, J., Saglia, R. P., Bender, R., Thomas, D., Gebhardt, K., Magorrian, J., Corsini, E. M., Wegner, G. 2005b, , 360, 1355 Thomas, J. 2006, Ph.D. thesis (Univ. of Munich) Thomas, J., Saglia, R. P., Bender, R., Thomas, D., Gebhardt, K., Magorrian, J., Corsini, E. M., Wegner, G. 2007, , 382, 657 Thomas, J., Jesseit, R., Naab, T., Saglia, R. P., Burkert, A., & Bender, R. 2007, , 381, 1672 Trager, S. C., Faber, S. M., Worthey, G., & Gonz[á]{}lez, J. J. 2000, , 119, 1645 Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., & Dekel, A. 2002, , 568, 52 Valluri, M., Merritt, D., & Emsellem, E. 2004, , 602, 66 Wegner, G., Corsini, E. M., Saglia, R. P., Bender, R., Merkl, D., Thomas, D., Thomas, J., Mehlert, D. 2002, , 395, 753 van den Bosch, R. C. E., van de Ven, G., Verolme, E. K., Cappellari, M., & de Zeeuw, P. T. 2008, , 385, 647
[lcccccccc]{}\
$\frac{{r_h}}{{\mathrm{kpc}}}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $1.24 \pm 0.14$ & $0.55 \pm 0.26$& 1.24 & 0.35 & 0.39 & 0.010&\
$\frac{{r_h}}{{\mathrm{kpc}}}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $0.71 \pm 0.12$ & $0.90 \pm 0.28$& 0.85 & 0.31 & 0.39 & 0.002&\
$\frac{{v_h}}{{\mathrm{km \, s^{-1}}}}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $0.52 \pm 0.12$ & $0.07 \pm 0.23$& 1.62 & 0.19 & 0.21 & 0.109&\
$\frac{{v_h}}{{\mathrm{km \, s^{-1}}}}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $0.33 \pm 0.07$ & $0.45 \pm 0.16$& 1.04 & 0.17 & 0.21 & 0.002&\
$\frac{{\rho_h}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{{r_h}}{{\mathrm{kpc}}}$ & $0.67 \pm 0.64$ & $-1.99 \pm 0.57$& 0.41 & 0.41 & 0.66 & 0.001&\
$\frac{{r_h}}{{\mathrm{kpc}}}$ & $\frac{{r_\mathrm{eff}}}{{\mathrm{kpc}}}$ & $0.48 \pm 0.23$ & $0.79 \pm 0.32$& 1.15 & 0.31 & 0.39 & 0.026&\
$\frac{{\rho_h}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $-1.87 \pm 0.18$ & $-1.28 \pm 0.33$& 0.75 & 0.47 & 0.66 & 0.019& \[fig:rho\_l\]a\
$\frac{{\rho_h}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $-0.77 \pm 0.19$ & $-1.57 \pm 0.38$& 0.94 & 0.52 & 0.66 & 0.058& \[fig:rho\_l\]b\
$\frac{{\left< \rho_\mathrm{DM}\right>}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $-2.36 \pm 0.14$ & $-1.56 \pm 0.24$& 1.31 & 0.38 & 0.32 & 0.004& \[fig:rho\_l\]c\
$\frac{{\left< \rho_\mathrm{DM}\right>}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $-1.10 \pm 0.11$ & $-1.57 \pm 0.24$& 1.79 & 0.44 & 0.32 & 0.025& \[fig:rho\_l\]d\
\
\
$\frac{{r_h}}{{\mathrm{kpc}}}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $1.54 \pm 0.21$ & $0.63 \pm 0.33$& 0.16 & 0.16 & 0.44 & 0.002&\[fig:rhvh\]a\
$\frac{{r_h}}{{\mathrm{kpc}}}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $0.98 \pm 0.14$ & $0.54 \pm 0.29$& 0.18 & 0.17 & 0.44 & 0.007&\[fig:rhvh\]b\
$\frac{{v_h}}{{\mathrm{km \, s^{-1}}}}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $0.78 \pm 0.11$ & $0.21 \pm 0.21$& 0.14 & 0.07 & 0.24 & 0.016&\[fig:rhvh\]c\
$\frac{{v_h}}{{\mathrm{km \, s^{-1}}}}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $0.59 \pm 0.10$ & $0.19 \pm 0.17$& 0.12 & 0.07 & 0.24 & 0.008&\[fig:rhvh\]d\
$\frac{{\rho_h}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{{r_h}}{{\mathrm{kpc}}}$ & $0.68 \pm 1.35$ & $-1.64 \pm 0.96$& 0.04 & 0.18 & 0.75 & 0.006&\[fig:rhoh\_rh\]\
$\frac{{r_h}}{{\mathrm{kpc}}}$ & $\frac{{r_\mathrm{eff}}}{{\mathrm{kpc}}}$ & $0.79 \pm 0.21$ & $0.62 \pm 0.30$& 0.10 & 0.14 & 0.44 & 0.017&\[fig:rh\_reff\]\
$\frac{{\rho_h}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $-1.81 \pm 0.36$ & $-1.02 \pm 0.54$& 0.31 & 0.34 & 0.75 & 0.028&\
$\frac{{\rho_h}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $-0.94 \pm 0.17$ & $-0.81 \pm 0.43$& 0.31 & 0.36 & 0.75 & 0.090&\
$\frac{{\left< \rho_\mathrm{DM}\right>}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{L_B}{10^{11} \, {L_\odot}}$ & $-2.01 \pm 0.20$ & $-1.04 \pm 0.30$& 0.77 & 0.31 & 0.41 & 0.012&\
$\frac{{\left< \rho_\mathrm{DM}\right>}}{{M_\odot}{\mathrm{pc}}^3}$ & $\frac{{M_\ast}}{10^{11} \, {M_\odot}}$ & $-1.12 \pm 0.11$ & $-0.74 \pm 0.25$& 0.86 & 0.33 & 0.41 & 0.020&\
[lcccccc]{} 0144 & $-2.24 \pm 0.08$ & $1.23^{+1.11}_{-0.56}$ & $-2.00 \pm 0.08$ & $1.56^{+1.56}_{-0.64} $ & $5.8 \pm 0.5$ & $0.60^{+0.08}_{-0.07}$\
0282 & $-1.51 \pm 0.09$ & $2.66^{+1.83}_{-0.92}$ & $-1.98 \pm 0.09$ & $1.60^{+1.30}_{-0.65} $ & $7.7 \pm 0.8$ & $0.95^{+0.20}_{-0.17}$\
0756 & $-2.04 \pm 0.04$ & $1.92^{+1.46}_{-0.73}$ & $-1.89 \pm 0.04$ & $1.79^{+1.40}_{-0.70} $ & $3.1 \pm 0.2$ & $0.26^{+0.02}_{-0.02}$\
1176 & $-1.40 \pm 0.07$ & $2.77^{+1.89}_{-0.94}$ & $-1.61 \pm 0.07$ & $2.47^{+1.74}_{-0.87} $ & $3.3 \pm 0.4$ & $0.28^{+0.04}_{-0.04}$\
1750 & $-1.50 \pm 0.31$ & $3.15^{+2.08}_{-1.04}$ & $-1.80 \pm 0.31$ & $1.99^{+1.50}_{-0.75} $ & $11.3 \pm 1.7$ & $2.48^{+2.90}_{-0.98}$\
2417 & $-1.62 \pm 0.66$ & $2.57^{+1.79}_{-0.89}$ & $-2.01 \pm 0.66$ & $1.54^{+1.27}_{-0.64} $ & $11.5 \pm 2.4$ & $2.65^{+\infty}_{-1.33}$\
2440 & $-1.01 \pm 0.11$ & $4.07^{+2.54}_{-1.27}$ & $-1.69 \pm 0.11$ & $2.25^{+1.63}_{-0.81} $ & $13.5 \pm 2.1$ & $8.17^{+\infty}_{-5.61}$\
3414 & $-1.35 \pm 0.20$ & $2.64^{+1.82}_{-0.91}$ & $-1.60 \pm 0.20$ & $2.50^{+1.75}_{-0.88} $ & $11.2 \pm 2.7$ & $2.40^{+\infty}_{-1.25}$\
3510 & $-1.60 \pm 0.30$ & $2.27^{+1.64}_{-0.82}$ & $-2.02 \pm 0.30$ & $1.52^{+1.26}_{-0.63} $ & $14.2 \pm 1.6$ & $>4.26$\
3792 & $-1.24 \pm 0.40$ & $3.76^{+2.38}_{-1.19}$ & $-1.83 \pm 0.40$ & $1.93^{+1.47}_{-0.73} $ & $13.4 \pm 2.1$ & $7.33^{+\infty}_{-4.84}$\
3958 & $-1.11 \pm 0.42$ & $2.69^{+1.84}_{-0.92}$ & $-1.82 \pm 0.42$ & $1.97^{+1.48}_{-0.74} $ & &\
4822 & $-1.43 \pm 0.83$ & $3.30^{+2.15}_{-1.08}$ & $-1.65 \pm 0.83$ & $2.40^{+1.61}_{-0.77} $ & $11.2 \pm 1.3$ & $2.40^{+1.61}_{-0.77}$\
4928 & $-2.05 \pm 0.40$ & $2.14^{+1.57}_{-0.78}$ & $-2.08 \pm 0.40$ & $1.42^{+1.21}_{-0.61} $ & $14.5 \pm 1.4$ & $>5.70$\
5279 & $-1.86 \pm 0.49$ & $2.13^{+1.57}_{-0.78}$ & $-2.08 \pm 0.49$ & $1.42^{+1.21}_{-0.60} $ & $10.9 \pm 0.8$ & $2.16^{+0.67}_{-0.45}$\
5568 & $-2.47 \pm 0.56$ & $1.01^{+1.00}_{-0.50}$ & $-2.40 \pm 0.56$ & $0.89^{+0.95}_{-0.47} $ & $9.6 \pm 0.6$ & $1.50^{+0.26}_{-0.21}$\
5975 & $-1.26 \pm 0.09$ & $3.49^{+2.24}_{-1.12}$ & $-1.52 \pm 0.09$ & $2.70^{+1.85}_{-0.93} $ & $5.7 \pm 1.2$ & $0.58^{+0.20}_{-0.16}$\
[lccccccc]{} GMP3414 & SC & $6.0$ & & & & & $0.490$\
& LOG & $4.5$ & $9.7$ & $356$ & & & $0.239$\
& NFW & $4.0$ & & & $15.30$ & $1.0$ & $0.238$\
GMP4822 & SC & $6.5$ & & & & & $0.259$\
& LOG & $5.5$ & $13.1$ & $552$ & & & $0.229$\
& NFW & $5.0$ & & & $6.71$ & $1.0$ & $0.232$
[^1]: Fits for this paper are performed with the routine fitexy of @press.
[^2]: Because of the large core-radii in some galaxies (cf. Fig. \[fig:rhvh\]c,d) it is not always possible to find $M_i$ for logarithmic halos. Therefore, here we only consider the best-fit NFW-halo of each galaxy.
---
abstract: 'We prove that the number of real intersection points of a real line with a real plane curve defined by a polynomial with at most $t$ monomials is either infinite or does not exceed $6t -7$. This improves a result by M. Avendaño. Furthermore, we prove that this bound is sharp for $t = 3$ with the help of Grothendieck’s dessins d’enfant.'
address:
- |
Laboratoire de Mathématiques\
Université Savoie Mont Blanc\
73376 Le Bourget-du-Lac Cedex\
France
- |
Laboratoire de Mathématiques\
Université Savoie Mont Blanc\
73376 Le Bourget-du-Lac Cedex\
France
author:
- Frédéric Bihan
- Boulos El Hilany
title: A sharp bound on the number of real intersection points of a sparse plane curve with a line
---
Introduction
============
The problem of estimating the number of real solutions of a system of polynomial equations is ubiquitous in mathematics and has obvious practical motivations. Fundamental notions like the degree or mixed volume give good estimates for the number of complex solutions of polynomial systems. However, these estimates can be rough for the number of real solutions when the equations have few monomials or a special structure (see [@S]). In the case of a single non-zero polynomial in one variable, this is a consequence of Descartes’ rule of signs which implies that the number of real roots is bounded by $2t-1$, where $t$ is the number of non-zero terms of the polynomial. Generalizations of Descartes’ bound for polynomial and more general systems have been obtained by A. Khovanskii [@Kho]. The resulting bounds for polynomial systems have been improved by F. Bihan and F. Sottile [@BS], but still very few optimal bounds are known, even in the case of two polynomial equations in two variables. Polynomial systems in two variables where one equation has three non-zero terms and the other equation has $t$ non-zero terms have been studied by T.Y. Li, J.-M. Rojas and X. Wang [@LRW]. They showed that such a system, allowing real exponents, has at most $2^t-2$ non-degenerate solutions contained in the positive orthant. This exponential bound has recently been refined into a polynomial one by P. Koiran, N. Portier and S. Tavenas [@KPT]. The authors of [@LRW] also showed that for $t=3$ the sharp bound is five. Systems of two trinomial equations with five non-degenerate solutions in the positive orthant are in a sense rare [@DRRS]. Later, M. Avendaño [@A] considered systems of two polynomial equations in two variables, where the first equation has degree one and the other equation has $t$ non-zero terms. He showed that such a system has either an infinite number of real solutions or at most $6t-4$ real solutions. Here all solutions are counted with multiplicities, with the exception of the solutions on the real coordinate axes which are counted at most once. This reduces to counting the number of real roots of a polynomial $f(x,ax+b)$, where $a,b \in {\mathbb{R}}$ and $f \in {\mathbb{R}}[x,y]$ has at most $t$ non-zero terms. The question of optimality was not addressed in [@A] and this was the motivation for the present paper. We prove the following result.
\[MainTh\] Let $f \in {\mathbb{R}}[x,y]$ be a polynomial with at most $t$ non-zero terms and let $a,b$ be any real numbers. Assume that the polynomial $g(x)=f(x,ax+b)$ is not identically zero. Then $g$ has at most $6t - 7$ real roots counted with multiplicities except for the possible roots 0 and $-b/a$ that are counted at most once.
At first glance this looks like a slight improvement of the main result of [@A]. In fact, our bound is optimal at least for $t=3$.
\[T:optimal\] The maximal number of real intersection points of a real line with a real plane curve defined by a polynomial with three non-zero terms is eleven.
Explicitly, the real curve with equation $$\label{E:equation}
-0.002404 \, xy^{18}+29 \, x^6y^3+x^3y=0$$ intersects the real line $y=x+1$ in precisely eleven points in ${\mathbb{R}}^2$.
The strategy to construct this example is first to deduce from the proof of Theorem \[MainTh\] some necessary conditions on the monomials of the desired equation. Then, the use of real Grothendieck’s dessins d’enfant [@B; @Br; @O] helps to test the feasibility of certain monomials. Ultimately, computer experimentation leads to the precise equation (\[E:equation\]).
Preliminary results
===================
We present some results of M. Avendaño [@A] and add other ones. Consider a non-zero univariate polynomial $f(x) = \sum_{i=0}^d a_ix^i$ with real coefficients. Denote by $V(f)$ the number of sign changes in the ordered sequence $(a_0,\ldots,a_d)$, disregarding the zero terms. Recall that the famous Descartes’ rule of signs asserts that the number of (strictly) positive roots of $f$ counted with multiplicities does not exceed $V(f)$.
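To make the definition concrete, here is a minimal sketch (plain Python, coefficients listed by increasing degree) of the sign-change count $V(f)$; the example polynomial is ours, chosen only to illustrate Descartes' bound.

```python
def sign_changes(coeffs):
    """V(f): number of sign changes in (a_0, ..., a_d), zero terms disregarded."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# Descartes' rule of signs: f(x) = 1 - 2x - x^2 + x^3 has V(f) = 2,
# hence at most two positive roots counted with multiplicities
# (it has exactly two, one in ]0,1[ and one in ]1,2[).
print(sign_changes([1, -2, -1, 1]))  # -> 2
```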
[ [@A]]{} \[L:V\] We have $V((x+1)f)\leq V(f)$.
The following result is straightforward.
[@A]\[AvRem\] If $f, g\ \in \mathbb{R}[x]$ and $g$ has $t$ terms, then $V(f+g)\leq V(f) + 2t$.
Denote by ${\mathcal{N}}(h)$ the Newton polytope of a polynomial $h$ and by ${\overset{\circ}{\mathcal{N}}}(h)$ the interior of ${\mathcal{N}}(h)$.
\[MyTh1\] If $f, g \in \mathbb{R}[x]$, $g$ has $t$ terms and $V(f+g)=V(f) + 2t$, then ${\mathcal{N}}(g)$ is contained in ${\overset{\circ}{\mathcal{N}}}(f)$.
Assume that ${\mathcal{N}}(g)$ is not contained in ${\overset{\circ}{\mathcal{N}}}(f)$. Writing $f(x) = \sum_{i=1}^s{a_ix^{\alpha_i}}$ and $g(x) = \sum\limits_{j=1}^t{b_jx^{\beta_j}}$ with $0\leq\alpha_1 < \cdots< \alpha_s$ and $0\leq \beta_1< \cdots < \beta_t$, we get $\beta_1 \leq \alpha_1$ or $\alpha_s \leq \beta_t$. Assume that $\beta_1 \leq \alpha_1$ (the case $\alpha_s \leq \beta_t$ is symmetric). Then, obviously $$V(f(x)+g(x)) \leq 1+V(f(x)+g(x)-b_1x^{\beta_1}).$$ By Lemma \[AvRem\] we have $$V(f(x)+g(x)-b_1x^{\beta_1}) \leq V(f)+2(t-1).$$ All together this gives $V(f+g) \leq 1+V(f)+2(t-1)=V(f)+2t-1$.
[ [@A]]{} \[AvMainRes\] If $f\ \in \mathbb{R}[x,y]$ has $t$ non-zero terms, then $$V(f(x,x+1))\leq 2t-2.$$
Write $f(x,y)=\sum_{k=1}^{n}{a_{k}(x)y^{\alpha_k}}$, with $ 0 \leq\alpha_1< \cdots <\alpha_n$ and $\ a_k(x)\in\mathbb{R}[x]$. Denote by $t_{k}$ the number of non-zero terms of $a_k(x)$. Define $$f_k(x,y) = \sum_{j=k}^n a_j(x)y^{\alpha_j-\alpha_k} \, , \; k=1,\ldots,n,$$ and $f_{n+1}=0$. Then $f_k(x,x+1)=(x+1)^{\alpha_{k+1}-\alpha_k}f_{k+1}(x,x+1)+a_k(x)$ for $k=1, \ldots, n-1$ and $f_n(x,x+1)=a_n(x)$. Therefore, $V(f_k(x,x+1))\leq V(f_{k+1}(x,x+1)) + 2t_{k}$ for $k=1,\ldots,n-1$ by Lemma \[L:V\] and Lemma \[AvRem\], while $V(f_n(x,x+1))=V(a_n(x)) \leq t_n-1 \leq 2t_n-2$. Finally, $V(f(x,x+1))\leq V(f_1(x,x+1))$ since $f(x,x+1)=(x+1)^{\alpha_1}f_1(x,x+1)$. We conclude that $V(f(x,x+1))\leq -2 + 2(t_{1}+\cdots+t_{n}) =2t-2$.
\[MyTh\] Let $f\ \in \mathbb{R}[x,y]$ be a polynomial with $t$ non-zero terms. Write it as $f(x,y) = \sum_{i=1}^t b_ix^{\beta_i}y^{\gamma_i}$ with $0 \leq \gamma_1 \leq \gamma_2 \leq \cdots \leq \gamma_t$. If $V(f(x,x+1))=2t-2$, then $${\mathcal{N}}(b_ix^{\beta_i}(x+1)^{\gamma_i}) \subset {\overset{\circ}{\mathcal{N}}}(b_tx^{\beta_t}(x+1)^{\gamma_t})$$ (in other words, $ \beta_t < \beta_i \leq \beta_i+\gamma_i < \beta_t+\gamma_t$) for $i=1,\ldots,t-1$.
We use the proof of Proposition \[AvMainRes\] keeping its notations. Write $f(x,y)=\sum_{k=1}^{n}{a_{k}(x)y^{\alpha_k}}$ with $ 0 \leq\alpha_1< \cdots <\alpha_n$ and assume that $V(f(x,x+1))=2t-2$. It follows from the proof of Proposition \[AvMainRes\] that $$\label{E:sharp}
V(f_k(x,x+1))= V(f_{k+1}(x,x+1)) + 2t_{k} \, , \quad k=1,\ldots,n.$$ Recall that $f_k(x,x+1)=(x+1)^{\alpha_{k+1}-\alpha_k}f_{k+1}(x,x+1)+a_k(x)$ for $k \leq n-1$. By Lemma \[MyTh1\] and (\[E:sharp\]) we get ${\mathcal{N}}(a_k(x)) \subset {\overset{\circ}{\mathcal{N}}}((x+1)^{\alpha_{k+1}-\alpha_k}f_{k+1}(x,x+1))$ and thus $$\label{NewtPol1}
{\mathcal{N}}(a_k(x)(x+1)^{\alpha_{k}}) \subset {\overset{\circ}{\mathcal{N}}}((x+1)^{\alpha_{k+1}}f_{k+1}(x,x+1))$$ for $k=1,\ldots,n-1$. We now show by induction on $n-k \geq 1$ that $$\label{NewtPol5}
{\overset{\circ}{\mathcal{N}}}((x+1)^{\alpha_{k+1}}f_{k+1}(x,x+1))\ \subset {\overset{\circ}{\mathcal{N}}}(a_n(x)(x+1)^{\alpha_n}).$$ Together with (\[NewtPol1\]) this will imply ${\mathcal{N}}(a_k(x)(x+1)^{\alpha_{k}}) \subset {\overset{\circ}{\mathcal{N}}}(a_n(x) (x+1)^{\alpha_{n}})$ for $k=1,\ldots,n-1$, and thus ${\mathcal{N}}(b_ix^{\beta_i}(x+1)^{\gamma_i}) \subset {\overset{\circ}{\mathcal{N}}}(b_tx^{\beta_t}(x+1)^{\gamma_t})$ for $i=1,\ldots,t-1$. For $n-k=1$ the inclusion (\[NewtPol5\]) is obvious. Since $f_k(x,x+1)=(x+1)^{\alpha_{k+1}-\alpha_k}f_{k+1}(x,x+1)+a_k(x)$ and ${\mathcal{N}}(a_k(x)) \subset {\overset{\circ}{\mathcal{N}}}((x+1)^{\alpha_{k+1}-\alpha_k}f_{k+1}(x,x+1))$, we get ${\overset{\circ}{\mathcal{N}}}(f_k(x,x+1))={\overset{\circ}{\mathcal{N}}}((x+1)^{\alpha_{k+1}-\alpha_k}f_{k+1}(x,x+1))$. Assuming (\[NewtPol5\]) is true for $k$ (induction hypothesis), this immediately gives ${\overset{\circ}{\mathcal{N}}}((x+1)^{\alpha_{k}}f_k(x,x+1)) \subseteq {\overset{\circ}{\mathcal{N}}}(a_n(x)(x+1)^{\alpha_{n}})$ and thus (\[NewtPol5\]) is proved for $k-1$.
Proof of Theorem \[MainTh\]
===========================
We first recall the proof of the bound $6t-4$ in [@A]. Let $f(x,y)= \sum_{i=1}^{t}{b_ix^{\beta_i}y^{\gamma_i}}\in\mathbb{R}[x,y]$ be a polynomial with at most $t$ non-zero terms, and let $a$, $b$ $\in \mathbb{R}$. Set $g(x)=f(x,ax+b)$. If $a=0$ or $b=0$, then $g$ is a univariate polynomial with at most $t$ non-zero terms and Descartes’ rule of signs implies that either $g=0$ or $g$ has at most $2t-1\leq 6t-4$ real roots (counted with multiplicities except for the possible root $0$). If $ab \neq 0$, then the real roots of $f(x,ax+b)$ correspond bijectively to the real roots of $f(bx/a,b(x+1))=\hat{f}(x,x+1)$, where $\hat{f}(x,y)=\sum_{i=1}^{t}{b_ia^{-\beta_i}b^{\beta_i+\gamma_i}x^{\beta_i}y^{\gamma_i}}$. Since this bijection preserves multiplicities and maps the possible roots $0$ and $-b/a$ of $g$ to the roots $0$ and $-1$ of $\hat{f}(x,x+1)$, it suffices to consider the case $a=b=1$, i.e. $g(x)=f(x,x+1)$. So we now consider $g(x)=f(x,x+1)$. Assume that $g\neq 0$ and denote by $d$ the degree of $g$.
Descartes’ rule of signs and Proposition \[AvMainRes\] imply that the number of positive roots of $g$ counted with multiplicities is at most $2t-2$. The roots of $g$ in $]-\infty ,-1[$ correspond bijectively to the positive roots of $g(-1-x)=f(-1-x,-x)=\sum_{i=1}^{t}{b_i(-1)^{\beta_i+\gamma_i}x^{\gamma_i} (x+1)^{\beta_i}}$. Therefore, by Proposition \[AvMainRes\] the number of roots (counted with multiplicities) of $g$ in $]-\infty , -1[$ cannot exceed $2t-2$. Finally, the roots of $g$ in $]-1,0[$ correspond bijectively to the positive roots of $ (x+1)^{d}g(\frac{-x}{x+1})=
(x+1)^{d}f(\frac{-x} {x+1},\frac{1}{x+1})=\sum_{i=1}^{t}{b_i(-1)^{\beta_i}x^{\beta_i}(x+1)^{d-\beta_i-\gamma_i}}$. Thus, by Proposition \[AvMainRes\] there are at most $2t-2$ such roots. All together, this leads to the conclusion that $g$ has at most $3(2t-2)+2=6t -4$ real roots counted with multiplicities except for the possible roots 0 and $-1$ that are counted at most once.
We now start the proof of Theorem \[MainTh\]. Set $I_1=]0,+\infty[$, $ I_2=]-\infty,-1[$ and $I_3=]-1,0[$. For $h \in {\mathbb{R}}[x]$ define $$V_{I_1}(h)=V(h) \, , \quad V_{I_2}(h)=V(h(-1-x)) \quad \mbox{and}$$ $$V_{I_3}(h)=V((x+1)^{\deg(h)}h(\frac{-x} {x+1})).$$
By Descartes’ rule of signs the number of roots of $h$ in $I_i$ does not exceed $V_{I_i}(h)$. To prove Theorem \[MainTh\], it suffices to show that $$\label{E:toshow}
V_{I_1}(g)+V_{I_2}(g)+V_{I_3}(g) \leq 3(2t-2)-3$$ Define polynomials $$h_1(x)=x^dh(\frac{1}{x}) \; , \quad h_2(x)=(x+1)^dh(\frac{-x}{x+1})
\quad \mbox{and} \quad h_3(x)=h(-1-x)$$ so that $V_{I_1} (h_1)=V_{I_1}(h)$, $V_{I_1} (h_2)=V_{I_3}(h)$ and $V_{I_1} (h_3)=V_{I_2}(h)$.
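For readers who want to experiment with these quantities, the following sketch (using sympy; not part of the proof, and the example trinomial is ours) computes the three Descartes bounds $V_{I_1}(h)$, $V_{I_2}(h)$ and $V_{I_3}(h)$ via the substitutions above.

```python
import sympy as sp

x = sp.symbols('x')

def V(p):
    """Sign changes in the coefficient sequence of p, zero terms disregarded."""
    coeffs = [c for c in sp.Poly(sp.expand(p), x).all_coeffs() if c != 0]
    return sum(1 for a, b in zip(coeffs, coeffs[1:]) if a * b < 0)

def descartes_bounds(h):
    """Bounds on the roots of h in I1 = ]0,+inf[, I2 = ]-inf,-1[, I3 = ]-1,0[."""
    d = sp.degree(h, x)
    h3 = h.subs(x, -1 - x)                                  # controls I2
    h2 = sp.cancel((x + 1) ** d * h.subs(x, -x / (x + 1)))  # controls I3
    return V(h), V(h3), V(h2)

# Example: g(x) = f(x, x+1) for a trinomial f; each bound is at most 2t-2 = 4.
g = sp.expand(-(x + 1) ** 5 + 3 * x * (x + 1) ** 2 + x ** 2)
print(descartes_bounds(g))
```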
\[L:symmetry\] For any $i,j,k$ such that $\{i,j,k\}=\{1,2,3\}$, we have $$V_{I_i}(h_i)=V_{I_i}(h) \quad\mbox{and} \quad V_{I_i}(h_j)=V_{I_k}(h)$$
We have $h_1(-x-1)=(-1)^d (x+1)^d h(-\frac{1}{x+1})$. Thus $V(h_1(-x-1))=V((x^{-1}+1)^d h(-\frac{1}{x^{-1}+1}))=V((\frac{x+1}{x})^dh(-\frac{x}{x+1}))=
V((x+1)^dh(-\frac{x}{x+1}))$, and we get $V_{I_2}(h_1)=V_{I_3}(h)$. We have $(x+1)^d h_1(-\frac{x}{x+1})=(-x)^dh(-1-x^{-1})$ from which we obtain $V_{I_3}(h_1)=V_{I_2}(h)$.
Equalities $V_{I_2}(h_2)=V_{I_2}(h)$ and $V_{I_3}(h_2)=V_{I_1}(h)$ follow from $h_2(-1-x)=(-x)^d h(-1-x^{-1})$ and $(x+1)^dh_2(-\frac{x}{x+1})=h(x)$.
Finally, $V_{I_2}(h_3)=V_{I_1}(h)$ comes from $h_3(-x-1)=h(x)$ and $V_{I_3}(h_3)=V_{I_3}(h)$ is a consequence of $(x+1)^dh_3(-\frac{x}{x+1})=(x+1)^d h(-\frac{1}{x+1})$ and the equality $V((x+1)^d h(-\frac{1}{x+1}))=V_{I_3}(h)$ shown above.
We now proceed to the proof of (\[E:toshow\]). We already know that $V_{I_i}(g) \leq 2t-2$ for $i=1,2,3$. If $V_{I_i}(g) \leq 2t-3$ for all $i$, then (\[E:toshow\]) is trivially true. With the help of Lemma \[L:symmetry\], it suffices now to show that if $V_{I_1}(g)=2t-2$ then $V_{I_2}(g) \leq 2t-3$, $V_{I_3}(g) \leq 2t-3$, and $V_{I_2}(g)+V_{I_3}(g)< 2(2t-3)$. So assume $V_{I_1}(g)= 2t-2$. Then by Proposition \[MyTh\] $$\label{E:useful}
\beta_t < \beta_i \leq \beta_i+\gamma_i < \beta_t+\gamma_t \, , \quad i=1,\ldots,t-1.$$ We have $g(-1-x)=\sum_{i=1}^{t}
{b_i(-1)^{\beta_i+\gamma_i}x^{\gamma_i}(x+1)^{\beta_i}}$. Recall that $V_{I_2}(g)=V(g(-x-1)) \leq 2t-2$ by Proposition \[AvMainRes\]. From (\[E:useful\]), we get $\gamma_t >\gamma_i$ for $i=1,\ldots,t-1$. It follows then from Proposition \[MyTh\] that $V(g(-x-1)) \leq 2t-3$.
Write $g(-1-x)=\tilde{g}(-x-1)+ b_t(-1)^{\beta_t+\gamma_t}x^{\gamma_t}(x+1)^{\beta_t}$, and then $g(-1-x)(x+1)^{-\beta_t}=\tilde{g}(-x-1)(x+1)^{-\beta_t}+ b_t(-1)^{\beta_t+\gamma_t} x^{\gamma_t}$. We note that (\[E:useful\]) implies $\beta_t < \beta_i$ for $i=1,\ldots,t-1$, so that both members of the previous equality are polynomials. Moreover, from (\[E:useful\]) we also get $\beta_i-\beta_t+\gamma_i < \gamma_t$, and thus $\gamma_t$ does not belong to the Newton polytope of the polynomial $\tilde{g}(-x-1)(x+1)^{-\beta_t}$. It follows that $V(g(-1-x)(x+1)^{-\beta_t}) \leq V(\tilde{g}(-x-1)(x+1)^{-\beta_t})+1$. By Lemma \[L:V\] we have $V(g(-1-x)) \leq V(g(-x-1)(x+1)^{-\beta_t})$. Therefore, $V(g(-1-x)) \leq V(\tilde{g}(-x-1)(x+1)^{-\beta_t})+1$. On the other hand Proposition \[AvMainRes\] yields $V(\tilde{g}(-x-1)(x+1)^{-\beta_t}) \leq 2(t-1)-2=2t-4$.
Therefore, if $V(g(-1-x))=2t-3$, then $V(\tilde{g}(-x-1)(x+1)^{-\beta_t})=2t-4$, and we may apply Proposition \[MyTh\] to $\tilde{g}(-x-1)(x+1)^{-\beta_t}$ in order to get
$$\label{i0}
\gamma_{i_0} < \gamma_i \leq \gamma_i+\beta_i < \gamma_{i_0}+\beta_{i_0} \; \mbox{for all} \; i=1,\ldots,t-1 \; \mbox{and} \; i \neq i_0,$$
where $i_0$ is determined by $\beta_{i_0} \geq \beta_i$ for $i=1,\ldots,t-1$.
Starting with $g_1(x)=x^d g(1/x)=\sum_{i=1}^t b_i x^{d-\beta_i-\gamma_i}(x+1)^{\gamma_i}$ instead of $g$ in the previous computation, we obtain that if $V(g_1)=2t-2$ then $V_{I_2}(g_1) \leq 2t-3$ and if $V_{I_2}(g_1)=2t-3$, then the substitution of $d-\beta_i-\gamma_i$ for $\beta_i$ in (\[i0\]) holds true: $$\label{i1}
\gamma_{i_1} < \gamma_i \leq d-\beta_i < d-\beta_{i_1} \; \mbox{for all} \; i=1,\ldots,t-1 \; \mbox{and} \; i \neq i_1,$$ where $i_1$ is determined by $d-\beta_{i_1}-\gamma_{i_1} \geq d-\beta_i-\gamma_i$ for $i=1,\ldots,t-1$.
On the other hand, $V(g)=V(g_1)$ and $V(g_1(-x-1))=V_{I_2}(g_1)=V_{I_3}(g)$ by Lemma \[L:symmetry\]. Thus if $V(g)=2t-2$ then $V_{I_3}(g) \leq 2t-3$ and if $V_{I_3}(g)=2t-3$, then formula (\[i1\]) holds true. It turns out that (\[i0\]) and (\[i1\]) are incompatible. Indeed, if (\[i0\]) and (\[i1\]) hold true simultaneously, then $i_0=i_1$, but then (\[i1\]) implies that $\gamma_{i_0}+\beta_{i_0} < \gamma_i+\beta_i$ for all $i=1,\ldots,t-1$ with $i \neq i_0$, which contradicts (\[i0\]). Consequently, if $V(g)=V_{I_1}(g)=2t-2$, then $V_{I_2}(g) \leq 2t-3$, $V_{I_3}(g) \leq 2t-3$ and $V_{I_2}(g) +V_{I_3}(g) <2(2t-3)$.
Optimality
==========
We prove that the bound in Theorem \[MainTh\] is sharp for $t=3$ (Theorem \[T:optimal\]). We look for a polynomial $P \in {\mathbb{R}}[x,y]$ with three non-zero terms such that $P(x,x+1)$ has nine real roots distinct from $0$ and $-1$. It follows from the previous section that if such $P$ exists, then either $P(x,x+1)$ has three roots in each interval $I_1$, $I_2$ and $I_3$, or $P(x,x+1)$ has four roots in one interval, three roots in another interval, and two roots in the last one. We give necessary conditions for the second case, which thanks to Lemma \[L:symmetry\] reduces to the case where $P(x , x + 1)$ has four roots in $I_1=]0,+\infty[$, three roots in $I_3=]-1 ,0[$ and two roots in $I_2=]-\infty,-1[$.
Multiplication of $P$ by a monomial does not alter the roots of $P(x,x+1)$ in ${\mathbb{R}}\setminus \{0,-1\}$, so dividing by the smallest power of $x$, we may assume that $P$ has the following form $$P(x,y) = ay^{l_1} + bx^{k_2}y^{l_2} + x^{k_3}y^{l_3},$$ where $k_2$, $k_3$, $l_1$, $l_2$, $l_3$ are nonnegative integer numbers and $a,b$ are real numbers.
\[L:tech1\] If $P(x,x + 1)$ has four real positive roots, then $k_2 >0$, $k_3 >0$, $ l_1 >l_2 + k_2$ and $l_1>l_3 + k_3$.
If $P(x,x + 1)$ has four real positive roots, then $V(P(x , x + 1)) = 4$. Rewriting $P(x , x + 1) = \sum_{i=1}^3 b_ix^{\beta_i}(x + 1)^{\gamma_i}$ with $0 \leq \gamma_1 \leq
\gamma_2 \leq \gamma_3$, Proposition \[MyTh\] yields $\beta_3 < \beta_i \leq \beta_i + \gamma_i < \beta_3 + \gamma_3$ for $i = 1,2$. Since $k_2$ and $k_3$ are nonnegative, we get $\beta_3 = 0$, $k_2,k_3 >0$ and $\beta_3 + \gamma_3 = \gamma_3 = l_1$, so $l_1 >\max (l_2 + k_2 , l_3 + k_3)$.
Since $l_1>l_2$ and $l_1 > l_3$, we may divide $P(x,x+1)$ by $(x+1)^{l_2}$ or $(x+1)^{l_3}$ to get a polynomial equation with the same solutions in ${\mathbb{R}}\setminus \{0,-1\}$. So without loss of generality we may assume that $$\label{Preduced}
P(x,x+1)= a(x+1)^{l_1} + bx^{k_2}(x+1)^{l_2} + x^{k_3},$$ where $k_2, k_3 >0$, $l_2 \geq 0$, $l_1>k_2+l_2$ and $l_1 >k_3$.
\[NPlem\] Assume that the polynomial (\[Preduced\]) has four roots in $I_1$, and three roots in $I_3$ or $I_2$. Then $k_3$ does not belong to the interval $[k_2,k_2+l_2]$. Moreover, we have $a<0$ and $b>0$.
We prove that if $k_2 \leq k_3 \leq k_2 + l_2$, then (\[Preduced\]) has at most two roots in $I_2$ and in $I_3$.
The roots in $I_2$ are in bijection with the positive roots of $$\displaystyle P(-x -1 , -x) = (-1)^{l_1}ax^{l_1} + (-1)^{k_2 + l_2}bx^{l_2}(x + 1)^{k_2} +
(-1)^{k_3}(1 + x)^{k_3}.$$ Recall that $l_2 \geq 0$. If $k_2 \leq k_3 \leq k_2+l_2$ then Proposition \[MyTh\] yields $V((-1)^{k_2 + l_2}bx^{l_2}(x + 1)^{k_2} + (-1)^{k_3}(1 + x)^{k_3}) \leq 1$. Now, since $l_1 > k_2 + l_2$ and $l_1 >k_3$, we get $V( P(-x -1 , -x)) \leq 2$, and thus (\[Preduced\]) has at most two roots in $I_2$.
The roots in $I_3$ are in bijection with the positive roots of $$(1 + x)^{l_1}P(\frac{-x}{x + 1} , \frac{-x}{x + 1} + 1 )=
a + b(-1)^{k_2}x^{k_2}(1 + x)^{l_1 - k_2 -l_2} +
(-1)^{k_3}x^{k_3}(1 + x)^{l_1 - k_3}$$ From $k_3 \leq k_2+l_2$, we get $l_1-k_2-l_2 \leq l_1-k_3$. Thus, Proposition \[MyTh\] together with $k_2 \leq k_3$ yields $V(b(-1)^{k_2}x^{k_2}(1 + x)^{l_1 - k_2 -l_2} +
\displaystyle (-1)^{k_3}x^{k_3}(1 + x)^{l_1 - k_3})$ $\leq 1$. From $k_2,k_3>0$ we get $V((1 + x)^{l_1}P(\frac{-x}{x + 1} , \frac{-x}{x + 1} + 1 ))$ $\leq 2$, and thus (\[Preduced\]) has at most two roots in $I_3$.
Finally, if (\[Preduced\]) has four positive roots, then obviously $ab<0$. If $k_3$ does not belong to $[k_2,k_2+l_2]$ and $a>0$, then $V((x+1)^{l_1} + bx^{k_2}(x+1)^{l_2} + x^{k_3})=V((x+1)^{l_1} + bx^{k_2}(x+1)^{l_2})$ (recall that $k_2 \leq k_2+l_2 < l_1$). But the second sign variation is at most two by Proposition \[AvMainRes\]. We conclude that $a<0$ and $b>0$.
\[L:parity\] Assume that the polynomial (\[Preduced\]) has four roots in $I_1$, two roots in $I_2$ and three roots in $I_3$. Assume furthermore that $k_3 < k_2$. Then, $l_1$ is odd, $k_2$ is odd, $k_3$ is even and $l_2$ is even.
Since (\[Preduced\]) has exactly nine real roots counted with multiplicity, its degree $l_1$ is odd. We have already seen that if (\[Preduced\]) has four roots in $I_1=]0,+\infty[$, two roots in $I_2=]-\infty,-1[$ and three roots in $I_3=]-1 ,0[$, then $a<0$, $b>0$, $l_1>l_2 $ and $k_3 \notin [k_2,k_2+l_2]$. Assume from now on that $k_3 < k_2$.
Since (\[Preduced\]) has two roots in $I_2=]-\infty,-1[$, we have $V(P(-x -1 , -x)) \geq 2$, where $P(-x -1 , -x)=(-1)^{k_3}(1 + x)^{k_3} + (-1)^{k_2 + l_2}bx^{l_2}(x + 1)^{k_2} + (-1)^{l_1}ax^{l_1}$. But since $k_3 < k_2 \leq k_2+l_2 < l_1$, we get that $(-1)^{k_3} \cdot (-1)^{k_2 + l_2}b<0$ and $(-1)^{k_2 + l_2}b \cdot (-1)^{l_1}a <0$. Using $a<0$ and $b>0$, we obtain that $k_2+l_2$ is odd and $k_3$ is even.
Since (\[Preduced\]) has three roots in $I_3=]-1 ,0[$, we have $V((1 + x)^{l_1}P(\frac{-x}{x + 1} , \frac{-x}{x + 1} + 1 )) \geq 3$, where $(1 + x)^{l_1}P(\frac{-x}{x + 1} , \frac{-x}{x + 1} + 1 )= a + b(-1)^{k_2}x^{k_2}(1 + x)^{l_1 - k_2 -l_2} + (-1)^{k_3}x^{k_3}(1 + x)^{l_1 - k_3}$. We know that $k_3$ is even and that $b>0$. Thus in order to get coefficients with different signs in $b(-1)^{k_2}x^{k_2}(1 + x)^{l_1 - k_2 -l_2} + (-1)^{k_3}x^{k_3}(1 + x)^{l_1 - k_3}$, the integer $k_2$ should be odd. Since we know that $k_2+l_2$ is odd, this gives that $l_2$ is even.
Assume now that (\[Preduced\]) has four roots in $I_1$, two roots in $I_2$ and three roots in $I_3$. Then $a<0$, $b>0$ and $k_3$ does not belong to $[k_2,k_2+l_2]$ by Lemma \[NPlem\]. Assume that $k_3 < k_2$. Then $l_1$ is odd, $k_2$ is odd, $k_3$ is even and $l_2$ is even by Lemma \[L:parity\]. The roots of (\[Preduced\]) are solutions to the equation $f(x)=-a$, where $\displaystyle f(x)= bx^{k_2}(1 + x)^{l_2 - l_1} + x^{k_3}(1 + x)^{- l_1}$. Since the rational function $f$ has no pole outside $\{-1,0\}$, by Rolle’s Theorem its derivative has at least three roots in $I_1$, one root in $I_2$ and two roots in $I_3$. We compute that $f'(x)=0$ is equivalent to $\Phi(x)=1$, where $\Phi$ is the rational map $$\label{E:fraction}
\Phi(x)=\frac{-bx^{k_2 - k_3}(1 + x)^{l_2}A_1(x)}{A_2(x)},$$ with $A_1(x)=(k_2 + l_2 - l_1)x+k_2$ and $A_2(x)=(k_3 - l_1)x+k_3$. From $0 <k_3<k_2$, $l_2 \geq 0$ and $l_1>k_2+l_2$, we obtain that the roots of $A_1$ and $A_2$ satisfy $0 < \frac{k_3}{l_1 - k_3} < \frac{k_2}{l_1 - k_2 - l_2}$. Moreover, the roots of $\Phi$ are $-1$ with even multiplicity $l_2$, $0$ with odd multiplicity $k_2-k_3$ and the positive root of $A_1$ (which is a simple root of $\Phi$). The poles of $\Phi$ are the positive root of $A_2$ and the point at infinity which has multiplicity $\mbox{deg}(\Phi)-1$ if we homogenize $\Phi$ into a rational map from the Riemann sphere ${\mathbb{C}}P^1$ to itself.
We find exact values of the coefficients and exponents of (\[Preduced\]) in the following way. Note that the exponents of (\[E:fraction\]) are independent of $l_1$. We first choose small values $k_2=5$, $k_3=2$, $l_2=2$ satisfying the above parity conditions. Then, we look for a function
$$\label{E:candidate}
\varphi(x)=\frac{cx^3(x+1)^2(x-\rho_1)}{x-\rho_2},$$
such that $c$ is some real constant, $0 <\rho_2 <\rho_1$ and $\varphi(x)=1$ has three solutions in $I_1$, one solution in $I_2$ and two solutions in $I_3$.
The existence of such a function $\varphi$ is certified by Figure \[F:dessin\] thanks to the Riemann Uniformization Theorem. Figure \[F:dessin\] represents the intersection of the graph $\Gamma=\varphi^{-1}({\mathbb{R}}P^1)$ with one connected component of ${\mathbb{C}}P^1 \setminus {\mathbb{R}}P^1$ (the whole graph can be recovered from this intersection since it is invariant by complex conjugation). Such a graph is called [*real dessin d’enfant*]{} (see [@B; @Br; @O] for instance). Each connected component of ${\mathbb{C}}P^1 \setminus \Gamma$ (a disc) can be endowed with an orientation inducing the order $p<q<r$ for the three letters $p,q,r$ in its boundary so that two adjacent discs get opposite orientations. Choose coordinates on the target space ${\mathbb{C}}P^1$. Choose one connected component of ${\mathbb{C}}P^1 \setminus \Gamma$ and send it homeomorphically to one connected component of ${\mathbb{C}}P^1 \setminus {\mathbb{R}}P^1$ so that letters $p$ are sent to $(1:0)$, letters $q$ are sent to $(0:1)$ and letters $r$ to $(1:1)$. Do the same for each connected component of ${\mathbb{C}}P^1 \setminus \Gamma$ so that the resulting homeomorphisms extend to an orientation preserving continuous map $\varphi: {\mathbb{C}}P^1 \rightarrow {\mathbb{C}}P^1$. Note that two adjacent connected components of ${\mathbb{C}}P^1 \setminus \Gamma$ are sent to different connected components of ${\mathbb{C}}P^1 \setminus {\mathbb{R}}P^1$. The Riemann Uniformization Theorem implies that $\varphi$ is a real rational map for the standard complex structure on the target space and its pull-back by $\varphi$ on the source space. The degree of $\varphi$ is half the number of connected components of ${\mathbb{C}}P^1 \setminus \Gamma$ (a generic point on the target space has one preimage in each component of ${\mathbb{C}}P^1 \setminus \Gamma$ with the correct orientation). The critical points of $\varphi$ are the vertices of $\Gamma$, and the multiplicity of each critical point is half its valency. The letters $p$ are the inverse images of $(1:0)$, the letters $q$ are the inverse images of $(0:1)$ and the letters $r$ are inverse images of $(1:1)$. In Figure \[F:dessin\], we see three letters $p$ on $\Gamma$, two of them being critical points with multiplicities three and two respectively (the valencies are six and four, recall that Figure \[F:dessin\] shows only one half of $\Gamma$). We also see two letters $q$, one of them being a critical point of multiplicity five. Choose coordinates on the source space ${\mathbb{C}}P^1$ so that the critical point $p$ of multiplicity three has coordinates $(1:0)$, the other critical point $p$ has coordinates $(1:-1)$ and the critical point $q$ is the point with coordinates $(0:1)$. In standard affine coordinates (for both the source and the target spaces) of the chart where the first homogeneous coordinate does not vanish, any rational map whose real dessin d’enfant is as depicted in Figure \[F:dessin\] is defined by (\[E:candidate\]). From Figure \[F:dessin\], we see that $0 < \rho_2< \rho_1$ and that $\varphi$ has the desired number of inverse images (letters $r$) of $1$ in each interval $I_i$.
Now we want to identify (\[E:fraction\]) and (\[E:candidate\]). Recall that $k_2=5$, $k_3=2$, $l_2=2$ are fixed. We look at the function $\frac{x^3(x+1)^2(x-\rho_1)}{x-\rho_2}$, where $\rho_1=\frac{k_2}{l_1 - k_2 - l_2}$ and $\rho_2=\frac{k_3}{l_1 - k_3}$, and increase $l_1$ so that some level set of this function has three solutions in $I_1$, one solution in $I_2$ and two solutions in $I_3$. It turns out that $l_1 = 17$ is large enough and the level set gives the value $29$ for $b$. Finally, integrating $\Phi$ and choosing $a=-0.002404$, we get $$-0.002404(x+1)^{17}+29x^5(x+1)^2+x^2$$ for (\[Preduced\]). This polynomial has four roots in $I_1$, two roots in $I_2$ and three roots in $I_3$. This has been computed using SAGE version 6.6, which gives the following approximate roots: $0.18859$, $0.22206$, $0.25196$, $0.44416$ in $I_1$, $-3.96032$, $-1.15048$ in $I_2$, and $-0.61459$, $-0.58528$, $-0.03594$ in $I_3$.
Multiplying this polynomial by $x(x+1)$ gives a polynomial of the form $P(x,x+1)$ (where $P \in {\mathbb{R}}[x,y]$ has three non-zero terms) having eleven real roots.
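As an independent sanity check (not part of the proof), the root counts above can be reproduced numerically; the sketch below uses numpy floating-point root finding, so the imaginary-part tolerance used to decide which roots are real is a heuristic.

```python
import numpy as np
from numpy.polynomial import polynomial as P

one_plus_x = [1.0, 1.0]                      # coefficients of (x + 1), increasing degree
t1 = -0.002404 * P.polypow(one_plus_x, 17)   # -0.002404 (x+1)^17
t2 = P.polymul([0, 0, 0, 0, 0, 29.0], P.polypow(one_plus_x, 2))  # 29 x^5 (x+1)^2
q = P.polyadd(P.polyadd(t1, t2), [0, 0, 1.0])                    # ... + x^2

roots = P.polyroots(q)
real = np.sort(roots[np.abs(roots.imag) < 1e-6].real)
print(len(real))                                                 # 9 real roots
print((real > 0).sum(), (real < -1).sum(), ((real > -1) & (real < 0)).sum())  # 4 2 3
# Multiplying by x(x+1) adds the roots 0 and -1, for a total of eleven
# real intersection points of the curve with the line y = x + 1.
```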
[1]{}
M. Avendaño, *The number of roots of a lacunary bivariate polynomial on a line,* J. Symbolic Comput. 44 (2009), no. 9, 1280 - 1284.
F. Bihan, *Polynomial systems supported on circuits and dessins d’enfant*, Journal of the London Mathematical Society **75** (2007), no. 1, 116–132.
F. Bihan and F. Sottile, *[New fewnomial upper bounds from [G]{}ale dual polynomial systems]{}*, 2007, Moscow Mathematical Journal, Volume 7, Number 3.
E. Brugallé, *Real plane algebraic curves with asymptotically maximal number of even ovals*, 2006, Duke Mathematical Journal 131 (3):575–587.
A. Dickenstein, J.-M. Rojas, K. Rusek, J. Shih *Extremal real algebraic geometry and A-discriminants,* Mosc. Math. J. 7 (2007), no. 3, 425 - 452, 574.
A. Khovanskii, [*Fewnomials*]{}, Trans. of Math. Monographs, 88, AMS, 1991.
P. Koiran, N. Portier, S. Tavenas, *A Wronskian approach to the real τ-conjecture*, J. Symbolic Comput. 68 (2015), part 2, 195 - 214.
T.-Y. Li, J.-M. Rojas and X. Wang, *Counting real connected components of trinomial curve intersections and m-nomial hypersurfaces,* Discrete Comput. Geom. 30 (2003), no. 3, 379 - 414.
S. Yu Orevkov, *Riemann existence theorem and construction of real algebraic curves*, Ann. Fac. Sci. Toulouse Math. (6) [**12**]{} (2003), no. 4, 517–531.
F. Sottile, *Real Solutions to Equations from Geometry*, University Lecture Series. American Mathematical Society, Providence, RI (2011).
---
bibliography:
- 'bibliography.bib'
---
Introduction and Motivation
===========================
Over the course of the last ten years, deep learning models have proven to be highly effective in almost every machine learning task in different domains including (but not limited to) computer vision, natural language, speech, and recommendation. Their wide adoption in both research and industry has been greatly facilitated by increasingly sophisticated software libraries like Theano [@2016arXiv160502688short], TensorFlow [@tensorflow2015-whitepaper], Keras [@chollet2015keras], PyTorch [@paszke2017automatic], Caffe [@jia2014caffe], Chainer [@tokui2015chainer], CNTK [@seide2016cntk] and MXNet [@chen2015mxnet]. Their main value has been to provide tensor algebra primitives with efficient implementations which, together with the massively parallel computation available on GPUs, enabled researchers to scale training to bigger datasets. Those packages, moreover, provided standardized implementations of automatic differentiation, which greatly simplified model implementation. Researchers, without having to spend time re-implementing these basic building blocks from scratch and now having fast and reliable implementations of the same, were able to focus on models and architectures, which led to the explosion of new \*Net model architectures of the last five years. With artificial neural network architectures being applied to a wide variety of tasks, common practices regarding how to handle certain types of input information emerged. When faced with a computer vision problem, a practitioner pre-processes data using the same pipeline that resizes images, augments them with some transformation and maps them into 3D tensors. Something similar happens for text data, where text is tokenized either into a list of words or characters or word pieces, a vocabulary with associated numerical IDs is collected and sentences are transformed into vectors of integers. Specific architectures are adopted to encode different types of data into latent representations: convolutional neural networks are used to encode images and recurrent neural networks are adopted for sequential data and text (more recently self-attention architectures are replacing them). Most practitioners working on a multi-class classification task would project latent representations into vectors of the size of the number of classes to obtain logits and apply a *softmax* operation to obtain probabilities for each class, while for regression tasks, they would map latent representations into a single dimension by a linear layer, and the single score is the predicted value.
![image](img/model_definitions_examples.pdf){width="0.8\linewidth"}
Observing these emerging patterns led us to define abstract functions that identify classes of equivalence of model architectures. For instance, most of the different architectures for encoding images can be seen as different implementations of the abstract encoding function $T'_{h' \times w' \times c'} = e_\theta(T_{h \times w \times c})$ where $T_{dims}$ denotes a tensor with dimensions $dims$ and $e_\theta$ is an encoding function parametrized by parameters $\theta$ that maps from tensor to tensor. In tasks like image classification, $T'$ is pooled and flattened (i.e., a reduce function is applied spatially and the output tensor is reshaped as a vector) before being provided to, again, an abstract function that computes $T'_c=d_\theta(T_h)$ where $T_h$ is a one-dimensional tensor of hidden size $h$, $T'_c$ is a one-dimensional tensor of size $c$ equal to the number of classes, and $d_\theta$ is a decoding function parametrized by parameters $\theta$ that maps a hidden representation into logits and is usually implemented as a stack of fully connected layers. Similar abstract encoding and decoding functions that generalize many different architectures can be defined for different types of input data and different types of expected output predictions (which in turn define different types of tasks).
We introduce Ludwig, a deep learning toolbox based on the above-mentioned level of abstraction, with the aim of encapsulating best practices and taking advantage of inheritance and code modularity. Ludwig makes it much easier for practitioners to compose their deep learning models by just declaring their data and task, and it makes code reusable and extensible while favoring best practices. These classes of equivalence are named after the data type of the inputs encoded by the encoding functions (image, text, series, category, etc.) and the data type of the outputs predicted by the decoding functions. This type-based abstraction allows for a higher level interface than what is available in current deep learning frameworks, which abstract at the level of single tensor operations or at the layer level. This is achieved by defining abstract interfaces for each data type, which allows for extensibility as any new implementation of the interface is a drop-in replacement for all the other implementations already available.
Concretely, this allows for defining, for instance, a model that includes an image encoder and a category decoder and being able to swap in and out VGG [@DBLP:journals/corr/SimonyanZ14a], ResNet [@he2016deep] or DenseNet [@Huang_2017_CVPR] as different interchangeable representations of an image encoder. The natural consequence of this level of abstraction is associating a name to each encoder for a specific type and enabling the user to declare what model to employ rather than requiring them to implement them imperatively, and at the same time, letting the user add new and custom encoders. The same also applies to data types other than images and decoders.
With such type-based interfaces in place and implementations of such interfaces readily available, it becomes possible to construct a deep learning model simply by specifying the type of the features in the data and selecting the implementation to employ for each data type involved. Consequently, Ludwig has been designed around the idea of a declarative specification of the model to allow a much wider audience (including people who do not code) to be able to adopt deep learning models, effectively democratizing them. Three such model definition are shown in Figure \[fig:model\_definition\_examples\].
The main contribution of this work is that, thanks to this higher level of abstraction and its declarative nature, Ludwig allows inexperienced users to easily build deep learning models, while allowing experts to decide the specific modules to employ with their hyper-parameters and to add additional custom modules. Ludwig’s other main contribution is the general modular architecture defined through the type-based abstraction that allows for code reuse, flexibility, and the performance of a wide array of machine learning tasks under a cohesive framework.
The remainder of this work describes Ludwig’s architecture in detail, explains its implementation, compares Ludwig with other deep learning frameworks and discusses its advantages and disadvantages.
Architecture
============
The notation used in this section is defined as follows. Let $d \sim D$ be a data point sampled from a dataset $D$. Each data point is a tuple of typed values called features. They are divided into two sets: $d_I$ is the set of input features and $d_O$ is the set of output features. $d_i$ will refer to a specific input feature, while $d_o$ will refer to a specific output feature. Model predictions given input features $d_I$ are denoted as $d_P$, so that there will be a specific prediction $d_p$ for each output feature $d_o \in d_O$. The types of the features can be either atomic (scalar) types like binary, numerical or category, or complex ones like discrete sequences or sets. Each data type is associated with abstract function types, as is explained in the following section, to perform type-specific operations on features and tensors. Tensors are a generalization of scalars, vectors, and matrices with $n$ ranks of different dimensions. Tensors are referred to as $T_{dims}$ where $dims$ indicates the dimensions for each rank, like for instance $T_{l \times m \times n}$ for a rank 3 tensor of dimensions $l$, $m$ and $n$ respectively for each rank.
Type-based Abstraction
----------------------
![Data type functions flow.[]{data-label="fig:type_flow"}](img/type_flow.pdf){width="\columnwidth"}
Type-based abstraction is one of the main concepts that define Ludwig’s architecture. Currently, Ludwig supports the following types: binary, numerical (floating point values), category (unique strings), set of categorical elements, bag of categorical elements, sequence of categorical elements, time series (sequence of numerical elements), text, image, audio (which doubles as speech when using different pre-processing parameters), date, H3 [@brodsky2018] (a geo-spatial indexing system), and vector (one dimensional tensor of numerical values). The type-based abstraction makes it easy to add more types. The motivation behind this abstraction stems from the observation of recurring patterns in deep learning projects: pre-processing code is more or less the same given certain types of inputs and specific tasks, as is the code implementing models and training loops. Small differences make models hard to compare and their code difficult to reuse. By modularizing this code on a per-data-type basis, our aim is to improve code reuse, adoption of best practices and extensibility.
![image](img/ECD.pdf){width="0.7\linewidth"}
Each data type has five abstract function types associated with it and there could be multiple implementations of each of them:
- **Pre-processor**: a pre-processing function $T_{dims}=\mathrm{pre}(d_i)$ maps a raw data point input feature $d_i$ into a tensor $T$ with dimensions $dims$. Different data types may have different pre-processing functions and different dimensions of $T$. A specific type may, moreover, have different implementations of $\mathrm{pre}$. A concrete example is *text*: $d_i$ in this case is a string of text, there could be different tokenizers that implement $\mathrm{pre}$ by splitting on space or using byte-pair encoding and mapping tokens to integers, and $dims$ is $s$, the length of the sequence of tokens.
- **Encoder**: an encoding function $T'_{dims'} = e_\theta(T_{dims})$ maps an input tensor $T$ into an output tensor $T'$ using parameters $\theta$. The dimensions $dims$ and $dims'$ may be different from each other and depend on the specific data type. The input tensor is the output of a $\mathrm{pre}$ function. Concretely, encoding functions for *text*, for instance, take as input $T_s$ and produce $T_h$ where $h$ is a hidden dimension if the output is required to be pooled, or $T_{s \times h}$ if the output is not pooled. Examples of possible implementations of $e_\theta$ are CNNs, bidirectional LSTMs or Transformers.
- **Decoder**: a decoding function $\hat{T}_{dims''} = d_\theta(T'_{dims'})$ maps an input tensor $T'$ into an output tensor $\hat{T}$ using parameters $\theta$. The dimensions $dims''$ and $dims'$ may be different from each other and depend on the specific data type. $T'$ is the output of an encoding function or of a combiner (explained in the next section). Concretely, a decoder function for the *category* type would map a $T_h$ input tensor into a $T_c$ tensor where $c$ is the number of classes.
- **Post-processor**: a post-processing function $d_p=\mathrm{post}(\hat{T}_{dims''})$ maps a tensor $\hat{T}$ with dimensions $dims''$ into a raw data point prediction $d_p$. $\hat{T}$ is the output of a decoding function. Different data types may have different post-processing functions and different dimensions of $T$. A specific type may, moreover, have different implementations of $\mathrm{post}$. A concrete example is *text*: $d_p$ in this case is a string of text, and there could be different functions that implement $\mathrm{post}$ by first mapping integer predictions into tokens and then concatenating on space or using byte-pair concatenation to obtain a single string of text.
- **Metrics**: a metric function $s=m(d_o, d_p)$ produces a score $s$ given a ground truth output feature $d_o$ and predicted output $d_p$ of the same dimension. $d_p$ is the output of a post-processing function. In this context, for simplicity, loss functions are considered to belong to the metric class of function. Many different metrics may be associated with the same data type. Concrete examples of metrics for the *category* data type can be accuracy, precision, recall, F1, and cross entropy loss, while for the *numerical* data type they could be mean squared error, mean absolute error, and R2.
A depiction of how the functions associated with a data type are connected to each other is provided in Figure \[fig:type\_flow\].
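The contract shared by all data types can be summarized as an abstract interface; the following sketch is a schematic rendering of the five function types described above (class and method names are illustrative, not the toolbox's actual class hierarchy).

```python
from abc import ABC, abstractmethod

class DataType(ABC):
    """Schematic contract implemented by every data type (text, image, ...)."""

    @abstractmethod
    def preprocess(self, raw_value):
        """pre: raw feature d_i -> tensor T_dims."""

    @abstractmethod
    def encode(self, tensor, params):
        """e_theta: tensor T_dims -> tensor T'_dims'."""

    @abstractmethod
    def decode(self, hidden, params):
        """d_theta: tensor T'_dims' -> tensor T_hat_dims''."""

    @abstractmethod
    def postprocess(self, tensor):
        """post: tensor T_hat_dims'' -> prediction d_p."""

    @abstractmethod
    def metrics(self, ground_truth, prediction):
        """m: (d_o, d_p) -> dictionary of scores (losses included)."""
```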
Encoders-Combiner-Decoders {#sub:ECD}
--------------------------
![image](img/ECD_instatiations.pdf){width="\linewidth"}
In Ludwig, every model is defined in terms of encoders that encode different features of an input data point, a combiner which combines information coming from the different encoders, and decoders that decode the information from the combiner into one or more output features. This generic architecture is referred to as Encoders-Combiner-Decoders (ECD). A depiction is provided in Figure \[fig:ECD\]. This architecture is introduced because it maps naturally most of the architectures of deep learning models and allows for modular composition. This characteristic, enabled by the data type abstraction, allows for defining models by just declaring the data types of the input and output features involved in the task and assembling standard sub-modules accordingly rather than writing a full model from scratch.
A specific instantiation of an *ECD* architecture can have multiple input features of the same or different types, and the same is true for output features. For each feature in the input part, pre-processing and encoding functions are computed depending on the type of the feature, while for each feature in the output part, decoding, metrics and post-processing functions are computed, again depending on the type of each output feature. When multiple input features are provided, a combiner function $\{T''\}=c_\theta(\{T'\})$ that maps a set of input tensors $\{T'\}$ into a set of output tensors $\{T''\}$ is computed. $c$ has an abstract interface and many different functions can implement it. One concrete example is what in Ludwig is called the *concat* combiner: it flattens all the tensors in the input set, concatenates them, and passes them through a stack of fully connected layers, whose output is returned as a set containing a single tensor. Note that a possible implementation of a combiner function can be the identity function.
This definition of the combiner function allows for implementations where subsets of inputs are provided to different sub-modules which return subsets of the output tensors, or even for a recursive definition where the combiner function is an *ECD* model itself, albeit without pre-processors and post-processors, since inputs and outputs are already tensors and do not need to be pre-processed and post-processed. Although the combiner definition in the *ECD* architecture is theoretically flexible, the current implementations of combiner functions in Ludwig are monolithic (without sub-modules), non-recursive, and return a single tensor as output instead of a set of tensors. However, more elaborate combiners can be added easily.
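To make the combiner interface concrete, here is a minimal sketch of a *concat*-style combiner written with TensorFlow's Keras layers (Ludwig is built on TensorFlow); it illustrates the abstract $c_\theta$ interface and is not Ludwig's actual implementation.

```python
import tensorflow as tf

def concat_combiner(encoder_outputs, fc_sizes=(256, 256)):
    """Flatten each encoder output, concatenate them, and pass the result
    through a stack of fully connected layers. Returns a set (here, a list)
    containing a single output tensor, as the combiner interface requires."""
    flattened = [tf.keras.layers.Flatten()(t) for t in encoder_outputs]
    hidden = (flattened[0] if len(flattened) == 1
              else tf.keras.layers.Concatenate(axis=-1)(flattened))
    for size in fc_sizes:
        hidden = tf.keras.layers.Dense(size, activation='relu')(hidden)
    return [hidden]
```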
The *ECD* architecture allows for many instantiations by combining different input features of different data types with different output features of different data types, as depicted in Figure \[fig:ECD\_instatiations\]. An *ECD* with an input text feature and an output categorical feature can be trained to perform text classification or sentiment analysis, and an *ECD* with an input image feature and an output text feature can be trained to perform image captioning, while an *ECD* with categorical, binary and numerical input features and a numerical output feature can be trained to perform regression tasks like predicting house prices, and an *ECD* with numerical, binary and categorical input features and a binary output feature can be trained to perform tasks like fraud detection. This architecture is evidently very flexible and is limited only by the availability of data types and the implementations of their functions.
![image](img/output_feature_dependency.pdf){width="\linewidth"}
An additional advantage of this architecture is its ability to perform multi-task learning [@DBLP:conf/icml/Caruana93]. If more than one output feature is specified, an *ECD* architecture can be trained to minimize the weighted sum of the losses of each output feature in an end-to-end fashion. This approach has been shown to be highly effective in both vision and natural language tasks, achieving state of the art performance [@DBLP:conf/cidr/RatnerHR19]. Moreover, multiple outputs can be correlated or have logical or statistical dependencies on each other. For example, if the task is to predict both parts of speech and named entity tags from a sentence, the named entity tagger will most likely achieve higher performance if it is provided with the predicted parts of speech (assuming the predictions are better than chance, and there is correlation between part of speech and named entity tag). In Ludwig, dependencies between outputs can be specified in the model definition, a directed acyclic graph among them is constructed at model building time, and either the last hidden representation or the predictions of the origin output feature are provided as inputs to the decoder of the destination output feature. This process is depicted in Figure \[fig:output\_feature\_dependency\]. When non-differentiable operations are performed to obtain the predictions, such as *argmax* in the case of category features performing multi-class classification, the logits or the probabilities are provided instead, keeping the multi-task training process end-to-end differentiable.
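For example, the part-of-speech and named-entity scenario above can be expressed by declaring a dependency between the two output features. The snippet below is a sketch in the Python-dictionary form of the model definition; the *dependencies* parameter follows the user guide, while the feature names are made up for the example.

```python
output_features = [
    {'name': 'pos', 'type': 'sequence'},
    # The named entity decoder receives the hidden representation (or the
    # predictions) of the 'pos' decoder as an additional input.
    {'name': 'ner', 'type': 'sequence', 'dependencies': ['pos']},
]
```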
This generic formulation of multi-task learning as a directed acyclic graph of task dependencies is related to the hierarchical multi-task learning in Snorkel MeTaL proposed by @DBLP:conf/sigmod/RatnerHDGR18 and its adoption for improving training from weak supervision by exploiting task agreements and disagreements of different labeling functions [@DBLP:conf/aaai/RatnerHDSPR19]. The main difference is that Ludwig can automatically handle heterogeneous tasks, i.e. tasks that predict different data types with support for different decoders, while in Snorkel MeTaL each task head is a linear layer. On the other hand, Snorkel MeTaL's focus on weak supervision is currently absent in Ludwig. An interesting avenue of further research to close the gap between the two approaches could be to infer dependencies and loss weights automatically given fully supervised multi-task data and to combine weak supervision with heterogeneous tasks.
Implementation
==============
Declarative Model Definition
----------------------------
Ludwig adopts a declarative model definition schema that allows users to define an instantiation of the *ECD* architecture to train on their data.
The higher level of abstraction provided by the type-based *ECD* architecture allows for a separation between what a model is expected to learn to do and how it actually does it. This motivated us to provide a declarative way of defining models in Ludwig, as the number of potential users who can define a model by declaring the inputs they provide and the predictions they expect, without specifying how the predictions are obtained, is substantially larger than the number of developers who can code a full deep learning model on their own. An additional motivation for adopting declarative model definitions stems from the separation of interests between the authors of the model implementations and the final users, analogous to the separation between the authors of query planning and indexing strategies of a database and the users who query the database, which allows the former to provide improved strategies without changing the way the latter interact with the system.
![image](img/simple_complex_model.pdf){width="\linewidth"}
The model definition is divided into five sections:
- **Input Features**: in this section of the model definition, a list of input features is specified. The minimum amount of information that needs to be provided for each feature is the name of the feature that corresponds to the name of a column in the tabular data provided by the user, and the type of such feature. Some features have multiple encoders, but if one is not specified, the default one is used. Each encoder can have its own hyper-parameters, and if they are not specified, the default hyper-parameters of the specified encoder are used.
- **Combiner**: in this section of the model definition, the type of combiner can be specified; if none is specified, the default *concat* combiner is used. Each combiner can have its own hyper-parameters, but if they are not specified, the default ones of the specified combiner are used.
- **Output Features**: in this section of the model definition, a list of output features is specified. The minimum amount of information that needs to be provided for each feature is the name of the feature, which corresponds to the name of a column in the tabular data provided by the user, and the type of such feature. The data in the column is the ground truth the model is trained to predict. Some features have multiple decoders that calculate the predictions, but if one is not specified, the default one is used. Each decoder can have its own hyper-parameters and, if they are not specified, the default hyper-parameters of the specified decoder are used. Moreover, each decoder can have different losses with different parameters to compare the ground truth values and the values predicted by the decoder; also in this case, if they are not specified, defaults are used.
- **Pre-processing**: pre-processing and post-processing functions of each data type can have parameters that change their behavior. They can be specified in this section of the model definition and are applied to all input and output features of the specified type; if they are not provided, defaults are used. Note that for some use cases it would be useful to have different processing parameters for different features of the same type. Consider a news classifier where the title and the body of a piece of news are provided as two input text features. In this case, the user may be inclined to set a smaller maximum length (in words) and a smaller maximum vocabulary size for the title input feature. Ludwig allows users to specify processing parameters on a per-feature basis by providing them inside each input and output feature definition. If both type-level parameters and single-feature-level parameters are provided, the single-feature-level ones override the type-level ones.
- **Training**: the training process itself has parameters that can be changed, like the number of epochs, the batch size, the learning rate and its scheduling, and so on. Those parameters can be provided by the user, but if they are not provided, defaults are used.
The wide use of defaults allows for very concise model definitions, like the one shown on the left side of Figure \[fig:simple\_complex\_models\], while still permitting a high degree of control over both the architecture of the model and the training parameters, as shown on the right side of Figure \[fig:simple\_complex\_models\].
Ludwig adopts YAML for expressing model definitions because of its human readability, but as long as the nested structure of the definition is representable, other similar formats could be adopted.
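As an illustration, a concise model definition in the spirit of the left side of Figure \[fig:simple\_complex\_models\] could look as follows, shown here in its Python-dictionary form (equivalent to the YAML file); the feature and column names are invented for the example.

```python
model_definition = {
    'input_features': [
        {'name': 'title', 'type': 'text'},
        {'name': 'body', 'type': 'text'},
    ],
    'combiner': {'type': 'concat'},   # optional: concat is the default
    'output_features': [
        {'name': 'topic', 'type': 'category'},
    ],
    # The 'preprocessing' and 'training' sections are omitted, so their
    # defaults are used.
}
```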
For the ever-growing list of available encoders, combiners, and decoders, their hyper-parameters, and the available pre-processing and training parameters, please consult Ludwig's user guide[^1]. For additional examples refer to the examples[^2] section.
In order to allow for flexibility and ease of extensibility, two well known design patterns are adopted in Ludwig: the strategy pattern [@gamma1994design] and the registry pattern. The strategy pattern is adopted at different levels to allow different behaviors to be performed by different instantiations of the same abstract components. It is used both to make the different data types interchangeable from the point of view of model building, training, and inference, and to make different encoders and decoders for the same type interchangeable. The registry pattern, on the other hand, is implemented in Ludwig by assigning names to code constructs (variables, functions, objects, or modules) and storing them in a dictionary. They can then be referenced by name, allowing for straightforward extensibility: adding a new behavior is as simple as adding a new entry to the registry.
In Ludwig, the combination of these two patterns allows users to add new behaviors by simply implementing the abstract function interface of the encoder of a specific type and adding that implementation to the registry of available implementations. The same applies to adding new decoders, new combiners, and additional data types. The problem with this approach is that different implementations of the same abstract functions have to conform to the same interface, even though some of their parameters may differ. As a concrete example, consider two text encoders: a recurrent neural network (RNN) and a convolutional neural network (CNN). Although they both conform to the same abstract encoding function in terms of the rank of the input and output tensors, their hyper-parameters are different, with the RNN requiring a boolean parameter indicating whether to use bidirectionality and the CNN requiring the size of the filters. Ludwig solves this problem by exploiting *\*\*kwargs*, a Python mechanism that allows additional parameters to be passed to a function by name and collected into a dictionary. This allows different functions implementing the same abstract interface to have the same signature and then retrieve their specific additional parameters from the dictionary by name. This also greatly simplifies the implementation of default parameters, because if the dictionary does not contain the keyword of a required parameter, the default value for that parameter is used automatically.
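The combination of the two patterns can be illustrated with the following minimal sketch, which is not Ludwig's actual source code: a registry maps encoder names to interchangeable implementations, and *\*\*kwargs* lets each implementation declare its own hyper-parameters with defaults while keeping a uniform call signature.

```python
# Registry pattern: names map to interchangeable implementations.
text_encoder_registry = {}

def register_text_encoder(name):
    def wrapper(fn):
        text_encoder_registry[name] = fn
        return fn
    return wrapper

@register_text_encoder('rnn')
def rnn_encoder(inputs, bidirectional=True, state_size=256, **kwargs):
    # ... build a (bi)directional RNN over `inputs` (stub) ...
    return inputs

@register_text_encoder('cnn')
def cnn_encoder(inputs, filter_size=3, num_filters=256, **kwargs):
    # ... build a convolutional encoder over `inputs` (stub) ...
    return inputs

# Strategy pattern: the caller selects the behavior by name; keyword
# arguments that an implementation does not use fall into **kwargs and are
# ignored, while missing ones fall back to the defaults in its signature.
def encode_text(inputs, encoder='rnn', **encoder_params):
    return text_encoder_registry[encoder](inputs, **encoder_params)
```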
Training Pipeline
-----------------
![image](img/pipeline.pdf){width="\linewidth"}
Given a model definition, Ludwig builds a training pipeline as shown in the top of Figure \[fig:pipeline\]. The process is not particularly different from many other machine learning tools and consists of a metadata collection phase, a data pre-processing phase, and a model training phase. The metadata mappings in particular are needed in order to apply exactly the same pre-processing to input data at prediction time, while model weights and hyper-parameters are saved in order to load the exact model obtained during training. The main notable innovation is the fact that every single component, from the pre-processing to the model to the training loop, is dynamically built depending on the declarative model definition.
One of the main use cases of Ludwig is the quick exploration of different model alternatives for the same data, so the pre-processed data is optionally cached into an HDF5 file. The next time the same data is accessed, the HDF5 file is used instead, saving the time needed to pre-process it again.
Prediction Pipeline
-------------------
The prediction pipeline is depicted in the bottom of Figure \[fig:pipeline\]. It uses the metadata obtained during the training phase to pre-process the new input data, loads the model reading its hyper-parameters and weights, and uses it to obtain predictions that are mapped back in data space by a post-processor that uses the same mappings obtained at training time.
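Both pipelines are also exposed through Ludwig's programmatic API. The following sketch uses a minimal model definition like the one shown earlier; the call names follow the user guide, and the exact signatures may differ between versions, so this should be read as an illustration rather than a reference.

```python
import pandas as pd
from ludwig.api import LudwigModel

# Minimal model definition: one text input, one category output,
# defaults everywhere else.
model_definition = {
    'input_features': [{'name': 'body', 'type': 'text'}],
    'output_features': [{'name': 'topic', 'type': 'category'}],
}

train_df = pd.read_csv('news_train.csv')   # hypothetical files with
test_df = pd.read_csv('news_test.csv')     # columns: body, topic

# Training pipeline: metadata collection, pre-processing (optionally cached
# to HDF5), and model training, all driven by the model definition.
model = LudwigModel(model_definition)
train_stats = model.train(data_df=train_df)

# Prediction pipeline: the stored metadata is reused to pre-process the new
# data and to map the model outputs back into data space.
predictions = model.predict(data_df=test_df)
```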
Evaluation
==========
One of the positive effects of the *ECD* architecture and its implementation in Ludwig is the ability to specify a potentially complex model architecture with a concise declarative model definition. To analyze how much of an impact this has on the amount of code needed to implement a model (including pre-processing, the model itself, the training loop, and the metrics calculation), the number of lines of code required to implement four reference architectures using different libraries is compared: WordCNN [@DBLP:conf/emnlp/Kim14] and Bi-LSTM [@DBLP:conf/acl/TaiSM15], both models for text classification and sentiment analysis; Tagger [@DBLP:conf/naacl/LampleBSKD16], a sequence tagging model with an RNN encoder and per-token classification; and ResNet [@he2016deep], an image classification model. Although this evaluation is imprecise in nature (the same model can be implemented in a more or less concise way, and writing a parameter in a configuration file is substantially simpler than writing a line of code), it provides an intuition about the amount of effort needed to implement a model with different tools. To calculate the mean for each library, openly available implementations on GitHub are collected and the number of lines of code of each is counted (the list of repositories is available in the appendix). For Ludwig, the number of lines in the configuration file needed to obtain the same models is reported both in the case where no hyper-parameter is specified and in the case where all hyper-parameters are specified.
The results in Table \[tab:evaluation\] show that, even when all hyper-parameters are specified, a Ludwig declarative model configuration is an order of magnitude smaller than even the most concise alternative implementation. This supports the claim that Ludwig can be useful as a tool to reduce the effort needed for training and using deep learning models.
  --------- ------------ ---------- ---------- ---------- --------
              TensorFlow   Keras      PyTorch    Ludwig     Ludwig
              (mean)       (mean)     (mean)     (w/o)      (w)
  WordCNN     406.17       201.50     458.75     8          66
  Bi-LSTM     416.75       439.75     323.40     10         68
  Tagger      1067.00      1039.25    1968.00    10         68
  ResNet      1252.75      779.60     479.43     9          61
  --------- ------------ ---------- ---------- ---------- --------
: Number of lines of code for implementing different models. *mean* columns are the mean lines of code needed to write a program from scratch for the task. *w* and *w/o* in the Ludwig column refer to the number of lines for writing a model definition specifying every single model hyper-parameter and pre-processing parameter, and without specifying any hyper-parameter respectively.[]{data-label="tab:evaluation"}
Limitations and future work
===========================
Although Ludwig’s *ECD* architecture is particularly well-suited for supervised and self-supervised tasks, how suitable it is for other machine learning tasks is not immediately evident.
One notable example of such tasks is Generative Adversarial Networks (GANs) [@goodfellow2014generative]: their architecture contains two models that learn to generate data and to discriminate synthetic from real data, and that are trained with inverted losses. In order to replicate a GAN within the boundaries of the *ECD* architecture, the inputs to both models would have to be defined at the encoder level, the discriminator output would have to be defined as a decoder, and the remaining parts of both models would have to be defined as one big combiner, which is inelegant; for instance, changing just the generator would result in an entirely new implementation. An elegant solution would allow for disentangling the two models and changing them independently. The recursive graph extension of the combiner described in section \[sub:ECD\] allows a more elegant solution by providing a mechanism for defining the generator and discriminator as two independent sub-graphs, improving modularity and extensibility. An extension of the toolbox in this direction is planned in the future.
Another example is reinforcement learning. Although *ECD* can be used to build the vast majority of deep architectures currently adopted in reinforcement learning, some of the techniques they employ are relatively hard to represent, such as, for instance, the double inference with fixed weights used in Deep Q-Networks [@DBLP:journals/nature/MnihKSRVBGRFOPB15], which can currently be implemented only with a highly custom and inelegant combiner. Moreover, supporting the dynamic interaction with an environment for data collection, and more clever ways to collect it like Go-Explore's [@Ecoffet2019goexplore] archive or prioritized experience replay [@Schaul2015prioritizedexperiencereplay], is currently out of the scope of the toolbox: a user would have to build these capabilities on their own and call Ludwig functions only inside the inner loop of the interaction with the environment. Extending the toolbox to allow for easy adoption in reinforcement learning scenarios, for example by allowing training through policy gradient methods like REINFORCE [@Williams92reinforce] or off-policy methods, is a potential direction of improvement.
Although these two cases highlight current limitations of Ludwig, it is worth noting that most current industrial applications of machine learning are based on supervised learning, which is where the proposed architecture fits best and where the toolbox provides most of its value.
Although the declarative nature of Ludwig’s model definition allows for easier model development, as the number of encoders and their hyper-parameters increase, the need for automatic hyper-parameter optimization arises. In Ludwig, however, different encoders and decoders, i.e., sub-modules of the whole architecture, are themselves hyper-parameters. For this reason, Ludwig is well-suited for performing both hyper-parameter search and architecture search, and blurs the line between the two.
A future addition to the model definition file will be a hyper-parameter search section that will allow users to choose which of the available strategies to use for the optimization and, if the optimization process itself has parameters, to provide them in this section as well. Currently a Bayesian optimization over combinatorial structures [@BaptistaP18BOCS] approach is in development, but more can be added.
Finally, more feature types will be added in the future, in particular videos and graphs, together with a number of pre-trained encoders and decoders, which will allow training of a full model in a few iterations.
Related Work
============
TensorFlow [@tensorflow2015-whitepaper], Caffe [@jia2014caffe], Theano [@2016arXiv160502688short] and other similar libraries are tensor computation frameworks that allow for automatic differentiation and declarative model definition through a computation graph. They all provide similar underlying primitives and support computation graph optimizations that allow for training of large-scale deep neural networks. PyTorch [@paszke2017automatic], on the other hand, provides the same level of abstraction, but allows users to define models imperatively: this has the advantage of making a PyTorch program easier to debug and inspect. By adding eager execution, TensorFlow 2.0 allows for both declarative and imperative programming styles. In contrast, Ludwig, which is built on top of TensorFlow, provides a higher level of abstraction for the user. Users can declare full model architectures rather than underlying tensor operations, which allows for more concise model definitions, while flexibility is ensured by allowing users to change each parameter of each component of the architecture if they wish to.
Sonnet [@sonnetblog], Keras [@chollet2015keras], and AllenNLP [@Gardner2017AllenNLP] are similar to Ludwig in the sense that these libraries provide a higher level of abstraction over TensorFlow and PyTorch primitives respectively. However, while they provide modules which can be used to build a desired network architecture, what distinguishes Ludwig from them is its declarative nature and the fact that it is built around a data type abstraction. This allows for the flexible *ECD* architecture that can cover many use cases beyond the natural language processing covered by AllenNLP, and it does not require writing code for both model implementation and pre-processing as in Sonnet and Keras.
Scikit-learn [@sklearn_api], Weka [@DBLP:journals/sigkdd/HallFHPRW09], and MLLib [@meng2016mllib] are popular machine learning libraries among researchers and industry practitioners. They contain implementations of several different traditional machine learning algorithms and provide common interfaces for them, so that the algorithms become in most cases interchangeable and users can easily compare them. Ludwig follows this API design philosophy in its programmatic interface, but focuses on deep learning models that are not available in those tools.
Conclusions
===========
This work presented Ludwig, a deep learning toolbox built around type-based abstraction and a flexible *ECD* architecture that allows model definition through a declarative language.
The proposed tool has many advantages in terms of flexibility, extensibility, and ease of use, which allow both experts and novices to train deep learning models, employ them for obtaining predictions, and experiment with different architectures without the need to write code, while still allowing users to easily add custom sub-modules.
In conclusion, Ludwig’s general and flexible architecture and its ease of use make it a good option for democratizing deep learning by making it more accessible, streamlining and speeding up experimentation, and unlocking many new applications.
Full list of GitHub repositories
================================
repository loc model notes
-------------------------------------------------------------------------------------- ------ --------- -------------------------------------------
<https://github.com/dennybritz/cnn-text-classification-tf> 308 WordCNN
<https://github.com/randomrandom/deep-atrous-cnn-sentiment> 621 WordCNN
<https://github.com/jiegzhan/multi-class-text-classification-cnn> 284 WordCNN
<https://github.com/TobiasLee/Text-Classification> 335 WordCNN cnn.py + files in utils directory
<https://github.com/zackhy/TextClassification> 405 WordCNN cnn\_classifier.pt + train.py + test.py
<https://github.com/YCG09/tf-text-classification> 484 WordCNN all files minus the rnn related ones
<https://github.com/roomylee/rnn-text-classification-tf> 305 Bi-LSTM
<https://github.com/dongjun-Lee/rnn-text-classification-tf> 271 Bi-LSTM
<https://github.com/TobiasLee/Text-Classification> 397 Bi-LSTM attn\_bi\_lstm.py + files utils directory
<https://github.com/zackhy/TextClassification> 459 Bi-LSTM rnn\_classifier.pt + train.py + test.py
<https://github.com/YCG09/tf-text-classification> 506 Bi-LSTM all files minus the cnn related ones
<https://github.com/ry/tensorflow-resnet> 2243 ResNet
<https://github.com/wenxinxu/resnet-in-tensorflow> 635 ResNet
<https://github.com/taki0112/ResNet-Tensorflow> 472 ResNet
<https://github.com/ShHsLin/resnet-tensorflow> 1661 ResNet
<https://github.com/guillaumegenthial/sequence_tagging> 959 Tagger
<https://github.com/guillaumegenthial/tf_ner> 1877 Tagger
<https://github.com/kamalkraj/Named-Entity-Recognition-with-Bidirectional-LSTM-CNNs> 365 Tagger
repository loc model notes
--------------------------------------------------------------------------------------------------------------------------------------- ------ --------- ------------------------------
<https://github.com/Jverma/cnn-text-classification-keras> 228 WordCNN
<https://github.com/bhaveshoswal/CNN-text-classification-keras> 117 WordCNN
<https://github.com/alexander-rakhlin/CNN-for-Sentence-Classification-in-Keras> 258 WordCNN
<https://github.com/junwang4/CNN-sentence-classification-keras-2018> 295 WordCNN
<https://github.com/cmasch/cnn-text-classification> 122 WordCNN
<https://github.com/diegoschapira/CNN-Text-Classifier-using-Keras> 189 WordCNN
<https://github.com/shashank-bhatt-07/Keras-LSTM-Sentiment-Classification> 425 Bi-LSTM
<https://github.com/AlexGidiotis/Document-Classifier-LSTM> 678 Bi-LSTM
<https://github.com/pinae/LSTM-Classification> 547 Bi-LSTM
<https://github.com/susanli2016/NLP-with-Python/blob/master/Multi-Class%20Text%20Classification%20LSTM%20Consumer%20complaints.ipynb> 109 Bi-LSTM
<https://github.com/raghakot/keras-resnet> 292 ResNet
<https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py> 297 ResNet Only model, no preprocessing
<https://github.com/broadinstitute/keras-resnet> 2285 ResNet
<https://github.com/yuyang-huang/keras-inception-resnet-v2> 560 ResNet
<https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/applications/resnet.py> 464 ResNet Only model, no preprocessing
<https://github.com/Hironsan/anago> 2057 Tagger
<https://github.com/floydhub/named-entity-recognition-template> 150 Tagger
<https://github.com/digitalprk/KoreaNER> 501 Tagger
<https://github.com/vunb/anago-tagger> 1449 Tagger
repository loc model notes
--------------------------------------------------------------------------------------- ------ --------- -----------------------------
<https://github.com/Shawn1993/cnn-text-classification-pytorch> 311 WordCNN
<https://github.com/yongjincho/cnn-text-classification-pytorch> 247 WordCNN
<https://github.com/srviest/char-cnn-text-classification-pytorch> 778 WordCNN ignored model\_CharCNN2d.py
<https://github.com/threelittlemonkeys/cnn-text-classification-pytorch> 499 WordCNN
<https://github.com/keishinkickback/Pytorch-RNN-text-classification> 414 Bi-LSTM
<https://github.com/Jarvx/text-classification-pytorch> 421 Bi-LSTM
<https://github.com/jiangqy/LSTM-Classification-Pytorch> 324 Bi-LSTM
<https://github.com/a7b23/text-classification-in-pytorch-using-lstm> 188 Bi-LSTM
<https://github.com/claravania/lstm-pytorch> 270 Bi-LSTM
<https://github.com/hysts/pytorch_resnet> 447 ResNet
<https://github.com/a-martyn/resnet> 286 ResNet
<https://github.com/hysts/pytorch_resnet_preact> 535 ResNet
<https://github.com/ppwwyyxx/GroupNorm-reproduce/tree/master/ImageNet-ResNet-PyTorch> 1095 ResNet
<https://github.com/KellerJordan/ResNet-PyTorch-CIFAR10> 199 ResNet
<https://github.com/mbsariyildiz/resnet-pytorch> 450 ResNet
<https://github.com/akamaster/pytorch_resnet_cifar10> 344 ResNet
<https://github.com/ZhixiuYe/NER-pytorch> 1184 Tagger
<https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Sequence-Labeling> 840 Tagger
<https://github.com/epwalsh/pytorch-crf> 3243 Tagger
<https://github.com/LiyuanLucasLiu/LM-LSTM-CRF> 2605 Tagger
[^1]: <http://ludwig.ai/user_guide/>
[^2]: <http://ludwig.ai/examples>
---
abstract: 'It was recently discovered that the temperature in the surface layer of externally heated optically thick gray dust clouds increases with the optical depth for some distance from the surface, as opposed to the normal decrease in temperature with distance in the rest of the cloud. This temperature inversion is a result of efficient absorption of diffuse flux from the cloud interior by the surface dust exposed to the external radiation. Grains of micron size or bigger experience this effect when the external flux has a stellar spectrum. We explore what happens to the effect when the dust is a mixture of grain sizes (multigrain). Two possible boundary conditions are considered: i) a constant external flux without constraints on the dust temperature, and ii) the maximum dust temperature set to the sublimation temperature. We find that the first condition allows small grains to completely suppress the temperature inversion of big grains if the overall opacity is dominated by small grains. The second condition enables big grains to maintain the inversion even when they are a minor contributor to the opacity. In reality, the choice of boundary condition depends on the dust dynamics. When applied to the physics of protoplanetary disks, the temperature inversion leads to a previously unrecognized disk structure where optically thin dust can exist inside the dust destruction radius of an optically thick disk. We conclude that the transition between the dusty disk and the gaseous inner clearing is not a sharp edge, but rather a large optically thin region.'
author:
- Dejan Vinković
title: Temperature inversion on the surface of externally heated optically thick multigrain dust clouds
---
Introduction
============
Dust is one of the principal components of interstellar and circumstellar matter. It serves as a very efficient absorber of starlight, which is dominated by visual and ultraviolet photons, and re-emitter of this absorbed energy in the infrared (IR), thereby modifying the entire spectral energy distribution. Hence, radiative transfer in dusty environments is an inherent part of the study of star formation, protoplanetary disk evolution, dusty winds from AGB stars, circumnuclear environment of AGNs, etc.
In general, radiative transfer is extremely complicated because the dust in these environments always comes as a mixture of various grain sizes (multigrain[^1]), shapes and chemical compositions. It is, therefore, common to employ certain approximations that make the problem computationally manageable. For example, dust grains are usually approximated with spheres of similar chemical composition, so that the Mie theory can be easily used for calculating dust cross sections. Another commonly used approximation is replacing a mixture of dust grain sizes by an equivalent (i.e. synthetic or average) single grain size. Numerical calculations have shown that this approximation does not produce a significant change in the spectral energy distribution [@ERR94; @Wolf; @Carciofi]. It is assumed that this is particularly appropriate for optically thick dust clouds.
It has been discovered recently that externally heated optically thick clouds made of large ($\ga 1\mu$m) single size dust grains produce the effect of [*temperature inversion*]{} where the maximum temperature is within the dust cloud[^2], at the visual optical depth of $\tau_V\sim$1, instead of on the very surface exposed to the stellar heating [@Isella; @VIJE06]. The cause of this inversion within the surface layer is its ability to efficiently absorb the diffuse IR radiation originating from the cloud’s interior - a process similar to the “greenhouse effect.”
It is not known, however, if multigrain dust would produce the same temperature inversion effect. Here we employ analytical multigrain radiative transfer to study the surface of optically thick clouds. Two possible types of boundary conditions are explored: i) a constant external flux heating the cloud, with no limits on the dust temperature, and ii) a fixed maximum dust temperature corresponding to dust sublimation. In §\[single\_grain\] we describe the analytic method and apply it to single size dust. In the next section §\[multigrain\] we apply the method to multigrain dust. After that in §\[transverse\] we explore a possibility of surface thermal cooling in the transverse direction. Some aspects of our results are discussed in §\[discussion\], with our conclusion in §\[conclusion\].
Optically thick single size grain dust cloud {#single_grain}
============================================
In order to understand effects of multigrain dust on the temperature structure of optically thick clouds, we first have to take a look at the single size dust grain clouds. Here we reiterate analytic techniques described in the literature [@Isella; @VIJE06] and then expand our approach to multigrain dust in the following sections.
Consider dust grains at three different locations (Figure \[Sketch1\]): on the very surface (point [**P$_0$**]{}), at optical depth $\tau_V$ (point [**P$_1$**]{}) and at optical depth $2\tau_V$ (point [**P$_2$**]{}). The optical depth $\tau_V$ is defined at the peak wavelength of the bolometric temperature of external flux $F_{in}$ illuminating the cloud. We will be interested in cases where dust can reach sublimation temperatures (between $\sim$1,000K and 2,000K for interstellar dust) and where the external flux source is a star-like object peaking its emission at submicron wavelengths. For the purpose of this paper we use the optical depth $\tau_V$ in visual, but one can adjust it to the required wavelength of interest.
The surface layer between points [**P$_0$**]{} and [**P$_1$**]{} emits the thermal flux $F_{sur1}$. We assume that it emits equally on both sides (toward the left and right in Figure \[Sketch1\]). This approximation is valid if the layer is optically thin at the peak wavelength of its IR emission. If we define $q=\sigma_V/\sigma_{IR}$, where $\sigma_V$ and $\sigma_{IR}$ are dust absorption cross sections in visual and IR, then the IR optical depth requirement is $$\tau_{IR}=\tau_V/q\ll 1.$$ The same requirement holds for the second surface layer between points [**P$_1$**]{} and [**P$_2$**]{} and its thermal flux $F_{sur2}$. Each of these two layers attenuates the external flux by $exp(-\tau_V)$ and the thermal fluxes by $exp(-\tau_{IR})=exp(-\tau_V/q)$.
Since the cloud is optically thick to its own radiation, no net flux can go through the cloud. The total flux coming from the left at any point in the cloud is equal to the total flux coming from the right. This yields flux balance equations at points [**P$_0$**]{}, [**P$_1$**]{} and [**P$_2$**]{} $$F_{in} = F_{sur1} + F_{sur2}\,e^{-\tau_V/q} + F_{out}\,e^{-2\tau_V/q},$$ $$F_{in}\,e^{-\tau_V} + F_{sur1} = F_{sur2} + F_{out}\,e^{-\tau_V/q},$$ $$F_{in}\,e^{-2\tau_V} + F_{sur1}\,e^{-\tau_V/q} + F_{sur2} = F_{out},$$ where $F_{out}$ is the thermal flux coming out of the cloud interior at [**P$_2$**]{}.
We want the external flux $F_{in}$ at point [**P$_2$**]{} to be attenuated enough to make diffuse flux the dominant source of heating. This requirement means that $2\tau_V\ga 1$, which in combination with equation \[tauIR\] gives the allowed range for $\tau_V$ $$1/2 \la \tau_V \ll q.$$ From equations \[F0\]-\[F2\] we further get
A dust grain at [**P$_0$**]{} absorbs $\sim \sigma_V F_{in}$ of the external directional flux. It also absorbs $\sim 2\sigma_{IR}F_{IR}$ of any infrared diffuse flux $F_{IR}$, where the factor 2 accounts for absorption from 2$\pi$ sr. The grain emits at its temperature $T_0$ into 4$\pi$ sr, so that the energy balance is $$\sigma_VF_{in} + 2\sigma_{IR}(F_{sur1} + F_{sur2}\, e^{-\tau_V/q}
+ F_{out}\, e^{-2\tau_V/q}) = 4\sigma_{IR}\,\sigma_{SB}T_0^4,$$ where $\sigma_{SB}$ is the Stefan-Boltzmann constant. Using the flux balance in equation \[F0\] and $q=\sigma_V/\sigma_{IR}$, we get $$\sigma_{SB}T_0^4 = \frac{q+2}{4}\,F_{in}.$$
Similarly, we can write the energy balance for a dust grain at [**P$_1$**]{} and [**P$_2$**]{} $$\sigma_VF_{in}\,e^{-\tau_V} + 2\sigma_{IR}(F_{sur1} + F_{sur2} + F_{out}\,e^{-\tau_V/q}) = 4\sigma_{IR}\,\sigma_{SB}T_1^4,$$ $$\sigma_VF_{in}\,e^{-2\tau_V} + 2\sigma_{IR}(F_{sur1}\,e^{-\tau_V/q} + F_{sur2} + F_{out}) = 4\sigma_{IR}\,\sigma_{SB}T_2^4.$$
We approximate the interior flux as $$F_{out} = \sigma_{SB}T_2^4.$$ This is a good approximation of the interior for gray dust and an overestimate for non-gray dust, where temperature decreases with optical depth. Now we can continue deriving temperatures in two possible ways.
[*METHOD 1*]{}: Combining equations \[Fsur2\],\[Fsur1\], \[Fin\], \[ebT1\], \[ebT2\] and \[Fout\] yields
The upper panel of Figure \[Temperature1\] shows this temperature profile for $\tau_V=1$. The effect of temperature inversion $T_2>T_0$ is present when $q<q_{limit}$. Only grains larger than about one micron can have such a small value of $q$, which makes them almost “gray” in the near IR. This inversion does not appear in non-gray dust ($q>q_{limit}$) where the temperature decreases monotonically with distance from the cloud surface.
[*METHOD 2*]{}: We can use equations \[F0\]-\[F2\] to express $F_{out}$ as a function of $F_{in}$ and then use equations \[Fin\] and \[Fout\] to obtain Similarly, we can express $F_{sur1}$ and $F_{sur2}$ as a function of $F_{in}$ and then use equation \[ebT1\] to obtain The lower panel of Figure \[Temperature1\] shows this result for $\tau_V=1$. It differs slightly from the previous solution in Method 1, but gives qualitatively the same result. The temperature inversion now exists for . Henceforth, we use Method 1 for further analysis.
Optically thick multigrain cloud {#multigrain}
================================
Constant external flux as the boundary condition {#multigrain_const_flux}
------------------------------------------------
We consider now an externally heated optically thick dust cloud made of $N$ grain sizes, under the condition of constant external flux and no limits on the dust temperature. A dust grain of the $i^{th}$ size at a point of $\tau_V$ optical distance from the cloud surface is heated by the attenuated stellar flux $F_{in}exp(-\tau_V)$, by the local diffuse flux $F_{sur}$ coming from the direction of the surface, and by the local diffuse flux $F_{out}$ coming out of the cloud interior. The thermodynamic equilibrium of the dust grain gives $$\sigma_{V,i}F_{in}\,e^{-\tau_V} + 2\sigma_{IR,i}(F_{sur} + F_{out}) = 4\sigma_{IR,i}\,\sigma_{SB}T_i^4.$$ Factors 2 and 4 account for absorption from 2$\pi$ sr and emission into 4$\pi$ sr.
On the very surface of the cloud $\tau_V=0$ and $F_{sur}=0$, therefore $F_{in}=F_{out}$, which yields $$\sigma_{SB}T_{0,i}^4 = \frac{q_i+2}{4}\,F_{in},$$ where $q_i=\sigma_{V,i}/\sigma_{IR,i}$. This shows that [*dust grains of different sizes have different temperatures on the surface of an optically thick cloud*]{}. The lowest temperature among the grain sizes is acquired by the largest grains because they have the smallest $q_i$.
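For illustration, consider a big grain with $q_\alpha=1$ and a small grain with $q_\beta=10$ (values chosen only to show the trend). The surface balance above then gives $$\frac{T_{0,\beta}}{T_{0,\alpha}}=\left(\frac{q_\beta+2}{q_\alpha+2}\right)^{1/4}=4^{1/4}\simeq 1.4,$$ so the small grain is roughly 40$\%$ hotter on the very surface and is the first to reach the sublimation temperature.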
On the other hand, the contribution of the external flux is negligible $F_{in}exp(-\tau_V)\to 0$ when $\tau_V\gg 1$, therefore $F_{sur}=F_{out}$, which yields [*the same temperature for all grain sizes*]{} $$\sigma_{SB}T_i^4 = F_{out}.$$
A general relationship between grain temperatures is derived by subtracting equation \[ebi\] for a grain $i$ from the same equation for a grain $j$ and then using equation \[Fini\] to obtain $$T_j^4 = T_i^4 + \frac{q_j-q_i}{q_i+2}\,T_{0,i}^4\,e^{-\tau_V}.$$ Since $q$ scales inversely with the grain size, equation \[Tij\] shows that [*smaller grains always have a higher temperature than bigger grains at any point in the cloud*]{}, with the limit $T_j \sim T_i$ when $exp(-\tau_V)\ll 1$.
Optical depth is now a cumulative contribution of all grain sizes in the mix. where $n_i$ is the number density of the i$^{th}$ grain size. If we scale optical depth relative to $\tau_V$ then where $\Upsilon_{\lambda,i}$ is the relative contribution of the $i^{th}$ grain to the dust opacity at wavelength $\lambda$
Consider now a model similar to figure \[Sketch1\], except that the dust is multigrain. For simplicity and clarity of the following analysis, we work with only two grain sizes: a “big” grain [*$\alpha$*]{} with $q_\alpha\sim 1$ and a “small” grain [*$\beta$*]{} with $q_\beta>q_\alpha$. The infrared optical depth step is According to equation \[Tij\] the temperature of small grains at optical depth $k\tau_V$ (point [**P$_k$**]{}) is The temperature of big grains at point [**P$_0$**]{} is (equation \[Fini\]) while for the other temperatures we need the flux balance at points [**P$_0$**]{}, [**P$_1$**]{} and [**P$_2$**]{}. Since the balance is the same as in equations \[F0\]-\[F2\], except that the infrared step $\tau_V/q$ is replaced by $\tau_{IR}$, we use the procedure described in §\[single\_grain\] and derive
The interior flux $F_{out}$ is now a cumulative contribution of all grain sizes according to their relative contribution to the dust opacity. In our two-size example Combined with equation \[Tsmall\_n\] gives
Putting together equations \[T1alpha\_\], \[T2alpha\_\] and \[Foutc\] yields the solution
The resulting temperature is plotted in Figures \[Temp\_multigrain\_whole\_fig1\] and \[Temp\_multigrain\_whole\_fig2\]. The upper panel in each figure shows the result when small grains dominate the opacity ($\Upsilon_{V,\alpha}=0.1$). The temperature inversion in big grains is suppressed because the local diffuse flux is dictated by small grains. If big grains dominate the opacity (lower panels, $\Upsilon_{V,\alpha}=0.9$) then the temperature inversion is preserved.
Sublimation temperature as the boundary condition {#multigrain2}
-------------------------------------------------
Now we consider a dust cloud hot enough on its illuminated surface to sublimate dust grains warmer than the sublimation temperature $T_{sub}$. The flux entering the cloud is adjustable to accommodate any temperature boundary condition. From equation \[Fini\] we see that small grains are the first to be removed from the immediate surface. If the external flux is high enough then the immediate surface is populated only by $q_i\sim 1$ grains (“big grains”). All other grains (“small grains”) would survive somewhere within the cloud, at a distance where the local flux is reddened enough by big grains to be absorbed less efficiently.
We again apply our two-size dust model, except that the surface layer of visual optical depth $\tau_V$ is occupied only by big grains (see Figure \[Sketch2\]). It is too hot for small grains to survive within this layer. Both grains exist at optical distances larger than $\tau_V$ from the surface. Following the same procedure as in §\[single\_grain\], we write the flux balance at points [**P$_0$**]{}, [**P$_1$**]{} and [**P$_2$**]{} (see Figure \[Sketch2\]) where the IR optical depth between points [**P$_1$**]{} and [**P$_2$**]{} is given in equation \[tauIR\_multigrain\].
Small grains do not exist now at point [**P$_0$**]{}, but their temperature at other points is still described by equation \[Tsmall\_n\]. Deriving big grain temperatures at [**P$_1$**]{} and [**P$_2$**]{} is now a straightforward procedure already described in §\[single\_grain\] and §\[multigrain\_const\_flux\] The resulting temperature is plotted in Figures \[Temp\_multigrain\_fig1\] and \[Temp\_multigrain\_fig2\]. Two important results are deduced for big grains from this solution: i) big grains maintain the temperature inversion and ii) if small grains set the maximum temperature limit then big grains, which are always colder than small grains (see equation \[Tij\]), cannot reach the sublimation temperature.
However, since we do not put limits on the external flux, the solution that allows the maximum possible external flux is the one that also maximizes $F_{out}$. From equation \[Fout\_multi\] we see that the maximum $F_{out}=\sigma_{SB}T_{sub}^4$ is achieved when $T_{2,\alpha}=T_{2,\beta}=T_{sub}$. Such a solution is not possible unless $\Upsilon_{IR,\beta}=0$. Therefore, [*dust temperatures are maximized when all small grains sublimate away from the surface region and exist only at optical depths of $exp(-\tau_V)\ll 1$. In other words, the external flux is maximized when the cloud surface “belongs” exclusively to big grains*]{}.
Transverse cooling of the cloud surface {#transverse}
=======================================
Solutions presented so far assume no time variability in any of the model parameters. The external flux is adjusted by hand to the value that maximizes the external flux and keeps dust temperatures below sublimation. In reality, however, this is a dynamical process where the equilibrium is established by dust moving around and sublimating whenever its temperature exceeds the sublimation point.
The existence of temperature inversion is difficult to understand under such dynamical conditions. Since the very surface of the cloud is [*below*]{} the sublimation point, its dust can survive closer to the external energy source than the dust within the cloud. Therefore, the distance of the cloud from the source is not defined by the very surface, but rather by the dust at the peak temperature [*within*]{} the cloud. Imagine now that the whole cloud is moving closer to the source. From the cloud's point of reference, this dynamical process is equivalent to increasing the external flux. According to solutions in §\[single\_grain\] and §\[multigrain\], the peak temperature within the cloud will exceed sublimation at a certain distance from the source and the dust will start to sublimate. This point is the distance at which the dust cloud seemingly stops. However, there is nothing to stop the very surface layer of the cloud from moving even closer than the rest of the cloud because its temperature is below sublimation.
Notice that we cannot resolve this issue in the approximation of an infinite dusty slab because the transverse optical depth (parallel to the slab surface) is always infinite. No flux can escape the slab in the transverse direction. Hence, the radiative transfer in a slab does not depend on spatial scale and only the optical depth matters. The spatial extension of the surface layer is irrelevant and solutions from §\[single\_grain\] and §\[multigrain\] are applicable irrespective of the dust dynamics on the spatial scale.
In reality, however, the cloud is finite and dust sublimation can eventually make the cloud optically thin in the transverse direction. This optical depth gap enables thermal radiation to flow out in the transverse direction and thus provides a channel for thermal cooling. Under such conditions the surface layer can move closer and closer to the energy source by simultaneously expanding the size of the transverse optical depth gap, which then increases the amount of escaping thermal flux. Since the very surface dust is getting closer to the energy source, its temperature increases and eventually reaches the sublimation temperature. At that moment the radiative and dynamical equilibria are established and the whole dust cloud seemingly stops.
The resulting dust cloud has a large surface zone where the thermal flux can transversely escape the cloud. Here we make an attempt to describe this on a quantitative level through a model shown in Figure \[Sketch3\]. It is similar to the infinite slab model described in §\[single\_grain\] except that now we introduce the flux $F_{exit}$ exiting the cloud transversely through the surface layer.
The flux balance equations \[F1\] and \[F2\] remain unchanged, while equation \[F0\] is changed to By following the same procedure as in §\[single\_grain\], we obtain temperatures $T^{\prime\prime}_k$ at points [**P$_k$**]{} where A and B are given in equations \[A\_method1\] and \[B\_method1\]. The values in parentheses are always positive.
We compare this with the original solution in equations \[Fin\], \[T1\_method1\] and \[T2\_method1\]. Notice that keeping the interior temperature to be the same as before ($T^{\prime\prime}_2=T_2$) requires a larger surface temperature ($T^{\prime\prime}_0>T_0$). If the surface dust of $T^{\prime\prime}_0$ is dynamically driven toward the external energy source, as we discussed above, then it will eventually reach the sublimation temperature $T_{sub}$. This dynamical process is accompanied by dust sublimation in the cloud interior. Sublimation creates a transverse optical depth gap and maintains $T^{\prime\prime}_2=T_{sub}$. The established equilibrium condition $T^{\prime\prime}_0=T^{\prime\prime}_2=T_{sub}$ yields This flux is positive for values of $q$ that produce temperature inversion ($q<q_{limit}$). It originates from the diffuse radiation trapped within the cloud. Figure \[Temperature\_exit\] shows temperature profiles for $q<q_{limit}$ based on $F_{exit}$ from equation \[Fexit\_min\]. The surface layer now has a local temperature minimum at $T^{\prime\prime}_1$.
Discussion
==========
According to equation \[T\_tauV\_gg\_1\], the temperature of dust optically deep inside an optically thick cloud depends solely on the local diffuse flux. Since this can give a misleading impression that temperature effects on the cloud surface have no influence on the cloud interior, it is worth explaining why the surface solution is so important for the overall solution.
At optical depths $\tau_V\ga 2$ the flux is exclusively diffuse and the dust temperature is dictated by equation \[T\_tauV\_gg\_1\]. Notice, however, that even though we set the limit of zero net flux flowing through the cloud, there is no limit on the absolute value of the local diffuse flux. Arbitrarily large but equal diffuse fluxes can flow in opposite directions producing the net zero flux. It is the radiative transfer in the surface layer that sets the absolute scale. The cloud surface of $\tau_V\la 2$ converts the directional external flux of stellar spectral shape into the diffuse thermal flux. Hence, this conversion determines the value of diffuse flux at $\tau_V\sim 2$, which then propagates into the cloud interior of $\tau_V\ga 2$.
A more rigorous analytic approach shows that the net flux is not exactly zero. A very small flux, in comparison with the external flux, goes through the cloud and creates a temperature gradient. This yields the well known gray opacity solution [@Mihalas] where $T^4(\tau)=a\tau+b$. While the constant $a$ depends only on the net flux ($a=0$ in the case of zero net flux), the constant $b$ is determined by the boundary condition on the cloud surface. Hence, this becomes a reiteration of the role of the surface layer described above.
The possibility of the temperature inversion effect described in this paper was discussed previously by @Wolf. He noticed in his numerical calculations that larger grains can have a higher temperature than smaller grains in the surface of circumstellar dusty disks. He correctly attributed this effect to the more efficient heating of large grains by the IR radiation from the disk interior, but did not analyze it further. This is the same effect noticed by @D02 in his numerical models of circumstellar disks with gray dust opacities. However, it was not until @Isella and @VIJE06 described this effect analytically in more detail that its importance to the overall disk structure was finally recognized.
The main driver for the detailed description of the temperature inversion effect came from the need for a better understanding of inner regions of dusty disks around young pre-main-sequence stars. The advancements of near infrared interferometry enabled direct imaging of these inner disk regions and resulted in the discovery of inner disk holes produced by dust sublimation [@interfer]. However, developing a self-consistent model that would incorporate the spectral and interferometric data proved to be a difficult problem. Two competing models are proposed. In one the data are explained by a disk that has a large vertical expansion (puffing) at the inner dust sublimation edge due to the direct stellar heating of the disk interior [@DDN]. In the other model the inner disk is surrounded by an optically thin dusty outflow, without a need for special distortions to the vertical disk structure [@VIJE06].
Although current observations cannot distinguish between these two models, theory gives some limits on the dust properties in the former model. The inner disk edge has to be populated by big dust grains ($\ga 1\mu$m) [@Isella; @VIJE06] in order to produce a large and bright vertical disk puffing needed to fit the data. The hallmark of this radiative transfer solution is the dust temperature inversion, which we also demonstrate in §\[single\_grain\]. @VIJE06 argue that purely big grains are unrealistic because dust always comes as a mixture of grain sizes (especially in dusty disks where dust collisions should constantly keep small grains in the mix). On the other hand, @Isella point out that smaller grains should be sublimated away from the very surface of the inner disk edge, but cannot prove that this process is efficient enough to preserve dust temperature inversion.
Our analytical analysis in §\[multigrain\] proves that multigrain radiative transfer solutions keep small grains away from the immediate surface of optically thick disks and preserve temperature inversion. This is achieved under the boundary condition of maximum possible temperature reached by all dust grains. Then the surface becomes too hot for small grains to survive, which leaves it populated only by big grains. By choosing this favorable boundary condition, we made an important presumption about the dust dynamics. Dust has to be dynamically transported to the distances where all grains start to sublimate. This can be naturally occurring in optically thick disks due to dust and gas accretion. However, in §\[transverse\] we discovered that big grains can survive closer to the star than the inner edge of optically thick disk. The only requirement is that the disk becomes vertically optically thin at these close distances.
Therefore, the dust sublimation zone is not a simple sharp step-like transition, but rather a large zone (figure \[Disksketch\]). We estimate its size by considering the inner radius of optically thick and thin disks [@VIJE06] where $T_*$, $R_*$ and $L_*$ are the stellar temperature, radius and luminosity, respectively, and $\Psi$ is the correction for diffuse heating from the disk edge interior. Optically thick disks have $\Psi=2$, while optically thin disks have $\Psi\sim 1.2$. The sublimation zone exists between these two extremes, which translates to $\sim 0.2$AU for Herbig Ae stars ($\sim 50L_\odot$) or $\sim 0.03$AU in T Tau stars ($\sim L_\odot$). This is a considerable size detectable by interferometric imaging and has to be addressed in future studies. Also, its gas is enriched by metals coming from the sublimated dust, which makes this zone a perfect place for dust growth. Notice that, in addition to sublimation, grain growth is another way of making the disk optically thinner. Hence, the evolutionary role of such a large sublimation zone has to be studied further in more detail.
Another major problem with the dynamics of big grains is that they tend to settle toward the midplane, which suppresses vertical expansion of the inner disk edge and leads to the failure of the disk puffing model [@DD04; @VIJE06]. On the other hand, the competing model incorporating a dusty outflow lacks a convincing physical process responsible for the formation of a dusty wind. The sublimation zone may play an important role in dusty wind processes, since its small optical depth and small distance from the star should also result in gas properties that are more susceptible to non-gravitational forces capable of launching a dusty wind (such as magnetic fields) than the rest of the dusty disk.
Conclusion
==========
We analyzed the temperature structure of externally heated optically thick dust clouds. We focused on the recently discovered effect of temperature inversion within the optically thin surface of a cloud populated by big ($\ga 1\mu$m) dust grains [@Isella; @VIJE06]. The effect is produced by reprocessing the external radiation and not by any additional energy source. The inversion manifests itself as a temperature increase with optical depth, before it starts to decrease once the external directional radiation is completely transformed into the diffuse thermal flux. Since small grains do not show this effect, the open question was whether small grains would manage to suppress this effect when mixed with big grains.
We show analytically that small grains remove the temperature inversion of big grains if the overall opacity is dominated by small grains. However, this does not happen in situations where the cloud is close enough to the external energy source for dust to start sublimating. Small grains acquire a higher temperature than bigger grains and sublimate away from the immediate cloud surface. The exact grain size composition of the surface depends on the amount of external flux because different grain sizes sublimate at different distances from the surface. These distances are smaller than in optically thin clouds because bigger grains provide shielding to smaller grains from direct external radiation. We show that the temperature inversion is always preserved if small grains are removed from only $\tau_V\sim 1$ of the immediate surface.
If the boundary condition requires all dust sizes to maximize their temperature, then all small grains are removed from the surface layer. They can exist only within the cloud interior, at depths where $\exp(-\tau_V)\ll 1$ and the external radiation is completely absorbed. Such a condition is expected in protoplanetary disks, where accretion moves dust toward the star. The inner disk radius is then defined by the largest grains, no matter what the overall grain size composition, because the largest grains survive closest to the star and dictate the surface radiative transfer.
A new problem arises in that case. Since the temperature inversion keeps the very surface of the cloud below the sublimation temperature, its dust can move even closer to the star. We show that this creates an optically thin dusty zone inside the dust destruction radius of an optically thick disk (figure \[Disksketch\]). Only big grains can survive in this zone. We estimate that its size is large enough to be detected by near-IR interferometry. It consists of big grains and gas enriched by metals from sublimated dust, and is hence favorable for grain growth. This shows that the geometry and structure of inner disks cannot be determined by simple ad hoc boundary conditions; they require self-consistent calculations of dust dynamics combined with radiative transfer and dust sublimation.
Support by the NSF grant PHY-0503584 and the W.M. Keck Foundation is gratefully acknowledged.
Calvet, N., Patino, A., Magris, G. C., & D’Alessio, P. 1991, , 380, 617
Carciofi, A. C., Bjorkman, J. E., & Magalhães, A. M. 2004, , 604, 238
Dullemond, C. P. 2002, , 395, 853
Dullemond, C. P., Dominik, C., & Natta, A. 2001, , 560, 957
Dullemond, C. P., & Dominik, C. 2004, , 417, 159
Efstathiou, A., & Rowan-Robinson, M. 1994, , 266, 212
Isella, A., & Natta, A. 2005, , 438, 899
Malbet, F., & Bertout, C. 1991, , 383, 814
Mihalas, D. 1978, Stellar Atmospheres (San Francisco: W. H. Freeman)
Monnier, J. D., & Millan-Gabet, R. 2002, , 579, 694
Vinković, D., Ivezić, Ž., Jurkić, T., & Elitzur, M. 2006, , 636, 348
Wolf, S. 2003, , 582, 859
[^1]: Typically, the term [*multigrain*]{} also includes all other dust grain properties, like the grain shape and chemistry. But, for simplicity, we use this term only to describe grain size effects.
[^2]: Another type of temperature inversion has been recognized in protoplanetary disks, where additional viscous heating can increase the disk interior temperature [@Calvet; @Malbet]. In contrast, the temperature inversion discussed here is a pure radiative transfer effect and does not require any additional assumption (like disk viscosity) to operate.
---
abstract: 'The weak spin-orbit interaction in graphene was predicted to be enhanced, [*e.g.*]{}, by hydrogenation, which should result in a sizable spin Hall effect (SHE). We employ two different methods to examine the spin Hall effect in weakly hydrogenated graphene. For hydrogenation we expose graphene to a hydrogen plasma and use Raman spectroscopy to characterize this method. We then investigate the SHE of hydrogenated graphene in the H-bar geometry and by direct measurements of the inverse SHE. Although a large nonlocal resistance can be observed in the H-bar structure, comparison with the results of the other method indicates that this nonlocal resistance has a non-spin-related origin.'
author:
- Tobias Völkl
- Denis Kochan
- Thomas Ebnet
- Sebastian Ringer
- Daniel Schiermeier
- Philipp Nagler
- Tobias Korn
- Christian Schüller
- Jaroslav Fabian
- Dieter Weiss
- Jonathan Eroms
title: 'Absence of a giant spin Hall effect in plasma-hydrogenated graphene'
---
Introduction
============
Covalently bonded hydrogen was predicted by Castro Neto and Guinea to significantly increase the spin-orbit coupling (SOC) of graphene[@CastroNeto2009]. However, experimental results on this have been conflicting. Balakrishnan *et al.* reported a high nonlocal resistance in weakly hydrogenated graphene in the so-called H-bar structure[@Balakrishnan2013a]. They further observed an oscillatory behavior of this nonlocal resistance with an in-plane magnetic field and therefore attributed this effect to the SHE, with a spin Hall angle of around $\alpha_{SH} = 0.18 - 0.45$. A high nonlocal resistance in similar samples was also observed by Kaverzin and van Wees[@Kaverzin2015]. However, they obtained an unrealistically high value for the spin Hall angle of $\alpha_{SH} =1.5$ and could not observe any effect of an in-plane magnetic field on this nonlocal resistance. They therefore argued that this nonlocal signal has a non-spin-related origin.
Here, we perform different types of experiments to resolve this controversy. For hydrogenation we expose graphene to a hydrogen plasma, which has several advantages over the hydrogenation method of exposing hydrogen silsesquioxane (HSQ) to an electron beam, employed in Refs. . We use Raman spectroscopy to characterize graphene exposed to hydrogen or deuterium and to verify that the defects created by this method are indeed bonded hydrogen atoms. We then perform nonlocal measurements in the so-called H-bar geometry in graphene that was hydrogenated by this method. Further, we employ electrical spin injection into hydrogenated graphene to perform spin transport measurements as well as measurements of the inverse spin Hall effect. Our results show that the large nonlocal signal in hydrogenated graphene is not related to the spin Hall effect.
Plasma hydrogenation of graphene
================================
Due to limitations of the HSQ-based hydrogenation procedure, which we describe in more detail below, we explore hydrogenation by exposing graphene to a hydrogen plasma in a reactive ion etching chamber (RIE). Following the recipe developed by Wojtaszek *et al.*[@Wojtaszek2011], exfoliated graphene was exposed to hydrogen plasma of pressure $p=40$ mTorr, 30 sccm gas flow and 2 W power. The relatively low power leads to a low acceleration bias voltage of $U_{bias}<2$ V, which reduces the creation of lattice defects. The samples were then investigated by Raman spectroscopy.
Fig. \[hydrogen\_dose\] (a) shows Raman spectra of samples with different plasma exposure time.
![image](hydrogen_dose.pdf){width="0.6538\columnwidth"} ![image](deuterium_dose.pdf){width="0.6599\columnwidth"} ![image](dose_plus_annealing.pdf){width="0.6919\columnwidth"} ![image](H_plus_D_comp.pdf){width="0.7\columnwidth"}
With increasing exposure time both a D-peak and a D$^\prime$-peak arise, which indicate the presence of defects. For higher exposure times a decrease of the 2D-peak intensity can be observed, which indicates an alteration of the electronic band structure. As can be seen from the red curve in Fig. \[hydrogen\_dose\] (c), the ratio between the D- and G-peak intensities increases with exposure time up to a value of around $I_D/I_G=3$ at an exposure time of $t=40$ s and decreases for longer exposure times. For low defect densities the ratio between the D- and G-peak intensities is proportional to the defect density[@Cancado2011]: $$n_D({\rm cm^{-2}})=\frac{(1.8 \pm 0.5) \times 10^{22}}{\lambda_L^4} \left( \frac{I_D}{I_G} \right)
\label{n_D}$$ with $\lambda_L=532$ nm the excitation wavelength (entered in nm in Eq. \[n\_D\]). $I_D/I_G$ reaches its maximum when the average distance between defects becomes comparable to the distance an electron-hole pair travels in its lifetime, given by $l_x=v_F/\omega_D$ with $\omega_D$ being the D-peak frequency[@Cancado2011]. At higher defect densities the D-peak becomes broader and its intensity decreases. Further, at high defect densities the graphene band structure is altered by the defects, which reduces the number of possible transitions[@Ferrari2007]. Since the 2D peak is double resonant, it is more sensitive to this alteration than the D- and G-peaks, and therefore a reduction of the 2D-peak intensity with increasing exposure time can be observed in Fig. \[hydrogen\_dose\](a).
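As an illustration of Eq. \[n\_D\] at its nominal validity limit, the largest observed ratio $I_D/I_G\approx 3$ corresponds to $$n_D\approx\frac{1.8\times 10^{22}}{532^4}\times 3\approx 7\times 10^{11}\ {\rm cm^{-2}},$$ i.e. a coverage of order 0.02$\%$ if the areal density of carbon atoms in graphene, $\approx 3.8\times 10^{15}$ cm$^{-2}$, is taken as reference; this number should be regarded only as a rough estimate, since the linear relation strictly holds only at low defect densities.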
The green curve of Fig. \[hydrogen\_dose\](c) shows $I_D/I_G$ for the same samples after annealing in vacuum at 320 $\degree$C for 1 h. For low plasma exposure times $t \leq 40$ s annealing almost fully removes the defects. Since this temperature is too low to heal vacancies[@Zion2017] in graphene, this behavior indicates that for these low exposure times the observed defects are bonded hydrogen atoms. For $t>40$ s the defects could not be removed by annealing. Therefore the occurrence of lattice defects for higher plasma exposure times is likely. Possible explanations for this might be heating of the samples during the exposure process or etching of carbon atoms by the formation of CH$_2$ after saturation of the hydrogen coverage of graphene[@Luo2009].
To further determine the type of the observed defects the same experiment was performed with deuterium instead of hydrogen. Fig. \[hydrogen\_dose\](b) shows Raman spectra for different exposure times. In comparison to Fig. \[hydrogen\_dose\](a) deuterium seems to induce slightly more defects than hydrogen as can be seen by the rapid decrease of 2D-peak intensity in Fig. \[hydrogen\_dose\](b). One explanation for this could be a higher reactivity of deuterium, due to a slightly increased binding energy[@Paris2013]. Another explanation is that the deuterium atoms are more likely to create lattice defects due to their higher mass.
Samples exposed to either hydrogen or deuterium with an exposure time of $t=20$ s were annealed for 1 h in vacuum at different temperatures. Fig. \[hydrogen\_dose\](d) shows the relative $I_D/I_G$ ratio divided by its value before annealing. Surprisingly the bonded deuterium (red dots in Fig. \[hydrogen\_dose\](d)) is more stable with temperature than the hydrogen (black dots in Fig. \[hydrogen\_dose\](d)). A similar behavior has been observed for hydrogen and deuterium on graphite[@Zecho2002]. This can be explained by a slightly increased binding energy of deuterium due to zero-point energy effects[@Paris2013] and a lower attempt frequency due to the higher mass of deuterium compared to hydrogen, hindering desorption [@Zecho2002]. The fact that a different desorption behavior was found for hydrogen and deuterium is a clear indication that the defects created by this method are really bonded hydrogen since there should be no difference for other defect types.
Concerning the HSQ-based hydrogenation method employed in Refs. we note several difficulties. First, the HSQ film cannot be removed after exposure without destroying the underlying graphene sheet. Therefore, hydrogenation can only be done as a last step of the sample fabrication. Since resist residues from previous steps proved to prevent efficient hydrogenation, it is expected that the hydrogen coverage produced by this method is not homogeneous. Second, a high p-type doping was always observed in samples produced by this method both in our measurements [^1] as well as in the measurements by Kaverzin and van Wees [@Kaverzin2015]. This is problematic since the occurrence of the SHE is only expected close to the charge neutrality point (CNP) [@Ferreira2014], which in these samples is often not accessible due to the high doping. Third, it is not entirely clear that the defects produced by this method are really bonded hydrogen since the Raman measurements are not sensitive to the defect type. Therefore, in our experiments, we resort to plasma hydrogenation.
Nonlocal resistance in hydrogenated graphene
============================================
Using plasma hydrogenation a Hall-bar sample was fabricated. First, exfoliated graphene was exposed to hydrogen plasma for 20 s as described in the previous section. Afterwards, oxygen plasma was used to etch the graphene into a Hall bar and 0.5 nm Cr + 60 nm Au were deposited for contacts. A schematic picture of the sample structure is displayed in the inset of Fig. \[hstruct\].
![Back gate dependent four-point resistivity of H-bar sample at $T=185$ K (black curve) and $T=1.7$ K (red curve). This gives a position of the charge neutrality point at U$_{\textrm{CNP}}=26$ V indicating p-type doping and a mobility around $\mu=1600$ cm$^2$/Vs. Inset: Schematic picture of an H-bar sample.[]{data-label="hstruct"}](066bgsw.pdf){width="0.9\columnwidth"}
Raman measurements of this sample reveal $I_D/I_G=0.43$. Using Eq. \[n\_D\] and assuming that the defect density equals the hydrogen atom density, we extract a coverage of 0.0025%. This value is much lower than in the previous section for the same exposure time since several lithography steps and therefore resist bake-out steps were necessary after the hydrogenation process. However, employing hydrogenation as a first step in the sample fabrication process was preferred over using it as a last step since it is expected that resist residues lead to an inhomogeneous hydrogen coverage of the sample.
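This coverage can be checked explicitly: Eq. \[n\_D\] with $I_D/I_G=0.43$ and $\lambda_L=532$ nm gives $$n_D\approx\frac{1.8\times 10^{22}}{532^4}\times 0.43\approx 1\times 10^{11}\ {\rm cm^{-2}},$$ which, divided by the areal density of carbon atoms in graphene ($\approx 3.8\times 10^{15}$ cm$^{-2}$, assumed here), indeed corresponds to $\approx 0.0025\,\%$.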
Back gate sweeps of the 4-point resistivity of this sample at temperatures $T=185$ K (black curve) and $T=1.7$ K (red curve) are depicted in Fig. \[hstruct\]. In this sample a p-type doping with $U_{CNP}=26$ V and mobilities of $\mu_h=1400$ cm$^2/$Vs ($1500$ cm$^2/$Vs) for the hole side and $\mu_{el}=1800$ cm$^2/$Vs ($2000$ cm$^2/$Vs) for the electron side at $T=185$ K ($T=1.7$ K) were observed.
For obtaining the nonlocal resistance, a current was applied between contacts 2 and 8 in the inset of Fig. \[hstruct\] and a voltage was measured between contacts 3 and 7 (Fig. \[nl\](a)) and between contacts 4 and 6 (Fig. \[nl\](b)).
![image](nl2um.pdf){width="0.65\columnwidth"} ![image](nlb2um.pdf){width="0.65\columnwidth"} ![image](2um_sim.pdf){width="0.65\columnwidth"} ![image](nl4um.pdf){width="0.65\columnwidth"} ![image](nlb4um.pdf){width="0.65\columnwidth"} ![image](4um_sim.pdf){width="0.65\columnwidth"}
Decreasing the temperature from $T=185$ K (black curves in Fig. \[nl\](a) and (b)) to $T=1.7$ K (green curves in Fig. \[nl\](a) and (b)) increases the nonlocal resistance close to the charge neutrality point. The red curves depict the expected ohmic contribution given by $R_{ohmic}=R_{2pt} \cdot G$, with $R_{2pt}$ being the 2-point resistance between contacts 2 and 8 and $G$ a geometry factor determined by a finite element simulation done with COMSOL. As can be seen in Fig. \[nl\](a) and (b), close to the charge neutrality point the measured nonlocal resistances far exceed the expected ohmic contribution.
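For comparison, for an idealized infinite strip of width $W$ the ohmic contribution can be estimated analytically as $$R_{ohmic}\approx\frac{2\rho}{\pi}\ln\left[\coth\left(\frac{\pi L}{2W}\right)\right]\approx\frac{4\rho}{\pi}\exp\left(-\frac{\pi L}{W}\right)\quad\textrm{for}\ L\gtrsim W,$$ i.e. it decays exponentially on the short length scale $W/\pi$. This textbook estimate only illustrates the expected decay; the actual contact layout is accounted for by the simulated geometry factor $G$.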
As argued by Balakrishnan *et al.*[@Balakrishnan2013a] this nonlocal resistance might be caused by an interplay between the direct and inverse spin Hall effects. Then the nonlocal resistance as a function of the distance $L$ from the current path is given by[@Abanin2009]: $$R_{nl}=\frac{1}{2} \alpha_{SH}^2 \rho \frac{W}{ \lambda_s} \exp \left(-\frac{L}{\lambda_s} \right)
\label{Rnl}$$ with the sheet resistivity $\rho$, the sample width $W$ and the spin diffusion length $\lambda_s$. By comparing $R_{nl}$ at the two different distances in Fig. \[nl\](a) and (b), $\lambda_s$ can be calculated to be in the range of $\lambda_s=510 -565$ nm. With this, the spin Hall angle $\alpha_{SH}$ close to the charge neutrality point can be calculated to be $\alpha_{SH}=1.3$ for $T=185$ K and $\alpha_{SH}=1.6$ for $T=1.7$ K. These unrealistically high values are similar to the one reported by Kaverzin and van Wees[@Kaverzin2015].
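Explicitly, evaluating Eq. \[Rnl\] at two probe distances $L_1<L_2$ (with $\rho$ and $W$ unchanged) gives $$\lambda_s=\frac{L_2-L_1}{\ln\left[R_{nl}(L_1)/R_{nl}(L_2)\right]},\qquad \alpha_{SH}=\sqrt{\frac{2\lambda_s R_{nl}(L)}{\rho W}}\,e^{L/(2\lambda_s)},$$ which is the procedure used for the values quoted above.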
Further, if the large nonlocal resistance is caused by the spin Hall effect, $R_{nl}$ should be sensitive to an in-plane magnetic field, due to Larmor precession of the spins. The expected oscillatory behavior of $R_{nl}$ is[@Abanin2009]: $$R_{nl}(B_{||})= \frac{1}{2} \alpha_{SH}^2 \rho W\, \mathrm{Re} \left[ \frac{\sqrt{1+i \omega_L \tau_s}}{\lambda_s} \exp \left(- \frac{\sqrt{1+i \omega_L \tau_s}}{\lambda_s}\, L \right) \right]
\label{nlB}$$ with $\omega_L$ being the Larmor frequency and $\tau_s$ the spin lifetime.
Fig. \[nl\](c) and (d) show the influence of a magnetic field along both in-plane directions (black and red curves) on $R_{nl}$ for two different distances from the current path. As can be seen, no significant change of $R_{nl}$ with $B_{||}$ is observed. This disagrees with the behavior expected from Eq. \[nlB\], which is depicted in Fig. \[nl\](e) and (f) for different values of $\tau_s$ in a realistic range; a lower bound of $\tau_s>10$ ps could be established from the absence of a weak antilocalization peak[@Note1]. As these panels indicate, a significant dependence of $R_{nl}$ on $B_{||}$ should be visible.
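The relevant field scale can be estimated from the condition $\omega_L\tau_s\sim 1$: assuming $\omega_L=g\mu_B B_{||}/\hbar$ with $g=2$, $$B_{||}\sim\frac{\hbar}{g\mu_B\tau_s}\approx 0.6\ \textrm{T}\times\left(\frac{10\ \textrm{ps}}{\tau_s}\right),$$ so that for $\tau_s\gtrsim 10$ ps a clear modulation of $R_{nl}$ is already expected at in-plane fields well below 1 T.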
Inverse spin Hall effect in hydrogenated graphene
=================================================
Due to the difficulties arising from measuring the spin Hall effect in the H-bar geometry a more direct way for observing this effect is desirable. One way to examine the inverse spin Hall effect electrically was explored by Valenzuela and Tinkham[@Valenzuela2006] in aluminum wires. For this they employed electrical spin injection to create a spin current through the wire and measured a resulting nonlocal voltage across a Hall bar.
To employ this method in hydrogenated graphene the sample shown schematically in Fig. \[06\_pic\](a) was fabricated.
![image](06_bgsw2.pdf){width="0.7\columnwidth"} ![image](06_nlsv2.pdf){width="0.7\columnwidth"}
First, exfoliated graphene was exposed to hydrogen plasma for 20 seconds. Spin injection contacts consisting of 1.2 nm MgO, acting as a tunnel barrier, 50 nm Co and 10 nm Au were deposited (orange stripes in Fig. \[06\_pic\](a)). Afterwards 0.5 nm Cr +80 nm Au were deposited for contacts. As a last step oxygen plasma was employed to etch the sample.
Fig. \[06\_pic\](b) shows back gate sweeps of this sample, where a current was applied between contacts 1 and 5 and the voltage was taken between contacts 2 and 3 (black curve in Fig. \[06\_pic\](b)) and between contacts 3 and 4 (red curve in Fig. \[06\_pic\](b)). As can be seen the position of the charge neutrality point differs for the two areas. This can be caused by different doping of the areas either by the ferromagnetic contacts or by a difference in hydrogen coverage between the area underneath the stripes and the rest of the sample. Mobilities of $\mu_h=2000$ cm$^2$/Vs for the hole side and $\mu_{el}=2400$ cm$^2$/Vs for the electron side could be observed in this sample.
Further, nonlocal spin injection measurements were performed to examine whether spin injection is possible with these contacts[@Johnson1985]. Fig. \[06\_pic\](c) shows nonlocal spin-valve measurements at different back gate voltages. Here a current is applied between contacts 3 and 5 in Fig. \[06\_pic\](a) and a nonlocal voltage is measured between contacts 2 and 1. The magnetization of the ferromagnetic stripes is first aligned by a magnetic field in stripe direction of $B_y=1$ T. Then the magnetic field is swept in the opposite direction. Due to their different shape the two ferromagnet stripes have a different coercive field. As can be seen in Fig. \[06\_pic\](c) a clear difference between parallel and antiparallel alignment of the stripe magnetizations can be observed over the whole back gate range.
Applying an out-of plane magnetic field to this setup leads to precession of the spins around that field. The out-of plane magnetic field dependence is depicted in Fig. \[06hanle\].
![Nonlocal resistance after subtraction of a parabolic background (black curve) at $U_{bg}=0$ V. The Hanle peak at low magnetic field can be fitted with Eq. \[eqhanle\] (red curve). At high magnetic field the stripe magnetization is rotated in the magnetic field direction.[]{data-label="06hanle"}](06_hanlefit.pdf){width="0.9\columnwidth"}
Here, a parabolic background, which can be caused by a charge current contribution in the nonlocal path due to the presence of pinholes in the tunnel barriers[@Volmer2015a], was subtracted. In the low magnetic field range the nonlocal resistance follows the expected behavior of the Hanle effect[@Fabian2007]: $$\begin{split}
R_{nl}(\omega_L)=\frac{P^2 \rho D_s}{W} \int_{0}^{\infty} \frac{1}{\sqrt{4 \pi D_s t}} \exp \left( - \frac{L^2}{4 D_s t} \right) \\ \cos(\omega_L t) \exp \left( - \frac{t}{\tau_s} \right) \textrm{d} t\\
R_{nl}(0)=\frac{P^2 \rho \lambda_s}{2 W} \exp(-L/\lambda_s)
\end{split}
\label{eqhanle}$$ with $P$ the spin injection efficiency, $D_s$ the spin diffusion constant, and $\lambda_s=\sqrt{D_s \tau_s}$. Fitting the data in the low magnetic field range (red curve in Fig. \[06hanle\]) reveals a spin injection efficiency of $P=3.1 \%$. The injection efficiency is much lower than what is typically observed with this kind of tunnel barrier in pristine graphene. This can be caused by enhanced island growth of the MgO tunnel barrier due to the attached hydrogen and therefore an increased number of pinholes in the barrier, resulting in a relatively low contact resistance of $R_c=1.2-4.2$ k$\Omega \mu$m$^2$. Another explanation might be increased spin relaxation in the barrier due to the hydrogen atoms. It has to be noted that fabricating spin-selective contacts in graphene hydrogenated by this method proved to be difficult in general.
Further, the extracted spin lifetime of $\tau_s=146$ ps is much smaller than what was observed in pristine graphene with tunneling contacts produced by the same method[@Ringer2018]. This is in contrast to the findings of Wojtaszek *et al.*, who observed an increase in spin lifetime after treating pristine graphene with hydrogen plasma[@Wojtaszek2013]. The small value of the spin lifetime can be caused either by increased contact-induced spin relaxation due to a larger number of pinholes[@Volmer2013] or by increased spin relaxation from hydrogen atoms acting as magnetic impurities[@Kochan2014]. However, $\tau_s$ is still large enough that a clear oscillation of the nonlocal resistance in the H-bar geometry should be visible, as shown by Fig. \[nl\](e) and (f).
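The shape of the Hanle curve of Eq. \[eqhanle\] can be evaluated numerically with a few lines of code; the following minimal sketch uses the quoted $\tau_s$, while the diffusion constant and contact separation are illustrative assumptions rather than fitted values.

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch of the Hanle line shape of Eq. (eqhanle).
D_s   = 0.01          # spin diffusion constant [m^2/s] (assumed)
L     = 2e-6          # injector-detector separation [m] (assumed)
tau_s = 146e-12       # spin lifetime [s] (value quoted above)
g, mu_B, hbar = 2.0, 9.274e-24, 1.0546e-34

def hanle(B_z):
    """Nonlocal Hanle signal (unnormalized) at out-of-plane field B_z in tesla."""
    omega_L = g * mu_B * B_z / hbar
    integrand = lambda t: (np.exp(-L**2 / (4.0 * D_s * t)) / np.sqrt(4.0 * np.pi * D_s * t)
                           * np.cos(omega_L * t) * np.exp(-t / tau_s))
    value, _ = quad(integrand, 1e-16, 20.0 * tau_s, limit=200)
    return value

fields = np.linspace(-0.3, 0.3, 121)
signal = np.array([hanle(B) for B in fields])
signal /= signal[fields.size // 2]   # normalize to R_nl(B)/R_nl(0)
```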
At higher magnetic fields the stripe magnetization rotates into the out-of-plane direction. Therefore the polarization of the injected spins has an out-of-plane component that does not precess around the external field. The nonlocal resistance saturates around a magnetic field of $B_z=1.8$ T. This value coincides with the field at which the magnetization direction is completely rotated into the out-of-plane direction, as determined by anisotropic magnetoresistance measurements[@Note1].
Contrary to similar measurements performed by Tombros *et al.* in pristine graphene[@Tombros2008a] no difference between the zero magnetic field value and the saturation value of the nonlocal resistance could be observed.
![image](06_ishe.pdf){width="0.64\columnwidth"} ![image](ishe_sim.pdf){width="0.64\columnwidth"} ![image](ishe_sim_scheme2.jpg){width="0.64\columnwidth"}
This indicates isotropic spin relaxation, consistent with the expectation that contact-induced spin relaxation and spin-flip scattering at the adsorbed hydrogen atoms are the dominant spin relaxation mechanisms; both result in isotropic spin relaxation.
For the measurement of the inverse spin Hall effect a current was applied between contacts 3 and 1 in Fig. \[06\_pic\](a) and a nonlocal voltage was measured between contacts 4 and 6. Without an external magnetic field the stripe magnetization lies in the in-plane direction, and therefore no nonlocal voltage due to an inverse spin Hall effect is expected. Applying an out-of-plane magnetic field results in a rotation of the stripe magnetization towards the out-of-plane direction. The resulting out-of-plane component of the spin polarization then leads to a nonlocal voltage that is expected to follow[@Valenzuela2006]: $$R_{SH}=\frac{1}{2} P \alpha_{SH} \rho \exp(-L/\lambda_s) \sin(\theta)
\label{eqishe}$$ with $\sin (\theta)$ being the projection of the stripe magnetization on the $z$-axis. With Eq. \[eqhanle\], a saturation of the nonlocal resistance at $B_z=1.8$ T with $R_{SH}= \frac{\alpha_{SH} W}{P \lambda_s} R_{nl}(0) \approx \alpha_{SH} \cdot 6.9 \Omega $ is expected. The expected resulting $R_{SH}$ for $\alpha_{SH}=1$ is depicted by the purple curve in Fig. \[06ishe\](a). For this, the angular dependence of the magnetization direction $\sin(\theta)$ was extracted from Fig. \[06hanle\][@Valenzuela2006] and an offset was added for clarity.
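The numerical prefactor quoted above follows directly from combining Eq. \[eqishe\] with the zero-field spin valve amplitude of Eq. \[eqhanle\]: $$\frac{R_{SH}(\sin\theta=1)}{R_{nl}(0)}=\frac{\frac{1}{2} P \alpha_{SH} \rho\, e^{-L/\lambda_s}}{\frac{P^2 \rho \lambda_s}{2W}\, e^{-L/\lambda_s}}=\frac{\alpha_{SH} W}{P \lambda_s},$$ assuming that the same $L$, $\rho$ and $\lambda_s$ enter both measurements.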
The observed nonlocal resistance in this geometry for different back gate voltages is shown in Fig. \[06ishe\](a). Here a large magnetic field dependent nonlocal resistance can be seen. However, no saturation of this nonlocal resistance for $B_z>1.8$ T was observed. The magnetic field dependence of the nonlocal resistance is therefore unlikely to be caused by the spin Hall effect.
To determine the origin of this effect a finite element simulation done with COMSOL was performed. For this the potential distribution in the presence of two pinholes in the tunnel barrier was calculated (similar to the calculations in Ref. ) as shown in Fig. \[06ishe\](c). The resulting magnetic field dependence for different charge carrier concentrations shown in Fig. \[06ishe\](b) is comparable to the nonlocal resistance in Fig. \[06ishe\](a). Therefore it is likely that the observed magnetic field dependence of the nonlocal resistance is caused by a charge current effect due to the presence of pinholes.
This effect can mask a potential inverse spin Hall effect signal. However, the large spin Hall angle of $\alpha_{SH} \approx 1$ resulting from the spin Hall interpretation of the H-bar geometry should still be observable close to the charge neutrality point $U_{CNP}=10$ V of the areas that are not covered by the ferromagnetic stripes.
Spin Hall angle - an estimation of order of magnitude
=====================================================
In this section we provide a theoretical estimate of the upper bound of the spin Hall angle $\alpha_{SH}$, which conventionally expresses the conversion rate of a charge current into a transverse spin current in the presence of SOC. To model hydrogen chemisorption, we employ the tight-binding Hamiltonian inspired by first-principles calculations proposed in Ref. . Plain graphene is described by the conventional nearest-neighbor Hamiltonian $H_0$, and the hydrogen-induced perturbation, including a locally enhanced SOC, by the Hamiltonian $H^\prime$, see Refs. . The related transport characteristics are estimated using the methodology developed in Refs. . In particular, for a given scattering process $\mathbf{n}, s\mapsto\tilde{\mathbf{n}}, \tilde{s}$, in which an electron with the incident direction and spin, $\mathbf{n}, s$, elastically scatters into an outgoing state $\tilde{\mathbf{n}},\tilde{s}$, we calculate the corresponding differential cross-section $\tfrac{\mathrm{d}\sigma}{\mathrm{d}\varphi}\bigl(\mathbf{n},s;\tilde{\mathbf{n}},\tilde{s}\bigr)$, which also depends on the energy of the incident electron. Knowing $\tfrac{\mathrm{d}\sigma}{\mathrm{d}\varphi}$, we know the spatial probability distributions of electrons with flipped or conserved spin as a function of the relative angle $\tilde{\varphi}_{\mathbf{n}}=\sphericalangle(\mathbf{n}\tilde{\mathbf{n}})$. Elastic scattering governed by $H^\prime$ affects momentum relaxation due to resonances near the Dirac point[@Wehling2010; @Irmer2018], and also spin relaxation due to the locally enhanced SOC[@Bundesmann2015]. Although hydrogen is also predicted to induce an unpaired magnetic moment[@Yazyev2007], which can serve as another spin relaxation channel[@Kochan2014], we restrict our estimate of $\alpha_{SH}$ to the local SOC interactions.
Assuming a spin polarized beam of, say, spin-up electrons with the incident energy $E$, the upper bound of the spin Hall angle $\alpha_{SH}(E)$ reads: $$\alpha_{SH}(E)\approx \frac{\left\langle\sum\limits_{\tilde{\mathbf{n}}}\left[\frac{\mathrm{d}\sigma}{\mathrm{d}\varphi}\bigl(\mathbf{n},\uparrow;\tilde{\mathbf{n}},\uparrow\bigr)-
\frac{\mathrm{d}\sigma}{\mathrm{d}\varphi}\bigl(\mathbf{n},\downarrow;\tilde{\mathbf{n}},\uparrow\bigr)\right]\sin{\tilde{\varphi}_{\mathbf{n}}}\right\rangle}
{\left\langle\sum\limits_{\tilde{\mathbf{n}}}\left[\frac{\mathrm{d}\sigma}{\mathrm{d}\varphi}\bigl(\mathbf{n},\uparrow;\tilde{\mathbf{n}},\downarrow\bigr)\right]
2\cos{\tilde{\varphi}_{\mathbf{n}}}\right\rangle}\,,$$ where the angle brackets represent averaging over all incoming directions $\mathbf{n}$. The calculation was performed for one hydrogen atom in a supercell containing 16120 carbon atoms, [*i.e.*]{} a hydrogen concentration of 0.0062 %. Fig. \[FigX\] displays $\alpha_{SH}$ as a function of the Fermi energy. The obtained values are comparable in magnitude with, [*e.g.*]{}, those of Ferreira *et al.* [@Ferreira2014], but are far smaller than the values obtained by fitting the experimental data with Eq. \[Rnl\].
![Estimated spin Hall angle $\alpha_{SH}$ at zero temperature for a dilute hydrogenated graphene as a function of Fermi energy (zero energy corresponds to the charge neutrality point). The tight-binding parameters and model-based calculation follow[@Gmitra2013; @Bundesmann2015].\[FigX\]](SHA2.pdf){width="45.00000%"}
Further, as seen in Fig. \[FigX\], $\alpha_{SH}$ is expected to vanish at the charge neutrality point, which is in contrast to the observed nonlocal resistance in Fig. \[nl\].
Discussion
==========
The background effect observed in Fig. \[06ishe\] could mask the relatively small spin Hall angle resulting from the theoretical estimate in Fig. \[FigX\]. However, the high value of $\alpha_{SH}>1$ following from the SHE interpretation of the nonlocal resistance in Fig. \[nl\](a) and (b) should still be observable. Further, this unusually high spin Hall angle, as well as the absence of an oscillatory behavior of $R_{nl}$ with an in-plane magnetic field, supports the findings of Kaverzin and van Wees[@Kaverzin2015]. These results suggest that the large nonlocal resistance observed in Fig. \[nl\](a) and (b) is caused by a non-spin-related mechanism.
Large nonlocal resistances in the H-bar structure were also observed in graphene decorated with heavy atoms[@Wang2015], in hBN/graphene heterostructures[@Gorbachev2014], and in graphene structured with an antidot array[@Pan2017]. These were attributed to the occurrence of a valley Hall effect[@Wang2015; @Gorbachev2014], to a nonzero Berry curvature due to the presence of a band gap[@Pan2017], and to transport through evanescent waves[@VanTuan2016; @Tworzydlo2006]. However, none of these effects can sufficiently explain the observed behavior[@Note1].
Conclusion
==========
In conclusion, we employed two different types of measurements to investigate the spin Hall effect in hydrogenated graphene. For hydrogenation, graphene was exposed to a hydrogen plasma. This technique was characterized by Raman spectroscopy. Since Raman measurements are only sensitive to the number of defects and not to the defect type, measurements with both hydrogen and deuterium were performed. The different desorption behavior observed for these isotopes is a clear indication that the defects produced by this method are indeed bonded hydrogen atoms.
Nonlocal measurements in the so-called H-bar geometry showed a large nonlocal resistance that, however, showed no dependence on an in-plane magnetic field. Also, measurements of the inverse spin Hall effect by electrical spin injection showed no sign of the large spin Hall angle suggested by the spin Hall effect interpretation of the nonlocal measurements. Further, a theoretical estimate yielded a much smaller spin Hall angle than suggested by the spin Hall interpretation of the nonlocal resistance in the H-bar method. These results indicate that the large nonlocal resistance has a non-spin-related origin.
Acknowledgments {#acknowledgments .unnumbered}
===============
Financial support by the Deutsche Forschungsgemeinschaft (DFG) through project KO 3612/3-1 and within the programs GRK 1570, SFB 689, and SFB 1277 (projects A09, B05 and B06) is gratefully acknowledged. This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 696656 (Graphene Flagship).
doi:10.1103/PhysRevLett.103.026804
doi:10.1038/nphys2576
doi:10.1103/PhysRevB.91.165412
doi:10.1063/1.3638696
doi:10.1021/nl201432g
doi:10.1016/j.ssc.2007.03.052
doi:10.1063/1.4978312
doi:10.1021/nn900371t
doi:10.1002/adfm.201202355
doi:10.1063/1.1511729
doi:10.1021/nl802940s
doi:10.1103/PhysRevLett.112.066601
doi:10.1103/PhysRevB.79.035304
doi:10.1038/nature04937
doi:10.1103/PhysRevLett.55.1790
http://stacks.iop.org/2053-1583/2/i=2/a=024001
http://epub.uni-regensburg.de/7807/
doi:10.1103/PhysRevB.97.205439
doi:10.1103/PhysRevB.87.081402
doi:10.1103/PhysRevB.88.161405
doi:10.1103/PhysRevLett.112.116602
doi:10.1103/PhysRevLett.101.046601
doi:10.1103/PhysRevLett.110.246602
doi:10.1103/PhysRevB.95.165415
doi:10.1103/PhysRevB.92.081403
doi:10.1103/PhysRevLett.105.056802
doi:10.1103/PhysRevB.97.075417
doi:10.1103/PhysRevB.75.125408
doi:10.1103/PhysRevB.92.161411
doi:10.1126/science.1254966
doi:10.1103/PhysRevX.7.031043
doi:10.1103/PhysRevLett.117.176602
doi:10.1103/PhysRevLett.96.246802
[^1]: see supplemental material
---
abstract: |
We present the results of an ongoing investigation to provide a detailed view of the processes by which massive stars shape the surrounding interstellar medium (ISM), from pc to kpc scales. In this paper we have focused on studying the environments of Wolf-Rayet (WR) stars in M31 to find evidence for WR wind-ISM interactions, through imaging ionized hydrogen nebulae surrounding these stars.
We have conducted a systematic survey for shells surrounding 48 of the 49 known WR stars in M31. There are 17 WR stars surrounded by single shells, or shell fragments, 7 stars surrounded by concentric limb brightened shells, 20 stars where there is no clear physical association of the star with nearby H$\alpha$ emission, and 4 stars which lack nearby H$\alpha$ emission. For the 17+7 shells above, there are 12 which contain one or two massive stars (including a WR star) and that are $\leq$40 pc in radius. These 12 shells may be classical WR ejecta or wind-blown shells. Further, there may be excess H$\alpha$ point source emission associated with one of the 12 WR stars surrounded by putative ejecta or wind-blown shells. There is also evidence for excess point source emission associated with 11 other WR stars. The excess emission may arise from unresolved circumstellar shells, or within the extended outer envelopes of the stars themselves.
In a few cases we find clear morphological evidence for WR shells interacting with each other. In several H$\alpha$ images we see WR winds disrupting, or punching through, the walls of limb-brightened shells.
author:
- 'M. A. Bransford, D. A. Thilker, R. A. M. Walterbos, and N. L. King'
title: ' **Shells Surrounding Wolf-Rayet stars in M31**'
---
INTRODUCTION
============
Wolf-Rayet (WR) stars are thought to be evolved, massive stars which have lost a significant fraction of their outer envelopes. They are sometimes surrounded by ring nebulae produced via the interaction of the central star with its stellar ejecta or surrounding ambient medium (e.g. Miller & Chu 1993). Studies of the chemistry, morphology and kinematics of Galactic WR ring nebulae have resulted in three main categories (e.g. Chu 1981): (a) radiatively excited H II regions (R-type); (b) stellar ejecta (E-type); and (c) wind-blown bubbles (W-type). A WR star does not play a primary role in the formation or shape of an R-type shell (generally $>$ 30-40 pc in radius), only in its ionization. An E-type shell is produced when the WR stellar wind sweeps up mass lost from the star at an earlier epoch. Ejecta shells tend to be small ($\sim$5-10 pc in radius; Chu 1981; Marston 1997), and are generally identified chemically by high nitrogen-to-oxygen ratios (indicative of CNO processing). The W-type shells ($\sim$10-20 pc in radius) are produced when a strong WR stellar wind sweeps up ambient interstellar medium. Some Galactic WR stars have been observed to exist within shells (Arnal 1992). Eventually WR stars undergo a supernova explosion at the end of their lives. WR stars, therefore, have a substantial impact on their surroundings during the various phases of their evolution.
We present the results of an ongoing study focusing on the interaction of massive stars with the ISM, in nearby galaxies, from single stars to associations (see Thilker, Braun, & Walterbos 1998). Together, these investigations provide a detailed view of the processes by which massive stars shape the surrounding ISM, from pc to kpc scales. The main goal of this paper is to study the environments of WR stars to find evidence for WR wind-ISM interactions in M31, through imaging ionized hydrogen nebulae surrounding these stars.
We have conducted a systematic search for nebulae near WR stars in M31. Our results are based on new and previously published H$\alpha$, H$\alpha$+\[N II\], and \[S II\] imaging data of M31 obtained at Kitt Peak National Observatory. M31 is attractive for this type of study due to its proximity (nebulae are resolvable down to radii of $\sim$7 pc) and the fact that it provides a uniform sample of WR stars at a well known distance (690 kpc). Surveys for WR stars in M31, while not complete, have resulted in a catalog of 48 WR stars (Massey & Johnson 1998). Another WR star has recently been discovered by Galarza, Walterbos, & Braun (1999).
We present H$\alpha$ and B images of the 48 of these 49 WR stars which fell within the area we imaged, and compile a catalog of ionized hydrogen nebular “shells” in their vicinity. Deciding when an H II shell is physically associated with a WR star (i.e. an E- or W-type shell) is not easy, since most WR stars are located close to other massive stars and near classical H II regions. Given the small size of Galactic E-shells we can barely resolve the larger shells of this type, but can easily resolve W- and R-type shells. Although we cannot distinguish W- and R-type shells based solely on imaging data, we attempt to determine which shells may be physically associated with WR stars based on criteria discussed below. Excess point source H$\alpha$ emission associated with WR stars might indicate unresolved E- or W-shells, or emission from a WR stellar wind. The Galactic WNL star HDE 313846 has a broad H$\alpha$ emission line, with an equivalent width of 23 ${\rm \AA}$, which has been interpreted as arising in an extended outer envelope indicating a strong stellar wind (Crowther & Bohannan 1997). For this reason we also list the stars with unresolved H$\alpha$ excess.
The paper is organized as follows. In $\S$2 and 3 we present details of the observations and data reduction. In $\S$4 we describe our morphological classification of H$\alpha$ emission near WR stars, present H$\alpha$ and B images, and provide the physical properties of all detected shells. Further, we attempt to determine which shells are causally associated with a WR star. We discuss the environments of WR stars, and evidence for WR wind-ISM interactions, in $\S$5. We present our summary and conclusions in $\S$6.
THE OBSERVATIONS
================
The data relevant to this study include previously published and new imaging data of M31, both obtained at the KPNO 0.9-m telescope. The earlier data were originally presented by Walterbos & Braun (1992; hereafter WB92). The WB92 data consist of 19 fields, located in the NE half of M31, imaged through 27 ${\rm \AA}$ H$\alpha$ and \[S II\] filters, a 75 ${\rm \AA}$ off-band continuum filter, and broad-band B. Each field measured 6$\farcm$7 $\times$ 6$\farcm$7. Further details may be found in WB92.
The new data, covering 8 fields mostly on the SW side of M31, were obtained during the nights of 4-10 November 1997. The fields were imaged through a pair of 75 ${\rm \AA}$ H$\alpha$+\[N II\] and off-band, red-shifted H$\alpha$ filters having nearly identical bandpass shapes, in addition to a blue (B-band) filter. A 75 ${\rm \AA}$ filter had to be used since narrower filters were not available to cover the radial velocities on the SW side of M31. Three to 7 individual frames were obtained for each H$\alpha$+\[N II\], off-band, and B-band field. Total co-added exposure times for each H$\alpha$+\[N II\] and off-band field ranged from 2400 to 5400 s, and for the B-band fields from 180 to 690 s.
The images were obtained using a TI 2k $\times$ 2k CCD detector. Individual pixels measure 24 $\mu$m, which corresponds to 0$\farcs$68 or 2.3 pc at the distance of M31 (690 kpc). The fields measure 23$\farcm$2 on a side or, ignoring projection effects due to inclination, 4.6 $\times$ 4.6 kpc$^{\rm 2}$. The typical resolution of the WB92 data set was 2$\arcsec$, and for the 1997 data set $\sim$1-2$\arcsec$ (which corresponds to a linear resolution of $\sim$4-7 pc). Future papers will provide the entire 1997 data set and its reduction, although we outline some essential reduction details here.
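These scales follow directly from the adopted distance: at 690 kpc, 1$\arcsec$ subtends $690\,{\rm kpc} \times 4.85 \times 10^{\rm -6}\,{\rm rad} \approx 3.3$ pc, so that 0$\farcs$68 pixels correspond to $\approx$2.3 pc and the 23$\farcm$2 field width to the quoted $\sim$4.6 kpc.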
DATA REDUCTION AND CALIBRATION OF 1997 DATA SET
===============================================
Initial reduction of the individual frames (bias subtraction and flat fielding) was performed within IRAF$\footnote{IRAF (Image Reduction
and Analysis Facility) is provided by the National Optical Astronomy
Observatories (NOAO). NOAO is operated by the Association of
Universities for Research in Astronomy (AURA), Inc. under cooperative
agreement with the National Science Foundation}$. A twilight “master-flat” was constructed by co-adding twilight flats from several different nights, in each filter. The twilight master-flats successfully removed sensitivity variations across the chip. This could be seen from the uniformity of the dome flats after correcting them with the master-flats. Residual, large-scale variations were of order 1$\%$ across the field. Registered H$\alpha$+\[N II\], off-band, and blue images were then combined and cosmic rays were removed, using the IRAF routine IMCOMBINE. The continuum images were scaled to foreground stars in the H$\alpha$+\[N II\]+continuum images, and then subtracted to produce line emission images. The images presented in this paper are each displayed with a logarithmic scaling, to emphasize faint emission and structure within the H$\alpha$ emission.
The line images were flux calibrated using the Oke & Gunn (1983) standard star HD 19445. We corrected for the slight effect of H$\alpha$ absorption in the standard star. The rms noise after calibration and continuum subtraction is $\sim$2 $\times$ 10$^{\rm
-17}$ ergs s$^{\rm -1}$ cm$^{\rm -2}$ pix$^{\rm -1}$. Expressed in emission measure (the integral of the electron density [*squared*]{} along the line of sight) this corresponds to 22 pc cm$^{\rm -6}$, assuming an electron temperature of 10,000 K (note that for ionized gas at 10,000 K, 5.6 $\times$ 10$^{\rm -18}$ erg s$^{\rm -1}$ cm$^{\rm -2}$ arcsec$^{\rm -2}$ = 2.8 pc cm$^{\rm -6}$; e.g. WB92). The accuracy of the calibration was tested by comparing the fluxes of several normal H II regions in overlapping fields between the 1997 data set and WB92. The fluxes in the 1997 data are higher by $\sim$30$\%$. Since the WB92 data did not contain emission from the \[N II\]$\lambda\lambda$6548,6584 lines, and the ratio of \[N II\]/H$\alpha$ intensities was assumed to be 0.3 in WB92, agreement between the data sets is satisfactory.
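The quoted noise level follows from this conversion: 2 $\times$ 10$^{\rm -17}$ ergs s$^{\rm -1}$ cm$^{\rm -2}$ spread over a (0$\farcs$68)$^{\rm 2}$ pixel is $\approx$4.3 $\times$ 10$^{\rm -17}$ ergs s$^{\rm -1}$ cm$^{\rm -2}$ arcsec$^{\rm -2}$, which divided by 5.6 $\times$ 10$^{\rm -18}$ erg s$^{\rm -1}$ cm$^{\rm -2}$ arcsec$^{\rm -2}$ per 2.8 pc cm$^{\rm -6}$ gives $\approx$22 pc cm$^{\rm -6}$.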
In order to easily locate the positions of WR stars within our images, the absolute orientation of the fields was determined using an astrometry routine within IDL. This program was interactively given the (X,Y) coordinates of several HST guide stars within our fields. The known ($\alpha$,$\delta$) coordinates of the guide stars were then used to determine a plate scale solution, giving each frame a positional accuracy of $\sim$1$\arcsec$ relative to the guide star reference frame.
IDENTIFICATION AND PHYSICAL PROPERTIES OF SHELLS
===================================================
We have classified the morphology of H$\alpha$ emission near WR stars into two groups. The first category (Group I shells) consists of shells, or shell fragments, in which the WR star is at or near the center of a single shell (Ia), or of concentric limb-brightened shells (Ib). We consider Group I shells as potentially being causally associated with a WR star. Note that the multiple shells presented here are probably not physically related to the multiple shells reported by Marston (1995), since the latter are generally E-type shells, which are barely resolved in our data. The second category (Group II stars) consists of WR stars where there is no clear causal relation with nearby H$\alpha$ emission (IIa), or where nearby H$\alpha$ emission is lacking (IIb). Group IIa consists of WR stars that are either near amorphous, diffuse H$\alpha$ emission, or on the edge of H II regions, and therefore have no preferred location with respect to an H II shell. There are 17 WR stars surrounded by Group Ia shells, 7 stars surrounded by Group Ib shells, 20 Group IIa stars, and 4 Group IIb stars.
H$\alpha$ fluxes were computed for Group I shells, but not for nebular emission near Group II stars. Polygonal masks (shown in Figure 1, see below) used to compute the fluxes were chosen such that they most likely contain all emission from a given shell. Regions for Group Ia shells are generally circular or elliptical in shape. The multiple polygons for Group Ib shells are concentric in nature. The interior polygons defined for Group Ib shells are used to compute the fluxes of the inner shells, and the outer polygons are used to compute the fluxes of the inner+outer shells. Outer shell fluxes are then obtained as the difference of these two measurements. We defined irregularly shaped regions for shell fragments, such that the boundary traced the H$\alpha$ emission associated with a fragment. Masks containing shell fragments therefore tend to be arc-shaped. Pixels within the regions defined above were summed to produce the total shell+background fluxes. The background was estimated from one or more polygonal regions immediately surrounding an H II shell (not shown), which were generally 2-3 times larger in area than the regions used to compute the shell+background fluxes. The background was then subtracted to produce a total shell H$\alpha$ flux. Uncertainties in the flux were estimated by adding in quadrature the rms background uncertainty, scaled to the number of source pixels, and the root-N photon noise.
The sizes of shells were estimated by measuring several (5-10) radial distances from a WR star to the outer edge of its surrounding shell, and averaging these values. For a given shell, the uncertainties in the measurements varied depending on factors such as the completeness and/or ellipticity of the shell. Radii were rounded to the nearest 5 pc, and have typical uncertainties of $\pm$5pc.
Figure 1a contains H$\alpha$+\[N II\] and blue (B-band) images of Group Ia shells from the 1997 data set. The regions used to compute the shell fluxes are indicated. The B images serve as finder charts and illustrate the spatial distribution of surrounding luminous stars. Figure 1b is the same as Figure 1a, but contains images of Group Ia shells from WB92. The H$\alpha$ images lack a contribution from \[N II\]$\lambda\lambda$6548,6584 emission in this case. Group Ib shells are shown in Figure 1c from the 1997 data set and in Figure 1d from WB92. We note that the Group Ia shell surrounding MS11 appears in Figure 1c, due to its proximity to MS12, which is surrounded by a Group Ib shell. The names of the WR star(s) appearing in the images are written above the blue images. The central WR star name appears first whenever there is more than one WR star per image. Tables 1 & 2 contain a listing of the properties of the shells presented in Figure 1, including the spectral subtype of the central star and its coordinates, radii of the shells, and integrated H$\alpha$ luminosities (uncorrected for possible extinction).
Images of Group IIa stars are shown in Figure 2a, from the 1997 data set, and Figure 2b, from WB92. Due to the proximity of ob33wr3 and ob33wr2, the latter is shown in Figure 1c, although we consider this star to belong to Group IIa. Group IIb stars are shown in Figure 2c, from the 1997 data set.
The H$\alpha$ luminosity of Group I shells, presented in Figure 1, ranges between 1 and 43 $\times$ 10$^{\rm 36}$ ergs s$^{\rm
In Figure 3 we show the distribution of H$\alpha$ luminosities as a function of the size of the nebulae, and plot the same distribution for the nebular structures surrounding WR stars in M33 (data from Drissen, Shara, & Moffat 1991). Further, we indicate the detection limit in our data with regard to shell-like objects. The minimum detectable luminosity for shells of varied size was determined empirically through the use of a simple code for modeling the anticipated appearance of idealized bubbles in a noisy background. Our program initially computes the projected surface brightness distribution of a shell (at very high spatial resolution), then transforms it to an observable form by convolving with an appropriate Gaussian PSF and regridding. Finally, we scale the simulated image to units of emission measure. This result is added to a realistic noise background matching the properties of our observations. Inspection of the shell image with noise added determines whether or not a source of the specified size and luminosity would be detected in our survey. Although rather conservative, this procedure does not reflect non-detection associated with confusion due to neighboring emission line sources. We suspect that the ambiguity related to objects bordering the WR star may often dominate the “error budget” for non-detection of WR shells. For this reason, the detection limit indicated in Figure 3 should be interpreted as a best-case scenario, appropriate when the WR star is relatively isolated, and otherwise only as a lower limit.
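The essential steps of such a simulation can be sketched in a few lines; in the example below the shell radius, thickness, electron density and seeing are illustrative assumptions (only the pixel scale and the background noise level are taken from the values quoted in this paper), so it should not be read as the actual code used to derive the detection limit in Figure 3.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative bubble-detectability sketch (assumed parameters, see text).
pc_per_pix = 2.3           # pixel scale at the distance of M31 [pc]
r_out, r_in = 20.0, 17.0   # outer/inner shell radius [pc] (assumed)
n_e = 1.0                  # electron density in the shell [cm^-3] (assumed)
fwhm_pix = 2.0 / 0.68      # ~2 arcsec seeing expressed in pixels
noise_em = 22.0            # rms background in emission measure [pc cm^-6]

npix = 41
y, x = np.mgrid[:npix, :npix]
b = np.hypot(x - npix // 2, y - npix // 2) * pc_per_pix   # impact parameter [pc]

def chord(r, b):
    """Path length through a sphere of radius r at impact parameter b [pc]."""
    return 2.0 * np.sqrt(np.clip(r**2 - b**2, 0.0, None))

# Projected emission measure of a limb-brightened shell: EM = n_e^2 * path length
em = n_e**2 * (chord(r_out, b) - chord(r_in, b))

# "Observe" it: convolve with a Gaussian PSF and add background noise
em_obs = gaussian_filter(em, sigma=fwhm_pix / 2.355)
em_obs += np.random.normal(0.0, noise_em, em_obs.shape)

peak_snr = em_obs.max() / noise_em   # crude figure of merit for detectability
```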
Figure 3 reveals that there is a gap between the detection limit and the H$\alpha$ emission from Group I shells. Why don’t we detect fainter shells? One possible explanation is that the larger shells are R-type, with many massive stars responsible for their ionization. Typically Galactic WR shells expand at 10-50 km s$^{\rm -1}$ (Marston 1995), therefore a shell would need $\geq$4 $\times$ 10$^{\rm 5}$ years to expand to a radius of 20 pc. If the WR phase lasts 10$^{\rm
4}$-10$^{\rm 5}$ years (Marston 1995) shells larger than 20 pc would no longer contain a WR star. The problem with this explanation is that these timescales hold true only if the shell were to begin expanding from the surface of the star. Shells, however, are likely to form far from the surface of the star. In the intervening space between the star and the radius at which the shell is formed, the stellar wind (or possible ejecta) may expand at a much higher velocity than the observed expansion velocity of the resulting shell. Further, one must consider mass loss from pre-WR phases. The winds of massive stars can operate on timescales of several Myr (the typical lifetime of an O star), affecting the ISM up to radii of tens of parsecs. As for the gap at small shell radii, there may be unresolved circumstellar shells ($\leq$ 5-7 pc) surrounding WR stars in our data (see below). The point source detection limit is log L$_{\rm H\alpha}$ $\sim$ 34.5, well below the scale in Figure 3. We note that the upper limit of possible shell emission surrounding Group IIb stars also falls below this scale. The gap therefore remains somewhat surprising.
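The quoted timescale follows from simple kinematics: even at the upper end of the expansion velocities, $t \ga 20\,{\rm pc}/50\,{\rm km\,s^{-1}} \approx 4 \times 10^{\rm 5}$ yr, well in excess of the adopted WR lifetimes.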
The implied Lyman continuum luminosity of Group I shells ranges from 0.8 to 31.8 $\times$ 10$^{\rm 48}$ photons s$^{\rm -1}$. Esteban et al. (1993) have estimated that Galactic WR stars can supply 4 to 37 $\times$ 10$^{\rm 48}$ photons s$^{\rm -1}$, indicating that all Group I shells can in principle be ionized by a single WR star. However, this is not proof that the only source of ionization is the WR star. Spectral classification of the brightest blue stars within the projected boundaries of the nebulae would be necessary to convincingly resolve this issue.
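These rates are consistent with the standard case B conversion at $T_{e} \approx 10^{4}$ K (assumed here), $Q({\rm H}) \approx 7.3 \times 10^{\rm 11}\,L_{{\rm H}\alpha}$ photons s$^{\rm -1}$ (with $L_{{\rm H}\alpha}$ in ergs s$^{\rm -1}$), which maps the observed range of 1 to 43 $\times$ 10$^{\rm 36}$ ergs s$^{\rm -1}$ onto roughly 0.7 to 31 $\times$ 10$^{\rm 48}$ photons s$^{\rm -1}$.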
We now discuss which Group I shells may be physically associated with a solitary WR star. Our criteria are based on (a) the number of nearby massive stars (other than the WR star), and (b) a comparison of the sizes of the Group I shells with the sizes of WR nebulae in the LMC.
In Figure 4 we plot the sizes of Group I shells versus the number of nearby, potentially ionizing stars (including the WR star in all cases). Stars were selected if they were both (a) within one shell radius (and therefore within the boundary of the shell or shell fragment), and (b) have B - V colors $\leq$ 0 (photometry is taken from Magnier et al. (1992)). Only stars bluer than B0 (B - V = -0.3) can supply enough ionizing photons to be consistent with the H$\alpha$ luminosities of Group I shells. Our choice of B - V $\leq$ 0 was prompted by considering the following two factors. First, there are substantial uncertainties in the photometry from the Magnier catalog. Second, we have made no reddening corrections. Thus the color information on stars in the field is, unfortunately, crude. Note in Figure 4 that Group I shells with radii less than $\sim$40 pc generally contain one or two ionizing stars.
Rosado (1986) reported the radii of shells associated with (a) single WR stars, (b) single OB stars, and (c) multiple OB & WR stars in the LMC. The mean radii are, respectively, 15$\pm$7 pc, 27$\pm$4 pc, and 69$\pm$33 pc (Rosado 1986). Single massive stars in the LMC are surrounded by shells up to $\sim$30 pc in radius, consistent with the radius estimated above for Group I shells.
We suggest, allowing for chance alignment, that those shells containing one or two ionizing stars (including a WR star) and that are $\leq$ 40 pc in radius are possibly of E- or W-type. There are 5 Group Ia shells that satisfy this criterion: ob69-F1, MS8, MS2, ob54wr1, and ob42wr1. All 7 inner Group Ib shells satisfy this criterion, although the outer shells for MS14 and MS4 do not. Therefore, 12 of the 48 WR stars we observe (25$\%$) may be surrounded by “classical” WR E- or W-type shells. The remaining 11 Group Ia shells contain three or more stars, or have radii $>$40 pc, and are probably R-type.
Previous observations have suggested that Galactic E- and W-type shells are generally $<$20 pc in radius (Chu 1981; Marston 1997). Our data (in particular see Figure 4), on the other hand, suggest that shells of this type [*may*]{} be larger in M31, on average, than their Galactic counterparts. However, some of our larger shells (including those classified as R-type) may be (and in some instances probably are) classical, ring-like H II regions or, for the very largest shells, “superbubbles” (e.g. Oey & Massey 1994; Oey 1996), and may not be “classical” WR ring nebulae. Future observations to specifically classify the shells presented in this paper as either E-, W- or R-type would be necessary to verify that E- or W-shells are indeed larger in M31 than in the Galaxy.
WR ring nebulae in the Galaxy and LMC have revealed interesting statistical trends, suggesting WC stars generally are found within nebulae of larger size than WN stars (Miller & Chu 1993; Marston 1997; Dopita et al. 1994), possibly indicating an evolutionary sequence. We compared the size distribution of the shells surrounding WN stars with those surrounding WC stars in M31. In light of the discussion above, we restricted our comparison to shells $\leq$40 pc in radius, and containing one or two massive stars (i.e. those shells thought to be causally related to a single WR star). For this particular subset of shells (3 contain WN stars and 14 contain WC stars) we found that shells containing a WN star were on average smaller than those containing a WC star. If true, this supports the evolutionary sequence, WN $\rightarrow$ WC (Marston 1997). However, given the small number of shells (in particular those containing WN stars), our result is not statistically significant. Further, considering all shells in our sample, there are as many WN stars within small shells (putative E- or W-type) as big shells (putative R-type), as can be seen from Tables 1 & 2. We therefore caution that this result should be viewed as preliminary.
We conclude this section by discussing the excess point source H$\alpha$ emission associated with a subset of WR stars in the 1997 and WB92 data. There are 12 stars with possible excess emission in the H$\alpha$ passband. Tables 1 & 2 list the luminosities and equivalent widths of the excess emission for those WR stars within Group I shells, and Table 3 for those stars in Group IIa. No stars within Group IIb have excess emission. Excess H$\alpha$ emission associated with WR stars might indicate unresolved E- or W-shells, or emission from a WR stellar wind. We note that a small contribution from stellar $\lambda$6562 may also be present in the H$\alpha$ passband, but not enough to explain the equivalent widths of the excess emission. The luminosities of the excess emission range from 3 to 10 $\times$ 10$^{\rm 34}$ ergs s$^{\rm -1}$, and the equivalent widths from 5 to 240 ${\rm \AA}$. By way of comparison, the Galactic stellar ejecta nebula RCW 58 has an H$\alpha$ luminosity of 1 $\times$ 10$^{\rm 35}$ ergs s$^{\rm -1}$ (Drissen et al. 1991), and WNL 313846 has a broad H$\alpha$ emission line, with an equivalent width of 23 ${\rm \AA}$, interpreted as arising in an extended outer envelope and indicative of a strong stellar wind (Crowther & Bohannan 1997). Crowther & Smith (1997) find that four LMC WN9-11 stars show nebular H$\alpha$ emission lines (equivalent widths ranging from 32 to 83 ${\rm \AA}$) superimposed on their stellar spectra, which is circumstellar and not from an underlying H II region.
WR WIND-ISM INTERACTIONS
========================
Figure 1c contains an image of the Group Ia shell surrounding MS11 and the Group Ib shell surrounding MS12. The bright star near the top of the B-band image is a foreground star. MS11 is offset from the center of a large, limb-brightened bubble, and is part of a chain of OB stars. To the south-west of MS11 is a multiple shell surrounding the WR star MS12. MS12 is at the center of a small inner shell, and offset to the west from the center of the outer shell. MS12 is the only nearby ionizing star, a conclusion supported by the flux of ionizing photons implied by the H$\alpha$ luminosity of the surrounding shells.
Interestingly, there is morphological evidence that the shells surrounding these stars may be interacting. Pointing radially away from MS11 is a faint loop of H$\alpha$ emission. Perhaps not coincidentally, the outer shell surrounding MS12 appears to be impinging on the shell surrounding MS11, directly south of the H$\alpha$ loop (or “break-out” feature), causing a noticeable s-shaped distortion in the surface of the larger shell. The H$\alpha$ emission in the limb of the Group I shell is clearly enhanced in the region to the north and south of the H$\alpha$ loop. An effect of the expansion of these shells into one another may have been to weaken the surface of the Group I shell, such that the fast wind from MS11 was able to punch a hole in its wall. Interestingly, there is a gap between the shells. Therefore, if the shells are interacting this would imply they are radiation bounded, and the outer layers of the shells are likely neutral.
In Figure 1d we see, for ob33wr3 and ob33wr2, another interesting case of potentially interacting shells, and the effects of WR winds. A large arc of emission south-west of ob33wr3 is impinging upon a shell structure containing ob33wr2. Interestingly, the Group Ib shell is centered within this arc. As above, the gap between the shell and this arc could indicate the shell is radiation bounded, with neutral material in between. The possibility of a neutral outer layer is strengthened by the evidence for a dust lane, seen in the B image, superimposed on the gap. The probable action of a fast wind arising from ob33wr2 is evident from a small cavity (blown?) in the wall of this shell, south-east of the star. An arc is seen surrounding ob10wr1 (see Figure 1d), perhaps being driven into ob10 by the expansion of this Group Ib shell. Finally, the potential action of a WR wind can be seen for the case of the Group IIa star MS17 (see Figure 2a), where it has apparently disrupted the wall of a diamond-shaped shell in which it is embedded.
Obtaining detailed kinematics (in both the ionized and neutral gas) of these interacting shells, and the gas affected by WR winds, may prove extremely useful in understanding the dynamical evolution of shell structures in the ISM of galaxies. Further, in addition to spectroscopy, narrow-band imaging, in lines such as \[O III\]$\lambda$5007 or \[O I\]$\lambda$6300, may prove useful in detecting shock fronts in a wind-ISM interaction region. Will shells merge to become larger single structures? What role do WR winds play in enlarging pre-existing shell structures? Are the shells presented in this paper the evolutionary precursors of H I shells (or supershells)? What role do WR stars play in producing a frothy medium in galaxies?
SUMMARY AND CONCLUSIONS
=======================
We have presented the results of an ongoing investigation to provide a detailed view of the processes by which massive stars shape the surrounding ISM, from pc to kpc scales. In this paper we have focused on studying the environments of WR stars to find evidence for WR wind-ISM interactions in M31, through imaging ionized hydrogen nebulae surrounding these stars.
We have classified the morphology of the H$\alpha$ emission near 48 WR stars into two groups, Group I and Group II. The first category (Group I shells) consists of shells, or shell fragments, in which the WR star is at or near the center of a single shell (Ia), or concentric limb-brightened shells (Ib). We consider Group I shells as potentially being causally associated with a WR star (i.e. E- or W-type shells). There are 17 WR stars within Group Ia shells, and 7 stars within Group Ib shells. The second category (Group II stars) consists of WR stars where there is no clear causal relation with nearby H$\alpha$ emission (IIa), or a lack of nearby H$\alpha$ emission (IIb). Group IIa consists of WR stars that are either near amorphous, diffuse H$\alpha$ emission, or on the edge of H II regions, and therefore have no preferred location with respect to a shell. There are 20 Group IIa stars, and 4 Group IIb stars. Of the 48 WR stars in our sample, 24 are surrounded by Group I shells (50$\%$) and 24 are Group II stars (50$\%$).
We suspect that Group I shells which contain one or two massive stars, or are $\leq$ 40 pc in radius, may be E- or W-type shells. There are 5 Group Ia shells that satisfy these criteria: ob69-F1, MS8, MS2, ob54wr1, and ob42wr1. All 7 inner Group Ib shells satisfy these criteria, although the outer shells for MS14 and MS4 do not. Therefore, 12 of the 24 WR stars within Group I shells (or 25$\%$ of the sample of 48) appear to be surrounded by resolved E- or W-type shells.
There are 12 WR stars (25$\%$ of the sample) that have excess emission within the H$\alpha$ bandpass (one WR star with excess emission is in common with the 12 WR stars surrounded by putative E- or W-type shells). The excess H$\alpha$ emission may arise from unresolved E- or W-type shells, or in an extended outer envelope indicating a strong stellar wind. Excess unresolved emission occurs for stars within Group I shells, and for Group IIa stars. No stars within Group IIb have excess emission. If we assume that the unresolved excess H$\alpha$ emission arises from an unresolved E- or W-type shell, nearly one-half of our sample may have surrounding E- or W-type shells.
We find morphological evidence that shells surrounding MS11 and MS12, and ob33wr3 and ob33wr2, may be interacting. Interestingly, there are gaps between the shells. Therefore, if these shells are interacting, this implies they are radiation bounded, and the outer layers of the shells are likely neutral. An arc is seen surrounding ob10wr1, perhaps being driven into ob10 by the expansion of this Group Ib shell. Obtaining detailed kinematics (in both the ionized and neutral gas) of these interacting shells, and the gas affected by WR winds, may prove extremely useful in understanding the dynamical evolution of shell structures in the ISM of galaxies.
[**Acknowledgements**]{}
We would like to thank Phil Massey for a useful conversation when visiting NMSU. This research was partially supported by the National Science Foundation through grant AST 96-17014.
Arnal, E. M. 1992, , 254, 305
Chu, Y.-H. 1981, , 249, 195
Crowther, P. A., & Bohannan, B. 1997, , 317, 532
Crowther, P. A., & Smith, L. J. 1997, , 320, 500
Dopita, M. A., Bell, J. F., Chu, Y.-H., & Lozinskaya, T. A. 1994, , 93, 455
Drissen, L., Shara, M. M., & Moffat, A. F. J. 1991, , 101, 1659
Esteban, C., Smith, L. J., Vilchez., J. M., & Clegg, R. E. S. 1993, , 272, 299
Galarza, V., Walterbos, R. A. M., & Braun, R. 1999, , submitted
Magnier, E. A., Lewin W. H. G., van Paradijs J., Hasinger G., Jain A., Pietsch W., Truemper J. 1992, , 96, 379
Marston, A. P. 1997, , 475, 188
Marston, A. P. 1995, , 109, 1839
Massey, P., & Johnson, O. 1998, , 505, 793
Miller, G. J., & Chu, Y.-H. 1993, , 85, 137
Oey, M. S. 1996, , 467, 666

Oey, M. S., & Massey, P. 1994, , 425, 635
Oke J. B., & Gunn J. E. 1983, , 266, 713
Rosado, M. 1986, , 160, 211
Thilker, D., Braun, R., & Walterbos, R. A. M. 1998, , 332, 429
Walterbos, R. A. M., & Braun, R. 1992, , 92, 625 (WB92)
---
abstract: 'The [[*Planck*]{}]{} mission is the most sensitive all-sky CMB experiment currently planned. The High Frequency Instrument ([[*HFI*]{}]{}) will be especially suited to observe clusters of galaxies by their thermal Sunyaev-Zel’dovich (SZ) effect. In order to assess [[*Planck*]{}’s ]{} SZ-capabilities in the presence of spurious signals, a simulation is presented that combines maps of the thermal and kinetic SZ-effects with a realisation of the cosmic microwave background (CMB), in addition to Galactic foregrounds (synchrotron emission, free-free emission, thermal emission from dust, CO-line radiation) as well as the sub-millimetric emission from celestial bodies of our Solar system. Additionally, observational issues such as the finite angular resolution and spatially non-uniform instrumental noise of [[*Planck*]{}’s ]{}sky maps are taken into account, yielding a set of all-sky flux maps, the auto-correlation and cross-correlation properties of which are examined in detail. In the second part of the paper, filtering schemes based on scale-adaptive and matched filtering are extended to spherical data sets, that enable the amplification of the weak SZ-signal in the presence of all contaminations stated above. The theory of scale-adaptive and matched filtering in the framework of spherical maps is developed, the resulting filter kernel shapes are discussed and their functionality is verified.'
author:
- |
B. M. Schäfer$^{1}$[^1], C. Pfrommer$^{1}$, R. M. Hell$^{1}$and M. Bartelmann$^{2}$\
$^1$Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Stra[ß]{}e 1, Postfach 1317, 85741 Garching, Germany\
$^2$Institut für theoretische Astrophysik, Tiergartenstra[ß]{}e 15, 69121 Heidelberg, Germany
bibliography:
- 'bibtex/aamnem.bib'
- 'bibtex/references.bib'
title: |
Detecting Sunyaev-Zel’dovich clusters with [[*Planck*]{}]{}:\
II. Foreground components and optimised filtering schemes
---
\[firstpage\]
galaxies: clusters: general, cosmology: cosmic microwave background, methods: numerical, space vehicles: [ *Planck*]{}
Introduction {#sect_intro}
============
The Sunyaev-Zel’dovich (SZ) effect is the most important extragalactic source of secondary anisotropies in the CMB sky. The thermal SZ-effect is explained by the fact that CMB photons are put in thermal contact with electrons of the hot intra-cluster medium (ICM) by Compton interactions, which cause a transfer of energy from the ICM to the CMB. Because of the smallness of the Thomson cross-section and of the diluteness of the ICM this transfer of thermal energy is small. In the direction of a cluster, low-energetic photons with frequencies below $\nu=217$ GHz are removed from the line-of-sight. At frequencies above $\nu=217$ GHz CMB photons are scattered into the line-of-sight, causing a distinct modulation of the CMB surface brightness as a function of observing frequency, which enables the detection of clusters of galaxies in microwave data.
In contrast, in the kinetic effect it is the peculiar motion of a cluster along the line of sight relative to the CMB frame that induces CMB surface brightness fluctuations. The peculiar motion of the cluster causes the CMB to be anisotropic in the cluster frame. Due to this symmetry breaking of the scattering geometry, photons scattered into the line-of-sight are shifted in frequency, namely to higher frequencies, if the cluster is moving towards the observer.
The [[*Planck*]{}]{}-mission will be especially suited to detect SZ-clusters due to its sensitivity, its spectroscopic capabilities, sky coverage and spatial resolution. It is expected to yield a cluster catalogue containing $\simeq10^4$ entries. Extensive literature exists on the topic, but so far the influence of foregrounds and details of [[*Planck*]{}’s ]{}instrumentation and data acquisition have not been thoroughly addressed. In this work we aim at modelling the astrophysical and instrumental issues connected to the observation of SZ-clusters as exhaustively as possible: A simulation is presented that combines realistic maps of both SZ-effects with a realisation of the CMB, with four different Galactic foreground components (thermal dust, free-free emission, synchrotron emission and emission from rotational transitions of CO molecules), with maps containing the sub-millimetric emission from planets and asteroids of the Solar system and with instrumental noise. [[*Planck*]{}’s ]{}frequency response and beam shapes are modelled conforming to the present knowledge of [[*Planck*]{}’s ]{}receivers and its optical system. In order to extract the SZ-cluster signal, filtering schemes based on matched and scale-adaptive filtering are extended to spherical data sets.
In contrast to the recent work by @2004astro.ph..6190G, our SZ-simulation does not rely on idealised scaling relations and, furthermore, takes account of the clusters’ morphological variety. The Galactic foregrounds are modelled in concordance with WMAP observations [see @2003ApJS..148...97B], which constitutes an improvement over the simplifying assumptions made by Geisb[ü]{}sch et al. In addition, instrumentation issues such as non-isotropic detector noise are properly incorporated into the simulation. The filter scheme employed in the paper by Geisb[ü]{}sch et al. is the harmonic-space maximum entropy method introduced by @2002MNRAS.336...97S, which assumes approximate prior knowledge of the emission components’ power spectra. Its computational demand is much higher than matched and scale-adaptive filtering: In fact, the computations presented in this work can be run on a notebook-class computer.
The paper is structured as follows: After a brief recapitulation of the SZ-effect in Sect. \[sect\_szdef\], the [[*Planck*]{}]{}-satellite and instrumental issues connected to observation of CMB anisotropies are described in Sect. \[sect\_planck\]. The foreground emission components are introduced in Sect. \[sect\_foreground\]. The steps in the simulation of flux maps for the various [[*Planck*]{}]{}-channels are described and their correlation properties are examined in Sect. \[sect\_plancksim\]. The theory of matched and scale-adaptive filtering is extended to spherical data sets and the resulting filter kernel shapes are discussed in detail in Sect. \[sect\_filtering\]. A summary in Sect. \[sect\_summary\] concludes the paper.
Throughout the paper, the cosmological model assumed is the standard cosmology, which has recently been supported by observations of the WMAP satellite [@2003astro.ph..2209S]. Parameter values have been chosen as $\Omega_\mathrm{M} = 0.3$, $\Omega_\Lambda =0.7$, $H_0 = 100\,h\,\mbox{km~}\mbox{s}^{-1}\mbox{ Mpc}^{-1}$ with $h = 0.7$, $\Omega_\mathrm{B} = 0.04$, $n_\mathrm{s} =1$ and $\sigma_8=0.9$.
Sunyaev-Zel’dovich definitions {#sect_szdef}
==============================
The Sunyaev-Zel’dovich effects are the most important extragalactic sources of secondary anisotropies in the CMB. Inverse Compton scattering of CMB photons with electrons of the ionised ICM gives rise to these effects and induces surface brightness fluctuations of the CMB sky, either because of the thermal motion of the ICM electrons (thermal SZ-effect) or because of the bulk motion of the cluster itself relative to the comoving CMB-frame along the line-of-sight (kinetic SZ-effect).
The relative change $\Delta T/T$ in thermodynamic CMB temperature at position $\bmath{\theta}$ as a function of dimensionless frequency $x=h\nu /(k_B T_\mathrm{CMB})$ due to the thermal SZ-effect is given by: $$\begin{aligned}
\frac{\Delta T}{T}(\bmath{\theta}) & = & y(\bmath{\theta})\,\left(x\frac{e^x+1}{e^x-1}-4\right)\mbox{ with }\\
y(\bmath{\theta}) & = & \frac{\sigma_\mathrm{T} k_B}{m_{\mathrm{e}}c^2}\int{\mathrm{d}}l\:n_{\mathrm{e}}(\bmath{\theta},l)T_{\mathrm{e}}(\bmath{\theta},l)\mbox{,}
\label{sz_temp_decr}\end{aligned}$$ where the amplitude $y$ of the thermal SZ-effect is commonly known as the thermal Comptonisation parameter, which is defined as the line-of-sight integral of the temperature weighted thermal electron density. $m_{\mathrm{e}}$, $c$, $k_B$ and $\sigma_\mathrm{T}$ denote electron mass, speed of light, Boltzmann’s constant and the Thomson cross section, respectively. The kinetic SZ-effect arises due to the motion of the cluster parallel to the line of sight relative to the CMB-frame: $$\frac{\Delta T}{T}(\bmath{\theta}) = -w(\bmath{\theta})\mbox{ with }
w(\bmath{\theta}) = \frac{\sigma_\mathrm{T}}{c}\int{\mathrm{d}}l\:n_{\mathrm{e}}(\bmath{\theta},l)\upsilon_r(\bmath{\theta},l)\mbox{.}$$ Here, $\upsilon_r$ is the radial component of the cluster’s velocity. The convention is such that $\upsilon_r<0$, if the cluster is moving towards the observer. In this case, the CMB temperature is increased. In analogy, the quantity $w$ is referred to as the kinetic Comptonisation. The SZ-observables are the line-of-sight Comptonisations integrated over the solid angle subtended by the cluster. The quantities $\mathcal{Y}$ and $\mathcal{W}$ are referred to as the integrated thermal and kinetic Comptonisations, respectively: $$\begin{aligned}
\mathcal{Y} & = & \int{\mathrm{d}}\Omega\: y(\bmath{\theta}) =
d_A^{-2}(z)\cdot\frac{\sigma_\mathrm{T} k_B}{m_e c^2}\:\int{\mathrm{d}}V\:n_e T_e\\
\mathcal{W} & = & \int{\mathrm{d}}\Omega\: w(\bmath{\theta}) =
d_A^{-2}(z)\cdot\frac{\sigma_\mathrm{T}}{c}\:\int{\mathrm{d}}V\:n_e \upsilon_r\end{aligned}$$ Here, $d_A(z)$ denotes the angular diameter distance of a cluster situated at redshift $z$.
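For reference, the following minimal sketch evaluates the relative temperature change produced by the two effects as a function of frequency; the Comptonisation values are arbitrary illustrative numbers, not outputs of the simulation.

```python
# Sketch of the frequency dependence of the thermal and kinetic SZ-effects in
# the non-relativistic limit; y and w are arbitrary illustrative values.
import numpy as np

h = 6.62607e-34     # Planck constant [J s]
k_B = 1.38065e-23   # Boltzmann constant [J/K]
T_CMB = 2.725       # CMB temperature [K]

def delta_T_over_T(nu_Hz, y=1.0e-5, w=1.0e-6):
    """Relative CMB temperature change Delta T / T at frequency nu."""
    x = h * nu_Hz / (k_B * T_CMB)                          # dimensionless frequency
    g_thermal = x * (np.exp(x) + 1.0) / (np.exp(x) - 1.0) - 4.0
    return y * g_thermal - w                               # kinetic term is achromatic

for nu_GHz in (30, 100, 143, 217, 353, 545):
    print(nu_GHz, "GHz:", delta_T_over_T(nu_GHz * 1.0e9))
```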
Submillimetric observations with [[*Planck*]{}]{} {#sect_planck}
=================================================
The [[*Planck*]{}]{}-mission[^2]$^{,}$[^3] will perform a polarisation sensitive survey of the complete microwave sky in nine observing frequencies from the Lagrange point $L_2$ in the Sun-Earth system. It will observe at angular resolutions of up to $5\farcm0$ in the best channels and will achieve micro-Kelvin sensitivity relying on bolometric receivers [high frequency instrument [[*HFI*]{}]{}, described in @lit_hfi] and on high electron mobility transistors (low frequency instrument [[*LFI*]{}]{}). The main characteristics are summarised in Table \[table\_planck\_channel\]. [[*Planck*]{}’s ]{}beam characteristics are given in Sect. \[planck\_beamshape\], and the scanning strategy and the simulation of spatially non-uniform detector noise are outlined in Sect. \[planck\_scannoise\].
| [[*Planck*]{}]{} channel | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| centre frequency $\nu_0$ | 30 GHz | 44 GHz | 70 GHz | 100 GHz | 143 GHz | 217 GHz | 353 GHz | 545 GHz | 857 GHz |
| frequency window $\Delta\nu$ | 3.0 GHz | 4.4 GHz | 7.0 GHz | 16.7 GHz | 23.8 GHz | 36.2 GHz | 58.8 GHz | 90.7 GHz | 142.8 GHz |
| resolution $\Delta\theta$ (FWHM) | $33\farcm4$ | $26\farcm8$ | $13\farcm1$ | $9\farcm2$ | $7\farcm1$ | $5\farcm0$ | $5\farcm0$ | $5\farcm0$ | $5\farcm0$ |
| noise level $\sigma_\mathrm{N}$ | 1.01 mK | 0.49 mK | 0.29 mK | 5.67 mK | 4.89 mK | 6.05 mK | 6.80 mK | 3.08 mK | 4.49 mK |
| thermal SZ-flux ${\langle}S_\mathcal{Y}{\rangle}$ | -12.2 Jy | -24.8 Jy | -53.6 Jy | -82.1 Jy | -88.8 Jy | -0.7 Jy | 146.0 Jy | 76.8 Jy | 5.4 Jy |
| kinetic SZ-flux ${\langle}S_\mathcal{W}{\rangle}$ | 6.2 Jy | 13.1 Jy | 30.6 Jy | 55.0 Jy | 86.9 Jy | 110.0 Jy | 69.1 Jy | 15.0 Jy | 0.5 Jy |
| antenna temperature $\Delta T_\mathcal{Y}$ | -440 nK | -417 nK | -356 nK | -267 nK | -141 nK | -0.5 nK | 38 nK | 8.4 nK | 0.2 nK |
| antenna temperature $\Delta T_\mathcal{W}$ | 226 nK | 220 nK | 204 nK | 179 nK | 138 nK | 76 nK | 18 nK | 1.6 nK | 0.02 nK |
Beam shapes {#planck_beamshape}
-----------
\[sect\_beam\] The beam shapes of [[*Planck*]{}]{} are well described by azimuthally symmetric Gaussians $b(\theta) =
\frac{1}{2\pi\sigma_\theta^2}\exp\left(-\frac{\theta^2}{2\sigma_\theta^2}\right)$ with $\sigma_\theta =
\frac{\Delta\theta}{\sqrt{8\ln(2)}}$. The residuals from the ideal Gaussian shape (ellipticity, higher order distortions, diffraction rings, far-side lobes, pick-up of stray-light) are expected not to exceed the percent level and are neglected for the purpose of this work. Table \[table\_planck\_channel\] gives the angular resolution $\Delta\theta$ in terms of FWHM of each [[*Planck*]{}]{}-channel for reference.
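Since the beams enter the simulation in harmonic space (Sect. \[sim\_beam\]), a Gaussian beam can conveniently be represented by its Legendre window; the sketch below uses the standard approximation $b_\ell=\exp(-\ell(\ell+1)\sigma_\theta^2/2)$, which is not stated explicitly in the text but follows from the Gaussian profile given above.

```python
# Legendre window of an azimuthally symmetric Gaussian beam,
# b_ell = exp(-ell (ell + 1) sigma_theta^2 / 2); this closed form is a standard
# approximation assumed here rather than quoted from the text.
import numpy as np

def gaussian_beam_window(fwhm_arcmin, lmax):
    sigma_theta = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    ell = np.arange(lmax + 1)
    return np.exp(-0.5 * ell * (ell + 1) * sigma_theta**2)

# Example: the 5.0 arcmin channels reach b_ell = 0.5 around ell ~ 1900.
b_ell = gaussian_beam_window(5.0, 3000)
print(int(np.argmin(np.abs(b_ell - 0.5))))
```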
Scanning strategy and noise-equivalent maps {#planck_scannoise}
-------------------------------------------
CMB observations by [[*Planck*]{}]{} will proceed in great circles fixed on the ecliptic poles. A single scan will start at the North ecliptic pole, will follow a meridian to the South ecliptic pole and back to the North ecliptic pole by following the antipodal meridian. Such a scan will last one minute and will be repeated sixty times. After that, the rotation axis will be shifted in a precessional motion by $2\farcm5$ (approximately half a beam diameter) and the scan repeated. In this way, the entire sky is mapped once in 180 days.
Fourier transform of the noise time series of [[*Planck*]{}’s ]{}receivers yields a noise power spectrum $P(f)$ of the shape $$P(f) = \sigma_\mathrm{N}^2\cdot\left[1 + \left(\frac{f}{f_\mathrm{knee}}\right)^{-\alpha}\right]\mbox{,}$$ i.e. the noise consists of two components: a power law component in frequency $f$, described by the spectral index $\alpha$, which assumes values $0\leq\alpha\leq 2$, and a white noise component; the two are smoothly joined at the frequency $f_\mathrm{knee}$.
The $f^{-\alpha}$-part of the noise spectrum originates from zero point drifts of the detector gain on large time scales. This power law component exhibits low-frequency variations that lead to the typical stripe pattern in simulated [[*Planck*]{}]{}-maps due to the scanning strategy. Algorithms for destriping the maps are a current research topic (for example, the [Mirage]{}-algorithm proposed by @2004astro.ph..1505Y, the [MAPCUMBA]{} algorithm and max-likelihood algorithms), but it can be expected that the destriping can be done very efficiently such that the remaining noise largely consists of uncorrelated pixel noise.
In order to incorporate uncorrelated pixel noise into the simulation, a set of maps has been construced, where at each pixel a number from a Gaussian distribution with width $\sigma_\mathrm{N}$ has been drawn. For [[*Planck*]{}’s ]{}[[*HFI*]{}]{}-receivers, the rms-fluctuations $\sigma_\mathrm{N}$ in antenna temperature can be calculated from the noise equivalent power NEP and the sampling frequency $\nu_\mathrm{sampling}=200$ Hz via: $$\sigma_\mathrm{N} = \frac{2~\mathrm{NEP}\sqrt{\nu_\mathrm{sampling}}}{k_B \Delta\nu}
\quad\mbox{({{\em HFI}})}
\label{eqn_hfi_noise}$$
Alternatively, for [[*Planck*]{}’s ]{}[[*LFI*]{}]{}-receivers, the rms-fluctuations $\sigma_\mathrm{N}$ in antenna temperature are given by: $$\sigma_\mathrm{N} = \sqrt{2}\frac{T_\mathrm{noise} + T_\mathrm{CMB}}{\sqrt{\Delta\nu/\nu_\mathrm{sampling}}}
\quad\mbox{({{\em LFI}})}
\label{eqn_lfi_noise}$$ Values for $T_\mathrm{noise}$ and NEP, as well as the formulae themselves, are taken from the [[*Planck*]{}]{} simulation pipeline manual, available via [[*Planck*]{}’s ]{}[LiveLink]{}. The resulting effective noise level for all [[*Planck*]{}]{} channels for a single observation of a pixel is given in Table \[table\_planck\_channel\].
The rms-fluctuations $\sigma_\mathrm{N}$ in antenna temperature have to be scaled down by $\sqrt{n_\mathrm{det}}$ (assuming Poissonian statistics), where $n_\mathrm{det}$ denotes the number of redundant receivers per channel, because these receivers provide independent surveys of the microwave sky.
From simulated scanning paths it is possible to derive an exposure map using the [simmission]{}- and [ multimod]{}-utilities. An example of such an exposure map in the vicinity of the North ecliptic pole is given in Fig. \[figure\_exposure\_map\]. Using the number of observations $n_\mathrm{obs}$ per pixel, it is possible to scale down the noise amplitudes by $\sqrt{n_\mathrm{obs}}$ and to obtain a realistic noise map for each channel. Here, we apply the simplification that all detectors of a given channel are arranged collinearly. In this case, the exposure maps will have sharp transitions from well-observed regions around the ecliptic poles to the region around the ecliptic equator. In real observations these transitions will be smoothed out due to slight displacements of the optical axes with respect to each other, which causes the effective exposure pattern to be a superposition of rotated and distorted single-receiver exposure patterns.
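A minimal sketch of the per-pixel noise recipe is given below; the NEP, noise temperature, detector and hit counts are placeholder values for illustration only, since the actual parameters are taken from the [[*Planck*]{}]{} simulation pipeline manual.

```python
# Per-pixel white-noise level following eqns. (eqn_hfi_noise) and
# (eqn_lfi_noise); NEP, T_noise, detector and hit counts are placeholders.
import numpy as np

k_B = 1.38065e-23       # Boltzmann constant [J/K]
nu_sampling = 200.0     # sampling frequency [Hz]

def sigma_hfi(NEP, delta_nu):
    """Antenna-temperature rms per sample for a bolometric (HFI) channel [K]."""
    return 2.0 * NEP * np.sqrt(nu_sampling) / (k_B * delta_nu)

def sigma_lfi(T_noise, delta_nu, T_CMB=2.725):
    """Antenna-temperature rms per sample for a radiometric (LFI) channel [K]."""
    return np.sqrt(2.0) * (T_noise + T_CMB) / np.sqrt(delta_nu / nu_sampling)

def sigma_pixel(sigma_N, n_det, n_obs):
    """Scale the per-sample rms down by the number of detectors and hits."""
    return sigma_N / np.sqrt(n_det * n_obs)

# Illustrative placeholder parameters:
print(sigma_pixel(sigma_hfi(NEP=1.0e-17, delta_nu=16.7e9), n_det=8, n_obs=1000))
print(sigma_pixel(sigma_lfi(T_noise=10.0, delta_nu=3.0e9), n_det=4, n_obs=1000))
```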
Foreground emission components {#sect_foreground}
==============================
The observation of the CMB and of SZ-clusters is seriously impeded by various Galactic foregrounds and by the thermal emission of celestial bodies of our Solar system. In order to describe these emission components, template maps from microwave surveys are used. @1999NewA....4..443B give a comprehensive review for the foreground components relevant for the [[*Planck*]{}]{} mission. As foreground components we include thermal emission from dust in the Galactic plane (Sect. \[foreground\_dust\]), Galactic synchrotron (Sect. \[foreground\_synchro\]) and free-free emission (Sect. \[foreground\_freefree\]), line emission from rotational transitions of carbon monoxide molecules in giant molecular clouds (Sect. \[foreground\_co\]), sub-millimetric emission from planets (Sect. \[foreground\_planets\]) and from minor bodies of the Solar system (Sect. \[foreground\_asteroids\]). Foreground components omitted at this stage are discussed in Sect. \[foreground\_omitted\].
In this work, no attempt is made at modelling the interactions between various foreground components because of poorly known parameters such as the spatial arrangement along the line-of-sight of the emitting and absorbing components. Exemplarily, the reader is referred to @2003ApJS..146..407F, where the absorption of Galactic free-free emission by dust is discussed.
Galactic dust emission {#foreground_dust}
----------------------
At frequencies above $\sim$ 100 GHz, the thermal emission from dust in the disk of the Milky Way is the most prominent feature in the microwave sky. Considerable effort has been undertaken to model the thermal emission from Galactic dust [@1997AAS...191.8704S; @1998ApJ...500..525S; @1999ApJ...524..867F; @2000ApJ...544...81F]. The thermal dust emission is restricted to low Galactic latitudes and the thin disk is easily discernible.
The input template map (see Fig. \[figure\_dustmap\]) is derived from an observation at a wavelength of $\lambda=100~\umu\mathrm{m}$, i.e. $\nu_0=3~\mathrm{THz}$. Its amplitudes $A_\mathrm{dust}$ are given in MJy/sr, which are extrapolated to the actual frequency channels of [[*Planck*]{}]{} using a two-component model suggested by C. Baccigalupi (personal communication). Despite the fact that the dust is expected to spread over a large range of temperatures, the model reproduces the thermal emission remarkably well. This model yields for the flux $S_\mathrm{dust}(\nu)$:
$$S_\mathrm{dust}(\nu) =
\frac{f_1 q\cdot\left(\frac{\nu}{\nu_0}\right)^{\alpha_1} B(\nu,T_1) +
f_2\cdot\left(\frac{\nu}{\nu_0}\right)^{\alpha_2} B(\nu,T_2)}{f_1 q B(\nu_0,T_1) + f_2 B(\nu_0,T_2)}\cdot
A_\mathrm{dust}\mbox{.}$$
The choice of parameters used is: $f_1 = 0.0363$, $f_2 = 1-f_1$, $\alpha_1=1.67$, $\alpha_2=2.70$, $q=13.0$. The two dust temperatures are $T_1 = 9.4$ K and $T_2 = 16.2$ K. The function $B(\nu,T)$ denotes the Planckian emission-law: $$B(\nu,T) = \frac{2h}{c^2}\cdot\frac{\nu^3}{\exp(h\nu/k_B T)-1}\mbox{.}$$
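The two-component extrapolation can be written compactly as follows; the parameters are those quoted above, and the template amplitude is an arbitrary illustrative value.

```python
# Two-component dust extrapolation with the parameters quoted above; the
# template amplitude A_dust (in MJy/sr) is an arbitrary illustrative value.
import numpy as np

h, k_B, c = 6.62607e-34, 1.38065e-23, 2.99792458e8

def planck_law(nu, T):
    """Planckian emission law B(nu, T)."""
    return 2.0 * h / c**2 * nu**3 / np.expm1(h * nu / (k_B * T))

def dust_flux(nu, A_dust, nu0=3.0e12, f1=0.0363, alpha1=1.67, alpha2=2.70,
              q=13.0, T1=9.4, T2=16.2):
    f2 = 1.0 - f1
    num = (f1 * q * (nu / nu0)**alpha1 * planck_law(nu, T1)
           + f2 * (nu / nu0)**alpha2 * planck_law(nu, T2))
    den = f1 * q * planck_law(nu0, T1) + f2 * planck_law(nu0, T2)
    return num / den * A_dust

for nu_GHz in (143, 217, 353, 545, 857):
    print(nu_GHz, "GHz:", dust_flux(nu_GHz * 1.0e9, A_dust=1.0))
```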
Galactic synchrotron emission {#foreground_synchro}
-----------------------------
Relativistic electrons of the interstellar medium produce synchrotron radiation by spiralling around magnetic field lines, which impedes CMB observations most strongly at frequencies below 100 GHz. The synchrotron emission reaches out to high Galactic latitude and is an important ingredient for modelling foreground emission in microwave observations. An all-sky survey at an observing frequency of 408 MHz has been compiled and adopted for usage with [[*Planck*]{}]{} (see Fig. \[figure\_synchromap\]). The average angular resolution of this survey is $0\fdg85$ (FWHM).
Recent observations with WMAP [@2003ApJS..148...97B] indicate that the spectral slope of the synchrotron emission changes dramatically from $\gamma = -0.75$ at frequencies below 22 GHz to $\gamma = -1.25$ above 22 GHz. Theoretically, this may be explained by a momentum-dependent diffusion coefficient for cosmic ray electrons. In order to take account of this spectral steepening, the amplitudes $A_\mathrm{synchro}$ are multiplied by a prefactor to obtain the synchrotron fluxes at $\nu=22~\mathrm{GHz}$. This value is then extrapolated to [[*Planck*]{}’s ]{}observing frequencies with a spectral index of $\gamma = -1.25$: The amplitudes $A_\mathrm{synchro}$ of the input map are given in units of MJy/sr, and for the flux $S_\mathrm{synchro}(\nu)$ one thus obtains:
\sqrt{\frac{22~\mathrm{GHz}}{408~\mathrm{MHz}}}\cdot A_\mathrm{synchro}\cdot
\left(\frac{\nu}{408~\mathrm{MHz}}\right)^{-1.25}\mbox{.}$$
Here, the fact that the synchrotron spectral index shows significant variations across the Milky Way due to varying magnetic field strength is ignored. Instead, a spatially constant spectral behaviour is assumed.
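A direct transcription of this extrapolation reads as follows; the template amplitude is again an arbitrary illustrative value.

```python
# Synchrotron extrapolation of the 408 MHz template to frequency nu, with the
# 22 GHz spectral break absorbed into the sqrt prefactor as in the formula
# above; A_synchro is an arbitrary illustrative amplitude (MJy/sr).
import numpy as np

def synchrotron_flux(nu, A_synchro, nu_template=408.0e6, nu_break=22.0e9):
    return np.sqrt(nu_break / nu_template) * A_synchro * (nu / nu_template)**(-1.25)

for nu_GHz in (30, 44, 70, 100):
    print(nu_GHz, "GHz:", synchrotron_flux(nu_GHz * 1.0e9, A_synchro=1.0))
```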
Galactic free-free emission {#foreground_freefree}
---------------------------
The Galactic ionised plasma produces free-free emission, which is an important source of contamination in CMB observations, as recently confirmed by @2003ApJS..148...97B in WMAP observations. Aiming at modelling the free-free emission at microwave frequencies, we rely on an $H_\alpha$-template provided by @2003ApJS..146..407F. Modelling of the free-free emission component on the basis of an $H_\alpha$-template is feasible because both emission processes depend on the emission measure $\int n_e^2{\mathrm{d}}l$, where $n_e$ is the number density of electrons. This template is a composite of three $H_\alpha$-surveys and is, because of its high resolution (on average $6\farcm0$ FWHM), particularly well suited for CMB foreground modelling. The morphology of the free-free map is very complex and the emission reaches out to intermediate Galactic latitude.
For relating $H_\alpha$-fluxes $A_{H_\alpha}$ given in units of Rayleighs to the free-free signal’s antenna temperature $T_\mathrm{free-free}$ measured in Kelvin, @1998PASA...15..111V gives the formula: $$\frac{T_\mathrm{free-free}(\umu\mathrm{K})}{A_{H_\alpha}(R)} \simeq
14.0\left(\frac{T_p}{10^4~\mathrm{K}}\right)^{0.317}\cdot 10^{290~\mathrm{K}\cdot T_p^{-1}}\cdot
g_\mathrm{ff}\cdot\left(\frac{\nu}{10\mbox{ GHz}}\right)^{-2}\mbox{.}$$ $T_p$ denotes the plasma temperature and is set to $10^4~\mathrm{K}$ in this work. An approximation for the Gaunt factor $g_\mathrm{ff}$ valid for microwave frequencies in the range $\nu_p\ll\nu\ll k_B T/h$ ($\nu_p$ is the plasma frequency) is given by [@2003ApJS..146..407F]: $$g_\mathrm{ff} = \frac{\sqrt{3}}{\pi}
\left[\ln\left(\frac{(2 k_B T_p)^{3/2}}{\pi e^2 \nu\sqrt{m_e}}\right)-\frac{5}{2}\gamma_E\right]\mbox{,}$$ where $e$ and $m_e$ denote electron charge and mass (in Gaussian units) and $\gamma_E\simeq0.57721$ is the Euler constant. The contribution of fractionally ionised helium to the free-free emissivity as well as the absorption by interstellar dust has been ignored because of its being only a small contribution in the first case and because of poorly known parameters in the latter case. The antenna temperature can be converted to the free-free flux $S_\mathrm{free-free}(\nu)$ by means of: $$S_\mathrm{free-free}(\nu) = 2\frac{\nu^2}{c^2}\cdot k_B T_\mathrm{free-free}(\mathrm{K})\mbox{.}$$
Concerning the free-free emission, there might be the possibility of an additional free-free component uncorrelated with the $H_\alpha$-emission. This hot gas, however, should emit X-ray line radiation, which has not been observed.
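The conversion from $H_\alpha$ intensity to free-free antenna temperature can be sketched as below; the constants entering the Gaunt factor are in Gaussian units as required by the formula, and the $H_\alpha$ amplitude is an arbitrary illustrative value.

```python
# H_alpha-to-free-free conversion following the fitting formula and Gaunt
# factor quoted above; cgs constants are used because the Gaunt factor is
# written in Gaussian units, and A_Halpha is an illustrative amplitude [R].
import numpy as np

gamma_E = 0.57721                       # Euler constant
k_B_cgs = 1.3807e-16                    # Boltzmann constant [erg/K]
e_cgs = 4.8032e-10                      # electron charge [esu]
m_e_cgs = 9.1094e-28                    # electron mass [g]

def gaunt_factor(nu, T_p=1.0e4):
    return (np.sqrt(3.0) / np.pi *
            (np.log((2.0 * k_B_cgs * T_p)**1.5 /
                    (np.pi * e_cgs**2 * nu * np.sqrt(m_e_cgs))) - 2.5 * gamma_E))

def freefree_temperature_muK(A_Halpha, nu, T_p=1.0e4):
    """Free-free antenna temperature [micro-K] per unit H_alpha intensity."""
    return (14.0 * (T_p / 1.0e4)**0.317 * 10.0**(290.0 / T_p) *
            gaunt_factor(nu, T_p) * (nu / 1.0e10)**(-2) * A_Halpha)

for nu_GHz in (30, 70, 143):
    print(nu_GHz, "GHz:", freefree_temperature_muK(1.0, nu_GHz * 1.0e9))
```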
CO-lines from giant molecular clouds {#foreground_co}
------------------------------------
In a spiral galaxy such as the Milky Way, a large fraction of the interstellar medium is composed of molecular hydrogen, which resides in giant molecular clouds (GMCs), objects with masses of $10^4 - 10^6 M_{\sun}$ and sizes of $50 - 200$ pc. Apart from molecular hydrogen, the GMCs contain carbon monoxide (CO) molecules in significant abundance. The rotational transitions of the CO molecule at 115 GHz and higher harmonics thereof constitute a source of contamination for all [[*Planck*]{}]{} [[*HFI*]{}]{}-channels. An extensive search for atomic and molecular transition lines was undertaken by @1994ApJ...434..587B with the [*FIRAS*]{} instrument onboard [*COBE*]{}.
The CO-contamination is modelled by employing a mosaic of CO-surveys assembled by @1996AAS...189.7004D [@2001ApJ...547..792D]. It shows the velocity-integrated intensity of the transition from the first excited state ($J=1$) to the ground state ($J=0$) close to the Galactic plane ($b<5^\circ$), and additionally comprises a few CO clouds at higher Galactic latitude, as well as the Large Magellanic Cloud and the Andromeda galaxy M 31. Due to the composition of the map, the angular resolution is not uniform, but the best resolution of $\simeq 7\farcm5$ is reached for a large area around the Galactic plane.
From this map, it is possible to derive the line intensities of the higher harmonics, assuming thermal equilibrium: The frequency $\nu$ for a transition from a state of rotational quantum number $J$ to a state with quantum number $J+1$ of the CO molecule follows from elementary quantum mechanics: The rotational energy of a molecule with moment of inertia $\theta$ and angular momentum $\bmath{J}$ is $E_\mathrm{rot}=\bmath{J}^2/2\theta=\hbar^2\cdot J(J+1)/2\theta$. In the last step the quantum number $J$ was introduced. For the transition energy between two subsequent rotation levels, one obtains: $$\nu_{J\leftrightarrow J+1} = 2Qc\cdot(J+1) = 115\mbox{ GHz}\cdot(J+1)\mbox{,}$$ where $Q=h/8\pi^2 c\theta$ is a measure of the inverse moment of inertia of the molecule and $c$ denotes the speed of light. Thus, the spectrum consists of equidistant lines. The relative intensities of those lines is given by the ratio of their occupation numbers $\chi_J$: $$\label{eqn_occupation}
\chi_J = (2J+1)\cdot\exp\left(-\frac{Qhc}{k_\mathrm{B} T_\mathrm{CO}} J(J+1)\right)\mbox{,}$$ i.e. the ratio $q_{J\leftrightarrow J+1}$ of the intensities of two consecutive lines is given by: $$q_{J\leftrightarrow J+1} = \frac{\chi_{J+1}}{\chi_J} =
\frac{2J+3}{2J+1}\cdot\exp\left(-\frac{2Qhc}{k_B T_\mathrm{CO}}\cdot(J+1)\right)$$ $\chi_J$ is determined by a statistical weight $(2J+1)$ reflecting the degeneracy of angular momentum and a Boltzmann factor. For the determination of line intensities thermal equilibrium is assumed; common estimates for the temperature inside GMCs are $T_\mathrm{CO} = 10 - 30$ K. For the purpose of this work, we choose $T_\mathrm{CO} = 20$ K. From the brightness temperature $T_A$ one obtains the CO-flux $S_\mathrm{CO-line}(\nu)$ by means of the following equation: $$S_\mathrm{CO-line}(\nu) = 2\frac{\nu^2}{c^2}\cdot k_B T_A(\mathrm{K})\cdot p(\nu-\nu_{J\leftrightarrow J+1})\mbox{,}$$ where the line shape $p(\nu-\nu_{J\leftrightarrow J+1})$ is assumed to be narrow in comparison to [[*Planck*]{}’s ]{}frequency response windows such that its actual shape (for instance, a Voigt-profile) is irrelevant.
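The resulting intensity ratios of consecutive lines for $T_\mathrm{CO}=20$ K can be evaluated directly, using $2Qhc = h\cdot 115~\mathrm{GHz}$ from the transition frequencies above:

```python
# Relative intensities of consecutive CO rotational lines in thermal
# equilibrium at T_CO = 20 K; the Boltzmann exponent uses 2Qhc = h * 115 GHz.
import numpy as np

h, k_B = 6.62607e-34, 1.38065e-23
nu_10 = 115.0e9           # J = 1 -> 0 transition frequency [Hz]

def line_ratio(J, T_CO=20.0):
    """Ratio q_{J<->J+1} of the occupation numbers chi_{J+1} / chi_J."""
    return (2.0 * J + 3.0) / (2.0 * J + 1.0) * np.exp(-h * nu_10 * (J + 1) / (k_B * T_CO))

for J in range(4):
    print("q(%d<->%d) =" % (J, J + 1), round(line_ratio(J), 3))
```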
Planetary submillimetric emission {#foreground_planets}
---------------------------------
Planets produce infra-red and sub-millimetric radiation by absorbing sunlight and by re-emitting this thermal load imposed by the sun. The investigation of the thermal properties of Mars, Jupiter and Saturn has been the target of several space missions [@1997ApJ...488L.161G; @1986Icar...65..244G to name but a few]. For the description of the submillimetric thermal emission properties of planets, an extension to the Wright & Odenwald model [@1976ApJ...210..250W; @1971AJ.....76..719N] was used. The orbital motion of the planets is sufficiently fast such that their movements, including their epicyclic motion relative to the Lagrangian point $L_2$, [[*Planck*]{}’s ]{}observing position, have to be taken into account. All planets are imaged twice at approximately half-year intervals due to [[*Planck*]{}’s ]{}scanning strategy, while showing tiny displacements from the ecliptic plane because of the Lissajous-orbit of [[*Planck*]{}]{} around $L_2$ and their orbital inclinations.
The heat balance equation for a planet or asteroid reads as: $$E + F + W \equiv P_\mathrm{emission} = P_\mathrm{absorption} \equiv I + R\mbox{,}$$ where $E$ denotes the heat loss by thermal emission (i.e. the signal for [[*Planck*]{}]{}), $F$ the heat flux outward from the interior of the planet, $W$ is the heat lost by conduction to the planet’s atmosphere, $I$ is the Solar radiation absorbed and $R$ is the heating of the planet caused by the back-scattering of radiation emanating from the surface of the planet by the atmosphere. The definition of these quantities is given by eqns. (\[eqn\_B\_def\]) through (\[eqn\_R\_def\]):
$$\begin{aligned}
E & = & \epsilon\:\sigma\:T_\mathrm{planet}^4\mbox{,}\label{eqn_B_def}\\
F & = & k\cdot\frac{\upartial T_\mathrm{planet}}{\upartial x}\mbox{,}\label{eqn_F_def}\\
I & = & \frac{(1-A)G}{r^2}\cos(\theta^*)\cos\left(\frac{2\pi t}{\tau}\right)\mbox{,}\label{eqn_W_def}\\
R & = & \gamma\:\frac{(1-A)G}{r^2}\cos(\theta^*)\cos\left(\frac{2\pi t}{\tau}\right) = \gamma\:I_\mathrm{max}\mbox{,
and}\label{eqn_I_def}\\
W & = & \kappa\:F\mbox{.}\label{eqn_R_def}\end{aligned}$$
Here, $\epsilon$ is the surface emissivity of the planet, $\sigma$ is the Stefan-Boltzmann constant, $T_\mathrm{planet}$ is the planet’s temperature, $k$ the coefficient of heat conduction, $A$ the planet’s bolometric albedo, $G$ the Solar constant (i.e. the energy flux density of Solar irradiation at the earth’s mean distance), $r$ the distance of the planet to the sun in astronomical units, $\tau$ the planet’s rotation period and $\theta^*$ the geographical latitude of the radiation absorbing surface element. The temperature distribution in the interior of the planet at radial position $x$ is controlled by the heat conduction equation: $$c\cdot\frac{\upartial T_\mathrm{planet}}{\upartial t} = k\cdot\frac{\upartial^2 T_\mathrm{planet}}{\upartial x^2}\mbox{,}$$ with the specific heat per unit volume $c$.
In our model, the heat loss $W$ of the planet’s surface due to conduction to the planet’s atmosphere is taken to be a constant fraction of the heat flux $F$ outward from the interior of the planet, the constant of proportionality being $\kappa$, for which we assumed $\kappa=0.1$. Similarly, the heat gain by back-scattering radiation by the atmosphere $R$ was assumed to be a constant fraction $\gamma$ of the local noon Solar flux $I_\mathrm{max}$, where $\gamma$ was taken to be $\gamma=0.01$. The system of differential eqns. (\[eqn\_B\_def\]) - (\[eqn\_R\_def\]) dependent on time $t$ and on Solar distance $r$ constitutes a heat conduction problem with periodic excitation (by the planet’s rotation). Thus, the heat balance of the planets is modelled by periodic solutions of the Laplacian heat conduction differential equations. It was solved iteratively by applying Laplace transforms with periodic boundary conditions. The integration over the planet’s surface then yields the radiation flux. In the calculation, we addressed rocky and gaseous planets differently with respect to their thermal properties. Furthermore, the giant gaseous planets are known to have internal sources of heat generation, which has also been taken into account.
The brightest point source in the microwave sky due to the planetary thermal emission is Jupiter, causing an increase in antenna temperature of $T_\mathrm{Jupiter}=93.6$ mK in the $\nu=100$ GHz-channel, followed by Saturn with $T_\mathrm{Saturn}=15.0$ mK. All outer planets apart from Pluto will be visible for [[*Planck*]{}]{}. Estimates show that even the Galilean satellites Ganymede, Callisto, Io and Europa and Saturn’s moon Titan are above the detection threshold of [[*Planck*]{}]{}, but they are outshone by the stray-light from Jupiter and Saturn, respectively, and are for that reason not included in our analysis.
Because the planets are point sources with fast apparent motion and diverse surface temperatures, it is not feasible to produce a template and extrapolate the fluxes to [[*Planck*]{}]{}-frequencies with a common emission law. Instead, flux maps have been produced directly for each of the nine [[*Planck*]{}]{}-channels separately, taking account of the planetary motion, the solution of the heat balance equation laid down above and the finite beam-width. The same holds for asteroids, which are covered in the next section.
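As a rough cross-check of the temperature scale only, the sketch below evaluates a simple instantaneous-equilibrium temperature; it deliberately neglects rotation, internal heat sources and the terms $F$, $W$ and $R$ of the full periodic heat-conduction model described above, and the albedos and distances are rounded illustrative values.

```python
# Crude instantaneous-equilibrium temperature (full heat redistribution),
# neglecting rotation, conduction, atmospheric back-scattering and internal
# heat; albedos and distances are rounded illustrative values.

G_sun = 1361.0         # solar constant at 1 AU [W m^-2]
sigma_SB = 5.670e-8    # Stefan-Boltzmann constant [W m^-2 K^-4]

def equilibrium_temperature(r_AU, albedo, emissivity=1.0):
    return ((1.0 - albedo) * G_sun / (4.0 * emissivity * sigma_SB * r_AU**2))**0.25

for name, r_AU, albedo in (("Mars", 1.52, 0.25),
                           ("Jupiter", 5.20, 0.50),
                           ("Saturn", 9.54, 0.34)):
    print(name, round(equilibrium_temperature(r_AU, albedo), 1), "K")
```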
Submillimetric emission from asteroids {#foreground_asteroids}
--------------------------------------
Asteroids and minor bodies of the Solar system are easily observed by infrared satellites such as ISO and possibly by sub-millimetric observatories. An estimate by @2002NewA....7..483C shows that a large number of asteroids ($\sim 400$) should yield signals detectable by [[*Planck*]{}]{}. The orbital motion of all asteroids is fast enough to cause double detections at different positions in the sky separated by half a year due to [[*Planck*]{}’s ]{}scanning strategy. In contrast to planets, asteroids are not well restricted to the ecliptic plane and appear up to ecliptic latitudes of $\beta{\lower.5ex\hbox{{$\; \buildrel < \over \sim \;$}}}30\degr$.
The thermal emission properties of asteroids are sufficiently well understood that asteroids have been used for calibrating detectors and for determining beam shapes. The thermal model used for describing the submillimetric emission by asteroids is the same extension of the Wright & Odenwald model as for rocky planets. However, an additional feature that had to be incorporated was the beamed emission due to surface roughness. Furthermore, in the system of differential eqns. (\[eqn\_B\_def\]) - (\[eqn\_R\_def\]) the terms $W$ and $R$ were neglected because asteroids lack atmospheres.
Information about the diameter and albedo was derived using the HG-magnitude system for asteroids for which those quantities are unknown; otherwise, literature values were taken [from @2000dba..book.....M and IAU’s [*Minor Planet Centre*]{} [^4]]. Where the rotation period is unknown, an empirical relation expressing it as a function of mass was used. The brightest sources include Ceres ($T_\mathrm{Ceres}=19.7~\umu\mbox{K}$), Pallas ($T_\mathrm{Pallas}=7.2~\umu\mbox{K}$), Vesta ($T_\mathrm{Vesta}=6.7~\umu\mbox{K}$) and Davida ($T_\mathrm{Davida}=2.1~\umu\mbox{K}$). The temperatures stated are antenna temperatures measured in the $\nu=100$ GHz-channel at the brightness maximum.
Our simulation shows that the number of detectable asteroids is overestimated by @2002NewA....7..483C, who did not take the expected observation geometry and detector response into account. Typical surface temperatures of asteroids are of the order of 150 K, and therefore, [[*Planck*]{}]{} is observing their thermal emission in the Rayleigh-Jeans regime. For that reason, the number of detectable asteroids increases with observing frequency. For our sample of $5\cdot10^4$ asteroids of the [*Minor Planet Centre*]{}’s catalogue, we find a couple of asteroids at $\nu=30$ GHz, a few tens of asteroids at $\nu=100$ GHz and up to 100 asteroids in the highest frequency band at $\nu=857$ GHz. Approximately 1200 asteroids will have fluxes above half of [[*Planck*]{}’s ]{}single-band detection limit estimated for ideal observation conditions and thus they constitute an abundant population of point sources that possibly hampers the detection of SZ-clusters.
The prediction of comets is very uncertain for the years 2007 through 2009: many comets have not yet been discovered, non-active comets are, with few exceptions, too faint, and the thermal emission features of the comae of active comets are very complex. For these reasons, comets have been excluded from the analysis.
Future work concerning [[*Planck*]{}’s ]{}foregrounds {#foreground_omitted}
-----------------------------------------------------
Foreground components not considered so far include microwave point sources, such as infra-red galaxies and microwave emitting AGNs. The emission of infra-red galaxies is associated with absorption of star light by dust and re-emission at longer wavelengths. Galaxies with ongoing star formation can have large fractions ($\sim90$%) of their total emission at infra-red wavelengths, compared to about one third in the case of local galaxies. The integrated emission from unresolved infra-red galaxies accounts for the cosmic infra-red background (CIB), the fluctuations of which impede SZ-observations at frequencies above $\nu\simeq100$ GHz [@2004astro.ph..2571A].
Number counts of unresolved infra-red galaxies at [[*Planck*]{}]{} frequencies have been estimated by @2003astro.ph..8464W and others; these counts were used by @2004astro.ph..2571A to estimate the level of fluctuation in the [[*Planck*]{}]{}-beam. In the easiest case, the sources are uncorrelated and the fluctuations obey Poissonian statistics, but the inclusion of correlations is expected to boost the fluctuations by a factor of $\sim1.7$ [@2003ApJ...590..664S]. According to @2004astro.ph..2571A, the resulting fluctuations vary between a few $10^2~\mathrm{Jy}/\mathrm{sr}$ and $10^5~\mathrm{Jy}/\mathrm{sr}$, depending on observing channel. A proper modelling would involve a biasing scheme for populating halos, the knowledge of the star formation history and template spectra in order to determine the K-corrections.
AGNs are another extragalactic source of submillimetric emission. Here, synchrotron emission is the radiation-generating mechanism. The spectra show a variety of functional behaviours, with spectral indices $\alpha$ generally ranging from -1 to -0.5, but sources with inverted spectra $\alpha>0$ are commonplace. This variety makes it difficult to extrapolate fluxes to observing frequencies of CMB experiments. Two studies [@1998MNRAS.297..117T; @2001ApJ...562...88S] have estimated the fluctuations generated by radio emitting AGNs at SZ-frequencies and found them to amount to $10^3-10^4~\mathrm{Jy}/\mathrm{sr}$. However, AGNs are known to reside in high-density environments and the proper modelling would involve a (poorly known) biasing scheme in order to assign AGNs to the dark matter halos. Apart from that, one would have to assume spectral properties from a wide range of spectral indices and AGN activity duty cycles. Therefore, the study of extragalactic sources has been omitted from this analysis.
Yet another source of microwave emission in the Solar system is the zodiacal light. Modelling of this emission component is very difficult due to the Lissajous-orbit of [[*Planck*]{}]{} around the Lagrangian point $L_2$. The disk of interplanetary dust is viewed under varying angles depending on the orbital period and the integration over the spatially non-uniform emission features is very complicated. @2003Icar..164..384R have investigated the thermal emission by interplanetary dust from measurements by ISO and have found dust temperatures of $T_\mathrm{zodiacal}=250-300$ K and fluxes on the level of $\simeq10^3~\mathrm{Jy}/\mathrm{sr}$, i.e. the equilibrium temperature is separated by two orders of magnitude from the CMB temperature, which means that the intensities are suppressed by a factor of $\sim10^4$ due to the Rayleigh-Jeans regime of the zodiacal emission in which [[*Planck*]{}]{} is observing and by a factor of $10^5$ due to [[*Planck*]{}’s ]{}narrow beams. From this it is concluded that the emission from zodiacal light is unlikely to exceed values of a few $\umu$Jy in observations by [[*Planck*]{}]{}, which is comparable to the fluxes generated by faint asteroids. Thus, the zodiacal light constitutes only a weak foreground emission component at submillimetric wavelengths and can safely be neglected.
Simulating SZ-observations by [[*Planck*]{}]{} {#sect_plancksim}
==============================================
The simulation for assessing [[*Planck*]{}’s ]{}SZ-capabilities proceeds in four steps. Firstly, all-sky maps of the thermal and kinetic SZ-effects are prepared; the details of map-construction are given in Sect. \[sim\_szmap\]. Secondly, a realisation of the CMB is prepared for the assumed cosmological model (Sect. \[sim\_cmbmap\]). Thirdly, the amplitudes are co-added with the Galactic and ecliptic foregrounds introduced in the previous section and subsequently degraded in resolution with [[*Planck*]{}’s ]{}beams (Sect. \[sim\_beam\]). Finally, uncorrelated pixel noise as well as the emission maps comprising planets and asteroids are added. In the last section, cross-correlation properties of the various astrophysical and instrumental noise components are discussed (Sect. \[sim\_ccproperties\]).
At this stage it should be emphasised that we work exclusively with spherical harmonics expansion coefficients $a_{\ell m}$ of the flux maps. The expansion of a function $a(\bmath{\theta})$ into spherical harmonics $Y_\ell^m(\bmath{\theta})$ and the corresponding inversion is given by: $$a_{\ell m} = \int{\mathrm{d}}\Omega\: a(\bmath{\theta})\cdot Y_\ell^m(\bmath{\theta})^*\mbox{ and }
a(\bmath{\theta}) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} a_{\ell m}\cdot Y_\ell^m(\bmath{\theta})\mbox{.}
\label{eqn_ylm_decomp}$$ Here, ${\mathrm{d}}\Omega$ denotes the differential solid angle element. For reasons of computational feasibility, we assume isotropic spectral properties of each emission component, i.e. the template map is only providing the amplitude of the respective emission component, but the spectral dependences are assumed to remain the same throughout the sky. While this is an excellent approximation for the CMB and the SZ-effects (in the non-relativistic limit), it is a serious limitation for Galactic foregrounds, where e.g. the synchrotron spectral index or the dust temperatures show significant spatial variations.
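In practice the analysis and synthesis of eqn. (\[eqn\_ylm\_decomp\]) can be carried out with standard HEALPix tools; the sketch below uses the healpy bindings as a stand-in for the utilities of the [[*Planck*]{}]{} simulation package actually employed here.

```python
# Harmonic analysis/synthesis pair of eqn. (eqn_ylm_decomp) using healpy as a
# stand-in for the Planck simulation-package utilities; the input is a toy
# white-noise map, so the round trip is only approximate (finite band limit).
import healpy as hp
import numpy as np

nside, lmax = 256, 512
toy_map = np.random.standard_normal(hp.nside2npix(nside))

alm = hp.map2alm(toy_map, lmax=lmax)               # a_lm = int dOmega a(theta) Y_lm^*
reconstructed = hp.alm2map(alm, nside, lmax=lmax)  # a(theta) = sum_lm a_lm Y_lm
print(np.std(toy_map - reconstructed))
```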
Adopting this approximation, the steps in constructing spherical harmonics expansion coefficients ${\langle}S_{\ell m}{\rangle}_{\nu_0}$ of the flux maps $S(\bmath{\theta},\nu)$ for all [[*Planck*]{}]{} channels consist of deriving the expansion coefficients of the template, converting the template amplitudes to flux units, extrapolating the fluxes with a known or assumed spectral emission law to [[*Planck*]{}’s ]{}observing frequencies, and finally convolving the emission law with [[*Planck*]{}’s ]{}frequency response window to compute the spherical harmonics expansion coefficients of the average measured flux ${\langle}S_{\ell m}{\rangle}_{\nu_0}$ at nominal frequency $\nu_0$ using eqn. (\[eqn\_tlm\_exp\]).
$${\langle}S_{\ell m}{\rangle}_{\nu_0}
= \frac{\int{\mathrm{d}}\nu\: S_{\ell m}(\nu) R_{\nu_0}(\nu)}{\int{\mathrm{d}}\nu\: R_{\nu_0}(\nu)}
= 2\frac{\nu_0^2}{c^2}\cdot k_B T_{\ell m}\mbox{.}
\label{eqn_tlm_exp}$$
Here, $S_{\ell m}(\nu)$ describes the spectral dependence of the emission component considered, and $R_{\nu_0}(\nu)$ the frequency response of [[*Planck*]{}’s ]{}receivers centered on the fiducial frequency $\nu_0$. Assuming spatial homogeneity of the spectral behaviour of each emission component it is possible to decompose $S_{\ell m}(\nu)$ into $S_{\ell m}(\nu) = q(\nu)\cdot
a_{\ell m}$, i.e. a frequency dependent function $q(\nu)$ and the spherical harmonics expansion coefficients $a_{\ell m}$ of the template describing the morphology. This is possible due to the fact that the decomposition eqn. (\[eqn\_ylm\_decomp\]) is linear. Additionally, eqn. (\[eqn\_tlm\_exp\]) gives the conversion from the averaged flux ${\langle}S_{\ell m}{\rangle}_\nu$ in a [[*Planck*]{}]{}-channel to antenna temperature $T_{\ell m}$.
[[*Planck*]{}’s ]{}frequency response function $R_{\nu_0}(\nu)$ is well approximated by a top-hat function: $$R_{\nu_0}(\nu) =
\left\{
\begin{array}{l@{,\:}l}
1 & \nu\in\left[\nu_0-\Delta\nu,\nu_0+\Delta\nu\right] \\
0 & \nu\notin\left[\nu_0-\Delta\nu,\nu_0+\Delta\nu\right]
\end{array}
\right.
\label{eq_freq_resp}$$ The centre frequencies $\nu_0$ and frequency windows $\Delta\nu$ for [[*Planck*]{}’s ]{}receivers are summarised in Table \[table\_planck\_channel\]. In this way it is possible to derive a channel-dependent prefactor relating the flux expansion coefficients ${\langle}S_{\ell m}{\rangle}_{\nu_0}$ to the template expansion coefficients $A_{\ell m}$. The superposition of the various emission components in spherical harmonics and the determination of response-folded fluxes are most conveniently done using the [almmixer]{}-utility of [[*Planck*]{}’s ]{}simulation package.
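A minimal sketch of the band average of eqn. (\[eqn\_tlm\_exp\]) for the top-hat response of eqn. (\[eq\_freq\_resp\]) and of the conversion to antenna temperature is given below; the power-law emission law is purely illustrative.

```python
# Band average of a spectral emission law q(nu) over a top-hat response, and
# conversion of an averaged flux to antenna temperature via
# <S> = 2 nu_0^2 k_B T / c^2; the power law q(nu) is illustrative only.
import numpy as np

k_B, c = 1.38065e-23, 2.99792458e8

def band_average(q, nu0, delta_nu, n_steps=512):
    nu = np.linspace(nu0 - delta_nu, nu0 + delta_nu, n_steps)
    return q(nu).mean()                     # uniform sampling of a top-hat window

def flux_to_antenna_temperature(S, nu0):
    """Invert <S> = 2 nu_0^2 k_B T / c^2 for the antenna temperature T [K]."""
    return S * c**2 / (2.0 * nu0**2 * k_B)

q = lambda nu: (nu / 30.0e9)**(-1.25)       # illustrative power-law emission law
print(band_average(q, 100.0e9, 16.7e9))
print(flux_to_antenna_temperature(1.0e-26, 100.0e9))  # 1 Jy/sr at 100 GHz
```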
SZ-map preparation {#sim_szmap}
------------------
For constructing an all-sky Sunyaev-Zel’dovich map, a hybrid approach has been pursued. Due to the SZ-clusters being detectable out to very large redshifts, due to their clustering properties on very large angular scales, and due to the requirement of reducing cosmic variance when simulating all-sky observations as will be performed by [[*Planck*]{}]{}, there is the need for very large simulation boxes, encompassing redshifts of $z\simeq1$ which corresponds to comoving scales exceeding 2 Gpc. Unfortunately, a simulation incorporating dark matter and gas dynamics that covers cosmological scales of that size down to cluster scales and possibly resolving cluster substructure is beyond computational feasibility. For that reason, two simulations have been combined: The Hubble-volume simulation [@2001MNRAS.321..372J; @2000MNRAS.319..209C], and a smaller scale simulation including (adiabatic) gas physics by @2002ApJ...579...16W performed with [GADGET]{} [@2001NewA....6...79S; @2002MNRAS.333..649S].
All-sky maps of the SZ-sky were constructed by using the light-cone output of the Hubble-volume simulation as a cluster catalogue and template clusters from the small-scale gas-dynamical simulation. In this way, the sky-maps contain all clusters above $5\cdot 10^{13} M_{\sun}/h$ out to redshift $z=1.48$. Published analyses give expected mass and redshift ranges for detectable thermal SZ-clusters, which are covered completely by the all-sky SZ-map presented here. The maps show the correct 2-point halo correlation function, incorporate the evolution of the mass function and the correct distribution of angular sizes.
Furthermore, they exhibit cluster substructure and deviations from the ideal cluster scaling relations induced by the departure from spherical symmetry. The velocities used for computing the kinetic SZ-effect correspond to the ambient density field. The map construction process and the properties of the resulting map are described in detail in @2004_szmap. Visual impressions of the SZ-maps are given by Figs. \[figure\_thszfield\] and \[figure\_kinszfield\].
The fluxes generated by the thermal SZ-effect $S_\mathcal{Y}(x)$ and of the kinetic SZ-effect $S_\mathcal{W}(x)$ are given by eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]), respectively. The dimensionless frequency is defined as $x=h\nu/(k_B
T_\mathrm{CMB})$ and the flux density of the CMB is given by $S_0=(k_B T_\mathrm{CMB})^3\pi^3/c^2/h^2/5400=22.9~\mathrm{Jy}/\mathrm{arcmin}^2$: $$\begin{aligned}
S_\mathcal{Y}(x) & = & S_0\cdot\mathcal{Y}\cdot
\frac{x^4\cdot\exp(x)}{(\exp(x)-1)^2}\cdot\left[x\frac{\exp(x)+1}{\exp(x)-1} - 4\right]\mbox{.}
\label{eq:S_thSZ} \\
S_\mathcal{W}(x) & = & S_0\cdot\mathcal{W}\cdot\frac{x^4\cdot\exp(x)}{(\exp(x)-1)^2}\mbox{.}
\label{eq:S_kinSZ}\end{aligned}$$
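A direct transcription of eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]) into Python reads as follows (an illustrative sketch with standard SI constants, not part of the pipeline):

```python
import numpy as np

S0 = 22.9                                     # Jy/arcmin^2, CMB flux density scale
h, k_B, T_CMB = 6.626e-34, 1.381e-23, 2.725   # SI units

def S_thermal(nu, Y=1.0):
    """Thermal SZ flux, eqn. (eq:S_thSZ), for a Comptonisation Y in arcmin^2."""
    x = h * nu / (k_B * T_CMB)
    ex = np.exp(x)
    return S0 * Y * x**4 * ex / (ex - 1.0)**2 * (x * (ex + 1.0) / (ex - 1.0) - 4.0)

def S_kinetic(nu, W=1.0):
    """Kinetic SZ flux, eqn. (eq:S_kinSZ), for a Comptonisation W in arcmin^2."""
    x = h * nu / (k_B * T_CMB)
    ex = np.exp(x)
    return S0 * W * x**4 * ex / (ex - 1.0)**2

# fluxes at a few channel centre frequencies for Y = W = 1 arcmin^2
for nu in (100e9, 143e9, 217e9, 353e9):
    print(nu / 1e9, S_thermal(nu), S_kinetic(nu))
```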
Table \[table\_planck\_channel\] summarises the fluxes $S_\mathcal{Y}$ and $S_\mathcal{W}$ and the corresponding changes in antenna temperature $T_\mathcal{Y}$ and $T_\mathcal{W}$ for the respective Comptonisation of $\mathcal{Y} = \mathcal{W} = 1~\mathrm{arcmin}^2$ for all [[*Planck*]{}]{}-channels.
Fig. \[figure\_sz\_planck\_window\] shows how the frequency dependence of the SZ-signal is altered by [[*Planck*]{}’s ]{}relatively broad frequency response functions. The relative deviation of the curves in which the frequency window has been taken into account from the unaltered curve amounts to $5\ldots15$%, depending on the observing frequency.
CMB-map generation {#sim_cmbmap}
------------------
The angular power spectrum $C_\ell$ is computed for a flat $\Lambda$CDM cosmology using the [CMBfast]{} code by @1996ApJ...469..437S. In addition to the cosmological parameters already given in Sect. \[sect\_intro\], we use adiabatic initial conditions, set the CMB monopole to $T_\mathrm{CMB}=2.725$ K [@1999ApJ...512..511M] and the primordial He-mass fraction to $X_\mathrm{He} = 0.24$. The reionisation optical depth $\tau$ was set to $\tau=0.17$ and the reionisation redshift was taken to be $z_\mathrm{reion}=20$ [@2003ApJS..148....1B]. The angular power spectrum of the CMB is normalised to COBE data. With the spectrum of $C_\ell$-coefficients, a set of $a_{\ell m}$-coefficients was synthesised by using the [synalm]{} code based on [synfast]{} by @1998elss.confE..47H. The factors for converting the $a_{\ell m}$-coefficients of the CMB map, which gives the thermodynamic temperature, to the corresponding fluxes for each channel were then derived by convolution of the Planckian emission law eqn. (\[eq:S\_planck\_cmb\]), $$S_\mathrm{CMB}(\nu) = S_0\cdot\frac{x^3}{\exp(x)-1}\mbox{,}
\label{eq:S_planck_cmb}$$ with [[*Planck*]{}’s ]{}frequency response function eqns. (\[eqn\_tlm\_exp\]) and (\[eq\_freq\_resp\]). Again, $S_0=22.9~\mathrm{Jy}/\mathrm{arcmin}^2$ is the energy flux density of the CMB.
Preparation of simulation data sets {#sim_beam}
-----------------------------------
The expansion coefficients of the flux maps are multiplied with the respective beam’s $b_{\ell 0}$-coefficients in order to describe the finite angular resolution. After that, expansion coefficients of the pixel noise maps and those of the planetary maps have been added. In total, three atlases consisting of nine flux ${\langle}S_{\ell m}{\rangle}_{\nu_0}$-sets belonging to each of [[*Planck*]{}’s ]{}channels with fiducial frequency $\nu_0$ have been compiled:
- [The reference data set is a combination of the CMB, the SZ-maps and the instrumental noise maps. It should provide the cleanest detection of clusters and the measurement of their properties. Apart from the inevitable instrumental noise, this data set only contains cosmological components. In the remainder of the paper, this data set will be referred to as [COS]{}.]{}
- [The second data set adds Galactic foregrounds to the CMB, the SZ-maps and the instrumental noise map. Here, we try to assess the extent to which Galactic foregrounds impede the SZ-observations. Thus, this data set will be denoted [GAL]{}.]{}
- [In the third data set the emission from bodies inside the Solar system was added to the CMB, the SZ-maps, the Galactic foregrounds and the instrumental noise. Because the planets and asteroids are loosely confined to the ecliptic plane, this data set will be called [ECL]{}.]{}
An example of a synthesised map showing the combined emission of the SZ-clusters and all Galactic and ecliptic components including neither CMB fluctuations nor instrumental noise at a location close to the Galactic plane is given by Fig. \[figure\_sim\_skyview\]. The observing frequency has been chosen to be $\nu = 143$ GHz, correspondingly, the map has been smoothed with a (Gaussian) beam of $\Delta\theta=7\farcm1$ (FWHM).
[[*Planck*]{}]{}-channel correlation properties {#sim_ccproperties}
-----------------------------------------------
In this section the auto- as well as the cross-correlation properties of the various foregrounds in different [[*Planck*]{}]{}-channels are studied. The cross power spectra, defined formally by eqn. (\[eqn\_cross\_power\_def\]), are determined by using: $$C_{\ell,\nu_1\nu_2} = \frac{1}{2\ell+1}\sum_{m=-\ell}^{+\ell}
{\langle}S_{\ell m}{\rangle}_{\nu_1}\cdot {\langle}S_{\ell m}{\rangle}_{\nu_2}^*\mbox{.}
\label{eqn_cross_power}$$ From this definition, the auto-correlation spectra are obtained by setting $\nu_1=\nu_2$, i.e. $C_{\ell,\nu} =
C_{\ell,\nu\nu}$. The band-pass averaged fluxes ${\langle}S_{\ell m}{\rangle}_\nu$ are defined in eqn. (\[eqn\_tlm\_exp\]). In Fig. \[figure\_auto\_correlation\], the power spectra are shown for the $\nu=30$ GHz-, $\nu=143$ GHz-, $\nu=353$ GHz- and the $\nu=857$ GHz-channels. The spectra have been derived including various Galactic and ecliptic noise components in order to study their relative influences. For visualisation purposes, the spectra are smoothed with a moving average filter with a filter window comprising 11 bins.
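If the band-averaged flux maps are available as sets of spherical harmonic coefficients, the cross power spectra of eqn. (\[eqn\_cross\_power\]) and the smoothing used for the plots can be computed along the following lines (a sketch assuming healpy's coefficient ordering):

```python
import numpy as np
import healpy as hp

def cross_spectrum(alm_nu1, alm_nu2):
    """Cross power spectrum C_l,nu1nu2 of two channels, eqn. (eqn_cross_power)."""
    return hp.alm2cl(alm_nu1, alm_nu2)

def smooth_spectrum(cl, width=11):
    """Moving-average smoothing with an 11-bin window, as used for visualisation."""
    return np.convolve(cl, np.ones(width) / width, mode='same')
```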
Distinct acoustic peaks of the CMB are clearly visible in the clean [COS]{} data sets, but are overwhelmed by the Galactic noise components. At small scales, i.e. high multipole order $\ell$, differences between the [GAL]{} and [ECL]{} data sets become apparent, the latter showing a higher amplitude. The (single) acoustic peak measurable in the $\nu=30$ GHz channel is shifted to larger angular scales due to the coarse angular resolution of that particular channel. The $\nu=857$ GHz-curve of the [COS]{} data set behaves like a power law due to the fact that the CMB is observed in the Wien-regime and is consequently strongly suppressed, such that the angular power spectrum is dominated by uncorrelated pixel noise.
Fig. \[figure\_cross\_correlation\] shows a few representative cross power spectra. The cross-correlation spectra derived for the [COS]{} data set nicely show the CMB power spectrum if two neighbouring channels close to the CMB maximum are chosen, but the correlation is lost for two widely separated channels. This is especially the case if one considers the two lowest [*LFI*]{}-channels at angular scales which the receivers are not able to resolve. In this regime the pixel noise is still very small and the cross-correlation spectrum drops to very small values.
In order to illustrate the complexity of the spectral and morphological behaviour of the power spectra, they are given as contour plots depending on both the observing frequency $\nu$ and the multipole order $\ell$. Figs. \[figure\_cos\_correlation\_contour\] and \[figure\_gal\_correlation\_contour\] contrast the auto-correlation properties of the different data sets. The [COS]{} data set, shown in Fig. \[figure\_cos\_correlation\_contour\], which contains nothing but the CMB and instrumental noise apart from the SZ-contribution, clearly shows the acoustic oscillations with the first peak at $\ell\simeq200$ and the consecutive higher harmonics. They are most pronounced in the $\nu=100$ GHz- and $\nu=143$ GHz-channels. At higher multipole moments, the power spectra are dominated by instrumental noise which leads to a rapid (power-law) rise.
Adding Galactic foregrounds yields the spectra depicted in Fig. \[figure\_gal\_correlation\_contour\]. Inclusion of Galactic foregrounds significantly complicates the picture and masks off the primary anisotropies. The spectra are dominated by large-scale emission structures of the Milky Way, most notably the emission from thermal dust that causes the spectra to increase with increasing frequency $\nu$.
Cluster detection by using multi-frequency optimised filtering {#sect_filtering}
==============================================================
One challenge in the analysis of two-dimensional all-sky surveys is the extraction of sources of interest which are superposed on a background of noise of varying morphology and spectral behaviour. In the presence of small-scale noise the conventional method to extract sources is low-pass filtering (e.g. with a Gaussian kernel) while wavelet analysis is most suitably applied if large scale noise fluctuations dominate. These methods, however, fail if the characteristic scale of the background fluctuations is comparable with the scale of the signal structures. Other methods have been proposed in order to separate different components in multifrequency CMB observations: They include Wiener filtering [@1996MNRAS.281.1297T; @1999NewA....4..443B; @1999MNRAS.302..663B], maximum-entropy methods [@1998MNRAS.300....1H; @1999MNRAS.306..232H], Mexican-hat wavelet analysis [@2001MNRAS.326..181V; @2000MNRAS.315..757C], fast independent component analysis [@2002MNRAS.334...53M], matched filter analysis [@1998ApJ...500L..83T], adaptive filtering techniques [@2001ApJ...552..484S; @2002ApJ...580..610H; @2002MNRAS.336.1057H], and non-parametric Bayesian approaches [@2002MNRAS.336.1351D].
However, a comparison between these methods is difficult because all of them assume different priors about the spatial properties and frequency dependence. Using prior knowledge about the frequency dependence and statistical properties of several images at different frequency channels, the maximum-entropy method and Wiener filtering are able to separate the components of interest. In contrast, wavelet analysis is well suited to detecting compact sources. A combination of these different techniques improves the quality of component separation [@2001MNRAS.328....1V]. Although component separation methods which assume prior knowledge about the data are quite powerful, they yield biased or even wrong results in the case of incorrect or idealised assumptions about the data. Any error in the separation of one component propagates to the separation of the other components owing to normalisation constraints. In particular, this is the case for non-centrally symmetric source profiles, oversimplified spectral extrapolations of Galactic emission surveys into other wavebands, variations of the assumed frequency dependence, or non-Gaussian noise properties whose statistics cannot be fully characterised by power spectra. Thus, the application of a specific component separation method is a trade-off between robustness and effectiveness with regard to the particular problem.
Filtering techniques relying on Mexican-hat wavelets and on matched and scale-adaptive filters are single-component separation methods. They all extract either the spatial structure or the frequency properties (within a given functional family) of the component of interest in the presence of other components, which act as background in this context. While Mexican-hat wavelet analysis assumes Gaussian profiles superimposed on large scale variations of the background noise, the matched and scale-adaptive filters generalise to arbitrary source profiles and noise properties which are assumed to be locally homogeneous and isotropic [@2001ApJ...552..484S; @2002MNRAS.336.1057H; @2002ApJ...580..610H].
This section generalises the matched and scale-adaptive filter techniques to global spherical topologies which find application in all-sky surveys such as the case of [[*Planck*]{}’s ]{}microwave/submillimetric survey. In addition, optimised filters for the detection of compact sources in single frequency all-sky observations are derived in the appendix in a more detailed fashion. The proposed method aims at simultaneously localising SZ-clusters and measuring both their amplitudes and angular extent. It can also be applied for localising microwave point sources and estimating their spectral properties.
We choose the spherical filtering approach rather than tiling the sky with a set of two-dimensional flat maps for the following reasons: On the sphere, we do not have to worry about spurious or double detections due to overlaps in the tessellation. Secondly, our approach provides a physical interpretation of our filter shapes in harmonic space even for the smallest multipole moments in contrast to the case of a flat map where the smallest wavenumbers are determined by the map size. Finally, our approach circumvents projection failures of the noise properties such as stretching effects in the case of conformal mapping which would introduce artificial non-Gaussianity in our maps and distort profile shapes close to the map boundaries.
We pursue the concept of the [*multi-frequency approach*]{} rather than the [*combination method*]{} [c.f. @2002MNRAS.336.1057H]. In other words, we filter each channel separately while taking into account the different cross-correlations between the different channels and the frequency dependence of the signal when constructing the optimised filters. This method seems to be superior to the [*combination method*]{}, which tries to find an optimised combination of the different channels with regard to the signal-to-noise ratio of the sources and subsequently applies filters to the combined map.
The concept is introduced and central definitions are laid down in Sect. \[filter\_construct\]. The construction of filter kernels is outlined in Sect. \[filter\_optimal\]. Subsequently, the matched and scale-adaptive filters are derived for expansions of spherical data sets into spherical harmonics in Sect. \[filter\_allsky\_matched\] and Sect. \[filter\_allsky\_scaleadaptive\]. Then, the numbers of merit are defined in Sect. \[filter\_gain\_reliability\]. Caveats in the numerical derivation are listed in Sect. \[filter\_numerics\]. Filter kernel shapes for actual simulation data are discussed in Sect. \[filter\_shape\_discussion\]. The application of the filter kernels to our simulated sky maps and the extraction of the SZ-cluster signal is described in Sect. \[filter\_renormalise\].
Assumptions and definitions {#filter_construct}
---------------------------
When constructing the particular filters, we assume centrally symmetric profiles of the sources to be detected. This approximation is justified for most of the clusters of [[*Planck*]{}’s ]{}sample whose angular extent will be comparable in size to [[*Planck*]{}’s ]{}beams, i.e. the instrumental beam renders them azimuthally symmetric irrespective of their intrinsic shape. Azimuthal symmetry is no general requirement for the filters which can be generalised to detect e.g. elliptic clusters using expansions into vector rather than scalar spherical harmonics.
We furthermore assume the background to be statistically homogeneous and isotropic, i.e.[ ]{}a complete characterisation can be given in terms of the power spectrum. This assumption obviously fails for non-Gaussian emission features of the Galaxy or of the exposure-weighted instrumental noise on large angular scales. However, the spherical harmonics expansion of any expected compact source profile, which we aim to separate, peaks at high values of the multipole moment due to the smallness of the clusters where the non-Gaussian influence is negligible. Thus, we only have to require homogeneity and isotropy of the background on small scales.
In order to construct our filters, we consider a set of all-sky maps of the detected scalar field $s_\nu(\btheta)$ for the different frequency channels $$\label{eq:signal}
s_\nu(\btheta) = f_\nu y_\nu(|\btheta-\btheta_0|) + n_\nu(\btheta),
\quad \nu = 1, \ldots, N,$$ where $\btheta = (\vartheta, \varphi)$ denotes a two-dimensional vector on the sphere, $\btheta_0$ is the source location, and $N$ is the number of frequencies (respectively, the number of maps). The first term on the right-hand side represents the amplitude of the signal caused by the thermal and kinetic SZ-effect, $y(|\btheta-\btheta_0|)$ and $w(|\btheta-\btheta_0|)$, respectively, while the second term corresponds to the generalised noise which is composed of CMB radiation, all Galactic and ecliptic emission components, and additional instrumental noise. The frequency dependence of the SZ-effect is described by $f_\nu$ in terms of average flux, $$\label{eq:fnu}
f_\nu \equiv {\langle}S_\mathcal{Y}{\rangle}_{\nu}\mbox{ and } f_\nu \equiv {\langle}S_\mathcal{W}{\rangle}_{\nu}$$ where ${\langle}S{\rangle}_\nu$ denotes the flux weighted by the frequency response at the fiducial frequency $\nu$ (c.f. eqn. (\[eqn\_tlm\_exp\])) and $S_\mathcal{Y}$ and $S_\mathcal{W}$ denote the SZ-fluxes given by eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]). We expect a multitude of clusters to be present in our all-sky maps. In order to sketch the construction of the optimised filter, we assume an individual cluster situated at the North pole ($\btheta_0=\bld{0}$) with a characteristic angular SZ-signal $y_\nu(\theta = |\btheta|) = A \tau_\nu(\theta)$, where we separate the true amplitude $A$ and the spatial profile normalised to unity, $\tau_\nu(\theta)$. The underlying cluster profile $p(\theta)$ is assumed to follow a generalised King-profile with an exponent $\lambda$ which is a parameter in our analysis. At each observation frequency this profile is convolved with the (Gaussian) beam of the respective [[*Planck*]{}]{}-channel (c.f. Sect. \[sect\_beam\]) yielding: $$\begin{aligned}
\label{eq:profile}
\tau_\nu^{}(\theta) &=&
\int {\mathrm{d}}\Omega' p(\theta') b_\nu^{}(|\btheta-\btheta'|)
=\sum_{\ell=0}^{\infty}\tau_{\ell 0,\, \nu}^{} Y_\ell^0(\cos \theta),\\
p(\theta) &=&
\left[1+\left(\frac{\theta}{\theta_\rmn{c}}\right)^2\right]^{-\lambda}\!\!,
\quad\mbox{and}\quad \tau_{\ell 0,\, \nu}
= \sqrt{\frac{4 \pi}{2 \ell+1}} b_{\ell 0,\, \nu} p_{\ell 0}.
\label{eqn_profile_beam_source}\end{aligned}$$ For the second step in eqn. (\[eqn\_profile\_beam\_source\]) we used the convolution theorem on the sphere to be derived in Appendix \[appendix\_sphsingle\]. The background $n_{\nu}(\btheta)$ is assumed to be a compensated homogeneous and isotropic random field with a cross power spectrum $C_{\ell, \nu_1 \nu_2}$ defined by $$\label{eq:PSnoise}
\left{\langle}n_{\ell m, \nu_1}^{} n^*_{\ell' m', \nu_2} \right{\rangle}=
C_{\ell, \nu_1 \nu_2}^{} \delta_{\ell \ell'}^{} \delta_{m,m'}^{},
\quad\mbox{where}\quad {\langle}n_{\nu}(\btheta) {\rangle}= 0,
\label{eqn_cross_power_def}$$ $n_{\ell m, \nu}$ denotes the spherical harmonics expansion coefficient of $n_{\nu}(\btheta)$, $\delta_{\ell \ell'}$ denotes the Kronecker symbol, and ${\langle}\cdot{\rangle}$ corresponds to an ensemble average. Assuming ergodicity of the field under consideration allows taking spatial averages over sufficiently large areas $\Omega = \mathcal{O}(4\pi) $ instead of performing the ensemble average.
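The beam-convolved profile coefficients $\tau_{\ell 0,\nu}$ of eqn. (\[eqn\_profile\_beam\_source\]) can be evaluated numerically as sketched below; the Gaussian beam is taken from healpy, whose beam window $b_\ell$ corresponds to $\sqrt{4\pi/(2\ell+1)}\,b_{\ell 0}$ for a unit-normalised beam, so that the prefactor of eqn. (\[eqn\_profile\_beam\_source\]) is already absorbed:

```python
import numpy as np
import healpy as hp
from scipy.special import eval_legendre

def king_profile_tau(theta_c, lam, fwhm, lmax, ntheta=2048):
    """tau_l0 of a generalised King profile (core radius theta_c, slope lam)
    convolved with a Gaussian beam of the given FWHM; all angles in radians."""
    theta = np.linspace(0.0, np.pi, ntheta)
    p = (1.0 + (theta / theta_c)**2)**(-lam)
    ell = np.arange(lmax + 1)
    Pl = eval_legendre(ell[:, None], np.cos(theta)[None, :])
    # p_l0 = 2 pi sqrt((2l+1)/4pi) \int p(theta) P_l(cos theta) sin(theta) dtheta
    p_l0 = 2.0 * np.pi * np.sqrt((2 * ell + 1) / (4 * np.pi)) * \
           np.trapz(p[None, :] * Pl * np.sin(theta)[None, :], theta, axis=1)
    b_l = hp.gauss_beam(fwhm, lmax=lmax)   # equals sqrt(4pi/(2l+1)) b_l0
    return b_l * p_l0                      # tau_l0 = sqrt(4pi/(2l+1)) b_l0 p_l0
```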
Concepts in filter construction {#filter_optimal}
-------------------------------
The idea of an optimised matched filter for multifrequency observations was recently proposed by @2002MNRAS.336.1057H for the case of a flat geometry. For each observing frequency, we aim at constructing a centrally symmetric optimised filter function $\psi_\nu(\theta)$ operating on a sphere. Its functional behaviour induces a family of filters $\psi_\nu(\theta,R_\nu)$ which differ only by a scaling parameter $R_\nu$. For a particular choice of this parameter, we define the filtered field $u_\nu(R_\nu,\bbeta)$ to be the convolution of the filter function with the observed all-sky map at frequency $\nu$, $$\begin{aligned}
\label{eq:udef}
u_\nu^{}(R_\nu,\bbeta) & = & \int {\mathrm{d}}\Omega\, s_\nu^{}(\btheta)\,\psi_\nu^{}(|\btheta-\bbeta|,R_\nu) \\
& = & \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} u_{\ell m,\, \nu}^{} Y_\ell^m(\bbeta)\mbox{ with}\\
u_{\ell m,\, \nu} & = &
\sqrt{\frac{4 \pi}{2 \ell+1}} s_{\ell m,\, \nu}\, \psi_{\ell 0,\, \nu}(R_\nu) \,\label{eqn_k_convolve}.\end{aligned}$$
For the second step, the convolution theorem to be derived in Appendix \[appendix\_sphsingle\] was used. The combined filtered field is defined by $$\label{eq:utotal}
u(R_1,\ldots,R_N;\bbeta)=\sum_{\nu} u_\nu(R_\nu,\bbeta).
\label{eqn_k_add}$$ Taking into account the vanishing expectation value of the noise ${\langle}n_{\nu}(\btheta) {\rangle}= 0$, the expectation value of the filtered field at the North pole $\bbeta=\bld{0}$ is given by $$\label{eq:umean}
{\langle}u_\nu(R_\nu,\bld{0}){\rangle}=
A f_\nu \sum_{\ell=0}^{\infty} \tau_{\ell 0,\, \nu}\,
\psi_{\ell 0,\, \nu}(R_\nu).
\label{eqn_def_ccspec}$$ The assumption that the cross power spectrum of the signal is negligible compared to the noise power spectrum is justified because the thermal and kinetic amplitudes are small compared to unity, $A_{y,w} \ll 1$. Thus, the variance of the combined filtered field (\[eq:utotal\]) is determined by $$\begin{aligned}
\label{eq:uvariance}
\sigma_u^2(R_1,\ldots,R_N) &=&
\left{\langle}\left[u(R_1,\ldots,R_N;\bbeta) -
\left{\langle}u(R_1,\ldots,R_N;\bbeta)\right{\rangle}\right]^2\right{\rangle}\nonumber\\
&=&\sum_{\nu_1,\nu_2}\sum_{\ell=0}^{\infty} C_{\ell,\,\nu_1 \nu_2}
\psi_{\ell 0,\, \nu_1}(R_{\nu_1})\,\psi_{\ell 0,\, \nu_2}(R_{\nu_2}).\end{aligned}$$
The optimised filter functions $\psi_\nu(\theta)$ are chosen to detect the clusters at the North pole of the sphere (to which they have been translated). They are described by a singly peaked profile which is characterised by the scale $R^{(0)}_{\nu}$ as given by eqn. (\[eq:profile\]). While the optimised [*matched filter*]{} is defined to obey the first two of the following conditions, the optimised [*scale-adaptive filter*]{} is required to obey all three conditions:
1. [The combined filtered field $u(R^{(0)}_1,\ldots,R^{(0)}_N;\bld{0})$ is an unbiased estimator of the source amplitude $A$, i.e.[ ]{}${\langle}u(R^{(0)}_{1},\ldots,R^{(0)}_{N};\bld{0}){\rangle}= A$.]{}
2. [The variance of $u(R_1,\ldots,R_N;\bbeta)$ has a minimum at the scales $R^{(0)}_1,\ldots,R^{(0)}_N$ ensuring that the combined filtered field is an efficient estimator.]{}
3. [The expectation value of the filtered field at the source position has an extremum with respect to the scale $R^{(0)}_{\nu}$, implying $$\label{eq:3cond}
\frac{\upartial}{\upartial R^{(0)}_{\nu}}{\langle}u_\nu(R_\nu,\bld{0}){\rangle}= 0.$$]{}
Matched filter {#filter_allsky_matched}
--------------
For convenience, we introduce the column vectors $\bpsi_{\ell} \equiv [\psi_{\ell 0,\,\nu}]$, $\bld{F}_{\ell} \equiv [f_\nu
\tau_{\ell 0, \nu}]$, and the inverse $\hat{\bld{C}}_{\ell}^{-1}$ of the matrix $\hat{\bld{C}}_{\ell} \equiv [C_{\ell,\,\nu_1
\nu_2}]$. In terms of spherical harmonic expansion coefficients, constraint (i) reads $$\label{eq:constraint1}
\sum_\nu \sum_{\ell=0}^\infty
f_\nu \tau_{\ell 0,\, \nu} \psi_{\ell 0,\, \nu} =
\sum_{\ell=0}^\infty \bld{F}_{\ell}\bpsi_{\ell} = 1.$$ Performing functional variation (with respect to the filter function $\bpsi_{\ell}$) of $\sigma_u^2(R_1,\ldots,R_N)$ while incorporating the (isoperimetric) boundary condition (\[eq:constraint1\]) through a Lagrangian multiplier yields the spherical matched filter $\bpsi_{\ell}$ $$\label{eq:matched filter}
\bpsi_{\ell}^{} = \alpha\, \hat{\bld{C}}_{\ell}^{-1} \bld{F}_{\ell}^{},
\quad\mbox{where}\quad \alpha^{-1} = \sum_{\ell=0}^\infty
\bld{F}_{\ell}^T \hat{\bld{C}}_{\ell}^{-1} \bld{F}_{\ell}^{}.$$ In any realistic application, the cross power spectrum $C_{\ell, \nu_1 \nu_2}$ can be computed from observed data provided the cross power spectrum of the signal is negligible. The quantities $\alpha$, $\bld{F}_{\ell}$, and thus $\bpsi_{\ell}$ can be computed in a straightforward manner for a specific frequency dependence $f_\nu$ and for a model source profile $\tau_\nu(\theta)$.
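In matrix form, eqn. (\[eq:matched filter\]) amounts to a stack of small linear systems, one per multipole. A minimal sketch (the array layout and the removal of the lowest multipoles, c.f. Sect. \[filter\_numerics\], are choices of this illustration):

```python
import numpy as np

def matched_filter(C, F):
    """Spherical matched filter, eqn. (eq:matched filter).
    C : (lmax+1, N, N) cross power spectra C_l,nu1nu2
    F : (lmax+1, N)    F_l = f_nu tau_l0,nu
    Returns psi_l0,nu as an array of shape (lmax+1, N)."""
    C, F = C.copy(), F.copy()
    C[:2] = np.eye(C.shape[1])                        # regularise the excluded l <= 1
    F[:2] = 0.0                                       # filters vanish at l <= 1
    CinvF = np.linalg.solve(C, F[..., None])[..., 0]  # C_l^-1 F_l for every l
    alpha = 1.0 / np.einsum('ln,ln->', F, CinvF)      # alpha^-1 = sum_l F^T C^-1 F
    return alpha * CinvF
```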
Scale-adaptive filter on the sphere {#filter_allsky_scaleadaptive}
-----------------------------------
The scale-adaptive filter $\bpsi_{\ell}$ satisfying all three conditions is given by $$\label{eq:SAF}
\bpsi_{\ell}^{} = \hat{\bld{C}}_{\ell}^{-1} (\alpha\, \bld{F}_{\ell}^{}+\bld{G}_{\ell}^{})
\mbox{, with } \bld{G}_{\ell}^{} \equiv [\mu_{\ell,\nu}^{}\, \beta_\nu^{}]\mbox{, and}$$ $$\label{eq:mu}
\mu_{\ell ,\nu}^{} \equiv f_\nu \tau_{\ell 0, \nu}
\left(2 + \frac{{\mathrm{d}}\ln \tau_{\ell 0, \nu}}{{\mathrm{d}}\ln \ell}\right) =
f_\nu \left[2 \tau_{\ell 0, \nu}
+ \ell\left(\tau_{\ell 0, \nu} - \tau_{\ell-1\, 0, \nu}\right) \right].$$ As motivated in Appendix \[sec:app:SAF\], the logarithmic derivative of $\tau_{\ell 0}$ with respect to the multipole order $\ell$ is a shorthand notation of the differential quotient which is only valid for $\ell\gg 1$. The quantities $\alpha$ and $\beta_\nu$ are given by the components $$\label{eq:components}
\alpha = (\hat{\bld{A}}^{-1})_{00}, \qquad
\beta_\nu = (\hat{\bld{A}}^{-1})_{\nu0},$$ where $\hat{\bld{A}}$ is the $(1+N) \times (1+N)$ matrix with elements $$\begin{aligned}
\label{eq:Amatrix}
\lefteqn{
A_{00}^{}\equiv \sum_{\ell=0}^\infty \bld{F}_{\ell}^\rmn{T}
\hat{\bld{C}}_{\ell}^{-1} \bld{F}_{\ell}^{}, \qquad
A_{0\nu}^{}\equiv \sum_{\ell=0}^\infty \mu_{\ell ,\nu}^{}
\left(\bld{F}_{\ell }^\rmn{T}\hat{\bld{C}}_{\ell}^{-1}\right)_\nu}\\
\lefteqn{
A_{\nu 0}^{}\equiv \sum_{\ell=0}^\infty \mu_{\ell,\nu}^{}
\left(\hat{\bld{C}}_{\ell}^{-1}\bld{F}_{\ell }^{}\right)_\nu, \qquad
A_{\nu \nu'}^{}\equiv \sum_{\ell=0}^\infty
\mu_{\ell,\nu}^{}\,\mu_{\ell,\nu'}^{}
\left(\hat{\bld{C}}_{\ell}^{-1}\right)_{\nu\nu'}^{}}.\end{aligned}$$ In these equations, no summation over the indices is implied.
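The bookkeeping of eqns. (\[eq:SAF\])–(\[eq:Amatrix\]) can be made explicit by the following sketch, which uses the same array conventions as the matched-filter example above and replaces the logarithmic derivative by the finite difference of eqn. (\[eq:mu\]):

```python
import numpy as np

def scale_adaptive_filter(C, F, tau, f):
    """Spherical scale-adaptive filter, eqns. (eq:SAF)-(eq:Amatrix).
    C: (lmax+1, N, N) cross spectra, F: (lmax+1, N) = f_nu tau_l0,nu,
    tau: (lmax+1, N) profile coefficients, f: (N,) frequency dependence."""
    L, N = F.shape
    ell = np.arange(L)[:, None]
    dtau = np.diff(tau, axis=0, prepend=tau[:1])     # tau_l0 - tau_{l-1,0}
    mu = f[None, :] * (2.0 * tau + ell * dtau)       # eqn. (eq:mu)
    Cinv = np.linalg.inv(C)                          # per-l inverse of C_l
    CinvF = np.einsum('lnm,lm->ln', Cinv, F)
    A = np.zeros((N + 1, N + 1))                     # eqn. (eq:Amatrix)
    A[0, 0] = np.einsum('ln,ln->', F, CinvF)
    A[0, 1:] = np.einsum('ln,ln->n', mu, CinvF)
    A[1:, 0] = A[0, 1:]
    A[1:, 1:] = np.einsum('ln,lm,lnm->nm', mu, mu, Cinv)
    Ainv = np.linalg.inv(A)
    alpha, beta = Ainv[0, 0], Ainv[1:, 0]            # eqn. (eq:components)
    G = mu * beta[None, :]                           # G_l = [mu_{l,nu} beta_nu]
    return np.einsum('lnm,lm->ln', Cinv, alpha * F + G)
```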
Detection level and gain {#filter_gain_reliability}
------------------------
As described by @2001ApJ...552..484S, the concept of constructing an optimised filter function for source detection aims at maximising the signal-to-noise ratio $D_u$, $$\label{eq:detlevel}
D_u\equiv\frac{\left{\langle}u(R_1,\ldots,R_N;\bld{0})\right{\rangle}}{\sigma_u(R_1,\ldots,R_N)}
= A\cdot\frac{\sum_{\ell=0}^\infty \bld{F}_{\ell }\bpsi_{\ell }}
{\sqrt{\sum_{\ell=0}^\infty \bpsi_{\ell }^T \hat{\bld{C}}_{\ell}^{} \bpsi_{\ell }^{}}}.$$
Computing the dispersion of the unfiltered field on the sphere yields the signal-to-noise ratio $D_s$ of a signal on the fluctuating background: $$\label{eq:disp}
\sigma_s^2 = \sum_{\nu_1,\nu_2}\sum_{\ell=0}^\infty C_{\ell,\,\nu_1\nu_2}
\quad \Rightarrow \quad D_s = \frac{A}{\sigma_s}.$$ These considerations allow introducing the [*gain*]{} for comparing the signal-to-noise ratios of a peak before and after convolution with a filter function: $$\label{eq:gain}
g \equiv \frac{D_u}{D_s} = \frac{\sigma_s}{\sigma_u(R_1,\ldots,R_N)}.$$
If the noise suppression is successful, the gain $g$ will assume values larger than one. If the filters are constructed efficiently, they are able to reduce the dispersion ($\sigma_u(R_1,\ldots,R_N)<\sigma_s$) while simultaneously retaining the expectation value of the field (\[eq:umean\]). Due to the additional third constraint, the scale-adaptive filter is expected to achieve smaller gains compared to the matched filter.
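For a given set of filter kernels and cross spectra, the detection level and the gain follow directly; a short sketch:

```python
import numpy as np

def detection_gain(C, F, psi, A=1.0):
    """Detection level D_u and gain g, eqns. (eq:detlevel)-(eq:gain)."""
    sigma_u = np.sqrt(np.einsum('ln,lnm,lm->', psi, C, psi))
    sigma_s = np.sqrt(C.sum())                 # sum over all l, nu1, nu2
    D_u = A * np.einsum('ln,ln->', F, psi) / sigma_u
    return D_u, sigma_s / sigma_u
```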
Numerical derivation of filter kernels {#filter_numerics}
--------------------------------------
For the derivation of suitable filter kernels the source profiles are assumed to be generalised King-profiles as described by eqn. (\[eqn\_profile\_beam\_source\]) convolved with the respective [[*Planck*]{}]{}-beam, superimposed on a fluctuating background given by the template ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients. The inversion of the matrix $\hat{\bld{C}}_{\ell}$ (c.f. eqns. (\[eqn\_cross\_power\]) and (\[eqn\_cross\_power\_def\])) can be performed using either Gauss-Jordan elimination or LU decomposition, both of which were found to yield reliable results. In the derivation of the scale-adaptive filters, however, it is numerically advantageous to artificially exclude the lower multipoles $\ell\leq1$ from the calculation. Due to the sub-millimetric emission of the Milky Way, the lowest multipoles have very large amplitudes. Consequently, the corresponding $\psi_{\ell m}$-coefficients, $\ell\leq1$, have been set to zero, which is not a serious intervention since the filters are designed to amplify structures at angular scales well below a degree. For consistency, the multipoles below the quadrupole have been artificially removed in the derivation of the matched filters as well.
In contrast to the [[*Planck*]{}]{}-simulation pipeline all numerical calculations presented here are carried out in terms of fluxes measured in Jy and not in antenna temperatures for the following reason: Cross-power spectra $C_{\ell, \nu_1\nu_2}$ given in terms of antenna temperatures are proportional to $(\nu_1\cdot\nu_2)^{-2}$ which results in a suppression of the highest frequency channels by a factor of almost $10^5$ compared to the lowest frequency channels.
Furthermore, by working with fluxes instead of antenna temperatures, the filters for extracting the SZ-signal show frequency dependences which can be understood intuitively. The frequency dependence is described by eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]). The normalisation $\mathcal{Y}$ has been chosen to be $1~\mathrm{arcmin}^2$, which corresponds to the weakest signals [[*Planck*]{}]{} will be able to detect. Because of the smallness of the source profiles to be detected, the calculations were carried out to multipole orders of $\ell_\mathrm{max}=4096$, which ensures that the beams as well as the source profiles are well described. In the plots in Sect. \[filter\_shape\_discussion\], the filters depicted are smoothed with a moving average window comprising eleven bins for better visualisation.
Discussion of filter kernels {#filter_shape_discussion}
----------------------------
### Matched filter {#matched-filter}
The spherical harmonics expansion coefficients $\psi_{\ell 0,\nu}$ following from the matched filter algorithm are depicted in Fig. \[figure\_filter\_kernel\_matched\] for four frequencies most relevant to SZ-observation, namely for $\nu=100$ GHz, $\nu=143$ GHz, $\nu=217$ GHz and $\nu=353$ GHz. As background noise components the clean [COS]{} data set (left column) and the exhaustive [GAL]{} data set (right column) are contrasted. The filter kernels have been derived for optimised detection of sources described by a generalised King-profile with angular core radii $\theta_c=3\farcm0$ and $\theta_c=5\farcm0$ and asymptotic slope $\lambda=1.0$.
The principle by which the matched filter extracts the SZ-signal from the maps is illustrated by Fig. \[figure\_filter\_kernel\_matched\]: The SZ-profiles for which the filter has been optimised are small structures at angular scales corresponding to multipole moments of $\ell\simeq 10^3$. In channels below $\nu=217$ GHz, the clusters are observed in absorption and the fluxes are decreased. For that reason, the filters have negative amplitudes at these angular scales for these specific frequencies. At larger scales, the fluctuations are suppressed by linear combination of the various channels, while the filtering functions show very similar shapes. Optimising the filters for detection of core radii of $5\farcm0$ instead of $3\farcm0$ results in a shift of the negative peak at $\ell\simeq10^3$ to smaller multipole orders. Instrumental noise, which is important at even higher multipoles, is suppressed by the filter’s exponential decline at high $\ell$ above $\ell{\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}2000$. The unwanted CMB fluctuations and all Galactic contributions at scales larger than the cluster scale are suppressed by weightings with varying sign so that the foregrounds are subtracted at the stage of forming linear combinations of the ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients.
Furthermore, the contours of the matched filter kernels are given in Fig. \[figure\_filter\_kernel\_matched\] as functions of both inverse angular scale $\ell$ and observing frequency $\nu$ for differing noise contributions. The figures compare filters derived for differing background noise compositions. The filters shown serve for the optimised detection of generalised King-profiles with core radius $\theta_c=15\farcm0$ and asymptotic slope $\lambda=1.0$. These (rather large) values have been chosen for visualisation purposes. For clarity, the contour denoting zero values has been omitted due to noisy data. In these figures it is apparent how the filters combine the frequency information in order to achieve a suppression of the unwanted foregrounds: At multipole moments of a few hundred, the filters exhibit changes in sign, such that the measurements at low frequencies are subtracted from the measurements at high frequencies in the linear combination of the filtered maps.
Fig. \[figure\_filter\_kernel\_matched\_real\] illustrates the filter kernels $\psi_\nu(\theta)$ in real space for the same selection of frequencies and background noise components as given above. The filter kernels $\psi_\nu(\theta)$ have been synthesised from the $\psi_{\ell 0,\nu}$-coefficients using the [alm2grid]{}-utility of the [[*Planck*]{}]{}-simulation package. Here, the parameters of the King-profile to be detected are $(\theta_c,\lambda)=(5\farcm0,1.0)$. The filter kernels are similar in shape to Mexican-hat wavelets, but show more than one oscillation. Their action on the sky maps is to apply high-pass filtering, such that all long-wavelength modes are eliminated. At the cluster scale, they implement a linear combination of the sky maps that aims at amplifying the SZ-signal: The kernels derived for both the $\nu=100$ GHz- and $\nu=143$ GHz-channel exhibit a central depression which is used to convert the SZ-signal to positive amplitudes. The other two channels resemble simple Gaussian kernels which smooth the maps to a common effective angular resolution. At frequencies of $\nu=217$ GHz and $\nu=353$ GHz the most important emission feature is Galactic Dust, which is suppressed by the filter’s small amplitudes. In this way, the weak SZ-signal is dissected.
In Fig. \[figure\_filter\_kernel\_matched\_spec\], filter kernels derived with both algorithms for point sources (i.e. with beam profiles of the respective [[*Planck*]{}]{}-channels) are compared that have been optimised for signals with differing spectral behaviour, in this case the thermal SZ-effect, the kinetic SZ-effect and a Planckian thermal emitter with a surface temperature $T_\mathrm{surface}$ of 150 K, such as an asteroid or planet. The filter kernels depicted correspond to observing frequencies of $\nu=143$ GHz and $\nu=217$ GHz. The filters clearly reflect the spectral behaviour of the emission laws of the sources one aims at detecting: While the filter kernels designed for detecting thermal SZ-clusters reflect the peculiar change in sign in the SZ-effect’s frequency dependence, the other two curves show the behaviour to be expected for a Planckian emitter and the kinetic SZ-effect, respectively. Again, the better angular resolution of the $\nu=217$ GHz-channel is apparent from the shift of the curves to higher multipole order $\ell$.
### Scale-adaptive filter
The spherical harmonics expansion coefficients $\psi_{\ell 0,\nu}$ following from the scale-adaptive filter algorithm for the frequencies $\nu=100$ GHz, $\nu=143$ GHz, $\nu=217$ GHz and $\nu=353$ GHz are given in the upper panel of Fig. \[figure\_filter\_kernel\_adaptive\]. The left and right columns compare the filter kernels for differing noise components. Their functional shape has a number of important features in common with the matched filters: They suppress the uncorrelated pixel noise, which is dominant at high $\ell$ by their exponential decline at $\ell{\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}2000$. Furthermore, the filters amplify the SZ-signal, which is negative at frequencies below $\nu=217$ GHz, by assuming large negative values and hence converting the SZ-signal to yield positive amplitudes. Additionally, the filters show a distinct secondary peak at $\ell\simeq2000$ which causes the kernels to be more compact after transformation to real space and enables the size measurement. A more general observation is that the scale-adaptive filter kernel shapes are more complex and noisier in comparison to the matched filter, especially at high $\ell$.
The scale-adaptive filter makes even stronger use of the spectral information than the matched filter. In particular, the contour plots in Fig. \[figure\_filter\_kernel\_adaptive\] show that the scale-adaptive filter exhibits alternating signs when varying the observing frequency $\nu$ while keeping the angular scale $\ell$ fixed. In this way, the noise contributions are isolated in angular scale and subsequently suppressed by linear combination of the maps. Furthermore, one notices a change in sign at multipole order $\ell\simeq 200$ which is common to the frequencies $\nu=100\ldots353$ GHz, at which the CMB signal is strongest. Aiming at reducing the variance of the filtered maps, the scale-adaptive filter suppresses the ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients by assuming small values.
Fig. \[figure\_filter\_kernel\_adaptive\_real\] gives the filter kernels $\psi_\nu(\theta)$ in real space for selected frequencies and background noise components. Like the matched filters, the scale-adaptive filters resemble Mexican-hat wavelets and subject the sky maps to high-pass filtering.
In Fig. \[figure\_filter\_kernel\_adaptive\_spec\], filter kernels derived with both algorithms for point sources (i.e. with beam profiles of the respective [[*Planck*]{}]{}-channels) are compared that have been optimised for signals with differing spectral behaviour, in this case the thermal SZ-effect, the kinetic SZ-effect and a Planckian thermal emitter with a surface temperature $T_\mathrm{surface}$ of 150 K, such as an asteroid or planet. The filter kernels depicted correspond to observing frequencies of $\nu=143$ GHz and $\nu=217$ GHz. As in the case of the matched filter, the frequency dependence of the signal is reflected by the sign of the filter kernel at the anticipated angular scale of the profile to be detected.
Filter renormalisation and synthesis of likelihood maps {#filter_renormalise}
-------------------------------------------------------
Once the filter kernels are derived, the filtered fields $u_{\nu}(R_\nu,\bmath{\beta})$ can be synthesised from the $u_{\ell
m,\nu}$-coefficients (defined in eqn. (\[eqn\_k\_convolve\])) and the resulting maps can be added in order to yield the co-added, filtered field $u(R_1,\ldots,R_N,\bmath{\beta})$ (see eqn. (\[eqn\_k\_add\])), which can be normalised by the level of fluctuation $\sigma_u$ (given by eqn. (\[eqn\_def\_ccspec\])) to yield the likelihood map $D(\bmath{\beta})$. It is favourable to divide the filter kernels by the fluctuation level $\sigma_u$ and to apply a renormalisation: $$\psi_{\ell 0,\nu}\longrightarrow\psi^\prime_{\ell 0,\nu} =
\frac{\psi_{\ell 0,\nu}}{\sqrt{\sum_\ell \bmath{\psi}_{\ell}^T\hat{\bld{C}}_{\ell}\bmath{\psi}_{\ell}}}\mbox{.}$$ In this case, the filter kernels are invariant under changes in profile normalisation. With these kernels, the filtered flux maps can be synthesised from the set of ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients and the resulting maps can be co-added to yield the final normalised likelihood map $D(\bmath{\beta})$. It is computationally advantageous, however, to interchange the last two steps, $$\begin{aligned}
D_u(\bmath{\beta}) & = & \frac{u(\bmath{\beta})}{\sigma_u} = \frac{1}{\sigma_u}\sum_\nu u_\nu(\bmath{\beta})\\
& = &
\sum_\nu\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell}
\sqrt{\frac{4\pi}{2\ell+1}}{\langle}S_{\ell m}{\rangle}_\nu\frac{\psi_{\ell 0,\nu}}{\sqrt{\sum_\ell
\bmath{\psi}^T_\ell\hat{\bld{C}}_\ell\bmath{\psi}_\ell}} Y_\ell^m(\bmath{\beta})\\
& = & \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell}\underbrace{\sqrt{\frac{4\pi}{2\ell+1}}\left[\sum_\nu {\langle}S_{\ell m}{\rangle}_\nu
\cdot\psi^{\prime}_{\ell 0,\nu}\right]}_{\equiv D_{\ell m}} Y_\ell^m(\bmath{\beta})\mbox{,}\end{aligned}$$ and to derive the $D_{\ell m}$-coefficients first, such that the synthesis has to be performed only once. Due to the restriction to axially symmetric kernels, the convolution can be carried out using the [alm2map]{}-utility rather than [totalconvolve]{}.
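Using healpy for illustration (the actual pipeline relies on the [alm2map]{}-utility of the [[*Planck*]{}]{}-simulation package), the renormalisation, the assembly of the $D_{\ell m}$-coefficients and the synthesis of the likelihood map could be sketched as:

```python
import numpy as np
import healpy as hp

def likelihood_map(S_alm, psi, C, nside, lmax):
    """Normalised likelihood map D(beta).
    S_alm : list of N healpy alm arrays <S_lm>_nu
    psi   : (lmax+1, N) filter coefficients psi_l0,nu
    C     : (lmax+1, N, N) cross power spectra"""
    sigma_u = np.sqrt(np.einsum('ln,lnm,lm->', psi, C, psi))
    psi_norm = psi / sigma_u                       # renormalised kernels psi'
    ell = np.arange(lmax + 1)
    prefactor = np.sqrt(4.0 * np.pi / (2.0 * ell + 1.0))
    D_alm = np.zeros_like(S_alm[0])
    for nu, alm in enumerate(S_alm):
        D_alm += hp.almxfl(alm, prefactor * psi_norm[:, nu])
    return hp.alm2map(D_alm, nside)                # D(beta) from the D_lm
```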
Fig. \[figure\_filter\_likelihood\] gives a visual impression of the capability of the above described filtering schemes: The figure shows a $30\degr\times30\degr$ wide field at the ecliptic North pole at a frequency of $\nu=353$ GHz (at the SZ-maximum) as observed by [[*Planck*]{}]{}, i.e. the image is smoothed to an angular resolution of $\Delta\theta=5\farcm0$ (FWHM) and contains the fluctuating CMB, all Galactic and ecliptic foregrounds as well as pixel noise. Matched and scale-adaptive filter kernels were derived for isolating point sources, i.e. for sources that appear to have profiles equal to [[*Planck*]{}’s ]{}beams of the corresponding channel. For clarity, only amplitudes exceeding a threshold value of 1.0 are shown.
For comparison, Fig. \[figure\_filter\_likelihood\] shows the same detail of the input thermal SZ-map as well. It is immediately apparent that the observation of SZ-clusters without foreground- and noise suppression is not possible and that one has to rely on filtering schemes. As a comparison with Fig. \[figure\_filter\_likelihood\] shows, the filters are clearly able to isolate the SZ-clusters and to strongly suppress all spurious noise contributions. The matched filter, however, shows a slightly better performance and yields more significant peaks due to better background suppression. There are weak residuals present in both maps due to incomplete foreground reduction. These residuals however, have small amplitudes compared to the SZ-detections. The highest peaks exhibit detection significances amounting to $10.6\sigma$ in the case of the matched filter and $9.1\sigma$ in the case of the scale-adaptive filter.
It should be emphasised that the filters work exceptionally well despite the fact that the Milky Way clearly is a non-Gaussian feature, whereas Gaussianity of the fluctuating background was an important assumption in the derivation of the filter kernels. Furthermore, the filters successfully separate and amplify the weak SZ-signal in the presence of seven different noise contributions (CMB, four Galactic foregrounds, thermal emission from bodies of the Solar system and instrumental noise) that exhibit different spectral behaviours by relying on just nine broad-band measurements. Fig. \[figure\_flowchart\_analysis\] summarises all steps involved in the simulation of [[*Planck*]{}]{}-observations, filter derivation and signal extraction.
Summary and conclusion {#sect_summary}
======================
A simulation for assessing [[*Planck*]{}’s ]{} SZ-capabilities in the presence of spurious signals is presented that combines maps of the thermal and kinetic SZ-effects with a realisation of the cosmic microwave background (CMB), in addition to Galactic foregrounds (synchrotron emission, free-free emission, thermal emission from dust, CO-line radiation) as well as the sub-millimetric emission from celestial bodies of our Solar system. Additionally, observational issues such as the finite angular resolution and spatially non-uniform instrumental noise of [[*Planck*]{}]{} are taken into account.
- [Templates for modelling the free-free emission and the carbon monoxide-line emission have been added to the [[*Planck*]{}]{}-simulation pipeline. The free-free template relies on an $H_\alpha$-survey of the Milky Way. The spectral properties of both foregrounds are modelled with reasonable parameter choices, i.e. $T_p=10^4$ K for the free-free plasma temperature and $T_\mathrm{CO}=20$ K for the mean temperature of giant molecular clouds.]{}
- [An extensive package for modelling the sub-millimetric emission from planets and asteroids has been implemented for [[*Planck*]{}]{} that solves the heat balance equation of each celestial body. It takes the movement of the planets and asteroids into account, which causes, due to [[*Planck*]{}’s ]{}scanning strategy, double detections separated by approximately half-year intervals. The total number of asteroids implemented is $\simeq 1200$.]{}
- [The foregrounds have been combined under proper inclusion of [[*Planck*]{}’s ]{}frequency response windows in order to yield a set of flux maps. The auto- and cross-correlation properties of those maps are investigated in detail. Furthermore, their decomposition into spherical harmonics ${\langle}S_{\ell m}{\rangle}_\nu$ serves as the basis for the filter construction. It should be emphasised that the spectral properties of a foreground component were assumed to be isotropic.]{}
- [In order to separate the SZ-signal and to suppress the foreground components, the theory of matched and scale-adaptive filtering has been extended to spherical data sets. The formulae in the context of spherical coordinates and $Y_{\ell}^m$-decomposition are analogous to those derived for Cartesian coordinate systems and Fourier-transforms. ]{}
- [The global properties of filter kernel shapes are examined as functions of observing channel, composition of noise, parameters of the profile to be detected and spectral dependence of the signal. Transformation of the filter kernels to real space yields functions that resemble the Mexican-hat wavelets, but show more than one oscillation. The shape of the filter kernels can be understood intuitively: They subject the maps to high-pass filtering while retaining structures similar in angular extent to the predefined profile size. The signal is then amplified by linear combination of the maps, which again is apparent in the sign of the filter kernels.]{}
- [The functionality of the filtering schemes is verified by applying them to simulated observations. It is found that the Galactic foregrounds can be suppressed very effectively so that the SZ-cluster signals can be retrieved. Comparing the two filters, the scale-adaptive filter does not perform as well as the matched filter, in accordance with the findings of @2002MNRAS.336.1057H [@2002ApJ...580..610H]. It should be emphasised that for the derivation of the filter kernels nothing but a model profile and all cross-power spectra (in [[*Planck*]{}’s ]{}case a total number of 45 independent $C_{\ell,\nu_1\nu_2}$-sets) are used.]{}
The scientific exploitation of our simulation and the characterisation of [[*Planck*]{}’s ]{}SZ-cluster sample, e.g. the number density as a function of detection significance as well as filter parameters, the spatial distribution as a function of Galactic and ecliptic latitude, and the distribution in redshift, mass and apparent size, will be the subject of a forthcoming paper.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank Torsten En[ß]{}lin for careful reading of the manuscript. The support provided by Martin Reinecke in enhancing the [[*Planck*]{}]{}-simulation tools and adding custom changes is greatly appreciated.
Optimised filter for single frequency all-sky observations {#appendix_sphsingle}
==========================================================
This appendix derives the optimised filters for single frequency all-sky observations and thus serves as a detailed supplement to Sect. \[sect\_filtering\] where optimised filters for multi-frequency observations were derived. The formalism outlined in this appendix might be applied to future all-sky surveys in the X-ray or microwave regime.
Assumptions and definitions {#assumptions-and-definitions}
---------------------------
In order to construct our filters, we consider an all-sky map of the detected scalar field $s(\btheta)$ $$\label{eq:app:signal}
s(\btheta) = y(|\btheta-\btheta_0|) + n(\btheta),$$ where $\btheta = (\vartheta, \varphi)$ denotes a two-dimensional vector on the sphere and $\btheta_0$ is the source location. The first term on the right-hand side represents the amplitude of the sources to be detected, while the second term corresponds to the generalised noise present in the map which is composed of any detected features other than the desired signal, including for instance instrumental noise. The statistical properties of the noise are assumed to be characterised by its power spectrum $\left{\langle}n_{\ell m} n^*_{\ell' m'} \right{\rangle}= C_{\ell} \delta_{\ell \ell'} \delta_{m,m'}$. In order to sketch the construction of the optimised filter, we assume an individual cluster situated at the North pole ($\btheta_0=\bld{0}$) with a characteristic angular SZ-signal $y(\theta = |\btheta|) = A \tau(\theta)$, separating the amplitude $A$ and the profile $\tau(\theta)$.
Convolution theorem on the sphere {#appendix_convolution}
---------------------------------
Filtering a scalar field on the sphere with an arbitrary, asymmetric kernel requires the specification of the convolution path as well as the orientation of the filter kernel at each position on the sphere in order to apply any convolution algorithm [@2001PhRvD..63l3002W]. Because of the simplifying restriction to centrally symmetric filter kernels, we give the theorem for the convolution of two functions, one of which is assumed to be centrally symmetric. The filtered field $u(\bbeta)$ is obtained by convolution of the centrally symmetric filter function $\psi(\theta)$ with the scalar field on the sphere $s(\btheta)$, $$\label{eq:app:convolve1}
u(\bbeta) = \int {\mathrm{d}}\Omega\, s(\btheta) \psi(|\btheta-\bbeta|).$$ Expansion of these two scalar fields into spherical harmonics yields $$\begin{aligned}
\label{eq:app:convolve2}
s(\btheta) &=& \sum_{\ell=0}^\infty \sum_{m=-\ell}^{+\ell} s_{\ell m}^{}\,
Y_\ell^m(\btheta), \\
\psi(\theta) &=& \sum_{\ell=0}^\infty \sum_{m=-\ell}^{+\ell} \psi_{\ell m}^{}\,
Y_\ell^m(\theta)
= \sum_{\ell=0}^\infty \sqrt{\frac{2\ell+1}{4\pi}}
\psi_{\ell 0}^{}\,P_\ell^{}(\cos\theta).\end{aligned}$$ The last step assumes central symmetry. In this case, only modes with $m=0$ are contributing. For proceeding, the addition theorem for Legendre polynomials $P_\ell(x)$ [@1995mmp..book.....A] is used in substituting $\gamma=|\btheta-\bbeta|$:$$\label{eq:app:at}
P_\ell(\cos\gamma) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{+\ell} Y_\ell^m(\btheta)\,Y_\ell^{m*}(\bbeta).$$ Combining these equations and applying the completeness relation yields the convolution relation for a centrally symmetric filter kernel, $$\label{eq:app:ct}
u(\bbeta) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} u_{\ell m}^{} Y_\ell^m(\bbeta),
\quad\mbox{with}\quad
u_{\ell m} = \sqrt{\frac{4 \pi}{2 \ell+1}} s_{\ell m}\,\psi_{\ell 0} \,.$$
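A minimal numerical illustration of this relation, using healpy and a Gaussian of $1\degr$ FWHM as a stand-in for the centrally symmetric kernel, is given below:

```python
import numpy as np
import healpy as hp

lmax, nside = 256, 128
ell = np.arange(lmax + 1)
s_alm = hp.synalm(np.ones(lmax + 1), lmax=lmax)         # toy field with a flat spectrum
# a centrally symmetric kernel; here a Gaussian beam, purely for illustration
psi_l0 = np.sqrt((2 * ell + 1) / (4 * np.pi)) * hp.gauss_beam(np.radians(1.0), lmax=lmax)
# u_lm = sqrt(4pi/(2l+1)) s_lm psi_l0, i.e. the convolution relation above
u_alm = hp.almxfl(s_alm, np.sqrt(4 * np.pi / (2 * ell + 1)) * psi_l0)
u_map = hp.alm2map(u_alm, nside)                        # the filtered field u(beta)
```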
Concepts of optimised filtering on the sphere {#appendix_sphere_filter}
---------------------------------------------
The idea of optimised matched filters was proposed by @1998ApJ...500L..83T, and generalised to scale-adaptive filters by @2001ApJ...552..484S for a flat topology. The construction of a centrally symmetric optimised filter function $\psi(\theta)$ for the amplification and detection of signal profiles differing only in size but not in shape implies a family of filters $\psi(\theta/R)$ introducing a scaling parameter $R$. Decomposing the family of filter functions $\psi(\theta/R)$ into spherical harmonics yields
$$\begin{aligned}
\label{eq:app:filter}
\psi\left(\frac{\theta}{R}\right) &=& R^2\sum_{\ell=0}^\infty
\sqrt{\frac{2\ell+1}{4\pi}}\psi_{\ell 0}(R)\,P_\ell^{}(\cos\theta),\\
\psi_{\ell 0}(R) &=& \frac{1}{R^2} \int {\mathrm{d}}^2\theta\,
\sqrt{\frac{2\ell+1}{4\pi}}\psi\left(\frac{\theta}{R}\right)
P_\ell(\cos\theta),\end{aligned}$$
while allowing for central symmetry of the filter function. For a particular choice of $R$ the filtered field $u(R,\bbeta)$ is obtained by convolution (c.f. Appendix. \[appendix\_convolution\]): $$\begin{aligned}
\label{eq:app:udef}
u(R,\bbeta) & = & \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} u_{\ell m}^{} Y_\ell^m(\bbeta),
\quad\mbox{and}\\
u_{\ell m} & = & \sqrt{\frac{4 \pi}{2 \ell+1}} s_{\ell m}\,\psi_{\ell 0}(R) \,.\end{aligned}$$ Taking into account the vanishing expectation value of the noise ${\langle}n_{\nu}(\btheta) {\rangle}= 0$, the expectation value of the filtered field at the North pole $\bbeta=\bld{0}$ is given by
$$\label{eq:app:umean}
{\langle}u(R,\bld{0}){\rangle}=
A \sum_{\ell=0}^{\infty} \tau_{\ell 0}\,
\psi_{\ell 0}(R).$$
Assuming that the power spectrum of the signal is negligible compared to the noise power spectrum, the variance of the filtered field is given by $$\label{eq:app:uvariance}
\sigma_u^2(R) = \left{\langle}\left[u(R,\bbeta) - {\langle}u(R,\bbeta){\rangle}\right]^2\right{\rangle}=\sum_{\ell=0}^{\infty} C_{\ell}^{}\,\psi_{\ell 0}^2(R).$$
While the optimised [*matched filter*]{} in the case of single frequency observations is defined to obey the first two of the following conditions, the optimised [*scale-adaptive filter*]{} is required to obey all three conditions:
1. [ The filtered field $u(R,\bld{0})$ is an unbiased estimator of the source amplitude $A$ at the true source position, i.e. ${\langle}u(R,\bld{0}){\rangle}= A$.]{}
2. [The variance of $u(R,\bbeta)$ has a minimum at the scale $R$ ensuring that the filtered field is an efficient estimator.]{}
3. [The expectation value of the filtered field at the source position has an extremum with respect to the scale $R$, implying $$\label{eq:app:3cond}
\frac{\upartial}{\upartial R}{\langle}u(R,\bld{0}){\rangle}= 0.$$]{}
Matched filter {#matched-filter-1}
--------------
In order to derive the matched filter, constraint (i) can be rewritten yielding $$\label{eq:app:constraint1}
\sum_{\ell=0}^\infty \tau_{\ell 0}\, \psi_{\ell 0} = 1.$$ Performing functional variation (with respect to the filter function $\psi$) of $\sigma_u^2(R)$ while incorporating the constraint (\[eq:app:constraint1\]) through a Lagrangian multiplier yields the spherical matched filter: $$\label{eq:app:matched filter}
\psi_{\ell 0}^{} = \alpha\, \frac{\tau_{\ell 0}}{ C_\ell}, \quad\mbox{where}\quad \alpha^{-1} = \sum_{\ell=0}^\infty
\frac{\tau_{\ell 0}^2}{ C_\ell}.$$
In any realistic application, the power spectrum $C_{\ell}$ can be estimated from the observed data provided the power spectrum of the signal is negligible. The quantities $\alpha$, $\tau_{\ell 0}$, and thus the filter kernel $\psi_{\ell 0}$ can be straightforwardly computed for any model source profile $\tau(\theta)$.
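In the single-frequency case the filter reduces to a one-line computation; a sketch:

```python
import numpy as np

def matched_filter_single(C_l, tau_l0):
    """Single-frequency spherical matched filter, eqn. (eq:app:matched filter)."""
    alpha = 1.0 / np.sum(tau_l0**2 / C_l)
    return alpha * tau_l0 / C_l
```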
Scale-adaptive filter {#sec:app:SAF}
---------------------
The next step consists of reformulating constraint (iii) in order to find a convenient representation for the application of functional variation. The expansion coefficient of the family of filter functions $\psi(\theta/R)$ of eqn. (\[eq:app:filter\]) can be rewritten yielding $$\label{eq:app:approx}
\psi_{\ell 0}^{}(R) =
\frac{1}{R^2} \int {\mathrm{d}}^2\theta\,
\psi\left(\frac{\theta}{R}\right)Y_\ell^{0}(\theta) =
\int {\mathrm{d}}^2\beta\, \psi(\beta) Y_\ell^{0}(R \beta),$$ where $\beta \equiv \theta/R$. In general, this substitution is [*not*]{} valid, because ${\mathrm{d}}^2\theta = \sin\theta\,{\mathrm{d}}\theta\,{\mathrm{d}}\phi$. In the case of localised source profiles, the angle $\theta$ is small for non-vanishing values of $\psi$ justifying the approximation $\sin \theta\approx \theta$. The same argument also applies for the boundaries of integration. With the aid of eqn. (\[eq:app:umean\]), condition (\[eq:app:3cond\]) reads $$\label{eq:app:derivation1}
\frac{\upartial}{\upartial R}{\langle}u(R,\bld{0}){\rangle}=
\sum_{\ell=0}^{\infty} \tau_{\ell 0}\,
\frac{\upartial \psi_{\ell 0}(R)}{\upartial R} = 0.$$ Using eqn. (\[eq:app:approx\]), the derivative now acts on the Legendre polynomial $P_{\ell}$, $$\label{eq:app:derivation2}
\sum_{\ell=0}^\infty \sqrt{\frac{2 \ell+1}{4 \pi}}\,\tau_{\ell 0}
\int {\mathrm{d}}^2 \beta\, \psi(\beta)
P_\ell' (\cos R \beta)\, \beta\,\sin R \beta = 0.$$ Using the derivative relation of the Legendre polynomials [@1995mmp..book.....A], $$\label{eq:app:LegendreRelation}
P_\ell'(x) = \frac{\ell+1}{1 - x^2}\,[x\,P_\ell(x) - P_{\ell+1}(x)],$$ one obtains $$\begin{aligned}
\label{eq:app:derivation3}
\lefteqn{\sum_{\ell=0}^\infty \sqrt{\frac{2 \ell+1}{4 \pi}}\,(\ell +1)\,
\tau_{\ell 0}
\int {\mathrm{d}}^2 \beta\, \psi(\beta) \frac{R \beta}{\sin R \beta}
\,\times}\nonumber \\
&&[\cos R \beta\,P_\ell(\cos R \beta) - P_{\ell+1}(\cos R \beta)] = 0.\end{aligned}$$ In our case, the angle $\theta$ is small for non-vanishing values of $\psi$ justifying the approximations $\sin R\beta
\approx R\beta$ and $\cos R\beta \approx 1$. Substituting back, ${\mathrm{d}}^2\beta = {\mathrm{d}}^2\theta/R^2$, introducing $x\equiv\cos \theta = \cos R \beta$, and inserting the inversion of eqn. (\[eq:app:approx\]), namely $$\label{eq:app:inversion}
\psi(\beta) = \sum_{\ell'=0}^\infty \psi_{\ell'0}^{}(R)
Y_{\ell'}^{0}(R \beta),$$ one arrives at $$\begin{aligned}
\label{eq:app:derivation4}
\lefteqn{\sum_{\ell',\ell=0}^\infty
\sqrt{\frac{2 \ell+1}{4 \pi}}\,\sqrt{\frac{2 \ell'+1}{4 \pi}}\,
(\ell +1)\, \tau_{\ell 0}\psi_{\ell' 0}(R)\, \times}\nonumber \\
&& \frac{2\pi}{R^2} \int {\mathrm{d}}x\, P_{\ell'}(x)
[P_\ell(x) - P_{\ell+1}(x)] = 0.\end{aligned}$$ Applying the orthogonality relation for the Legendre polynomials, $$\label{eq:orthogonal}
\int_{-1}^{+1}{\mathrm{d}}x\, P_\ell (x) P_{\ell'}(x) =
\frac{2}{2 \ell+1} \delta_{\ell\ell'},$$ and using the small angle approximation in the second term of eqn. (\[eq:app:derivation4\]) with the same argument as given above, yields the final result $$\label{eq:app:derivation5}
\sum_{\ell=0}^\infty\psi_{\ell 0}(R)
[\tau_{\ell 0} + \ell (\tau_{\ell 0} - \tau_{\ell-1, 0})] = 0.$$ Replacing the differential quotient with the corresponding derivative is a valid approximation for $\ell\gg 1$. Thus, eqn. (\[eq:app:derivation5\]) can be recast in shorthand notation yielding $$\label{eq:app:derivation6}
\sum_{\ell=0}^\infty\psi_{\ell 0}(R)\tau_{\ell 0}
\left[2 + \frac{{\mathrm{d}}\ln \tau_{\ell 0}}{{\mathrm{d}}\ln \ell}\right] = 0.$$ This result could have been obtained independently by attaching the tangential plane to the North pole and applying Fourier decomposition of the filter function $\psi$ and the source profile $\tau$. For that reason, it is not surprising that the functional form of this condition on the sphere agrees with that obtained by @2001ApJ...552..484S for a flat topology in two dimensions.
Performing functional variation (with respect to the filter function $\psi$) of $\sigma_u^2(R)$ while interlacing the constraints (\[eq:app:constraint1\]) and (\[eq:app:derivation6\]) through a pair of Lagrangian multipliers yields the spherical scale-adaptive filter, $$\begin{aligned}
\label{eq:app:scale-adaptive filter}
\psi_{\ell 0}^{} &=&
\frac{\tau_{\ell 0}}{ C_\ell\,\Delta}
\left[2 b + c - (2 a + b)\frac{{\mathrm{d}}\ln \tau_{\ell 0}}{{\mathrm{d}}\ln \ell}
\right],\nonumber\\
\Delta &=& ac-b^2, \\
a &=& \sum_{\ell=0}^\infty
\frac{\tau_{\ell 0}^2}{C_\ell}, \quad
b = \sum_{\ell=0}^\infty
\frac{\tau_{\ell 0}}{C_\ell}\frac{{\mathrm{d}}\tau_{\ell 0}}{{\mathrm{d}}\ell}, \nonumber\\
c &=& \sum_{\ell=0}^\infty C_\ell^{-1}
\left(\frac{{\mathrm{d}}\tau_{\ell 0}}{{\mathrm{d}}\ln \ell}\right)^2.\end{aligned}$$
As before in the case of the matched filter, the power spectrum $C_{\ell}$ can be derived from observed data provided the power spectrum of the signal is negligible. Assuming a model source profile $\tau(\theta)$, the quantities $\tau_{\ell 0}$, $a$, $b$, $c$, and finally $\psi_{\ell 0}$ can be computed in a straightforward way. The derivative of $\tau_{\ell 0}$ with respect to the multipole order $\ell$ is a shorthand notation of the differential quotient in eqn. (\[eq:app:derivation5\]).
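The analogous evaluation of the scale-adaptive filter is sketched below, again purely as an illustration; the coefficients $a$, $b$ and $c$ are taken exactly as printed above, with the derivatives of $\tau_{\ell 0}$ replaced by finite differences in the spirit of the shorthand just described.

```python
import numpy as np

def scale_adaptive_filter(ell, tau_l0, C_l):
    """Spherical scale-adaptive filter of the expression above, with the
    derivatives of tau_l0 approximated by finite differences."""
    dtau_dl = np.gradient(tau_l0, ell)        # d tau_{l0} / d l
    dtau_dlnl = ell * dtau_dl                 # d tau_{l0} / d ln l
    a = np.sum(tau_l0 ** 2 / C_l)
    b = np.sum(tau_l0 * dtau_dl / C_l)
    c = np.sum(dtau_dlnl ** 2 / C_l)
    Delta = a * c - b ** 2
    dln_tau_dln_l = dtau_dlnl / tau_l0
    return tau_l0 / (C_l * Delta) * (2.0 * b + c - (2.0 * a + b) * dln_tau_dln_l)
```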
[^1]: e-mail: spirou@mpa-garching.mpg.de (BMS); pfrommer@mpa-garching.mpg.de (CP); reinhard@mpa-garching.mpg.de (RMH); mbartelmann@ita.uni-heidelberg.de (MB)
[^2]: http://planck.mpa-garching.mpg.de/
[^3]: http://astro.estec.esa.nl/Planck/
[^4]: http://cfa-www.harvard.edu/cfa/ps/mpc.html
---
abstract: 'We extend results on Weierstrass semigroups at ramified points of double covering of curves to any numerical semigroup whose genus is large enough. As an application we strengthen the properties concerning Weierstrass weights stated in [@To].'
address: 'ICTP, Mathematics Section, P.O. Box 586, 34100, Trieste - Italy'
author:
- Fernando Torres
title: Remarks on numerical semigroups
---
[^1]
Introduction
============
Let $H$ be a numerical semigroup, that is, a subsemigroup of $(\mathbb N,
+)$ whose complement is finite. Examples of such semigroups are the Weierstrass semigroups at non-singular points of algebraic curves.
In this paper we deal with the following type of semigroups:
\[def\] Let $\gamma\ge 0$ be an integer. $H$ is called $\gamma$-hyperelliptic if the following conditions hold:
- $H$ has $\gamma$ even elements in $[2,4\gamma]$.
- The $(\gamma+1)$th positive element of $H$ is $4\gamma+2$.
A 0-hyperelliptic semigroup is usually called hyperelliptic.
The motivation for study of such semigroups comes from the study of Weierstrass semigroups at ramified points of double coverings of curves. Let $\pi: X\to \tilde X$ be a double covering of projective, irreducible, non-singular algebraic curves over an algebraically closed field $k$. Let $g$ and $\gamma$ be the genus of $X$ and $\tilde X$ respectively. Assume that there exists $P\in X$ which is ramified for $\pi$, and denote by $m_i$ the $i$th non-gap at $P$. Then the Weierstrass semigroup $H(P)$ at $P$ satisfies the following properties (cf. [@To], [@To1 Lemma 3.4]):
- $H(P)$ is $\gamma$-hyperelliptic, provided $g\ge
4\gamma+1$ if ${\rm char}(k)\neq 2$, and $g\ge 6\gamma-3$ otherwise.
- $m_{2\gamma+1}=6\gamma+2$, provided $g\ge 5\gamma+1$.
- $m_{\frac{g}{2}-\gamma-1}=g-2$ or $m_{\frac{g-1}{2}-\gamma}=g-1$, provided $g\ge 4\gamma+2$.
- The weight $w(P)$ of $H(P)$ satisfies $$\binom{g-2\gamma}{2}\le w(P)< \binom{g-2\gamma+2}{2}.$$
Conversely, if $g$ is large enough and if any of the above properties is satisfied, then $X$ is a double covering of a curve of genus $\gamma$. A posteriori, the four properties above become equivalent whenever $g$ is large enough.
The goal of this paper is to extend these results for any semigroup $H$ such that $g(H):= \#(\mathbb N\setminus H)$ is large enough. We remark that there exist semigroups of genus large enough that cannot be realized as Weierstrass semigroups (see [@Buch1], [@To Scholium 3.5]).
The key tool used to prove these equivalences is Theorem 1.10 in Freiman’s book [@Fre] which has to do with the addition of finite sets. From this theorem we deduce Corollary \[cor-cast\] which can be considered as analogous to Castelnuovo’s genus bound for curves in projective spaces ([@C], [@ACGH p.116], [@R Corollary 2.8]). Castelnuovo’s result is the key tool to deal with Weierstrass semigroups. This Corollary can also be proved by means of properties of addition of residue classes (see Remark \[cauchy\]).
In §2 we prove the equivalences $(P_1)\Leftrightarrow
(P_2)\Leftrightarrow (P_3)$. The equivalence $(P_1)\Leftrightarrow
(P_2)$ is proved under the hypothesis $g(H)\ge 6\gamma+4$, while $(P_1)\Leftrightarrow (P_3)$ is proved under $g(H)=6\gamma+5$ or $g(H)\ge 6\gamma+8$. In both cases the bounds on $g(H)$ are sharp (Remark \[sharp\]). We mention that the cases $\gamma\in\{1,2\}$ of $(P_1)\Leftrightarrow (P_3)$ were fixed by Kato [@K2 Lemmas 4,5,6,7].
In §3 we deal with the equivalence $(P_1)\Leftrightarrow (P_4)$. To this purpose we determine bounds for the weight $w(H)$ of the semigroup $H$, which is defined by $$w(H):= \sum_{i=1}^{g}(\ell_i -i),$$ where $g:=g(H)$ and $\mathbb N\setminus H = \{\ell_1,\ldots,\ell_g\}$. It is well known that $0\le w(H)\le \binom{g}{2}$; clearly $w(H)=0\Leftrightarrow
H=\{g+i:i\in \mathbb Z^+\}$, and one can show that $w(H)=\binom{g}{2}
\Leftrightarrow H$ is $\mathbb N$, or $g(H)\ge 1$ and $H$ is hyperelliptic (see e.g. [@F-K Corollary III.5.7]). Associated to $H$ we have the number $$\label{even-gap}
\rho=\rho(H):=\#\{\ell \in \mathbb N\setminus H: \ell\ {\rm even}\}.$$ In [@To Lemma 2.3] it has been shown that $\rho(H)$ is the unique number $\gamma$ satisfying $(E_1)$ of Definition \[def\], and
- $4\gamma+2 \in H$.
Thus we observe the following:
\[feto0\] Let $H$ be a $\gamma$-hyperelliptic semigroup. Then $$\rho(H)=\gamma.$$
We also observe that if $g(H)\ge 1$, then $H$ is hyperelliptic if and only if $\rho(H)=0$. In general, $\rho(H)$ affects the values of $w(H)$. Let us assume that $\rho(H)\ge
1$ (hence $w(H)<\binom{g}{2}$); then we find $$\binom{g-2\rho}{2}\le w(H)\le \left\{
\begin{array}{ll}
\binom{g-2\rho}{2}+2\rho^2 & {\rm if\ } g\ge 6\rho+5 \\
\frac{g(g-1)}{3} & {\rm otherwise}
\end{array}
\right.$$ (see Lemmas \[bo-weight\] and \[opt-weight\]). These bounds strengthen results of Kato [@K1 Thm.1] and Oliveira [@Oliv p.435] (see Remark \[oliv\]). From this result we prove $(P_1)\Leftrightarrow (P_4)$ (Theorem \[char-weight1\]) under the hypothesis $$\label{bound-g}
g(H)\ge \left\{
\begin{array}{ll}
{\rm max}\{12\gamma-1,1\} & {\rm if\ } \gamma\in\{0,1,2\}, \\
11\gamma+1 & {\rm if\ } \gamma \in\{3,5\}, \\
\frac{21(\gamma-4)+88}{2} & {\rm if\ }\gamma \in \{4,6\}, \\
\gamma^2+4\gamma+3 & {\rm if\ } \gamma\ge 7.
\end{array}
\right.$$ The cases $\gamma \in \{1,2\}$ of that equivalence was fixed by Garcia (see [@G]). In this section we use ideas from Garcia’s [@G Proof of Lemma 8] and Kato’s [@K1 p. 144].
In §1 we recall some arithmetical properties of numerical semigroups. We mainly point out the influence of $\rho(H)$ on $H$.
It is a pleasure to thank Pablo Azcue and Gustavo T. de A. Moreira for discussions about §2.
Preliminaries
=============
Throughout this paper we use the following notation
- semigroup:numerical semigroup.
- Let $H$ be a semigroup. The [*genus*]{} of $H$ is the number $g(H):= \#(\mathbb N\setminus H)$, which throughout this article will be assumed bigger than 0. The positive elements of $H$ will be called the [*non-gaps*]{} of $H$, and those of $G(H):= \mathbb N\setminus H$ will be called the [*gaps*]{} of $H$. We denote by $m_i(H)$ the $i$th non-gap of $H$. If a semigroup is generated by $m,n,\ldots $ we denote $H=\langle m,n,\ldots \rangle$.
- $[x]$ stands for the integer part of $x\in \mathbb R$.
In this section we recall some arithmetical properties of semigroups. Let $H$ be a semigroup of genus $g$. Set $m_j:= m_j(H)$ for each $j$. If $m_1=2$ then $m_i=2i$ for $i=1,\ldots,g$. Let $m_1\ge 3$. By the semigroup property of $H$ the first $g$ non-gaps satisfy the following inequalities: $$\label{prop-sem}
m_i\ge 2i+1\ \ {\rm for}\ i=1,\ldots,g-2,\ \ m_{g-1}\ge 2g-2,\ \ m_g=2g$$ (see [@Buch], [@Oliv Thm.1.1]).
Let $\rho$ be as in (\[even-gap\]). From [@To1 Lemma 2.3] we have that $$\label{feto}
\{4\rho+2i: i\in \mathbb N\} \subseteq H.$$ From the definition of $\rho$, $H$ has $\rho$ odd non-gaps in $[1,2g-1]$. Let us denote by $$u_{\rho} <\ldots < u_1$$ such non-gaps.
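To make the notation concrete, all of the invariants introduced above (gaps, genus, the number of even gaps and the odd non-gaps $u_\rho<\ldots<u_1$) can be computed by brute force. The following Python sketch is purely illustrative and the generators chosen are arbitrary.

```python
def semigroup_up_to(gens, bound):
    """Elements of the numerical semigroup generated by `gens`, up to `bound`."""
    elems = {0}
    for n in range(1, bound + 1):
        if any(n - g in elems for g in gens if n - g >= 0):
            elems.add(n)
    return elems

def invariants(gens):
    """Gaps, genus g, number rho of even gaps, odd non-gaps u_rho < ... < u_1."""
    bound = 4 * max(gens) ** 2      # crude bound beyond the largest gap (gcd(gens)=1 assumed)
    H = semigroup_up_to(gens, bound)
    gaps = [n for n in range(1, bound + 1) if n not in H]
    g = len(gaps)
    rho = sum(1 for n in gaps if n % 2 == 0)
    u = sorted(n for n in H if n % 2 == 1 and n <= 2 * g - 1)   # u_rho, ..., u_1
    return gaps, g, rho, u

gaps, g, rho, u = invariants([4, 6, 13])    # arbitrary example; note len(u) == rho
```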
\[feto1\] Let $H$ be a semigroup of genus $g$, and $\rho$ the number of even gaps of $H$. Then $$2g \ge 3\rho.$$
Let us assume that $g\le 2\rho -1$. From $u_1\le 2g-1$ we have $u_{2\rho-g+1}\le 4g-4\rho-1$. Let $\ell$ be the biggest even gap of $H$. Then $\ell \le 4g-4\rho$. For suppose that $\ell \ge
4g-4\rho+2$. Thus $\ell-u_{2\rho-g+1}\ge 3$, and then $H$ would have $g-\rho+1$ odd gaps, namely $1,
\ell-u_{2\rho-g+1},\ldots, \ell-u_{\rho}$, a contradiction. Now since in $[2,4g-4\rho]$ there are $2g-2\rho$ even numbers such that $\rho$ of them are gaps, the lemma follows.
Denote by $f_i:=f_i(H)$ the $i$th even non-gap of $H$. Hence by (\[feto\]) we have $$\label{gene}
H=\langle f_1,\ldots,
f_{\rho},4\rho+2,u_{\rho},\ldots,u_1\rangle.$$ Observe that $f_{\rho}=4\rho$, and $$\label{even}
f_{g-\rho}=2g.$$ By [@To1 Lemma 2.1] and since $g\ge 1$ we have $$\label{des-odd-1}
u_{\rho} \ge\ {\rm max}\{2g-4\rho +1, 3\}.$$ In particular, if $g\ge 4\rho$ we obtain $$\label{first=even}
m_1=f_1, \ldots, m_{\rho}=f_{\rho}.$$ Note that (\[des-odd-1\]) is only meaningful for $g\ge 2\rho$. For $g\le 2\rho-1$ we have:
\[feto2\] Let $H$ be a semigroup of genus $g$, and $\rho$ the number of even gaps of $H$. If $g\le 2\rho-1$, then $$u_{\rho}\ge 4\rho -2g+1.$$
From the proof of Lemma \[feto1\] we have that $H$ has $2g-3\rho$ even non-gaps in $[2,4g-4\rho]$. Consider the following sequence of even non-gaps: $$2u_{\rho}<\ldots< u_{\rho}+u_{4\rho-2g}.$$ Since in this sequence we have $2g-3\rho+1$ even non-gaps, then $$u_{\rho}+u_{4\rho-2g}\ge 4g-4\rho+2.$$ Now, since $u_{4\rho-2g}\le 6g-8\rho+1$ the proof follows.
$\gamma$-hyperelliptic semigroups
=================================
In this section we deal with properties $(P_1)$, $(P_2)$ and $(P_3)$ stated in §0. For $i\in \mathbb Z^+$ set $$d_i(H):= \gcd(m_1(H),\ldots,m_i(H)).$$
\[char1\] Let $\gamma \in \mathbb N$, $H$ a semigroup of genus $g \ge 6\gamma +4$ if $\gamma\ge 1$. Then the following statements are equivalent:
- $H$ is $\gamma$-hyperelliptic.
- $m_{2\gamma+1}(H)= 6\gamma +2$.
\[char2\] Let $\gamma$ and $H$ be as in Theorem \[char1\], and assume that $g\ge
1$ if $\gamma=0$. Then the following statements are equivalent:
- $H$ is $t$-hyperelliptic for some $t\in \{0,\ldots,\gamma\}$.
- $m_{2\gamma+1}(H)\le 6\gamma+2$.
- $\rho(H)\le \gamma$.
\[char3\] Let $\gamma \in \mathbb N$, $H$ a semigroup of genus $g=6\gamma+5$ or $g\ge 6\gamma+7$. Set $r:=\left[\frac{g+1}{2}\right]-\gamma-1$. Then the following statements are equivalent:
- $H$ is $\gamma$-hyperelliptic.
- $m_r(H)=g-2$ if $g$ is even; $m_r(H)=g-1$ if $g$ is odd.
- $m_r(H)\le g-1 < m_{r+1}(H)$.
\[char4\] Let $\gamma$, $H$ and $r$ be as in Theorem \[char3\]. Then the following statements are equivalent:
- $H$ is $t$-hyperelliptic for some $t\in\{0,\ldots,\gamma\}$.
- $m_r(H)\le g-2$ if $g$ is even; $m_r(H)\le g-1$ if $g$ is odd.
- $m_r(H)\le g-1$.
- $\rho(H)\le \gamma$.
To prove these results we need a particular case of the result below. For $K$ a subset of a group we set $2K:=\{a+b: a, b \in K\}$.
\[thm-fre\] Let $K=\{0<m_1<\ldots<m_i\}
\subseteq \mathbb Z$ be such that $\gcd(m_1,\ldots,m_i)=1$. If $m_i\ge
i+1+b$, where $b$ is an integer satisfying $0\le b<i-1$, then $$\# 2K \ge 2i+2+b.$$ $\Box$
\[cor-cast\] Let $H$ be a semigroup of genus $g$, and $i\in \mathbb Z^+$. If $$d_i(H)= 1\qquad{\rm and}\qquad i\le g+1,$$ then we have $$2m_i(H) \ge m_{3i-1}(H).$$
Let $K:=\{0,m_1(H),\ldots,m_i(H)\}$. Then by (\[prop-sem\]), we can apply Lemma \[thm-fre\] to $K$ with $b=i-2$.
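The corollary, as well as the sharpness examples given in the remark that follows, can be confirmed numerically; the brute-force Python sketch below is only meant as an illustration.

```python
def nongaps(gens, how_many):
    """The first `how_many` positive non-gaps m_1 < m_2 < ... of <gens>."""
    elems, out, n = {0}, [], 0
    while len(out) < how_many:
        n += 1
        if any(n - g in elems for g in gens if n - g >= 0):
            elems.add(n)
            out.append(n)
    return out

# Sharp case from the remark below: H = <5,18>, i = 5, where 2*m_5 = m_14 = 40.
m = nongaps([5, 18], 14)
assert 2 * m[5 - 1] == m[3 * 5 - 2]

# Generic check of 2*m_i >= m_{3i-1} for H = <4,6,13> (d_i = 1 for 6 <= i <= g+1 = 9).
m = nongaps([4, 6, 13], 26)
for i in range(6, 10):
    assert 2 * m[i - 1] >= m[3 * i - 2]
```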
\[rem-cast\] Both the hypothesis $d_i(H)=1$ and $i\le
g+1$ of the corollary above are necessary. Moreover, the conclusion of that result is sharp:
- Let $i=g+h$, $h\ge 2$. Then $2m_{g+h}=m_{3i-h}$.
- Let $m_1=4$, $m_2=6$ and $m_3=8$. Then $d_3=2$ and $2m_3=m_7$.
- Let $m_1=5$, $m_2=10$, $m_3=15$, $m_4=18$, $m_5=20$. Then $2m_5=m_{14}$.
\[cauchy\] (i) The Corollary above can also be proved by using results on the addition of residue classes: let $H$ and $i$ be as in \[cor-cast\]; assume further that $2\le i\le g-2$ (the remaining cases are easy to prove), and consider $\tilde K:=\{m_1,\ldots,m_i\}\subseteq \mathbb Z_{m_i}$ (i.e. a subset of the integers modulus $m_i$). Let $N:= \# 2\tilde K$. Then it is easy to see that $$2m_i\ge m_{i+N}.$$ Consequently we have a proof of the above Corollary provided $N\ge
2i-1\ (*)$. Since $m_i\ge 2i+1$ (see (\[prop-sem\])), we get $(*)$ provided $m_i$ prime (Cauchy [@Dav1], Davenport [@Dav], [@M Corollary 1.2.3]), or provided $\gcd(m_j,m_i)=1$ for $j=1,\ldots,i-1$ (Chowla [@Lan Satz 114], [@M Corollary 1.2.4]). In general by using the hypothesis $d_i(H)=1$ we can reduce the proof of the Corollary to the case $\gcd(m_{i-1},m_i)=1$. Then we apply Pillai’s [@Pi Thm 1] generalization of Davenport and Chowla results (or Mann’s result [@M Corollary 1.2.2]).
\(ii) Let $H$ and $i$ be as above and assume that $2m_i=m_{3i-1}$. Then from (i) we have $N=\# 2\tilde K=2i-1$. Thus by Kemperman [@Kem Thm 2.1] (or by [@Fre Thm. 1.11]) $2\tilde K$ satisfies one of the following conditions: (1) there exist $m, d\in \mathbb
Z_{m_i}$, such that $2\tilde K=\{m+dj:j=0,1,\ldots,N-1\}$, or (2) there exists a subgroup $F$ of $\mathbb Z_{m_i}$ of order $\ge 2$, such that $2\tilde K$ is the disjoint union of a non-empty set $C$ satisfying $C+F=C$, and a set $C'$ satisfying $C'\subseteq c+F$ for some $c\in C'$. For instance example (iii) of \[rem-cast\] satisfies property (2).
Set $m_j:= m_j(H)$ for each $j$.
[*(Theorem \[char1\]).*]{} By definition $H$ is hyperelliptic if and only if $m_1=2$. So let us assume that $\gamma\ge 1$.
\(i) $\Rightarrow$ (ii): From Lemma \[feto0\] and (\[des-odd-1\]) we find that $u_{\gamma}\ge 6\gamma+3$ if $g\ge 5\gamma+1$. Then (ii) follows from (\[first=even\]) and (\[feto\]).
\(ii) $\Rightarrow$ (i): We claim that $d_{2\gamma+1}(H)=2$. For suppose that $d_{2\gamma+1}(H) \ge 3$. Then $6\gamma+2= m_{2\gamma+1}
\ge m_1+ 6\gamma$ and so $m_1\le 2$, a contradiction. Hence $d_{2\gamma+1}(H)\le
2$. Now suppose that $d_{2\gamma+1}(H) =1$. Then Corollary \[cor-cast\] implies $$2(6\gamma+2) = 2m_{2\gamma+1} \ge m_{6\gamma +2}.$$ But, since $g-2 \ge 6\gamma +2$, by (\[prop-sem\]) we would have $$m_{6\gamma +2} \ge 2(6\gamma+2) +1$$ which leads to a contradiction. This shows that $d_{2\gamma+1}(H)=2$. Now since $m_{2\gamma+1}=6\gamma+2$ we have that $m_\gamma \le 4\gamma$. Moreover, there exist $\gamma$ even gaps of $H$ in $[2, 6\gamma+2]$. Let $\ell$ be an even gap of $H$. The proof follows from the following claim:
$\ell <m_\gamma$.
[*(Claim).*]{} Suppose that there exists an even gap $\ell$ such that $\ell>m_\gamma$. Take the smallest $\ell$ with such a property; then the following $\gamma$ even gaps: $\ell-m_\gamma<\ldots<\ell-m_1$ belong to $[2,m_\gamma]$. Thus, since $m_1>2$, we must have $\ell-m_\gamma=2$. This implies that $H$ has $\gamma+1$ even gaps in $[2,6\gamma+2]$, namely $\ell-m_\gamma,\ldots,\ell-m_1,\ell$, a contradiction.
This finishes the proof of Theorem \[char1\].
[*(Theorem \[char2\]).*]{} The case $\gamma=0$ is trivial; so let us assume $\gamma\ge 1$.
\(i) $\Rightarrow$ (ii): Since $g\ge
5\gamma+1\ge 5t+1$ by Theorem \[char1\] we have $m_{2t+1}=6t+2$. Thus (ii) follows from Lemma \[feto0\] and (\[feto\]).
\(ii) $\Rightarrow$ (iii): From the proof of (ii) $\Rightarrow$ (i) of Theorem \[char1\] it follows that $d_{2\gamma+1}(H)=2$. Consequently, by using the hypothesis on $m_{2\gamma+1}$, and again from the mentioned proof, we have that all the even gaps of $H$ belong to $[2,m_\gamma]$. Since $m_\gamma\le 4\gamma$ we then have $\rho(H)\le \gamma$.
\(iii) $\Rightarrow$ (i) Since $g\ge 4\gamma+1\ge 4\rho(H)+1$, the proof follows from $(E_1)$ and $(E_2')$ (see §0).
[*(Theorem \[char3\]).*]{} (i) $\Rightarrow$ (ii): Similar to the proof of (i) $\Rightarrow$ (ii) of Theorem \[char1\] (here we need $g\ge
4\gamma+3$ (resp. $g\ge 4\gamma+4$) if $g$ is odd (resp. even)).
Before proving the other implications we remark that $g\le 3r-1$: in fact, if $g\ge 3r$ we would have $g\le 6\gamma+6$ (resp. $g\le 6\gamma+3$) provided $g$ even (resp. odd), a contradiction.
\(ii) $\Rightarrow$ (iii): Let $g$ be even and suppose that $m_{r+1}=g-1$. Then by Corollary \[cor-cast\] we would have $2g-2=2m_{r+1}\ge m_{3r+2}$ and hence $g-1\ge 3r+2$. This contradicts the previous remark.
\(iii) $\Rightarrow$ (i): We claim that $d_r(H)= 2$. Suppose that $d_r(H)\ge
3$. Then we would have $g-1\ge m_r \ge m_1+3(r-1) \ge 3r-1$, which contradicts the previous remark. Now suppose that $d_r(H)=1$. Then by Corollary \[cor-cast\] we would have $$2g-2\ge 2m_r \ge m_{3r-1},$$ which again contradicts the previous remark.
Thus the number of even gaps of $H$ in $[2,g-1]$ is $\gamma$, and $m_\gamma\le 4\gamma$. Let $\ell$ be an even gap of $H$. As in the proof of the Claim in Theorem \[char1\] here we also have that $\ell<m_\gamma$. Now the proof follows.
[*(Theorem \[char4\]).*]{} (i) $\Rightarrow$ (ii): By Theorem \[char3\] and since $t\le \gamma$ we have $g-2
=m_{g/2-t-1}$ or $g-1=m_{(g-1)/2-t}$. This implies (ii). The implication (ii) $\Rightarrow$ (iii) is trivial.
\(iii) $\Rightarrow$ (iv): As in the proof of Theorem \[char3\] we obtain $d_r(H)=2$. Then the number of even gaps of $H$ in $[2,g-1]$ is at most $\gamma$. We claim that all the even gaps of $H$ belong to that interval. For suppose there exists an even gap $\ell>g-1$. Choose $\ell$ the smallest one and consider the even gaps $\ell-m_r<\ldots<\ell-m_1\le g-1$. Then $r\le \gamma$, which yields $g\le 4\gamma+2$, a contradiction. Consequently $\rho(H)\le \gamma$.
The implication (iv) $\Rightarrow$ (i) follows from Theorem \[char2\].
\[sharp\] The hypothesis on the genus in the above theorems is sharp. To see this let $\gamma\ge 0$ be an integer, and let $X$ be the curve defined by the equation $$y^4=\mathop{\prod}\limits^{I}_{j=1}(x-a_j),$$ where the $a_j$'s are pairwise distinct elements of a field $k$, $I=4\gamma+3$ if $\gamma$ is odd; $I=4\gamma+5$ otherwise. Let $P$ be the unique point over $x=\infty$. Then $H(P)=\langle 4, I\rangle$ and so $g(H(P))=6\gamma+3$ (resp. $6\gamma+6$), $m_{2\gamma+1}(H(P))=
6\gamma+2$ (resp. $m_{2\gamma+2}(H(P))=6\gamma+5$), and $\rho(H(P))=2\gamma+1$ (resp. $\rho(H(P))=2\gamma+2$) provided $\gamma$ odd (resp. $\gamma$ even).
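The numerical claims of this remark are easily checked by machine for small $\gamma$; the following sketch (illustrative only) verifies the case of odd $\gamma$.

```python
def gaps_of(gens, bound):
    """Gaps of the numerical semigroup <gens>; `bound` must exceed the largest gap."""
    elems = {0}
    for n in range(1, bound + 1):
        if any(n - g in elems for g in gens if n - g >= 0):
            elems.add(n)
    return [n for n in range(1, bound + 1) if n not in elems]

gamma = 3                                       # any odd gamma works here
I = 4 * gamma + 3
gaps = gaps_of([4, I], 4 * I)                   # Frobenius number of <4,I> is 3I-4 < 4I
nongaps = sorted(set(range(1, 4 * I + 1)) - set(gaps))
assert len(gaps) == 6 * gamma + 3               # genus g(H(P))
assert nongaps[2 * gamma] == 6 * gamma + 2      # m_{2 gamma + 1}(H(P))
assert sum(1 for n in gaps if n % 2 == 0) == 2 * gamma + 1   # rho(H(P))
```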
Weight of semigroups
====================
Bounding the weight.
--------------------
Let $H$ be a semigroup of genus $g$. Set $m_j=m_j(H)$ for each $j$ and $\rho=\rho(H)$ (see (\[even-gap\])). Due to $m_g=2g$ (see (\[prop-sem\])), the weight $w(H)$ of $H$ can be computed by $$\label{weight}
w(H)=\frac{3g^2+g}{2}-\mathop{\sum}\limits^{g}_{j=1} m_j.$$ Consequently the problem of bounding $w(H)$ is equivalent to the problem of bounding $$S(H):= \sum_{j=1}^{g} m_j.$$ If $\rho=0$, then we have $m_i=2i$ for each $i=1,\ldots,g$. In particular we have $w(H)=\binom{g}{2}$. Let $\rho\ge 1$ (or equivalently $f_1\ge 4$). Then by (\[gene\]) we have $$\label{weight1}
S(H)=\sum_{f\in \tilde H,\ f\le g} 2f + \sum_{i=1}^{\rho} u_i,$$ where $$\tilde H:= \{f/2 : f\in H,\ f\ {\rm even}\}.$$
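The defining sum $\sum_i(\ell_i-i)$ and the identity just displayed are easy to compare in code; the following sketch is purely illustrative.

```python
def weight_two_ways(gens, bound):
    """Weight of H = <gens>, computed from the gaps and from
    w(H) = (3g^2+g)/2 - S(H) with S(H) = m_1 + ... + m_g."""
    elems = {0}
    for n in range(1, bound + 1):
        if any(n - g0 in elems for g0 in gens if n - g0 >= 0):
            elems.add(n)
    gaps = sorted(n for n in range(1, bound + 1) if n not in elems)
    g = len(gaps)
    m = sorted(e for e in elems if e > 0)[:g]           # m_1, ..., m_g
    w_from_gaps = sum(l - (i + 1) for i, l in enumerate(gaps))
    w_from_identity = (3 * g * g + g) // 2 - sum(m)
    return w_from_gaps, w_from_identity

w1, w2 = weight_two_ways([4, 6, 13], 200)   # arbitrary example of genus 8
assert w1 == w2 == 17
```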
\[bounds\] With the notation of §1 we have:
- If $f_1=4$, then $f_i=4i$ for $i=1,\ldots,\rho$.
- If $f_1\ge 6$, then $$f_i\ge 4i+2\ \ {\rm for}\ i=1, \ldots,\rho-2,\ \ f_{\rho-1}\ge
4\rho-4,\ \ f_{\rho}=4\rho.$$
- $f_i\le 2\rho+2i$ for each $i$.
- $2g-4j+1 \le u_j \le 2g-2j+1$, for $j=1,\ldots,\rho$.
By (\[feto\]), we have $$\tilde H= \{\frac{f_1}{2},\ldots, \frac{f_\rho}{2}\}\cup \{4\rho +i:
i\in \mathbb N \}.$$ Thus $\tilde H$ is a semigroup of genus $\rho$. Then (i) is due to the fact that $f_1/2=2$ and (ii) follows from (\[prop-sem\]). Statement (iii) follows from (\[even\]).
\(iv) The upper bound follows from $u_1\le 2g-1$. To prove the lower bound we proceed by induction on $i$. The case $i=\rho$ follows from (\[des-odd-1\]). Suppose that $u_i\ge 2g-4i+1$ but $u_{i-1} < 2g-4(i-1)+1$, for $1<i\le \rho$. Then $u_i=2g-4i+1$, $u_{i-1}= 2g-4i+3$, and there exists an odd gap $\ell$ of $H$ such that $\ell>u_{i-1}$. Take the smallest $\ell$ with such a property. Set $I:=[\ell-u_{i-1}, \ell-u_\rho]\subseteq [2,4\rho-2]$ and let $t$ be the number of non-gaps of $H$ belonging to $I$. By the choice of $\ell$ we have that $\ell-u_{i-1}<f_1$. Now, since $\ell-u_j \in I$ for $j=i-1,\ldots,\rho$ we also have that $$\frac{u_{i-1}-u_\rho}{2}+1 \ge t + \rho -(i-1)+1.$$ Thus $u_\rho \le 2g-2\rho-2i-2t+1$. Now, since $u_\rho+f_{t+1}>u_{i-1}$ and since by statement (iii) $f_{t+i-1}\le
2\rho + 2t +2i-2$, we have that the odd non-gaps $u_\rho
+f_{t+1}, \ldots, u_\rho +f_{t+i-1}$ belong to $[\ell+2,2g-1]$. This is a contradiction because $H$ would have $(\rho -i+2)+(i-1) = \rho
+1$ odd non-gaps.
\[bo-weight\] Let $H$ be a semigroup of genus $g$. With notation as in §1, we have
- $w(H)\ge \binom{g-2\rho}{2}$. Equality holds if and only if $f_1=2\rho+2$ and $u_{\rho}=2g-2\rho+1$.
- If $g\ge 2\rho$, then $w(H)\le \binom{g-2\rho}{2}+2\rho^2$. Equality holds if and only if $H=\langle 4, 4\rho+2,2g-4\rho+1\rangle$.
- If $g\le 2\rho-1$, then $w(H)\le \binom{g+2\rho}{
2}-4g-6\rho^2+8\rho $.
\(i) By (\[weight\]) we have to show that $$S(H) \le
g^2+(2\rho+1)g-2\rho^2-\rho,$$ and that the equality holds if and only if $f_1=2\rho+2$ and $u_{\rho}=2g-2\rho+1$. Both the above statements follow from Lemma \[bounds\] (i), (iv).
\(ii) Here we have to show that $$S(H)\ge g^2 + (2\rho+1)g-4\rho^2-\rho,\tag{$\dag$}$$ and that equality holds if and only if $H=\langle
4,4\rho+2,2g-4\rho+1\rangle$.
Since $g\ge 2\rho$ by (\[feto\]) we obtain $$\label{sum1}
S(H)=\sum_{i=1}^{\rho}(f_i(H)+u_i(H))+ g^2+g-4\rho^2-2\rho.$$ Thus we obtain $(\dag)$ by means of Lemma \[bounds\] (ii), (iii) and (iv). Moreover the equality in $(\dag)$ holds if and only if $\sum_{i=1}^{\rho}(f_i+u_i)=2\rho g +\rho$. Then the second part of (ii) also follows from the above mentioned results.
\(iii) In this case, due to the proof of Lemma \[feto1\], instead of equation (\[sum1\]) we have $$\label{sum2}
S(H) = \sum_{i=1}^{2g-3\rho}f_i +
\sum_{i=2g-3\rho+1}^{g-\rho}(2i+2\rho) +
\sum_{i=1}^{\rho}u_i.$$ We will see in the next remark that in this case we have $f_1\ge 6$. Thus by using Lemmas \[feto2\] and \[bounds\] (iii), (iv) we obtain $$S(H)\ge 4\rho^2-(2g+7)\rho +g^2 +5g,$$ from where it follows the proof.
\[remark3\] (i) If $f_1=4$, then $g\ge 2\rho$. This follows from the fact that the biggest even gap of $H$ is $4(\rho-1)+2$. Moreover, one can determine $u_{\rho},\ldots,u_1$ as follows: let $J\in \mathbb N$ satisfy the inequalities below $${\rm max}\{1, \frac{3\rho+2-g}{2}\}\le J\le {\rm
min}\{\rho+1,\frac{g-\rho+3}{2}\},$$ provided $g$ even, otherwise replace $J$ by $\rho-J+2$; then $$\begin{aligned}
\{u_{\rho},\ldots,u_1\} & = &
\{ 2g-4\rho+4J-7+4i: i=1,\ldots,\rho-J+1\}\\
& &\mbox{}\cup\{2g-4J+3+4i: i=1,\ldots,J-1\}.\end{aligned}$$ (see [@Ko §3], [@To Remarks 2.5]). Consequently from (\[sum1\]) and (\[weight\]) we obtain $$w(H)=\binom{g-2\rho}{2}+ 2\rho^2+4\rho+6+4J^2-(4\rho+10)J.$$ In particular we have $$\binom{g-2\rho}{2}+\rho^2-\rho\le w(H)\le
\binom{g-2\rho}{2}+2\rho^2.$$ Let $C$ be an integer such that $0\le 2C\le \rho^2+\rho$. Then $w(H)=\binom{g-2\rho}{2}+\rho^2-\rho +2C$ if and only if $4+32C$ is a square. The lower bound is attained if and only if $H=\langle
4,4\rho+2, 2g-2\rho+1,2g-2\rho+3\rangle$.
\(ii) Let $u_{\rho}=3$. Then from (\[des-odd-1\]) and Lemma \[feto2\] we find that $g\in
\{2\rho-1,2\rho,2\rho+1\}$. Moreover, in this case one can also obtain an explicit formula for $w(H)$ ([@K1 Lemma 6]). Let $g\equiv r \pmod{3}$, $r=0,1,2$ and let $s$ be an integer such that $0\le
s\le (g-r)/3$. If $r=0,1$ (resp. $r=2$), then $$\begin{aligned}
w(H) & =\frac{g(g-1)}{3}+3s^2-gs-s\le \frac{g(g-1)}{3} \\
\intertext{resp.}
w(H) & =\frac{g(g-2)}{3}+3s^2-gs+s\le \frac{g(g-2)}{3}.\end{aligned}$$ If $r=0,1$ (resp. $r=2$), equality occurs if and only if $H=\langle 3,
g+1\rangle$ (resp. $H=\langle 3, g+2,2g+1\rangle$).
Let $g\le 2\rho-1$. The way we bound equation (\[sum2\]) from below is far from being sharp. We do not know an analogue of the lower bound of Lemma \[bounds\] (iv) in this case. However, for a certain range of $g$ the bounds in \[remark3\] (ii) are the best possible:
\[opt-weight\] Let $H$ be a semigroup of genus $g\ge 11$, $r\in \{1,2,3,4,5,6\}$ such that $g\equiv r \pmod{6}$. Let $\rho$ be the number of even gaps of $H$. If $$\rho>\left\{
\begin{array}{ll}
\frac{g-5}{6} & {\rm if\ } r=5 \\
\frac{g-r}{6}-1 & {\rm if\ } r\neq 5,
\end{array}
\right.$$ then $$w(H)\le \left\{
\begin{array}{ll}
\frac{g(g-2)}{3} & {\rm if\ } r= 2,5 \\
\frac{g(g-1)}{3} & {\rm if\ } r=1,3,4,6.
\end{array}
\right.$$ If $r=2,5$ (resp. $r\not\in\{2,5\}$) equality above holds if and only if $H=\langle 3,g+2,2g+1\rangle$ (resp. $H=\langle 3,g+1\rangle$).
We assume $g\equiv 5 \pmod{6}$; the other cases can be proven in a similar way. By Remark \[remark3\] (ii) we can assume $u_1>3$, and then by (\[weight\]) we have to prove that $$S(H) > \frac{7g^2+7g}{6}.\tag{$*$}$$ Now, since $\rho>(g-5)/6$, by Theorem \[char4\] and Lemma \[feto0\] we must have $$m_{\frac{g+1}{3}}=m_{\frac{g-1}{2}-\frac{g-5}{6}}\ge g.$$ (A) Let $S':= \sum_{i} m_i$, $(g+1)/3\le i \le g$:Define $$F:= \{ i\in \mathbb N: \frac{g+1}{3}\le i\le g,\ m_i\le 2i+\frac{g-5}{3}\},$$ and let $f:= {\rm min}(F)$. Then $f\ge (g+4)/3$, $m_f=2f+\frac{g-5}{3},
m_{f-1}=
2f+\frac{g-8}{3}$. Thus for $g\ge i\ge f$, $d_i=1$ and hence by Corollary \[cor-cast\], $2m_i\ge m_{3i-1}=g+3i-1$. In particular, $f\ge (g+7)/3$. Now we bound $S'$ in three steps:
Step (i). $(g+1)/3\le i\le f-1$: By definition of $f$ we have that $m_i\ge 2i+\frac{g-2}{3}$ and hence $$\label{aux0}
\sum_{i} m_i \ge f^2+\frac{g-5}{3}f -\frac{2g^2-2g-4}{9}.$$
Step (ii). $f\le i\le (6f-g-7)/3$: Here we have that $m_i\ge
m_f+i-f=i+f+\frac{g-5}{3}$. Hence $$\sum_{i} m_i \ge \frac{5}{2}f^2-\frac{4g+37}{6}f -\frac{g^2-13g-68}{18}.$$
Step (iii). $(6f-g-4)/3\le i\le g$: Here we have $m_i+m_{i+1}\ge
g+3i+1$ for $i$ odd, $6f-g-4\le i\le g-2$. Since $m_g=2g$ then we have $$\sum_{i} m_i \ge -3f^2+6f+\frac{4g^2+2g-8}{3}.$$ (B) Let $S'':= \sum_{i} m_i$, $1\le i\le (g-2)/3$:By Theorem \[char2\] and Lemma \[feto0\] we have that $m_i\ge 3i$ for $i$ odd, $i=3,\ldots, (g-2)/3$. First we notice that for $i$ odd and $3\le i\le (g-8)/3$ we must have $m_{i+1}\ge 3i+3$. Otherwise we would have $d_{i+1}=1$ and hence by Corollary \[cor-cast\] and (\[prop-sem\]) we would have $2m_{i+1}\ge m_{3i+2}\ge 6i+5$, a contradiction.
Let $i$ odd and $3\le i\le (g-8)/3$. If $m_i=3i$ or $m_{i+1}=3i+3$, then $m_1=3$.
[*(Claim).*]{} It is enough to show that $d_i=3$ or $d_{i+1}=3$. Suppose that $m_i=3i$. Since $i$ is odd, $d_i$ is one or three. Suppose $d_i=1$. Then by Corollary \[cor-cast\] we have $6i=2m_i\ge m_{3i-1}$ and hence $6i=2m_i=m_{3i-1}$. Let $\ell \in G(H)$. Then $\ell \le m_{3i-1}+3$. In fact if $\ell>m_{3i-1}+3$, by choosing the smallest $\ell$ with such a property we would have $3i+2$ gaps in $[1, 6i]$ namely, $1,2,3,\ell-m_{3i-1},\ldots,\ell-m_1$, a contradiction. Then it follows that $g\le 3i+1+3=3i+4$ or $g+2\le 3i+4$.
Now suppose that $m_{i+1}=3i+3$; as in the previous proof here we also have that $d_{i+1}>1$. Suppose that $d_{i+1}=2$. Then $m_1>3$ and hence $m_i=3i+1$. Since we know that $m_{i+2}\ge 3i+6$, then the even number $\ell=3i+5$ is a gap of $H$. Then we would find $2i+2$ even numbers in $[2,3i+3]$, namely $m_1,\ldots,m_{i+1}$, and $\ell-m_{i+1},\ldots,\ell-m_1$, a contradiction. Hence $d_{i+1}=3$ and then $m_1=3$.
Then, since we assume $u_1>3$, we have $m_i+m_{i+1}\ge 6i+5$ for $i$ odd $3\le i\le (g-8)/3$, $m_{\frac{g-2}{3}}\ge g-2$, and so $$\label{aux}
\begin{split}
\sum_{i=1}^{(g-2)/3} m_i & \ge \sum_{j=1}^{(g-11)/6} (12j+10)
+m_1+m_2+m_{\frac{g-2}{3}} \\
& \ge \frac{g^2+g-78}{6} + m_1+m_2.\\
\end{split}$$
Summing up (i), (ii), (iii) and (B) we get $$S(H)\ge \frac{3f^2-(2g+11)f}{6}+\frac{22g^2+32g-206}{18}+m_1+m_2.$$ The function $\Gamma(x):= 3x^2-(2g+11)x$ attains its minimum for $x=(2g+11)/6<(g+7)/3\le f$. Suppose that $f\ge (g+13)/3$. Then we find $$S(H)\ge \frac{7g^2+7g-60}{6}+m_1+m_2.$$ We claim that $m_1+m_2>11$. Otherwise we would have $m_3=m_1+m_2=10$ which is impossible. From the claim we get $(*)$.
In all the computations below we use the fact that $2g\le (m-1)(n-1)$ whenever $m,n \in H$ with $\gcd(m,n)=1$ (see e.g. Jenkins [@J]).
Now suppose that $f=(g+10)/3$. Here we find $$S(H)\ge \frac{7g^2+7g-72}{6}+m_1+m_2.$$ Suppose that $m_1+m_2\le 12$ (otherwise the above computation implies $(*)$). If $g>11$, then $m_4\ge 13$ and so $m_3=m_1+m_2\in \{9,11,12\}$. If $m_1+m_2=9$, then $g\le 6$; if $m_1+m_2=11$ then $g\le 10$; if $m_1+m_2=12$ then $g\le 11$ or $m_1=4$, $m_2=8$. Let $s$ denote the first odd non-gap of $H$. Then $2g\le 3(s-1)$ and so $s>(2g+2)/3$. In the interval $[4,(2g+2)/3]$ there does not exist $h\in H$ such that $h\equiv 2 \pmod{4}$: in fact, if such an $h$ exists then we would have $4\rho+2\le (2g-4)/3$ or $\rho\le (g-5)/6$. Consequently $m_3=12,\ldots,m_{(g+1)/6}=(2g+2)/3$. Thus we can improve the computation in (\[aux\]) by adding to it $\sum_{i=1}^{j}(4i+1)$, where $j=(g-5)/12$ or $j=(g-11)/12$. Then we get $(*)$. If $g=11$, the first seven non-gaps are $\{4,8,10,12,14,15,16\}$ or $\{5,7,10,12,14,15,16\}$. In both cases the computation in (\[aux0\]) increases at least by one, and so we obtain $(*)$.
Finally let $f=(g+7)/3$. Here we find $$S(H)\ge \frac{7g^2+7g-78}{6}+m_1+m_2,$$ and we have to analyze the cases $m_1+m_2\in\{9, 11, 12, 13\}$. This can be done as in the previous case. This finishes the proof of Lemma \[opt-weight\].
The equivalence $(P_1)\Leftrightarrow (P_4)$.
---------------------------------------------
We are going to characterize $\gamma$-hyperelliptic semigroups by means of weights of semigroups. We begin with the following result, which has been proved by Garcia for $\gamma\in\{1,2\}$ [@G Lemmas 8 and 10].
\[char-weight\] Let $\gamma\in \mathbb N$ and $H$ a semigroup whose genus $g$ satisfies (\[bound-g\]). Then the following statements are equivalent:
- $H$ is $t$-hyperelliptic for some $t\in \{0,\ldots,\gamma\}$.
- $w(H)\ge \binom{g-2\gamma}{2}$.
\[char-weight1\] Let $\gamma$, $H$ and $g$ be as in Theorem \[char-weight\]. The following statements are equivalent:
- $H$ is $\gamma$-hyperelliptic.
- $\binom{g-2\gamma}{2}\le w(H)\le \binom{g-2\gamma}{2}+2\gamma^2$.
- $\binom{g-2\gamma}{2}\le w(H)<\binom{g-2\gamma+2}{2}$.
[*(Theorem \[char-weight\]).*]{} (i) $\Rightarrow$ (ii): By Lemma \[feto0\] and Lemma \[bo-weight\] (i) we have $w(H)\ge \binom{g-2t}{2}$. This implies (ii).
\(ii) $\Rightarrow$ (i): Suppose that $H$ is not $t$-hyperelliptic for any $t\in \{0,\ldots\gamma\}$. We are going to prove that $w(H)<\binom{g-2\gamma}{2}$, which by (\[weight\]) is equivalent to prove that:
$(*)$$\sum_{i=1}^{g} m_i >
g^2+(2\gamma+1)g-2\gamma^2-\gamma.$
We notice that by Lemma \[feto0\] we must have $\rho\ge \gamma+1$.
Case 1: $g$ satisfies the hypothesis of Lemma \[opt-weight\]. From that lemma we have $S(H)\ge (7g^2+5g)/6$ and then we get $(*)$ provided $$g>\bar\gamma:= \frac{12\gamma+1+\sqrt{96\gamma^2+1}}{2}.$$ We notice that $\gamma^2+4\gamma+3\ge \bar\gamma$ if $\gamma\ge 7$. For $\gamma=1,4,6$ we need respectively $g>11$, $g>44$ and $g>65$. By noticing that 11, 44 and 65 are congruent to 2 modulo 3, we can use $g=11$, $g=44$ and $g=65$ because in these cases $S(H)\ge (7g^2+7g)/6$. For the other values of $\gamma$ we obtain the bounds of (\[bound-g\]).
Case 2: $g$ does not satisfy the hypothesis of Lemma \[opt-weight\].Here we have $g\ge 6\rho+5$. From (\[sum1\]) and Lemma \[bounds\] we have $S(H)\ge
g^2+(2\rho+1)g-4\rho^2-\rho$. The function $\Gamma(\rho):= (2g-1)\rho-4\rho^2$ satisfies $$\Gamma(\rho)\ge \Gamma(\gamma+1)=(2\gamma+2)g-4\gamma^2-9\gamma-5,$$ for $\gamma+1 \le \rho\le [(2g-1)/4]-\gamma-1$. Thus we obtain condition $(*)$ provided $g\ge \gamma^2 + 4\gamma+3$.
\[oliv\] Let $H$ be a semigroup of genus $g$, $r$ the number defined in Lemma \[opt-weight\]. Put $c:= (g-5)/6$ if $r=5$, and $c:=
(g-r)/6-1$ otherwise.
From the proof of Case 2 of the above result we see that $S(H)\ge
g^2+3g-5$ whenever $1\le \rho(H)\le (g-3)/2$. Hence this result and Lemma \[opt-weight\] imply $$w(H)\le\left\{
\begin{array}{ll}
(g^2-5g+10)/2 & {\rm if\ } \rho(H)\le c\\
{\rm min}\{(g^2-5g+10)/2, (g-1)g/3\} & {\rm if\ }
c<\rho(H)\le (g-3)/2\\
(g-1)g/3 & {\rm if\ } \rho(H)>(g-3)/2.
\end{array}
\right.$$
[*(Theorem \[char-weight1\]).*]{} (i) $\Rightarrow$ (ii) follows from Lemma \[bo-weight\]. (ii) $\Rightarrow$ (iii) follows from the hypothesis on $g$.
\(iii) $\Rightarrow$ (i): By Theorem \[char-weight\] we have that $H$ is $t$-hyperelliptic for some $t\in \{0,\ldots,\gamma\}$. Then by Lemma \[bo-weight\] and hypothesis we have $$\binom{g-2\gamma+2}{2}>w(H)\ge \binom{g-2t}{2},$$ from which it follows that $t=\gamma$.
\[sharp-weight\] The hypothesis on $g$ in the above two theorems is sharp:
\(i) Let $\gamma\ge 7$ and consider $H:=\langle 4, 4(\gamma+1)+2,
2g-4(\gamma+1)+1\rangle$ where $g$ is an integer satisfying ${\rm
max}\{4\gamma+4, \frac{\gamma^2+6\gamma-3}{2}\}<g\le \gamma^2+4\gamma+2$. Then $H$ has genus $g$ and $\rho(H)=\gamma+1$. In particular $H$ is not $\gamma$-hyperelliptic. By Lemma \[bo-weight\] (ii) we have $w(H)=\binom{g-2(\gamma+1)}{2}+2(\gamma+1)^2$. Now it is easy to check that $w(H)$ satisfies Theorem \[char-weight\] (ii) and Theorem \[char-weight1\] (iii).
\(ii) Let $\gamma\le 6$ and consider $H=\langle 3,g+1\rangle$, where $g=10, 22, 33, 43, 55, 64$ if $\gamma=1,2,3,4,5,6$ respectively. $H$ has genus $g$ and it can be easily checked that $H$ is not $\gamma$-hyperelliptic by means of inequality (\[des-odd-1\]) and Lemma \[feto1\]. Moreover $w(H)=g(g-1)/3$ (see Remark \[remark3\] (ii)). Now it is easy to check that $w(H)$ satisfies Theorem \[char-weight\] (ii) and Theorem \[char-weight1\] (iii).
\(iii) The semigroups considered in (i) and (ii) are also Weierstrass semigroups (see Komeda [@Ko], Maclachlan [@Mac Thm. 4]).
Weierstrass weights
-------------------
In this section we apply Theorem \[char-weight1\] in order to characterize double coverings of curves by means of Weierstrass weights. Specifically we strengthen [@To Theorem B] and hence all its corollaries. The basic references for the discussion below are Farkas-Kra [@F-K III.5] and Stöhr-Voloch [@S-V §1].
Let $X$ be a non-singular, irreducible and projective algebraic curve of genus $g$ over an algebraically closed field $k$ of characteristic $p$. Let $\pi:X\to \mathbb P^{g-1}$ be the morphism induced by the canonical linear system on $X$. To any $P\in X$ we can associate the sequence $j_i(P)$ ($i=0,\ldots,g-1$) of intersection multiplicities at $\pi(P)$ of $\pi(X)$ with hyperplanes of $\mathbb P^{g-1}$. This sequence is the same for all but finitely many points (the so called Weierstrass points of $X$). These points are supported by a divisor $\mathcal{W}$ in such a way that the Weierstrass weight at $P$, $v_P(\mathcal{W})$, satisfies $$v_P({\mathcal{W}})\ge w(P):= \sum_{i=1}^{g-1}(j_i(P)-\epsilon_i),$$ where $0=\epsilon_0<\ldots<\epsilon_{g-1}$ is the sequence at a generic point. One has $j_i(P)\ge \epsilon_i$ for each $i$, and from the Riemann-Roch theorem follows that $G(P):=\{j_i(P)+1:i=0,\ldots,g-1\}$ is the set of gaps of a semigroup $H(P)$ of genus $g$ (the so called Weierstrass semigroup at $P$). $X$ is called [*classical*]{} if $\epsilon_i=i$ for each $i$ (e.g. if $p=0$ or $p>2g-2$). In this case we have $v_P({\mathcal{W}})=w_P(R)$ for each $P$, and the number $w(P)$ is just the weight of the semigroup $H(P)$ defined in §0. The following result strengthen [@To Thm.B]. The proof follows from [@To Thm.A], [@To1 Thm.A], and Theorem \[char-weight1\].
Let $X$ be a classical curve, and assume that $g$ satisfies (\[bound-g\]). Then the following statements are equivalent:
- $X$ is a double covering of a curve of genus $\gamma$.
- There exists $P\in X$ such that $$\binom{g-2\gamma}{2}\le w(P)\le \binom{g-2\gamma}{2}+2\gamma^2.$$
- There exists $P\in X$ such that $$\binom{g-2\gamma}{2}\le w(P)< \binom{g-2\gamma+2}{2}.$$
Remark \[sharp-weight\] says that the bound for $g$ above is the best possible. Further applications of §3.1 and §3.2 will be published elsewhere [@To2].
Arbarello, E.; Cornalba, M.; Griffiths, P.A.; Harris, J.: “Geometry of algebraic curves" Vol I, Springer-Verlag, New York 1985.
Buchweitz, R.O.:“Über deformationem monomialer kurvensingularitäten und Weierstrasspunkte auf Riemannschen flächen", Thesis, Hannover 1976.
Buchweitz, R.O.: “On Zariski’s criterion for equisingularity and non–smoothable monomial curves", Thése, Paris VII 1981.
Castelnuovo, G.: [*Ricerche di geometria sulle curve algebriche,*]{} Atti. R. Acad. Sci. Torino [**24**]{} (1889), 196–223.
Davenport, H.: [*On the addition of residue classes,*]{} J. London Math. Soc. [**10**]{} (1935), 30–32.
Davenport, H.: [*A historical note,*]{} J. London Math. Soc. [**22**]{} (1947), 100–101.
Farkas, H.M.; Kra, I.: “Riemann surfaces", Grad. Texts in Math. [**71**]{} (second edition) Springer-Verlag 1992.
Freiman, G.A.: “Foundations of a structural theory of set addition", Translations of Mathematical Monographs [**37**]{} AMS, 1973.
Garcia, A.: [*Weights of Weierstrass points in double covering of curves of genus one or two,*]{} Manuscripta Math. [**55**]{} (1986), 419–432.
Jenkins, J.A.: [*Some remarks on Weierstrass points,*]{} Proc. Amer. Math. Soc. [**44**]{}, (1974), 121–122.
Kato, T.: [*On the order of a zero of the theta function,*]{} Kodai Math. Sem. Rep. [**28**]{}, (1977), 390–407.
Kato, T.: [*Non-hyperelliptic Weierstrass points of maximal weight,*]{} Math. Ann. [**239**]{}, (1979), 141–147.
Kato, T.: [*On criteria of g-hyperellipticity,*]{} Kodai Math. J. [**2**]{}, (1979), 275–285.
Kemperman, J.H.: [*On small sumsets in an abelian group,*]{} Acta Math. [**103**]{}, (1960), 63–88.
Komeda, J.: [*On Weierstrass points whose first non-gaps are four,*]{} J. Reine. Angew. Math. [**341**]{} (1983), 68–86.
Landau, E.: “Über einige neuere Fortschritte der additiven Zahlentheorie", Cambridge Tracts 35, (second edition) 1964.
Mann, H.B.: “Addition theorems: The addition theorems of group theory and number theory", Interscience, New York, 1965.
Maclachlan, C.: [*Weierstrass points on compact Riemann surfaces,*]{} J. London Math. Soc. (2) [**3**]{} (1971), 722–724.
Oliveira, G.: [*Weierstrass semigroups and the canonical ideal of non–trigonal curves,*]{} Manuscripta Math. [**71**]{} (1991), 431–450.
Pillai, S.S.: [*On the addition of residue classes,*]{} Proc. Indian Acad. Sci. A [**7**]{} (1938), 1–4.
Rathmann, J.: [*The uniform position principle for curves in characteristic p,*]{} Math. Ann. [**276**]{} (1987), 565–579.
Stöhr, K.O.; Voloch, J.F.: [*Weierstrass points and curves over finite fields*]{}, Proc. London Math. Soc. (3), [**52**]{} (1986), 1–19.
Torres, F.: [*Weierstrass points and double coverings of curves with application: Symmetric numerical semigroups which cannot be realized as Weierstrass semigroups,*]{} Manuscripta Math. [**83**]{} (1994), 39–58.
Torres, F.: [*On certain $N$-sheeted coverings of curves and numerical semigroups which cannot be realized as Weierstrass semigroups,*]{} Comm. Algebra [**23**]{} (11) (1995), 4211–4228.
Torres, F.: [*Constellation of Weierstrass points*]{} (provisory title), in preparation.
[^1]: The author is supported by a grant from the International Atomic Energy Agency and UNESCO
---
abstract: 'Multiple hypothesis testing is a fundamental problem in high dimensional inference, with wide applications in many scientific fields. In genome-wide association studies, tens of thousands of tests are performed simultaneously to find if any SNPs are associated with some traits and those tests are correlated. When test statistics are correlated, false discovery control becomes very challenging under arbitrary dependence. In the current paper, we propose a novel method based on principal factor approximation, which successfully subtracts the common dependence and weakens significantly the correlation structure, to deal with an arbitrary dependence structure. We derive an approximate expression for false discovery proportion (FDP) in large scale multiple testing when a common threshold is used and provide a consistent estimate of realized FDP. This result has important applications in controlling FDR and FDP. Our estimate of realized FDP compares favorably with Efron (2007)’s approach, as demonstrated in the simulated examples. Our approach is further illustrated by some real data applications. We also propose a dependence-adjusted procedure, which is more powerful than the fixed threshold procedure.'
author:
- 'Jianqing Fan, Xu Han and Weijie Gu'
bibliography:
- 'amsis-ref.bib'
title: '**Estimating False Discovery Proportion Under Arbitrary Covariance Dependence[^1]** '
---
[**Keywords:**]{} Multiple hypothesis testing, high dimensional inference, false discovery rate, arbitrary dependence structure, genome-wide association studies.
Introduction
============
Multiple hypothesis testing is a fundamental problem in the modern research for high dimensional inference, with wide applications in scientific fields, such as biology, medicine, genetics, neuroscience, economics and finance. For example, in genome-wide association studies, massive amount of genomic data (e.g. SNPs, eQTLs) are collected and tens of thousands of hypotheses are tested simultaneously to find if any of these genomic data are associated with some observable traits (e.g. blood pressure, weight, some disease); in finance, thousands of tests are performed to see which fund managers have winning ability (Barras, Scaillet & Wermers 2010).
False Discovery Rate (FDR) has been introduced in the celebrated paper by Benjamini & Hochberg (1995) for large scale multiple testing. By definition, FDR is the expected proportion of falsely rejected null hypotheses among all of the rejected null hypotheses. The classification of tested hypotheses can be summarized in Table 1:
------------ --------------------- ----------------- -------
              Number not rejected   Number rejected    Total
True Null             $U$                 $V$          $p_0$
False Null            $T$                 $S$          $p_1$
Total                $p-R$                $R$           $p$
------------ --------------------- ----------------- -------

: Classification of tested hypotheses[]{data-label="ar"}
Various testing procedures have been developed for controlling FDR, among which there are two major approaches. One is to compare the $P$-values with a data-driven threshold as in Benjamini & Hochberg (1995). Specifically, let $p_{(1)} \leq p_{(2)} \leq \cdots \leq p_{(p)}$ be the ordered observed $P$-values of $p$ hypotheses. Define $k = \text{max}\Big\{i: p_{(i)} \leq i\alpha/p \Big\}$ and reject $H_{(1)}^0, \cdots, H_{(k)}^0$, where $\alpha$ is a specified control rate. If no such $i$ exists, reject no hypothesis. The other related approach is to fix a threshold value $t$, estimate the FDR, and choose $t$ so that the estimated FDR is no larger than $\alpha$ (Storey 2002). Finding such a common threshold is based on a conservative estimate of FDR. Specifically, let $\mathrm{\widehat{FDR}}(t) = \widehat{p}_0t/(R(t)\vee 1)$, where $R(t)$ is the number of total discoveries with the threshold $t$ and $\widehat{p}_0$ is an estimate of $p_0$. Then solve $t$ such that $\mathrm{\widehat{FDR}}(t)\leq\alpha$. The equivalence between the two methods has been theoretically studied by Storey, Taylor & Siegmund (2004) and Ferreira & Zwinderman (2006).
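For reference, minimal implementations of the two procedures just described might look as follows; this is an illustrative sketch (with Storey's usual $\lambda$-based estimate of $p_0$, here $\lambda=0.5$), not code from the paper.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha):
    """Step-up procedure: reject the k smallest P-values, where
    k = max{ i : p_(i) <= i*alpha/p } (and nothing if no such i exists)."""
    pvals = np.asarray(pvals)
    p = pvals.size
    order = np.argsort(pvals)
    passed = np.nonzero(pvals[order] <= np.arange(1, p + 1) * alpha / p)[0]
    reject = np.zeros(p, dtype=bool)
    if passed.size:
        reject[order[: passed[-1] + 1]] = True
    return reject

def storey_threshold(pvals, alpha, lam=0.5):
    """Fixed-threshold approach: estimate p_0 by #{p_i > lam}/(1-lam) and take
    the largest threshold t with FDRhat(t) = p0hat * t / (R(t) v 1) <= alpha."""
    pvals = np.sort(np.asarray(pvals))
    p0_hat = np.sum(pvals > lam) / (1.0 - lam)
    ok = [t for r, t in enumerate(pvals, start=1) if p0_hat * t / max(r, 1) <= alpha]
    return max(ok) if ok else 0.0
```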
Both procedures have been shown to control the FDR for independent test statistics. However, in practice, test statistics are usually correlated. Although Benjamini & Yekutieli (2001) and Clarke & Hall (2009) argued that when the null distribution of test statistics satisfies some conditions, dependence case in the multiple testing is asymptotically the same as independence case, multiple testing under general dependence structures is still a very challenging and important open problem. Efron (2007) pioneered the work in the field and noted that correlation must be accounted for in deciding which null hypotheses are significant because the accuracy of false discovery rate techniques will be compromised in high correlation situations. There are several papers that show the Benjamini-Hochberg procedure or Storey’s procedure can control FDR under some special dependence structures, e.g. Positive Regression Dependence on Subsets (Benjamini & Yekutieli 2001) and weak dependence (Storey, Taylor & Siegmund 2004). Sarkar (2002) also shows that FDR can be controlled by a generalized stepwise multiple testing procedure under positive regression dependence on subsets. However, even if the procedures are valid under these special dependence structures, they will still suffer from efficiency loss without considering the actual dependence information. In other words, there are universal upper bounds for a given class of covariance matrices.
In the current paper, we will develop a procedure for high dimensional multiple testing which can deal with any arbitrary dependence structure and fully incorporate the covariance information. This is in contrast with previous literatures which consider multiple testing under special dependence structures, e.g. Sun & Cai (2009) developed a multiple testing procedure where parameters underlying test statistics follow a hidden Markov model, and Leek & Storey (2008) and Friguet, Kloareg & Causeur (2009) studied multiple testing under the factor models. More specifically, consider the test statistics $$(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}}),$$ where ${\mbox{\boldmath $\Sigma$}}$ is known and $p$ is large. We would like to simultaneously test $H_{0i}: \mu_i=0$ vs $H_{1i}: \mu_i\neq0$ for $i=1,\cdots,p$. Note that ${\mbox{\boldmath $\Sigma$}}$ can be any non-negative definite matrix. Our procedure is called Principal Factor Approximation (PFA). The basic idea is to first take out the principal factors that derive the strong dependence among observed data $Z_1,\cdots,Z_p$ and to account for such dependence in calculation of false discovery proportion (FDP). This is accomplished by the spectral decomposition of ${\mbox{\boldmath $\Sigma$}}$ and taking out the largest common factors so that the remaining dependence is weak. We then derive the asymptotic expression of the FDP, defined as $V/R$, that accounts for the strong dependence. The realized but unobserved principal factors that derive the strong dependence are then consistently estimated. The estimate of the realized FDP is obtained by substituting the consistent estimate of the unobserved principal factors.
We are especially interested in estimating FDP under the high dimensional sparse problem, that is, $p$ is very large, but the number of $\mu_i\neq0$ is very small. In section 2, we will explain the situation under which ${\mbox{\boldmath $\Sigma$}}$ is known. Sections 3 and 4 present the theoretical results and the proposed procedures. In section 5, the performance of our procedures is critically evaluated by various simulation studies. Section 6 is about the real data analysis. All the proofs are relegated to the Appendix and the Supplemental Material.
Motivation of the Study
=======================
In genome-wide association studies, consider $p$ SNP genotype data for $n$ individual samples, and further suppose that a response of interest (i.e. gene expression level or a measure of phenotype such as blood pressure or weight) is recorded for each sample. The SNP data are conventionally stored in an $n\times p$ matrix ${\mbox{\bf X}}=(x_{ij})$, with rows corresponding to individual samples and columns corresponding to individual SNPs . The total number $n$ of samples is in the order of hundreds, and the number $p$ of SNPs is in the order of tens of thousands.
Let $X_j$ and $Y$ denote, respectively, the random variables that correspond to the $j$th SNP coding and the outcome. The biological question of the association between genotype and phenotype can be restated as a problem in multiple hypothesis testing, [*i.e.*]{}, the simultaneous tests for each SNP $j$ of the null hypothesis $H_j$ of no association between the SNP $X_j$ and $Y$. Let $\{X_j^i\}_{i=1}^n$ be the sample data of $X_j$, $\{Y^i\}_{i=1}^n$ be the independent sample random variables of $Y$. Consider the marginal linear regression between $\{Y^i\}_{i=1}^n$ and $\{X_j^i\}_{i=1}^n$: $$\label{gwj1}
(\alpha_j,\beta_j)={\mathrm{argmin}}_{a_j,b_j}\frac{1}{n}\sum_{i=1}^nE(Y^i-a_j-b_jX_j^i)^2, \ \ \ j=1,\cdots,p.$$ We wish to simultaneously test the hypotheses $$H_{0j}:\quad\beta_j=0\quad\text{vs}\quad H_{1j}:\quad\beta_j\neq0, \quad\quad j=1,\cdots,p$$ to see which SNPs are correlated with the phenotype.
Recently statisticians have increasing interests in the high dimensional sparse problem: although the number of hypotheses to be tested is large, the number of false nulls ($\beta_j\neq0$) is very small. For example, among 2000 SNPs, there are maybe only 10 SNPs which contribute to the variation in phenotypes or certain gene expression level. Our purpose is to find these 10 SNPs by multiple testing with some statistical accuracy.
Because of the sample correlations among $\{X_j^i\}_{i=1,j=1}^{i=n,j=p}$, the least-squares estimators $\{\widehat{\beta}_j\}_{j=1}^p$ for $\{\beta_j\}_{j=1}^p$ in (1) are also correlated. The following result describes the joint distribution of $\{\widehat{\beta}_j\}_{j=1}^p$. The proof is straightforward.
[Let $\widehat{\beta}_j$ be the least-squares estimator for $\beta_j$ in (1) based on $n$ data points, $s_{kl}$ be the sample correlation between $X_k$ and $X_l$. Assume that the conditional distribution of $Y^i$ given $X_1^i,\cdots,X_p^i$ is $N(\mu(X_1^i,\cdots,X_p^i),\sigma^2)$. Then, conditioning on $\{X_j^i\}_{i=1,j=1}^{i=n,j=p}$, the joint distribution of $\{\widehat{\beta}_j\}_{j=1}^p$ is $(\widehat{\beta}_1,\cdots,\widehat{\beta}_p)^T\sim N((\beta_1,\cdots,\beta_p)^T,{\mbox{\boldmath $\Sigma$}}^*)$, where the $(k,l)$th element in ${\mbox{\boldmath $\Sigma$}}^*$ is $\displaystyle{\mbox{\boldmath $\Sigma$}}_{kl}^*=\sigma^2s_{kl}/(ns_{kk}s_{ll})$.]{}
For ease of notation, let $Z_1,\cdots,Z_p$ be the standardized random variables of $\widehat{\beta}_1,\cdots,\widehat{\beta}_p$, that is, $$\label{b2}
Z_i=\frac{\widehat{\beta}_i}{{\mbox{SD}}(\widehat{\beta}_i)}=\frac{\widehat{\beta}_i}{\sigma/(\sqrt{n}s_{ii})}, \quad\quad i=1,\cdots,p.$$ In the above, we implicitly assume that $\sigma$ is known and the above standardized random variables are z-test statistics. The estimate of residual variance $\sigma^2$ will be discussed in Section 6 via refitted cross-validation (Fan, Guo & Hao, 2011). Then, conditioning on $\{X_j^i\}$, $$\label{c1}
(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}}),$$ where $\mu_i=\sqrt{n}\beta_is_{ii}/\sigma$ and covariance matrix ${\mbox{\boldmath $\Sigma$}}$ has the $(k,l)$th element as $s_{kl}$. Simultaneously testing (2) based on $(\widehat{\beta}_1,\cdots,\widehat{\beta}_p)^T$ is thus equivalent to testing
$$\label{c2}
H_{0j}:\quad\mu_j=0\quad\text{vs}\quad H_{1j}:\quad \mu_j\neq0, \quad\quad j=1,\cdots,p$$
based on $(Z_1,\cdots,Z_p)^T$.
In (4), ${\mbox{\boldmath $\Sigma$}}$ is the population covariance matrix of $(Z_1,\cdots,Z_p)^T$, and is known based on the sample data $\{X_j^i\}$. The covariance matrix ${\mbox{\boldmath $\Sigma$}}$ can have arbitrary dependence structure. We would like to clarify that ${\mbox{\boldmath $\Sigma$}}$ is known and there is no estimation of the covariance matrix of $X_1,\cdots,X_p$ in this set up.
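In code, the construction of the test statistics might be sketched as follows. This is illustrative only: $\sigma$ is treated as known, and $s_{jj}$ is read as the sample standard deviation of the $j$th column, which is our reading of the display above.

```python
import numpy as np

def marginal_z_stats(X, y, sigma):
    """Z_j = betahat_j / SD(betahat_j), where betahat_j is the marginal
    least-squares slope of y on the j-th column of X and
    SD(betahat_j) = sigma / (sqrt(n) * s_jj), with s_jj the sample
    standard deviation of that column (our reading of the display above)."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)              # centering removes the intercept alpha_j
    yc = y - y.mean()
    s = np.sqrt((Xc ** 2).mean(axis=0))
    beta_hat = Xc.T @ yc / (n * s ** 2)
    return np.sqrt(n) * s * beta_hat / sigma
```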
Estimating False Discovery Proportion
=====================================
From now on assume that among all the $p$ null hypotheses, $p_0$ of them are true and $p_1$ hypotheses ($p_1 = p - p_0$) are false, and $p_1$ is supposed to be very small compared to $p$. For ease of presentation, we let $p$ be the sole asymptotic parameter, and assume $p_0\rightarrow\infty$ when $p\rightarrow\infty$. For a fixed rejection threshold $t$, we will reject those $P$-values no greater than $t$ and regard them as statistically significant. Because of its powerful applicability, this procedure has been widely adopted; see, e.g., Storey (2002), Efron (2007, 2010), among others. Our goal is to estimate the realized FDP for a given $t$ in multiple testing problem (\[c2\]) based on the observations (\[c1\]) under arbitrary dependence structure of ${\mbox{\boldmath $\Sigma$}}$. Our methods and results have direct implications for the situation in which ${\mbox{\boldmath $\Sigma$}}$ is unknown, the setting studied by Efron (2007, 2010) and Friguet et al (2009). In the latter case, ${\mbox{\boldmath $\Sigma$}}$ needs to be estimated with certain accuracy.
Approximation of FDP
--------------------
Define the following empirical processes: $$\begin{aligned}
V(t) & = & \#\{true \ null \ P_i: P_i \leq t\}, \nonumber\\
S(t) & = & \#\{false \ null \ P_i: P_i \leq t\} \quad \text{and} \nonumber\\
R(t) & = & \#\{P_i: P_i \leq t\}, \nonumber\end{aligned}$$ where $t\in[0,1]$. Then, $V(t)$, $S(t)$ and $R(t)$ are the number of false discoveries, the number of true discoveries, and the number of total discoveries, respectively. Obviously, $R(t) = V(t) + S(t)$, and $V(t)$, $S(t)$ and $R(t)$ are all random variables, depending on the test statistics $(Z_1,\cdots,Z_p)^T$. Moreover, $R(t)$ is observed in an experiment, but $V(t)$ and $S(t)$ are both unobserved.
By definition, ${\mbox{FDP}}(t)=V(t)/R(t)$ and ${\mbox{FDR}}(t) = E\Big[V(t)/R(t)\Big]$. One goal is to control FDR$(t)$ at a predetermined rate $\alpha$, say $15\%$. There are also substantial research interests in the statistical behavior of the number of false discoveries $V(t)$ and the false discovery proportion ${\mbox{FDP}}(t)$, which are unknown but realized given an experiment. One may even argue that controlling FDP is more relevant, since it is directly related to the current experiment.
We now approximate $V(t)/R(t)$ for the high dimensional sparse case $p_1\ll p$. Suppose $(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}})$ in which ${\mbox{\boldmath $\Sigma$}}$ is a correlation matrix. We need the following definition for weakly dependent normal random variables; other definitions can be found in Farcomeni (2007).
Suppose $(K_1,\cdots,K_p)^T\sim N((\theta_1,\cdots,\theta_p)^T,{\mbox{\bf A}})$. Then $K_1,\cdots,K_p$ are called weakly dependent normal variables if $$\lim_{p\rightarrow\infty}p^{-2}\sum_{i,j}|a_{ij}|=0,$$ where $a_{ij}$ denote the $(i,j)$th element of the covariance matrix ${\mbox{\bf A}}$.
Our procedure is called principal factor approximation (PFA). The basic idea is to decompose any dependent normal random vector as a factor model with weakly dependent normal random errors. The details are shown as follows. Firstly apply the spectral decomposition to the covariance matrix ${\mbox{\boldmath $\Sigma$}}$. Suppose the eigenvalues of ${\mbox{\boldmath $\Sigma$}}$ are $\lambda_1,\cdots,\lambda_{p}$, which have been arranged in decreasing order. If the corresponding orthonormal eigenvectors are denoted as ${\mbox{\boldmath $\gamma$}}_1,\cdots,{\mbox{\boldmath $\gamma$}}_p$, then $${\mbox{\boldmath $\Sigma$}}=\sum_{i=1}^p\lambda_i{\mbox{\boldmath $\gamma$}}_i{\mbox{\boldmath $\gamma$}}_i^T.$$ If we further denote ${\mbox{\bf A}}= \sum_{i=k+1}^p\lambda_i{\mbox{\boldmath $\gamma$}}_i{\mbox{\boldmath $\gamma$}}_i^T$ for an integer $k$, then $$\|{\mbox{\bf A}}\|_F^2 =\lambda_{k+1}^2+\cdots+\lambda_p^2,$$ where $\|\cdot\|_F$ is the Frobenius norm. Let ${\mbox{\bf L}}=(\sqrt{\lambda_1}{\mbox{\boldmath $\gamma$}}_1,\cdots,\sqrt{\lambda_k}{\mbox{\boldmath $\gamma$}}_k)$, which is a $p\times k$ matrix. Then the covariance matrix ${\mbox{\boldmath $\Sigma$}}$ can be expressed as $$\label{b19}
{\mbox{\boldmath $\Sigma$}}={\mbox{\bf L}}{\mbox{\bf L}}^T+{\mbox{\bf A}},$$ and $Z_1,\cdots,Z_p$ can be written as $$\label{b20}
Z_i=\mu_i+{\mbox{\bf b}}_i^T{\mbox{\bf W}}+K_i, \quad\quad i=1,\cdots,p,$$ where ${\mbox{\bf b}}_i=(b_{i1},\cdots,b_{ik})^T$, $(b_{1j},\cdots,b_{pj})^T=\sqrt{\lambda_j}{\mbox{\boldmath $\gamma$}}_j$, the factors are ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$ $\sim N_k(0,{\mbox{\bf I}}_k)$ and the random errors are $ (K_1,\cdots,K_p)^T$ $\sim N(0,{\mbox{\bf A}})$. Furthermore, $W_1,\cdots,W_k$ are independent of each other and independent of $K_1,\cdots,K_p$. Changing the underlying probability measure if necessary, we can assume that (\[b20\]) is the data-generating process. In expression (\[b20\]), $\{\mu_i=0\}$ correspond to the true null hypotheses, while $\{\mu_i\neq0\}$ correspond to the false ones. Note that although (\[b20\]) is not exactly a classical multifactor model because of the existence of dependence among $K_1,\cdots,K_p$, we can nevertheless show that $(K_1,\cdots,K_p)^T$ is a weakly dependent vector if the number of factors $k$ is appropriately chosen.
We now discuss how to choose $k$ such that $(K_1,\cdots,K_p)^T$ is weakly dependent. Denote by $a_{ij}$ the $(i,j)$th element in the covariance matrix ${\mbox{\bf A}}$. If we have $$\label{c3}
p^{-1}(\lambda_{k+1}^2+\cdots+\lambda_p^2)^{1/2} \longrightarrow 0 \ \text{as} \ p\rightarrow\infty,$$ then by the Cauchy-Schwarz inequality $$p^{-2}\sum_{i,j}|a_{ij}|\leq p^{-1}\|{\mbox{\bf A}}\|_F=p^{-1}(\lambda_{k+1}^2+\cdots+\lambda_p^2)^{1/2}\longrightarrow0 \ \text{as} \ p\rightarrow\infty.$$ Note that $\sum_{i=1}^p\lambda_i=tr({\mbox{\boldmath $\Sigma$}})=p$, so that (\[c3\]) is self-normalized. Note also that the left hand side of (\[c3\]) is bounded by $p^{-1/2}\lambda_{k+1}$, which tends to zero whenever $\lambda_{k+1}=o(p^{1/2})$. Therefore, we can assume that $\lambda_k>cp^{1/2}$ for some $c>0$. In particular, if $\lambda_1=o(p^{1/2})$, the matrix ${\mbox{\boldmath $\Sigma$}}$ is weakly dependent and we can take $k=0$. In practice, we always choose the smallest $k$ such that $$\frac{\sqrt{\lambda_{k+1}^2+\cdots+\lambda_{p}^2}}{\lambda_1+\cdots+\lambda_p} < \varepsilon$$ holds for a predetermined small $\varepsilon$, say, $0.01$.
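The decomposition and the data-driven choice of $k$ described above can be sketched in a few lines of Python. This is a hedged illustration assuming only that ${\mbox{\boldmath $\Sigma$}}$ is a known correlation matrix; the function name is ours.

```python
# Sketch of the spectral decomposition Sigma = L L^T + A and the choice of k.
import numpy as np

def principal_factors(Sigma, eps=0.01):
    lam, Gamma = np.linalg.eigh(Sigma)       # eigenvalues in ascending order
    lam, Gamma = lam[::-1], Gamma[:, ::-1]   # reorder so lambda_1 >= ... >= lambda_p
    lam = np.clip(lam, 0.0, None)            # guard against tiny negative eigenvalues
    total = lam.sum()                        # = trace(Sigma) = p for a correlation matrix
    # tail2[k] = lambda_{k+1}^2 + ... + lambda_p^2, for k = 0, ..., p (0-based)
    tail2 = np.append(np.cumsum((lam ** 2)[::-1])[::-1], 0.0)
    k = int(np.argmax(np.sqrt(tail2) / total < eps))   # smallest k meeting the criterion
    L = Gamma[:, :k] * np.sqrt(lam[:k])      # p x k loading matrix; row i is b_i^T
    A = Sigma - L @ L.T                      # covariance of the weakly dependent errors
    return k, L, A
```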
**Theorem 1.** Suppose $(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}})$. Choose an appropriate $k$ such that $$(C0) \ \ \ \ \ \ \ \ \frac{\sqrt{\lambda_{k+1}^2+\cdots+\lambda_{p}^2}}{\lambda_1+\cdots+\lambda_p}=O(p^{-\delta}) \ \ \ \text{for} \ \ \delta>0.$$ Let $\sqrt{\lambda_j}{\mbox{\boldmath $\gamma$}}_j=(b_{1j},\cdots,b_{pj})^T$ for $j=1,\cdots,k$. Then $$\label{a50}
\lim_{p\rightarrow\infty}\Big\{\mathrm{FDP}(t)-\frac{\sum_{i\in\text{\{true null\}}}\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]}{\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\Big]}\Big\}=0 \ \ \text{a.s.},$$ where $a_i = (1-\sum_{h=1}^kb_{ih}^2)^{-1/2}$, $\eta_i = {\mbox{\bf b}}_i^T{\mbox{\bf W}}$ with ${\mbox{\bf b}}_i=(b_{i1},\cdots,b_{ik})^T$ and ${\mbox{\bf W}}\sim N_k(0,{\mbox{\bf I}}_k)$ in (\[b20\]), and $\Phi(\cdot)$ and $z_{t/2}=\Phi^{-1}(t/2)$ are the cumulative distribution function and the $t/2$ lower quantile of a standard normal distribution, respectively.
Note that condition (C0) implies that $K_1,\cdots,K_p$ are weakly dependent random variables, and in fact requires that (\[c3\]) converge to zero at a polynomial rate in $p$.
Theorem 1 gives an asymptotic approximation for FDP$(t)$ under a general dependence structure. To the best of our knowledge, it is the first result to explicitly spell out the impact of dependence. It is also closely connected with the existing results for the independence and weak dependence cases. If we let $b_{ih}=0$ for $i=1,\cdots,p$ and $h=1,\cdots,k$ in (\[b20\]) and let $K_1,\cdots,K_p$ be weakly dependent or independent normal random variables, then Theorem 1 reduces to the weak dependence case or the independence case, respectively. In these two specific cases, the numerator of (\[a50\]) is just $p_0t$. Storey (2002) used an estimate of $p_0$, resulting in an estimator of ${\mbox{FDP}}(t)$ of the form $\widehat{p}_0t/R(t)$. This estimator has been shown to control the false discovery rate under independence and weak dependence. However, under general dependence, Storey’s procedure will not work well because it ignores the correlation effect among the test statistics, as shown by (\[a50\]). Further discussion of the relationship between our result and other leading research on multiple testing under dependence is given in Section 3.4.
The results in Theorem 1 can be better understood by some special dependence structures as follows. These specific cases are also considered by Roquain & Villers (2010), Finner, Dickhaus & Roters (2007) and Friguet, Kloareg & Causeur (2009) under somewhat different settings.
**Example 1: \[Equal Correlation\]** If ${\mbox{\boldmath $\Sigma$}}$ has $\rho_{ij}=\rho\in[0,1)$ for $i\neq j$, then we can write $$Z_i=\mu_i+\sqrt{\rho}W+\sqrt{1-\rho}K_i \ \ \ i=1,\cdots,p$$ where $W\sim N(0,1)$, $K_i\sim N(0,1)$, and $W$ and all $K_i$’s are independent of each other. Thus, we have $$\lim_{p\rightarrow\infty}\Big[\mathrm{FDP}(t)-\frac{p_0\Big[\Phi(d(z_{t/2}+\sqrt{\rho}W))+\Phi(d(z_{t/2}-\sqrt{\rho}W))\Big]}{\sum_{i=1}^p\Big[\Phi(d(z_{t/2}+\sqrt{\rho}W+\mu_i))+\Phi(d(z_{t/2}-\sqrt{\rho}W-\mu_i))\Big]}\Big]=0 \ \ \text{a.s.},$$ where $d=(1-\rho)^{-1/2}$.
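The following Python sketch (with hypothetical values $p=1000$, $p_0=990$, $\rho=0.5$ and $\mu_i=2$ for the false nulls) compares one realized ${\mbox{FDP}}(t)$ under the equal correlation model with the $W$-conditional approximation displayed above; it is meant only to illustrate Theorem 1 in this special case.

```python
# Monte Carlo check of the Example 1 approximation; hypothetical parameters.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p, p0, rho, t = 1000, 990, 0.5, 0.01
mu = np.zeros(p); mu[p0:] = 2.0            # 10 false nulls with mu_i = 2
d = (1.0 - rho) ** -0.5
z_half = norm.ppf(t / 2.0)                 # z_{t/2}, a negative number

W = rng.standard_normal()                  # the single common factor
K = rng.standard_normal(p)
Z = mu + np.sqrt(rho) * W + np.sqrt(1 - rho) * K
pvals = 2 * norm.cdf(-np.abs(Z))

V = np.sum(pvals[:p0] <= t)                # false discoveries (true nulls are the first p0)
R = np.sum(pvals <= t)
fdp_true = V / R if R > 0 else 0.0

num = p0 * (norm.cdf(d * (z_half + np.sqrt(rho) * W)) +
            norm.cdf(d * (z_half - np.sqrt(rho) * W)))
den = np.sum(norm.cdf(d * (z_half + np.sqrt(rho) * W + mu)) +
             norm.cdf(d * (z_half - np.sqrt(rho) * W - mu)))
print(f"realized FDP = {fdp_true:.3f}, approximation = {num / den:.3f}")
```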
Note that Delattre & Roquain (2011) studied the FDP in the particular case of equal correlation. They provided a slightly different decomposition of $\{Z_i\}_{i=1}^p$ in the proof of their Lemma 3.3, where the errors $K_i$ sum to 0. Theorem 2.1 of Finner, Dickhaus & Roters (2007) also gives a result similar to Theorem 1 for the equal correlation case.
**Example 2: \[Multifactor Model\]** Consider a multifactor model: $$\label{c4}
Z_i=\mu_i+\eta_i+a_i^{-1}K_i, \quad\quad i=1,\cdots,p,$$ where $\eta_i$ and $a_i$ are defined in Theorem 1 and $K_i\sim N(0,1)$ for $i=1,\cdots,p$. All the $W_h$’s and $K_i$’s are independent of each other. In this model, $W_1,\cdots,W_k$ are the $k$ common factors. By Theorem 1, expression (\[a50\]) holds.
Note that the covariance matrix for model (\[c4\]) is $${\mbox{\boldmath $\Sigma$}}= {\mbox{\bf L}}{\mbox{\bf L}}^T + {\mathrm{diag}}(a_1^{-2}, \cdots, a_p^{-2}).$$ When $\{a_j\}$ is not constant, the columns of ${\mbox{\bf L}}$ are not necessarily eigenvectors of ${\mbox{\boldmath $\Sigma$}}$. In other words, when principal component analysis is used, the decomposition (\[b19\]) can yield a different ${\mbox{\bf L}}$ and condition (\[c3\]) can require a different value of $k$. In this sense, there is a subtle difference between our approach and that of Friguet, Kloareg & Causeur (2009) when principal component analysis is used. Theorem 1 should be understood as a result for any decomposition (\[b19\]) that satisfies condition (C0). Because we use principal components as approximate factors, our procedure is called principal factor approximation. In practice, if one knows that the test statistics come from a factor model structure, a multiple testing procedure based on that factor model is preferable. However, when such a factor structure is not clear, our procedure can deal with an arbitrary covariance dependence.
Since ${\mbox{FDP}}(t)$ is bounded by 1, taking expectations on both sides of equation (\[a50\]) in Theorem 1 and applying the Portmanteau lemma, we obtain the convergence of FDR:
Under the assumptions in Theorem 1, $$\label{a51}
\lim_{p\rightarrow\infty}\Big\{\mathrm{FDR}(t)-E\Big[\frac{\sum_{i\in\text{\{true null\}}}\Big\{\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big\}}{\sum_{i=1}^p\Big\{\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\Big\}}\Big]\Big\}=0.$$
The expectation on the right hand side of (\[a51\]) is with respect to standard multivariate normal variables $(W_1,\cdots,W_k)^T\sim N_k(0,{\mbox{\bf I}}_k)$.
The proof of Theorem 1 is based on the following result.
**Proposition 2.** Under the assumptions in Theorem 1, $$\begin{aligned}
\lim_{p\rightarrow\infty}\Big[p^{-1}R(t)-p^{-1}\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\Big]\Big]=0 \ \ \text{a.s.}, \label{a54}\\
\lim_{p\rightarrow\infty}\Big[p_0^{-1}V(t)-p_0^{-1}\sum_{i\in\text{\{true null\}}}\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]\Big]=0 \ \ \text{a.s.}. \label{a54a}\end{aligned}$$
The proofs of Theorem 1 and Proposition 2 are shown in the Appendix.
Estimating Realized FDP
-----------------------
In Theorem 1 and Proposition 2, the summation over the set of true null hypotheses is unknown. However, due to the high dimensionality and sparsity, both $p$ and $p_0$ are large and $p_1$ is relatively small. Therefore, we can use $$\label{a52}
\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]$$ as a conservative surrogate for $$\label{a53}
\sum_{i\in\text{\{true null\}}}\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big].$$ Since only $p_1$ extra terms are included in (\[a52\]), the substitution is accurate enough for many applications.
Recall that ${\mbox{FDP}}(t)=V(t)/R(t)$, in which $R(t)$ is observable and known. Thus, only the realization of $V(t)$ is unknown. The mean of $V(t)$ is $E\Big[\sum_{i\in\text{\{true null\}}}I(P_i\leq t)\Big]=p_0t$, since the $P$-values corresponding to the true null hypotheses are uniformly distributed. However, the dependence structure affects the variance of $V(t)$, which can be much larger than the binomial variance $p_0 t (1-t)$. Owen (2005) studied the variance of the number of false discoveries theoretically. In our framework, expression (\[a52\]) is a function of i.i.d. standard normal variables. Given $t$, the variance of (\[a52\]) can be obtained by simulation, and hence the variance of $V(t)$ can be approximated via (\[a52\]). Relevant simulation studies will be presented in Section 5.
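A hedged sketch of this Monte Carlo approximation to the variance of $V(t)$ is given below. It assumes that the loading matrix ${\mbox{\bf L}}$ (whose $i$th row is ${\mbox{\bf b}}_i^T$) has already been obtained from the spectral decomposition; the function name is ours.

```python
# Monte Carlo variance of expression (a52), as an approximation to var(V(t)).
import numpy as np
from scipy.stats import norm

def approx_var_V(L, t, n_mc=10000, seed=0):
    rng = np.random.default_rng(seed)
    p, k = L.shape
    a = 1.0 / np.sqrt(np.clip(1.0 - np.sum(L ** 2, axis=1), 1e-12, None))  # a_i
    z_half = norm.ppf(t / 2.0)
    vals = np.empty(n_mc)
    for m in range(n_mc):
        eta = L @ rng.standard_normal(k)                 # eta_i = b_i^T W, W ~ N_k(0, I_k)
        vals[m] = np.sum(norm.cdf(a * (z_half + eta)) +
                         norm.cdf(a * (z_half - eta)))   # one draw of expression (a52)
    return vals.var()
```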
In recent years, there has been substantial interest in the realized random variable ${\mbox{FDP}}$ itself in a given experiment, rather than in controlling FDR, since one is usually concerned with the number of false discoveries given the observed sample of test statistics, rather than with an average of FDP over hypothetical replications of the experiment. See Genovese & Wasserman (2004), Meinshausen (2005), Efron (2007), Friguet et al (2009), etc. In our problem, by Proposition 2 together with the conservative surrogate (\[a52\]), $V(t)$ is well approximated by $$\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big].$$ Let $$\mathrm{FDP_A}(t)=\Big(\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]\Big)/R(t),$$ if $R(t)\neq0$ and $\mathrm{FDP_A}(t)=0$ when $R(t)=0$. Given observations $z_1,\cdots,z_p$ of the test statistics $Z_1,\cdots,Z_p$, if the unobserved but realized factors $W_1,\cdots,W_k$ can be estimated by $\widehat{W}_1,\cdots,\widehat{W}_k$, then we can obtain an estimator of $\mathrm{FDP_A}(t)$ by $$\label{b21}
\widehat{{\mbox{FDP}}}(t)=\min\Big(\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\widehat{\eta}_i))+\Phi(a_i(z_{t/2}-\widehat{\eta}_i))\Big],R(t)\Big)/R(t),$$ when $R(t)\neq0$ and $\widehat{{\mbox{FDP}}}(t)=0$ when $R(t)=0$. Note that in (\[b21\]), $\widehat{\eta}_i=\sum_{h=1}^kb_{ih}\widehat{W}_h$ is an estimator for $\eta_i={\mbox{\bf b}}_i^T{\mbox{\bf W}}$.
The following procedure is one practical way to estimate ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$ based on the data. For observed values $z_1,\cdots,z_p$, we choose the smallest $90\%$ of $|z_i|$’s, say. For ease of notation, assume the first $m$ $z_i$’s have the smallest absolute values. Then approximately $$\label{b22}
Z_i={\mbox{\bf b}}_i^T{\mbox{\bf W}}+K_i,\quad i=1,\cdots,m.$$ The approximation from (\[b20\]) to (\[b22\]) stems from the intuition that large $|\mu_i|$’s tend to produce large $|z_i|$’s, since $Z_i\sim N(\mu_i,1)$ for $1\leq i\leq p$, and the sparsity of $\{\mu_i\}$ makes the approximation errors negligible. Finally, we apply robust $L_1$-regression to the equation set (\[b22\]) and obtain the least-absolute-deviation estimates $\widehat{W}_1,\cdots,\widehat{W}_k$. We use $L_1$-regression rather than $L_2$-regression because there might be nonzero $\mu_i$’s involved in (\[b22\]), and $L_1$ is more robust to outliers than $L_2$. Other possible methods include using a penalized method such as the LASSO or SCAD to explore the sparsity. For example, one can minimize $$\sum_{i=1}^p (Z_i - \mu_i - {\mbox{\bf b}}_i^T{\mbox{\bf W}})^2 + \sum_{i=1}^p p_\lambda(|\mu_i|)$$ with respect to $\{\mu_i\}_{i=1}^p$ and ${\mbox{\bf W}}$, where $p_\lambda(\cdot)$ is a folded-concave penalty function (Fan and Li, 2001).
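A possible implementation of the estimator (\[b21\]) is sketched below: the least-absolute-deviation fit is written as a linear program, the $90\%$ smallest $|z_i|$ are used to estimate the realized factors, and the estimated $\widehat{\eta}_i$ are plugged into the upper-bound formula. This is an illustrative sketch under the stated setup, not the authors' code; the helper names are ours.

```python
# Sketch of the FDP estimator (b21) via L1 (least-absolute-deviation) regression.
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

def lad_fit(B, z):
    """L1 regression: argmin_w sum_i |z_i - b_i^T w|, written as a linear program."""
    m, k = B.shape
    c = np.concatenate([np.zeros(k), np.ones(2 * m)])       # minimize sum(u) + sum(v)
    A_eq = np.hstack([B, np.eye(m), -np.eye(m)])             # B w + u - v = z
    bounds = [(None, None)] * k + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    return res.x[:k]

def estimate_fdp(z, L, t, keep=0.90):
    p, k = L.shape
    a = 1.0 / np.sqrt(np.clip(1.0 - np.sum(L ** 2, axis=1), 1e-12, None))   # a_i
    idx = np.argsort(np.abs(z))[: int(keep * p)]     # smallest 90% of |z_i|
    W_hat = lad_fit(L[idx], z[idx])                  # estimated realized factors
    eta_hat = L @ W_hat                              # eta_hat_i = b_i^T W_hat
    z_half = norm.ppf(t / 2.0)
    R = np.sum(2 * norm.cdf(-np.abs(z)) <= t)        # total discoveries
    if R == 0:
        return 0.0
    V_hat = np.sum(norm.cdf(a * (z_half + eta_hat)) +
                   norm.cdf(a * (z_half - eta_hat))) # plug-in for V(t)
    return min(V_hat, R) / R                         # estimator (b21)
```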
The estimator (\[b21\]) performs significantly better than Efron (2007)’s estimator in our simulation studies. One difference is that in our setting ${\mbox{\boldmath $\Sigma$}}$ is known. The other is that we give a better approximation as shown in Section 3.4.
Efron (2007) proposed the concept of conditional FDR. Consider $E(V(t))/R(t)$ as one type of FDR definitions (see Efron (2007) expression (46)). The numerator $E(V(t))$ is over replications of the experiment, and equals a constant $p_0t$. But if the actual correlation structure in a given experiment is taken into consideration, then Efron (2007) defines the conditional FDR as $E(V(t)|A)/R(t)$ where $A$ is a random variable which measures the dependency information of the test statistics. Estimating the realized value of $A$ in a given experiment by $\widehat{A}$, one can have the estimated conditional FDR as $E(V(t)|\widehat{A})/R(t)$. Following Efron’s proposition, Friguet et al (2009) gave the estimated conditional FDR by $E(V(t)|\widehat{{\mbox{\bf W}}})/R(t)$ where $\widehat{{\mbox{\bf W}}}$ is an estimate of the realized random factors ${\mbox{\bf W}}$ in a given experiment.
Our estimator in (\[b21\]) for the realized FDP in a given experiment can be understood as an estimate of conditional FDR. Note that (\[a53\]) is actually $E(V(t)|\{\eta_i\}_{i=1}^p)$. By Proposition 2, we can approximate $V(t)$ by $E(V(t)|\{\eta_i\}_{i=1}^p)$. Thus the estimate of conditional FDR $E(V(t)|\{\widehat{\eta}_i\}_{i=1}^p)/R(t)$ is directly an estimate of the realized FDP $V(t)/R(t)$ in a given experiment.
Asymptotic Justification
------------------------
Let ${\mbox{\bf w}}=(w_1,\cdots,w_k)^T$ be the realized values of $\{W_h\}_{h=1}^k$, and $\widehat{{\mbox{\bf w}}}$ be an estimator for ${\mbox{\bf w}}$. We now show in Theorem 2 that $\widehat{{\mbox{FDP}}}(t)$ in (\[b21\]) based on a consistent estimator $\widehat{{\mbox{\bf w}}}$ has the same convergence rate as $\widehat{{\mbox{\bf w}}}$ under some mild conditions.
**Theorem 2.** If the following conditions are satisfied:
- (C1) $R(t)/p>H$ for $H>0$ as $p\rightarrow\infty$,
- (C2) $\min_{1\leq i\leq p}\min(|z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}|,|z_{t/2}-{\mbox{\bf b}}_i^T{\mbox{\bf w}}|)\geq \tau>0$,
- (C3) $\|\widehat{{\mbox{\bf w}}}-{\mbox{\bf w}}\|_2=O_p(p^{-r})$ for some $r>0$,
then $|\mathrm{\widehat{FDP}}(t)-\mathrm{FDP_A}(t)|=O(\|{\widehat {\mbox{\bf w}}}-{\mbox{\bf w}}\|_2)$.
In Theorem 2, (C2) is a reasonable condition because $z_{t/2}$ is a large negative number when the threshold $t$ is small, and ${\mbox{\bf b}}_i^T{\mbox{\bf w}}$ is a realization from a normal distribution $N(0, \sum_{h=1}^kb_{ih}^2)$ with $\sum_{h=1}^kb_{ih}^2<1$. Thus neither $z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}$ nor $z_{t/2}-{\mbox{\bf b}}_i^T{\mbox{\bf w}}$ is likely to be close to zero.
Theorem 3 shows the asymptotic consistency of the $L_1$-regression estimator under model (\[b22\]). Portnoy (1984b) proved asymptotic consistency for robust regression estimation when the random errors are i.i.d.; however, his proof does not work here because of the weak dependence of the random errors. Our result allows $k$ to grow with $m$, even faster than the rate $o(m^{1/4})$ imposed by Portnoy (1984b).
**Theorem 3.** Suppose (\[b22\]) is a correct model. Let ${\widehat {\mbox{\bf w}}}$ be the $L_1-$regression estimator: $${\widehat {\mbox{\bf w}}}\equiv {\mathrm{argmin}}_{{\mbox{\boldmath $\beta$}}\in R^k}\sum_{i=1}^m|Z_i-{\mbox{\bf b}}_i^T{\mbox{\boldmath $\beta$}}|$$ where ${\mbox{\bf b}}_i=(b_{i1},\cdots,b_{ik})^T$. Let ${\mbox{\bf w}}=(w_1,\cdots,w_k)^T$ be the realized values of $\{W_h\}_{h=1}^k$. Suppose $k=O(m^{\kappa})$ for $0\leq\kappa<1-\delta$. Under the assumptions
- (C4) $\sum_{j=k+1}^p\lambda_j^2\leq\eta$ for $\eta=O(m^{2\kappa})$,
- (C5) $$\lim_{m\rightarrow\infty}\sup_{\|{\mbox{\bf u}}\|=1}m^{-1}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)=0$$ for a constant $d>0$,
- (C6) $a_{\max}/a_{\min}\leq S$ for some constant $S$ when $m\rightarrow\infty$, where $1/a_i$ is the standard deviation of $K_i$,
- (C7) $a_{\min}=O(m^{(1-\kappa)/2})$.
We have $\|{\widehat {\mbox{\bf w}}}-{\mbox{\bf w}}\|_2=O_p(\sqrt{\frac{k}{m}})$.
(C4) is stronger than (C0) in Theorem 1 as (C0) only requires $\sum_{j=k+1}^p\lambda_j^2=O(p^{2-2\delta})$. (C5) ensures the identifiability of ${\mbox{\boldmath $\beta$}}$, which is similar to Proposition 3.3 in Portnoy (1984a). (C6) and (C7) are imposed to facilitate the technical proof.
We now briefly discuss the role of the number of factors $k$. To make the approximation in Theorem 1 hold, we need $k$ to be large. On the other hand, to make the realized factors estimable with reasonable accuracy, we hope to choose a small $k$, as demonstrated in Theorem 3. Thus, the practical choice of $k$ should be made with care.
Since $m$ is chosen as a certain large proportion of $p$, combination of Theorem 2 and Theorem 3 thus shows the asymptotic consistency of $\widehat{{\mbox{FDP}}}(t)$ based on $L_1-$regression estimator of ${\mbox{\bf w}}=(w_1,\cdots,w_k)^T$ in model (\[b22\]): $$|\mathrm{\widehat{FDP}}(t)-\mathrm{FDP_A}(t)|=O_p(\sqrt{\frac{k}{m}}).$$
The results in Theorem 3 are based on the assumption that (\[b22\]) is a correct model. In the following, we show that even if (\[b22\]) is not a correct model, the effects of misspecification are negligible when $p$ is sufficiently large. To facilitate the mathematical derivations, we instead consider the least-squares estimator. Suppose we are estimating ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$ from (\[b20\]). Let ${\mbox{\bf X}}$ be the design matrix of model (\[b20\]). Then the least-squares estimator for ${\mbox{\bf W}}$ is $\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}={\mbox{\bf W}}+({\mbox{\bf X}}^T{\mbox{\bf X}})^{-1}{\mbox{\bf X}}^T({\mbox{\boldmath $\mu$}}+{\mbox{\bf K}})$, where ${\mbox{\boldmath $\mu$}}=(\mu_1,\cdots,\mu_p)^T$ and ${\mbox{\bf K}}=(K_1,\cdots,K_p)^T$. Suppose instead that we estimate $W_1,\cdots,W_k$ based on the simplified model (\[b22\]), which ignores the sparse $\{\mu_i\}$. Then the least-squares estimator for ${\mbox{\bf W}}$ is $\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}^*={\mbox{\bf W}}+({\mbox{\bf X}}^T{\mbox{\bf X}})^{-1}{\mbox{\bf X}}^T{\mbox{\bf K}}={\mbox{\bf W}}$, in which we utilize the orthogonality between ${\mbox{\bf X}}$ and ${\mathrm{var}}({\mbox{\bf K}})$. The following result shows that the effect of the misspecification in model (\[b22\]) is negligible when $p\rightarrow\infty$, and that the least-squares estimator is consistent.
**Theorem 4.** The bias due to ignoring non-nulls is controlled by $$\|\widehat{{\mbox{\bf W}}}_{\mathrm{LS}}-{\mbox{\bf W}}\|_2=\|\widehat{{\mbox{\bf W}}}_{\mathrm{LS}}-\widehat{{\mbox{\bf W}}}_{\mathrm{LS}}^*\|_2\leq\|{\mbox{\boldmath $\mu$}}\|_2\Big(\sum_{i=1}^k\lambda_i^{-1}\Big)^{1/2}.$$
We can choose an appropriate $k$ such that $\lambda_k>cp^{1/2}$, as noted in the discussion preceding Theorem 1. Therefore, $\sum_{i=1}^k\lambda_i^{-1}\rightarrow 0$ as $p\rightarrow\infty$ is a reasonable condition. When $\{\mu_i\}_{i=1}^p$ are truly sparse, it is expected that $\|{\mbox{\boldmath $\mu$}}\|_2$ grows slowly or is even bounded, so that the bound in Theorem 4 is small. The $L_1$-regression is expected to be even more robust to outliers in the sparse vector $\{\mu_i\}_{i=1}^p$.
Dependence-Adjusted Procedure
-----------------------------
A problem of the method used so far is that the ranking of statistical significance is completely determined by the ranking of the test statistics $\{|Z_i|\}$. This is undesirable and can be inefficient for the dependent case: the correlation structure should also be taken into account. We now show how to use the correlation structure to improve the signal to noise ratio.
Note that by (\[b20\]), $Z_i-{\mbox{\bf b}}_i^T{\mbox{\bf W}}\sim N(\mu_i,a_i^{-2})$ where $a_i$ is defined in Theorem 1. Since $a_i^{-1}\leq1$, the signal to noise ratio increases, which makes the resulting procedure more powerful. Thus, if we know the true values of the common factors ${\mbox{\bf W}}=(W_1,\cdots,W_k)^T$, we can use $a_i(Z_i-{\mbox{\bf b}}_i^T{\mbox{\bf W}})$ as the test statistics. The dependence-adjusted $p$-values $\displaystyle 2\Phi(-|a_i(Z_i-{\mbox{\bf b}}_i^T{\mbox{\bf W}})|)$ can then be used. Note that this testing procedure has different thresholds for different hypotheses based on the magnitude of $Z_i$, and has incorporated the correlation information among hypotheses. In practice, given $Z_i$, the common factors $\{W_h\}_{h=1}^k$ are realized but unobservable. As shown in section 3.2, they can be estimated. The dependence adjusted $p$-values are then given by $$\label{d1}
2\Phi(-|a_i(Z_i-{\mbox{\bf b}}_i^T\widehat{{\mbox{\bf W}}})|)$$ for ranking the hypotheses where $\widehat{{\mbox{\bf W}}}=(\widehat{W}_1,\cdots,\widehat{W}_k)^T$ is an estimate of the principal factors. We will show in section 5 by simulation studies that this dependence-adjusted procedure is more powerful. The “factor adjusted multiple testing procedure" in Friguet et al (2009) shares a similar idea.
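A minimal sketch of the dependence-adjusted $p$-values (\[d1\]), reusing an estimate $\widehat{{\mbox{\bf W}}}$ such as the $L_1$-regression fit sketched earlier (function and variable names are ours; ${\mbox{\bf L}}$ has rows ${\mbox{\bf b}}_i^T$):

```python
# Dependence-adjusted p-values 2*Phi(-|a_i (Z_i - b_i^T W_hat)|), per (d1).
import numpy as np
from scipy.stats import norm

def adjusted_pvalues(z, L, W_hat):
    a = 1.0 / np.sqrt(np.clip(1.0 - np.sum(L ** 2, axis=1), 1e-12, None))  # a_i
    return 2 * norm.cdf(-np.abs(a * (z - L @ W_hat)))
```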
Relation with Other Methods
---------------------------
Efron (2007) proposed a novel parametric model for $V(t)$: $$V(t)=p_0t\Big[1+2A\frac{(-z_{t/2})\phi(z_{t/2})}{\sqrt{2}t}\Big],$$ where $A\sim N(0,\alpha^2)$ for some real number $\alpha$ and $\phi(\cdot)$ stands for the probability density function of the standard normal distribution. The correlation effect is explained by the dispersion variate $A$. His procedure is to estimate $A$ from the data and use $$p_0t\Big[1+2\widehat{A}\frac{(-z_{t/2})\phi(z_{t/2})}{\sqrt{2}t}\Big]\Big/R(t)$$ as an estimator for the realized ${\mbox{FDP}}(t)$. Note that the above expressions adapt his procedure for one-sided tests to our two-sided test setting. In his simulation, the above estimator captures the general trend of the FDP, but it is not accurate and deviates from the true FDP with a large amount of variability. Consider our estimator $\widehat{\mathrm{FDP}}(t)$ in (\[b21\]). Write $\widehat{\eta}_i=\sigma_iQ_i$ where $Q_i\sim N(0,1)$. When $\sigma_i\rightarrow0$ for all $i\in\{\text{true null}\}$, a second-order Taylor expansion gives
$$\widehat{\mathrm{FDP}}(t)\approx\frac{p_0t}{R(t)}\Big[1+\sum_{i\in\{\text{true null}\}}\sigma_i^2(Q_i^2-1)\frac{(-z_{t/2})\phi(z_{t/2})}{p_0t}\Big].$$
By comparison with Efron’s estimator, we can see that $$\widehat{A}=\frac{1}{\sqrt{2}p_0}\sum_{i\in\{\text{true null}\}}\Big[\widehat{\eta}_i^2-E(\widehat{\eta}_i^2)\Big].$$ Thus, our method is more general.
Leek & Storey (2008) considered a general framework for modeling the dependence in multiple testing. Their idea is to model the dependence via a factor model and to reduce the multiple testing problem from the dependent case to the independent case by accounting for the effects of the common factors. They also provided a method for estimating the common factors. In contrast, our problem is different from Leek & Storey’s, and we estimate the common factors by principal factor approximation or other methods. In addition, we provide the approximate FDP formula and its consistent estimate.
Friguet, Kloareg & Causeur (2009) followed closely the framework of Leek & Storey (2008). They assumed that the data come directly from a multifactor model with independent random errors, then used the EM algorithm to estimate all the parameters in the model and obtained an estimator for FDP$(t)$. In particular, they subtract $\eta_i$ out of (\[c4\]), based on their EM estimates, to improve efficiency. However, in their studies the ratio of the estimated number of factors to the true number of factors varies with the dependence structure under their EM algorithm, leading to inaccurate estimates of ${\mbox{FDP}}(t)$. Moreover, it is hard to derive theoretical results for the estimator from their EM algorithm. Compared with their results, our procedure does not assume any specific dependence structure of the test statistics. What we do is decompose the test statistics into an approximate factor model with weakly dependent errors, derive the factor loadings, and estimate the unobserved but realized factors by $L_1$-regression. Since the theoretical distribution of $V(t)$ is known, estimator (\[b21\]) performs well given a good estimate of $W_1,\cdots,W_k$.
Approximate Estimation of FDR
=============================
In this section we propose some ideas that can asymptotically control the FDR, rather than the FDP, under arbitrary dependence. Although their validity is yet to be established, promising results are revealed in the simulation studies. Therefore, they are worth some discussion here and serve as a direction for our future work.
Suppose that the number of false null hypotheses $p_1$ is known. If the signal $\mu_i$ for $i\in\text{\{false null\}}$ is strong enough such that $$\label{b24}
\Phi\Big(a_i(z_{t/2}+\eta_i+\mu_i)\Big)+\Phi\Big(a_i(z_{t/2}-\eta_i-\mu_i)\Big)\approx1,$$ then asymptotically the FDR is approximately given by $$\label{b25}
{\mbox{FDR}}(t)=E\Big\{\frac{\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]}{\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\Big]+p_1}\Big\},$$ which is the expectation of a function of $W_1,\cdots,W_k$. Note that ${\mbox{FDR}}(t)$ is a known function and can be computed by Monte Carlo simulation. For any predetermined error rate $\alpha$, we can use the bisection method to solve $t$ so that ${\mbox{FDR}}(t)=\alpha$. Since $k$ is not large, the Monte Carlo computation is sufficiently fast for most applications.
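As a hedged sketch (assuming $p_1$ is known and the strong-signal condition (\[b24\]) holds), the bisection described above can be coded as follows. The Monte Carlo draws of ${\mbox{\bf W}}$ are held fixed across threshold values so that the approximated ${\mbox{FDR}}(t)$ is monotone in $t$; the function name and defaults are ours.

```python
# Solve FDR(t) = alpha by bisection, with the expectation in (b25) computed by
# Monte Carlo over W ~ N_k(0, I_k); L has rows b_i^T from the decomposition.
import numpy as np
from scipy.stats import norm

def fdr_threshold(L, p1, alpha, n_mc=2000, lo=1e-10, hi=0.5, iters=60, seed=0):
    p, k = L.shape
    a = 1.0 / np.sqrt(np.clip(1.0 - np.sum(L ** 2, axis=1), 1e-12, None))   # a_i
    eta = L @ np.random.default_rng(seed).standard_normal((k, n_mc))        # p x n_mc draws of eta_i

    def fdr(t):
        z_half = norm.ppf(t / 2.0)
        N = np.sum(norm.cdf(a[:, None] * (z_half + eta)) +
                   norm.cdf(a[:, None] * (z_half - eta)), axis=0)           # numerator of (b25), per draw
        return np.mean(N / (N + p1))

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fdr(mid) < alpha else (lo, mid)
    return 0.5 * (lo + hi)
```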
The requirement (\[b24\]) is not very strong. First of all, $\Phi(3)\approx0.9987$, so (\[b24\]) will hold if either argument of $\Phi(\cdot)$ is greater than 3. Secondly, $1-\sum_{h=1}^kb_{ih}^2$ is usually very small. For example, if it is $0.01$, then $a_i=(1-\sum_{h=1}^kb_{ih}^2)^{-1/2}\approx10$, which means that if either $z_{t/2}+\eta_i+\mu_i$ or $z_{t/2}-\eta_i-\mu_i$ exceeds 0.3, then (\[b24\]) is approximately satisfied. Since the sample size $n$ amplifies the signals $\mu_i$ in the problem of Section 2, (\[b24\]) is not a very strong condition on the signal strength $\{\beta_i\}$.
Note that Finner et al (2007) considered a “Dirac uniform model", where the $p$-values corresponding to a false hypothesis are exactly equal to 0. This model might be potentially useful for FDR control. The calculation of (\[b25\]) requires the knowledge of the proportion $p_1$ of signal in the data. Since $p_1$ is usually unknown in practice, there is also future research interest in estimating $p_1$ under arbitrary dependency.
Simulation Studies
==================
In the simulation studies, we consider $p=2000$, $n=100$, $\sigma=2$, the number of false null hypotheses $p_1=10$ and the nonzero $\beta_i=1$, unless stated otherwise. We present 6 different dependence structures for the covariance matrix ${\mbox{\boldmath $\Sigma$}}$ of the test statistics $(Z_1,\cdots,Z_p)^T\sim N((\mu_1,\cdots,\mu_p)^T,{\mbox{\boldmath $\Sigma$}})$. Following the setting in Section 2, ${\mbox{\boldmath $\Sigma$}}$ is the correlation matrix of a random sample of size $n$ of the $p$-dimensional vectors ${\mbox{\bf X}}_i=(X_{i1},\cdots,X_{ip})$, and $\mu_j=\sqrt{n}\beta_j\widehat{\sigma}_j/\sigma$, $j=1,\cdots,p$. The data-generating processes for the vectors ${\mbox{\bf X}}_i$ are as follows.
- **\[Equal correlation\]** Let ${\mbox{\bf X}}^T=(X_{1},\cdots,X_{p})^T\sim N_p(0,{\mbox{\boldmath $\Sigma$}})$ where ${\mbox{\boldmath $\Sigma$}}$ has diagonal element 1 and off-diagonal element $1/2$.
- **\[Fan & Song’s model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $\{X_{k}\}_{k=1}^{1900}$ be i.i.d. $N(0,1)$ and $$X_{k}=\sum_{l=1}^{10}X_{l}(-1)^{l+1}/5+\sqrt{1-\frac{10}{25}}\epsilon_{k}, \ \ k=1901,\cdots,2000,$$ where $\{\epsilon_{k}\}_{k=1901}^{2000}$ are standard normally distributed.
- **\[Independent Cauchy\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $\{X_{k}\}_{k=1}^{2000}$ be i.i.d. Cauchy random variables with location parameter 0 and scale parameter 1.
- **\[Three factor model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $$X_{j}=\rho_j^{(1)}W^{(1)}+\rho_j^{(2)}W^{(2)}+\rho_j^{(3)}W^{(3)}+H_{j},$$ where $W^{(1)}\sim N(-2,1)$, $W^{(2)}\sim N(1,1)$, $W^{(3)}\sim N(4,1)$, $\rho_{j}^{(1)},\rho_{j}^{(2)},\rho_{j}^{(3)}$ are i.i.d. $U(-1,1)$, and $H_{j}$ are i.i.d. $N(0,1)$.
- **\[Two factor model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $$X_{j}=\rho_j^{(1)}W^{(1)}+\rho_j^{(2)}W^{(2)}+H_{j},$$ where $W^{(1)}$ and $W^{(2)}$ are i.i.d. $N(0,1)$, $\rho_{j}^{(1)}$ and $\rho_{j}^{(2)}$ are i.i.d. $U(-1,1)$, and $H_{j}$ are i.i.d. $N(0,1)$.
- **\[Nonlinear factor model\]** For ${\mbox{\bf X}}=(X_{1},\cdots,X_{p})$, let $$X_{j}=\sin(\rho_j^{(1)}W^{(1)})+sgn(\rho_j^{(2)})\exp(|\rho_j^{(2)}|W^{(2)})+H_{j},$$ where $W^{(1)}$ and $W^{(2)}$ are i.i.d. $N(0,1)$, $\rho_{j}^{(1)}$ and $\rho_{j}^{(2)}$ are i.i.d. $U(-1,1)$, and $H_{j}$ are i.i.d. $N(0,1)$.
Fan & Song’s Model has been considered in Fan & Song (2010) for high dimensional variable selection. This model is close to the independent case but has some special dependence structure. Note that although we have used the term “factor model" above to describe the dependence structure, it is not the factor model for the test statistics $Z_1,\cdots,Z_p$ directly. The covariance matrix of these test statistics is the sample correlation matrix of $X_1,\cdots,X_p$. The effectiveness of our method is examined in several aspects. We first examine the goodness of approximation in Theorem 1 by comparing the marginal distributions and variances. We then compare the accuracy of FDP estimates with other methods. Finally, we demonstrate the improvement of the power with dependence adjustment.
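For concreteness, the following Python sketch generates data from the two factor model and forms ${\mbox{\boldmath $\Sigma$}}$ and $(\mu_1,\cdots,\mu_p)$ as described above; taking $\widehat{\sigma}_j$ to be the sample standard deviation of $X_j$ is our assumption, and one draw of the test statistics is then simulated from $N({\mbox{\boldmath $\mu$}},{\mbox{\boldmath $\Sigma$}})$. This is only an illustration of the simulation setup, not the authors' code.

```python
# One simulation setting (two factor model): build Sigma, mu, and one Z draw.
import numpy as np

rng = np.random.default_rng(2)
p, n, sigma, p1 = 2000, 100, 2.0, 10
beta = np.zeros(p); beta[:p1] = 1.0                    # nonzero beta_i = 1 for the false nulls

rho1, rho2 = rng.uniform(-1, 1, p), rng.uniform(-1, 1, p)
W = rng.standard_normal((n, 2))                        # two common factors per observation
H = rng.standard_normal((n, p))
X = W[:, [0]] * rho1 + W[:, [1]] * rho2 + H            # n x p sample from the two factor model

Sigma = np.corrcoef(X, rowvar=False)                   # correlation matrix of the test statistics
sigma_hat = X.std(axis=0, ddof=1)                      # assumed role of sigma_hat_j (sample SD of X_j)
mu = np.sqrt(n) * beta * sigma_hat / sigma             # means mu_j = sqrt(n) beta_j sigma_hat_j / sigma

Z = mu + rng.multivariate_normal(np.zeros(p), Sigma, check_valid="ignore")   # one draw of (Z_1,...,Z_p)
```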
**Distributions of FDP and its approximation:** Without loss of generality, we consider a dependence structure based on the two factor model above. Let $n=100$, $p_1=10$ and $\sigma=2$. Let $p$ vary from 100 to 1000 and $t$ be either 0.01 or 0.005. The distributions of ${\mbox{FDP}}(t)$ and of its approximating expression in Theorem 1 are plotted in Figure \[a59\]. The convergence of the distributions is evident. Table 2 summarizes the total variation distance between the two distributions.
[Figure: distributions of ${\mbox{FDP}}(t)$ and of its approximation from Theorem 1 under the two factor model, for varying $p$ and $t$.]{data-label="a59"}
$p=100$ $p=500$ $p=1000$
----------- --------- --------- ----------
$t=0.01$ 0.6668 0.1455 0.0679
$t=0.005$ 0.6906 0.2792 0.1862
: Total variation distance between the distribution of ${\mbox{FDP}}$ and the limiting distribution of ${\mbox{FDP}}$ in Figure 1. The total variation distance is calculated based on “TotalVarDist" function with “smooth" option in R software.
**Variance of $V(t)$:** The variance of the number of false discoveries under correlated test statistics is usually large compared with that of the independent case, which is $p_0t(1-t)$. Thus the ratio of the variance of false discoveries in the dependent case to that for independent test statistics can be considered a measure of the correlation effect; see Owen (2005). Estimating the variance of the number of false discoveries is an interesting problem. With approximation (\[a54a\]), this can easily be computed. In Table \[e0\], we compare the true variance of the number of false discoveries, the variance of expression (\[a53\]) (which is infeasible in practice) and the variance of expression (\[a52\]) under the 6 different dependence structures. The table shows that the variance computed from expression (\[a52\]) approximately equals the variance of the number of false discoveries. Therefore, we provide a fast alternative method to estimate the variance of the number of false discoveries, in addition to the results in Owen (2005). Note that the variance for the independent case is less than 2; the impact of dependence is very substantial.
  Dependence Structure     true ${\mathrm{var}}(V(t))$   ${\mathrm{var}}$ of (\[a53\])   ${\mathrm{var}}$ of (\[a52\])
  ------------------------ ----------------------------- ------------------------------- -------------------------------
Equal correlation 180.9673 178.5939 180.6155
Fan & Song’s model 5.2487 5.2032 5.2461
Independent Cauchy 9.0846 8.8182 8.9316
Three factor model 81.1915 81.9373 83.0818
Two factor model 53.9515 53.6883 54.0297
Nonlinear factor model 48.3414 48.7013 49.1645
: Comparison for variance of number of false discoveries (column 2), variance of expression (\[a53\]) (column 3) and variance of expression (\[a52\]) (column 4) with $t=0.001$ based on 10000 simulations. []{data-label="e0"}
True FDP PFA Storey B-H
------------------------ ------------- ------------- ------------- -------------
Equal correlation $6.67\%$ $6.61\%$ $2.99\%$ $3.90\%$
($15.87\%$) ($15.88\%$) ($10.53\%$) ($14.58\%$)
Fan & Song’s model $14.85\%$ $14.85\%$ $13.27\%$ $14.46\%$
($11.76\%$) ($11.58\%$) ($11.21\%$) ($13.46\%$)
Independent Cauchy $13.85\%$ $13.62\%$ $11.48\%$ $13.21\%$
($13.60\%$) ($13.15\%$) ($12.39\%$) ($15.40\%$)
Three factor model $8.08\%$ $8.29\%$ $4.00\%$ $5.46\%$
($16.31\%$) ($16.39\%$) ($11.10\%$) ($16.10\%$)
Two factor model $8.62\%$ $8.50\%$ $4.70\%$ $5.87\%$
($16.44\%$) ($16.27\%$) ($11.97\%$) ($16.55\%$)
Nonlinear factor model $6.63\%$ $6.81\%$ $3.20\%$ $4.19\%$
($15.56\%$) ($15.94\%$) ($10.91\%$) ($15.31\%$)
: Comparison of FDP values for our method based on equation (\[b25\]) without taking expectation (PFA) with Storey’s procedure and Benjamini-Hochberg’s procedure under six different dependence structures, where $p=2000$, $n=200$, $t=0.001$, and $\beta_i=1$ for $i\in\text{\{false null\}}$. The computation is based on 10000 simulations. The means of FDP are listed with the standard deviations in the brackets.[]{data-label="e1"}
**Comparing methods of estimating FDP:** Under different dependence structures, we compare FDP values using our procedure PFA in equation (\[b25\]) without taking expectation and with $p_1$ known, Storey’s procedure with $p_1$ known ($(p-p_1)t/R(t)$) and the Benjamini-Hochberg procedure. Note that the Benjamini-Hochberg procedure is an FDR control procedure rather than an FDP estimating procedure. The Benjamini-Hochberg FDP is obtained by using the mean of “True FDP" in Table \[e1\] as the control rate in the B-H procedure. Table \[e1\] shows that our method performs much better than Storey’s procedure and the Benjamini-Hochberg procedure, especially under strong dependence structures (rows 1, 4, 5, and 6), in terms of both the mean and the variance of the distribution of FDP. Recall that the expected value of FDP is the FDR. Table 3 thus also compares the FDR of the three procedures through the averages. Note that the actual FDR from the B-H procedure under dependence is much smaller than the control rate, which suggests that the B-H procedure can be quite conservative under dependence.
**Comparison with Efron’s Methods:** We now compare the estimated values of our PFA method (\[b21\]) and Efron (2007)’s estimator with the true values of the false discovery proportion, under the 6 different dependence structures. Efron (2007)’s estimator was developed for estimating FDP under unknown ${\mbox{\boldmath $\Sigma$}}$; in our simulation study, we have used a known ${\mbox{\boldmath $\Sigma$}}$ for Efron’s estimator for a fair comparison. The results are depicted in Figure \[a60\], Figure \[c7\] and Table \[e2\]. Figure \[a60\] shows that our estimated values correctly track the trends of FDP with a smaller amount of noise. It also shows that both our estimator and Efron’s estimator tend to overestimate the true FDP, since $\mathrm{FDP_A}(t)$ is an upper bound of the true $\mathrm{FDP}(t)$; they are close only when the number of false nulls $p_1$ is very small. In the current simulation setting, we choose $p_1=50$ with $p=1000$, so it is not a very sparse case. However, even in this case, our estimator still performs very well for the six different dependence structures. Efron (2007)’s estimator is computed in Figure \[a60\] with his suggestions for estimating the parameters; it captures the general trend of the true FDP but with a large amount of noise. Figure \[c7\] shows that the relative errors of PFA concentrate around 0, which suggests good accuracy of our method in estimating FDP. Table \[e2\] summarizes the relative errors of the two methods.
  ------------------------ -------- -------- -------- --------
                                $\text{RE}_{\text{P}}$        $\text{RE}_{\text{E}}$
                            mean     SD      mean     SD
Equal correlation 0.0241 0.1262 1.4841 3.6736
Fan & Song’s model 0.0689 0.1939 1.2521 1.9632
Independent Cauchy 0.0594 0.1736 1.3066 2.1864
Three factor model 0.0421 0.1657 1.4504 2.6937
Two factor model 0.0397 0.1323 1.1227 2.0912
Nonlinear factor model 0.0433 0.1648 1.3134 4.0254
------------------------ -------- -------- -------- --------
: Means and standard deviations of the relative error between true values of FDP and estimated FDP under the six dependence structures in Figure \[a60\]. $\text{RE}_{\text{P}}$ and $\text{RE}_{\text{E}}$ are the relative errors of our PFA estimator and Efron (2007)’s estimator, respectively. RE is defined in Figure \[c7\].[]{data-label="e2"}
**Dependence-Adjusted Procedure:** We compare the dependence-adjusted procedure described in section 3.4 with the testing procedure based only on the observed test statistics without using correlation information. The latter is to compare the original z-statistics with a fixed threshold value and is labeled as “fixed threshold procedure” in Table \[e5\]. With the same FDR level, a procedure with smaller false nondiscovery rate (FNR) is more powerful, where ${\mbox{FNR}}=E[T/(p-R)]$ using the notation in Table 1.
  ------------------------ -------- ------- ----------- -------- ------- -----------
                              Fixed Threshold Procedure          Dependence-Adjusted Procedure
                            FDR      FNR     Threshold   FDR      FNR     Threshold
Equal correlation 17.06% 4.82% 0.06 17.34% 0.35% 0.001
Fan & Song’s model 6.69% 6.32% 0.0145 6.73% 1.20% 0.001
Independent Cauchy 7.12% 0.45% 0.019 7.12% 0.13% 0.001
Three factor model 5.46% 3.97% 0.014 5.53% 0.31% 0.001
Two factor model 5.00% 4.60% 0.012 5.05% 0.39% 0.001
Nonlinear factor model 6.42% 3.73% 0.019 6.38% 0.68% 0.001
------------------------ -------- ------- ----------- -------- ------- -----------
: Comparison of Dependence-Adjusted Procedure with Fixed Threshold Procedure under six different dependence structures, where $p=1000$, $n=100$, $\sigma=1$, $p_1=200$, nonzero $\beta_i$ simulated from $U(0,1)$ and $k=n-3$ over 1000 simulations.[]{data-label="e5"}
In Table \[e5\], without loss of generality, for each dependence structure we fix the threshold value 0.001 and reject a hypothesis when its dependence-adjusted $p$-value (\[d1\]) is smaller than 0.001. We then find the corresponding threshold value for the fixed threshold procedure such that the FDRs of the two testing procedures are approximately the same. The FNR for the dependence-adjusted procedure is much smaller than that of the fixed threshold procedure, which suggests that the dependence-adjusted procedure is more powerful. Note that in Table \[e5\], $p_1=200$ with $p=1000$, implying that the better performance of the dependence-adjusted procedure is not limited to the sparse situation. This is expected, since subtracting the common factors out gives the problem a higher signal-to-noise ratio.
Real Data Analysis
==================
Our proposed multiple testing procedures are now applied to genome-wide association studies, in particular to expression quantitative trait locus (eQTL) mapping. It is known that the expression levels of the gene CCT8 are highly related to Down Syndrome phenotypes. In our analysis, we use genotype data on over two million SNPs together with CCT8 gene expression data for 210 individuals from three different populations, testing which SNPs are associated with the variation in CCT8 expression levels. The SNP data are from the International HapMap project and include 45 Japanese in Tokyo, Japan (JPT), 45 Han Chinese in Beijing, China (CHB), 60 Utah residents with ancestry from northern and western Europe (CEU) and 60 Yoruba in Ibadan, Nigeria (YRI). The Japanese and Chinese samples are grouped together to form the Asian population (JPTCHB). To save space, we omit the description of the data pre-processing procedures; interested readers can find more details at http://pngu.mgh.harvard.edu/~purcell/plink/res.shtml and ftp://ftp.sanger.ac.uk/pub/genevar/, and in Bradic, Fan & Wang (2010).
We further introduce two sets of dummy variables $(\mbox{\bf d}_1,\mbox{\bf d}_2)$ to recode the SNP data, where $\mbox{\bf d}_1=(d_{1,1},\cdots,d_{1,p})$ and $\mbox{\bf d}_2=(d_{2,1},\cdots,d_{2,p})$, representing three categories of polymorphisms, namely, $(d_{1,j},d_{2,j})=(0,0)$ for $\text{SNP}_j=0$ (no polymorphism), $(d_{1,j},d_{2,j})=(1,0)$ for $\text{SNP}_j=1$ (one nucleotide has polymorphism) and $(d_{1,j},d_{2,j})=(0,1)$ for $\text{SNP}_j=2$ (both nucleotides have polymorphisms). Let $\{Y^i\}_{i=1}^n$ be the independent sample random variables of $Y$, $\{d_{1,j}^i\}_{i=1}^n$ and $\{d_{2,j}^i\}_{i=1}^n$ be the sample values of $d_{1,j}$ and $d_{2,j}$ respectively. Thus, instead of using model (\[gwj1\]), we consider two marginal linear regression models between $\{Y^i\}_{i=1}^n$ and $\{d_{1,j}^i\}_{i=1}^n$: $$\label{gwj2}
\min_{\alpha_{1,j},\beta_{1,j}}\frac{1}{n}\sum_{i=1}^nE(Y^i-\alpha_{1,j}-\beta_{1,j} d_{1,j}^i)^2, \ \ \ j=1,\cdots,p$$ and between $\{Y^i\}_{i=1}^n$ and $\{d_{2,j}^i\}_{i=1}^n$: $$\label{gwj3}
\min_{\alpha_{2,j},\beta_{2,j}}\frac{1}{n}\sum_{i=1}^nE(Y^i-\alpha_{2,j}-\beta_{2,j} d_{2,j}^i)^2, \ \ \ j=1,\cdots,p.$$ For ease of notation, we denote the recoded $n \times 2p$ dimensional design matrix by ${\mbox{\bf X}}$. The missing SNP measurements are imputed as $0$ and the redundant SNP data are excluded. Finally, the logarithm transform of the raw CCT8 gene expression data is used. The details of our testing procedure are summarized as follows.
- To begin with, consider the full model $Y=\alpha+{\mbox{\bf X}}\beta + \epsilon$, where $Y$ is the CCT8 gene expression data, ${\mbox{\bf X}}$ is the $n \times 2p$ dimensional design matrix of the SNP codings and $\epsilon_i\sim N(0,\sigma^2)$, $i=1,\cdots,n$ are the independent random errors. We adopt the refitted cross-validation (RCV) (Fan, Guo & Hao 2010) technique to estimate $\sigma$ by $\widehat{\sigma}$, where LASSO is used in the first (variable selection) stage.
- Fit the marginal linear models (\[gwj2\]) and (\[gwj3\]) for each (recoded) SNP and obtain the least-squares estimate $\widehat{\beta}_j$ for $j=1,\cdots,2p$. Compute the values of $Z$-statistics using formula (\[b2\]), except that $\sigma$ is replaced by $\widehat{\sigma}$.
- Calculate the P-values based on the $Z$-statistics and compute $R(t)=\#\{P_j: P_j \leq t\}$ for a fixed threshold $t$.
- Apply eigenvalue decomposition to the population covariance matrix ${\mbox{\boldmath $\Sigma$}}$ of the $Z$-statistics. By Proposition 1, ${\mbox{\boldmath $\Sigma$}}$ is the sample correlation matrix of $(d_{1,1},d_{2,1},\cdots,$ $d_{1,p},d_{2,p})^T$. Determine an appropriate number of factors $k$ and derive the corresponding factor loading coefficients $\{b_{ih}\}_{i=1,\ h=1}^{i=2p,\ h=k}$.
- Order the $Z$-statistics by absolute value and keep the smallest $m=95\%\times2p$ of them. Apply $L_1$-regression to the equation set (\[b22\]) and obtain its solution $\widehat{W}_1,\cdots,\widehat{W}_k$. Plug them into (\[b21\]) to get the estimated $\text{FDP}(t)$. (A schematic code sketch of the recoding and marginal regression steps is given below.)
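As referenced in the last step, the sketch below illustrates the dummy recoding of the SNP genotypes and the marginal regression $Z$-statistics. The exact formula (\[b2\]) is given earlier in the paper; here we assume the standardized form $Z_j=\sqrt{n}\,\widehat{\beta}_j\,\widehat{\sigma}_j/\widehat{\sigma}$, by analogy with the means $\mu_j=\sqrt{n}\beta_j\widehat{\sigma}_j/\sigma$ used in the simulation section, and all function names are ours.

```python
# Hedged sketch: recode 0/1/2 genotypes into dummy variables and form marginal Z-statistics.
import numpy as np

def recode_snps(G):
    """G: n x p genotype matrix with entries 0/1/2 -> n x 2p dummy design matrix."""
    d1 = (G == 1).astype(float)          # one nucleotide has polymorphism
    d2 = (G == 2).astype(float)          # both nucleotides have polymorphisms
    X = np.empty((G.shape[0], 2 * G.shape[1]))
    X[:, 0::2], X[:, 1::2] = d1, d2      # interleave (d_{1,j}, d_{2,j})
    return X

def marginal_z_stats(y, X, sigma_hat):
    """Marginal least-squares slopes, standardized (assumed form of formula (b2))."""
    n = len(y)
    xbar, ybar = X.mean(axis=0), y.mean()
    sxx = ((X - xbar) ** 2).sum(axis=0)
    beta_hat = ((X - xbar) * (y - ybar)[:, None]).sum(axis=0) / np.where(sxx > 0, sxx, np.inf)
    sd_x = X.std(axis=0, ddof=1)
    return np.sqrt(n) * beta_hat * sd_x / sigma_hat
```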
For each intermediate step of the above procedure, the outcomes are summarized in the following figures. Figure \[a61\] illustrates the trend of the RCV-estimated standard deviation $\widehat{\sigma}$ with respect to different model sizes. Our result is similar to that in Fan, Guo & Hao (2010), in that although $\widehat{\sigma}$ is influenced by the selected model size, it is relatively stable and thus provides reasonable accuracy. The empirical distributions of the $Z$-values are presented in Figure \[a62\], together with the fitted normal density curves. As pointed out in Efron (2007, 2010), due to the existence of dependency among the $Z$-values, their densities are either narrowed or widened and are not $N(0,1)$ distributed. The histograms of the $P$-values are further provided in Figure \[a63\], giving a crude estimate of the proportion of the false nulls for each of the three populations.
\[0.40\][![$\widehat{\sigma}$ of the three populations with respect to the selected model sizes, derived by using refitted cross-validation (RCV).[]{data-label="a61"}](Figure_4 "fig:")]{}
\[0.40\][![Empirical distributions and fitted normal density curves of the $Z$-values for each of the three populations. Because of dependency, the $Z$-values are no longer $N(0,1)$ distributed. The empirical distributions, instead, are $N(0.12,1.22^2)$ for CEU, $N(0.27,1.39^2)$ for JPT and CHB, and $N(-0.04,1.66^2)$ for YRI, respectively. The density curve for CEU is closest to $N(0,1)$ and the least dispersed among the three.[]{data-label="a62"}](Figure_5 "fig:")]{}
\[0.40\][![Histograms of the $P$-values for each of the three populations. []{data-label="a63"}](Figure_6 "fig:")]{}
\[0.40\][![Number of total discoveries, estimated number of false discoveries and estimated False Discovery Proportion as functions of thresholding $t$ for CEU population (row 1), JPT and CHB (row 2) and YRI (row 3). The $x$-coordinate is $-\log t$, the minus $\log_{10}$-transformed thresholding.[]{data-label="a64"}](Figure_7_CEU "fig:")]{} \[0.40\][![Number of total discoveries, estimated number of false discoveries and estimated False Discovery Proportion as functions of thresholding $t$ for CEU population (row 1), JPT and CHB (row 2) and YRI (row 3). The $x$-coordinate is $-\log t$, the minus $\log_{10}$-transformed thresholding.[]{data-label="a64"}](Figure_7_JPTCHB "fig:")]{} \[0.40\][![Number of total discoveries, estimated number of false discoveries and estimated False Discovery Proportion as functions of thresholding $t$ for CEU population (row 1), JPT and CHB (row 2) and YRI (row 3). The $x$-coordinate is $-\log t$, the minus $\log_{10}$-transformed thresholding.[]{data-label="a64"}](Figure_7_YRI "fig:")]{}
The main results of our analysis are presented in Figure \[a64\], which depicts the number of total discoveries $R(t)$, the estimated number of false discoveries $\widehat{V}(t)$ and the estimated false discovery proportion $\widehat{\text{FDP}}(t)$ as functions of the (minus $\log_{10}$-transformed) threshold $t$ for the three populations. As can be seen, in each case both $R(t)$ and $\widehat{V}(t)$ decrease as $t$ decreases, but $\widehat{\text{FDP}}(t)$ exhibits zigzag patterns and does not always decrease along with $t$, which results from the clustering of the P-values. A closer study of the outputs further shows that, for all populations, the estimated FDP has a general trend of decreasing to a limit of around $0.1$ to $0.2$, which backs up the intuition that a large proportion of the smallest $P$-values should correspond to the false nulls (true discoveries) when the $Z$-statistics are very large; however, at most other threshold values the estimated FDP is at a high level. This is possibly due to small signal-to-noise ratios in eQTL studies.
The selected SNPs, together with the estimated FDPs, are reported in Table \[a67\]. It is worth mentioning that Deutsch et al. (2005) and Bradic, Fan & Wang (2010) also analyzed the same CCT8 data to identify significant SNPs in the CEU population. Deutsch et al. (2005) performed an association analysis for each SNP using ANOVA, while Bradic, Fan & Wang (2010) proposed a penalized composite quasi-likelihood variable selection method. Their findings also differ: the first group identified four SNPs (exactly the same as ours) with the smallest P-values, whereas the second group discovered only one of those four SNPs, rs965951, arguing that the other three SNPs make little additional contribution conditional on the presence of rs965951. Our results for the CEU population are consistent with those of the latter group, in the sense that the estimated false discovery rate is high in our findings and that our association study is based on marginal rather than joint modeling of several SNPs.
Population Threshold \# Discoveries Estimated FDP Selected SNPs
------------ ----------------------- ---------------- --------------- ---------------------
JPTCHB $1.61 \times 10^{-9}$ 5 $ 0.1535$ rs965951 rs2070611
rs2832159 rs8133819
rs2832160
YRI $1.14 \times 10^{-9}$ 2 $ 0.2215$ rs9985076 rs965951
CEU $6.38 \times 10^{-4}$ 4 $ 0.8099$ rs965951 rs2832159
rs8133819 rs2832160
: Information of the selected SNPs and the associated FDP for a particular threshold. Note that the density curve of the $Z$-values for CEU population is close to $N(0,1)$, so the approximate $\widehat{\text{FDP}}(t)$ equals $pt/R(t)\approx 0.631$. Therefore our high estimated FDP is reasonable.[]{data-label="a67"}
Population Threshold \# Discoveries Estimated FDP Selected SNPs
------------ ----------------------- ---------------- --------------- -------------------------
JPTCHB $2.89 \times 10^{-4}$ 5 $ 0.1205$ rs965951 rs2070611
rs2832159 rs8133819
rs2832160
YRI $8.03 \times 10^{-5}$ 4 $ 0.2080$ rs7283791 rs11910981
rs8128844 rs965951
CEU $5.16 \times 10^{-2}$ 6 $ 0.2501$ rs464144\* rs4817271
rs2832195 rs2831528\*
rs1571671\* rs6516819\*
: Information on the selected SNPs for a particular threshold based on the dependence-adjusted procedure. The number of factors $k$ in (\[d1\]) equals 10. The estimated FDP is based on estimator (\[b21\]), applying PFA to the adjusted Z-values. A $*$ marks the dummy variable indicating ${\rm SNP}=2$; otherwise the dummy variable indicates ${\rm SNP}=1$.[]{data-label="a68"}
Table \[a68\] lists the SNPs selected by the dependence-adjusted procedure. For JPTCHB, with a slightly smaller estimated FDP, the dependence-adjusted procedure selects the same SNPs as the fixed threshold procedure, which suggests that these 5 SNPs are significantly associated with the variation in CCT8 gene expression levels. For YRI, rs965951 is selected by both procedures, but the dependence-adjusted procedure selects three other SNPs which do not appear in Table \[a67\]. For CEU, the selections from the two procedures are quite different. However, since the estimated FDP for CEU is much smaller in Table \[a68\] and the signal-to-noise ratio of the test statistics is higher under the dependence-adjusted procedure, the selection in Table \[a68\] seems more reliable.
Discussion
==========
We have proposed a new method (principal factor approximation) for high dimensional multiple testing where the test statistics have an arbitrary dependence structure. For multivariate normal test statistics with a known covariance matrix, we can express the test statistics as an approximate factor model with weakly dependent random errors, by applying spectral decomposition to the covariance matrix. We then obtain an explicit expression for the false discovery proportion in large scale simultaneous tests. This result has important applications in controlling FDP and FDR. We also provide a procedure to estimate the realized FDP, which, in our simulation studies, correctly tracks the trend of FDP with a smaller amount of noise.
To take the dependence structure of the test statistics into account, we propose a dependence-adjusted procedure with different threshold values for the magnitudes of the $Z_i$ across hypotheses. This procedure has been shown in simulation studies to be more powerful than the fixed threshold procedure. An interesting research question is how to take advantage of the dependence structure so that the testing procedure is more powerful, or even optimal, under arbitrary dependence structures.
While our procedure is based on a known correlation matrix, we would expect that it can be adapted to the case of an estimated covariance matrix. The question is then how accurately the covariance matrix should be estimated so that a simple substitution procedure will give an accurate estimate of FDP.
We provide a simple method to estimate the realized principal factors. A more accurate method is probably to use a penalized least-squares method that exploits the sparsity to estimate the realized principal factors.
Appendix
========
Lemma 1 is fundamental to our proof of Theorem 1 and Proposition 2. The result is known in probability theory; a formal statement and proof are given in Lyons (1988).
**Lemma 1.** Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of real-valued random variables such that $E|X_n|^2\leq1$. If $|X_n|\leq1$ a.s. and $\sum_{N\geq1}\frac{1}{N}E|\frac{1}{N}\sum_{n\leq N}X_n|^2<\infty$, then $\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n\leq N}X_n=0 \ \ a.s.$.
**Proof of Proposition 2:** Note that $P_i = 2\Phi(-|Z_i|)$. Based on the expression of $(Z_1,\cdots,Z_p)^T$ in (\[b20\]), $\Big\{I(P_i\leq t|W_1,\cdots,W_k)\Big\}_{i=1}^p$ are dependent random variables. Nevertheless, we want to prove $$\label{b27}
p^{-1}\sum_{i=1}^p[I(P_i\leq t|W_1,\cdots,W_k)-P(P_i\leq t|W_1,\cdots,W_k)]\stackrel{p\rightarrow\infty}{\longrightarrow}0 \ a.s. .$$ Letting $X_i=I(P_i\leq t|W_1,\cdots,W_k)-P(P_i\leq t|W_1,\cdots,W_k)$, by Lemma 1 the conclusion (\[b27\]) is correct if we can show $${\mbox{Var}}\Big(p^{-1}\sum_{i=1}^{p}I(P_i\leq t|W_1,\cdots,W_k)\Big)=O_p(p^{-\delta}) \ \ \text{for some} \ \delta>0.$$ To begin with, note that $$\begin{aligned}
&&{\mbox{Var}}\Big(p^{-1}\sum_{i=1}^{p}I(P_i\leq t|W_1,\cdots,W_k)\Big)\\
&=&p^{-2}\sum_{i=1}^{p}{\mbox{Var}}\Big(I(P_i\leq t|W_1,\cdots,W_k)\Big)\\
&&+2p^{-2}\sum_{1\leq i<j\leq p}{\mbox{Cov}}\Big(I(P_i\leq t|W_1,\cdots,W_k),I(P_j\leq t|W_1,\cdots,W_k)\Big).\end{aligned}$$ Since ${\mbox{Var}}\big(I(P_i\leq t|W_1,\cdots,W_k)\big)\leq\frac{1}{4}$, the first term in the right-hand side of the last equation is $O_p(p^{-1})$. For the second term, the covariance is given by $$\begin{aligned}
&&P(P_i\leq t,P_j\leq t|W_1,\cdots,W_k)-P(P_i\leq t|W_1,\cdots,W_k)P(P_j\leq t|W_1,\cdots,W_k)\\
&=&P(|Z_i|<-\Phi^{-1}(t/2),|Z_j|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)\\
&&-P(|Z_i|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)P(|Z_j|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)\end{aligned}$$ To simplify the notation, let $\rho_{ij}^k$ be the correlation between $K_i$ and $K_j$. Without loss of generality, we assume $\rho_{ij}^k>0$ (for $\rho_{ij}^k<0$, the calculation is similar). Denote by $$c_{1,i}= a_i(-z_{t/2}-\eta_i-\mu_i), \ \ \ c_{2,i}= a_i(z_{t/2}-\eta_i-\mu_i).$$ Then, from the joint normality, it can be shown that $$\begin{aligned}
\label{e3}
&&P(|Z_i|<-\Phi^{-1}(t/2),|Z_j|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)\nonumber\\
&=&P(c_{2,i}/a_i<K_i<c_{1,i}/a_i, c_{2,j}/a_j<K_j<c_{1,j}/a_j)\nonumber\\
&=&\int_{-\infty}^{\infty}\Big[\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{1,i}}{(1-\rho_{ij}^k)^{1/2}}\Big)-\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{2,i}}{(1-\rho_{ij}^k)^{1/2}}\Big)\Big]\\
&&\quad\quad\times\Big[\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{1,j}}{(1-\rho_{ij}^k)^{1/2}}\Big)-\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{2,j}}{(1-\rho_{ij}^k)^{1/2}}\Big)\Big]\phi(z)dz.\nonumber\end{aligned}$$
Next we will use Taylor expansion to analyze the joint probability further. We have shown that $(K_1,\cdots,K_p)^T\sim N(0,{\mbox{\bf A}})$ are weakly dependent random variables. Let $cov_{ij}^k$ denote the covariance of $K_i$ and $K_j$, which is the $(i,j)$th element of the covariance matrix ${\mbox{\bf A}}$. We also let $b_{ij}^k=(1-\sum_{h=1}^kb_{ih}^2)^{1/2}(1-\sum_{h=1}^kb_{jh}^2)^{1/2}$. By the Hölder inequality, $$\begin{aligned}
p^{-2}\sum_{i,j=1}^p|cov_{ij}^k|^{1/2}\leq p^{-1/2}(\sum_{i,j=1}^p|cov_{ij}^k|^2)^{1/4}=\Big[p^{-2}(\sum_{i=k+1}^{p}\lambda_i^2)^{1/2}\Big]^{1/4}\rightarrow0\end{aligned}$$ as $p\rightarrow\infty$. For each $\Phi(\cdot)$, we apply Taylor expansion with respect to $(cov_{ij}^k)^{1/2}$, $$\begin{aligned}
\Phi\Big(\frac{(\rho_{ij}^k)^{1/2}z+c_{1,i}}{(1-\rho_{ij}^k)^{1/2}}\Big)&=&\Phi\Big(\frac{(cov_{ij}^k)^{1/2}z+(b_{ij}^k)^{1/2}c_{1,i}}{(b_{ij}^k-cov_{ij}^k)^{1/2}}\Big)\\
&=&\Phi(c_{1,i})+\phi(c_{1,i})(b_{ij}^k)^{-1/2}z(cov_{ij}^k)^{1/2}\\
&&\quad\quad\quad+\frac{1}{2}\phi(c_{1,i})c_{1,i}(b_{ij}^k)^{-1}(1-z^2)cov_{ij}^k+R(cov_{ij}^k).\end{aligned}$$ where $R(cov_{ij}^k)$ is the Lagrange residual term in the Taylor’s expansion, and $R(cov_{ij}^k)=f(z)O(|cov_{ij}^k|^{3/2})$ in which $f(z)$ is a polynomial function of $z$ with the highest order as 6.
Therefore, we have (\[e3\]) equals $$\begin{aligned}
&&\Big[\Phi(c_{1,i})-\Phi(c_{2,i})\Big]\Big[\Phi(c_{1,j})-\Phi(c_{2,j})\Big]\\
&&\quad\quad+\Big(\phi(c_{1,i})-\phi(c_{2,i})\Big)\Big(\phi(c_{1,j})-\phi(c_{2,j})\Big)(b_{ij}^k)^{-1}cov_{ij}^k+O(|cov_{ij}^k|^{3/2}),\end{aligned}$$ where we have used the fact that $\int_{-\infty}^{\infty} z\phi(z)dz=0$, $\int_{-\infty}^{\infty} (1-z^2)\phi(z)dz=0$ and the finite moments of standard normal distribution are finite. Now since $P(|Z_i|<-\Phi^{-1}(t/2)|W_1,\cdots,W_k)=\Phi(c_{1,i})-\Phi(c_{2,i})$, we have $$\begin{aligned}
&&{\mbox{Cov}}\Big(I(P_i\leq t|W_1,\cdots,W_k),I(P_j\leq t|W_1,\cdots,W_k)\Big)\\
&=&\Big(\phi(c_{1,i})-\phi(c_{2,i})\Big)\Big(\phi(c_{1,j})-\phi(c_{2,j})\Big)a_ia_jcov_{ij}^k+O(|cov_{ij}^k|^{3/2}).\end{aligned}$$ In the last line, $\big(\phi(c_{1,i})-\phi(c_{2,i})\big)\big(\phi(c_{1,j})-\phi(c_{2,j})\big)a_ia_j$ is bounded by some constant except on a countable collection of measure zero sets. Let $C_i$ be defined as the set $\{z_{t/2}+\eta_i+\mu_i=0\}\cup\{z_{t/2}-\eta_i-\mu_i=0\}$. On the set $C_i^c$, $\big(\phi(c_{1,i})-\phi(c_{2,i})\big)a_i$ converges to zero as $a_i\rightarrow\infty$. Therefore, $\big(\phi(c_{1,i})-\phi(c_{2,i})\big)\big(\phi(c_{1,j})-\phi(c_{2,j})\big)a_ia_j$ is bounded by some constant on $(\bigcup_{i=1}^pC_i)^c$.
By the Cauchy-Schwarz inequality and $(C0)$ in Theorem 1, $p^{-2}\sum_{i,j}|cov_{ij}^k|=O(p^{-\delta})$. Also, since $|cov_{ij}^k|\leq1$, we have $|cov_{ij}^k|^{3/2}\leq|cov_{ij}^k|$. On the set $(\bigcup_{i=1}^pC_i)^c$, we conclude that $${\mbox{Var}}\Big(p^{-1}\sum_{i=1}^pI(P_i\leq t|W_1,\cdots,W_k)\Big)=O_p(p^{-\delta}).$$ Hence by Lemma 1, for fixed $(w_1,\cdots,w_k)^T$, $$\label{g1}
p^{-1}\sum_{i=1}^p\big\{I(P_i\leq t|W_1=w_1,\cdots,W_k=w_k)
-P(P_i\leq t|W_1=w_1,\cdots,W_k=w_k)\big\}\stackrel{p\to\infty}{\longrightarrow}0\ \text{a.s.}.$$ If we define the probability space on which $(W_1,\cdots,W_k)$ and $(K_1,\cdots,K_p)$ are constructed as in (10) to be $(\Omega, \mathcal{F},\nu)$, with $\mathcal{F}$ and $\nu$ the associated $\sigma$-algebra and (Lebesgue) measure, then in a more formal way, $(\ref{g1})$ is equivalent to $$p^{-1}\sum_{i=1}^p\big\{I(P_i(\omega)\leq t|W_1=w_1,\cdots,W_k=w_k)
-P(P_i\leq t|W_1=w_1,\cdots,W_k=w_k)\big\}\stackrel{p\to\infty}{\longrightarrow}0$$ for each fixed $(w_1,\cdots,w_k)^T$ and almost every $\omega\in\Omega$, leading further to $$p^{-1}\sum_{i=1}^p\big\{I(P_i(\omega)\leq t)
-P(P_i\leq t|W_1(\omega),\cdots,W_k(\omega))\big\}\stackrel{p\to\infty}{\longrightarrow}0$$ for almost every $\omega\in\Omega$, which is the definition for $$p^{-1}\sum_{i=1}^p\big\{I(P_i\leq t)
-P(P_i\leq t|W_1,\cdots,W_k)\big\}\stackrel{p\to\infty}{\longrightarrow}0\ \text{a.s.}.$$ Therefore, $$\lim_{p\to\infty}p^{-1}\sum_{i=1}^p\Big\{I(P_i\leq t)
-\big [\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\big ]\Big\}=0\ \text{a.s.}.$$ With the same argument we can show $$\lim_{p\to\infty}p_0^{-1}\Big\{V(t)-\sum_{i\in\{\text{true null}\}}
\big [\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\big ]\Big\}=0\ \text{a.s.}$$ for the high dimensional sparse case. The proof of Proposition 2 is now complete.\
**Proof of Theorem 1:**\
For ease of notation, denote $\sum_{i=1}^p\big [\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\big ]$ by $\tilde R(t)$ and $\sum_{i\in\{\text{true null}\}}\big [\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\big ]$ by $\tilde V(t)$. Then
$$\begin{array}{rl}
&\displaystyle\lim_{p\to\infty}\Big \{\text{FDP}(t)
-\displaystyle\frac{\sum_{i\in\{\text{true null}\}}\big [\Phi(a_i(z_{t/2}+\eta_i))+\Phi(a_i(z_{t/2}-\eta_i))\big ]}
{\sum_{i=1}^p\big [\Phi(a_i(z_{t/2}+\eta_i+\mu_i))+\Phi(a_i(z_{t/2}-\eta_i-\mu_i))\big ]}\Big\} \\
= &\displaystyle\lim_{p\to\infty}\Big\{\displaystyle\frac{V(t)}{R(t)}-\displaystyle\frac{\tilde V(t)}{\tilde R(t)}\Big\} \\
= &\displaystyle\lim_{p\to\infty}\displaystyle\frac{(V(t)/p_0)[(\tilde R(t)-R(t))/p]+(R(t)/p)[(V(t)-\tilde V(t))/p_0]}
{R(t)\tilde R(t)/(p_0p)}\\
= &0\ \text{a.s.}
\end{array}$$
by the results in Proposition 2 and the fact that both $p_0^{-1}V(t)$ and $p^{-1}R(t)$ are bounded random variables. The proof of Theorem 1 is complete.
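As an illustration (not part of the proof), the following Python sketch simulates a one-factor special case of the model and compares the realised $\mbox{FDP}(t)$ with the limiting expression appearing in Theorem 1. The number of tests, the signal strength and the factor loadings below are arbitrary choices, and `numpy`/`scipy` are assumed available.

```python
# Monte Carlo sketch of Theorem 1 under a one-factor Gaussian model:
# Z_i = mu_i + b_i*W + sqrt(1-b_i^2)*eps_i, p-values P_i = 2*Phi(-|Z_i|).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p, p1, t = 5000, 250, 0.01            # p tests, p1 non-nulls, threshold t
mu = np.zeros(p); mu[:p1] = 2.5       # signal strength for the non-nulls
b = rng.uniform(0.3, 0.7, size=p)     # factor loadings, |b_i| < 1
a = 1.0 / np.sqrt(1.0 - b ** 2)
z_half = norm.ppf(t / 2.0)            # z_{t/2} (negative)

W = rng.standard_normal()             # the common factor
Z = mu + b * W + rng.standard_normal(p) / a
pval = 2.0 * norm.cdf(-np.abs(Z))

null = np.arange(p) >= p1
R = np.sum(pval <= t)
V = np.sum((pval <= t) & null)
fdp = V / max(R, 1)

eta = b * W
prob = norm.cdf(a * (z_half + eta + mu)) + norm.cdf(a * (z_half - eta - mu))
fdp_approx = prob[null].sum() / prob.sum()

print(fdp, fdp_approx)                # close for large p, up to Monte Carlo error
```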
**Proof of Theorem 2:** Letting $$\begin{aligned}
\Delta_1&=&\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}+{\mbox{\bf b}}_i^T{\widehat {\mbox{\bf w}}}))-\Phi(a_i(z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}))\Big]\quad\quad\text{and}\\
\Delta_2&=&\sum_{i=1}^p\Big[\Phi(a_i(z_{t/2}-{\mbox{\bf b}}_i^T{\widehat {\mbox{\bf w}}}))-\Phi(a_i(z_{t/2}-{\mbox{\bf b}}_i^T{\mbox{\bf w}}))\Big],\end{aligned}$$ we have $$\widehat{{\mbox{FDP}}}(t)-{\mbox{FDP}}_A(t)=(\Delta_1+\Delta_2)/R(t).$$ Consider $\Delta_1=\sum_{i=1}^p\Delta_{1i}$. By the mean value theorem, there exists $\xi_i$ between ${\mbox{\bf b}}_i^T{\widehat {\mbox{\bf w}}}$ and ${\mbox{\bf b}}_i^T{\mbox{\bf w}}$ such that $\Delta_{1i}=\phi(a_i(z_{t/2}+\xi_i))a_i{\mbox{\bf b}}_i^T({\widehat {\mbox{\bf w}}}-{\mbox{\bf w}})$, where $\phi(\cdot)$ is the standard normal density function.
Next we will show that $\phi(a_i(z_{t/2}+\xi_i))a_i$ is bounded by a constant. Without loss of generality, we consider the case in (C2) where $z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}<-\tau$. By (C3), we can choose $p$ sufficiently large that $z_{t/2}+\xi_i<-\tau/2$. The function $g(a)=a\exp(-a^2x^2/8)$ is maximized at $a=2/x$. Therefore, $$\sqrt{2\pi}\phi(a_i(z_{t/2}+\xi_i))a_i<a_i\exp(-a_i^2\tau^2/8)\leq2\exp(-1/2)/\tau.$$ The case $z_{t/2}+{\mbox{\bf b}}_i^T{\mbox{\bf w}}>\tau$ is analogous. In both cases there is a constant $D$ such that $\phi(a_i(z_{t/2}+\xi_i))a_i\leq D$.
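This elementary bound is easy to confirm numerically; the grid check below (with an arbitrary illustrative value of $\tau$) is not a proof but shows the maximiser and the maximum agree with the values used above.

```python
# Grid check of the bound: a*exp(-a^2*tau^2/8) is maximized at a = 2/tau,
# with maximum value 2*exp(-1/2)/tau (tau is an arbitrary illustrative value).
import numpy as np

tau = 0.5
a = np.linspace(1e-3, 50, 200000)
g = a * np.exp(-a ** 2 * tau ** 2 / 8.0)
print(g.max(), 2.0 * np.exp(-0.5) / tau)   # maximum value vs. the claimed bound
print(a[g.argmax()], 2.0 / tau)            # maximiser vs. the claimed 2/tau
```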
By the Cauchy-Schwartz inequality, we have $\sum_{i=1}^p|b_{ih}|\leq(p\sum_{i=1}^pb_{ih}^2)^{1/2}=(p\lambda_h)^{1/2}$. Therefore, by the Cauchy-Schwartz inequality and the fact that $\sum_{h=1}^k\lambda_h<p$, we have $$\begin{aligned}
|\Delta_{1}|&\leq&D\sum_{i=1}^p\Big[\sum_{h=1}^k|b_{ih}||\widehat{w}_h-w_h|\Big]\\
&\leq&D\sum_{h=1}^k(p\lambda_h)^{1/2}|\widehat{w}_h-w_h|\\
&\leq&D\sqrt{p}\Big(\sum_{h=1}^k\lambda_h\sum_{h=1}^k(\widehat{w}_h-w_h)^2\Big)^{1/2}\\
&<&Dp\|{\widehat {\mbox{\bf w}}}-{\mbox{\bf w}}\|_2.\end{aligned}$$ By (C1) in Theorem 2, $R(t)/p>H$ for some constant $H>0$ as $p\rightarrow\infty$. Therefore, $|\Delta_{1}/R(t)|=O(\|\widehat{{\mbox{\bf w}}}-{\mbox{\bf w}}\|_2)$. The same bound holds for $\Delta_{2}$. The proof of Theorem 2 is now complete.
**Proof of Theorem 3:** Without loss of generality, we assume that the true value of ${\mbox{\bf w}}$ is zero, and we need to prove $\|{\widehat {\mbox{\bf w}}}\|_2=O_p(\sqrt{\frac{k}{m}})$. Let $L: \mathbb{R}^k\rightarrow \mathbb{R}^k$ be defined by $$L_j({\mbox{\bf w}})=m^{-1}\sum_{i=1}^m b_{ij}sgn(K_i-{\mbox{\bf b}}_i^T{\mbox{\bf w}})$$ where $sgn(x)$ is the sign function of $x$ and equals zero when $x=0$. Then we want to prove that there is a root ${\widehat {\mbox{\bf w}}}$ of the equation $L({\mbox{\bf w}})=0$ satisfying $\|{\widehat {\mbox{\bf w}}}\|_2^2=O_p(k/m)$. By a classical convexity argument, it suffices to show that with high probability, ${\mbox{\bf w}}^TL({\mbox{\bf w}})<0$ when $\|{\mbox{\bf w}}\|_2^2=Bk/m$ for a sufficiently large constant $B$.
Let $V={\mbox{\bf w}}^TL({\mbox{\bf w}})=m^{-1}\sum_{i=1}^mV_i$, where $V_i=({\mbox{\bf b}}_i^T{\mbox{\bf w}})sgn(K_i-{\mbox{\bf b}}_i^T{\mbox{\bf w}})$. By Chebyshev’s inequality, $P(V<E(V)+h\times {\mbox{SD}}(V))>1-h^{-2}$. Therefore, to prove the result in Theorem 3, we want to derive the upper bounds for $E(V)$ and ${\mbox{SD}}(V)$ and show that $\forall h>0$, $\exists B$ and $M$ s.t. $\forall m>M$, $P(V<0)>1-h^{-2}$.
We will first present a result from Polya (1945), which will be very useful for our proof. For $x>0$, $$\label{a70}
\Phi(x)=\frac{1}{2}\Big[1+\sqrt{1-\exp(-\frac{2}{\pi}x^2)}\Big](1+\delta(x)) \ \ \ \text{with} \ \ \sup_{x>0}|\delta(x)|<0.004.$$ The variance of $V$ is shown as follows: $${\mbox{Var}}(V)=m^{-2}\sum_{i=1}^m{\mbox{Var}}(V_i)+m^{-2}\sum_{i\neq j}{\mbox{Cov}}(V_i,V_j).$$ Write ${\mbox{\bf w}}=s{\mbox{\bf u}}$ with $\|{\mbox{\bf u}}\|_2=1$ where $s=(Bk/m)^{1/2}$. By (C5), (C6) and (C7) in Theorem 3, for sufficiently large $m$, $$\begin{aligned}
\label{d1}
\sum_{i=1}^m{\mbox{Var}}(V_i)&=&\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d){\mbox{Var}}(V_i)+\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d){\mbox{Var}}(V_i)\nonumber \\
&=&\Big[\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d){\mbox{Var}}(V_i)\Big](1+o(1)),\end{aligned}$$ and $$\begin{aligned}
\label{d2}
\sum_{i\neq j}{\mbox{Cov}}(V_i,V_j)&=&\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|\leq d){\mbox{Cov}}(V_i,V_j)\nonumber \\
&&+2\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)\nonumber\\
&&+\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)\nonumber\\
&=&\Big[\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)\Big](1+o(1)).\end{aligned}$$ We will prove (\[d1\]) and (\[d2\]) in detail at the end of the proof of Theorem 3.
For each pair of $V_i$ and $V_j$, it is easy to show that $${\mbox{Cov}}(V_i,V_j)=4({\mbox{\bf b}}_i^T{\mbox{\bf w}})({\mbox{\bf b}}_j^T{\mbox{\bf w}})\Big[P(K_i<{\mbox{\bf b}}_i^T{\mbox{\bf w}},K_j<{\mbox{\bf b}}_j^T{\mbox{\bf w}})-\Phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\Phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})\Big].$$ The above formula includes ${\mbox{Var}}(V_i)$ as a special case.
By Polya’s approximation (\[a70\]), $$\label{d3}
{\mbox{Var}}(V_i)=({\mbox{\bf b}}_i^T{\mbox{\bf w}})^2\exp\Big\{-\frac{2}{\pi}(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})^2\Big\}(1+\delta_i) \ \ \text{with} \ |\delta_i|<0.004.$$ Hence $$\begin{aligned}
\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d){\mbox{Var}}(V_i)&\leq&\sum_{i=1}^ms^2\exp\Big\{-\frac{2}{\pi}(a_ids)^2\Big\}(1+\delta_i)\\
&\leq&2ms^2\exp\Big\{-\frac{2}{\pi}(a_{\min}ds)^2\Big\}.\end{aligned}$$ To compute ${\mbox{Cov}}(V_i,V_j)$, we have $$\begin{aligned}
&&P(K_i<{\mbox{\bf b}}_i^T{\mbox{\bf w}},K_j<{\mbox{\bf b}}_j^T{\mbox{\bf w}})\\
&=&\int_{-\infty}^{\infty}\Phi\Big(\frac{(|\rho_{ij}^k|)^{1/2}z+a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}}}{(1-|\rho_{ij}^k|)^{1/2}}\Big)\Phi\Big(\frac{\delta_{ij}^k(|\rho_{ij}^k|)^{1/2}z+a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}}}{(1-|\rho_{ij}^k|)^{1/2}}\Big)\phi(z)dz\\
&=&\Phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\Phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})+\phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})a_ia_jcov_{ij}^k(1+o(1)),\end{aligned}$$ where $\delta_{ij}^k=1$ if $\rho_{ij}^k\geq0$ and $-1$ otherwise. Therefore, $$\label{d4}
{\mbox{Cov}}(V_i,V_j)= 4({\mbox{\bf b}}_i^T{\mbox{\bf w}})({\mbox{\bf b}}_j^T{\mbox{\bf w}})\phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})\phi(a_j{\mbox{\bf b}}_j^T{\mbox{\bf w}})a_ia_jcov_{ij}^k(1+o(1)),$$ and $$\begin{aligned}
&&|\sum_{i\neq j}I(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)I(|{\mbox{\bf b}}_j^T{\mbox{\bf u}}|>d){\mbox{Cov}}(V_i,V_j)|\\
&<&\sum_{i\neq j}s^2\exp\Big\{-(a_{\min}ds)^2\Big\}a_{\max}^2|cov_{ij}^k|(1+o(1)).\end{aligned}$$ Consequently, we have $${\mbox{Var}}(V)<\frac{2}{m}s^2\exp\Big\{-\frac{2}{\pi}(a_{\min}ds)^2\Big\}a_{\max}^2\Big[\frac{1}{m}\sum_i\sum_j|cov_{ij}^k|\Big].$$ We apply (C4) in Theorem 3 and the Cauchy-Schwarz inequality to get $\frac{1}{m}\sum_i\sum_j|cov_{ij}^k|\leq(\sum_{i=k+1}^p\lambda_i^2)^{1/2}$ $\leq\eta^{1/2}$, and conclude that the standard deviation of $V$ is bounded by $$\sqrt{2}sm^{-1/2}\exp\Big\{-\frac{1}{\pi}(a_{\min}ds)^2\Big\}a_{\max}(\eta)^{1/4}.$$ In the derivations above, we used the fact that $|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq\|{\mbox{\bf b}}_i\|_2<1$, and that the covariance matrix of $K_i$ in (21) of the paper is a submatrix of the covariance matrix of $K_i$ in (10).
Next we will show that $E(V)$ is bounded from above by a negative constant. Using $x(\Phi(x)-\frac{1}{2})\geq0$, we have $$\begin{aligned}
-E(V)&=&\frac{2}{m}\sum_{i=1}^m{\mbox{\bf b}}_i^T{\mbox{\bf w}}\Big[\Phi(a_i{\mbox{\bf b}}_i^T{\mbox{\bf w}})-\frac{1}{2}\Big]\\
&\geq&\frac{2ds}{m}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|>d)\Big[\Phi(a_ids)-\frac{1}{2}\Big]\\
&=&\frac{2ds}{m}\sum_{i=1}^m\Big[\Phi(a_ids)-\frac{1}{2}\Big]-\frac{2ds}{m}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)\Big[\Phi(a_ids)-\frac{1}{2}\Big].\end{aligned}$$ By (C5) in Theorem 3, $\frac{1}{m}\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)\rightarrow0$, so for sufficiently large $m$, we have $$-E(V)\geq \frac{ds}{m}\sum_{i=1}^m\Big[\Phi(a_ids)-\frac{1}{2}\Big].$$ An application of (\[a70\]) to the right hand side of the last line leads to $$\begin{aligned}
-E(V)\geq \frac{ds}{m}\sum_{i=1}^m\frac{1}{2}\sqrt{1-\exp\big\{-\frac{2}{\pi}(a_{\min}ds)^2\big\}}.\end{aligned}$$ Note that $$1-\exp(-\frac{2}{\pi}x^2)=\frac{2}{\pi}x^2\sum_{l=0}^{\infty}\frac{1}{(l+1)!}(-\frac{2}{\pi}x^2)^l>\frac{2}{\pi}x^2\sum_{l=0}^{\infty}\frac{1}{l!}(-\frac{1}{\pi}x^2)^l=\frac{2}{\pi}x^2\exp(-\frac{1}{\pi}x^2),$$ so we have $$-E(V)\geq \frac{d^2}{2}s^2\sqrt{\frac{2}{\pi}}a_{\min}\exp\Big\{-\frac{1}{2\pi}(a_{\min}ds)^2\Big\}.$$ To show that $\forall h>0$, $\exists B$ and $M$ s.t. $\forall m>M$, $P(V<0)>1-h^{-2}$, by Chebyshev’s inequality and the upper bounds derived above, it is sufficient to show that $$\frac{d^2}{2}s^2\sqrt{\frac{1}{\pi}}a_{\min}\exp\Big\{-\frac{1}{2\pi}(a_{\min}ds)^2\Big\}>hsm^{-1/2}\exp\Big\{-\frac{1}{\pi}(a_{\min}ds)^2\Big\}a_{\max}\eta^{1/4}.$$ Recall $s=(Bk/m)^{1/2}$, after some algebra, this is equivalent to show $$d^2(Bk)^{1/2}(\pi)^{-1/2}\exp\Big\{\frac{1}{2\pi}(a_{\min}ds)^2\Big\}>2h\eta^{1/4}\frac{a_{\max}}{a_{\min}}.$$ By (C6), then for all $h>0$, when $B$ satisfies $d^2(Bk)^{1/2}(\pi)^{-1/2}>2h\eta^{1/4}S$, we have $P(V<0)>1-h^{-2}$. Note that $k=O(m^{\kappa})$ and $\eta=O(m^{2\kappa})$, so $k^{-1/2}\eta^{1/4}=O(1)$. To complete the proof of Theorem 3, we only need to show that (\[d1\]) and (\[d2\]) are correct.
To prove (\[d1\]), by (\[d3\]) we have $$\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d){\mbox{Var}}(V_i)\leq\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d)s^2d^2,$$ and $$\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}| > d){\mbox{Var}}(V_i)\geq\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}| > d)s^2d^2\exp\Big\{-\frac{2}{\pi}a_{\max}^2s^2\Big\}.$$ Recall $s=(Bk/m)^{1/2}$, then by (C6) and (C7), $\exp\Big\{\frac{2}{\pi}a_{\max}^2s^2\Big\}=O(1)$. Therefore, by (C5) we have $$\frac{\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}|\leq d){\mbox{Var}}(V_i)}{\sum_{i=1}^mI(|{\mbox{\bf b}}_i^T{\mbox{\bf u}}| > d){\mbox{Var}}(V_i)}\rightarrow0 \ \ \text{as} \ \ m\rightarrow\infty,$$ so (\[d1\]) is correct. With the same argument and by (\[d4\]), we can show that (\[d2\]) is also correct. The proof of Theorem 3 is now complete.
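Polya's approximation (\[a70\]) was used repeatedly in the proof above; the following grid check (illustrative, not a formal verification, and assuming `scipy`) confirms the stated accuracy of the approximation.

```python
# Grid check of the relative accuracy of Polya's approximation to Phi(x) for x > 0.
import numpy as np
from scipy.stats import norm

x = np.linspace(1e-6, 10, 1_000_000)
approx = 0.5 * (1.0 + np.sqrt(1.0 - np.exp(-2.0 * x ** 2 / np.pi)))
delta = norm.cdf(x) / approx - 1.0
print(np.abs(delta).max())   # prints a value below 0.004, consistent with the bound
```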
**Proof of Theorem 4:** Note that $\|\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}-\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}^*\|_2=\|({\mbox{\bf X}}^T{\mbox{\bf X}})^{-1}{\mbox{\bf X}}^T{\mbox{\boldmath $\mu$}}\|_2$. By the definition of ${\mbox{\bf X}}$, we have ${\mbox{\bf X}}^T{\mbox{\bf X}}=\Lambda$, where $\Lambda={\mathrm{diag}}(\lambda_1,\cdots,\lambda_k)$. Therefore, by the Cauchy-Schwartz inequality, $$\|\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}-\widehat{{\mbox{\bf W}}}_{{\mbox{\scriptsize LS}}}^*\|_2=\Big[\sum_{i=1}^k\big(\frac{\sqrt{\lambda_i}{\mbox{\boldmath $\gamma$}}_i^T{\mbox{\boldmath $\mu$}}}{\lambda_i}\big)^2\Big]^{1/2}\leq\|{\mbox{\boldmath $\mu$}}\|_2\Big(\sum_{i=1}^k\frac{1}{\lambda_i}\Big)^{1/2}$$ The proof is complete.
[99]{}
Barras, L., Scaillet, O. and Wermers, R. (2010). False Discoveries in Mutual Fund Performance: Measuring Luck in Estimated Alphas. [*Journal of Finance*]{}, [**65**]{}, 179-216.
Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. [*Journal of the Royal Statistical Society, Series B*]{}, [**57**]{}, 289-300.
Benjamini, Y. and Yekutieli, D. (2001). The Control of the False Discovery Rate in Multiple Testing Under Dependency. [*The Annals of Statistics*]{}, [**29**]{}, 1165-1188.
Bradic, J., Fan, J. and Wang, W. (2010). Penalized Composite Quasi-Likelihood For Ultrahigh-Dimensional Variable Selection. [*Journal of the Royal Statistical Society, Series B*]{}, [**73(3)**]{}, 325-349.
Clarke, S. and Hall, P. (2009). Robustness of Multiple Testing Procedures Against Dependence. [*The Annals of Statistics*]{}, [**37**]{}, 332-358.
Delattre, S. and Roquain, E. (2011). On the False Discovery Proportion Convergence under Gaussian Equi-Correlation. [*Statistics and Probability Letters*]{}, [**81**]{}, 111-115.
Deutsch, S., Lyle, R., Dermitzakis, E.T., Attar, H., Subrahmanyan, L., Gehrig, C., Parand, L., Gagnebin, M., Rougemont, J., Jongeneel, C.V. and Antonarakis, S.E. (2005). Gene expression variation and expression quantitative trait mapping of human chromosome 21 genes. [*Human Molecular Genetics*]{}, [**14**]{}, 3741-3749.
Efron, B. (2007). Correlation and Large-Scale Simultaneous Significance Testing. [*Journal of the American Statistical Association*]{}, [**102**]{}, 93-103.
Efron, B. (2010). Correlated Z-Values and the Accuracy of Large-Scale Statistical Estimates. [*Journal of the American Statistical Association*]{}, [**105**]{}, 1042-1055.
Fan, J., Guo, S. and Hao, N. (2010). Variance Estimation Using Refitted Cross-Validation in Ultrahigh Dimensional Regression. [*Journal of the Royal Statistical Society: Series B*]{} to appear.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. [*Journal of American Statistical Association*]{}, [**96**]{}, 1348-1360.
Fan, J. and Song, R. (2010). Sure Independence Screening in Generalized Linear Models with NP-Dimensionality. [*Annals of Statistics*]{}, [**38**]{}, 3567-3604.
Ferreira, J. and Zwinderman, A. (2006). On the Benjamini-Hochberg Method. [*Annals of Statistics*]{}, [**34**]{}, 1827-1849.
Farcomeni, A. (2007). Some Results on the Control of the False Discovery Rate Under Dependence. [*Scandinavian Journal of Statistics*]{}, [**34**]{}, 275-297.
Friguet, C., Kloareg, M. and Causeur, D. (2009). A Factor Model Approach to Multiple Testing Under Dependence. [*Journal of the American Statistical Association*]{}, [**104**]{}, 1406-1415.
Finner, H., Dickhaus, T. and Roters, M. (2007). Dependency and False Discovery Rate: Asymptotics. [*Annals of Statistics*]{}, [**35**]{}, 1432-1455.
Genovese, C. and Wasserman, L. (2004). A Stochastic Process Approach to False Discovery Control. [*Annals of Statistics*]{}, [**32**]{}, 1035-1061.
Leek, J.T. and Storey, J.D. (2008). A General Framework for Multiple Testing Dependence. [*PNAS*]{}, [**105**]{}, 18718-18723.
Lyons, R. (1988). Strong Laws of Large Numbers for Weakly Correlated Random Variables. [*The Michigan Mathematical Journal*]{}, [**35**]{}, 353-359.
Meinshausen, N. (2006). False Discovery Control for Multiple Tests of Association under General Dependence. [*Scandinavian Journal of Statistics*]{}, [**33(2)**]{}, 227-237.
Owen, A.B. (2005). Variance of the Number of False Discoveries. [*Journal of the Royal Statistical Society, Series B*]{}, [**67**]{}, 411-426.
Polya, G. (1945). Remarks on computing the probability integral in one and two dimensions. [*Proceedings of the First Berkeley Symposium on Mathematical Statistics and Probability*]{}, 63-78.
Portnoy, S. (1984a). Tightness of the sequence of c.d.f. processes defined from regression fractiles. [*Robust and Nonlinear Time Series Analysis*]{}. Springer-Verlag, New York, 231-246.
Portnoy, S. (1984b). Asymptotic behavior of M-estimators of $p$ regression parameters when $p^2/n$ is large; I. Consistency. [*Annals of Statistics*]{}, [**12**]{}, 1298-1309.
Roquain, E. and Villers, F. (2011). Exact Calculations For False Discovery Proportion With Application To Least Favorable Configurations. [*Annals of Statistics*]{}, [**39**]{}, 584-612.
Sarkar, S. (2002). Some Results on False Discovery Rate in Stepwise Multiple Testing Procedures. [*Annals of Statistics*]{}, [**30**]{}, 239-257.
Storey, J.D. (2002). A Direct Approach to False Discovery Rates. [*Journal of the Royal Statistical Society, Series B*]{}, [**64**]{}, 479-498.
Storey, J.D., Taylor, J.E. and Siegmund, D. (2004). Strong Control, Conservative Point Estimation and Simultaneous Conservative Consistency of False Discovery Rates: A Unified Approach. [*Journal of the Royal Statistical Society, Series B*]{}, [**66**]{}, 187-205.
Sun, W. and Cai, T. (2009). Large-scale multiple testing under dependency. [*Journal of the Royal Statistical Society, Series B*]{}, [**71**]{}, 393-424.
[^1]: Jianqing Fan is Frederick L. Moore’18 professor, Department of Operations Research & Financial Engineering, Princeton University, Princeton, NJ 08544, USA and honorary professor, School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, China (Email: jqfan@princeton.edu). Xu Han is assistant professor, Department of Statistics, University of Florida, Florida, FL 32606 (Email: xhan@princeton.edu). Weijie Gu is graduate student, Department of Operations Research & Financial Engineering, Princeton University, Princeton, NJ 08544 (Email: wgu@princeton.edu). The paper was completed while Xu Han was a postdoctoral fellow at Princeton University. This research was partly supported by NSF Grants DMS-0704337 and DMS-0714554 and NIH Grant R01-GM072611. The authors are grateful to the editor, associate editor and referees for helpful comments.
---
abstract: 'A *perfect matroid design (PMD)* is a matroid whose flats of the same rank all have the same size. In this paper we introduce the $q$-analogue of a PMD and its properties. In order to do that, we first establish a new cryptomorphic definition for $q$-matroids. We show that $q$-Steiner systems are examples of $q$-PMD’s and we use this structure to construct subspace designs from $q$-Steiner systems. We apply this construction to Steiner systems and hence establish the existence of subspace designs with previously unknown parameters.'
address:
- |
School of Mathematics and Statistics, University College Dublin, Ireland\
ebyrne@ucd.ie
- |
Department of Computer Science - University of Milan - via Celoria 18, Milan, Italy\
michela.ceria@gmail.com\
- |
University of Picardie Jules Verne, 33 rue Saint Leu Amiens, 80039, France\
sorina.ionica@u-picardie.fr\
- |
Faculty of Military Science, Netherlands Defence Academy, The Netherlands\
RPMJ.Jurrius@mindef.nl
- |
Institute of Mathematics, University of Zurich, Switzerland\
elif.sacikara@math.uzh.ch\
author:
- Eimear Byrne
- Michela Ceria
- Sorina Ionica
- Relinde Jurrius
- Elif Saçikara
bibliography:
- 'References.bib'
title: 'Constructions of new Matroids and Designs over $\mathbb{F}_q$'
---
Introduction
============
In combinatorics, we often describe a $q$-analogue of a concept or theory to be any generalization which replaces finite sets by vector spaces over the finite field of $q$ elements, $\mathbb{F}_q$. Two classical topics in combinatorics that have recently been studied as $q$-analogues are matroids and designs. These objects and some of the connections between them are the main focus of this paper.
A subspace design (also called a $q$-design, or a design over ${\mathbb{F}}_q$) is a $q$-analogue of a design. A $t$-$(n,k,\lambda;q)$ subspace design is a collection $\mathcal{B}$ of $k$-dimensional subspaces of an $n$-dimensional vector space $V$ with the property that every $t$-dimensional subspace of $V$ is contained in exactly $\lambda$ of the members of $\mathcal{B}$. Explicit constructions of subspace designs have proved so far to be more elusive than their classical counterparts. Early papers by Thomas, Suzuki, and Itoh have provided some examples of infinite families of subspace designs [@thomas; @suzuki2design92; @itoh], while in [@largesets] an approach to the problem using [*large sets*]{} is given. A $q$-analogue of the Assmus-Mattson theorem gives a general construction of subspace designs from coding theory [@byrne2019assmus]. Further sporadic examples have been found by assuming a prescribed automorphism group of the subspace design [@braun2018q]. For the special case $\lambda=1$ we call such a design a $q$-Steiner system and write $S(t,k,n;q)$. The actual existence of an $S(t,k,n;q)$ Steiner system for $t>1$, was established for the first time when $S(2,3,13;2)$ designs were discovered by Braun *et al* [@qSteiner_exists_2016]. No other examples have been found to date. The smallest open case is that of the $S(2,3,7;q)$ Steiner system, also known as the $q$-analogue of the Fano plane.
While subspace designs have been intensively studied over the last decade [@braun2018q], $q$-analogues of matroids were only considered recently [@JP18; @gorla2019rank]. In fact, the $q$-matroid defined in [@JP18] was a re-discovery of a combinatorial object already studied by Crapo [@crapo1964theory]. Classical matroids are a generalisation of several ideas in combinatorics, such as independence in vector spaces and trees in graph theory. One of the important properties of matroids is that there are equivalent, yet seemingly different ways to define them: in terms of their independent sets, flats, circuits, bases, closure operator and rank function. We call these equivalent definitions *cryptomorphisms*. Up to this day, the only known cryptomorphisms for $q$-matroids are those between independent subspaces, the rank function, and bases [@JP18]. In [@BCJ17] the cryptomorphism via bi-colouring of the subspace lattice is discussed. In this paper we will extend these results with a new cryptomorphism, namely the definition of a $q$-matroid in terms of its flats. In the classical case, there is a link between designs and matroids, given by the so-called [*perfect matroid designs*]{} (PMDs). PMDs are matroids for which flats of the same rank have the same cardinality. They were studied by Murty and others in [@murty1970equicardinal] and [@Young], who showed in particular that Steiner systems are among the few examples of PMDs and, more importantly, that they could be applied to construct new designs. In this paper we obtain $q$-analogues of some of these results.
First, we extend the theory of $q$-matroids to include a new cryptomorphism, namely that between flats and the rank function. We apply this cryptomorphism to obtain the first examples of $q$-PMDs; in particular we show that $q$-Steiner systems are $q$-PMDs. Secondly, using the $q$-matroid structure of the $q$-Steiner system, we derive new subspace designs. This leads in some cases to designs with parameters not previously known. On the other hand, interestingly, some of the parameters of the designs we obtain from the putative $q$-Fano plane coincide with those obtained by Braun *et al* [@braun2005systematic]. By characterising the automorphism group of the designs that we obtained, we show that the subspace designs of [@braun2005systematic] cannot be derived from the $q$-Fano plane via our construction.
This paper is organised as follows. After some preliminary notions in Section \[sec-preliminaries\], we prove in Section \[sec-cryptomorphisms\] the above-mentioned new cryptomorphism for $q$-matroids. An overview of the different (but equivalent) ways to define $q$-matroids is found at the end of this section. In Section \[qPMD\] we prove that $q$-Steiner systems are examples of the $q$-analogue of a perfect matroid design. We use this to derive new designs from the $q$-Steiner system, using its $q$-matroid structure and its flats, independent spaces, and circuits.
Preliminaries {#sec-preliminaries}
=============
In this section, we bring together certain fundamental definitions on lattices, $q$-matroids and subspace designs, respectively. Throughout the paper, ${\mathbb{F}_q}$ will denote the finite field of $q$ elements, $n$ will be a fixed positive integer and $E$ will denote the $n$-dimensional vector space $\mathbb{F}_q^n$.
Lattices
--------
Let us first recall preliminaries on lattices. The reader is referred to Stanley [@stanley:1997] for further details.
Let $(\mathcal{L}, \leq)$ be a partially ordered set. Let $a,b,v \in \mathcal{L}$. We say that $v$ is an [**upper bound**]{} of $a$ and $b$ if $a\leq v$ and $b\leq v$; furthermore, we say that $v$ is a [**least upper bound**]{} if $v \leq u$ for any $u \in \mathcal{L}$ that is also an upper bound of $a$ and $b$. If a least upper bound of $a$ and $b$ exists, then it is unique; it is denoted by $a\vee b$ and called the [**join**]{} of $a$ and $b$. We analogously define [*a lower bound*]{} and [*the greatest lower bound*]{} of $a$ and $b$ and denote the unique greatest lower bound of $a$ and $b$ by $a\wedge b$, which is called the [**meet**]{} of $a$ and $b$. The poset $\mathcal{L}$ is called a *lattice* if each pair of elements has a least upper bound and a greatest lower bound; it is then denoted by $(\mathcal{L}, \leq, \vee, \wedge)$.
Of particular relevance to this paper is the subspace lattice $(\mathcal{L}(E), \leq, \vee, \wedge)$, which is the lattice of ${\mathbb{F}}_q$- subspaces of $E$, ordered with respect to inclusion and for which the join of a pair of subspaces is their vector space sum and the meet of a pair of subspaces is their intersection. That is, for all $A,B \subseteq E$ we have: $$A \leq B \Leftrightarrow A \subseteq B, A \vee B = A + B, A \wedge B = A \cap B.$$
Let $(\mathcal{L}, \leq, \vee, \wedge)$ be a lattice and let $a,b\in\mathcal{L}$ with $a\leq b$ and $a\neq b$. We say that $b$ [**covers**]{} $a$ if for all $c\in \mathcal{L}$ we have that $a\leq c \leq b$ implies that $c=a$ or $c=b$. A finite semimodular lattice $\mathcal{L}$ is a finite lattice with the property that if $a$ covers $a\wedge b$ then $a\vee b$ covers $b$.
Let $(\mathcal{L}, \leq, \vee, \wedge)$ be a lattice. A [**chain**]{} of length $r$ between two elements $a,b\in\mathcal{L}$ is a sequence of distinct elements $a_0,a_1,\ldots,a_r$ in $\mathcal{L}$ such that $a=a_0\leq a_1\leq\cdots\leq a_r=b$. If $a_{i+1}$ covers $a_i$ for all $i$, we call this a [**maximal chain**]{}.
A bijection $\phi: \mathcal{L} \to \mathcal{L}$ on a lattice $(\mathcal{L}, \leq, \vee, \wedge)$ is called an [**automorphism**]{} of $\mathcal{L}$ if one of the following equivalent conditions holds for all $a, b \in \mathcal{L}$:
- $a \leq b$ iff $\phi(a)\leq \phi(b)$,
- $\phi(a \vee b)= \phi(a) \vee \phi(b)$,
- $\phi(a \wedge b)= \phi(a) \wedge \phi(b)$.
$q$-Matroids
------------
For background on the theory of matroids we refer the reader to [@gordonmcnulty] or [@oxley]. For the $q$-analogue of a matroid we follow the treatment of Jurrius and Pellikaan [@JP18]. The definition of a $q$-matroid is a straightforward generalisation of the definition of a classical matroid in terms of its rank function.
\[rankfunction\] A $q$-matroid $M$ is a pair $(E,r)$ where $r$ is an integer-valued function defined on the subspaces of $E$ with the following properties:
- For every subspace $A\subseteq E$, $0\leq r(A) \leq \dim A$.
- For all subspaces $A\subseteq B \subseteq E$, $r(A)\leq r(B)$.
- For all $A,B$, $r(A+ B)+r(A\cap B)\leq r(A)+r(B)$.
The function $r$ is called the rank function of the $q$-matroid.
We list some examples of $q$-matroids [@JP18].
\[ex:uniform\]\[The uniform matroid\] Let $M=(E,r)$, where $$r(A)=\begin{cases}
\dim A & \text{, if } \dim A \leq k, \\
k & \text{, if } \dim A > k,
\end{cases}$$ for $0\leq k \leq n$ and a subspace $A$ of $E$. Then $M$ satisfies axioms (r1)-(r3) and is called the uniform $q$-matroid. We denote it by $U_{k,n}(\mathbb{F}_q)$.
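As an illustration, the rank function of $U_{k,n}(\mathbb{F}_2)$ is straightforward to implement. The following Python sketch (illustrative only; the parameters $n=6$, $k=3$ are arbitrary) encodes subspaces of $\mathbb{F}_2^n$ by lists of generating vectors written as integer bit masks, and checks axiom (r3) on random pairs using the dimension formula $\dim(A\cap B)=\dim A+\dim B-\dim(A+B)$.

```python
# Minimal sketch of the rank function of the uniform q-matroid U_{k,n}(F_2).
import random

def gf2_dim(vectors):
    """Dimension of the F_2-span of integer bit-mask vectors (XOR elimination)."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
    return len(basis)

def uniform_rank(vectors, k):
    """Rank in U_{k,n}(F_2) of the subspace spanned by `vectors`."""
    return min(gf2_dim(vectors), k)

n, k = 6, 3
random.seed(0)
for _ in range(1000):
    A = [random.randrange(1, 2 ** n) for _ in range(random.randrange(1, n + 1))]
    B = [random.randrange(1, 2 ** n) for _ in range(random.randrange(1, n + 1))]
    rA, rB = uniform_rank(A, k), uniform_rank(B, k)
    r_sum = uniform_rank(A + B, k)                      # A + B: concatenate generators
    dim_int = gf2_dim(A) + gf2_dim(B) - gf2_dim(A + B)  # dim(A ∩ B)
    r_int = min(dim_int, k)
    # (r1) and (r3) for the uniform rank function
    assert 0 <= rA <= gf2_dim(A) and r_sum + r_int <= rA + rB
```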
\[ex:rankmetric\]\[\] Let $G$ be a full rank $k\times n$ matrix over an extension field $\mathbb{F}_{q^m}$ of $\mathbb{F}_q$. For any subspace $A\subseteq E$ define the rank of $A$ to be $r(A) = \mathrm{rank}(GY)$ for any ${\mathbb{F}_q}$-matrix $Y$ whose columns span $A$. It can be shown that $(E,r)$ satisfies (r1)-(r3) and hence is a $q$-matroid.
In classical matroid theory, there are several definitions of a matroid in terms of the axioms of its independent spaces, bases, flats, circuits, etc. These equivalences, which are not immediately obvious, are referred to in the literature as *cryptomorphisms*. In this paper we will establish a new cryptomorphism for $q$-matroids. The reader is referred to Figure \[diagram\] at the end of Section \[sec-cryptomorphisms\] for an overview of known connections between cryptomorphisms of $q$-matroids. First, we define independent spaces, flats, and the closure function in terms of the rank function of a $q$-matroid.
\[independentspaces\] Let $(E,r)$ be a $q$-matroid. A subspace $A$ of $E$ is called [**independent**]{} if $$r(A)=\dim A.$$
A subspace that is not independent is called [**dependent**]{}. We call a subspace $C$ a [**circuit**]{} if it is itself dependent and every proper subspace of $C$ is independent.
\[flat\] A subspace $F$ of a $q$-matroid $(E,r)$ is called a [**flat**]{} if for all one-dimensional subspaces $x$ such that $x\nsubseteq F$ we have that $$r(F+x)>r(F).$$
We define the notion of a flat via axioms, without reference to a rank function.
\[flat-axioms\] Let $\mathcal{F} \subseteq \mathcal{L}(E)$. We define the following flat axioms:
- $E\in\mathcal{F}$.
- If $F_1\in\mathcal{F}$ and $F_2\in\mathcal{F}$, then $F_1\cap F_2\in\mathcal{F}$.
- For all $F\in\mathcal{F}$ and $x\subseteq E$ a one-dimensional subspace not contained in $F$, there is a unique $F'\in\mathcal{F}$ covering $F$ such that $x\subseteq F'$.
We write $(E,\mathcal{F})$ to denote a vector space $E$ together with a family of flats satisfying the flat axioms.
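The flat axioms can be verified directly on small examples. The Python sketch below enumerates all subspaces of $E=\mathbb{F}_2^4$ (encoded as frozensets of integer bit masks) and checks (F1)-(F3) for the family consisting of all subspaces of dimension at most $1$ together with $E$; it is easily checked from Definition \[flat\] that this is the family of flats of the uniform $q$-matroid $U_{2,4}(\mathbb{F}_2)$ of Example \[ex:uniform\]. The choice of parameters is purely illustrative.

```python
# Brute-force check of the flat axioms (F1)-(F3) over E = F_2^4.
from itertools import combinations

n, k = 4, 2
E = frozenset(range(2 ** n))

def span(gens):
    S = {0}
    for g in gens:
        S |= {s ^ g for s in S}
    return frozenset(S)

# all subspaces of E, obtained as spans of generating sets of size at most n
subspaces = {span(c) for r in range(n + 1) for c in combinations(range(1, 2 ** n), r)}

def dim(S):
    return len(S).bit_length() - 1        # |S| = 2^dim

flats = {S for S in subspaces if dim(S) <= k - 1} | {E}

# (F1): E is a flat;  (F2): intersections of flats are flats
assert E in flats
assert all(F1 & F2 in flats for F1 in flats for F2 in flats)

# (F3): for every flat F and 1-dimensional x not in F, exactly one cover of F contains x
def covers(F):
    above = [G for G in flats if F < G]
    return [G for G in above if not any(F < H < G for H in above)]

for F in flats:
    for v in range(1, 2 ** n):
        if v not in F:
            x = {0, v}
            assert sum(1 for G in covers(F) if x <= G) == 1

print("flat axioms verified:", len(flats), "flats")
```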
We will see in Section \[sec-cryptomorphisms\] that a space of flats $(E,\mathcal{F})$ completely determines a $q$-matroid. The following theorem summarizes important results from [@JP18].
\[th:JP18prel\] Let $(E,r)$ be a $q$-matroid and let $A,B\subseteq E$ and let $x,y \subseteq E$ each have dimension one. The following hold.
- $r(A+x) \leq r(A)+1$.
- If $r(A+z)=r(A)$ for each one-dimensional space $z\subseteq B$, then $r(A+B)=r(A)$.
- If $r(A+x)=r(A+y)=r(A)$ then $r(A+x+y)=r(A)$.
An interesting family of matroids, the PMDs, was introduced in [@murty1970equicardinal; @Young]. For more details on the classical case, the reader should refer to the work of Deza [@deza1992perfect]. We consider here a $q$-analogue of a PMD.
A $q$-[**perfect matroid design**]{} ($q$-PMD) is a $q$-matroid with the property that any two of its flats of the same rank have the same dimension.
Subspace designs
----------------
Given a pair of nonnegative integers $N$ and $M$, the $q$-[**binomial**]{} or [**Gaussian coefficient**]{} counts the number of $M$-dimensional subspaces of an $N$-dimensional subspace over ${\mathbb{F}_q}$ and is given by: $${\left[\begin{matrix} N \\ M \end{matrix} \right]_{q}}:=\prod_{i=0}^{M-1}\frac{q^N-q^i}{q^M-q^i}.$$
We write ${\left[\begin{matrix} E \\ k \end{matrix} \right]_{q}}$ to denote the set of all $k$-subspaces of $E$ (the $k$-Grassmanian of $E$).
Recall the following well-known result.
\[lem:folklore\] Let $s,t$ be positive integers satisfying $0\leq t \leq s \leq n$. The number of $s$-spaces of $E$ that contain a fixed $t$-space is given by ${\left[\begin{matrix} n-t \\ s-t \end{matrix} \right]_{q}}$.
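For concreteness, the Gaussian coefficient and the count in Lemma \[lem:folklore\] can be computed as follows; this is a small illustrative helper (exact arithmetic via `fractions`), and the parameter values in the example are arbitrary.

```python
# Gaussian (q-binomial) coefficients via the product formula, with exact arithmetic.
from fractions import Fraction

def gaussian_binomial(N, M, q):
    """Number of M-dimensional subspaces of F_q^N."""
    if M < 0 or M > N:
        return 0
    num = Fraction(1)
    for i in range(M):
        num *= Fraction(q ** N - q ** i, q ** M - q ** i)
    return int(num)

print(gaussian_binomial(7, 3, 2))          # 11811 three-spaces in F_2^7
print(gaussian_binomial(7 - 2, 3 - 2, 2))  # 31 of them contain a fixed 2-space
```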
We briefly recall the definition of a subspace design and well known examples of these combinatorial objects. The interested reader is referred to the survey [@braun2018q] and the references therein for a comprehensive treatment of designs over finite fields.
Let $1\leq t\leq k \leq n$ be integers and let $\lambda\geq 0$ be an integer. A $t$-$(n,k,\lambda;q)$ subspace design is a pair $(E,\mathcal{B})$, where $\mathcal{B}$ is a collection of subspaces of $E$ of dimension $k$, called blocks, with the property that every subspace of $E$ of dimension $t$ is contained in exactly $\lambda$ blocks.
[*Subspace designs*]{} are also known as [designs over finite fields]{}. A $q$-Steiner system is a $t$-$(n,k,1;q)$ subspace design and is said to have parameters $S(t,k,n;q)$. The $q$-Steiner triple systems are those with parameters $S(2,3,n;q)$ and are denoted by $STS(n;q)$. The $t$-$(n,k,\lambda;q)$ subspace designs with $t=1$ and $\lambda =1$ are examples of spreads.
A $q$-analogue of the Fano plane would be given by an $STS(7;q)$, whose existence is an open question for any $q$.
For a subspace $U$ of $E$ we define $U^\perp:=\{ v \in E : \langle u, v \rangle = 0 \ \text{for all} \ u \in U\}$ to be the orthogonal space of $U$ with respect to the standard inner product $\langle u, v \rangle=\sum_{i=1}^{n} u_i v_i$. We will use the notions of the supplementary and dual subspace designs [@suzuki1990inequalities; @KP].
\[lem:dualdesign\] Let $k,t,\lambda$ be positive integers and let $\mathcal{D}=(E, \mathcal{B})$ be a $t$-$(n,k,\lambda;q)$ design.
- The [**supplementary design**]{} of $\mathcal{D}$ is the subspace design $\left(E,{\left[\begin{matrix} E \\ k \end{matrix} \right]_{q}}- \mathcal{B}\right)$.\
It has parameters $t$-$\left(n,k,{\left[\begin{matrix} n-t \\ k-t \end{matrix} \right]_{q}}-\lambda;q\right)$.
- The [**dual design**]{} of $\mathcal{D}$ is given by $(E, {\mathcal B}^\perp)$, where ${{\color{black} { \mathcal B^\perp}}}:=\{ U^\perp : U \in {{\color{black} {\mathcal B}}} \}$. It has parameters $$t- \left(n,n-k,\lambda{\left[\begin{matrix} n-t \\ k \end{matrix} \right]_{q}}{\left[\begin{matrix} n-t \\ k-t \end{matrix} \right]_{q}}^{-1};q \right).$$
The [**intersection numbers**]{} $\lambda_{i,j}$ defined in Lemma \[intersection\_numbers\] were given in [@suzuki1990inequalities]. These design invariants play an important role in establishing non-existence of a design for a given set of parameters.
\[intersection\_numbers\] Let $k,t,\lambda$ be positive integers and let $\mathcal D$ be a $t$-$(n,k,\lambda;q)$ design. Let $I,J$ be $i,j$ dimensional subspaces of ${\mathbb{F}_q}^n$ satisfying $i+j \leq t$ and $I \cap J = \{0\}$. Then the number $$\lambda_{i,j}:=|\{ {{\color{black} { U \in \mathcal B}}} : I \subseteq U, \ J \cap U = \{0\} \}|,$$ depends only on $i$ and $j$, and is given by the formula $$\lambda_{i,j} = q^{j(k-i)}\lambda {\left[\begin{matrix} n-i-j \\ k-i \end{matrix} \right]_{q}}{\left[\begin{matrix} n-t \\ k-t \end{matrix} \right]_{q}}^{-1} .$$
By Lemma \[intersection\_numbers\], the existence of a $t$-$(n,k,\lambda;q)$ design implies the *integrality conditions*, namely that $\lambda_i={\lambda}_{i,0} \in \mathbb{Z}$ for all $i$.
A parameter set $t$-$(n,k,\lambda; q)$ is called [**admissible**]{} if it satisfies the integrality conditions and is called [**realisable**]{} if a $t$-$(n,k,\lambda; q)$ design exists.
It is well-known and follows directly from the integrality conditions that an $STS(n;q)$ is admissible if and only if $n\equiv 1 \text{ or } 3 \mod{6}$.
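The integrality conditions are straightforward to check computationally. The sketch below redefines the `gaussian_binomial` helper from the earlier sketch for self-containedness and verifies, for $q=2$ and small $n$, that admissibility of an $STS(n;q)$ coincides with the congruence condition above; the range of $n$ is an arbitrary illustrative choice.

```python
# Integrality check for q-Steiner triple systems STS(n;q):
# lambda_i = [n-i, k-i]_q / [n-t, k-t]_q must be an integer for i = 0, ..., t.
from fractions import Fraction

def gaussian_binomial(N, M, q):
    if M < 0 or M > N:
        return 0
    num = Fraction(1)
    for i in range(M):
        num *= Fraction(q ** N - q ** i, q ** M - q ** i)
    return int(num)

def lam(i, n, k, t, q):
    return Fraction(gaussian_binomial(n - i, k - i, q),
                    gaussian_binomial(n - t, k - t, q))

def sts_admissible(n, q):
    return all(lam(i, n, 3, 2, q).denominator == 1 for i in range(3))

for n in range(4, 20):
    print(n, sts_admissible(n, 2), n % 6 in (1, 3))   # the last two columns agree
```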
Finally, for a given subspace design $(E,\mathcal{B})$, an automorphism $\phi$ of $\mathcal{L}(E)$ is called an automorphism of the design if $\phi(\mathcal{B})= \mathcal{B}$. The automorphism group of a subspace design is equal to the automorphism group of its supplementary design and is in $1-1$ correspondence with that of the dual design. The automorphism group has been leveraged to construct new subspace designs using the Kramer-Mesner method [@KramerMesner]. If the number of orbits of the automorphism group is small enough, then the corresponding Diophantine system of equations can be solved in a feasible amount of time on a personal computer [@braun2005some; @braun2005systematic]. It is known that the $q$-Fano plane has automorphism group of order at most $2$ [@BraunKier; @KierKurz], so this method cannot be applied in this case.
A Cryptomorphism of $q$-Matroids {#sec-cryptomorphisms}
================================
In this section we provide a new cryptomorphic definition of a $q$-matroid, in terms of its flats. Recall that a flat of a $q$-matroid $(E,r)$ is a subspace $F$ such that for all one-dimensional spaces $x\not\subseteq F$ we have that $r(F+x)>r(F)$. We remark that the results of this section hold for $q$-matroids defined with respect to finite dimensional vector spaces over arbitrary fields.
\[Cover\] Let $F_1$ and $F_2$ be flats of a $q$-matroid. We say that $F_1$ [**covers**]{} $F_2$ if $F_2\subseteq F_1$ and there is no other flat $F'$ such that $F_2\subseteq F'\subseteq F_1$.
Before establishing a cryptomorphism between the $q$-matroid $(E,r)$ and its collection of flats, we prove some preliminary results.
\[Uguagl\] Let $(E,r)$ be a $q$-matroid with rank function $r$. Let $A\subseteq B$ be subspaces of $E$ and let $x$ be a one-dimensional subspace of $E$. If $r(B+x)=r(B)+1$ then $r(A+x)=r(A)+1$.
Suppose that $r(B+x)=r(B)+1$. Since $A \subseteq B$, we have $(A+x)+B = B+x$ and $A \subseteq (A+x) \cap B$. Therefore, by (r2) and applying (r3) to $A+x$ and $B$ we get: $$r(A+x)+r(B) \geq r((A+x)+B)+r((A+x)\cap B) \geq r(B+x)+r(A) =r(B)+1+ r(A),$$ and so $r(A+x) \geq r(A)+1$. By Theorem \[th:JP18prel\](i), $r(A+x)\leq r(A)+1$, and so we get the equality $r(A+x) = r(A)+1$.
\[F2dasolo\] If $F_1,F_2$ are two flats of a $q$-matroid $(E,r)$ then $F_1\cap F_2$ is also a flat.
Let $F:=F_1\cap F_2$ and take a one-dimensional space $x \nsubseteq F$; then $x\nsubseteq F_1$ or $x\nsubseteq F_2$, so say, without loss of generality, that $x \nsubseteq F_1$. Since $F_1$ is a flat, $r(F_1+x)>r(F_1)$, and by Theorem \[th:JP18prel\](i) this gives $r(F_1+x)=r(F_1)+1$. By Lemma \[Uguagl\], $r(F+x)=r(F)+1 > r(F)$, which implies that $F$ is a flat of $(E,r)$.
Let $\mathcal{F}$ be a collection of subspaces of $E$ and let $A\subseteq E$ be a subspace. We define the subspace $${{\color{black}{C_{\mathcal{F}}(A)}}}:=\cap \{F\in \mathcal{F} : A \subseteq F\} .$$
\[MinimalFlat\] Let $\mathcal{F}$ be a collection of subspaces of $E$ satisfying the axioms (F1)-(F3). Let $A \subseteq B \subseteq E$ be subspaces. Then $C_{\mathcal{F}}(A)$ is the unique flat in $\mathcal{F}$ such that:
1. $A \subseteq C_{\mathcal{F}}(A)$,
2. if $A \subseteq F\in \mathcal{F}$, then $C_{\mathcal{F}}(A)\subseteq F$,
3. $C_{\mathcal{F}}(A) \subseteq C_{\mathcal{F}}(B)$.
Since $E\in\mathcal{F}$ by (F1) and $\mathcal{F}$ is closed under intersections by (F2), $C_{\mathcal{F}}(A)$ is a well-defined element of $\mathcal{F}$ and is clearly uniquely determined. Properties (1) and (2) then follow immediately from the definition of $C_{\mathcal{F}}(A)$. For (3), if $B \subseteq F$ for some flat $F$ then $A \subseteq F$, and so clearly $C_{\mathcal{F}}(A) \subseteq C_{\mathcal{F}}(B)$.
If $\mathcal{F}$ is the set of flats of a $q$-matroid $(E,r)$, then from Lemma \[F2dasolo\], ${{\color{black}{C_{\mathcal{F}}(A)}}}$ is itself a flat. In particular, we write $F_A:={{\color{black}{C_{\mathcal{F}}(A)}}}$ for the unique minimal flat of $(E,r)$ that contains $A$.
\[minFl\] Let $(E,r)$ be a $q$-matroid, let $G$ be a subspace of $E$ and let $x$ be a one-dimensional subspace such that $r(G)=r(G+x)$. Then $x \subseteq F_G$.
Suppose, towards a contradiction, that $x \nsubseteq F_G$. We apply (r3) to $F_G$ and $G+x$: $$r(F_G+G+x)+r(F_G \cap (G+x)) \leq r(F_G)+r(G+x).$$ Now since $G\subseteq F_G$ but $x\not\subseteq F_G$, the above inequality can be stated as $$r(F_G+x)+r(G) \leq r(F_G)+r(G).$$ However, as $F_G$ is a flat, $r(F_G+x)=r(F_G)+1$, which gives the required contradiction.
\[EqRank\] Let $(E,r)$ be a $q$-matroid and let $G \subseteq E $. Then $r(G)=r(F_G)$.
Consider the collection of subspaces $$\mathcal{H}:=\{y\subseteq E:\, \dim(y)=1, r(G+y)=r(G)\}.$$ Let $U$ be the vector space sum of the elements of $\mathcal{H}$. By applying Theorem \[th:JP18prel\](ii), we have that $r(U)=r(G)$. Moreover, $U\subseteq F_G$ by Lemma \[minFl\].
Suppose $r(G)<r(F_G)$. If $U=F_G$ then we would arrive at the contradiction $r(U)=r(F_G)>r(G)$, so assume otherwise. Then there exists a one-dimensional subspace $x \subseteq F_G$, $x\not \subseteq U$. Since $x \notin \mathcal{H}$, by (r2) we have $$r(U)=r(G) < r(G+x) \leq r(U+x).$$ On the other hand, for $x' \subseteq E - F_G$, by Lemma \[minFl\], we have $$r(U)=r(G)<r(G+x')\leq r(U+x').$$ Therefore $U$ is itself a flat and hence $G \subseteq U \subseteq F_G$, contradicting the minimality of $F_G$. We deduce that $r(G)=r(F_G)$.
\[flataxioms\] The flats of a $q$-matroid satisfy the flat axioms (F1)-(F3) of Definition \[flat-axioms\].
Let $(E,r)$ be a $q$-matroid with rank function $r$. By definition, the set of flats ${\mathcal F}_r$ of $(E,r)$ is characterised by: $$\mathcal{F}_r:=\{F \subseteq E : r(F+x)>r(F),\, \forall x \nsubseteq F,\, \dim(x)=1\}.$$ The condition (F1) holds vacuously, while (F2) comes from Lemma \[F2dasolo\].
To prove (F3), let $F\in\mathcal{F}_r$ and let $x \subseteq E$ with $\dim(x)=1$ and $x \nsubseteq F$. We will show that there is a unique $F'$ covering $F$ and containing $x$. Suppose, towards a contradiction, that $x$ is not contained in any flat covering $F$. Let $G=F+x$ and consider $F_G$, the minimal flat containing $G$. By our assumption, there must be a flat $F'$ such that $F \subsetneq F' \subsetneq F_G$. Without loss of generality, we may assume that $F'$ is a cover of $F$. Clearly $x \nsubseteq F'$. Let $y$ be a one-dimensional space with $y\subseteq F'$ and $y \nsubseteq F$. Now, $x,y\nsubseteq F$ and $y\subseteq F_G$. Let $H=F+y$. We claim that $x \subseteq F_H$, in which case we would arrive at the contradiction $x \subseteq F_H \subseteq F'$ and $x \nsubseteq F'$. Since $G=F+x$, $H=F+y$, $x,y\nsubseteq F$ and $F$ is a flat, we have $r(G)=r(H)=r(F)+1$. By Lemma \[EqRank\], $r(G)=r(F_G)$ and since $y\subseteq F_G$ we also have $r(G)=r(G+y)=r(F_G)$. Now, $$r(H+x)=r(F+x+y)=r(G+y)=r(G)=r(F)+1=r(H).$$ Hence by Lemma \[minFl\], $x \subseteq F_H$. We deduce that $x$ is contained in a cover of $F$. As regards uniqueness, suppose we have two different covers $F_1 \neq F_2$ of $F$ containing $x$ and let $L:=F_1\cap F_2$. By the flat axiom (F2), $L$ is a flat and since $x,F \subseteq F_1,F_2$ then $x,F \subseteq L$. On the other hand, $F \neq L$ since $x \nsubseteq F$, so $F \subsetneq L$. Since $F_1 \neq F_2$, $L$ cannot be equal to both of them; say $L \neq F_2$. Then $F \subsetneq L \subsetneq F_2$, contradicting the fact that $F_2$ covers $F$.
The next lemma will be used frequently in our proofs.
\[cover\] Let $F$ be a flat and $x\subseteq E$ a one-dimensional subspace. Then the minimal flat containing $F+x$ is either equal to $F$ or it covers $F$.
If $x\subseteq F$, then $F+x=F$ so the minimal flat containing $F+x$ is $F$ itself. If $x\not\subseteq F$, then by (F3) there is a unique flat $F'$ that covers $F$ and contains $x$. Since $F'$ covers $F$ and contains both $F$ and $x$, it is clearly the minimal flat containing $F+x$.
Our aim is to prove the converse of Proposition \[flataxioms\]: that is, if we have a collection satisfying the axioms (F1)-(F3), it is the collection of flats of a $q$-matroid. Before we do that, we show that the flats of a $q$-matroid form a semimodular lattice. (It is in fact a geometric lattice, as was noted in Theorem 1 of [@BCJ17].)
\[semimodularLat\] The set of flats of a $q$-matroid form a semimodular lattice under inclusion, where for any two flats $F_1$ and $F_2$ the meet is defined to be $F_1\wedge F_2 :=F_1\cap F_2$ and the join $F_1\vee F_2$ is the smallest flat containing $F_1+F_2$.
The set of flats forms a poset with respect to inclusion, so we only need to prove that the definitions of meet and join as $F_1\wedge F_2:=F_1\cap F_2$ and $F_1\vee F_2 := F_G$ with $G:=F_1+F_2$ are well defined.
Let us consider the meet. From (F2), $F_1 \wedge F_2$ is a flat, and the fact that it is the largest flat contained in both $F_1$ and $F_2$ follows from the definition of intersection. As regards the join, $F_G$ by definition is indeed a flat and, more precisely, is the unique minimal flat containing $F_1+F_2$. We remark that, since we have a lattice, there is a maximal flat, which is $E$, and a minimal one, namely the minimal flat containing the zero space. In order to prove that the lattice is semimodular, we have to prove that if $F_1\wedge F_2$ is covered by $F_1$, then $F_2$ is covered by $F_1\vee F_2$. So let the flat $F_1\cap F_2$ be covered by $F_1$. Then for every one-dimensional subspace $x\subseteq F_1$ with $x\nsubseteq F_2$ we have that the minimal flat containing $(F_1\cap F_2)+ x$ is $F_1$ by Lemma \[cover\]. Since $F_2+x\subseteq F_2+F_1$, we have that the minimal flat $H$ containing $F_2+ x$ satisfies $H \leq F_2\vee F_1$. On the other hand, because $(F_1\cap F_2)+x\subseteq F_2+x$, we have that $F_1 \leq H$. Now we have that both $F_1, F_2 \leq H$ so $H$ must contain the least upper bound of the two, that is, $H\geq F_1\vee F_2$. We conclude that $H=F_1\vee F_2$, which means $F_1\vee F_2$ covers $F_2$ by Lemma \[cover\]. This proves that the lattice of flats is semimodular.
Since the lattice of flats is semimodular, we can deduce the following (see [@stanley:1997 Prop. 3.3.2] and [@stanley:2007 Prop. 3.7]):
\[JordanDedekind\] The lattice of flats of a $q$-matroid satisfies the Jordan-Dedekind property, that is: all maximal chains between two fixed elements of the lattice have the same finite length.
In what follows, we will need the following lemma.
\[GettingFA\] Let $A$ be a subspace of $E$ and let $\mathcal{F}$ be a collection of subspaces of $E$. Let $x \subseteq A$ have dimension one and let $F\subseteq A$ be an element of $\mathcal{F}$. Let $F'$ be the minimal element of $\mathcal{F}$ containing $x+F$. If $A \subseteq F'$ then $F'={{\color{black}{C_{\mathcal {F}}(A)}}}$.
If $A \subseteq F' \in \mathcal{F}$, we have ${{\color{black}{C_{\mathcal {F}}(A)}}} \subseteq F'$ by definition. Then since $F+x \subseteq A$ we have $F+x \subseteq {{\color{black}{C_{\mathcal {F}}(A)}}} \subseteq F'$. Since $F'$ is the minimal element of $\mathcal{F}$ containing $F$ and $x$, $F' \subseteq {{\color{black}{C_{\mathcal {F}}(A)}}}$, implying their equality.
For each $A\subseteq E$, let $r_{\mathcal{F}}(A)$ be the length minus one of a maximal chain of flats from ${{\color{black}{C_{\mathcal {F}}(\{0\})}}}$ to ${{\color{black}{C_{\mathcal {F}}(A)}}}$. By Corollary \[JordanDedekind\], all such maximal chains have the same length, so $r_{\mathcal{F}}$ is well defined as a function on $\mathcal{L}(E)$. We are now ready to prove our main theorem.
\[crypto:flat-rank\] Let $E$ be a finite dimensional space. If $\mathcal{F}$ is a family of subspaces of $E$ that satisfies the flat axioms (F1)-(F3) and $r_{\mathcal{F}}(A)$ is defined as above for each $A \subseteq E$, then $(E,r_\mathcal{F})$ is a $q$-matroid and its family of flats is $\mathcal{F}$. Conversely, if $(E,r)$ is a $q$-matroid, then its family of flats $\mathcal{F}_r$ satisfies the conditions (F1)-(F3) and $r=r_{\mathcal{F}_r}$.
Let $(E,r)$ be a $q$-matroid. We have seen in Proposition \[flataxioms\] that its family of flats $\mathcal{F}_r$ satisfies (F1)-(F3).
Now let $(E,\mathcal{F})$ be a family of flats. We show that $r_{\mathcal{F}}$ satisfies (r1)-(r3), that is, that $(E,r_{\mathcal{F}})$ is a $q$-matroid.
(r1): For a subspace $A$, $r_{\mathcal{F}}(A)\geq 0$ since ${{\color{black}{C_{\mathcal {F}}(A)}}}$ is contained in any chain from $F_0$ to ${{\color{black}{C_{\mathcal {F}}(A)}}}$. If $A \subseteq F_0$ then $F_0={{\color{black}{C_{\mathcal {F}}(A)}}}$ and $r_{\mathcal{F}}(A)=0 \leq \dim (A)$, so the result clearly holds. If $F_0$ does not contain $A$, then there is a one-dimensional space $x_0 \subseteq A, {{\color{black}{x_0 \not\subseteq F_0}}}$. Let $G_0=F_0+x_0$ and define $F_1$ to be the minimal flat containing $F_0$ and ${{\color{black}{x_0}}}$. By Lemma \[cover\], $F_1$ covers $F_0$, and it clearly has dimension at least 1. If $A \subseteq F_1$ then by Lemma \[GettingFA\] we have $F_1={{\color{black}{C_{\mathcal {F}}(A)}}}$ and the required maximal chain is $F_0 \subsetneq {{\color{black}{C_{\mathcal {F}}(A)}}}$. If $A \nsubseteq F_1$ then choose a one-dimensional space $x_1\subseteq A$ with $x_1\not\subseteq F_1$ and define $F_2$ to be the minimal flat containing $G_1=F_1+x_1$, which covers $F_1$ by Lemma \[cover\]; clearly $\dim(F_2) \geq 2$. We continue in this way, choosing at each step a one-dimensional subspace $x_i \subseteq A$, $x_i \not\subseteq F_{i}$, and constructing the minimal flat $F_{i+1}$ containing $G_{i} = F_{i}+x_i$, which covers $F_i$ by Lemma \[cover\], until we arrive at a flat $F_k$ that contains $A$. By Lemma \[GettingFA\], we have $F_k={{\color{black}{C_{\mathcal {F}}(A)}}}$, yielding the maximal chain of flats $F_0 \subseteq F_1 \subseteq \cdots \subseteq F_k = {{\color{black}{C_{\mathcal {F}}(A)}}}$. Since $\dim(F_i)\geq i$ for each $i$, it follows that $r_{\mathcal{F}}(A) = k \leq \dim(A)$.
(r2): Let $A \subseteq B$; we prove $r_{\mathcal{F}}(A) \leq r_{\mathcal{F}}(B)$. By Lemma \[MinimalFlat\] (3), ${{\color{black}{C_{\mathcal {F}}(A)}}} \subseteq {{\color{black}{C_{\mathcal {F}}(B)}}}$, therefore a maximal chain of flats $F_0\subseteq F_1\subseteq\cdots\subseteq {{\color{black}{C_{\mathcal {F}}(A)}}}$ is contained in ${{\color{black}{C_{\mathcal {F}}(B)}}}$ and can be extended to a maximal chain from $F_0$ to ${{\color{black}{C_{\mathcal {F}}(B)}}}$, whence $r_{\mathcal{F}}(A)\leq r_{\mathcal{F}}(B)$.
(r3): Let $A, B \subseteq E$ be subspaces and consider a maximal chain of flats $F_0 \subseteq ... \subseteq {{\color{black}{C_{\mathcal {F}}(A\cap B)}}}$. If ${{\color{black}{C_{\mathcal {F}}(A\cap B)}}}\neq {{\color{black}{C_{\mathcal {F}}(A)}}}$ then choose a one-dimensional space $x_1\subseteq A, {{\color{black}{x_1 \not \subseteq C_{\mathcal {F}}(A\cap B)}}}$ and continue extending the chain, by setting $G_1={{\color{black}{C_{\mathcal {F}}(A\cap B)}}}+x_1$ and taking $F_{1}={{\color{black}{C_{\mathcal {F}}(G_1)}}}$ and then repeating this procedure, each time choosing $x_i \subseteq A, {{\color{black}{x_i \not \subseteq F_i}}}$, where $F_i$ is the minimal flat containing $G_i=F_{i-1}+x_{i-1}$ for each $i$. This sequence is clearly finite (in fact has length at most $\dim(A)-\dim(A\cap B)$), and by Lemma \[GettingFA\], there exists some $k$ such that $F_k = {{\color{black}{C_{\mathcal {F}}(A)}}}$. Once we have a maximal chain terminating at ${{\color{black}{C_{\mathcal {F}}(A)}}}$, if $B\nsubseteq {{\color{black}{C_{\mathcal {F}}(A)}}}$, we repeat the same procedure, constructing a maximal chain terminating at ${{\color{black}{C_{\mathcal {F}}(A+ B)}}}$. In the same way, from $F_0 \subseteq ... \subseteq {{\color{black}{C_{\mathcal {F}}(A\cap B)}}}$, we construct a maximal chain terminating at ${{\color{black}{C_{\mathcal {F}}(B)}}}$, which can be extended to a maximal chain terminating at ${{\color{black}{C_{\mathcal {F}}(A+ B)}}}$. For any one-dimensional $y \subseteq {{\color{black}{C_{\mathcal {F}}(B)}}}$, by (F2), the minimal flat containing ${{\color{black}{C_{\mathcal {F}}(A\cap B)}}}+y$ is contained in the minimal flat containing ${{\color{black}{C_{\mathcal {F}}(A)}}}+y$. Therefore, the length of a maximal chain from ${{\color{black}{C_{\mathcal {F}}(A\cap B)}}} $ to ${{\color{black}{C_{\mathcal {F}}(B)}}}$ is at least that of a maximal chain from ${{\color{black}{C_{\mathcal {F}}(A)}}}$ to ${{\color{black}{C_{\mathcal {F}}(A+ B)}}}$. This yields $$r_{\mathcal{F}}(A+B)-r_{\mathcal{F}}(A) \leq r_{\mathcal{F}}(B)-r_{\mathcal{F}}(A\cap B)$$ and this proves (r3).
The only thing that remains to be proved is that rank and flats defined as above compose correctly, namely $\mathcal{F}\to r\to\mathcal{F}'$ implies $\mathcal{F}=\mathcal{F}'$, and $r\to\mathcal{F}\to r'$ implies $r=r'$. Given a family $\mathcal{F}$ of flats satisfying (F1)-(F3), define $r(A)$ to be the length of a maximal chain $F_0\subseteq\cdots\subseteq {{\color{black}{C_{\mathcal {F}}(A)}}}$ minus one. Then let $\mathcal{F}'=
{{\color{black}{\mathcal{F}_r}}}=\{F\subseteq E:r(F+x)>r(F), \forall x\not\subseteq F ,\dim(x)=1\}$. We want to show that $\mathcal{F}=\mathcal{F}'$. Let $F\in\mathcal{F}$; then $F={{\color{black}{C_{\mathcal {F}}(F)}}}$ is the endpoint of a maximal chain. Moreover, for every one-dimensional subspace $x \subseteq E$ with ${{\color{black}{x \nsubseteq}}} F$, a maximal chain for $F+x$ has to terminate at a flat that strictly contains $F$, and so $r(F+x)>r(F)$. Thus $F\in\mathcal{F}'$.
Conversely, if $r$ is a rank function satisfying (r1)-(r3), let $\mathcal{F}={{\color{black}{\mathcal{F}_r}}}$. Then let $r_\mathcal{F}(A)$ be the length of a maximal chain $F_0\subseteq\cdots\subseteq {{\color{black}{C_{\mathcal {F}}(A)}}}$ minus one. We want to show that $r=r_\mathcal{F}$. This follows from the same reasoning as above: each element $F \in \mathcal{F}$ is the endpoint of a maximal chain and hence is strictly contained in the unique cover of $x+F$ for any $x \not\subseteq F$.
In Figure \[diagram\] we summarize all currently established cryptomorphisms of $q$-matroids.
$q$-PMD’s and Subspace Designs {#qPMD}
==============================
As an application of the cryptomorphism between the rank function and flats proved in Section \[sec-cryptomorphisms\], we obtain the first example of a $q$-PMD that has a classical analogue, namely the $q$-Steiner systems. Moreover, we generalize a result of *Murty et al.* [@Young] and show that from the flats, independent subspaces and circuits of our $q$-PMD we derive subspace designs. While only the $STS(13;2)$ parameters are known to be realisable to date, we have used this system in our construction to obtain subspace designs for parameters that were not previously known to be realisable.
$q$-Steiner Systems are $q$-PMDs {#q-steiner-q-PMD}
--------------------------------
We start by showing that a $q$-Steiner system gives rise to a $q$-matroid, and we classify its family of flats.
\[prop:steinertoflats\] Let $\mathcal{S}$ be a $q$-Steiner system and let $\mathcal{B}$ be its blocks. We define the family $$\mathcal{F}=\left\{ \bigcap_ { B \in S}B : S\subseteq \mathcal{B}\right\},$$ with the convention that the empty intersection equals $E$. Then $\mathcal{F}$ satisfies the flat axioms (F1)-(F3), and hence is the family of flats of a $q$-matroid on $E$.
By the cryptomorphic definition of a $q$-matroid in Theorem \[crypto:flat-rank\], it would be enough to show that $\mathcal{F}$ satisfies the axioms (F1), (F2) and (F3). By taking $S=\emptyset$, we see that $E\in \mathcal{F}$. By the definition of $\mathcal{F}$, we see that (F2) also holds.
To prove $(F3)$, let $F \in \mathcal{F}$ and let $x \subseteq E$ be a one-dimensional subspace such that $x \nsubseteq F$.
The $q$-matroid $(E,\mathcal{F})$ determined by a $q$-Steiner system as described in Proposition \[prop:steinertoflats\] is referred to as the $q$-matroid [**induced**]{} by the $q$-Steiner system.
\[prop:classifysteinerflats\] Let $(E,\mathcal{F})$ be a $q$-matroid induced by a $q$-Steiner system with blocks $\mathcal{B}$. Let $F$ be a subspace of $E$. Then $F \in \mathcal{F}$ if and only if one of the following holds:
- $F=E$,
- $F \in \mathcal{B}$,
- $\dim(F)\leq t-1$.
Clearly, all blocks are flats. Let us consider the intersection of two blocks. This space does not have dimension greater than $t-1$, because every $t$-space is contained in precisely one block, and thus no space of dimension at least $t$ can lie in two distinct blocks. On the other hand, every space $F$ of dimension at most $t-1$ is contained in a number of blocks whose intersection is exactly $F$. So every space of dimension at most $t-1$ is a flat.
In Theorem \[crypto:flat-rank\] it was shown that a collection of subspaces $\mathcal{F}$ of $E$ satisfying (F1)-(F3) determines a $q$-matroid $(E,r)$ for which $r(A)+1$ is the length of a maximal chain of flats contained in $C_{\mathcal {F}}(A)$.
\[rank-func-pmd\] Let $M=(E,\mathcal{F})$ be the $q$-matroid for which $\mathcal{F}$ is the set of intersections of the blocks of an $S(t,k,n; q)$ $q$-Steiner system. Then $M$ is a $q$-PMD with rank function defined by $$r(A)=\left\{ \begin{array}{ll}
\dim(A) & \text{ if } \dim(A) \leq t,\\
t & \text{ if } \dim(A) > t \textrm{ and } A \text{ is contained in a block of }\mathcal{B}, \\
t+1 & \text{ if } \dim(A) > t \textrm{ and } A \text{ is not contained in a block of }\mathcal{B}.
\end{array}
\right .$$
Let $A \subseteq E$ be a subspace. Then $r(A)+1$ is the length of a maximal chain of flats contained in $C_{\mathcal {F}}(A)$. If $\dim A<t$ then $A$ is a flat, as are all its subspaces. So a maximal chain of flats contained in $C_{\mathcal {F}}(A)=A$ has length $\dim A+1$, hence $r(A)=\dim A$. If $\dim A=t$ then $A$ is contained in a unique block, and this block is equal to $C_{\mathcal {F}}(A)$. A maximal chain of flats is then $F_0\subseteq F_1\subseteq\cdots\subseteq F_{t-1}\subseteq C_{\mathcal {F}}(A)$, where $\dim F_i=i$. This chain has length $t+1$, hence $r(A)=t=\dim A$.
If $\dim A>t$ and $A$ is contained in a block, then $C_{\mathcal {F}}(A)$ is a block and we apply the same reasoning as before to find $r(A)=t$. If $\dim A>t$ and $A$ is not contained in a block, then $C_{\mathcal {F}}(A)=E$ and a maximal chain of flats is $F_0\subseteq F_1\subseteq\cdots\subseteq F_{t-1}\subseteq B\subseteq E$ where $B$ is a block. (Note that possibly $\dim B>\dim A$, but $A\not\subseteq B$.) This gives $r(A)=t+1$.
To establish the $q$-PMD property, we show that flats of the same rank have the same dimension. Clearly this property is satisfied by flats of dimension at most $t$. Let $F$ be a flat of dimension at least $t+1$ and rank $t$. Then $F$ is contained in a unique block and hence, being an intersection of blocks by definition, is itself a block and has dimension $k$. If $F$ has rank $t+1$, then it is not contained in a block, and hence must be $E$.
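To make the case analysis of this rank function concrete, the following minimal Python sketch evaluates $r(A)$. The helper functions `dim_of` and `is_in_some_block` are hypothetical placeholders that a concrete implementation of an $S(t,k,n;q)$ system would have to supply, so this is an illustration of the statement above rather than an implementation of any particular Steiner system.

```python
# Minimal sketch of the rank function of the q-PMD induced by an S(t,k,n;q)
# q-Steiner system.  The two helpers are hypothetical and must be supplied by
# the user: dim_of(A) returns the dimension of the subspace A, and
# is_in_some_block(A) returns True if A is contained in a block.

def rank(A, t, dim_of, is_in_some_block):
    d = dim_of(A)
    if d <= t:
        return d          # case dim(A) <= t of the theorem
    if is_in_some_block(A):
        return t          # dim(A) > t and A lies in a (necessarily unique) block
    return t + 1          # dim(A) > t and A is not contained in any block
```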
Subspace Designs from $q$-PMD’s
-------------------------------
Let $M$ be a $q$-PMD induced by a $q$-Steiner system. We will now give a classification of its flats, independent subspaces and circuits, and show that these yield new subspace designs.
Flats {#flats .unnumbered}
-----
We have classified the flats of a $q$-matroid induced by a $q$-Steiner system in Proposition \[prop:classifysteinerflats\]. By considering all flats of a given rank, we thus get the following designs:
- For rank $t+1$ we have only one block, ${\mathbb{F}}_q^n$. This is an $n$-$(n,n,1)$ design.
- For rank $t$, we get the original $q$-Steiner system.
- For rank less than $t$ we get a trivial design.
Independent spaces {#independent-spaces .unnumbered}
------------------
\[class\_independent\] Let $M$ be the $q$-PMD induced by a $q$-Steiner system with blocks $\mathcal{B}$. Let $I$ be a subspace of $E$. Then $I$ is independent if and only if one of the following holds:
- $\dim I\leq t$
- $\dim I=t+1$ and $I$ is not contained in a block of $\mathcal{B}$.
This follows directly from the fact that $I$ is independent if and only if $r(I)=\dim I$, together with the definition of the rank function of $M$ given in Theorem \[rank-func-pmd\].
We want to know whether the independent spaces of a given dimension $\ell$ form the blocks of a subspace design. There are two trivial cases:
- If $\ell\leq t$ then the blocks are all spaces of dimension $\ell$. This is a trivial design.
- If $\ell>t+1$ then there are no independent spaces. This is the empty design.
So, the only interesting case to study is that of the independent spaces of dimension $t+1$. These are precisely the $(t+1)$-spaces that are not contained in any block of $\mathcal{B}$.
\[design\_independent\] Let $M$ be the $q$-PMD induced by a $q$-Steiner system with parameters $S(t,k,n;q)$ and blocks $\mathcal{B}$. The independent spaces of dimension $t+1$ of $M$ form a $t$-$(n,t+1,{{\color{black} {\lambda_{\mathcal{I}}}}})$ design with ${{\color{black} {\lambda_{\mathcal{I}}}}}=(q^{n-t}-q^{k-t})/(q-1)$.
Let $\mathcal{I}$ be the set of independent spaces of dimension $t+1$ of $M$. We claim that for a given $t$-space $A$, the number of blocks $I\in\mathcal{I}$ containing it is independent of the choice of $A$, and thus equal to $\lambda_{\mathcal{I}}$.
Let $A$ be a $t$-space and let $\lambda(A)$ denote the number of $(t+1)$-spaces of ${\mathcal I}$ that contain $A$. $A$ is contained in a unique block $B\in\mathcal{B}$ of the $q$-Steiner system. We extend $A$ to a $(t+1)$-space $I$ that is not contained in any block of $\mathcal{B}$, that is, we extend $A$ to $I \in \mathcal{I}$. We do this by taking a $1$-space $x$ not in $B$ and letting $I=A+x$. The number of $1$-spaces not in $B$ is equal to the total number of $1$-spaces minus the number of one-dimensional spaces in $B$: $${\left[\begin{matrix} n \\ 1 \end{matrix} \right]_{q}}-{\left[\begin{matrix} k \\ 1 \end{matrix} \right]_{q}}=\frac{q^n-1}{q-1}-\frac{q^k-1}{q-1}=q^k
{\left[\begin{matrix} n-k \\ 1 \end{matrix} \right]_{q}}.$$ However, different choices of $x$ may yield the same extension: any one-dimensional space $y$ that is in $I$ but not in $A$ satisfies $A+y=A+x$. The number of one-dimensional spaces in $I$ but not in $A$ is equal to $${\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}-{\left[\begin{matrix} t \\ 1 \end{matrix} \right]_{q}}=\frac{q^{t+1}-1}{q-1}-\frac{q^t-1}{q-1}=q^t.$$ This means that the number of ways we can extend $A$ to $I \in \mathcal{I}$ is the quotient of the two values calculated above: $$\lambda(A)=
q^{k-t}{\left[\begin{matrix} n-k \\ 1 \end{matrix} \right]_{q}}=
\frac{q^{n-t}-q^{k-t}}{q-1}=\lambda_{\mathcal{I}},$$ which is independent of the choice of $A$ of dimension $t$.
For the $q$-PMD arising from the Steiner system $STS(13;2)$ we have $b_{\mathcal{I}} = 3267963270$ and $\lambda_{\mathcal{I}} = 2046.$ For the $q$-PMD arising from the putative $q$-Fano plane $STS(7;2)$, we have $b_{\mathcal{I}} = 11430$ and $\lambda_{\mathcal{I}}=30.$
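Both pairs of values can be recomputed directly from the formula for $\lambda_{\mathcal{I}}$ above, together with the standard double-counting relation between $b$ and $\lambda$ for a $t$-$(n,t+1,\lambda;q)$ design. The following short Python check is an editorial verification, not part of the original text.

```python
# Editorial check of the example values b_I and lambda_I quoted above, using
# only the Gaussian binomial coefficient and the double-counting relation
#   b * [t+1 choose t]_q = lambda * [n choose t]_q
# valid for any t-(n, t+1, lambda; q) subspace design.

def gauss_binom(m, r, q):
    """Gaussian binomial coefficient [m choose r]_q."""
    if r < 0 or r > m:
        return 0
    num = den = 1
    for i in range(r):
        num *= q**(m - i) - 1
        den *= q**(i + 1) - 1
    return num // den

def lambda_I(t, k, n, q):
    # Theorem [design_independent]
    return (q**(n - t) - q**(k - t)) // (q - 1)

def b_I(t, k, n, q):
    return lambda_I(t, k, n, q) * gauss_binom(n, t, q) // gauss_binom(t + 1, t, q)

print(lambda_I(2, 3, 13, 2), b_I(2, 3, 13, 2))   # STS(13;2): 2046 3267963270
print(lambda_I(2, 3, 7, 2), b_I(2, 3, 7, 2))     # putative q-Fano plane: 30 11430
```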
Circuits {#circuits .unnumbered}
--------
\[prop:circuits\] Let $M$ be a $q$-PMD induced by a $q$-Steiner system with blocks $\mathcal{B}$. Let $C$ be a subspace of $M$. Then $C$ is a circuit if and only if one of the following holds:
- $\dim C=t+1$ and $C$ is contained in a block of $\mathcal{B}$,
- $\dim C=t+2$ and none of the $(t+1)$-subspaces of $C$ is contained in a block of $\mathcal{B}$.
A circuit is a space such that all its codimension $1$ subspaces are independent. All spaces of dimension at most $t$ are independent, so a circuit will have dimension at least $t+1$. Also, since the rank of $M$ is $t+1$, a circuit has dimension at most $t+2$. The result now follows from the definition of a circuit and the above Proposition \[class\_independent\] that classifies the independent spaces of $M$.
We now show that all the $(t+1)$-circuits form a design and that all the $(t+2)$-circuits form a design.
\[design\_circuits\] Let $M$ be a $q$-PMD induced by a $q$-Steiner system with blocks $\mathcal{B}$. Let $\mathcal{C}_{t+1}$ be the collection of all circuits of $M$ of dimension $(t+1)$. Then $\mathcal{C}_{t+1}$ are the blocks of a $t$-$(n,t+1,\lambda_{\mathcal{C}_{t+1}})$ design where $$\lambda_{\mathcal{C}_{t+1}}={\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}.$$
Let $A$ be a $t$-space; it is contained in a unique block $B_A$ of the $q$-Steiner system. There are ${\left[\begin{matrix} k-t \\ t+1-t \end{matrix} \right]_{q}} = {\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}$ $(t+1)$-dimensional subspaces of $B_A$ that contain $A$, from Lemma \[lem:folklore\]. Every such $(t+1)$-space is a circuit by definition. If $C$ is a $(t+1)$-dimensional circuit not contained in $B_A$, then by Proposition \[prop:circuits\] $C$ is contained in another block $B \in {\mathcal B}$. Therefore, if $A \subseteq C$, then $A$ is contained in two distinct blocks $B_A$ and $B$, contradicting the Steiner system property. Hence, $\lambda(A)={\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}},$ which is independent of our choice of $A$ of dimension $t$.
In fact, by Proposition \[prop:circuits\], Theorem \[design\_circuits\] and Theorem \[design\_independent\] are equivalent. The circuits of dimension $t+1$ are precisely the $(t+1)$-spaces that are contained in some block of the $q$-Steiner system. Therefore this set of circuits is the complement of the set of $(t+1)$-spaces that are not contained in any block of the Steiner system. It follows that the $q$-designs of Theorems \[design\_independent\] and \[design\_circuits\] are supplementary designs with respect to each other.
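The supplementary relationship can also be checked at the level of the parameters: for a fixed $t$-space, the number of independent $(t+1)$-spaces through it and the number of $(t+1)$-dimensional circuits through it must add up to $(q^{n-t}-1)/(q-1)$, the total number of $(t+1)$-spaces through it. A short symbolic verification, assuming SymPy is available:

```python
# Symbolic check that lambda_I + lambda_{C_{t+1}} equals the total number of
# (t+1)-spaces through a fixed t-space, which confirms that the designs of
# Theorems [design_independent] and [design_circuits] are supplementary.
import sympy as sp

q, n, k, t = sp.symbols('q n k t', positive=True)

lam_I  = (q**(n - t) - q**(k - t)) / (q - 1)   # Theorem [design_independent]
lam_C1 = (q**(k - t) - 1) / (q - 1)            # Theorem [design_circuits]
total  = (q**(n - t) - 1) / (q - 1)            # [n-t choose 1]_q

print(sp.simplify(lam_I + lam_C1 - total))     # expected output: 0
```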
\[design\_circuits-2\] Let $M$ be a $q$-PMD induced by a $q$-Steiner system with blocks $\mathcal{B}$. Let $\mathcal{C}_{t+2}$ be the collection of all circuits of $M$ of dimension $(t+2)$. Then $\mathcal{C}_{t+2}$ are the blocks of a $t$-$(n,t+2,\lambda_{\mathcal{C}_{t+2}})$ design where $$\lambda_{\mathcal{C}_{t+2}}=q^{k-t}{\left[\begin{matrix} n-k \\ 1 \end{matrix} \right]_{q}}\left( {\left[\begin{matrix} n-t-1 \\ 1 \end{matrix} \right]_{q}}-\frac{1}{q}{\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}\right)\frac{1}{q+1}.$$
Let $\mathcal{C}_{t+2}$ be the set of circuits of dimension $t+2$ of $M$. We argue that every $t$-space is contained in the same number $\lambda_{\mathcal{C}_{t+2}}$ of members of $\mathcal{C}_{t+2}$. We do this by calculating for a given $t$-space $A$ the number of blocks $C\in\mathcal{C}_{t+2}$ it is contained in. It turns out this number is independent of the choice of $A$, and thus equal to $\lambda_{\mathcal{C}_{t+2}}$.
Define $$N(A):=|\{ (I,C): A\subseteq I\subseteq C, I \in {\mathcal I}, \dim I = t+1, C\in {\mathcal{C}_{t+2}} \}|.$$ The number of $(t+1)$-dimensional independent spaces $I$ containing $A$ is exactly the number calculated in Theorem \[design\_independent\], which is $(q^{n-t}-q^{k-t})/(q-1)$. Now let $I$ be an independent space of dimension $t+1$ that contains the $t$-space $A$. Then $I$ is a $(t+1)$-space that is not contained in a block of $\mathcal{B}$. We will extend $I$ to a $(t+2)$-space $C \in {\mathcal{C}_{t+2}}$. The number of $(t+2)$-dimensional circuits containing $I$ can be counted as follows: $$\label{circuitcount}
\frac{\text{number of $1$-spaces $x$ in $E$ such that $I+x\in\mathcal{C}_{t+2}$}}{\text{number of $1$-spaces in $I+x$ but not in $I$}}.$$ The denominator of this fraction is straightforward: $${\left[\begin{matrix} t+2 \\ 1 \end{matrix} \right]_{q}}-{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}=\frac{q^{t+2}-1}{q-1}-\frac{q^{t+1}-1}{q-1}=\frac{q^{t+2}-q^{t+1}}{q-1}=q^{t+1}.$$ Now we count the number of $1$-spaces $x$ such that $I+x\notin\mathcal{C}_{t+2}$. This will happen if and only if we pick $x$ in a so-called *forbidden block*. We say that a block of $\mathcal{B}$ is *forbidden* with respect to $I$ if it intersects $I$ in dimension $t$. Note that if $x \nsubseteq I$ is in a forbidden block, then $I+x$ contains a $(t+1)$-subspace of this forbidden block. We claim that forbidden blocks intersect only in $I$. Let $A_1$ and $A_2$ be $t$-spaces contained in $I$. Then $A_1$ and $A_2$ are each contained in a unique block of $\mathcal{B}$, say $B_1$ and $B_2$, respectively. These are forbidden blocks. Because $A_1$ and $A_2$ have codimension $1$ in $I$, we have that $\dim(A_1\cap A_2)=t-1$. By the design property, we have that $\dim(B_1\cap B_2)\leq t-1$. Now $A_1 \cap A_2 \subseteq B_1 \cap B_2$, and hence, comparing dimensions, we have $B_1 \cap B_2 = A_1 \cap A_2 \subseteq I$. Therefore, the number of forbidden blocks is equal to the number of $t$-spaces in $I$, which is ${\left[\begin{matrix} t+1 \\ t \end{matrix} \right]_{q}}={\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}$. For a forbidden block $B$ we have that $\dim(B\cap I)=t$, so the number of $1$-spaces excluded by each forbidden block is ${\left[\begin{matrix} k \\ 1 \end{matrix} \right]_{q}}-{\left[\begin{matrix} t \\ 1 \end{matrix} \right]_{q}}=q^t{\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}$. The total number of excluded points is thus $q^t{\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}$, because all forbidden blocks meet only in a subspace of $I$. The total number of points outside $I$ is ${\left[\begin{matrix} n \\ 1 \end{matrix} \right]_{q}}-{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}=q^{t+1}{\left[\begin{matrix} n-t-1 \\ 1 \end{matrix} \right]_{q}}$. This means that the number of $1$-spaces $x$ in $E$ such that $I+x\in\mathcal{C}_{t+2}$ is equal to $$q^{t+1}{\left[\begin{matrix} n-t-1 \\ 1 \end{matrix} \right]_{q}}-q^{t}{\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}.$$ Substituting in Equation \[circuitcount\] gives that the number of $(t+2)$-dimensional circuits containing $I$ is equal to $${\left[\begin{matrix} n-t-1 \\ 1 \end{matrix} \right]_{q}}-\frac{1}{q}{\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}.$$ It follows that $$N(A)=q^{k-t}{\left[\begin{matrix} n-k \\ 1 \end{matrix} \right]_{q}}\left({\left[\begin{matrix} n-t-1 \\ 1 \end{matrix} \right]_{q}}-\frac{1}{q}{\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}\right),$$ which is independent of our choice of $A$ of dimension $t$.
Now for a fixed circuit $C \in {\mathcal{C}_{t+2}}$ containing $A$ there are $${\left[\begin{matrix} (t+2)-t \\ (t+1)-t \end{matrix} \right]_{q}}={\left[\begin{matrix} 2 \\ 1 \end{matrix} \right]_{q}}=\frac{q^2-1}{q-1}=q+1$$ independent $(t+1)$-spaces $I$ with $A\subseteq I\subseteq C$, so we have $$N(A)= (q+1) |\{C \in {\mathcal{C}_{t+2}} : A \subseteq C\}|=(q+1) \lambda(A).$$ We conclude that $$\lambda_{\mathcal{C}_{t+2}}=\lambda(A)=q^{k-t}{\left[\begin{matrix} n-k \\ 1 \end{matrix} \right]_{q}}\left( {\left[\begin{matrix} n-t-1 \\ 1 \end{matrix} \right]_{q}}-\frac{1}{q}{\left[\begin{matrix} k-t \\ 1 \end{matrix} \right]_{q}}{\left[\begin{matrix} t+1 \\ 1 \end{matrix} \right]_{q}}\right)\frac{1}{q+1}.$$
The admissibility conditions on the parameters given in Lemma \[intersection\_numbers\] play an important role in showing non-existence results for subspace designs. In the following corollary we give the admissibility conditions for the parameters presented in Theorems \[design\_independent\], \[design\_circuits\] and \[design\_circuits-2\].
\[cor:STS\] If a Steiner triple system $STS(n;q)$ exists, then there exist $2$-$(n,k,\lambda;q)$ designs with the following parameters:
- $k=4$, $\displaystyle \lambda=\left(\frac{q^{n-3}-1}{q^2-1}\right)\left(\frac{q(q^{n-3}-1)-(q^3-1)}{q-1}\right)$,
- $k=4$, $\displaystyle \lambda= \frac{q^{n-3}-1}{q^2-1}\left({\left[\begin{matrix} 3 \\ 1 \end{matrix} \right]_{q}}+1\right)$,
- $k=n-4$, $\displaystyle\lambda= \frac{(q^{n-3}-1)(q^{n-4}-1)(q^{n-5}-1)}{(q^{4}-1)(q^{3}-1)(q^{2}-1)}\left({\left[\begin{matrix} 3 \\ 1 \end{matrix} \right]_{q}}+1\right)$.
Moreover, the parameters of the design listed in $(1)$ are admissible if and only if the parameters of an $STS(n; q)$ are admissible.
More generally, as a special case of Theorem \[design\_circuits-2\], if an STS$(n;q)$ exists then a $2$-$(n,4,\lambda;q)$ design exists for $$\begin{aligned}
\lambda & = & q{\left[\begin{matrix} n-3 \\ 1 \end{matrix} \right]_{q}}\left( {\left[\begin{matrix} n-3 \\ 1 \end{matrix} \right]_{q}}-\frac{1}{q}{\left[\begin{matrix} 3 \\ 1 \end{matrix} \right]_{q}}\right)\frac{1}{q+1}.
\end{aligned}$$ The $2$-$(n,4,\lambda;q)$ design would have intersection number $$\begin{aligned}
\lambda_1 & = &
\frac{{\left[\begin{matrix} n-1 \\ 1 \end{matrix} \right]_{q}}}{{\left[\begin{matrix} 3 \\ 1 \end{matrix} \right]_{q}}} \lambda = \frac{q^{n-1}-1}{q^3-1} \frac{q^{n-3}-1}{q^2-1}\frac{q(q^{n-3}-1)-(q^3-1)}{q-1}.
\end{aligned}$$ If $n\equiv 1,3 \mod 6$ then it is easy to see that $\lambda, \lambda_{1}$ are integers and so the parameters $q,n,4,\lambda$ are admissible. Conversely, suppose that $q,n,4,\lambda$ are admissible. Then in particular, $\lambda \in {\mathbb{Z}}$. If $q+1$ divides ${\left[\begin{matrix} n-3 \\ 1 \end{matrix} \right]_{q}}$ then $n$ is odd and so $n\equiv 1,3,5 \mod 6$. If $n = 5 + 6m$ for some positive integer $m$ then $\lambda_{1} \in {\mathbb{Z}}$ if and only if $$\frac{q^{6m+4}-1}{q^2-1}\left(\frac{q^{6m+2}-1}{q-1}\right)^2 \frac{1}{q^2+q+1}\in {\mathbb{Z}}.$$ Since $\displaystyle \frac{q^{6m+2}-1}{q-1} \equiv q+1 \mod (q^2+q+1)$, we see that $\lambda_{1}$ is an integer if and only if $q^2+q+1$ divides $q\sum_{j=0}^{3m+1}q^{2j}$, which yields a contradiction. Therefore $n\equiv 1,3 \mod 6$. If $q+1$ does not divide ${\left[\begin{matrix} n-3 \\ 1 \end{matrix} \right]_{q}}$ then $q+1$ and ${\left[\begin{matrix} n-3 \\ 1 \end{matrix} \right]_{q}}$ are coprime and $n$ is even. Therefore, $\lambda \in {\mathbb{Z}}$ if and only if $q+1$ divides $q{\left[\begin{matrix} n-3 \\ 1 \end{matrix} \right]_{q}}-1$, which holds if and only if $q+1$ divides $q-1$, giving a contradiction. We deduce that the parameters $q,n,4,\lambda$ are admissible if and only if the parameters of an STS$(n;q)$ are admissible. With the discussion above, it remains to see that the design in $(2)$ is the supplementary design of $(1)$ and the design in $(3)$ is the dual design of $(2)$. Indeed, for a $2-(n,4,\lambda;q)$ design with parameters $\displaystyle \lambda=\left(\frac{q^{n-3}-1}{q^2-1}\right)\left(\frac{q^{n-2}-q^3-q+1}{q-1}\right)$, its supplementary is a design with parameters $2-(n,4,{\left[\begin{matrix} n-2 \\ 2 \end{matrix} \right]_{q}} - \lambda)$. Note that $$\begin{aligned}
{\left[\begin{matrix} n-2 \\ 2 \end{matrix} \right]_{q}} - \lambda &= \frac{q^{n-3}-1}{q^2-1}\left({\left[\begin{matrix} 3 \\ 1 \end{matrix} \right]_{q}}+1\right).\end{aligned}$$ Similarly, for a design with parameters $2-(n,4,\lambda;q)$, where $\displaystyle\lambda= \frac{q^{n-3}-1}{q^2-1}\left({\left[\begin{matrix} 3 \\ 1 \end{matrix} \right]_{q}}+1\right)$, its dual is a design with $2-(n,n-4,\frac{{\left[\begin{matrix} n-2 \\ 4 \end{matrix} \right]_{q}}}{{\left[\begin{matrix} n-2 \\ 2 \end{matrix} \right]_{q}}}\lambda;q)$. Since the factor $\left({\left[\begin{matrix} 3 \\ 1 \end{matrix} \right]_{q}}+1\right)$ does not depend on $n$, it is enough to observe that $$\begin{aligned}
\label{eq:dslambda}
\frac{{\left[\begin{matrix} n-2 \\ 4 \end{matrix} \right]_{q}}}{{\left[\begin{matrix} n-2 \\ 2 \end{matrix} \right]_{q}}}\frac{q^{n-3}-1}{q^2-1} &=
\frac{(q^{n-3}-1)(q^{n-4}-1)(q^{n-5}-1)}{(q^4-1)(q^3-1)(q^2-1)}.
\end{aligned}$$
Table \[explicit\_param13\] shows the parameters that we obtain from the Steiner system $STS(13;q)$ and Corollary \[cor:STS\]. Table \[explicit\_param\] summarizes the parameters of subspace designs whose existence would be implied by the existence of the $q$-Fano plane.
\[rem:newdesign\]
- In the literature, the only $q$-Steiner triple systems known to exist are those with parameters $STS(13;2)$ [@braun2018q pg $193$-$194$]. Therefore, the existence of such $STS(13;2)$ Steiner triple systems implies, via Corollary \[cor:STS\], the existence of new subspace designs with parameters $2$-$(13,4,2728;2)$ (see Table \[explicit\_param13\]).
- Moreover, for $q=2,3$ and $n=7$, Corollary \[cor:STS\] shows that the existence of the $q$-Fano plane implies the existence of $2$-$(7,3,8;2)$ and $2$-$(7,3,14;3)$ designs (see also Table \[explicit\_param\]). Such designs have actually been constructed in [@braun2005systematic] and [@braun2005some], respectively. However, for $q=4,5$, there is no information on the existence of the parameters presented in Table \[explicit\_param\].
$q=2$ $2-(13,4,695299;2)$, $2-(13,4,2728;2)$, $2-(13,9,3385448;2)$.
------- ----------------------------------------------------------------
: Parameters of the designs in Corollary \[cor:STS\] from an $STS(13;2)$.[]{data-label="explicit_param13"}
$q=2$ $2-(7,4,115;2)$ $2-(7,4,40;2)$ $2-(7,3,8;2)$ [@braun2005systematic]
------- -----------------------------------------------------------------------
$q=3$ $2-(7,4,1070;3)$ $2-(7,4,140;3)$ $2-(7,3,14;3)$[@braun2005systematic]
$q=4$ $2-(7,4,5423;4)$ $2-(7,4,374;4)$ $2-(7,3,22;4)$ $^{*}$
$q=5$ $2-(7,4,19474;5)$ $2-(7,4,832;5)$ $2-(7,3,32;5)$ $^{*}$
$*$ no information on the existence
: Parameters of new designs in Corollary \[cor:STS\] from putative $STS(7;q)$.[]{data-label="explicit_param"}
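The numerical entries of Tables \[explicit\_param13\] and \[explicit\_param\] can be reproduced mechanically from the three formulas of Corollary \[cor:STS\]. The following Python sketch (an editorial check using exact rational arithmetic, not part of the original text) prints the $\lambda$ values for $STS(13;2)$ and for the putative $STS(7;q)$ with $q=2,3,4,5$.

```python
# Reproduce the lambda values of the two tables above from Corollary [cor:STS]
# (t = 2, i.e. q-Steiner triple systems), using exact rational arithmetic.
from fractions import Fraction as F

def gauss1(m, q):
    """[m choose 1]_q as an exact rational number."""
    return F(q**m - 1, q - 1)

def corollary_lambdas(n, q):
    a = F(q**(n - 3) - 1, q**2 - 1)
    lam1 = a * F(q * (q**(n - 3) - 1) - (q**3 - 1), q - 1)    # design (1), k = 4
    lam2 = a * (gauss1(3, q) + 1)                             # design (2), k = 4
    lam3 = F((q**(n - 3) - 1) * (q**(n - 4) - 1) * (q**(n - 5) - 1),
             (q**4 - 1) * (q**3 - 1) * (q**2 - 1)) * (gauss1(3, q) + 1)  # design (3), k = n-4
    return tuple(int(x) for x in (lam1, lam2, lam3))

print(13, 2, corollary_lambdas(13, 2))     # expected: (695299, 2728, 3385448)
for q in (2, 3, 4, 5):                     # putative STS(7;q)
    print(7, q, corollary_lambdas(7, q))   # expected: (115, 40, 8), (1070, 140, 14), ...
```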
The automorphism group of designs from $q$-PMDs
-----------------------------------------------
For the subspace designs constructed in Theorems \[design\_independent\], \[design\_circuits\], and \[design\_circuits-2\], we show that their automorphism group is isomorphic to the automorphism group of the Steiner system. Since the designs in Theorems \[design\_independent\] and \[design\_circuits\] are supplementary to each other, and moreover the circuits in Theorem \[design\_circuits-2\] are constructed from the independent spaces in Theorem \[design\_independent\], we only consider the automorphism groups of the designs in Theorems \[design\_independent\] and \[design\_circuits-2\].
Let $\mathcal{S}$ be an $S(t,k,n; q)$ $q$-Steiner system. Then
- The automorphism group of the design obtained in Theorem \[design\_independent\] from $\mathcal{S}$ is isomorphic to the automorphism group of $\mathcal{S}$.
- The automorphism group of the design obtained in Theorem \[design\_circuits-2\] from $\mathcal{S}$ is isomorphic to the automorphism group of $\mathcal{S}$.
\(1) Let $\mathcal{I}$ be the set of independent spaces of dimension $t+1$ of the $q$-PMD, which are not contained in a block of $\mathcal{B}$. Let $\phi$ be an automorphism of $\mathcal{S}$. Given an independent space $I\in \mathcal{I}$, we claim that the image $\phi(I)$ is also an independent space. Since $\phi \in {\textrm{Aut}}(\mathcal{L}(E))$, we have $\dim \phi(I)=t+1$. Moreover, $\phi(I)$ cannot be contained in a block $B$ of $\mathcal{B}$, because otherwise $I \subseteq \phi^{-1}(B)$ with $\phi^{-1}(B)\in \mathcal{B}$, which is a contradiction.
Conversely, let $\phi$ be an automorphism of the subspace design with blocks $\mathcal{I}$. We will show that $\phi(B)$ is also in $\mathcal{B}$ for all $B\in \mathcal{B}$. Assume that $A$ is a $t$-dimensional subspace such that $A\subseteq B$ and $\phi(A)\neq A$. Note that if such a space does not exist, then $\phi(B)=B$, hence $\phi(B)$ is a block. We will denote by $B_A=B$ the unique block containing $A$ and by $B_{\phi(A)}$ the unique block in $\mathcal{B}$ containing $\phi(A)$. Now assume that $\phi(B_A)$ is not a block, which in particular implies that there exists a one-dimensional subspace $x\subseteq \phi(B_A)$ with $x\nsubseteq B_{\phi(A)}$. By considering the independent space construction in Theorem \[design\_independent\], we claim that the space $I_A=\phi(A)+x$ is independent, since it has dimension $t+1$ and is not contained in a block. Indeed, it is not contained in $B_{\phi(A)}$ because $x\nsubseteq B_{\phi(A)}$, and if it were contained in a block $B'\neq B_{\phi(A)}$, then $\phi(A)$ would be contained in two different blocks and this would contradict the fact that $\mathcal{S}$ is a Steiner system. Finally, note that since $\mathcal{I}$ is a design, there are $\lambda_{\mathcal{I}}$ independent spaces $I_1, \ldots ,I_{\lambda_{\mathcal{I}}}$ such that $A\subseteq I_i$. It follows that $\phi(A)\subseteq \phi(I_i)$ and by construction $\phi(I_i)\not \subseteq \phi(B_A)$. Hence $I_A\subseteq \phi(B_A)$ is different from all the subspaces $\phi(I_i)$, and $\phi(A)$ is contained in $\lambda_{\mathcal{I}}+1$ independent subspaces in $\mathcal{I}$, which is a contradiction.
\(2) Let $\phi$ be an automorphism of $\mathcal{S}$. Let $C\in \mathcal{C}_{t+2}$. If $\phi(C)$ is not a circuit of dimension $t+2$, there exists a $(t+1)$-subspace $I\subseteq \phi(C)$ which is contained in a block $B$ of $\mathcal{B}$. Then $\phi^{-1}(I)$ is a $(t+1)$-subspace of $C$ such that $\phi^{-1}(I)\subseteq \phi^{-1}(B)$. This contradicts the fact that $C$ is a circuit of dimension $t+2$. Conversely, consider an automorphism $\phi$ of $\mathcal{C}_{t+2}$, i.e. $\phi(C)\in \mathcal{C}_{t+2}$ for all $C\in \mathcal{C}_{t+2}$. First, we observe that for a fixed circuit $C$ and every independent space $I$ such that $I\subseteq C$, we have that $\phi(I)$ is also independent. Otherwise, $\phi(C)$ would contain a dependent $(t+1)$-subspace $\phi(I)$. Let $B\in \mathcal{B}$ and we will show that $\phi(B)\in \mathcal{B}$. Assume that $A$ is a $t$-dimensional subspace such that $A\subseteq B$ and $\phi(A)\neq A$. As before, we denote by $B_A=B$ and by $B_{\phi(A)}$ the unique block containing $\phi(A)$. As explained in the proof of Theorem \[design\_circuits-2\], the $\lambda_{\mathcal{C}_{t+2}}$ circuits $C\in \mathcal{C}_{t+2}$ containing $A$ are counted by taking an independent space $I$ which contains $A$ and defining $C=I+x$, with $x$ a one-dimensional space outside $I$ not contained in any forbidden block. This implies that the spaces $\phi(C)=\phi(I)+\phi(x)$ give $\lambda_{\mathcal{C}_{t+2}}$ distinct circuits containing $\phi(A)$. On the other hand, if a suitable one-dimensional space $x'$ exists, we can also construct a circuit $C'=\phi(I)+x'$, which will contain $\phi(A)$. Since $C'$ is different from the $\lambda_{\mathcal{C}_{t+2}}$ circuits constructed before, we get a contradiction.
\[rem:autgroup\] Subspace designs with parameters $2$-$(7,3,8;2)$ and $2$-$(7,3,14;3)$ were found by computer search in [@braun2005systematic], applying the Kramer-Mesner method and under the assumption that their automorphism groups contain a Singer cycle. If these designs were the same as the ones presented in Table $1$, then their automorphism group would be isomorphic to that of an $STS(7;2)$ $q$-Steiner triple system. However, it is known that the $q$-Fano plane has automorphism group of order at most $2$ [@BraunKier; @KierKurz]. This shows that the designs that could be implied by our construction are not isomorphic to those obtained by Braun *et al.* [@braun2005systematic].
Acknowledgements
================
This paper is the product of a collaboration that was initiated at the Women in Numbers Europe (WIN-E3) conference, held in Rennes, August 26-30, 2019. The authors are very grateful to the organisers: Sorina Ionica, Holly Krieger, and Elisa Lorenzo García, for facilitating their participation at this workshop, which was supported by the Henri Lebesgue Center, the Association for Women in Mathematics (AWM) and the Clay Mathematics Institute (CMI). The fifth author is supported by the Swiss Confederation through the Swiss Government Excellence Scholarship no: 2019.0413 and by the Swiss National Science Foundation grant n. 188430.
---
abstract: 'We consider a simple linear reversible isomerization reaction $\mathrm{A} \rightleftharpoons \mathrm{B}$ under subdiffusion described by continuous time random walks (CTRW). The reactants’ transformations take place independently of the motion and are described by constant rates. We show that the form of the ensuing system of mesoscopic reaction-subdiffusion equations is somewhat unusual: the equation giving the time derivative of one reactant concentration, say $A(x,t)$, contains terms depending not only on $\Delta A$, but also on $\Delta B$, i.e. it depends also on the transport operator of the other reactant. Physically this is due to the fact that several transitions from A to B and back may take place at one site before the particle jumps.'
author:
- 'F. Sagués'
- 'V.P. Shkilev'
- 'I.M. Sokolov'
title: 'Reaction-Subdiffusion Equations for the A $\rightleftharpoons$ B Reaction'
---
There are several reasons to discuss in detail the structure of mesoscopic kinetic equations describing the behavior of a simple reversible isomerization reaction $\mathrm{A} \rightleftharpoons \mathrm{B}$ under subdiffusion.
Many phenomena in systems out of equilibrium can be described within a framework of reaction-diffusion equations. Examples can be found in various disciplines ranging from chemistry and physics to biology. Both reaction-diffusion systems with normal and anomalous diffusion have been extensively studied over the past decades. However, for the latter, a general theoretical framework which would hold for all kinds of reactions is still absent. The reasons for subdiffusion and therefore its properties can be different in systems of different kind; we concentrate here on the situations when such subdiffusion can be adequately described by continuous-time random walks (CTRW). In CTRW the overall particle’s motion can be considered as a sequence of jumps interrupted by waiting times, the case pertinent to many systems where the transport is slowed down by obstacles and binding sites. In the case of anomalous diffusion these times are distributed according to a power law lacking the mean. The case of exponential distribution, on the other hand, corresponds to a normal diffusion. On the microscopic level of particles’ encounter the consideration of subdiffusion does not seem to be problematic, although it has posed several interesting questions [@micro1; @micro2; @micro3; @micro4; @micro5]. However, these microscopic approaches cannot be immediately adopted for description of spatially inhomogeneous systems, which, in the case of normal diffusion, are successfully described within the framework of reaction-diffusion equations. To discuss such behavior under subdiffusion many authors used the kind of description where the customary reaction term was added to a subdiffusion equation for concentrations to describe such phenomena as a reaction front propagation or Turing instability [@M1; @M2; @HW1; @HW2; @HLW; @LHW].
The results of these works were jeopardized after it was shown in Ref.[@SSS1] that this procedure does not lead to a correct description even of a simple irreversible isomerization reaction $\mathrm{A} \rightarrow \mathrm{B}$. The transport operator describing the subdiffusion is explicitly dependent on the properties of the reaction, which stems from the essentially very simple observation that only those particles jump (as A) which survive (as A).
The properties of the reaction depend strongly on whether the reaction takes place only upon a step of the particle, or independently of the particles’ steps, and moreover, whether the newborn particle retains the rest of its previous waiting time or is assigned a new one [@HLW1; @NeNe]. Here we consider in detail the following situation: The $\mathrm{A} \rightleftharpoons \mathrm{B}$ transformations take place independently of the particles’ jumps; the waiting time of a particle on a site is not changed by the reaction, both for the forward and for the backward transformation. As a motivation for such a scheme we can consider the reaction as taking place in an aqueous solution which soaks a porous medium (say a sponge or some geophysical formation). If the sojourn times in each pore are distributed according to a power law, the diffusion on the larger scales is anomalous; on the other hand, the reaction within each pore follows the usual kinetics. We start by putting a droplet containing, say, only A particles somewhere within the system and follow the spread and the reaction by measuring the local A and B concentrations.
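To make the scheme considered here concrete, the following Monte Carlo sketch simulates exactly this situation: independent walkers with heavy-tailed sojourn times that interconvert between A and B at constant rates during their waiting periods, without resetting the waiting-time clock. It assumes NumPy, and all numerical parameter values are illustrative choices rather than values used in this paper.

```python
# Monte Carlo sketch of the CTRW model described above: particles start as A at
# the origin, wait on each site for a heavy-tailed (Pareto) sojourn time,
# interconvert A <-> B with rates k1, k2 independently of the motion, and keep
# the remaining waiting time when they change species.  Parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, tau0 = 0.7, 1.0          # waiting-time exponent and scale, psi(t) ~ t^(-1-alpha)
k1, k2 = 0.5, 0.3               # A -> B and B -> A rates
T_obs, n_particles = 50.0, 20000

def waiting_time():
    # Pareto density psi(t) = alpha * tau0^alpha / t^(1+alpha) for t > tau0
    return tau0 * (1.0 - rng.random()) ** (-1.0 / alpha)

def evolve_species(is_A, dt):
    """Exact two-state update over a time interval dt (reaction decoupled from jumps)."""
    K = k1 + k2
    p_AA = k2 / K + k1 / K * np.exp(-K * dt)    # P_AA(dt)
    p_BA = k2 / K - k2 / K * np.exp(-K * dt)    # P_BA(dt)
    p = p_AA if is_A else p_BA                  # probability of being A afterwards
    return rng.random() < p

positions, species = [], []
for _ in range(n_particles):
    x, t, is_A = 0, 0.0, True
    while True:
        w = waiting_time()
        if t + w > T_obs:                        # observation time falls inside this sojourn
            is_A = evolve_species(is_A, T_obs - t)
            break
        is_A = evolve_species(is_A, w)           # reactions during the full sojourn
        x += int(rng.choice((-1, 1)))            # unbiased jump to a neighbouring site
        t += w
    positions.append(x)
    species.append(is_A)

positions, species = np.array(positions), np.array(species)
print("fraction of A at T_obs:", species.mean())      # relaxes towards k2/(k1+k2)
print("mean squared displacement:", (positions ** 2).mean())
```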
Stoichiometry of the chemical reaction implies the existence of a conservation law. In the case of the $\mathrm{A} \rightleftharpoons
\mathrm{B}$ it is evident that the overall number of particles is conserved. If the isomerization takes place independently of the particle’s motion, then the evolution of the overall concentration $C(x,t)=A(x,t)+B(x,t)$, where $A(x,t)$ and $B(x,t)$ are the local concentrations of A and B particles respectively, is not influenced by the reaction, and has to follow the simple subdiffusion equation $$\dot{C}(x,t) = K_\alpha \;_0D_t^{1-\alpha} \Delta C$$ as it should be. On the other hand, neither the result of the treatment in Ref.[@SSS2] nor the result of Ref.[@YH] reproduce this behavior which is a consequence of the fundamental stoichiometry. In the work [@SSS2] (where two of the authors of the present report were involved) it was implicitly assumed that the back reaction can only take place on a step of a particle, without discussing this assumption. The corresponding description led to expressions which could not be cast in a form resembling the reaction-diffusion equations at all. The more general approach of Ref.[@YH], definitely correct for irreversible reactions, also fails to reproduce this local conservation law and thus is inappropriate for the description of reversible reactions under the conditions discussed. According to Ref.[@NeNe] the approach of Ref.[@YH] implies that the waiting time after each reaction is assigned anew, which makes a large difference in the reversible case.
As a step on the way to understanding the possible form of the reaction-subdiffusion equations we consider in what follows the simplest linear reversible scheme where each step can be explicitly checked. We show that the form of the corresponding equations is somewhat unusual, which emphasizes the role of coupling between the reaction and transport in reaction-subdiffusion kinetics. Actually, the equation giving the time derivative of one reactant concentration, say $A(x,t)$, contains the terms depending not only on $\Delta A$, but also on $\Delta B$, i.e. depends also on the transport operator of another reactant. Physically this is due to the fact that several transitions from A to B and back may take place at one site before the particle jumps. This dependence disappears only in the Markovian case due to vanishing of the corresponding prefactor.
Following the approach of Ref. [@SSS1; @SSS2] we describe the behavior of concentrations in the discrete scheme by the following equations: $$\begin{aligned}
&& \dot{A_i}(t)=-I_i(t)+\frac{1}{2} I_{i-1}(t)+\frac{1}{2} I_{i+1}(t)-k_1 A_i(t)+k_2 B_i(t) \\
&& \dot{B_i}(t)=-J_i(t)+\frac{1}{2} J_{i-1}(t)+\frac{1}{2} J_{i+1}(t)+k_1 A_i(t)-k_2 B_i(t) \end{aligned}$$ where $I_i(t)$ is the loss flux of A-particles on site $i$ and $J_i(t)$ is the corresponding loss flux of B-particles at site $i$. In the continuous limit the equations read as $$\begin{aligned}
\dot{A}(x,t)&=&\frac{a^2}{2}\Delta I(x,t)-k_1 A(x,t)+k_2 B(x,t) \\
\dot{B}(x,t)&=&\frac{a^2}{2}\Delta J(x,t)+k_1 A(x,t)-k_2 B(x,t).
\label{Cont} \end{aligned}$$ We now use the conservation laws for A and B particles to obtain the equations for the corresponding fluxes. The equations for the particles’ fluxes on a given site in time domain (the index $i$ or the coordinate $x$ is omitted) are:
$$\begin{aligned}
I(t) &=& \psi(t) P_{AA}(t) A(0)
+ \int_0^t \psi(t-t') P_{AA}(t-t')
\left[ I(t') + k_1 A(t') - k_2B(t') +\dot{A}(t') \right] \nonumber \\
&+& \psi(t) P_{BA}(t) B(0)
+ \int_0^t \psi(t-t') P_{BA}(t-t')
\left[ J(t') - k_1 A(t') + k_2B(t') +\dot{B}(t') \right]
\label{EqA}\end{aligned}$$
for A-particles, and $$\begin{aligned}
J(t) &=& \psi(t) P_{BB}(t) B(0)
+ \int_0^t \psi(t-t') P_{BB}(t-t')
\left[ J(t') - k_1 A(t') + k_2B(t') +\dot{B}(t') \right] \\
&+& \psi(t) P_{AB}(t) A(0)
+ \int_0^t \psi(t-t') P_{AB}(t-t')
\left[ J(t') + k_1 A(t') - k_2B(t') +\dot{A}(t') \right]\end{aligned}$$ for B-particles.
The explanation of the form of e.g. Eq.(\[EqA\]) is as follows: An A-particle which jumps from a given site at time $t$ either was there as A from the very beginning, and jumps as A probably having changed its nature several times in between, or came later as A and jumps as A, or was there from the very beginning as B and leaves the site as A, etc. Here $P_{AA}$, $P_{AB}$, $P_{BA}$ and $P_{BB}$ are the survival/transformation probabilities, i.e. the probability that a particle coming to a site as A at $t=0$ leaves it at time $t$ as A (probably having changed its nature from A to B and back in between), the probability that a particle coming to a site as A at $t=0$ leaves it at time $t$ as B, the probability that a particle coming to a site as B at $t=0$ leaves it at time $t$ as A, and the probability that a particle coming to a site as B at $t=0$ leaves it at time $t$ as B: $$\begin{aligned}
P_{AA}(t)&=&\frac{k_2}{k_1+k_2}+\frac{k_1}{k_1+k_2}e^{-(k_1+k_2)t} \nonumber \\
P_{BA}(t)&=&\frac{k_2}{k_1+k_2}-\frac{k_2}{k_1+k_2}e^{-(k_1+k_2)t} \nonumber \\
P_{BB}(t)&=&\frac{k_1}{k_1+k_2}+\frac{k_2}{k_1+k_2}e^{-(k_1+k_2)t} \nonumber \\
P_{AB}(t)&=&\frac{k_1}{k_1+k_2}-\frac{k_1}{k_1+k_2}e^{-(k_1+k_2)t}\end{aligned}$$ These are given by the solution of the classical reaction kinetic equations $$\begin{aligned}
\dot{A}(t)&=&-k_1A(t)+k_2B(t) \nonumber \\
\dot{B}(t)&=&k_1A(t)-k_2B(t).\end{aligned}$$ The values of $P_{AA}$, $P_{AB}$ are given by the solutions $P_{AA}(t)=A(t)$ and $P_{AB}(t)=B(t)$ under initial conditions $A(0)=1,\; B(0)=0$, and the values of $P_{BA}$ and $P_{BB}$ are given by $P_{BA}(t)=A(t)$ and $P_{BB}(t)=B(t)$ under initial conditions $A(0)=0,\; B(0)=1$.
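These expressions can be verified symbolically: the following short SymPy sketch (an editorial check, assuming SymPy is available) confirms that $P_{AA}$ and $P_{AB}$ solve the kinetic equations with the initial conditions $A(0)=1$, $B(0)=0$; the check for $P_{BB}$ and $P_{BA}$ is identical with the roles of $k_1$ and $k_2$ interchanged.

```python
# Check that P_AA and P_AB solve dA/dt = -k1 A + k2 B, dB/dt = k1 A - k2 B
# with A(0) = 1, B(0) = 0.
import sympy as sp

t, k1, k2 = sp.symbols('t k1 k2', positive=True)
K = k1 + k2

P_AA = k2/K + (k1/K) * sp.exp(-K*t)
P_AB = k1/K - (k1/K) * sp.exp(-K*t)

print(sp.simplify(sp.diff(P_AA, t) + k1*P_AA - k2*P_AB))            # expected: 0
print(sp.simplify(sp.diff(P_AB, t) - k1*P_AA + k2*P_AB))            # expected: 0
print(sp.simplify(P_AA.subs(t, 0)), sp.simplify(P_AB.subs(t, 0)))   # expected: 1 0
```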
In the Laplace domain we get: $$\begin{aligned}
I(u)&=&\psi_1(u)\left[ I(u)+k_1A(u)-k_2B(u)+uA(u)\right] \nonumber \\
&& +\psi_2(u)\left[J(u)-k_1A(u)+k_2B(u)+uB(u)\right] \nonumber \\
J(u)&=&\psi_3(u)\left[J(u)-k_1A(u)+k_2B(u)+uB(u)\right] \label{system1}\\
&& +\psi_4(u)\left[ I(u)+k_1A(u)-k_2B(u)+uA(u)\right] \nonumber\end{aligned}$$ where $\psi_1(u),\psi_2(u), \psi_3(u)$ and $\psi_4(u)$ are the Laplace transforms of $\psi_1(t)=\psi(t)P_{AA}(t)$, $\psi_2(t)=\psi(t)P_{BA}(t)$, $\psi_3(t)=\psi(t)P_{BB}(t)$ and $\psi_4(t)=\psi(t)P_{AB}(t)$, respectively.
Using shift theorem we can get the representations of $\psi_i$ in the Laplace domain. They read: $$\begin{aligned}
\psi_1(u)&=&\frac{k_2}{k_1+k_2}\psi(u)+\frac{k_1}{k_1+k_2}\psi(u+k_1+k_2)\nonumber \\
\psi_2(u)&=&\frac{k_2}{k_1+k_2}\psi(u)-\frac{k_2}{k_1+k_2}\psi(u+k_1+k_2)\nonumber \\
\psi_3(u)&=&\frac{k_1}{k_1+k_2}\psi(u)+\frac{k_2}{k_1+k_2}\psi(u+k_1+k_2)\nonumber \\
\psi_4(u)&=&\frac{k_1}{k_1+k_2}\psi(u)-\frac{k_1}{k_1+k_2}\psi(u+k_1+k_2)\end{aligned}$$ The system of linear equations for the currents, Eqs.(\[system1\]), then has the solution $$\begin{aligned}
I(u)=a_{11}(u) A(u)+ a_{12}(u) B(u) \\
J(u)=a_{21}(u) A(u)+ a_{22}(u) B(u)\end{aligned}$$ with the following values for the coefficients: $$\begin{aligned}
a_{11} &=& \frac{1}{k_1+k_2}\frac{1}{1+\phi\psi-\psi-\phi} \\
&\times& \left[-\phi\psi\left(k_1k_2+u(k_1+k_2)+k_1^2\right) \right. \\
&+& \left. \phi k_1(u+k_1+k_2)+\psi k_2 u\right], \end{aligned}$$
$$\begin{aligned}
a_{21} &=& \frac{k_1}{k_1+k_2}\frac{1}{1+\phi\psi-\psi-\phi} \\
&\times& \left[\phi\psi (k_1+k_2 )+(\psi-\phi)u-(k_1+k_2)\phi\right],\end{aligned}$$
and with the two other coefficients, $a_{12}$ and $a_{22}$ differing from $a_{21}$ and $a_{11}$ by interchanging $k_1$ and $k_2$. Here $\psi \equiv \psi(u)$ and $\phi \equiv \psi(u+k_1+k_2)$.
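As an independent check of this solution, one can let a computer algebra system solve the linear system for $I(u)$ and $J(u)$ and extract the coefficients. The sketch below (assuming SymPy) does this for the exponential waiting-time density and recovers the Markovian values quoted next, $a_{11}=a_{22}=1/\tau$ and $a_{12}=a_{21}=0$.

```python
# Solve the Laplace-domain system for the loss fluxes I(u), J(u) with SymPy and
# verify the exponential (Markovian) special case of the coefficients a_ij.
import sympy as sp

u, k1, k2, tau, A, B, I, J = sp.symbols('u k1 k2 tau A B I J', positive=True)
K = k1 + k2

psi = lambda s: 1 / (1 + s * tau)          # exponential density in Laplace space
psi1 = k2/K * psi(u) + k1/K * psi(u + K)
psi2 = k2/K * psi(u) - k2/K * psi(u + K)
psi3 = k1/K * psi(u) + k2/K * psi(u + K)
psi4 = k1/K * psi(u) - k1/K * psi(u + K)

gainA = I + k1*A - k2*B + u*A              # bracketed combinations of the system
gainB = J - k1*A + k2*B + u*B
sol = sp.solve([sp.Eq(I, psi1*gainA + psi2*gainB),
                sp.Eq(J, psi3*gainB + psi4*gainA)], [I, J], dict=True)[0]

a11 = sp.simplify(sol[I].diff(A))          # coefficient of A(u) in I(u)
a12 = sp.simplify(sol[I].diff(B))          # coefficient of B(u) in I(u)
print(a11, a12)                            # expected: 1/tau 0
```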
For the exponential waiting time density $\psi(t)=\tau^{-1}\exp(-t/\tau)$ the corresponding values become $$\begin{aligned}
a_{11}&=&a_{22}=1/\tau \\
a_{12}&=&a_{21}=0,\end{aligned}$$ and the system of equations for the concentrations in the continuous limit, Eqs.(\[Cont\]), reduces to the customary system of reaction-diffusion equations. For the case of the power-law distributions $\psi(t) \simeq
t^{-1-\alpha}$ the Laplace transform of the waiting time PDF is $\psi(u) \simeq 1-cu^\alpha$ for small $u$, with $c=\tau^\alpha \Gamma(1-\alpha)$, so that $$\begin{aligned}
a_{11}&=&\frac{c^{-1}}{k_1+k_2} \left[ k_2u^{1-\alpha}+k_1(u+k_1+k_2)^{1-\alpha} \right.\\
&-& \left. ck_1(k_1+k_2) -cu(k_1+k_2)\right] \\
a_{22}&=&\frac{c^{-1}}{k_1+k_2} \left[ k_1u^{1-\alpha}+k_2(u+k_1+k_2)^{1-\alpha} \right. \\
&-& \left. ck_2(k_1+k_2) -cu(k_1+k_2)\right] \\
a_{21}&=&\frac{c^{-1}}{k_1+k_2} \left[ k_1u^{1-\alpha}-k_1(u+k_1+k_2)^{1-\alpha} \right. \\
&+& \left. ck_1(k_1+k_2) \right] \\
a_{12}&=&\frac{c^{-1}}{k_1+k_2}\left[ k_2u^{1-\alpha}-k_2(u+k_1+k_2)^{1-\alpha} \right. \\
&+& \left. ck_2(k_1+k_2) \right].\end{aligned}$$ Now we turn to the case of long times and relatively slow reactions, so that all parameters $u$, $k_1$ and $k_2$ can be considered as small. In this case, for $\alpha<1$, the leading terms in all these parameters are the first two terms in each of the four equations, and the other terms can be neglected. In the time domain the operator corresponding to $u^{1-\alpha}$ is that of the fractional derivative $_0D_t^{1-\alpha}$, and the operator corresponding to $(u+k_1+k_2)^{1-\alpha}$ is the transport operator of Ref.[@SSS2], $_0\mathcal{T}_t^{1-\alpha}(k_1+k_2)$ with $_0\mathcal{T}_t^{1-\alpha}(k) = e^{-kt} \;_0D_t^{1-\alpha} e^{kt}$. Introducing the corresponding equations for the currents into the balance equations for the particle concentrations we get:
$$\begin{aligned}
\dot{A}(x,t)&=&K_\alpha \left[\frac{k_2}{k_1+k_2} \;_0D_t^{1-\alpha}+
\frac{k_1}{k_1+k_2} \;_0\mathcal{T}_t^{1-\alpha}(k_1+k_2) \right] \Delta A(x,t)\nonumber \\
&+&K_\alpha \left[\frac{k_2}{k_1+k_2} \;_0D_t^{1-\alpha}-
\frac{k_2}{k_1+k_2} \;_0\mathcal{T}_t^{1-\alpha}(k_1+k_2) \right] \Delta B(x,t)
-k_1 A(x,t)+k_2 B(x,t) \label{ConcA} \\
\dot{B}(x,t)&=& K_\alpha \left[\frac{k_1}{k_1+k_2} \;_0D_t^{1-\alpha}-
\frac{k_1}{k_1+k_2} \;_0\mathcal{T}_t^{1-\alpha}(k_1+k_2) \right] \Delta A(x,t) \nonumber \\
&+&K_\alpha \left[\frac{k_1}{k_1+k_2} \;_0D_t^{1-\alpha}+
\frac{k_2}{k_1+k_2} \;_0\mathcal{T}_t^{1-\alpha}(k_1+k_2) \right] \Delta B(x,t)
+k_1 A(x,t)-k_2 B(x,t). \label{ConcB}\end{aligned}$$
Note also that the equation for $C(x,t) = A(x,t)+B(x,t)$ following from summing up the Eqs.(\[ConcA\]) and (\[ConcB\]) is a simple subdiffusion equation $$\dot{C}(x,t) = K_\alpha \;_0D_t^{1-\alpha} \Delta C$$ as it should be. On the other hand, neither the result of the treatment in Ref.[@SSS2] nor the result of Ref.[@YH] reproduce this behavior which is a consequence of the fundamental conservation law prescribed by the stoichiometry of reaction.
Note that this system still holds for $\alpha=1$ when both the fractional derivative $\;_0D_t^{1-\alpha}$ and the transport operator $\;_0\mathcal{T}_t^{1-\alpha}(k_1+k_2)$ are unit operators. In this case the usual system of reaction-diffusion equations is restored: $$\begin{aligned}
\dot{A}(x,t)&=& K \Delta A(x,t)-k_1 A(x,t)+k_2 B(x,t) \\
\dot{B}(x,t)&=& K \Delta B(x,t)+k_1 A(x,t)-k_2 B(x,t). \end{aligned}$$
Let us summarize our findings. We considered the system of mesoscopic (reaction-subdiffusion) equations describing the kinetics of a reversible isomerization $\mathrm{A} \rightleftharpoons \mathrm{B}$ taking place in a subdiffusive medium. When the waiting times of the particles are not assigned anew after their transformations (i.e. when the overall concentration of reactants is governed by the simple subdiffusion equation), this reaction is described by a rather unusual system of reaction-subdiffusion equations of a form which was, to our knowledge, not discussed before: each of the equations, giving the temporal changes of the corresponding concentration, depends on the Laplacians of *both* concentrations, $A$ and $B$ (not only on the same one, as in the case of normal diffusion). This is a rather unexpected situation, especially taking into account the fact that our reaction is practically decoupled from the transport of particles. The system reduces to the usual reaction-diffusion form for normal diffusion (due to cancellations). It is important to note that the physical reason for the appearance of such a form is the possibility of several transformations $\mathrm{A} \to \mathrm{B} \to \mathrm{A} \to
\mathrm{B} \, ...$ during one waiting period, and that such possibilities have to be taken into account also for more complex reactions including reversible stages.
**Acknowledgement:** IMS gratefully acknowledges financial support by DFG within SFB 555 joint research program. FS acknowledges financial support from MEC under project FIS 2006 - 03525 and from DURSI under project 2005 SGR 00507.
[99]{}
R. Metzler, J. Klafter, Chem. Phys. Lett. **321**, 238 (2000)
J. Sung, E. Barkai, R.J. Silbey and S. Lee, J. Chem. Phys. **116** 2338 (2002); J. Sung, R.J. Silbey, Phys. Rev. Lett. **91** 160601 (2003)
K. Seki, M. Wojcik and M. Tachiya, J. Chem. Phys. **119**, 2165 (2003); J. Chem. Phys. **119**, 7525 (2003)
S.B. Yuste, K. Lindenberg, Phys. Rev. Lett. **87** 118301 (2001)
S.B. Yuste, K. Lindenberg, Phys. Rev. E **72**, 061103 (2005)
S. Fedotov and V. Méndez, Phys. Rev. E **66** 030102(R) (2002)
V. Méndez, V. Ortega-Cejas and J. Casas-Vásques, Eur. J. Phys. B **53** 503 (2006)
B.I. Henry and S.L. Wearne, Physica A **256** 448 (2000)
B.I. Henry and S.L. Wearne, SIAM J. Appl. Math. **62** 870 (2002)
B.I. Henry, T.A.M. Langlands and S.L. Wearne, Phys. Rev. E **72** 026101 (2005)
T.A.M. Langlands, B.I. Henry and S.L. Wearne, J. Phys.-Condensed Matter **19** 065115 (2007)
I. M. Sokolov, M. G. W. Schmidt and F. Sagués, Phys. Rev. E **73**, 031102 (2006)
B.I. Henry, T.A.M. Langlands and S.L. Wearne, Phys. Rev. E, **74** 031116 (2006)
Y. Nec and A.A. Nepomnyashchy, J. Phys. A: Math. Theor. **40** 14687 (2007)
M. G. W. Schmidt, F. Sagués and I. M. Sokolov, J. Phys.: Cond. Mat. **19**, 06511 (2007)
A. Yadav and W. Horsthemke, Phys. Rev. E **74** 066118 (2006)
Introduction
============
Options are derivative financial products which allow buying and selling of risks related to future price variations. The option buyer has the right (but not the obligation) to purchase (for a call option) or sell (for a put option) the underlying asset in the future (at the exercise date) at a fixed price. Estimates of the option price are based on the well known arbitrage pricing theory: the option price is given by the expected value of the option payoff at its exercise date. For example, the price of a call option is the expected value of the positive part of the difference between the market value of the underlying asset and the fixed (strike) price at the exercise date. The main challenge in this situation is modelling the future asset price and then estimating the payoff expectation, which is typically done using statistical Monte Carlo (MC) simulations and careful selection of the static and dynamic parameters which describe the market and assets.
Some of the widely used options include the American option, where the exercise date is variable, and its slight variation, the Bermudan/American (BA) option, where the exercise date is restricted to a discrete set of dates. Pricing these options with a large number of underlying assets is computationally intensive and requires several days of serial computational time (i.e. on a single processor system). For instance, Ibanez and Zapatero (2004) [@ibanez2004mcv] state that pricing the option with five assets takes two days, which is not desirable in modern time critical financial markets. Typical approaches for pricing options include the binomial method [@cox1979ops] and MC simulations [@glasserman2004mcm]. Since binomial methods are not suitable for high dimensional options, MC simulations have become the cornerstone for simulation of financial models in the industry. Such simulations have several advantages, including ease of implementation and applicability to multi–dimensional options. Although MC simulations are popular due to their “embarrassingly parallel" nature, which for simple simulations allows an almost arbitrary degree of near-perfect parallel speed-up, their application to pricing American options is complex [@ibanez2004mcv], [@broadie1997vao], [@longstaff:vao]. Researchers have proposed several approximation methods to improve the tractability of MC simulations. Recent advances in parallel computing hardware such as multi-core processors, clusters, compute “clouds", and large scale computational grids have also attracted the interest of the computational finance community. In the literature, there exist a few parallel BA option pricing techniques. Examples include Huang (2005) [@huang2005pap] or Thulasiram (2002) [@thulasiram2016pep], which are based on the binomial lattice model. However, only a few studies have focused on parallelizing MC methods for BA pricing [@toke2006mcv]. In this paper, we aim to parallelize two American option pricing methods: the first approach, proposed in Ibanez and Zapatero (2004) [@ibanez2004mcv] (I&Z), computes the optimal exercise boundary, and the second, proposed by Picazo (2002) [@hickernell2002mca] (CMC), uses the classification of continuation values. In their sequential form both methods resemble recursive programming: at a given exercise opportunity they trigger many small independent MC simulations to compute the continuation values. The optimal strategy of an American option is to compare the exercise value (intrinsic value) with the continuation value (the expected cash flow from continuing the option contract), and to exercise if the exercise value is larger. In the case of the I&Z Algorithm the continuation values are used to parameterize the exercise boundary, whereas the CMC Algorithm classifies them to provide a characterization of the optimal exercise boundary. Later, both approaches compute the option price using MC simulations based on the computed exercise boundaries.
Our roadmap is to study both algorithms to highlight their potential for parallelization: for the different phases, our aim is to identify where and how the computation can be split into independent parallel tasks. We assume a master-worker grid programming model, where the master node splits the computation into such tasks and assigns them to a set of worker nodes. Later, the master also collects the partial results produced by these workers. In particular, we investigate parallel BA option pricing to significantly reduce the pricing time by harnessing the computational power provided by the computational grid.
The paper is organized as follows. Sections \[sectionibanezzapatero\] and \[sectionpicazo\] focus on two pricing methods and are structured in a similar way: a brief introduction to present the method, sequential followed by parallel algorithm and performance evaluation concludes each section. In section \[sectionconclusion\] we present our conclusions.
Computing optimal exercise boundary algorithm {#sectionibanezzapatero}
=============================================
Introduction
------------
In [@ibanez2004mcv], the authors propose an option pricing method that builds a full exercise boundary as a polynomial curve whose dimension depends on the number of underlying assets. This algorithm consists of two phases. In the first phase the exercise boundary is parameterized. For parameterization, the algorithm uses linear interpolation or regression of a quadratic or cubic function at a given exercise opportunity. In the second phase, the option price is computed using MC simulations. These simulations are run until the price trajectory reaches the dynamic boundary computed in the first phase. The main advantage of this method is that it provides a full parameterization of the exercise boundary and the exercise rule. For American options, a buyer is mainly concerned with these values as he can decide with ease whether or not to exercise the option. At each exercise date $t$, the optimal exercise point $S_t^*$ is defined by the following implicit condition, $$\label{matched_condition}
P_t(S_t^{*}) = I(S_t^{*})$$ where $P_t(x)$ is the price of the American option over the period $[t,T]$, $I(x)$ is the exercise value (intrinsic value) of the option and $x$ is the asset value at opportunity date $t$. As explained in [@ibanez2004mcv], these optimal values stem from the monotonicity and convexity of the price function $P(\cdot)$ in [(\[matched\_condition\])]{}. These are general properties satisfied by most derivative securities such as maximum, minimum or geometric average basket options. However, for problems where the monotonicity and convexity of the price function cannot be easily established, this algorithm has to be revisited. In the following section we briefly discuss the sequential algorithm, followed by a proposed parallel solution.
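As an illustration of how condition [(\[matched\_condition\])]{} is used in practice, the sketch below locates a single optimal exercise point by bisection. The routine `continuation_value` stands for a Monte Carlo estimator of $P_t$ and is a hypothetical user-supplied function, and the intrinsic value is that of a call. The original method relies on Newton-type iterations rather than bisection, so this is only a simplified illustration.

```python
# Illustrative sketch: solve the implicit condition P_t(S*) = I(S*) for one
# optimal exercise point by bisection, assuming a user-supplied Monte Carlo
# estimator `continuation_value(S)` of the continuation price P_t(S).

def find_exercise_point(continuation_value, strike, s_lo, s_hi, tol=1e-3):
    """Root of g(S) = continuation_value(S) - (S - K)^+ on [s_lo, s_hi]."""
    def g(s):
        return continuation_value(s) - max(s - strike, 0.0)
    # For a dividend-paying call, continuing is optimal below the boundary
    # (g > 0) and exercising is optimal above it (g <= 0).
    while s_hi - s_lo > tol:
        mid = 0.5 * (s_lo + s_hi)
        if g(mid) > 0.0:
            s_lo = mid       # still in the continuation region: boundary lies higher
        else:
            s_hi = mid
    return 0.5 * (s_lo + s_hi)
```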
Sequential Boundary Computation {#subsectionibanezzapaterosequential}
-------------------------------
The algorithm proposed in [@ibanez2004mcv] is used to compute the exercise boundary. To illustrate this approach, we consider a call BA option on the maximum of $d$ assets modeled by Geometric Brownian Motion (GBM). It is a standard example for the multi–dimensional BA option with maturity date $T$, constant interest rate $r$ and the price of this option at $t_0$ is given as
$P_{t_0} = {\mathbb E}\left ( \exp{(-r\tau)} \Phi(S_{\tau},\tau) | S_{t_0} \right ) $
where $\tau$ is the optimal stopping time $\in \lbrace t_1,..,T
\rbrace $, defined as the first time $t_i$ such that the underlying value $S_{\tau}$ surpasses the optimal exercise boundary at the opportunity $\tau$; otherwise the option is held until $\tau = \infty$. The payoff at time $\tau$ is defined as follows: $\Phi(S_{\tau},\tau) = (\max_i(S^i_{\tau}) - K)^+$, where $i$ = 1,..,$d$, $S$ is the underlying asset price vector and $K$ is the strike price. The parameter $d$ has a strong impact on the complexity of the algorithm, except in some cases, such as average options, where the number of dimensions $d$ can be easily reduced to one. For an option on the maximum of $d$ assets there are $d$ separate exercise regions which are characterized by $d$ boundaries [@ibanez2004mcv]. These boundaries are monotonic and smooth curves in ${\mathbb R}^{d-1}$. The algorithm uses backward recursive time programming, with a finite number of exercise opportunities $m = {1,...,N_T}$. Each boundary is computed by regression on $J$ boundary points in ${\mathbb R}^{d-1}$. At each given opportunity, these optimal boundary points are computed using $N_1$ MC simulations. Further, in the case of an option on the maximum of $d$ underlying assets, the boundary points are computed for each asset. It takes $n$ iterations for each individual point to converge. The complexity of this step is ${\mathbb O}\left ( \sum_{m=1}^{N_T} d \times J \times m \times N_1 \times (N_T - m) \times n \right )$. After estimating $J$ optimal boundary points for each asset, $d$ regressions are performed over these points to get $d$ exercise boundaries. Let us assume that the complexity of this step is ${\mathbb O}( N_T \times regression(J))$, where the complexity of the regression is assumed to be constant. After computing the boundaries at all $N_T$ exercise opportunities, in the second phase, the price of the option is computed using a standard MC simulation of $N$ paths in ${\mathbb R}^d$. The complexity of the pricing phase is ${\mathbb O}(d \times N_T \times N)$. Thus the total complexity of the algorithm is as follows,
\[equa-complexity-ibanez\] ${\mathbb O}\left ( \sum_{m=1}^{N_T} d \times J \times m \times N_1 \times (N_T - m) \times n + N_T \times regression(J) + d \times N_T \times N \right )$\
$\approx {\mathbb O}\left ( N_T^2 \times J \times d \times N_1 \times n + N_T \times ( J + d \times N ) \right )$
For the performance benchmarks, we use the same simulation parameters as given in [@ibanez2004mcv]. Consider a call BA option on the maximum of $d$ assets with the following configuration. $$\begin{aligned}
\label{ibanez}
\begin{array}{l}
\mbox {$K=100$, \textit{interest rate r = 0.05}, \textit{volatility rate $\sigma$ = 0.2},}\\
\mbox {\textit{dividend $\delta$ = 0.1}, $J=128$, $N_1=5e3$, $N=1e6$, $d=3$, }\\
\mbox {$N_T = 9$ and $T = 3$ years.}
\end{array}\end{aligned}$$ The sequential pricing of this option [(\[ibanez\])]{} takes 40 minutes. The distribution of the total time for the different phases is shown in Figure \[fig:timeDistribution\_OEB\]. As can be seen, the data generation phase, which simulates and calculates $J$ optimal boundary points, consumes most of the computational time. We believe that a parallel approach to this and other phases could dramatically reduce the computational time. This inspires us to investigate a parallel approach for the I&Z Algorithm, which is presented in the following section. The numerical results that we shall provide indicate that the proposed parallel solution is more efficient than the serial algorithm.
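To make the second (pricing) phase explicit, the following NumPy sketch prices the benchmark option [(\[ibanez\])]{} by plain Monte Carlo once an exercise rule is available. For simplicity the rule is reduced here to a single placeholder threshold per exercise date on $\max_i S^i$, and a smaller number of paths is used; the actual I&Z rule consists of $d$ regressed boundary surfaces, so the number printed by this sketch is not the value reported in the paper.

```python
# Sketch of the pricing phase: Monte Carlo valuation of the Bermudan max-call
# with the parameters of configuration (ibanez), given an (assumed) exercise
# rule in the form of one threshold per exercise date on max_i S^i.
import numpy as np

rng = np.random.default_rng(1)
d, K, r, sigma, delta = 3, 100.0, 0.05, 0.2, 0.1
T, N_T, N_paths = 3.0, 9, 100_000           # fewer paths than N = 1e6 for a quick run
dt = T / N_T
S0 = 90.0
boundary = np.full(N_T, 130.0)              # placeholder thresholds, one per date
boundary[-1] = K                            # at maturity, exercise whenever in the money

drift = (r - delta - 0.5 * sigma**2) * dt
vol = sigma * np.sqrt(dt)

S = np.full((N_paths, d), S0)
alive = np.ones(N_paths, dtype=bool)        # paths not yet exercised
payoff = np.zeros(N_paths)

for m in range(N_T):
    S *= np.exp(drift + vol * rng.standard_normal((N_paths, d)))
    best = S.max(axis=1)
    exercise = alive & (best >= boundary[m]) & (best > K)
    payoff[exercise] = np.exp(-r * (m + 1) * dt) * (best[exercise] - K)
    alive &= ~exercise

stderr = payoff.std(ddof=1) / np.sqrt(N_paths)
print(f"estimated price: {payoff.mean():.3f} +/- {1.96 * stderr:.3f}")
```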
Parallel approach {#subsectionibanezzapateroparallel}
-----------------
In this section, a simple parallel approach for the I&Z Algorithm is presented; the corresponding pseudocode is given in Algorithm \[fig:flowfigure\_OEB\]. This approach is inspired by the solution proposed by Muni Toke [@toke2006mcv], though he presents it in the context of a low–order homogeneous parallel cluster. The algorithm consists of two phases. The first parallel phase is based on the following observation: for each of the $d$ boundaries, the computation of the $J$ optimal boundary points at a given exercise date can be performed independently.
**\[glp\]** Generation of the $J$ *“Good Lattice Points”*. **\[calc\]** Computation of a boundary point with $N_1$ Monte Carlo simulations. **\[reg\]** Regression of the exercise boundary. **\[mc\]** Computation of the partial option price. Estimation of the final option price by merging the partial prices. \[fig:flowfigure\_OEB\]
The optimal exercise boundaries from opportunity date $m$ back to $m-1$ are computed as follows. Note that at $m = N_T$ the boundary is known (e.g. for a call option the boundary at $N_T$ is defined as the strike value). Going backward to $m=N_T-1$, we have to estimate $J$ optimal points from $J$ initial good lattice points [@ibanez2004mcv], [@haber1983pip] in order to regress the boundary at this date. The regression ${\mathbb R}^{d}$ $\rightarrow$ ${\mathbb R}^{d}$ is difficult to achieve in a reasonable amount of time in the case of a large number of training points. To decrease the size of the training set we utilize *“Good Lattice Points”* (GLPs) as described in [@haber1983pip], [@sloan1994lmm], and [@boyle:pas]. In the particular case of a call on the maximum of $d$ assets, only a regression ${\mathbb R}^{d-1}$ $\rightarrow$ ${\mathbb R}$ is needed, but we repeat it $d$ times.
Algorithm \[fig:flowfigure\_OEB\] computes the GLPs using either the SSJ library [@lecuyer2002ssf] or the quantization number sequences presented in [@pages2003oqq]. SSJ is a Java library for stochastic simulation and it computes GLPs as quasi–Monte Carlo sequences such as Sobol or Halton sequences. The algorithm can also use the number sequences readily available at [@quantification]. These sequences are generated using an optimal quadratic quantizer of the Gaussian distribution in more than one dimension. The **\[calc\]** phase of Algorithm \[fig:flowfigure\_OEB\] is embarrassingly parallel and the $J$ boundary points are equally distributed among the computing nodes. At each node, the algorithm simulates $N_1$ paths to compute the approximate points. Then Newton’s iteration method is used to converge each approximated point to the optimal boundary point. After the $J$ optimal boundary points are computed, they are collected by the master node for sequential regression of the exercise boundary. This node then repeats the same procedure for every date $t$, in a recursive way, until $t=t_1$ in the **\[reg\]** phase.
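As an illustration of this phase only, here is a minimal sketch in Python (assuming `multiprocessing` and `scipy`, which are not the tools used in our implementation): the $J$ initial points are drawn from a Sobol sequence as a stand-in for the GLPs, the per-point Monte Carlo/Newton optimization is reduced to the stub `optimize_point`, and the master performs the sequential regression.

```python
from multiprocessing import Pool
from scipy.stats import qmc
import numpy as np

J, d, K = 128, 3, 100.0

def optimize_point(start):
    """Stub for the per-point work: N_1 Monte Carlo simulations plus Newton
    iterations that move an initial lattice point to the optimal boundary point."""
    return K + 30.0 * float(np.mean(start))       # placeholder value

def regress_boundary(points):
    """Stub for the master's sequential regression of the exercise boundary."""
    return np.asarray(points)

if __name__ == "__main__":
    # J initial points in [0, 1]^(d-1); a Sobol sequence stands in for the GLPs.
    initial_points = qmc.Sobol(d=d - 1, scramble=False).random_base2(m=7)  # 2^7 = 128
    with Pool(processes=4) as pool:               # the J points are split among workers
        optimal_points = pool.map(optimize_point, initial_points)
    boundary = regress_boundary(optimal_points)   # sequential regression on the master
    print(len(boundary), "boundary points regressed")
```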
The **\[calc\]** phase provides the exact optimal exercise boundary at every opportunity date. After the boundary has been computed, in the last **\[mc\]** phase the option is priced using parallel MC simulations, as shown in Algorithm \[fig:flowfigure\_OEB\].
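The pricing phase can be sketched as follows (again a hypothetical Python illustration rather than our actual code): each worker simulates its share of the $N$ paths, exercises as soon as the maximum asset value crosses the regressed boundary, and the master merges the partial prices. The boundary values used below are placeholders.

```python
from multiprocessing import Pool
import numpy as np

r, sigma, delta, T, N_T, d, K = 0.05, 0.2, 0.1, 3.0, 9, 3, 100.0
S0, N, workers = 90.0, 1_000_000, 4
dt = T / N_T
boundary = np.full(N_T, 130.0)      # placeholder exercise boundary per opportunity date
boundary[-1] = K                    # at maturity the boundary is the strike

def partial_price(n_paths):
    """Worker task: price n_paths Bermudan max-call paths against the boundary."""
    rng = np.random.default_rng()
    S = np.full((n_paths, d), S0)
    payoff = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for m in range(N_T):
        z = rng.standard_normal((n_paths, d))
        S *= np.exp((r - delta - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        exercise = alive & (S.max(axis=1) >= boundary[m])
        payoff[exercise] = np.exp(-r * dt * (m + 1)) * (S[exercise].max(axis=1) - K)
        alive &= ~exercise
    return payoff.mean()

if __name__ == "__main__":
    with Pool(workers) as pool:                          # N paths split among workers
        parts = pool.map(partial_price, [N // workers] * workers)
    print("estimated price:", np.mean(parts))            # master merges partial prices
```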
Numerical results and performance {#subsection_ibanezzapatero_numericalresults}
---------------------------------
In this section we present performance and accuracy results for the parallel I&Z Algorithm described in Algorithm \[fig:flowfigure\_OEB\]. We price a basket BA call option on the maximum of 3 assets as given in [(\[ibanez\])]{}. The starting prices of the assets are set to 90, 100, and 110. The prices estimated by the algorithm are presented in Table \[tab:IbanezZapa\]. To validate our results we compare the estimated prices with the prices reported in Andersen and Broadie [@andersen2004pds]. Their results are reproduced in the column labeled *“Binomial”*. The last column of the table indicates the errors in the estimated prices. As we can see, the estimated option prices are close to the reference prices within an acceptable margin of error, and the corresponding 95% confidence intervals are also reported.
As mentioned earlier, the algorithm relies on $J$ GLPs to effectively compute the optimal boundary points, and the parameterized boundary is then regressed over these points. For the BA option on the maximum described in [(\[ibanez\])]{}, Muni Toke [@toke2006mcv] notes that a value of $J$ smaller than 128 is not sufficient and biases the option price. To observe the effect of the number of optimal boundary points on the accuracy of the estimated price, the number of GLPs is varied as shown in Table \[tab:impactOfJ\]. For this experiment, we set the starting price of the option as $S_0 = 90$. The table indicates that increasing the number of GLPs has a negligible impact on the accuracy of the estimated price. However, we observe a linear increase in the computational time as the number of GLPs increases.
$S^{i}_0$ Option Price Variance (95% CI) Binomial Error
----------- -------------- ------------------- ---------- -------
90 11.254 153.857 (0.024) 11.29 0.036
100 18.378 192.540 (0.031) 18.69 0.312
110 27.512 226.332 (0.035) 27.58 0.068
: Price of the call BA on the maximum of three assets ($d=3$, with the spot price $S^{i}_0$ for $i= 1,..,3$) using the I&Z Algorithm ($r=0.05$, $\delta=0.1$, $\sigma=0.2$, $\rho=0.0$, $T=3$ and $N_T=9$). The binomial values are taken as the true values.[]{data-label="tab:IbanezZapa"}
*J* Price Time (in minute) Error
------ -------- ------------------ ------- --
128 11.254 4.6 0.036
256 11.258 8.1 0.032
1024 11.263 29.5 0.027
: Impact of the value of *J* on the results for the option on the maximum of three assets ($S_0 = 90$). The binomial price is 11.29. Running time on 16 processors.[]{data-label="tab:impactOfJ"}
To evaluate the accuracy of the prices computed by the parallel algorithm, we obtained the numerical results with 16 processors. First, let us observe the effect of $N_1$, the number of simulations required in the first phase of the algorithm, on the computed option price. In [@toke2006mcv], the author comments that large values of $N_1$ do not affect the accuracy of the option price. For these experiments, we set the number of GLPs, $J$, to 128 and vary $N_1$ as shown in Table \[tab:impactOfN1\]. We can clearly observe that $N_1$ in fact has a strong impact on the accuracy of the computed option prices: the computational error decreases as $N_1$ increases. A large value of $N_1$ results in more accurate boundary points, and hence a more accurate exercise boundary. Further, if the exercise boundary is accurately computed, the resulting option prices are much closer to the true price. However, as we can see in the third column, this comes at the cost of increased computational time.
$N_1$ Price Time (in minute) Error
-------- -------- ------------------ ------- --
5000 11.254 4.6 0.036
10000 11.262 6.9 0.028
100000 11.276 35.7 0.014
: Impact of the value of *$N_1$* on the results for the option on the maximum of three assets ($S_0=90$). The binomial price is 11.29. Running time on 16 processors.[]{data-label="tab:impactOfN1"}
The I&Z Algorithm relies heavily on the accuracy and the convergence rate of the optimal boundary points. While the former affects the accuracy of the option price, the latter affects the speedup of the algorithm. To converge to an optimal boundary point, the algorithm starts from an arbitrary point, often with the strike price $K$ as its initial value. The algorithm then uses $N_1$ random MC paths to simulate an approximated point, and a convergence criterion is used to decide whether this approximated point is acceptable. The simulated point is assumed to be optimal when it satisfies the following condition: $\vert S_{t_n}^{i,(simulated)} - S_{t_n}^{i,(initial)} \vert < \epsilon = 0.01$, where $S_{t_n}^{i,(initial)}$ is the initial point at a given opportunity $t_n$, $i=1..J$, and $S_{t_n}^{i,(simulated)}$ is the point simulated using $N_1$ MC simulations. If the condition is not satisfied, the procedure is repeated with the newly simulated point $S_{t_n}^{i,(simulated)}$ as the new initial point. Note that the number of iterations $n$ required to reach the optimal value varies depending on the fixed precision of the Newton procedure (for instance, with a precision $\epsilon = 0.01$, $n$ varies from 30 to 60). We observed that not all boundary points take the same time to converge: some converge quickly to the optimal boundary points while others take considerably longer. Since the algorithm has to wait until all the points are optimized, the slower points increase the computational time and thus reduce the efficiency of the parallel algorithm; see Figure \[fig:speedupIB\].
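A minimal sketch of this convergence loop (hypothetical Python; the $N_1$-path Monte Carlo estimate is replaced by the stub `simulate_point`, which simply contracts toward an arbitrary target so that the loop terminates) could look as follows.

```python
def simulate_point(current, N_1=5000):
    """Stub for the N_1-path Monte Carlo estimate of the boundary point; a simple
    contraction toward an arbitrary target mimics the convergence behaviour."""
    target = 130.0
    return current + 0.1 * (target - current)

def converge_boundary_point(K=100.0, eps=0.01, max_iter=200):
    point = K                        # the strike is used as the initial value
    for n in range(1, max_iter + 1):
        new_point = simulate_point(point)
        if abs(new_point - point) < eps:
            return new_point, n      # optimal point and number of iterations used
        point = new_point            # restart from the newly simulated point
    return point, max_iter

point, iters = converge_boundary_point()
print(f"converged to {point:.2f} after {iters} iterations")
```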
The Classification and Monte Carlo algorithm {#sectionpicazo}
============================================
Introduction {#subsectionpicazointro}
------------
The Monte Carlo approaches for BA option pricing are usually based on continuation value computation [@longstaff:vao] or continuation region estimation [@hickernell2002mca], [@ibanez2004mcv]. The option holder decides either to exercise or to continue with the current option contract based on the computed asset value. If the asset value is in the exercise region, he exercises the option; otherwise he continues to hold it. Note that the asset values which belong to the exercise region form the exercise values, and the rest belong to the continuation region. In [@hickernell2002mca] Picazo et al. propose an algorithm based on the observation that at a given exercise opportunity the option holder makes his decision based on whether the sign of $(exercise\ value - continuation\ value)$ is positive or negative. That work focuses on estimating the continuation region and the exercise region by characterizing the exercise boundary based on these signs. A classification algorithm is used to evaluate such sign values at each opportunity. In this section we briefly describe the sequential algorithm of [@hickernell2002mca] and propose a parallel approach, followed by performance benchmarks.
Sequential algorithm {#subsectionpicazosequential}
--------------------
For illustration let us consider a BA option on $d$ underlying assets modeled by geometric Brownian motion (GBM), $S_t = (S_t^i)$ with $i=1,..,d$. The option price at time $t_0$ is defined as follows:
$P_{t_0}(S_{t_0}) = {\mathbb E}\left (\exp{(-r\tau)} \Phi(S_{\tau},\tau) | S_{t_0} \right ) $
where $\tau$ is the optimal stopping time $\in \lbrace t_1,..,T
\rbrace $, $T$ is the maturity date, $r$ is the constant interest rate and $\Phi(S_{\tau},\tau)$ is the payoff value at time $\tau$. In the case of the I&Z Algorithm, the optimal stopping time is reached when the underlying asset value crosses the exercise boundary. The CMC Algorithm defines the stopping time as the first date at which the underlying asset value makes the sign of $(exercise\ value - continuation\ value)$ positive. Without loss of generality, at a given time $t$ the BA option price over the period $[t,T]$ is given by:
$P_{t}(S_t) = {\mathbb E}\left (\exp{(-r(\tau - t))} \Phi(S_{\tau},\tau) | S_{t} \right ) $
where $\tau$ is the optimal stopping time, taking values in the exercise dates in $[t, T]$. Let us define the difference between the payoff value and the option price at time $t_m$ as
$\beta(t_m, S_{t_m}) = \Phi(S_{t_m},t_m) - P_{t_m}(S_{t_m}) $
where $m \in \lbrace 1,..,N_T \rbrace$. The option is exercised when $S_{t_m} \in \lbrace x \,|\, \beta(t_m, x) > 0 \rbrace$, which is the exercise region (here $x$ is the simulated underlying asset value); otherwise the option is held. The goal of the algorithm is to determine the function $\beta(\cdot)$ at every opportunity date. However, we do not need to fully parameterize this function. It is enough to find a function $F_t(\cdot)$ such that $\text{sign}\, F_t(\cdot) = \text{sign}\, \beta(t, \cdot)$.
The algorithm consists of two phases. In the first phase, it aims to find a function $F_t(\cdot)$ having the same sign as the function $\beta(t, \cdot)$. The AdaBoost or LogitBoost algorithm is used to characterize these functions. In the second phase the option is priced by a standard MC simulation, taking advantage of the characterization of $F_{t_m}(\cdot)$, so for the $(i)$-th MC simulation we get the optimal stopping time $\tau_{(i)} = \min \lbrace t_m \in \lbrace t_1,t_2,...,T \rbrace \,|\, F_{t_m}(S^{(i)}_{t_m}) > 0 \rbrace$. Here $(i)$ indexes the simulated path, not the assets.
Now, consider a call BA option on the maximum of $d$ underlying assets where the payoff at time $\tau$ is defined as $\Phi(S_{\tau},\tau) = (\max_i(S^i_{\tau}) - K)^+$ with $i=1,..,d$. During the first phase of the algorithm, at a given opportunity date $t_m$ with $m \in {1,...,N_T}$, $N_1$ underlying price vectors of size $d$ are simulated. The simulations are performed recursively backward from $m = N_T$ to $m = 1$. From each price point, another $N_2$ paths are simulated from the given opportunity date to the maturity date to compute the “small” BA option price at this opportunity (i.e. $P_{t_m}(S_{t_m})$). At this step, $N_1$ option prices related to the opportunity date are computed. The time step complexity of this step is ${\mathbb O}(N_1 \times d \times m \times N_2 \times (N_T - m))$. In the classification phase, we use a training set of $N_1$ underlying price points and their corresponding option prices at a given opportunity date. In this step, a non–parametric regression is done on the $N_1$ points to characterize the exercise boundary. This first phase is repeated for each opportunity date. In the second phase, the option value is computed by simulating a large number, $N$, of standard MC simulations with $N_T$ exercise opportunities. The complexity of this phase is ${\mathbb O}(d \times N_T \times N)$. Thus, the total time steps required for the algorithm can be given by the following formula,
\[equa-complexity-picazo\] ${\mathbb O}\left ( \sum_{m=1}^{N_T} N_1 \times d \times m \times N_2 \times (N_T - m) + N_T \times classification(N_1) + d \times N_T \times N \right )$\
$\approx {\mathbb O}\left ( N_T^2 \times N_1 \times d \times N_2 + N_T \times ( N_1 + d \times N ) \right )$
where ${\mathbb O}(classification(\cdot))$ is the complexity of the classification phase, the details of which can be found in [@hickernell2002mca]. For the simulations, we use the same option parameters as described in [(\[ibanez\])]{}, taken from [@ibanez2004mcv], and the parameters for the classification can be found in [@hickernell2002mca]. $$\begin{aligned}
\label{picazo}
\begin{array}{l}
\mbox {$K=100$, \textit{interest rate r = 0.05}, \textit{volatility rate $\sigma$ = 0.2},}\\
\mbox {\textit{dividend $\delta$ = 0.1}, $N_1=5e3$, $N_2=500$, $N=1e6$, $d=3$ }\\
\mbox {$N_T = 9$ and $T = 3$ years.}
\end{array}\end{aligned}$$ Each of the $N_1$ points of the training set acts as a seed from which $N_2$ simulation paths are generated. From the exercise opportunity $m$ backward to $m-1$, a Brownian motion bridge is used to simulate the price of the underlying asset. The time distribution of each phase of the sequential algorithm for pricing the option [(\[picazo\])]{} is shown in Figure \[fig:timeDistributionfigure\_PI\]. As we can see from the figure, the most computationally intensive part is the data generation phase, which computes the option prices required for the classification. In the following section we present a parallel approach for this and the remaining phases of the algorithm.
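To make the data-generation phase concrete, the following is a minimal single-date sketch (a hypothetical Python illustration under GBM; the Brownian-bridge construction mentioned above is omitted, and the discounted European payoff at maturity stands in for the “small” BA price): $N_1$ price vectors are drawn at $t_m$, each seeds $N_2$ inner paths to maturity to estimate the continuation value, and the training label is the sign of $(exercise\ value - continuation\ value)$.

```python
import numpy as np

r, sigma, delta, T, N_T, d, K, S0 = 0.05, 0.2, 0.1, 3.0, 9, 3, 100.0, 90.0
N_1, N_2 = 5000, 500
dt = T / N_T
rng = np.random.default_rng(0)

def gbm_step(S, n_steps):
    """Advance GBM price vectors by n_steps time steps of size dt."""
    for _ in range(n_steps):
        z = rng.standard_normal(S.shape)
        S = S * np.exp((r - delta - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    return S

m = 5                                             # a fixed exercise date t_m
S_m = gbm_step(np.full((N_1, d), S0), m)          # N_1 training points at t_m
exercise_value = np.maximum(S_m.max(axis=1) - K, 0.0)

# Nested simulation: N_2 inner paths per training point; the discounted European
# payoff at maturity stands in for the "small" BA price P_{t_m}(S_{t_m}).
inner = gbm_step(np.repeat(S_m, N_2, axis=0), N_T - m).reshape(N_1, N_2, d)
continuation_value = np.exp(-r * dt * (N_T - m)) * \
    np.maximum(inner.max(axis=2) - K, 0.0).mean(axis=1)

X = S_m                                                       # classifier features
y = np.where(exercise_value - continuation_value > 0, 1, -1)  # sign labels
```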
Parallel approach {#subsectionpicazoparallel}
-----------------
Algorithm \[fig:flowfigure\_CMC\] illustrates the parallel approach based on the CMC Algorithm. At $t_m = T$ we generate $N_1$ points of the price of the underlying assets, $S_{t_m}^{(i)}$, $i=1,..,N_1$, and then apply the Brownian bridge simulation process to get the prices at the earlier date $t_{m-1}$. For simplicity we assume a master–worker programming model for the parallel implementation: the master is responsible for allocating independent tasks to workers and for collecting the results. The master divides the $N_1$ simulations into $nb$ tasks and then distributes them among a number of workers. Thus each worker has $N_1/nb$ points to simulate in the **\[calc\]** phase. Each worker further simulates $N_2$ paths for each point from $t_m$ to $t_{N_T}$, starting at $S_{t_m}^{(i)}$, to compute the option price related to the opportunity date. Next the worker calculates the value $y_j = (exercise\ value - continuation\ value)$, $j=1,..,N_1/nb$. The master collects the $y_j$ of these $nb$ tasks from the workers and then classifies them in order to return the characterization model of the associated exercise boundary in the **\[class\]** phase.
**\[calc\]** Computation of training points. **\[class\]** Classification using boosting. **\[mc\]** The partial option price computation. Estimation of the final option price by merging the partial prices. \[fig:flowfigure\_CMC\]
For the classification phase, the master performs a non-parametric regression on the set $(x_{(i)},y_{(i)})^{N_1}_{i=1}$, where $x_{(i)} = S_{t_m}^{(i)}$, to get the function $F_{t_m}(x)$ described above in Section [(\[subsectionpicazosequential\])]{}. The algorithm recursively repeats the same procedure for the earlier opportunity dates $m-1,\ldots,1$. As a result we obtain the characterization of the boundaries, $F_{t_m}(x)$, at every opportunity $t_m$. Using these boundaries, a standard MC simulation, **\[mc\]**, is used to estimate the option price. The MC simulations are distributed among the workers such that each worker has the entire boundary characterization $(F_{t_m}(x), m=1,..,N_T)$ to compute a partial option price. The master later merges the partially computed prices and estimates the final option price.
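A minimal master–worker sketch of this phase (hypothetical Python; scikit-learn's `AdaBoostClassifier` serves here only as a stand-in for the AdaBoost/LogitBoost classifier, and the continuation values are stubbed): the master splits the $N_1$ training points into $nb$ tasks, the workers return the labels $y_j$, and the master fits the classifier whose sign characterizes the exercise boundary at that date.

```python
from multiprocessing import Pool
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

N_1, nb, d, K = 5000, 4, 3, 100.0

def label_chunk(S_chunk):
    """Worker task: for each training point, compute the exercise value and a
    (stubbed) continuation value, and return the sign labels y_j."""
    exercise = np.maximum(S_chunk.max(axis=1) - K, 0.0)
    continuation = np.full(len(S_chunk), 12.0)     # placeholder continuation values
    return np.where(exercise - continuation > 0, 1, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    S_m = rng.uniform(60.0, 160.0, size=(N_1, d))  # training points at date t_m
    chunks = np.array_split(S_m, nb)               # master splits N_1 points into nb tasks
    with Pool(processes=nb) as pool:               # workers label their chunks
        labels = np.concatenate(pool.map(label_chunk, chunks))
    clf = AdaBoostClassifier(n_estimators=100).fit(S_m, labels)   # master classifies
    # In the pricing phase, clf.predict(x) plays the role of sign F_{t_m}(x).
```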
Numerical results and performance {#subsection_picazo_performance}
---------------------------------
In this section we present the numerical and performance results of the parallel CMC Algorithm. We focus on the standard example of a call option on the maximum of 3 assets as given in [(\[picazo\])]{}. As can be seen in Table \[tab:cmcmax\], the estimated prices agree with the reference prices of Andersen and Broadie [@andersen2004pds], which are reported in the *“Binomial”* column. For pricing this option, the sequential execution takes up to 30 minutes, and the time distribution of the different phases can be seen in Figure \[fig:timeDistributionfigure\_PI\]. The speedup achieved by the parallel algorithm is presented in Figure \[fig:speedupPicazo\]. We can observe from the figure that the parallel algorithm achieves linear scalability with a small number of processors. The different phases of the algorithm scale differently. The MC phase, being embarrassingly parallel, scales linearly, while the number of processors has no impact on the scalability of the classification phase. The classification phase is sequential and takes a constant amount of time for a given option. This limits the overall speedup of the algorithm, as shown in Figure \[fig:speedupPicazo\].
$S_0$ Price Variance (95% CI) Binomial Error
------- -------- ------------------- ---------- -------
90 11.295 190.786 (0.027) 11.290 0.005
100 18.706 286.679 (0.033) 18.690 0.016
110 27.604 378.713 (0.038) 27.580 0.024
: Price of the call BA on the maximum of three assets using the CMC Algorithm ($r=0.05$, $\delta=0.1$, $\sigma=0.2$, $\rho=0.0$, $T=3$, $N_T=9$ opportunities).[]{data-label="tab:cmcmax"}
Conclusion {#sectionconclusion}
==========
The aim of this study is to develop and implement parallel Monte Carlo based Bermudan/American option pricing algorithms. In this paper, we particularly focused on multi–dimensional options. We evaluated the scalability of the proposed parallel algorithms in a computational grid environment, and we analyzed the performance and the accuracy of both algorithms. While the I&Z Algorithm computes the exact exercise boundary, the CMC Algorithm estimates a characterization of the boundary. The results obtained clearly indicate that the scalability of the I&Z Algorithm is limited by the boundary-point computation. Table \[tab:impactOfJ\] showed that there is no effective advantage in increasing the number of such points at will simply to take advantage of a greater number of available CPUs. Moreover, the time required for computing individual boundary points varies, and the points with slower convergence rates often hold back the performance of the algorithm. In the case of the CMC Algorithm, on the other hand, the sequential classification phase tends to dominate the total parallel computational time. Nevertheless, the CMC Algorithm can be used for pricing different option types such as maximum, minimum or geometric average basket options using a generic classification configuration, whereas the optimal exercise boundary structure in the I&Z Algorithm needs to be tailored to the option type. Parallelizing the classification phase presents several challenges due to its dependence on an inherently sequential non–parametric regression. Hence, we direct our future research toward efficient parallel algorithms for computing the exercise boundary points, in the case of the I&Z Algorithm, and for the classification phase, in the case of the CMC Algorithm.
Acknowledgments
===============
This research is supported by the French “ANR-CIGC GCPMF” project and Grid5000 has been funded by ACI-GRID.
[10]{}
.
L. Andersen and M. Broadie. . , 50(9):1222–1234, 2004.
P.P. Boyle, A. Kolkiewicz, and K.S. Tan. . .
M. Broadie and J. Detemple. . , 7(3):241–286, 1997.
J.C. Cox, S.A. Ross, and M. Rubinstein. . , 7(3):229–263, 1979.
P. Glasserman. . Springer, 2004.
S. Haber. . , 41(163):115–129, 1983.
F.J. Hickernell, H. Niederreiter, and K. Fang. . Springer, 2002.
K. Huang and R.K. Thulasiram. . , pages 177–185, 2005.
A. Ibanez and F. Zapatero. . , 39(2):239–273, 2004.
P. L’Ecuyer, L. Meliani, and J. Vaucher. . , pages 234–242, 2002.
F.A. Longstaff and E.S. Schwartz. . , 2001.
G. Pag[è]{}s and J. Printems. . , 9(2):135–165, 2003.
I.H. Sloan and S. Joe. . Oxford University Press, 1994.
R.K. Thulasiram and D.A. Bondarenko. . , 1530, 2002.
I.M. Toke. . , Volume 3743:page 462, 2006.
---
abstract: 'We prove that the Hodge metric completion of the Teichmüller space of polarized and marked Calabi–Yau manifolds is a complex affine manifold. We also show that the extended period map from the completion space is injective into the period domain, and that the completion space is a bounded domain of holomorphy and admits a complete Kähler–Einstein metric.'
address:
- 'Department of Mathematics,University of California at Los Angeles, Los Angeles, CA 90095-1555, USA'
- 'Department of Mathematics,University of California at Los Angeles, Los Angeles, CA 90095-1555, USA'
- 'Center of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang 310027, China; Department of Mathematics,University of California at Los Angeles, Los Angeles, CA 90095-1555, USA'
author:
- Xiaojing Chen
- Feng Guan
- Kefeng Liu
title: 'Hodge metric completion of the Teichmüller space of Calabi–Yau manifolds'
---
Introduction
============
A compact projective manifold $M$ of complex dimension $n$ with $n\geq 3$ is called Calabi–Yau in this paper, if it has a trivial canonical bundle and satisfies $H^i(M,\mathcal{O}_M)=0$ for $0<i<n$. A polarized and marked Calabi–Yau manifold is a triple consisting of a Calabi–Yau manifold $M$, an ample line bundle $L$ over $M$, and a basis of the integral middle homology group modulo torsion, $H_n(M,\mathbb{Z})/\text{Tor}$.
We will denote by $\mathcal{T}$ the Teichmüller space for the deformation of the complex structure on the polarized and marked Calabi–Yau manifold $M$. It is the moduli space of equivalence classes of marked and polarized Calabi-Yau manifolds. Actually the Teichmüller space is precisely the universal cover of the smooth moduli space $\mathcal{Z}_m$ of polarized Calabi–Yau manifolds with level $m$ structure with $m\geq 3$, which is constructed by Popp, Viehweg, and Szendröi, for example in Section 2 of [@sz]. The versal family $\mathcal{U}\rightarrow\mathcal{T}$ of the polarized and marked Calabi–Yau manifolds is the pull-back of the versal family $\mathcal{X}_{\mathcal{Z}_m}\rightarrow \mathcal{Z}_m$ introduced also in [@sz]. Therefore $\mathcal{T}$ is a connected and simply connected smooth complex manifold with $\dim_{{\mathbb{C}}}{\mathcal{T}}=h^{n-1,1}(M)=N,$ where $h^{n-1,1}(M)=\dim_{\mathbb{C}} H^{n-1,1}(M)$.
Let $D$ be the period domain of polarized Hodge structures of the $n$-th primitive cohomology of $M$. The period map $\Phi:\,\mathcal{T}\rightarrow D$ is defined by assigning to each point in $\mathcal{T}$ the Hodge structure of the corresponding fiber. Let us denote the period map on the smooth moduli space by $\Phi_{\mathcal{Z}_m}:\, \mathcal{Z}_{m}\rightarrow D/\Gamma$, where $\Gamma$ denotes the global monodromy group which acts properly and discontinuously on $D$. Then $\Phi: \,\mathcal{T}\to D$ is the lifting of $\Phi_{\mathcal{Z}_m}\circ \pi_m$, where $\pi_m: \,\mathcal{T}\rightarrow \mathcal{Z}_m$ is the universal covering map. There is the Hodge metric $h$ on $D$, which is a complete homogeneous metric and is studied in [@GS]. By local Torelli theorem of Calabi–Yau manifolds, both $\Phi_{\mathcal{Z}_m}$ and $\Phi$ are locally injective. Thus the pull-backs of $h$ on $\mathcal{Z}_m$ and $\mathcal{T}$ are both well-defined Kähler metrics, and they are still called the Hodge metrics.
In this paper, one of our essential constructions is the global holomorphic affine structure on the Teichmüller space, which can be outlined as follows: fix a base point $p\in \mathcal{T}$ with its Hodge structure $\{H^{k,n-k}_p\}_{k=0}^n$ as the reference Hodge structure. With this fixed base point $\Phi(p)\in D$, we identify the unipotent subgroup $N_+$ with its orbit in $\check{D}$ (see Section 3.1 and Remark \[N+inD\]) and define $\check{\mathcal{T}}=\Phi^{-1}(N_+)\subseteq \mathcal{T}$. We first show that $\Phi: \,\check{\mathcal{T}}\rightarrow N_+\cap D$ is a bounded map with respect to the Euclidean metric on $N_+$, and that $\mathcal{T}\backslash\check{\mathcal{T}}$ is an analytic subvariety. Then by applying Riemann extension theorem, we conclude that $\Phi(\mathcal{T})\subseteq N_+\cap D$. Using this property, we then show $\Phi$ induces a global holomorphic map $\tau: \,{\mathcal{T}}\rightarrow \mathbb{C}^N$, which actually gives a local coordinate map around each point in $\mathcal{T}$ by using local Torelli theorem for Calabi–Yau manifolds. Thus $\tau: \,\mathcal{T}\rightarrow \mathbb{C}^N$ induces a global holomorphic affine structure on $\mathcal{T}$. It is not hard to see that $\tau=P\circ \Phi:\,\mathcal{T}\rightarrow \mathbb{C}^N$ is a composition map with $P: \,N_+\rightarrow \mathbb{C}^N\simeq H^{n-1,1}_p$ a natural projection map into a subspace, where $N_+\simeq {\mathbb C}^d$ with the fixed base point $p\in \mathcal{T}$.
Let $\mathcal{Z}^H_{_m}$ be the Hodge metric completion of the smooth moduli space $\mathcal{Z}_m$ and let $\mathcal{T}^H_{_m}$ be the universal cover of $\mathcal{Z}^H_{_m}$ with the universal covering map $\pi_{_m}^H:\,\mathcal{T}^H_{_m}\rightarrow \mathcal{Z}^H_{_m}$. Lemma \[cidim\] shows that $\mathcal{Z}^H_{_m}$ is a connected and complete smooth complex manifold, and thus $\mathcal{T}^H_{_m}$ is a connected and simply connected complete smooth complex manifold. We also obtain the following commutative diagram: $$\begin{aligned}
\xymatrix{\mathcal{T}\ar[r]^{i_{m}}\ar[d]^{\pi_m}&\mathcal{T}^H_{_m}\ar[d]^{\pi_{_m}^H}\ar[r]^{{\Phi}^{H}_{_m}}&D\ar[d]^{\pi_D}\\
\mathcal{Z}_m\ar[r]^{i}&\mathcal{Z}^H_{_m}\ar[r]^{{\Phi}_{_{\mathcal{Z}_m}}^H}&D/\Gamma,
}\end{aligned}$$ where $\Phi^H_{_{\mathcal{Z}_m}}$ is the continuous extension map of the period map $\Phi_{\mathcal{Z}_m}:\,\mathcal{Z}_m\rightarrow D/\Gamma$, $i$ is the inclusion map, $i_m$ is a lifting of $i\circ \pi_{m}$, and $\Phi^H_{_m}$ is a lifting of $\Phi^H_{_{\mathcal{Z}_m}}\circ \pi_{_m}^H$. We prove in Lemma \[choice\] that there is a suitable choice of $i_m$ and $\Phi^H_{_m}$ such that $\Phi=\Phi^H_{_m}\circ i_m$. It is not hard to see that $\Phi^H_{_m}$ is actually a bounded holomorphic map from $\mathcal{T}^H_{_m}$ to $N_+\cap D$.
\[defnofTH\] For any $m\geq 3$, the complete complex manifold $\mathcal{T}^H_{_m}$ is a complex affine manifold, which is a bounded domain in $\mathbb{C}^N$. Moreover, the holomorphic map $\Phi_{_m}^H:\, \mathcal{T}^H_{_m}\rightarrow N_+\cap D$ is an injection. As a consequence, the complex manifolds $\mathcal{T}^H_{_m}$ and $\mathcal{T}^H_{{m'}}$ are biholomorphic to each other for any $m, m'\geq 3$.
This proposition allows us to define the complete complex manifold $\mathcal{T}^H$ with respect to the Hodge metric by $\mathcal{T}^H=\mathcal{T}^H_{_m}$, the holomorphic map $i_{\mathcal{T}}: \,\mathcal{T}\to \mathcal{T}^H$ by $i_{\mathcal{T}}=i_m$, and the extended period map $\Phi^H:\, \mathcal{T}^H\rightarrow D$ by $\Phi^H=\Phi^H_{_m}$ for any $m\geq 3$. By these definitions, Proposition \[defnofTH\] implies that $\mathcal{T}^H$ is a complex affine manifold and that $\Phi^H:\,\mathcal{T}^H\to N_+\cap D$ is a holomorphic injection. The main result of this paper is the following.
\[introcor2\] The complete complex affine manifold $\mathcal{T}^H$ is the completion space of $\mathcal{T}$ with respect to the Hodge metric, and it is a bounded domain in $\mathbb{C}^N$. Moreover, the extended period map $\Phi^H:\, \mathcal{T}^H\rightarrow N_+\cap D$ is a holomorphic injection.
Here we remark that one technical difficulty of our arguments is to prove that $\mathcal{T}^H$ is the smooth Hodge metric completion space of $\mathcal{T}$, which is to prove that $\mathcal{T}^H$ is smooth and that the map $i_{\mathcal{T}}: \,\mathcal{T}\to\mathcal{T}^H$ is an embedding.
To show the smoothness of $\mathcal{T}^H$, we need to show that the definition $\mathcal{T}^H=\mathcal{T}^H_{_m}$ does not depend on the choice of the level $m$. Therefore, we first have to introduce the smooth complete manifold $\mathcal{T}^H_{_m}$. Then, using the fact that the image of the period map in $D$ is independent of the level structure and that the metric completion space is unique, this independence of the level structure can be reduced to showing that $\Phi^H_{_m}:\,\mathcal{T}^H_{_m}\rightarrow N_+\cap D$ is injective. To achieve this, we first define the map $\tau^H_{_m}:\,\mathcal{T}^H_{_m}\to \mathbb{C}^N$ by composing the map $\Phi^H_{_m}$ with the projection $P$ as above. Then, crucially using the property that the global coordinate map $\tau:\,\mathcal{T}\rightarrow \mathbb{C}^N$ is a local embedding, we show that $\tau^H_{_m}$ is a local embedding as well. Thus we obtain that there exists a holomorphic affine structure on $\mathcal{T}^H_{_m}$ such that $\tau^H_{_m}$ is a holomorphic affine map. Finally, the affineness of $\tau^H_{_m}$ and the completeness of $\mathcal{T}^H_{_m}$ give the injectivity of $\tau^H_{_m}$, which implies the injectivity of $\Phi^H_{_m}$.
To overcome the difficulty of showing that $i_{\mathcal{T}}: \,\mathcal{T}\to\mathcal{T}^H$ is an embedding, we first note that $\mathcal{T}_0:=\mathcal{T}_{m}=i_m(\mathcal{T})$ is also well-defined. Then it is not hard to show that $i_{\mathcal{T}}: \,\mathcal{T}\to\mathcal{T}_0$ is a covering map. Moreover, we prove that $i_{\mathcal{T}}: \,\mathcal{T}\to \mathcal{T}_0$ is actually one-to-one by showing that the fundamental group of $\mathcal{T}_0$ is trivial. Here the markings of the Calabi–Yau manifolds and the simple connectedness of $\mathcal{T}$ come into play substantially. It would substantially simplify our arguments if one could directly prove that the Hodge metric completion space of $\mathcal{T}$ is smooth without using $\mathcal{T}_m$ and ${\mathcal{T}}^H_m$.
As a direct corollary of this theorem, we easily deduce that the period map $\Phi=\Phi^H\circ i_{\mathcal{T}}:\,\mathcal{T}\to D$ is also injective since it is a composition of two injective maps. This is the global Torelli theorem for the period map from the Teichmüller space to the period domain. In the case that the period domain $D$ is Hermitian symmetric and that it has the same dimension as $\mathcal{T}$, the above theorem implies that the extended period map $\Phi^H$ is biholomorphic, in particular it is surjective. Moreover, we prove another important result as follows.
The completion space $\mathcal{T}^H$ is a bounded domain of holomorphy in $\mathbb{C}^N$; thus there exists a complete Kähler–Einstein metric on $\mathcal{T}^H$.
To prove this theorem, we construct a plurisubharmonic exhaustion function on $\mathcal{T}^H$ by using Proposition \[general f\], the completeness of $\mathcal{T}^H$, and the injectivity of $\Phi^H$. This shows that $\mathcal{T}^H$ is a bounded domain of holomorphy in $\mathbb{C}^N$. The existence of the Kähler-Einstein metric follows directly from a theorem of Mok and Yau in [@MokYau].

This paper is organized as follows. In Section \[TandPhi\], we review the definition of the period domain of polarized Hodge structures and briefly describe the construction of the Teichmüller space of polarized and marked Calabi–Yau manifolds, the definition of the period map and the Hodge metrics on the moduli space and the Teichmüller space respectively. In Section \[AffineonT\], we show that the image of the period map is in $N_+\cap D$ and we construct a holomorphic affine structure on the Teichmüller space. In Section \[THm\], we prove that there exists a global holomorphic affine structure on $\mathcal{T}^H_{_m}$ and that the map $\Phi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow D$ is an injective map. In Section \[completionspace\], we define the completion space $\mathcal{T}^H$ and the extended period map $\Phi^H$. We then show our main result that $\mathcal{T}^H$ is the Hodge metric completion space of $\mathcal{T}$, which is also a complex affine manifold, and that $\Phi^H$ is a holomorphic injection which extends the period map $\Phi: \,\mathcal{T}\rightarrow D.$ Therefore, the global Torelli theorem for Calabi–Yau manifolds on the Teichmüller space follows directly. Finally, we prove that $\mathcal{T}^H$ is a bounded domain of holomorphy in $\mathbb{C}^N$, and thus it admits a complete Kähler–Einstein metric. In Appendix \[topological lemmas\], we include two topological lemmas: one shows the existence of choices of $\Phi^H_{_m}$ and $i_m$ satisfying $\Phi=\Phi^H_{_m}\circ i_{m}$, and the other relates the fundamental group of the moduli space $\mathcal{Z}_m$ to that of its completion space $\mathcal{Z}^H_{_m}$.

**Acknowledgement**: We would like to thank Professors Robert Friedman, Mark Green, Phillip Griffiths, Si Li, Bong Lian, Eduard Looijenga, Wilfried Schmid, Andrey Todorov, Veeravalli Varadarajan and Shing-Tung Yau for sharing many of their ideas.
Teichmüller space and the period map of Calabi–Yau manifolds {#TandPhi}
=============================================================
In Section \[section period domain\], we recall the definition and some basic properties of the period domain. In Section \[section construction of Tei\], we discuss the construction of the Teichmüller space of Calabi–Yau manifolds based on the works of Popp [@Popp], Viehweg [@v1] and Szendröi [@sz] on the moduli spaces of Calabi–Yau manifolds. In Section \[section period map\], we define the period map from the Teichmüller space to the period domain. We remark that most of the results in this section are standard and can be found in the literature on the subject.
Period domain of polarized Hodge structures {#section period
domain}
-------------------------------------------
We first review the construction of the period domain of polarized Hodge structures. We refer the reader to $\S3$ in [@schmid1] for more details.
A pair $(M,L)$ consisting of a Calabi–Yau manifold $M$ of complex dimension $n$ with $n\geq 3$ and an ample line bundle $L$ over $M$ is called a [polarized Calabi–Yau manifold]{}. By abuse of notation, the Chern class of $L$ will also be denoted by $L$ and thus $L\in H^2(M,\mathbb{Z})$. Let $\{ \gamma_1,\cdots,\gamma_{h^n}\}$ be a basis of the integral homology group modulo torsion, $H_n(M,\mathbb{Z})/\text{Tor}$ with $\dim H_n(M,\mathbb{Z})/\text{Tor}=h^n$.
Let the pair $(M,L)$ be a polarized Calabi–Yau manifold. We call the triple $(M,L,\{\gamma_1,\cdots,\gamma_{h^n}\})$ a polarized and marked Calabi–Yau manifold.
For a polarized and marked Calabi–Yau manifold $M$ with background smooth manifold $X$, we identify the basis of $H_n(M,\mathbb{Z})/\text{Tor}$ to a lattice $\Lambda$ as in [@sz]. This gives us a canonical identification of the middle dimensional de Rham cohomology of $M$ to that of the background manifold $X$, that is, $$H^n(M)\cong H^n(X),$$ where the coefficient ring can be ${\mathbb{Q}}$, ${\mathbb{R}}$ or ${\mathbb{C}} $. Since the polarization $L$ is an integer class, it defines a map $$L:\, H^n(X,{\mathbb{Q}})\to H^{n+2}(X,{\mathbb{Q}}),\quad\quad A\mapsto L\wedge A.$$ We denote by $H_{pr}^n(X)=\ker(L)$ the primitive cohomology groups, where the coefficient ring can also be ${\mathbb{Q}}$, ${\mathbb{R}}$ or ${\mathbb{C}}$. We let $H_{pr}^{k,n-k}(M)=H^{k,n-k}(M)\cap H_{pr}^n(M,{\mathbb{C}})$ and denote its dimension by $h^{k,n-k}$. We have the Hodge decomposition $$\begin{aligned}
\label{cl10}
H_{pr}^n(M,{\mathbb{C}})=H_{pr}^{n,0}(M)\oplus\cdots\oplus
H_{pr}^{0,n}(M).\end{aligned}$$ It is easy to see that for a polarized Calabi–Yau manifold, since $H^2(M, {\mathcal O}_M)=0$, we have $$H_{pr}^{n,0}(M)= H^{n,0}(M), \ H_{pr}^{n-1,1}(M)= H^{n-1,1}(M).$$ The Poincaré bilinear form $Q$ on $H_{pr}^n(X,{\mathbb{Q}})$ is defined by $$Q(u,v)=(-1)^{\frac{n(n-1)}{2}}\int_X u\wedge v$$ for any $d$-closed $n$-forms $u,v$ on $X$. Furthermore, $Q $ is nondegenerate and can be extended to $H_{pr}^n(X,{\mathbb{C}})$ bilinearly. Moreover, it also satisfies the Hodge-Riemann relations $$\begin{aligned}
Q\left ( H_{pr}^{k,n-k}(M), H_{pr}^{l,n-l}(M)\right )=0\ \
\text{unless}\ \ k+l=n, \quad\text{and}\quad \label{cl30}\\
\left (\sqrt{-1}\right )^{2k-n}Q\left ( v,{\overline}v\right )>0\ \
\text{for}\ \ v\in H_{pr}^{k,n-k}(M)\setminus\{0\}. \label{cl40}\end{aligned}$$
Let $f^k=\sum_{i=k}^nh^{i,n-i}$, denote $f^0=m$, and $F^k=F^k(M)=H_{pr}^{n,0}(M)\oplus\cdots\oplus H_{pr}^{k,n-k}(M)$, from which we have the decreasing filtration $H_{pr}^n(M,{\mathbb{C}})=F^0\supset\cdots\supset F^n.$ We know that $$\begin{aligned}
&\dim_{\mathbb{C}} F^k=f^k, \label{cl45}\\
H^n_{pr}(X,{\mathbb{C}})&=F^{k}\oplus {\overline}{F^{n-k+1}}, \quad\text{and}\quad
H_{pr}^{k,n-k}(M)=F^k\cap{\overline}{F^{n-k}}.\end{aligned}$$ In terms of the Hodge filtration, the Hodge-Riemann relations and are $$\begin{aligned}
Q\left ( F^k,F^{n-k+1}\right )=0, \quad\text{and}\quad\label{cl50}\\
Q\left ( Cv,{\overline}v\right )>0 \quad\text{if}\quad v\ne 0,\label{cl60}\end{aligned}$$ where $C$ is the Weil operator given by $Cv=\left (\sqrt{-1}\right
)^{2k-n}v$ for $v\in H_{pr}^{k,n-k}(M)$. The period domain $D$ for polarized Hodge structures with data is the space of all such Hodge filtrations $$D=\left \{ F^n\subset\cdots\subset F^0=H_{pr}^n(X,{\mathbb{C}})\mid \eqref{cl45}, \eqref{cl50} \text{ and } \eqref{cl60} \text{ hold}
\right \}.$$ The compact dual $\check D$ of $D$ is $$\check D=\left \{ F^n\subset\cdots\subset F^0=H_{pr}^n(X,{\mathbb{C}})\mid \eqref{cl45} \text{ and } \eqref{cl50} \text{ hold} \right \}.$$ The period domain $D\subseteq \check D$ is an open subset. From the definition of period domain we naturally get the [Hodge bundles]{} on $\check{D}$ by associating to each point in $\check{D}$ the vector spaces $\{F^k\}_{k=0}^n$ in the Hodge filtration of that point. Without confusion we will also denote by $F^k$ the bundle with $F^k$ as the fiber for each $0\leq k\leq n$.
We remark on a change of notation for the primitive cohomology groups. As mentioned above, for a polarized Calabi–Yau manifold, $$H_{pr}^{n,0}(M)= H^{n,0}(M), \ H_{pr}^{n-1,1}(M)= H^{n-1,1}(M).$$ Since we mainly consider these two types of primitive cohomology groups of a Calabi–Yau manifold, by abuse of notation we will simply use $H^n(M,\mathbb{C})$ and $H^{k, n-k}(M)$ to denote the primitive cohomology groups $H^n_{pr}(M,\mathbb{C})$ and $H_{pr}^{k, n-k}(M)$ respectively. Moreover, we will use cohomology to mean primitive cohomology in the rest of the paper.
Construction of the Teichmüller space {#section construction of Tei}
-------------------------------------
We first recall the concept of Kuranishi family of compact complex manifolds. We refer to page 8-10 in [@su], page 94 in [@Popp] and page 19 in [@v1] for equivalent definitions and more details.
A family of compact complex manifolds $\pi:\,\mathcal{W}\to\mathcal{B}$ is *versal* at a point $p\in \mathcal{B}$ if it satisfies the following conditions:
1. Given a complex analytic family $\iota:\,\mathcal{V}\to \mathcal{S}$ of compact complex manifolds with a point $s\in \mathcal{S}$ and a biholomorphic map $f_0:\,V=\iota^{-1}(s)\to U=\pi^{-1}(p)$, there exist a holomorphic map $g$ from a neighbourhood $\mathcal{N}\subseteq \mathcal{S}$ of the point $s$ to $\mathcal{B}$ and a holomorphic map $f:\,\iota^{-1}(\mathcal{N})\to \mathcal{W}$ with $\iota^{-1}(\mathcal{N})\subseteq \mathcal{V}$, such that $g(s)=p$ and $f|_{\iota^{-1}(s)}=f_0$, with the following commutative diagram $$\xymatrix{\iota^{-1}(\mathcal{N})\ar[r]^{f}\ar[d]^{\iota}&\mathcal{W}\ar[d]^{\pi}\\
\mathcal{N}\ar[r]^{g}&\mathcal{B}.
}$$
2. For all $g$ satisfying the above condition, the tangent map $(dg)_s$ is uniquely determined.
If a family $\pi:\,\mathcal{W}\to\mathcal{B}$ is versal at every point $p\in\mathcal{B}$, then it is a *versal family* on $\mathcal{B}$. If a complex analytic family satisfies the above condition (1), then the family is called *complete* at $p$. If a complex analytic family $\pi:\,\mathcal{W}\rightarrow \mathcal{B}$ of compact complex manifolds is complete at each point of $\mathcal{B}$ and versal at the point $p\in\mathcal{B}$, then the family $\pi:\,\mathcal{W}\rightarrow\mathcal{B}$ is called the *Kuranishi family* of the complex manifold $V=\pi^{-1}(p)$. The base space $\mathcal{B}$ is called the *Kuranishi space*. If the family is complete at each point in a neighbourhood of $p\in\mathcal{B}$ and versal at $p$, then the family is called a *local Kuranishi family* at $p\in\mathcal{B}$.
Let $(M,L)$ be a polarized Calabi–Yau manifold. For any integer $m\geq 3$, we call a basis of the quotient space $(H_n(M,\mathbb{Z})/\text{Tor})/m(H_n(M,\mathbb{Z})/\text{Tor})$ a level $m$ structure on the polarized Calabi–Yau manifold. For deformation of polarized Calabi–Yau manifold with level $m$ structure, we reformulate Theorem 2.2 in [@sz] to the following theorem, in which we only put the statements we need in this paper. One can also look at [@Popp] and [@v1] for more details about the construction of moduli spaces of Calabi–Yau manifolds.
\[Szendroi theorem 2.2\] Let $M$ be a polarized Calabi–Yau manifold with level $m$ structure with $m\geq 3$. Then there exists a connected quasi-projective complex manifold $\mathcal{Z}_m$ with a versal family of Calabi–Yau manifolds, $$\begin{aligned}
\label{szendroi versal family}
\mathcal{X}_{\mathcal{Z}_m}\rightarrow \mathcal{Z}_m,\end{aligned}$$ which contains $M$ as a fiber and is polarized by an ample line bundle $\mathcal{L}_{\mathcal{Z}_m}$ on $\mathcal{X}_{\mathcal{Z}_m}$.
The Teichmüller space is the moduli space of equivalent classes of polarized and marked Calabi–Yau manifolds. More precisely, a polarized and marked Calabi–Yau manifold is a triple $(M, L, \gamma)$, where $M$ is a Calabi-Yau manifold, $L$ is a polarization on $M$, and $\gamma$ is a basis of $H_n(M,\mathbb{Z})/\text{Tor}$. Two triples $(M, L, \gamma)$ and $(M', L', \gamma')$ are equivalent if there exists a biholomorphic map $f:\,M\to M'$ with $$\begin{aligned}
f^*L'&=L,\\
f^*\gamma' &=\gamma,\end{aligned}$$ then $[M, L, \gamma]=[M', L', \gamma']\in \mathcal{T}$. Because a basis $\gamma$ of $H_n(M,\mathbb{Z})/\text{Tor}$ naturally induces a basis of $\left(H_n(M,\mathbb{Z})/\text{Tor}\right)/ m\left(H_n(M,\mathbb{Z})/\text{Tor}\right) $, we have a natural map $\pi_m:\,\mathcal{T}\to\mathcal{Z}_m$.
By this definition, it is not hard to show that the Teichmüller space is precisely the universal cover of $\mathcal{Z}_m$ with the covering map $\pi_m: \,\mathcal{T}\rightarrow\mathcal{Z}_m$. In fact, by the same construction as in Section 2 of [@sz], we can also realize the Teichmüller space $\mathcal{T}$ as a quotient space of the universal cover of the Hilbert scheme of Calabi–Yau manifolds by the special linear group $SL(N+1,\mathbb{C})$. Here the dimension is given by $N+1=p(k)$, where $p$ is the Hilbert polynomial of each fiber $(M,L)$ and $k$ is such that for any polarized algebraic variety $({\widetilde}{M}, {\widetilde}{L})$ with Hilbert polynomial $p$, the line bundle ${\widetilde}{L}^{\otimes k}$ is very ample. Under this construction, the Teichmüller space $\mathcal{T}$ is automatically simply connected, and there is a natural covering map $\pi_m:\,\mathcal{T}\to\mathcal{Z}_m$. On the other hand, in Theorem 2.5 and Corollary 2.8 of [@CGL], the authors also proved that $\pi_m: \mathcal{T}\rightarrow\mathcal{Z}_m$ is a universal covering map, and consequently that $\mathcal{T}$ is the universal cover of $\mathcal{Z}_m$ for some $m$. Thus by the uniqueness of the universal covering space, these two constructions of $\mathcal{T}$ are equivalent to each other.
We denote by ${\varphi}:\,\U\rightarrow \mathcal{T}$ the pull-back family of the family (\[szendroi versal family\]) via the covering $\pi_m$.
\[imp\] The Teichmüller space $\mathcal{T}$ is a connected and simply connected smooth complex manifold and the family $$\begin{aligned}
\label{versal family over Teich}
{\varphi}:\,\mathcal{U}\rightarrow\mathcal{T},\end{aligned}$$ which contains $M$ as a fiber, is local Kuranishi at each point of $\mathcal{T}$.
For the first half, because $\mathcal{Z}_m$ is a connected and smooth complex manifold, its universal cover $\mathcal{T}$ is thus a connected and simply connected smooth complex manifold. For the second half, we know that the family (\[szendroi versal family\]) is a versal family at each point of $\mathcal{Z}_m$ and that $\pi_m$ is locally biholomorphic, therefore the pull-back family via $\pi_m$ is also versal at each point of ${{\mathcal T}}$. Then by the definition of local Kuranishi family, we get that $\U\rightarrow {{\mathcal T}}$ is local Kuranishi at each point of ${{\mathcal T}}$.
We remark that the family ${\varphi}: \mathcal{U}\rightarrow \mathcal{T}$ being local Kuranishi at each point is essentially due to the local Torelli theorem for Calabi–Yau manifolds. In fact, the Kodaira-Spencer map of the family $\mathcal{U}\rightarrow \mathcal{T}$ $$\begin{aligned}
\kappa:\,T^{1,0}_p\mathcal{T}\to {H}^{0,1}(M_p,T^{1,0}M_p),\end{aligned}$$ is an isomorphism for each $p\in\mathcal{T}$. Then by the theorems on page 9 of [@su], we conclude that $\mathcal{U}\to \mathcal{T}$ is versal at each $p\in \mathcal{T}$. Moreover, the well-known Bogomolov-Tian-Todorov result ([@tian1] and [@tod1]) implies that $\dim_{\mathbb{C}}(\mathcal{T})=N=h^{n-1,1}$. We refer the reader to Chapter 4 in [@km1] for more details about deformations of complex structures and the Kodaira-Spencer map. Note that the Teichmüller space $\mathcal{T}$ does not depend on the choice of $m$. In fact, let $m_1$ and $m_2$ be two different integers, and let $\mathcal{U}_1\rightarrow{{\mathcal T}}_1$, $\mathcal{U}_2\rightarrow{{\mathcal T}}_2$ be two versal families constructed via level $m_1$ and level $m_2$ structures respectively as above, both of which contain $M$ as a fiber. Using the fact that ${{\mathcal T}}_1$ and ${{\mathcal T}}_2$ are simply connected and the definition of a versal family, we get a biholomorphic map $f:\,{{\mathcal T}}_1\rightarrow{{\mathcal T}}_2$ such that the versal family $\mathcal{U}_1\rightarrow{{\mathcal T}}_1$ is the pull-back of the versal family $\mathcal{U}_2\rightarrow {{\mathcal T}}_2$ by $f$. Thus these two families are biholomorphic to each other.
There is another, easier way to show that $\mathcal{T}$ does not depend on the choice of $m$. Let $m_1$ and $m_2$ be two different integers, let $\mathcal{X}_{\mathcal{Z}_{m_{1}}}\rightarrow\mathcal{Z}_{{m_{1}}}$, $\mathcal{X}_{\mathcal{Z}_{m_{2}}}\rightarrow\mathcal{Z}_{{m_{2}}}$ be two versal families with level $m_1$ and level $m_2$ structures respectively, and let $\mathcal{U}_1\rightarrow{{\mathcal T}}_1$, $\mathcal{U}_2\rightarrow{{\mathcal T}}_2$ be the two versal families constructed via level $m_1$ and level $m_2$ structures as above, both of which contain $M$ as a fiber. Then $\mathcal{T}_1$ is the universal cover of $\mathcal{Z}_{m_1}$ and $\mathcal{T}_2$ is the universal cover of $\mathcal{Z}_{m_2}$. Let us consider the product of the two integers $m_{1,2}=m_1m_2$, and the versal family $\mathcal{X}_{_{\mathcal{Z}_{m_{1,2}}}}\rightarrow \mathcal{Z}_{m_{1,2}}$. Then the moduli space $\mathcal{Z}_{m_{1,2}}$ with level $m_{1,2}$ structure is a covering space of both $\mathcal{Z}_{{m_{1}}}$ and $\mathcal{Z}_{{m_{2}}}$. Let $\mathcal{T}$ be the universal cover of $\mathcal{Z}_{m_{1,2}}$, and $\mathcal{U}\rightarrow \mathcal{T}$ the pull-back family from $\mathcal{X}_{\mathcal{Z}_{m_{1,2}}}$ as above. Since $\mathcal{Z}_{m_{1,2}}$ is a covering space of both $\mathcal{Z}_{m_1}$ and $\mathcal{Z}_{m_2}$, we conclude that $\mathcal{T}$ is the universal cover of $\mathcal{Z}_{m_1}$, and $\mathcal{T}$ is also the universal cover of $\mathcal{Z}_{m_2}$. Thus we have proved that the universal covers of $\mathcal{Z}_{m_1}$ and $\mathcal{Z}_{m_2}$ are the same, that is, $\mathcal{T}=\mathcal{T}_1=\mathcal{T}_2$ with $\mathcal{U}$ the pull-back versal family over $\mathcal{T}$.
In the rest of the paper, we will simply use “level $m$ structure” to mean “level $m$ structure with $m\geq 3$".
The period map and the Hodge metric on the Teichmüller space {#section period map}
------------------------------------------------------------
For any point $p\in{{\mathcal T}}$, let $M_p$ be the fiber of the family ${\varphi}:\, \mathcal{U}\rightarrow {{\mathcal T}}$, which is a polarized and marked Calabi–Yau manifold. Since the Teichmüller space is simply connected and we have fixed the basis of the middle homology group modulo torsion, we identify the basis of $H_n(M,\mathbb{Z})/\text{Tor}$ with a lattice $\Lambda$ as in [@sz]. This gives us a canonical identification of the middle dimensional cohomology of $M$ with that of the background manifold $X$, that is, $H^n(M,\mathbb{C})\simeq H^n(X, \mathbb{C})$. Therefore, we can use this to identify $H^n(M_p, \mathbb{C})$ for all fibers over ${{\mathcal T}}$. Thus we get a canonical trivial bundle $H^n(M_p,\mathbb{C})\times {{\mathcal T}}$.
The period map from $\mathcal{T}$ to $D$ is defined by assigning to each point $p\in{{\mathcal T}}$ the Hodge structure on $M_p$, that is $$\begin{aligned}
\Phi:\, {{\mathcal T}}\rightarrow D, \quad\quad p\mapsto\Phi(p)=\{F^n(M_p)\subset\cdots\subset F^0(M_p)\}\end{aligned}$$ We denote $F^k(M_p)$ by $F^k_p$ for simplicity.
The period map has several good properties, and one may refer to Chapter 10 in [@Voisin] for details. Among them, one of the most important is the following Griffiths transversality: the period map $\Phi$ is a holomorphic map and its tangent map satisfies that $$\Phi_*(v)\in \bigoplus_{k=1}^{n}
\text{Hom}\left(F^k_p/F^{k+1}_p,F^{k-1}_p/F^{k}_p\right)\quad\text{for any}\quad p\in{{\mathcal T}}\ \
\text{and}\ \ v\in T_p^{1,0}{{\mathcal T}}$$ with $F^{n+1}=0$, or equivalently, $\Phi_*(v)\in \bigoplus_{k=0}^{n} \text{Hom} (F^k_p,F^{k-1}_p).
$
In [@GS], Griffiths and Schmid studied the so-called [Hodge metric]{} on the period domain $D$. We denote it by $h$. In particular, this Hodge metric is a complete homogeneous metric. Let us denote the period map on the moduli space by $\Phi_{\mathcal{Z}_m}: \,\mathcal{Z}_{m}\rightarrow D/\Gamma$, where $\Gamma$ denotes the global monodromy group which acts properly and discontinuously on the period domain $D$. By local Torelli theorem for Calabi–Yau manifolds, we know that $\Phi_{\mathcal{Z}_m}, \Phi$ are both locally injective. Thus it follows from [@GS] that the pull-backs of $h$ by $\Phi_{\mathcal{Z}_m}$ and $\Phi$ on $\mathcal{Z}_m$ and $\mathcal{T}$ respectively are both well-defined Kähler metrics. By abuse of notation, we still call these pull-back metrics the *Hodge metrics*.
Holomorphic affine structure on the Teichmüller space {#AffineonT}
=====================================================
In Section \[preliminary\], we review some properties of the period domain from Lie group and Lie algebra point of view. In Section \[boundedness of Phi\], we fix a base point $p\in\mathcal{T}$ and introduce the unipotent space $N_+\subseteq \check{D}$, which is biholomorphic to $\mathbb{C}^d$. Then we show that the image $\Phi(\mathcal{T})$ is bounded in $N_+\cap D$ with respect to the Euclidean metric on $N_+$. In Section \[affineonT\], using the property that $\Phi(\mathcal{T})\subseteq N_+$, we define a holomorphic map $\tau:\,\mathcal{T}\rightarrow \mathbb{C}^N$. Then we use local Torelli theorem to show that $\tau$ defines a local coordinate chart around each point in $\mathcal{T}$, and this shows that $\tau: \,\mathcal{T}\rightarrow \mathbb{C}^N$ defines a holomorphic affine structure on $\mathcal{T}$.
Preliminary
-----------
Let us briefly recall some properties of the period domain from the Lie group and Lie algebra point of view. All of the results in this section are well known to experts in the subject. The purpose of giving the details is to fix notation. One may either skip this section or refer to [@GS] and [@schmid1] for most of the details.
The orthogonal group of the bilinear form $Q$ in the definition of Hodge structure is a linear algebraic group, defined over $\mathbb{Q}$. Let us simply denote $H_{\mathbb{C}}=H^n(M, \mathbb{C})$ and $H_{\mathbb{R}}=H^n(M, \mathbb{R})$. The group of the $\mathbb{C}$-rational points is $$\begin{aligned}
G_{\mathbb{C}}=\{ g\in GL(H_{\mathbb{C}})|~ Q(gu, gv)=Q(u, v) \text{ for all } u, v\in H_{\mathbb{C}}\},\end{aligned}$$ which acts on $\check{D}$ transitively. The group of real points in $G_{\mathbb{C}}$ is $$\begin{aligned}
G_{\mathbb{R}}=\{ g\in GL(H_{\mathbb{R}})|~ Q(gu, gv)=Q(u, v) \text{ for all } u, v\in H_{\mathbb{R}}\},\end{aligned}$$ which acts transitively on $D$ as well.
Consider the period map $\Phi:\, \mathcal{T}\rightarrow D$. Fix a point $p\in \mathcal{T}$ with the image $o:=\Phi(p)=\{F^n_p\subset \cdots\subset F^{0}_p\}\in D$. The points $p\in \mathcal{T}$ and $o\in D$ may be referred to as the base points or the reference points. A linear transformation $g\in G_{\mathbb{C}}$ preserves the base point if and only if $gF^k_p=F^k_p$ for each $k$. Thus it gives the identification $$\begin{aligned}
\check{D}\simeq G_\mathbb{C}/B\quad\text{with}\quad B=\{ g\in G_\mathbb{C}|~ gF^k_p=F^k_p, \text{ for any } k\}.\end{aligned}$$ Similarly, one obtains an analogous identification $$\begin{aligned}
D\simeq G_\mathbb{R}/V\hookrightarrow \check{D}\quad\text{with}\quad V=G_\mathbb{R}\cap B,\end{aligned}$$ where the embedding corresponds to the inclusion $
G_\mathbb{R}/V=G_{\mathbb{R}}/G_\mathbb{R}\cap B\subseteq G_\mathbb{C}/B.$ The Lie algebra $\mathfrak{g}$ of the complex Lie group $G_{\mathbb{C}}$ can be described as $$\begin{aligned}
\mathfrak{g}&=\{X\in \text{End}(H_\mathbb{C})|~ Q(Xu, v)+Q(u, Xv)=0, \text{ for all } u, v\in H_\mathbb{C}\}.\end{aligned}$$ It is a simple complex Lie algebra, which contains $\mathfrak{g}_0=\{X\in \mathfrak{g}|~ XH_{\mathbb{R}}\subseteq H_\mathbb{R}\}$ as a real form, i.e. $\mathfrak{g}=\mathfrak{g}_0\oplus i \mathfrak{g}_0.$ With the inclusion $G_{\mathbb{R}}\subseteq G_{\mathbb{C}}$, $\mathfrak{g}_0$ becomes Lie algebra of $G_{\mathbb{R}}$. One observes that the reference Hodge structure $\{H^{k, n-k}_p\}_{k=0}^n$ of $H^n(M,{\mathbb{C}})$ induces a Hodge structure of weight zero on $\text{End}(H^n(M, {\mathbb{C}})),$ namely, $$\begin{aligned}
\mathfrak{g}=\bigoplus_{k\in \mathbb{Z}} \mathfrak{g}^{k, -k}\quad\text{with}\quad\mathfrak{g}^{k, -k}=\{X\in \mathfrak{g}|XH_p^{r, n-r}\subseteq H_p^{r+k, n-r-k}\}.\end{aligned}$$ Since the Lie algebra $\mathfrak{b}$ of $B$ consists of those $X\in \mathfrak{g}$ that preserves the reference Hodge filtration $\{F_p^n\subset\cdots\subset F^0_p\}$, one thus has $$\begin{aligned}
\mathfrak{b}=\bigoplus_{k\geq 0} \mathfrak{g}^{k, -k}.\end{aligned}$$ The Lie algebra $\mathfrak{v}_0$ of $V$ is $\mathfrak{v}_0=\mathfrak{g}_0\cap \mathfrak{b}=\mathfrak{g}_0\cap \mathfrak{b}\cap{\overline}{\mathfrak{b}}=\mathfrak{g}_0\cap \mathfrak{g}^{0, 0}.$ With the above isomorphisms, the holomorphic tangent space of $\check{D}$ at the base point is naturally isomorphic to $\mathfrak{g}/\mathfrak{b}$.
Let us consider the nilpotent Lie subalgebra $\mathfrak{n}_+:=\oplus_{k\geq 1}\mathfrak{g}^{-k,k}$. Then one gets the holomorphic isomorphism $\mathfrak{g}/\mathfrak{b}\cong \mathfrak{n}_+$. We take the unipotent group $N_+=\exp(\mathfrak{n}_+)$.
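For orientation, we include a small worked example; the identifications below are standard facts about the weight-one case and serve only to illustrate the notation, since the case of interest in this paper is the weight-$n$ Hodge structure of a Calabi–Yau manifold. Assume $n=1$, $H_{\mathbb{C}}\cong\mathbb{C}^{2g}$ and $Q$ the standard symplectic form, so that $G_{\mathbb{C}}\cong Sp(2g, \mathbb{C})$, $\check{D}$ is the Lagrangian Grassmannian and $D$ is the Siegel upper half space. Then the weight-zero decomposition of $\mathfrak{g}$ has only three pieces, $$\begin{aligned}
\mathfrak{g}=\mathfrak{g}^{-1,1}\oplus\mathfrak{g}^{0,0}\oplus\mathfrak{g}^{1,-1}, \qquad \mathfrak{b}=\mathfrak{g}^{0,0}\oplus\mathfrak{g}^{1,-1}, \qquad \mathfrak{n}_+=\mathfrak{g}^{-1,1}\cong \text{Sym}^2(\mathbb{C}^g),\end{aligned}$$ and $N_+=\exp(\mathfrak{n}_+)\cong\mathbb{C}^{g(g+1)/2}$.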
As $\text{Ad}(g)(\mathfrak{g}^{k, -k})$ is in $\bigoplus_{i\geq k}\mathfrak{g}^{i, -i} \text{ for each } g\in B,$ the sub-Lie algebra $\mathfrak{b}\oplus \mathfrak{g}^{-1, 1}/\mathfrak{b}\subseteq \mathfrak{g}/\mathfrak{b}$ defines an Ad$(B)$-invariant subspace. By left translation via $G_{\mathbb{C}}$, $\mathfrak{b}\oplus\mathfrak{g}^{-1,1}/\mathfrak{b}$ gives rise to a $G_{\mathbb{C}}$-invariant holomorphic subbundle of the holomorphic tangent bundle. It will be denoted by $T^{1,0}_{o, h}\check{D}$, and will be referred to as the holomorphic horizontal tangent bundle at the base point. One can check that this construction does not depend on the choice of the base point. The horizontal tangent subbundle at the base point $o$, restricted to $D$, determines a subbundle $T_{o, h}^{1, 0}D$ of the holomorphic tangent bundle $T^{1, 0}_oD$ of $D$ at the base point. The $G_{\mathbb{C}}$-invariance of $T^{1, 0}_{o, h}\check{D}$ implies the $G_{\mathbb{R}}$-invariance of $T^{1, 0}_{o, h}D$. As another interpretation of this holomorphic horizontal bundle at the base point, one has $$\begin{aligned}
\label{horizontal}
T^{1, 0}_{o, h}\check{D}\simeq T^{1, 0}_o\check{D}\cap \bigoplus_{k=1}^n\text{Hom}(F^{k}_p/F^{k+1}_p, F^{k-1}_p/F^{k}_p).\end{aligned}$$ In [@schmid1], Schmid calls a holomorphic mapping $\Psi: \,{M}\rightarrow \check{D}$ of a complex manifold $M$ into $\check{D}$ *horizontal* if at each point of $M$ the induced map between the holomorphic tangent spaces takes values in the appropriate fibre of the horizontal tangent bundle. It is easy to see that the period map $\Phi: \, \mathcal{T}\rightarrow D$ is horizontal, since $\Phi_*(T^{1,0}_q\mathcal{T})\subseteq T^{1,0}_{\Phi(q),h}D$ for any $q\in \mathcal{T}$. Since $D$ is an open set in $\check{D}$, we have the following relation: $$\begin{aligned}
\label{n+iso}
T^{1,0}_{o, h}D= T^{1, 0}_{o, h}\check{D}\cong\mathfrak{b}\oplus \mathfrak{g}^{-1, 1}/\mathfrak{b}\hookrightarrow \mathfrak{g}/\mathfrak{b}\cong \mathfrak{n}_+.\end{aligned}$$
\[N+inD\]With a fixed base point, we can identify $N_+$ with its unipotent orbit in $\check{D}$ by identifying an element $c\in N_+$ with $[c]=cB$ in $\check{D}$; that is, $N_+=N_+(\text{ base point })\cong N_+B/B\subseteq\check{D}.$ In particular, when the base point $o$ is in $D$, we have $N_+\cap D\subseteq D$.
Let us introduce the notion of an adapted basis for a given Hodge decomposition or Hodge filtration. For any $p\in \mathcal{T}$, let $f^k=\dim F^k_p$ for $0\leq k\leq n$. We call a basis $$\xi=\left\{ \xi_0, \xi_1, \cdots, \xi_N, \cdots, \xi_{f^{k+1}}, \cdots, \xi_{f^k-1}, \cdots,\xi_{f^{2}}, \cdots, \xi_{f^{1}-1}, \xi_{f^{0}-1} \right\}$$ of $H^n(M_p, \mathbb{C})$ an *adapted basis for the given Hodge decomposition* $$H^n(M_p, {\mathbb{C}})=H^{n, 0}_p\oplus H^{n-1, 1}_p\oplus\cdots \oplus H^{1, n-1}_p\oplus H^{0, n}_p,$$ if it satisfies $
H^{k, n-k}_p=\text{Span}_{\mathbb{C}}\left\{\xi_{f^{k+1}}, \cdots, \xi_{f^k-1}\right\}$ with $\dim H^{k,n-k}_p=f^k-f^{k+1}$. We call a basis $$\begin{aligned}
\zeta=\{\zeta_0, \zeta_1, \cdots, \zeta_N, \cdots, \zeta_{f^{k+1}}, \cdots, \zeta_{f^k-1}, \cdots, \zeta_{f^2}, \cdots, \zeta_{f^1-1}, \zeta_{f^0-1}\}\end{aligned}$$ of $H^n(M_p, \mathbb{C})$ an *adapted basis for the given filtration* $$\begin{aligned}
F^n\subseteq F^{n-1}\subseteq\cdots\subseteq F^0\end{aligned}$$ if it satisfies $F^{k}=\text{Span}_{\mathbb{C}}\{\zeta_0, \cdots, \zeta_{f^k-1}\}$ with $\text{dim}_{\mathbb{C}}F^{k}=f^k$. Moreover, unless otherwise pointed out, the matrices in this paper are $m\times m$ matrices, where $m=f^0$. The blocks of the $m\times m$ matrix $T$ are set as follows: for each $0\leq \alpha, \beta\leq n$, the $(\alpha, \beta)$-th block $T^{\alpha, \beta}$ is $$\begin{aligned}
\label{block}
T^{\alpha, \beta}=\left[T_{ij}\right]_{f^{-\alpha+n+1}\leq i \leq f^{-\alpha+n}-1, \ f^{-\beta+n+1}\leq j\leq f^{-\beta+n}-1},\end{aligned}$$ where $T_{ij}$ are the entries of the matrix $T$, and $f^{n+1}$ is defined to be zero. In particular, $T =[T^{\alpha,\beta}]$ is called a *block lower triangular matrix* if $T^{\alpha,\beta}=0$ whenever $\alpha<\beta$.
We remark that, by fixing a base point, we can identify the above Lie algebras and Lie groups with their images or orbits in the corresponding quotient Lie algebras or quotient spaces. For example, $\mathfrak{n}_+\cong \mathfrak{g}/\mathfrak{b}$, $\mathfrak{g}^{-1,1}\cong\mathfrak{b}\oplus \mathfrak{g}^{-1,1}/\mathfrak{b}$, and $N_+\cong N_+B/B\subseteq \check{D}$. We can also identify a point $\Phi(p)=\{ F^n_p\subseteq F^{n-1}_p\subseteq \cdots \subseteq F^{0}_p\}\in D$ with its Hodge decomposition $\bigoplus_{k=0}^n H^{k, n-k}_p$, and thus with any fixed adapted basis of the corresponding Hodge decomposition for the base point, we have matrix representations of elements in the above Lie groups and Lie algebras. For example, elements in $N_+$ can be realized as nonsingular block lower triangular matrices with identity blocks in the diagonal; elements in $B$ can be realized as nonsingular block upper triangular matrices.
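Schematically, and purely as a visualization of the identifications just described, an element of $N_+$ and an element of $B$ take the respective shapes $$\begin{aligned}
\left[\begin{array}[c]{cccc}I&0&\cdots&0\\ \ast&I&&\vdots\\ \vdots&&\ddots&0\\ \ast&\cdots&\ast&I\end{array}\right], \qquad \left[\begin{array}[c]{cccc}\ast&\ast&\cdots&\ast\\ 0&\ast&&\vdots\\ \vdots&&\ddots&\ast\\ 0&\cdots&0&\ast\end{array}\right],\end{aligned}$$ where, following the convention in (\[block\]), the $(\alpha, \beta)$-th block has size $(f^{n-\alpha}-f^{n-\alpha+1})\times(f^{n-\beta}-f^{n-\beta+1})$, and the diagonal blocks of an element of $B$ are nonsingular.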
We shall review and collect some facts about the structure of the simple Lie algebra $\mathfrak{g}$ in our case. Again one may refer to [@GS] and [@schmid1] for more details. Let $\theta: \,\mathfrak{g}\rightarrow \mathfrak{g}$ be the Weil operator, which is defined by $$\begin{aligned}
\theta(X)=(-1)^p X\quad \text{ for } X\in \mathfrak{g}^{p,-p}.\end{aligned}$$ Then $\theta$ is an involutive automorphism of $\mathfrak{g}$, and is defined over $\mathbb{R}$. The $(+1)$ and $(-1)$ eigenspaces of $\theta$ will be denoted by $\mathfrak{k}$ and $\mathfrak{p}$ respectively. Moreover, set $$\begin{aligned}
\mathfrak{k}_0=\mathfrak{k}\cap \mathfrak{g}_0, \quad \mathfrak{p}_0=\mathfrak{p}\cap \mathfrak{g}_0.\end{aligned}$$The fact that $\theta$ is an involutive automorphism implies $$\begin{aligned}
\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}, \quad \mathfrak{g}_0=\mathfrak{k}_0\oplus \mathfrak{p}_0, \quad
[\mathfrak{k}, \,\mathfrak{k}]\subseteq \mathfrak{k}, \quad [\mathfrak{p},\,\mathfrak{p}]\subseteq\mathfrak{k}, \quad [\mathfrak{k}, \,\mathfrak{p}]\subseteq \mathfrak{p}.\end{aligned}$$ Let us consider $\mathfrak{g}_c=\mathfrak{k}_0\oplus \sqrt{-1}\mathfrak{p}_0.$ Then $\mathfrak{g}_c$ is a real form for $\mathfrak{g}$. Recall that the Killing form $B(\cdot, \,\cdot)$ on $\mathfrak{g}$ is defined by $$\begin{aligned}
B(X,Y)=\text{Trace}(\text{ad}(X)\circ\text{ad}(Y)) \quad \text{for } X,Y\in \mathfrak{g}.\end{aligned}$$ A semisimple Lie algebra is compact if and only if the Killing form is negative definite. Thus it is not hard to check that $\mathfrak{g}_c$ is actually a compact real form of $\mathfrak{g}$, while $\mathfrak{g}_0$ is a non-compact real form. Recall that $G_{\mathbb{R}}\subseteq G_{\mathbb{C}}$ is the subgroup which corresponds to the subalgebra $\mathfrak{g}_0\subseteq\mathfrak{g}$. Let $G_c\subseteq G_{\mathbb{C}}$ denote the connected subgroup which corresponds to the subalgebra $\mathfrak{g}_c\subseteq\mathfrak{g}$. Let us denote by $\tau_c$ the complex conjugation of $\mathfrak{g}$ with respect to the compact real form $\mathfrak{g}_c$, and by $\tau_0$ the complex conjugation of $\mathfrak{g}$ with respect to the real form $\mathfrak{g}_0$.
The intersection $K=G_c\cap G_{\mathbb{R}}$ is then a compact subgroup of $G_{\mathbb{R}}$, whose Lie algebra is $\mathfrak{k}_0=\mathfrak{g}_{0}\cap \mathfrak{g}_c$. With the above notations, Schmid showed in [@schmid1] that $K$ is a maximal compact subgroup of $G_{\mathbb{R}}$, and it meets every connected component of $G_{\mathbb{R}}$. Moreover, $V=G_{\mathbb{R}}\cap B\subseteq K$.
We know that in our case, $G_{\mathbb{C}}$ is a connected simple Lie group and $B$ is a parabolic subgroup of $G_{\mathbb{C}}$ with $\mathfrak{b}$ as its Lie algebra. The Lie algebra $\mathfrak{b}$ has a unique maximal nilpotent ideal $\mathfrak{n}_-$. It is not hard to see that $$\begin{aligned}
\mathfrak{g}_c\cap \mathfrak{n}_-=\mathfrak{n}_-\cap \tau_c(\mathfrak{n}_-)=0.\end{aligned}$$ By using Bruhat’s lemma, one concludes that $\mathfrak{g}$ is spanned by the parabolic subalgebras $\mathfrak{b}$ and $\tau_c(\mathfrak{b})$. Moreover $\mathfrak{v}=\mathfrak{b}\cap \tau_c(\mathfrak{b})$ and $\mathfrak{b}=\mathfrak{v}\oplus \mathfrak{n}_-$. In particular, we also have $$\mathfrak{n}_+=\tau_c(\mathfrak{n}_-).$$ As remarked in $\S1$ of [@GS] by Griffiths and Schmid, $\mathfrak{v}$ must have the same rank as $\mathfrak{g}$, since $\mathfrak{v}$ is the intersection of the two parabolic subalgebras $\mathfrak{b}$ and $\tau_c(\mathfrak{b})$. Moreover, $\mathfrak{g}_0$ and $\mathfrak{v}_0$ are also of equal rank, since they are real forms of $\mathfrak{g}$ and $\mathfrak{v}$ respectively. Therefore, we can choose a Cartan subalgebra $\mathfrak{h}_0$ of $\mathfrak{g}_0$ such that $\mathfrak{h}_0\subseteq \mathfrak{v}_0$ is also a Cartan subalgebra of $\mathfrak{v}_0$. Since $\mathfrak{v}_0\subseteq \mathfrak{k}_0$, we also have $\mathfrak{h}_0\subseteq\mathfrak{k}_0$. A Cartan subalgebra of a real Lie algebra is a maximal abelian subalgebra, so $\mathfrak{h}_0$ is also a maximal abelian subalgebra of $\mathfrak{k}_0$; since $\mathfrak{k}_0$ is compact, this implies that $\mathfrak{h}_0$ is a Cartan subalgebra of $\mathfrak{k}_0$. Summarizing the above, we get
\[cartaninK\]There exists a Cartan subalgebra $\mathfrak{h}_0$ of $\mathfrak{g}_0$ such that $\mathfrak{h}_0\subseteq \mathfrak{v}_0\subseteq \mathfrak{k}_0$ and $\mathfrak{h}_0$ is also a Cartan subalgebra of $\mathfrak{k}_0$.
As an alternate proof of Proposition \[cartaninK\], to show that $\mathfrak{k}_0$ and $\mathfrak{g}_0$ have equal rank, one realizes that $\mathfrak{g}_0$ in our case is one of the following real simple Lie algebras: $\mathfrak{sp}(2l, \mathbb{R})$, $\mathfrak{so}(p, q)$ with $p+q$ odd, or $\mathfrak{so}(p, q)$ with $p$ and $q$ both even. One may refer to [@Griffiths1] and [@su] for more details.
Proposition \[cartaninK\] implies that the simple Lie algebra $\mathfrak{g}_0$ in our case is a simple Lie algebra of the first category, as defined in $\S4$ of [@Sugi]. In what follows, we briefly derive the result for simple Lie algebras of the first category given in Lemma 3 of [@Sugi1]. One may also refer to Lemma 2.2.12 at pp. 141–142 of [@Xu] for the same result.
Let us still use the above notations for the Lie algebras we consider. By Proposition \[cartaninK\], we can take $\mathfrak{h}_0$ to be a Cartan subalgebra of $\mathfrak{g}_0$ such that $\mathfrak{h}_0\subseteq \mathfrak{v}_0\subseteq\mathfrak{k}_0$ and $\mathfrak{h}_0$ is also a Cartan subalgebra of $\mathfrak{k}_0$. Let $\mathfrak{h}$ denote the complexification of $\mathfrak{h}_0$. Then $\mathfrak{h}$ is a Cartan subalgebra of $\mathfrak{g}$ such that $\mathfrak{h}\subseteq \mathfrak{v}\subseteq \mathfrak{k}$.
Write $\mathfrak{h}_0^*=\text{Hom}(\mathfrak{h}_0, \mathbb{R})$ and $\mathfrak{h}^*_{_\mathbb{R}}=\sqrt{-1}\mathfrak{h}^*_0. $ Then $\mathfrak{h}^*_{_\mathbb{R}}$ can be identified with $\mathfrak{h}_{_\mathbb{R}}:=\sqrt{-1}\mathfrak{h}_0$ by duality using the restriction of the Killing form $B$ of $\mathfrak{g}$ to $\mathfrak{h}_{_\mathbb{R}}$. For $\rho\in\mathfrak{h}_{_{\mathbb{R}}}^*\simeq \mathfrak{h}_{_{\mathbb{R}}}$, one can define the following subspace of $\mathfrak{g}$: $$\begin{aligned}
\mathfrak{g}^{\rho}=\{x\in \mathfrak{g}| [h, \,x]=\rho(h)x\quad\text{for all } h\in \mathfrak{h}\}.\end{aligned}$$ An element $\varphi\in\mathfrak{h}_{_{\mathbb{R}}}^*\simeq\mathfrak{h}_{_{\mathbb{R}}}$ is called a root of $\mathfrak{g}$ with respect to $\mathfrak{h}$ if $\mathfrak{g}^\varphi\neq \{0\}$.
Let $\Delta\subseteq \mathfrak{h}^{*}_{_\mathbb{R}}\simeq\mathfrak{h}_{_{\mathbb{R}}}$ denote the set of nonzero $\mathfrak{h}$-roots. Then each root space $$\begin{aligned}
\mathfrak{g}^{\varphi}=\{x\in\mathfrak{g}|[h,x]=\varphi(h)x \text{ for all } h\in \mathfrak{h}\},\end{aligned}$$ for $\varphi\in \Delta$, is one-dimensional over $\mathbb{C}$, generated by a root vector $e_{_{\varphi}}$.
Since the involution $\theta$ is a Lie-algebra automorphism fixing $\mathfrak{k}$, we have $[h, \theta(e_{_{\varphi}})]=\varphi(h)\theta(e_{_{\varphi}})$ for any $h\in \mathfrak{h}$ and $\varphi\in \Delta.$ Thus $\theta(e_{_{\varphi}})$ is also a root vector belonging to the root $\varphi$, so $e_{_{\varphi}}$ must be an eigenvector of $\theta$. It follows that there is a decomposition of the roots $\Delta$ into $\Delta_{\mathfrak{k}}\cup\Delta_{\mathfrak{p}}$ of compact roots and non-compact roots with root spaces $\mathbb{C}e_{_{\varphi}}\subseteq \mathfrak{k}$ and $\mathfrak{p}$ respectively. The adjoint representation of $\mathfrak{h}$ on $\mathfrak{g}$ determines a decomposition $$\begin{aligned}
\mathfrak{g}=\mathfrak{h}\oplus \sum_{\varphi\in\Delta}\mathfrak{g}^{\varphi}.\end{aligned}$$ There also exists a Weyl basis $\{h_i, 1\leq i\leq l;\,\,e_{_{\varphi}}, \text{ for any } \varphi\in\Delta\}$ with $l=\text{rank}(\mathfrak{g})$ such that $\text{Span}_{\mathbb{C}}\{h_1, \cdots, h_l\}=\mathfrak{h}$, $\text{Span}_{\mathbb{C}}\{e_{\varphi}\}=\mathfrak{g}^{\varphi}$ for each $\varphi\in \Delta$, and $$\begin{aligned}
&\tau_c(h_i)=\tau_0(h_i)=-h_i, \quad \text{ for any }1\leq i\leq l;\\ \tau_c(e_{_{\varphi}})=&\tau_0(e_{_{\varphi}})=-e_{_{-\varphi}} \quad \text{for any } \varphi\in \Delta_{\mathfrak{k}};\quad\tau_0(e_{_{\varphi}})=-\tau_c(e_{_{\varphi}})=e_{_{-\varphi}} \quad\text{for any }\varphi\in \Delta_{\mathfrak{p}}.\end{aligned}$$ With respect to this Weyl basis, we have $$\begin{aligned}
\mathfrak{k}_0&=\mathfrak{h}_0+\sum_{\varphi\in \Delta_{\mathfrak{k}}}\mathbb{R}(e_{_{\varphi}}-e_{_{-\varphi}})+\sum_{\varphi\in\Delta_{\mathfrak{k}}} \mathbb{R}\sqrt{-1}(e_{_{\varphi}}+e_{_{-\varphi}});\\
\mathfrak{p}_0&=\sum_{\varphi\in \Delta_{\mathfrak{p}}}\mathbb{R}(e_{_{\varphi}}+e_{_{-\varphi}})+\sum_{\varphi\in\Delta_{\mathfrak{p}}} \mathbb{R}\sqrt{-1}(e_{_{\varphi}}-e_{_{-\varphi}}).\end{aligned}$$
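As a minimal illustration of these formulas (an aside, not used in the sequel), assume $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{C})$ with $\Delta=\{\pm\alpha\}$ and the single root $\alpha$ noncompact, so that $\Delta_{\mathfrak{k}}=\emptyset$ and $\Delta_{\mathfrak{p}}=\{\pm\alpha\}$. Then the displayed formulas reduce to $$\begin{aligned}
\mathfrak{k}_0=\mathfrak{h}_0, \qquad \mathfrak{p}_0=\mathbb{R}(e_{_{\alpha}}+e_{_{-\alpha}})+\mathbb{R}\sqrt{-1}(e_{_{\alpha}}-e_{_{-\alpha}}),\end{aligned}$$ so that $\mathfrak{g}_0\cong \mathfrak{sl}(2,\mathbb{R})$ and $\mathfrak{k}_0\cong \mathfrak{so}(2)$; this is exactly the local $\mathfrak{sl}_2$ picture that reappears in the proof of Lemma \[abounded\] below.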
\[puretype\]Let $\Delta$ be the set of $\mathfrak{h}$-roots as above. Then for each root $\varphi\in \Delta$, there is an integer $-n\leq k\leq n$ such that $e_{\varphi}\in \mathfrak{g}^{k,-k}$. In particular, if $e_{_{\varphi}}\in \mathfrak{g}^{k, -k}$, then $\tau_0(e_{_{\varphi}})\in \mathfrak{g}^{-k,k}$ for any $-n\leq k\leq n$.
Let $\varphi$ be a root and let $e_{\varphi}$ be the generator of the root space $\mathfrak{g}^{\varphi}$. Write $e_{\varphi}=\sum_{k=-n}^n e^{-k,k}$, where $e^{-k,k}\in \mathfrak{g}^{-k,k}$. Because $\mathfrak{h}\subseteq \mathfrak{v}\subseteq \mathfrak{g}^{0,0}$, the Lie bracket $[e^{-k,k}, h]\in \mathfrak{g}^{-k,k}$ for each $k$. Then the condition $[e_{\varphi}, h]=\varphi (h)e_{\varphi}$ implies that $$\begin{aligned}
\sum_{k=-n}^n[e^{-k,k},h]=\sum_{k=-n}^n\varphi(h)e^{-k,k}\quad \text{ for each } h\in \mathfrak{h}.\end{aligned}$$ By comparing the type, we get $$\begin{aligned}
[e^{-k,k},h]=\varphi(h)e^{-k,k}\quad \text{ for each } h\in \mathfrak{h}.\end{aligned}$$ Therefore $e^{-k,k}\in \mathfrak{g}^{\varphi}$ for each $k$. Since the nonzero elements among $\{e^{-k,k}\}_{k=-n}^n$ are linearly independent while $\mathfrak{g}^{\varphi}$ is one-dimensional, there is only one $-n\leq k\leq n$ with $e^{-k,k}\neq 0$.
Let us now introduce a lexicographic order (cf. p. 41 in [@Xu] or p. 416 in [@Sugi]) in the real vector space $\mathfrak{h}_{_{\mathbb{R}}}$ as follows: we fix an ordered basis $e_1, \cdots, e_l$ for $\mathfrak{h}_{_{\mathbb{R}}}$. Then for any $h=\sum_{i=1}^l\lambda_ie_i\in\mathfrak{h}_{_{\mathbb{R}}}$, we call $h>0$ if the first nonzero coefficient is positive, that is, if $\lambda_1=\cdots=\lambda_k=0, \lambda_{k+1}>0$ for some $0\leq k<l$. For any $h, h'\in\mathfrak{h}_{_{\mathbb{R}}}$, we say $h>h'$ if $h-h'>0$, $h<h'$ if $h-h'<0$ and $h=h'$ if $h-h'=0$. In particular, let us identify the dual spaces $\mathfrak{h}_{_{\mathbb{R}}}^*$ and $\mathfrak{h}_{_{\mathbb{R}}}$, thus $\Delta\subseteq \mathfrak{h}_{_{\mathbb{R}}}$. Let us choose a maximal linearly independent subset $\{e_1, \cdots, e_s\}$ of $\Delta_{\mathfrak{p}}$, and then choose $\{e_{s+1}, \cdots, e_{l}\}\subseteq \Delta_{\mathfrak{k}}$ such that $\{e_1, \cdots, e_l\}$ is linearly independent. Then $\{e_1, \cdots, e_s, e_{s+1}, \cdots, e_l\}$ forms a basis for $\mathfrak{h}_{_{\mathbb{R}}}^*$ since $\text{Span}_{\mathbb{R}}\Delta=\mathfrak{h}_{_{\mathbb{R}}}^*$. Then define the above lexicographic order in $\mathfrak{h}_{_{\mathbb{R}}}^*\simeq \mathfrak{h}_{_{\mathbb{R}}}$ using the ordered basis $\{e_1,\cdots, e_l\}$. In this way, we can also define $$\begin{aligned}
\Delta^+=\{\varphi>0: \,\varphi\in \Delta\};\quad \Delta_{\mathfrak{p}}^+=\Delta^+\cap \Delta_{\mathfrak{p}}.\end{aligned}$$ Similarly we can define $\Delta^-$, $\Delta^-_{\mathfrak{p}}$, $\Delta^{+}_{\mathfrak{k}}$, and $\Delta^-_{\mathfrak{k}}$. Then one can conclude the following lemma from Lemma 2.2.10 and Lemma 2.2.11 on p. 141 of [@Xu].
\[RootSumNonRoot\]Using the above notation, we have $$\begin{aligned}
(\Delta_{\mathfrak{k}}+\Delta^{\pm}_{\mathfrak{p}})\cap \Delta\subseteq\Delta_{\mathfrak{p}}^{\pm}; \quad (\Delta_{\mathfrak{p}}^{\pm}+\Delta_{\mathfrak{p}}^{\pm})\cap\Delta=\emptyset.\end{aligned}$$ If one defines $$\mathfrak{p}^{\pm}=\sum_{\varphi\in\Delta^{\pm}_{\mathfrak{p}}} \mathfrak{g}^{\varphi}\subseteq\mathfrak{p},$$ then $\mathfrak{p}=\mathfrak{p}^+\oplus\mathfrak{p}^-$ and $
[\mathfrak{p}^{\pm}, \, \mathfrak{p}^{\pm}]=0,$ $ [\mathfrak{p}^+, \,\mathfrak{p}^-]\subseteq\mathfrak{k}$, $[\mathfrak{k}, \,\mathfrak{p}^{\pm}]\subseteq\mathfrak{p}^{\pm}. $
Two different roots $\varphi, \psi\in \Delta$ are said to be strongly orthogonal if and only if $\varphi\pm\psi\notin\Delta\cup \{0\}$, which is denoted by $\varphi {\protect\mathpalette{\protect\independenT}{\perp}}\psi$.
For the real simple Lie algebra $\mathfrak{g}_0=\mathfrak{k}_0\oplus\mathfrak{p}_0$, which has a Cartan subalgebra $\mathfrak{h}_0$ contained in $\mathfrak{k}_0$, a maximal abelian subspace of $\mathfrak{p}_0$ can be described as in the following lemma, which is a slight extension of a lemma of Harish-Chandra in [@HC]. One may refer to Lemma 3 in [@Sugi1] or Lemma 2.2.12 at pp. 141–142 of [@Xu] for more details. For the reader’s convenience, we give a detailed proof.
\[stronglyortho\]There exists a set of strongly orthogonal noncompact positive roots $\Lambda=\{\varphi_1, \cdots, \varphi_r\}\subseteq\Delta^+_{\mathfrak{p}}$ such that $$\begin{aligned}
\mathfrak{a}_0=\sum_{i=1}^r\mathbb{R}\left(e_{_{\varphi_i}}+e_{_{-\varphi_i}}\right)\end{aligned}$$ is a maximal abelian subspace in $\mathfrak{p}_0$.
Let $\varphi_1$ be the minimum in $\Delta_\mathfrak{p}^+$, and $\varphi_2$ be the minimal element in $\{\varphi\in\Delta_{\mathfrak{p}}^+:\,\varphi{\protect\mathpalette{\protect\independenT}{\perp}}\varphi_1\}$; in this way we obtain inductively a maximal ordered set of roots $\Lambda=\{\varphi_1,\cdots, \varphi_r\}\subseteq \Delta_\mathfrak{p}^+$, such that for each $1\leq k\leq r$ $$\begin{aligned}
\varphi_k=\min\{{\varphi}\in\Delta_{\mathfrak{p}}^+:\,\varphi{\protect\mathpalette{\protect\independenT}{\perp}}\varphi_j \text{ for } 1\leq j\leq k-1\}.\end{aligned}$$ Because $\varphi_i{\protect\mathpalette{\protect\independenT}{\perp}}\varphi_j$ for any $1\leq i<j\leq r$, we have $[e_{_{\pm\varphi_i}}, e_{_{\pm\varphi_j}}]=0$. Therefore $\mathfrak{a}_0=\sum_{i=1}^r\mathbb{R}\left(e_{_{\varphi_i}}+e_{_{-\varphi_i}}\right)$ is an abelian subspace of $\mathfrak{p}_0$. Also because a root can not be strongly orthogonal to itself, the ordered set $\Lambda$ contains distinct roots. Thus $\dim_\mathbb{R}\mathfrak{a}_0=r$.
Now we prove that $\mathfrak{a}_0$ is a maximal abelian subspace of $\mathfrak{p}_0$. Suppose towards a contradiction that there is a nonzero vector $X\in\mathfrak{p}_0$ of the form $$X=\sum_{\alpha\in\Delta_{\mathfrak{p}}^+\setminus \Lambda}\lambda_\alpha\left( e_{_{\alpha}}+e_{_{-\alpha}}\right)+\sum_{\alpha\in\Delta_{\mathfrak{p}}^+\setminus \Lambda}\mu_\alpha\sqrt{-1}\left( e_{_{\alpha}}-e_{_{-\alpha}}\right), \quad\text{where }\lambda_\alpha, \mu_\alpha\in \mathbb{R},$$ such that $[X, e_{_{\varphi_i}}+e_{_{-\varphi_i}}]=0$ for each $1\leq i\leq r$. We denote $c_\alpha=\lambda_\alpha+\sqrt{-1}\mu_\alpha$. Because $X\neq 0$, there exists $\psi\in\Delta_{\mathfrak{p}}^+\setminus \Lambda$ with $c_{\psi}\neq0$. Moreover, by the maximality of $\Lambda$, $\psi$ is not strongly orthogonal to $\varphi_i$ for some $1\leq i\leq r$. Thus we may first define $k_{\psi}$ for each $\psi$ with $c_{\psi}\neq 0$ as follows: $$k_{\psi}=\min_{1\leq i\leq r}\{i: \psi \text{ is not strongly orthogonal to } \varphi_i\}.$$ Then we know that $1\leq k_\psi\leq r$ for each $\psi$ with $c_\psi\neq 0$. Then we define $k$ as follows: $$\begin{aligned}
\label{definitionofk}
k=\min_{\psi\in \Delta^+_{\mathfrak{p}}\setminus\Lambda\text{ with }c_{\psi}\neq0}\{k_{\psi}\}.\end{aligned}$$ Here the minimum is taken over a nonempty finite set, and $1\leq k\leq r$. Moreover, we get the following nonempty set: $$\begin{aligned}
\label{definitionofSk}S_k=\{\psi\in \Delta^+_{\mathfrak{p}}\setminus \Lambda: c_{\psi}\neq 0 \text{ and } k_{\psi}=k\}\neq \emptyset.\end{aligned}$$
Recall that the notation $N_{\beta, \gamma}$ for any $\beta, \gamma\in \Delta$ is defined as follows: if $\beta+\gamma\in \Delta\cup\{0\}$, $N_{\beta,\gamma}$ is defined such that $[e_{\beta}, e_{\gamma}]=N_{\beta,\gamma}e_{\beta+\gamma};$ if $\beta+\gamma\notin \Delta\cup \{0\}$ then one defines $N_{\beta, \gamma}=0.$ Now let us take $k$ as defined in (\[definitionofk\]) and consider the Lie bracket $$\begin{aligned}
0&=[X, e_{_{\varphi_k}}+e_{_{-\varphi_k}}]\\&=\sum_{\psi\in\Delta_{\mathfrak{p}}^+\setminus \Lambda}\left(c_\psi(N_{_{\psi,\varphi_k}}e_{_{\psi+\varphi_k}}+N_{_{\psi,-\varphi_k}}e_{_{\psi-\varphi_k}})
+{\overline}{c}_\psi(N_{_{-\psi,\varphi_k}}e_{_{-\psi+\varphi_k}}+N_{_{-\psi,-\varphi_k}}e_{_{-\psi-\varphi_k}})\right).\end{aligned}$$ As $[\mathfrak{p}^{\pm}, \,\mathfrak{p}^{\pm}]=0$, we have $\psi+\varphi_k \notin \Delta$ and $-\psi-\varphi_k \notin\Delta$ for each $\psi\in \Delta_{\mathfrak{p}}^+$. Hence, $N_{_{\psi,\varphi_k}}=N_{_{-\psi,-\varphi_k}}=0$ for each $\psi\in\Delta_{\mathfrak{p}}^+$. Then we have the simplified expression $$\begin{aligned}
\label{equal0}
0=[X, e_{_{\varphi_k}}+e_{_{-\varphi_k}}]=\sum_{\psi\in\Delta_{\mathfrak{p}}^+\setminus \Lambda}\left(c_{\psi}N_{_{\psi,-\varphi_k}}e_{_{\psi-\varphi_k}}
+{\overline}{c}_\psi N_{_{-\psi,\varphi_k}}e_{_{-\psi+\varphi_k}}\right).\end{aligned}$$ Now let us take $\psi_0\in S_k\neq \emptyset$. Then $c_{\psi_0}\neq 0$. By the definition of $k$, $\psi_0$ is not strongly orthogonal to $\varphi_k$ while $\psi_0+\varphi_k\notin \Delta\cup\{0\}$. Thus we have $\psi_0-\varphi_k \in\Delta\cup\{0\}$. Therefore $c_{\psi_{_0}}N_{_{\psi_0,-\varphi_k}}e_{_{\psi_0-\varphi_k}}\neq 0$. Since $0=[X, e_{_{\varphi_k}}+e_{_{-\varphi_k}}]$, there must exist an element $\psi_0'\in\Delta_{\mathfrak{p}}^+\setminus \Lambda$ with $\psi_0'\neq \psi_0$ such that $\varphi_k-\psi_0=\psi_0'-\varphi_k$ and $c_{\psi_0'}\neq 0$. This implies $2\varphi_k=\psi_0+\psi_0'$, and consequently one of $\psi_0$ and $\psi_0'$ is smaller than $\varphi_k$. Then we have the following two cases:
(i) If $\psi_0<\varphi_k$, then we have found $\psi_0<\varphi_k$ with $\psi_0{\protect\mathpalette{\protect\independenT}{\perp}}\varphi_i$ for all $1\leq i\leq k-1$, and this contradicts the definition of $\varphi_k$ as $$\begin{aligned}
\varphi_k=\min\{{\varphi}\in\Delta_{\mathfrak{p}}^+:\,\varphi{\protect\mathpalette{\protect\independenT}{\perp}}\varphi_j \text{ for } 1\leq j\leq k-1\}.\end{aligned}$$
(ii) If $\psi_0'<\varphi_k$, then since $c_{\psi_0'}\neq 0$, we have $$\begin{aligned}
k_{{\psi_0'}}=\min_{1\leq i\leq r}\{i: \psi_0' \text{ is not strongly orthogonal to } \varphi_i \}.\end{aligned}$$ Then by the definition of $k$ in (\[definitionofk\]), we have $k_{{\psi_0'}}\geq k$. Therefore we have found $\psi_0'<\varphi_k$ such that $\psi_0'{\protect\mathpalette{\protect\independenT}{\perp}}\varphi_i$ for any $1\leq i\leq k-1< k_{\psi_0'}$, and this again contradicts the definition of $\varphi_k$. In both cases we reach a contradiction. Thus we conclude that $\mathfrak{a}_0$ is a maximal abelian subspace of $\mathfrak{p}_0$.
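For a concrete instance of Lemma \[stronglyortho\] (included for illustration only, and using only standard root-theoretic facts): in the weight-one case one has $\mathfrak{g}_0=\mathfrak{sp}(2l, \mathbb{R})$ with $D$ the Siegel upper half space, the noncompact positive roots are $\varepsilon_i+\varepsilon_j$ with $i\leq j$, and the compact positive roots are $\varepsilon_i-\varepsilon_j$ with $i<j$. One may then take $$\begin{aligned}
\Lambda=\{2\varepsilon_1, 2\varepsilon_2, \cdots, 2\varepsilon_l\}, \qquad \mathfrak{a}_0=\sum_{i=1}^l\mathbb{R}\left(e_{_{2\varepsilon_i}}+e_{_{-2\varepsilon_i}}\right),\end{aligned}$$ since $2\varepsilon_i\pm 2\varepsilon_j$ is never a root for $i\neq j$. In this case $r=l$, which is the real rank of $\mathfrak{sp}(2l, \mathbb{R})$, recovering the classical Harish-Chandra polydisc picture.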
For further use, we also state a proposition about maximal abelian subspaces of $\mathfrak{p}_0$, following Ch. V of [@Hel].
\[adjoint\] Let $\mathfrak{a}_0'$ be an arbitrary maximal abelian subspace of $\mathfrak{p}_0$. Then there exists an element $k\in K$ such that $\emph{Ad}(k)\cdot \mathfrak{a}_0=\mathfrak{a}'_0$. Moreover, we have $$\mathfrak{p}_{0}=\bigcup_{k\in K}\emph{Ad}(k)\cdot\mathfrak{a}_0,$$ where $\emph{Ad}$ denotes the adjoint action of $K$ on $\mathfrak{a}_0$.
Boundedness of the period map {#boundedness of Phi}
-----------------------------
Now let us fix the base point $p\in \mathcal{T}$ with $\Phi(p)\in D$. Then according to the above remark, $N_+$ can be viewed as a subset in $\check{D}$ by identifying it with its orbit in $\check{D}$ with base point $\Phi(p)$. Let us also fix an adapted basis $(\eta_0, \cdots, \eta_{m-1})$ for the Hodge decomposition of the base point $\Phi(p)\in D$. Then we can identify elements in $N_+$ with nonsingular block lower triangular matrices whose diagonal blocks are all identity submatrices. We define $$\begin{aligned}
\check{\mathcal{T}}=\Phi^{-1}(N_+).\end{aligned}$$ At the base point $\Phi(p)=o\in N_+\cap D$, we have $T_{o}^{1,0}N_+=T_o^{1,0}D\simeq \mathfrak{n}_+\simeq N_+$, and the Hodge metric on $T_o^{1,0}D$ induces a Euclidean metric on $N_+$. In the proof of the following lemma, we require all the root vectors to be unit vectors with respect to this Euclidean metric.
Let $\mathfrak{w} \subseteq \mathfrak{n}_+$ be the abelian subalgebra of $\mathfrak{n}_+$ determined by the period map, $${\Phi}_* :\, \text{T}^{1,0}\check{{{\mathcal T}}} \to \text{T}^{1,0}D.$$ By Griffiths transversality, $\mathfrak{w}\subseteq {{\mathfrak g}}^{-1,1}$ is an abelian subspace. Let $$W:= \exp(\mathfrak{w})\subseteq N_+$$ and $P :\, N_+\cap D \to W\cap D$ be the projection map induced by the projection from $N_+$ to its subspace $W$. Then $W$, as a complex Euclidean space, has the induced Euclidean metric from $N_+$. We can consider $W$ as an integral submanifold of the abelian algebra $\mathfrak{w}$ passing through the base point $o=\Phi(p)$, see page 248 in [@CZTG]. For the basic properties of integral manifolds of the horizontal distribution, see Chapter 4 of [@CZTG], or [@CKT] and [@Mayer].
The restricted period map ${\Phi} : \, \check{{{\mathcal T}}}\to N_+\cap D$ composed with the projection map $P$ gives a holomorphic map $$\Psi :\, \check{{{\mathcal T}}} \to W\cap D,$$ that is $\Psi=P \circ \Phi$.
Because the period map is a horizontal map, and the geometry of horizontal slices of the period domain $D$ is similar to that of Hermitian symmetric spaces, as discussed in detail in [@GS], the proof of the following lemma is basically an analogue of the proof of the Harish-Chandra embedding theorem for Hermitian symmetric spaces; see for example [@Mok].
\[abounded\] The restriction of the period map $\Psi:\,\check{\mathcal{T}}\rightarrow W\cap D$ is bounded in $W$ with respect to the Euclidean metric on $W\subset N_+$.
We need to show that there exists $0\leq C<\infty$ such that for any $q\in \check{\mathcal{T}}$, $d_{E}({\Psi}({p}), {\Psi}(q))\leq C$, where $d_E$ is the Euclidean distance on $W\subseteq N_+$. Let $\mathfrak{w}_0:= \mathfrak{w}+ \tau_0(\mathfrak{w})\subseteq \mathfrak{p}_0$ be the abelian subspace of $\mathfrak{p}_0$.
For any $q\in \check{\mathcal{T}}$, there exists a vector $X^+\in \mathfrak{w}\subseteq\mathfrak{g}^{-1,1}\subseteq \mathfrak{n}_+$ and a real number $T_0$ such that $\beta(t)=\exp(tX^+)$ defines a geodesic $\beta: \,[0, T_0]\rightarrow N_+\subseteq G_{\mathbb{C}}$ from $\beta(0)=I$ to $\beta(T_0)$ with $\pi_1(\beta(T_0))=\Psi(q)$, where $\pi_1:\,N_+\rightarrow N_+B/B\subseteq \check{D}$ is the projection map with the fixed base point $\Phi(p )=o\in D$. In this proof, we will not distinguish $N_+\subseteq {G}_{\mathbb{C}}$ from its orbit $N_+B/B\subseteq\check{D}$ with fixed base point $\Phi(p )=o$. Consider $X^-=\tau_0(X^+)\in \mathfrak{g}^{1,-1}$, then $X=X^++X^-\in\mathfrak{w}_0\subseteq(\mathfrak{g}^{-1,1}\oplus \mathfrak{g}^{1,-1})\cap \mathfrak{g}_0$. For any $q\in\check{\mathcal{T}}$, there exists $T_1$ such that $\gamma=\exp(tX): [0, T_1]\rightarrow G_{\mathbb{R}}$ defines a geodesic from $\gamma(0)=I$ to $\gamma(T_1)\in G_{\mathbb{R}}$ such that $\pi_2(\gamma(T_1))=\Psi(q)\in W\cap D$, where $\pi_2: \, G_{\mathbb{R}}\rightarrow G_{\mathbb{R}}/V\simeq D$ denotes the projection map with the fixed base point $\Phi(p )=o$. Let $\Lambda=\{\varphi_1, \cdots, \varphi_r\}\subseteq \Delta^+_{\mathfrak{p}}$ be a set of strongly orthogonal roots given in Proposition \[adjoint\]. We denote $x_{\varphi_i}=e_{\varphi_i}+e_{-\varphi_i}$ and $y_{\varphi_i}=\sqrt{-1}(e_{\varphi_i}-e_{-\varphi_i})$ for any $\varphi_i\in \Lambda$. Then $$\begin{aligned}
\mathfrak{a}_0=\mathbb{R} x_{\varphi_{_1}}\oplus\cdots\oplus\mathbb{R}x_{\varphi_{_r}},\quad\text{and}\quad\mathfrak{a}_c=\mathbb{R} y_{\varphi_{_1}}\oplus\cdots\oplus\mathbb{R}y_{\varphi_{_r}},\end{aligned}$$ are maximal abelian spaces in $\mathfrak{p}_0$ and $\sqrt{-1}\mathfrak{p}_0$ respectively.
Since $X\in \mathfrak{w}_0\subseteq \mathfrak{g}^{-1,1}\oplus \mathfrak{g}^{1,-1}\subseteq \mathfrak{p}_0$, by Proposition \[adjoint\], there exists $k\in K$ such that $ X\in Ad(k)\cdot\mathfrak{a}_0$. As the adjoint action of $K$ on $\mathfrak{p}_0$ is a unitary action and we are only concerned with lengths in this proof, we may simply assume that $X\in \mathfrak{a}_0$ up to a unitary transformation. With this assumption, there exists $\lambda_{i}\in \mathbb{R}$ for $1\leq i\leq r$ such that $$\begin{aligned}
X=\lambda_{1}x_{_{\varphi_1}}+\lambda_{2}x_{_{\varphi_2}}+\cdots+\lambda_{r}x_{_{\varphi_r}}\end{aligned}$$ Since $\mathfrak{a}_0$ is commutative, we have $$\begin{aligned}
\exp(tX)=\prod_{i=1}^r\exp (t\lambda_{i}x_{_{\varphi_i}}).\end{aligned}$$ Now for each $\varphi_i\in \Lambda$, we have $\text{Span}_{\mathbb{C}}\{e_{_{\varphi_i}}, e_{_{-\varphi_i}}, h_{_{\varphi_i}}\}\simeq \mathfrak{sl}_2(\mathbb{C}) \text{ with}$ $$\begin{aligned}
h_{_{\varphi_i}}\mapsto \left[\begin{array}[c]{cc}1&0\\0&-1\end{array}\right], \quad &e_{_{\varphi_i}}\mapsto \left[\begin{array}[c]{cc} 0&1\\0&0\end{array}\right], \quad e_{_{-\varphi_i}}\mapsto \left[\begin{array}[c]{cc} 0&0\\1&0\end{array}\right];
$$ and $\text{Span}_\mathbb{R}\{x_{_{\varphi_i}}, y_{_{\varphi_i}}, \sqrt{-1}h_{_{\varphi_i}}\}\simeq \mathfrak{sl}_2(\mathbb{R})\text{ with}$ $$\begin{aligned}
\sqrt{-1}h_{_{\varphi_i}}\mapsto \left[\begin{array}[c]{cc}i&0\\0&-i\end{array}\right], \quad &x_{_{\varphi_i}}\mapsto \left[\begin{array}[c]{cc} 0&1\\1&0\end{array}\right], \quad y_{_{-\varphi_i}}\mapsto \left[\begin{array}[c]{cc} 0&i\\-i&0\end{array}\right].\end{aligned}$$ Since $\Lambda=\{\varphi_1, \cdots, \varphi_r\}$ is a set of strongly orthogonal roots, we have that $$\begin{aligned}
&\mathfrak{g}_{\mathbb{C}}(\Lambda)=\text{Span}_\mathbb{C}\{e_{_{\varphi_i}}, e_{_{-\varphi_i}}, h_{_{\varphi_i}}\}_{i=1}^r\simeq (\mathfrak{sl}_2(\mathbb{C}))^r,\\\text{and} \quad&\mathfrak{g}_{\mathbb{R}}(\Lambda)=\text{Span}_\mathbb{R}\{x_{_{\varphi_i}}, y_{_{\varphi_i}}, \sqrt{-1}h_{_{\varphi_i}}\}_{i=1}^r\simeq (\mathfrak{sl}_2(\mathbb{R}))^r.\end{aligned}$$ In fact, we know that for any $\varphi, \psi\in \Lambda$ with $\varphi\neq\psi$, $[e_{_{\pm\varphi}}, \,e_{_{\pm\psi}}]=0$ since $\varphi$ is strongly orthogonal to $\psi$; $[h_{_{{\varphi}}},\,h_{_{\psi}}]=0$, since $\mathfrak{h}$ is abelian; $[h_{_{\varphi}},\, e_{_{\pm\psi}}]=\sqrt{-1}[[e_{_{\varphi}},\,e_{_{-\varphi}}],\,e_{_{\pm\psi}}]=\sqrt{-1}[e_{_{-\varphi}},\,[e_{_{\varphi}},\,e_{_{\pm\psi}}]]=0$.
Then we denote $G_{\mathbb{C}}(\Lambda)=\exp(\mathfrak{g}_{\mathbb{C}}(\Lambda))\simeq (SL_2(\mathbb{C}))^r$ and $G_{\mathbb{R}}(\Lambda)=\text{exp}(\mathfrak{g}_{\mathbb{R}}(\Lambda))=(SL_2(\mathbb{R}))^r$, which are subgroups of $G_{\mathbb{C}}$ and $G_{\mathbb{R}}$ respectively. With the fixed reference point $o=\Phi(p)$, we denote $D(\Lambda)=G_\mathbb{R}(\Lambda)(o)$ and $S(\Lambda)=G_{\mathbb{C}}(\Lambda)(o)$ to be the corresponding orbits of these two subgroups, respectively. Then we have the following isomorphisms, $$\begin{aligned}
D(\Lambda)=G_{\mathbb{R}}(\Lambda)\cdot B/B&\simeq G_{\mathbb{R}}(\Lambda)/G_{\mathbb{R}}(\Lambda)\cap V,\label{isomorphism1}\\
S(\Lambda)\cap (N_+B/B)=(G_{\mathbb{C}}(\Lambda)\cap N_+)\cdot B/B&\simeq (G_{\mathbb{C}}(\Lambda)\cap N_+)/(G_{\mathbb{C}}(\Lambda)\cap N_+\cap B).\label{isomorphism2}\end{aligned}$$ With the above notations, we will show that (i). $D(\Lambda)\subseteq S(\Lambda)\cap (N_+B/B)\subseteq \check{D}$; (ii). $D(\Lambda)$ is bounded inside $S(\Lambda)\cap(N_+B/B)$.
Notice that $X\in \mathfrak{g}^{-1,1}\oplus\mathfrak{g}^{1,-1}$. By Proposition \[puretype\], we know that for each pair of root vectors $\{e_{_{\varphi_i}}, e_{_{-\varphi_i}}\}$, either $e_{_{\varphi_i}}\in \mathfrak{g}^{-1,1}\subseteq \mathfrak{n}_+$ and $e_{_{-\varphi_i}}\in \mathfrak{g}^{1,-1}$, or $e_{_{\varphi_i}}\in \mathfrak{g}^{1,-1}$ and $e_{_{-\varphi_i}}\in \mathfrak{g}^{-1,1}\subseteq\mathfrak{n}_+$. For notational simplicity, for each pair of root vectors $\{e_{_{\varphi_i}}, e_{_{-\varphi_i}}\}$, we may assume the one in $\mathfrak{g}^{-1,1}\subseteq \mathfrak{n}_+$ to be $e_{_{\varphi_i}}$ and denote the one in $\mathfrak{g}^{1,-1}$ by $e_{_{-\varphi_i}}$. In this way, one can check that $\{\varphi_1, \cdots, \varphi_r\}$ may not be a set in $\Delta^+_{\mathfrak{p}}$, but it is a set of strongly orthogonal roots in $\Delta_{\mathfrak{p}}$. Therefore, we have the following description of the above groups: $$\begin{aligned}
&G_{\mathbb{R}}(\Lambda)=\exp(\mathfrak{g}_{\mathbb{R}}(\Lambda))=\exp(\text{Span}_{\mathbb{R}}\{ x_{_{\varphi_1}}, y_{_{\varphi_1}},\sqrt{-1}h_{_{\varphi_1}}, \cdots, x_{_{\varphi_r}}, y_{_{\varphi_r}}, \sqrt{-1}h_{_{\varphi_r}}\});\\
&G_{\mathbb{R}}(\Lambda)\cap V=\exp(\mathfrak{g}_{\mathbb{R}}(\Lambda)\cap \mathfrak{v}_0)=\exp(\text{Span}_{\mathbb{R}}\{\sqrt{-1}h_{_{\varphi_1}}, \cdots,\sqrt{-1} h_{_{\varphi_r}}\});\\
&G_{\mathbb{C}}(\Lambda)\cap N_+=\exp(\mathfrak{g}_{\mathbb{C}}(\Lambda)\cap \mathfrak{n}_+)=\exp(\text{Span}_{\mathbb{C}}\{e_{_{\varphi_1}}, e_{_{\varphi_2}}, \cdots, e_{_{\varphi_r}}\});\\
&G_{\mathbb{C}}(\Lambda)\cap B=\exp(\mathfrak{g}_{\mathbb{C}}(\Lambda)\cap \mathfrak{b})=\exp(\text{Span}_{\mathbb{C}}\{h_{_{\varphi_1}}, e_{_{-\varphi_1}}, \cdots, h_{_{\varphi_r}}, e_{_{-\varphi_r}}\}).\end{aligned}$$ Thus by the isomorphisms in (\[isomorphism1\]) and (\[isomorphism2\]), we have $$\begin{aligned}
&D(\Lambda)\simeq \prod_{i=1}^r\exp(\text{Span}_{\mathbb{R}}\{x_{_{\varphi_i}}, y_{_{\varphi_i}}, \sqrt{-1}h_{_{\varphi_i}}\})/\exp(\text{Span}_{\mathbb{R}}\{\sqrt{-1}h_{_{\varphi_i}}\}),\\& \quad S(\Lambda)\cap (N_+B/B)\simeq\prod_{i=1}^r\exp(\text{Span}_{\mathbb{C}}\{e_{_{\varphi_i}}\}).\end{aligned}$$ Let us denote $G_{\mathbb{C}}(\varphi_i)=\exp(\text{Span}_{\mathbb{C}}\{e_{_{\varphi_i}}, e_{_{-\varphi_i}}, h_{_{{\varphi}_i}}\})\simeq SL_2(\mathbb{C}),$ $S(\varphi_i)=G_{\mathbb{C}}(\varphi_i)(o)$, and $G_{\mathbb{R}}(\varphi_i)=\exp(\text{Span}_{\mathbb{R}}\{x_{_{\varphi_i}}, y_{_{\varphi_i}}, \sqrt{-1}h_{_{\varphi_i}}\})\simeq SL_2(\mathbb{R}),$ $D(\varphi_i)=G_{\mathbb{R}}(\varphi_i)(o)$. On one hand, each point in $S(\varphi_i)\cap (N_+B/B)$ can be represented by $$\begin{aligned}
\text{exp}(ze_{_{\varphi_i}})=\left[\begin{array}[c]{cc}1&0\\ z& 1\end{array}\right] \quad \text{for some } z\in \mathbb{C}.\end{aligned}$$ Thus $S(\varphi_i)\cap (N_+B/B)\simeq \mathbb{C}$. On the other hand, write $z=a+bi$ with $a,b\in\mathbb{R}$; then $$\begin{aligned}
&\exp(ax_{_{\varphi_i}}+by_{_{\varphi_i}})=\left[\begin{array}[c]{cc}\cosh |z|&\frac{{\overline}{z}}{|z|}\sinh |z|\\ \frac{z}{|z|}\sinh |z|& \cosh |z|\end{array}\right]\\
\\
&=\left[\begin{array}[c]{cc}1& 0\\ \frac{z}{|z|}\tanh |z|&1\end{array}\right]\left[\begin{array}[c]{cc}\cosh |z|&0\\0&(\cosh |z|)^{-1}\end{array}\right]\left[\begin{array}[c]{cc}1&\frac{{\overline}{z}}{|z|}\tanh |z|\\ 0&1\end{array}\right]\\
\\
&=\exp\left[(\frac{z}{|z|}\tanh |z|) e_{_{\varphi_i}}\right]\exp\left[(-\log \cosh |z|)h_{_{\varphi_i}}\right]\exp\left[(\frac{{\overline}{z}}{|z|}\tanh |z| )e_{_{-\varphi_i}}\right].\end{aligned}$$ So elements in $D(\varphi_i)$ can be represented by $\exp[(z/|z|)(\tanh |z|) e_{_{\varphi_i}}]$, i.e. the lower triangular matrix $$\begin{aligned}
\left[\begin{array}[c]{cc}1& 0\\ \frac{z}{|z|}\tanh |z|&1\end{array}\right],\end{aligned}$$ in which $\frac{z}{|z|}\tanh |z|$ is a point in the unit disc $\mathfrak{D}$ of the complex plane. Therefore $D(\varphi_i)$ is the unit disc $\mathfrak{D}$ in the complex plane $S(\varphi_i)\cap (N_+B/B)$. Hence $$D(\Lambda)\simeq \mathfrak{D}^r\quad\text{and}\quad S(\Lambda)\cap N_+\simeq \mathbb{C}^r.$$ So we have obtained both (i) and (ii). As a consequence, we get that for any $q\in \check{\mathcal{T}}$, $\Psi(q)\in D(\Lambda)$. This implies $$\begin{aligned}
d_E(\Psi(p), \Psi(q))\leq \sqrt{r},\end{aligned}$$ where $d_E$ is the Euclidean distance on $S(\Lambda)\cap (N_+B/B)$.
To complete the proof, we only need to show that $S(\Lambda)\cap (N_+B/B)$ is totally geodesic in $N_+B/B$. In fact, the tangent space of $N_+$ at the base point is $\mathfrak{n}_+$ and the tangent space of $S(\Lambda)\cap N_+B/B$ at the base point is $\text{Span}_{\mathbb{C}}\{e_{_{\varphi_1}}, e_{_{\varphi_2}}, \cdots, e_{_{\varphi_r}}\}$. Since $\text{Span}_{\mathbb{C}}\{e_{_{\varphi_1}}, e_{_{\varphi_2}}, \cdots, e_{_{\varphi_r}}\}$ is a sub-Lie algebra of $\mathfrak{n}_+$, the corresponding orbit $S(\Lambda)\cap N_+B/B$ is totally geodesic in $N_+B/B$. Here the basis $\{e_{_{\varphi_1}}, e_{_{\varphi_2}}, \cdots, e_{_{\varphi_r}}\}$ is an orthonormal basis with respect to the pull-back Euclidean metric.
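The elementary $SL_2$ identity underlying the above computation can also be checked numerically. The following sketch is not part of the argument: the explicit $2\times 2$ matrices, the variable names and the use of numpy/scipy are assumptions made only for illustration, with one consistent choice of sign conventions so that the displayed decomposition is reproduced exactly.

``` python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 realizations (an assumption of this sketch; one of several
# consistent sign conventions).  They play the roles of x_{phi}, y_{phi},
# e_{phi} (the nilpotent direction inside N_+), e_{-phi} and h_{phi}.
x = np.array([[0, 1], [1, 0]], dtype=complex)
y = np.array([[0, -1j], [1j, 0]], dtype=complex)
e_low = np.array([[0, 0], [1, 0]], dtype=complex)
e_up = np.array([[0, 1], [0, 0]], dtype=complex)
h = np.diag([-1.0, 1.0]).astype(complex)

rng = np.random.default_rng(1)
a, b = rng.standard_normal(2)
z = a + 1j * b
r = abs(z)

# exp(a x + b y) = exp((z/|z|)tanh|z| e_low) exp(-log(cosh|z|) h) exp((zbar/|z|)tanh|z| e_up)
lhs = expm(a * x + b * y)
rhs = (expm((z / r) * np.tanh(r) * e_low)
       @ expm(-np.log(np.cosh(r)) * h)
       @ expm((np.conj(z) / r) * np.tanh(r) * e_up))
assert np.allclose(lhs, rhs)

# the N_+ coordinate of the corresponding point, (z/|z|)tanh|z|, lies in the unit disc
assert abs((z / r) * np.tanh(r)) < 1
```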
Although not needed in the proof of the above lemma, we can also show that the above inclusion $D(\varphi_i)\subseteq D$ is totally geodesic with respect to the Hodge metric. In fact, the tangent space of $D(\varphi_i)$ at the base point is $\text{Span}_{\mathbb{R}}\{x_{_{\varphi_i}}, y_{_{\varphi_i}}\}$, which satisfies $$\begin{aligned}
&[x_{_{\varphi_i}}, [x_{_{\varphi_i}}, y_{_{\varphi_i}}]]=4y_{_{\varphi_i}},\\
&[y_{_{\varphi_i}}, [y_{_{\varphi_i}}, x_{_{\varphi_i}}]]=4x_{_{\varphi_i}}.\end{aligned}$$ So the tangent space of $D(\varphi_i)$ forms a Lie triple system, and consequently $D(\varphi_i)$ is a totally geodesic submanifold of $D$. The fact that the exponential image of a Lie triple system is totally geodesic follows from [@Hel], Ch 4 $\S$7, and we note that this result still holds true for locally homogeneous spaces instead of only for symmetric spaces. Moreover, the pull-back of the Hodge metric to $D(\varphi_i)$ is a $G_{\mathbb{R}}(\varphi_i)$-invariant metric, and therefore must be the Poincaré metric on the unit disc. In fact, more generally, we have
If ${\widetilde}{G}$ is a subgroup of $G_{\mathbb{R}}$, then the orbit ${\widetilde}{D}={\widetilde}{G}(o)$ is totally geodesic in $D$, and the induced metric on ${\widetilde}{D}$ is ${\widetilde}{G}$ invariant.
Firstly, ${\widetilde}{D}\simeq {\widetilde}{G}/({\widetilde}{G}\cap V)$ is a quotient space. The metric induced from the Hodge metric on $D$ is $G_{\mathbb{R}}$-invariant, and therefore ${\widetilde}{G}$-invariant. Now let $\gamma:\,[0,1]\to {\widetilde}{D}$ be any geodesic; then there is a local one-parameter subgroup $S:\,[0,1] \to{\widetilde}{G}$ such that $\gamma(t)=S(t)\cdot \gamma(0)$. On the other hand, because ${\widetilde}{G}$ is a subgroup of $G_{\mathbb{R}}$, $S(t)$ is also a one-parameter subgroup of $G_{\mathbb{R}}$; therefore the curve $\gamma(t)=S(t)\cdot \gamma(0)$ also gives a geodesic in $D$. Since geodesics on ${\widetilde}{D}$ are also geodesics on $D$, we have proved that ${\widetilde}{D}$ is totally geodesic in $D$.
We recall that according to the discussion in Section 2.3, we have the following commutative diagram, $$\begin{aligned}
\xymatrix{\mathcal{T}\ar[d]^{\pi_m}\ar[r]^{\Phi}&D\ar[d]^{\pi_D}\\\mathcal{Z}_m\ar[r]^{\Phi_{\mathcal{Z}_m}}&D/\Gamma,}\end{aligned}$$ where $\pi_m:\,\mathcal{T}\rightarrow \mathcal{Z}_m$ is the universal covering map, and $\pi_D:\,D\rightarrow D/\Gamma$ is the natural projection map. Let us denote the Hodge metric completion of $\mathcal{Z}_m$ by $\mathcal{Z}^H_{_m}$, and let $\mathcal{T}^H_{_m}$ be the universal cover of $\mathcal{Z}^H_{_m}$, for which the details will be given in Section 4.1. By the definition of $\mathcal{Z}^H_{_m}$, there exists a continuous extension map $\Phi^H_{_{\mathcal{Z}_m}}$ of $\Phi_{\mathcal{Z}_m}$. Moreover, since $\mathcal{T}$ is the universal cover of the moduli space $\mathcal{Z}_m$, we have the following commutative diagram, $$\begin{aligned}
\label{diagram2}
\xymatrix{\mathcal{T}\ar[r]^{i_{m}}\ar[d]^{\pi_m}&\mathcal{T}^H_{_m}\ar[d]^{\pi_{_m}^H}\ar[r]^{{\Phi}^{H}_{_m}}&D\ar[d]^{\pi_D}\\
\mathcal{Z}_m\ar[r]^{i}&\mathcal{Z}^H_{_m}\ar[r]^{{\Phi}_{_{\mathcal{Z}_m}}^H}&D/\Gamma,
}\end{aligned}$$ where $i$ is the inclusion map, ${i}_{_m}$ is a lifting map of $i\circ\pi_m$, $\pi_D$ is the covering map and ${\Phi}^{H}_{_m}$ is a lifting map of ${\Phi}_{_{\mathcal{Z}_m}}^H\circ \pi_{_m}^H$. In particular, $\Phi^H_{_m}$ is a continuous map from $\mathcal{T}^H_{_m}$ to $D$. We notice that the lifting maps ${i}_{_m}$ and ${\Phi}^H_{_m}$ are not unique, but Lemma \[choice\] implies that there exists a suitable choice of $i_{m}$ and $\Phi_{_m}^H$ such that $\Phi=\Phi^H_{_m}\circ i_{m}$. We will fix the choice of $i_{m}$ and $\Phi^{H}_m$ such that $\Phi=\Phi^H_{_m}\circ i_m$ in the rest of the paper. Let us denote $\mathcal{T}_m:=i_{m}(\mathcal{T})$ and the restriction map $\Phi_m=\Phi^H_{_m}|_{\mathcal{T}_m}$. Then we also have $\Phi=\Phi_m\circ i_m$. We refer the reader to Section 4.1 for the details about the completion spaces $\mathcal{Z}^H_{_m}$ and $\mathcal{T}^H_{_m}$.
In the following proof, we will fix an $m\geq 3$ and simply drop the subscript $m$; that is, we will simply use $\mathcal{Z}$ to denote $\mathcal{Z}_m$, use $\mathcal{Z}^H$ to denote $\mathcal{Z}_{_m}^H$, use $\mathcal{T}^H$ to denote $\mathcal{T}^H_{_m}$, and we will also simplify the notation in (\[diagram2\]) as follows, $$\begin{aligned}
\label{diagram3}
\xymatrix{\mathcal{T}\ar[r]^{i_\mathcal{T}}\ar[d]^{\pi}&\mathcal{T}^H\ar[d]^{\pi^H}\ar[r]^{{\Phi}^{H}}&D\ar[d]^{\pi_D}\\
\mathcal{Z}\ar[r]^{i}&\mathcal{Z}^H\ar[r]^{{\Phi}_{_{\mathcal{Z}}}^H}&D/\Gamma.
}\end{aligned}$$ We remark that we will show that actually $\mathcal{T}^H_{_m}$ and $\mathcal{T}^H_{_{m'}}$ are biholomorphic to each other for any $m, m'\geq 3$. Moreover, using the continuity of $\Phi^H:\,\mathcal{T}^H\rightarrow D$ and the Griffiths transversality of $\Phi:\,\mathcal{T}\rightarrow D$, we will also prove that the map $\Phi^H:\,\mathcal{T}^H\rightarrow D$ still satisfies the Griffiths transversality, and a detailed proof of this property is given in Lemma \[GTonTH\]. With the above preparation, we are ready to prove the following theorem.
\[locallybounded\] The image of the restriction of the period map $\Phi :\, \check{{{\mathcal T}}}
\to N_+\cap D$ is bounded in $N_+$ with respect to the Euclidean metric on $N_+$.
In Lemma \[abounded\], we already proved that the image of $\Psi=P\circ \Phi$ is bounded with respect to the Euclidean metric on $W\subseteq N_+$. Now together with the Griffiths transversality, we will deduce the boundedness of the image of $\Phi :\, \check{{{\mathcal T}}} \to
N_+\cap D$ from the boundedness of the image of $\Psi$.
In fact, $W\cap D$ is an integral submanifold in $D$ of the abelian algebra $\mathfrak{w} \subseteq {{\mathfrak g}}^{-1,1}$ passing through the base point $o=\Phi(p)$, and so is $\Phi(\check{{{\mathcal T}}})$, which is an analytic variety; see Theorem 9.6 of [@Griffiths3]. Restricted to $\Phi(\check{{{\mathcal T}}})$ and given any point $z=\Psi(q)\in \Psi(\check{{{\mathcal T}}})$, $P$ is the projection map from the intersection $\Phi(\check{{{\mathcal T}}})\cap \pi_K^{-1}(z')$ to $z$, where $z'=\pi_K(z)\in G_\mathbb{R}/K$ and $\pi_K : D\to G_\mathbb{R}/K$ is the natural projection map. Therefore given any point $z\in W\cap D$, $P^{-1}(z)$ is the set of points in $\Phi(\check{{{\mathcal T}}})\cap \pi_K^{-1}(z')$. Note that the Griffiths transversality implies that the projection map $\pi_K$ is a locally one-to-one map when restricted to a horizontal slice, which is a small open neighborhood of the integral submanifold of the horizontal distribution. For the local structure of integral manifolds of distributions, see for example Proposition 19.16 on page 500 of [@Lee] and Corollary 2.1 in [@Lu].
Now let $\mathfrak{p}_h=\mathfrak{p}/\mathfrak{p}\cap \mathfrak{b}\subseteq \mathfrak{n}_+$ denote the horizontal tangent space of $D$ at the base point $o$, which is viewed as a Euclidean subspace of $\mathfrak{n}_+$. We consider the natural projection $P_h:\, D\cap N_+\to D\cap \mathrm{exp}(\mathfrak{p}_h)$ by viewing $ \mathrm{exp}(\mathfrak{p}_h)$ as a Euclidean subspace of $N_+$. It is easy to see that the natural projection $\pi_K : D\to G_\mathbb{R}/K$, when restricted to $D\cap \mathrm{exp}(\mathfrak{p}_h)$, is surjective onto $G_\mathbb{R}/K$.
We consider $D\cap W$ as a submanifold of $D\cap \mathrm{exp}(\mathfrak{p}_h)$. Then there exists a neighborhood $U$ of the base point $o$ in $D$ such that the image of $U\cap {\Phi}(\check{{{\mathcal T}}}) $ under this projection $P_h$ lies inside $D\cap W$, since both are integral submanifolds of the horizontal distribution given by the abelian algebra $\mathfrak{w}$. That means that the complex analytic subvariety $P_h^{-1}(D\cap W)\cap{\Phi}(\check{{{\mathcal T}}})$ contains the open subset $U\cap {\Phi}(\check{{{\mathcal T}}})$ of ${\Phi}(\check{{{\mathcal T}}})$. However, we can deduce from the irreducibility of ${{\mathcal T}}$ that ${\Phi}(\check{{{\mathcal T}}})$ is a connected irreducible complex analytic variety. Hence $P_h^{-1}(D\cap W)\cap {\Phi}(\check{{{\mathcal T}}})={\Phi}(\check{{{\mathcal T}}})$, that is, we have obtained that $P_h({\Phi}(\check{{{\mathcal T}}}))$ lies inside $D\cap W$ completely. This shows that any fiber of the projection $\pi_K : D\to G_\mathbb{R}/K$ that intersects ${\Phi}(\check{{{\mathcal T}}})$ must intersect $D\cap W$.
With the above in mind, the proof can be divided into the following two steps.
\(i) We claim that there are only finitely many points in $P^{-1}(z)$ for any $z\in \Psi(\check{{{\mathcal T}}})$.
Otherwise, we have $\{q_i\}_{i=1}^{\infty}\subseteq \check{{{\mathcal T}}}$ and $\{y_i=\Phi(q_i)\}_{i=1}^{\infty}\subseteq P^{-1}(z)$ with limiting point $y_{\infty}\in \pi_K^{-1}(z')\simeq K/V$, since $K/V$ is compact. Using diagram (\[diagram3\]), we project the points $q_i$ to $q'_i\in \mathcal{Z}$ via the universal covering map $\pi:\, {{\mathcal T}}\to \mathcal{Z}$. First note that there must be infinitely many $q'_i$’s. Otherwise, we have a subsequence $\{q_{j_k}\}$ of $\{q_i\}$ such that $\pi(q_{j_k})=q'_{i_0}$ for some $i_0$ and $$y_{j_k}=\Phi(q_{j_k})=\gamma_{k}\Phi(q_{j_0})=\gamma_{k}y_{j_0},$$ where $\gamma_k \in \Gamma$ is the monodromy action. Since $\Gamma$ is discrete, the subsequence $\{y_{j_k}\}$ is not convergent, which is a contradiction.
Now we project the points $q_i$ to $\mathcal{Z}$ via the universal covering map $\pi:\, {{\mathcal T}}\to \mathcal{Z}$ and still denote them by $q_i$ without confusion. Then the sequence $\{q_i\}_{i=1}^{\infty}\subseteq \mathcal{Z}$ has a limiting point $q_\infty$ in ${\overline}{\mathcal{Z}}$, where ${\overline}{\mathcal{Z}}$ is a compactification of $\mathcal{Z}$; such a compactification exists by the work of Viehweg [@v1]. By continuity, the period map $\Phi_{\mathcal{Z}} :\, \mathcal{Z} \to D/\Gamma$ extends continuously over $q_\infty$ with ${\Phi}_{_{\mathcal{Z}}}^H(q_\infty)=\pi_D(y_\infty) \in D/\Gamma$, where $\pi_D :\, D\to D/\Gamma$ is the projection map. Thus $q_\infty$ lies in $\mathcal{Z}^H$. Now we can regard the sequence $\{q_i\}_{i=1}^{\infty}$ as a convergent sequence in $\mathcal{Z}^H$ with limiting point $q_\infty \in \mathcal{Z}^H$. We can also choose a sequence $\{{\widetilde}{q}_i\}_{i=1}^{\infty}\subseteq {{\mathcal T}}^H$ with limiting point ${\widetilde}{q}_\infty\in {{\mathcal T}}^H$ such that ${\widetilde}{q}_i$ maps to $q_i$ via the universal covering map $\pi^H :\, {{\mathcal T}}^H\to \mathcal{Z}^H$ and $\Phi^H({\widetilde}{q}_i)=y_i \in D$, for $i\ge 1$ and $i=\infty$. Since the extended period map $\Phi^H :\, {{\mathcal T}}^H \to D$ still satisfies the Griffiths transversality by Lemma \[GTonTH\], we can choose a small neighborhood $U$ of ${\widetilde}{q}_\infty$ such that $U$, and thus the points ${\widetilde}{q}_i$ for $i$ sufficiently large, are mapped into a horizontal slice. This is a contradiction: the images $y_i=\Phi^H({\widetilde}{q}_i)$ are infinitely many distinct points in the fiber $\pi_K^{-1}(z')$, while $\pi_K$ is injective on a horizontal slice.
\(ii) Let $r(z)$ be the cardinality of the fiber $P^{-1}(z)$, for any $z\in \Psi(\check{{{\mathcal T}}})$. We claim that $r(z)$ is locally constant.
To be precise, for any $z\in \Psi(\check{{{\mathcal T}}})$, let $r=r(z)$ and choose points $x_1,\cdots,x_r$ in $N_+\cap D$ such that $P(x_i)=z$. Let $U_i$ be the horizontal slice around $x_i$ such that the restricted map $P :\, U_i \to P(U_i)$ is injective, $i=1,\cdots,r$. We choose the balls $B_i$, $i=1,\cdots,r$ small enough in $N_+\cap D$ such that $B_i$’s are mutually disjoint and $B_i\supseteq U_i$. We claim that there exists a small neighborhood $V\ni z$ in $W\cap D$ such that for any $i=1,\cdots, r$, the restricted map $$\label{localconstant}
P :\, P^{-1}(V)\cap B_i \to V \text{ is injective}.$$ To prove the above claim, we consider pairs of sequences $(\{z_k\}, \{y_k\})$ satisfying the following condition ($*_i$) for some $i$:
- ($*_i$) $\{z_k\}_{k=1}^\infty$ is a convergent sequence in $W\cap D$ with limiting point $z$, and $\{y_k\}_{k=1}^\infty$ is a convergent sequence in $N_+\cap D$ with limiting point $x_i$ such that $P(y_k)=z_k$ and $y_k \in B_i\setminus U_i$ for any $k\ge 1$.
We will prove that no pair of sequences $(\{z_k\}, \{y_k\})$ as in ($*_i$) exists for any $i=1,\cdots,r$, which implies that we can find a small neighborhood $V\ni z$ in $W\cap D$ such that $P^{-1}(V)\cap B_i=U_i$. Hence (\[localconstant\]) holds and $r(z)$ is locally constant.
In fact, if for some $i$ there exists a pair of sequences $(\{z_k\}, \{y_k\})$ as in ($*_i$), then by an argument similar to that in (i) we can find a sequence $\{q_k\}_{k=1}^\infty$ in ${{\mathcal T}}^H$ with limiting point $q_\infty \in {{\mathcal T}}^H$ such that $\Phi^H(q_k)=y_k$ for any $k\ge 1$ and $\Phi^H(q_\infty)=x_i$. Since $\Phi^H : {{\mathcal T}}^H \to D$ still satisfies the Griffiths transversality by Lemma \[GTonTH\], we can choose a small neighborhood $S\ni q_\infty$ in ${{\mathcal T}}^H$ such that $\Phi^H(S)\subseteq U_i$. Then for $k$ sufficiently large, $y_k=\Phi^H(q_k)\in U_i$, which contradicts ($*_i$).
Therefore from (i) and (ii), we conclude that the image $\Phi(\check{{{\mathcal T}}})\subseteq N_+\cap D$ is also bounded.
In order to prove Corollary \[image\], we first show that $\mathcal{T}\backslash\check{\mathcal{T}}$ is an analytic subvariety of $\mathcal{T}$ with $\text{codim}_{\mathbb{C}}(\mathcal{T}\backslash\check{\mathcal{T}})\geq 1$.
\[transversal\]Let $p\in\mathcal{T}$ be the base point with $\Phi(p)=\{F^n_p\subseteq F^{n-1}_p\subseteq \cdots\subseteq F^0_p\}.$ Let $q\in \mathcal{T}$ be any point with $\Phi(q)=\{F^n_q\subseteq F^{n-1}_q\subseteq \cdots\subseteq F^0_q\}$, then $\Phi(q)\in N_+$ if and only if $F^{k}_q$ is isomorphic to $F^k_p$ for all $0\leq k\leq n$.
For any $q\in \mathcal{T}$, we choose an arbitrary adapted basis $\{\zeta_0, \cdots, \zeta_{m-1}\}$ for the given Hodge filtration $\{F^n_q\subseteq F^{n-1}_q\subseteq\cdots\subseteq F^0_q\}$. Recall that $\{\eta_0, \cdots, \eta_{m-1}\}$ is the adapted basis for the Hodge filtration $\{F^n_p \subseteq F^{n-1}_p\subseteq \cdots\subseteq F^0_p\}$ for the base point $p$. Let $[A^{i,j}(q)]_{0\leq i,j\leq n}$ be the transition matrix between the basis $\{\eta_0,\cdots, \eta_{m-1}\}$ and $\{\zeta_0, \cdots, \zeta_{m-1}\}$ for the same vector space $H^{n}(M, \mathbb{C})$, where $A^{i,j}(q)$ are the corresponding blocks. Recall that elements in $N_+$ and $B$ have matrix representations with the fixed adapted basis at the base point: elements in $N_+$ can be realized as nonsingular block lower triangular matrices with identity blocks in the diagonal; elements in $B$ can be realized as nonsingular block upper triangular matrices. Therefore $\Phi(q)\in N_+=N_+B/B\subseteq \check{D}$ if and only if its matrix representation $[A^{i,j}(q)]_{0\leq i,j\leq n}$ can be decomposed as $L(q)\cdot U(q)$, where $L(q)$ is a nonsingular block lower triangular matrix with identities in the diagonal blocks, and $U(q)$ is a nonsingular block upper triangular matrix. By basic linear algebra, we know that $[A^{i,j}(q)]$ has such decomposition if and only if $\det[A^{i,j}(q)]_{0\leq i, j\leq k}\neq 0$ for any $0\leq k\leq n$. In particular, we know that $[A(q)^{i,j}]_{0\leq i,j\leq k}$ is the transition map between the bases of $F^k_p$ and $F^k_q$. Therefore, $\det([A(q)^{i,j}]_{0\leq i,j\leq k})\neq 0$ if and only if $F^k_q$ is isomorphic to $F^k_p$.
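The linear-algebra fact quoted in the proof — that the factorization $[A^{i,j}(q)]=L(q)\cdot U(q)$ exists precisely when all leading principal block submatrices are nonsingular — is the standard block $LU$ criterion. The following is a minimal numerical sketch of this criterion; the function name, the block sizes and the use of numpy are illustrative assumptions and are not notation from this paper.

``` python
import numpy as np

def block_lu(A, sizes):
    """Block Gaussian elimination without pivoting: returns (L, U) with
    A = L @ U, where L is block lower triangular with identity diagonal
    blocks and U is block upper triangular.  A pivot block is singular
    (raising LinAlgError) exactly when some leading principal block
    submatrix of A is singular."""
    A = np.asarray(A, dtype=complex)
    m = A.shape[0]
    assert sum(sizes) == m
    starts = np.cumsum([0] + list(sizes))
    L = np.eye(m, dtype=complex)
    U = A.copy()
    for k in range(len(sizes)):
        i0, i1 = starts[k], starts[k + 1]
        piv_inv = np.linalg.inv(U[i0:i1, i0:i1])
        for s in range(k + 1, len(sizes)):
            j0, j1 = starts[s], starts[s + 1]
            factor = U[j0:j1, i0:i1] @ piv_inv
            L[j0:j1, i0:i1] = factor
            U[j0:j1, i0:] -= factor @ U[i0:i1, i0:]
    return L, U

# sanity check on a random complex matrix with (hypothetical) block sizes
sizes = (1, 3, 2)   # mimicking f^n - f^{n+1} = 1 in the Calabi-Yau case
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

# the leading principal block submatrices are generically nonsingular ...
for k in range(1, len(sizes) + 1):
    assert abs(np.linalg.det(A[:sum(sizes[:k]), :sum(sizes[:k])])) > 1e-12

# ... so A = L U with L block lower unipotent and U block upper triangular
L, U = block_lu(A, sizes)
assert np.allclose(L @ U, A)
```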
\[codimension\]The subset $\check{\mathcal{T}}$ is an open dense submanifold in $\mathcal{T}$, and $\mathcal{T}\backslash \check{\mathcal{T}}$ is an analytic subvariety of $\mathcal{T}$ with $\text{codim}_{\mathbb{C}}(\mathcal{T}\backslash \check{\mathcal{T}})\geq 1$.
From Lemma \[transversal\], one can see that $\check{D}\setminus N_+\subseteq \check{D}$ is an analytic subvariety, defined by the condition $$\begin{aligned}
\det [A^{i,j}(q)]_{0\leq i,j\leq k}=0 \quad \text{for some } 0\leq k\leq n.\end{aligned}$$ Therefore $N_+$ is dense in $\check{D}$, and $\check{D}\setminus N_+$ is an analytic subvariety which is closed in $\check{D}$ with $\text{codim}_{\mathbb{C}}(\check{D}\backslash N_+)\geq 1$. We consider the period map $\Phi:\,\mathcal{T}\to \check{D}$ as a holomorphic map to $\check{D}$; then $\mathcal{T}\setminus \check{\mathcal{T}}=\Phi^{-1}(\check{D}\setminus N_+)$ is the preimage of the analytic subvariety $\check{D}\setminus N_+$ under the holomorphic map $\Phi$. So $\mathcal{T}\setminus \check{\mathcal{T}}$ is also an analytic subvariety and a closed subset of $\mathcal{T}$. Because $\mathcal{T}$ is smooth and connected, if $\dim(\mathcal{T}\setminus \check{\mathcal{T}})=\dim\mathcal{T}$, then $\mathcal{T}\setminus \check{\mathcal{T}}=\mathcal{T}$ and $\check{\mathcal{T}}=\emptyset$. But this contradicts the fact that the reference point $p\in \check{\mathcal{T}}$. So we have $\dim(\mathcal{T}\setminus \check{\mathcal{T}})<\dim\mathcal{T}$, and consequently $\text{codim}_{\mathbb{C}}(\mathcal{T}\backslash \check{\mathcal{T}})\geq 1$.
We can also prove this lemma in a more direct manner. Let $p\in \mathcal{T}$ be the base point. For any $q\in \mathcal{T}$, let us still use $[A(q)^{i,j}]_{0\leq i,j\leq n}$ as the transition matrix between the adapted bases $\{\eta_0,\cdots, \eta_{m-1}\}$ and $\{\zeta_0, \cdots, \zeta_{m-1}\}$ for the Hodge filtrations at $p$ and $q$, respectively. In particular, we know that $[A(q)^{i,j}]_{0\leq i,j\leq k}$ is the transition matrix between the bases of $F^k_p$ and $F^k_q$. Therefore, $\det([A(q)^{i,j}]_{0\leq i,j\leq k})\neq 0$ if and only if $F^k_q$ is isomorphic to $F^k_p$. We recall the inclusion $$\begin{aligned}
D\subseteq \check D \subseteq\check{\mathcal{F}}\subseteq Gr{\left (}f^n,
H^n(X,{{\mathbb C}}) {\right )}\times \cdots \times Gr{\left (}f^1, H^n(X,{{\mathbb C}}) {\right )}\end{aligned}$$ where $\check{\mathcal F}=\left \{ F^n\subseteq \cdots \subseteq F^1 \subseteq
H^n(X,{{\mathbb C}}) \mid \dim_{{\mathbb C}}F^k=f^k\right\}$ and $Gr(f^k, H^n(X, \mathbb{C}))$ is the Grassmannian of complex vector subspaces of dimension $f^k$ in $H^n(X, \mathbb{C})$. For each $1\leq k\leq n$ the points in $Gr{\left (}f^k, H^n(X,{{\mathbb C}}) {\right )}$ whose corresponding vector spaces are not isomorphic to the reference vector space $F_p^k$ form a divisor $Y_k\subseteq Gr{\left (}f^k, H^n(X,{{\mathbb C}}) {\right )}$. Now we consider the divisor $Y\subseteq \prod_{k=1}^n Gr{\left (}f^k, H^n(X,{{\mathbb C}}) {\right )}$ given by $$\begin{aligned}
Y=\sum_{k=1}^n {\left (}\prod_{j<k} Gr{\left (}f^j, H^n(X,{{\mathbb C}}) {\right )}\times Y_k
\times \prod_{j>k} Gr{\left (}f^j, H^n(X,{{\mathbb C}}) {\right )}{\right )}.\end{aligned}$$ Then $Y\cap D$ is also a divisor in $D$. By Lemma \[transversal\], we know that $\Phi(q)\in N_+$ if and only if $F^k_q$ is isomorphic to $F^k_p$ for all $0\leq k\leq n$, so we have ${{\mathcal T}}\setminus \check{{\mathcal T}}=\Phi^{-1}(Y\cap D)$. Finally, by the local Torelli theorem for Calabi–Yau manifolds, we know that $\Phi:\,\mathcal{T}\rightarrow D$ is a local embedding. Therefore, the complex codimension of $\mathcal{T}\setminus\check{\mathcal{T}}$ in $\mathcal{T}$ is at least the complex codimension of the divisor $Y\cap D$ in $D$.
\[image\]The image of $\Phi:\,\mathcal{T}\rightarrow D$ lies in $N_+\cap D$ and is bounded with respect to the Euclidean metric on $N_+$.
According to Lemma \[codimension\], $\mathcal{T}\backslash\check{\mathcal{T}}$ is an analytic subvariety of $\mathcal{T}$ and the complex codimension of $\mathcal{T}\backslash\check{\mathcal{T}}$ is at least one; by Theorem \[locallybounded\], the holomorphic map $\Phi:\,\check{\mathcal{T}}\rightarrow N_+\cap D$ is bounded in $N_+$ with respect to the Euclidean metric. Thus by the Riemann extension theorem, there exists a holomorphic map $\Phi': \,\mathcal{T}\rightarrow N_+\cap D$ such that $\Phi'|_{_{\check{\mathcal{T}}}}=\Phi|_{_{\check{\mathcal{T}}}}$. Since the holomorphic maps $\Phi'$ and $\Phi$ agree on the open subset $\check{\mathcal{T}}$, they must coincide on the entire $\mathcal{T}$. Therefore, the image of $\Phi$ is in $N_+\cap D$, and the image is bounded with respect to the Euclidean metric on $N_+.$ As a consequence, we also get $\mathcal{T}=\check{\mathcal{T}}=\Phi^{-1}(N_+).$
Holomorphic affine structure on the Teichmüller space {#affineonT}
-----------------------------------------------------
We first review the definition of complex affine manifolds. One may refer to page 215 of [@Mats] for more details.
Let $M$ be a complex manifold of complex dimension $n$. If there is a coordinate cover $\{(U_i,\,{\varphi}_i);\, i\in I\}$ of $M$ such that ${\varphi}_{ik}={\varphi}_i\circ{\varphi}_k^{-1}$ is a holomorphic affine transformation on $\mathbb{C}^n$ whenever $U_i\cap U_k$ is not empty, then $\{(U_i,\,{\varphi}_i);\, i\in I\}$ is called a complex affine coordinate cover on $M$ and it defines a holomorphic affine structure on $M$.
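Equivalently, on each nonempty overlap $U_i\cap U_k$ such a transition map can be written as $$\begin{aligned}
{\varphi}_{ik}(z)={\varphi}_i\circ{\varphi}_k^{-1}(z)=A_{ik}\,z+b_{ik}, \qquad z\in {\varphi}_k(U_i\cap U_k),\end{aligned}$$ where $A_{ik}\in GL(n,\mathbb{C})$ is a constant matrix and $b_{ik}\in\mathbb{C}^n$ is a constant vector; the symbols $A_{ik}$ and $b_{ik}$ are simply our notation for the linear and the translation part of ${\varphi}_{ik}$.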
Let us still fix an adapted basis $(\eta_0, \cdots, \eta_{m-1})$ for the Hodge decomposition of the base point $\Phi(p)\in D$. Recall that we can identify elements in $N_+$ with nonsingular block lower triangular matrices whose diagonal blocks are all identity submatrices, and elements in $B$ with nonsingular block upper triangular matrices. Therefore $N_+\cap B=Id$. By Corollary \[image\], we know that ${\mathcal{T}}=\Phi^{-1}(N_+)$. Thus each $\Phi(q)$ is uniquely determined by a matrix, which we will still denote by $\Phi(q)=[\Phi_{ij}(q)]_{0\leq i,j\leq {m-1}}$, which is a nonsingular block lower triangular matrix whose diagonal blocks are all identity submatrices. Thus we can define a holomorphic map $$\begin{aligned}
&\tau: \,{\mathcal{T}}\rightarrow \mathbb{C}^N\cong H^{n-1,1}_p,
\quad q\mapsto (1,0)\text{-block of the matrix }\Phi(q)\in N_+,\\
\text{that is}\quad &\tau(q)=(\tau_1(q), \tau_2(q), \cdots, \tau_N(q))=(\Phi_{10}(q), \Phi_{20}(q), \cdots, \Phi_{N0}(q)).\end{aligned}$$
If we define the following projection map with respect to the base point and its pre-fixed adapted basis for the Hodge decomposition, $$\begin{aligned}
\label{projectionmap}
P: N_+\cap D\rightarrow H^{n-1,1}_p\cong \mathbb{C}^N,\quad
F\mapsto (\eta_1, \cdots, \eta_N)F^{(1,0)}=F_{10}\eta_1+\cdots +F_{N0}\eta_N,\end{aligned}$$ where $F^{(1,0)}$ is the $(1,0)$-block of the unipotent matrix $F$ in the block convention fixed above, then $\tau=P\circ \Phi: \,{\mathcal{T}}\rightarrow \mathbb{C}^N.$
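Schematically, grouping the rows and columns of $\Phi(q)$ according to the blocks $H^{n,0}_p,\,H^{n-1,1}_p,\,\ldots$ of the Hodge decomposition at $p$ (the first block has size one, since $\dim H^{n,0}_p=1$ for a Calabi–Yau manifold), the matrix $\Phi(q)$ and the coordinate map $\tau$ can be pictured as $$\begin{aligned}
\Phi(q)=\begin{pmatrix} 1 & 0 & \cdots & 0\\ \Phi^{(1,0)}(q) & I & & \\ \vdots & & \ddots & \\ \ast & \cdots & & I\end{pmatrix}\in N_+,
\qquad
\tau(q)=\Phi^{(1,0)}(q)=\begin{pmatrix}\Phi_{10}(q)\\ \vdots\\ \Phi_{N0}(q)\end{pmatrix}\in\mathbb{C}^N,\end{aligned}$$ where $\Phi^{(1,0)}(q)$ is simply our notation for the $(1,0)$-block of $\Phi(q)$, in accordance with the notation $F^{(1,0)}$ used in (\[projectionmap\]).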
\[tau\]The holomorphic map $\tau=(\tau_1, \cdots, \tau_N): \,{\mathcal{T}}\rightarrow \mathbb{C}^N$ defines a coordinate chart around each point $q\in{\mathcal{T}}$.
Recall that we have the generator $\{\eta_0\}\subseteq H^0(M_p, \Omega^n_{M_p})$, the generators $\{\eta_1, \cdots, \eta_N\} \subseteq H^1(M_p, \Omega^{n-1}(M_p))$, and the generators $\{\eta_{N+1}, \cdots, \eta_{m-1}\} \subseteq \oplus_{k\geq 2}H^k(M_p, \Omega^{n-k}(M_p))$.
On one hand, the 0-th column of the matrix $\Phi(q)\in N_+$ for each $q\in\mathcal{T}$ gives us the following data: $$\begin{aligned}
\Omega: \,\mathcal{T}\rightarrow F^n; \quad &\Omega(q)=(\eta_0, \cdots, \eta_{m-1})(\Phi_{00}(q), \Phi_{10}(q), \cdots, \Phi_{N0}(q),\cdots)^T\\&=\eta_0+\tau_1(q)\eta_1+\tau_2(q)\eta_2+\cdots+\tau_N(q)\eta_N+g_0(q)\in F^n_q\simeq H^{0}(M_q, \Omega^n(M_q)),\end{aligned}$$ where $g_0(q)\in \bigoplus_{k\geq 2}H^{k}(M_p, \Omega^{n-k}(M_p))$.
The $1$-st to $N$-th columns of $\Phi(q)\in N_+$ give us the following data: $$\begin{aligned}
\theta_1(q)=\eta_1+g_1(q),
\quad\cdots\cdots, \quad
\theta_N(q)=\eta_N+g_N(q) \in F^{n-1}_q,\end{aligned}$$ where $g_k(q)\in \bigoplus_{j\geq 2}H^{j}(M_p, \Omega^{n-j}(M_p))$ for $1\leq k\leq N$, such that $\{\Omega(q), \theta_1(q), \cdots, \theta_N(q)\}$ forms a basis for $F^{n-1}_q$ for each $q\in\mathcal{T}$.
On the other hand, by the local Torelli theorem, we know that for any holomorphic coordinates $\{\sigma_1, \cdots, \sigma_N\}$ around $q$, the set $\{\Omega(q), \frac{\partial{\Omega(q)}}{\partial\sigma_1},\cdots, \frac{\partial{\Omega(q)}}{\partial\sigma_N}\}$ forms a basis of $F^{n-1}_q$.
As both $\{\Omega(q), \theta_1(q), \cdots, \theta_N(q)\}$ and $\{\Omega(q), \frac{\partial{\Omega(q)}}{\partial\sigma_1},\cdots, \frac{\partial{\Omega(q)}}{\partial\sigma_N}\}$ are bases for $F^{n-1}_q$, there exist vector fields $\{X_1, \cdots, X_N\}$, with $X_k=\sum_{i=1}^N a_{ik} \frac{\partial}{\partial \sigma_i}$ for each $1\leq k\leq N$, and functions $\lambda_k$ such that $$\begin{aligned}
\label{eq1111}\theta_k=X_k(\Omega(q))+\lambda_k\Omega(q)\quad \text{for}\quad 1\leq k\leq N.\end{aligned}$$
Note that we have $X_k(\Omega(q))=X_k(\tau_1(q))\eta_1+\cdots + X_k(\tau_N(q))\eta_N+X_k(g_0(q))$ and $\theta_k=\eta_k+g_k(q)$, where $X_k(g_0(q)), g_k(q)\in \oplus_{j\geq2}H^j(M_p, \Omega^{n-j}(M_p))$. By comparing the types of the classes in (\[eq1111\]), we get $$\begin{aligned}
\label{eq2222}
\lambda_k=0, \quad \text{and}\quad X_k(\Omega(q))=\theta_k(q)=\eta_k+g_k(q) \quad\text{for each } 1\leq k\leq N.\end{aligned}$$
Since $\{\theta_1(q), \cdots, \theta_N(q)\}$ is a linearly independent set in $F^{n-1}_q$, we know that $\{X_1, \cdots, X_N\}$ are also linearly independent in $T^{1,0}_q({\mathcal{T}})$. Therefore $\{X_1, \cdots, X_N\}$ forms a basis for $T^{1,0}_q({\mathcal{T}})$. Without loss of generality, we may assume $X_k=\frac{\partial}{\partial \sigma_k}$ for each $1\leq k\leq N$. Thus by (\[eq2222\]), we have $$\frac{\partial\Omega(q)}{\partial\sigma_k}=\theta_k=\eta_k+g_k(q)\quad\text{for any}\quad 1\leq k\leq N.$$ Since we also have $$\frac{\partial\Omega(q)}{\partial\sigma_k}=\frac{\partial}{\partial\sigma_k}(\eta_0+\tau_1(q)\eta_1+\tau_2(q)\eta_2+\cdots+\tau_N(q)\eta_N+g_0(q)),$$ we get $\left[\frac{\partial{\tau_i(q)}}{\partial\sigma_j}\right]_{1\leq i,j\leq N}=\text{I}_N.$ This shows that $\tau_*:\,T^{1,0}_q(\mathcal{T})\rightarrow T_{\tau(q)}(\mathbb{C}^N)$ is an isomorphism for each $q\in\mathcal{T}$, as $\{\frac{\partial}{\partial\sigma_1}, \cdots, \frac{\partial}{\partial\sigma_N}\}$ is a basis for $T^{1,0}_q(\mathcal{T})$.
Thus the holomorphic map $\tau: \,{\mathcal{T}}\rightarrow \mathbb{C}^N$ defines a local coordinate map around each point $q\in\mathcal{T}$. In particular, the map $\tau$ itself gives a global holomorphic coordinate for ${\mathcal{T}}$. Thus the transition maps are all identity maps. Therefore,
The global holomorphic coordinate map $\tau: \,{\mathcal{T}}\rightarrow \mathbb{C}^N$ defines a holomorphic affine structure on ${\mathcal{T}}$.
This affine structure on ${\mathcal{T}}$ depends on the choice of the base point $p$. Affine structures on ${\mathcal{T}}$ defined in this way by fixing different base points may not be compatible with each other.
Hodge metric completion of the Teichmüller space with level structure {#THm}
=====================================================================
In Section \[defofTH\], given $m\geq 3$, we introduce the Hodge metric completion $\mathcal{T}^H_{_m}$ of the Teichmüller space with level $m$ structure, which is the universal cover of $\mathcal{Z}^H_{_m}$, where $\mathcal{Z}^H_{_m}$ is the completion space of the smooth moduli space $\mathcal{Z}_{m}$ with respect to the Hodge metric. We denote the lifting maps by $i_m: \,\mathcal{T}\rightarrow \mathcal{T}^H_{_m}$ and $\Phi_{_{m}}^H: \,\mathcal{T}^H_{_m}\to D$, and set $\mathcal{T}_m:=i_{m}(\mathcal{T})$. We prove that $\Phi^H_{_m}$ is a bounded holomorphic map from $\mathcal{T}^H_{_m}$ to $N_+\cap D$. In Section \[affineness\], we first define the map $\tau^H_{_m}$ from $\mathcal{T}^H_{_m}$ to $\mathbb{C}^N$ and its restriction $\tau_m$ to the submanifold $\mathcal{T}_m$. We then show that $\tau_m$ is also a local embedding and conclude that $\tau_m$ defines a global holomorphic affine structure on $\mathcal{T}_m$. Then the affineness of $\tau_m$ shows that the extension map $\tau^H_{_m}$ also defines a holomorphic affine structure on $\mathcal{T}^H_{_m}$. In Section \[injective\], we prove that $\tau^H_{_m}$ is an injection by using the Hodge metric completeness and the global holomorphic affine structure on $\mathcal{T}^H_{_m}$. As a corollary, we show that the holomorphic map $\Phi_{_m}^H$ is an injection.
Definitions and basic properties {#defofTH}
--------------------------------
Recall in Section \[section construction of Tei\], $\mathcal{Z}_m$ is the smooth moduli space of polarized Calabi–Yau manifolds with level $m$ structure, which is introduced in [@sz]. The Teichmüller space $\mathcal{T}$ is the universal cover of $\mathcal{Z}_m$.
By the work of Viehweg in [@v1], we know that $\mathcal{Z}_m$ is quasi-projective and consequently we can find a smooth projective compactification ${\overline}{\mathcal{Z}}_{_m}$ such that $\mathcal{Z}_m$ is open in ${\overline}{\mathcal{Z}}_{_m}$ and the complement ${\overline}{\mathcal{Z}}_{_m}\backslash\mathcal{Z}_m$ is a divisor of normal crossing. Therefore, $\mathcal{Z}_m$ is dense and open in ${\overline}{\mathcal{Z}}_{m}$ with the complex codimension of the complement ${\overline}{\mathcal{Z}}_{_m}\backslash \mathcal{Z}_m$ at least one. Moreover as ${\overline}{\mathcal{Z}}_{_m}$ is a compact space, it is a complete space.
Recall at the end of Section \[section period map\], we pointed out that there is an induced Hodge metric on $\mathcal{Z}_m$. Let us now take $\mathcal{Z}^H_{_m}$ to be the completion of $\mathcal{Z}_m$ with respect to the Hodge metric. Then $\mathcal{Z}_m^H$ is the smallest complete space with respect to the Hodge metric that contains $\mathcal{Z}_m$. Although the compact space ${\overline}{\mathcal{Z}}_{_m}$ may not be unique, the Hodge metric completion space $\mathcal{Z}^H_{_m}$ is unique up to isometry. In particular, $\mathcal{Z}^H_{_m}\subseteq{\overline}{\mathcal{Z}}_{_m}$ and thus the complex codimension of the complement $\mathcal{Z}^H_{_m}\backslash \mathcal{Z}_m$ is at least one. Moreover, we can prove the following Lemma 4.1.
For a direct proof of Lemma 4.1, we can use Theorem 9.6 in [@Griffiths3], which is about the extension of the period map to the points of finite monodromy in ${\overline}{\mathcal{Z}}_{_m}$, to get the above ${\mathcal Z}_m^H$; see also [@CMP Corollary 13.4.6]. To apply Griffiths' theorem, we only need to use a discrete subgroup in $G_{\mathbb{R}}$ that contains the global monodromy group, which is the image of $\pi_1(\mathcal{Z}_{m})$. We will still denote this discrete subgroup in $G_{\mathbb{R}}$ by $\Gamma$ and consider the period map $\Phi_m:\, {\mathcal Z}_m\to D/\Gamma$. In fact, as in the discussion on page 156 above Theorem 9.6 in [@Griffiths3], the isotropy group of any point in $D/\Gamma$ is of finite order, and the extended period map is a proper holomorphic map into $D/\Gamma$. Therefore we see that the points in the above extension ${\mathcal Z}_m^H$ are precisely the points of finite monodromy.
Similar to the discussion in [@sz Page 697], we see that Serre's lemma [@sz Lemma 2.4] and the construction of the moduli spaces ${\mathcal Z}_m$ with $m\geq 3$ in [@sz] imply that the monodromies at the extension points in ${\mathcal Z}_m^H$, which are all of finite order, are in fact trivial. Therefore the extended period map ${\Phi}_{_{\mathcal{Z}_m}}^H$ is still locally liftable, and horizontal as proved in Lemma 5.10. In what follows, we may therefore assume, without loss of generality, that the global monodromy group $\Gamma$ is torsion-free and thus that $D/\Gamma$ is smooth.
\[cidim\] The Hodge metric completion $\mathcal{Z}^H_{_m}$ is a dense and open smooth submanifold in ${\overline}{\mathcal{Z}}_{_m}$ and the complex codimension of $\mathcal{Z}^H_{_m}\backslash\mathcal{Z}_m$ is at least one.
We will give two more proofs, which illustrate different aspects of the completion. The first one is more conceptual and uses the extension of the period map, while the second one is more elementary and only uses the basic definition of the metric completion.
Since ${\overline}{\mathcal{Z}}_{_m}$ is a smooth manifold and $\mathcal{Z}_m$ is dense in ${\overline}{\mathcal{Z}}_{_m}$, we only need to show that $\mathcal{Z}^H_{_m}$ is open in ${\overline}{\mathcal{Z}}_{_m}$. By abusing notation, in the following discussion we will also use $\Gamma$ to denote such a torsion-free discrete subgroup in $G_{\mathbb R}$ containing the global monodromy group, the image of $\pi_1({\mathcal Z}_{m})$. For level $m\geq 3$, from Serre’s lemma [@sz Lemma 2.4] and Borel’s theorem [@Milne Theorem 3.6], we see that the congruence subgroup of level $m$ will be a torsion-free discrete subgroup in $G_{\mathbb R}$.
For the first proof, we can use a compactification ${\overline}{D/\Gamma}$ of $D/\Gamma$ and a continuous extension of the period map $\Phi_{\mathcal{Z}_m}:\,\mathcal{Z}_m\to D/\Gamma$ to $$\begin{aligned}
{\overline}{\Phi}_{\mathcal{Z}_m}:\,{\overline}{\mathcal{Z}}_{_m}\to {\overline}{D/\Gamma}.\end{aligned}$$ Then, since $D/\Gamma$ is complete with respect to the Hodge metric, the Hodge metric completion space is $\mathcal{Z}^H_{_m}={\overline}{\Phi}_{\mathcal{Z}_m}^{-1}(D/\Gamma)$. Since $D/\Gamma$ is open and dense in the compactification ${\overline}{D/\Gamma}$, the space $\mathcal{Z}_{_m}^H$ is therefore an open subset of ${\overline}{\mathcal{Z}}_{_m}$.
To define the compactification space ${\overline}{D/\Gamma}$, there are different approaches in the literature. One natural compactification is given by Kato–Usui, following [@Schmid86], as described in [@Usui09] and discussed in [@Green14 Page 2]: one attaches to $D$ a set $B(\Gamma)$ of equivalence classes of limiting mixed Hodge structures, then defines ${\overline}{D/\Gamma}=(D\cup B(\Gamma))/\Gamma$ and extends the period map $\Phi_{\mathcal{Z}_m}$ continuously to ${\overline}{\Phi}_{\mathcal{Z}_m}$. An alternative and more recent method is to use the reduced limit period mapping as defined in [@Green14 Page 29, 30] to enlarge the period domain $D$. Different approaches can give us different compactification spaces ${\overline}{D/\Gamma}$. Both compactifications are applicable to the above proof of the openness of $\mathcal{Z}^H_{_m}={\overline}{\Phi}_{\mathcal{Z}_m}^{-1}(D/\Gamma)$.
For the second, more elementary proof, we show directly that $\mathcal{Z}^H_{_m}$ is an open smooth submanifold of ${\overline}{\mathcal{Z}}_{_m}$. Since ${\overline}{\mathcal{Z}}_{_m}\backslash\mathcal{Z}_m$ is a normal crossing divisor in ${\overline}{\mathcal{Z}}_{_m}$, any point $q\in{\overline}{\mathcal{Z}}_{_m}\backslash \mathcal{Z}_m$ has a neighborhood $U_q\subset {\overline}{\mathcal{Z}}_{_m}$ such that $U_q\cap\mathcal{Z}_m$ is biholomorphic to $(\Delta^*)^k\times\Delta^{N-k}$. Given a fixed reference point $p\in\mathcal{Z}_m$, and any point $p'\in U_q\cap \mathcal{Z}_m$, there is a piecewise smooth curve $\gamma_{p,p'}:\, [0,1]\to \mathcal{Z}_m$ connecting $p$ and $p'$, since $\mathcal{Z}_m$ is a path connected manifold. There is also a piecewise smooth curve $\gamma_{p',q}$ in $U_q\cap \mathcal{Z}_m$ connecting $p'$ and $q$. Therefore, there is a piecewise smooth curve $\gamma_{p,q}:\, [0,1]\to{\overline}{\mathcal{Z}}_{_m}$ connecting $p$ and $q$ with $\gamma_{p,q}((0, 1))\subset \mathcal{Z}_m$. Thus for any point $q\in{\overline}{\mathcal{Z}}_{_m}$, we can define the Hodge distance between $q$ and $p$ by $$d_H(p,q)=\inf\limits_{\gamma_{p,q}} L_H(\gamma_{p,q}),$$ where $L_H(\gamma_{p,q})$ denotes the length of the curve $\gamma_{p,q}$ with respect to the Hodge metric on $\mathcal{Z}_m$, and the infimum is taken over all piecewise smooth curves $\gamma_{p,q}:\, [0,1]\rightarrow{\overline}{\mathcal{Z}}_{_m} $ connecting $p$ and $q$ with $\gamma_{p,q}((0,1))\subset \mathcal{Z}_{_m}$. Then $\mathcal{Z}_{_m}^H$ is the set of all the points $q$ in ${\overline}{\mathcal{Z}}_{_m}$ with $d_H(p,q)<\infty$, and the complement set ${\overline}{\mathcal{Z}}_{_m}\backslash\mathcal{Z}_{_m}^H$ consists of all the points $q$ in ${\overline}{\mathcal{Z}}_{_m}$ with $d_H(p,q)=\infty$. Now let $\{q_k\}_{k=1}^\infty\subset {\overline}{\mathcal{Z}}_{_m}\backslash\mathcal{Z}_{_m}^H$ be a convergent sequence and assume that the limit of this sequence is $q_\infty$. Then the Hodge distance between the reference point $p$ and the limiting point $q_\infty$ is clearly $d_H(p,q_\infty)=\infty$ as well, since $d_H(p,q_k)=\infty$ for all $k$. Thus $q_\infty\in{\overline}{\mathcal{Z}}_{_m}\backslash\mathcal{Z}_{_m}^H$. Therefore, the set of points at infinite Hodge distance from $p$ is a closed subset of ${\overline}{\mathcal{Z}}_{_m}$. We conclude that $\mathcal{Z}_{_m}^H$ is an open submanifold of ${\overline}{\mathcal{Z}}_{_m}$.
We now recall some basic properties of metric completions that we use in this paper. We know that the metric completion of a connected space is still connected. Therefore, $\mathcal{Z}_{_m}^H$ is connected.
Suppose $(X, d)$ is a metric space with the metric $d$. Then the metric completion space of $(X, d)$ is unique in the following sense: if ${\overline}{X}_1$ and ${\overline}{X}_2$ are complete metric spaces that both contain $X$ as a dense set, then there exists an isometry $f: {\overline}{X}_1\rightarrow {\overline}{X}_2$ such that $f|_{X}$ is the identity map on $X$. Moreover, as mentioned above, the metric completion space ${\overline}{X}$ of $X$ is the smallest complete metric space containing $X$, in the sense that any other complete space that contains $X$ as a subspace must also contain ${\overline}{X}$ as a subspace.
Moreover, suppose ${\overline}{X}$ is the metric completion space of the metric space $(X, d)$. If there is a continuous map $f: \,X\rightarrow Y$ which is a local isometry with $Y$ a complete space, then there exists a continuous extension ${\overline}{f}:\,{\overline}{X}\rightarrow{Y}$ such that ${\overline}{f}|_{X}=f$.
In the rest of the paper, unless otherwise pointed out, when we mention a complete space, the completeness is always with respect to the Hodge metric.
Let $\mathcal{T}^{H}_{_m}$ be the universal cover of $\mathcal{Z}^H_{_m}$ with the universal covering map $\pi_{_m}^H:\, \mathcal{T}^{H}_{_m}\rightarrow \mathcal{Z}_m^H$. Thus $\mathcal{T}^H_{_m}$ is a connected and simply connected complete smooth complex manifold with respect to the Hodge metric. We will call $\mathcal{T}^H_{_m}$ the *Hodge metric completion space with level $m$ structure* of $\mathcal{T}$. Since $\mathcal{Z}_m^H$ is the Hodge metric completion of $\mathcal{Z}_m$, there exists the continuous extension map ${\Phi}_{_{\mathcal{Z}_m}}^H: \,\mathcal{Z}_m^H\rightarrow D/\Gamma$. Moreover, recall that the Teichmüller space $\mathcal{T}$ is the universal cover of the moduli space $\mathcal{Z}_m$ with the universal covering denoted by $\pi_m:\, \mathcal{T}\to \mathcal{Z}_m$.
Thus we have the following commutative diagram $$\begin{aligned}
\label{cover maps} \xymatrix{\mathcal{T}\ar[r]^{i_{m}}\ar[d]^{\pi_m}&\mathcal{T}^H_{_m}\ar[d]^{\pi_{_m}^H}\ar[r]^{{\Phi}^{H}_{_m}}&D\ar[d]^{\pi_D}\\
\mathcal{Z}_m\ar[r]^{i}&\mathcal{Z}^H_{_m}\ar[r]^{{\Phi}_{_{\mathcal{Z}_m}}^H}&D/\Gamma,
}\end{aligned}$$ where $i$ is the inclusion map, ${i}_{_m}$ is a lifting map of $i\circ\pi_m$, $\pi_D$ is the covering map and ${\Phi}^{H}_{_m}$ is a lifting map of ${\Phi}_{_{\mathcal{Z}_m}}^H\circ \pi_{_m}^H$. In particular, $\Phi^H_{_m}$ is a continuous map from $\mathcal{T}^H_{_m}$ to $D$. We notice that the lifting maps $i_{m}$ and ${\Phi}^H_{_m}$ are not unique, but Lemma \[choice\] implies that there exists a suitable choice of $i_{m}$ and $\Phi_{_m}^H$ such that $\Phi=\Phi^H_{_m}\circ i_{m}$. We will fix the choice of $i_{m}$ and $\Phi^{H}_m$ such that $\Phi=\Phi^H_{_m}\circ i_m$ in the rest of the paper. Let us denote $\mathcal{T}_m:=i_{m}(\mathcal{T})$ and the restriction map $\Phi_m=\Phi^H_{_m}|_{\mathcal{T}_m}$. Then we also have $\Phi=\Phi_m\circ i_m$.
\[opend\]The image $\mathcal{T}_m$ equals the preimage $(\pi^H_{_m})^{-1}(\mathcal{Z}_{m})$.
Because of the commutativity of diagram (\[cover maps\]), we have that $\pi^H_{_m}(i_m(\mathcal{T}))=i(\pi_m(\mathcal{T}))=\mathcal{Z}_{m}$. Therefore, $\mathcal{T}_m=i_m(\mathcal{T})\subseteq (\pi^H_{_m})^{-1}(\mathcal{Z}_{m})$. For the other direction, we need to show that any point $q\in (\pi^H_{_m})^{-1}(\mathcal{Z}_{m})\subseteq\mathcal{T}^H_{_m}$ satisfies $q\in i_m(\mathcal{T})=\mathcal{T}_m$.
Let $p=\pi^H_{_m}(q)\in i(\mathcal{Z}_{m})$ and let $x_1\in \pi_m^{-1}(i^{-1}(p))\subseteq \mathcal{T}$ be an arbitrary point; then $\pi^H_{_m}(i_m(x_1))=i(\pi_m(x_1))=p$ and $i_m(x_1)\in (\pi^H_{_m})^{-1}(p)\subseteq \mathcal{T}^H_{_m}$.
As $\mathcal{T}^H_{_m}$ is a connected smooth complex manifold, $\mathcal{T}^H_{_m}$ is path connected. Therefore, for $i_m(x_1), q\in \mathcal{T}^H_{_m}$, there exists a curve $\gamma:\,[0,1]\to\mathcal{T}^H_{_m}$ with $\gamma(0)=i_m(x_1)$ and $\gamma(1)=q$. Then the composition $\pi^H_{_m}\circ \gamma$ gives a loop on $\mathcal{Z}^H_{_m}$ with $\pi^H_{_m}\circ \gamma(0)=\pi^H_{_m}\circ \gamma(1)=p$. Lemma \[fund\] implies that there is a loop $\Gamma$ on $\mathcal{Z}_m$ with $\Gamma(0)=\Gamma(1)=i^{-1}(p)$ such that $$\begin{aligned}
[i\circ \Gamma]=[\pi^H_{_m}\circ \gamma]\in \pi_1(\mathcal{Z}^H_{_m}),\end{aligned}$$ where $\pi_1(\mathcal{Z}^H_{_m})$ denotes the fundamental group of $\mathcal{Z}^H_{_m}$. Because $\mathcal{T}$ is universal cover of $\mathcal{Z}_m$, there is a unique lifting map ${\widetilde}{\Gamma}:\,[0,1]\to \mathcal{T}$ with ${\widetilde}{\Gamma}(0)=x_1$ and $\pi_m\circ{\widetilde}{\Gamma}=\Gamma$. Again since $\pi^H_{_m}\circ i_m=i\circ \pi_m$, we have $$\begin{aligned}
\pi^H_{_m}\circ i_m\circ {\widetilde}{\Gamma}=i\circ \pi_m\circ{\widetilde}{\Gamma}=i\circ\Gamma:\,[0,1]\to \mathcal{Z}^H_{_m}.\end{aligned}$$ Therefore $[\pi^H_{_m}\circ i_m\circ {\widetilde}{\Gamma}]=[i\circ \Gamma]\in \pi_1(\mathcal{Z}^H_{_m})$, and the two curves $i_m\circ {\widetilde}{\Gamma}$ and $\gamma$ have the same starting points $i_m\circ {\widetilde}{\Gamma}(0)=\gamma(0)=i_m(x_1)$. Then the homotopy lifting property of the covering map $\pi^H_{_m}$ implies that $i_m\circ {\widetilde}{\Gamma}(1)=\gamma(1)=q$. Therefore, $q\in i_m(\mathcal{T})$, as needed.
Since $\mathcal{Z}_m$ is an open submanifold of $\mathcal{Z}^H_{_m}$ and $\pi^H_{_m}$ is a holomorphic covering map, the preimage $\mathcal{T}_m=(\pi^H_{_m})^{-1}(\mathcal{Z}_{m})$ is a connected open submanifold of $\mathcal{T}^H_{_m}$. Furthermore, because the complex codimension of $\mathcal{Z}^H_{m}\setminus \mathcal{Z}_{m}$ is at least one in $\mathcal{Z}^H_{_m}$, the complex codimension of $\mathcal{T}^H_{_m}\backslash\mathcal{T}_m$ is also at least one in $\mathcal{T}^H_{_m}$.
First, it is not hard to see that the restriction map $\Phi_m$ is holomorphic. Indeed, we know that $i_m:\,\mathcal{T}\rightarrow \mathcal{T}_m$ is the lifting of $i\circ \pi_m$ and $\pi^H_{_m}|_{\mathcal{T}_m}:\, \mathcal{T}_m\rightarrow \mathcal{Z}_m$ is a holomorphic covering map, thus $i_m$ is also holomorphic. Since $\Phi=\Phi_m\circ i_m$ with both $\Phi$, $i_m$ holomorphic and $i_m$ locally invertible, we can conclude that $\Phi_m:\,\mathcal{T}_m\rightarrow D$ is a holomorphic map. Moreover, we have $\Phi_m(\mathcal{T}_m)=\Phi_m(i_m(\mathcal{T}))=\Phi(\mathcal{T})$ as $\Phi=\Phi_m\circ i_m$. In particular, as $\Phi: \,\mathcal{T}\rightarrow N_+\cap D$ is bounded, we get that $\Phi_m: \,\mathcal{T}_m\rightarrow N_+\cap D$ is also bounded in $N_+$ with the Euclidean metric. Thus $\Phi^H_{_m}$ is also bounded. Therefore, applying the Riemann extension theorem, we get
\[Riemannextension\]The map $\Phi^{H}_{_m}$ is a bounded holomorphic map from $\mathcal{T}^H_{_m}$ to $N_+\cap D$.
According to the above discussion, we know that the complement $\mathcal{T}^H_{_m}\backslash\mathcal{T}_m$ is the preimage of $\mathcal{Z}^H_{m}\setminus \mathcal{Z}_{m}$ under the covering map $\pi_m^H$. So $\mathcal{T}^H_{_m}\backslash\mathcal{T}_m$ is an analytic subvariety of $\mathcal{T}^H_{_{m}}$ with complex codimension at least one, and $\Phi_m:\,\mathcal{T}_m\to N_+\cap D$ is a bounded holomorphic map. Therefore, simply applying the Riemann extension theorem to the holomorphic map $\Phi_m:\,\mathcal{T}_m\rightarrow N_+\cap D$, we conclude that there exists a holomorphic map $\Phi'_m:\, \mathcal{T}^H_{_m}\rightarrow N_+\cap D$ such that $\Phi'_m|_{\mathcal{T}_m}=\Phi_m$. We know that both $\Phi^H_{_m}$ and $\Phi'_m$ are continuous maps defined on $\mathcal{T}^H_{_m}$ that agree on the dense subset $\mathcal{T}_m$. Therefore, they must agree on the whole $\mathcal{T}^H_{_m}$, that is, $\Phi^H_{_m}=\Phi'_m$ on $\mathcal{T}^H_{_m}$. Therefore, $\Phi^H_{_m}:\,\mathcal{T}^H_{_m}\rightarrow N_+\cap D$ is a bounded holomorphic map.
Holomorphic affine structure on the Hodge metric completion space {#affineness}
------------------------------------------------------------------
In this section, we still fix the base point $\Phi(p)\in D$ with $p\in \mathcal{T}$ and an adapted basis $(\eta_0, \cdots, \eta_{m-1})$ for the Hodge decomposition of $\Phi(p)$. We have defined the global coordinate map $\tau:\,\mathcal{T}\rightarrow \mathbb{C}^N,$ which is a holomorphic affine local embedding. Let us now define $$\begin{aligned}
\tau_m:=P\circ \Phi_m:\, \mathcal{T}_m\rightarrow \mathbb{C}^N,\end{aligned}$$ where $P: \,N_+\cap D\rightarrow \mathbb{C}^N$ is the projection map defined in (\[projectionmap\]). Then as $\Phi=\Phi_m\circ i_m$, we get $\tau=P\circ \Phi=P\circ \Phi_m\circ i_m=\tau_m \circ i_m.$ In the following lemma, we will crucially use the fact that the holomorphic map $\tau: \,\mathcal{T}\rightarrow \mathbb{C}^N$ is a local embedding.
\[affine on Tm\]The holomorphic map $\tau_m: \,\mathcal{T}_m\rightarrow \mathbb{C}^N$ is a local embedding. In particular, $\tau_m:\,\mathcal{T}_m\rightarrow \mathbb{C}^N$ defines a global holomorphic affine structure on $\mathcal{T}_m$.
Since $i\circ \pi_m=\pi^H_{_m}\circ i_m$ with $i:\,\mathcal{Z}_m\rightarrow \mathcal{Z}_{_m}^H$ the natural inclusion map and $\pi_m$, $\pi^H_{_m}$ both universal covering maps, $i_{m}$ is a lifting of the inclusion map. Thus $i_m$ is locally biholomorphic. On the other hand, we showed that $\tau$ is a local embedding. We may choose an open cover $\{U_\alpha\}_{\alpha\in \Lambda}$ of $\mathcal{T}_m$ such that for each $U_\alpha\subseteq \mathcal{T}_m$, $i_m$ is biholomorphic on $U_\alpha$ and thus the inverse $(i_m)^{-1}$ is also an embedding on $U_\alpha$. Obviously we may also assume that $\tau$ is an embedding on $(i_m)^{-1}(U_\alpha)$. In particular, the relation $\tau=\tau_m\circ i_m$ implies that $\tau_{m}|_{U_\alpha}=\tau\circ (i_m)^{-1}|_{U_\alpha}$ is also an embedding on $U_\alpha$. In this way, we showed $\tau_m$ is a local embedding on $\mathcal{T}_m$. Therefore, since $\dim_{\mathbb{C}}\mathcal{T}_m=N$, $\tau_m: \,\mathcal{T}_m\rightarrow \mathbb{C}^N$ defines a local coordinate map around each point in $\mathcal{T}_m$. In particular, the map $\tau_m$ itself gives a global holomorphic coordinate for ${\mathcal{T}}_m$. Thus the transition maps are all identity maps. Therefore, $\tau_m:\,\mathcal{T}_m\rightarrow \mathbb{C}^N$ defines a global holomorphic affine structure on $\mathcal{T}_m$.
Let us define $\tau_m^H:=P\circ \Phi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N$, where $P: \,N_+\cap D\rightarrow \mathbb{C}^N$ is still the projection map defined in (\[projectionmap\]). Then we easily see that $\tau^H_m|_{\mathcal{T}_m}=\tau_m$. We also have the following,
\[injofPsi\] The holomorphic map $\tau^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N\cong H^{n-1,1}_p$ is a local embedding.
The proof uses mainly the affineness of $\tau_m:\,\mathcal{T}_{m}\rightarrow \mathbb{C}^N\cong H^{n-1,1}_p$. By Proposition \[opend\], we know that $\mathcal{T}_m$ is dense and open in $\mathcal{T}^H_{_m}$. Thus for any point $q\in \mathcal{T}^H_{_m}$, there exists $\{q_k\}_{k=1}^\infty\subseteq\mathcal{T}_{m}$ such that $\lim_{k\to \infty} q_k=q$. Because $\tau^H_{_m}(q)\in H^{n-1,1}_p$, we can take a neighborhood $W\subseteq H^{n-1,1}_p$ of $\tau^H_{_m}(q)$ with $W\subseteq \tau^H_{_m}(\mathcal{T}^H_{_m})$.
Consider the projection map $P: N_+\to \mathbb{C}^N$ with $P(F)=F^{(1,0)}$ the $(1,0)$ block of the matrix $F$, and the decomposition of the holomorphic tangent bundle $$T^{1,0}N_+=\bigoplus_{0\leq l\leq k\leq n}\text{Hom}(F^{k}/F^{k+1}, F^{l}/F^{l+1}).$$ In particular, the subtangent bundle $\text{Hom}(F^{n}, F^{n-1}/F^{n})$ over $N_+$ is isomorphic to the pull-back bundle $P^*(T^{1,0}\mathbb{C}^N)$ of the holomorphic tangent bundle of $\mathbb{C}^N$ through the projection $P$. On the other hand, the holomorphic tangent bundle of $\mathcal{T}_m$ is also isomorphic to the holomorphic bundle $\text{Hom}(F^{n}, F^{n-1}/F^{n})$, where $F^n$ and $F^{n-1}$ are pull-back bundles on $\mathcal{T}_m$ via $\Phi_m$ from $N_+\cap D$. Since the holomorphic map $\tau_m=P \circ \Phi_{m}$ is a composition of $P$ and $\Phi_{m}$, the pull-back bundle of $T^{1,0}W$ through $\tau_m$ is also isomorphic to the tangent bundle of $\mathcal{T}_m$.
Now with the fixed adapted basis $\{\eta_1, \cdots, \eta_N\}$, one has a standard coordinate $(z_1, \cdots, z_N)$ on $\mathbb{C}^N\cong H^{n-1,1}_p$ such that each point in $\mathbb{C}^N\cong H^{n-1,1}_p$ is of the form $z_1\eta_1+\cdots+z_N\eta_N$. Let us choose one special trivialization of $$\begin{aligned}
T^{1,0}W\cong\text{Hom}(F^n_p, F^{n-1}_p/F^n_p)\times W\end{aligned}$$ by the standard global holomorphic frame $(\Lambda_1, \cdots, \Lambda_N)=(\partial/\partial z_1, \cdots, \partial/\partial z_N)$ on $T^{1,0}W$. Under this trivialization, we can identify $T_o^{1,0}W$ with $\text{Hom}(F^n_p, F^{n-1}_p/F^n_p)$ for any $o\in W$. Then $(\Lambda_1, \cdots, \Lambda_N)$ are parallel sections with respect to the trivial affine connection on $T^{1,0}W$. Let $U_q\subseteq(\tau_{_m}^H)^{-1}(W)$ be a neighborhood of $q$ and let $U=U_q\cap \mathcal{T}_{m}$. Then the pull-back sections $(\tau^H_{_m})^*(\Lambda_1, \cdots, \Lambda_N):\, U_q\rightarrow T^{1,0}U_q$ are tangent vector fields on $U_q$; we denote them by $(\mu^H_1, \cdots, \mu^H_N)$ for convenience.
According to the proof of Lemma \[affine on Tm\], we know that the restriction map $\tau_m$ is a local embedding. Therefore the tangent map $(\tau_m)_*:\,T^{1,0}_{q'}U\rightarrow T^{1,0}_{o}W$ is an isomorphism, for any $q'\in U$ and $o=\tau_m(q')$. Moreover, since $\tau_m$ is a holomorphic affine map, the holomorphic sections $(\mu_1, \cdots, \mu_N):=(\mu^H_1, \cdots, \mu^H_N)|_{_U}$ form a holomorphic parallel frame for $T^{1,0}U$. Under the parallel frames $(\mu_1, \cdots, \mu_N)$ and $(\Lambda_1, \cdots, \Lambda_N)$, there exists a nonsingular matrix function $A(q')=(a_{ij}(q'))_{1\leq i\leq N, 1\leq j\leq N}$, such that the tangent map $(\tau_m)_*$ is given by $$\begin{aligned}
(\tau_m)_*(\mu_1, \cdots, \mu_N)(q')=(\Lambda_1(o), \cdots, \Lambda_N(o))A(q'), \quad\text{with } q'\in U \text{ and } o=\tau_m(q')\in W.\end{aligned}$$
Moreover, since $(\Lambda_1, \cdots, \Lambda_N)$ and $(\mu_1, \cdots, \mu_N)$ are parallel frames for $T^{1,0}W$ and $T^{1,0}U$ respectively and $\tau_{m}$ is a holomorphic affine map, the matrix $A(q')=A$ is actually a constant nonsingular matrix for all $q'\in U$. In particular, for each $q_k\in U$, we have $((\tau_m)_*\mu_1, \cdots, (\tau_m)_*\mu_N)(q_k)=(\Lambda_1(o_k), \cdots, \Lambda_N(o_k))A,$ where $o_k=\tau_m(q_k)$. Because the tangent map $(\tau^H_{_m})_*: \,T^{1,0}U_q\to T^{1,0}W$ is a continuous map, we have that $$\begin{aligned}
(\tau^H_{_m})_*(\mu_1^{H}(q), &\cdots, \mu_N^{H}(q))=\lim_{k\to\infty}(\tau_{m})_*(\mu_1(q_k), \cdots, \mu_N(q_k))=\lim_{k\to\infty}(\Lambda_1(o_k), \cdots, \Lambda_N(o_k))A\\&=(\Lambda_1({\overline}{o}), \cdots, \Lambda_N({\overline}{o}))A, \quad\text{where}\quad o_k=\tau_{m}(q_k) \ \ \text{and}\ \ {\overline}{o}=\tau^H_{_m}(q).\end{aligned}$$ As $(\Lambda_1({\overline}{o}), \cdots, \Lambda_N({\overline}{o}))$ forms a basis for $T^{1,0}_{{\overline}{o}}W=\text{Hom}(F^n_p, F^{n-1}_p/F^n_p)$ and $A$ is nonsingular, we can conclude that $(\tau_{_m}^H)_*$ is an isomorphism from $T^{1,0}_qU_q$ to $T^{1,0}_{{\overline}{o}}W$. This shows that $\tau_{_m}^H:\,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N\cong H^{n-1,1}_p
$ is a local embedding.
\[THmaffine\]The holomorphic map $\tau^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N$ defines a global holomorphic affine structure on $\mathcal{T}^H_{_m}$.
Since $\tau_{_m}^H:\,\mathcal{T}^H_{_m}\rightarrow \mathbb{C}^N$ is a local embedding and $\dim\mathcal{T}^H_{_m}=N$, the same arguments as in the proof of Lemma \[affine on Tm\] can be applied to conclude that $\tau^H_{_m}$ defines a global holomorphic affine structure on $\mathcal{T}^H_{_m}$.
It is important to note that the flat connections which correspond to the global holomorphic affine structures on $\mathcal{T}$, on $\mathcal{T}_m$ or on $\mathcal{T}_{_m}^H$ are in general not compatible with the corresponding Hodge metrics on them.
Injectivity of the period map on the Hodge metric completion space {#injective}
------------------------------------------------------------------
\[injectivityofPhiH\]For any $m\geq 3$, the holomorphic map $\tau^H_{_m}:\, \mathcal{T}^H_{_m}\to\mathbb{C}^N$ is an injection.
To prove this theorem, we will first prove the following elementary lemma, where we mainly use the completeness with respect to the Hodge metric, the holomorphic affine structure on $\mathcal{T}^H_{_m}$, the affineness of $\tau^H_{_m}$, and the properties of the Hodge metric. We remark that, since $\mathcal{T}^H_{_m}$ is a complex affine manifold, we have the notion of straight lines in it with respect to the affine structure.
\[straightline\] For any two points in $\mathcal{T}^H_{_m},$ there is a straight line in $\mathcal{T}^H_{_m}$ connecting them.
Let $p$ be an arbitrary point in $\mathcal{T}^H_{_m}$, and let $S\subseteq \mathcal{T}^H_{_m}$ be the collection of points that can be connected to $p$ by straight lines in $\mathcal{T}^H_{_m}$. We need to show that $S=\mathcal{T}^H_{_m}$.
We first show that $S$ is a closed set. Let $\{q_i\}_{i=1}^\infty\subseteq S$ be a Cauchy sequence with respect to the Hodge metric. Then for each $i$ we have the straight line $l_i$ connecting $p$ and $q_i$ such that $l_i(0)=p$, $l_i(T_i)=q_i$ for some $T_i\geq 0$, and $v_i:=\frac{\partial}{\partial t}l_i(0)$ a unit vector with respect to the Euclidean metric on $\mathbb{C}^N$ given by the affine coordinate $\tau^H_{_m}$. We can view these straight lines $l_i:\,[0, T_i]\to \mathcal{T}^H_{_m}$ as the solutions of the affine geodesic equations $l''_i(t)=0$ with initial conditions $v_i:=\frac{\partial}{\partial t}l_i(0)$ and $l_i(0)=p$; in particular, $T_i=d_E(p,q_i)$ is the corresponding Euclidean distance between $p$ and $q_i$. It is well-known that solutions of these geodesic equations depend analytically on their initial data.
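In an affine coordinate chart the affine geodesic equation can be integrated explicitly: if $(U,\varphi)$ is a chart of the affine structure and the geodesic $l$ stays in $U$, then $$\begin{aligned}
l''(t)=0 \quad\Longleftrightarrow\quad \varphi(l(t))=\varphi(l(0))+t\,\varphi_*\Big(\frac{\partial}{\partial t}l(0)\Big).\end{aligned}$$ In particular, in the global affine coordinate $\tau^H_{_m}$ the straight lines above are just the Euclidean line segments $\tau^H_{_m}(l_i(t))=\tau^H_{_m}(p)+t\,(\tau^H_{_m})_*(v_i)$ for $0\leq t\leq T_i$.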
Proposition \[Riemannextension\] showed that $\Phi^H_{_m}:\,\mathcal{T}^H_{_{m}}\to N_+\cap D$ is a bounded map, which implies that the image of $\Phi^H_{_{m}}$ is bounded with respect to the Euclidean metric on $N_+$. Because a linear projection will map a bounded set to a bounded set, we have that the image of $\tau^H_{_{m}}=P\circ\Phi^H_{_{m}}$ is also bounded in $\mathbb{C}^N$ with respect to the Euclidean metric on $\mathbb{C}^N$. Passing to a subsequence, we may therefore assume that $\{T_i\}$ and $\{v_i\}$ converge, with $\lim_{i\to \infty}T_i=T_\infty$ and $\lim_{i\to \infty}v_i=v_\infty$, respectively. Let $l_\infty(t)$ be the local solution of the affine geodesic equation with initial conditions $\frac{\partial}{\partial t}l_{\infty}(0)=v_\infty$ and $l_\infty(0)=p$. We claim that the solution $l_\infty(t)$ exists for $t\in [0, T_\infty]$. Consider the set $$\begin{aligned}
E_\infty:=\{a:\,l_\infty(t)\quad\text{exists for}\quad t\in[0, a)\}.\end{aligned}$$ If $E_\infty$ is unbounded above, then the claim clearly holds. Otherwise, we let $a_\infty=\sup E_\infty$, and our goal is to show $a_\infty>T_\infty$. Suppose towards a contradiction that $a_\infty\leq T_\infty$. We then define the sequence $\{a_k\}_{k=1}^\infty$ so that $a_k/T_k=a_\infty /T_\infty$. We have $a_k\leq T_k$ and $\lim_{k\to \infty}a_k=a_\infty$. Using the continuous dependence of solutions of the geodesic equation on initial data, we conclude that the sequence $\{l_{k}(a_k)\}_{k=1}^\infty$ is a Cauchy sequence. As $\mathcal{T}^H_{_m}$ is a complete space, the sequence $\{l_{k}(a_k)\}_{k=1}^\infty$ converges to some $q'\in \mathcal{T}^H_{_m}$. Let us define $l_{\infty}(a_\infty):=q'$. Then the solution $l_{\infty}(t)$ exists for $t\in [0, a_\infty]$. On the other hand, as $\mathcal{T}^H_{_m}$ is a smooth manifold, we have that $q'$ is an inner point of $\mathcal{T}^H_{_m}$. Thus the affine geodesic equation has a local solution at $q'$ which extends the geodesic $l_\infty$. That is, there exists $\epsilon>0$ such that the solution $l_\infty(t)$ exists for $t\in [0, a_\infty+\epsilon)$. This contradicts the fact that $a_\infty$ is an upper bound of $E_\infty$. We have therefore proven that $l_{\infty}(t)$ exists for $t\in [0, T_\infty]$.
Using the continuous dependence of solutions of the affine geodesic equations on the initial data again, we get $$l_\infty (T_\infty)=\lim\limits_{k\to \infty}l_k(T_k)=\lim\limits_{k\to \infty}q_k=q_\infty,$$ where $q_\infty\in\mathcal{T}^H_{_m}$ denotes the limit of the Cauchy sequence $\{q_k\}_{k=1}^\infty$, which exists since $\mathcal{T}^H_{_m}$ is complete. This means that the limit point $q_{\infty}\in S$, and hence $S$ is a closed set.
Let us now show that $S$ is an open set. Let $q\in S$. Then there exists a straight line $l$ connecting $p$ and $q$. For each point $x\in l$ there exists an open neighborhood $U_x\subseteq \mathcal{T}^H_{_m}$ with diameter $2r_x$. The collection of $\{U_x\}_{x\in l}$ forms an open cover of $l$. But $l$ is a compact set, so there is a finite subcover $\{U_{x_i}\}_{i=1}^K$ of $l$. Then the straight line $l$ is covered by a cylinder $C_r$ in $\mathcal{T}^H_{_m}$ of radius $r=\min\{r_{x_i}:~1\leq i\leq K\}$. As $C_r$ is a convex set, each point in $C_r$ can be connected to $p$ by a straight line. Therefore we have found an open neighborhood $C_r$ of $q\in S$ such that $C_r\subseteq S$, which implies that $S$ is an open set.
As $S$ is a non-empty, open and closed subset in the connected space $\mathcal{T}^H_{_m}$, we conclude that $S=\mathcal{T}^H_{_m}$, as we desired.
Let $p, q\in \mathcal{T}^H_{_m}$ be two different points. Then Lemma \[straightline\] implies that there is a straight line $l\subseteq \mathcal{T}^H_{_m}$ connecting $p$ and $q$. Since $\tau^H_{_m}:\, {\mathcal T}^H_{_m}\rightarrow \mathbb{C}^N$ is affine, the restriction $\tau^H_{_m}|_l$ is an affine map of the line parameter. Suppose towards a contradiction that $\tau^H_{_m}(p)=\tau^H_{_m}(q)\in \mathbb{C}^N$. Then the restriction of $\tau^H_{_m}$ to the straight line $l$ is a constant map, since an affine map taking the same value at the two endpoints of $l$ must be constant on $l$. By Lemma \[injofPsi\], we know that $\tau^H_{_m}: \, \mathcal{T}^H_{_m}\to \mathbb{C}^N$ is locally injective. Therefore, we may take $U_p$ to be a neighborhood of $p$ in $\mathcal{T}^H_{_m}$ such that $\tau^H_{_m}: \, U_p\rightarrow\mathbb{C}^N$ is injective. However, the intersection of $U_p$ and $l$ contains infinitely many points, while the restriction of $\tau^H_{_m}$ to $U_p\cap l$ is a constant map. This contradicts the injectivity of $\tau^H_{_m}$ on $U_p$. Thus $\tau^H_{_m}(p)\neq \tau^H_{_m}(q)$ if $p\neq q\in \mathcal{T}^H_{_m}$.
Since $\tau^H_m=P\circ \Phi^H_{_m}$, where $P$ is the projection map, $\tau^H_m$ is injective, and $\Phi^H_{_m}$ is a bounded map, we get
\[embedTHmintoCN\]The completion space $\mathcal{T}^H_{_m}$ is a bounded domain in $\mathbb{C}^N$.
\[injective PhiHm\]The holomorphic map $\Phi^H_{_m}: \,\mathcal{T}^H_{_m}\rightarrow N_+\cap D$ is also an injection.
Global Torelli and Domain of holomorphy {#completionspace}
=======================================
In this section, we define the completion space $\mathcal{T}^H$ by $\mathcal{T}^H=\mathcal{T}^H_{_m}$, and the extended period map $\Phi^H$ by $\Phi^H=\Phi^H_{_m}$ for any $m\geq 3$, after proving that $\mathcal{T}^H_{_m}$ does not depend on the choice of the level $m$. Therefore $\mathcal{T}^H$ is a complex affine manifold and $\Phi^H$ is a holomorphic injection. We then prove the main result, Theorem \[main theorem\], which asserts that $\mathcal{T}^H$ is the completion space of $\mathcal{T}$ with respect to the Hodge metric and that it is a bounded domain of holomorphy in $\mathbb{C}^N$. As a direct corollary, we get the global Torelli theorem for the period map from the Teichmüller space to the period domain.
For any two integers $m, m'\geq 3$, let $\mathcal{Z}_m$ and $\mathcal{Z}_{m'}$ be the smooth quasi-projective manifolds as in Theorem \[Szendroi theorem 2.2\] and let $\mathcal{Z}^H_{_m}$ and $\mathcal{Z}^H_{m'}$ be their completions with respect to the Hodge metric. Let $\mathcal{T}^H_{_m}$ and $\mathcal{T}^H_{m'}$ be the universal cover spaces of $\mathcal{Z}^H_{_m}$ and $\mathcal{Z}_{{m'}}^H$ respectively, then we have the following proposition.
\[indepofm\] The complete complex manifolds $\mathcal{T}^H_m$ and $\mathcal{T}^H_{m'}$ are biholomorphic to each other.
By definition, $\mathcal{T}_m=i_m(\mathcal{T})$, $\mathcal{T}_{m'}=i_{m'}(\mathcal{T})$ and $\Phi_m=\Phi^H_{_m}|_{\mathcal{T}_m}$, $\Phi_{m'}=\Phi^H_{m'}|_{\mathcal{T}_{m'}}$. Because $\Phi^H_m$ and $\Phi^H_{m'}$ are embeddings, $\mathcal{T}_m\cong \Phi^H_{_m}(\mathcal{T}_m)$ and $\mathcal{T}_{m'}\cong\Phi^H_{m'}(\mathcal{T}_{m'})$. Since the composition maps $\Phi^H_{_m}\circ i_m=\Phi$ and $\Phi^H_{m'}\circ i_{m'}=\Phi$, we get $\Phi^H_{_m}(i_m(\mathcal{T}))=\Phi(\mathcal{T})=\Phi^H_{m'}(i_{m'}(\mathcal{T}))$. Since $\Phi$ and $\mathcal{T}$ are both independent of the choice of the level structures, so is the image $\Phi(\mathcal{T})$. Then $\mathcal{T}_{m}\cong\Phi(\mathcal{T})\cong\mathcal{T}_{m'}$ biholomorphically, and they do not depend on the choice of level structures. Moreover, Proposition \[opend\] implies that $\mathcal{T}^H_m$ and $\mathcal{T}^H_{m'}$ are Hodge metric completion spaces of $\mathcal{T}_m$ and $\mathcal{T}_{m'}$, respectively. Thus the uniqueness of the metric completion spaces implies that $\mathcal{T}^H_m$ is biholomorphic to $\mathcal{T}^H_{m'}$.
Proposition \[indepofm\] shows that $\mathcal{T}^H_{_m}$ is independent of the choice of the level $m$ structure, and it allows us to give the following definitions.
We define the complete complex manifold $\mathcal{T}^H=\mathcal{T}^H_{_m}$, the holomorphic map $i_{\mathcal{T}}: \,\mathcal{T}\to \mathcal{T}^H$ by $i_{\mathcal{T}}=i_m$, and the extended period map $\Phi^H:\, \mathcal{T}^H\rightarrow D$ by $\Phi^H=\Phi^H_{_m}$ for any $m\geq 3$. In particular, with these new notations, we have the commutative diagram $$\begin{aligned}
\xymatrix{\mathcal{T}\ar[r]^{i_{\mathcal{T}}}\ar[d]^{\pi_m}&\mathcal{T}^H\ar[d]^{\pi^H_{_m}}\ar[r]^{{\Phi}^{H}}&D\ar[d]^{\pi_D}\\
\mathcal{Z}_m\ar[r]^{i}&\mathcal{Z}^H_{_m}\ar[r]^{{\Phi}_{_{\mathcal{Z}_m}}^H}&D/\Gamma.
}\end{aligned}$$
The main result of this section is the following,
\[main theorem\] The complex manifold $\mathcal{T}^H$ is a complex affine manifold which can be embedded into $\mathbb{C}^N$ and it is the completion space of $\mathcal{T}$ with respect to the Hodge metric. Moreover, the extended period map $\Phi^H: \,\mathcal{T}^H\rightarrow N_+\cap D$ is a holomorphic injection.
By the definition of $\mathcal{T}^H$ and Theorem \[THmaffine\], it is easy to see that $\mathcal{T}^H_{_m}$ is a complex affine manifold, which can be embedded into $\mathbb{C}^N$. It is also not hard to see that the injectivity of $\Phi^H$ follows directly from Corollary \[injective PhiHm\] by the definition of $\Phi^H$. Now we only need to show that $\mathcal{T}^H$ is the Hodge metric completion space of $\mathcal{T}$, which follows from the following lemma.
\[injectivity of i\] The map $i_{\mathcal{T}}:\,\mathcal{T}\to \mathcal{T}^H$ is an embedding.
On one hand, define $\mathcal{T}_0$ to be $\mathcal{T}_0=\mathcal{T}_m$ for any $m\geq 3$, as $\mathcal{T}_m$ doesn’t depend on the choice of level $m$ structure according to the proof of Proposition \[indepofm\]. Since $\mathcal{T}_{0}=(\pi^H_{_m})^{-1}(\mathcal{Z}_m)$, $\pi_m^H:\, \mathcal{T}_0\to \mathcal{Z}_m$ is a covering map. Thus the fundamental group of $\mathcal{T}_0$ is a subgroup of the fundamental group of $\mathcal{Z}_m$, that is, $ \pi_1(\mathcal{T}_0)\subseteq \pi_1(\mathcal{Z}_m),$ for any $m\geq 3$. Moreover, the universal property of the universal covering map $\pi_m: \,\mathcal{T}\to \mathcal{Z}_m$ with $\pi_m=\pi_{_m}^H|_{\mathcal{T}_0}\circ i_{\mathcal{T}}$ implies that $i_{\mathcal{T}}:\,\mathcal{T}\to \mathcal{T}_0$ is also a covering map.
On the other hand, let $\{m_k\}_{k=1}^{\infty}$ be a sequence of positive integers such that $m_k< m_{k+1}$ and $m_k|m_{k+1}$ for each $k\geq 1$. Then there is a natural covering map from $\mathcal{Z}_{m_{k+1}}$ to $\mathcal{Z}_{m_{k}}$ for each $k$. In fact, because each point in $\mathcal{Z}_{m_{k+1}}$ is a polarized Calabi–Yau manifold with a basis $\gamma_{_{m_{k+1}}}$ for the space $(H_n(M,\mathbb{Z})/\text{Tor})/m_{k+1}(H_n(M,\mathbb{Z})/\text{Tor})$ and $m_k|m_{k+1}$, the basis $\gamma_{_{m_{k+1}}}$ induces a basis for the space $(H_n(M,\mathbb{Z})/\text{Tor})/m_{k}(H_n(M,\mathbb{Z})/\text{Tor})$. Therefore we get a well-defined map $\mathcal{Z}_{m_{k+1}}\rightarrow \mathcal{Z}_{m_k}$ by assigning to a polarized Calabi–Yau manifold with the basis $\gamma_{m_{k+1}}$ the same polarized Calabi–Yau manifold with the basis $(\gamma_{m_{k+1}} (\text{mod} \ m_k))\in(H_n(M,\mathbb{Z})/\text{Tor})/m_{k}(H_n(M,\mathbb{Z})/\text{Tor})$. Moreover, using the versal properties of both the families $\mathcal{X}_{m_{k+1}}\to \mathcal{Z}_{m_{k+1}}$ and $\mathcal{X}_{m_k}\to \mathcal{Z}_{m_k}$, we can conclude that the map $\mathcal{Z}_{m_{k+1}}\to \mathcal{Z}_{m_k}$ is locally biholomorphic. Therefore, $\mathcal{Z}_{m_{k+1}}\rightarrow\mathcal{Z}_{m_k}$ is actually a covering map. Thus the fundamental group $\pi_1(\mathcal{Z}_{m_{k+1}})$ is a subgroup of $\pi_1(\mathcal{Z}_{m_{k}})$ for each $k$. Hence, the inverse system of fundamental groups $$\begin{aligned}
\xymatrix{
\pi_1(\mathcal{Z}_{m_1})&\pi_1(\mathcal{Z}_{m_2})\ar[l]&\cdots\cdots\ar[l]&\pi_1(\mathcal{Z}_{m_k})\ar[l]&\cdots\ar[l]
}\end{aligned}$$ has an inverse limit, which is the fundamental group of $\mathcal{T}$. Because $\pi_1(\mathcal{T}_0)\subseteq \pi_1(\mathcal{Z}_{m_{k}})$ for any $k$, we have the inclusion $\pi_1(\mathcal{T}_0)\subseteq\pi_1(\mathcal{T})$. But $\pi_1(\mathcal{T})$ is a trivial group since $\mathcal{T}$ is simply connected, thus $\pi_1(\mathcal{T}_0)$ is also a trivial group. Therefore the covering map $i_{\mathcal{T}}:\,\mathcal{T}\to \mathcal{T}_0$ is a one-to-one covering. This shows that $i_{\mathcal{T}}: \,\mathcal{T}\rightarrow \mathcal{T}^H$ is an embedding.
There is another approach to Lemma \[injectivity of i\], which is a proof by contradiction. Suppose towards a contradiction that there were two points $p\neq q\in \mathcal{T}$ such that $i_{\mathcal{T}}(p)=i_{\mathcal{T}}(q)\in \mathcal{T}^H$.
On the one hand, since each point in $\mathcal{T}$ represents a polarized and marked Calabi–Yau manifold, $p$ and $q$ are actually triples $(M_p, L_p, \gamma_p)$ and $(M_q, L_q, \gamma_q)$ respectively, where $\gamma_p$ and $\gamma_q$ are two bases of $H_n(M, \mathbb{Z})/\text{Tor}$. On the other hand, each point in $\mathcal{Z}_{m}$ represents a triple $(M, L, \gamma_m)$ with $\gamma_m$ a basis of $(H_n(M,\mathbb{Z})/\text{Tor})/m(H_n(M, \mathbb{Z})/\text{Tor})$ for any $m\geq 3$. By the assumption that $i_{\mathcal{T}}(p)=i_{\mathcal{T}}(q)$ and the relation that $i\circ \pi_m=\pi^H_{_m}\circ i_{\mathcal{T}}$, we have $i\circ\pi_m(p)=i\circ\pi_m(q)\in \mathcal{Z}_m$ for any $m\geq 3$. In particular, for any $m\geq 3$, the images of $(M_p, L_p, \gamma_p)$ and $(M_q, L_q, \gamma_q)$ under $\pi_m$ are the same in $\mathcal{Z}_m$. This implies that there exists a biholomorphic map $f_{pq}:\, M_p\to M_q$ such that $f_{pq}^*(L_q)=L_p$ and $f_{pq}^*(\gamma_q)= \gamma_p\cdot A$, where $A$ is an integer matrix satisfying $$\begin{aligned}
\label{matrix A}
A=(A_{ij})\equiv\text{Id}\quad(\text{mod } m)\quad \quad\text{for any}\quad m\geq 3.\end{aligned}$$ Let $|A_{ij}|$ be the absolute value of the $ij$-th entry of the matrix $(A_{ij})$. Since (\[matrix A\]) holds for any $m\geq 3$, we can choose an integer $m_0$ greater than $|A_{ij}|+1$ for all $i,j$, so that in particular $$\begin{aligned}
A=(A_{ij})\equiv\text{Id}\quad(\text{mod } m_0). \end{aligned}$$ Since every entry of $A-\text{Id}$ is divisible by $m_0$ and has absolute value at most $\max_{i,j}|A_{ij}|+1<m_0$, every entry of $A-\text{Id}$ must vanish, that is, $A=\text{Id}$. Therefore, we have found a biholomorphic map $f_{pq}:\, M_p\to M_q$ such that $f_{pq}^*(L_q)=L_p$ and $f_{pq}^*(\gamma_q)= \gamma_p$. This implies that the two triples $(M_p, L_p, \gamma_p)$ and $(M_q, L_q, \gamma_q)$ are equivalent to each other. Therefore, $p$ and $q$ are actually the same point in $\mathcal{T}$. This contradicts our assumption that $p\neq q$.
Since $\Phi=\Phi^H\circ i_{\mathcal{T}}$ with both $\Phi^H$ and $i_{\mathcal{T}}$ embeddings, we get the global Torelli theorem for the period map from the Teichmüller space to the period domain as follows.
The period map $\Phi:\, \mathcal{T}\rightarrow D$ is injective.
As another important consequence, we prove the following property of $\mathcal{T}^H$.
\[important result\] The completion space $\mathcal{T}^H$ is a bounded domain of holomorphy in $\mathbb{C}^N$; thus there exists a complete Kähler–Einstein metric on $\mathcal{T}^H$.
We recall that a $\mathcal{C}^2$ function $\rho:\, \Omega\rightarrow \mathbb{R}$ on a domain $\Omega\subseteq\mathbb{C}^n$ is *plurisubharmonic* if and only if its Levi form is positive semi-definite at each point in $\Omega$. Given a domain $\Omega\subseteq \mathbb{C}^n$, a function $f:\, \Omega\rightarrow \mathbb{R}$ is called an *exhaustion function* if for any $c\in \mathbb{R}$, the set $\{z\in \Omega\,|\, f(z)< c\}$ is relatively compact in $\Omega$. The following well-known theorem provides a characterization of domains of holomorphy. For example, one may refer to [@Hom] for details.
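Recall that, in the coordinates $(z_1, \cdots, z_n)$ of $\mathbb{C}^n$, the Levi form of a $\mathcal{C}^2$ function $\rho$ at a point $z\in\Omega$ is the Hermitian form $$\begin{aligned}
L(\rho)_z(v)=\sum_{i,j=1}^{n}\frac{\partial^2 \rho}{\partial z_i\partial \bar{z}_j}(z)\, v_i\bar{v}_j, \qquad v=(v_1,\cdots,v_n)\in\mathbb{C}^n,\end{aligned}$$ so that plurisubharmonicity of $\rho$ amounts to $L(\rho)_z(v)\geq 0$ for all $z\in\Omega$ and $v\in\mathbb{C}^n$; here $L(\rho)$ is simply our notation for the Levi form.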
An open set $\Omega\subseteq \mathbb{C}^n$ is a domain of holomorphy if and only if there exists a continuous plurisubharmonic function $f:\, \Omega\to \mathbb{R}$ such that $f$ is also an exhaustion function.
The following theorem in Section 3.1 of [@GS] gives us the basic ingredients to construct a plurisubharmonic exhaustion function on $\mathcal{T}^H$.
\[general f\] On every manifold $D$, which is dual to a Kähler C-space, there exists an exhaustion function $f: \, D\rightarrow \mathbb{R}$, whose Levi form, restricted to $T^{1, 0}_h(D)$, is positive definite at every point of $D$.
We remark that in this proposition, in order to show $f$ is an exhaustion function on $D$, Griffiths and Schmid showed that the set $f^{-1}(-\infty, c]$ is compact in $D$ for any $c\in \mathbb{R}$.
\[GTonTH\] The extended period map $\Phi^H: \,\mathcal{T}^H\rightarrow D$ still satisfies the Griffiths transversality.
Let us consider the tangent bundles $T^{1,0}\mathcal{T}^H$ and $T^{1,0}D$ as two differential manifolds, and the tangent map $(\Phi^H)_*:\,T^{1,0}\mathcal{T}^H \to T^{1,0}D$ as a continuous map. We only need to show that the image of $(\Phi^H)_*$ is contained in the horizontal tangent bundle $T^{1,0}_hD$.
The horizontal subbundle $T^{1,0}_hD$ is a closed subset of $T^{1,0}D$, so the preimage of $T^{1,0}_hD$ under $(\Phi^H)_*$ is a closed subset of $T^{1,0}\mathcal{T}^H$. On the other hand, because the period map $\Phi$ satisfies the Griffiths transversality, the image of $\Phi_*$ is in the horizontal subbundle $T^{1,0}_hD$. This means that the preimage of $T^{1,0}_hD$ under $(\Phi^H)_*$ contains $T^{1,0}\mathcal{T}$, and hence also its closure in $T^{1,0}\mathcal{T}^H$, which is all of $T^{1,0}\mathcal{T}^H$. This finishes the proof.
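For reference, we recall the standard description of horizontality (see, e.g., [@GS]): under the identification of tangent vectors with block matrices used in the proof of Lemma \[injofPsi\], the conclusion of Lemma \[GTonTH\] reads $$\begin{aligned}
(\Phi^H)_*\left(T^{1,0}\mathcal{T}^H\right)\subseteq T^{1,0}_hD \subseteq \bigoplus_{1\leq k\leq n}\text{Hom}(F^{k}/F^{k+1},\, F^{k-1}/F^{k}),\end{aligned}$$ that is, a horizontal tangent vector shifts each step of the Hodge filtration by at most one.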
By Theorem \[main theorem\], we can see that $\mathcal{T}^H$ is a bounded domain in $\mathbb{C}^N$. Therefore, once we show that $\mathcal{T}^H$ is a domain of holomorphy, the existence of a complete Kähler–Einstein metric on it follows directly from the well-known theorem of Mok–Yau in [@MokYau].
In order to show that $\mathcal{T}^H$ is a domain of holomorphy in $\mathbb{C}^N$, it is enough to construct a plurisubharmonic exhaustion function on $\mathcal{T}^H$.
Let $f$ be the exhaustion function on $D$ constructed in Proposition \[general f\], whose Levi form, when restricted to the horizontal tangent bundle $T^{1,0}_hD$ of $D$, is positive definite at each point of $D$. We claim that the composition function $f\circ \Phi^H$ is the desired plurisubharmonic exhaustion function on $\mathcal{T}^H$.
By the Griffiths transversality of $\Phi^H$, the composition function $f\circ \Phi^H:\,\mathcal{T}^H\to \mathbb{R}$ is a plurisubharmonic function on $\mathcal{T}^H$. Thus it suffices to show that the function $f\circ \Phi^H$ is an exhaustion function on $\mathcal{T}^H$, that is, that for any constant $c\in\mathbb{R}$, the set $(f\circ\Phi^H)^{-1}(-\infty, c]=(\Phi^H)^{-1}\left(f^{-1}(-\infty, c]\right)$ is a compact set in $\mathcal{T}^H$. Indeed, it has already been shown in [@GS] that the set $f^{-1}(-\infty, c]$ is a compact set in $D$. Now for any sequence $\{p_k\}_{k=1}^\infty\subseteq (f\circ\Phi^H)^{-1}(-\infty, c]$, we have $\{\Phi^H(p_k)\}_{k=1}^\infty\subseteq f^{-1}(-\infty, c]$. Since $f^{-1}(-\infty, c]$ is compact in $D$, the sequence $\{\Phi^H(p_k)\}_{k=1}^\infty$ has a convergent subsequence. We denote this convergent subsequence by $\{\Phi^H(p_{k_n})\}_{n=1}^\infty\subseteq \{\Phi^H(p_k)\}_{k=1}^\infty$ with $k_n<k_{n+1}$, and denote $\lim_{n\to \infty}\Phi^H(p_{k_n})=o_\infty\in D$. On the other hand, since the map $\Phi^H$ is injective and the Hodge metric on $\mathcal{T}^H$ is induced from the Hodge metric on $D$, the extended period map $\Phi^H$ is actually a global isometry onto its image. Therefore the sequence $\{p_{k_n}\}_{n=1}^\infty$ is also a Cauchy sequence with respect to the Hodge metric; by the completeness of $\mathcal{T}^H$ it converges to some point $p_\infty\in\mathcal{T}^H$ with $\Phi^H(p_\infty)=o_\infty$, and $p_\infty$ lies in $(f\circ\Phi^H)^{-1}(-\infty, c]$ since this set is closed. In this way, we have proved that any sequence in $(f\circ\Phi^H)^{-1}(-\infty, c]$ has a convergent subsequence. Therefore $(f\circ\Phi^H)^{-1}(-\infty, c]$ is compact in $\mathcal{T}^H$, as was needed to show.
Two topological lemmas {#topological lemmas}
======================
In this appendix we first prove the existence of the choice of $i_{m}$ and $\Phi^H_{_m}$ in diagram (\[cover maps\]) such that $\Phi=\Phi^H_m\circ i_{m}$. Then we show a lemma that relates the fundamental group of the moduli space of Calabi–Yau manifolds to that of the completion space of $\mathcal{Z}_m$ with respect to the Hodge metric. The arguments only use elementary topology and the results may be well known. We include their proofs here for the sake of completeness.
\[choice\] There exists a suitable choice of $\ i_{m}$ and $\Phi_{_m}^H$ such that $\Phi_{_m}^H\circ i_{m}=\Phi.$
Recall the following commutative diagram: $$\begin{aligned}
\xymatrix{\mathcal{T}\ar[r]^{i_{m}}\ar[d]^{\pi_m}&\mathcal{T}^H_{_m}\ar[d]^{\pi_{_m}^H}\ar[r]^{{\Phi}^{H}_{_m}}&D\ar[d]^{\pi_D}\\
\mathcal{Z}_m\ar[r]^{i}&\mathcal{Z}^H_{_m}\ar[r]^{{\Phi}_{_{\mathcal{Z}_m}}^H}&D/\Gamma.
}\end{aligned}$$ Fix a reference point $p\in \mathcal{T}$. The relations $ i\circ\pi_m=\pi^H_{_m}\circ i_{m}$ and $\Phi_{\mathcal{Z}_m}^H\circ \pi^H_{_m} = \pi_D\circ\Phi^H_{_m}$ imply that $\pi_D\circ\Phi^H_{_m}\circ i_{m} = \Phi_{\mathcal{Z}_m}^H\circ i\circ\pi_m= \Phi_{\mathcal{Z}_m}\circ\pi_m.$ Therefore $\Phi^H_{_m}\circ i_{m}$ is a lifting map of $\Phi_{\mathcal{Z}_m}$. On the other hand $\Phi: \, \mathcal{T}\to D$ is also a lifting of $\Phi_{\mathcal{Z}_m}$. In order to make $\Phi^H_{_m}\circ i_{m}=\Phi$, one only needs to choose the suitable $ i_{m}$ and $\Phi^H_{_m}$ such that these two maps agree on the reference point, i.e. $\Phi^H_{_m}\circ i_{m}(p)=\Phi(p).$
For an arbitrary choice of $ i_{m}$, we have $ i_{m}(p)\in {\mathcal{T}}_m^H$ and $\pi^H_{_m}( i_{m}(p))=i(\pi_m(p))$. Considering the point $ i_{m}(p)$ as a reference point in ${\mathcal{T}}_m^H$, we can choose $\Phi^H_{_m}( i_{m}(p))$ to be any point from $\pi_D^{-1}(\Phi^H_{\mathcal{Z}_m}( i(\pi_m(p)))) = \pi_D^{-1}(\Phi_{\mathcal{Z}_m}(\pi_m(p))).$ Moreover the relation $\pi_D(\Phi(p))=\Phi_{\mathcal{Z}_m}(\pi_m(p))$ implies that $\Phi(p)\in \pi_D^{-1}(\Phi_{\mathcal{Z}_m}(\pi_m(p)))$. Therefore we can choose $\Phi^H_{_m}$ such that $\Phi^H_{_m}( i_{m}(p))=\Phi(p).$ With this choice, we have $\Phi^H_{_m}\circ i_{m}=\Phi$.
\[fund\] Let $\pi_1(\mathcal{Z}_m)$ and $\pi_1(\mathcal{Z}_m^H)$ be the fundamental groups of $\mathcal{Z}_m$ and $\mathcal{Z}_m^H$ respectively, and suppose the group morphism $$i_*: \, \pi_1(\mathcal{Z}_m) \to \pi_1(\mathcal{Z}_m^H)$$ is induced by the inclusion $i: \, \mathcal{Z}_m\to \mathcal{Z}_m^H$. Then $i_*$ is surjective.
First notice that $\mathcal{Z}_m$ and $\mathcal{Z}_m^H$ are both smooth manifolds, and $\mathcal{Z}_m\subseteq \mathcal{Z}_m^H$ is open. Thus for each point $q\in \mathcal{Z}_m^H\setminus \mathcal{Z}_m$ there is a disc $D_q\subseteq \mathcal{Z}_m^H$ with $q\in D_q$. Then the union of these discs $$\bigcup_{q\in \mathcal{Z}_m^H\setminus \mathcal{Z}_m}D_q$$ forms a manifold with the open cover $\{D_q:~q\in \mathcal{Z}_m^H\setminus \mathcal{Z}_m\}$. Because both $\mathcal{Z}_m$ and $\mathcal{Z}_m^H$ are second countable spaces, there is a countable subcover $\{D_i\}_{i=1}^\infty$ such that $\mathcal{Z}_m^H=\mathcal{Z}_m\cup\bigcup\limits_{i=1}^\infty D_i,$ where each $D_i$ is an open disc in $\mathcal{Z}_m^H$. Since each $D_i$ is a disc, we have $\pi_1(D_i)=0$ for all $i\geq 1$. Letting $\mathcal{Z}_{m,k}=\mathcal{Z}_m\cup\bigcup\limits_{i=1}^k D_i$, we get $$\begin{aligned}
\pi_1(\mathcal{Z}_{m,k})*\pi_1(D_{k+1})=\pi_1(\mathcal{Z}_{m,k})=\pi_1(\mathcal{Z}_{m,{k-1}}\cup D_k), \quad\text{ for any } k.\end{aligned}$$ We know that $\mathrm{codim}_{\mathbb{C}}(\mathcal{Z}_m^H\backslash \mathcal{Z}_m)\geq 1.$ Therefore, since $D_{k+1}\backslash\mathcal{Z}_{m,k}\subseteq D_{k+1}\backslash \mathcal{Z}_m$, we have $\mathrm{codim}_{\mathbb{C}}\left[D_{k+1}\backslash(D_{k+1}\cap \mathcal{Z}_{m,k})\right]\geq 1$ for any $k$. As a consequence, $D_{k+1}\cap \mathcal{Z}_{m,k}$ is path-connected. Hence we can apply the Van Kampen Theorem to $X_k=D_{k+1}\cup \mathcal{Z}_{m,k}$ and conclude that for every $k$ the following group homomorphism is surjective: $$\xymatrix{\pi_1(\mathcal{Z}_{m,k})=\pi_1(\mathcal{Z}_{m,k})*\pi_1(D_{k+1})\ar@{->>}[r]&\pi_1(\mathcal{Z}_{m,k}\cup D_{k+1})=\pi_1(\mathcal{Z}_{m,{k+1}}).}$$ Thus we get the directed system: $$\xymatrix{\pi_1(\mathcal{Z}_m)\ar@{->>}[r]&\pi_1(\mathcal{Z}_{m,1})\ar@{->>}[r]&\cdots\cdots\ar@{->>}[r]&\pi_1(\mathcal{Z}_{m,k})\ar@{->>}[r]&\cdots\cdots}$$ By taking the direct limit of this directed system, we obtain the surjectivity of the group homomorphism $\pi_1(\mathcal{Z}_m)\to\pi_1(\mathcal{Z}_m^H)$.
[99]{}
J. Carlson, A. Kasparian, and D. Toledo, **58**(1989), pp. 669-694.
J. Carlson, S. Muller-Stach, and C. Peters, , .
X. Chen, F. Guan and K. Liu, [arXiv:1205.4207v3]{}.
E. Cattani, F. El Zein, P. A. Griffiths, L. D. Trang, (2014).
F. Forstnerič, (2011).
H. Grauert, R. Remmert, (2011).
M. Green, P. Griffiths, and C. Robles, Extremal degenerations of polarized Hodge structures.
P. Griffiths, , **90**(1968), pp. 568-626, pp. 805-865.
P. Griffiths, 38(1970), pp. 228-296.
P. Griffiths and W. Schmid, , **123**(1969), pp. 253-302.
Harish-Chandra, **78**(1956), pp. 564-628.
S. Helgason, .
L. Hörmander, , .
S. Kobayashi and K. Nomizu, , Wiley-Interscience, New York-London, (1963).
K. Kodaira and J. Morrow, AMS Chelsea Publishing, Providence, RI, (2006), reprint of the 1971 edition with errata.
J. Lee, , , , ( 2nd ed. 2013).
Zhiqin Lu, .
J. Milne (2011), 2, 467-548.
N. Mok, , World Scientific, Singapore-New Jersey-London-HongKong, (1989).
N. Mok and S. T. Yau, , **39**(1983), American Mathematics Society, Providence, Rhode Island, 41–60.
Y. Matsushima, Affine structure on complex manifold, **5**(1968), pp. 215–222.
R. Mayer, **352, No.5**(2000), pp. 2121-2144.
H. Popp, Volume 620 of [*Lecture Notes in Mathematics*]{}, Springer-Verlag, Berlin-New York, (1977).
W. Schmid, Variation of [H]{}odge structure: the singularities of the period mapping, , **22**(1973), pp. 211–319.
E. Cattani, A. Kaplan, and W. Schmid, Degeneration of [H]{}odge structures, (1986), 123(3), pp. 457–535.
Y. Shimizu and K. Ueno, , Volume 206 of [*Translation of Mathematical Monographs*]{}, American Mathematics Society, Providence, Rhode Island, (2002).
M. Sugiura, **11**(1959), pp. 374–434.
M. Sugiura, **23**(1971), pp. 374–383.
B. Szendröi, Some finiteness results for [C]{}alabi-[Y]{}au threefolds, Second series, **60**(1999), pp. 689–699.
G. Tian, Smoothness of the universal deformation space of compact [C]{}alabi-[Y]{}au manifolds and its [P]{}etersson-[W]{}eil metric, (1986), World Sci. Publishing, Singapore, [*Adv. Ser. Math. Phys.*]{}, **1**(1987), pp. 629–646.
A. N. Todorov, The [W]{}eil-[P]{}etersson geometry of the moduli space of [${\rm SU}(n\geq 3)$]{} ([C]{}alabi-[Y]{}au) manifolds. [I]{}, **126**(1989), pp. 325–346.
K. Kato and S. Usui, volume 169 of [*Annals of Mathematics Studies*]{}, Princeton University Press, Princeton, NJ, (2009).
E. Viehweg, , Volume 30 of [*Ergebnisse der Mathematik und ihrer Grenzgebiete (3) \[Results in Mathematics and Related Areas (3)\]*]{}, Springer-Verlag, Berlin, (1995).
C. Voisin, Cambridge University Press, New York, (2002).
Y. Xu, , , (2001). (Chinese)
Derivation of the single excitation contribution to the 2nd order correlation energy
====================================================================================
In this section we derive Eq. (1) that is presented in the main part of this Letter – the single excitation contribution to the 2nd-order correlation energy – from Rayleigh-Schrödinger perturbation theory (RSPT). The interacting $N$-electron system at hand is governed by the Hamiltonian $$\hat{H} = \sum_{j=1}^{N}\left[-\frac{1}{2}\nabla^2_j + \hat{v}_\text{ext} ({{\bf r}}_j) \right] +
\sum_{j<k}^{N} \frac{1}{|{{\bf r}}_j -{{\bf r}}_k|}, \nonumber$$ where $\hat{v}_\text{ext}({{\bf r}})$ is a local, multiplicative external potential. In RSPT, $\hat{H}$ is partitioned into a non-interacting mean-field Hamiltonian $\hat{H}^0$ and an interacting perturbation $\hat{H}'$, $$\begin{aligned}
\hat{H} & =& \hat{H}^0 + \hat{H}' \nonumber \\
\hat{H}^0 &=& \sum_{j=1}^{N} \hat{h}^0 (j) = \sum_{j=1}^{N} \left[-\frac{1}{2}\nabla^2_j + \hat{v}_\text{ext} ({{\bf r}}_j) +
\hat{v}^\text{MF}_j \right] \nonumber \\
\hat{H}' &=& \sum_{j<k}^{N} \frac{1}{|{{\bf r}}_j -{{\bf r}}_k|} - \sum_{j=1}^N \hat{v}^\text{MF}_j. \nonumber
\end{aligned}$$ Here $\hat{v}^\text{MF}$ is some mean-field-type single-particle potential, which can be non-local, as in the case of Hartree-Fock (HF) theory, or local, as in the case of Kohn-Sham (KS) theory.
Suppose the solutions of the single-particle Hamiltonian $\hat{h}^0$ are known, $$\hat{h}^0 |\psi_p\rangle = \epsilon_p |\psi_p\rangle,
\label{Eq:SP_eigeneq}$$ then the eigenstates of the non-interacting many-body Hamiltonian $\hat{H}^0$ follow from $$\hat{H}^0 |\Phi_n \rangle = E^{(0)}_n |\Phi_n \rangle \nonumber .$$ The $|\Phi_n \rangle$ are Slater determinants formed from $N$ of the spin orbitals $|p\rangle = |\psi_p\rangle$ determined in Eq. (\[Eq:SP\_eigeneq\]). These Slater determinants can be distinguished according to their excitation level: the ground-state configuration $|\Phi_0\rangle$, singly excited configurations $|\Phi_i^a\rangle$, doubly excited configurations $|\Phi_{ij}^{ab}\rangle$, etc., where $i,j, \dots$ denote occupied orbitals and $a,b,\dots$ unoccupied ones. Following standard perturbation theory, the single-excitation (SE) contribution to the 2nd-order correlation energy is given by $$\begin{aligned}
E^\text{SE}_c& =& \sum_{i}^\text{occ}\sum_a^\text{unocc}\frac{|\langle\Phi_0|\hat{H}'|\Phi_i^a\rangle|^2}{E^{(0)}_0 - E^{(0)}_{i,a}} \nonumber \\
& =&\sum_{i}^\text{occ}\sum_a^\text{unocc}\frac{|\langle\Phi_0|\sum_{j<k}^{N} \frac{1}{|{{\bf r}}_j -{{\bf r}}_k|} - \sum_{j=1}^N \hat{v}^\text{MF}_j|\Phi_i^a\rangle|^2}{\epsilon_i - \epsilon_a} \nonumber \\
\label{Eq:SE_expression}
\end{aligned}$$ where we have used the fact $E^{(0)}_0 - E^{(0)}_{i,a} = \epsilon_i - \epsilon_a$.
To proceed, the numerator of Eq. (\[Eq:SE\_expression\]) needs to be evaluated. This can be most easily done using second-quantization formulation $$\begin{aligned}
\sum_{j<k}^{N} \frac{1}{|{{\bf r}}_j -{{\bf r}}_k|} & \rightarrow & \frac{1}{2}\sum_{pqrs} \langle pq|rs \rangle c_p^{\dagger}
c_q^{\dagger}c_sc_r, \nonumber \\
\sum_{j=1}^N \hat{v}^\text{MF}_j & \rightarrow & \sum_{pq} \langle p|\hat{v}^\text{MF}|q \rangle c_p^{\dagger} c_q, \nonumber
\end{aligned}$$ where $p,q,r,s$ are arbitrary spin-orbitals from Eq. (\[Eq:SP\_eigeneq\]), $c_p^{\dagger}$ and $c_q$, etc. are the electron creation and annihilation operators, and $ \langle pq|rs \rangle$ the two-electron Coulomb integrals $$\langle pq|rs \rangle = \int d{{\bf r}}d{{\bf r'}}\frac{\psi_p^\ast({{\bf r}})\psi_r({{\bf r}})\psi_q^\ast({{\bf r'}})\psi_s({{\bf r'}})}
{|{{\bf r}}-{{\bf r'}}|}. \nonumber$$ The expectation value of the two-particle Coulomb operator between the ground-state configuration $\Phi_0$ and the single excitation $\Phi_i^a$ is given by $$\begin{aligned}
\displaystyle
\langle\Phi_0| \frac{1}{2}\sum_{pqrs} \langle pq|rs \rangle c_p^{\dagger}c_q^{\dagger}c_sc_r |\Phi_i^a\rangle & =&
\sum_p^\text{occ} \left[ \langle ip|ap\rangle - \langle ip|pa\rangle \right] \nonumber \\
& =& \langle \psi_i |\hat{v}^\text{HF}|\psi_a\rangle
\label{Eq:TP_operator}
\end{aligned}$$ where $v^\text{HF}$ is the HF single-particle potential.
The expectation value of the mean-field single-particle operator $\hat{v}^\text{MF}$, on the other hand, is given by $$\langle\Phi_0| \sum_{pq} \langle p|\hat{v}^\text{MF}|q \rangle c_p^{\dagger} c_q | \Phi_i^a\rangle
= \langle \psi_i |\hat{v}^\text{MF}|\psi_a\rangle
\label{Eq:SP_operator}$$ Combining Eqs. (\[Eq:SE\_expression\]), (\[Eq:TP\_operator\]), and (\[Eq:SP\_operator\]), one gets $$\begin{aligned}
E_{c}^\text{SE} &=&
\sum_{i}^\text{occ}\sum_a^\text{unocc}
\frac{|\langle \psi_i |\hat{v}^\text{HF}-\hat{v}^\text{MF}|\psi_a\rangle|^2}{\epsilon_i - \epsilon_a}
\nonumber \\
&=& \sum_{i}^\text{occ}\sum_a^\text{unocc} \frac{|\Delta v_{ia}|^2}{\epsilon_i - \epsilon_a},
\label{Eq:2nd_cSE}\end{aligned}$$ where $\Delta v_{ia}$ is the matrix element of the difference between the HF potential $\hat{v}^\text{HF}$ and the single-particle mean-field potential $\hat{v}^\text{MF}$ in question.
Observing that the $\psi$’s are eigenstates of $\hat{h}^0=-\frac{1}{2}\nabla^2 + v_\text{ext}+v^\text{MF}$, and hence all non-diagonal elements $\langle\psi_i|\hat{h}^0|\psi_a\rangle$ are zero, one can alternatively express Eq. (\[Eq:2nd\_cSE\]) as $$\begin{aligned}
E_{c}^\text{SE} &=&
\sum_{i}^\text{occ}\sum_a^\text{unocc}\frac{|\langle \psi_i |-\frac{1}{2}\nabla^2 + \hat{v}_\text{ext} +
\hat{v}^\text{HF} |\psi_a\rangle|^2}{\epsilon_i - \epsilon_a} \nonumber \\
&=& \sum_{i}^\text{occ}\sum_a^\text{unocc}
\frac{|\langle \psi_i |\hat{f}|\psi_a\rangle|^2}{\epsilon_i - \epsilon_a}
\label{Eq:2nd_cSE_1}\end{aligned}$$ where $\hat{f}$ is the single-particle HF Hamiltonian, or simply Fock operator. Thus Eq. (1) in the main paper is derived.
For the HF reference state, i.e., when $\hat{v}^\text{MF} = \hat{v}^\text{HF}$, the $\psi$’s are eigenstates of the Fock operator, and hence Eq. (\[Eq:2nd\_cSE\]) (or (\[Eq:2nd\_cSE\_1\])) is zero. For any other reference state, e.g., the KS reference state, the $\psi$’s are no longer eigenstates of the Fock operator, and Eq. (\[Eq:2nd\_cSE\]) is in general not zero. This gives rise to a finite SE contribution to the second-order correlation energy. In the following we assume $\hat{v}^\text{MF} = \hat{v}^\text{KS}$.
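For illustration, Eq. (\[Eq:2nd\_cSE\]) is straightforward to evaluate once the KS eigenvalues and the matrix elements $\Delta v_{ia}$ are available. The following minimal Python sketch does this for invented toy numbers (the orbital energies and matrix elements are assumptions chosen purely for demonstration, not the output of any actual calculation); every term is non-positive since $\epsilon_i-\epsilon_a<0$.

```python
import numpy as np

# Toy evaluation of E_c^SE = sum_{i,a} |dv_ia|^2 / (eps_i - eps_a).
# All numbers below are hypothetical placeholders, not from a real KS calculation.
eps_occ = np.array([-1.20, -0.65])             # assumed occupied KS eigenvalues
eps_vir = np.array([0.10, 0.35, 0.80])         # assumed unoccupied KS eigenvalues

rng = np.random.default_rng(0)
dv = 0.05 * rng.standard_normal((eps_occ.size, eps_vir.size))  # dv[i,a] = <psi_i|v_HF - v_KS|psi_a>

denom = eps_occ[:, None] - eps_vir[None, :]    # eps_i - eps_a (always negative here)
E_SE = np.sum(np.abs(dv)**2 / denom)
print(f"E_c^SE = {E_SE:.6f} (non-positive by construction)")
```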
Renormalized single excitation (RSE) correction scheme
======================================================
The SE contribution as given by Eq. (\[Eq:2nd\_cSE\]) corresponds to a correction to the correlation energy at 2nd order, which can be represented by the Goldstone diagram [@Goldstone:1957] shown in Fig. \[Goldstone\_diag\](a). Similar to 2nd-order M[ø]{}ller-Plesset perturbation theory, the 2nd-order SE is ill-behaved when the single-particle gap closes. An important lesson from diagrammatic perturbation theory for dealing with problems of this kind is to resum higher-order diagrams of the same type to infinite order. The random phase approximation itself is a good example of this. In the case of SE, one can also identify a series of higher-order diagrams up to infinite order, as illustrated in terms of Goldstone diagrams in Fig. \[Goldstone\_diag\].
![Goldstone diagrammatic representation of a selection of single excitation processes. (a): second-order; (b-c): third-order; (d-g): fourth-order. Here $v_{ii} =\langle \psi_i|\hat{v}^\text{HF}-\hat{v}^\text{KS}|\psi_i \rangle$ and $v_{aa} =\langle \psi_a|\hat{v}^\text{HF}-\hat{v}^\text{KS}|\psi_a \rangle$.[]{data-label="Goldstone_diag"}](RSE_diag_paper.eps){width="46.00000%"}
Here we only pick the “diagonal” terms where a particle state $a$ or a hole state $i$ is, in the intermediate process, scattered into the same state by the perturbative potential $\Delta \hat{v} = \hat{v}^\text{HF} - \hat{v}^\text{KS}$. These terms dominate over the “non-diagonal” ones [@magnidiag] and facilitate a simple mathematical treatment, because they form a geometric series that can easily be summed up. We call the summation of this series *renormalized* single excitations (RSE). Following the textbook rules [@Szabo/Ostlund:1989] for evaluating Goldstone diagrams, one obtains $$\begin{aligned}
E^\text{RSE}_c &=& \sum_{i}^\text{occ}\sum_a^\text{unocc}
\frac{|\Delta v_{ia}|^2}{\epsilon_i - \epsilon_a} + \nonumber \\
&& \left[ \sum_{i}^\text{occ}\sum_a^\text{unocc}
\frac{|\Delta v_{ia}|^2 (\Delta v_{aa} - \Delta v_{ii})}{(\epsilon_i - \epsilon_a)^2} \right] + \nonumber \\
&& \left[ \sum_{i}^\text{occ}\sum_a^\text{unocc}
\frac{|\Delta v_{ia}|^2 (\Delta v_{aa} - \Delta v_{ii})^2}{(\epsilon_i - \epsilon_a)^3} \right] + \nonumber \\
&& \cdots \nonumber \\
&=& \sum_{i}^\text{occ}\sum_a^\text{unocc} \frac{|\Delta v_{ia}|^2}
{\epsilon_i - \epsilon_a + \Delta v_{ii}- \Delta v_{aa} }
\label{Eq:rse}\end{aligned}$$ The additional term $\Delta v_{ii} - \Delta v_{aa}$ in the denominator of Eq. (\[Eq:rse\]), which appears to be negative definite in practical calculations, ensures that $E^\text{RSE}_c$ is finite and prevents possible divergence problems even when the single-particle KS gap closes. In this context, we note in passing that a similar partial resummation of the so-called Epstein-Nesbet series [@Szabo/Ostlund:1989] of diagrams to “renormalize” the 2nd-order correlation energy has been performed by Jiang and Engel [@Jiang/Engel:2006] in the KS many-body perturbation theory.
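The resummation leading to Eq. (\[Eq:rse\]) is an ordinary geometric series and can be checked term by term. The short Python sketch below does so for a single hypothetical particle–hole pair; the numbers are assumptions chosen so that $|\Delta v_{aa}-\Delta v_{ii}|<|\epsilon_i-\epsilon_a|$, in which case the partial sums converge to the closed form.

```python
# Partial sums of the diagonal SE series versus the closed-form RSE expression,
# for one hypothetical (i, a) pair.  All input numbers are toy assumptions.
eps_i, eps_a = -0.50, 0.20
dv_ia, dv_ii, dv_aa = 0.04, -0.10, -0.03

x = eps_i - eps_a                              # bare denominator (negative)
partial = 0.0
for k in range(8):                             # the k-th term carries (dv_aa - dv_ii)^k
    partial += abs(dv_ia)**2 * (dv_aa - dv_ii)**k / x**(k + 1)
    print(f"partial sum through order {k + 2}: {partial:.10f}")

closed = abs(dv_ia)**2 / (x + dv_ii - dv_aa)
print(f"closed-form RSE term:            {closed:.10f}")
```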
Acknowledgement
===============
XR thanks Rodney J. Bartlett, Georg Kresse, and Hong Jiang for helpful discussions.
[41]{}
J. Goldstone, Proc. Roy. Soc. A, **239**, 267 (1957).
For a series of molecules, we have numerically confirmed that the nondiagonal elements $\Delta v_{pq}$ are one to two orders of magnitude smaller than the diagonal ones $\Delta v_{pp}$.
A. Szabo and N. S. Ostlund, *Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory* (McGraw-Hill, New York, 1989).
H. Jiang and E. Engel, J. Chem. Phys. **125**, 184108 (2006).
---
abstract: 'The formal weight enumerators were first introduced by M. Ozeki, and it was shown in the author’s previous paper that there are various families of divisible formal weight enumerators. Three of these families are dealt with in this paper, and their properties are investigated: analogs of the Mallows-Sloane bound, the extremal property, the Riemann hypothesis, and so on. In the course of the investigation, some generalizations of the theory of invariant differential operators developed by I. Duursma and T. Okuda are deduced.'
author:
- Koji Chinen
title: On some families of divisible formal weight enumerators and their zeta functions
---
[**Key Words:**]{} Formal weight enumerator; Invariant polynomial ring; Zeta function for codes; Riemann hypothesis.
[**Mathematics Subject Classification:**]{} Primary 11T71; Secondary 13A50, 12D10.
Introduction {#section:intro}
============
The formal weight enumerators were first introduced to coding theory and number theory by Ozeki [@Oz]. Recently, the present author [@Ch3] showed that there are many other families of “divisible formal weight enumerators”. So, first we give the definitions of formal weight enumerators and their divisibility. In the following, the action of a matrix $\sigma=\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)$ on a polynomial $f(x,y)\in {\bf C}[x,y]$ is defined by $$\label{eq:action_sigma}
f^\sigma(x,y)=f(ax+by, cx+dy).$$
\[dfn:fwe\] We call a homogeneous polynomial $$\label{eq:fwe_form}
W(x,y)=x^n+\sum_{i=d}^n A_i x^{n-i} y^i\in {\bf C}[x,y]\quad (A_d\ne 0)$$ a formal weight enumerator if $$\label{eq:fwe_transf}
W^{\sigma_q}(x,y)=-W(x,y)$$ for some $q\in {\bf R}$, $q> 0$, $q\ne 1$, where $$\label{eq:macwilliams}
\sigma_q=\frac{1}{\sqrt{q}}\left(\begin{array}{rr} 1 & q-1 \\ 1 & -1 \end{array}\right).$$ Moreover, for some fixed $c\in {\bf N}$, we call $W(x,y)$ divisible by $c$ if $$A_i\ne 0\quad \Rightarrow \quad c|i$$ is satisfied.
The transformation defined by $\sigma_q$ is often called the MacWilliams transform. Ozeki’s formal weight enumerators are of the form $$W_{{\mathcal H}_{8}}(x,y)^l W_{12}(x,y)^{2m+1}$$ and their suitable linear combinations, where, $$\begin{aligned}
W_{{\mathcal H}_{8}}(x,y) &=& x^8+14x^4y^4+y^8, \label{eq:we_hamming}\\
W_{12}(x,y) &=& x^{12}-33x^8y^4-33x^4y^8+y^{12}.\label{eq:w12}\end{aligned}$$ The polynomial $W_{{\mathcal H}_{8}}(x,y)$ is the weight enumerator of the famous extended Hamming code ${\mathcal H}_{8}$. We have ${W_{{\mathcal H}_{8}}}^{\sigma_2}(x,y)=W_{{\mathcal H}_{8}}(x,y)$ and ${W_{12}}^{\sigma_2}(x,y)=-W_{12}(x,y)$, so Ozeki’s formal weight enumerators are those for $q=2$ and $c=4$.
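These transformation properties are easy to verify by machine. The following short sympy sketch (the helper name `macwilliams` is ours) applies the MacWilliams transform $\sigma_2$ to both polynomials and confirms that $W_{{\mathcal H}_{8}}$ is fixed while $W_{12}$ changes sign.

```python
import sympy as sp

x, y = sp.symbols('x y')

def macwilliams(f, q):
    """Apply sigma_q: x -> (x + (q-1)y)/sqrt(q), y -> (x - y)/sqrt(q)."""
    return sp.expand(f.subs({x: (x + (q - 1)*y)/sp.sqrt(q),
                             y: (x - y)/sp.sqrt(q)}, simultaneous=True))

W_H8 = x**8 + 14*x**4*y**4 + y**8
W_12 = x**12 - 33*x**8*y**4 - 33*x**4*y**8 + y**12

print(sp.simplify(macwilliams(W_H8, 2) - W_H8))   # expected: 0
print(sp.simplify(macwilliams(W_12, 2) + W_12))   # expected: 0
```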
In the paper [@Ch3], it was shown that the formal weight enumerators divisible by two exist for $q=2, 4, 4/3, 4\pm 2\sqrt{2}, 2\pm 2\sqrt{5}/5, 8\pm 4\sqrt{3}$, etc. The properties of formal weight enumerators vary according to the values of $q$. In this paper, we consider the cases $q=2,4$ and $4/3$. For the cases of other $q$, the reader is referred to [@Ch3]. We are mainly interested in the extremal property and the Riemann hypothesis for the zeta functions of the formal weight enumerators.
Zeta functions of this kind were first introduced by Duursma [@Du1] for the weight enumerators of linear codes, whose theory was developed in his subsequent papers [@Du2] – [@Du4]. Later the present author generalized them to Ozeki’s formal weight enumerators in [@Ch1], and to some other invariant polynomials in [@Ch2]. The definition is the following:
\[dfn:zeta\] For any homogeneous polynomial of the form (\[eq:fwe\_form\]) and $q\in{\bf R}$ ($q>0, q\ne 1$), there exists a unique polynomial $P(T)\in{\bf C}[T]$ of degree at most $n-d$ such that $$\label{eq:zeta_duursma}
\frac{P(T)}{(1-T)(1-qT)}(y(1-T)+xT)^n=\cdots +\frac{W(x,y)-x^n}{q-1}T^{n-d}+ \cdots.$$ We call $P(T)$ and $Z(T)=P(T)/(1-T)(1-qT)$ the zeta polynomial and the zeta function of $W(x,y)$, respectively.
For the proof of existence and uniqueness of $P(T)$, see [@Ch2 Appendix A] for example. Recall that we must assume $d, d^\perp\geq 2$ where $d^\perp$ is defined by $$W^{\sigma_q}(x,y)=\pm x^n + A_{d^\perp} x^{n-d^\perp} y^{d^\perp}+ \cdots,$$ when considering the zeta functions (see [@Du2 p.57]).
If a (formal) weight enumerator $W(x,y)$ has the property $W^{\sigma_q}(x,y)=\pm W(x,y)$, then the zeta polynomial $P(T)$ has the functional equation $$\label{eq:func_eq}
P(T)=\pm P\left(\frac{1}{qT}\right)q^g T^{2g}\qquad (g=n/2+1-d).$$ The quantity $g$ is called the genus of $W(x,y)$. Note that $$\label{eq:genus_geq0}
d\leq \frac{n}{2}+1$$ because $g$ must satisfy $g\geq 0$. Now we can formulate the Riemann hypothesis:
\[dfn:RH\] A (formal) weight enumerator $W(x,y)$ with $W^{\sigma_q}(x,y)=\pm W(x,y)$ satisfies the Riemann hypothesis if all the zeros of $P(T)$ have the same absolute value $1/\sqrt{q}$.
We know examples of (formal) weight enumerators both satisfying and not satisfying the Riemann hypothesis (see [@Du3 Section 4], [@Ch1] – [@Ch4]).
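In practice $P(T)$ can be computed directly from Definition \[dfn:zeta\] by comparing coefficients. The following sympy sketch (the helper name `zeta_polynomial` is ours) determines $P(T)$ for Ozeki’s $W_{12}$ with $q=2$, $n=12$, $d=4$, and then checks the functional equation (\[eq:func\_eq\]) with the minus sign and $g=3$; the same routine can, in principle, be applied to the examples in the later sections.

```python
import sympy as sp

x, y, T = sp.symbols('x y T')

def zeta_polynomial(W, n, d, q):
    """Solve for P(T) of degree <= n-d by matching the coefficient of T^(n-d)."""
    q = sp.nsimplify(q)
    r = n - d
    ps = sp.symbols(f'p0:{r + 1}')                  # unknown coefficients of P(T)
    P = sum(p*T**i for i, p in enumerate(ps))
    num = sp.expand(P * (y*(1 - T) + x*T)**n)
    # 1/((1-T)(1-qT)) = sum_k (q^(k+1)-1)/(q-1) T^k; terms up to T^r suffice here
    rat = sum((q**(k + 1) - 1)/(q - 1) * T**k for k in range(r + 1))
    coeff = sp.expand(num * rat).coeff(T, r)        # coefficient of T^(n-d)
    target = sp.expand((W - x**n)/(q - 1))
    eqs = sp.Poly(sp.expand(coeff - target), x, y).coeffs()
    sol = sp.solve(eqs, ps, dict=True)[0]
    return sp.expand(P.subs(sol))

W_12 = x**12 - 33*x**8*y**4 - 33*x**4*y**8 + y**12    # q = 2, n = 12, d = 4, g = 3
P = zeta_polynomial(W_12, 12, 4, 2)
print(P)
print(sp.simplify(P + P.subs(T, 1/(2*T)) * 2**3 * T**6))   # expected: 0
```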
In the case of the formal weight enumerators treated in this article (especially the cases $q=2$ and $4$), there seem to be structures similar to those of the weight enumerators of self-dual codes over the fields ${ {\bf F}_{2}}$ and ${ {\bf F}_{4}}$ (so-called Type I and Type IV codes). One of the main purposes of this paper is to investigate such formal weight enumerators and to clarify the properties they share with the weight enumerators of Types I and IV. Our main results are Theorem \[thm:analog\_mallows-sloane\], which establishes analogs of the Mallows-Sloane bound (see Theorem \[thm:mallows-sloane\]), and Theorem \[thm:equiv\_rh\], which is an analog of Okuda’s theorem (see [@Ok Theorem 5.1]) concerning a certain equivalence of the Riemann hypothesis between some sequences of extremal weight enumerators.
To this end, we apply the theory of invariant differential operators on invariant polynomial rings, which was introduced by Duursma [@Du4] and generalized by Okuda [@Ok]. Our second purpose is to generalize their theory further and state it in a form a little easier to use (our main result in this direction is Theorem \[thm:duursma\_okuda\]).
As to the formal weight enumerators for $q=4/3$, we also find similar structures, but a little different treatment is required. For example, to deduce an analog of the Mallows-Sloane bound (Theorem \[thm:mallows-sloane\_q=4/3\]), it seems that the theory of invariant differential operators does not work well, so we must appeal to the analytical method in MacWilliams-Sloane [@MaSl p.624-628]. Our main results for this case are Theorem \[thm:mallows-sloane\_q=4/3\] (an analog of the Mallows-Sloane bound) and Theorem \[thm:equiv\_rh\_4over3\] (some equivalence of the Riemann hypothesis).
The rest of the paper is organized as follows: in Section \[section:duursma\_okuda\], we show the theorem which generalizes the results of Duursma and Okuda. Section \[section:fwe\_2\_4\] is devoted to the analysis of divisible formal weight enumerators for $q=2$ and $4$. In Section \[section:fwe\_4over3\], we discuss the properties of divisible formal weight enumerators for $q=4/3$.
For a real number $x$, $[x]$ means the greatest integer not exceeding $x$. The Pochhammer symbol $(a)_n$ means $(a)_n=a(a+1)\cdots (a+n-1)$ for $n\geq 1$ and $(a)_0=1$.
Generalization of the theory of Duursma and Okuda {#section:duursma_okuda}
=================================================
The theory of invariant differential operators on some invariant polynomial rings was introduced by Duursma [@Du4 Section 2]. It considerably simplified the proof of the Mallows-Sloane bound ([@Du4 Theorem 3]). Later a certain generalization was deduced by Okuda [@Ok Section 5], which was used to prove a kind of equivalence of the Riemann hypothesis between some sequences of extremal self-dual codes (see [@Ok Theorem 5.1 and Section 6]). Okuda’s idea should be highly appreciated, as well as that of Duursma.
Their theory should have various applications; one of them is, in fact, our analysis of formal weight enumerators. In this section, we generalize their theory and give several statements in forms useful for applications.
We adopt a standard notation as to the action of matrices: for a matrix $\sigma=\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)$ and a pair of variables $(x,y)$, we define $$(x,y)^\sigma=(ax+by, cx+dy).$$ The action of $\sigma$ on a polynomial $f(x,y)\in {\bf C}[x,y]$ is defined in (\[eq:action\_sigma\]) (these are different from the notation of Duursma [@Du4]). For a homogeneous polynomial $p(x,y)$, $p(x,y)(D)$ denotes the differential operator obtained by replacing $x$ by $\partial/\partial x$ and $y$ by $\partial/\partial y$.
\[lem:duursma\] Let $A(x,y)$, $p(x,y)$ be homogeneous polynomials in ${\bf C}[x,y]$. Suppose two pairs of variables $(u,v)$ and $(x,y)$ are related by $(u,v)=(x,y)^\sigma$ for a matrix $\sigma=\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)$. Then we have $$p^{^t \sigma}(u,v)(D)A(u,v)=p(x,y)(D)A^\sigma(x,y).$$
[[**Proof.** ]{}]{}This is Duursma [@Du4 Lemma 1]. We state a proof briefly because it is omitted in [@Du4]. By the chain rule of differentiation, we have $$\frac{\partial}{\partial x}A^\sigma (x,y)=\frac{\partial}{\partial x}A(u,v)
=\frac{\partial A}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial A}{\partial v}\frac{\partial v}{\partial x}.$$ Since $(u,v)=(x,y)^\sigma$, we have $\partial u/\partial x=a$, $\partial v/\partial x=c$. Thus, $$\frac{\partial}{\partial x}A(u,v)
=\left( a \frac{\partial }{\partial u} + c \frac{\partial }{\partial v} \right)A(u,v).$$ Similarly we have $$\frac{\partial}{\partial y}A(u,v)
=\left( b \frac{\partial }{\partial u} + d \frac{\partial }{\partial v} \right)A(u,v).$$ Therefore we have $(\partial/\partial x, \partial/\partial y)=(\partial/\partial u, \partial/\partial v)^{^t \sigma}$ and generally $p(x,y)(D)=p^{^t \sigma}(u,v)(D)$.
The following proposition is a generalization of the discussion of [@Du4 pp.108-109]:
\[prop:duursma\] Let $a(x,y)$, $A(x,y)$, $p(x,y)$ be homogeneous polynomials in ${\bf C}[x,y]$ and suppose $\deg a(x,y) \leq \deg A(x,y)-\deg p(x,y)$. If $a(x,y)|p(x,y)(D)A^\sigma (x,y)$, then we have $$\label{eq:prop_duursma}
a^{\sigma^{-1}}(x,y)|p^{^t \sigma}(x,y)(D)A(x,y).$$
[[**Proof.** ]{}]{}Let $(u,v)=(x,y)^\sigma$. Then, from Lemma \[lem:duursma\] and the assumption, we have $$a(x,y)|p^{^t \sigma}(u,v)(D)A(u,v).$$ Since $a(x,y)=a^{\sigma^{-1}}(u,v)$, we have $a^{\sigma^{-1}}(u,v)|p^{^t \sigma}(u,v)(D)A(u,v)$. This is the same as (\[eq:prop\_duursma\]).
[[**Remark.** ]{}]{}The formula $$(x-y)^{d^\perp-1}|((q-1)x-y)(D)A(x,y),$$ which is essentially the same as $(x-y)^{d^\perp-1}|((q-1)x-y)(D)A(x,y)q^{n-k}$ on [@Du4 p.108], is obtained by setting $$\sigma=\sigma_q,\quad p(x,y)=y, \quad a(x,y)=y^{d^\perp-1},$$ and the formula $$(x-\zeta^{-1}y)^{d^\perp-1}|((q-1)x-\zeta y)(D)A(x,y)$$ on [@Du4 p.109] ($x-\zeta y$ on the left hand side seems to be a mistake) is obtained by $$\sigma=\left(\begin{array}{cc}1 & 0 \\ 0 & \zeta \end{array}\right),\quad p(x,y)=(q-1)x-y, \quad a(x,y)=(x-y)^{d^\perp-1}.$$ Synthesizing the discussion in Section 2 and Lemma 11 of [@Du4] with Okuda [@Ok Proposition 5.4], and taking applications to formal weight enumerators into consideration, we obtain the following generalized version of their results:
\[thm:duursma\_okuda\] Let $a(x,y)$, $A(x,y)$, $p(x,y)$ be the same as in Proposition \[prop:duursma\]. We suppose $$\begin{aligned}
p^{^t \sigma}(x,y) &=& c_1 p(x,y),\\
A^\sigma (x,y) &=& c_2 A(x,y)\end{aligned}$$ ($c_i \in {\bf C}$, $c_i\ne 0$) for a linear transformation $\sigma$. Then we have the following:
\(i) $$\label{eq:duursma_okuda-1}
\{p(x,y)(D)A(x,y)\}^\sigma = \displaystyle\frac{c_2}{c_1} p(x,y)(D)A(x,y).$$ (ii) If $a(x,y)|p(x,y)(D)A(x,y)$, then $$a^\sigma(x,y)|p(x,y)(D)A(x,y).$$ Moreover, if $(a(x,y), a^\sigma(x,y))=1$, then $$a(x,y) a^\sigma(x,y)|p(x,y)(D)A(x,y).$$
\(iii) Suppose $a(x,y)|p(x,y)(D)A(x,y)$ and put $$\label{eq:duursma_okuda-3}
p(x,y)(D)A(x,y)=a(x,y) \tilde a(x,y).$$ If $a^\sigma (x,y)=c_3 a(x,y)$ ($c_3 \in {\bf C}$, $c_3\ne 0$), then $$\label{eq:duursma_okuda-3res}
\tilde a^\sigma(x,y)=\frac{c_2}{c_1 c_3}\tilde a(x,y).$$
[[**Proof.** ]{}]{}(i) Let $(u,v)=(x,y)^\sigma$. Then, by Lemma \[lem:duursma\] and the assumption, we have $$p(u,v)(D)A(u,v)=\frac{c_2}{c_1}p(x,y)(D)A(x,y).$$ This means (\[eq:duursma\_okuda-1\]).
\(ii) We can prove the former claim by replacing $\sigma$ by $\sigma^{-1}$ in Proposition \[prop:duursma\] (note that $p^{^t\sigma^{-1}}(x,y)=p(x,y)/c_1$ and $A^{\sigma^{-1}}(x,y)=A(x,y)/c_2$). The latter claim is obvious.
\(iii) Let $\sigma$ act on the both sides of (\[eq:duursma\_okuda-3\]). Then, $$\frac{c_2}{c_1}p(x,y)(D)A(x,y)=c_3a(x,y)\tilde a^\sigma (x,y)$$ by (i) and the assumption. Using (\[eq:duursma\_okuda-3\]) again, we get the formula (\[eq:duursma\_okuda-3res\]).
[[**Remark.** ]{}]{}Okuda [@Ok Proposition 5.4] is essentially the same as the case where $c_1=c_2=1$ in (i), which was used in the proof of [@Ok Theorem 5.1]. On the other hand, Duursma [@Du4 Lemma 11] is the case where $c_1=c_2=c_3=1$ for some special $a(x,y)$, $p(x,y)$ and $\sigma$ in (iii). Later we will encounter the cases $c_i=\pm 1$.
Formal weight enumerators for $q=2$ and $4$ {#section:fwe_2_4}
===========================================
In this section, we discuss the properties of formal weight enumerators divisible by two for $q=2$ and $4$. Let $$\begin{aligned}
\varphi_4(x,y) &=& x^4-6x^2y^2+y^4, \label{eq:fwe_q=2_gen}\\
\varphi_3(x,y) &=& x^3-9xy^2. \label{eq:fwe_q=4_gen}\end{aligned}$$ Then we can easily see that $$\begin{aligned}
{\varphi_4}^{\sigma_2}(x,y) &=& -\varphi_4(x,y),\\
{\varphi_3}^{\sigma_4}(x,y) &=& -\varphi_3(x,y) \end{aligned}$$ (see also [@Ch3 Section 3]). We can also verify that $W_{2,q}(x,y)=x^2+(q-1)y^2$ satisfies ${W_{2,q}}^{\sigma_q}(x,y)=W_{2,q}(x,y)$ for any $q$. Note that $\varphi_4(x,y)$, $\varphi_3(x,y)$ and $W_{2,q}(x,y)$ are invariant under the action of $$\tau=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right).$$ We form the following polynomial rings: $$\begin{aligned}
R_{\rm I}^- &=& {\bf C}[W_{2,2}(x,y), \varphi_4(x,y)]\label{eq:ring_fwe_q=2},\\
R_{\rm IV}^- &=& {\bf C}[W_{2,4}(x,y), \varphi_3(x,y)]\label{eq:ring_fwe_q=4}.\end{aligned}$$ These are, so to speak, rings of Type I and Type IV formal weight enumerators, respectively, by analogy with those of Type I and Type IV weight enumerators. Type I weight enumerators are those of self-dual codes over ${{ {\bf F}_{2}}}$ divisible by two (that is, the weights of all the codewords are divisible by two). The ring of them is $$R_{\rm I} = {\bf C}[W_{2,2}(x,y), W_{{\mathcal H}_{8}}(x,y)]$$ (see (\[eq:we\_hamming\]) for the definition of $W_{{\mathcal H}_{8}}(x,y)$, see also [@CoSl p.186] for this ring). Similarly, the Type IV weight enumerators are those of self-dual codes over ${{ {\bf F}_{4}}}$ divisible by two, whose ring is $$R_{\rm IV} = {\bf C}[W_{2,4}(x,y), x^6+45x^2y^4+18y^6]$$ ([@CoSl p.203]).
[[**Remark.** ]{}]{}The rings $R_{\rm I}^-$ and $R_{\rm IV}^-$ are the invariant polynomial rings of the groups $G_{\rm I}^- = \langle \sigma_2 \tau \sigma_2, \tau \rangle$ and $G_{\rm IV}^- = \langle \sigma_4 \tau \sigma_4, \tau \rangle$, respectively. The group $G_{\rm I}^-$ has order 8 and its Molien series are $\Phi_{\rm I}^- (\lambda)=1/\{(1-\lambda^2)(1-\lambda^4)\}$. The group $G_{\rm IV}^-$ has order 6 and its Molien series are $\Phi_{\rm IV}^- (\lambda)=1/\{(1-\lambda^2)(1-\lambda^3)\}$.
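The group-theoretic claims in this remark can be reproduced mechanically. The sympy sketch below (the helper name `closure` is ours) generates $G_{\rm I}^-$ from its two generators and recomputes its order and Molien series; replacing $\sigma_2$ by $\sigma_4$ gives the corresponding check for $G_{\rm IV}^-$.

```python
import sympy as sp

lam = sp.symbols('lambda')
sigma2 = sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2)
tau = sp.Matrix([[1, 0], [0, -1]])
gens = [sp.ImmutableMatrix((sigma2*tau*sigma2).applyfunc(sp.simplify)),
        sp.ImmutableMatrix(tau)]

def closure(gens):
    """Close a finite set of invertible matrices under multiplication."""
    elems = set(gens)
    while True:
        new = {sp.ImmutableMatrix(a*b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

G = closure(gens)
molien = sp.simplify(sum(1/(sp.eye(2) - lam*A).det() for A in G) / len(G))
print(len(G))                                                  # expected: 8
print(sp.simplify(molien - 1/((1 - lam**2)*(1 - lam**4))))     # expected: 0
```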
Type I formal weight enumerators are the polynomials $W(x,y)$ of the form (\[eq:fwe\_form\]), given by $$\label{eq:fwe_typeI}
W_{2,2}(x,y)^l \varphi_4(x,y)^{2m+1}\qquad (l,m \geq 0)$$ and their suitable linear combinations (note that we need an odd number of $\varphi_4(x,y)$ to have $W^{\sigma_2}(x,y)=-W(x,y)$). Some examples of such linear combinations will be given in Example \[exam:typeI\_fwe\] later. Similarly, Type IV formal weight enumerators are given by $$\label{eq:fwe_typeIV}
W_{2,4}(x,y)^l \varphi_3(x,y)^{2m+1}\qquad (l,m \geq 0)$$ and their suitable linear combinations (see Example \[exam:typeIV\_fwe\] for an example of such a linear combination).
Our first goal in this section is Theorem \[thm:analog\_mallows-sloane\]. As a preparation for it, we prove the following proposition, which is an analog of [@Du4 Lemma 2]:
\[prop:divide\] (i) Let $W(x,y)$ be a Type I formal weight enumerator with $d\geq 4$ and let $p(x,y)=xy(x^2-y^2)$. Then we have $$\label{eq:divide_I}
\{xy(x^2-y^2)\}^{d-3}| p(x,y)(D)W(x,y).$$
\(ii) Let $W(x,y)$ be a Type IV formal weight enumerator with $d\geq 4$ and let $p(x,y)=y(x^2-9y^2)$. Then we have $$\label{eq:divide_IV}
\{y(x^2-y^2)\}^{d-3}| p(x,y)(D)W(x,y).$$
[[**Proof.** ]{}]{}(i) It is easy to see that $p^{^t \sigma_2}(x,y)=p(x,y)$ and that $W(x,y)=W(y,x)$ since $W(x,y)$ is invariant under $\sigma_2 \tau \sigma_2$. Moreover, since $W(x,y)$ is of the form (\[eq:fwe\_form\]), we have $$p(x,y)(D)W(x,y)=C(x^{n-d-1}y^{d-3}+ \cdots + x^{d-3}y^{n-d-1})$$ for some constant $C$. So we have $$(xy)^{d-3}|p(x,y)(D)W(x,y)=-p(x,y)(D)W^{\sigma_2}(x,y)$$ (note that the terms $x^{n-d-1}y^{d-3}$ and $x^{d-3}y^{n-d-1}$ do not disappear when $d\geq 4$ because of the inequality (\[eq:genus\_geq0\])). By Proposition \[prop:duursma\], $$\{(xy)^{d-3}\}^{{\sigma_2}^{-1}} | -p^{^t \sigma_2}(x,y)(D)W(x,y).$$ Since $\{(xy)^{d-3}\}^{{\sigma_2}^{-1}}=\{(x^2-y^2)/2\}^{d-3}$ and $p^{^t \sigma_2}(x,y)=p(x,y)$, we obtain $$(x^2-y^2)^{d-3} | p(x,y)(D)W(x,y).$$ We get (\[eq:divide\_I\]) by Theorem \[thm:duursma\_okuda\] (ii) because $((xy)^{d-3}, (x^2-y^2)^{d-3})=1$.
\(ii) First we note the following: $$W^{\sigma_4}(x,y)=-W(x,y), \quad W^\tau(x,y)=W(x,y),$$ $$p^{^t \sigma_4}(x,y)=p(x,y), \quad p^{^t \tau}(x,y)=-p(x,y),$$ $$(y^{d-3})^{{\sigma_4}^{-1}}=\{(x-y)/2\}^{d-3}, \quad \{(x-y)^{d-3}\}^{\tau^{-1}}=(x+y)^{d-3}.$$ Using these, we can prove (\[eq:divide\_IV\]) similarly to (i).
[[**Remark.** ]{}]{}As the result of this proposition, we must have $4(d-3)\leq n-4$ for Type I formal weight enumerators with $d\geq 4$, and $3(d-3)\leq n-3$ for Type IV formal weight enumerators with $d\geq 4$.
In the case of Types I and IV weight enumerators, that is the members of $R_{\rm I}$ and $R_{\rm IV}$ of the form (\[eq:fwe\_form\]), the following upper bounds of $d$ by $n$ are known:
\[thm:mallows-sloane\] $$\begin{aligned}
(\mbox{Type I}) & & d\leq 2\left[ \frac{n}{8} \right] +2, \\
(\mbox{Type IV}) & & d\leq 2\left[ \frac{n}{6} \right] +2.\end{aligned}$$
[[**Proof.** ]{}]{}See [@Du3 Theorem 3] for example.
Our next result is the following:
\[thm:analog\_mallows-sloane\] (i) Let $W(x,y)$ be a Type I formal weight enumerator of the form (\[eq:fwe\_form\]). Then we have $$d\leq 2\left[ \frac{n-4}{8} \right] +2.$$ (ii) Let $W(x,y)$ be a Type IV formal weight enumerator of the form (\[eq:fwe\_form\]). Then we have $$d\leq 2\left[ \frac{n-3}{6} \right] +2.$$
[[**Proof.** ]{}]{}(i) We assume $d\geq 4$. Let $p(x,y)=xy(x^2-y^2)$ and $a(x,y)=\{xy(x^2-y^2)\}^{d-3}$. Then we have $$p^{^t \sigma_2}(x,y)=p(x,y), \quad p^{^t \tau}(x,y)=-p(x,y),$$ $$W^{\sigma_2}(x,y)=-W(x,y), \quad W^\tau(x,y)=W(x,y),$$ $$a^{\sigma_2}(x,y)=a(x,y), \quad a^\tau (x,y)=-a(x,y)$$ (note that $d$ is even). We apply Theorem \[thm:duursma\_okuda\] (iii). For $\sigma=\sigma_2$, we have $c_1=c_3=1$ and $c_2=-1$, for $\sigma=\tau$, we have $c_1=c_3=-1$, $c_2=1$. So the cofactor $\tilde a(x,y)$ in (\[eq:duursma\_okuda-3\]) satisfies $$\tilde a^{\sigma_2}(x,y)=-\tilde a(x,y), \quad \tilde a^{\tau}(x,y)=\tilde a(x,y).$$ Moreover, we can see that $\deg \tilde a(x,y)=n-4d+8$ and $\tilde a(x,y)$ has a term $x^{n-4d+8}$ (see Remark after Proposition \[prop:divide\]). Hence $\tilde a(x,y)$ is a constant times a Type I formal weight enumerator. Especially, $\tilde a(x,y)$ is divided by $\varphi_4(x,y)=x^4-6x^2y^2+y^4$. This, together with Proposition \[prop:divide\] (i) yields that $$\{xy(x^2-y^2)\}^{d-3}(x^4-6x^2y^2+y^4) | p(x,y)(D)W(x,y).$$ Comparing the degrees on the both sides, we obtain $$4(d-3)+4\leq n-4.$$ Putting $d=2d'$ ($d'\in{\bf N}$), we have $d'\leq (n-4)/8+1$. Since $d'$ is an integer, it is equivalent to $d'\leq [(n-4)/8]+1$. The conclusion follows immediately for $d\geq 4$. It also holds for $d=2$.
\(ii) We assume $d\geq 4$. The polynomials $p(x,y)=y(x^2-y^2)$, $W(x,y)$ and $a(x,y)=\{y(x^2-y^2)\}^{d-3}$ satisfy $$p^{^t \sigma_4}(x,y)=p(x,y), \quad p^{^t \tau}(x,y)=-p(x,y),$$ $$W^{\sigma_4}(x,y)=-W(x,y), \quad W^\tau(x,y)=W(x,y),$$ $$a^{\sigma_4}(x,y)=a(x,y), \quad a^\tau (x,y)=-a(x,y).$$ (note that $d$ is even). We can prove similarly to (i) that $$\{y(x^2-y^2)\}^{d-3}(x^3-9xy^2) | p(x,y)(D)W(x,y).$$ We obtain the conclusion by comparing the degrees for $d\geq 4$. It also holds for $d=2$.
[[**Remark.** ]{}]{}A similar bound is known for Ozeki’s formal weight enumerators which are generated by $W_{{\mathcal H}_{8}}(x,y)$ and $W_{12}(x,y)$ (see (\[eq:we\_hamming\]) and (\[eq:w12\])), that is, $$d\leq 4\left[\frac{n-12}{24}\right]+4$$ (compare this with the Mallows-Sloane bound for Type II weight enumerators $d\leq 4[n/24]+4$, [@Du3 Theorem 3] or [@MaSl Chapter 19, Theorem 13]). See [@Ch1] for details.
Now we can define the notion of extremal formal weight enumerators:
\[dfn:extremal\_fwe\] Let $W(x,y)$ be a Type I or Type IV formal weight enumerator. We call $W(x,y)$ extremal if the equality holds in Theorem \[thm:analog\_mallows-sloane\].
We can verify that there exists a unique extremal formal weight enumerator for each degree $n$.
\[exam:typeI\_fwe\] We collect some examples of Type I formal weight enumerators.
\(1) The extremal formal weight enumerator of degree 12 ($d=4$, note that $a(x,y)=xy(x^2-y^2)$). It coincides with $W_{12}(x,y)$ in (\[eq:w12\]): $$\begin{aligned}
W_{12}(x,y) &=& \frac{1}{8}\left(9W_{2,2}(x,y)^4 \varphi_4(x,y)-\varphi_4(x,y)^3 \right)\\
&=& x^{12}-33x^8y^4-33x^4y^8+y^{12}.\end{aligned}$$ We have $$\begin{aligned}
p(x,y)(D)W_{12}(x,y) &=& -6336xy(x^2-y^2)(x^4-6x^2y^2+y^4)\\
&=& -6336 a(x,y)\varphi_4(x,y).\end{aligned}$$ (2) The extremal formal weight enumerator of degree 14 ($d=4$): $$\begin{aligned}
W_{14}(x,y)&:=&\frac{1}{16}(17 W_{2,2}(x,y)^5 \varphi_4(x,y)-W_{2,2}(x,y) \varphi_4(x,y)^3) \\
&=& x^{14}-26x^{10}y^4-39x^8y^6-39x^6y^8-26x^4y^{10}+y^{14}.\end{aligned}$$ We have $$p(x,y)(D)W_{14}(x,y) = -6240 a(x,y)\varphi_4(x,y)W_{2,2}(x,y).$$ (3) The extremal formal weight enumerator of degree 20 ($d=6$, note that $a(x,y)=\{xy(x^2-y^2)\}^{3}$): $$\begin{aligned}
W_{20}(x,y)&:=&\frac{1}{256}(235 W_{2,2}(x,y)^8 \varphi_4(x,y)+10W_{2,2}(x,y)^4 \varphi_4(x,y)^3
+11\varphi_4(x,y)^5) \\
&=& x^{20}-190x^{14}y^6+95x^{12}y^8-836x^{10}y^{10}+95x^8y^{12}-190x^6y^{14}+y^{20}.\end{aligned}$$ We have $$p(x,y)(D)W_{20}(x,y) = -319200 a(x,y)\varphi_4(x,y).$$ (4) An example of a non-extremal formal weight enumerator (degree 20, $d=4$): $$\begin{aligned}
W'_{20}(x,y)&:=&\frac{1}{16}(15 W_{2,2}(x,y)^8 \varphi_4(x,y) +\varphi_4(x,y)^5) \\
&=& x^{20}+5x^{16}y^4-240x^{14}y^6+250x^{12}y^8-1056x^{10}y^{10}\\
& & +250x^8y^{12}-240x^6y^{14}+5x^4y^{16}+y^{20}.\end{aligned}$$ We have $$\begin{aligned}
p(x,y)(D)W'_{20}(x,y) &=& 1920 a(x,y)\varphi_4(x,y)\\
& & \cdot (x^8-238x^6y^2+490x^4y^4-238x^2y^6+y^8).\end{aligned}$$ Here the polynomial of degree 8 on the right hand side is equal to $$\frac{1}{8}(121 \varphi_4(x,y)^2- 113 W_{2,2}(x,y)^4),$$ which is invariant under $\sigma_2$.
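The divisibility behind these examples can also be checked symbolically. The sympy sketch below applies $p(x,y)(D)$ with $p(x,y)=xy(x^2-y^2)$ to $W_{12}$ and $W_{20}$ and verifies that $\{xy(x^2-y^2)\}^{d-3}\varphi_4(x,y)$ divides the result, as used in the proof of Theorem \[thm:analog\_mallows-sloane\]; only divisibility is tested, so the sign convention chosen for $p(x,y)(D)$ plays no role.

```python
import sympy as sp

x, y = sp.symbols('x y')
phi4 = x**4 - 6*x**2*y**2 + y**4

def p_D(W):
    """Apply p(x,y)(D) with p = xy(x^2 - y^2) = x^3 y - x y^3."""
    return sp.expand(sp.diff(W, x, 3, y, 1) - sp.diff(W, x, 1, y, 3))

examples = {
    4: x**12 - 33*x**8*y**4 - 33*x**4*y**8 + y**12,                   # W_12, d = 4
    6: x**20 - 190*x**14*y**6 + 95*x**12*y**8 - 836*x**10*y**10
       + 95*x**8*y**12 - 190*x**6*y**14 + y**20,                      # W_20, d = 6
}
for d, W in examples.items():
    divisor = sp.expand((x*y*(x**2 - y**2))**(d - 3) * phi4)
    print(d, sp.rem(p_D(W), divisor))      # expected remainder: 0 in both cases
```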
\[exam:typeIV\_fwe\] We show only one example of the extremal Type IV formal weight enumerator (degree 11, $d=4$): $$\begin{aligned}
W_{11}(x,y) &=& \frac{1}{9}(8 W_{2,4}(x,y)^4 \varphi_3(x,y) + W_{2,4}(x,y)\varphi_3(x,y)^3)\\
&=& x^{11}-30x^7y^4-336x^5y^6-1035x^3y^8-648xy^{10}.\end{aligned}$$ For $p(x,y)=y(y^2-9x^2)$, we have $$\begin{aligned}
p(x,y)(D)W_{11}(x,y) &=& -720y(x^2-y^2)(x^3-9xy^2)(x^2+3y^2)\\
&=& -720 a(x,y)\varphi_3(x,y) W_{2,4}(x,y)\end{aligned}$$ where $a(x,y)=\{y(x^2-y^2)\}^{d-3}=y(x^2-y^2)$.
Some numerical experiments suggest the following:
\[conj:RHtype1\_4\] All extremal formal weight enumerators of Types I and IV satisfy the Riemann hypothesis.
For the extremal Types I and IV formal weight enumerators, we can also prove analogs of [@Du4 Theorem 12] (the former assertion of it) and [@Du4 Theorem 19]. From Theorem \[thm:analog\_mallows-sloane\], the degree $n$ can be expressed by $d$ in (\[eq:fwe\_form\]) as follows: $$\begin{aligned}
(\mbox{Type I})& &n=4(d-1)+2v, \quad v=0,1,2,3,\\
(\mbox{Type IV})& &n=3(d-1)+2v, \quad v=0,1,2.\end{aligned}$$ Using these parameters, we can prove the following:
\[thm:analog\_duursma\_th12\] (i) Suppose $d\geq 4$. Then extremal Type I formal weight enumerators $W(x,y)$ satisfy $$(xy^3-x^3y)(D)W(x,y)=(d-2)_3 (n-d) A_d (x^3y-xy^3)^{d-3}(x^2+y^2)^v (x^4-6x^2y^2+y^4).$$
\(ii) Suppose $d\geq 4$. Then extremal Type IV formal weight enumerators $W(x,y)$ satisfy $$(y^3-9x^2y)(D)W(x,y)=(d-2)_3A_d (x^2y-y^3)^{d-3}(x^2+3y^2)^v (x^3-9xy^2).$$
[[**Proof.** ]{}]{}We can prove this similarly to [@Du4 Theorem 12].
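For the smallest extremal case the identity in part (i) can be verified directly. The sympy sketch below does this for $W_{12}$, where $n=12$, $d=4$, hence $v=0$ and $A_d=-33$.

```python
import sympy as sp

x, y = sp.symbols('x y')
W_12 = x**12 - 33*x**8*y**4 - 33*x**4*y**8 + y**12
n, d, v, A_d = 12, 4, 0, -33

lhs = sp.diff(W_12, x, 1, y, 3) - sp.diff(W_12, x, 3, y, 1)   # (xy^3 - x^3 y)(D) W_12
rhs = sp.rf(d - 2, 3) * (n - d) * A_d * (x**3*y - x*y**3)**(d - 3) \
      * (x**2 + y**2)**v * (x**4 - 6*x**2*y**2 + y**4)
print(sp.expand(lhs - rhs))                                   # expected: 0
```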
We assume $d\geq 4$ and $d$ is even. We put $d-2=m$ ($m\geq 2$, $m$ is even).
\[thm:analog\_duursma\_th19\] (i) Let $W(x,y)$ be an extremal Type I formal weight enumerator of degree $n=4m+2v+4$ ($m\geq 2$, $m$ is even, $v=0,1,2,3$) and $P(T)=\sum_{i=0}^r p_i T^i$ be the zeta polynomial of $W(x,y)$. Then $$\begin{aligned}
\sum_{i=0}^{2m+2v+2} p_i {{4m+2v}\choose{m-1+i}}(x-y)^{3m+2v+1-i} y^{m-1+i}
&=& \frac{(d-2)_3 (n-d) A_d}{(n-3)_4} (xy)^{m-1} (x^2-y^2)^{m-1}\nonumber\\
& & \cdot (x^2+y^2)^v (x^4-6x^2y^2+y^4).\label{eq:analog_duursma_th19_I}\end{aligned}$$
\(ii) Let $W(x,y)$ be an extremal Type IV formal weight enumerator of degree $n=3m+2v+3$ ($m\geq 2$, $m$ is even, $v=0,1,2$) and $Q(T)=P(T)(1+2T)=\sum_{i=0}^r q_i T^i$, where $P(T)$ is the zeta polynomial of $W(x,y)$. Then $$\begin{aligned}
\sum_{i=0}^{m+2v+2} q_i {{3m+2v}\choose{m-1+i}}(x-y)^{2m+2v+1-i} y^{m-1+i}
&=& \frac{(d-2)_3 A_d}{3(n-2)_3} y^{m-1} (x^2-y^2)^{m-1}\nonumber\\
& & \cdot (x^2+3y^2)^v (x^3-9xy^2).\label{eq:analog_duursma_th19_IV}\end{aligned}$$
[[**Proof.** ]{}]{}Similar to the proof of [@Du4 Theorem 19].
Unfortunately, we cannot prove Conjecture \[conj:RHtype1\_4\] using Theorem \[thm:analog\_duursma\_th19\]. The obstacles are the existence of the factor $x^4-6x^2y^2+y^4$ and $x^3-9xy^2$ on the right hand side of (\[eq:analog\_duursma\_th19\_I\]) and (\[eq:analog\_duursma\_th19\_IV\]), as well as $x^{m-1}$ in (\[eq:analog\_duursma\_th19\_I\]), as was the case of the Type I extremal weight enumerators. However, we can prove a certain equivalence between the Riemann hypothesis for two sequences of extremal formal weight enumerators, which is an analog of Okuda [@Ok Theorem 5.1]:
\[thm:equiv\_rh\] (i) Let $W(x,y)$ be the extremal Type I formal weight enumerator of degree $n=8k+4$ ($k\geq 1$) with the zeta polynomial $P(T)$. Then $$W^\ast(x,y):=\frac{1}{n(n-1)}(x^2+y^2)(D)W(x,y)$$ is the extremal formal weight enumerator of degree $8k+2$ with the zeta polynomial $(2T^2-2T+1)P(T)$. The Riemann hypothesis for $W(x,y)$ is equivalent to that of $W^\ast(x,y)$.
\(ii) Let $W(x,y)$ be the extremal Type IV formal weight enumerator of degree $n=6k+3$ ($k\geq 1$) with the zeta polynomial $P(T)$. Then $$W^\ast(x,y):=\frac{1}{n(n-1)}\left(x^2+\frac{1}{3}y^2\right)(D)W(x,y)$$ is the extremal formal weight enumerator of degree $6k+1$ with the zeta polynomial $(4T^2-2T+1)P(T)/3$. The Riemann hypothesis for $W(x,y)$ is equivalent to that of $W^\ast(x,y)$.
[[**Proof.** ]{}]{}(i) We follow the method of Okuda [@Ok Section 5]. Our proof is similar to it, but we state a proof because [@Ok], being written in Japanese, is not easily accessible to all the readers. We have $W^{\sigma_2}(x,y)=-W(x,y)$ and $W^\tau(x,y)=W(x,y)$. For $p(x,y)=x^2+y^2$, we have $p^{^t\sigma_2}(x,y)=p^{^t\tau}(x,y)=p(x,y)$. So, from Theorem \[thm:duursma\_okuda\] (i) (the case $c_1=1$, $c_2=-1$), we can see that $W^\ast(x,y)$ is a formal weight enumerator of degree $n-2$, the term of smallest degree with respect to $y$ is that of $x^{n-d} y^{d-2}$. If $n=8k+4$, then $2[(n-4)/8]+2=2k+2$, and if $n=8k+2$, then $2[(n-4)/8]+2=2k$. Since the extremal formal weight enumerator is determined uniquely for each degree $n$, we can see that $W^\ast(x,y)$ is extremal.
To deduce the relation between the zeta polynomials, we need the MDS weight enumerators for $q=2$. Let $M_{n,d}=M_{n,d}(x,y)$ be the $[n, k=n-d+1, d]$ MDS weight enumerator and suppose the genus of $W(x,y)$ is $n/2+1-d$. Then $P(T)=\sum_{i=0}^{n-2d+2} a_i T^i$ is related to $W(x,y)$ by $$\label{eq:rel_P(T)_W(x,y)}
W(x,y)=a_0 M_{n,d} + a_1 M_{n,d+1} + \cdots + a_{n-2d+2}M_{n,n-d+2}$$ (see [@Du2 formula (5)]). Note that $d\geq 4$. We have $$\begin{aligned}
x(D)M_{n,i}(x,y) &=& n M_{n-1,i}(x,y),\\
y(D)M_{n,i}(x,y) &=& n (M_{n-1,i-1}(x,y)-M_{n-1,i}(x,y))\end{aligned}$$ (see “puncturing and averaging operator” and “shortening and averaging operator” of [@Du2 Section 3]). We act $x(D)$ on both sides of (\[eq:rel\_P(T)\_W(x,y)\]) and obtain $$x(D)W(x,y)=n(a_0 M_{n-1,d} + a_1 M_{n-1,d+1} + \cdots + a_{n-2d+2} M_{n-1,n-d+2}).$$ So we see that the zeta polynomial of $x(D)W(x,y)/n$ is $P(T)$. Acting $x(D)$ once again, we can see the zeta polynomial of $x^2(D)W(x,y)/n(n-1)$ is $P(T)$, too. For the operator $y(D)$, we have $$\begin{aligned}
\frac{1}{n}y(D)W(x,y) &=& a_0 M_{n-1,d-1} + (a_1-a_0) M_{n-1,d}+\cdots +
(a_{n-2d+2}-a_{n-2d+1})M_{n-1,n-d+1}\\
& & -a_{n-2d+2}M_{n-1,n-d+2},\end{aligned}$$ of which the zeta polynomial is $$a_0+(a_1-a_0)T+\cdots +(a_{n-2d+2}-a_{n-2d+1})T^{n-2d+2}-a_{n-2d+2}T^{n-2d+3}=(1-T)P(T).$$ From this, we can also see that the zeta polynomial of $y^2(D) W(x,y)/n(n-1)$ is $(1-T)^2P(T)$. Note that $x^2(D)W(x,y)/n(n-1)$ begins with the term of $M_{n-2,d}$, whereas $y^2(D) W(x,y)/n(n-1)$ begins with $M_{n-2,d-2}$. Therefore, adjusting the degree, we can conclude that the zeta polynomial of $W^\ast(x,y)$ is $$T^2 P(T)+(1-T)^2 P(T)=(2T^2-2T+1)P(T).$$ The equivalence of the Riemann hypothesis is immediate since both roots of $2T^2-2T+1$ have the same absolute value $1/\sqrt{2}$.
\(ii) We use $p(x,y)=x^2+y^2/3$. The proof is similar to that of (i) (this case is almost the same as [@Ok Theorem 5.1]).
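For $k=1$ the construction in part (i) can be made completely explicit. The sympy sketch below starts from the extremal $W_{12}$, forms $W^\ast=(x^2+y^2)(D)W_{12}/(12\cdot 11)$, and checks that the result is again anti-invariant under $\sigma_2$; its zeta polynomial can then be computed with the routine sketched in the introduction and compared with $(2T^2-2T+1)P(T)$.

```python
import sympy as sp

x, y = sp.symbols('x y')
W_12 = x**12 - 33*x**8*y**4 - 33*x**4*y**8 + y**12
n = 12

W_star = sp.expand((sp.diff(W_12, x, 2) + sp.diff(W_12, y, 2)) / (n*(n - 1)))
print(W_star)

transformed = sp.expand(W_star.subs({x: (x + y)/sp.sqrt(2),
                                     y: (x - y)/sp.sqrt(2)}, simultaneous=True))
print(sp.simplify(transformed + W_star))     # expected: 0, i.e. (W*)^{sigma_2} = -W*
```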
Formal weight enumerators for $q=4/3$ {#section:fwe_4over3}
=====================================
In our previous paper [@Ch3], we have found that $$\label{eq:gen_fwe4over3_deg6}
\varphi_6(x,y) = x^6-5x^4y^2+\frac{5}{3}x^2y^4-\frac{1}{27}y^6$$ satisfies ${\varphi_6}^{\sigma_{4/3}}(x,y)=-\varphi_6(x,y)$. We also know that $$\label{eq:gen_fwe4over3_deg2}
W_{2,4/3}(x,y) = x^2+\frac{1}{3}y^2$$ satisfies ${W_{2,4/3}}^{\sigma_{4/3}}(x,y)=W_{2,4/3}(x,y)$. So we form the following two polynomial rings $$\begin{aligned}
R_{4/3}^- &=&{\bf C}[\varphi_6(x,y), W_{2,4/3}(x,y)],\\
R_{4/3} &=&{\bf C}[\varphi_6(x,y)^2, W_{2,4/3}(x,y)]. \end{aligned}$$ The formal weight enumerators are polynomials of the form (\[eq:fwe\_form\]) in $R_{4/3}^-$ given by $$W_{2,4/3}(x,y)^l \varphi_6(x,y)^{2m+1} \qquad (l,m\geq 0)$$ and their suitable linear combinations. We also consider the invariant polynomials of the form (\[eq:fwe\_form\]) in $R_{4/3}$ given by $$W_{2,4/3}(x,y)^l \varphi_6(x,y)^{2m} \qquad (l,m\geq 0, \quad (l,m)\ne(0,0))$$ and their suitable linear combinations.
We show that the rings $R_{4/3}^-$ and $R_{4/3}$ can be realized as invariant polynomial rings of some groups in $SL_2({\bf C})$. We can see that $R_{4/3}$ is indeed the largest ring which contains polynomials invariant under $\sigma_{4/3}$ and divisible by two. We showed in [@Ch3] that there is no $W(x,y)$ of degree less than six satisfying $W^{\sigma_{4/3}}(x,y)=-W(x,y)$, so $R_{4/3}^-$ is also the largest ring of formal weight enumerators for $q=4/3$ divisible by two.
\[prop:groups\_4over3\] (i) Let $\eta=\displaystyle\frac{1}{2}\left(\begin{array}{rr} 1 & 1 \\ -3 & 1 \end{array}\right)$, $\tau=\left(\begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array}\right)$ and $G_{4/3}^-=\langle \eta, \tau \rangle$. Then we have $|G_{4/3}^-|=12$ and the Molien series are $$\Phi(\lambda)=\frac{1}{(1-\lambda^2)(1-\lambda^6)}.$$ The ring $R_{4/3}^-$ is the invariant polynomial ring of $G_{4/3}^-$.
\(ii) Let $G_{4/3}=\langle \sigma_{4/3}, \tau \rangle$. Then we have $|G_{4/3}|=24$ and the Molien series are $$\Phi(\lambda)=\frac{1}{(1-\lambda^2)(1-\lambda^{12})}.$$ The ring $R_{4/3}$ is the invariant polynomial ring of $G_{4/3}$.
[[**Proof.** ]{}]{}(i) We can verify that $\eta$ has order 6, $\tau^2=I$ ($I$ is the identity matrix) and the relation $\tau\eta=\eta^5\tau$. It follows that $$\begin{aligned}
G_{4/3}^- &=& \{\eta^i \tau^j\ ;\ 0\leq i \leq 5, \ j=0,1 \} \\
&=&\langle \eta \rangle \rtimes \langle \tau \rangle. \end{aligned}$$ Thus we can see $|G_{4/3}^-|=12$. The Molien series can be calculated directly by the definition ([@MaSl p.600]) $$\Phi(\lambda)=\frac{1}{|G_{4/3}^-|}\sum_{A\in G_{4/3}^-} \frac{1}{\det(I-\lambda A)}.$$ The result implies that the invariant polynomial ring ${\bf C}[x,y]^{G_{4/3}^-}$ has two generators, one of which has degree two and the other has degree six. It can be checked that $\eta$ and $\tau$ fix both $W_{2,4/3}(x,y)$ and $\varphi_6(x,y)$.
\(ii) We have ${\sigma_{4/3}}^2=\tau^2=I$, $\sigma_{4/3}\tau$ has order 12 and so $\tau\sigma_{4/3}=(\sigma_{4/3}\tau)^{11}$. There are no $k,l\in{\bf Z}$ such that $(\sigma_{4/3}\tau)^k\sigma_{4/3}=(\sigma_{4/3}\tau)^l$. Therefore $$\begin{aligned}
G_{4/3} &=& \{(\sigma_{4/3}\tau)^i {\sigma_{4/3}}^j\ ;\ 0\leq i \leq 11, \ j=0,1 \} \\
&=&\langle \sigma_{4/3}\tau \rangle \rtimes \langle \sigma_{4/3} \rangle \end{aligned}$$ and $|G_{4/3}|=24$. The Molien series are obtained similarly. It is obvious that ${\sigma_{4/3}}$ and $\tau$ fix $\varphi_6(x,y)^2$ and $W_{2,4/3}(x,y)$.
Next we consider analogs of the Mallows-Sloane bound. In the present case ($q=4/3$), it seems difficult to find a good differential operator $p(x,y)(D)$ and a good polynomial $a(x,y)$ as in the previous section, but it is possible to prove the following by use of the analytic method of [@MaSl Chapter 19, Section 5]:
\[thm:mallows-sloane\_q=4/3\] (i) Any invariant polynomial of the form (\[eq:fwe\_form\]) in $R_{4/3}$ satisfies $$\label{eq:mallows-sloane_q=4/3-1}
d\leq 2\left[\frac{n}{12}\right]+2.$$
\(ii) Any formal weight enumerator of the form (\[eq:fwe\_form\]) in $R_{4/3}^-$ with $n\equiv 6\ ({\rm mod}\ 12)$ satisfies $$\label{eq:mallows-sloane_q=4/3-2}
d\leq 2\left[\frac{n-6}{12}\right]+2.$$
[[**Proof.** ]{}]{}(i) We follow the method of [@MaSl p.624-628]. So we use a similar notation and state an outline. Let $$W_2(x,y)=W_{2,4/3}(x,y)$$ and $$\begin{aligned}
W'_{12}(x,y) &=& \frac{1}{2}(W_{2,4/3}(x,y)^6-\varphi_6(x,y)^2)\\
&=&\frac{1}{81}x^2y^2(x^2-y^2)^2(9x^2-y^2)^2.\end{aligned}$$ Then we have $$R_{4/3}={\bf C}[W_2(x,y), W'_{12}(x,y)].$$ An invariant polynomial $W(x,y)$ in $R_{4/3}$ of the form (\[eq:fwe\_form\]) can be written as $$\label{eq:lin_comb}
W(x,y)=\sum_{r=0}^\mu a_r W_2(x,y)^{6\mu+\nu-6r} W'_{12}(x,y)^r,$$ here, $n=\deg W(x,y)=2(6\mu+\nu)$ ($\mu\geq 0$, $0\leq \nu \leq 5$, $(\mu,\nu)\ne(0,0)$). Suppose we choose suitable $a_r$ and we cancel as many coefficients as possible. The right hand side of (\[eq:lin\_comb\]) is a linear combination of $\mu + 1$ polynomials, so we can at least make $y^2, y^4, \cdots, y^{2\mu}$ disappear. So we assume $$\label{eq:lin_comb_cancel}
W(x,y)=x^n + \sum_{r=\mu+1}^{6\mu+\nu} A_{2r} x^{n-2r} y^{2r}.$$ Our goal is to prove $A_{2\mu+2}\ne0$. We substitute $x$ by 1 and $y^2$ by $x$ in $W_2(x,y)$ and $W'_{12}(x,y)$. We put $$\begin{aligned}
f(x) &=& 1+\frac{1}{3}x,\\
g(x) &=& x(1-x)^2 (1-x/9)^2.\end{aligned}$$ The function ${\mit\Phi}(x)=xf(x)^6/g(x)$ satisfies the conditions of the Bürmann-Lagrange Theorem (see [@MaSl Chapter 19, Theorem 14]) and we can conclude that $$\label{eq:coeff_A_2m+2}
A_{2\mu+2} = \frac{9^{2\mu+2}(6\mu+\nu)}{3\cdot(\mu+1)!} \frac{d^\mu}{dx^\mu}
\left.\left\{ \frac{(1+x/3)^{5-\nu}}{(x-1)^{2\mu+2}(x-9)^{2\mu+2}} \right\}\right|_{x=0}.$$ Let $$F_\mu (x;\alpha, \beta)=(x-\alpha)^{-2\mu-2}(x-\beta)^{-2\mu-2}$$ for $\alpha, \beta>0$. Then it is easy to see that $$F_\mu^{(l)}(0;\alpha, \beta)=\sum_{r=0}^l {{l}\choose{r}} (2\mu+2)_{l-r}(2\mu+2)_r \alpha^{-2\mu-2-l+r} \beta^{-2\mu-2-r}>0$$ for all $l\geq 0$ ($\alpha=1$, $\beta=9$ in our case). Moreover, since $5-\nu\geq 0$, we have $\{(1+x/3)^{5-\nu}\}^{(l)}|_{x=0}>0$ unless $\{(1+x/3)^{5-\nu}\}^{(l)}$ is identically zero. Thus we can see that $A_{2\mu+2}>0$ for all $\mu\geq 0$ and that $d\leq 2\mu+2$. We recall $n=2(6\mu+\nu)$ and $d$ is even. Putting $d=2d'$ ($d'\in{\bf N}$), we obtain $$d'\leq \mu+1 =\frac{n}{12}-\frac{\nu}{6}+1\leq \frac{n}{12}+1, \quad d'\leq \left[\frac{n}{12}\right]+1.$$ The conclusion follows immediately.
\(ii) The proof is similar to (i), but a little more delicate estimate is needed. If we cancel as many coefficients as possible, the formal weight enumerator $W(x,y)$ of degree $n\equiv 6\ ({\rm mod}\ 12)$ can be written in the form $$\begin{aligned}
W(x,y) &=& \sum_{r=0}^\mu b_r W'_{12}(x,y)^r \varphi_6(x,y)^{2\mu-2r+1} \qquad (\mu\geq 0)\nonumber\\
&=& x^{12\mu+6} + \sum_{r=\mu+1}^{6\mu+3} A_{2r} x^{12\mu-2r+6} y^{2r}. \end{aligned}$$ Here, $n=\deg W(x,y)=12\mu+6$. We put $$\begin{aligned}
f(x) &=& 1-5x+\frac{5}{3}x^2-\frac{1}{27}x^3,\\
g(x) &=& x(1-x)^2 (1-x/9)^2.\end{aligned}$$ By a similar argument to (i), we get $$\begin{aligned}
A_{2\mu+2} &=& -\frac{9^{2\mu+2}(2\mu+1)}{(\mu+1)!}\frac{d^\mu}{dx^\mu}
\left.\left\{\left(-\frac{1}{9}x^2+\frac{10}{3}x-5\right)F_\mu(x; 1, 9)\right\}\right|_{x=0}\nonumber\\
&=& -\frac{9^{2\mu+2}(2\mu+1)}{(\mu+1)!}
\left\{5\sum_{r=0}^\mu {{\mu}\choose{r}} (2\mu+2)_{\mu-r}(2\mu+2)_{r}9^{-2\mu-2-r} \right. \nonumber\\
& &-\frac{10}{3}\mu\sum_{r=0}^{\mu-1} {{\mu-1}\choose{r}} (2\mu+2)_{\mu-1-r}(2\mu+2)_{r}9^{-2\mu-2-r} \nonumber\\
& &+\left.\frac{\mu(\mu-1)}{9}\sum_{r=0}^{\mu-2} {{\mu-2}\choose{r}} (2\mu+2)_{\mu-2-r}(2\mu+2)_{r}9^{-2\mu-2-r}
\right\}\label{eq:coeff_A_2m+2_fwe}\end{aligned}$$ for $\mu\geq2$. Now we prove $A_{2\mu+2}<0$. It suffices to show that $$\sum_{r=0}^\mu {{\mu}\choose{r}} (2\mu+2)_{\mu-r}(2\mu+2)_{r}9^{-2\mu-2-r}
>\mu\sum_{r=0}^{\mu-1} {{\mu-1}\choose{r}} (2\mu+2)_{\mu-1-r}(2\mu+2)_{r}9^{-2\mu-2-r},$$ that is, to show that $$\sum_{r=0}^{\mu-1} {{\mu}\choose{r}} (2\mu+2)_{\mu-r}(2\mu+2)_{r}9^{-2\mu-2-r}+
{{\mu}\choose{\mu}} (2\mu+2)_\mu 9^{-3\mu-2}$$ $$\label{eq:ineq_goal}
>\mu\sum_{r=0}^{\mu-1} {{\mu-1}\choose{r}} (2\mu+2)_{\mu-1-r}(2\mu+2)_{r}9^{-2\mu-2-r}.$$ If $r\ne0$, then ${{\mu}\choose{r}}> {{\mu-1}\choose{r}}$. If $0\leq r \leq \mu-1$, then we have $$\frac{(2\mu+2)_{\mu-r}(2\mu+2)_{r}}{\mu (2\mu+2)_{\mu-1-r}(2\mu+2)_{r}}
=\frac{3\mu+1-r}{\mu}>\frac{2\mu+2}{\mu}>1.$$ From these, we can prove (\[eq:ineq\_goal\]) and get $A_{2\mu+2}<0$. Since $n=12\mu+6$, we can estimate $d$ as $$d\leq 2\mu+2=2\cdot\frac{n-6}{12}+2,$$ the conclusion follows similarly to (i) for $\mu\geq 2$, that is, $n\geq 30$. For the cases $\mu=0,1$, explicit constructions show the bound: when $\mu=0$ ($n=6$), there is only one formal weight enumerator $\varphi_6(x,y)$ whose $d=2$, so (\[eq:mallows-sloane\_q=4/3-2\]) holds. When $\mu=1$ ($n=18$), the basis contains two formal weight enumerators $\varphi_6(x,y)^3$ and $W'_{12}(x,y)\varphi_6(x,y)$. We eliminate the term of $y^2$ by making $$\begin{aligned}
\varphi_6(x,y)^3 + 15 W'_{12}(x,y)\varphi_6(x,y)
&=& x^{18} - \frac{85}{3}x^{14}y^4 + \frac{1037}{27}x^{12}y^6 - \frac{935}{27}x^{10}y^8 \\
& & + \frac{935}{81}x^8y^{10} - \frac{1037}{729}x^6y^{12} + \frac{85}{729}x^4y^{14} - \frac{1}{19683}y^{18}\end{aligned}$$ whose $d=4$. Thus we have proved the theorem.
\[exam:coeff\_A\_d\](i) Let $\mu=1$ and $\nu=5$ in (\[eq:coeff\_A\_2m+2\]). Then (\[eq:coeff\_A\_2m+2\]) gives $A_4$ for $n=\deg W(x,y)=22$: $$A_4=\frac{9^4\cdot 11}{3\cdot 2}\frac{d}{dx}\left.\left\{ \frac{1}{(x-1)^4(x-9)^4} \right\}\right|_{x=0}
=\frac{220}{27}.$$ It coincides with the relevant coefficient in
$\displaystyle\frac{1}{36}\{ 25 W_{2,4/3}(x,y)^{11} +11 W_{2,4/3}(x,y)^5 \varphi_6(x,y)^2 \}$ $$\begin{aligned}
&=&x^{22}+\frac{220}{27}x^{18}y^4+\frac{2497}{243}x^{16}y^6+\frac{2750}{729}x^{14}y^8+\frac{484}{2187}x^{12}y^{10}+\frac{484}{6561}x^{10}y^{12}\\
& &+\frac{2750}{19683}x^8y^{14}+\frac{2497}{59049}x^6y^{16}+\frac{220}{59049}x^4y^{18}+\frac{1}{177147}y^{22}.\end{aligned}$$
\(ii) Let $\mu=2$ in (\[eq:coeff\_A\_2m+2\_fwe\]). Then (\[eq:coeff\_A\_2m+2\_fwe\]) gives $$A_6=-\frac{14065}{81}$$ for $n=\deg W(x,y)=30$. It coincides with the relevant coefficient in
$\displaystyle\frac{1}{8424}\{ 10075 W_{2,4/3}(x,y)^{12} \varphi_6(x,y) - 2600 W_{2,4/3}(x,y)^6 \varphi_6(x,y)^3
+949 \varphi_6(x,y)^5 \}$ $$= x^{30}-\frac{14065}{81}x^{24}y^6+ \cdots .$$
[[**Remark.** ]{}]{}(i) It is very plausible that the bound (\[eq:mallows-sloane\_q=4/3-2\]) holds for any formal weight enumerators in $R_{4/3}^-$. The general case requires the analysis of $$\sum_{r=0}^\mu b_r W_2(x,y)^c W'_{12}(x,y)^r \varphi_6(x,y)^{2\mu-2r+1} \quad(0\leq c \leq 5, \mu\geq 0),$$ which is attended with much difficulty. What is treated in Theorem \[thm:mallows-sloane\_q=4/3\] (ii) is the case where $c=0$.
\(ii) One is tempted to find suitable $p(x,y)$ to prove Theorem \[thm:mallows-sloane\_q=4/3\] like in the previous section. One of the candidates of $p(x,y)$ should be $$p(x,y)=xy(x^2-y^2)(x^2-9y^2)$$ which satisfies $p^{^t\sigma_{4/3}}(x,y)=p(x,y)$ and $p^{^t\tau}(x,y)=-p(x,y)$. Using this and a similar reasoning to the previous section, we can prove $$\{xy(x^2-y^2)(9x^2-y^2)\}^{d-5} \varphi_6(x,y) | p(x,y)(D)W(x,y)$$ for a formal weight enumerator $W(x,y)$ in $R_{4/3}^-$ with $d\geq 6$, but this does not reach the desired bound (\[eq:mallows-sloane\_q=4/3-2\]).
We can define the extremal polynomials in $R_{4/3}$:
\[dfn:extremal\_4over3\] Let $W(x,y)$ be a polynomial of the form (\[eq:fwe\_form\]) in $R_{4/3}$. We call $W(x,y)$ extremal if the equality holds in (\[eq:mallows-sloane\_q=4/3-1\]).
Some numerical experiments suggest the following:
\[conj:RH4over3\] All extremal polynomials of the form (\[eq:fwe\_form\]) in $R_{4/3}$ satisfy the Riemann hypothesis.
We cannot prove the above conjecture, but we can prove the following theorem, analogous to Theorem \[thm:equiv\_rh\]:
\[thm:equiv\_rh\_4over3\] Let $W(x,y)$ be the extremal polynomial of the form (\[eq:fwe\_form\]) in $R_{4/3}$ and of degree $n=12k$ ($k\geq 1$) with the zeta polynomial $P(T)$. Then $$W^\ast(x,y):=\frac{1}{n(n-1)}(x^2+3y^2)(D)W(x,y)$$ is the extremal polynomial of degree $12k-2$ with the zeta polynomial $(4T^2-6T+3)P(T)$. The Riemann hypothesis for $W(x,y)$ is equivalent to that of $W^\ast(x,y)$.
[[**Proof.** ]{}]{}We use $p(x,y)=x^2+3y^2$. The theorem can be proved in the same way as Theorem \[thm:equiv\_rh\] (we omit the details).
\[exam:equiv\_rh\_4over3\] The case $k=1$. The extremal polynomial of degree 12 is $$\begin{aligned}
W_{12}^{\rm E}(x,y) &=& \frac{1}{6}\{5 W_{2,4/3}(x,y)^6+\varphi_6(x,y)^2\}\\
&=& x^{12}+\frac{55}{9}x^8y^4 - \frac{176}{81}x^6y^6 + \frac{55}{81}x^4y^8 + \frac{1}{729}y^{12}.\end{aligned}$$ The zeta polynomial is $$P_{12}^{\rm E}(T)=\frac{1}{5103}(448T^6+896T^5+1128T^4+1092T^3+846T^2+504T+189).$$ On the other hand, $$\begin{aligned}
(W_{12}^{\rm E})^\ast(x,y) &=& x^{10}+\frac{5}{3}x^8y^2+\frac{10}{9}x^6y^4+\frac{10}{27}x^4y^6+\frac{5}{81}x^2y^8+\frac{1}{243}y^{10}\\
&=& W_{2,4/3}(x,y)^5,\end{aligned}$$ which is indeed the extremal polynomial of degree 10. We can verify that its zeta polynomial coincides with $(4T^2-6T+3)P_{12}^{\rm E}(T)$.
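This example can also be verified directly. The sketch below reads $p(x,y)(D)$ as the differential operator $p(\partial_x,\partial_y)=\partial_x^2+3\partial_y^2$ (an assumed interpretation of the notation) and checks that $(W_{12}^{\rm E})^\ast$ equals $W_{2,4/3}(x,y)^5$, with $W_{2,4/3}(x,y)=x^2+\frac{1}{3}y^2$ assumed as above; the statement about the zeta polynomials is not checked here.

```python
from sympy import symbols, Rational, diff, expand

x, y = symbols('x y')

# the extremal polynomial of degree 12 as displayed above
W12E = (x**12 + Rational(55, 9)*x**8*y**4 - Rational(176, 81)*x**6*y**6
        + Rational(55, 81)*x**4*y**8 + Rational(1, 729)*y**12)

# p(x,y) = x^2 + 3y^2 applied as the operator d^2/dx^2 + 3*d^2/dy^2 (assumed reading of (D))
W_star = expand(Rational(1, 12*11)*(diff(W12E, x, 2) + 3*diff(W12E, y, 2)))

W_2_43 = x**2 + Rational(1, 3)*y**2        # assumed closed form of W_{2,4/3}
print(expand(W_star - W_2_43**5))          # prints 0
```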
[[**Remark.** ]{}]{}It can be conjectured that a theorem similar to Theorem \[thm:equiv\_rh\_4over3\] holds for extremal formal weight enumerators in $R_{4/3}^-$. In this case, the relevant degrees are $n=12k+6$ and $12k+4$ ($k\geq 1$). We proved (\[eq:mallows-sloane\_q=4/3-2\]) for the degree $n=12k+6$, but not for the degree $12k+4$. The author observed that there was a relation $P_{28}^{\rm E}(T)=(4T^2-6T+3)P_{30}^{\rm E}(T)$, where $P_{30}^{\rm E}(T)$ is the zeta polynomial of the extremal formal weight enumerator of degree 30 ($d=6$) and $P_{28}^{\rm E}(T)$ is that of the unique formal weight enumerator of degree 28 with $d=4$ (at this degree, we can verify that it is extremal).
[*Acknowledgement.* ]{}The author would like to express his sincere gratitude to Professor Yoshio Mimura for his valuable advice on the structures of matrix groups. This work was carried out mainly during the author’s stay at the University of Strasbourg under the overseas research program of Kindai University. He would also like to thank Professor Yann Bugeaud at the University of Strasbourg for his hospitality and Kindai University for giving him the opportunity to take part in the program.
---
author:
- 'J. Marocco'
- 'E. Hache'
- 'F. Lamareille'
bibliography:
- 'bib.bib'
date: 'Received ; Accepted'
subtitle: |
II. A supplementary diagnostic for AGNs\
using the $D_{n}(4000)$ index
title: 'Spectral classification of emission-line galaxies from the Sloan Digital Sky Survey'
---
Introduction\[sec:Introduction\]
================================
There are several types of emission-line galaxies: the two main classes are star-forming galaxies (hereafter SFG) and active galactic nuclei (hereafter AGN). Emission lines are observed in star-forming galaxies because gas is ionized by young, hot stars. In contrast, AGN galaxies contain a supermassive black hole, and their emission lines come from gas ionization by the light emitted from their accretion disk. AGNs can be classified into several types, but we only consider narrow-line AGNs, which can be confused with SFGs, i.e. Seyfert 2 galaxies and LINERs (low-ionization nuclear emission-line regions). We do not consider Seyfert 1 galaxies because they can easily be distinguished from SFGs by their broad Balmer emission lines. A third class of emission-line galaxies is what we call “composites”. Composites show emission lines that are due both to recent star formation and to an AGN.
To classify emission-line galaxies, one may use two diagnostic diagrams depending on the redshift range: the first one is known as the BPT diagnostic [@1981PASP...93....5B], later studied by @2001ApJ...556..121K who used it to separate AGNs from SFGs by means of theoretical models. @2003MNRAS.346.1055K revised Kewley’s work and refined the classification by introducing a third type of galaxies called composites. It was then revised again by @2006MNRAS.372..961K, who improved the classification of AGNs into Seyfert 2 and LINERs. This diagnostic uses log$\left([\mathrm{O\textrm{\textsc{iii}}}]\lambda5007/\mathrm{H}\beta\right)$ vs. log$\left([\mathrm{N}\textsc{ii}]\lambda6583/\mathrm{H}\alpha\right)$ and log$\left([\mathrm{O\textrm{\textsc{iii}}}]\lambda5007/\mathrm{H}\beta\right)$ vs. log$\left([\mathrm{S}\textsc{ii}]\lambda\lambda6717+6731/\mathrm{H}\alpha\right)$ diagrams and may be used up to $z\apprle0.5$ with optical spectrographs. Other diagnostics have been used in the past in the same diagrams. We use @2006MNRAS.372..961K as a reference since it is the latest widely used diagnostic and is based on the largest sample. We refer the reader to @2006MNRAS.372..961K, @2006ApJ...650..727C, and references therein for comparisons of these diagnostics. See also @2006MNRAS.371.1559G for a specific discussion on low-metallicity AGNs.
The second diagnostic was originally proposed by @1996MNRAS.281..847T and studied later by @1997MNRAS.289..419R. This diagnostic is useful at intermediate and high redshift, when some emission lines used in the BPT diagnostic are no longer observed because they are red-shifted out of the spectrographs. @2004MNRAS.350..396L [hereafter L04] established a classification using empirical demarcation lines in the diagnostic diagram showing $\log\left([\mathrm{O}\textsc{iii}]\lambda5007/\mathrm{H}\beta\right)$ vs. $\log\left([\mathrm{O}\textrm{\textsc{ii}}]\lambda\lambda3726+3729/\mathrm{H}\beta\right)$, which may be used up to $z\apprle1.0$ with optical spectrographs, or even at $z\gtrsim2.5$ with near-infrared spectrographs (where optical diagnostics cannot be used). In Paper I [@2009lama], one of us proposed revised equations for the classification that we use in this paper. We know that the @2009lama [hereafter L10] diagnostic implies a loss of Seyfert 2 galaxies, because of the region where Seyfert 2 and SFGs get mixed. As discussed in Paper I, the L10 diagnostic also cannot unambiguously separate composites from SFGs or LINERs. The goal of this paper is to address these two limitations with a different approach. Following the idea behind the “DEW” diagnostic introduced by @2006MNRAS.371..972S, we use the $D_{n}(4000)$ index to derive a supplementary diagnostic. @2011ApJ...728...38Y have already derived a similar new diagnostic based on $U-B$ rest-frame colors. Compared to the present paper, it suffers from the following limitations: it is based on rest-frame colors whose calculation may suffer from biases due to imperfect $k$-correction at high redshift (unless such colors are integrated directly from the spectra); it does not provide a distinction between Seyfert 2 galaxies and LINERs; and it does not provide a way to isolate at least a fraction of composite galaxies. Conversely, that diagnostic has the advantage of only relying on the detection of the $[\mathrm{O}\textsc{iii}]\lambda5007$ and $\mathrm{H}\beta$ emission lines.
Our goal is to provide a diagnostic that can be used to classify intermediate- or high-redshift emission-line galaxies as closely as possible to local universe studies. The older L04 diagnostic has already been used in various studies, such as star formation rates [@2009ApJ...694.1099M], metallicities, AGN populations, gamma-ray burst hosts [@2009ApJ...691..182S], and clusters [@2009MNRAS.398..133L]. Results provided in Paper I and here may be used to revise the spectral classification of emission-line galaxies in intermediate-redshift optical galaxy redshift surveys such as VVDS, zCOSMOS [@2009ApJS..184..218L], DEEP2 [@2003SPIE.4834..161D], GDDS [@2004AJ....127.2455A], GOODS, and others. We hope it will also serve as a reference for ongoing or future high-redshift surveys involving new spectrographs: in the optical, MUSE on VLT [@2010SPIE.7735E...7B] or DIORAMAS on EELT [@2010SPIE.7735E..75L]; or in the near-infrared (at $z\gtrsim2.5$), EMIR on GTC [@2006SPIE.6269E..40G; @2005RMxAC..24..154C], KMOS on VLT [@2006SPIE.6269E..44S], and MOSFIRE on Keck [@2008SPIE.7014E..99M].
It is worth mentioning here as a warning that @2008MNRAS.391L..29S demonstrate that a fraction – whose value is still uncertain – of the galaxies classified as LINERs or composites by emission-line diagnostics may actually be “retired” galaxies. Ionization in such galaxies would be produced by post-AGB stars and white dwarfs. The reader should therefore be aware that galaxies that we refer to as LINERs or composites *might not* contain an AGN. @2011MNRAS.tmp..249C have derived a diagnostic that isolates this class of “retired” galaxies. This diagnostic is based on the H$\alpha$ and [\[]{}N[<span style="font-variant:small-caps;">ii</span>]{}[\]]{}$\lambda$6583 emission lines. It is, however, beyond the scope of this paper to derive a similar diagnostic that can be used on higher redshift spectra, but it may be the goal of future work.
This paper is organized as follows. We first present the data and how we selected them (Sect. \[sec:Data-selection\]), then we summarize existing classification schemes (Sect. \[sec:Existing-classification-schemes\]). In Sect. \[sec: Limits of blue classifications\], we discuss the limits of the L10 and DEW diagnostics. Finally, we present our supplementary diagnostic in Sect. \[sec:Spectral-classification\].
Data selection\[sec:Data-selection\]
====================================
We used a sample of 868 492 galaxies from the SDSS (Data Release 7, [@2009ApJS..182..543A], available at: <http://www.mpa-garching.mpg.de/SDSS/DR7/>) with redshifts between 0.01 and 0.3. The sample originally contained 927 552 individual measurements, but there are 109 219 duplicate spectra (galaxies observed twice or more), so we averaged these duplicated measurements in order to increase the signal-to-noise ratio, and filtered out those that did not increase the averaged signal-to-noise ratio. Among other quantities, these data contain measurements of the equivalent widths of the following emission lines: [\[]{}O[<span style="font-variant:small-caps;">iii</span>]{}[\]]{}$\lambda$5007, [\[]{}O[<span style="font-variant:small-caps;">ii</span>]{}[\]]{}$\lambda\lambda$3726+3729, [\[]{}N[<span style="font-variant:small-caps;">ii</span>]{}[\]]{}$\lambda$6583, [\[]{}[<span style="font-variant:small-caps;">Sii</span>]{}[\]]{}$\lambda\lambda$6717+6731, H$\beta$, H$\alpha$, and [\[]{}NeIII[\]]{}$\lambda$3969. Balmer emission-line measurements were automatically corrected for any underlying absorption. The spectral coverage of SDSS is 3800-9200 Å, and the mean resolution of the spectra is $1800\lesssim\lambda/\triangle\lambda\lesssim2200$. We also retrieved the value of the $D_{n}(4000)$ index [@1999ApJ...527...54B], which was measured on emission-line subtracted spectra.
We filtered our data by requiring a signal-to-noise ratio (in equivalent width) greater than five in order to keep the same selection as in Paper I, also eliminating data with positive equivalent widths, which would indicate absorption lines. We did not apply this selection to the [\[]{}NeIII[\]]{}$\lambda$3969 line, which is only used as an optional measurement in the DEW diagnostic. This finally leads to 89 379 galaxies. All classifications and plots presented in this paper were processed by the “JClassif” software, part of the “Galaxie” pipeline available at: <http://www.ast.obs-mip.fr/galaxie/>.
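For concreteness, the selection described above amounts to a few cuts on a line-measurement catalogue. The sketch below is schematic only: the column names are placeholders rather than actual catalogue keywords, the list of required lines is indicative, and emission lines are assumed to have negative equivalent widths, following the convention adopted here.

```python
import pandas as pd

def select_sample(cat: pd.DataFrame, lines, snr_min=5.0):
    """Keep galaxies whose required lines are in emission (EW < 0) with |EW|/error > snr_min.
    Column names ('<line>_eqw', '<line>_eqw_err') are placeholders, not real catalogue keywords."""
    mask = pd.Series(True, index=cat.index)
    for line in lines:
        ew, err = cat[line + '_eqw'], cat[line + '_eqw_err']
        mask &= (ew < 0) & (ew.abs() / err > snr_min)
    return cat[mask]

# indicative line list; [NeIII] is deliberately left out, being only optional in the DEW diagnostic
required = ['oiii_5007', 'oii_3727', 'nii_6583', 'sii_6717_6731', 'hbeta', 'halpha']
# sample = select_sample(catalogue, required)
```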
Throughout this paper, as in Paper I, all emission line ratios are *equivalent width* ratios rather than flux ratios. This is done to eliminate any dependence that may exist (mainly for the $[\mathrm{O}\textsc{ii}]/\mathrm{H}\beta$ emission line ratio) between the derived diagnostic and the dust properties of the sample. Indeed, equivalent width ratios are sensitive not to dust attenuation, but only to the ratio between the continuum fluxes below each line. Considering [\[]{}O[<span style="font-variant:small-caps;">ii</span>]{}[\]]{} and H$\beta$, this parameter should not evolve strongly between galaxies with similar properties in the diagnostic diagrams, even if they are at different redshifts, which keeps the diagnostics consistent.
Existing classification schemes\[sec:Existing-classification-schemes\]
======================================================================
K06 diagnostic
--------------
As in Paper I, we use a simplified version of the diagnostic from @2006MNRAS.372..961K [hereafter K06] as the reference classification. In the two main K06 diagrams, we use the following demarcation lines:
$$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)=0.61/[\log\left([\mathrm{N}\textsc{ii}]/\mathrm{H}\alpha\right)-0.05]+1.30,\label{eq: Kauffmann red}$$
where AGNs are above this curve, and
$$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)=0.61/[\log\left([\mathrm{N}\textsc{ii}]/\mathrm{H}\alpha\right)-0.47]+1.19,\label{eq: Kewley red}$$
with SFGs below this curve. Composites fall between these two curves. Moreover, AGNs can be subclassified into Seyfert 2 and LINERs using the line
$$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)=1.89\times\log\left([\mathrm{S}\textrm{\textsc{ii}}]/\mathrm{H}\alpha\right)+0.76.\label{eq: Kewley LINERs/SFG}$$
Seyfert 2 galaxies are above this line; LINERs are below.
Our K06 diagnostic is simplified since it does not use the last log$\left([\mathrm{O\textrm{\textsc{iii}}}]\lambda5007/\mathrm{H}\beta\right)$ vs. log$\left([\mathrm{O}\textsc{i}]\lambda6300/\mathrm{H}\alpha\right)$ diagram. This is a reasonable approximation since the $[\mathrm{O}\textsc{i}]\lambda6300$ emission line is weaker than the others, hence not detected in most intermediate- and high-redshift spectra where the signal-to-noise ratio is typically low.
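For reference, the demarcation lines above translate directly into a simple classification routine. The following sketch is purely illustrative; `o3hb`, `n2ha`, and `s2ha` denote the decimal logarithms of the corresponding line ratios.

```python
def classify_k06(o3hb, n2ha, s2ha):
    """Simplified K06 scheme from log([OIII]/Hbeta), log([NII]/Halpha), log([SII]/Halpha)."""
    if n2ha < 0.05 and o3hb < 0.61 / (n2ha - 0.05) + 1.30:      # below Eq. (Kauffmann red)
        return 'SFG'
    if n2ha < 0.47 and o3hb < 0.61 / (n2ha - 0.47) + 1.19:      # below Eq. (Kewley red)
        return 'composite'
    # AGN: split Seyfert 2 / LINER with Eq. (Kewley LINERs/SFG)
    return 'Seyfert 2' if o3hb > 1.89 * s2ha + 0.76 else 'LINER'
```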
L10 diagnostic\[sub:L10-classification\]
----------------------------------------
The L10 diagnostic has been defined in Paper I. We summarize here the main equations, but we refer the reader to Paper I for details. The first equation separates SFGs from AGNs:
$$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)=0.11/[\log\left([\mathrm{O}\textsc{ii}]/\mathrm{H}\beta\right)-0.92]+0.85,\label{eq: New blue SFG/AGN}$$
where AGNs are above this curve. The second equation separates Seyfert 2 from LINERs in the AGN region: $$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)=0.95\times\log\left([\mathrm{O}\textsc{ii}]/\mathrm{H}\beta\right)-0.40.\label{eq: New blue LINERs/Sey2}$$ Seyfert 2 galaxies are above this line. Then, we define a region where some Seyfert 2 (26% of them) are mixed with a majority of SFGs (21.5% contamination by Seyfert 2). This region, called “SFG/Sy2”, is located below Eq. \[eq: New blue SFG/AGN\] and above the line
$$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)=0.30.\label{eq: New blue SFG-Sey2 mix}$$
Finally, we define the region where most of the composites fall (64% of them), even if this region is dominated by SFGs (79%) and also contains some LINERs (2%). This region, called “SFG-LIN/comp”, can be located by the two following inequalities:
$$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\le-(x-1)^{2}-0.1x+0.25,\label{eq: New blue composite 1}$$
$$\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\ge(x-0.2)^{2}-0.60,\label{eq: New blue composite 2}$$
with $x=\log\left([\mathrm{O}\textsc{ii}]/\mathrm{H}\beta\right)$. Unlike in Paper I, we now divide the “SFG-LIN/comp” region for clarity into “SFG/comp” and “LIN/comp” regions, and the separation between SFGs and LINERs is done according to Eq. \[eq: New blue SFG/AGN\].
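The L10 regions summarized above can likewise be encoded in a few lines. The sketch below is schematic; in particular, the treatment of the vertical asymptote of Eq. \[eq: New blue SFG/AGN\] at $\log\left([\mathrm{O}\textsc{ii}]/\mathrm{H}\beta\right)=0.92$ is an assumption of this sketch rather than something specified in the text.

```python
def classify_l10(o3hb, o2hb):
    """L10 'blue diagram' regions; x = log([OII]/Hbeta), y = log([OIII]/Hbeta).
    Handling of the vertical asymptote at x = 0.92 is an assumption of this sketch."""
    x, y = o2hb, o3hb
    above_sfg_agn = x >= 0.92 or y > 0.11 / (x - 0.92) + 0.85          # Eq. (New blue SFG/AGN)
    in_comp = (y <= -(x - 1)**2 - 0.1 * x + 0.25) and (y >= (x - 0.2)**2 - 0.60)
    if in_comp:                                                        # SFG-LIN/comp region
        return 'LIN/comp' if above_sfg_agn else 'SFG/comp'
    if above_sfg_agn:
        return 'Seyfert 2' if y > 0.95 * x - 0.40 else 'LINER'         # Eq. (New blue LINERs/Sey2)
    return 'SFG/Sy2' if y > 0.30 else 'SFG'                            # Eq. (New blue SFG-Sey2 mix)
```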
DEW diagnostic
--------------
The DEW diagnostic has been proposed by @2006MNRAS.371..972S and involves the DEW diagnostic diagram, showing the $D_{n}(4000)$ index vs. the maximum (in absolute value) of the equivalent widths of $[\mbox{O}\textsc{ii}]$ and $\mathrm{[Ne\textsc{iii}]}$ emission lines. We separate AGNs from SFGs using
$$D_{n}(4000)=-0.15x'+1.7,\label{eq: DEW Grazyna}$$
with $x'=\mathrm{\log\left(max\left(EW[O\textsc{ii}],EW[Ne\textsc{iii}]\right)\right)}+1$, where AGNs are above this line. This diagnostic is based on the $D_{n}(4000)$ index being an indicator of the mean age of the stellar populations. It is therefore useful for separating galaxies that are dominated by older stars (AGNs) from galaxies dominated by younger stars (SFGs). The DEW diagnostic also considers that the [\[]{}Ne[<span style="font-variant:small-caps;">iii</span>]{}[\]]{} emission line may be stronger than the $[\mbox{O}\textsc{ii}]$ emission line in AGNs, so that the former can serve as a useful additional tracer of AGNs in low signal-to-noise ratio surveys.
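A minimal illustration of this demarcation (absolute equivalent widths are assumed, since emission lines have negative equivalent widths in the convention adopted here):

```python
import math

def classify_dew(dn4000, ew_oii, ew_neiii):
    """DEW diagnostic: Dn(4000) vs. max(|EW[OII]|, |EW[NeIII]|); AGNs lie above the line."""
    xprime = math.log10(max(abs(ew_oii), abs(ew_neiii))) + 1.0
    return 'AGN' if dn4000 > -0.15 * xprime + 1.7 else 'SFG'
```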
Summary\[sub:Summary\]
----------------------
![image](143fig1){width="0.3\paperwidth"} ![image](143fig2){width="0.3\paperwidth"}\
![image](143fig3){width="0.3\paperwidth"} ![image](143fig4){width="0.3\paperwidth"}
Figure \[Flo:summary\] shows how the different types of galaxies (according to K06) appear in the high-redshift diagrams (top panels) and how the high-redshift classifications appear back in one of the K06 diagnostic diagrams (bottom panels). In all panels, SFGs are plotted in blue, Seyfert 2 in green (except in the bottom-right panel where green points stand for all types of AGNs), LINERs in cyan, and composites in magenta. The L10 diagnostic (left panels) implies several regions where different types of galaxies get mixed. The Seyfert 2 and LINER regions are now quite well defined, but we see composites falling in the SFG and LINER regions. Most of the composites fall in the region of the L10 diagnostic called SFG-LIN/comp (marked by the dashed contour corresponding to Eqs. \[eq: New blue composite 1\] and \[eq: New blue composite 2\]). SFGs and Seyfert 2 are now separated quite well, but there is still a small region of the L10 diagnostic, called SFG/Sy2, where they get mixed. In the bottom-left panel, it seems that most of the SFG/Sy2 galaxies belong to the K06 SFG region, and that a large number of SFG/comp galaxies also belong to the K06 SFG region. LIN/comp galaxies appear to be split roughly half and half between the K06 composite and LINER regions.
We now compare the K06 and DEW classifications (right panels). We see that all K06 LINERs are correctly classified as AGNs in the DEW diagnostic. Most K06 Seyfert 2 galaxies also lie in the DEW AGN region, which is quite satisfying. However, composites are split between the DEW SFG and AGN regions, which confirms that composites are hybrids between AGNs and SFGs, also in terms of stellar populations. Thus they obviously cannot be isolated in the DEW diagnostic. We emphasize that the definition of SFG and AGN galaxies used in @2006MNRAS.371..972S is slightly different from the K06 scheme: it is based on the long-dashed curve shown in the bottom-right panel of Fig. \[Flo:summary\] (Eq. 11 of their paper). The DEW diagnostic is designed to exclude only pure SFGs without any AGN contribution, while according to @2006MNRAS.371..972S galaxies classified as SFG by K06 (and by us) would allow up to a 3% AGN contribution. Composites would allow up to a 20% AGN contribution.
Indeed, in the bottom-right panel, a non-negligible number of DEW AGNs actually belong to the K06 SFG or composite regions. However, we also note that, conversely, a non-negligible number of DEW SFGs contaminate the K06 composite and AGN regions. The DEW diagnostic actually fails to completely exclude all pure SFGs.
Limits of the L10 and DEW classifications\[sec: Limits of blue classifications\]
================================================================================
Success and contamination charts\[sub:Success-and-contamination\]
-----------------------------------------------------------------
[$^{*}$ union of SFG/Sy2, SFG, and SFG/comp regions.]{}
[$^{**}$ union of the LINERs and LIN/comp regions.]{}
\[Flo: Succes chart New blue\]
[$^{*}$ union of SFG/Sy2, SFG, and SFG/comp regions.]{}
[$^{**}$ union of the LINERs and LIN/comp regions.]{}
\[Flo: Contamination chart New blue\]
The success chart consists of classifying galaxies from our sample according to the reference, and then associating with each type of galaxy (AGN, composite, or SFG) the probability of being classified correctly in the new diagnostic. The contamination chart is based on the same principle as the success chart, except that this time we classify galaxies according to the new diagnostic, and then calculate the probability that the galaxies classified as one type are actually of that same type according to the reference. Table \[Flo: Succes chart New blue\] shows the success chart of the L10 diagnostic. It reveals a relatively satisfying spread of composite galaxies and AGNs among the different types defined. Table \[Flo: Contamination chart New blue\] shows the associated contamination chart. We notice quite good efficiency, i.e. low contamination by other types, in the L10 SFG, Seyfert 2, and LINER regions.
If we take a look at AGNs, we notice that almost 60% of K06 Seyfert 2 galaxies are successfully classified as L10 Seyfert 2. Moreover, 26% belong to the L10 SFG/Sy2 region, which would give a total of more than 85% of K06 Seyfert 2 galaxies being classified as Seyfert 2 with the L10 diagnostic. However, the contamination chart shows that the L10 SFG/Sy2 region is actually made up of only 21% K06 Seyfert 2, which means it cannot be used to reliably look for additional Seyfert 2 galaxies in high-redshift samples. The Seyfert 2 region itself shows a very low contamination (13%) by other types.
Most K06 LINERs (74%) are also successfully classified as L10 LINERs, and 17% belong to the L10 LIN/comp region. That gives a global success rate of 91% for LINERs. As already stated in Paper I, these results are much better than results produced by the former L04 diagnostic; however, the contamination by composites in the LIN/comp region is not negligible. Only 40% of the L10 LIN/comp objects actually are K06 LINERs, and 57% are K06 composites. That gives a global 37% contamination in the union of the L10 LINERs and LIN/comp regions.
Finally, we confirm from these two tables the conclusion of Paper I that the L10 diagnostic is very efficient for SFGs. If we consider the union of the L10 SFG, SFG/Sy2, and SFG/comp regions, the success rate is 99.7% and the contamination by other types only 16%. This low contamination by composite galaxies and AGNs, in particular in the SFG/Sy2 and SFG/comp regions, has been shown not to critically bias SFG studies such as metallicity. Such tests have been performed, for instance, with the L04 diagnostic, i.e. with even more contamination by AGNs in the SFG region.
\[Flo: Success chart DEW\]
\[Flo: Contamination chart DEW\]
Tables \[Flo: Success chart DEW\] and \[Flo: Contamination chart DEW\] show the success and contamination charts of the DEW diagnostic. Again, the results are very good for SFGs, with a 95% success rate and only an 8% contamination, lower than for L10. Moreover, this lower contamination for SFGs does not come at the cost of a drastically worse success rate for AGNs: the success rate is 80% for Seyfert 2 and 98% for LINERs. The DEW diagnostic is in fact better able to separate SFGs from AGNs than the standard diagnostic diagrams used in the K06 or L10 classifications. However, the main limitations of the DEW diagnostic clearly appear from the contamination chart for DEW AGNs. The DEW AGN region is actually made up of only 37% K06 AGNs. There is indeed a high contamination by 18% K06 SFGs, much higher than with the L10 diagnostic (less than 3% in the Seyfert 2 and LINER regions). Moreover, the DEW AGN region is contaminated by 45% K06 composites, to be compared with 30% for the L10 LINERs and only 1% for the L10 Seyfert 2. As one can see in Fig. \[Flo:summary\], composites get completely confused with Seyfert 2 and LINERs in the DEW diagnostic, while they are rather confused with SFGs in the L10 diagnostic. We note that this contamination is explained mainly by the fact that @2006MNRAS.371..972S use a different definition of SFGs and AGN galaxies (as discussed in Sect. \[sub:Summary\] above). Indeed, in the DEW diagnostic’s philosophy, it should not be considered a “contamination” but rather a “contribution” of an AGN to star-forming galaxies. Finally, the DEW diagnostic does not allow any distinction between Seyfert 2 and LINERs.
Regarding the classification of SFGs, we conclude that one should use the L10 diagnostic for its very high success rate and the DEW diagnostic for its lower contamination. Regarding AGNs, the respective advantages of the L10 and DEW diagnostic diagrams may be combined to provide a better diagnostic, which is the goal of the present paper. We emphasize that we did not use the DEW diagnostic itself for our own classification. We only used the DEW diagram to derive a new diagnostic where it is needed (see Sect. \[sub: New blue classification\] below).
AGN counts\[sub: New blue classification\]
------------------------------------------
In order to better explore the limitations of the L10 and DEW classifications of AGNs, we now count the number of AGNs (Seyfert 2 and LINERs) as a function of the ionization state, roughly given by the $\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)$ emission-line ratio. To achieve this test, we divide the K06 or L10 diagnostic diagrams into equal horizontal slices, and then in each slice we count the number of AGNs. Figure \[Flo: AGN count newblue\] shows the absolute and difference counts (relative to K06) obtained with, from left to right, the following classifications: L04, DEW, L10, and the present paper’s diagnostic (see Sect. \[sec:Spectral-classification\] below). In each panel, the results are compared to the actual count of AGNs according to the reference K06 diagnostic.
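The counting itself is straightforward; a schematic version, with placeholder slice edges and boolean AGN flags from each classification, might read:

```python
import numpy as np

def agn_counts(o3hb, is_agn, edges):
    """Number of AGNs per horizontal slice of log([OIII]/Hbeta)."""
    counts, _ = np.histogram(o3hb[is_agn], bins=edges)
    return counts

# e.g., with placeholder slice edges:
# edges = np.arange(-0.6, 1.4, 0.1)
# diff  = agn_counts(o3hb, is_agn_l10, edges) - agn_counts(o3hb, is_agn_k06, edges)
```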
We confirm, as stated in Paper I, that the L04 diagnostic tends to underestimate the amount of AGNs, even when including L04 candidate AGNs, and that a very high number of AGNs (mainly LINERs) are lost in this diagnostic. However, we can put this effect into context thanks to Fig. \[Flo: AGN count newblue\], as we see that it only becomes significant for $\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\lesssim0.9$, or $\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\lesssim0.7$ if we include candidate AGNs. For AGNs with a high ionization state, the L04 diagnostic indeed gives perfect results.
Figure \[Flo: AGN count newblue\] also shows that the DEW and L10 classifications perform quite well, following the K06 curve almost exactly for $\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\gtrsim0.25$. In both cases, we notice an underestimate of the number of AGNs for $0.25\lesssim\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\lesssim0.7$, this effect being more significant for the L10 diagnostic than for the DEW diagnostic. Nevertheless, the DEW diagnostic clearly overestimates the number of AGNs in low ionization states, i.e. mainly LINERs ($\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\lesssim0.25$). In this region, the L10 diagnostic, in contrast, follows the K06 curve satisfyingly, with a small underestimate.
Unfortunately, including the SFG/Sy2 and LIN/comp regions does not help. It makes a peak of galaxies appear at $\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\simeq0.4$ that does not fit the reference profile, while at lower ionization states the number of AGNs is now clearly overestimated. These two effects stem from the high contamination of the L10 SFG/Sy2 region by K06 SFGs and of the L10 LIN/comp region by K06 composites.
![image](143fig5){width="0.2\paperwidth"} ![image](143fig6){width="0.2\paperwidth"}![image](143fig7){width="0.2\paperwidth"}![image](143fig8){width="0.2\paperwidth"}
![image](143fig9){width="0.2\paperwidth"} ![image](143fig10){width="0.2\paperwidth"}![image](143fig11){width="0.2\paperwidth"}![image](143fig12){width="0.2\paperwidth"}
The supplementary M11 diagnostic\[sec:Spectral-classification\]
===============================================================
![image](143fig13){width="0.29\paperwidth"}![image](143fig14){width="0.29\paperwidth"}![image](143fig15){width="0.29\paperwidth"}
![image](143fig16){width="0.29\paperwidth"}![image](143fig17){width="0.29\paperwidth"}![image](143fig18){width="0.29\paperwidth"}
![image](143fig19){width="0.29\paperwidth"}![image](143fig20){width="0.29\paperwidth"}![image](143fig21){width="0.29\paperwidth"}
![image](143fig22){width="0.29\paperwidth"}![image](143fig23){width="0.29\paperwidth"}![image](143fig24){width="0.29\paperwidth"}
Figure \[Flo: Mixed areas DEW & New blue\] shows four regions where galaxies of different types (according to the reference K06 diagnostic) are confused in the $\log\left([\mathrm{O}\textsc{iii}]\lambda5007/\mathrm{H}\beta\right)$ vs. $\log\left([\mathrm{O}\textrm{\textsc{ii}}]\lambda\lambda3726+3729/\mathrm{H}\beta\right)$ diagram. It also shows, in its center and right panels, how these galaxies behave in the $D_{n}(4000)$ vs. $\mathrm{max\left(EW[O\textsc{ii}],EW[Ne\textsc{iii}]\right)}$ diagram. From top to bottom, the four studied regions are SFG/Sy2, SFG/comp, another SFG/comp region not defined in the L10 diagnostic but where a non-negligible number of the composites (25%) are still mixed with SFGs, and LIN/comp.
The SFG/Sy2 region
------------------
In the L10 SFG/Sy2 region (see Fig. \[Flo: Mixed areas DEW & New blue\] top), K06 Seyfert 2 and SFGs are confused. It is unfortunately obvious in the bottom-left panel that the DEW diagnostic *does not* separate the two classes of objects correctly in the L10 SFG/Sy2 region. We thus propose a new demarcation line to separate K06 Seyfert 2 from SFGs, *valid only in the L10 SFG/Sy2 region*, with the equation $$D_{n}(4000)=0.22\times\mathrm{\log\left(max\left(EW[O\textsc{ii}],EW[Ne\textsc{iii}]\right)\right)}+0.97.\label{eq: Red dashed DEW}$$ Seyfert 2 would fall above this line, SFG below. The slope and zero point of this line have been optimized by minimizing on a grid the following function: $$\chi^{2}=(1-S^{A})^{2}+(1-S_{B})^{2}+(C^{A})^{2}+(C_{B})^{2},\label{eq:minimize}$$ where $S^{A}$, $S_{B}$, $C^{A}$, $C_{B}$ are the success rate for AGN above the defined line, the success rate for SFG below the line, the contamination by SFG above the line, and the contamination by AGN below the line (all values between 0 and 1), respectively. Indeed, we want to maximize the success rates and minimize the contaminations simultaneously above and below the defined line. The grid is done in $0.01$ dex steps in both slope and zero point. To minimize computing time, limits on this grid are defined by eye.
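The grid optimization can be sketched as follows; the inputs (`dn4000`, `logmax_ew`, and the reference boolean labels `is_agn`) and the grid limits are placeholders, the latter being set by eye as described above.

```python
import numpy as np

def fit_demarcation(dn4000, logmax_ew, is_agn, slopes, intercepts):
    """Grid-search the line Dn(4000) = a*log(max EW) + b by minimising Eq. (chi^2).
    `is_agn` holds the reference (K06) labels as booleans; all arrays are 1-d and aligned."""
    best = (np.inf, None, None)
    for a in slopes:
        for b in intercepts:
            above = dn4000 > a * logmax_ew + b
            if not above.any() or above.all():
                continue
            s_above = np.mean(above[is_agn])        # success rate of AGN (fraction above the line)
            s_below = np.mean(~above[~is_agn])      # success rate of SFG (fraction below the line)
            c_above = np.mean(~is_agn[above])       # contamination by SFG above the line
            c_below = np.mean(is_agn[~above])       # contamination by AGN below the line
            chi2 = (1 - s_above)**2 + (1 - s_below)**2 + c_above**2 + c_below**2
            if chi2 < best[0]:
                best = (chi2, a, b)
    return best

# placeholder grid limits, to be set by eye as in the text:
# chi2, a, b = fit_demarcation(dn4000, logmax_ew, is_agn,
#                              np.arange(0.0, 0.5, 0.01), np.arange(0.8, 1.2, 0.01))
```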
----------- ------- -----------
M11 SFG Seyfert 2
*total* *100* *100*
SFG 99.10 3.10
Seyfert 2 0.90 96.90
----------- ------- -----------
: Success chart for the supplementary M11 diagnostic in the L10 SFG/Sy2 region.
\[Flo: succes SFG-Sey2\]
We have established the success chart of our new diagnostic in the L10 SFG/Sy2 region (see Table \[Flo: succes SFG-Sey2\]). This chart shows that our demarcation line works almost perfectly and can be used in this area. We now correctly classify almost all actual Seyfert 2 galaxies in our sample: 97% of those in the L10 SFG/Sy2 region are classified as Seyfert 2. Given that 59% of the K06 Seyfert 2 in the whole sample were already correctly classified, and 26% of them were classified as L10 SFG/Sy2, this increases the global success rate to 85%. This is the best success rate one can obtain by combining the L10 and DEW diagrams. The contamination in the SFG/Sy2 region consists of 3.1% SFGs above the line defined by Eq. \[eq: Red dashed DEW\] and 0.9% Seyfert 2 below it.
The SFG/comp region
-------------------
The L10 SFG/comp region (see Fig. \[Flo: Mixed areas DEW & New blue\], second row) contains composites, SFGs, and very few LINERs. Following the optimization procedure explained above, we find the following equation, which separates as many SFGs as possible from the composites:
$$D_{n}(4000)=-0.11\times\mathrm{\log\left(max\left(EW[O\textsc{ii}],EW[Ne\textsc{iii}]\right)\right)}+1.4,\label{eq: newmix2}$$
where SFGs are below this line. Since SFGs dominate the sample, the region below this line is composed of 98% SFGs. Conversely, the region above this line is still a mix between SFGs (59%), composites (36%), and a few LINERs (4%). We again applied the optimization procedure, but now considering only the latter region, in order to isolate pure composites as much as possible. We obtain: $$D_{n}(4000)=-0.17\times\mathrm{\log\left(max\left(EW[O\textsc{ii}],EW[Ne\textsc{iii}]\right)\right)}+1.75,\label{eq: newmix2b}$$ where composites are above the line and SFG/comp below it (i.e. between the two lines).
------------ ------- -------- ------------
M11 SFG LINERs composites
*total* *100* *100* *100*
SFG 63.69 3.47 5.31
composites 0.19 70.14 13.95
SFG/comp 36.12 26.39 80.74
------------ ------- -------- ------------
: Success chart for the supplementary M11 diagnostic in the L10 SFG/comp region.
\[Flo: succes SFGLINcomp\]
------------ --------- ------- -------- ------------
M11 *total* SFG LINERs composites
SFG *100* 98.30 0.02 1.67
composites *100* 5.85 8.21 85.93
SFG/comp *100* 68.81 0.19 31.00
------------ --------- ------- -------- ------------
: Contamination chart for the supplementary M11 diagnostic in the L10 SFG/comp region.
\[Flo: conta SFGLINcomp\]
Tables \[Flo: succes SFGLINcomp\] and \[Flo: conta SFGLINcomp\] show the success chart and the contamination chart of the supplementary M11 diagnostic in the L10 SFG/comp region. Of all the K06 SFGs present in this region, 64% are now correctly classified as SFGs, another 36% being still ambiguously classified as SFG/comp. *It is unfortunately impossible* to increase this success rate without misclassifying too many K06 composites as SFGs. Only 14% of K06 composites could be isolated. Newly isolated SFGs are not significantly contaminated by K06 composites (2%), and neither are the newly isolated composites by K06 SFGs (6%). The majority of the K06 composites (81%) are still ambiguously classified as SFG/comp. The best reason to use this diagram is clearly to isolate SFGs. The SFG/comp region is now made of 69% K06 SFGs and 31% K06 composites, which is more balanced than in Paper I (83% and 17%, respectively).
Additional region of composites mixed with SFGs
-----------------------------------------------
We define a last region where K06 composites are mixed with K06 SFGs in the L10 diagnostic. It is located below the SFG/Sy2 region and at $\log\left([\mathrm{O}\textsc{iii}]\lambda5007/\mathrm{H}\beta\right)>-0.4$, excluding the SFG/comp region (see Fig. \[Flo: Mixed areas DEW & New blue\], third row). Even though we can see that these K06 SFGs are mostly spread over the bottom right of the DEW diagnostic diagram and the K06 composites lie slightly above them, there is still a small area where they overlap.
Following the same optimization procedure as above, we first fit the optimized demarcation line between pure SFGs and SFGs mixed with composites: $$D_{n}(4000)=0.44\times\mathrm{\log\left(max\left(EW[O\textsc{ii}],EW[Ne\textsc{iii}]\right)\right)}+0.72,\label{eq: newmix3below}$$ where SFGs are below this line. As in the previous section, the region above this line contains a mix of SFGs (25%) and composites (75%). In this region, we fit another optimized demarcation line between pure composites and composites mixed with SFGs: $$D_{n}(4000)=-0.37\times\mathrm{\log\left(max\left(EW[O\textsc{ii}],EW[Ne\textsc{iii}]\right)\right)}+1.66,\label{eq: newmix3above}$$ where composites are above this line, and SFG/comp below it (i.e. between the two lines). We finally add the remaining mixed galaxies to the SFG/comp type.
------------ ------- ------------
M11 SFG composites
*total* *100* *100*
SFG 92.10 12.69
composites 2.90 61.97
SFG/comp 5.00 25.34
------------ ------- ------------
: Success chart for the supplementary M11 diagnostic in the additional region of the L10 diagnostic where SFG are mixed with composites.
\[Flo: successSFGcomp\]
------------ --------- ------- ------------
M11 *total* SFG composites
SFG *100* 96.39 3.61
composites *100* 14.50 85.30
SFG/comp *100* 42.08 57.92
------------ --------- ------- ------------
: Contamination chart for the supplementary M11 diagnostic in the additional region of the L10 diagnostic where SFG are mixed with composites.
\[Flo: contSFGcomp\]
Tables \[Flo: successSFGcomp\] and \[Flo: contSFGcomp\] show the success and contamination charts of the supplementary M11 diagnostic in this last region. Of all K06 SFGs present in this region, 92% are now correctly classified, and another 5% are ambiguously classified as SFG/comp. We also managed to isolate 62% of the K06 composites falling in this region, which is an improvement compared to the L10 diagnostic. Conversely, 12% of the K06 composites are now unfortunately misclassified as SFGs. This is still an improvement: we recall that 100% of the K06 composites in this region used to be classified as SFGs with the L10 diagnostic. Most of them (62%) are now classified as composite galaxies, and another 25% are still ambiguously classified as SFG/comp. The composites defined in this region are contaminated by 15% K06 SFGs, which is not negligible. The SFG/comp galaxies defined in this region are made of approximately half K06 SFGs (42%) and half K06 composites (58%).
The LIN/comp region
-------------------
Figure \[Flo: Mixed areas DEW & New blue\] (bottom) shows the LIN/comp region of the L10 diagnostic in the DEW diagnostic diagram. It is clear that this diagram cannot be used to isolate LINERs cleanly from composites. Indeed, one may argue that LINERs are more concentrated towards the top and composites towards the bottom of the diagram, so we propose a straight line at $D_{n}(4000)\approx1.75$ as a separation. Using this line we find, however, 61% LINERs and 39% composites above it, and 28% and 72%, respectively, below it, which is unsatisfactory. Thus, we leave the LIN/comp region unchanged from Paper I.
Discussion
----------
![image](143fig25){width="0.3\paperwidth"} ![image](143fig26){width="0.3\paperwidth"}
Figure \[Flo:summary-1\] shows our supplementary M11 diagnostic combined with the L10 diagnostic, in one of the standard K06 diagnostic diagrams. The left panel looks quite good: SFGs, Seyfert 2, LINERs, and some composites lie almost perfectly in the correct corresponding regions of this diagram. In contrast, the right-hand panel shows the limitations: objects that are still ambiguously classified either as SFG/comp or as LIN/comp. Nevertheless, comparing Fig. \[Flo:summary-1\] to the bottom-left panel of Fig. \[Flo:summary\], we see a clear improvement.
-------------------------- --------- ------------ ----------- ---------
L10/M11 SFG Composites Seyfert 2 LINERs
*total* *100* *100* *100* *100*
SFG 77.97 8.92 0.92 0.10
SFG/comp 20.99 50.97 0.47 0.98
composites 0.66 23.44 5.46 4.30
Seyfert 2 0.12 0.93 84.71 3.93
LINERs 0.17 6.94 8.00 73.51
LIN/comp 0.11 8.81 0.44 17.18
*total SFG*[$^{1}$]{} *98.96* *59.88* *1.39* *1.08*
*total LINERs*[$^{2}$]{} *0.27* *15.74* *8.44* *90.70*
*total comp.*[$^{3}$]{} *21.75* *83.22* *6.38* *22.46*
-------------------------- --------- ------------ ----------- ---------
: Overall success chart for the L10 and M11 diagnostics.
[$^{1}$ SFG+SFG/com]{}
[$^{2}$ LINERs+LIN/comp]{}
[$^{3}$ composites+SFG/comp+LIN/comp.]{}
\[Flo: Success chart Hybrid\]
-------------------------- --------- --------- ------------ ----------- ---------
L10/M11 *total* SFG Composites Seyfert 2 LINERs
SFG *100* 97.68 2.26 0.05 0.01
SFG/comp *100* 66.81 32.89 0.07 0.23
composites *100* 11.02 79.77 3.99 5.23
Seyfert 2 *100* 2.73 4.42 86.20 6.66
LINERs *100* 2.28 19.40 4.80 73.51
LIN/comp *100* 3.37 56.57 0.61 39.46
*total SFG*[$^{1}$]{} *100* *75.83* *15.37* *3.30* *5.50*
*total LINERs*[$^{2}$]{} *100* *2.61* *30.68* *3.53* *63.18*
*total comp.*[$^{3}$]{} *100* *53.67* *41.63* *0.68* *4.02*
-------------------------- --------- --------- ------------ ----------- ---------
: Overall contamination chart for the L10 and M11 diagnostics.
[$^{1}$ SFG+SFG/com]{}
[$^{2}$ LINERs+LIN/comp]{}
[$^{3}$ composites+SFG/comp+LIN/comp.]{}
\[Flo: Contamination chart Hybrid\]
Tables \[Flo: Success chart Hybrid\] and \[Flo: Contamination chart Hybrid\] establish new success and contamination charts in order to get a more precise measurement of this improvement. The success chart shows that only 1% of K06 SFGs are classified in a region where they are not supposed to be. K06 composites are more often found in the SFG/comp region (51%) than in the composite region (23%). This is an improvement over Paper I, but it shows that neither the L10 nor the DEW diagram is really good at identifying composites at high redshift. K06 Seyfert 2 galaxies and LINERs have a high success rate (85% and 74%, respectively), which for Seyfert 2 galaxies is a substantial improvement compared to Paper I. For K06 LINERs, the success rate increases to 91% when the LIN/comp region is included, but one has to be aware that this region is actually made of only 39% K06 LINERs and is dominated by 57% K06 composites.
Nevertheless, one can conclude from the contamination chart that SFGs, Seyfert 2, and LINERs are not significantly contaminating each other. The main contamination comes in all cases from composites: 33% in the SFG/comp region, 19% in the LINERs region, and 57% in the LIN/comp region. It is conversely very low in the SFG and Seyfert 2 regions.
One could worry about aperture effects. Indeed, SDSS spectra are taken through 3” fibers. This may result in overestimated $D_{n}(4000)$ values for nearby objects where only the central bulge is covered by the fiber. However, one can see in Fig. \[Flo: Mixed areas DEW & New blue\] that only objects with $D_{n}(4000)<1.5$ could change their classification with a significant aperture effect. Looking at Fig. 16 in @2003MNRAS.341...33K, we see that those objects do not actually suffer from a strong aperture effect. We conclude that our diagnostic is not biased by this effect.
Finally, we invite the reader to take a look at the right-hand panels of Fig. \[Flo: AGN count newblue\]. They show the AGN counts obtained with our new supplementary M11 diagnostic combined with the L10 diagnostic. We clearly see that the diagnostic derived in the present paper is the one that follows the reference K06 curve most accurately. There are still problems for $\log\left([\mathrm{O}\textsc{iii}]/\mathrm{H}\beta\right)\lesssim0.25$, which is expected since we did not manage to change the classification of LINERs by adding the DEW diagnostic diagram.
Conclusion \[sec:Conclusion\]
=============================
By adding the M11 diagnostic to the L10 diagnostic derived in Paper I, we now have a very good classification of emission-line galaxies that can be used on high-redshift samples. The main improvements compared to Paper I are
- The unambiguous classification of objects in the former SFG/Sy2 region as SFGs or Seyfert 2.
- The unambiguous classification of some of the objects in the SFG/comp region as SFGs or composites (where no composites at all were found in Paper I).
- A better definition of the SFG/comp region, which leaves fewer possible composites not flagged as such. We emphasize again that this region is in any case dominated by SFGs.
No improvement could be made in the LIN/comp region, which is left unchanged compared to Paper I.
In order to use the diagnostic derived in this paper, one should follow these steps.
1. Classify objects in the $\log\left([\mathrm{O}\textsc{iii}]\lambda5007/\mathrm{H}\beta\right)$ vs. $\log\left([\mathrm{O}\textrm{\textsc{ii}}]\lambda\lambda3726+3729/\mathrm{H}\beta\right)$ with the L10 diagnostic derived in Paper I (see also equations in Sect. \[sub:L10-classification\]).
2. Classify objects falling in the SFG/Sy2 region as SFGs or Seyfert 2 using Eq. \[eq: Red dashed DEW\].
3. Isolate objects falling in the SFG/comp region as
1. SFGs using Eq. \[eq: newmix2\], and
2. composites using Eq. \[eq: newmix2b\].
4. Define a new SFG/comp region inside the SFG region using $\log\left([\mathrm{O}\textsc{iii}]\lambda5007/\mathrm{H}\beta\right)>-0.4$, and inside this new region, isolate
1. SFGs using Eq. \[eq: newmix3below\], and
2. composites using Eq. \[eq: newmix3above\].
In both points (3) and (4) above, objects not classified as SFGs or composites remain of the ambiguous SFG/comp type. We invite the reader to look at the “JClassif” software (available at: <http://www.ast.obs-mip.fr/galaxie/>), which performs these steps automatically on any sample, as well as other classification schemes.
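Putting these steps together, a schematic implementation of the combined classification could look as follows. It builds on a `classify_l10` routine such as the sketch given in Sect. \[sub:L10-classification\]; the use of absolute equivalent widths is an assumption of this sketch.

```python
import math

def classify_l10_m11(o3hb, o2hb, dn4000, ew_oii, ew_neiii):
    """L10 classification refined by the supplementary M11 demarcations (steps 1-4 above).
    Builds on the `classify_l10` sketch given earlier; absolute EWs are assumed."""
    w = math.log10(max(abs(ew_oii), abs(ew_neiii)))        # log(max EW)
    cls = classify_l10(o3hb, o2hb)                         # step 1

    if cls == 'SFG/Sy2':                                   # step 2, Eq. (Red dashed DEW)
        return 'Seyfert 2' if dn4000 > 0.22 * w + 0.97 else 'SFG'

    if cls == 'SFG/comp':                                  # step 3, Eqs. (newmix2), (newmix2b)
        if dn4000 < -0.11 * w + 1.4:
            return 'SFG'
        return 'composite' if dn4000 > -0.17 * w + 1.75 else 'SFG/comp'

    if cls == 'SFG' and o3hb > -0.4:                       # step 4, Eqs. (newmix3below), (newmix3above)
        if dn4000 < 0.44 * w + 0.72:
            return 'SFG'
        return 'composite' if dn4000 > -0.37 * w + 1.66 else 'SFG/comp'

    return cls                                             # Seyfert 2, LINER, LIN/comp, or SFG
```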
Table \[Flo: Contamination chart Hybrid\] may be used as a probability chart giving the likelihood that each type in our diagnostic corresponds to one of the K06 reference types. However, we warn the reader that the relative proportions of SFGs, composites, and AGNs in each region of our diagnostic diagrams may evolve with redshift compared to the SDSS sample. Finally, we note that it is possible to extend this classification to higher redshifts where the $[\mathrm{O}\textsc{iii}]\lambda5007$ and $\mathrm{H}\beta$ emission lines get red-shifted out of the spectra ($1.0\apprle z\apprle1.5$ for optical spectra). To that end one may use the equations provided by @2007MNRAS.381..125P, which convert $[\mathrm{Ne}\textsc{iii}]\lambda3869$ and $\mathrm{H}\delta$ to $[\mathrm{O}\textsc{iii}]\lambda5007$ and $\mathrm{H}\beta$.
F.L. thanks “La cité de l’espace” for financial support while this paper was being written. The data used in this paper were produced by a collaboration of researchers (currently or formerly) from the MPA and the JHU. The team is made up of Stéphane Charlot (IAP), Guinevere Kauffmann and Simon White (MPA), Tim Heckman (JHU), Christy Tremonti (Max-Planck for Astronomy, Heidelberg - formerly JHU), and Jarle Brinchmann (Sterrewach Leiden - formerly MPA). All data presented in this paper were processed with the JClassif software, part of the Galaxie pipeline. We thank the referee for useful corrections and suggestions for improving the paper.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
---
abstract: 'Let $p$ be prime, let $G$ be a $p$-valuable group, isomorphic to $\mathbb{Z}_p^d\rtimes\mathbb{Z}_p$, and let $k$ be a field of characteristic $p$. We will prove that all faithful prime ideals of the completed group algebra $kG$ are controlled by $Z(G)$, and a complete decomposition for $Spec(kG)$ will follow. The principal technique we employ will be to study the convergence of Mahler expansions for inner automorphisms.'
author:
- Adam Jones
title: 'Completed Group Algebras of Abelian-by-procyclic Groups'
---
Introduction
============
Let $p$ be a prime, $k$ be a field of characteristic $p$, and let $G$ be a compact $p$-adic Lie group. The *completed group algebra*, or *Iwasawa algebra* of $G$ with respect to $k$ is defined as:
$$kG:=\underset{N\trianglelefteq_o G}{\varprojlim} k[G/N].$$
We aim to improve our understanding of the prime ideal structure of completed group algebras, which would have profound consequences for the representation theory of compact $p$-adic Lie groups.\
Background
----------
In [@survey Section 6], Ardakov and Brown ask a number of questions regarding the two-sided ideal structure of $kG$. Several of these have now been answered or partially answered, but a number of them remain open. We hope that this paper will take important steps towards providing an answer to these questions.\
First recall some important definitions:
Let $I$ be a right ideal of $kG$:\
1. We say that $I$ is *faithful* if for all $g\in G$, $g-1\in I$ if and only if $g=1$, i.e. $G\to\frac{kG}{I},g\mapsto g+I$ is injective.\
2. We say that $H\leq_c G$ *controls* $I$ if $I=(I\cap kH)kG$.\
Define the *controller subgroup* of $I$ by $I^{\chi}:=\bigcap\{U\leq_o G: U$ controls $I\}$, and denote by $Spec^f(kG)$ the set of all faithful prime ideals of $kG$.
Also recall the following useful result [@controller Theorem A]:
Let $I$ be a right ideal of $kG$. Then $I^{\chi}\trianglelefteq_c G$, and $H\leq_c G$ controls $I$ if and only if $I^{\chi}\subseteq H$.
We will assume throughout that $G$ is $p$-valuable, i.e. carries a complete $p$-valuation $\omega:G\to\mathbb{R}\cup\{\infty\}$ in the sense of [@Lazard III 2.1.2].\
**Notation.** We say $N\trianglelefteq_c^i G$ to mean that $N$ is a closed, isolated, normal subgroup of $G$.\
One of the main open problems in the subject concerns proving a control theorem for faithful prime ideals in $kG$. To illustrate the importance of this problem, consider the following result ([@nilpotent Theorem A]):
\[decomposition\]
Let $G$ be a $p$-valuable group such that for every $N\trianglelefteq_c^i G$ and every $P\in Spec^f(k\frac{G}{N})$, $P$ is controlled by the centre of $\frac{G}{N}$, i.e. $P^{\chi}\subseteq Z(\frac{G}{N})$.\
Then every prime ideal $P$ of $kG$ is completely prime, i.e. $\frac{kG}{P}$ is a domain, and we have a bijection:
$Spec(kG)\leftrightarrow \underset{N\trianglelefteq_c^i G}{\coprod}{Spec^f(kZ(\frac{G}{N}))}$
**Remark.** This is stronger than the statement given in [@nilpotent], but it has an identical proof.\
This Theorem is the strongest result we have to date concerning ideal classification for $kG$. Using it, we reduce the problem of classifying ideals in the non-commutative ring $kG$ to the simpler problem of classifying ideals in the *commutative strata* $kZ(\frac{G}{N})$.\
The result gives a positive answer to [@survey Question N] for all groups $G$ for which we have the appropriate control theorem for faithful primes in $kG$, i.e. for all groups $G$ such that $P^{\chi}\subseteq Z(\frac{G}{N})$ for $P$ faithful.
It follows from the following result ([@nilpotent Theorem 8.4]) that this is true if $G$ is nilpotent.
\[nilpotence\]
Let $G$ be a $p$-valuable group, $P\in Spec^f(kG)$, and suppose that $P$ is controlled by a nilpotent subgroup of $G$. Then $P$ is controlled by the centre of $G$.
We want to extend this result to more general groups, in particular when $G$ is solvable, and thus give a positive answer to [@survey Question O].
Aims
----
In this paper, we will consider certain classes of metabelian groups, i.e. groups whose commutator subgroup is abelian.\
A $p$-valuable group $G$ is *abelian-by-procyclic* if it is isomorphic to $\mathbb{Z}_p^d\rtimes\mathbb{Z}_p$ for some $d\in\mathbb{N}$. The completed group algebras of these groups have the form of skew-power series rings $R[[x;\sigma,\delta]]$, for $R$ a commutative power series ring.\
Our main result establishes the control condition in Theorem \[decomposition\] for groups of this form:
\[A\]
Let $G$ be a $p$-valuable, abelian-by-procyclic group. Then every faithful prime ideal of $kG$ is controlled by $Z(G)$.
Using Lemma \[Lie-invariant\] below, we see that if $G$ is an abelian-by-procyclic group, and $N$ is a closed, isolated normal subgroup, then $\frac{G}{N}$ is also abelian-by-procyclic, and hence by the theorem, every faithful prime ideal of $k\frac{G}{N}$ is controlled by the centre. Hence we can apply Theorem \[decomposition\] to get a complete decomposition for $Spec(kG)$.\
In particular, if $G$ is any solvable, $p$-valuable group of rank 2 or 3, then $G$ is abelian-by-procyclic, so again, we can completely determine $Spec(kG)$.\
Most of the work in this paper will go towards proving the following theorem.
\[B\]
Let $G$ be a non-abelian, $p$-valuable, abelian-by-procyclic group. If $P\in Spec^f(kG)$ then $P$ is controlled by a proper, open subgroup of $G$.
This result is actually all we need to deduce Theorem \[A\]:\
*Proof of Theorem \[A\].* Recall from [@nilpotent Definition 5.5] that a prime ideal $P$ of $kG$ is *non-splitting* if for all $U\leq_o G$ controlling $P$, $P\cap kU$ is prime in $kU$.\
Fix $G$ a $p$-valuable, abelian-by-procyclic group, and suppose that $P\in Spec^f(G)$ is non-splitting.\
Consider the controller subgroup $P^{\chi}$ of $P$. Using [@nilpotent Proposition 5.5] and the non-splitting property, we see that $Q:=P\cap kP^{\chi}$ is a faithful prime ideal of $kP^{\chi}$. We know that $P^{\chi}$ is a closed, normal subgroup of $G$, so it follows from Lemma \[Lie-invariant\] below that $P^{\chi}$ is abelian-by-procyclic.\
If $P^{\chi}$ is abelian, then it follows from Theorem \[nilpotence\] that $P^{\chi}$ is central in $G$, i.e. $P^{\chi}\subseteq Z(G)$ and $P$ is controlled by $Z(G)$ as required. So assume for contradiction that $P^{\chi}$ is non-abelian:\
Applying Theorem \[B\] gives that $Q$ is controlled by a proper open subgroup $U$ of $kP^{\chi}$, i.e. $Q=(Q\cap kU)kP^{\chi}$. But $P$ is controlled by $P^{\chi}$ by [@controller Theorem A], so:
$P=(P\cap kP^{\chi})kG=QkG=(Q\cap kU)kP^{\chi}kG=(P\cap kU)kG$
Hence $P$ is controlled by $U$, which is a proper subgroup of $P^{\chi}$ – contradiction.\
So, we conclude that any faithful, non-splitting prime ideal of $kG$ is controlled by $Z(G)$. Now suppose that $I$ is a faithful, *virtually non-splitting* right ideal of $kG$, i.e. $I=PkG$ for some open subgroup $U$ of $G$, $P$ a faithful, non-splitting prime ideal of $kU$.\
Using Lemma \[Lie-invariant\] below, we see that $U$ is $p$-valuable, abelian-by-procyclic, so by the above discussion, $P$ is controlled by $Z(U)$, and in fact, $Z(U)=Z(G)\cap U$ by [@nilpotent Lemma 8.4]. Therefore, since $I\cap kU=P$ by [@nilpotent Lemma 5.1(ii)]:
$I=PkG=(P\cap kZ(U))kUkG=(I\cap kU\cap kZ(G))kG=(I\cap kZ(G))kG$
So since $G$ is $p$-valuable and every faithful, virtually non-splitting right ideal of $kG$ is controlled by $Z(G)$, it follows from [@nilpotent Theorem 5.8, Corollary 5.8] that every faithful prime ideal of $kG$ is controlled by $Z(G)$ as required.\
So the remainder of our work will be to prove Theorem \[B\]; the proof will be given at the end of section 6.
Outline
-------
Throughout, we will fix $G=\mathbb{Z}_p^{d}\rtimes\mathbb{Z}_p$ a non-abelian, $p$-valuable, abelian-by-procyclic group, $P$ a faithful prime ideal of $kG$, and let $\tau:kG\to Q(\frac{kG}{P})$ be the natural map. To prove a control theorem for $P$, we will follow a similar approach to the method used in [@nilpotent].\
The most important notion we will need is the concept of the *Mahler expansion* of an automorphism $\varphi$ of $G$, introduced in [@nilpotent Chapter 6], and which we will recap in section 2.
In our case, we will take $\varphi$ to be the automorphism defining the action of $\mathbb{Z}_p$ on $\mathbb{Z}_p^d$.\
In section 2, we will see how we can use the Mahler expansion of $\varphi^{p^m}$ to deduce an expansion of the form:
$0=q_1^{p^m}\tau\partial_1(y)+....+q_{d}^{p^m}\tau\partial_{d}(y)+O(q^{p^m})$
Where $y\in P$ is arbitrary, each $\partial_i:kG\to kG$ is a derivation, and $q_i,q\in Q(\frac{kG}{P})$; we call the $q_i$ the *Mahler approximations*. We want to examine convergence of this expression as $m\rightarrow\infty$ to deduce that $\tau\partial_i(P)=0$ for some $i\leq d$, from which a control theorem should follow.\
Also, for an appropriate polynomial $f$, we will see that we have a second useful expansion:
$0=f(q_1)^{p^m}\tau\partial_1(y)+....+f(q_{d})^{p^m}\tau\partial_{d}(y)+O(f(q)^{p^m})$
The idea is to compare the growth of the Mahler approximations with $m$ so that we can scale this expression and get that the higher order terms tend to zero, and the lower order terms converge to something non-zero.
One possibility would be to divide out the lower order terms, which was the approach used in [@nilpotent]. But an important difference in our situation is that we cannot be sure that the elements $f(q_i)$ are *regular*, i.e. that dividing them out of the expression will not affect convergence of the higher order terms.\
In section 3, we will recap how to use a filtration on $kG$ to construct an appropriate filtration $v$ on $Q(\frac{kG}{P})$, and introduce the notion of a *growth rate function*, which allows us to examine the growth with $m$ of our elements $f(q)^{p^m}$ with respect to $v$.\
We will also introduce the notion of a *growth preserving polynomial* (GPP), and show how a control theorem follows from the existence of such a polynomial $f$ satisfying certain $v$-regularity conditions.
To prove that a GPP $f$ satisfies these conditions, it is only really necessary to prove that $f(q)$ is central, non-nilpotent of minimal degree inside gr $\frac{kG}{P}$.\
Ensuring the elements $f(q)$ are central and non-nilpotent in gr $\frac{kG}{P}$ can usually be done, provided $Q(\frac{kG}{P})$ is not a CSA – a case we deal with separately in section 4 using a technique involving diagonalisation of the Mahler approximations.
Ensuring these elements have minimal degree, however, is more of a problem when using the standard filtration on $kG$.\
In section 5, we define a new filtration on $kG$ which ensures that we can find a GPP satisfying the required conditions, and we construct such a polynomial in section 6. This will allow us to complete our analysis and prove Theorem \[B\].\
**Acknowledgements.** I would like to thank Konstantin Ardakov for supervising me in my research, and for many helpful conversations and suggestions. I would also like to thank EPSRC for funding this research.
Preliminaries
=============
Throughout, we will use the notation $(.,.)$ to denote the group commutator, i.e. $(g,h)=ghg^{-1}h^{-1}$.
Abelian-by-procyclic groups
---------------------------
Fix $G$ a compact $p$-adic Lie group carrying a $p$-valuation $\omega$. Since $G$ is compact, it follows that $G$ has finite rank $d\in\mathbb{N}$ by [@Lazard III.2.2.6], i.e. there exists a finite *ordered basis* $\underline{g}=\{g_1,....,g_d\}$ such that for all $g\in G$, $g=g_1^{\alpha_1}....g_d^{\alpha_d}$ for some $\alpha_i\in\mathbb{Z}_p$ unique, and $\omega(g)=\min\{v_p(\alpha_i)+\omega(g_i):i=1,....,d\}$.\
Hence $G$ is homeomorphic to $\mathbb{Z}_p^d$, and furthermore, it follows from [@Lazard I.2.3.17] that $kG\cong k[[b_1,\cdots,b_d]]$ as $k$-vector spaces, where $b_i=g_i-1$ for each $i$.\
Recall the definition of a $p$-saturated group [@Lazard III.2.1.6], and recall that the category of $p$-saturated groups is isomorphic to the category of saturated $\mathbb{Z}_p$-Lie algebras via the $\log$ and $\exp$ functors [@Lazard IV.3.2.6].
This means that for $G$ a $p$-saturated group, $\log(G)$ is a free $\mathbb{Z}_p$-Lie subalgebra of the $\mathbb{Q}_p$-Lie algebra $\mathfrak{L}(G)$ of $G$.\
Also, recall that any $p$-valuable group $G$ can be embedded as an open subgroup into a $p$-saturable group $Sat(G)$, and hence $Sat(G)^{p^t}\subseteq G$ for some $t\in\mathbb{N}$ – see [@Lazard III.3.3.1] for full details.
A compact $p$-adic Lie group $G$ is *abelian-by-procyclic* if there exists $H\trianglelefteq_c^{i} G$ such that $H$ is abelian and $\frac{G}{H}\cong\mathbb{Z}_p$. For $G$ non-abelian, $H$ is unique, and we call it the *principal subgroup* of $G$; it follows easily that $Z(G)\subseteq H$ and $(G,G)\subseteq H$, so $G$ is metabelian.
If $G$ is non-abelian, abelian-by-procyclic with principal subgroup $H$, then $G\cong H\rtimes\mathbb{Z}_p$. So if we assume that $G$ is $p$-valuable, we have that $G\cong\mathbb{Z}_p^d\rtimes\mathbb{Z}_p$.\
Moreover, we can choose an element $X\in G\backslash H$ such that for every $g\in G$, $g=hX^{\alpha}$ for some unique $h\in H$, $\alpha\in\mathbb{Z}_p$. We call $X$ a *procyclic element* in $G$.
\[Lie-invariant\]
Let $G$ be a non-abelian $p$-valuable abelian-by-procyclic group with principal subgroup $H$, and let $C$ be a closed subgroup of $G$. Then $C$ is abelian-by-procyclic.
Furthermore, if $N$ is a closed, isolated normal subgroup of $G$, then $\frac{G}{N}$ is $p$-valuable, abelian-by-procyclic.
Let $H':=C\cap H$, then clearly $H'$ is a closed, isolated, abelian normal subgroup of $C$.\
Now, $\frac{C}{H'}=\frac{C}{H\cap C}\cong\frac{CH}{H}\leq\frac{G}{H}\cong\mathbb{Z}_p$, hence $\frac{C}{H'}$ is isomorphic to a closed subgroup of $\mathbb{Z}_p$, i.e. it is isomorphic to either 0 or $\mathbb{Z}_p$.\
If $\frac{C}{H'}=0$ then $C=H'=H\cap C$ so $C\subseteq H$. Hence $C$ is abelian, and therefore abelian-by-procyclic.\
If $\frac{C}{H'}\cong\mathbb{Z}_p$ then $C$ is abelian-by-procyclic by definition.\
For $N\trianglelefteq_c^i G$, it follows from [@Lazard IV.3.4.2] that $\frac{G}{N}$ is $p$-valuable, and clearly $\frac{HN}{N}$ is a closed, abelian normal subgroup of $\frac{G}{N}$.\
Consider the natural surjection of groups $\mathbb{Z}_p\cong\frac{G}{H}\twoheadrightarrow{}\frac{G}{HN}\cong\frac{G/N}{HN/N}$:\
If the kernel of this map is zero, then it is an isomorphism, so $\frac{G/N}{HN/N}\cong\mathbb{Z}_p$, so $\frac{G}{N}$ is abelian-by-procyclic by definition.\
If the kernel is non-zero, then $\frac{G}{HN}$ is a non-trivial quotient of $\mathbb{Z}_p$, and hence is finite, giving that $HN$ is open in $G$. So $\frac{HN}{N}$ is abelian and open in $\frac{G}{N}$, and it follows that $\frac{G}{N}$ is abelian, and hence abelian-by-procyclic.
Now, fix $G$ a non-abelian $p$-valuable, abelian-by-procyclic group with principal subgroup $H$, procyclic element $X$. Let $\varphi\in Inn(G)$ be conjugation by $X$, then it is clear that $\varphi\neq id$.\
Recall from [@nilpotent Definition 4.8, Proposition 4.9] the definition of $z(\varphi^{p^m}):H\to Sat(H)$, $h\mapsto \lim_{n\rightarrow\infty}{(\varphi^{p^{m+n}}(h)h^{-1})^{p^{-n}}}$.\
Using [@Lazard IV.3.2.3] it follows that $z(\varphi^{p^m})(h)=\exp([p^m\log(X),\log(h)])$, and hence $z(\varphi^{p^m})(h)=z(\varphi)(h)^{p^m}$ for all $m$.\
For each $m\in\mathbb{N}$, define $u_m:=z(\varphi^{p^m})$, then it is clear that $u_m(h)=u_0(h)^{p^m}$ for all $h\in H$. So fix $m_1\in\mathbb{N}$ such that $u_0(h)^{p^{m_1}}\in H$ for all $h\in H$, and let $u:=u_{m_1}:H\to H$.
We call $m_1$ the *initial power*; we may choose this to be as high as we need.
Filtrations
-----------
Let $R$ be any ring; we assume throughout that all rings are unital.
\[filt\]
A *filtration* of $R$ is a map $w:R\to\mathbb{Z}\cup\{\infty\}$ such that for all $x,y\in R$, $w(x+y)\geq\min\{w(x),w(y)\}$, $w(xy)\geq w(x)+w(y)$, $w(1)=0$, $w(0)=\infty$. The filtration is *separated* if $w(x)=\infty$ implies that $x=0$ for all $x\in R$.
If $R$ carries a filtration $w$, then there is an induced topology on $R$ with the subgroups $F_nR:=\{r\in R:w(r)\geq n\}$ forming a basis for the neighbourhoods of the identity. This topology is Hausdorff if and only if the filtration is separated.\
Recall from [@LVO Ch. II, Definition 2.1.1] that a filtration is *Zariskian* if $F_1R\subseteq J(F_0R)$ and the *Rees ring* $\widetilde{R}:=\underset{n\in\mathbb{Z}}{\oplus}{F_nR}$ is Noetherian.\
Zariskian filtrations can only be defined on Noetherian rings, and by [@LVO Ch. II, Theorem 2.1.2] we see that a Zariskian filtration is separated.\
If $R$ carries a filtration $w$, then define the *associated graded ring* of $R$ to be
gr $R:=\underset{n\in\mathbb{Z}}{\oplus}{\frac{F_nR}{F_{n+1}R}}$
This is a graded ring with multiplication given by $(r+F_{n+1}R)\cdot (s+F_{m+1}R)=(rs+F_{n+m+1}R)$.\
**Notation.** For $r\in R$ with $w(r)=n$, we write gr$(r):=r+F_{n+1}R\in$ gr $R$.\
We say that a filtration $w$ is *positive* if $w(r)\geq 0$ for all $r\in R$.
\[valuation\]
If $R$ carries a filtration $v$ and $x\in R\backslash\{0\}$, we say that $x$ is *$v$-regular* if $v(xy)=v(x)+v(y)$ for all $y\in R$, i.e. gr$(x)$ is not a zero divisor in gr $R$. If all non-zero $x$ are $v$-regular we say that $v$ is a *valuation*.
Note that if $x$ is $v$-regular and a unit then $x^{-1}$ is $v$-regular and $v(x^{-1})=-v(x)$.\
**Examples.** 1. If $I$ is an ideal of $R$, the *$I$-adic* filtration on $R$ is given by $F_0R=R$ and $F_nR=I^n$ for all $n>0$. If $I=\pi R$ for some normal element $\pi\in R$, we call this the *$\pi$-adic filtration*.\
2. If $R$ carries a filtration $w$, then $w$ extends naturally to $M_k(R)$ via $w((a_{i,j}))=\min\{w(a_{i,j}):i,j=1,\cdots,k\}$ – the *standard matrix filtration*.\
3. If $R$ carries a filtration $w$ and $I\trianglelefteq R$, we define the *quotient filtration* $\overline{w}:\frac{R}{I}\to\mathbb{Z}\cup \{\infty\}, r+I\mapsto\sup\{w(r+y):y\in I\}$. In this case, gr$_{\overline{w}}$ $\frac{R}{I}=\frac{\text{gr}\,R}{\text{gr}\,I}$.\
4. Let $(G,\omega)$ be a $p$-valuable group with ordered basis $\underline{g}=\{g_1,\cdots,g_d\}$. We say that $\omega$ is an *abelian* $p$-valuation if $\omega(G)\subseteq\frac{1}{e}\mathbb{Z}$ for some $e\in\mathbb{Z}$, and $\omega((g,h))>\omega(g)+\omega(h)$ for all $g,h\in G$.
If $\omega$ is abelian, then by [@nilpotent Corollary 6.2], $kG\cong k[[b_1,\cdots,b_d]]$ carries an associated Zariskian filtration given by
$w(\underset{\alpha\in\mathbb{N}^d}{\sum}{\lambda_{\alpha}b_1^{\alpha_1}\cdots b_d^{\alpha_d}})=\inf\{\sum_{i=1}^{d}{e\alpha_i\omega(g_i)}:\lambda_{\alpha}\neq 0\}\in\mathbb{Z}\cup\{\infty\}$.
We call $w$ the *Lazard filtration* associated with $\omega$. It induces the natural complete, Hausdorff topology on $kG$, and gr $kG\cong k[X_1,\cdots,X_d]$, where $X_i=$ gr$(b_i)=$ gr$(g_i-1)$.
Since gr $kG$ is a domain, it follows that $w$ is a valuation.\
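By contrast, a quotient filtration need not be a valuation. For instance, give $R=k[[Y_1,Y_2]]$ the $(Y_1,Y_2)$-adic filtration, so that gr $R\cong k[Y_1,Y_2]$ is a domain and the filtration is a valuation; but in the quotient $\frac{R}{Y_1Y_2R}$ with the quotient filtration $\overline{w}$, we have gr $\frac{R}{Y_1Y_2R}\cong\frac{k[Y_1,Y_2]}{(Y_1Y_2)}$, and the image of $Y_1$ is not $\overline{w}$-regular, since $\overline{w}(Y_1)=\overline{w}(Y_2)=1$ while $\overline{w}(Y_1Y_2)=\infty$.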
Now, recall from [@McConnell Definition 1.5.8] the definition of a *crossed product*, $R\ast F$, of a ring $R$ with a group $F$. That is, $R\ast F$ is a ring containing $R$, free as an $R$-module with basis $\{\overline{g}:g\in F\}\subseteq (R\ast F)^{\times}$, where $\overline{1_F}=1_R$, such that $R\overline{g_1}=\overline{g_1}R$ and $(\overline{g_1}R)(\overline{g_2}R)=\overline{g_1g_2}R$ for all $g_1,g_2\in F$.\
Also recall from [@Passman] that given a crossed product $S=R\ast F$, we can define the *action* $\sigma:F\to Aut(R)$ and the *twist* $\gamma:F\times F\to R^{\times}$ such that for all $g,g_1,g_2\in F$, $r\in R$:\
$\overline{g}r=\sigma(g)(r)\overline{g}$ and $\overline{g_1}$ $\overline{g_2}=\gamma(g_1,g_2)\overline{g_1g_2}$.
\[crossed product\]
Let $R$ be a ring with a complete, positive, Zariskian valuation $w:R\to\mathbb{N}\cup\{\infty\}$, let $F$ be a finite group, and let $S=R\ast F$ be a crossed product with action $\sigma$ and twist $\gamma$. Suppose that $w(\sigma(g)(r))=w(r)$ for all $g\in F$, $r\in R$.\
Then $w$ extends to a complete, positive, Zariskian filtration $w':S\to \mathbb{N}\cup\{\infty\}$ defined by $w'(\sum_{g\in F}{r_g \overline{g}})=\min\{w(r_g):g\in F\}$, and gr$_{w'}$ $S \cong ($gr$_w$ $R)\ast F$.
From the definition it is clear that $w'(s_1+s_2)\geq\min\{w'(s_1),w'(s_2)\}$. So to prove that $w'$ defines a ring filtration, it remains to check that $w'(s_1s_2)\geq w'(s_1)+w'(s_2)$.\
In fact, using the additive property, we only need to prove that $w'(r\overline{g}s\overline{h})\geq w'(r\overline{g})+w'(s\overline{h})$ for all $r,s\in R$, $g,h\in F$. We will in fact show that equality holds here:\
$w'(r\overline{g}s\overline{h})=w'(r\sigma(g)(s)\overline{g} \overline{h})=w'(r\sigma(g)(s)\gamma(g,h)\overline{gh})=w(r\sigma(g)(s)\gamma(g,h))$\
$=w(r)+w(\sigma(g)(s))+w(\gamma(g,h))$ (by the valuation property)\
$=w(r)+w(s)$.\
The last equality follows because $w(\sigma(g)(s))=w(s)$ by assumption, and since $R$ is positively filtered and $\gamma(g,h)$ is a unit in $R$, it must have value zero. Clearly $w(r)+w(s)=w'(r\overline{g})+w'(s\overline{h})$ so we are done.\
Hence $w'$ is a well-defined ring filtration, clearly $w'(r)=w(r)$ for all $r\in R$, and $w'(\overline{g})=0$ for all $g\in F$. We can define $\theta:$ gr$_w$ $R\to $ gr$_{w'}$ $S, r+F_{n+1}R\mapsto r+F_{n+1}S$, which is a well defined, injective ring homomorphism.\
Given $s\in S$, $s=\underset{g\in F}{\sum}{r_g \overline{g}}$, so let $A_s:=\{g\in F:w(r_g)=w'(s)\}$. Then:\
gr$(s)=\underset{g\in A_s}{\sum}{r_g \overline{g}}+F_{w'(s)+1}S=\underset{g\in A_s}{\sum}{(r_g+F_{w'(s)+1}S)(\overline{g}+F_1S)}=\underset{g\in A_s}{\sum}\theta($gr$(r_g))$gr$(\overline{g})$.\
Hence gr$_{w'}$ $S$ is finitely generated over $\theta($gr $R)$ by $\{$gr$(\overline{g}):g\in F\}$. This set forms a basis, hence gr$_{w'}$ $S$ is free over $\theta($gr $R)$, and it is clear that each gr$(\overline{g})$ is a unit in gr $S$, and they are in bijection with the elements of $F$.\
Therefore gr $S$ is Noetherian, and clearly $R\ast F$ is complete with respect to $w'$. Hence $w'$ is Zariskian by [@LVO Chapter II, Theorem 2.1.2].\
Finally, gr$(r\overline{g})$gr$(s\overline{h})=$gr$(r\overline{g}s\overline{h})$ since $w'(r\overline{g}s\overline{h})=w'(r\overline{g})+w'(s\overline{h})$, so it is readily checked that $($gr $R)$gr$(\overline{g})$gr$(\overline{h})=(($gr $R)$gr$(\overline{g}))(($gr $R)$gr$(\overline{h}))$, and clearly $($gr $R)$gr$(\overline{g})=$gr$(\overline{g})($gr $R)$.\
Therefore gr$_{w'}$ $S=($gr$_w$ $R)\ast F$.
This result will be useful to us later, because for any $p$-valuable group $G$, $U\trianglelefteq_o G$, $kG\cong kU\ast\frac{G}{U}$.
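For instance, in the simplest case (recall that ${\rm char}\,k=p$): take $G=\mathbb{Z}_p$ with topological generator $g$ and $U=G^p$. Then $kG=k[[b]]$ with $b=g-1$, and since $g^p-1=(g-1)^p=b^p$, we have $kU=k[[b^p]]$ and
$$kG=kU\oplus kU\overline{g}\oplus\cdots\oplus kU\overline{g}^{\,p-1},$$
a crossed product of $kU$ with $\frac{G}{U}\cong\mathbb{Z}/p\mathbb{Z}$, with trivial action and twist $\gamma(\overline{g}^{\,i},\overline{g}^{\,j})=g^p$ when $i+j\geq p$ (and $\gamma=1$ otherwise).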
Mahler expansions
-----------------
We will now recap the notion of the Mahler expansion of an automorphism.\
Let $G$ be a compact $p$-adic Lie group. Define $C^{\infty}=C^{\infty}(G,k)=\{f:G\to k:f$ locally constant$\}$, and for each $U\leq_o G$, define $C^{\infty U}:=\{f\in C^{\infty}:f$ constant on the cosets of $U\}$.\
Clearly $C^{\infty}$ is a $k$-algebra, $C^{\infty U}$ is a subalgebra, and recall from [@controller Lemma 2.9] that there is an action $\gamma:C^{\infty}\to End_k(kG)$ of $C^{\infty}$ on $kG$. Also recall the following result ([@controller Proposition 2.8]):
\[control\]
Let $I$ be a right ideal of $kG$, $U$ an open subgroup of $G$. Then $I$ is controlled by $U$ if and only if $I$ is a $C^{\infty U}$-submodule of $kG$, i.e. if and only if for all $g\in G$, $\gamma(\delta_{Ug})(I)\subseteq I$, where $\delta_{Ug}$ is the characteristic function of the coset $Ug$.
Now assume that $G$ is $p$-valuable, with $p$-valuation $\omega$, and let $\underline{g}=\{g_1,\cdots,g_d\}$ be an ordered basis for $(G,\omega)$. For each $\alpha\in\mathbb{N}^d$ define $i_{\underline{g}}^{(\alpha)}:G\to k,\underline{g}^{\beta}\mapsto\binom{\beta}{\alpha}$. Then $i_{\underline{g}}^{(\alpha)}\in C^{\infty}$, so let $\partial_{\underline{g}}^{(\alpha)}:=\gamma(i_{\underline{g}}^{(\alpha)})$ – the *$\alpha$-quantized divided power* with respect to $\underline{g}$.
Also, for each $i=1,\cdots,d$, we define $\partial_i:=\partial_{\underline{g}}^{(e_i)}$, where $e_i$ is the standard $i$’th basis vector; these are $k$-linear derivations of $kG$.
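To make this concrete in the simplest case (here we assume that the action $\gamma$ is given on group elements by $\gamma(f)(g')=f(g')g'$): take $G=\mathbb{Z}_p$ with ordered basis $\{g\}$ and $b=g-1$. Then $\partial(g^{\alpha})=\binom{\alpha}{1}g^{\alpha}=\alpha g^{\alpha}$ for all $\alpha\in\mathbb{Z}_p$, so on $kG=k[[b]]$ we have
$$\partial=(1+b)\frac{d}{db},\qquad\text{e.g.}\quad \partial(b)=g=1+b,\quad\partial(b^2)=2b+2b^2.$$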
\[Frattini\]
Suppose that $I$ is a right ideal of $kG$ and $\partial_j(I)\subseteq I$ for some $j\in\{1,\cdots,d\}$. Then $I$ is controlled by a proper open subgroup of $G$.
Recall from [@nilpotent Lemma 7.13] that if $V$ is an open normal subgroup of $G$ with ordered basis
$\{g_1,\cdots,g_{s-1},g_s^p,\cdots,g_r^p,g_{r+1},\cdots,g_d\}$ for some $1\leq s\leq r\leq d$, then for each $g\in G$, $\gamma(\delta_{Vg})$ can be expressed as a polynomial in $\partial_s,\cdots,\partial_r$.
So it follows from Proposition \[control\] that if $\partial_i(I)\subseteq I$ for all $i=s,\cdots,r$, then $I$ is controlled by $V$.\
Since we know that $\partial_j(I)\subseteq I$, it remains to show that we can find a proper, open normal subgroup $U$ of $G$ with ordered basis $\{g_1,\cdots,g_{j-1},g_j^p,g_{j+1},\cdots,g_d\}$, and it will follow that $U$ controls $I$.\
**Notation.** Given $d$ variables $x_1,\cdots,x_d$, we will write $\underline{x}$ to denote the set $\{x_1,\cdots,x_d\}$, $\underline{x}_{j,p}$ to denote the same set, but with $x_j$ replaced by $x_j^p$, and $\underline{x}_{j}$ to denote the set with $x_j$ removed altogether. We write $\underline{x}^{\alpha}$ to denote $x_1^{\alpha_1}\cdots x_d^{\alpha_d}$.\
Let $U$ be the subgroup of $G$ generated topologically by the set $\underline{g}_{j,p}$. It is clear that this subgroup contains $G^p$, and hence it is open in $G$. Let us suppose, for contradiction, that it contains an element $u=\underline{g}^{\alpha}$ where $\alpha\in\mathbb{Z}_p^d$ and $p\nmid\alpha_j$.\
Recall from [@DDMS Definition 1.8] that the *Frattini subgroup* $\phi(G)$ of $G$ is defined as the intersection of all maximal open subgroups of $G$, and since $G$ is a pro-$p$ group, it follows from [@DDMS Proposition 1.13] that $\phi(G)$ contains $G^p$ and $[G,G]$.
Hence $\phi(G)$ is an open normal subgroup of $G$, and $\frac{G}{\phi(G)}$ is abelian.\
Since $\underline{g}_{j,p}$ generates $U$, it is clear that $\underline{g\phi(G)}_{j,p}$ generates $\frac{U\phi(G)}{\phi(G)}$. Therefore $u\phi(G)=\underline{g\phi(G)}_{j,p}^{\beta}$ for some $\beta\in\mathbb{Z}_p^d$.
But we know that $u=\underline{g}^{\alpha}$, so $u\phi(G)=\underline{g\phi(G)}^{\alpha}$. So since $\frac{G}{\phi(G)}$ is abelian, it follows that $g_j^{\alpha_j-p\beta_j}\phi(G)=\underline{g\phi(G)}_j^{\gamma}$ for some $\gamma\in\mathbb{Z}_p^d$.\
But since $p\nmid\alpha_j$, $\alpha_j-p\beta_j$ is a $p$-adic unit, and hence $g_j\phi(G)=\underline{g\phi(G)}_j^{\delta}$, where $\delta_i=\gamma_i(\alpha_j-p\beta_j)^{-1}$. Therefore $\frac{G}{\phi(G)}$ is generated by $\underline{g\phi(G)}_j=\frac{\underline{g}_j\phi(G)}{\phi(G)}$.\
It follows from [@DDMS Proposition 1.9] that $G$ is generated by $\underline{g}_j$, which has size $d-1$, and this is a contradiction since the rank $d$ of $G$ is the minimal cardinality of a generating set.\
Therefore, every $u\in U$ has the form $\underline{g}_{j,p}^{\beta}$ for some $\beta\in\mathbb{Z}_p^d$, i.e $\{g_1,\cdots,g_j^p,\cdots,g_d\}$ is an ordered basis for $U$.\
Finally, $U$ is maximal: any open subgroup strictly containing $U$ contains an element $\underline{g}^{\alpha}$ with $p\nmid\alpha_j$, so by the argument above it generates $G$ modulo $\phi(G)$, and hence equals $G$ by [@DDMS Proposition 1.9]. Therefore $U$ contains $\phi(G)\supseteq [G,G]$, and hence it is normal in $G$ as required.
Now, given $\varphi\in Inn(G)$, clearly $\varphi$ extends to a $k$-linear endomorphism of $kG$, and using Mahler’s theorem, we can express $\varphi^{p^m}$ as:
$$\label{Mahler1}
\varphi^{p^m}=\sum_{\alpha\in\mathbb{N}^d}{\big\langle \varphi^{p^m},\partial_{\underline{g}}^{(\alpha)}\big\rangle\partial_{\underline{g}}^{(\alpha)}}=id+c_{1,m}\partial_1+\cdots+c_{d,m}\partial_d+\cdots$$
Where $\big\langle \varphi^{p^m},\partial_{\underline{g}}^{(\alpha)}\big\rangle\in kG$ is the $\alpha$-*Mahler coefficient* of $\varphi^{p^m}$ (see [@nilpotent Corollary 6.6] for full details).\
Given $P\in Spec^f(kG)$, our general approach is to choose $\varphi\neq id$, and use analysis of (\[Mahler1\]) to obtain a sequence of endomorphisms preserving $P$, converging pointwise in $m$ to an expression involving only $\partial_1,\cdots,\partial_d$, which will imply a control theorem using Proposition \[Frattini\].
However, it is more convenient for us to reduce modulo $P$, whence we can pass to the ring of quotients $Q(\frac{kG}{P})$ and divide out by anything regular mod $P$.\
So let $\tau:kG\to Q(\frac{kG}{P})$ be the natural map, then inside End$_k(kG,Q(\frac{kG}{P}))$ our expression becomes:
$$\label{Mahler2}
\tau\varphi^{p^m}-\tau=\sum_{\alpha\in\mathbb{N}^d}{\tau(\big\langle \varphi^{p^m},\partial_{\underline{g}}^{(\alpha)}\big\rangle)\tau\partial_{\underline{g}}^{(\alpha)}}$$
Since we want to analyse convergence of this expression as $m\rightarrow\infty$, we need to define a certain well behaved filtration $v$ on $Q(\frac{kG}{P})$, which we call a *non-commutative valuation*. We will describe the construction of $v$ in the next section.\
In our case, we take $G$ to be a non-abelian, $p$-valuable, abelian-by-procyclic group, with principal subgroup $H$, procyclic element $X$, and we take $\varphi$ to be conjugation by $X$, which is a non-trivial inner automorphism.\
It is clear that $\varphi|_H$ is trivial modulo centre since $H$ is abelian. Therefore it follows from the proof of [@nilpotent Lemma 6.7] that if $\underline{g}=\{h_1,\cdots,h_d,X\}$ is an ordered basis for $(G,\omega)$, where $\{h_1,\cdots,h_d\}$ is some ordered basis for $H$, then for any $m\in\mathbb{N}$, $\alpha\in\mathbb{N}^{d+1}$, we have:
$\big\langle \varphi^{p^m},\partial_{\underline{g}}^{(\alpha)}\big\rangle=(\varphi^{p^m}(h_1)h_1^{-1}-1)^{\alpha_1}\cdots(\varphi^{p^m}(h_d)h_d^{-1}-1)^{\alpha_d}(\varphi^{p^m}(X)X^{-1}-1)^{\alpha_{d+1}}$
And since $\varphi(X)=X$, it is clear that this is 0 if $\alpha_{d+1}\neq 0$. Hence we may assume that $\alpha\in\mathbb{N}^d$, and:
$\big\langle \varphi^{p^m},\partial_{\underline{g}}^{(\alpha)}\big\rangle=(\varphi^{p^m}(h_1)h_1^{-1}-1)^{\alpha_1}\cdots(\varphi^{p^m}(h_d)h_d^{-1}-1)^{\alpha_d}$
Recall our function $u:H\to H$ defined in section 2.1, we can now use $u$ to approximate the Mahler coefficients inside $Q(\frac{kG}{P})$. Setting $q_i=\tau(u(h_i)-1)$ for $i=1,...,d$, it follows from the proof of [@nilpotent Proposition 7.7] that we can derive the following expression from (\[Mahler2\]):
$$\label{Mahler3}
\tau\varphi^{p^m}-\tau=q_1^{p^m}\tau\partial_1+....+q_{d}^{p^m}\tau\partial_{d}+\sum_{\vert\alpha\vert\geq 2}{\underline{q}^{\alpha p^m}\tau\partial_{\underline{g}}^{(\alpha)}}+\varepsilon_m$$
Where $\underline{q}^{\alpha p^m}=q_1^{\alpha_1p^m}\cdots q_d^{\alpha_dp^m}$, $\varepsilon_m=\underset{\alpha\in\mathbb{N}^d}{\sum}{(\big\langle \varphi^{p^m},\partial_{\underline{g}}^{(\alpha)}\big\rangle-\underline{q}^{\alpha p^m})\tau\partial_{\underline{g}}^{(\alpha)}}\in$ End$_k(kG,Q)$, and there exists $t\in\mathbb{N}$ such that $v(\varepsilon_m(r))>p^{2m-t}$ for all $r\in kG$.\
Since $\varphi(P)=P$ it is clear that the left hand side of this expression annihilates $P$. So take any $y\in P$, apply both sides of (\[Mahler3\]) to $y$, and we obtain:
$$\label{Mahler}
0=q_1^{p^m}\tau\partial_1(y)+....+q_{d}^{p^m}\tau\partial_{d}(y)+O(q^{p^m})$$
Where $q\in Q$ with $v(q^{p^m})\geq \underset{i\leq d}{\min}\{2v(q_i^{p^m})\}$ for all $m$.\
Furthermore, let $f(x)=a_0x+a_1x^p+a_2x^{p^2}+\cdots+a_nx^{p^n}$ be a polynomial, where $a_i\in\tau(kH)$ for each $i$.\
Then for each $m\in\mathbb{N}$, $i=0,\cdots,n$, consider expression (\[Mahler\]) above, with $m$ replaced by $m+i$, and multiply by $a_i^{p^m}$ to obtain:
$0=(a_iq_1^{p^i})^{p^m}\tau\partial_1(y)+....+(a_iq_{d}^{p^i})^{p^m}\tau\partial_{d}(y)+O((a_iq^{p^i})^{p^m})$
Sum all these expressions as $i$ ranges from $0$ to $n$ to obtain:
$$\label{MahlerPol}
0=f(q_1)^{p^m}\tau\partial_1(y)+....+f(q_{d})^{p^m}\tau\partial_{d}(y)+O(f(q)^{p^m})$$
In the next section, we will see how after ensuring certain conditions on $f$, we can use this expression to deduce a control theorem.
Non-commutative Valuations
==========================
In this section, we fix $Q$ a simple Artinian ring. First, recall from [@nilpotent] the definition of a non-commutative valuation.
A *non-commutative valuation* on $Q$ is a Zariskian filtration $v:Q\to\mathbb{Z}\cup\{\infty\}$ such that if $\widehat{Q}$ is the completion of $Q$ with respect to $v$, then $\widehat{Q}\cong M_k(Q(D))$ for some complete non-commutative DVR $D$, and $v$ is induced by the natural $J(D)$-adic filtration.
It follows from this definition that if $q\in Q$ with $v(q)\geq 0$, then $q$ is $v$-regular if and only if $q$ is normal in $F_0\widehat{Q}$.\
We want to construct a non-commutative valuation $v$ on $Q(\frac{kG}{P})$ which we can use to analyse our Mahler expansion (\[MahlerPol\]).
Construction
------------
Recall that [@nilpotent Theorem C] gives that if $R$ is a prime ring carrying a Zariskian filtration such that gr $R$ is commutative, then we can construct a non-commutative valuation on $Q(R)$.\
The main theorem of this section generalises this result.\
Let $R$ be a prime ring with a positive Zariskian filtration $w:R\to\mathbb{N}\cup\{\infty\}$ such that gr$_w R$ is finitely generated over a central, graded, Noetherian subring $A$. We will assume that the positive part $A_{>0}$ of $A$ is not nilpotent, and hence we may fix a minimal prime ideal $\mathfrak{q}$ of $A$ with $\mathfrak{q}\not\supseteq A_{> 0}$. Define:
$T=\{X\in A\backslash\mathfrak{q}: X$ is homogeneous $\}$.
Then $T$ is central, and hence localisable in gr $R$, and the left and right localisations agree.
\[homogeneous localsiation\]
Let $\mathfrak{q}':=T^{-1}\mathfrak{q}$, then $\mathfrak{q}'$ is a nilpotent ideal of $T^{-1}A$ and:\
i. There exists $Z\in T$, homogeneous of positive degree, such that $\frac{T^{-1}A}{\mathfrak{q}'}\cong (\frac{T^{-1}A}{\mathfrak{q}'})_0[\bar{Z},\bar{Z}^{-1}]$, where $\bar{Z}:=Z+\mathfrak{q}'$.\
ii. The quotient $\frac{(T^{-1}A)_{\geq 0}}{Z(T^{-1}A)_{\geq 0}}$ is Artinian, and $T^{-1}A$ is gr-Artinian, i.e. every descending chain of graded ideals terminates.
Since $A$ is a graded, commutative, Noetherian ring, this is identical to the proof of [@nilpotent Proposition 3.2].
Since gr $R$ is finitely generated over $A$, it follows that $T^{-1}$gr $R$ is finitely generated over $T^{-1}A$. So using this lemma, we see that $T^{-1}$gr $R$ is gr-Artinian.\
Let $S:=\{r\in R: $ gr$(r)\in T\}$, then since $w$ is Zariskian, $S$ is localisable by [@Ore; @sets Corollary 2.2], and $S^{-1}R$ carries a Zariskian filtration $w'$ such that gr$_{w'}$ $S^{-1}R\cong T^{-1}$gr $R$, and if $r\in R$ then $w'(r)\geq w(r)$, and equality holds if $r\in S$.
Furthermore, $w'$ satisfies $w'(s^{-1}r)=w'(r)-w(s)$ for all $r\in R$, $s\in S$.\
Now, since $R$ is prime, the proof of [@nilpotent Lemma 3.3] shows that $S^{-1}R=Q(R)$, so let $Q'$ be the completion of $Q(R)$ with respect to $w'$.\
Let $U:=F_0Q'$, which is Noetherian by [@LVO Ch. II, Lemma 2.1.4]. Then gr$_{w'}$ $U\cong (T^{-1}$gr $R)_{\geq 0}$, and since gr $Q'=T^{-1}$gr $R$ is gr-Artinian, $Q'$ is Artinian.
\[microlocalisation\]
There exists a regular, normal element $z\in J(U)\cap Q'^{\times}$ such that $U$ has Krull dimension at most 1 on both sides, and for all $n\in\mathbb{Z}$, $F_{nw'(z)}Q'=z^nU$, hence the $z$-adic filtration on $Q'$ is topologically equivalent to $w'$.
Recall the element $Z\in T^{-1}A$ from Lemma \[homogeneous localsiation\]($i$), then we can choose an element $z\in U$ such that gr$_{w'}(z)=Z$. Since $Z$ is a unit in $T^{-1}A$, we can in fact choose $z$ to be regular and normal in $U$. Then since $w'$ is Zariskian and $Z$ has positive degree, $z\in F_1Q'\subseteq J(U)$.\
Furthermore, since $\bar{Z}=Z+\mathfrak{q}'$ is a unit in $\frac{T^{-1}A}{\mathfrak{q}'}$ and $\mathfrak{q}'$ is nilpotent, it follows that $Z$ is a unit in $T^{-1}A$, and hence in $T^{-1}$gr $R=$ gr $Q'$.
So it follows that $z$ is not a zero divisor in $Q'$, and hence is a unit since $Q'$ is artinian. Also, for all $u\in Q'$, $w'(zuz^{-1})=w'(u)$ since $Z$ is central in gr $Q'$, hence $z$ is normal in $U$.\
Since $(T^{-1}$gr $R)_{\geq 0}$ is finitely generated over $(T^{-1}A)_{\geq 0}$, it follows that $\frac{\text{gr }U}{Z \text{gr }U}$ is finitely generated over the image of $\frac{(T^{-1}A)_{\geq 0}}{Z(T^{-1}A)_{\geq 0}}\to \frac{\text{gr }U}{Z \text{gr }U}$.
This image is gr-Artinian by Lemma \[homogeneous localsiation\]($ii$), and hence $\frac{\text{gr }U}{Z \text{gr }U}$ is also gr-Artinian.\
Therefore $\frac{U}{zU}$ is Artinian, and the proof of [@nilpotent Proposition 3.4] gives us that $U$ has Krull dimension at most 1 on both sides, and that $F_{nw'(z)} Q'=z^{n} U$ for all $n\in\mathbb{Z}$.
So pass to a simple quotient $\widehat{Q}$ of $Q'$, and let $V:=\widehat{Q}_{\geq 0}$ be the image of $U$ in $\widehat{Q}$. Since $Q(R)$ is simple, it follows that the map $Q(R)\to\widehat{Q}$ is injective, and the image is dense with respect to the quotient filtration.\
Now, choose a maximal order $\mathcal{O}$ in $\widehat{Q}$, which is equivalent to $V$ in the sense of [@McConnell Definition 1.9]. Such an order exists by [@nilpotent Theorem 3.11], and it is Noetherian.
Furthermore, let $z\in J(U)$ be the regular, normal element from Lemma \[microlocalisation\], and let $\overline{z}\in J(V)$ be the image of $z$ in $V$, then $\mathcal{O}\subseteq \overline{z}^{-r} V$ for some $r\in\mathbb{N}$ by [@nilpotent Proposition 3.7].\
It follows from [@nilpotent Theorem 3.6] that $\mathcal{O}\cong M_n(D)$ for some complete non-commutative DVR $D$, and hence $\widehat{Q}\cong M_n(Q(D))$. So let $v$ be the $J(\mathcal{O})$-adic filtration, i.e. the filtration induced from the valuation on $D$. Then $v$ is topologically equivalent to the $\overline{z}$-adic filtration on $\widehat{Q}$.\
It is clear from the definition that the restriction of $v$ to $Q(R)$ is a non-commutative valuation, and the proof of [@nilpotent Theorem C] shows that $(R,w)\to (Q(R),v)$ is continuous.\
Note that our construction depends on a choice of minimal prime ideal $\mathfrak{q}$ of $A$. So altogether, we have proved the following:
\[filtration\]
Let $R$ be a prime ring with a Zariskian filtration $w:R\to\mathbb{N}\cup\{\infty\}$ such that gr$_w$ $R$ is finitely generated over a central, graded, Noetherian subring $A$, and the positive part $A_{> 0}$ of $A$ is not nilpotent.\
Then for every minimal prime ideal $\mathfrak{q}$ of $A$ with $\mathfrak{q}\not\supseteq A_{> 0}$, there exists a corresponding non-commutative valuation $v_{\mathfrak{q}}$ on $Q(R)$ such that the inclusion $(R,w)\to (Q(R),v_{\mathfrak{q}})$ is continuous.
In particular, if $P$ is a prime ideal of $kG$, then $R=\frac{kG}{P}$ carries a natural Zariskian filtration, given by the quotient of the Lazard filtration on $kG$, and gr $R\cong\frac{\text{gr }kG}{\text{gr }P}$ is commutative, and if $P\neq J(kG)$ then $($gr $R)_{\geq 0}$ is not nilpotent by [@nilpotent Lemma 7.2].
Hence we may apply Theorem \[filtration\] to obtain a non-commutative valuation $v$ on $Q(\frac{kG}{P})$ such that the natural map $\tau:(kG,w)\to(Q(\frac{kG}{P}),v)$ is continuous.
Properties
----------
We will now explore some important properties of the non-commutative valuation $v_{\mathfrak{q}}$ on $Q(R)$ that we have constructed.\
So again, we have that gr $R$ is finitely generated over $A$, and $\mathfrak{q}$ is a minimal prime ideal of $A$, not containing $A_{>0}$. Recall first the data that we used in the construction of $v_{\mathfrak{q}}$:
- $w'$ – a Zariskian filtration on $Q(R)$ such that $w'(r)\geq w(r)$ for all $r\in R$, with equality if gr$_w(r)\in A\backslash\mathfrak{q}$. Moreover, if gr$_w(r)\in A\backslash\mathfrak{q}$ then $r$ is $w'$-regular.
- $Q'$ – the completion of $Q(R)$ with respect to $w'$.
- $U$ – the positive part of $Q'$, a Noetherian ring.
- $z$ – a regular, normal element of $J(U)$ such that $z^nU=F_{nw'(z)}Q'$ for all $n\in\mathbb{Z}$.
- $v_{z,U}$ – the $z$-adic filtration on $Q'$, topologically equivalent to $w'$.
- $\widehat{Q}$ – a simple quotient of $Q'$.
- $V$ – the positive part of $\widehat{Q}$, which is the image of $U$ in $\widehat{Q}$.
- $\overline{z}$ – the image of $z$ in $V$.
- $v_{\overline{z},V}$ – the $\overline{z}$-adic filtration on $\widehat{Q}$, topologically equivalent to the quotient filtration.
- $\mathcal{O}$ – a maximal order in $\widehat{Q}$, equivalent to $V$, satisfying $\mathcal{O}\subseteq \overline{z}^{-r}V$ for some $r\geq 0$.
- $v_{\overline{z},\mathcal{O}}$ – the $\overline{z}$-adic filtration on $\mathcal{O}$.
- $v_{\mathfrak{q}}$ – the $J(\mathcal{O})$-adic filtration on $\widehat{Q}$, topologically equivalent to $v_{\overline{z},\mathcal{O}}$.
From now on, we will assume further that $R$ is an $\mathbb{F}_p$-algebra.
\[normal\]
Given $r\in R$ such that gr$(r)\in A\backslash\mathfrak{q}$, we have:\
i. $r$ is normal in $U$, a unit in $Q'$ and for any $u\in U$, $w'(rur^{-1}-u)>w'(u)$.\
ii. $v_{\overline{z},V}(r)= v_{z,U}(r)$.
$i$. Since $r\in S=\{s\in R:$ gr$(s)\in A\backslash\mathfrak{q}\}$ and $Q(R)=S^{-1}R$, $r$ is a unit in $Q'$, and we know that $w'(r)=w(r)$. Given $u\in U$, we want to prove that $r^{-1}ur\in U$ (the argument for $rur^{-1}$ is symmetric), thus showing that $r$ is normal in $U$.\
We know that $U=F_0Q'$ is the completion of the positive part $F_0Q(R)$ of $Q(R)$ by definition, and we may assume that $u$ lies in $Q(R)$, i.e. $u=s^{-1}t$ for some $s\in S$, $t\in R$, and $w'(u)=w'(t)-w(s)\geq 0$.\
But gr$(r),$ gr$(s)\notin\mathfrak{q}$, and hence gr$(r)$gr$(s)\neq 0$, which means that $w(rs)=w(r)+w(s)$. Therefore $w'(r^{-1}ur)=w'((sr)^{-1}tr)=w'(tr)-w(rs)\geq w'(t)+w'(r)-w(r)-w(s)=w'(t)-w'(s)\geq 0$, and so $r^{-1}ur\in U$ as required.\
Furthermore, since gr$(r)\in A$ is central in $gr$ $R$, $w'(ru-ur)>w'(u)+w'(r)$, and thus $w'(rur^{-1}-u)=w'((ru-ur)r^{-1})\geq w'(ru-ur)-w'(r)>w'(u)+w'(r)-w'(r)=w'(u)$.\
$ii$. Let $t:=v_{z,U}(r)$.\
So $r\in z^tU\backslash z^{t+1}U=F_{tw'(z)}Q'\backslash F_{(t+1)w'(z)}Q'$, and hence $w'(r)=tw'(z)+j$ for some $0\leq j<w'(z)$.
Since gr$_{w'}(r)\in A\backslash\mathfrak{q}$, we have that $w'(r^{-1})=-w'(r)=-tw'(z)-j$.\
Let $\overline{r}$ be the image of $r$ in $\widehat{Q}$. Then since $r\in z^tU$, it is clear that $\overline{r}\in\overline{z}^tV$, hence $v_{\overline{z},V}(r)\geq t$, so it remains to prove that $v_{\overline{z},V}(r)\leq t$.\
Suppose that $\overline{r}\in\overline{z}^{t+1}V$, i.e. $r-z^{t+1}u$ maps to zero in $\widehat{Q}$ for some $u\in U$, and hence $z^{-t}r-zu=z^{-t}(r-z^{t+1}u)$ also maps to zero.\
Let $a=z^{-t}r$, $b=-zu$. Then $w'(b)\geq w'(u)+w'(z)\geq w'(z)$, $w'(a^{-1})=w'(r^{-1})+tw'(z)=-tw'(z)-j+tw'(z)=-j$, so $w'(a^{-1}b)\geq w'(z)-j>w'(z)-w'(z)=0$, and therefore $(a^{-1}b)^n\rightarrow 0$ as $n\rightarrow\infty$.\
So by completeness of $Q'$, the series $\underset{n\geq 0}{\sum}{(-1)^{n}(a^{-1}b)^na^{-1}}$ converges in $Q'$, and the limit is the inverse of $a+b$, hence $a+b=z^{-t}r-zu$ is a unit in $Q'$.\
Therefore a unit in $Q'$ maps to zero in $\widehat{Q}$ – contradiction.\
Hence $\overline{r}\notin\overline{z}^{t+1}V$, so $v_{\overline{z},V}(r)\leq t$ as required.
\[regular\]
Let $u\in U$ be regular and normal, then $u$ is a unit in $Q'$. Furthermore, if $w'(uau^{-1}-a)>w'(a)$ for all $a\in Q'$, then setting $\overline{u}$ as the image of $u$ in $V$, we have that for sufficiently high $m\in\mathbb{N}$, $\overline{u}^{p^m}$ is $v_{\mathfrak{q}}$-regular.
Since $u$ is regular in $U$, it is not a zero divisor, so it follows that $u$ is not a zero divisor in $Q'$, and hence a unit since $Q'$ is artinian.\
Since $u$ is normal in $U$, i.e. $uU=Uu$, it follows that $\overline{u}V=V\overline{u}$, so $\overline{u}$ is normal in $V$. We want to prove that for $m$ sufficiently high, $\overline{u}^{p^m}$ is normal in $\mathcal{O}=F_0\widehat{Q}$, and it will follow that it is $v_{\mathfrak{q}}$-regular.\
We know that $w'(uau^{-1}-a)>w'(a)$ for all $a\in Q'$, so let $\theta:Q'\to Q'$ be the conjugation action of $u$, then $(\theta-id)(F_nQ')\subseteq F_{n+1}Q'$ for all $n\in\mathbb{Z}$.
Therefore, for all $k\in\mathbb{N}$, $(\theta-id)^k(F_nQ')\subseteq F_{n+k}Q'$.\
Since $Q'$ is an $\mathbb{F}_p$-algebra, it follows that $(\theta^{p^m}-id)(F_nQ')=(\theta-id)^{p^m}(F_nQ')\subseteq F_{n+p^m}Q'$, and clearly $\theta^{p^m}$ is conjugation by $\overline{u}^{p^m}$.\
So fix $k\in\mathbb{N}$ such that $p^k\geq w'(z)$. Then we know that $z^nU=F_{nw'(z)}Q'$, so $(\theta^{p^k}-id)(z^nU)\subseteq F_{nw'(z)+p^k}Q'\subseteq F_{(n+1)w'(z)}Q'=z^{n+1}U$.\
Hence we have that for all $a\in Q'$, $v_{z,U}(u^{p^k}au^{-p^k}-a)>v_{z,U}(a)$, and it follows immediately that $v_{\overline{z},V}(\overline{u}^{p^k}a\overline{u}^{-p^k}-a)>v_{\overline{z},V}(a)$ for all $a\in\widehat{Q}$.\
For convenience, let $v:=\overline{u}^{p^k}\in V$. We know that $v_{\overline{z},V}(vav^{-1}-a)>v_{\overline{z},V}(a)$ for all $a\in\widehat{Q}$, and we want to prove that $v^{p^m}$ is normal in $\mathcal{O}$ for $m$ sufficiently high.\
Let $I=\{x\in V:qx\in V$ for all $q\in \mathcal{O}\}$, then $I$ is a two-sided ideal of $V$, and since $\mathcal{O}\subseteq \overline{z}^{-r}V$, we have that $\overline{z}^rV\subseteq I$.\
Let $\psi:\widehat{Q}\to\widehat{Q}$ be conjugation by $v$, then we know that $(\psi-id)(\overline{z}^nV)\subseteq \overline{z}^{n+1}V$, and hence $(\psi-id)^s(V)\subseteq \overline{z}^sV$ for all $s$.
Choose $m\in\mathbb{N}$ such that $p^m\geq r$, then $(\psi^{p^m}-id)(V)=(\psi-id)^{p^m}(V)\subseteq \overline{z}^{p^m}V\subseteq \overline{z}^rV\subseteq I$.\
Therefore, for all $a\in V$, $v^{p^m}av^{-p^m}-a\in I$, and in particular, for all $a\in I$, $v^{p^m}av^{-p^m}\in I$, so $v^{p^m}Iv^{-p^m}\subseteq I$.
So set $b:=v^{p^m}$, then it follows from Noetherianity of $V$ that $bIb^{-1}=I$.\
Finally, consider the subring $\mathcal{O}':=b^{-1}\mathcal{O}b$ of $\widehat{Q}$ containing $V$, then since $\mathcal{O}$ is a maximal order equivalent to $V$, it follows immediately that $\mathcal{O}'$ is equivalent to $V$, and that $\mathcal{O}'$ is also maximal.\
Given $c\in \mathcal{O}'$, $c=b^{-1}qb$ for some $q\in \mathcal{O}$. So given $x\in I$, $cx=b^{-1}qbx=b^{-1}qbxb^{-1}b\in b^{-1}Ib=I$, so $c\in\mathcal{O}_l(I)=\{q\in\widehat{Q}:qI\subseteq I\}$, and hence $\mathcal{O}'\subseteq \mathcal{O}_l(I)$.\
But $\mathcal{O}_l(I)$ is a maximal order in $\widehat{Q}$, equivalent to $V$ by [@McConnell Lemma 1.12], and this order contains $\mathcal{O}$ by the definition of $I$.
So since $\mathcal{O}$ and $\mathcal{O}'$ are maximal orders and are both contained in $\mathcal{O}_l(I)$, it follows that $\mathcal{O}_l(I)=\mathcal{O}=\mathcal{O}'=b^{-1}\mathcal{O}b$, and hence $b\mathcal{O}b^{-1}=\mathcal{O}$.\
Therefore $b=v^{p^m}=\overline{u}^{p^{m+k}}$ is normal in $\mathcal{O}$ as required.
In particular, it is clear that $z\in U$ satisfies the property that $w'(zaz^{-1}-a)>w'(a)$ for all $a\in Q'$, thus $\overline{z}^{p^m}$ is normal in $\mathcal{O}$ for large $m$.\
The next result will be very useful to us later when we want to compare values of elements in $Q(\frac{kG}{P})$ based on their values in $kG$.
\[comparison\]
Given $r\in R$ such that gr$_w(r)\in A\backslash\mathfrak{q}$, there exists $m\in\mathbb{N}$ such that $r^{p^m}$ is $v_{\mathfrak{q}}$-regular inside $\widehat{Q}$. Also, if $s\in R$ with $w(s)>w(r)$ then for sufficiently high $m$, $v_{\mathfrak{q}}(s^{p^m})>v_{\mathfrak{q}}(r^{p^m})$.
Moreover, if $w(s)=w(r)$ and gr$_w(s)\in\mathfrak{q}$ then we also have that $v_{\mathfrak{q}}(s^{p^m})>v_{\mathfrak{q}}(r^{p^m})$ for sufficiently high $m$.
Since gr$_w(r)\in A\backslash\mathfrak{q}$, it follows from Lemma \[normal\]($i$) that $r$ is normal and regular in $U$, and $w'(rur^{-1}-u)>w'(u)$ for all $u\in U$. So for $m\in\mathbb{N}$ sufficiently high, $r^{p^m}$ is $v_{\mathfrak{q}}$-regular by Proposition \[regular\].\
Note that since gr$_w(r)\in A\backslash\mathfrak{q}$, we have that $w'(r)=w(r)$. In fact, since gr$_w(r)$ is not nilpotent, we actually have that $w'(r^n)=w(r^n)=nw(r)$ for all $n\in\mathbb{N}$. So if $w(s)>w(r)$, then for any $n$, $w'(s^n)\geq nw(s)>nw(r)=w'(r^n)$.
Moreover, if $w(s)=w(r)$ and gr$_w(s)\in\mathfrak{q}$, then since $\mathfrak{q}'=T^{-1}\mathfrak{q}$ is nilpotent by Lemma \[homogeneous localsiation\], it follows that for $n$ sufficiently high, $w'(s^n)>nw'(s)$, and hence $w'(s^n)> nw(s)=nw(r)=w(r^n)=w'(r^n)$.\
So, in either case, after replacing $r$ and $s$ by high $p$’th powers of $r$ and $s$ if necessary, we may assume that $w'(s)>w'(r)$, i.e. $w'(s)\geq w'(r)+1$.\
It follows that for every $K>0$, we can find $m\in\mathbb{N}$ such that $w'(s^{p^m})\geq w'(r^{p^m})+K$. First we will prove the same result for $v_{z,U}$:\
Given $K>0$, let $N=w'(z)(K+1)$, so that $K=\frac{1}{w'(z)}N-1$, then choose $m$ such that $w'(s^{p^m})\geq w'(r^{p^m})+N$, and let $l:=v_{z,U}(s^{p^m})$, $t:=v_{z,U}(r^{p^m})$.\
So $s^{p^m}\in z^lU\backslash z^{l+1}U=F_{lw'(z)}Q'\backslash F_{(l+1)w'(z)}Q'$, and $r^{p^m}\in z^tU\backslash z^{t+1}U=F_{tw'(z)}Q'\backslash F_{(t+1)w'(z)}Q'$.\
Hence $(l+1)w'(z)\geq w'(s^{p^m})\geq lw'(z)$ and $(t+1)w'(z)\geq w'(r^{p^m})\geq tw'(z)$.\
Therefore, $v_{z,U}(s^{p^m})=l=\frac{1}{w'(z)}((l+1)w'(z))-1\geq\frac{1}{w'(z)}w'(s^{p^m})-1$\
$\geq\frac{1}{w'(z)}(w'(r^{p^m})+N)-1\geq\frac{1}{w'(z)}(tw'(z)+N)-1=t+\frac{1}{w'(z)}N-1=t+K=v_{z,U}(r^{p^m})+K$ as required.\
Now, since gr$_w(r)\in A\backslash\mathfrak{q}$, we have that $v_{\overline{z},V}(r^{p^m})=v_{z,U}(r^{p^m})$ for all $m$ by Lemma \[normal\]($ii$).\
Therefore, since $v_{\overline{z},V}(s^{p^m})\geq v_{z,U}(s^{p^m})$ for all $m$, it follows that for every $K>0$, there exists $m\in\mathbb{N}$ such that $v_{\overline{z},V}(s^{p^m})\geq v_{\overline{z},V}(r^{p^m})+K$.\
Now we will consider $v_{\overline{z},\mathcal{O}}$, the $\overline{z}$-adic filtration on $\widehat{Q}$.\
Recall that $V\subseteq\mathcal{O}\subseteq\overline{z}^{-r}V$, and thus $\overline{z}^nV\subseteq \overline{z}^n\mathcal{O}\subseteq \overline{z}^{n-r}V$ for all $n$. Hence
$v_{\overline{z},V}(v)\leq v_{\overline{z},\mathcal{O}}(v)\leq v_{\overline{z},V}(v)+r$ for all $v\in V$.\
For any $K>0$, choose $m$ such that $v_{\overline{z},V}(s^{p^m})\geq v_{\overline{z},V}(r^{p^m})+K+r$. Then:\
$v_{\overline{z},\mathcal{O}}(s^{p^m})\geq v_{\overline{z},V}(s^{p^m})\geq v_{\overline{z},V}(r^{p^m})+K+r\geq v_{\overline{z},\mathcal{O}}(r^{p^m})+K$.\
Now, using Proposition \[regular\], we know that we can find $k\in\mathbb{N}$ such that $x:=\overline{z}^{p^k}$ is normal in $\mathcal{O}$, i.e. $x\mathcal{O}=\mathcal{O}x$ is a two-sided ideal of $\mathcal{O}$.
Then since $\mathcal{O}\cong M_n(D)$ for some non-commutative DVR $D$, it follows that $x\mathcal{O}=J(\mathcal{O})^a$ for some $a\in\mathbb{N}$, and $x^m\mathcal{O}=J(\mathcal{O})^{am}$ for all $m$.\
So, choose $m\in\mathbb{N}$ such that $r^{p^m}$ is $v_{\mathfrak{q}}$-regular, $v_{\mathfrak{q}}(r^{p^m})\geq a$ and $v_{\overline{z},\mathcal{O}}(s^{p^m})\geq v_{\overline{z},\mathcal{O}}(r^{p^m})+p^k$.\
Then suppose that $v_{\mathfrak{q}}(r^{p^m})=n$, i.e. $r^{p^m}\in J(\mathcal{O})^n\backslash J(\mathcal{O})^{n+1}$ and $n\geq a$.\
We have that $n=qa+t$ for some $q,t\in\mathbb{N}$, $0\leq t<a$, so $q\geq 1$ and $qa\leq n<n+1\leq (q+1)a$. Therefore:\
$r^{p^m}\in J(\mathcal{O})^n\subseteq J(\mathcal{O})^{qa}=x^q\mathcal{O}=\overline{z}^{p^kq}\mathcal{O}$, and so $v_{\overline{z},\mathcal{O}}(r^{p^m})\geq p^kq$.\
Hence $v_{\overline{z},\mathcal{O}}(s^{p^m})\geq v_{\overline{z},\mathcal{O}}(r^{p^m})+p^k\geq p^kq+p^k=p^k(q+1)$, so $s^{p^m}\in \overline{z}^{p^k(q+1)}\mathcal{O}=x^{q+1}\mathcal{O}=J(\mathcal{O})^{a(q+1)}\subseteq J(\mathcal{O})^{n+1}$.\
Therefore $v_{\mathfrak{q}}(s^{p^m})\geq n+1>n=v_{\mathfrak{q}}(r^{p^m})$.\
Furthermore, for all $l\in\mathbb{N}$, $v_{\mathfrak{q}}(s^{p^{m+l}})\geq p^lv_{\mathfrak{q}}(s^{p^m})>p^lv_{\mathfrak{q}}(r^{p^m})=v_{\mathfrak{q}}(r^{p^{m+l}})$ – the last equality holds since $r^{p^m}$ is $v_{\mathfrak{q}}$-regular.
Hence $v_{\mathfrak{q}}(s^{p^n})>v_{\mathfrak{q}}(r^{p^n})$ for all sufficiently high $n$ as required.
Growth Rates
------------
Since we are usually only interested in convergence, it makes sense to consider the growth of the values of elements as they are raised to high powers.
\[growth rate\]
Let $Q$ be a ring with a filtration $v:Q\to\mathbb{Z}\cup\{\infty\}$. Define $\rho:Q\to\mathbb{R}\cup\{\infty\},x\mapsto\underset{n\rightarrow\infty}{\lim}{\frac{v(x^n)}{n}}$. This is the *growth rate function* of $Q$ with respect to $v$; the proof of [@Bergmann Lemma 1] shows that it is well defined.
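To illustrate how $\rho$ can differ from $v$, here is a simple example (not needed later): give $M_2(k[[Y]])$ the standard matrix filtration extending the $Y$-adic valuation on $k[[Y]]$. Then $a=\left(\begin{array}{cc} 0 & 1\\ 0 & 0\end{array}\right)$ has $v(a)=0$ but $a^2=0$, so $\rho(a)=\infty$, whereas $a=\left(\begin{array}{cc} 0 & 1\\ Y^2 & 0\end{array}\right)$ has $v(a)=0$ and $a^2=Y^2\cdot 1$, so $v(a^n)=2\lfloor n/2\rfloor$ and $\rho(a)=1>v(a)$.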
\[growth properties\]
Let $Q$ be a ring with a filtration $v:Q\to\mathbb{Z}\cup\{\infty\}$, and let $\rho$ be the corresponding growth rate function. Then for all $x,y\in Q$:\
i. $\rho(x^n)=n\rho(x)$ for all $n\in\mathbb{N}$.
ii. If $x$ and $y$ commute then $\rho(x+y)\geq\min\{\rho(x),\rho(y)\}$ and $\rho(xy)\geq\rho(x)+\rho(y)$.
iii. $\rho(x)\geq v(x)$ and $\rho$ is invariant under conjugation.
iv. If $Q$ is simple and artinian and $v$ is separated, then $\rho(x)=\infty$ if and only if $x$ is nilpotent.
v. If $x$ is $v$-regular and commutes with $y$, then $\rho(x)=v(x)$ and $\rho(xy)=v(x)+\rho(y)$.
$i$ and $ii$ are given by the proof of [@Bergmann Lemma 1].\
$iii$. For each $n\in\mathbb{N}$, $\frac{v(x^n)}{n}\geq\frac{nv(x)}{n}=v(x)$, and so $\rho(x)\geq v(x)$.\
Given $u\in Q^{\times}$, $\rho(uxu^{-1})=\underset{n\rightarrow\infty}{\lim}{\frac{v((uxu^{-1})^n)}{n}}=\underset{n\rightarrow\infty}{\lim}{\frac{v(u(x^n)u^{-1})}{n}}\geq\underset{n\rightarrow\infty}{\lim}{\frac{v(u)+v(x^n)+v(u^{-1})}{n}}=\underset{n\rightarrow\infty}{\lim}{\frac{v(x^n)}{n}}+\underset{n\rightarrow\infty}{\lim}{\frac{v(u)}{n}}+\underset{n\rightarrow\infty}{\lim}{\frac{v(u^{-1})}{n}}=\rho(x)$.\
Hence $\rho(x)=\rho(u^{-1}uxu^{-1}u)\geq\rho(uxu^{-1})\geq\rho(x)$ – forcing equality. Therefore $\rho$ is invariant under conjugation.\
$iv$. Clearly if $x$ is nilpotent then $\rho(x)=\infty$.\
First suppose that $x$ is a unit, then for any $y\in Q$, $v(y)=v(x^{-1}xy)\geq v(x^{-1})+v(xy)$, and so $v(xy)\leq v(y)-v(x^{-1})$. It follows using induction that for all $n\in\mathbb{N}$, $v(x^ny)\leq v(y)-nv(x^{-1})$, and hence $\frac{v(x^ny)}{n}\leq \frac{v(y)}{n}-v(x^{-1})$.\
Taking $y=1$, it follows easily that $\rho(x)\leq -v(x^{-1})$, and since $v$ is separated, this is less than $\infty$.\
Now, since $Q$ is simple and artinian, we have that $Q\cong M_l(K)$ for some division ring $K$, $l\in\mathbb{N}$. So applying Fitting’s Lemma [@Fitting section 3.4], we can find a unit $u\in Q^{\times}$ such that $uxu^{-1}$ has standard Fitting block form $\left(\begin{array}{cc} A & 0\\
0 & B\end{array}\right)$, where $A$ and $B$ are square matrices over $K$, possibly empty, $A$ is invertible and $B$ is nilpotent.\
If $x$ is not nilpotent then $uxu^{-1}$ is not nilpotent, and hence $A$ is non-empty. Therefore, $\rho(uxu^{-1})=\rho(A)$, and since $A$ is invertible, $\rho(A)<\infty$. So by part $iii$, $\rho(x)=\rho(uxu^{-1})<\infty$.\
$v$. Since $x$ is $v$-regular, $v(x^n)=nv(x)$ for all $n$, so clearly $\rho(x)=v(x)$. Also, $\rho(xy)=\underset{n\rightarrow\infty}{\lim}{\frac{v((xy)^n)}{n}}=\underset{n\rightarrow\infty}{\lim}{\frac{v(x^ny^n)}{n}}=\underset{n\rightarrow\infty}{\lim}{\frac{v(x^n)+v(y^n)}{n}}=\underset{n\rightarrow\infty}{\lim}{\frac{v(x^n)}{n}}+\underset{n\rightarrow\infty}{\lim}{\frac{v(y^n)}{n}}=\rho(x)+\rho(y)=v(x)+\rho(y)$.
So let $G$ be a non-abelian $p$-valuable, abelian-by-procyclic group with principal subgroup $H$, procyclic element $X$, and let $P$ be a faithful prime ideal of $kG$. Fix $v$ a non-commutative valuation on $Q(\frac{kG}{P})$ such that the natural map $\tau:kG\to Q(\frac{kG}{P})$ is continuous, and let $\rho$ be the growth rate function of $v$.\
Recall the function $u=z(\varphi^{p^{m_1}}):H\to H$ defined in section 2.1, where $\varphi$ is conjugation by $X$. Define $\lambda:=\inf\{\rho(\tau(u(h)-1)):h\in H\}$. Since we know that for all $h\in H$, $u(h)=u_0(h)^{p^{m_1}}$ and $v(\tau(h-1)^{p^m})\geq 1$ for sufficiently high $m$, we may choose $m_1$ such that $v(\tau(u(h)-1))\geq 1$ for all $h\in H$, and hence $\lambda\geq 1$.
\[faithful\]
$\lambda<\infty$ and $\lambda$ is attained as the growth rate of $\tau(u(h)-1)$ for some $h\in H$.
Suppose that $\lambda=\infty$, i.e. $\rho(\tau(u(h)-1))=\infty$ for all $h\in H$.\
By Lemma \[growth properties\]($iv$), it follows that $\tau(u(h)-1)$ is nilpotent for all $h\in H$, and hence $u(h)^{p^m}-1\in P$ for sufficiently high $m$.
So since $P$ is faithful, it follows that $u(h)^{p^m}=u_0(h)^{p^{m_1+m}}=1$, and hence $u_0(h)=1$ by the torsionfree property, and this holds for all $h\in H$.\
But in this case, $1=u_0(h)=\exp([\log(X),\log(h)])$, so $[\log(X),\log(h)]=0$ in $\log(Sat(G))$, and it follows that $\log(Sat(G))$ is abelian, and hence $G$ is abelian – contradiction.\
Therefore $\lambda$ is finite, and it is clear that for any basis $\{h_1,\cdots,h_d\}$ for $H$, $\lambda=\min\{\rho(\tau(u(h_i)-1)):i=1,\cdots,d\}$, hence it is attained.
Growth Preserving Polynomials
-----------------------------
We will now see the first example of proving a control theorem using our Mahler expansions.\
Consider a polynomial of the form $f(t)=a_0t+a_1t^p+\cdots+a_rt^{p^r}$, where $a_i\in\tau(kH)$ and $r\geq 0$; we call $r$ the *$p$-degree* of $f$.
Then $f:\tau(kH)\to\tau(kH)$ is an $\mathbb{F}_p$-linear map.\
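Explicitly (a brief check): $H$ is abelian and $k$ has characteristic $p$, so $\tau(kH)$ is a commutative $\mathbb{F}_p$-algebra, and for $q,q'\in\tau(kH)$, $\mu\in\mathbb{F}_p$ we have
$$f(q+q')=\sum_{i=0}^{r}a_i(q+q')^{p^i}=\sum_{i=0}^{r}a_i(q^{p^i}+q'^{p^i})=f(q)+f(q'),\qquad f(\mu q)=\sum_{i=0}^{r}a_i\mu^{p^i}q^{p^i}=\mu f(q),$$
using $\mu^{p^i}=\mu$ for $\mu\in\mathbb{F}_p$.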
Recall the definition of $\lambda\geq 1$ from section 3.3, which is implicit in the following definition.
\[GPP\]
We say that $f(t)=a_0t+a_1t^p+\cdots+a_rt^{p^r}$ is a *growth preserving polynomial*, or *GPP*, if:\
i. $\rho(f(q))\geq p^{r}\lambda$ for all $q\in\tau(kH)$ with $\rho(q)\geq\lambda$.\
ii. $\rho(f(q))> p^{r}\lambda$ for all $q\in\tau(kH)$ with $\rho(q)>\lambda$.\
We say that a GPP $f$ is *trivial* if for all $q=\tau(u(h)-1)$, $\rho(f(q))>p^r\lambda$.\
Furthermore, $f$ is a *special* GPP if $f$ is not trivial, and for any $q=\tau(u(h)-1)$ with $\rho(f(q))=p^{r}\lambda$, we have that $f(q)^{p^k}$ is $v$-regular for sufficiently high $k$.
**Example.** $f(t)=t$ is clearly a GPP. In general it need not be special, and it is not trivial, because if $\rho(q)=\lambda$ then $\rho(f(q))=\rho(q)=\lambda$.
For any growth preserving polynomial $f(t)$, define $K_f:=\{h\in H:\rho(f(\tau(u(h)-1)))>p^{r}\lambda\}$.
\[sub\]
If $f(t)$ is a GPP, then $K_f$ is an open subgroup of $H$ containing $H^p$. Moreover, $K_f=H$ if and only if $f$ is trivial.
Firstly, it is clear from the definition that $K_f=H$ if and only if $\rho(f(\tau(u(h)-1)))>p^r\lambda$ for all $h\in H$, i.e. if and only if $f$ is trivial.\
Given $h,h'\in K_f$, let $q=\tau(u(h)-1)$, $q'=\tau(u(h')-1)$. Then
$\tau(u(hh')-1)=\tau((u(h)-1)(u(h')-1)+(u(h)-1)+(u(h')-1))=qq'+q+q'$
Therefore $f(\tau(u(hh')-1))=f(qq')+f(q)+f(q')$ using $\mathbb{F}_p$-linearity of $f$. But $\rho(q),\rho(q')\geq\lambda$ by the definition of $\lambda$, so $\rho(qq')\geq 2\lambda>\lambda$, so by the definition of a GPP, $\rho(f(qq'))>p^r\lambda$.\
Since $h,h'\in K_f$ we know that $\rho(f(q)),\rho(f(q'))> p^r\lambda$, thus $\rho(f(qq')+f(q)+f(q'))>p^r\lambda$. Therefore, $\rho(f(\tau(u(hh')-1)))>p^r\lambda$ and $hh'\in K_f$ as required.\
Also, $\tau(u(h^{-1})-1)=-\tau(u(h^{-1}))\tau(u(h)-1)=-\tau(u(h^{-1})-1)\tau(u(h)-1)-\tau(u(h)-1)$, so by the same argument it follows that $h^{-1}\in K_f$, and $K_f$ is a subgroup of $H$.\
Finally, for any $h\in H$, $\rho(\tau(u(h^p)-1))=\rho(\tau(u(h)-1)^p)\geq p\lambda>\lambda$, hence $\rho(f(\tau(u(h^p)-1)))>p^r\lambda$ and $h^p\in K_f$. Therefore $K_f$ contains $H^p$ and $K_f$ is an open subgroup of $H$.
In particular, for $f(t)=t$, let $K:=K_f=\{h\in H:\rho(\tau(u(h)-1))>\lambda\}$. Then $K$ is a proper open subgroup of $H$ containing $H^p$.
Given a polynomial $f(t)=a_0t+a_1t^p+\cdots+a_rt^{p^r}$, and a basis $\{h_1,\cdots,h_d\}$ for $H$, let $q_i=\tau(u(h_i)-1)$, and recall our expression (\[MahlerPol\]) from section 2:
$0=f(q_1)^{p^m}\tau\partial_1(y)+....+f(q_{d})^{p^m}\tau\partial_{d}(y)+O(f(q)^{p^m})$
Where $v(q^{p^m})\geq \underset{i\leq d}{\min}\{2v(q_i^{p^m})\}$ for all $m$, and hence $\rho(q)\geq 2\lambda>\lambda$. Then if $f$ is a GPP, it follows that $\rho(f(q))>p^r\lambda$, and hence $v(f(q)^{p^m})>p^{m+r}\lambda$ for $m>>0$.\
For the rest of this section, fix a non-trivial GPP $f$ of $p$-degree $r$. Then $K_f$ is a proper open subgroup of $H$ containing $H^p$ by Lemma \[sub\], so fix a basis $\{h_1,\cdots,h_d\}$ for $H$ such that $\{h_1^p,\cdots,h_t^p,h_{t+1},\cdots,h_d\}$ is an ordered basis for $K_f$.\
Set $q_i:=\tau(u(h_i)-1)$ for each $i$, so that $\rho(f(q_i))=p^r\lambda$ for $i\leq t$ and $\rho(f(q_i))>p^r\lambda$ for $i>t$, and define:
$S_m:=\left(\begin{array}{cccc} f(q_1)^{p^m} & f(q_2)^{p^m} & \cdots & f(q_t)^{p^m}\\
f(q_1)^{p^{m+1}} & f(q_2)^{p^{m+1}} & \cdots & f(q_t)^{p^{m+1}}\\
\vdots & \vdots & & \vdots\\
f(q_1)^{p^{m+t-1}} & f(q_2)^{p^{m+t-1}} & \cdots & f(q_t)^{p^{m+t-1}}\end{array}\right)$, $\underline{\partial}:=\left(\begin{array}{c}\tau\partial_1(y)\\
\tau\partial_2(y)\\
\vdots\\
\tau\partial_t(y)\end{array}\right)$\
Then we can rewrite our expression as:
$$0=S_m\underline{\partial}+\left(\begin{array}{c} O(f(q)^{p^m})\\
O(f(q)^{p^{m+1}})\\
\vdots\\
O(f(q)^{p^{m+t-1}})\end{array}\right)$$
And multiplying on the left by $adj(S_m)$ gives:
$$\label{Mahler''}
0=det(S_m)\underline{\partial}+adj(S_m)\left(\begin{array}{c} O(f(q)^{p^m})\\
O(f(q)^{p^{m+1}})\\
\vdots\\
O(f(q)^{p^{m+t-1}})\end{array}\right)$$
\[adjoint\]
Suppose that $f$ is special. Then for each $i,j\leq t$, the $(i,j)$-entry of $adj(S_m)$ has value at least $\frac{p^t-1}{p-1}p^{m+r}\lambda-p^{m+r+j-1}\lambda$ for sufficiently high $m$.
By definition, the $(i,j)$-entry of $adj(S_m)$ is (up to sign) the determinant of the matrix $(S_m)_{i,j}$ obtained by removing the $j$’th row and $i$’th column of $S_m$. This determinant is a sum of elements of the form:
$f(q_{k_1})^{p^m}f(q_{k_2})^{p^{m+1}}\cdots \widehat{f(q_{k_j})^{p^{m+j-1}}}\cdots f(q_{k_t})^{p^{m+t-1}}$
For $k_i\leq t$, where the hat indicates that the $j$’th term in this product is omitted.\
Since $f$ is special, $f(q_{k_s})^{p^m}$ is $v$-regular for $m>>0$. So since $\rho(f(q_{k_s}))=p^r\lambda$, it follows that $f(q_{k_s})^{p^m}$ is $v$-regular of value $p^{m+r}\lambda$.\
Therefore, this $(i,j)$-entry has value at least $(1+p+\cdots+\widehat{p^{j-1}}+\cdots+p^{t-1})p^{m+r}\lambda$
$=\frac{p^t-1}{p-1}p^{m+r}\lambda-p^{m+r+j-1}\lambda$.
\[delta\]
Let $\Delta:=\underset{\alpha\in\mathbb{P}^{t-1}\mathbb{F}_p}{\prod}{(\alpha_1f(q_1)+\cdots+\alpha_tf(q_t))}$.
Then there exists $\delta\in\tau(kH)$, which is a product of length $\frac{p^t-1}{p-1}$ in elements of the form $f(\tau(u(h)-1))$, $h\in H\backslash K_f$, such that $\rho(\Delta-\delta)>\frac{p^t-1}{p-1}p^r\lambda$.
For each $\alpha\in\mathbb{F}_p^t\backslash\{0\}$, we have that $\alpha_1f(q_1)+\cdots+\alpha_tf(q_t)=f(\alpha_1q_1+\cdots+\alpha_tq_t)$ using linearity of $f$.\
Using expansions inside $kH$, we see that $\alpha_1q_1+\cdots+\alpha_{t}q_{t}=\tau(u(h_1^{\alpha_1}\cdots h_{t}^{\alpha_{t}})-1)+O(q_jq_k)$, and hence:\
$f(\alpha_1q_1+\cdots+\alpha_{t}q_t)=f(\tau(u(h_1^{\alpha_1}\cdots h_t^{\alpha_t})-1))+O(f(q_jq_k))$.\
So set $h_{i,\alpha}:=h_1^{\alpha_1}\cdots h_{t}^{\alpha_{t}}$. Since $\alpha_i\neq 0$ for some $i$, it follows from the $\mathbb{F}_p$-linear independence of $h_1,\cdots,h_t$ modulo $K_f$ that $h_{i,\alpha}\in H\backslash K_f$, and hence $\rho(f(\tau(u(h_{i,\alpha})-1)))=p^r\lambda$.\
Set $\delta:=\underset{\alpha\in\mathbb{P}^{t-1}\mathbb{F}_p}{\prod}{f(\tau(u(h_{i,\alpha})-1))}$. Then $\Delta=\underset{\alpha\in\mathbb{P}^{t-1}\mathbb{F}_p}{\prod}{(\alpha_1f(q_1)+\cdots+\alpha_tf(q_t))}$\
$=\underset{\alpha\in\mathbb{P}^{t-1}\mathbb{F}_p}{\prod}{f(\tau(u(h_{i,\alpha})-1))}+O(f(q_jq_k))=\delta+\epsilon$\
Where $\epsilon$ is a sum of products of terms of the form $f(\tau(u(h_{i,\alpha})-1))$ and $O(f(q_jq_k))$, with each product containing at least one factor of the form $O(f(q_jq_k))$.\
Since the length of each of these products is $\frac{p^{t}-1}{p-1}$, and each term has growth rate at least $p^r\lambda$, with one or more having growth rate strictly greater than $p^r\lambda$, it follows that $\rho(\epsilon)>\frac{p^t-1}{p-1}p^r\lambda$ as required.
\[special-GPP\]
Suppose that there exists a special growth preserving polynomial $f$. Then $P$ is controlled by a proper open subgroup of $G$.
Since $f$ is special, we have that for some $k>0$ $f(q_i)^{p^k}$ is $v$-regular for each $i\leq t$, and thus $v(f(q_i)^{p^k})=\rho(f(q_i)^{p^k})=p^{r+k}\lambda$ and for all $m\geq k$, $v(f(q_i)^{p^m})=p^{r+m}\lambda$.\
Also, since $\rho(f(q))>p^r\lambda$, we can choose $c>0$ such that $\rho(f(q))>p^r\lambda+c$, and hence $v(f(q)^{p^m})>p^{m+r}\lambda+p^mc$ for sufficiently high $m$.\
Consider our Mahler expansion (\[Mahler''\]):
$0=det(S_m)\underline{\partial}+adj(S_m)\left(\begin{array}{c} O(f(q)^{p^m})\\
O(f(q)^{p^{m+1}})\\
.\\
.\\
.\\
O(f(q)^{p^{m+t-1}})\end{array}\right)$
We will analyse this expression to prove that $\tau\partial_i(P)=0$ for all $i\leq t$, and it will follow from Proposition \[Frattini\] that $P$ is controlled by a proper open subgroup of $G$ as required.\
Consider the $i$’th entry of the vector
$adj(S_m)\left(\begin{array}{c} O(f(q)^{p^m})\\
O(f(q)^{p^{m+1}})\\
.\\
.\\
.\\
O(f(q)^{p^{m+t-1}})\end{array}\right)$
This has the form $adj(S_m)_{i,1}O(f(q)^{p^m})+adj(S_m)_{i,2}O(f(q)^{p^{m+1}})+\cdots+adj(S_m)_{i,t}O(f(q)^{p^{m+t-1}})$.
By Lemma \[adjoint\], we know that $v(adj(S_m)_{i,j})\geq \frac{p^t-1}{p-1}p^{m+r}\lambda-p^{m+r+j-1}\lambda$ for $m>>0$, and hence:\
$v(adj(S_m)_{i,j}O(f(q)^{p^{m+j-1}}))\geq v(adj(S_m)_{i,j})+v(O(f(q)^{p^{m+j-1}}))$\
$\geq \frac{p^t-1}{p-1}p^{m+r}\lambda-p^{m+r+j-1}\lambda+p^{m+r+j-1}\lambda+p^{m+j-1}c=\frac{p^t-1}{p-1}p^{m+r}\lambda+p^{m+j-1}c$ for each $j$.\
Hence this $i$’th entry has value at least $\frac{p^t-1}{p-1}p^{m+r}\lambda+p^mc$.\
Therefore, the $i$’th entry of our expression (\[Mahler''\]) has the form $0=det(S_m)\tau\partial_i(y)+\epsilon_{i,m}$, where $v(\epsilon_{i,m})\geq \frac{p^t-1}{p-1}p^{m+r}\lambda+p^mc$.\
Now, take $\Delta:=det(S_0)$, and it follows that $det(S_m)=\Delta^{p^m}$ for each $m$. Also, using [@chevalley Lemma 1.1($ii$)] we see that:\
$\Delta=\beta\cdot\underset{\alpha\in\mathbb{P}^{t-1}\mathbb{F}_p}{\prod}{(\alpha_1f(q_1)+\cdots+\alpha_tf(q_t))}$ for some $\beta\in\mathbb{F}_p$.\
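As a quick sanity check of this factorisation (not needed for the argument), take $t=2$ and write $x:=f(q_1)$, $y:=f(q_2)$. Using $\prod_{\alpha\in\mathbb{F}_p}{(x+\alpha y)}=x^p-xy^{p-1}$, we get
$$\Delta=det\left(\begin{array}{cc} x & y\\ x^{p} & y^{p}\end{array}\right)=xy^{p}-x^{p}y=-y(x^{p}-xy^{p-1})=-\underset{\alpha\in\mathbb{P}^{1}\mathbb{F}_p}{\prod}{(\alpha_1x+\alpha_2y)},$$
so in this case $\beta=-1$.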
Therefore, by Lemma \[delta\], we can find an element $\delta\in\tau(kH)$, which up to scalar multiple is a product of length $\frac{p^t-1}{p-1}$ in elements of the form $f(\tau(u(h)-1))$, with $\rho(f(\tau(u(h)-1)))=p^r\lambda$ for each $h$, such that $\rho(\Delta-\delta)>\frac{p^t-1}{p-1}p^{r}\lambda$.
Hence we can find $c'>0$ such that for all $m>>0$, $v((\Delta-\delta)^{p^m})\geq \frac{p^t-1}{p-1}p^{m+r}\lambda+p^mc'$.\
Therefore, $0=det(S_m)\tau\partial_i(y)+\epsilon_{i,m}=\Delta^{p^m}\tau\partial_i(y)+\epsilon_{i,m}=\delta^{p^m}\tau\partial_i(y)+(\Delta-\delta)^{p^m}\tau\partial_i(y)+\epsilon_{i,m}$.\
Again, since $f$ is a special GPP, if $h\in H$ and $\rho(f(\tau(u(h)-1)))=p^r\lambda$, it follows that $f(\tau(u(h)-1))^{p^k}$ is $v$-regular for $k>>0$.\
So since $\delta$ is a product of $\frac{p^t-1}{p-1}$ elements of the form $f(\tau(u(h)-1))$ of growth rate $p^r\lambda$, it follows that for some $k\in\mathbb{N}$, $\delta^{p^k}$ is $v$-regular of value $\frac{p^t-1}{p-1}p^{r+k}\lambda$.\
Therefore for all $m\geq k$, $\delta^{p^m}$ is $v$-regular of value $\frac{p^t-1}{p-1}p^{r+m}\lambda$, and dividing out by $\delta^{p^m}$ gives that for each $i=1,\cdots,t$:
$0=\tau\partial_i(y)+\delta^{-p^m}(\Delta-\delta)^{p^m}\tau\partial_i(y)+\delta^{-p^m}\epsilon_{i,m}$.
But for $m>>0$, $v(\delta^{-p^m}(\Delta-\delta)^{p^m})\geq \frac{p^t-1}{p-1}p^{m+r}\lambda+p^mc'-\frac{p^t-1}{p-1}p^{m+r}\lambda=p^mc'$, and $v(\delta^{-p^m}\epsilon_{i,m})\geq \frac{p^t-1}{p-1}p^{m+r}\lambda+p^mc-\frac{p^t-1}{p-1}p^{m+r}\lambda=p^mc$, hence the right hand side of this expression converges to $\tau\partial_i(y)$.\
Therefore, $\tau\partial_i(y)=0$, and since this holds for all $y\in P$, we have that $\tau\partial_i(P)=0$ for each $i=1,\cdots,t$.
So to prove Theorem \[B\], it remains only to prove the existence of a special growth preserving polynomial. We will construct such a polynomial in section 6.
Central Simple algebras
=======================
Suppose that $G$ is an abelian-by-procyclic group, and $P$ is a faithful prime ideal of $kG$. Since $Q:=Q(\frac{kG}{P})$ is simple, its centre is a field. Recall that the proof of Theorem \[B\] will split into two parts, when $Q$ is finite dimensional over its centre, and when it is not.
In this section, we deal with the former case. So we will assume throughout that $Q$ is finitely generated over its centre, i.e. is a central simple algebra.
Isometric embedding
-------------------
Fix a non-commutative valuation $v$ on $Q$. Then by definition, the completion $\widehat{Q}$ of $Q$ with respect to $v$ is isomorphic to $M_n(Q(D))$ for some complete, non-commutative DVR $D$.
But since $Q$ is finite dimensional over its centre, the same property holds for $\widehat{Q}$, and hence $Q(D)$ is finite dimensional over its centre.\
Let $F:=Z(Q(D))$, $s:= $dim$_F(Q(D))$, $R:=F\cap D$. Then $F$ is a field, $R$ is a commutative DVR, and $Q(D)\cong F^s$. Let $\pi\in R$ be a uniformiser, and suppose that $v(\pi)=t>0$.
\[technical’\]
Let $\{y_1,\cdots,y_s\}$ be an $F$-basis for $Q(D)$ with $0\leq v(y_i)\leq t$ for all $i$. Then there exists $l\in\mathbb{N}$ such that if $v(r_1y_1+\cdots+r_sy_s)\geq l$ then $v(r_i)>0$ for some $i$ with $r_i\neq 0$.
First, note that the field $F$ is complete with respect to the non-archimedian valuation $v$, and $Q(D)\cong F^s$ carries two filtrations as a $F$-vector space, which both restrict to $v$ on $F$. One is the natural valuation $v$ on $Q(D)$, the other is given by $v_0(r_1y_1+....+r_sy_s)=\min\{v(r_i):i=1,...,s\}$.\
But it follows from [@norm Proposition 2.27] that any two norms on $F^s$ are topologically equivalent. Hence $v$ and $v_0$ induce the same topology on $Q(D)$.
It is easy to see that any subspace of $F^s$ is closed with respect to $v_0$, and hence also with respect to $v$.\
So, suppose for contradiction that for each $m\in\mathbb{N}$, there exist $r_{i,m}\in F$, not all zero, with $v(r_{i,m})\leq 0$ if $r_{i,m}\neq 0$, and $v(r_{1,m}y_1+....+r_{s,m}y_s)\geq m$.\
Then there exists $i$ such that $r_{i,m}\neq 0$ for infinitely many $m$, and we can assume without loss of generality that $i=1$. So from now on, we assume that $r_{1,m}\neq 0$ for all $m$.\
Dividing out by $r_{1,m}$ gives that $v(y_1+t_{2,m}y_2+....+t_{s,m}y_s)\geq m-v(r_{1,m})\geq m$, where $t_{i,m}=r_{1,m}^{-1}r_{i,m}\in F$.\
Hence $\underset{m\rightarrow\infty}{\lim}{(y_1+t_{2,m}y_2+....+t_{s,m}y_s)}=0$, and thus $\underset{m\rightarrow\infty}{\lim}{(t_{2,m}y_2+....+t_{s,m}y_s)}$ exists and equals $-y_1$. But since Span$_F\{y_2,....,y_s\}$ is closed in $F^s$, this means that $y_1\in$ Span$_F\{y_2,.....,y_s\}$ – contradiction.
\[integral basis\]
There exists a basis $\{x_1,....,x_s\}\subseteq D$ for $Q(D)$ over $F$ such that $D=Rx_1\oplus....\oplus R x_s$.
It is clear that we can find an $F$-basis $\{y_1,....,y_s\}\subseteq D$ for $Q(D)$ such that $v(y_i)<t$ for all $i$, just by rescaling elements of some arbitrary basis.
Therefore, by Lemma \[technical’\], there exists $l\in\mathbb{N}$ such that if $v(r_1y_1+\cdots+r_sy_s)\geq l$ then $v(r_i)>0$ for some $i$ with $r_i\neq 0$.\
Choose $m\in\mathbb{N}$ such that $tm>l$. Then given $x\in D\backslash \{0\}$, $v(x)\geq 0$ so $v(\pi^mx)\geq tm>l$. So if $\pi^mx=r_1y_1+....+r_sy_s$ then $v(r_i)>0$ for some $r_i\neq 0$.\
It follows from an easy inductive argument that $\pi^{ms}x\in Ry_1\oplus.....\oplus Ry_s$, and hence $\pi^{ms}D\subseteq Ry_1\oplus.....\oplus Ry_s$.\
But $Ry_1\oplus.....\oplus Ry_s$ is a free $R$-module, and $R$ is a commutative PID, hence any $R$-submodule is also free. Hence $\pi^{ms}D=R(\pi^{ms}x_1)\oplus....\oplus R(\pi^{ms}x_e)$ for some $x_i\in D$.\
It follows easily that $e=s$, $\{x_1,....,x_s\}$ is an $F$-basis for $Q(D)$, and $D=Rx_1\oplus....\oplus Rx_s$.
Now, we restrict our non-commutative valuation $v$ on $\widehat{Q}\cong M_n(Q(D))$ to $Q(D)$, and by definition this is the natural $J(D)$-adic valuation.
Using Proposition \[integral basis\], we fix a basis $\{x_1,\cdots,x_s\}\subseteq D$ for $Q(D)$ over $F$, with $D=Rx_1\oplus Rx_2\oplus\cdots\oplus R x_s$.
\[embedding\]
Let $F'$ be any finite extension of $F$; then $v$ extends to $F'$. Let $v'$ be the standard matrix filtration of $M_s(F')$ with respect to $v$.
Then there is a continuous embedding of $F$-algebras $\phi: Q(D)\xhookrightarrow{} M_s(F')$ such that
$v'(\phi(x))\leq v(x)<v'(\phi(x))+2t$ for all $x\in Q(D)$.
Hence applying the functor $M_n$ to $\phi$ gives us a continuous embedding $M_n(\phi):\widehat{Q}\hookrightarrow{} M_{ns}(F')$ such that $v'(M_n(\phi)(x))\leq v(x)<v'(M_n(\phi)(x))+2t$ for all $x\in \widehat{Q}$.
It is clear that the embedding $F\to F'$ is an isometry, so it suffices to prove the result for $F'=F$.\
Again, define $v_0:Q(D)\to\mathbb{Z}\cup\{\infty\}$, $\sum_{i=1}^{s}{r_ix_i}\mapsto \min\{v(r_i): i=1,....,s\}$; it is readily checked that this is a separated filtration of $F$-vector spaces, and clearly $v(x)\geq v_0(x)$ for all $x\in Q(D)$.\
Now if $x=r_1x_1+....+r_sx_s\in Q(D)$ with $0\leq v(x)<t$, then $v(r_i)\geq 0$ for all $i$ because $D$ is an $R$-lattice by Proposition \[integral basis\]. Since $v(x_i)\geq 0$ for all $i$, $v(r_j)<t$ for some $j$, so since $r_j\in R$, this means that $v(r_j)=0$, and hence $v_0(x)=0$.\
So if $x\in Q(D)$ with $v(x)=l$, then $at\leq l < (a+1)t$ where $a=\floor{\frac{l}{t}}$, and hence $0\leq v(\pi^{-a}x)<t$, so $v_0(\pi^{-a}x)=0$.
Thus $\pi^{-a}x=r_1x_1+....+r_sx_s$ with $v(r_i)\geq 0$ for all $i$ and $v(r_j)=0$ for some $j$, and hence $v_0(x)=ta$, so $v(x)< v_0(x)+t$.\
So it follows that $v_0(x)\leq v(x)< v_0(x)+t$ for all $x\in Q(D)$, in particular $0\leq v(x_i)<t$ for all $i$. Hence the identity map $(Q(D),v)\to (Q(D),v_0)$ is bounded.\
Now, consider the map $\phi:Q(D)\to $ End$_F(Q(D)), x\mapsto (Q(D)\to Q(D), d\mapsto x\cdot d)$, this is an injective $F$-algebra homomorphism.\
Also, End$_F(Q(D))$ carries a natural filtration of $F$-algebras given by
$v'(\psi):=\min\{v_0(\psi(x_i)):i=1,...,s\}$ for each $\psi\in $ End$_F(Q(D))$.
Using the isomorphism End$_F(Q(D))\cong M_s(F)$, this is just the standard matrix filtration, and it is readily seen that
$v'(\psi)=\inf\{v_0(\psi(d)):d\in Q(D), 0\leq v_0(d)<t\}$ for all $\psi\in $ End$_F(Q(D))$.
So if $v_0(x)=r$ then since $v_0(1)=0$ and $v_0(\phi(x)(1))=v_0(x\cdot 1)=r$, it follows that $v'(\phi(x))\leq r$. But for all $i=1,...,s$:\
$v_0(x\cdot x_i)>v(x\cdot x_i)-t\geq r+v(x_i)-t\geq r-t$, hence $v'(\phi(x))\geq r-t$.\
Therefore $v'(\phi(x))\leq v_0(x)\leq v'(\phi(x))+t$ for all $x$, so $\phi$ is bounded, and hence continuous.\
Finally, since for all $x\in Q(D)$, $v_0(x)\leq v(x)\leq v_0(x)+t$ and $v'(\phi(x))\leq v_0(x)\leq v'(\phi(x))+t$, it follows that $v'(\phi(x))\leq v(x)\leq v'(\phi(x))+2t$.
Recall from Definition \[growth rate\] the growth rate function $\rho'$ of $M_{ns}(F')$ with respect to $v'$. Then using Proposition \[embedding\], we see that for all $x\in\widehat{Q}$:\
$\rho(x)=\underset{n\rightarrow\infty}{\lim}{\frac{v(x^n)}{n}}\leq \underset{n\rightarrow\infty}{\lim}{\frac{v'(x^n)+2t}{n}}=\rho'(x)\leq\rho(x)$ – forcing equality.\
Therefore $\rho'=\rho$ when restricted to $\widehat{Q}$.
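The distinction between the value and the growth rate can be seen in a toy calculation (our own illustrative sketch, entirely separate from the argument): for the standard $p$-adic matrix filtration on $2\times 2$ integer matrices, the matrix used below has $v'(x)=0$, while $v'(x^n)/n$ converges to $1$.

```python
# Toy illustration only: the growth rate lim v'(x^n)/n under the standard
# p-adic matrix filtration v'(M) = min over the non-zero entries of v_p(entry).
p = 3

def vp(n, p):
    # p-adic valuation of a non-zero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def vprime(M, p):
    return min(vp(e, p) for row in M for e in row if e != 0)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

x = [[p, 1], [0, p]]          # v'(x) = 0, but x^n = [[p^n, n*p^(n-1)], [0, p^n]]
xn = [[1, 0], [0, 1]]
for n in range(1, 101):
    xn = matmul(xn, x)
    if n in (1, 10, 50, 100):
        print(n, vprime(xn, p) / n)   # tends to the growth rate 1, although v'(x) = 0
```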
Diagonalisation
---------------
Again, recall our Mahler expansion (\[Mahler\]):
$0=q_1^{p^m}\tau\partial_1(y)+....+q_{d}^{p^m}\tau\partial_{d}(y)+O(q^{p^m})$
Where each $q_i=\tau(u(h_i)-1)$ for some basis $\{h_1,\cdots,h_d\}$ for $H$, and $\rho(q)>\rho(q_i)$ for each $i$.\
We may embed $Q(D)$ continuously into $M_s(F')$ for any finite extension $F'$ of $F=Z(Q(D))$ by Proposition \[embedding\], and since each $q_i$ is a square matrix over $Q(D)$, by choosing $F'$ appropriately, we may ensure that they can be reduced to Jordan normal form inside $M_{ns}(F')$.\
But since $F'$ has characteristic $p$, after raising to sufficiently high $p$’th powers, a Jordan block becomes diagonal. So we may choose $m_0\in\mathbb{N}$ such that $q_i^{p^{m_0}}$ is diagonalisable for each $i$.\
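Concretely (a standard observation, recorded only for convenience): for a single Jordan block $\mu I+N$ with $N$ nilpotent, $\mu I$ and $N$ commute, so in characteristic $p$
$$\left(\begin{array}{cc}\mu & 1\\ 0 & \mu\end{array}\right)^{p^{m}}=(\mu I+N)^{p^{m}}=\mu^{p^{m}}I+N^{p^{m}}=\mu^{p^{m}}I\quad\text{once }p^{m}\geq 2,$$
and in general a Jordan block of size $l$ becomes scalar, in particular diagonal, as soon as $p^{m}\geq l$.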
But the matrices $q_1^{p^{m_0}},\cdots,q_d^{p^{m_0}}$ commute and are each diagonalisable, and commuting diagonalisable matrices can be simultaneously diagonalised. Hence there exists $a\in M_{ns}(F')$ invertible such that $aq_i^{p^{m_0}}a^{-1}$ is diagonal for each $i$.\
So, let $t_i:=aq_ia^{-1}$, then after multiplying (\[Mahler\]) on the left by $a$, we get:
$0=t_1^{p^m}a\tau\partial_1(y)+....+t_{d}^{p^m}a\tau\partial_{d}(y)+O(aq^{p^m})$
Note that since $t_i^{p^{m_0}}$ is diagonal for each $i$, $\rho'(t_i^{p^{m_0}})=v'(t_i^{p^{m_0}})$, and $\rho'(t_i)=\rho'(q_i)$ since growth rates are invariant under conjugation by Lemma \[growth properties\]($iii$). Since $\rho'=\rho$ on $\widehat{Q}$, it follows that $\rho(q_i^{p^{m_0}})=v'(t_i^{p^{m_0}})$.
Moreover, $v'(t_i^{p^m})=\rho(q_i^{p^m})=p^m\rho(q_i)$ for each $m\geq m_0$.\
Also, $q_i=\tau(u(h_i)-1)=\tau(u_0(h_i)-1)^{p^{m_1}}$, so after replacing $m_1$ by $m_1+m_0$ we may ensure that each $t_i$ is diagonal, and hence $v'(t_i)=\rho(q_i)$.\
Now, let $K:=\{h\in H:\rho(\tau(u(h)-1))>\lambda\}$. Then since $id$ is a non-trivial GPP and $K=K_{id}$, it follows from Lemma \[sub\] that $K$ is a proper open subgroup of $H$ containing $H^p$.\
For the rest of this section, fix a basis $\{h_1,\cdots,h_d\}$ for $H$ such that $\{h_1^p,\cdots,h_r^p,h_{r+1},\cdots,h_d\}$ is a basis for $K$, $q_i=\tau(u(h_i)-1)$, $t_i=aq_ia^{-1}$ as above.\
Then it follows that for all $i\leq r$, $v'(t_i)=\rho(q_i)=\lambda$, and for $i>r$, $v'(t_i)>\lambda$, so we have:
$$\label{Mahlerf}
0=t_1^{p^m}a\tau\partial_1(y)+....+t_{r}^{p^m}a\tau\partial_{r}(y)+O(aq^{p^m})$$
Where $\rho(q)>\lambda$.\
\[independent\]
Given $c_1,\cdots,c_m\in M_{ns}(F')$ with $v'(c_i)=\mu$ for some $i$, we say that $c_1,\cdots,c_m$ are *$\mathbb{F}_p$-linearly independent modulo $\mu^+$* if for any $\alpha_1,\cdots,\alpha_m\in\mathbb{F}_p$, $v'(\alpha_1c_1+\cdots+\alpha_mc_m)>\mu$ if and only if $\alpha_i=0$ for all $i$.
\[technical\]
$t_1,\cdots,t_r$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$.
Suppose, for contradiction, that $v'(\alpha_1t_1+\cdots+\alpha_rt_r)>\lambda$ for some $\alpha_i\in\mathbb{F}_p$, not all zero, then using Lemma \[growth properties\]($iii$) we see that\
$\rho(\alpha_1q_1+\cdots+\alpha_rq_r)=\rho(a(\alpha_1q_1+\cdots+\alpha_rq_r)a^{-1})=\rho(\alpha_1t_1+\cdots+\alpha_rt_r)=v'(\alpha_1t_1+\cdots+\alpha_rt_r)>\lambda$.\
But since $q_i=\tau(u(h_i)-1)$ for each $i$, we can see using expansions in $kG$ that $\alpha_1q_1+\cdots+\alpha_rq_r=\tau(u(h_1^{\alpha_1}\cdots h_r^{\alpha_r})-1)+O(q_iq_j)$, and clearly $\rho(O(q_iq_j))>\lambda$, and hence $\rho(\tau(u(h_1^{\alpha_1}\cdots h_r^{\alpha_r})-1))>\lambda$.\
But $K=\{g\in G:\rho(\tau(u(g)-1))>\lambda\}=\langle h_1^p,\cdots,h_r^p,h_{r+1},\cdots,h_d,X\rangle$, so since $p$ does not divide every $\alpha_i$, it follows that $h_1^{\alpha_1}\cdots h_r^{\alpha_r}\notin K$, and hence $\rho(\tau(u(h_1^{\alpha_1}\cdots h_r^{\alpha_r})-1))=\lambda$ – contradiction.
**Notation.** For each $i=1,\cdots,ns$, denote by $e_i$ the diagonal matrix with 1 in the $i$’th diagonal position, 0 elsewhere.
\[linear-indep\]
Suppose $d_1,\cdots,d_r\in M_{ns}(F')$ are diagonal, $v'(d_i)=\lambda$ for each $i$, and suppose that for all $m\in\mathbb{N}$ we have:
$0=d_1^{p^m}a_1+\cdots+d_r^{p^m}a_r+O(aq^{p^m})$
Where $a_i,a,q\in M_{ns}(F')$, $\rho(q)>\lambda$.\
Suppose further that for some $j\in\{1,\cdots,ns\}$, the $j$’th entries of $d_1,\cdots,d_r$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$. Then $e_ja_i=0$ for all $i=1,\cdots,r$.
Firstly, since $d_{1,j},\cdots,d_{r,j}$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$, it follows immediately that $e_jd_1,\cdots,e_jd_r$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$. And:
$0=(e_jd_1)^{p^m}a_1+\cdots+(e_jd_r)^{p^m}a_r+O(e_jaq^{p^m})$
For convenience, set $d_i':=e_jd_i$, and in a similar vein to the proof of Theorem \[special-GPP\], define the following matrices:
$D_m:=\left(\begin{array}{cccccc} d_1'^{p^m} & d_2'^{p^m} & . & . & d_r'^{p^m}\\
d_1'^{p^{m+1}} & d_2'^{p^{m+1}} & . & . & d_r'^{p^{m+1}}\\
. & . & . & . & .\\
. & . & . & . & .\\
d_1'^{p^{m+r-1}} & d_2'^{p^{m+r-1}} & . & . & d_r'^{p^{m+r-1}}\end{array}\right)$, $\underline{a}:=\left(\begin{array}{c}a_1\\
a_2\\
.\\
.\\
.\\
a_r\end{array}\right)$\
Then we can rewrite our expression as:
$0=D_m\underline{a}+\left(\begin{array}{c} O(e_jaq^{p^m})\\
O(e_jaq^{p^{m+1}})\\
.\\
.\\
.\\
O(e_jaq^{p^{m+r-1}})\end{array}\right)$
And multiplying by $adj(D_m)$ gives:
$$\label{Mahlerf'}
0=det(D_m)\underline{a}+adj(D_m)\left(\begin{array}{c} O(e_jaq^{p^m})\\
O(e_jaq^{p^{m+1}})\\
.\\
.\\
.\\
O(e_jaq^{p^{m+r-1}})\end{array}\right)$$
And the proof of Lemma \[adjoint\] shows that the $(i,j)$-entry of $adj(D_m)$ has value at least $\frac{p^r-1}{p-1}p^{m}\lambda-p^{m+j-1}\lambda$.\
Since $\rho(q)>\lambda$, fix $c>0$ such that $\rho(q)>\lambda+c$, and hence $v'(e_jaq^{p^m})\geq p^m\lambda+p^mc+v(a)$ for all sufficiently high $m$. Then we see that the $i$’th entry of the vector
$adj(D_m)\left(\begin{array}{c} O(e_jaq^{p^m})\\
O(e_jaq^{p^{m+1}})\\
.\\
.\\
.\\
O(e_jaq^{p^{m+r-1}})\end{array}\right)$
has value at least $\frac{p^r-1}{p-1}p^{m}\lambda+p^mc+v(a)$ for $m>>0$.\
So examining the $i$’th entry of our expression (\[Mahlerf'\]) gives that $0=det(D_m)a_i+\epsilon_{i,m}$, where $v'(\epsilon_{i,m})\geq \frac{p^r-1}{p-1}p^{m}\lambda+p^mc+v(a)$.\
Let $\Delta:=det(D_0)$, then $det(D_m)=\Delta^{p^m}$ for all $m\in\mathbb{N}$, and using [@chevalley Lemma 1.1($ii$)] we see that
$\Delta=\beta\cdot\underset{\alpha\in\mathbb{P}^{r-1}\mathbb{F}_p}{\prod}{(\alpha_1d_1'+\cdots+\alpha_rd_r')}$ for some $\beta\in\mathbb{F}_p$
Since $d_1',\cdots,d_r'$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$, each term in this product has value $\lambda$, and moreover is a diagonal matrix, with only the $j$’th diagonal entry non-zero.\
Let $\delta$ be the $j$’th diagonal entry of $\Delta$. Then $\delta\in F'$, $\delta^{-1}\Delta=e_j$, and $v(\delta)=\underset{\alpha\in\mathbb{P}^{r-1}\mathbb{F}_p}{\sum}\lambda=\frac{p^r-1}{p-1}\lambda$. So:
$0=\delta^{-p^m}\Delta^{p^m}a_i+\delta^{-p^m}\epsilon_{i,m}=e_ja_i+\delta^{-p^m}\epsilon_{i,m}$
and $v'(\delta^{-p^m}\epsilon_{i,m})=v'(\epsilon_{i,m})-p^m\frac{p^r-1}{p-1}\lambda\geq\frac{p^r-1}{p-1}p^m\lambda+p^mc+v(a)-\frac{p^r-1}{p-1}p^m\lambda=v(a)+p^mc\rightarrow\infty$ as $m\rightarrow\infty$.
Hence $\delta^{-p^m}\epsilon_{i,m}\rightarrow 0$ and $e_ja_i=0$ as required.
Linear Dependence
-----------------
Consider again the maps $\partial_1,\cdots,\partial_r:kG\to kG$. These are $k$-linear derivations of $kG$, and we want to prove that $\partial_i(P)=0$ for all $i$.
\[artinian\]
Let $\delta:kG\to kG$ be any $k$-linear derivation of $kG$. Then if $c\tau\delta(P)=0$ for some $0\neq c\in M_{ns}(F')$, then $\tau\delta(P)=0$.
Let $I=\{a\in M_{ns}(F'):a\tau\delta(P)=0\}$, then it is clear that $I$ is a left ideal of $M_{ns}(F')$, and $I\neq 0$ since $0\neq c\in I$. We want to prove that $1\in I$, and hence $\tau\delta(P)=0$.\
We will first prove that $I$ is right $\widehat{Q}$-invariant:\
Given $r\in kG$, $y\in P$, $\delta(ry)=r\delta(y)+\delta(r)y$ since $\delta$ is a derivation. So $\tau\delta(ry)=\tau(r)\tau\delta(y)+\tau\delta(r)\tau(y)=\tau(r)\tau\delta(y)$, since $\tau(y)=0$ for $y\in P$.
Therefore, for any $a\in I$, $a\tau(r)\tau\delta(y)=a\tau\delta(ry)=0$ since $ry\in P$. Thus $a\tau(r)\in I$.\
It follows that $I$ is right $\frac{kG}{P}$-invariant.\
Given $s\in kG$, regular mod $P$ (i.e. $\tau(s)$ is a unit in $Q(\frac{kG}{P})$), we have that $I\tau(s)\subseteq I$. Hence we have a descending chain $I\supseteq I\tau(s)\supseteq I\tau(s)^2\supseteq\cdots$ of left ideals of $M_{ns}(F')$.
So since $M_{ns}(F')$ is artinian, it follows that $I\tau(s)^n=I\tau(s)^{n+1}$ for some $n\in\mathbb{N}$, so dividing out by $\tau(s)^{n+1}$ gives that $I\tau(s)^{-1}=I$.\
Therefore, $I$ is right $Q(\frac{kG}{P})$-invariant, and passing to the completion gives that it is right $\widehat{Q}$-invariant as required.\
This means that $I\cap\widehat{Q}$ is a two sided ideal of the simple ring $\widehat{Q}\cong M_n(Q(D))$. We will prove that $I\cap\widehat{Q}\neq 0$, and it will follow that $I\cap\widehat{Q}=\widehat{Q}$ and thus $1\in I$.\
We know that $\widehat{Q}\cong M_n(Q(D))$ and $Q(D)\xhookrightarrow{} M_s(F')$. Since $Q(D)$ is a division ring, we must have that $M_s(F')$ is free as a right $Q(D)$-module, so let $\{x_1,\cdots,x_t\}$ be a basis for $M_{s}(F')$ over $Q(D)$.
It follows easily that $\{x_1I_{ns},\cdots,x_tI_{ns}\}$ is a basis for $M_{ns}(F')$ over $M_n(Q(D))=\widehat{Q}$.\
Now, $c\in I$ and $c\neq 0$, so $c=x_1c_1+\cdots+x_tc_t$ for some $c_i\in\widehat{Q}$, not all zero, and $c\tau\delta(y)=0$ for all $y\in P$.\
Therefore $0=c\tau\delta(y)=x_1(c_1\tau\delta(y))+x_2(c_2\tau\delta(y))+\cdots+x_t(c_t\tau\delta(y))$, so it follows from $\widehat{Q}$-linear independence of $x_1I_{ns},\cdots,x_tI_{ns}$ that $c_i\tau\delta(y)=0$ for all $i$, and hence $c_i\in I\cap\widehat{Q}$.\
So choose $i$ such that $c_i\neq 0$; since $c_i\in I\cap\widehat{Q}$, we have that $I\cap\widehat{Q}\neq 0$ as required.
\[linear-dep\]
Let $\delta_1,\cdots,\delta_r:kG\to kG$ be $k$-linear derivations of $kG$, and suppose that there exist matrices $a,q,d_1,\cdots,d_r\in M_{ns}(F')$ such that $a$ is invertible, the $d_i$ are diagonal of value $\lambda$, $\rho(q)>\lambda$ and for all $y\in P$:
$0=d_1^{p^m}a\tau\delta_1(y)+d_2^{p^m}a\tau\delta_2(y)+\cdots+d_r^{p^m}a\tau\delta_r(y)+O(aq^{p^m})$
Suppose further that $d_1,\cdots,d_r$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$, then $\tau\delta_i(P)=0$ for all $i$.
We will use induction on $r$. First suppose that $r=1$.\
Then since $0=d_1^{p^m}a\tau\delta_1(y)+O(aq^{p^m})$, it follows immediately from Proposition \[linear-indep\] that $e_ja\tau\delta_1(y)=0$ for any $j=1,\cdots,ns$ such that $v(d_{1,j})=\lambda$, and this holds for all $y\in P$.
Since $a$ is a unit, $e_ja\neq 0$, so using Lemma \[artinian\], we see that $\tau\delta_1(P)=0$ as required.\
Now suppose, for induction, that the result holds for $r-1$:\
Assume first that there exists $j=1,\cdots,ns$ such that $d_{1,j},\cdots,d_{r,j}$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$. Then using Proposition \[linear-indep\] and Lemma \[artinian\] again, we see that $e_ja\tau\delta_i(y)=0$ for all $i=1,\cdots,r$, $y\in P$, and hence $\tau\delta_i(P)=0$ for all $i$ as required.\
Hence we may assume that all the corresponding entries of $d_1,\cdots,d_r$ are $\mathbb{F}_p$-linearly dependent modulo $\lambda^+$, i.e. given $j=1,\cdots,ns$, we can find $\beta_1,\cdots,\beta_r\in\mathbb{F}_p$ such that $v(\beta_1d_{1,j}+\cdots+\beta_rd_{r,j})>\lambda$. We can of course choose $j$ such that $v(d_{i,j})=\lambda$ for some $i$.
Assume without loss of generality that $v(d_{r,j})=\lambda$ and that $\beta_r\neq 0$, so after rescaling we may assume that $\beta_r=1$.\
It follows immediately that $v'(e_j(\beta_1d_1+\cdots+\beta_{r-1}d_{r-1}+d_r))>\lambda$, so set
$\epsilon:=e_j(\beta_1d_1+\cdots+\beta_{r-1}d_{r-1}+d_r)$ for convenience.\
Multiplying our expression by $e_j$ gives:\
$0=e_jd_1^{p^m}a\tau\delta_1(y)+\cdots+e_jd_{r-1}^{p^m}a\tau\delta_{r-1}(y)+e_jd_r^{p^m}a\tau\delta_r(y)+O(e_jaq^{p^m})$\
$=e_jd_1^{p^m}a\tau\delta_1(y)+\cdots+e_jd_{r-1}^{p^m}a\tau\delta_{r-1}(y)-e_j(\beta_1d_1+\cdots+\beta_{r-1}d_{r-1})^{p^m}a\tau\delta_r(y)+\epsilon^{p^m}a\tau\delta_r(y)+O(aq^{p^m})$\
$=(e_jd_1)^{p^m}a(\tau\delta_1-\beta_1\tau\delta_r)(y)+\cdots+(e_jd_{r-1})^{p^m}a(\tau\delta_{r-1}-\beta_{r-1}\tau\delta_r)(y)+\epsilon^{p^m}a\tau\delta_r(y)+O(aq^{p^m})$.\
Now, set $\delta_i':=\delta_i-\beta_i\delta_r$, $d_i':=e_jd_i$ for each $i=1,\cdots,r-1$. Then the $\delta_i'$ are $k$-linear derivations of $kG$, and since $\epsilon$ is diagonal and $v'(\epsilon)>\lambda$, it follows that $\rho(\epsilon)>\lambda$, and so $\epsilon^{p^m}a\tau\delta_r(y)+O(aq^{p^m})=O(aq'^{p^m})$ for some $q'$ with $\rho(q')>\lambda$. Hence:
$0=d_1'^{p^m}a\tau\delta_1'(y)+d_2'^{p^m}a\tau\delta_2'(y)+\cdots+d_{r-1}'^{p^m}a\tau\delta_{r-1}'(y)+O(aq'^{p^m})$
So it follows from induction that $\tau\delta_i'(P)=0$ for all $i$, i.e. for all $y\in P$, $\tau\delta_i(y)=\beta_i\tau\delta_r(y)$, and:\
$0=d_1^{p^m}a\tau\delta_1(y)+\cdots+d_r^{p^m}a\tau\delta_r(y)+O(aq^{p^m})$\
$=d_1^{p^m}a(\beta_1\tau\delta_r)(y)+\cdots+d_{r-1}^{p^m}a(\beta_{r-1}\tau\delta_{r})(y)+d_r^{p^m}a\tau\delta_r(y)+O(aq^{p^m})$\
$=(\beta_1d_1+\beta_2d_2+\cdots+\beta_{r-1}d_{r-1}+d_r)^{p^m}a\tau\delta_r(y)+O(aq^{p^m})$.\
But since $d_1,\cdots,d_r$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$, it follows that
$v'(\beta_1d_1+\cdots+\beta_{r-1}d_{r-1}+d_r)=\lambda$, so just applying Proposition \[linear-indep\] and Lemma \[artinian\] again gives that $\tau\delta_r(P)=0$.\
So, we have $0=d_1^{p^m}a\tau\delta_1(y)+\cdots+d_{r-1}^{p^m}a\tau\delta_{r-1}(y)+O(aq^{p^m})$. Therefore $\tau\delta_i(P)=0$ for all $i<r$ by induction.
\[C\]
Let $G$ be a $p$-valuable, abelian-by-procyclic group, $P\in Spec^f(kG)$ such that $Q(\frac{kG}{P})$ is a CSA. Then $P$ is controlled by a proper open subgroup of $G$.
We know that $t_1,\cdots,t_r$ are $\mathbb{F}_p$-linearly independent modulo $\lambda^+$ by Lemma \[technical\], so applying Theorem \[linear-dep\] with $\delta_i=\partial_i$ and $d_i=t_i$, it follows that $\tau\partial_i(P)=0$ for all $i=1,\cdots,r$.
Hence $P$ is controlled by a proper open subgroup of $G$ by Proposition \[Frattini\].
So, from now on, we may assume that $Q(\frac{kG}{P})$ is not a CSA.
The Extended Commutator Subgroup
================================
Again, fix a non-abelian, $p$-valuable, abelian-by-procyclic group $G$, with principal subgroup $H$, procyclic element $X$. We will assume further that $G$ has *split-centre*, i.e. $1\to Z(G)\to G\to\frac{G}{Z(G)}\to 1$ is split exact.
Uniform groups
--------------
Assume for now that $G$ is *uniform*, i.e. that $(G,G)\subseteq G^{p^\epsilon}$, where $\epsilon=2$ if $p=2$ and $\epsilon=1$ if $p>2$. Note that a uniform group $G$ is $p$-saturable using the $p$-valuation $\omega(g)=\max\{n\in\mathbb{N}:g\in G^{p^{n-\epsilon}}\}$ (see [@DDMS Chapter 4] for full details).\
Recall from [@DDMS Chapter 9], that a free $\mathbb{Z}_p$-Lie algebra $\mathfrak{g}$ of finite rank is *powerful* if $[\mathfrak{g},\mathfrak{g}]\subseteq p^{\epsilon}\mathfrak{g}$. It follows from [@DDMS Theorem 9.10] that a $p$-saturable group $G$ is uniform if and only if $\log(G)$ is powerful.
Let $\mathfrak{g}=\log(G)$, then using Lie theory, we see that $\mathfrak{g}=\mathfrak{h}$ $\rtimes$ Span$_{\mathbb{Z}_p}\{x\}$, where $\mathfrak{h}=\log(H)$, $x=\log(X)$, and $[\mathfrak{g},\mathfrak{g}]=[x,\mathfrak{h}]\subseteq p\mathfrak{h}$.\
Recall that a map $w:\mathfrak{g}\to\mathbb{R}\cup\{\infty\}$ is a *valuation* if for all $u,v\in\mathfrak{g}$, $\alpha\in\mathbb{Z}_p$:
- $w(u+v)\geq\min\{w(u),w(v)\}$,
- $w([u,v])\geq w(u)+w(v)$,
- $w(\alpha u)=v_p(\alpha)+w(u)$,
- $w(u)=\infty$ if and only if $u=0$,
- $w(u)>\frac{1}{p-1}$.
Also recall from [@Schneider Proposition 32.6] that if $w$ is a valuation on $\mathfrak{g}$, then $w$ corresponds to a $p$-valuation $\omega$ on $G$ defined by $\omega(g):=w(\log(g))$.
\[p-valuation\]
Let $G$ be a non-abelian, uniform, abelian-by-procyclic group with split-centre, let $\mathfrak{g}:=\log(G)$, and let $V:=\exp([\mathfrak{g},\mathfrak{g}])\subseteq H^p$. Then there exists a basis $\{h_1,\cdots,h_d\}$ for $H$, $r\leq d$ such that $\{h_{r+1},\cdots,h_{d}\}$ is a basis for $Z(G)$ and $\{h_1^{p^{t_1}},\cdots,h_r^{p^{t_r}}\}$ is a basis for $V$ for some $t_i\geq 1$.\
Moreover, there exists an abelian $p$-valuation $\omega$ on $G$ such that $(i)$ $\{h_1,\cdots,h_d,X\}$ is an ordered basis for $(G,\omega)$, and $(ii)$ $\omega(h_1^{p^{t_1}})=\omega(h_2^{p^{t_2}})=\cdots=\omega(h_r^{p^{t_r}})>\omega(X)$.
First, note that since $G$ has split centre, we have that $G\cong Z(G)\times\frac{G}{Z(G)}$. In fact, since $Z(G)\subseteq H$, we have that $H\cong Z(G)\times H'$ for some $H'\leq H$, normal and isolated in $G$.
It follows that $\mathfrak{h}=Z(\mathfrak{g})\oplus\mathfrak{h}'$, where $\mathfrak{h}':=\log(H')$, and clearly $[\mathfrak{g},\mathfrak{g}]=[x,\mathfrak{h}]=[x,\mathfrak{h}']$.\
By the Elementary Divisors Theorem, there exists a basis $\{v_1,\cdots,v_{r}\}$ for $\mathfrak{h}'$ such that $\{p^{t_1}v_1,....,p^{t_{r}}v_{r}\}$ is a basis for $[x,\mathfrak{h}']$ for some $t_i\geq 0$. And since $\mathfrak{g}$ is powerful, we have in fact that $t_i\geq\epsilon$ for each $i$.\
Let $\{v_{r+1},\cdots,v_d\}$ be any basis for $Z(\mathfrak{g})$, and set $h_i:=\exp(v_i)$ for each $i=1,\cdots d$. It follows that $\{h_1^{p^{t_1}},.....,h_{r}^{p^{{t_r}}}\}$ is a basis for $V$, and that $\{h_{r+1},\cdots,h_d\}$ is a basis for $Z(G)$ as required.\
Now, the proof of [@Schneider Lemma 26.13] shows that if $\omega$ is any $p$-valuation on $G$ and we choose $c>0$ with $\omega(g)>c+\frac{1}{p-1}$ for all $g\in G$, then $\omega_c(g):=\omega(g)-c$ defines a new $p$-valuation on $G$ satisfying $\omega_c((g,h))>\omega_c(g)+\omega_c(h)$, which preserves ordered bases.\
So if $\omega$ is an *integer valued* $p$-valuation satisfying $i$ and $ii$, then take $c:=\frac{1}{e}$ for any integer $e\geq 2$ and $\omega_c$ will also satisfy $i$ and $ii$. Also $\omega_c(G)\subseteq\frac{1}{e}\mathbb{Z}$ and $\omega_c((g,h))>\omega_c(g)+\omega_c(h)$ for all $g,h\in G$, i.e. $\omega_c$ is abelian.\
Therefore, it remains to show that we can define an integer valued $p$-valuation on $G$ satisfying $i$ and $ii$.\
Assume without loss of generality that $t_1\geq t_i$ for all $i=1,\cdots,r$. Choose $a\in\mathbb{Z}$ with $a>\epsilon$, and set $a_i:=a+t_1-t_i$ for each $i$, so that $a_i+t_i=a_j+t_j$ for all $i,j=1,\cdots,r$.\
For convenience, set $v_{d+1}:=x$, and for each $i>r$, set $a_{i}=\epsilon$. Then define:
$w:\mathfrak{g}\to\mathbb{Z}\cup\{\infty\}, \underset{i=1,...,d+1}{\sum}{\alpha_i v_i}\mapsto \min\{v_p(\alpha_i)+a_i:i=1,....,d+1\}$.
We will prove that $w$ is a valuation on $\mathfrak{g}$, and that $w(p^{t_i}v_i)=w(p^{t_j}v_j)>w(x)$ for all $i,j\leq r$. Then by defining $\omega$ on $G$ by $\omega(g)=w(\log(g))$, the result will follow.\
Firstly, the property that $w(p^{t_i}v_i)=w(p^{t_j}v_j)>w(x)$ is clear, since $w(p^{t_i}v_i)=t_i+a_i=t_j+a_j=w(p^{t_j}v_j)$ for all $i,j\leq r$, and $a_i+t_i=a+t_1\geq a>\epsilon=w(x)$.\
It is also clear from the definition of $w$ that $w(u+v)\geq\min\{w(u),w(v)\}$, $w(\alpha u)=v_p(\alpha)+w(u)$, $w(u)=\infty$ if and only if $u=0$, and $w(u)>\frac{1}{p-1}$ for all $u,v\in\mathfrak{g}$, $\alpha\in\mathbb{Z}_p$.\
Therefore it remains to prove that $w([u,v])\geq w(u)+w(v)$, and it is straightforward to show that it suffices to prove this for basis elements.
So since $v_{r+1},\cdots,v_d$ are central, we need only to show that $w([x,v_i])\geq w(x)+w(v_i)$ for all $i\leq r$.\
We have that $[x,v_i]=\alpha_{i,1}p^{t_1}v_1+....+\alpha_{i,r}p^{t_{r}}v_{r}$ for some $\alpha_{i,j}\in\mathbb{Z}_p$, so:\
$w([x,v_i])=\underset{j=1,...,r}{\min}\{v_p(\alpha_{i,j})+t_j+a_j\}=\underset{j=1,....,r}{\min}\{v_p(\alpha_{i,j})\}+t_i+a_i\geq a_i+t_i\geq a_i+\epsilon=w(v_i)+w(x)$.
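The following sketch (ours; the values of $p$, the exponents $t_i$ and the structure constants are arbitrary test data) checks numerically that a weight function of the above shape satisfies $w([x,v_i])\geq w(x)+w(v_i)$ whenever $[x,v_i]=\underset{j\leq r}{\sum}{\alpha_{i,j}p^{t_j}v_j}$.

```python
# Numerical sanity check (not part of the proof) of the weight function
# w(sum_i alpha_i v_i) = min_i (v_p(alpha_i) + a_i) constructed above.
import random

p, eps = 3, 1
t = [3, 1, 2, 1]                       # elementary-divisor exponents t_i >= eps, with t_1 maximal
r = len(t)
a_base = 5                             # any integer a > eps
a = [a_base + t[0] - ti for ti in t]   # chosen so that a_i + t_i is constant
w_x = eps                              # weight attached to x

def vp(n):
    # p-adic valuation of an integer (infinity for 0)
    if n == 0:
        return float("inf")
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def w(coeffs):
    return min(vp(c) + ai for c, ai in zip(coeffs, a))

random.seed(0)
for _ in range(1000):
    i = random.randrange(r)
    alphas = [random.randrange(-100, 100) for _ in range(r)]  # structure constants alpha_{i,j}
    bracket = [alphas[j] * p ** t[j] for j in range(r)]       # coefficients of [x, v_i]
    assert w(bracket) >= w_x + a[i]                           # w([x, v_i]) >= w(x) + w(v_i)
```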
**Remark.** This result does not hold in general if $G$ is not uniform. For example, if $p>2$ and $\mathfrak{g}=$ Span$_{\mathbb{Z}_p}\{x,y,z\}$ with $[y,z]=0$, $[x,y]=py$, $[x,z]=y+pz$, then $\mathfrak{g}$ is not powerful, and there is no valuation $w$ on $\mathfrak{g}$ that equates the values of basis elements for $[\mathfrak{g},\mathfrak{g}]$.
Now suppose that $G$ is any non-abelian, $p$-valuable, abelian-by-procyclic group with split centre, principal subgroup $H$.
\[transport\]
For any $h\in H$, $g\in G$, set $v=\log(h)$, $u=\log(g)$, then $(g,h)=\exp(\underset{n\geq 1}{\sum}{\frac{1}{n!}(ad(u))^n(v)})$
$ghg^{-1}=g\exp(v)g^{-1}=\underset{n\geq 0}{\sum}{\frac{1}{n!}(gvg^{-1})^n}=\exp(gvg^{-1})$.\
Let $l_x,r_x$ be left and right multiplication by $x$, then note that $l_{\exp(x)}=\exp(l_x)$, same for $r_x$.\
Then $gvg^{-1}=\exp(u)v\exp(u)^{-1}=\exp(u)v\exp(-u)=(l_{\exp(u)}r_{\exp(-u)})(v)$\
$=(\exp(l_u)\exp(r_{-u}))(v)=\exp(l_u-r_u)(v)=\exp(ad(u))(v)=\sum_{n\geq 0}{\frac{1}{n!}(ad(u))^n(v)}$.\
Therefore $ghg^{-1}=\exp(gvg^{-1})=\exp(\underset{n\geq 0}{\sum}{\frac{1}{n!}(ad(u))^n(v)})$.\
Finally, $\log((g,h))=\log((ghg^{-1})h^{-1})=\log(ghg^{-1})-\log(h)$ since $h$ and $ghg^{-1}$ commute. Clearly this is equal to $\underset{n\geq 1}{\sum}{\frac{1}{n!}(ad(u))^n(v)}$ as required.
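The identity can be checked symbolically on strictly upper triangular matrices, where $\exp$, $\log$ and the adjoint series are all finite sums (a self-contained sketch of ours, not part of the proof):

```python
# Symbolic check (illustration only) of log((g,h)) = sum_{n>=1} ad(u)^n(v)/n!
# for g = exp(u), h = exp(v) with u, v strictly upper triangular 3x3 matrices.
from sympy import Matrix, eye, zeros

def is_zero(M):
    return all(e == 0 for e in M)

def mexp(N):
    # exponential of a nilpotent matrix (the series terminates)
    out, term, k = eye(N.rows), eye(N.rows), 1
    while not is_zero(term):
        term = term * N / k
        out += term
        k += 1
    return out

def mlog(G):
    # logarithm of a unipotent matrix (the series terminates)
    N = G - eye(G.rows)
    out, term, k = zeros(G.rows, G.rows), eye(G.rows), 1
    while True:
        term = term * N
        if is_zero(term):
            return out
        out += (-1) ** (k + 1) * term / k
        k += 1

ad = lambda a, b: a * b - b * a
u = Matrix([[0, 1, 2], [0, 0, 3], [0, 0, 0]])
v = Matrix([[0, 4, 5], [0, 0, 6], [0, 0, 0]])
g, h = mexp(u), mexp(v)
lhs = mlog(g * h * g.inv() * h.inv())   # log of the group commutator (g,h)
rhs = ad(u, v) + ad(u, ad(u, v)) / 2    # higher terms vanish for these matrices
assert is_zero(lhs - rhs)
```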
We know that $G=H\rtimes\langle X\rangle$, so for each $m\in\mathbb{N}$, define $G_m:=H\rtimes\langle X^{p^{m}}\rangle$, and it is immediate that $G_m$ is an open, normal subgroup of $G$, and that it is non-abelian, $p$-valuable, abelian-by-procyclic with principal subgroup $H$, procyclic element $X^{p^{m}}$ and split centre.
\[uniform\]
There exists $m\in\mathbb{N}$ such that $G_m$ is a uniform group.
Recall that $G$ is an open subgroup of the $p$-saturated group $Sat(G)$, i.e there exists $t\in\mathbb{N}$ with $Sat(G)^{p^t}\subseteq G$. Choose any such $t$ and let $m:=t+\epsilon$.\
Given $h\in H$, by Lemma \[transport\] we have that $(X^{p^{m}},h)=\exp(\underset{n\geq 1}{\sum}{\frac{1}{n!}(ad(p^{m}x))^n(v)})$ where $x=\log(X)$ and $v=\log(h)$ lie in $\log(Sat(G))$.\
We want to prove that $(X^{p^m},h)\in H^{p^{\epsilon}}=G'^{p^{\epsilon}}\cap H$, so since $Sat(G)^{p^m}=Sat(G)^{p^{t+\epsilon}}\subseteq G^{p^{\epsilon}}$, it suffices to prove that $\frac{1}{n!}ad(p^{m}x)^n(v)\in p^{m}\log(Sat(G))$ for all $n\geq 1$.\
Clearly, for each $n$, $ad(p^{m}x)^n(v)=[p^{m}x,ad(p^{m}x)^{n-1}(v)]$, so we only need to prove that $ad(p^{m}x)^{n-1}(v)\in p^{v_p(n!)}\log(Sat(G))$, in which case:\
$\frac{1}{n!}ad(p^{m}x)^n(v)=\frac{p^{m}}{n!}[x,ad(p^{m}x)^{n-1}(v)]\in \frac{p^{v_p(n!)+m}}{n!}\log(Sat(G))\subseteq p^{m}\log(Sat(G))$.\
Let $w$ be a saturated valuation on $\log(Sat(G))$, i.e. if $w(x)>\frac{1}{p-1}+1$ then $x=py$ for some $y\in\log(Sat(G))$.\
Then since $w(ad(p^{m}x)^{n-1}(v))\geq (n-1)w(p^{m}x)+w(v)>\frac{n-1}{p-1}+\frac{1}{p-1}$, it follows that $ad(p^{m}x)^{n-1}(v)=p^kv'$ for some $v'\in\log(Sat(G))$, $k\geq\frac{n-1}{p-1}$.\
We will show that $k\geq v_p(n!)$, and it will follow that $ad(p^{m}x)^{n-1}(v)=p^kv'\in p^{v_p(n!)}\log(Sat(G))$.\
If $n=a_0+a_1p+\cdots+a_rp^r$ for some $0\leq a_i<p$, then let $s(n)=a_0+a_1+\cdots+a_r$. We know from [@Lazard I 2.2.3] that $v_p(n!)=\frac{n-s(n)}{p-1}$.
Suppose that $v_p(n!)>\frac{n-1}{p-1}$, i.e. $\frac{n-s(n)}{p-1}>\frac{n-1}{p-1}$, and hence $s(n)<1$. This means that $s(n)=0$ and hence $n=0$ – contradiction.\
Therefore $k\geq \frac{n-1}{p-1}\geq v_p(n!)$ as required.
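The identity quoted from [@Lazard] is Legendre's formula; the following quick computational check of it, and of the bound $v_p(n!)\leq\frac{n-1}{p-1}$ used in the proof, is ours (so are the helper names).

```python
# Check (illustration only): v_p(n!) = (n - s(n)) / (p - 1) and v_p(n!) <= (n - 1) / (p - 1).
from math import factorial

def vp(n, p):
    # p-adic valuation of a non-zero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def s(n, p):
    # sum of the base-p digits of n
    total = 0
    while n:
        total += n % p
        n //= p
    return total

for p in (2, 3, 5, 7):
    for n in range(1, 300):
        val = vp(factorial(n), p)
        assert val == (n - s(n, p)) // (p - 1)   # Legendre's formula (the quotient is exact)
        assert val * (p - 1) <= n - 1            # the bound v_p(n!) <= (n-1)/(p-1)
```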
Extension of Filtration
-----------------------
From now on, fix $c\in\mathbb{N}$ minimal such that $G_c$ is uniform, we know that this exists by Lemma \[uniform\]. Let $\mathfrak{g}:=\log(G_c)$ – a powerful $\mathbb{Z}_p$-subalgebra of $\log(Sat(G))$.
\[c(G)\]
Define the *extended commutator subgroup* of $G$ to be
$c(G):=\left(Z(G)\times\exp([\mathfrak{g},\mathfrak{g}])\right)\rtimes\langle X^{p^c}\rangle\subseteq G_c$.
\[extended-commutator\]
If $G$ is any $p$-valuable, abelian-by-procyclic group with split centre, then:\
i. $c(G)$ is an open normal subgroup of $G$.\
ii. There exists a basis $\{k_1,k_2,\cdots,k_{d}\}$ for $H$ such that $\{k_{r+1},\cdots,k_d\}$ is a basis for $Z(G)$ and
$\{u_{c}(k_1),u_{c}(k_2),\cdots,u_{c}(k_{r}),k_{r+1},\cdots,k_d,X^{p^c}\}$ is a basis for $c(G)$.\
iii. We may choose this basis $\{k_1,\cdots,k_d\}$ such that for each $i\leq r$, there exist $\alpha_{i,j}\in\mathbb{Z}_p$ with $p\mid\alpha_{i,j}$ for
$j<i$ and $\alpha_{i,i}=1$, such that $Xu_{c}(k_i)X^{-1}=u_{c}(k_1)^{\alpha_{i,1}}\cdots u_{c}(k_d)^{\alpha_{i,d}}$.\
iv. There is an abelian $p$-valuation $\omega$ on $c(G)$ such that $\{u_{c}(k_1),\cdots,u_{c}(k_r),k_{r+1},\cdots,k_d,X^{p^c}\}$ is an ordered
basis for $(c(G),\omega)$ and $\omega(u_{c}(k_i))=\omega(u_{c}(k_j))>\omega(X^{p^c})$ for all $i,j\leq r$.
Let $V=\exp([\mathfrak{g},\mathfrak{g}])$, and let $x=\log(X)\in\log(Sat(G))$.\
If $h\in V$ then $h=\exp([p^c x,u])$ for some $u\in\log(H)$, say $u=\log(k)$ with $k\in H$, and so:
$h=\exp([\log(X^{p^c}),\log(k)])=\underset{n\rightarrow\infty}{\lim}{(X^{p^{n+c}}k^{p^n}X^{-p^{n+c}}k^{-p^n})^{p^{-2n}}}$ by [@Lazard IV.3.2.3].
Thus $XhX^{-1}=\underset{n\rightarrow\infty}{\lim}{(X^{p^{n+c}}(XkX^{-1})^{p^n}X^{-p^{n+c}}(XkX^{-1})^{-p^n})^{p^{-2n}}}=\exp([\log(X^{p^c}),\log(XkX^{-1})])$.\
Clearly this lies in $V$, and hence $V$ is normal in $G$.\
Using Lemma \[transport\], it is straightforward to show that for all $h\in H$, $(X^{p^{c}},h)\in V$, therefore:
$hX^{p^{c}}h^{-1}=(X^{p^{c}},h)^{-1}X^{p^{c}}\in V\rtimes\langle X^{p^c}\rangle$
It follows that $c(G)=Z(G)\times V\rtimes\big\langle X^{p^c}\big\rangle$ is normal in $G$.\
Using Proposition \[p-valuation\], we may choose a basis $\{h_1,\cdots,h_d\}$ for $H$ such that $\{h_{r+1},\cdots,h_d\}$ is a basis for $Z(G)$ and $\{h_1^{p^{t_1}},\cdots,h_r^{p^{t_r}}\}$ is a basis for $V$.
Therefore $c(G)$ has basis $\{h_1^{p^{t_1}},\cdots,h_r^{p^{t_r}},h_{r+1},\cdots,h_d,X^{p^{c}}\}$, and hence it is open in $G$ as required.\
Now, for each $i=1,\cdots,d$, let $u_i=\log(h_i)$, then $\{u_1,\cdots,u_d\}$ is a $\mathbb{Z}_p$-basis for $\log(H)$, and $\{p^{t_1}u_1,\cdots,p^{t_r}u_r\}$ is a basis for $[\mathfrak{g},\mathfrak{g}]$.\
Therefore, for each $i$, $p^{t_i}u_i=[p^c x,v_i]$ for some $v_i\in\log(H)$, in fact we may assume that $v_i\in$ Span$_{\mathbb{Z}_p}\{u_1,\cdots,u_r\}$, and it follows that $\{v_1,\cdots,v_r\}$ forms a basis for Span$_{\mathbb{Z}_p}\{u_1,\cdots,u_r\}$.\
Let $k_i:=\exp(v_i)$ for each $i=1,\cdots r$, and for $i>r$ set $k_i=h_i$. Then we know that
$u_{c}(k_i)=\exp([p^{c}\log(X),\log(k_i)])=\exp([p^{c}x,v_i])=\exp(p^{t_i}u_i)=h_i^{p^{t_i}}$ for each $i\leq r$
It follows that $\{u_{c}(k_1),\cdots,u_{c}(k_r),k_{r+1},\cdots,k_d,X^{p^c}\}$ is a basis for $c(G)$, thus giving part $ii$.\
Now, $V$ is normal in $G$, and clearly $V^p$ is also normal, so consider the action $\psi$ of $X$ on the $r$-dimensional $\mathbb{F}_p$-vector space $\frac{V}{V^p}$, i.e. $\psi(hV^p)=(XhX^{-1})V^p$. It is clear that this action $\psi$ is $\mathbb{F}_p$-linear.\
Furthermore, since $G_c$ is uniform and $X^{p^c}\in G_c$, we have that $\psi^{p^c}=id$, i.e. $(\psi-id)^{p^c}=0$. Therefore $\psi$ has a $1$-eigenvector in $\frac{V}{V^p}$.
It follows that we may choose a basis for $\frac{V}{V^p}$ such that $\psi$ is represented by an upper-triangular matrix, with $1$’s on the diagonal.\
This basis is obtained by transforming $\{u_{c}(k_1),\cdots,u_{c}(k_r)\}=\{h_1^{p^{t_1}},\cdots,h_r^{p^{t_r}}\}$ by an invertible matrix over $\mathbb{Z}_p$. The new basis will also have the same form $\{u_{c}(k_1'),\cdots,u_{c}(k_r')\}=\{h_1'^{p^{t_1}},\cdots,h_r'^{p^{t_r}}\}$ as described by $ii$, and it will satisfy $iii$ as required.\
Finally, using Proposition \[p-valuation\], we see that there is an abelian $p$-valuation $\omega$ on the uniform group $G_c$ such that $\omega(u_{c}(k_i))=\omega(u_{c}(k_j))>\omega(X^{p^c})$ for all $i,j\leq r$, and of course $\omega$ restricts to $c(G)$, which gives us part $iv$.
We call a basis $\{k_1,\cdots,k_d\}$ for $H$ satisfying these conditions a *$k$-basis* for $H$.\
This result gives us a $p$-valuation that equates the values of $u(k_i)$ and $u(k_j)$ for each $i,j$. Unfortunately, with the standard Lazard filtration, this does not mean that $w(u(k_i)-1)=w(u(k_j)-1)$.
\[equalising\]
Let $G$ be a non-abelian, $p$-valuable, abelian-by-procyclic group with split centre. Let $c(G)$ be the extended commutator subgroup, and let $\{k_1,\cdots,k_{d}\}$ be a $k$-basis for $H$.
Then there exists a complete, Zariskian filtration $w:kG\to\mathbb{Z}\cup\{\infty\}$ such that:\
i. For all $i,j=1,\cdots,r$, $w(u_{c}(k_i)-1)=w(u_{c}(k_j)-1)=\theta$ for some integer $\theta>0$.\
ii. The associated graded $gr$ $kG\cong k[T_1,\cdots,T_{d+1}]\ast \frac{G}{c(G)}$, where $T_i=$ gr$(u_{c}(k_i)-1)$ for $i\leq r$, $T_i=$ gr$(k_i-1)$ for $r+1\leq i\leq d$ and $T_{d+1}=$ gr$(X^{p^{c}}-1)$. Each $T_i$ has positive degree, and $deg(T_i)=\theta$ for $i=1,\cdots,r$.\
iii. Set $\bar{X}:=$ gr$(X)$. Then $T_r$ is central, and for each $i< r$, $\bar{X}T_i\bar{X}^{-1}=T_i+D_i$ for some $D_i\in$ Span$_{\mathbb{F}_p}\{T_{i+1},\cdots,T_r\}$.\
Let $A:=(k[T_1,\cdots,T_{d+1}])^{\frac{G}{c(G)}}$ be the ring of invariants, then $A$ is Noetherian, central in gr $kG$, and $k[T_1,\cdots,T_{d+1}]$ is finitely generated over $A$.
Set $U=c(G)=\langle u_{c}(k_1),\cdots,u_{c}(k_r),k_{r+1},\cdots,k_d,X^{p^{c}}\rangle$. Then $U$ is an open, normal subgroup of $G$ by Proposition \[extended-commutator\]($i$), and hence $kG\cong kU\ast\frac{G}{U}$.\
Using Proposition \[extended-commutator\]($iv$), we choose an abelian $p$-valuation $\omega$ on $U$ such that $\frac{1}{e}\theta:=\omega(u_{c}(k_i))=\omega(u_{c}(k_j))>\omega(X^{p^{c}})$ for all $i,j\leq r$, where $\theta>0$ is an integer. Then we can define the Lazard valuation $w$ on $kU$ with respect to $\omega$.
Since $\{u_{c}(k_1),\cdots,u_{c}(k_{r}),k_{r+1},\cdots,k_d,X^{p^{c}}\}$ is an ordered basis for $(U,\omega)$, it follows from the definition of $w$ that:
$w(u_{c}(k_j)-1)=e\omega(u_{c}(k_j))=e\omega(u_{c}(k_i))=w(u_{c}(k_i)-1)=\theta$ for all $i,j\leq r$.
Furthermore, we have that if $V=\exp([\mathfrak{g},\mathfrak{g}])\subseteq U$ and $r$ lies in the augmentation ideal of $kV$, then $w(r)\geq \theta$.\
We want to apply Proposition \[crossed product\] and extend $w$ to $kG\cong kU\ast\frac{G}{U}$. So we only need to verify that for all $g\in G$, $r\in kU$, $w(grg^{-1})=w(r)$, and it suffices to verify this property for $r=u_{c}(k_1)-1,\cdots,u_{c}(k_{r})-1,$
$k_{r+1}-1,\cdots,k_d-1,X^{p^{c}}-1$, since they form a topological generating set for $kU$.\
Since $k_{r+1},\cdots,k_d\in Z(G)$, it is obvious that $w(g(k_l-1)g^{-1})=w(k_l-1)$ for each $r+1\leq l\leq d$, $g\in G$.\
For each $j\leq r$, $gu_{c}(k_j)g^{-1}\in V$, thus:\
$w(gu_{c}(k_j)g^{-1}-1)\geq \theta=w(u_{c}(k_j)-1)$ and it follows easily that equality holds.\
Finally, $g=hX^{\beta}$ for some $h\in H$, $\beta\in\mathbb{Z}_p$, so
$gX^{p^{c}}g^{-1}-1=hX^{p^{c}}h^{-1}-1=((h,X^{p^{c}})-1)(X^{p^{c}}-1)+((h,X^{p^{c}})-1)+(X^{p^{c}}-1)$.
Hence $w(g(X^{p^{c}}-1)g^{-1})\geq\min\{w((h,X^{p^{c}})-1),w(X^{p^{c}}-1)\}$, with equality if $w((h,X^{p^{c}})-1)\neq w(X^{p^{c}}-1)$. But since $(h,X^{p^{c}})\in V$, we have that
$w((h,X^{p^{c}})-1)\geq \theta=e\omega(u_{c}(k_i))>e\omega(X^{p^{c}})=w(X^{p^{c}}-1)$
and hence $w(gX^{p^{c}}g^{-1}-1)=w(X^{p^{c}}-1)$ as required. Note that it is here that we need the fact that $\omega(u_{c}(k_i))>\omega(X^{p^{c}})$.\
Therefore we can apply Proposition \[crossed product\], and extend $w$ to $kG$ so that gr$_{w}$ $kG\cong ($gr$_w$ $kU)\ast\frac{G}{U}$, and we have that gr$_w$ $kU\cong k[T_1,....,T_{d+1}]$ as usual, where $T_i=$ gr$(u_{c}(k_i)-1)$ has degree $\theta$ for $i\leq r$, $T_i=$ gr$(k_i-1)$ for $r+1\leq i\leq d$ and $T_{d+1}=$ gr$(X^{p^{c}}-1)$ as required.\
Using Proposition \[extended-commutator\]($iii$), we see that $Xu_{c}(k_i)X^{-1}=u_{c}(k_1)^{\alpha_{i,1}}\cdots u_{c}(k_r)^{\alpha_{i,r}}$ where $p\mid\alpha_{i,j}$ for $j<i$ and $\alpha_{i,i}=1$.
Hence $\bar{X}T_i\bar{X}^{-1}=\overline{\alpha}_{i,1}T_1+\cdots+\overline{\alpha}_{i,r}T_r=T_i+\overline{\alpha}_{i,i+1}T_{i+1}+\cdots+\overline{\alpha}_{i,r}T_r$ for each $i\leq r$, thus giving part $iii$.\
Now, every element $u\in $ gr $kU$ is a root of the polynomial $\underset{g\in\frac{G}{c(G)}}{\prod}{(s-gug^{-1})}\in A[s]$, hence gr $kU$ is integral over $A$.\
So since $k\subseteq A$ and gr $kU\cong k[T_1,....,T_{d+1}]$ is a finitely generated $k$-algebra, we have that gr $kU$ is a finitely generated $A$-algebra, and hence finitely generated as an $A$-module by the integral property.\
So it follows that gr$_{w}$ $kG$ is finitely generated as a right $A$-module. Furthermore, since gr $kU$ is Noetherian and commutative, it follows from [@Enkin Theorem 2] that $A$ is Noetherian.\
Furthermore, it is easy to show that the twist $\frac{G}{U}\times\frac{G}{U}\to ($gr $kU)^{\times}$ of the crossed product is trivial, so it follows that if $r\in $ gr $kU$ is invariant under the action of $\frac{G}{U}$ then it is central. Hence $A$ is central in gr $kG$.
**Remark.** For any $h\in H$, we have that $(u_{c}(h)-1)+F_{\theta+1}kG\in$ Span$_{\mathbb{F}_p}\{T_1,\cdots,T_r\}$.
Now, let $P$ be a faithful prime ideal of $kG$, and let $w$ be a filtration on $kG$ satisfying the conditions of the proposition. Then $w$ induces the quotient filtration $\overline{w}$ of $\frac{kG}{P}$.\
By [@LVO Ch. II Corollary 2.1.5], $P$ is closed in $kG$, and hence $\frac{kG}{P}$ is complete, and gr$_{\overline{w}}$ $\frac{kG}{P}\cong\frac{\text{gr }kG}{\text{gr }P}$ is Noetherian. Therefore $\overline{w}$ is Zariskian by [@LVO Ch. II Theorem 2.1.2], and clearly $\tau:kG\to\frac{kG}{P}$ is continuous.\
For convenience, set $\overline{T}:=T+$ gr $P\in$ gr $\frac{kG}{P}$ for all $T\in $ gr $kc(G)=k[T_1,\cdots,T_{d+1}]$.\
Let $\overline{A}:=\frac{A+\text{gr }P}{\text{gr }P}$ be the image of $A$ in gr $\frac{kG}{P}$, and let $A':=(k[\overline{T}_1,\cdots,\overline{T}_{d+1}])^{\frac{G}{c(G)}}$ be the ring of $\frac{G}{c(G)}$-invariants in $k[\overline{T}_1,\cdots,\overline{T}_{d+1}]=\frac{k[T_1,\cdots,T_{d+1}]+\text{gr }P}{\text{gr } P}$.\
Since $\frac{G}{c(G)}$-invariant elements in gr $kG$ are $\frac{G}{c(G)}$-invariant modulo gr $P$, it is clear that $\overline{A}\subseteq A'\subseteq\frac{k[T_1,\cdots,T_{d+1}]+\text{gr }P}{\text{gr } P}$.\
Then since $k[T_1,\cdots,T_{d+1}]$ is finitely generated over $A$ by Theorem \[equalising\], it follows that $A'$ is finitely generated over the Noetherian ring $\overline{A}$, hence $A'$ is Noetherian.\
Therefore, $\frac{kG}{P}$ is a prime ring with a Zariskian filtration $\overline{w}$ such that gr $\frac{kG}{P}$ is finitely generated over a central, Noetherian subring $A'$. Hence we may apply Theorem \[filtration\] to produce a non-commutative valuation on $Q(\frac{kG}{P})$.
A special case
--------------
In the next section, we will prove Theorem \[B\] in full generality, but first we need to deal with a special case:\
Fix a $k$-basis $\{k_1,\cdots,k_d\}$ for $H$, and a Zariskian filtration $w$ on $kG$ satisfying the conditions of Theorem \[equalising\]. Then we have that $T_r\in A$ and $\bar{X}T_i\bar{X}^{-1}=T_i+D_i$ for some $D_i\in$ Span$_{\mathbb{F}_p}\{T_{i+1},\cdots,T_{r}\}$ for all $i<r$.\
We will now suppose that for each $i<r$, $D_i$ is nilpotent modulo gr $P$.\
Then for sufficiently high $m$, $\bar{X}T_i^{p^m}\bar{X}^{-1}= (T_i+D_i)^{p^m}=T_i^{p^m}+D_i^{p^m}\equiv T_i^{p^m}$ (mod gr $P$), i.e. $\overline{T}_i^{p^m}\in A'$.\
Fix an integer $m_0$ such that $\overline{T}_i^{p^{m_0}}\in A'$ for all $i\leq r$.
\[CSA\]
Suppose that for each $i=1,\cdots,r$, $T_i$ is nilpotent modulo gr $P$, i.e. $\overline{T}_i$ is nilpotent. Then $Q(\frac{kG}{P})$ is a central simple algebra.
Using Theorem \[equalising\]($ii$), every element of gr $kG$ has the form
$\underset{g\in\frac{G}{c(G)}}{\sum}(\underset{\alpha\in\mathbb{N}^{d+1}}{\sum}{\lambda_{\alpha}T_1^{\alpha_1}\cdots T_{d+1}^{\alpha_{d+1}}})g$
where $\lambda_{\alpha}=0$ for all but finitely many $\alpha$.\
Therefore, it follows immediately from nilpotence of $\overline{T}_1,\cdots,\overline{T}_r$ that $\frac{\text{gr }kG}{\text{gr }P}$ is finitely generated over $\frac{k[T_{r+1},\cdots,T_{d+1}]+\text{gr }P}{\text{gr }P}$.\
But since $Z(G)=\langle k_{r+1},\cdots,k_d\rangle$ by Proposition \[extended-commutator\]($ii$), it follows that under the quotient filtration, gr $\frac{k(Z(G)\times \langle X^{p^{c}}\rangle)+P}{P}\cong\frac{k[T_{r+1},\cdots,T_{d+1}]+\text{gr }P}{\text{gr }P}$.\
So since gr $\frac{kG}{P}$ is finitely generated over gr $\frac{k(Z(G)\times\langle X^{p^{c}}\rangle) +P}{P}$, and $\frac{k(Z(G)\times\langle X^{p^{c}}\rangle) +P}{P}$ is closed in $\frac{kG}{P}$, it follows from [@LVO Ch. I Theorem 5.7] that $\frac{kG}{P}$ is finitely generated over $\frac{k(Z(G)\times\langle X^{p^{c}}\rangle) +P}{P}$.\
But $k(Z(G)\times\langle X^{p^{c}}\rangle)$ is commutative, so $\frac{kG}{P}$ is finitely generated as a right module over a commutative subring. Therefore, by [@McConnell Corollary 13.1.14(iii)], $\frac{kG}{P}$ satisfies a polynomial identity.
So since $\frac{kG}{P}$ is prime, it follows from Posner’s theorem [@McConnell Theorem 13.6.5], that $Q(\frac{kG}{P})$ is a central simple algebra.
**Remark.** This proof relies on the split centre property, without which we would not be able to argue that $\frac{k[T_{r+1},\cdots,T_{d+1}]+\text{gr }P}{\text{gr }P}$ arises as the associated graded of some commutative subring of $\frac{kG}{P}$.\
So let us assume that $Q(\frac{kG}{P})$ is not a CSA. Then by the proposition, we know that there exists $s\leq r$ such that $\overline{T}_s$ is not nilpotent.\
Since we know that $\overline{T}_s^{p^{m_0}}\in A'$, it follows there exists a minimal prime ideal $\mathfrak{q}$ of $A'$ such that $\overline{T}_s^{p^{m_0}}\notin\mathfrak{q}$. Using Theorem \[filtration\], we let $v=v_{\mathfrak{q}}$ be the non-commutative valuation on $Q(\frac{kG}{P})$ corresponding to $\mathfrak{q}$.\
So, $\overline{T}_s^{p^{m_0}}=$ gr$\tau(u_{c}(k_s)-1)^{p^{m_0}}\in A'\backslash\mathfrak{q}$, and hence using Theorem \[comparison\], we see that $\tau(u_{c}(k_s)-1)^{p^k}$ is $v$-regular for some $k\geq m_0$, and hence $\rho(\tau(u_{c}(k_s)-1)^{p^k})=v(\tau(u_{c}(k_s)-1)^{p^k})$.\
Recall that $\lambda=\inf\{\rho(\tau(u(g)-1)):g\in G\}<\infty$ by Lemma \[faithful\].
\[minimal\]
Let $h\in H$ such that $\rho(\tau(u(h)-1))=\lambda$. Then $\tau(u(h)-1)^{p^m}$ is $v$-regular for sufficiently high $m$.
It is clear that $w(u_c(h)-1)\geq\theta=w(u_c(k_s)-1)$, so let $T(h):=u_c(h)-1+F_{\theta+1}kG\in$ Span$_{\mathbb{F}_p}\{T_1,\cdots,T_r\}$. Then $T(h)=$ gr$(u_c(h)-1)$ if and only if $w(u_c(h)-1)=\theta$, otherwise $T(h)=0$.\
We know that gr$(u_{c}(k_s)-1)=T_s\notin$ gr $P$, and hence $\overline{w}(\tau(u_{c}(k_s)-1))=w(u_{c}(k_s)-1)$, giving that $\overline{w}(\tau(u_{c}(h)-1))\geq w(u_{c}(h)-1)\geq w(u_{c}(k_s)-1)=\overline{w}(\tau(u_{c}(k_s)-1))$.\
Also, $T(h)^{p^{m_0}}+$ gr $P\in A'$, and if $T(h)^{p^{m_0}}+$ gr $P\notin\mathfrak{q}$ then $T(h)^{p^{m_0}}+$ gr $P=$ gr$_{\overline{w}}(\tau(u_c(h)-1)^{p^{m_0}})\in A'\backslash\mathfrak{q}$, so it follows from Theorem \[comparison\] that $\tau(u(h)-1)^{p^m}$ is $v$-regular for $m>>0$.
So, suppose for contradiction that $T(h)^{p^{m_0}}+$ gr $P\in\mathfrak{q}$:\
If $T(h)^{p^{m_0}}+$ gr $P=0$ then $\overline{w}(\tau(u_c(h)-1)^{p^{m_0}})>p^{m_0}\theta=\overline{w}(\tau(u_c(k_s)-1))$, and if $T(h)^{p^{m_0}}\neq 0$ then $T(h)^{p^{m_0}}+$ gr $P=$ gr$_{\overline{w}}(\tau(u_c(h)-1)^{p^{m_0}})\in\mathfrak{q}$. In either case, using Theorem \[comparison\], it follows that for $m$ sufficiently high:
$v(\tau(u_{c}(h)-1)^{p^m})>v(\tau(u_{c}(k_s)-1)^{p^m})$.
Therefore, $v(\tau(u(h)-1))>v(\tau(u(k_s)-1))$, so since $\tau(u(k_s)-1)^{p^k}$ is $v$-regular:\
$\rho(\tau(u(h)-1))\geq \frac{1}{p^k}v(\tau(u(h)-1)^{p^k})>\frac{1}{p^k}v(\tau(u(k_s)-1)^{p^k})=\rho(\tau(u(k_s)-1))\geq\lambda$ – contradiction.
Recall the definition of a growth preserving polynomial (GPP) from Section 2.4, and recall that the identity map is a non-trivial GPP.
\[special-case\]
Suppose that $Q(\frac{kG}{P})$ is not a CSA, and that $D_i$ is nilpotent mod gr $P$ for all $i<r$. Then $id:\tau(kH)\to\tau(kH)$ is a special GPP with respect to some non-commutative valuation on $Q(\frac{kG}{P})$.
This is immediate from Definition \[GPP\] and Lemma \[minimal\].
Therefore, we will assume from now on that $D_i$ is not nilpotent mod gr $P$ for some $i$.\
**Remark.** If $G$ is uniform then $D_i=0$ for all $i$, but in general we cannot assume this. For example, if $p>2$ and $G=\langle X,Y,Z\rangle$ where $Y$ and $Z$ commute, $XYX^{-1}=Y^r$, $XZX^{-1}=(YZ)^r$, $r=e^p\in\mathbb{Z}_p$, then $G$ is non-uniform, $c=1$, and $\{Z,Y^{p-1}Z^p\}$ is a $k$-basis for $H=\langle Y,Z\rangle$. In this case, $\bar{X}T_2\bar{X}^{-1}=T_2$, $\bar{X}T_1\bar{X}^{-1}=T_1+T_2$.
Construction of Growth Preserving Polynomials
=============================================
As in the previous section, we will take $G$ to be a $p$-valuable, abelian-by-procyclic group with split centre, $P$ a faithful prime ideal of $kG$, and $w$ a Zariskian filtration on $kG$ satisfying the conditions of Theorem \[equalising\].\
We fix a $k$-basis $\{k_1,\cdots,k_d\}$ for $H$, and let $T_i:=$ gr$(u_{c}(k_i)-1)$ for each $i\leq r$, $T_i:=$ gr$(k_i-1)$ for $i>r$. We know that $T_r,\cdots,T_d$ are central, and $D_i:=\bar{X}T_i\bar{X}^{-1}-T_i\in$ Span$_{\mathbb{F}_p}\{T_{i+1},\cdots,T_r\}$ for each $i<r$.\
We now assume that not all the $D_i$ are nilpotent modulo gr $P$, so let $s<r$ be maximal such that $D_s$ is not nilpotent, i.e. for all $i>s$, $\overline{T}_i^{p^m}\in A'$ for sufficiently high $m$.
Throughout, we will fix $m_0$ such that $\overline{T}_i^{p^{m_0}}\in A'$ for all $i>s$, and we may assume that $m_1\geq m_0$.\
By definition, we know that $D_s\in$ Span$_{\mathbb{F}_p}\{T_{s+1},\cdots,T_r\}$, and hence $\overline{D}_s^{p^{m_0}}\in A'$. So since $D_s$ is not nilpotent mod gr $P$, we can fix a minimal prime ideal $\mathfrak{q}$ of $A'$ such that $\overline{D}_s^{p^{m_0}}\in A'\backslash\mathfrak{q}$.\
Let $v=v_{\mathfrak{q}}$ be the corresponding non-commutative valuation given by Theorem \[filtration\].
The Reduction Coefficients
--------------------------
Define a function $L$ of commuting variables $x$ and $y$ by:
$$L(x,y):=x^p-xy^{p-1}$$
Moreover, for commuting variables $x,y_1,y_2,\cdots,y_n$, define the iterated function
$$L^{(n)}(x,y_1,y_2,\cdots,y_n):=L(L(\cdots L(L(x,y_1),y_2)\cdots,y_{n-1}),y_n)$$
For $n=0$, we define $L^{(n)}(x,y_1,\cdots,y_n):=x$.\
We can readily see that for any commutative $\mathbb{F}_p$-algebra $S$ and $y_1,\cdots,y_n\in S$, the map $L^{(n)}(-,y_1,\cdots,y_n)$ is $\mathbb{F}_p$-linear.
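Indeed, writing out one step for the reader's convenience: since $S$ has characteristic $p$ and $\alpha^p=\alpha$ for $\alpha\in\mathbb{F}_p$,
$$L(x_1+x_2,y)=(x_1+x_2)^p-(x_1+x_2)y^{p-1}=L(x_1,y)+L(x_2,y),\qquad L(\alpha x,y)=\alpha^px^p-\alpha xy^{p-1}=\alpha L(x,y),$$
and the $\mathbb{F}_p$-linearity of $L^{(n)}(-,y_1,\cdots,y_n)$ follows by composing these maps.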
\[polynomial\]
Let $S$ be an $\mathbb{F}_p$-algebra, and let $y_1,\cdots,y_n\in S$ commute. Then there exist $a_0,a_1,\cdots,a_{n-1}\in S$ such that $L^{(n)}(x,y_1,\cdots,y_n)=a_0x+a_1x^p+\cdots+a_{n-1}x^{p^{n-1}}+x^{p^n}$ for all $x$ commuting with $y_1,\cdots,y_n$.
Both statements are trivially true for $n=0$, so assume that they hold for some $n\geq 0$ and proceed by induction on $n$:\
So $L^{(n)}(x,y_1,\cdots,y_n)=a_0x+a_1x^p+\cdots+a_{n-1}x^{p^{n-1}}+x^{p^n}$, and:\
$L^{(n+1)}(x,y_1,\cdots,y_{n+1})=L^{(n)}(x,y_1,\cdots,y_n)^p-L^{(n)}(x,y_1,\cdots,y_n)y_{n+1}^{p-1}$\
$=(a_0x+\cdots+a_{n-1}x^{p^{n-1}}+x^{p^n})^p-(a_0x+\cdots+a_{n-1}x^{p^{n-1}}+x^{p^n})y_{n+1}^{p-1}$\
$=(-a_0y_{n+1}^{p-1})x+(a_0^p-a_1y_{n+1}^{p-1})x^p+\cdots+(a_{n-2}^p-a_{n-1}y_{n+1}^{p-1})x^{p^{n-1}}+(a_{n-1}^p-y_{n+1}^{p-1})x^{p^n}+x^{p^{n+1}}$.\
So setting $b_0=(-a_0y_{n+1}^{p-1})$, $b_i=(a_{i-1}^p-a_iy_{n+1}^{p-1})$ for $1\leq i\leq n$ (taking $a_n:=1$), we have that $L^{(n+1)}(x,y_1,\cdots,y_{n+1})=b_0x+b_1x^p+\cdots+b_nx^{p^n}+x^{p^{n+1}}$ as required.
Now, let $B_s:=D_s$, and for each $1\leq i<s$, let $B_i:=L^{(s-i)}(D_i,B_s,\cdots,B_{i+1})$.
\[centralising\]
For each $i\leq s$, $L^{(s-i+1)}(\overline{T}_i,\overline{B}_s,\cdots,\overline{B}_i)^{p^{m_0}}\in A'$, so in particular $\overline{B}_i^{p^{m_0}}\in A'$.
Note that $C\in k[T_1,\cdots,T_d]$ is central if and only if it is invariant under the action of $\bar{X}$.\
Also, $L(C,C)=C^p-CC^{p-1}=0$, and if $D$ is $\bar{X}$-invariant, then $\bar{X}L(C,D)\bar{X}^{-1}=L(\bar{X}C\bar{X}^{-1},D)$\
**Notation.** Let $Y':=\overline{Y}^{p^{m_0}}$ for any $Y\in$ gr $kG$.\
We will proceed by downwards induction on $i$, starting with $i=s$. Clearly $B_s'=D_s'\in$ Span$_{\mathbb{F}_p}\{T_{s+1}',\cdots,T_{r}'\}$ is invariant under the action of $\bar{X}$, so:\
$\bar{X}L(T_s',B_s')\bar{X}^{-1}=L(\bar{X}T_s'\bar{X}^{-1},B_s')=L(T_s'+B_s',B_s')=L(T_s',B_s')+L(B_s',B_s')=L(T_s',B_s')$.\
Therefore $L(T_s',B_s')$ is $\bar{X}$-invariant and the result holds.\
Suppose we have the result for all $s\geq j>i$.\
Then $B_i'=L^{(s-i)}(D_i',B_s',\cdots,B_{i+1}')$, and $D_i'\in$ Span$_{\mathbb{F}_p}\{T_{i+1}',\cdots,T_{r}'\}$.\
Using linearity of $L(-,y)$ we have that
$B_i'\in$ Span$_{\mathbb{F}_p}\{L^{(s-i)}(T_{j}',B_s',\cdots,B_{i+1}'):j=i+1,\cdots,r\}$
therefore $B_i'$ is $\bar{X}$-invariant by the inductive hypothesis.\
Also, since $B_s',\cdots,B_{i+1}'$ are $\bar{X}$-invariant, we have that:\
$\bar{X}L^{(s-i)}(T_i',B_s',\cdots,B_{i+1}')\bar{X}^{-1}=L^{(s-i)}(\bar{X}T_i'\bar{X}^{-1},B_s',\cdots,B_{i+1}')$\
$=L^{(s-i)}(T_i'+\bar{X}T_i'\bar{X}^{-1}-T_i',B_s',\cdots,B_{i+1}')=L^{(s-i)}(T_i',B_s',\cdots,B_{i+1}')+B_i'$\
The final equality follows from linearity of $L(-,B_s',\cdots,B_{i+1}')$ and the fact that $D_i'=\bar{X}T_i'\bar{X}^{-1}-T_i'$.\
Set $C:=L^{(s-i)}(T_i',B_s',\cdots,B_{i+1}')$, so that
$L^{(s-i+1)}(T_i',B_s',\cdots,B_i')=L(C,B_i')=C^p-CB_i'^{p-1}$, and $\bar{X}C\bar{X}^{-1}=C+B_i'$
Then:\
$\bar{X}L(C,B_i')\bar{X}^{-1}=L(\bar{X}C\bar{X}^{-1},B_i')=L(C+B_i',B_i')=L(C,B_i')+L(B_i',B_i')=L(C,B_i')$\
Hence $L(C,B_i')=L^{(s-i+1)}(T_i',B_s',\cdots,B_i')$ is $\bar{X}$-invariant as required.
It follows immediately from this Lemma that $L^{(s)}(\overline{T},\overline{B}_s,\cdots,\overline{B}_1)^{p^{m_0}}\in A'$ for all $T\in$ Span$_{\mathbb{F}_p}\{T_1,\cdots,T_{r}\}$ (i.e. for all $T=u_{c}(h)-1+F_{\theta+1}kG$).\
Now, for each $i\leq s$, $D_i\in$ Span$_{\mathbb{F}_p}\{T_{i+1},\cdots,T_r\}$, so either $D_i=0$ or $D_i=u_c(f_i)-1+F_{\theta+1}kG$ for some $f_i\in H$ with $w(u_c(f_i)-1)=\theta$.
\[reduction\]
Define $y_s:=u_{c}(f_s)-1$, and for each $1\leq i<s$, define $y_i\in kH$ inductively by:\
$y_i:=\begin{cases}
L^{(s-i)}(u_c(f_i)-1,y_s,\cdots,y_{i+1}) & D_i\neq 0 \\
0 & D_i=0
\end{cases}
$
Define $b_i:=\tau(y_i)^{p^{m_1-c}}\in Q(\frac{kG}{P})$; we call these $b_i$ the *reduction coefficients*.
For convenience, we will replace $m_1$ by $m_1+c$, so that $\tau(u(g)-1)=\tau(u_c(g)-1)^{p^{m_1}}$ for all $g\in G$, and $b_i=\tau(y_i)^{p^{m_1}}$. Since $B_s\notin$ gr $P$, it is clear that gr$_{\overline{w}}(b_s)=\overline{B}_s^{p^{m_1}}$.\
Since gr$_{\overline{w}}(b_s)=\overline{B}_s^{p^{m_1}}\in A'\backslash\mathfrak{q}$, it follows from Theorem \[comparison\] that $b_s^{p^k}$ is $v$-regular for some $k\in\mathbb{N}$. After replacing $m_1$ by $m_1+k$, we may also assume that $b_s$ is $v$-regular.
\[reduction-value\]
For all $h\in H$, $i\leq s$, let $t:=u_c(h)-1$, $T:=t+F_{\theta+1}kG\in$ gr $kG$.\
Then $w(L^{(s-i)}(t,y_s,\cdots,y_{i+1}))\geq p^{s-i}\theta$, with equality if and only if gr$(L^{(s-i)}(t,y_s,\cdots,y_{i+1}))=L^{(s-i)}(T,B_s,\cdots,B_{i+1})$, otherwise $L^{(s-i)}(T,B_s,\cdots,B_{i+1})=0$.
In particular, for $y_i\neq 0$, $w(y_i)\geq p^{s-i}\theta$, with equality if and only if $B_i=$ gr$(y_i)$, otherwise $B_i=0$.
We will use downwards induction on $i$, with $i=s$ as the base case:\
Since $i=s$, $L^{(s-i)}(t,y_s,\cdots,y_{i+1})=t$, and clearly $w(t)\geq\theta=p^{s-s}\theta$, and equality holds if and only if gr$(t)=T$, and otherwise $T=0$ as required.\
Now suppose the result holds for some $i\leq s$, so let $c:=L^{(s-i)}(t,y_s,\cdots,y_{i+1})$, $C:=L^{(s-i)}(T,B_s,\cdots,B_{i+1})$. Then by induction, $w(c)\geq p^{s-i}\theta$, with equality if and only if $C=$ gr$(c)$, and $w(y_i)\geq p^{s-i}\theta$.\
So $w(L^{(s-i+1)}(t,y_s,\cdots,y_i))=w(c^p-cy_i^{p-1})\geq\min\{w(c^p),w(cy_i^{p-1})\}\geq\min\{pw(c),w(c)+(p-1)w(y_i)\}\geq p^{s-i+1}\theta$ as required.
In particular, this argument shows that if $w(c)>p^{s-i}\theta$ then $w(c^p-cy_i^{p-1})>p^{s-i+1}\theta$.\
Therefore, if $w(L^{(s-i+1)}(t,y_s,\cdots,y_i))=p^{s-i+1}\theta$ then $w(c)=p^{s-i}\theta$, and so $C=$ gr$(c)=c+F_{p^{s-i}\theta} kG$ by induction. In this case, gr$(L^{(s-i+1)}(t,y_s,\cdots,y_i))=(c^p-cy_i^{p-1})+F_{p^{s-i+1}\theta} kG$.\
Also, since $c,y_i\in kc(G)$ and $w$ is a valuation on $kc(G)$, we have that $w(c^p)=pw(c)=p^{s-i+1}\theta$ and $w(cy_i^{p-1})=w(c)+(p-1)w(y_i)\geq p^{s-i+1}\theta$.\
If $w(y_i)>p^{s-i}\theta$ then $c^p-cy_i^{p-1}+F_{p^{s-i+1}\theta+1}kG=c^p+F_{p^{s-i+1}\theta+1}kG=C^p$. But since $B_i=0$ by induction, this means that $c^p-cy_i^{p-1}+F_{p^{s-i+1}\theta+1}kG=C^p-CB_i^{p-1}$.
Whereas, if $w(y_i)=p^{s-i}\theta$ then $B_i=$ gr$(y_i)=y_i+F_{p^{s-i}\theta+1}kG$ by assumption, so $c^p-cy_i^{p-1}+F_{p^{s-i+1}\theta+1}kG=C^p-CB_i^{p-1}$ as required.\
Finally, if $w(L^{(s-i+1)}(t,y_s,\cdots,y_i))>p^{s-i+1}\theta$ then $w(c^p-cy_i^{p-1})>p^{s-i+1}\theta$. Clearly if $C=0$ then $L^{(s-i+1)}(T,B_s,\cdots,B_i)=C^p-CB_i^{p-1}=0$, so we may assume that $C\neq 0$, and hence $C=$ gr$(c)=c+F_{p^{s-i}\theta+1}kG$.\
So since $w(c^p-cy_i^{p-1})>p^{s-i+1}\theta$, it follows that $C^p-CB_i^{p-1}=0$ as required.
Using this Lemma, we see that $\overline{w}(b_i)\geq p^{s-i+m_1}\theta$, with equality if and only if gr$_{\overline{w}}(b_i)=\overline{B}_i^{p^{m_1}}$.\
For each $0\leq i\leq s$, $q\in Q(\frac{kG}{P})$ commuting with $b_1,\cdots,b_s$, define $L_i(q):=L^{(s-i)}(q,b_s,\cdots,b_{i+1})$.\
e.g. $L_s(q)=q$, $L_{s-1}(q)=q^p-qb_s^{p-1}$, $L_{s-2}(q)=q^{p^2}-q^p(b_s^{p^2-p}+b_{s-1}^{p-1})+qb_s^{p-1}b_{s-1}^{p-1}$.
Polynomials
-----------
Again, we have $\lambda=\inf\{\rho(\tau(u(g)-1)):g\in G\}<\infty$.
\[inequality\]
For each $i\leq s$, $\rho(b_i)\geq p^{s-i}\lambda$, and it follows that $\rho(L_i(\tau(u(h)-1)))\geq p^{s-i}\lambda$ for all $h\in H$.
Moreover, if $\rho(b_i)=p^{s-i}\lambda$ then $b_i^{p^m}$ is $v$-regular for $m$ sufficiently high.
For $i=s$ the result is clear, because $b_s=\tau(u(f_s)-1)$, so $\rho(b_s)\geq\lambda=p^{s-s}\lambda$ by definition. So we will proceed again by downwards induction on $i$.\
The inductive hypothesis states that $\rho(b_{i+1})\geq p^{s-i-1}\lambda$, and $\rho(L_{i+1}(\tau(u(h)-1)))\geq p^{s-i-1}\lambda$ for all $h\in H$.\
Thus $\rho(L_i(\tau(u(h)-1)))=\rho(L_{i+1}(\tau(u(h)-1))^p-L_{i+1}(\tau(u(h)-1))b_{i+1}^{p-1})$\
$\geq\min\{p\cdot p^{s-i-1}\lambda,p^{s-i-1}\lambda+(p-1)p^{s-i-1}\lambda\}=p^{s-i}\lambda$ for all $h$.\
By definition, $b_i=L^{(s-i)}(\tau(u(f_i)-1),b_s,\cdots,b_{i+1})=L_i(\tau(u(f_i)-1))$, so
$\rho(b_i)=\rho(L_i(\tau(u(f_i)-1)))\geq p^{s-i}\lambda$, and the first statement follows.
Finally, suppose that $\rho(b_i)=p^{s-i}\lambda$:\
Then if $\overline{w}(b_i)>p^{s-i+m_1}\theta=\overline{w}(b_s^{p^{s-i}})$, then $v(b_i^{p^m})>v(b_s^{p^{s-i+m}})$ for $m>>0$ by Theorem \[comparison\]. So using $v$-regularity of $b_s$, we see that $\rho(b_i)>\rho(b_s^{p^{s-i}})\geq p^{s-i}\lambda$ – contradiction.\
Therefore, by Lemma \[reduction-value\], we see that $\overline{w}(b_i)=p^{s-i+m_1}\theta$ and gr$_{\overline{w}}(b_i)=\overline{B}_i^{p^{m_1}}$.\
We know that $\overline{B}_i^{p^{m_1}}\in A'$, so suppose that $\overline{B}_i^{p^{m_1}}\in\mathfrak{q}$. Then since $\overline{w}(b_i)=p^{s-i+m_1}\theta=\overline{w}(b_s^{p^{s-i}})$, it follows again from Theorem \[comparison\] that $v(b_i^{p^m})>v(b_s^{p^{s-i+m}})$ for $m>>0$, and hence $\rho(b_i)>p^{s-i}\rho(b_s)\geq p^{s-i}\lambda$ – contradiction.\
Hence $\overline{B}_i^{p^{m_1}}=$ gr$_{\overline{w}}(b_i)\in A'\backslash\mathfrak{q}$, so $b_i^{p^k}$ is $v$-regular for some $k\in\mathbb{N}$ by Theorem \[comparison\].
Now, using Lemma \[polynomial\], we see that $L_i(x)=L^{(s-i)}(x,b_s,\cdots,b_{i+1})=a_0x+a_1x^p+\cdots+a_{s-i-1}x^{p^{s-i-1}}+x^{p^{s-i}}$ for some $a_j\in\tau(kH)$.\
\[growth-preserving\]
For each $i\leq s$, $L_i$ is a growth preserving polynomial of $p$-degree $s-i$, and $L_s$ is not trivial.
Firstly, it is clear that $L_s=id$, and so $L_s$ is a non-trivial GPP.\
We first want to prove that for all $q\in\tau(kH)$, if $\rho(q)\geq\lambda$ then $\rho(L_i(q))\geq p^{s-i}\lambda$, with strict inequality if $\rho(q)>\lambda$. We know that this holds for $i=s$, so as in the proof of Lemma \[inequality\], we will use downwards induction on $i$.\
So suppose that $\rho(L_{i+1}(q))\geq p^{s-i-1}\lambda$, with strict inequality if $\rho(q)>\lambda$. Then:\
$L_i(q)=L_{i+1}(q)^p-L_{i+1}(q)b_{i+1}^{p-1}$, so $\rho(L_i(q))\geq\min\{\rho(L_{i+1}(q)^p),\rho(L_{i+1}(q)b_{i+1}^{p-1})\}$.\
But $\rho(L_{i+1}(q)^p)\geq p\cdot p^{s-i-1}\lambda=p^{s-i}\lambda$, and since $\rho(b_{i+1})\geq p^{s-i-1}\lambda$ by Lemma \[inequality\], $\rho(L_{i+1}(q)b_{i+1}^{p-1})\geq\rho(L_{i+1}(q))+(p-1)\rho(b_{i+1})\geq p^{s-i-1}\lambda+(p-1)p^{s-i-1}\lambda=p^{s-i}\lambda$.\
By the inductive hypothesis, both these inequalities are strict if $\rho(q)>\lambda$, and thus $L_i$ is a GPP as required.
So all that remains is to prove that one of the $L_i$ is special.
Trivial growth preserving polynomials
-------------------------------------
Let us first suppose that for some $j$, $L_j$ is trivial, i.e. $\rho(L_j(\tau(u(h)-1)))>p^{s-j}\lambda$ for all $h\in H$.\
We know that $L_s$ is not trivial, so we can fix $j\leq s$ such that $L_j$ is non-trivial and $L_{j-1}$ is trivial. We will need the following results:
\[limit\]
Let $A$ be a $k$-algebra, with filtration $w$ such that $A$ is complete with respect to $w$. If $a\in A$ and $w(a^p-a)>0$, then $a^{p^m}\rightarrow b\in A$ with $b^p=b$ as $m\rightarrow\infty$.
Let $\varepsilon:=a^p-a$, then $w(\varepsilon)>0$, $a$ commutes with $\varepsilon$, and $a^p=a+\varepsilon$.\
Therefore, since $char(k)=p$, $a^{p^2}=a^p+\varepsilon^p=a+\varepsilon+\varepsilon^p$, and it follows from induction that for all $m\in\mathbb{N}$, $a^{p^{m+1}}=a+\varepsilon+\varepsilon^p+\cdots+\varepsilon^{p^m}$.\
But $\varepsilon^{p^m}\rightarrow 0$ as $m\rightarrow\infty$ since $w(\varepsilon)>0$, so since $A$ is complete, the sum $\underset{m\geq 0}{\sum}{\varepsilon^{p^m}}$ converges in $A$, and hence $a^{p^m}\rightarrow a+\underset{m\geq 0}{\sum}{\varepsilon^{p^m}}\in A$.\
So let $b:=a+\underset{m\geq 0}{\sum}{\varepsilon^{p^m}}$, then $b^p=a^p+(\underset{m\geq 0}{\sum}{\varepsilon^{p^m}})^p=a+\varepsilon+\underset{m\geq 1}{\sum}{\varepsilon^{p^m}}=a+\underset{m\geq 0}{\sum}{\varepsilon^{p^m}}=b$ as required.
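As a toy illustration of this Lemma (not needed for the argument): take $A=\mathbb{F}_p[[t]]$ with the $t$-adic filtration and $a=1+t$. Then $$a^p-a=(1+t)^p-(1+t)=t^p-t,\qquad a^{p^m}=1+t^{p^m}\longrightarrow 1,$$ and indeed $\underset{m\geq 0}{\sum}{(t^p-t)^{p^m}}$ telescopes to $-t$, so $b=a-t=1$ and $b^p=b$.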
\[divided-powers\]
Let $Q=\widehat{Q(\frac{kG}{P})}$, and let $\delta_1,\cdots,\delta_r:kG\to kG$ be derivations such that $\tau\delta_i(P)\neq 0$ for all $i$. Set $N:=\{(a_1,\cdots,a_r)\in Q^r:(a_1\tau\delta_1+\cdots+a_r\tau\delta_r)(P)=0\}$.\
Then $N$ is a $Q$-bisubmodule of $Q^r$, and either $N=0$ or there exist $\alpha_1,\cdots,\alpha_r\in Z(Q)$, not all zero, such that for all $(a_1,\cdots,a_r)\in N$, $\alpha_1a_1+\cdots+\alpha_ra_r=0$.
Since $v$ is a non-commutative valuation, we have that $Q$ is simple and artinian, and the proof that $N$ is a $Q$-bisubmodule of $Q^r$ is similar to the proof of Lemma \[artinian\]. For the second statement, we will proceed using induction on $r$.\
First suppose that $r=1$, then $N$ is a two sided ideal of the simple ring $Q$, so it is either $0$ or $Q$. But if $N=Q$ then $1\in N$, so $\tau\delta_1(P)=0$ – contradiction. Hence $N=0$.\
Now suppose that the result holds for $r-1$ for some $r>1$. If $N\neq 0$ then there exists $(a_1,\cdots,a_r)\in N$ with $a_i\neq 0$ for some $i$, and we may assume without loss of generality that $i=1$.\
So, let $A:=\{a\in Q:(a,a_2,\cdots,a_r)\in N\text{ for some }a_2,\cdots,a_r\in Q\}$, then clearly $A$ is a two-sided ideal of $Q$, so $A=0$ or $A=Q$. But $A\neq 0$ since $a_1\in A$ and $a_1\neq 0$.
Therefore $A=Q$, and hence we have that for all $b\in Q$, $(b,b_2,\cdots,b_r)\in N$ for some $b_i\in Q$.\
Let $N'=\{(a_2,\cdots,a_r)\in Q^{r-1}:(a_2\tau\delta_2+\cdots+a_r\tau\delta_r)(P)=0\}$. Suppose first that $N'=0$.\
Then if for some $q\in Q$, $(q,x_2,\cdots,x_r),(q,x_2',\cdots,x_r')\in N$ for $x_i,x_i'\in Q$, we have that $(x_2-x_2',\cdots,x_r-x_r')\in N'=0$, and hence $x_i=x_i'$ for all $i$.
Hence there is a unique $(1,\beta_2,\cdots,\beta_r)\in N$.\
Given $x\in Q$, $(x,x\beta_2,\cdots,x\beta_r)$, $(x,\beta_2x,\cdots,\beta_rx)\in N$, and so $([x,\beta_2],\cdots,[x,\beta_r])\in N'$. Hence $[x,\beta_i]=0$ for all $i$, so $\beta_i\in Z(Q)$.\
Moreover, if $(a_1,\cdots,a_r)\in N$, then since $(a_1,a_1\beta_2,\cdots,a_1\beta_r)\in N$, it follows that $a_i=\beta_ia_1$ for all $i>1$, and since $\tau\delta_1(P)\neq 0$, it is clear that $\beta_i\neq 0$ for some $i$, thus giving the result.\
So from now on, we may assume that $N'\neq 0$, so by the inductive hypothesis, this means that there exist $\alpha_2,\cdots,\alpha_r\in Z(Q)$, not all zero, such that for all $(a_2,\cdots,a_r)\in N'$, $\alpha_2a_2+\cdots+\alpha_ra_r=0$.\
Again, suppose we have that $(a,x_2,\cdots,x_r),(a,x_2',\cdots,x_r')\in N$ for some $a,x_i,x_i'\in Q$. Then clearly $(x_2-x_2',\cdots,x_r-x_r')\in N'$, and hence $\alpha_2(x_2-x_2')+\cdots+\alpha_r(x_r-x_r')=0$, i.e. $\alpha_2x_2+\cdots+\alpha_rx_r=\alpha_2x_2'+\cdots+\alpha_rx_r'$.\
So, given $q\in Q$, $(1,x_2,\cdots,x_r)\in N$, we have that $(q,qx_2,\cdots,qx_r),(q,x_2q,\cdots,x_rq)\in N$, and hence
$\alpha_2qx_2+\cdots+\alpha_rqx_r=\alpha_2x_2q+\cdots+\alpha_rx_rq$
i.e $[q,\alpha_2x_2+\cdots+\alpha_rx_r]=0$.\
Since this holds for all $q\in Q$, it follows that $\alpha_2x_2+\cdots+\alpha_rx_r\in Z(Q)$, so let $-\alpha_1$ be this value.\
In fact, for any such $(1,x_2',\cdots,x_r')\in N$, $\alpha_2x_2'+\cdots+\alpha_rx_r'=\alpha_2x_2+\cdots+\alpha_rx_r=-\alpha_1$, so $-\alpha_1\in Z(Q)$ is unchanged, regardless of our choice of $x_i$.\
Finally, suppose that $(a_1,\cdots,a_r),(1,x_2,\cdots,x_r)\in N$, then $(a_1,a_1x_2,\cdots,a_1x_r)\in N$, and hence $(a_2-a_1x_2,\cdots,a_r-a_1x_r)\in N'$. Thus $\alpha_2(a_2-a_1x_2)+\cdots+\alpha_r(a_r-a_1x_r)=0$, i.e.
$\alpha_2a_2+\cdots+\alpha_ra_r=a_1(\alpha_2x_2+\cdots+\alpha_rx_r)=-\alpha_1a_1$.
Therefore $\alpha_1a_1+\alpha_2a_2+\cdots+\alpha_ra_r=0$, and $\alpha_i\in Z(Q)$ as required.
Recall from Lemma \[sub\] that for any GPP $f$ of $p$-degree $r$, $K_f:=\{h\in H:\rho(f(\tau(u(h)-1)))>p^r\lambda\}$ is an open subgroup of $H$ containing $H^p$, and that it is proper in $H$ if $f$ is non-trivial. For each $i\leq s$, define $K_i:=K_{L_i}$.\
Then since $L_{j-1}$ is trivial and $L_{j}$ is not, we know that $K_{j-1}=H$ and $K_{j}$ is a proper subgroup of $H$.
\[convergence\]
There exists $k\in\mathbb{N}$ such that $b_{j}^{p^k}$ is $v$-regular of value $p^{k+s-j}\lambda$, and for any $h\in H\backslash K_{j}$, $(L_{j}(\tau(u(h)-1))b_{j}^{-1})^{p^m}\rightarrow c\in \widehat{Q(\frac{kG}{P})}$ with $c\neq 0$ and $c^p=c$.
Since $L_{j-1}$ is trivial, we know that for each $h\in H$, $\rho(L_{j-1}(\tau(u(h)-1)))>p^{s-j+1}\lambda$.\
Choose $h\in H\backslash K_{j}$, i.e. $\rho(L_{j}(\tau(u(h)-1)))=p^{s-j}\lambda$. Setting $q:=\tau(u(h)-1)$ for convenience, we have:
$\rho(L_{j-1}(q))=\rho(L_{j}(q)^p-L_{j}(q)b_{j}^{p-1})>p^{s-j+1}\lambda$
But $\rho(L_{j}(q)^p-L_{j}(q)b_{j}^{p-1})\geq\min\{\rho(L_{j}(q)^p),\rho(L_{j}(q)b_{j}^{p-1})\}$, with equality if $\rho(L_{j}(q)^p)\neq\rho(L_{j}(q)b_{j}^{p-1})$.\
So if $\rho(b_{j})> p^{s-j}\lambda$, then we have that:
$\rho(L_{j}(q)b_{j}^{p-1})\geq\rho(L_{j}(q))+(p-1)\rho(b_{j})>\rho(L_{j}(q))+(p-1)p^{s-j}\lambda=p^{s-j+1}\lambda$.
But $\rho(L_{j}(q)^p)=p\rho(L_{j}(q))=p^{s-j+1}\lambda$, and hence $\rho(L_{j-1}(\tau(u(h)-1)))=\min\{\rho(L_{j}(q)^p),\rho(L_{j}(q)b_{j}^{p-1})\}$
$=p^{s-j+1}\lambda$ – contradiction.\
Therefore, $\rho(b_{j})\leq p^{s-j}\lambda$, so using Lemma \[inequality\], we see that $\rho(b_{j})=p^{s-j}\lambda$, and $b_{j}^{p^k}$ is $v$-regular for some $k$, and thus $v(b_{j}^{p^k})=\rho(b_{j}^{p^k})=p^{k+s-j}\lambda$.\
Now, $\rho((L_{j}(q)^{p^k}b_{j}^{-p^k})^{p}-(L_{j}(q)^{p^k}b_{j}^{-p^k}))=\rho(b_{j}^{-p^{k+1}}(L_{j}(q)^p-L_{j}(q)b_{j}^{p-1})^{p^k})$\
$=\rho(L_{j-1}(q)^{p^k})-pv(b_{j}^{p^k})>p^{s-j+k+1}\lambda-p^{s-j+k+1}\lambda=0$\
This means that $v(((L_{j}(q)b_{j}^{-1})^{p}-(L_{j}(q)b_{j}^{-1}))^{p^m})>0$ for $m>>0$, so it follows from Lemma \[limit\] that $(L_{j}(\tau(u(h)-1))b_{j}^{-1})^{p^m}$ converges to $c\in\widehat{Q(\frac{kG}{P})}$ with $c^p=c$.\
Finally, since $\rho(L_{j}(\tau(u(h)-1))^{p^k}b_{j}^{-p^k})=0$, it follows that $c\neq 0$.
\[maximal\]
If $L_{j-1}$ is trivial and $L_{j}$ is not trivial, then $P$ is controlled by a proper open subgroup of $G$.
Since $K_{j}$ is a proper subgroup of $H$ containing $H^p$, we can choose an ordered basis $\{h_1,\cdots,h_d\}$ for $H$ such that $\{h_1^p,\cdots,h_t^p,h_{t+1},\cdots,h_d\}$ is an ordered basis for $K_{j}$.\
Consider our Mahler expression (\[MahlerPol\]), taking $f=L_{j}$, $q_i=\tau(u(h_i)-1)$:
$$\label{Mahlerx}
0=L_{j}(q_1)^{p^m}\tau\partial_1(y)+\cdots+L_{j}(q_{d})^{p^m}\tau\partial_{d}(y)+O(L_{j}(q)^{p^m})$$
Where $\rho(q)>\lambda$, and hence $\rho(L_{j}(q))>p^{s-j}\lambda$. Note that we also have:
$\rho(L_{j}(q_i))=p^{s-j}\lambda$ for all $i\leq t$, and $\rho(L_{j}(q_i))>p^{s-j}\lambda$ for all $i>t$.
Using Lemma \[convergence\], we see that $b_{j}^{p^k}$ is $v$-regular of value $p^{k+s-j}\lambda$ for some $k$, and for each $i\leq t$, $(L_{j}(q_i)^{p^k}b_{j}^{-p^k})^{p^m}\rightarrow c_i\neq 0$ as $m\rightarrow\infty$, with $c_i^p=c_i$. Clearly $c_1,\cdots,c_r$ commute.\
So, divide out our expression (\[Mahlerx\]) by $b_{j}^{p^m}$, which is $v$-regular of value $p^{m+s-j}\lambda$ to obtain:
$$0=(b_{j}^{-1}L_{j}(q_1))^{p^m}\tau\partial_1(y)+\cdots+(b_{j}^{-1}L_{j}(q_{d}))^{p^m}\tau\partial_{d}(y)+O((b_{j}^{-1}L_{j}(q))^{p^m})$$
Take the limit as $m\rightarrow\infty$ and the higher order terms will converge to zero. Hence the expression converges to $c_1\tau\partial_1(y)+\cdots+c_r\tau\partial_r(y)$.
Therefore $(c_1\tau\partial_1+\cdots+c_r\tau\partial_r)(P)=0$.\
Now, using Proposition \[Frattini\], we know that if $\tau\partial_i(P)=0$ for some $i\leq r$ then $P$ is controlled by a proper open subgroup of $G$. So we will suppose, for contradiction, that $\tau\partial_i(P)\neq 0$ for all $i\leq r$.\
Let $N:=\{(q_1,\cdots,q_r)\in\widehat{Q(\frac{kG}{P})}:(q_1\tau\partial_1+\cdots+q_r\tau\partial_r)(P)=0\}$, then $0\neq (c_1,\cdots,c_r)\in N$, so $N\neq 0$.
Therefore, using Proposition \[divided-powers\], we see that $c_1,\cdots,c_r$ are $Z(Q)$-linearly dependent.\
So, we can find some $1<t\leq r$ such that $c_1,\cdots,c_t$ are $Z(Q)$-linearly dependent, but no proper subset of $\{c_1,\cdots,c_t\}$ is $Z(Q)$-linearly dependent.
It follows that we can find $\alpha_2,\cdots,\alpha_t\in Z(Q)\backslash\{0\}$ such that $c_1+\alpha_2c_2+\cdots+\alpha_tc_t=0$.\
Therefore, since $c_i^p=c_i$ for all $i$, we also have that:
$c_1+\alpha_2^pc_2+\cdots+\alpha_t^pc_t=(c_1+\alpha_2c_2+\cdots+\alpha_tc_t)^p=0$
Hence $(\alpha_2^p-\alpha_2)c_2+\cdots+(\alpha_t^p-\alpha_t)c_t=0$.\
So using minimality of $\{c_1,\cdots,c_t\}$, this means that $\alpha_i^p=\alpha_i$ for each $i$, and it follows that $\alpha_i\in\mathbb{F}_p$ for each $i$, i.e. $c_1,\cdots,c_r$ are $\mathbb{F}_p$-linearly dependent.\
So, we can find $\beta_1,\cdots,\beta_r\in\mathbb{F}_p$, not all zero, such that $\beta_1c_1+\cdots+\beta_rc_r=0$, or in other words:\
$\underset{m\rightarrow\infty}{\lim}{(L_{j}(\beta_1q_1+\cdots+\beta_rq_r)^{p^k}b_{j}^{-p^k})^{p^m}=0}$
Therefore, $\rho(L_{j}(\beta_1q_1+\cdots+\beta_rq_r)^{p^k}b_{j}^{-p^k})>0$, and hence $\rho(L_{j}(\beta_1q_1+\cdots+\beta_rq_r))>v(b_{j})=p^{s-j}\lambda$.\
But $\beta_1q_1+\cdots+\beta_rq_r=\tau(u(h_1^{\beta_1}\cdots h_r^{\beta_r})-1)+\varepsilon$, where $\rho(\varepsilon)>\lambda$, and we know that $\rho(L_{j}(\tau(u(h_1^{\beta_1}\cdots h_r^{\beta_r})-1)))=p^{s-j}\lambda$ by the definition of $K_{j}$, and $\rho(L_{j}(\varepsilon))>p^{s-j}\lambda$ since $L_{j}$ is a GPP.
Hence $\rho(L_{j}(\beta_1q_1+\cdots+\beta_rq_r))=\rho(L_{j}(\tau(u(h_1^{\beta_1}\cdots h_r^{\beta_r})-1)))=p^{s-j}\lambda$ – contradiction.\
Therefore $P$ is controlled by a proper open subgroup of $G$.
Control Theorem
---------------
We may now suppose that $L_i$ is not trivial for all $i\leq s$, in particular, $L_0$ is a non-trivial GPP of $p$-degree $s$.
\[L-special\]
$L_0$ is a special growth preserving polynomial.
Given $h\in H$ such that $\rho(L_0(\tau(u(h)-1)))=p^s\lambda$, we want to prove that $L_0(\tau(u(h)-1))^{p^k}$ is $v$-regular for some $k$. Let $T=u_{c}(h)-1+F_{\theta+1}kG\in$ Span$_{\mathbb{F}_p}\{T_1,\cdots,T_r\}$.\
We know that $L^{(s)}(\overline{T},\overline{B}_s,\cdots,\overline{B}_1)^{p^{m_1}}$ lies in $A'$ by Lemma \[centralising\]. If $L^{(s)}(\overline{T},\overline{B}_s,\cdots,\overline{B}_1)^{p^{m_0}}\in\mathfrak{q}$ then we may assume that $L^{(s)}(\overline{T},\overline{B}_s,\cdots,\overline{B}_1)^{p^{m_1}}=0$, so using Lemma \[reduction-value\] we see that $\overline{w}(L^{(s)}(\tau(u(h)-1),b_s,\cdots,b_1))>p^{s+m_1}\theta=\overline{w}(b_s^{p^s})$.\
So again, since gr$_{\overline{w}}(b_s)=\overline{B}_s^{p^{m_1}}\in A'\backslash\mathfrak{q}$, it follows from Theorem \[comparison\] that
$v(L^{(s)}(\tau(u(h)-1),b_s,\cdots,b_1)^{p^m})>v(b_s^{p^{m+s}})$ for $m>>0$
Hence $\rho(L_0(\tau(u(h)-1)))=\rho(L^{(s)}(\tau(u(h)-1),b_s,\cdots,b_1))>\rho(b_s^{p^s})\geq p^s\lambda$ – contradiction.\
Therefore, we have that $L^{(s)}(\overline{T},\overline{B}_s,\cdots,\overline{B}_1)^{p^{m_1}}\in A'\backslash\mathfrak{q}$, and hence it is equal to
gr$_{\overline{w}}(L^{(s)}(\tau(u(h)-1),b_s,\cdots,b_1))$ by Lemma \[reduction-value\].\
It follows from Theorem \[comparison\] that for $m>>0$, $L_0(\tau(u(h)-1))^{p^m}=L^{(s)}(\tau(u(h)-1),b_s,\cdots,b_1)^{p^m}$ is $v$-regular as required.
Now we can finally prove our main control theorem in all cases. But we first need the following technical result.
\[split-centre\]
Let $G=H\rtimes\langle X\rangle$ be an abelian-by-procyclic group. Then $G$ has split centre if and only if $(G,G)\cap Z(G)=1$.
It is clear that if $G$ has split centre then $(G,G)\cap Z(G)=1$. Conversely, suppose that $(G,G)\cap Z(G)=1$, and consider the $\mathbb{Z}_p$-module homomorphism $H\to H,h\mapsto (X,h)$.\
The kernel of this map is precisely $Z(G)$, therefore $(X,H)\cong\frac{H}{Z(G)}$. So since $Z(G)\cap (X,H)=1$, it follows that $Z(G)\times (X,H)$ has the same rank as $H$, hence it is open in $H$.\
Recall from [@Billy Definition 1.6] the definition of the *isolator* $i_G(N)$ of a closed, normal subgroup $N$ of $G$, and recall from [@Billy Proposition 1.7, Lemma 1.8] that it is a closed, isolated normal subgroup of $G$, and that $N$ is open in $i_G(N)$.\
Let $C=i_G((X,H))\leq H$, then it is clear that $Z(G)\cap C=1$ and that $Z(G)\times C$ is isolated.
Therefore, since $Z(G)\times (X,H)$ is open in $H$, it follows that $Z(G)\times C=H$, and hence $G=Z(G)\times C\rtimes\langle X\rangle$, and $G$ has split centre.
*Proof of Theorem \[B\].* Let $Z_1(G):=\{g\in G:(g,G)\subseteq Z(G)\}$, this is a closed subgroup of $G$ containing $Z(G)$ and contained in $H$. Suppose first that $Z_1(G)\neq Z(G)$.\
Choose $h\in Z_1(G)\backslash Z(G)$; then $(h,G)\subseteq Z(G)$, so if we take $\psi\in Inn(G)$ to be conjugation by $h$, then $\psi$ is trivial mod centre, and clearly $\psi(P)=P$. It follows that $P$ is controlled by a proper, open subgroup of $G$ by [@nilpotent Theorem B].\
So from now on, we may assume that $Z_1(G)=Z(G)$.\
Suppose that $(X,h)\in Z(G)$ for some $h\in H$, then clearly $(h,G)\subseteq Z(G)$, so $h\in Z_1(G)=Z(G)$, giving that $(X,h)=1$. It follows that $Z(G)\cap (G,G)=1$, and hence $G$ has split centre by Lemma \[split-centre\].\
Therefore, using Theorem \[equalising\], we can choose an ordered basis $\{k_1,\cdots,k_d\}$ for $H$ and a filtration $w$ on $kG$ such that gr$_w$ $kG\cong k[T_1,\cdots,T_{d+1}]\ast\frac{G}{c(G)}$, where $T_i=$ gr$(u_{c}(k_i)-1)$ for $i\leq r$, $T_i=$ gr$(k_i-1)$ for $i>r$, $T_r,\cdots,T_{d}$ are central and $\bar{X}T_i\bar{X}^{-1}=T_i+D_i$ for some $D_i\in$ Span$_{\mathbb{F}_p}\{T_{i+1},\cdots,T_r\}$ for all $i<r$.\
If $Q(\frac{kG}{P})$ is a CSA, then the result follows from Corollary \[C\], so we may assume that $Q(\frac{kG}{P})$ is not a CSA.\
Hence if each $D_i$ is nilpotent mod gr $P$, then $id:\tau(kH)\to\tau(kH)$ is a special GPP with respect to some non-commutative valuation by Proposition \[special-case\].
Therefore, by Theorem \[special-GPP\], $P$ is controlled by a proper open subgroup of $G$ as required.\
If $D_s$ is not nilpotent mod gr $P$ for some $s<r$, then we can construct GPP’s $L_s,\cdots,L_0$ with respect to some non-commutative valuation using Proposition \[growth-preserving\], and $L_s$ is non-trivial.\
If $L_{j-1}$ is trivial and $L_{j}$ is non-trivial for some $0< j\leq s$, then the result follows from Theorem \[maximal\]. Whereas if all the $L_i$ are non-trivial, then $L_0$ is a special GPP by Proposition \[L-special\], and the result follows again from Theorem \[special-GPP\].
K. Ardakov, **Prime ideals in nilpotent Iwasawa algebras**. *Inventiones Mathematicae* **190**(2), 439-503 (2012).
K. Ardakov, **The Controller Subgroup of One-sided ideals in Completed Group Rings**. *Contemporary Mathematics, vol 562* (2012).
K. Ardakov; K. A. Brown, **Ring-theoretic properties of Iwasawa algebras: a survey**. *Documenta Math.*, Extra Volume Coates, 7-33 (2006).
K. Ardakov; F. Wei; J.J. Zhang, **Nonexistence of reflexive ideals in Iwasawa algebras of Chevalley type**. *J. Algebra* **320**(1), 259-275 (2008).
G.M. Bergman, **A weak Nullstellensatz for valuations**. *Proceedings of the American Mathematical Society* **28**(1), 32-38 (1971).
T. Diagana; F. Ramaroson, *Non-Archimedean Operator Theory*. Springer, Heidelberg, (2016).
J.D. Dixon; M.P.F. du Sautoy; A. Mann; D. Segal, *Analytic Pro-$p$ Groups (second edition)*. *Cambridge Studies in Advanced Mathematics, vol 61. Cambridge University Press, Cambridge,* (1999).
P.M. Eakin, Jr., **The converse to a well-known theorem on Noetherian rings**. *Math. Ann.* **177**, 278-282 (1968).
N. Jacobson, *Basic Algebra 2 (second edition)*. *Dover Publications,* (2009).
M. Lazard, **Groupes analytiques $p$-adiques**. *Publ. Math. IHÉS* **26**, 389-603 (1965).
H. Li, **Lifting Ore sets of Noetherian filtered rings and applications**. *J. Algebra* **179**(3), 686-703 (1996).
H. Li; F. Van Oystaeyen, *Zariskian Filtrations*. *Kluwer Academic, Dordrecht,* (1996).
J.C. McConnell; J.C. Robson, *Noncommutative Noetherian Rings*. *Graduate Studies in Mathematics, vol 30. American Mathematical Society, Providence,* (2001).
D.S. Passman, *Infinite Crossed Products*. *Pure and Applied Mathematics, vol. 135. Academic Press, Boston,* (1989).
P. Schneider, *$p$-adic Lie Groups*. *Grundlehren der Mathematischen Wissenschaften, vol. 344. Springer, Heidelberg,* (2011).
W. Woods, **On the structure of virtually nilpotent compact $p$-adic analytic groups**. arXiv:1608.03137v1 \[math.GR\], (2016).
---
abstract: |
It is of prime importance to recognize evolution and extinction effects in supernovae results as a function of redshift, for SN Ia to be considered as distance indicators. This review surveys all observational data searching for an evolution and/or extinction, according to host morphology. For instance, it has been observed that high-z SNe Ia have bluer colours than the local ones: although this goes against extinction to explain why SN are dimmer with redshift until z $\sim$ 1, supporting a decelerating universe, it also demonstrates intrinsic evolution effects.\
SNe Ia could evolve because the age and metallicity of their progenitors evolve. The main parameter is carbon abundance. Smaller C leads to a dimmer SN Ia and also less scatter in peak brightness, as is the case in elliptical galaxies today. Age of the progenitor is an important factor: young populations lead to brighter SNe Ia, as in spiral galaxies, and a spread in ages leads to a larger scatter, explaining the observed lower scatter at high z.\
Selection biases also play a role, like the Malmquist bias; high-z SNe Ia are found at larger distances from their host centers: there is more obscuration in the center, and also detection is easier with no contamination from the center. This might be one of the reasons why less obscuration has been found for SNe Ia at high z.\
There is clearly a sample evolution with z: currently only the less bright SNe Ia are detected at high z, with less scatter. The brightest objects have a slowly declining light-curve, and at high z, no slow decline has been observed. This may be interpreted as an age effect, high-z SN having younger progenitors.
address: 'Observatoire de Paris, LERMA, 61 Av. de l’Observatoire, F-75014, Paris, France'
author:
- 'F. Combes'
title: 'Properties of SN-host galaxies'
---
supernovae ,galaxies ,star formation ,morphological type
Introduction: SN Ia as “calibrated” candles
===========================================
Type Ia supernovae are now used as fundamental probes of the cosmological parameters, based on a tight empirical relation between their peak luminosity and the width of their light-curve (Riess et al. 1996, Perlmutter et al. 1997). It has been recognized early on that SNe Ia were not “standard” candles, since important variations of their peak luminosities are observed, as a function of metallicity, age, environment, and morphological type of the supernova hosts. But most of these variations can be calibrated, and corrected away, if the correlation between the peak luminosity and the rate of decline is taken into account (Phillips 1993): the scatter in distances is then much reduced. The problem arises with evolution, since these effects are not known a priori, and could mimic a cosmological constant.\
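For reference, the Phillips (1993) calibration is usually written schematically as the linear relation $$M_{\max}(B)\;\approx\;a+b\,\Delta m_{15}(B),$$ where $\Delta m_{15}(B)$ is the decline in $B$ magnitude during the first 15 days after maximum light, and $a$, $b$ are empirically fitted constants (only the form is shown here; the fitted values depend on the sample used).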
In particular the presence and nature of dust could vary with redshift, or the rate of supernovae explosions and their nature could vary systematically. A striking example of evolution of supernovae with redshift is the observation that high-z SN Ia have bluer colours than local ones (cf Fig. 1, Leibundgut 2001). This result is surprising, since it goes against extinction and reddening.
Evolution of SN Ia, corrections and consequences
================================================
To better understand the origin of these evolutions, we must know more on the progenitors of these SNe, to predict age or metallicity effects.
Progenitors of type Ia supernovae
---------------------------------
Type Ia supernovae are likely to be thermonuclear explosions (by fusion of carbon and oxygen) of white dwarfs having approached the Chandrasekhar mass of 1.39 M$_\odot$ (cf the review by Hillebrandt & Niemeyer 2000). Their principal characteristic constraining models is the absence of hydrogen in their spectra. They correspond to the total explosion of a carbon-oxygen white dwarf (C+O WD). They must accrete mass through binary evolution (since these WDs are not massive enough at the beginning), and there are two possibilities: either two C+O white dwarfs merge (double-degenerate model), on a time-scale of a few 100 Myr, or only one C+O WD is accreting (single degenerate), with a longer time-scale, a few Gyr, the most likely scenario. The merger scenario is not supported by the homogeneity of observed quantities, since reaching the critical mass through a merger introduces more freedom in the initial parameters. The range of luminosities is assumed to be connected to the amount of radioactive $^{56}$Ni produced in the explosion, decaying to $^{56}$Co and then $^{56}$Fe. The amount of nickel available cannot vary by much more than a factor 2. The merger scenario is supposed to lead to a collapse supernova instead.\
Recently, hydrogen (broad H$\alpha$ line) has been discovered in a type Ia supernova (Hamuy et al 2003), which has been considered to support the single-degenerate progenitor. However, it could be interpreted as a double-degenerate case also (Livio & Riess 2003), and in any case is too rare to clarify definitively the issue. Some of the observed variations could be due to the various time-scales and ages of the progenitors: the most luminous SN Ia are found in spirals, the dimmest in Ellipticals. The rate of SN Ia has been estimated (Pain et al. 1996, 2002), and could bring insight into the problem: 6% of the stars between 3-9 M$_\odot$ experience such an explosion.
Effects of age and metallicity
------------------------------
A smaller carbon abundance of the progenitor leads to dimmer type Ia supernovae (Umeda et al 1999). The C fraction varies between 0.36 and 0.5 just before the explosion. The masses of the progenitors are smaller at lower metallicity, and the model predicts dimmer SNe at high z, with lower scatter. Timmes et al. (2003) confirm that the peak luminosity should vary linearly with metallicity Z, and this could already explain a scatter of 25% for a region like the solar neighborhood, where Z varies between 0.3 and 3 solar. However, observations do not confirm this strong predicted influence of metallicity: in elliptical and S0 galaxies, where metallicity varies with radius, no significant radial gradients of SNe Ia peak absolute magnitudes or decay rates have been detected (Ivanov et al. 2000). The variations in brightness and light-curve width of supernovae are large, and are attributed more to age. Once the light-curve shape method is applied, no radial gradient of colours remains. This supports the validity of the empirical calibrations.\
The effect of age could be partly due to metallicity (stars being less metallic in the past), but also due to different explosion scenarios. At high redshift the progenitors were obviously younger, which favors the short-time-scale double-degenerate scenario. However this explosion model leads to brighter SNe, which is not observed. The brightest objects have a slowly declining light curve. At high z no slow declines are observed, as if the brightest SNe are not seen (contrary to the Malmquist bias).
Corrections toward standardisation
----------------------------------
The relation between the peak luminosity of SNe Ia and their initial rate of decline is different in the various bands: the decline is faster in the B-band than in V and I, and the colours at maximum depend on the decline rate (Phillips 1993); the fastest declining light curves (the least bright objects) correspond to the reddest events. All these characteristics are used to reduce the range of peak luminosities, and to approach the standard candle regime. Various methods have been used by the two main teams, the Supernova Cosmology Project (SCP, Perlmutter et al 1997), and the High-Redshift Supernova Search team (HZT, Riess et al 1998). The methods are (see Fig. 2):\
– MLCS: Multicolor Light-Curve Shape (HZT), calibrated on nearby SNe Ia in the local universe (z $<$ 0.15), uses the color characteristics to estimate corrections for extinction and reddening due to material in the host galaxies;\
– TF: Template Fit (HZT) uses several template curves, from peak to 15 days afterwards, to fit a particular high-z SN Ia;\
– SF: Stretch Factor (SCP): the shift in the peak magnitude is proportional to how much the light-curve width must be stretched to fit a standard curve (a schematic form of this correction is given below).\
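Schematically, the stretch correction takes the simple linear form $$m_{B}^{\rm eff}\;=\;m_{B}+\alpha\,(s-1),$$ where $s$ is the stretch factor and $\alpha$ an empirically fitted coefficient (again, only the form is sketched here; the adopted value of $\alpha$ is survey-dependent).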
What is striking in the comparison of the various methods in Fig. 2 (made by Drell et al. 2000) is that the amounts of correction are quite different, and in particular the SF method corrections are very small, meaning that these SNe Ia are almost standard candles. The samples are different, although 14 objects are in common for the 3 methods. The study by Drell et al. (2000) did not reveal any systematic differences in distances between the two methods MLCS and TF as a function of redshift (only a scatter of about 0.4 mag), but a slight dependence on absolute magnitude for the high-z sample.\
Drell et al. (2000) made interesting simulations of the cosmic evolution of SNe, for instance with their peak luminosity varying slightly and linearly with (1+z), and showed clearly how the determination of cosmological parameters and cosmic evolution is degenerate. With the number of high-z SNe Ia observed up to now, observing a larger number will not lift the degeneracy; what is needed more is a better understanding of the explosion process.
Extinction
==========
The possibility that intergalactic dust, distributed uniformly between galaxies, produces the extinction able to account for the dimming of high-z SNe Ia, has been investigated by Aguirre (1999) and Aguirre & Haiman (2000). The extinction must not be accompanied by reddening, which is not observed (this must be gray dust). This hypothesis is compatible with the data: given that a large amount of gas is observed in intergalactic space (Ly$\alpha$ absorbers), and that the metallicity of the gas appears high (0.1 to 0.3 solar), a significant fraction of metals can be intergalactic. The grains have to be of large size ($>$ 0.1$\mu$m) for the “gray” requirement. Very small grains provide most of the reddening but less than half of the opacity for optical extinction. This gray dust would however absorb the cosmic UV-optical background and, by re-emission, contribute significantly (75%) to the cosmic infrared background (CIB). There are no measurable distortions of the CMB predicted, but the dust FIR background is only marginally compatible with the data. Also, this component is not detected by X-ray scattering (Paerels et al 2002). The extinction could as well be provided by intervening galaxies, but this hypothesis appears unlikely also (Goobar et al 2002).\
Extinction from host galaxies appears more likely, although high-z SNe Ia are not reddened, and apparently not obscured, according to their correction factors, which is difficult to explain except through selection effects and the Malmquist bias (Farrah et al. 2002). The proportion of morphological types for the hosts does not show evolution: 70% are found in spiral galaxies and 30% in ellipticals, as for local events.
Influence of distance from center
---------------------------------
The radial distribution of SNe in their host galaxies is observed to be similar at low and high z, at least for the samples observed with CCD, but not with the samples discovered photographically (Shaw effect (cf Shaw 1979), where the center of galaxies is undersampled in SNe, Howell et al. 2000). SNe in the outer regions of galaxies are dimmer than those in the central regions by 0.3 mag (Riess et al. 1999). This could explain the anti-Malmquist bias observed for high-z SNe. Another concording feature is that SNe in early-type galaxies tend to be dimmer, and large-distance SNe are in general in early-types (Hamuy et al. 1996). Physical explanations for the dependence of peak luminosity with radius might be searched in the metallicity gradient of the galaxy host, or in the possible different progenitors between disk and bulge (age effect).
Influence of dust and host-type
-------------------------------
Sullivan et al (SCP, 2003) have recently studied in details coherent samples of SNe Ia, and their host morphologies. They built a Hubble diagram (Fig. 3) according to the host types, omitting outliers (too large stretch factors, highly reddened objects). The classification of the hosts is made from colours, HST images and spectral types when available, the evolution of colours with redshifts being compared to stellar population models (Poggianti 1997). When stretch factors (SF) are compared for all types, early-types SF are more scattered. A striking feature is that SF are less scattered for high-redshift objects. This might be due to the age effect: at low redshift, stellar populations have a larger range of ages than at high z.\
The Hubble diagram shows more scatter for late-type hosts, which might be attributed to dust extinction, since there is more gas in late-type galaxies. There is also more scatter for SNe discovered in the central parts of the galaxies, which confirms that the scatter might be due to dust. There is little or no evolution of the amount of dust extinction with redshift. The estimated average extinction suffered by SNe Ia is small (A$_B$ $\sim$ 0.3, but may be $>$1 when all SNe are considered, including outliers). This is consistent with what is expected from galaxies, where the mean extinction is weighted by a small number of highly obscured objects. Selection effects eliminate the more obscured SNe.
Supernova rates
===============
Since the peak luminosity of SNe varies with the galaxy host, it is interesting to study the SN rates as a function of galaxy type, and the evolution of types with redshift. As for core-collapse supernovae (CCSN), and to a lesser extent for SNe Ia, it is legitimate to expect a larger SN rate for starbursting galaxies. Starburst galaxies are dusty, and this rate has been studied in the infrared (Mattila & Meikle 2001; Mannucci et al. 2003). The rate of supernovae in starburst galaxies is found to be about 10 times higher than in quiescent galaxies. But it is still 3-10 times less than expected from the star formation rate (SFR) estimated with FIR emission, which means that many SNe are still obscured (Av $>$ 30). Alternatively, the FIR tracer of star formation might not closely follow the SN rate, as the latter is also found to be relatively too large in low-mass galaxies (Melchior et al. 2003). Another possibility is that starburst and quiescent galaxies have different IMFs, leading to a different SN-rate/SFR ratio.\
The type Ia SN rate also shows a sharp increase toward galaxies with higher star formation activity, suggesting that a significant fraction of type Ia SNe are associated with young stellar populations (della Valle & Livio 1994; Mannucci et al. 2003). This is important, in view of the large increase of star formation activity with redshift.\
It is also well known that the supernova rate is strongly correlated with Hubble type (Branch & van den Bergh 1993; Hamuy et al. 2000). Late-type systems are more prolific in SNe Ia, but selection effects are severe, and it is difficult to disentangle effects of age and metallicity.
Galaxy types and star formation history
=======================================
Large statistics are needed to better understand the evolution of star formation as a function of galaxy type, and consequently the SN rate as a function of host type. Large surveys of galaxies are currently being carried out, and the first results on 10$^5$ galaxies in the Sloan Digital Sky Survey (SDSS) already give such statistics (Kauffmann et al 2003a,b). Two stellar absorption line indices (the strength of the 4000 Å break, D$_n$(4000), and the Balmer absorption line index H$\delta_A$), with the help of models, yield the amount of dust extinction, the stellar masses and the fraction of mass involved in bursts in the last 1-2 Gyr.
The distribution of dust attenuation (cf Fig. 4) reveals a median value of A$_z$ =0.3, which corresponds in the B-band to A$_B \sim$ 0.6. There is as expected a broad wing at high extinction. Fig. 4 shows also that the extinction is higher for young and low-mass galaxies (the index D$_n$(4000) being an age index).\
One of the most striking results of the survey is the bimodality in the distribution of galaxies, with a separation in mass of 3 $\times$ 10$^{10}$ M$_\odot$. The surface density is a function of mass, as shown in Fig. 5. High-mass galaxies also have a high surface density, which is about constant (HSB). The low-mass galaxies are characterized by low surface brightness (LSB), which is a power-law function of mass (with a slope near 0.54). The dependence of surface density on mass might well be interpreted as a lower efficiency of star formation at the low-mass end, due to supernova feedback (Dekel & Woo 2003).
LSB galaxies have younger stellar populations, low concentrations, and appear less evolved. The star formation history is more related to the stellar surface density than to the stellar mass. In the low-mass range, 50% of galaxies are likely to have recent bursts (10% with large certainty), while it is only 4% for high-mass galaxies. Today, starbursts (and SNe?) mostly occur in dwarfs and LSB galaxies. It might not have been the case in the past, at high z (Kauffmann et al. 2003b)
Conclusions
===========
SNe Ia are not exactly “standard candles”, but the shape of their light curve allows their calibration towards this goal. Age and metallicity effects can then be corrected, and their evolution with redshift is being tested. There is a surprisingly low average obscuration for high-z SNe Ia, A$_B$ = 0.3 mag, half the average value for galaxies, but this could be due to selection effects.\
SNe Ia occur more frequently in spiral galaxies, and in the outer parts of E/S0 galaxies. The scatter in their peak luminosities and in the correction factors is larger for late types, which are likely to be more frequent at high redshift. This trend is however compensated by the smaller range in progenitor age, leading to less scatter at high z. The physics of SN Ia explosions is not yet well known, but statistics as a function of metallicity and age of the host stellar populations should help to better understand the evolution of their properties.
Aguirre A.: 1999, ApJ 525, 583
Aguirre A., Haiman Z.: 2000, ApJ 532, 28
Branch, D., van den Bergh, S.: 1993, AJ 105, 2231
Dekel A., Woo J.: 2003, in press (astro-ph/0210454)
della Valle, M., Livio, M.: 1994, ApJ 423, L31
Drell, P.S., Loredo, T.J., Wasserman, I.: 2000, ApJ 530, 593
Farrah D., Meikle W., Clements D. et al.: 2002, MNRAS 336, L17
Goobar, A., Mörtsell, E., Amanullah, R. et al.: 2002, A&A 392, 757
Hamuy M., Phillips M.M., Suntzeff N.B. et al.: 1996, AJ 112, 2391
Hamuy, M., Trager, S. C., Pinto, P. et al.: 2000, AJ 120, 1479
Hamuy M., Phillips M.M., Suntzeff N.B.: 2003, Nature, in press (astro-ph/0306270)
Hillebrandt, W., Niemeyer, J. C.: 2000, ARAA 38, 191
Howell, D. A., Wang, L., Wheeler, J. C.: 2000, ApJ 530, 166
Ivanov, V. D., Hamuy, M., Pinto, P. A.: 2000, ApJ 542, 588
Kauffmann G., Heckman T., White S. et al.: 2003a, MNRAS 341, 33
Kauffmann G., Heckman T., White S. et al.: 2003b, MNRAS 341, 54
Leibundgut B.: 2001, ARAA 39, 67
Livio M., Riess A.: 2003, in press (astro-ph/0308018)
Mannucci, F., Maiolino, R., Cresci, G. et al.: 2003, A&A 401, 519
Mattila, S., Meikle, W. P. S.: 2001, MNRAS 324, 325
Melchior A.-L., Combes F., Pennypacker C. et al.: 2003, A&A, in prep
Paerels, F., Petric, A., Telis, G., Helfand, D. J.: 2002, BAAS, 201, 9703
Pain, R., Fabbro, S., Sullivan, M. et al.: 2002, ApJ 577, 120
Pain, R., Hook, I. M., Deustua, S., et al.: 1996, ApJ 473, 356
Perlmutter S., Gabi S., Goldhaber G. et al.: 1997, ApJ 483, 565
Phillips M.M.: 1993, ApJ 413, L105
Poggianti B.M.: 1997, A&AS 122, 399
Riess A.G., Press W.H., Kirshner R.P.: 1996, ApJ 473, 88
Riess A.G., Filippenko, A. V., Challis, P.: 1998, AJ 116, 1009
Riess, A. G., Kirshner, R. P., Schmidt, B. P.: 1999, AJ 117, 707
Shaw R.L.: 1979, A&A 76, 188
Sullivan, M., Ellis R.S., Aldering G. et al.: 2003, MNRAS 340, 1057
Timmes, F. X., Brown, E. F., Truran, J. W.: 2003, ApJ 590, L83
Umeda, H., Nomoto, K., Yamaoka, H., Wanajo, S.: 1999, ApJ 513, 861
---
abstract: |
The isoperimetric inequality in the Euclidean geometry (for polygons) states that among all $n$-gons having a fixed perimeter $p$, the one with the largest area is the regular $n$-gon. One can generalise this result to simple closed curves; in this case, the curve with the maximum area is the circle. The statement is true in hyperbolic geometry as well (see Bezdek [@Bez]).
In this paper, we generalize the isoperimetric inequality to disconnected regions, i.e. we allow the area to be split between regions. We give necessary and sufficient conditions for the isoperimetric inequality (in Euclidean and hyperbolic geometry) to hold for multiple $n$-gons whose perimeters add up to $p$.
address:
- |
Department of Mathematics and Statistics, IIT Kanpur\
Uttar Pradesh - 208016\
India
- |
Chennai Mathematical Institute\
Siruseri, Tamil Nadu - 603103\
India
author:
- Bidyut Sanki
- Arya Vadnere
title: Isoperimetric Inequality for Disconnected Regions
---
Introduction
============
The isoperimetric inequality is a profound result in Euclidean geometry, which states
Let $\Gamma$ be a simple closed curve in $\mathbb{R}^2$. Let $l$ be the length of $\Gamma$, and $A$ be the area of the region enclosed by the curve. Then we have $$l^{2}\geq4\pi A,$$ with equality holding if and only if $\Gamma$ is a circle.
For a proof (originally by Hurwitz, using Fourier series), see [@Cha].\
This result, which can be generalised to higher dimensions, encapsulates the geometric property that if one has a region with finite perimeter, then the area enclosed must be bounded.\
In particular, we can restrict our hypotheses to polygons –
\[iso\_ineq\] Among all $n-$gons with fixed area $A>0$, the one with the least perimeter is the regular $n-$gon. Equivalently, among all $n-$gons with fixed perimeter $P>0$, the one with the greatest area is the regular $n-$gon.
The proof is a standard exercise in Euclidean geometry. To prove it, we simply observe that among all triangles with a fixed base and fixed perimeter, the one with the largest area is the isosceles triangle.\
The result as stated for polygons (Theorem \[iso\_ineq\]) also holds true in hyperbolic geometry, as shown in [@Bez] by Bezdek.\
Now, a natural generalisation of this result is to consider collections of at most two polygons whose areas add up to a fixed value, instead of considering a single polygon with that fixed area. Then, in fact, one sees that the single regular $n$-gon still has the least (total) perimeter. Namely, we prove the result stated below.
\[Isoperimetric inequality for disconnected regions\] \[iso\_disc\_Euc\] Let $P_{1},P_{2}$ and $P$ be (Euclidean) regular $n$-gons with areas $A_{1},A_{2},A$ respectively. Suppose $A_{1}+A_{2}=A$. Then $$\operatorname*{Perim}\left(P_1\right)+\operatorname*{Perim}\left(P_2\right)\geq \operatorname*{Perim}(P),$$ where $\operatorname*{Perim}(X)$ denotes the perimeter of $X$.
The proof of Theorem \[iso\_disc\_Euc\] (see Section \[sec2\]) uses simple trigonometry on the Euclidean plane. Once proved, this easily extends (by induction) to a corresponding statement with multiple $n$-gons instead of just $2$.
Let $P$ and $P_1, \dots P_k$ be regular $n$-gons with areas $A$ and $A_1, \dots, A_k$ respectively. Suppose $\sum\limits_{i=1}^k A_i = A$. Then $$\sum\limits_{i=1}^k \operatorname*{Perim}\left(P_i\right) \geq \operatorname*{Perim}(P).$$
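For what it is worth, this can also be seen directly, using the fact (recalled in the next section) that the area of a regular $n$-gon is a fixed multiple of the square of its perimeter: writing $p_i=\operatorname*{Perim}(P_i)$ and $p=\operatorname*{Perim}(P)$, $$\sum_{i=1}^{k}A_{i}=A\;\Longrightarrow\;\sum_{i=1}^{k}p_{i}^{2}=p^{2}\;\Longrightarrow\;p=\Bigl(\sum_{i=1}^{k}p_{i}^{2}\Bigr)^{1/2}\leq\sum_{i=1}^{k}p_{i}.$$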
In general Theorem \[iso\_disc\_Euc\] as stated is not true for hyperbolic polygons, as we can have polygons with area bounded but perimeter tending to infinity (ideal polygons). So, by splitting the area evenly between $P_{1}$ and $P_{2}$ and letting $A$ approach its supremum $(n-2)\pi$, we keep $\operatorname*{Perim}(P_1)+\operatorname*{Perim}(P_2)$ bounded while $\operatorname*{Perim}(P)$ becomes arbitrarily large. However, in the Poincaré disk model of hyperbolic plane, as we move closer to the origin, our geometry *resembles* the Euclidean geometry; so we would expect that a similar result might work out for hyperbolic polygons with angles large enough (note - by Theorem \[iso\_ineq\], the requirement that $P_1$ and $P_2$ be regular is inessential).
In this paper, we try to find a working statement for Theorem \[iso\_disc\_Euc\] in hyperbolic geometry. We prove that –
\[Main Theorem\] \[thm\_main\] Let $P_{1},P_{2},P$ be regular hyperbolic $n-$gons $(n\geq3)$, with areas $A_{1},A_{2},A$ and interior angles $\theta_{1,},\theta_{2},\theta$ respectively. Furthermore, assume that $A_{1}+A_{2}=A$. Then we have constants $\varrho_0 > \kappa_0 > 0$, depending only on $n$, such that following hold.
1. If $\theta\geq\varrho_{0}$, then $\operatorname*{Perim}\left(P_{1}\right)+\operatorname*{Perim}\left(P_{2}\right)\geq\operatorname*{Perim}(P)$.
2. If $\theta<\kappa_{0}$, then we can construct polygons $P_{1},P_{2}$ satisfying the hypotheses, with $\operatorname*{Perim}\left(P_{1}\right)+\operatorname*{Perim}\left(P_{2}\right)<\operatorname*{Perim}\left(P\right)$.
The proof uses basic hyperbolic trigonometry to find an explicit expression for $\operatorname*{Perim}(X)$, given a hyperbolic regular $n$-gon $X$, which we analyze to obtain the desired bounds.\
Again, Theorem \[thm\_main\] can easily be extended to $k$ $n$-gons (inductively), with the same initial conditions (see Theorem \[thm\_for\_k\]).
Finally, we conjecture the values for necessary and sufficient conditions.
Euclidean geometry and Hyperbolic geometry {#sec2}
==========================================
Before we move on to hyperbolic geometry, let us quickly revise the isoperimetric inequality for disconnected regions, in the Euclidean setting.
Without much thought, one sees that for a regular $n$-gon with perimeter $p$ and area $a$, $$a=\frac{p^{2}}{4n\tan\left(\pi/n\right)}=K\cdot p^2$$ for $K=\frac{1}{4n\tan(\pi/n)}$, a constant depending only on $n$. Thus, $$A_{1}+A_{2}=A\implies p_{1}^{2}+p_{2}^{2}=p^{2},$$ which means that $p_{1},p_{2},p$ form the sides of a right triangle (by Pythagoras theorem). Then $p_{1}+p_{2}\geq p$ follows by the triangle inequality.
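For completeness, here is how the area formula used above may be recovered: a regular $n$-gon with side $s$ (so perimeter $p=ns$) decomposes into $n$ triangles with base $s$ and height equal to the apothem $\frac{s}{2}\cot\left(\frac{\pi}{n}\right)$, so $$a=n\cdot\frac{1}{2}\cdot s\cdot\frac{s}{2}\cot\left(\frac{\pi}{n}\right)=\frac{ns^{2}}{4\tan\left(\pi/n\right)}=\frac{p^{2}}{4n\tan\left(\pi/n\right)}=K\cdot p^{2}.$$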
This proof heavily uses results specific to Euclidean geometry, and cannot be easily extended to hyperbolic geometry.
As we remarked in the introduction, although the result is false in hyperbolic geometry as stated, we would expect a similar result to be true for polygons *bounded close to the origin*. Indeed, we have –
\[thm\_weak\] Let $P,P_{1},P_{2}$ be regular hyperbolic $n$-gons, for $n\geq3$, with areas $A,A_{1},A_{2}$, and interior angles $\theta,\theta_{1},\theta_{2}$ respectively. Suppose that
1. $\theta\geq\cos^{-1}\left(-1+2\sin\left(\frac{\pi}{n}\right)\right)$
2. $A_{1}+A_{2}=A$
Then $$\operatorname*{Perim}\left(P_{1}\right)+\operatorname*{Perim}\left(P_{2}\right)\geq\operatorname*{Perim}\left(P\right).$$ Here, equality only occurs if one of $A_1$ and $A_2$ is $0$, which is a degenerate case.
Throughout the proof, when we refer to “$n-$gons”, we mean hyperbolic regular $n$-gons. Consider an $n$-gon $\Gamma=\Gamma\left(\alpha,p,a\right)$ with angle $\alpha$, perimeter $p$, area $a=\left(n-2\right)\pi-n\alpha$. We consider a triangulated section of $\Gamma$ as in Figure \[fig:1\].
![A sector of the $n$-gon[]{data-label="fig:1"}](hyptriangle.png)
Consider the right angled triangle $\Delta OAM$ in Figure \[fig:1\]. We know[^1] that (see formula (v) in Theorem 2.2.2 [@Bus]) $$\cosh\left(\frac{s}{2}\right)=\frac{\cos\left(\frac{\pi}{n}\right)}{\sin\left(\frac{\alpha}{2}\right)}.$$
Thus, $s=2\cosh^{-1}\left(\frac{\cos\left(\pi/n\right)}{\sin\left(\alpha/2\right)}\right)$ and $p=ns$. So, $$\begin{aligned}
&& p_{1}+p_{2}\geq p\\ \iff && \cosh^{-1}\left(\frac{\cos\left(\pi/n\right)}{\sin\left(\theta_{1}/2\right)}\right)+\cosh^{-1}\left(\frac{\cos\left(\pi/n\right)}{\sin\left(\theta_{2}/2\right)}\right)\geq\cosh^{-1}\left(\frac{\cos\left(\pi/n\right)}{\sin\left(\theta/2\right)}\right).\end{aligned}$$ We are given that $A_{1}+A_{2}=A$; since the area of a regular $n$-gon with interior angle $\alpha$ is $(n-2)\pi-n\alpha$, this is equivalent to $\theta_{1}+\theta_{2}=\theta+\frac{\left(n-2\right)}{n}\pi$.\
Now, fix $n$, and consider the function $$f(x)=\cosh^{-1}\left(\frac{\cos\left(\pi/n\right)}{\sin\left(x/2\right)}\right),$$ defined over $\left(0,\frac{\left(n-2\right)}{n}\pi\right)$[^2]. Then $f$ is continuous, bijective and strictly decreasing on $\left(0,\frac{\left(n-2\right)}{n}\pi\right)$. Now, $$f''(x)=\frac{\left(\cos\frac{\pi}{n}\right)\csc^{5}\left(\frac{x}{2}\right)\left(8\cos^{2}\frac{\pi}{n}+4\cos x+\cos(2x)-5\right)}{16\left(\left(\cos\frac{\pi}{n}\right)\csc\frac{x}{2}-1\right)^{3/2}\left(\left(\cos\frac{\pi}{n}\right)\csc\frac{x}{2}+1\right)^{3/2}},$$ which has a unique root in $\left(0,\frac{\left(n-2\right)}{n}\pi\right)$ at $\theta_{0}=\cos^{-1}\left(-1+2\sin\left(\frac{\pi}{n}\right)\right)$
Thus, $f\mid_{\left(\theta_{0},\frac{(n-2)}{n}\pi\right)}$ is curve-above-chord. The inequality, i.e., the theorem, then follows from the following lemma:
\[lem\_ineq\] Let $f:[a,b]\to\mathbb{R}$ be a function which is curve-above-chord and strictly decreasing on $\left[a,b\right]$. Suppose $c,d\in\left(a,b\right)$ such that $c-a=b-d$. Then $$f(c)+f(d)\geq f(a)+f(b),$$ with equality if and only if $f$ is linear.
The proof of this lemma becomes clear by the following figure (Figure \[fig:2\]).
![Proof of the inequality[]{data-label="fig:2"}](ineq.png)
In the figure, $A\equiv a,B\equiv b,f(a)\equiv C,f(b)\equiv D,c\equiv E,d\equiv F,f(c)\equiv I,f(d)\equiv J$. Triangles $\Delta CKG$ and $\Delta HLD$ are right angled; and $$c-a=b-d\implies\overline{AE}\cong\overline{BF}\implies\Delta CKG\cong\Delta HLD.$$
Now, $$\overline{AK}\cong\overline{GE},\:\overline{FL}\cong\overline{BD}\implies f(a)+f(b)=l\left(\overline{GE}\right)+l\left(\overline{HF}\right)$$
Since $f$ is curve-above-chord, $f(c)\geq l\left(\overline{GE}\right),f(d)\geq l\left(\overline{HF}\right)$, so $f(a)+f(b)\leq f(c)+f(d)$; with equality if and only if $f$ is linear.
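The picture can also be turned into a one-line computation: let $\ell$ be the affine function (the chord) with $\ell(a)=f(a)$ and $\ell(b)=f(b)$. Since $c+d=a+b$ and $\ell$ is affine, $$f(c)+f(d)\;\geq\;\ell(c)+\ell(d)\;=\;2\,\ell\left(\tfrac{a+b}{2}\right)\;=\;\ell(a)+\ell(b)\;=\;f(a)+f(b),$$ where the first inequality is the curve-above-chord hypothesis at $c$ and $d$.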
The desired conclusion now follows directly, as $f\left(\frac{n-2}{n}\pi\right)=0$.
Before we proceed to improve the bounds, let’s formally show that this result extends to multiple $n$-gons.
\[thm\_for\_k\] Suppose we have regular hyperbolic $n$-gons $P_{1},\dots,P_{k}$ and $P$ with areas $A_{1},\dots,A_{k},A$ respectively and interior angles $\theta_{1},\dots,\theta_{k},\theta$ respectively. Further, assume that
1. $\sum\limits_{i=1}^{k}A_{i}=A$
2. $\theta\geq\cos^{-1}\left(-1+2\sin\left(\frac{\pi}{n}\right)\right)$
Then we have $$\sum_{i=1}^{k}\operatorname*{Perim}\left(P_{i}\right)\geq\operatorname*{Perim}(P).$$ Furthermore, equality only occurs in the degenerate case, where all except one of the $A_i$ are 0.
We proceed inductively. The case $k=1$ is the trivial case, and case $k=2$ is the content of Theorem \[thm\_weak\]. Assume the result is true for $k-1$ polygons. Suppose we now have $n$-gons $P_{1},\dots,P_{k}$ satisfying the hypotheses. Construct a new regular hyperbolic $n$-gon $Q$ with interior angle $\psi$ and area $\sum_{i=1}^{k-1}A_{i}=A-A_{k}$. Then, since $$A-A_{k}\leq A \text{ and } \psi\geq\theta\geq\cos^{-1}\left(-1+2\sin\left(\pi/n\right)\right),$$ the inductive hypothesis gives $$\sum_{i=1}^{k-1}\text{Perim}\left(P_{i}\right)\geq\text{Perim}(Q).$$
Next, consider polygons $P_{k}$ and $Q$. They satisfy the hypotheses of Theorem \[thm\_weak\]. Hence, we have $$\text{Perim}\left(P_{k}\right)+\text{Perim}\left(Q\right)\geq\text{Perim}\left(P\right).$$ The desired conclusion follows.
Improving the bounds
====================
In Section \[sec2\], Lemma \[lem\_ineq\] says that the inequality holds if the curve is always above the chord on $(a,b)$. Now, consider the function $$f(x)=\cosh^{-1}\left(\frac{\cos\left(\pi/n\right)}{\sin\left(x/2\right)}\right) \text{ on } \left(0,\frac{n-2}{n}\pi\right).$$ Let $A$ be the point in $\mathbb{R}^2$ with coordinate $\left(\frac{n-2}{n}\pi,0\right).$ Let the tangent from $A$ to the curve $y=f(x)$ touch the curve at $B\left(\varrho_{0},f\left(\varrho_{0}\right)\right)$. Note - $\varrho_{0}$ is the solution to the equation (whose left-hand side is $\left|f'(x)\right|$) $$\frac{\cos\left(\pi/n\right)\cos\left(x/2\right)}{2\sin\left(x/2\right)\sqrt{\cos^{2}\left(\pi/n\right)-\sin^{2}\left(x/2\right)}}=\frac{\cosh^{-1}\left(\frac{\cos\left(\pi/n\right)}{\sin\left(x/2\right)}\right)}{\left(\frac{n-2}{n}\right)\pi-x}.$$ So $\varrho_{0}$ is well-defined and computable.
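A minimal numerical sketch of this computation (our own illustration, not part of the argument; it assumes NumPy/SciPy are available, and the helper name `rho0` and the bracketing tolerance are hypothetical choices):

```python
import numpy as np
from scipy.optimize import brentq

def rho0(n, eps=1e-9):
    """Locate rho_0 for a regular hyperbolic n-gon by solving the tangency condition."""
    b = (n - 2) * np.pi / n                      # right endpoint, where f vanishes

    def f(x):                                    # f(x) = arccosh(cos(pi/n)/sin(x/2))
        return np.arccosh(np.cos(np.pi / n) / np.sin(x / 2))

    def fprime_abs(x):                           # |f'(x)|, the closed form quoted above
        c = np.cos(np.pi / n)
        return c * np.cos(x / 2) / (2 * np.sin(x / 2) * np.sqrt(c**2 - np.sin(x / 2)**2))

    # Tangency through ((n-2)pi/n, 0):  |f'(x)| * (b - x) = f(x)
    g = lambda x: fprime_abs(x) * (b - x) - f(x)
    return brentq(g, eps, b - eps)

# Example: compare with the bound of Theorem [thm_weak] for n = 5.
print(rho0(5), np.arccos(-1 + 2 * np.sin(np.pi / 5)))
```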
Let $C\left(x,f(x)\right)$ be an arbitrary point on the curve $y=f(x)$. Then the chord $\overline{AC}$ lies below the curve if and only if $x\geq\varrho_{0}$. Hence, we have the following.
\[upperbound\] We can replace the bound $\cos^{-1}\left(-1+2\sin\left(\frac{\pi}{n}\right)\right)$ by $\varrho_0$ in Theorem \[thm\_weak\] and the result holds true.
However, this condition is still not strong enough to be necessary. For example, consider Figure \[fig:3\].
![Graph of $f(x)=\cosh^{-1}\left(\frac{\cos\left(\pi/5\right)}{\sin\left(x/2\right)}\right)$[]{data-label="fig:3"}](bad_ineq.png)
Here, points $A,B,C,D$ denote $\left(a,f(a)\right),\left(b,f(b)\right)$,$\left(c,f(c)\right)$ and $\left(d,f(d)\right)$; where $a,b,c,d$ are as in Lemma \[lem\_ineq\]. We have picked $a<\varrho_0$, so $\overline{AB}$ intersects the curve again at $G$; but as $c$ varies, $f(c)+f(d) \geq f(a)+f(b)$ always holds. This is governed by how the function decays. Let us study this carefully.
\[lem:3.2\] Let $f:(a,b)\to\mathbb{R}$ hold as before and let $y=g(x)$ be a smooth, monotonically decreasing function such that $g(x)\to+\infty$ as $x\to0$ and $g(b)=0$ for some $b>0$. Furthermore, assume that $g$ has exactly one inflection point, with $g''>0$ before it. Let $B=\left(b,0\right)$, and let $A=\left(a,g(a)\right)$ be such that $\overline{AB}$ intersects $y=g(x)$ at a third point $X=\left(x,g(x)\right)$. Then the ratio $\frac{AX}{XB}\to\infty$ monotonically as $a\to0$.
We have $g(a)\to+\infty$ as $a\to0$. Therefore, the slope of $\overline{AB}$ is monotonically decreasing to $-\infty$. One can picture the line $\overline{AB}$ sweeping the region clockwise, fixed at $B$. By the nature of $g$, as slope of $\overline{AB}$ decreases to $-\infty$, $x\to b$. Hence, $\frac{AX}{XB}=\frac{x-a}{b-x}\to\infty$.
Note that our original function $f$ satisfies all the hypotheses of Lemma \[lem:3.2\]. Thus, as we reduce $a$, we will find a unique value $a=\kappa_{0}$ such that $\frac{AX}{XB}=1$ (that is, $x=\frac{a+b}{2}$). Now, for $a<\kappa_{0}$, $\frac{AX}{XB}>1$, so $x>\frac{a+b}{2}$. Letting $\lambda=\frac{1}{2}\left(x-\frac{a+b}{2}\right)$, we can take $c=\frac{a+b}{2}-\lambda$ and $d=\frac{a+b}{2}+\lambda$. Then $c,d<x$; and we can use the ideas of Lemma \[lem\_ineq\] to see that $f(a)+f(b)>f(c)+f(d)$. Thus, we have proved –
\[lowerbound\] (Assuming notation as in Theorem \[thm\_weak\]) If $\theta<\kappa_{0}$, then we can construct polygons $P_{1},P_{2}$ satisfying the hypotheses, with $\operatorname*{Perim}\left(P_{1}\right)+\operatorname*{Perim}\left(P_{2}\right)<\operatorname*{Perim}\left(P\right)$.
Theorem \[upperbound\] and Theorem \[lowerbound\] together give Theorem \[thm\_main\].
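A hedged numerical sketch of the construction behind Theorem \[lowerbound\] (not part of the proofs above): assuming NumPy and SciPy, one can locate $\kappa_0$ as the value of $a$ at which the chord $\overline{AB}$ meets the curve exactly at the midpoint $\frac{a+b}{2}$.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch (not from the paper): find kappa_0, the value of a for which
# the chord from A = (a, f(a)) to B = (b, 0) crosses y = f(x) at x = (a+b)/2.
def f(x, n):
    return np.arccosh(np.cos(np.pi / n) / np.sin(x / 2))

def midpoint_gap(a, n):
    """Signed gap (chord - curve) at the midpoint; zero when a = kappa_0."""
    b = (n - 2) * np.pi / n
    x_mid = (a + b) / 2
    chord = f(a, n) * (b - x_mid) / (b - a)   # line through (a, f(a)) and (b, 0)
    return chord - f(x_mid, n)

n = 5
b = (n - 2) * np.pi / n
# Near a ~ b the curve lies above the chord (gap < 0); for small a the chord
# dominates (gap > 0), so the root is bracketed.
kappa0 = brentq(lambda a: midpoint_gap(a, n), 1e-3, b - 1e-2)
print(f"n = {n}: kappa_0 ~ {kappa0:.6f} radians")
```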
Further thoughts and conjectures
================================
We have not found explicit expressions for $\kappa_0$ and $\varrho_0$ in terms of $n$. Finding them could be insightful.
Now, based on numerical evidence, we have –
The condition $\theta \geq \kappa_0$ is also sufficient.
The proof of our result in the Euclidean setting was far less computational once Pythagoras’ Theorem came into play: the additivity of the areas corresponds neatly to the sum of the squares of the side lengths.

Hyperbolic geometry also has a notion of Pythagoras’ Theorem (see [@Fam]). Perhaps it extends to give a short and elegant proof of Theorem \[thm\_main\].
It may further be possible to generalize these results from polygons to simple closed curves in the hyperbolic plane.
I. Chavel, *Isoperimetric Inequalities: Differential Geometric and Analytic Perspectives*, Chapter 2.

K. Bezdek, *Ein elementarer Beweis für die isoperimetrische Ungleichung in der euklidischen und hyperbolischen Ebene*, Ann. Univ. Sci. Budapest. Eötvös Sect. Math. 27 (1984), 107–112.

P. Buser, *Geometry and Spectra of Compact Riemann Surfaces*, Progress in Mathematics, Vol. 106.

M. T. Familiari-Calapso, *Sur une classe de triangles et sur le théorème de Pythagore en géométrie hyperbolique*, C. R. Acad. Sci. Paris Sér. A–B 268 (1969), A603–A604.
[^1]: Trigonometry of hyperbolic right angled triangles
[^2]: Note: as $x\to0$, $f(x)\to\infty$, and as $x\to\frac{\left(n-2\right)}{n}\pi$, $f(x)\to0$.
---
abstract: 'These lectures are intended to give a pedagogical introduction to the main current picture of the very early universe. After elementary reviews of general relativity and of the standard Big Bang model, the following subjects are discussed: inflation, the classical relativistic theory of cosmological perturbations and the generation of perturbations from scalar field quantum fluctuations during inflation.'
author:
- |
David Langlois\
GRECO, Institut d’Astrophysique de Paris (CNRS)\
98bis Boulevard Arago, 75014 Paris, France
title: 'Inflation, quantum fluctuations and cosmological perturbations'
---
Introduction
============
The purpose of these lectures is to give an introduction to the present [*standard picture of the early universe*]{}, which [*complements*]{} the older standard Big Bang model. These notes are intended for non-experts on this subject. They start with a very short introduction to General Relativity, on which modern cosmology is based, followed by an elementary review of the standard Big Bang model. We then discuss the limitations of this model and enter into the main subject of these lectures: [*inflation*]{}.
Inflation was initially invented to solve some of the problems of the standard Big Bang model and to get rid of unwanted relics generically predicted by high energy models. It turned out that inflation, as it was realized later, could also solve an additional puzzle of the standard model, that of the generation of the cosmological perturbations. This welcome surprise put inflation on a rather firm footing, about twenty years ago. Twenty years later, inflation is still alive, in a stronger position than ever because its few competitors have been eliminated as new cosmological observations have accumulated during the last few years.
A few elements on general relativity and cosmology
==================================================
Modern cosmology is based on Einstein’s theory of general relativity. It is thus useful, before discussing the early universe, to recall a few notions and useful formulas from this theory. Details can be found in standard textbooks on general relativity (see e.g. [@GR]). In the framework of general relativity, the spacetime geometry is defined by a [*metric*]{}, a symmetric tensor with two indices, whose components in a coordinate system $\{x^\mu\}$ ($\mu=0,1,2,3$) will be denoted $g_{\mu\nu}$. The square of the “distance” between two neighbouring points of spacetime is given by the expression $$ds^2=g_{\mu\nu}\,dx^\mu dx^\nu.$$ We will use the signature $(-,+,+,+)$.
In a coordinate change $x^\mu\rightarrow \tilde x^\mu$, the new components of the metric are obtained by using the standard tensor transformation formulas, namely $$\tilde g_{\mu\nu}={\partial x^\rho\over\partial\tilde x^\mu}\,{\partial x^\sigma\over\partial\tilde x^\nu}\,g_{\rho\sigma}.$$
One can define a [*covariant derivative*]{} associated to this metric, denoted $\D_\mu$, whose action on a tensor with, for example, one covariant index and one contravariant index will be given by $$\D_\mu T^\nu_{\ \lambda}=\partial_\mu T^\nu_{\ \lambda}+\Gamma^\nu_{\mu\sigma}T^\sigma_{\ \lambda}-\Gamma^\sigma_{\mu\lambda}T^\nu_{\ \sigma}$$ (a similar term must be added for each additional covariant or contravariant index), where the $\Gamma$ are the [*Christoffel symbols*]{} (they are not tensors), defined by $$\Gamma^\lambda_{\mu\nu}={1\over2}g^{\lambda\sigma}\left(\partial_\mu g_{\sigma\nu}+\partial_\nu g_{\mu\sigma}-\partial_\sigma g_{\mu\nu}\right). $$ \[christoffel\] We have used the notation $g^{\mu\nu}$ which corresponds, for the metric (and only for the metric), to the inverse of $g_{\mu\nu}$ in a matricial sense, i.e. $g_{\mu\sigma}g^{\sigma\nu}=\delta_\mu^\nu$.
The “curvature” of spacetime is characterized by the [*Riemann*]{} tensor, whose components can be expressed in terms of the Christoffel symbols according to the expression $$R_{\mu\nu\rho}^{\ \ \ \ \sigma}=\partial_\nu\Gamma_{\mu\rho}^\sigma-\partial_\mu\Gamma_{\nu\rho}^\sigma+\Gamma_{\nu\lambda}^\sigma\Gamma_{\mu\rho}^\lambda-\Gamma_{\mu\lambda}^\sigma\Gamma_{\nu\rho}^\lambda.$$
[*Einstein’s equations*]{} relate the spacetime geometry to its matter content. The geometry appears in Einstein’s equations via the [*Ricci tensor*]{}, defined by $$R_{\mu\nu}=R_{\mu\sigma\nu}^{\ \ \ \ \sigma},$$ and the [*scalar curvature*]{}, which is the trace of the Ricci tensor, i.e. $$R=g^{\mu\nu}R_{\mu\nu}.$$ The matter enters Einstein’s equations via the [*energy-momentum tensor*]{}, denoted $T_{\mu\nu}$, whose time/time component corresponds to the energy density, the time/space components to the momentum density and the space/space component to the stress tensor. Einstein’s equations then read $$G_{\mu\nu}\equiv R_{\mu\nu}-{1\over2}R\,g_{\mu\nu}=8\pi G\,T_{\mu\nu}, $$ \[einstein\] where the tensor $G_{\mu\nu}$ is called the [*Einstein tensor*]{}. Since, by construction, the Einstein tensor satisfies the identity $\D_\mu G^{\mu}_{\, \nu}=0$, any energy-momentum tensor on the right-hand side of Einstein’s equations must necessarily satisfy the relation $$\D_\mu T^{\mu}_{\, \nu}=0, $$ \[DT0\] which can be interpreted as a generalization, in the context of a curved spacetime, of the familiar conservation laws for energy and momentum.
The motion of a particle is described by its trajectory in spacetime, $x^\mu(\lambda)$, where $\lambda$ is a parameter. A free particle, i.e. one which does not feel any force (other than gravity), satisfies the [*geodesic equation*]{}, which reads $$t^\nu\D_\nu t^\mu=0,$$ where $t^\mu=dx^\mu/d\lambda$ is the vector field tangent to the trajectory (note that the geodesic equation written in this form assumes that the parameter $\lambda$ is affine). Equivalently, the geodesic equation can be rewritten as $${d^2x^\mu\over d\lambda^2}+\Gamma^\mu_{\nu\rho}{dx^\nu\over d\lambda}{dx^\rho\over d\lambda}=0. $$ \[geodesique\] The geodesic equation applies both to
- massive particles, in which case one usually takes as the parameter $\lambda$ the so-called [*proper time*]{} so that the corresponding tangent vector $u^\mu$ is normalized: $g_{\mu\nu}u^\mu u^\nu=-1$;
- massless particles, in particular the photon, in which case the tangent vector, usually denoted $k^\mu$ is light-like, i.e. $g_{\mu\nu}k^\mu k^\nu=0$.
Einstein’s equations can also be obtained from a variational principle. The corresponding action reads $$S={1\over16\pi G}\int d^4x\,\sqrt{-g}\left(R-2\Lambda\right)+\int d^4x\,\sqrt{-g}\,{\cal L}_{\rm mat}.$$ One can check that the variation of this action with respect to the metric $g_{\mu\nu}$, upon using the definition $$T^{\mu\nu}={2\over\sqrt{-g}}{\delta\left(\sqrt{-g}\,{\cal L}_{\rm mat}\right)\over\delta g_{\mu\nu}}, $$ \[def\_Tmunu\] indeed gives Einstein’s equations $$G_{\mu\nu}+\Lambda g_{\mu\nu}=8\pi G\,T_{\mu\nu}.$$ This is a slight generalization of Einstein’s equations (\[einstein\]) that includes a [*cosmological constant*]{} $\Lambda$. It is worth noticing that the cosmological constant can also be interpreted as a particular energy-momentum tensor of the form $T_{\mu\nu}=-(8\pi G)^{-1}\Lambda g_{\mu\nu}$.
Review of standard cosmology
----------------------------
In this subsection, the foundations of modern cosmology are briefly recalled. They follow from Einstein’s equations introduced above and from a few hypotheses concerning spacetime and its matter content. One of the essential assumptions of cosmology (so far confirmed by observations) is to consider, as a first approximation, the universe as being homogeneous and isotropic. Note that these symmetries define implicitly a particular “slicing” of spacetime, the corresponding space-like hypersurfaces being homogeneous and isotropic. A different slicing of the [*same*]{} spacetime will give in general space-like hypersurfaces that are not homogeneous and isotropic.
The above hypothesis turns out to be very restrictive and the only metrics compatible with this requirement reduce to the so-called [*Robertson-Walker*]{} metrics, which read in an appropriate coordinate system $$ds^2=-dt^2+a^2(t)\left[{dr^2\over1-\kappa r^2}+r^2\left(d\theta^2+\sin^2\theta\,d\phi^2\right)\right], $$ \[RW\] with $\kappa=0,-1,1$ depending on the curvature of spatial hypersurfaces: respectively flat, hyperbolic or elliptic.
The matter content compatible with the spacetime symmetries of homogeneity and isotropy is necessarily described by an energy-momentum tensor of the form (in the same coordinate system as for the metric (\[RW\])): $$T^\mu_{\ \nu}={\rm Diag}\left(-\rho(t),P(t),P(t),P(t)\right).$$ The quantity $\rho$ corresponds to an energy density and $P$ to a pressure.
One can show that the so-called comoving particles, i.e. those particles whose spatial coordinates are constant in time, satisfy the geodesic equation (\[geodesique\]).
Friedmann-Lemaître equations
----------------------------
Substituting the Robertson-Walker metric (\[RW\]) in Einstein’s equations (\[einstein\]), one gets the so-called [*Friedmann-Lemaître equations*]{}: $$\begin{aligned}
\left({\dot a\over a}\right)^2 &=& {8\pi G\rho\over 3}- {\kappa\over a^2},
\label{friedmann}
\\
{\ddot a\over a} &=& -{4\pi G\over 3}\left(\rho+3 P\right).
\label{friedmann2}\end{aligned}$$ An immediate consequence of these two equations is the [*continuity equation*]{} $$\dot\rho+3H\left(\rho+P\right)=0, $$ \[conservation\] where $H\equiv \dot a/a$ is the [*Hubble parameter*]{}. The continuity equation can also be obtained directly from the energy-momentum conservation $\D_\mu T^{\mu}_{\, \nu}=0$, as mentioned before.
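As a cross-check of the two Friedmann-Lemaître equations in the flat case, here is a short symbolic computation; it is only an illustrative sketch (assuming the sympy package is available), not part of the lectures themselves.

```python
import sympy as sp

# Minimal sketch (not from the lectures): compute the Einstein tensor of the
# flat (kappa = 0) FLRW metric and recover G_00 = 3 (da/dt)^2 / a^2.
t, x1, x2, x3 = sp.symbols('t x1 x2 x3')
a = sp.Function('a', positive=True)(t)
coords = (t, x1, x2, x3)
g = sp.diag(-1, a**2, a**2, a**2)     # ds^2 = -dt^2 + a(t)^2 dx^i dx_i
ginv = g.inv()

def christoffel(l, m, n):
    return sum(ginv[l, s] * (sp.diff(g[s, m], coords[n])
                             + sp.diff(g[s, n], coords[m])
                             - sp.diff(g[m, n], coords[s])) / 2
               for s in range(4))

def ricci(m, n):
    return sp.simplify(
        sum(sp.diff(christoffel(l, m, n), coords[l])
            - sp.diff(christoffel(l, l, m), coords[n])
            + sum(christoffel(l, l, s) * christoffel(s, m, n)
                  - christoffel(l, n, s) * christoffel(s, l, m)
                  for s in range(4))
            for l in range(4)))

R = sp.simplify(sum(ginv[m, n] * ricci(m, n) for m in range(4) for n in range(4)))
G00 = sp.simplify(ricci(0, 0) - sp.Rational(1, 2) * R * g[0, 0])
Gxx = sp.simplify(ricci(1, 1) - sp.Rational(1, 2) * R * g[1, 1])

print(sp.simplify(G00 - 3 * sp.diff(a, t)**2 / a**2))            # expect 0
print(sp.simplify(Gxx + 2 * a * sp.diff(a, t, 2) + sp.diff(a, t)**2))  # expect 0
```

Setting $G_{00}=8\pi G\,T_{00}$ and $G_{ij}=8\pi G\,T_{ij}$ with the perfect-fluid source then reproduces (\[friedmann\]) and (\[friedmann2\]) for $\kappa=0$.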
In order to determine the cosmological evolution, it is easier to combine (\[friedmann\]) with (\[conservation\]). Let us assume an equation of state for the cosmological matter of the form $p=w\rho$ with $w$ constant. This includes the two main types of matter that play an important rôle in cosmology:
- gas of relativistic particles, $w=1/3$;
- non relativistic matter, $w\simeq 0$.
In these cases, the conservation equation (\[conservation\]) can be integrated to give $$\rho\propto a^{-3(1+w)}.$$ Substituting in (\[friedmann\]), one finds, for $\kappa=0$, $$3{\dot a^2\over a^2}=8\pi G\rho_0\left({a\over a_0}\right)^{-3(1+w)},$$ where, by convention, the subscript ’0’ stands for [*present*]{} quantities. One thus finds $\dot a^2\propto a^{2-3(1+w)}$, which gives for the evolution of the scale factor
- in a universe dominated by non relativistic matter $$a(t)\propto t^{2/3},$$

- and in a universe dominated by radiation $$a(t)\propto t^{1/2}.$$

One can also mention the case of a [*cosmological constant*]{}, which corresponds to an equation of state $w=-1$ and thus implies an exponential evolution for the scale factor $$a(t)\propto\exp(Ht).$$
More generally, when several types of matter coexist with respectively $p_{(i)}=w_{(i)}\rho_{(i)}$, it is convenient to introduce the dimensionless parameters $$\Omega_{(i)}={8\pi G\,\rho_0^{(i)}\over3H_0^2},$$ which express the [*present*]{} ratio of the energy density of some given species with respect to the so-called [*critical energy density*]{} $\rho_{crit}= 3 H_0^2/(8\pi G)$, which corresponds to the total energy density for a flat universe.
One can then rewrite the first Friedmann equation (\[friedmann\]) as $$\left({H\over H_0}\right)^2=\sum_i\Omega_{(i)}\left({a\over a_0}\right)^{-3(1+w_{(i)})}+\Omega_\kappa\left({a\over a_0}\right)^{-2},$$ with $\Omega_\kappa=-\kappa/a_0^2H_0^2$, which implies that the cosmological parameters must satisfy the consistency relation $$\sum_i\Omega_{(i)}+\Omega_\kappa=1.$$ As for the second Friedmann equation (\[friedmann2\]), it implies $${\ddot a_0\over a_0H_0^2}=-{1\over2}\sum_i\Omega_{(i)}\left(1+3w_{(i)}\right).$$ Present cosmological observations yield for the various parameters
- Baryons: $\Omega_b\simeq 0.05$,
- Dark matter: $\Omega_d\simeq 0.25$,
- Dark energy (compatible with a cosmological constant): $\Omega_{\Lambda}\simeq 0.7$,
- Photons: $\Omega_\gamma\simeq 5\times 10^{-5} $.
Observations have not detected so far any deviation from flatness. Radiation is very subdominant today but extrapolating backwards in time, radiation was dominant in the past since its energy density scales as $\rho_\gamma\propto a^{-4}$ in contrast with non relativistic matter ($\rho_m\propto a^{-3}$). Moreover, since the present matter content seems dominated by dark energy similar to a cosmological constant ($w_\Lambda=-1$), this indicates that our universe is presently accelerating.
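The rescaled Friedmann equation above is easy to use in practice. The short sketch below (not from the lectures; it assumes NumPy and SciPy, and uses illustrative round values of the density parameters rather than the measured ones) computes the age of the universe in units of $1/H_0$.

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch: age of the universe from t0 = int_0^1 dx / (x H(x)/H0),
# with x = a/a0. Density parameters are illustrative round numbers.
Om_m, Om_L, Om_r = 0.3, 0.7, 5e-5
Om_k = 1.0 - Om_m - Om_L - Om_r          # consistency relation

def hubble_ratio(x):
    """H/H0 as a function of x = a/a0."""
    return np.sqrt(Om_r * x**-4 + Om_m * x**-3 + Om_k * x**-2 + Om_L)

age, _ = quad(lambda x: 1.0 / (x * hubble_ratio(x)), 0.0, 1.0)
print(f"t0 ~ {age:.3f} / H0")            # ~ 0.96 / H0 for these parameters

# Redshift of matter-radiation equality, where rho_m = rho_r:
z_eq = Om_m / Om_r - 1.0
print(f"matter-radiation equality at z ~ {z_eq:.0f}")
```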
The cosmological redshift
-------------------------
An important consequence of the expansion of the universe is the [*cosmological redshift*]{} of photons. This is in fact how the expansion of the universe was discovered initially.
Let us consider two light signals emitted by a comoving object at two successive instants $t_e$ and $t_e+\delta t_e$, then received later at respectively $t_o$ and $t_o +\delta t_o$ by a (comoving) observer. One can always set the observer at the center of the coordinate system. All light trajectories reaching the observer are then radial and one can write, using (\[RW\]), $$\int_0^{r_e}{dr\over\sqrt{1-\kappa r^2}}=\int_{t_e}^{t_o}{dt\over a(t)}.$$ The left-hand side being identical for the two successive trajectories, the variation of the right-hand side must vanish, which yields $${\delta t_o\over a_o}-{\delta t_e\over a_e}=0.$$ This implies for the frequencies measured at emission and at reception a redshift given by $$1+z={a_o\over a_e}.$$
Thermal history of the universe
-------------------------------
To go beyond a simply geometrical description of cosmology, it is very fruitful to apply thermodynamics to the matter content of the universe. One can then define a temperature $T$ for the cosmological photons, not only when they are strongly interacting with ordinary matter but also after they have decoupled because, with the expansion, the thermal distribution for the gas of photons is unchanged except for a global rescaling of the temperature so that $T$ essentially evolves as $$T\propto{1\over a(t)}.$$ This means that, going backwards in time, the universe was hotter and hotter. This is the essence of the hot Big Bang scenario.
As the universe evolves, the reaction rates between the various species are modified. A detailed analysis of these changes enables to reconstruct the past thermal history of the universe. Two events in particular play an essential rôle because of their observational consequences:
- Primordial nucleosynthesis
Nucleosynthesis occurred at a temperature around $0.1$ MeV, when the average kinetic energy became sufficiently low so that nuclear binding was possible. Protons and neutrons could then combine, which led to the production of light elements, such as Helium, Deuterium, Lithium, etc. Within the observational uncertainties, this scenario is remarkably confirmed by the present measurements.
- Decoupling of baryons and photons (or last scattering)
A more recent event is the so-called “recombination” of nuclei and electrons to form atoms. This occurred at a temperature of the order of the eV. Free electrons thus almost disappeared, which entailed an effective decoupling of the cosmological photons and ordinary matter. What we see today as the Cosmic Microwave Background (CMB) is made of the fossil photons, which interacted for the last time with matter at the last scattering epoch. The CMB represents a remarkable observational tool for analysing the perturbations of the early universe, as well as for measuring the cosmological parameters introduced above.
Puzzles of the standard Big Bang model
--------------------------------------
The standard Big Bang model has encountered remarkable successes, in particular with the nucleosynthesis scenario and the prediction of the CMB, and it remains today a cornerstone in our understanding of the present and past universe. However, a few intriguing facts remain unexplained in the strict scenario of the standard Big Bang model and seem to necessitate a larger framework. We review below the main problems:
- Homogeneity problem
A first question is why the approximation of homogeneity and isotropy turns out to be so good. Indeed, inhomogeneities are unstable, because of gravitation, and they tend to grow with time. It can be verified for instance with the CMB that inhomogeneities were much smaller at the last scattering epoch than today. One thus expects that these homogeneities were still smaller further back in time. How to explain a universe so smooth in its past ?
- Flatness problem
Another puzzle lies in the (spatial) flatness of our universe. Indeed, Friedmann’s equation (\[friedmann\]) implies $$\Omega-1\equiv{8\pi G\rho\over3H^2}-1={\kappa\over a^2H^2}.$$ In standard cosmology, the scale factor behaves like $a\sim t^p$ with $p<1$ ($p=1/2$ for radiation and $p=2/3$ for non-relativistic matter). As a consequence, $(aH)^{-2}$ grows with time and $|\Omega-1|$ must thus diverge with time. Therefore, in the context of the standard model, the quasi-flatness observed today requires an extreme fine-tuning of $\Omega$ near $1$ in the early universe.
- Horizon problem
![Evolution of the comoving Hubble radius $\lambda_H=(aH)^{-1}$: during standard cosmology, $\lambda_H$ increases (continuous lines), whereas during inflation $\lambda_H$ shrinks (dashed lines).](horizon2.eps){width="4.8in"}
One of the most fundamental problems in standard cosmology is certainly the [*horizon problem*]{}. The (particle) [*horizon*]{} is the maximal distance that can be covered by a light ray. For a light-like radial trajectory, $dr=dt/a(t)$, and the horizon is thus given by $$d_{H}(t)=a(t)\int_{t_i}^t{dt'\over a(t')}=a(t)\,{t^{1-q}-t_i^{1-q}\over1-q},$$ where the last equality is obtained by assuming $a(t)\sim t^q$ and $t_i$ is some initial time.
In standard cosmology ($q<1$), the integral converges in the limit $t_i=0$ and the horizon has a finite size, of the order of the so-called Hubble radius $H^{-1}$: $$d_H(t)={q\over1-q}\,H^{-1}.$$
It is also useful to consider the [*comoving Hubble radius*]{}, $(aH)^{-1}$, which represents the fraction of comoving space in causal contact. One finds that it [*grows*]{} with time, which means that the [*fraction of the universe in causal contact increases with time*]{} in the context of standard cosmology. But the CMB tells us that the Universe was quasi-homogeneous at the time of last scattering on a scale encompassing many regions a priori causally independent. How can this be explained?
A solution to the horizon problem and to the other puzzles is provided by the inflationary scenario, which we will examine in the next section. The basic idea is to invert the behaviour of the comoving Hubble radius, that is to make it [*decrease*]{} sufficiently in the very early universe. The corresponding condition is that $$\ddot a>0,$$ i.e. that the Universe must undergo a [*phase of acceleration*]{}.
Inflation
=========
The broadest definition of inflation is that it corresponds to a phase of acceleration of the universe, $$\ddot a>0.$$ In this broad sense, the current cosmological observations, if correctly interpreted, mean that our present universe is undergoing an inflationary phase. We are however interested here in an inflationary phase taking place in the very early universe, with different energy scales.

The Friedmann equation (\[friedmann2\]) tells us that one can get acceleration only if the equation of state satisfies the condition $$P<-{\rho\over3},$$ a condition which looks at first sight rather exotic.

A very simple example giving such an equation of state is a cosmological constant, corresponding to a cosmological fluid with the equation of state $$P=-\rho.$$ However, a strict cosmological constant leads to exponential inflation [*forever*]{}, which cannot be followed by a radiation or matter era. Another possibility is a scalar field, which we now discuss in some detail.
Cosmological scalar fields
--------------------------
The dynamics of a scalar field coupled to gravity is governed by the action \[action\_scalar\_field\] $$S_\phi=\int d^4x\,\sqrt{-g}\left(-{1\over2}\partial^\mu\phi\,\partial_\mu\phi-V(\phi)\right).$$ The corresponding energy-momentum tensor, which can be derived using (\[def\_Tmunu\]), is given by $$T_{\mu\nu}=\partial_\mu\phi\,\partial_\nu\phi-g_{\mu\nu}\left({1\over2}\partial^\sigma\phi\,\partial_\sigma\phi+V(\phi)\right). $$ \[Tscalarfield\] If one assumes the geometry, and thus the matter, to be homogeneous and isotropic, then the energy-momentum tensor reduces to the perfect fluid form with the energy density $$\rho=-T_0^0={1\over2}\dot\phi^2+V(\phi),$$ where one recognizes the sum of a kinetic energy and of a potential energy, and the pressure $$P={1\over2}\dot\phi^2-V(\phi).$$ The equation of motion for the scalar field is the Klein-Gordon equation, which is obtained by taking the variation of the above action (\[action\_scalar\_field\]) with respect to the scalar field and which reads $$\D^\mu\D_\mu\phi=V'$$ in general, and $$\ddot\phi+3H\dot\phi+V'=0$$ in the particular case of a FLRW (Friedmann-Lemaître-Robertson-Walker) universe.
The system of equations governing the dynamics of the scalar field and of the geometry in a FLRW universe is thus given by $$\begin{aligned}
& &H^2={8\pi G\over 3}\left({1\over 2}\dot\phi^2+V(\phi)\right),
\label{e1}\\
& &\ddot\phi+3H\dot \phi+V'=0,
\label{e2}\\
& & \dot H=-4\pi G\dot\phi^2.
\label{e3}\end{aligned}$$ The last equation can be derived from the first two and is therefore redundant.
The slow-roll regime
--------------------
The dynamical system (\[e1\]-\[e3\]) does not always give an accelerated expansion but it does so in the so-called [*slow-roll regime*]{} when the potential energy of the scalar field dominates over its kinetic energy.
More specifically, the so-called [*slow roll*]{} approximation consists in neglecting the kinetic energy of the scalar field, $\dot \phi^2/2$, in (\[e1\]) and the acceleration $\ddot\phi$ in the Klein-Gordon equation (\[e2\]). One then gets the simplified system $$\begin{aligned}
& &H^2\simeq{8\pi G\over 3} V,
\label{sr1}\\
& &3H\dot \phi+V'\simeq 0.
\label{sr2}\end{aligned}$$ Let us now examine in which regime this approximation is valid. From (\[sr2\]), the velocity of the scalar field is given by $$\dot\phi\simeq-{V'\over3H}. $$ \[phisr\] Substituting this relation in the condition $\dot \phi^2/2 \ll V$ yields the requirement $$\epsilon_V\equiv{\mP^2\over2}\left({V'\over V}\right)^2\ll1, $$ \[epsilon\] where we have introduced the [*reduced Planck mass*]{} $$\mP\equiv{1\over\sqrt{8\pi G}}.$$ Similarly, the time derivative of (\[phisr\]), using the time derivative of (\[sr1\]), gives, after substitution in $\ddot \phi\ll V'$, the condition $$\eta_V\equiv\mP^2\,{V''\over V},\qquad|\eta_V|\ll1. $$ \[eta\] In summary, the slow-roll approximation is valid when the two conditions $\epsilon_V\ll1$ and $|\eta_V| \ll 1$ are satisfied, which means that the slope and the curvature of the potential, in Planck units, must be sufficiently small.
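For a concrete potential, the two slow-roll parameters can be obtained with a one-line symbolic computation. The sketch below (not from the lectures; it assumes sympy and works in units $\mP=1$) uses the quadratic potential $V=m^2\phi^2/2$ as an example.

```python
import sympy as sp

# Minimal sketch: epsilon_V and eta_V for V = m^2 phi^2 / 2, with m_P = 1.
phi, m = sp.symbols('phi m', positive=True)
V = m**2 * phi**2 / 2

eps_V = sp.simplify((sp.diff(V, phi) / V)**2 / 2)   # (m_P^2 / 2) (V'/V)^2
eta_V = sp.simplify(sp.diff(V, phi, 2) / V)         # m_P^2 V''/V

print(eps_V)                       # 2/phi**2
print(eta_V)                       # 2/phi**2
phi_end = sp.solve(sp.Eq(eps_V, 1), phi)[0]
print(phi_end)                     # sqrt(2): inflation ends at phi ~ sqrt(2) m_P
```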
Number of e-folds
-----------------
When working with a specific inflationary model, it is important to be able to relate the cosmological scales observed at the present time to the scales during inflation. For this purpose, one usually introduces the [*number of e-foldings before the end of inflation*]{}, denoted $N$ and simply defined by $$N\equiv\ln{a_{end}\over a},$$ where $a_{end}$ is the value of the scale factor at the end of inflation and $a$ is a fiducial value for the scale factor during inflation. By definition, $N$ [*decreases*]{} during the inflationary phase and reaches zero at its end. In the slow-roll approximation, it is possible to express $N$ as a function of the scalar field. Since $dN=-d\ln a=-H dt=-(H/\dot\phi) d\phi$, one easily finds, using (\[phisr\]) and (\[sr1\]), that $$N(\phi)\simeq\int_{\phi_{end}}^{\phi}{V\over m_P^2\,V'}\,d\phi.$$ Given an explicit potential $V(\phi)$, one can in principle integrate the above expression to obtain $N$ in terms of $\phi$. This will be illustrated below for some specific models.
Let us now discuss the link between $N$ and the present cosmological scales. Let us consider a given scale characterized by its comoving wavenumber $k=2\pi/\lambda$. This scale crossed outside the Hubble radius, during inflation, at an instant $t_*(k)$ defined by $$k=a(t_*)\,H(t_*).$$ To get a rough estimate of the number of e-foldings of inflation that are needed to solve the horizon problem, let us first ignore the transition from a radiation era to a matter era and assume for simplicity that the inflationary phase was followed instantaneously by a radiation phase that has lasted until now. During the radiation phase, the comoving Hubble radius $(aH)^{-1}$ increases like $a$. In order to solve the horizon problem, the increase of the comoving Hubble radius during the standard evolution must be compensated by [*at least*]{} a decrease of the same amount during inflation. Since the comoving Hubble radius roughly scales like $a^{-1}$ during inflation, the minimum amount of inflation is simply given by the number of e-folds between the end of inflation and today, $\ln(a_0/a_{end}) = \ln(T_{end}/T_0)\sim \ln(10^{29}(T_{end}/ 10^{16} {\rm GeV}))$, i.e. around 60 e-folds for a temperature $T\sim 10^{16} {\rm GeV}$ at the beginning of the radiation era. As we will see later, this energy scale is typical of inflation in the simplest models.
This determines roughly the number of e-folds $N(k_0)$ between the moment when the scale corresponding to our present Hubble radius $k_0=a_0H_0$ exited the Hubble radius during inflation and the end of inflation. The other lengthscales of cosmological interest are [*smaller*]{} than $k_0^{-1}$ and therefore exited the Hubble radius during inflation [*after*]{} the scale $k_0$, whereas they entered the Hubble radius during the standard cosmological phase (either in the radiation era for the smaller scales or in the matter era for the larger scales) [*before*]{} the scale $k_0$.
A more detailed calculation, which distinguishes between the energy scales at the end of inflation and after the reheating, gives for the number of e-folds between the exit of the mode $k$ and the end of inflation $$N(k)\simeq62-\ln{k\over a_0H_0}+\ln{V_k^{1/4}\over10^{16}\,{\rm GeV}}+\ln{V_k^{1/4}\over V_{end}^{1/4}}+{1\over3}\ln{\rho_{reh}^{1/4}\over V_{end}^{1/4}}.$$ Since the smallest scale of cosmological relevance is of the order of $1$ Mpc, the range of cosmological scales covers about $9$ e-folds.
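The formula above is straightforward to evaluate numerically. The sketch below (not from the lectures; it assumes NumPy and takes, purely for illustration, $V_k^{1/4}=V_{end}^{1/4}=\rho_{reh}^{1/4}=10^{16}$ GeV, i.e. instantaneous reheating at the inflationary energy scale) shows the roughly 9 e-folds spanned by cosmological scales.

```python
import numpy as np

# Minimal sketch: N(k) for illustrative energy scales (in GeV).
def N_of_k(k_over_a0H0, V14_k=1e16, V14_end=1e16, rho14_reh=1e16):
    return (62.0
            - np.log(k_over_a0H0)
            + np.log(V14_k / 1e16)
            + np.log(V14_k / V14_end)
            + np.log(rho14_reh / V14_end) / 3.0)

print(N_of_k(1.0))       # present Hubble scale, k = a0 H0  ->  ~ 62
print(N_of_k(3000.0))    # a ~ Mpc scale is a few thousand times smaller: ~ 54
```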
The above number of e-folds is altered if one changes the thermal history of the universe between inflation and the present time by including for instance a period of so-called thermal inflation.
Power-law potentials
--------------------
It is now time to illustrate all the points discussed above with some specific potential. We consider first the case of power-law monomial potentials, of the form $$V(\phi)={\lambda\over p}\,\mP^4\left({\phi\over\mP}\right)^p, $$ \[pot\_power-law\] which have been abundantly used in the literature. In particular, the above potential includes the case of a free massive scalar field, $V(\phi)=m^2\phi^2/2$. The slow-roll conditions $\epsilon\ll 1$ and $\eta \ll 1$ both imply $$\phi\gg p\,\mP/\sqrt{2}, $$ \[sr\_pl\] which means that the scalar field amplitude must be above the Planck mass during inflation.
After substituting the potential (\[pot\_power-law\]) into the slow-roll equations of motion (\[sr1\]-\[sr2\]), one can integrate them explicitly to get $$\phi^{2-{p\over2}}-\phi_i^{2-{p\over2}}=-{4-p\over2}\,\sqrt{\lambda p\over3}\;\mP^{3-{p\over2}}\left(t-t_i\right)$$ for $p\neq 4$ and $$\phi=\phi_i\,\exp\left[-2\sqrt{\lambda\over3}\,\mP\left(t-t_i\right)\right]$$ for $p=4$.
One can also express the scale factor as a function of the scalar field (and thus as a function of time by substituting the above expression for $\phi(t)$) by using $d\ln a/ d\phi=H/\dot \phi
\simeq-\phi/(p m_P^2)$. One finds $$a=a_{end}\,\exp\left[-{\phi^2-\phi_{end}^2\over2p\,\mP^2}\right].$$ Defining the end of inflation by $\epsilon_V=1$, one gets $\phi_{end}=p \, \mP/\sqrt{2}$ and the number of e-folds is thus given by $$N(\phi)\simeq{\phi^2\over2p\,\mP^2}-{p\over4}. $$ \[N\_phi\] This can be inverted, so that $$\phi(N)\simeq\sqrt{2pN}\,\mP,$$ where we have ignored the second term on the right-hand side of (\[N\_phi\]), in agreement with the condition (\[sr\_pl\]).
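The inversion $\phi(N)\simeq\sqrt{2pN}\,\mP$ makes it easy to evaluate where on the potential the observable scales left the Hubble radius. The small sketch below (not from the lectures; it assumes NumPy and works in units $\mP=1$) evaluates the field value and the slow-roll parameters $N=60$ e-folds before the end of inflation for $p=2$ and $p=4$.

```python
import numpy as np

# Minimal sketch: slow-roll quantities for V ~ phi^p, N e-folds before the end
# of inflation, using phi_N ~ sqrt(2 p N), eps_V = p^2/(2 phi^2),
# eta_V = p (p - 1)/phi^2 (reduced Planck units).
def chaotic(p, N=60):
    phi_N = np.sqrt(2.0 * p * N)
    eps = p**2 / (2.0 * phi_N**2)
    eta = p * (p - 1.0) / phi_N**2
    return phi_N, eps, eta

for p in (2, 4):
    phi_N, eps, eta = chaotic(p)
    print(f"p={p}: phi_60 ~ {phi_N:.1f} m_P, eps_V ~ {eps:.4f}, eta_V ~ {eta:.4f}")
```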
Exponential potential
---------------------
If one considers a potential of the form $$V=V_0\,\exp\left(-\sqrt{2\over q}\,{\phi\over\mP}\right),$$ then it is possible to find an [*exact*]{} solution (i.e. valid beyond the slow-roll approximation) of the system (\[e1\]-\[e3\]), with a power-law scale factor, i.e. $$a(t)\propto t^q.$$ The evolution of the scalar field is given by the expression $$\phi(t)=\sqrt{2q}\,\mP\,\ln\left[\sqrt{V_0\over q(3q-1)}\,{t\over\mP}\right].$$ Note that one recovers the slow-roll approximation in the limit $q\gg 1$, since the slow-roll parameters are given by $\epsilon_V=1/q$ and $\eta_V=2/q$.
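One can check this exact power-law behaviour by integrating the full system (\[e1\])-(\[e2\]) numerically. The sketch below (not from the lectures; it assumes NumPy and SciPy, sets $\mP=1$, and picks $q=10$ and an arbitrary $V_0$) starts on the exact solution and monitors the effective exponent $d\ln a/d\ln t$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: exponential potential, exact power-law inflation check.
q, V0 = 10.0, 1e-10
lam = np.sqrt(2.0 / q)                     # V = V0 exp(-lam * phi), m_P = 1

def V(phi):  return V0 * np.exp(-lam * phi)
def dV(phi): return -lam * V(phi)

def rhs(t, y):
    phi, dphi, lna = y
    H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)     # Friedmann equation (e1)
    return [dphi, -3.0 * H * dphi - dV(phi), H]     # Klein-Gordon (e2)

# Start on the exact solution phi(t) = sqrt(2q) ln[sqrt(V0/(q(3q-1))) t].
t0 = 1.0 / np.sqrt(V0)
phi0 = np.sqrt(2.0 * q) * np.log(np.sqrt(V0 / (q * (3.0 * q - 1.0))) * t0)
sol = solve_ivp(rhs, (t0, 100.0 * t0), [phi0, np.sqrt(2.0 * q) / t0, 0.0],
                rtol=1e-10, atol=1e-12)

# Effective exponent d(ln a)/d(ln t): should stay close to q = 10.
q_eff = np.gradient(sol.y[2], np.log(sol.t))
print(q_eff[[1, len(q_eff) // 2, -2]])
```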
Brief history of the inflationary models
----------------------------------------
Let us now try to summarize in a few lines the history of inflationary models. The first model of inflation is usually traced back to Alan Guth [@Guth:1980zm] in 1981, although one can see a precursor in the model of Alexei Starobinsky [@Starobinsky:te]. Guth’s model, which is named today [*old inflation*]{} is based on a first-order phase transition, from a false vacuum with non zero energy, which generates an exponential inflationary phase, into a true vacuum with zero energy density. The true vacuum phase appears in the shape of bubbles via quantum tunneling. The problem with this inflationary model is that, in order to get sufficient inflation to solve the problems of the standard model mentioned earlier, the nucleation rate must be sufficiently small; but, then, the bubbles never coalesce because the space that separates the bubbles undergoes inflation and expands too rapidly. Therefore, the first model of inflation is not phenomenologically viable.
After this first and unsuccessful attempt, a new generation of inflationary models appeared, usually denoted [*new inflation*]{} models [@new-inflation]. They rely on a second order phase transition, based on thermal corrections of the effective potential and thus assume that the scalar field is in thermal equilibrium.
This hypothesis of thermal equilibrium was given up in the third generation of models, initiated by Andrei Linde and generically named [*chaotic inflation*]{} [@Linde:gd]. This allows one to use extremely simple potentials, quadratic or quartic, which lead to inflationary phases when the scalar field is displaced from the origin by values of the order of several Planck masses. This is sometimes considered problematic from a particle physics point of view, as discussed briefly later.
During the last few years, there has been a revival of the inflation model building based on high energy theories, in particular in the context of supersymmetry. In these models, the value of the scalar field is much smaller than the Planck mass.
The inflationary “zoology”
--------------------------
There exist many models of inflation. As far as single-field models are concerned (or at least [*effectively*]{} single field during inflation, the hybrid models requiring a second field to end inflation as discussed below), it is convenient to regroup them into three broad categories:
- Large field models ($0<\eta\leq \epsilon$)
The scalar field is displaced from its stable minimum by $\Delta \phi\sim m_P$. This includes the so-called chaotic type models with monomial potentials $$V(\phi)=\Lambda^4\left({\phi\over\mu}\right)^p,$$ or the exponential potential $$V(\phi)=\Lambda^4\,\exp\left(\phi/\mu\right),$$ which have already been discussed.
This category of models is widely used in the literature because of its computational simplicity. But they are not always considered to be models which can be well motivated by particle physics. The reason is the following. The generic potential for a scalar field will contain an infinite number of terms, $$V(\phi)=V_0+{1\over2}m^2\phi^2+{\lambda_3\over3}\phi^3+{\lambda_4\over4}\phi^4+\sum_{d=5}^{\infty}\lambda_d\,m_P^{4-d}\phi^d,$$ where the non-renormalizable ($d>4$) couplings $\lambda_d$ are a priori of order $1$. When the scalar field is of the order of a few Planck masses, one has no control over the form of the potential, and all the non-renormalizable terms must in principle be taken into account.
In order to work with more specific forms for the potential, inflationary model builders tend to concentrate on models where the scalar field amplitude is small with respect to the Planck mass, as those discussed just below.
- Small field models ($\eta<0<\epsilon $)
In this type of model, the scalar field is rolling away from an unstable maximum of the potential. This is a characteristic feature of spontaneous symmetry breaking. A typical potential is $$V(\phi)=\Lambda^4\left[1-\left({\phi\over\mu}\right)^p\right],$$ which can be interpreted as the lowest-order term in a Taylor expansion about the origin. Historically, this potential shape appeared in the so-called ‘new inflation’ scenario.
A particular feature of these models is that tensor modes are much more suppressed with respect to scalar modes than in the large-field models, as it will be shown later.
![Schematic potential for the three main categories of inflationary models: a. chaotic models; b. symmetry breaking models; c. hybrid models](inflation.eps){width="4.8in"}
- Hybrid models ($0<\epsilon<\eta$)
![Typical potential $V(\phi, \psi)$ for hybrid inflation.](hybrid.eps){width="4.8in"}
This category of models, which appeared rather recently, relies on the presence of two scalar fields: one plays the traditional rôle of the inflaton, while the other is necessary to end inflation.
As an illustration, let us consider the original model of hybrid inflation [@hybrid] based on the potential $$V(\phi,\psi)={1\over2}m^2\phi^2+{1\over2}\lambda'\psi^2\phi^2+{\lambda\over4}\left(M^2-\psi^2\right)^2.$$ For values of the field $\phi$ larger than the critical value $\phi_c=\sqrt{\lambda/\lambda'}\,M$, the potential for $\psi$ has its minimum at $\psi=0$. This is the case during inflation. $\psi$ is thus trapped in this minimum $\psi=0$, so that the effective potential for the scalar field $\phi$, which plays the rôle of the inflaton, is given by $$V_{eff}(\phi)=V_0+{1\over2}m^2\phi^2,$$ with $V_0=\lambda M^4/4$. During the inflationary phase, the field $\phi$ slow-rolls until it reaches the critical value $\phi_c$. The shape of the potential for $\psi$ is then modified and new minima appear at $\psi=\pm M$. $\psi$ will thus roll down into one of these new minima and, as a consequence, inflation will end.
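The critical value quoted above follows from the curvature of the potential in the $\psi$ direction. A short symbolic check (not from the lectures; it assumes sympy, with `lam` and `lam_p` standing for $\lambda$ and $\lambda'$) is sketched below.

```python
import sympy as sp

# Minimal sketch: effective mass squared of psi at psi = 0 in the hybrid
# potential, and the critical inflaton value phi_c where it changes sign.
phi, psi, m, M = sp.symbols('phi psi m M', positive=True)
lam, lam_p = sp.symbols('lam lam_p', positive=True)
V = m**2 * phi**2 / 2 + lam_p * psi**2 * phi**2 / 2 + lam * (M**2 - psi**2)**2 / 4

mass2_psi = sp.diff(V, psi, 2).subs(psi, 0)
print(sp.simplify(mass2_psi))                   # lam_p*phi**2 - lam*M**2
phi_c = sp.solve(sp.Eq(mass2_psi, 0), phi)[0]
print(phi_c)                                    # M*sqrt(lam/lam_p)
```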
As far as inflation is concerned, hybrid inflation scenarios correspond effectively to single-field models with a potential characterized by $V''(\phi)>0$ and $0<\epsilon<\eta$. A typical potential is $$V(\phi)=\Lambda^4\left[1+\left({\phi\over\mu}\right)^p\right].$$ Once more, this potential can be seen as the lowest order in a Taylor expansion about the origin.
In the case of hybrid models, the value $\phi_N$ of the scalar field as a function of the number of e-folds before the end of inflation is not determined by the above potential and, therefore, $(\phi_N/\mu)$ can be considered as a freely adjustable parameter.
Many more details on inflationary models can be found in e.g. [@linde; @lr; @ll].
The theory of cosmological perturbations
========================================
So far, we have concentrated our attention on strictly homogeneous and isotropic aspects of cosmology. Of course, this idealized version, although extremely useful, is not sufficient to account for real cosmology and it is now time to turn to the study of deviations from homogeneity and isotropy.
In cosmology, inhomogeneities grow because of the attractive nature of gravity, which implies that inhomogeneities were much smaller in the past. As a consequence, for most of their evolution, inhomogeneities can be treated as [*linear perturbations*]{}. The linear treatment ceases to be valid on small scales in our recent past, hence the difficulty to reconstruct the primordial inhomogeneities from large-scale structure, but it is quite adequate to describe the fluctuations of the CMB at the time of last scattering. This is the reason why the CMB is currently the best observational probe of primordial inhomogeneities. For more details on the relativistic theory of cosmological perturbations, which will be briefly introduced in this chapter, the reader is invited to read the standard reviews [@cosmo_perts] in the literature.
From now on, we will mostly be working with the conformal time $\eta$, instead of the cosmic time $t$. The conformal time is defined by $$d\eta={dt\over a(t)},$$ so that the (spatially flat) FLRW metric takes the remarkably simple form $$ds^2=a^2(\eta)\left(-d\eta^2+\delta_{ij}dx^idx^j\right). $$ \[metric\_eta\]
Perturbations of the geometry
-----------------------------
Let us start with the linear perturbations of the geometry. The most general linear perturbation of the FLRW metric can be expressed as $$ds^2=a^2(\eta)\left\{-(1+2A)d\eta^2+2B_i\,dx^id\eta+\left(\delta_{ij}+h_{ij}\right)dx^idx^j\right\}, $$ \[metpert\] where we have considered only the spatially flat FLRW metric.
We have introduced a time plus space decomposition of the perturbations. The indices $i$, $j$ stand for [*spatial*]{} indices and the perturbed quantities defined in (\[metpert\]) can be seen as three-dimensional tensors, for which the indices can be lowered (or raised) by the spatial metric $\d_{ij}$ (or its inverse).
It is very convenient to separate the perturbations into three categories, the so-called “scalar”, “vector” and “tensor” modes. For example, a spatial vector field $B^i$ can be decomposed uniquely into a longitudinal part and a transverse part, $$B_i=\nabla_iB+\bar B_i,\qquad\nabla_i\bar B^i=0, $$ \[Bi\] where the longitudinal part is curl-free and can thus be expressed as a gradient, and the transverse part is divergenceless. One thus gets one “scalar” mode, $B$, and two “vector” modes $\bar B^i$ (the index $i$ takes three values but the divergenceless condition implies that only two components are independent).
A similar procedure applies to the symmetric tensor $h_{ij}$, which can be decomposed as $$h_{ij}=2C\delta_{ij}+2\nabla_i\nabla_jE+2\nabla_{(i}E_{j)}+\bar E_{ij}, $$ \[hij\] with $\bar E^{ij}$ transverse and traceless (TT), i.e. $\nabla_i\bar E^{ij}=0$ (transverse) and $\bar E^{ij}\delta_{ij}=0$ (traceless), and $E_i$ transverse. The parentheses around the indices denote symmetrization, namely $2\nabla_{(i}E_{j)}= \nabla_{i}E_{j} + \nabla_{j}E_{i}$. We have thus defined two scalar modes, $C$ and $E$, two vector modes, $E_i$, and two tensor modes, $\bar E_{ij}$.
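To make the scalar/vector split concrete, here is a small numerical sketch (not from the lectures; it assumes NumPy) that separates a periodic vector field into its longitudinal and transverse parts in Fourier space, where $\nabla_i\to ik_i$.

```python
import numpy as np

# Minimal sketch: longitudinal/transverse split of a random vector field B_i.
N = 32
rng = np.random.default_rng(0)
B = rng.standard_normal((3, N, N, N))

k = np.fft.fftfreq(N) * 2 * np.pi
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
kvec = np.array([kx, ky, kz])
k2 = np.sum(kvec**2, axis=0)
k2[0, 0, 0] = 1.0                      # avoid dividing by zero for the k=0 mode

Bk = np.fft.fftn(B, axes=(1, 2, 3))
proj = np.sum(kvec * Bk, axis=0) / k2  # (k . B) / k^2
B_long_k = kvec * proj                 # longitudinal (scalar) part
B_trans_k = Bk - B_long_k              # transverse (vector) part

B_long = np.real(np.fft.ifftn(B_long_k, axes=(1, 2, 3)))
B_trans = np.real(np.fft.ifftn(B_trans_k, axes=(1, 2, 3)))

# Checks: the two pieces sum back to B, and the transverse part is
# divergence-free (k . B_trans = 0 in Fourier space).
print(np.allclose(B_long + B_trans, B, atol=1e-10))
print(np.max(np.abs(np.sum(kvec * B_trans_k, axis=0))) < 1e-8)
```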
In the following, we will be mainly interested in the metric with only [*scalar*]{} perturbations, since scalar modes are the most relevant for cosmology and they can be treated independently of the vector and tensor modes. In matrix notation, the perturbed metric will thus be of the form $$\label{scalar_pert_metric}
g_{\mu\nu} =a^2 \left[
\begin{array}{ccc}
-(1+2A) & & \nabla_iB \\ && \\
\nabla_j B & & \left\{ (1+2C) \d_{ij} + 2\nabla_i\nabla_jE \right\}
\end{array}
\right] \,.$$ After the description of the perturbed geometry, we turn to the perturbations of the matter in the next subsection.
Perturbations of the matter
---------------------------
Quite generally, the perturbed energy-momentum tensor can be written in the form $$\label{Tmunu}
T^\mu_\nu =
\left[
\begin{array}{ccc}
-(\rho+\delta\rho) & & q_j \\ &&\\
-q^i+(\rho+P)B^i & & (P+\delta P)\delta^i_j + \pi^i_j
\end{array}
\right] \,,
\label{pert_T}$$ where $q_i$ is the momentum density and $\pi^i_j$ is the anisotropic stress tensor, which is traceless, i.e. $\pi_k^k=0$. One can then decompose these tensors into scalar, vector and tensor components, as before for the metric components, so that q\_i=\_i q+|[q]{}\_i, \_i|[q]{}\^i=0, \[qi\] and \_[ij]{}=\_i\_j -[13]{} \_[ij]{}\_k\^k+2\_[(i]{}\_[j)]{}+|\_[ij]{}, \_k \^k=0, \_k|\^[kl]{}=0, |\^k\_k=0. \[pi\_ij\]
### Fluid
A widely used description for matter in cosmology is that of a fluid. Its homogeneous part is described by the energy-momentum tensor of a perfect fluid, as seen earlier, while its perturbed part can be expressed as $$\begin{aligned}
&\d T_0^0=&-\d\rho, \\
&\d T_0^i=&-\left(\rho +p\right)v^i,
\quad \d T^0_i=\left(\rho +p\right)\left(v_i+B_i\right)\\
&\d T^i_j=&\d P\d^i_j+\pi^i_j, \end{aligned}$$ with $\pi_k^k=0$ as before and where $v^i$ is the three-dimensional fluid velocity defined by $$u^i={1\over a}v^i.$$ It is also possible to separate this perturbed energy-momentum tensor into scalar, vector and tensor parts by using decompositions for $v_i$ and $\pi_{ij}$ similar to (\[qi\]) and (\[pi\_ij\]).
### Scalar field
Another type of matter, especially useful in the context of inflation, is a scalar field. The homogeneous description has already been given earlier and the perturbed expression for the energy-momentum tensor follows immediately from (\[Tscalarfield\]), taking into account the metric perturbations as well. One finds $$\begin{aligned}
a^2\d T_0^0&=&-a^2\d\rho=-\phi'{\d\phi}'-a^2V'\d\phi+{\phi'}^2A,\\
a^2\d T^0_i&=& a^2q_i=-\phi'\partial_i\d\phi, \\
a^2\d T^i_j&=& -\d^i_j\left(a^2V'\d\phi+{\phi'}^2A-\phi'\d\phi'\right).\end{aligned}$$ The last equation shows that, for a scalar field, there is no anisotropic stress in the energy-momentum tensor.
Gauge transformations
---------------------
It is worth noticing that there is a fundamental difference between the perturbations in general relativity and the perturbations in other field theories where the underlying spacetime is fixed. In the latter case, one can define the perturbation of a given field $\phi$ as $$\d\phi(p)=\phi(p)-{\bar\phi}(p), $$ \[pert\] where ${\bar\phi}$ is the unperturbed field and $p$ is any point of the spacetime.

In the context of general relativity, spacetime is no longer a frozen background but must also be perturbed if matter is perturbed. As a consequence, the above definition does not make sense since the perturbed quantity $\phi$ lives in the perturbed spacetime ${\cal M}$, whereas the unperturbed quantity ${\bar\phi}$ lives in [*another spacetime*]{}: the unperturbed spacetime of reference, which we will denote $\bar{\cal M}$. In order to use a definition similar to (\[pert\]), one must introduce a one-to-one [*identification*]{}, $\iota$, between the points of $\bar{\cal M}$ and those of ${\cal M}$. The perturbation of the field can then be defined as $$\d\phi(\bar{p})=\phi\left(\iota(\bar{p})\right)-{\bar\phi}(\bar{p}),$$ where $\bar{p}$ is a point of $\bar{\cal M}$.
However, the identification $\iota$ is not uniquely defined, and therefore the definition of the perturbation depends on the particular choice for $\iota$: two different identifications, $\iota_1$ and $\iota_2$ say, lead to two different definitions for the perturbations. This ambiguity can be related to the freedom of choice of the coordinate system. Indeed, if one is given a coordinate system in $\bar{\cal M}$, one can transport it into ${\cal M}$ via the identification. $\iota_1$ and $\iota_2$ thus define two different coordinate systems in ${\cal M}$, and in this respect, a [*change of identification*]{} can be seen as a [*change of coordinates*]{} in ${\cal M}$.
The metric perturbations, introduced in (\[metpert\]), are modified in a coordinate transformation of the form $$x^\mu\rightarrow x^\mu+\xi^\mu,\qquad\xi^\mu=(\xi^0,\xi^i). $$ \[transjauge\] It can be shown that the change of the metric components can be expressed as $$\Delta(\d g_{\mu\nu})=-2\D_{(\mu}\xi_{\nu)},$$ where $\Delta$ represents the variation, due to the coordinate transformation, at the same old and new coordinates (and thus at different physical points). The above variation can be decomposed into individual variations for the various components of the metric defined earlier. One finds
\Delta A&=&-{\xi^0}'-\h\xi^0 \label{gt1}\\
\Delta B_i&=&\nabla_i\xi^0-\xi_i' \\
\Delta h_{ij}&=&- 2 \left(\nabla_{(i}\xi_{j)}
-\h\xi^0\d_{ij}\right). \label{gt3}\end{aligned}$$
The effect of a coordinate transformation can also be decomposed along the scalar, vector and tensor sectors introduced earlier. The generator $\xi^\alpha$ of the coordinate transformation can be written as $$\xi^\alpha=\left(\xi^0,\nabla^i\xi+\overline{\xi}^i\right),$$ with $\overline{\xi}^i$ transverse. This shows explicitly that $\xi^\alpha$ contains two scalar components, $\xi^0$ and $\xi$, and two vector components, $\overline{\xi}^i$. The transformations (\[gt1\]-\[gt3\]) are then decomposed into: $$\begin{aligned}
A&\rightarrow &A-{\xi^0}'-\h\xi^0 \nonumber\\
B&\rightarrow &B+\xi^0-\xi' \nonumber\\
C&\rightarrow &C-\h\xi^0 \nonumber\\
E&\rightarrow &E-\xi \label{transfjauge} \\
\overline{B}^i&\rightarrow &\overline{B}^i-{{\overline{\xi}}^i}' \nonumber\\
E^i&\rightarrow &E^i-\overline{\xi}^i. \nonumber
\end{aligned}$$ The tensor perturbations remain unchanged since $\xi^\alpha$ does not contain any tensor component. The matter perturbations, either in the fluid description or in the scalar field description, follow similar transformation laws in a coordinate change.
In order to study the physically relevant modes and not spurious modes due to coordinate ambiguities, two strategies can a priori be envisaged. The first consists in working from the start in a specific gauge. A familiar choice in the literature on cosmological perturbations is the [*longitudinal gauge*]{} (also called the conformal Newtonian gauge), which imposes $$B_\L=0,\qquad E_\L=0.$$
The second approach consists in defining [*gauge-invariant variables*]{}, i.e. variables that are left unchanged under a coordinate transformation. For the scalar metric perturbations, we start with four quantities ($A$, $B$, $C$ and $E$) and we can use two gauge transformations ($\xi^0$ and $\xi$). This implies that the scalar metric perturbations must be described by [*two*]{} independent gauge-invariant quantities. Two such quantities are $$\Phi\equiv A+\h\left(B-E'\right)+\left(B-E'\right)'$$ and $$\Psi\equiv-C-\h\left(B-E'\right),$$ as can be checked by considering the explicit transformations in (\[transfjauge\]). It turns out that, in the longitudinal gauge, the remaining scalar perturbations $A_\L$ and $C_\L$ are numerically equal to the gauge-invariant quantities just defined, $\Phi$ and $-\Psi$ respectively.
In practice, one can combine the two strategies by doing explicit calculations in a given gauge and then by relating the quantities defined in this gauge to some gauge-invariant variables. It is then possible to translate the results in any other gauge. In the rest of these lectures, we will use the longitudinal gauge.
The perturbed Einstein equations
--------------------------------
After having defined the metric and the matter perturbations, we can now relate them via the perturbed Einstein equations. We will consider here explicitly [*only the scalar sector*]{}, which is the most complicated but also the most interesting for cosmological applications.
Starting from the perturbed metric (\[scalar\_pert\_metric\]), one can compute the components of Einstein’s tensor at linear order. In the [*longitudinal gauge*]{}, i.e. with $B_\L=E_\L=0$, one finds $$\begin{aligned}
\label{dG00}
\left(\d G^0_0\right)_\L&=&
{2\over a^2}\left[3\h^2A_\L-3\h C'_\L+\nabla^2 C_\L\right] \\
\label{dG0i}
\left(\d G^0_i\right)_\L&=&
{2\over a^2}\nabla_i\left[-\h A_\L+C'_\L\right] \\
\label{dGij}
\left(\d G^i_j\right)_\L&=&
{1\over a^2}\nabla^i\nabla_j\left(-C_\L-A_\L\right)
+{1\over a^2}\left[-2 C_\L''-4\h C_\L'+\nabla^2 C_\L \right. \cr
& & \left.+2\h A_\L'
+\nabla^2 A_\L+2\left(2\h'+\h^2\right)A_\L
\right]\delta^i_j.\end{aligned}$$
Combining with the perturbations of the energy-momentum tensor given in (\[pert\_T\]), the perturbed Einstein equations yield, [*in the longitudinal gauge*]{}, the following relations: the energy constraint (from (\[dG00\])) $$3\h^2 A_\L-3\h C_\L'+\nabla^2 C_\L=-4\pi G a^2\,\d\rho_\L, $$ \[energy\] the momentum constraint (from (\[dG0i\])) $$C_\L'-\h A_\L=4\pi G a^2\,q_\L, $$ \[momentum\] the “anisotropy constraint” (from the traceless part of (\[dGij\])) $$-A_\L-C_\L=8\pi G a^2\,\pi_\L, $$ \[anisotropy\] and finally $$C_\L''+2\h C_\L'-\h A_\L'-\left(2\h'+\h^2\right)A_\L-{1\over3}\nabla^2\left(A_\L+C_\L\right)=-4\pi G a^2\,\d P_\L, $$ \[dynamics\] obtained from the trace of (\[dGij\]).
The combination of the energy and momentum constraints gives the useful relation $$\nabla^2\Psi=4\pi G a^2\left(\d\rho_\L-3\h q_\L\right)\equiv4\pi G a^2\,\d\rho_c, $$ \[poisson\] where we have introduced the [*comoving*]{} energy density perturbation $\d\rho_c$: this gauge-invariant quantity corresponds, according to its definition, to the energy density perturbation measured in comoving gauges characterized by $\d T^0_i=q_i=0$. We have also replaced $C_\L$ by $-\Psi$. Note that the above equation is quite similar to the Newtonian Poisson equation, but with quantities whose natural interpretation is given in [*different*]{} gauges.
Equations for the matter
------------------------
As mentioned earlier, a consequence of Einstein’s equations is that the [*total*]{} energy-momentum tensor is covariantly conserved (see Eq. (\[DT0\])). For a fluid, the conservation of the energy-momentum tensor leads to a continuity equation that generalizes the [*continuity equation*]{} of fluid mechanics, and a momentum conservation equation that generalizes the [*Euler equation*]{}. In the case of a single fluid, combinations of the perturbed Einstein equations obtained in the previous subsection lead necessarily to the perturbed continuity and Euler equations for the fluid. In the case of several non-interacting fluids, however, one must impose [*separately*]{} the covariant conservation of [*each*]{} energy-momentum tensor: this is not a consequence of Einstein’s equations, which impose only the conservation of the [*total*]{} energy-momentum tensor.
With several cosmological fluids in mind, it is therefore useful to write the perturbation equations satisfied by a given fluid that follow only from the conservation of the corresponding energy-momentum tensor, independently of Einstein’s equations.
The continuity equation can be obtained by perturbing $u^\mu\D_\nu T^\nu_\mu=0$. One finds, in [*any gauge*]{}, $$\d\rho'+3\h\left(\d\rho+\d P\right)+(\rho+P)\left(3C'+\nabla^2E'+\nabla^2v\right)=0.$$ Dividing by $\rho$, this can be re-expressed in terms of the density contrast $\d=\d\rho/\rho$: $$\d'+3\h\left({\d P\over\rho}-w\,\d\right)+(1+w)\left(\nabla^2v+3C'+\nabla^2E'\right)=0, $$ \[conserv\_pert\] where $w=P/\rho$ ($w$ is not necessarily constant here). The perturbed Euler equation is derived from the spatial projection of $\d(\D_\nu T^\nu_\mu)=0$. This gives $$(v+B)'+\h\left(1-3c_s^2\right)(v+B)+A+{\d P\over\rho+P}+{2\over3(\rho+P)}\nabla^2\pi=0, $$ \[euler\_pert\] where $c_s$ is the sound speed, which is related to the time derivatives of the background energy density and pressure: $$c_s^2={P'\over\rho'}.$$ There are as many systems of equations (\[conserv\_pert\]-\[euler\_pert\]) as the number of fluids. If the fluids are interacting, one must add an interaction term on the right-hand side of the Euler equations.
Finally, let us stress that the fluid description is not always an adequate approximation for cosmological matter. A typical example is the photons during and after recombination: their mean free path becomes so large that they must be treated as a gas, which requires the use of the Boltzmann equation (see e.g. [@ma_bertschinger] for a presentation of the Boltzmann equation in the cosmological context).
Initial conditions for standard cosmology
-----------------------------------------
The notion of [*initial conditions*]{} depends in general on the context, since the initial conditions for a given period in the history of the universe can be seen as the final conditions of the previous phase. In cosmology, “initial conditions” usually refer to the state of the perturbations during the [*radiation dominated era*]{} (of standard cosmology) and on [*wavelengths larger than the Hubble radius*]{}.
Let us first address in detail the question of initial conditions in the simple case of a single perfect fluid, radiation, with equation of state $p=\rho/3$ (which gives $c_s^2=w=1/3$). The four key equations are the continuity, Euler, Poisson and anisotropy equations, respectively Eqs (\[conserv\_pert\]), (\[euler\_pert\]), (\[poisson\]) and (\[anisotropy\]). In terms of the Fourier components, $$Q({\vec x})=\int d^3k\;e^{-i{\vec k}\cdot{\vec x}}\,Q({\vec k}),$$ and of the dimensionless quantity $$x\equiv{k\over\h}=k\eta$$ (during the radiation dominated era $\h=1/\eta$), the four equations can be rewritten as $$\begin{aligned}
&& {d\d\over dx}-{4\over 3}\V+4{dC\over dx}=0,\label{ic1}\\
&& {d\V\over dx }+{1\over 4}\d +A=0, \label{ic2}\\
&& x^2 C={3\over 2}\left(\d -{4\over x}\V\right)\label{ic3}\\
&& C=-A.
\label{ic4}\end{aligned}$$ We have introduced the quantity $\V\equiv kv$, which has the dimension of a velocity. Since we are interested in perturbations with wavelength larger than the Hubble radius, i.e. such that $x=k/\h \ll 1$, it is useful to consider a Taylor expansion for the various perturbations, for instance $$\d=\d^{(0)}+x\,\d^{(1)}+{x^2\over2}\,\d^{(2)}+\dots$$ One then substitutes these Taylor expansions into the above system of equations. In particular, the Poisson equation (\[ic3\]) gives $$\V^{(0)}=0,$$ in order to avoid a divergence, as well as $$\V^{(1)}={1\over4}\d^{(0)}.$$ The Euler equation (\[ic2\]) then gives $$A^{(0)}=-{1\over2}\d^{(0)}.$$ The conclusion is that the initial conditions for each Fourier mode are determined by a single quantity, e.g. $\d^{(0)}$, the other quantities being related to it via the above constraints.
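The order-by-order substitution can be automated. The sketch below (not from the lectures; it assumes sympy and uses hypothetical symbol names `d0, d1, V0, ...` for the Taylor coefficients) recovers the three constraints just derived.

```python
import sympy as sp

# Minimal sketch: insert truncated Taylor expansions into the radiation-era
# system (ic2)-(ic3) and solve the lowest orders.
x = sp.symbols('x', positive=True)
d0, d1, V0, V1, V2, A0, C0 = sp.symbols('d0 d1 V0 V1 V2 A0 C0')

delta = d0 + d1 * x                     # delta = d0 + x d1 + ...
V     = V0 + V1 * x + V2 * x**2 / 2     # script-V = V0 + x V1 + ...
A, C  = A0, C0                          # leading order only

euler   = sp.diff(V, x) + delta / 4 + A                          # (ic2)
poisson = x**2 * C - sp.Rational(3, 2) * (delta - 4 * V / x)     # (ic3)

series = sp.expand(poisson * x)         # multiply by x to clear the 1/x pole
eq_m1 = series.coeff(x, 0)              # coefficient of 1/x in the Poisson eq.
eq_0  = series.coeff(x, 1)              # coefficient of x^0 in the Poisson eq.
sol = sp.solve([eq_m1, eq_0, euler.subs(x, 0)], [V0, V1, A0], dict=True)[0]
print(sol)                              # {V0: 0, V1: d0/4, A0: -d0/2}
```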
In general, one must consider several cosmological fluids. Typically, the “initial” or “primordial” perturbations are defined deep in the radiation era but at temperatures low enough, i.e. after nucleosynthesis, so that the main cosmological components reduce to the usual photons, baryons, neutrinos and cold dark matter (CDM). The system (\[ic1\]-\[ic4\]) must thus be generalized to include a continuity equation and a Euler equation for each fluid. The above various cosmological species can be characterized by their number density, $n_X$, and their energy density $\rho_X$. In a multi-fluid system, it is useful to distinguish [*adiabatic*]{} and [*isocurvature*]{} perturbations.
The [*adiabatic mode*]{} is defined as a perturbation affecting all the cosmological species such that the relative ratios in the number densities remain unperturbed, i.e. such that $$\d\left(n_X/n_Y\right)=0. $$ \[nullratio\] It is associated with a curvature perturbation, via Einstein’s equations, since there is a global perturbation of the matter content. This is why the adiabatic perturbation is also called the [*curvature*]{} perturbation. In terms of the energy density contrasts, the adiabatic perturbation is characterized by the relations $${1\over4}\d_\gamma={1\over4}\d_\nu={1\over3}\d_b={1\over3}\d_c.$$ These relations follow directly from the prescription (\[nullratio\]), each coefficient depending on the equation of state of the particular species.
Since there are several cosmological species, it is also possible to perturb the matter components without perturbing the geometry. This corresponds to [*isocurvature*]{} perturbations, characterized by variations in the particle number ratios but with vanishing curvature perturbation. The variation in the relative particle number densities between two species can be quantified by the so-called [*entropy perturbation*]{} $$S_{A,B}\equiv{\d n_A\over n_A}-{\d n_B\over n_B}.$$ When the equation of state for a given species is such that $w\equiv p/\rho= {\rm Const}$, then one can reexpress the entropy perturbation in terms of the density contrasts, in the form $$S_{A,B}={\d_A\over1+w_A}-{\d_B\over1+w_B}.$$ It is convenient to choose a species of reference, for instance the photons, and to define the entropy perturbations of the other species relative to it: $$\begin{aligned}
S_b&\equiv \d_b-{3\over 4} \d_\gamma, \\
S_c&\equiv \d_c-{3\over 4} \d_\gamma, \\
S_\nu&\equiv {3\over 4}\d_\nu-{3\over 4} \d_\gamma,\end{aligned}$$ which define respectively the [*baryon isocurvature mode*]{}, the [*CDM isocurvature mode*]{} and the [*neutrino isocurvature mode*]{}. In terms of the entropy perturbations, the adiabatic mode is obviously characterized by $S_b=S_c=S_\nu=0$.
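As a trivial illustration (our own helper function, not from the original text), these definitions can be coded directly; an adiabatic perturbation, with $\d_\nu=\d_\gamma$ and $\d_b=\d_c={3\over4}\d_\gamma$, then gives vanishing entropy perturbations:

```python
def entropy_perturbations(d_gamma, d_nu, d_b, d_c):
    """Entropy perturbations of baryons, CDM and neutrinos relative to the photons."""
    S_b = d_b - 0.75 * d_gamma
    S_c = d_c - 0.75 * d_gamma
    S_nu = 0.75 * (d_nu - d_gamma)
    return S_b, S_c, S_nu

# adiabatic perturbation: delta_nu = delta_gamma and delta_b = delta_c = (3/4) delta_gamma
print(entropy_perturbations(1.0, 1.0, 0.75, 0.75))   # -> (0.0, 0.0, 0.0)
```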
In summary, we can decompose a general perturbation, described by four density contrasts, into one adiabatic mode and three isocurvature modes. In fact, the problem is slightly more complicated because of the initial velocity fields. For a single fluid, we have seen that the velocity field is not an independent initial condition but depends on the density contrast so that there is no divergence backwards in time. In the case of the four species mentioned above, there remains however one arbitrary relative velocity between the species, which gives an additional mode, usually named the [*neutrino isocurvature velocity*]{} perturbation.
The CMB is a powerful way to study isocurvature perturbations because (primordial) adiabatic and isocurvature perturbations produce very distinctive features in the CMB anisotropies [@smoot]. Whereas an adiabatic initial perturbation generates a cosine oscillatory mode in the photon-baryon fluid, leading to an acoustic peak at $\ell\simeq 220$ (for a flat universe), a pure isocurvature initial perturbation generates a sine oscillatory mode resulting in a first peak at $\ell \simeq 330$. The unambiguous observation of the first peak at $\ell\simeq 220$ has eliminated the possibility of a dominant isocurvature perturbation. The recent observation by WMAP of the CMB polarization has also confirmed that the initial perturbation is mainly an adiabatic mode. But this does not exclude the presence of a subdominant isocurvature contribution, which could be detected in future high-precision experiments such as Planck.
Super-Hubble evolution
----------------------
In the case of [*adiabatic perturbations*]{}, there is only one (scalar) dynamical degree of freedom. One can thus choose either an energy density perturbation or a metric perturbation, to study the dynamics, the other quantities being determined by the constraints.
If one considers the metric perturbation $\Psi=\Phi$ (assuming $\pi=0$), one can combine Einstein’s equations (\[dynamics\]), with $\d p=c_s^2\d\rho$ (for adiabatic perturbations), and (\[energy\]) to obtain a second-order differential equation in terms of $\Psi$ only: $$\Psi''+3\h(1+c_s^2)\Psi'+\left[2\h'+(1+3c_s^2)\h^2\right]\Psi+ k^2c_s^2\, \Psi=0.$$ Using the background Friedmann equations, the sound speed can be reexpressed in terms of the scale factor and its derivatives. For scales larger than the sonic horizon, i.e. such that $kc_s\ll \h$, the above equation can be integrated explicitly and yields $$\Psi=\alpha\left(1-{\h\over a^2}\int a^2\, d\eta\right)+\beta\, {\h\over a^2},$$ where $\alpha$ and $\beta$ are two integration constants.
For a scale factor evolving like $a\propto t^p$, one gets $$\Psi={\alpha\over p+1}+\beta\, p\, t^{-p-1}.$$ There are two modes: a constant mode and a decaying mode. Note that, in the previous subsection on the initial conditions, we eliminated the decaying mode to avoid the divergence when going backwards in time.
In a transition between two cosmological phases characterized respectively by the scale factors $ a\propto t^{p_1}$ and $a\propto t^{p_2}$, one can easily find the relation between the asymptotic behaviours of $\Psi$ (i.e. after the decaying mode becomes negligible) by using the constancy of $\alpha$. This gives $$\Psi_2={p_1+1\over p_2+1}\,\Psi_1.$$ This is valid only asymptotically. In the case of a sharp transition, $\Psi$ must be continuous at the transition and the above relation will apply only after some relaxation time. For a transition radiation/non-relativistic matter, one finds $$\label{rad_matt} \Psi_{\rm mat}={9\over 10}\,\Psi_{\rm rad}.$$
In practice and for more general cases, it turns out that it is much more convenient to follow the evolution of cosmological perturbations by resorting to quantities that are [*conserved on super-Hubble scales*]{}. A familiar example of such a quantity is the [*curvature perturbation on uniform-energy-density hypersurfaces*]{}, which can be expressed in any gauge as $$\zeta=C-\h\,{\d\rho\over \rho'}. \label{zeta}$$ This is a gauge-invariant quantity by definition. The conservation equation (\[conserv\_pert\]) can then be rewritten as $$\zeta'=-{\h\over \rho+ P}\,\d P_{nad}-{1\over 3}\nabla^2\left(E'+v\right), \label{zetaprim}$$ where $\d P_{nad}$ is the non-adiabatic part of the pressure perturbation, defined by $$\d P_{nad}=\d P-c_s^2\,\d\rho.$$ The expression (\[zetaprim\]) shows that $\zeta$ is conserved [*on super-Hubble scales*]{} in the case of [*adiabatic perturbations*]{}.
Another convenient quantity, which is sometimes used in the literature instead of $\zeta$, is the [*curvature perturbation on comoving hypersurfaces*]{}, which can be written in any gauge as $$-\R=C+{\h\over \rho+P}\, q. \label{R}$$ It is easy to relate the two quantities $\zeta$ and $\R$. Substituting e.g. $\d\rho=\d\rho_c+ 3\h q$, which follows from the definition (see (\[poisson\])) of the comoving energy density perturbation, into (\[zeta\]), one finds $$\zeta=-\R+{\d\rho_c\over 3(\rho+P)}.$$ Using Einstein’s equations, in particular (\[poisson\]), this can be rewritten as $$\zeta=-\R-{2\rho\over 3(\rho+P)}\left({k\over aH}\right)^2\Psi.$$ The latter expression shows that $\zeta$ and $\R$ coincide in the super-Hubble limit $k\ll aH$.
The quantity $\R$ can also be expressed in terms of the two Bardeen potentials $\Phi$ and $\Psi$. Using the momentum constraint (\[momentum\]) and the Friedmann equations, one finds $$\label{R_Psi} \R=\Psi-{H\over \dot H}\left(\dot\Psi+H\Phi\right).$$ In a cosmological phase dominated by a fluid with no anisotropic stress, so that $\Phi=\Psi$, and with an equation of state $P=w\rho$ with $w$ constant, we have already seen that $\Psi$ is constant with time. Since the scale factor evolves like $a\propto t^p$ with $p=2/3(1+w)$, the relation (\[R\_Psi\]) between $\R$ and $\Psi$ reduces to $$\label{R_Psi2} \R={5+3w\over 3(1+w)}\,\Psi.$$ In the radiation era, $\R=(3/2)\Psi$, whereas in the matter era, $\R=(5/3)\Psi$, and since $\R$ is conserved, one recovers the conclusion given in Eq. (\[rad\_matt\]).
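These numbers are easy to check; the short sketch below (our own illustration, using exact fractions) evaluates $(5+3w)/(3(1+w))$ for radiation and matter and recovers the $9/10$ factor of Eq. (\[rad\_matt\]) from the constancy of $\R$:

```python
from fractions import Fraction

def R_over_Psi(w):
    """Ratio R/Psi on super-Hubble scales for a constant equation of state w."""
    return (5 + 3 * w) / (3 * (1 + w))

r_rad, r_mat = R_over_Psi(Fraction(1, 3)), R_over_Psi(Fraction(0))
print(r_rad, r_mat)      # 3/2 and 5/3
print(r_rad / r_mat)     # 9/10: conservation of R gives Psi_mat = (9/10) Psi_rad
```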
During inflation, $w\simeq -1$ and $$w+1={\dot\phi^2\over \rho}=-{2\over 3}{\dot H\over H^2},$$ so that $$\R\simeq -{H^2\over \dot H}\,\Psi_{\rm inf}.$$
For a scalar field, the perturbed equation of motion reads $$\ddot{\d\phi}+3H\,\dot{\d\phi}+\left({k^2\over a^2}+V''\right)\d\phi=\dot\phi\left(\dot\Phi+3\dot\Psi\right)-2V'\,\Phi.$$
Quantum fluctuations and “birth” of cosmological perturbations
==============================================================
In the previous section, we have discussed the [*classical*]{} evolution of the cosmological perturbations. In the [*classical*]{} context, the initial conditions, defined deep in the radiation era, are a priori arbitrary. What is remarkable with inflation is that the accelerated expansion can convert [*initial vacuum quantum fluctuations*]{} into “macroscopic” cosmological perturbations (see [@quantum] for the seminal works). In this sense, inflation provides us with “natural” initial conditions, which turn out to be the initial conditions that agree with the present observations.
Massless scalar field in de Sitter
----------------------------------
As a warm-up, it is instructive to discuss the case of a massless scalar field in a so-called de Sitter universe, or a FLRW spacetime with exponential expansion, $a\propto \exp(Ht)$. In conformal time, the scale factor is given by $$a(\eta)=-{1\over H\eta}.$$ The conformal time is here negative (so that the scale factor is positive) and goes from $-\infty$ to $0$. The action for a massless scalar field in this geometry is given by $$\label{action_dS} S=\int d^4x\, \sqrt{-g}\left(-{1\over 2}\partial_\mu\phi\,\partial^\mu\phi\right) =\int d\eta\, d^3x\, a^4\left[{\phi'^2\over 2a^2}-{(\nabla\phi)^2\over 2a^2}\right],$$ where we have substituted in the action the cosmological metric (\[metric\_eta\]). Note that, whereas we still allow for spatial variations of the scalar field, i.e. inhomogeneities, we will assume here, somewhat inconsistently, that the geometry is completely fixed as homogeneous. We will deal later with the question of the metric perturbations.
It is possible to write the above action with a canonical kinetic term via the change of variable $u=a\phi$. After an integration by parts, the action (\[action\_dS\]) can be rewritten as $$S={1\over 2}\int d\eta\, d^3x \left[u'^2-(\nabla u)^2+{a''\over a}u^2\right].$$ The first two terms are familiar since they are the same as in the action for a free massless scalar field in Minkowski spacetime. The fact that our scalar field here lives in de Sitter spacetime rather than Minkowski has been reexpressed as a [*time-dependent effective mass*]{} $$m^2_{\rm eff}=-{a''\over a}=-{2\over \eta^2}.$$
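For instance, one can verify with a few lines of SymPy (our own check) that the de Sitter scale factor indeed gives this effective mass:

```python
import sympy as sp

eta = sp.symbols('eta', negative=True)   # conformal time is negative during inflation
H = sp.symbols('H', positive=True)

a = -1 / (H * eta)                       # de Sitter scale factor in conformal time
print(sp.simplify(sp.diff(a, eta, 2) / a))   # -> 2/eta**2, i.e. a''/a = 2/eta^2
```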
Our next step will be to quantize the scalar field $u$ by using the standard procedure of quantum field theory. One first turns $u$ into a quantum field denoted $\hat u$, which we expand in Fourier space as $$\label{Fourier_quantum} \hat u (\eta, \vec x)={1\over (2\pi)^{3/2}}\int d^3k \left\{\hat a_{\vec k}\, u_k(\eta)\, e^{i \vec k.\vec x} + \hat a_{\vec k}^\dagger\, u_k^*(\eta)\, e^{-i \vec k.\vec x} \right\},$$ where the $\hat a^\dagger$ and $\hat a$ are creation and annihilation operators, which satisfy the usual commutation rules $$\label{a} \left[\hat a_{\vec k}, \hat a_{\vec k'}\right] = \left[\hat a^\dagger_{\vec k}, \hat a^\dagger_{\vec k'}\right] = 0, \qquad \left[\hat a_{\vec k}, \hat a^\dagger_{\vec k'}\right] = \delta(\vec k-\vec k').$$ The function $u_k(\eta)$ is a complex time-dependent function that must satisfy the [*classical*]{} equation of motion in Fourier space, namely $$\label{eom_u} u_k''+\left(k^2-{a''\over a}\right)u_k=0,$$ which is simply the equation of motion for an oscillator with a time-dependent mass. In the case of a massless scalar field in Minkowski spacetime, this effective mass is zero ($a''/a=0$) and one usually takes $u_k=(\hbar/ 2k)^{1/2}e^{-ik\eta}$ (the choice for the normalization factor will be clear below). In the case of de Sitter, one can solve explicitly the above equation with $a''/a=2/\eta^2$ and the general solution is given by $$\label{general_solution} u_k=\alpha\, e^{-ik\eta}\left(1-{i\over k\eta}\right) +\beta\, e^{ik\eta}\left(1+{i\over k\eta}\right).$$
Canonical quantization consists in imposing the following commutation rules on the $\eta=$constant hypersurfaces: $$\left[\hat u(\eta,\vec x), \hat u(\eta,\vec x')\right] = \left[\hat \pi_u(\eta,\vec x), \hat \pi_u(\eta,\vec x')\right]=0 \quad {\rm and} \quad \label{canonical} \left[\hat u(\eta,\vec x), \hat \pi_u(\eta,\vec x')\right]=i\hbar\,\delta(\vec x-\vec x'),$$ where $\pi_u\equiv \delta S/\delta u'$ is the conjugate momentum of $u$. In the present case, $\pi_u=u'$ since the kinetic term is canonical.
Substituting the expansion (\[Fourier\_quantum\]) in the commutator (\[canonical\]), and using the commutation rules for the creation and annihilation operators (\[a\]), one obtains the relation $$u_k\, {u'_k}^*-u_k^*\, u'_k=i\hbar, \label{wronskien}$$ which determines the normalization of the Wronskian.
The choice of a specific function $u_k(\eta)$ corresponds to a particular prescription for the physical vacuum $| 0 \rangle$, defined by $$\hat a_{\vec k}|0\rangle=0.$$ A different choice for $u_k(\eta)$ is associated to a different decomposition into creation and annihilation modes and thus to a different vacuum.
Let us now note that the wavelength associated with a given mode $k$ can always be found [*within*]{} the Hubble radius provided one goes sufficiently far backwards in time, since the comoving Hubble radius is shrinking during inflation. In other words, for $|\eta|$ sufficiently large, one gets $k|\eta|\gg 1$. Moreover, for a wavelength smaller than the Hubble radius, one can neglect the influence of the curvature of spacetime and the mode behaves as in a Minkowski spacetime, as can also be checked explicitly with the equation of motion (\[eom\_u\]) (the effective mass is negligible for $k|\eta|\gg 1$). Therefore, the most natural physical prescription is to take the particular solution that corresponds to the usual Minkowski vacuum, i.e. $u_k\sim \exp(-ik\eta)$, in the limit $k|\eta|\gg 1$. In view of (\[general\_solution\]), this corresponds to the choice $$u_k=\sqrt{\hbar\over 2k}\, e^{-ik\eta}\left(1-{i\over k\eta}\right), \label{u_k}$$ where the coefficient has been determined by the normalisation condition (\[wronskien\]). This choice, in the jargon of quantum field theory on curved spacetimes, corresponds to the [*Bunch-Davies vacuum*]{}.
Finally, one can compute the [*correlation function*]{} for the scalar field $\phi$ in the vacuum state defined above. When Fourier transformed, the correlation function defines the [*power spectrum*]{} $\P_\phi(k)$: $$\langle 0| \hat\phi(\vec x_1)\, \hat\phi(\vec x_2)|0\rangle=\int d^3 k\, e^{i \vec k.(\vec x_1-\vec x_2)}\, {\P_\phi(k)\over 4\pi k^3}.$$ Note that the homogeneity and isotropy of the quantum field is used implicitly in the definition of the power spectrum, which is “diagonal” in Fourier space (homogeneity) and depends only on the norm of $\vec k$ (isotropy). In our case, we find $$\label{ps_phi} 2\pi^2 k^{-3}\,\P_\phi= {|u_k|^2\over a^2},$$ which gives in the limit when the wavelength is [*larger than the Hubble radius*]{}, i.e. $k|\eta|\ll 1$, $$\P_\phi(k)\simeq \hbar\left({H\over 2\pi}\right)^2 \qquad (k\ll aH).$$ Note that, in the opposite limit, i.e. for wavelengths smaller than the Hubble radius ($k|\eta|\gg 1$), one recovers the usual result for fluctuations in Minkowski vacuum, $\P_\phi(k)=\hbar (k/2\pi a)^2$.
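A minimal numerical sketch (our own, in Python with NumPy/SciPy; not part of the original lectures) integrates the mode equation (\[eom\_u\]) with Bunch-Davies initial data deep inside the Hubble radius and checks that $k^3|u_k|^2/(2\pi^2 a^2)$ indeed freezes at $\hbar(H/2\pi)^2$ once $k|\eta|\ll 1$ (units with $\hbar=H=1$):

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar, H, k = 1.0, 1.0, 1.0                       # units with hbar = H = 1

def bd_mode(eta):
    """Analytic Bunch-Davies mode u_k(eta)."""
    return np.sqrt(hbar / (2 * k)) * np.exp(-1j * k * eta) * (1 - 1j / (k * eta))

def bd_mode_deriv(eta):
    """Analytic derivative u_k'(eta) of the Bunch-Davies mode."""
    return (np.sqrt(hbar / (2 * k)) * np.exp(-1j * k * eta)
            * (-1j * k - 1.0 / eta + 1j / (k * eta**2)))

def rhs(eta, y):
    u, du = y
    return [du, -(k**2 - 2.0 / eta**2) * u]      # u'' + (k^2 - 2/eta^2) u = 0

eta0, eta1 = -50.0 / k, -1e-3 / k                # from deep inside to far outside the Hubble radius
sol = solve_ivp(rhs, (eta0, eta1), [bd_mode(eta0), bd_mode_deriv(eta0)],
                rtol=1e-10, atol=1e-12)

u_end, a_end = sol.y[0, -1], -1.0 / (H * eta1)   # de Sitter scale factor at the final time
P_phi = k**3 * np.abs(u_end)**2 / (2 * np.pi**2 * a_end**2)
print(P_phi, hbar * (H / (2 * np.pi))**2)        # both ~ 0.0253: the spectrum freezes at (H/2pi)^2
print(np.abs(u_end - bd_mode(eta1)))             # residual vs. the analytic mode (small compared with |u_k|)
```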
We have used a quantum description of the scalar field. But the cosmological perturbations are usually described by [*classical random fields*]{}. Roughly speaking, the transition between the quantum and classical (although stochastic) descriptions makes sense when the perturbations exit the Hubble radius. Indeed each of the terms in the Wronskian (\[wronskien\]) is roughly of the order $\hbar/2(k\eta)^3$ in the super-Hubble limit and the non-commutativity can then be neglected. In this sense, one can see the exit outside the Hubble radius as a quantum-classical transition, although much refinement would be needed to make this statement more precise.
Quantum fluctuations with metric perturbations
----------------------------------------------
Let us now move to the more realistic case of a perturbed inflaton field living in a perturbed cosmological geometry. The situation is more complicated than in the previous problem, because Einstein’s equations imply that scalar field fluctuations must necessarily coexist with [*metric fluctuations*]{}. A correct treatment, either classical or quantum, must thus involve both the scalar field perturbations and metric perturbations.
In order to quantize this coupled system, the easiest procedure consists in identifying the [*true degrees of freedom*]{}, the other variables being then derived from them via constraint equations. As we saw in the classical analysis of cosmological perturbations, there exists only one scalar degree of freedom in the case of a single scalar field, which we must now identify.
The starting point is the action of the coupled system scalar field plus gravity expanded up to second order in the linear perturbations. Formally this can be written as $$S\left[\bar\phi+\d\phi,\, g_{\mu\nu}=\bar g_{\mu\nu}+h_{\mu\nu}\right]= S^{(0)}\left[\bar\phi, \bar g_{\mu\nu}\right] +S^{(1)}\left[\d\phi, h_{\mu\nu};\bar\phi, \bar g_{\mu\nu}\right] +S^{(2)}\left[\d\phi, h_{\mu\nu};\bar\phi, \bar g_{\mu\nu}\right],$$ where the first term $S^{(0)}$ contains only the homogeneous part, $S^{(1)}$ contains all terms linear in the perturbations (with coefficients depending on the homogeneous variables), and finally $S^{(2)}$ contains the terms quadratic in the linear perturbations. When one substitutes the FLRW equations of motion in $S^{(1)}$ (after integration by parts), one finds that $S^{(1)}$ vanishes, which is not very surprising since this is how one gets the homogeneous equations of motion, via the Euler-Lagrange equations, from the variation of the action.
The term $S^{(2)}$ is the piece we are interested in: the corresponding Euler-Lagrange equations give the equations of motion for the linear perturbations, which we have already obtained; but more importantly, this term enables us to quantize the linear perturbations and to find the correct normalization.
If one restricts oneself to the scalar sector, the quadratic part of the action depends on the four metric perturbations $A$, $B$, $C$, $E$, as well as on the scalar field perturbation $\d\phi$. After some cumbersome manipulations, by using the FLRW equations of motion, one can show that the second-order action for scalar perturbations can be rewritten in terms of a single variable [@Mukhanov:rq] $$\label{v} v=a\left(\d\phi-{\phi'\over \h}\,C\right),$$ which is a linear combination mixing scalar field [*and*]{} metric perturbations. The variable $v$ represents the true dynamical degree of freedom of the system, and one can check immediately that it is indeed a gauge-invariant variable.
In fact, $v$ is proportional to the comoving curvature perturbation defined in (\[R\]) and which, in the case of a single scalar field, takes the form $$\R=-C+{\h\over \phi'}\,\d\phi.$$ Note also that, if one can check a posteriori that $v$ is the variable describing the true degree of freedom by expressing the action in terms of $v$ only (modulo, of course, a multiplicative factor depending only on homogeneous quantities: the $v$ defined here is such that it gives a canonical kinetic term in the action), one can identify $v$ in a systematic way by resorting to Hamiltonian techniques, in particular the Hamilton-Jacobi equation [@l94].
With the variable $v$, the quadratic action takes the extremely simple form $$S_v={1\over 2}\int d\eta\, d^3x \left[v'^2-(\nabla v)^2+{z''\over z}v^2\right],$$ with $$\label{z} z=a\,{\phi'\over \h}.$$ This action is thus analogous to that of a scalar field in Minkowski spacetime with a time-dependent mass. One is thus back in a situation similar to the previous subsection, with the notable difference that the effective time-dependent mass is now $z''/z$, instead of $a''/a$.
The quantity we will be eventually interested in is the comoving curvature perturbation $\R$, which is related to the canonical variable $v$ by the relation $$v=-z\,\R.$$ Since, by analogy with (\[ps\_phi\]), the power spectrum for $v$ is given by $$2\pi^2 k^{-3}{\P}_v(k)= |v_k|^2,$$ the corresponding power spectrum for $\R$ is found to be $$\label{ps_R} 2\pi^2 k^{-3}{\P}_\R(k)= {|v_k|^2\over z^2}.$$
In the case of an inflationary phase in the [*slow-roll*]{} approximation, the evolution of $\phi$ and of $H$ is much slower than that of the scale factor $a$. Consequently, one gets approximately $z''/z\simeq a''/a$, and all results of the previous section obtained for $u$ apply directly to our variable $v$ in the slow-roll approximation. This implies that the properly normalized function corresponding to the Bunch-Davies vacuum is approximately given by $$v_k\simeq \sqrt{\hbar\over 2k}\, e^{-ik\eta}\left(1-{i\over k\eta}\right). \label{v_k}$$ In the super-Hubble limit $k|\eta| \ll 1$ the function $v_k$ behaves like $$v_k\simeq -\sqrt{\hbar\over 2k}\,{i\over k\eta}\simeq i\,\sqrt{\hbar\over 2k}\,{aH\over k},$$ where we have used $a\simeq - 1/(H\eta)$. Consequently, on scales larger than the Hubble radius, the power spectrum for $\R$ is found, combining (\[ps\_R\]), (\[z\]) and (\[v\_k\]), to be given by $$\P_\R\simeq \left({H^4\over 4\pi^2\dot\phi^2}\right)_{k=aH}, \label{power_S}$$ where we have reintroduced the cosmic time instead of the conformal time. This is the famous result for the spectrum of scalar cosmological perturbations generated from vacuum fluctuations during a slow-roll inflation phase. Note that during slow-roll inflation, the Hubble parameter and the scalar field velocity slowly evolve: for a given scale, the above amplitude of the perturbations is determined by the value of $H$ and $\dot \phi$ when the scale exited the Hubble radius. Because of this effect, the obtained spectrum is not strictly scale-invariant.
It is also instructive to recover the above result by a more intuitive derivation. One can think of the metric perturbations in the radiation era as resulting from the time difference for the end of inflation at different spatial points (separated by distances larger than the Hubble radius), the shift for the end of inflation being a consequence of the scalar field fluctuations $\d\phi\sim H/2\pi$. Indeed, $$\Psi_{\rm rad}\sim {\d a\over a}\sim H\, \d t,$$ and the time shift is related to the scalar field fluctuations by $ \d t\sim \d\phi/\dot\phi$, which implies $$\Psi_{\rm rad}\sim {H^2\over \dot\phi},$$ which agrees with the above result since during the radiation era $\R=(3/2)\Psi$ (see Eq. (\[R\_Psi2\])). It is also worth noticing that, during inflation, in the case of the slow-roll approximation, the term involving $C$ in the linear combination (\[v\]) defining $v$ is negligible with respect to the term involving $\d\phi$. One can therefore “ignore” the rôle of the metric perturbations [*during inflation*]{} in the computation of the quantum fluctuations and consider only the scalar field perturbations. But this simplification is valid only in the context of the slow-roll approximation. It is not valid in the general case, as can be verified for inflation with a power-law scale factor.
Gravitational waves
-------------------
We have focused so far our attention on scalar perturbations, which are the most important in cosmology. Tensor perturbations, or primordial gravitational waves, if ever detected in the future, would be a remarkable probe of the early universe. In the inflationary scenario, like scalar perturbations, primordial gravitational waves are generated from vacuum quantum fluctuations. Let us now explain briefly this mechanism.
The action expanded at second order in the perturbations contains a tensor part, which is given by $$S^{(2)}_g={1\over 64\pi G}\int d\eta\, d^3x\, a^2\, \eta^{\mu\nu}\partial_\mu\bar E^i_j\,\partial_\nu\bar E^j_i,$$ where $\eta^{\mu\nu}$ denotes the Minkowski metric. Apart from the tensorial nature of $E^i_j$, this action is quite similar to that of a scalar field in a FLRW universe (\[action\_dS\]), up to a renormalization factor $1/\sqrt{32\pi G}$. The decomposition $$a\bar E^i_j=\sum_{\lambda=+,\times} v_{\vec k,\lambda}(\eta)\,\epsilon^i_j(\vec k;\lambda)\, e^{i \vec k.\vec x},$$ where the $\epsilon^i_j({\vec k};\lambda)$ are the polarization tensors, shows that the gravitational waves are essentially equivalent to two massless scalar fields (for each polarization) $\phi_\lambda=m_P\bar E_\lambda /2$.
The total power spectrum is thus immediately deduced from (\[ps\_phi\]): $$\P_g=2\times{4\over m_P^2}\left({H\over 2\pi}\right)^2,$$ where the first factor comes from the two polarizations, the second from the renormalization with respect to a canonical scalar field, the last term being the power spectrum for a scalar field derived earlier. In summary, the tensor power spectrum is $$\P_g={2\over \pi^2}\left({H\over m_P}\right)^2_{k=aH}, \label{power_T}$$ where the label recalls that the Hubble parameter, which can be slowly evolving during inflation, must be evaluated when the relevant scale exited the Hubble radius during inflation.
Power spectra
-------------
Let us rewrite the scalar and tensor power spectra, respectively given in (\[power\_S\]) and (\[power\_T\]), in terms of the scalar field potential only. This can be done by using the slow-roll equations (\[sr1\]-\[sr2\]). One finds for the scalar spectrum $$\P_\R={1\over 12\pi^2}\left({V^3\over m_P^6\,{V'}^2}\right)_{k=aH},$$ with the subscript meaning that the term on the right hand side must be evaluated at [*Hubble radius exit*]{} for the scale of interest. The scalar spectrum can also be written in terms of the first slow-roll parameter defined in (\[epsilon\]), in which case it reads $$\P_\R={1\over 24\pi^2}\left({V\over m_P^4\,\epsilon_V}\right)_{k=aH}.$$ From the observations of the CMB fluctuations, $$\P_\R^{1/2}={1\over 2\sqrt{6}\,\pi}\left({V^{1/2}\over m_P^2\,\epsilon_V^{1/2}}\right)\simeq 5\times 10^{-5}.$$ If $\epsilon_V$ is of order $1$, as in chaotic models, one can evaluate the typical energy scale during inflation as $$V^{1/4}\sim 10^{-3}\, m_P\sim 10^{15}\,{\rm GeV}.$$
The tensor power spectrum, in terms of the scalar field potential, is given by $$\P_g={2\over 3\pi^2}\left({V\over m_P^4}\right)_{k=aH}.$$ The ratio of the tensor and scalar amplitudes is proportional to the slow-roll parameter $\epsilon_V$: $$\label{r} r=16\,\epsilon_V.$$
The scalar and tensor spectra are almost scale invariant but not quite since the scalar field evolves slowly during the inflationary phase. In order to evaluate quantitatively this variation, it is convenient to introduce a scalar [*spectral index*]{} as well as a tensor one, defined respectively by $$n_S(k)-1={d\ln \P_\R(k)\over d\ln k}, \qquad n_T(k)={d\ln \P_g(k)\over d\ln k}.$$ One can express the spectral indices in terms of the slow-roll parameters. For this purpose, let us note that, in the slow-roll approximation, $d\ln k=d\ln(aH)\simeq d\ln a$, so that $${d\phi\over d\ln k}\simeq {\dot\phi\over H}\simeq -{V'\over 3H^2}\simeq - m_P^2\,{V'\over V},$$ where the slow-roll equations (\[sr1\]-\[sr2\]) have been used. Therefore, one gets $$n_s(k)-1=2\,\eta_V- 6\, \epsilon_V,$$ where $\epsilon_V$ and $\eta_V$ are the two slow-roll parameters defined in (\[epsilon\]) and (\[eta\]). Similarly, one finds for the tensor spectral index $$n_T(k)=-2\,\epsilon_V.$$ Comparing with Eq. (\[r\]), this yields the relation $$r=-8\, n_T,$$ the so-called [*consistency relation*]{} which relates purely [*observable*]{} quantities. This means that if one was able to observe the primordial gravitational waves and measure the amplitude and spectral index of their spectrum, a rather formidable task, then one would be able to test directly the paradigm of single field slow-roll inflation.
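As a concrete illustration (our own numerical example, not from the text), these formulae can be evaluated for the quadratic chaotic potential $V=\frac{1}{2}m^2\phi^2$; assuming the standard definitions $\epsilon_V=(m_P^2/2)(V'/V)^2$ and $\eta_V=m_P^2 V''/V$, one has $\epsilon_V=\eta_V=2m_P^2/\phi^2$ and $\phi^2\simeq 4N m_P^2$ at $N$ e-folds before the end of inflation, so that $\epsilon_V\simeq 1/(2N)$:

```python
def slow_roll_quadratic(N):
    """Slow-roll observables for V = m^2 phi^2 / 2, using phi^2 ~ 4 N m_P^2 at horizon exit."""
    eps = 1.0 / (2.0 * N)               # epsilon_V = 2 m_P^2 / phi^2
    eta = eps                           # eta_V equals epsilon_V for this potential
    n_s = 1.0 + 2.0 * eta - 6.0 * eps   # scalar spectral index
    n_T = -2.0 * eps                    # tensor spectral index
    r = 16.0 * eps                      # tensor-to-scalar ratio
    return n_s, n_T, r

n_s, n_T, r = slow_roll_quadratic(60)
print(n_s, r)               # ~0.967 and ~0.133 for N = 60
print(r + 8 * n_T)          # 0.0: the consistency relation r = -8 n_T is satisfied
```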
Finally, let us mention the possibility to get information on inflation from the measurement of the running of the spectral index. Introducing the second-order slow-roll parameter $$\xi_V = m_P^4\, {V'\,V'''\over V^2},$$ the running is given by $${dn_s\over d\ln k} = -24\, \epsilon_V^2 +16\, \epsilon_V \eta_V -2\, \xi_V. $$ As one can see, the amplitude of the variation depends on the slow-roll parameters and thus on the models of inflation.
Conclusions
-----------
To conclude, let us mention the existence of more sophisticated models of inflation, such as models where several scalar fields contribute to inflation. In contrast with the single inflaton case, which can generate only adiabatic primordial fluctuations, because all types of matter are decay products of the same inflaton, [*multi-inflaton models*]{} can generate both adiabatic and isocurvature perturbations, which can even be correlated [@l99].
Another recent direction of research is the possibility to disconnect the fluctuations of the inflaton, the field that drives inflation, from the observed cosmological perturbations, which could have been generated from the quantum fluctuations of another scalar field [@curvaton].
From the theoretical point of view an important challenge remains to identify viable and natural candidates for the inflaton field(s) in the framework of high energy physics, with the hope that future observations of the cosmological perturbations will be precise enough to discriminate between various candidates and thus give us a clue about which physics really drove inflation.
Alternative scenarios to inflation can also be envisaged, as long as they can unambiguously predict primordial fluctuations compatible with the present observations. In this respect, one must emphasize that the cosmological perturbations represent today essentially the only observational window that gives access to very high energy physics, hence the importance for any early universe model to be able to give firm predictions for the primordial fluctuations it generates.
I would like to thank the organizers of this Cargese school for inviting me to lecture at this very nice village in Corsica. I also wish to thank the other lecturers and participants for their remarks and questions, as well as Gérard Smadja and Filippo Vernizzi for their careful reading of the draft.
[99]{}
R. Wald, [*General Relativity*]{} (Chicago, 1984); S.M. Carroll, [*Spacetime and Geometry*]{}, (Addison Wesley, San Francisco, 2004).
A. H. Guth, Phys. Rev. D [**23**]{}, 347 (1981).
A. A. Starobinsky, Phys. Lett. B [**91**]{}, 99 (1980).
A. D. Linde, Phys. Lett. B [**108**]{}, 389 (1982); A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. [**48**]{}, 1220 (1982).
A. D. Linde, Phys. Lett. B [**129**]{}, 177 (1983).
A. D. Linde, Phys. Rev. D [**49**]{}, 748 (1994) \[arXiv:astro-ph/9307002\].
A. Linde, [*Particle physics and Inflationary Cosmology*]{}, (Harwood, Chur, 1990).
D.H. Lyth and A. Riotto, Phys. Rept. [**314**]{}, 1 (1998) \[hep-ph/9807278\].
A.R. Liddle and D.H. Lyth, [*Cosmological inflation and large-scale structure*]{}, Cambridge University Press (2000).
H. Kodama and M. Sasaki, Prog. Theor. Phys. Suppl. [**78**]{}, 1 (1984); V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, Phys. Rept. [**215**]{}, 203 (1992).
C. P. Ma and E. Bertschinger, Astrophys. J. [**455**]{}, 7 (1995) \[arXiv:astro-ph/9506072\].
G. Smoot, these proceedings.
V. F. Mukhanov and G. V. Chibisov, JETP Lett. [**33**]{}, 532 (1981) \[Pisma Zh. Eksp. Teor. Fiz. [**33**]{}, 549 (1981)\]; A. H. Guth and S. Y. Pi, Phys. Rev. Lett. [**49**]{}, 1110 (1982); A. A. Starobinsky, Phys. Lett. B [**117**]{}, 175 (1982); S. W. Hawking, Phys. Lett. B [**115**]{}, 295 (1982).
V. F. Mukhanov, Phys. Lett. B [**218**]{}, 17 (1989).
D. Langlois, Class. Quant. Grav. [**11**]{}, 389 (1994).
D. Langlois, Phys. Rev. D [**59**]{}, 123512 (1999) \[arXiv:astro-ph/9906080\].
D. H. Lyth and D. Wands, Phys. Lett. B [**524**]{}, 5 (2002) \[arXiv:hep-ph/0110002\] ; G. Dvali, A. Gruzinov and M. Zaldarriaga, Phys. Rev. D [**69**]{}, 023505 (2004) \[arXiv:astro-ph/0303591\].
---
abstract: 'A recently developed renormalization approach is used to study the electron-phonon coupling in many-electron systems. By starting from a Hamiltonian that includes a small gauge symmetry breaking field, we directly derive a BCS-like equation for the energy gap from the renormalization approach. The effective electron-electron interaction for Cooper pairs does not contain any singularities. Furthermore, it is found that phonon-induced particle-hole excitations only contribute to the attractive electron-electron interaction if their energy difference is smaller than the phonon energy.'
address: ' Institut für Theoretische Physik, Technische Universität Dresden, D-01062 Dresden, Germany '
author:
- 'A. Hübsch and K. W. Becker'
title: ' Renormalization of the electron–phonon interaction: a reformulation of the BCS–gap equation '
---
Introduction
============
The famous BCS-theory [@BCS] of superconductivity is essentially based on the analysis of attractive interactions between electrons of many-particle systems [@Cooper]. As was pointed out by Fröhlich [@Froehlich] such an interaction can result from an effective coupling between electrons mediated via phonons. The recent discovery of superconductivity in magnesium diboride MgB$_{2}$ [@Nagamatsu] below a rather high $T_c$ of about 39 K has attracted again a lot of interest on this classical scenario of a phonon-mediated superconductivity. However, the electron-electron interaction derived by Fröhlich [@Froehlich] contains some problems. There are certain regions in momentum space where the attractive interaction becomes singular and changes its sign due to a vanishing energy denominator.
Recently, effective phonon-induced electron-electron interactions were also derived [@Lenz; @Mielke] by use of Wegner’s flow equation method [@Wegner] and by a similarity renormalization proposed by Głazek and Wilson [@Wilson1; @Wilson2]. The main idea of these approaches is to perform a continuous unitary transformation which leads to an expression for an effective electron-electron interaction that is less singular than Fröhlich’s result [@Froehlich].
Recently, we have developed a renormalization approach which is based on perturbation theory [@Becker]. This approach resembles Wegner’s flow equation method [@Wegner] and the similarity renormalization [@Wilson1; @Wilson2] in some aspects. Therefore, the investigation of an effective phonon-induced electron-electron interaction is very useful to compare the three methods in more detail. In this paper we thus directly diagonalize the classical problem of interacting electrons and phonons by use of the new renormalization technique [@Becker]. The Hamiltonian is given by $$\begin{aligned}
\label{1}
{\cal H} &=&
\sum_{{\bf k},\sigma} \varepsilon_{\bf k} \,
c_{{\bf k},\sigma}^{\dagger}c_{{\bf k},\sigma} +
\sum_{\bf q} \omega_{\bf q} \,
b_{\bf q}^{\dagger}b_{\bf q} +
\sum_{{\bf k},{\bf q},\sigma}
\left(
g_{\bf q} \,
c_{{\bf k},\sigma}^{\dagger} c_{({\bf k}+{\bf q}),\sigma}
b_{\bf q}^{\dagger} +
g_{\bf q}^{*} \,
c_{({\bf k}+{\bf q}),\sigma}^{\dagger} c_{{\bf k},\sigma}
b_{\bf q}
\right),\end{aligned}$$ which will be used to describe superconducting properties. Here, $c_{{\bf k},\sigma}^{\dagger}$ and $c_{{\bf k},\sigma}$ are the usual creation and annihilation operators for electrons with wave vector ${\bf k}$ and spin $\sigma$. $b_{\bf q}^{\dagger}$ and $b_{\bf q}$ denote phonon operators with phonon energies $\omega_{\bf q}$. The electron excitation energies $\varepsilon_{\bf k}$ are measured from the chemical potential $\mu$.
The paper is organized as follows. In the next section we briefly review our recently developed renormalization approach [@Becker]. In Sec. \[El-Ph\] this approach will be applied to the electron-phonon system in order to derive a BCS-like gap equation. Furthermore, the effective electron-electron interaction derived in this framework will be compared with the results from former approaches. Finally, our conclusions are presented in Sec. \[Conc\].
Projector-based renormalization method (PRM) {#Ren}
============================================
The PRM [@Becker] starts from the decomposition of a given many-particle Hamiltonian ${\cal H}$ into an unperturbed part ${\cal H}_{0}$ and into a perturbation ${\cal H}_{1}$ $$\begin{aligned}
\label{2}
{\cal H} &=& {\cal H}_{0} + \varepsilon {\cal H}_{1} =: H(\varepsilon). \end{aligned}$$ We assume that the eigenvalue problem ${\cal H}_{0} |n\rangle = E^{(0)}_n |n\rangle$ of the unperturbed part ${\cal H}_{0}$ is known. The parameter $\varepsilon$ accounts for the order of perturbation processes discussed below. Let us define projection operators ${\bf P}_{\lambda}$ and ${\bf Q}_{\lambda}$ by $$\begin{aligned}
\label{3}
{\bf P}_{\lambda}A &=& \sum_{|E_n^{(0)} - E_m^{(0)}| \leq \lambda}
|n\rangle \langle m| \, \langle n| A |m\rangle \qquad\mbox{and} \\
\label{4}
{\bf Q}_{\lambda} &=& {\bf 1} - {\bf P}_{\lambda}.\end{aligned}$$ ${\bf P}_{\lambda}$ and ${\bf Q}_{\lambda}$ are super-operators acting on usual operators $A$ of the unitary space. Here, ${\bf P}_{\lambda}$ projects on that part of $A$ which is formed by all transition operators $|n\rangle \langle m|$ with energy differences $|E_n^{(0)} - E_m^{(0)}|$ less or equal to a given cutoff $\lambda$. The cutoff $\lambda$ is smaller than the cutoff $\Lambda$ of the original Hamiltonian ${\cal H}$. ${\bf Q}_{\lambda}$ is orthogonal to ${\bf P}_{\lambda}$ and projects on high energy transitions larger than $\lambda$.
The aim is to transform the initial Hamiltonian ${\cal H}$ into an effective Hamiltonian ${\cal H}_{\lambda}$ which has no matrix elements between eigenstates of ${\cal H}_0$ with energy differences larger than $\lambda$. ${\cal H}_{\lambda}$ will be constructed by use of an unitary transformation $$\begin{aligned}
\label{5}
{\cal H}_{\lambda} &=&
e^{X_{\lambda}} \, {\cal H} \, e^{-X_{\lambda}} .\end{aligned}$$ Due to construction the effective Hamiltonian ${\cal H}_{\lambda}$ will therefore have the same eigenspectrum as the original Hamiltonian ${\cal H}$. The generator $X_{\lambda}$ of the transformation is anti-Hermitian, $X^{\dagger}_{\lambda}=-X_{\lambda}$. To find an appropriate generator $X_{\lambda}$ we use the condition that ${\cal H}_{\lambda}$ has no matrix elements with transition energies larger than $\lambda$, i.e., $$\begin{aligned}
\label{6}
{\bf Q}_{\lambda}{\cal H}_{\lambda} &=& 0\end{aligned}$$ has to be fulfilled. By assuming that $X_{\lambda}$ can be written as a power series in the perturbation parameter $\varepsilon$ $$\begin{aligned}
\label{7}
X_{\lambda} &=&
\varepsilon X_{\lambda}^{(1)} + \varepsilon^{2} X_{\lambda}^{(2)} +
\varepsilon^{3} X_{\lambda}^{(3)} + \dots\end{aligned}$$ the effective Hamiltonian ${\cal H}_{\lambda}$ can be expanded in a power series in $\varepsilon$ as well $$\begin{aligned}
\label{8}
{\cal H}_{\lambda} &=&
{\cal H}_{0} +
\varepsilon
\left\{
{\cal H}_{1} +
\left[
X_{\lambda}^{(1)}, {\cal H}_{0}
\right]
\right\} + \\
&&
+ \varepsilon^{2}
\left\{
\left[
X_{\lambda}^{(1)}, {\cal H}_{1}
\right] +
\left[
X_{\lambda}^{(2)}, {\cal H}_{0}
\right] +
\frac{1}{2!}
\left[
X_{\lambda}^{(1)},
\left[
X_{\lambda}^{(1)}, {\cal H}_{0}
\right]
\right]
\right\} + {\cal O}(\varepsilon^{3}) .\nonumber\end{aligned}$$ Now, the high-energy parts parts ${\bf Q}_{\lambda} X_{\lambda}^{(n)}$ of $X_{\lambda}^{(n)}$ can successively be determined from Eq. whereas the low-energy parts ${\bf P}_{\lambda} X_{\lambda}^{(n)}$ can still be chosen arbitrarily. In the following we use for convenience $
{\bf P}_{\lambda} X_{\lambda}^{(1)} =
{\bf P}_{\lambda} X_{\lambda}^{(2)} = 0
$ so that for the effective Hamiltonian ${\cal H}_{\lambda}$ up to second order in ${\cal H}_1$ follows $$\begin{aligned}
\label{9}
{\cal H}_{\lambda} &=&
{\cal H}_{0} + {\bf P}_{\lambda}{\cal H}_{1} -
\frac{1}{2} {\bf P}_{\lambda}
\left[
( {\bf Q}_{\lambda}{\cal H}_{1}), \frac{1}{{\bf L}_{0}}
( {\bf Q}_{\lambda}{\cal H}_{1})
\right] -
{\bf P}_{\lambda}
\left[
({\bf P}_{\lambda}{\cal H}_{1}), \frac{1}{ {\bf L}_{0} }
({\bf Q}_{\lambda}{\cal H}_{1})
\right]\end{aligned}$$ Here $\varepsilon$ was set equal to $1$. The quantity ${\bf L}_{0}$ in denotes the Liouville operator of the unperturbed Hamiltonian. It is defined by ${\bf L}_{0}=[{\cal H}_{0},A]$ for any operator $A$. Note that the perturbation expansion can easily be extended to higher orders in $\varepsilon$. One should also note that the correct size dependence of the Hamiltonian is automatically guaranteed due to the commutators appearing in .
Next, let us use this perturbation theory to establish a renormalization approach by successively reducing the cutoff energy $\lambda$. In particular, instead of eliminating high-energy excitations in one step a sequence of stepwise transformations is used. Thereby, we obtain an effective model which becomes diagonal in the limit $\lambda\rightarrow 0$. In an infinitesimal formulation, the method yields renormalization equations as function of the cutoff $\lambda$. To find these equations we start from the renormalized Hamiltonian $$\begin{aligned}
\label{10}
{\cal H}_{\lambda} &=& {\cal H}_{0,\lambda} + {\cal H}_{1,\lambda}\end{aligned}$$ after all excitations with energy differences larger than $\lambda$ have been eliminated. Now we perform an additional renormalization of ${\cal H}_{\lambda}$ by eliminating all excitations inside an energy shell between $\lambda$ and a smaller energy cutoff $(\lambda-\Delta\lambda)$ where $\Delta\lambda>0$. The new Hamiltonian is found by use of $$\begin{aligned}
\label{11}
{\cal H}_{(\lambda-\Delta\lambda)} &=&
{\bf P}_{(\lambda-\Delta\lambda)} {\cal H}_{\lambda} -
\frac{1}{2} {\bf P}_{(\lambda-\Delta\lambda)}
\left[
( {\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}),
\frac{1}{{\bf L}_{0,\lambda}}
( {\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda})
\right] + \\
&& -
{\bf P}_{(\lambda-\Delta\lambda)}
\left[
({\bf P}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}),
\frac{1}{ {\bf L}_{0,\lambda} }
({\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda})
\right] . \nonumber\end{aligned}$$ Here, ${\bf L}_{0,\lambda}$ denotes the Liouville operator with respect to the unperturbed part ${\cal H}_{0,\lambda}$ of the $\lambda$ dependent Hamiltonian ${\cal H}_{\lambda}$. Note that the flow equations derived from Eq. will lead to an approximative renormalization of ${\cal H}_{\lambda}$ because only contributions up to second order in ${\cal H}_{1,\lambda}$ are included in Eq. . For a concrete evaluation of Eq. it is useful to divide the second order term on the r.h.s into two parts: The first one connects eigenstates of ${\cal H}_{0,\lambda}$ with the same energy. This part commutes with ${\cal H}_{0,\lambda}$ and can therefore be considered as renormalization of the unperturbed Hamiltonian $$\begin{aligned}
\label{12}
{\cal H}_{0,(\lambda-\Delta\lambda)} - {\cal H}_{0,\lambda} &=&
- \frac{1}{2} \, {\bf P}_{0}
\left[
( {\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}),
\frac{1}{{\bf L}_{0,\lambda}}
( {\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda})
\right] .\end{aligned}$$ In contrast the second part connects eigenstates of ${\cal H}_{0,\lambda}$ with different energies and represents a renormalization of the interaction part of the Hamiltonian $$\begin{aligned}
\label{13}
{\cal H}_{1,(\lambda-\Delta\lambda)} -
{\bf P}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}
&=& - {\bf P}_{(\lambda-\Delta\lambda)}
\left[
({\bf P}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}),
\frac{1}{ {\bf L}_{0,\lambda} }
({\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda})
\right] \\
&& - \frac{1}{2}
\left(
{\bf P}_{(\lambda-\Delta\lambda)} - {\bf P}_{0}
\right)
\left[
( {\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}),
\frac{1}{{\bf L}_{0,\lambda}}
( {\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda})
\right]. \nonumber\end{aligned}$$ Note that for small $\Delta\lambda$ only the mixed term, i.e., the first part on the r.h.s. of Eq. , contributes to the renormalization of ${\cal H}_{1,\lambda}$ $$\begin{aligned}
\label{14}
{\cal H}_{1,(\lambda-\Delta\lambda)} -
{\bf P}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}
&\approx&
- {\bf P}_{(\lambda-\Delta\lambda)}
\left[
({\bf P}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda}),
\frac{1}{ {\bf L}_{0,\lambda} }
({\bf Q}_{(\lambda-\Delta\lambda)}{\cal H}_{1,\lambda})
\right] .\end{aligned}$$ In the limit $\Delta\lambda \rightarrow 0$, i.e. for vanishing shell width, equations and lead to differential equations for the Hamiltonian as function of the cutoff energy $\lambda$. The resulting equations for the parameters of the Hamiltonian are called flow equations. Their solution depend on the initial values of the parameters of the Hamiltonian. Note that for $\lambda \rightarrow 0$ the resulting Hamiltonian only consists of the unperturbed part ${\cal H}_{(\lambda \rightarrow 0)}$ so that an effectively diagonal Hamiltonian is obtained.
Application to the electron-phonon system {#El-Ph}
=========================================
In this section we apply the renormalization approach discussed above to the system of interacting electrons and phonons. The aim is to decouple the electron and the phonon subsystems. The Hamiltonian is gauge invariant. In contrast, a BCS-like Hamiltonian breaks this symmetry. Thus, in order to describe the superconducting state of the system, the renormalized Hamiltonian should contain a symmetry breaking field. Therefore, our starting Hamiltonian ${\cal H}_{\lambda}$ reads $$\begin{aligned}
\label{15}
{\cal H}_{\lambda} &=& {\cal H}_{0,\lambda} + {\cal H}_{1,\lambda}, \end{aligned}$$ after all excitations with energies larger than $\lambda$ have been eliminated, where $$\begin{aligned}
\label{16}
{\cal H}_{0,\lambda} &=&
\sum_{{\bf k},\sigma} \varepsilon_{\bf k} \,
c_{{\bf k},\sigma}^{\dagger}c_{{\bf k},\sigma} +
\sum_{\bf q} \omega_{\bf q} \,
b_{\bf q}^{\dagger}b_{\bf q} -
\sum_{\bf k}
\left(
\Delta_{{\bf k},\lambda} \,
c_{{\bf k},\uparrow}^{\dagger} c_{-{\bf k},\downarrow}^{\dagger} +
\Delta_{{\bf k},\lambda}^{*} \,
c_{-{\bf k},\downarrow} c_{{\bf k},\uparrow}
\right) +
C_{\lambda}, \\
&& \nonumber \\
\label{17}
{\cal H}_{1,\lambda} &=&
{\bf P}_{\lambda} {\cal H}_{1} \,=\,
{\bf P}_{\lambda}
\sum_{{\bf k},{\bf q},\sigma}
\left(
g_{\bf q} \,
c_{{\bf k},\sigma}^{\dagger} c_{({\bf k}+{\bf q}),\sigma}
b_{\bf q}^{\dagger} +
g_{\bf q}^{*} \,
c_{({\bf k}+{\bf q}),\sigma}^{\dagger} c_{{\bf k},\sigma}
b_{\bf q}
\right) .\end{aligned}$$ The ’fields’ $\Delta_{{\bf k},\lambda}$ and $\Delta_{{\bf k},\lambda}^*$ in ${\cal H}_{0,\lambda}$ couple to the operators $c_{{\bf k},\uparrow}^{\dagger} c_{-{\bf k},\downarrow}^{\dagger}$ and $c_{-{\bf k},\downarrow} c_{{\bf k},\uparrow}$ and break the gauge invariance. They will take over the role of the superconducting gap function but still depend on $\lambda$. The initial values for $\Delta_{{\bf k}, \lambda}$ and the energy shift $C_{\lambda}$ are those of the original model $$\begin{aligned}
\label{18}
\Delta_{{\bf k},(\lambda=\Lambda)} &=& 0, \quad
C_{(\lambda=\Lambda)} \,=\,0.\end{aligned}$$ Note that renormalization contributions to the electron energies $\varepsilon_{\bf k}$, the phonon energies $\omega_{\bf q}$, and the electron-phonon interactions $g_{\bf q}$ have been neglected in . Also, additional interactions which would appear due to renormalization processes have been omitted. Let us first solve the eigenvalue problem of ${\cal H}_{0,\lambda}$. For this purpose, we perform a Bogoliubov transformation [@Bogoliubov] and introduce new $\lambda$ dependent fermionic quasi-particles $$\begin{aligned}
\label{19}
\alpha_{{\bf k},\lambda}^{\dagger} &=&
u_{{\bf k},\lambda}^{*} c_{{\bf k},\uparrow}^{\dagger} -
v_{{\bf k},\lambda}^{*} c_{-{\bf k},\downarrow} ,\\
\beta_{{\bf k},\lambda}^{\dagger} &=&
u_{{\bf k},\lambda}^{*} c_{-{\bf k},\downarrow}^{\dagger} +
v_{{\bf k},\lambda}^{*} c_{{\bf k},\uparrow}\nonumber\end{aligned}$$ where $$\begin{aligned}
\label{20}
\left|
u_{{\bf k},\lambda}
\right|^{2}
&=&
\frac{1}{2}
\left(
1 +
\frac{
\varepsilon_{\bf k}
}{
\sqrt{
\varepsilon_{\bf k}^{2} +
\left| \Delta_{{\bf k},\lambda} \right|^{2}
}
}
\right), \\
%
\left| v_{{\bf k},\lambda} \right|^{2}
&=&
\frac{1}{2}
\left(
1 -
\frac{
\varepsilon_{\bf k}
}{
\sqrt{
\varepsilon_{\bf k}^{2} +
\left| \Delta_{{\bf k},\lambda} \right|^{2}
}
}
\right). \nonumber\end{aligned}$$ ${\cal H}_{0,\lambda}$ can be rewritten as $$\begin{aligned}
\label{21}
{\cal H}_{0,\lambda} &=&
\sum_{\bf k} E_{{\bf k},\lambda}
\left(
\alpha_{{\bf k},\lambda}^{\dagger} \alpha_{{\bf k},\lambda} +
\beta_{{\bf k},\lambda}^{\dagger} \beta_{{\bf k},\lambda}
\right) +
\sum_{\bf k}
\left(
\varepsilon_{\bf k} - E_{{\bf k},\lambda}
\right) +
\sum_{\bf q} \omega_{\bf q} \, b_{\bf q}^{\dagger}b_{\bf q} + C_{\lambda} \end{aligned}$$ where the fermionic excitation energies are given by $
E_{{\bf k},\lambda} =
\sqrt{
\varepsilon_{\bf k}^{2} + \left| \Delta_{{\bf k},\lambda} \right|^{2}
}
$.
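As a simple numerical illustration of this transformation (our own sketch, with an arbitrary model dispersion and a constant gap parameter chosen purely for illustration), the Bogoliubov coefficients and the quasi-particle energies can be evaluated as follows:

```python
import numpy as np

def bogoliubov(eps_k, delta_k):
    """Return |u_k|^2, |v_k|^2 and the quasi-particle energy E_k."""
    E_k = np.sqrt(eps_k**2 + np.abs(delta_k)**2)
    u2 = 0.5 * (1.0 + eps_k / E_k)
    v2 = 0.5 * (1.0 - eps_k / E_k)
    return u2, v2, E_k

eps = np.linspace(-5.0, 5.0, 11)          # model electron energies (chemical potential at zero)
u2, v2, E = bogoliubov(eps, delta_k=1.0)  # arbitrary constant gap for illustration
print(np.allclose(u2 + v2, 1.0))          # True: |u_k|^2 + |v_k|^2 = 1
print(E.min())                            # 1.0: the quasi-particle spectrum is gapped by |Delta|
```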
Next let us eliminate all excitations within the energy shell between $\lambda$ and $(\lambda-\Delta\lambda)$ by applying the renormalization scheme of section II. We are primarily interested in the renormalization of the gap function $\Delta_{{\bf k}, \lambda}$. Note that for this case we have to consider the renormalization contribution given by $$\begin{aligned}
\label{22}
\delta{\cal H}_1(\lambda,\Delta\lambda) &:=&
{\cal H}_{1,(\lambda-\Delta\lambda)} -
{\bf P}_{(\lambda-\Delta\lambda)} {\cal H}_{1,\lambda}.\end{aligned}$$ The reason is that the renormalization of ${\cal H}_{0, \lambda}$ only gives contributions which connects eigenstates of ${\cal H}_{0,\lambda}$ with the same energy. Thus, this part only changes the quasiparticle energies from $ E_{{\bf k},\lambda}$ to $ E_{{\bf k},(\lambda -\Delta \lambda)}$. In contrast, the renormalization changes the relative weight of the operator terms in and is exactly the renormalization needed to describe the flow of $\Delta_{{\bf k},\lambda}$ which will be discussed in the following.
There is no problem in principle in evaluating the renormalization contributions. First, one expresses the creation and annihilation operators by the quasiparticle operators and uses the relation $
{\bf L}_{0,\lambda} \alpha_{k,\lambda}^{\dagger}=
E_{{\bf k},\lambda} \alpha_{{\bf k}, \lambda}^{\dagger}
$ and an equivalent relation for $\beta_{{\bf k}, \lambda}^{\dagger}$ to evaluate the denominator in Eq. . Then the quasi-particle operators have to be transformed back to the original electron operators. The main reason for this procedure is the fact that the Bogoliubov transformation depends on the cutoff $\lambda$. Thereby, a lot of terms arise which contribute to the renormalization . For convenience, we evaluate the denominator in Eq. by use of the assumption $\varepsilon_{\bf k}^{2} \gg |\Delta_{{\bf k},\lambda}|^{2}$. As it turns out, the resulting flow equations still contain sums over ${\bf k}$. Note that the approximation used is valid for most of the ${\bf k}$ dependent terms (except for those ${\bf k}$ values close to the Fermi momentum). Thus, it does not strongly affect the renormalization contributions. The two operator expressions contributing to the commutator in are given in this approximation by $$\begin{aligned}
\label{23}
{\bf P}_{(\lambda-\Delta\lambda)} {\cal H}_{1,\lambda}
&=&
\sum_{{\bf k},{\bf q},\sigma}
\Theta\left[
(\lambda-\Delta\lambda) -
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} +
\omega_{\bf q}
\right|
\right]
\left\{
g_{\bf q} \,
c_{{\bf k},\sigma}^{\dagger} c_{({\bf k}+{\bf q}),\sigma}
b_{\bf q}^{\dagger}
+ {\rm h.c.}
\right\},\\
&& \nonumber \\
\frac{1}{{\bf L}_{0,\lambda}}
{\bf Q}_{(\lambda-\Delta\lambda)} {\cal H}_{1,\lambda}
&=&
\label{24}
\sum_{{\bf k},{\bf q},\sigma}
\frac{
\delta\Theta_{{\bf k},{\bf q}}(\lambda,\Delta\lambda)
}{
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} + \omega_{\bf q}
}
\left\{
g_{\bf q} \,
c_{{\bf k},\sigma}^{\dagger} c_{({\bf k}+{\bf q}),\sigma}
b_{\bf q}^{\dagger}
- {\rm h.c.}
\right\} \\
&& \nonumber\end{aligned}$$ where $$\begin{aligned}
\label{25}
\delta\Theta_{{\bf k},{\bf q}}(\lambda,\Delta\lambda) &=&
\Theta\left[
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} +
\omega_{\bf q}
\right| - (\lambda-\Delta\lambda)
\right] -
\Theta\left[
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} +
\omega_{\bf q}
\right| - \lambda
\right] \end{aligned}$$ describes the restriction to excitations on the energy shell $\Delta \lambda$. We are not interested in the renormalization of the phonon modes. Therefore all contributions including phonon operators are neglected. By using and we then find from $$\begin{aligned}
&&\nonumber\\
\delta{\cal H}_1(\lambda,\Delta\lambda) &=&
- {\bf P}_{(\lambda-\Delta\lambda)}
\sum_{{\bf k},{\bf k}^{\prime},{\bf q},\sigma,\sigma^{\prime}}
\frac{
\left| g_{\bf q} \right|^{2}
\delta\Theta_{{\bf k},{\bf q}}(\lambda,\Delta\lambda)
\Theta\left[
(\lambda-\Delta\lambda) -
\left|
\varepsilon_{{\bf k}^{\prime}} -
\varepsilon_{({\bf k}^{\prime}+{\bf q})} + \omega_{\bf q}
\right|
\right]
}{
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} + \omega_{\bf q}
} \times
\nonumber\\
&&
\label{26}
\qquad\qquad\qquad
\times\left\{
c_{({\bf k}+{\bf q}),\sigma}^{\dagger} c_{{\bf k},\sigma}
c_{{\bf k}^{\prime},\sigma^{\prime}}^{\dagger}
c_{({\bf k}^{\prime}+{\bf q}),\sigma^{\prime}}
+ {\rm h.c}
\right\}.\end{aligned}$$
In the following we restrict ourselves to renormalization contributions which lead to the formation of Cooper pairs. Consequently, the conditions ${\bf k}^{\prime}=-({\bf k}+{\bf q})$ and $\sigma^{\prime}=-\sigma$ have to be fulfilled so that $$\begin{aligned}
&& \nonumber\\[-4ex]
\label{27}
&& - \lim_{\Delta\lambda\rightarrow 0}
\frac{\delta{\cal H}_1(\lambda,\Delta\lambda)}{\Delta\lambda} = \\[1.5ex]
&& \hspace*{0.5cm} =
\sum_{{\bf k},{\bf q},\sigma}
\Theta\left[
\lambda -
2\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right|
\right]
\Theta\left[
\lambda -
\left|
\varepsilon_{({\bf k}+{\bf q})} - \varepsilon_{\bf k} + \omega_{\bf q}
\right|
\right]
\delta\left(
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} + \omega_{\bf q}
\right|
- \lambda
\right) \times
\nonumber\\
&&
\qquad\qquad
\times \frac{
\left| g_{\bf q} \right|^{2}
}{
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} + \omega_{\bf q}
}
\left\{
c_{({\bf k}+{\bf q}),\sigma}^{\dagger}
c_{-({\bf k}+{\bf q}),-\sigma}^{\dagger}
c_{-{\bf k},-\sigma}
c_{{\bf k},\sigma}
+ {\rm h.c}
\right\}\nonumber \\[-2ex]
&& \nonumber \end{aligned}$$ results from . Here, $
\Theta\left[
\lambda -
2\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right|
\right]
$ is due to the projector operator ${\bf P}_{(\lambda-\Delta\lambda)}$ in . Note that the differential expression on the l.h.s. of is different from the differential $d{\cal H}_{1,\lambda}/ d\lambda$. This follows from the definition of $
\delta {\cal H}_{1}(\lambda, \Delta \lambda) =
{\cal H}_{1,(\lambda -\Delta \lambda)} -
{\bf P}_{(\lambda -\Delta \lambda)}{\cal H}_{1, \lambda}
$ which differs from $
\Delta {\cal H}_{1}(\lambda, \Delta \lambda) =
{\cal H}_{1,(\lambda -\Delta \lambda)} -
{\cal H}_{1, \lambda}
$ due to the second term. Next we can simplify the $\Theta$-functions in Eq. by discussing $\varepsilon_{\bf k}\geq \varepsilon_{({\bf k}+{\bf q})}$ and $\varepsilon_{({\bf k}+{\bf q})} > \varepsilon_{\bf k}$ separately. There are no contributions from the latter case. For $\varepsilon_{\bf k}\geq \varepsilon_{({\bf k}+{\bf q})}$ the contribution from the first term in the curly bracket in and from its conjugate can be combined. By exploiting the $\Theta$-functions the result can be rewritten as $$\begin{aligned}
\label{28}
- \lim_{\Delta\lambda\rightarrow 0}
\frac{\delta{\cal H}_1(\lambda,\Delta\lambda)}{\Delta\lambda}
&=&
\sum_{{\bf k},{\bf q},\sigma}
\delta\left(
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right|
+ \omega_{\bf q} - \lambda
\right)
\frac{
\left| g_{\bf q} \right|^{2}
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
}{
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| + \omega_{\bf q}
}\times \nonumber \\[1ex]
&&
\qquad\qquad\times
c_{({\bf k}+{\bf q}),\sigma}^{\dagger}
c_{-({\bf k}+{\bf q}),-\sigma}^{\dagger}
c_{-{\bf k},-\sigma}
c_{{\bf k},\sigma}.\end{aligned}$$ Here, we have assumed $g_{\bf q}=g_{-{\bf q}}$. Eq. describes the renormalization of the $\lambda$ dependent Hamiltonian ${\cal H}_{\lambda}$ with respect to the cutoff $\lambda$. Next we use to derive flow equations for the parameters $\Delta_{{\bf k},\lambda}$ and $C_{\lambda}$. For this purpose, a factorization with respect to the full Hamiltonian ${\cal H}$ is carried out. The final flow equations read $$\begin{aligned}
\frac{d\Delta_{{\bf k},\lambda}}{d\lambda} &=&
-2 \sum_{\bf q}
\delta\left(
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right|
+ \omega_{\bf q} - \lambda
\right)
\frac{
\left| g_{\bf q} \right|^{2}
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
}{
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| + \omega_{\bf q}
}
\left\langle
c_{-({\bf k}+{\bf q}),\downarrow} c_{({\bf k}+{\bf q}),\uparrow}
\right\rangle,\nonumber\\[-2ex]
&& \label{29} \\[1ex]
\label{30}
\frac{dC_{\lambda}}{d\lambda} &=&
\sum_{\bf k}
\left\langle
c_{{\bf k},\uparrow}^{\dagger} c_{-{\bf k},\downarrow}^{\dagger}
\right\rangle
\frac{d\Delta_{{\bf k},\lambda}}{d\lambda}.\\[-3ex]
&& \nonumber \end{aligned}$$ Note that in contrast to , Eqs. and are differential equations with normal derivatives of $\Delta_{{\bf k},\lambda}$ and $C_{\lambda}$. This fact can be explained as follows: As discussed above, the difference between the expressions $
\delta {\cal H}_{1}(\lambda, \Delta \lambda) =
{\cal H}_{1,(\lambda -\Delta \lambda)} -
{\bf P}_{(\lambda -\Delta \lambda)}{\cal H}_{1, \lambda}
$ and $
\Delta {\cal H}_{1}(\lambda, \Delta \lambda) =
{\cal H}_{1,(\lambda -\Delta \lambda)} -
{\cal H}_{1, \lambda}
$ is given by the quantity ${\bf Q}_{(\lambda-\Delta\lambda)} {\cal H}_{1,\lambda}$ which consists of all matrix elements of ${\cal H}_{1,\lambda}$ between eigenstates of ${\cal H}_{0,\lambda}$ with energy differences between $(\lambda-\Delta\lambda)$ and $\lambda$. We are interested in the new Hamiltonian $
{\cal H}_{(\lambda-\Delta\lambda)} =
{\bf P}_{(\lambda-\Delta\lambda)} {\cal H}_{(\lambda-\Delta\lambda)}
$ which only contains transition operators between states with energy differences smaller than $(\lambda-\Delta\lambda)$. Therefore, all renormalization contributions which lead to matrix elements with energy differences larger than $(\lambda-\Delta\lambda)$ are not relevant. Thus, we obtain differential equations for the parameters of ${\cal H}_{(\lambda-\Delta\lambda)}$.
Note that the factor $2$ in front of is due to the sum over $\sigma$ in . The expectation values $\langle \dots \rangle$ in and are formed with the full Hamiltonian ${\cal H}$ and are independent of $\lambda$. The flow equations can be easily integrated between the lower cutoff $(\lambda\rightarrow 0)$ and the cutoff $\Lambda$ of the original model. The result is $$\begin{aligned}
&&\nonumber\\[-3ex]
\label{31}
\tilde\Delta_{\bf k} &=&
\Delta_{{\bf k},\Lambda} +
2\sum_{\bf q}
\frac{
\left| g_{\bf q} \right|^{2}
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
}{
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| + \omega_{\bf q}
}
\left\langle
c_{-({\bf k}+{\bf q}),\downarrow} c_{({\bf k}+{\bf q}),\uparrow}
\right\rangle,\\[2ex]
%
\label{32}
\tilde C &=&
C_{\Lambda} +
\sum_{\bf k}
\left\langle
c_{{\bf k},\uparrow}^{\dagger} c_{-{\bf k},\downarrow}^{\dagger}
\right\rangle
\left( \tilde\Delta_{\bf k} - \Delta_{{\bf k},\Lambda} \right)\end{aligned}$$ where a shorthand notation for the desired values of $\Delta_{{\bf k}, \lambda}$ and $C_{\lambda}$ at $\lambda=0$ was introduced: $
\tilde\Delta_{\bf k} = \Delta_{{\bf k},(\lambda\rightarrow 0)},$ $
\tilde C = C_{(\lambda\rightarrow 0)}
$. Note that $\tilde\Delta_{\bf k}$ and $\tilde{C}$ only depend on the parameters of the original system and on $\Delta_{{\bf k},\Lambda}$ and $C_{\Lambda}$. The initial conditions for $\Delta_{{\bf k},\Lambda}$ and $C_{\Lambda}$ will be used later. For $\lambda\rightarrow 0$ the renormalized Hamiltonian $
\tilde{\cal H} = {\cal H}_{(\lambda\rightarrow 0)}
$ reads $$\begin{aligned}
\label{33}
\tilde{\cal H} &=&
\sum_{{\bf k},\sigma} \varepsilon_{\bf k} \,
c_{{\bf k},\sigma}^{\dagger}c_{{\bf k},\sigma} +
\sum_{\bf q} \omega_{\bf q} \,
b_{\bf q}^{\dagger}b_{\bf q} -
\sum_{\bf k}
\left(
\tilde\Delta_{\bf k} \,
c_{{\bf k},\uparrow}^{\dagger} c_{-{\bf k},\downarrow}^{\dagger} +
\tilde\Delta_{\bf k}^{*} \,
c_{-{\bf k},\downarrow} c_{{\bf k},\uparrow}
\right) +
\tilde{C}.\end{aligned}$$ $\tilde{\cal H}$ can easily be diagonalized by a Bogoliubov transformation according to and $$\begin{aligned}
\label{34}
\tilde{\cal H} &=&
\sum_{\bf k} \tilde{E}_{\bf k}
\left(
\tilde{\alpha}_{\bf k}^{\dagger} \tilde{\alpha}_{\bf k} +
\tilde{\beta}_{\bf k}^{\dagger}
\tilde{\beta}_{\bf k}
\right) +
\sum_{\bf k}
\left(
\varepsilon_{\bf k} - \tilde{E}_{\bf k}
\right) +
\sum_{\bf q} \omega_{\bf q} \, b_{\bf q}^{\dagger}b_{\bf q} + \tilde{C}\end{aligned}$$ where $\tilde{E}_{\bf k} = E_{{\bf k},(\lambda\rightarrow 0)}$, $\tilde\alpha_{\bf k} = \alpha_{{\bf k},(\lambda\rightarrow 0)}$, and $\tilde\beta_{\bf k} = \beta_{{\bf k},(\lambda\rightarrow 0)}$.
Finally, we have to determine the expectation values in and . Since $\tilde{\cal H}$ emerged from the original model ${\cal H}$ by a unitary transformation, the free energy can be calculated either from ${\cal H}$ or from $\tilde{{\cal H}}$ $$\begin{aligned}
\label{35}
F &=&
- \frac{1}{\beta}
\ln {\rm Tr} \, e^{-\beta{\cal H}}
\,=\,
- \frac{1}{\beta}
\ln {\rm Tr} \, e^{-\beta\tilde{\cal H}}, \\
&=&
- \frac{2}{\beta}
\sum_{{\bf k}^{\prime}} \ln
\left(
1 + e^{-\beta\tilde{E}_{{\bf k}^{\prime}}}
\right) +
\frac{1}{\beta}
\sum_{\bf q} \ln
\left(
1 - e^{-\beta\omega_{\bf q}}
\right) +
\sum_{{\bf k}^{\prime}}
\left(
\varepsilon_{{\bf k}^{\prime}} - \tilde{E}_{{\bf k}^{\prime}}
\right) + \tilde{C} \nonumber\end{aligned}$$ where was used. The required expectation values are found by functional derivative $$\begin{aligned}
\label{36}
\left\langle
c_{{\bf k},\uparrow}^{\dagger} c_{-{\bf k},\downarrow}^{\dagger}
\right\rangle
&=&
- \frac{\partial F}{\partial\Delta_{{\bf k},\Lambda}}\\
&=&
\sum_{{\bf k}^{\prime}}
\frac{
1 - 2f(\tilde{E}_{{\bf k}^{\prime}})
}{
2\sqrt{
\varepsilon_{{\bf k}^{\prime}}^{2} +
\left| \tilde\Delta_{{\bf k}^{\prime}} \right|^{2}
}
}
\left[
\tilde{\Delta}_{{\bf k}^{\prime}}^{*}
\frac{
\partial \tilde{\Delta}_{{\bf k}^{\prime}}
}{
\partial \Delta_{{\bf k},\Lambda}
}
+ \tilde{\Delta}_{{\bf k}^{\prime}}
\frac{
\partial \tilde{\Delta}_{{\bf k}^{\prime}}^{*}
}{
\partial \Delta_{{\bf k},\Lambda}
}
\right]
+ \frac{ \partial \tilde{C}}{\partial \Delta_{{\bf k},\Lambda}}
\nonumber \\[2ex]
&=&
\frac{
\tilde{\Delta}_{\bf k}^{*}
\left[ 1 - 2f(\tilde{E}_{\bf k}) \right]
}{
2\sqrt{
\varepsilon_{\bf k}^{2} +
\left| \tilde\Delta_{\bf k} \right|^{2}
}
} + {\cal O}
\left(
\left[
\frac{
\left| g_{\bf q} \right|^{2}
}{
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| + \omega_{\bf q}
}
\right]^{2}
\right).\nonumber\end{aligned}$$ Here, $f(\tilde{E}_{\bf k})$ denotes the Fermi function with respect to the energy $\tilde{E}_{\bf k}$. If we neglect higher order corrections, Eqs. and can be rewritten as $$\begin{aligned}
\label{37}
\tilde\Delta_{\bf k} &=&
\sum_{\bf q}
\left\{
\frac{
2 \left| g_{\bf q} \right|^{2}
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
}{
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| + \omega_{\bf q}
}
\right\}
\frac{
\tilde{\Delta}_{{\bf k}+{\bf q}}^{*}
\left[ 1 - 2f( \tilde{E}_{{\bf k}+{\bf q}} ) \right]
}{
2\sqrt{
\varepsilon_{{\bf k}+{\bf q}}^{2} +
\left| \tilde \Delta_{{\bf k}+{\bf q}} \right|^{2}
}
},\\[2ex]
%
\label{38}
\tilde C &=&
\sum_{\bf k}
\left| \tilde{\Delta}_{\bf k} \right|^{2}
\frac{
1 - 2f( \tilde{E}_{\bf k} )
}{
2\sqrt{
\varepsilon_{\bf k}^{2} +
\left| \tilde \Delta_{\bf k} \right|^{2}
}
}\end{aligned}$$ where the initial conditions were used. Note that Eq. has the form of the usual BCS-gap equation. Thus, the term inside the brackets $\{\dots\}$ can be interpreted as the absolute value of the effective phonon-induced electron-electron interaction $$\begin{aligned}
\label{39}
V_{{\bf k},{\bf q}} &=&
- \frac{
2 \left| g_{\bf q} \right|^{2}
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
}{
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| + \omega_{\bf q}
}\end{aligned}$$ for the formation of Cooper pairs. In contrast to the usual BCS-theory, in the present formalism both the attractive electron-electron interaction and the gap equation were derived in one step by applying the renormalization procedure to the electron-phonon system .
Let us now compare the induced electron-electron interaction with Fröhlich’s result [@Froehlich] $$\begin{aligned}
\label{40}
V_{{\bf k},{\bf q}}^{\mbox{\tiny \rm Fr\"{o}hlich}} &=&
\frac{
2 \left| g_{\bf q} \right|^{2} \omega_{\bf q}
}{
\left[
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right]^{2} - \omega_{\bf q}^{2}
}.\end{aligned}$$ Note that contains a divergence at $|\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}| = \omega_{\bf q}$. Furthermore, this interaction becomes repulsive for $
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| > \omega_{\bf q}
$. Thus, a cutoff function $
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
$ for the electron-electron interaction is introduced by hand in the usual BCS-theory to suppress repulsive contributions to this interaction. In contrast, our result has no divergence and is always attractive. Furthermore, the cutoff function in Eq. shows that the attractive interaction results from particle-hole excitations with energies $|\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}| < \omega_{\bf q}$. This result follows directly from the renormalization process.
Recently, Mielke [@Mielke] obtained a $\lambda$-dependent phonon-induced electron-electron interaction $$\begin{aligned}
\label{41}
V_{{\bf k},{\bf q},\lambda}^{\mbox{\tiny \rm Mielke}} &=&
- \frac{
2 \left| g_{\bf q} \right|^{2}
}{
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| + \omega_{\bf q}
} \
\Theta(|\varepsilon_{{\bf k} + {\bf q}}- \varepsilon_{{\bf k}}|
+\omega_{\bf q} -\lambda) \end{aligned}$$ where, for convenience, the $\lambda$-dependence of the electron and phonon energies is suppressed. Apart from the $\lambda$-dependent $\Theta$-function, the main difference from our result is that the cutoff function $
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
$ is not present in . This difference may result from Mielke’s way of performing the similarity transformation [@Wilson1; @Wilson2] of the electron-phonon system . The similarity transformation is based on the introduction of continuous unitary transformations and is formulated in terms of differential equations for the parameters of the Hamiltonian. As in our approach (see Sec. \[Ren\]), the similarity transformation also leads to a band-diagonal structure of the renormalized Hamiltonian with respect to the eigenrepresentation of the unperturbed Hamiltonian. Due to the renormalization processes, new couplings also occur. Mielke first evaluated the phonon-induced electron-electron interaction by eliminating excitations with energies larger than $\lambda$. In particular, for an Einstein model with dispersionless phonons of frequency $\omega_0$, the interaction becomes independent of $\lambda$ if $\lambda$ is chosen smaller than $\omega_0$. Of course, for this case and assuming $
|\varepsilon_{{\bf k} + {\bf q}}- \varepsilon_{{\bf k}}| <
\omega_{\bf q}
$ Mielke’s result and the result become the same. Keeping this interaction and neglecting at the same time the remaining part of the electron-phonon interaction with energies smaller than $\lambda$, Mielke derived a BCS-like gap equation [@Mielke]. The main difference from our result is the absence of the cutoff function $
\Theta\left[
\omega_{\bf q} -
\left| \varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})} \right|
\right]
$, which demonstrates that only particle-hole excitations with energies less than $\omega_{\bf q}$ participate in the attractive interaction. However, note that by setting $\lambda = 0$, a finite value of the interaction remains in . This interaction is non-diagonal with respect to the unperturbed Hamiltonian as used by Mielke, $
{\cal H}_{0, \lambda}^{\mbox{\tiny Mielke}} = \sum_{{\bf k},\sigma}
\varepsilon_{{\bf k},\lambda}
c_{{\bf k},\sigma}^{\dagger} c_{{\bf k},\sigma} +
\sum_{\bf q} \omega_{\bf q} b_{\bf q}^{\dagger}
b_{\bf q}
$. This seems to contradict the requirement that operators remaining at $\lambda=0$ should commute with the unperturbed Hamiltonian.
Finally, by use of Wegner’s flow equation method [@Wegner], Lenz and Wegner obtained the following phonon-induced electron-electron interaction $$\begin{aligned}
\label{42}
V_{{\bf k},{\bf q}}^{\mbox{\tiny \rm Lenz/Wegner}} &=&
- \frac{
2 \left| g_{\bf q} \right|^{2} \omega_{\bf q}
}{
\left[
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right]^{2} + \omega_{\bf q}^{2}
}\end{aligned}$$ which is attractive for all ${\bf k}$ and ${\bf q}$ [@Lenz]. The result is similar to if $
\omega_{\bf q} \geq
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right|
$ is fulfilled. In contrast to our result , the interaction remains finite even for $
\left|
\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}
\right| > \omega_{\bf q}
$. Wegner’s flow equation method [@Wegner] as well as the similarity transformation [@Wilson1; @Wilson2] are based on the introduction of continuous unitary transformations. Both methods are formulated in terms of differential equations for the parameters of the Hamiltonian. However, they differ in the generator of the continuous unitary transformation. This leads to the different results and . A detailed comparison of the flow equation method and the similarity transformation can be found in Ref. .
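As a purely numerical illustration of this comparison, the short script below evaluates the three effective interactions as functions of $x = |\varepsilon_{\bf k} - \varepsilon_{({\bf k}+{\bf q})}|$ at fixed $|g_{\bf q}|^2$ and $\omega_{\bf q}$: the present result is attractive and sharply cut off at $x=\omega_{\bf q}$, the Fröhlich form diverges there and changes sign, and the Lenz/Wegner form stays attractive everywhere. Units and the sampling grid are arbitrary.

```python
import numpy as np

# Numerical comparison of the three phonon-induced interactions discussed in
# the text, as functions of x = |eps_k - eps_{k+q}| at fixed coupling |g_q|^2
# and phonon energy w_q.  Units and grid are illustrative.

g2 = 1.0     # |g_q|^2
wq = 1.0     # omega_q
x = np.linspace(0.0, 3.0 * wq, 13)   # coarse grid, convenient for printing

# Present (renormalization) result: attractive, with a cutoff at x = w_q.
V_renorm = -2.0 * g2 * (x < wq) / (x + wq)

# Froehlich form: diverges at x = w_q and becomes repulsive for x > w_q.
with np.errstate(divide="ignore"):
    V_froehlich = 2.0 * g2 * wq / (x**2 - wq**2)

# Lenz/Wegner flow-equation form: attractive for all x, no cutoff.
V_lenz_wegner = -2.0 * g2 * wq / (x**2 + wq**2)

for xi, vr, vf, vl in zip(x, V_renorm, V_froehlich, V_lenz_wegner):
    print(f"x/wq = {xi/wq:4.2f}   renorm = {vr:8.3f}   "
          f"Froehlich = {vf:9.3f}   Lenz/Wegner = {vl:8.3f}")
```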
Conclusion {#Conc}
==========
In this paper we have applied a recently developed renormalization approach [@Becker] to the ’classical’ problem of interacting electrons and phonons. By adding a small field to the Hamiltonian, which breaks the gauge symmetry, we directly derive a BCS-like gap equation for the coupled electron-phonon system. In particular, it is shown that the derived gap function results directly from the renormalization process. The effective phonon-induced electron-electron interaction is deduced from the gap equation. In contrast to the Fröhlich interaction [@Froehlich], no singularities appear in the effective interaction. Furthermore, the cutoff function, which is included in Fröhlich’s result by hand to avoid repulsive contributions to the electron-electron interaction, follows directly from the renormalization procedure. This means that phonon-induced particle-hole excitations only contribute to the attractive electron-electron interaction if their energies are smaller than the energy of the exchanged phonon.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to acknowledge helpful discussions with K. Meyer and T. Sommer. This work was supported by the DFG through the research program SFB 463.
J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. [**108**]{}, 1175 (1957).
L. N. Cooper, Phys. Rev. [**104**]{}, 1189 (1956).
H. Fröhlich, Proc. R. Soc. London A [**215**]{}, 291 (1952).
J. Nagamatsu, N. Nakagawa, T. Muranaka, Y. Zenitani, and J. Akimitsu, Nature [**410**]{}, 63 (2001).
P. Lenz and F. Wegner, Nucl. Phys. B [**482**]{}, 693 (1996).
A. Mielke, Ann. Physik (Leipzig) [**6**]{}, 215 (1997).
F. Wegner, Ann. Physik (Leipzig) [**3**]{}, 77 (1994).
S.D. G[ł]{}azek and K.G. Wilson, Phys. Rev. D [**48**]{}, 5863 (1993).
S.D. G[ł]{}azek and K.G. Wilson, Phys. Rev. D [**49**]{}, 4214 (1994).
K. W. Becker, A. Hübsch, and T. Sommer, Phys. Rev. B [**66**]{}, 235115 (2002).
N. N. Bogoliubov, Nuovo Cim. [**7**]{}, 794 (1958).
---
author:
- Anže Božič
- Antonio Šiber
bibliography:
- 'SI-references.bib'
title: |
Mechanical design of apertures and the infolding of pollen grain:\
Supplementary Information
---
Supplementary Methods {#supplementary-methods .unnumbered}
=====================
Details of the discrete elastic model {#details-of-the-discrete-elastic-model .unnumbered}
-------------------------------------
The spherical mesh representing a fully hydrated pollen grain is constructed using the marching method for triangulation of surfaces, described in Ref. [@hartmann1998]. The method is modified so that the starting polygon is a pentagon, instead of a hexagon as in the original implementation—this yields more uniform triangulations with no visible patches consisting of predominantly hexagonally-coordinated vertices. Results presented in the manuscript were obtained using a mesh consisting of $V=8597$ vertices, $E=25785$ edges, and $F=17190$ faces; note that $F+V-E=2$, as guaranteed by the Euler formula for polyhedra. The mesh contains $1146$, $6314$, and $1134$ vertices with $5$, $6$, and $7$ nearest neighbours (joined by edges), respectively. The maximum and minimum edge lengths of the triangulation are $1.73\,a$ and $0.58\,a$, respectively, where $a$ is the average edge length. The radius of the spherical mesh is $R_0=24.34\,a$. Calculations were also performed with smaller and larger meshes, and the chosen mesh was found to be sufficiently large so that the results obtained do not depend on the nature and details of the triangulation.
Mesh vertices are assigned either to the apertures or to the exine region, depending on whether their coordinates fall within the mathematically defined borders of the apertures or not (see below). The same assignment is also made for the faces (triangles) of the triangulation based on the coordinates of their centroids. The elastic energy model is formulated for the edges of the mesh. A pair of microscopic stretching and bending constants ($\epsilon_i$, $\rho_i$) is assigned to each edge $i$, depending on the position of its endpoints and of the faces $I$ and $J$ it joins. If both endpoints are either in the aperture or in the exine region, the edge is assigned the stretching constant pertaining to the aperture, $\epsilon_{ap}$, or the exine, $\epsilon_{ex}$, respectively. If one of the endpoints belongs to the aperture and the other to the exine region, the edge is assigned a stretching rigidity of $(\epsilon_{ap} + \epsilon_{ex})/2$. If both faces joined by the edge belong either to the aperture or to the exine region, the edge is assigned the bending rigidity of $\rho_{ap}$ or $\rho_{ex}$, respectively. Similarly, if one of the triangle faces belongs to the aperture and the other to the exine region, the edge is assigned the bending rigidity of $(\rho_{ap} + \rho_{ex})/2$.
Minimization procedure {#minimization-procedure .unnumbered}
----------------------
Vertex coordinates of the shapes minimizing the energy functional are calculated by a conjugate gradient minimization procedure described in Ref. [@hager2006]. The volume constraint is implemented as an energy penalty of the form $\lambda (V_{actual} - V_{target})^2$, where $V_{actual}$ and $V_{target}$ are the actual and target volumes of the system. The constraint parameter $\lambda$ is sequentially increased until the volume found is within the predefined tolerance (less than $10^{-4}$ in our calculations) [@Siber2006]. Once the minimum of the energy is found, it is checked that the total contribution of the energy penalty is vanishingly small when compared with the elastic energy of the system.
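The penalty strategy described above does not depend on the details of the elastic energy; the toy sketch below applies the same sequence of conjugate-gradient minimizations with a sequentially increased constraint parameter $\lambda$ to a simple quadratic energy and a linear "volume-like" functional, using SciPy's minimizer. The energy, functional, starting point, and tolerance are placeholders of ours, not the shell model itself.

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of the volume-constraint penalty: minimize an "elastic"
# energy E(x) subject to a target value of a "volume-like" functional V(x),
# by adding lam*(V(x) - V_target)^2 and increasing lam until the constraint
# is met to the desired relative tolerance.  E and V are stand-ins.

def energy(x):
    return 0.5 * np.sum((x - 1.0) ** 2)       # stand-in elastic energy

def volume(x):
    return np.sum(x)                          # stand-in "volume" functional

V_target, tol = 2.0, 1e-4
x = np.zeros(5)                               # initial guess
lam = 1.0

for _ in range(30):
    def penalized(x, lam=lam):
        return energy(x) + lam * (volume(x) - V_target) ** 2

    def penalized_grad(x, lam=lam):
        return (x - 1.0) + 2.0 * lam * (volume(x) - V_target) * np.ones_like(x)

    res = minimize(penalized, x, jac=penalized_grad, method="CG")
    x = res.x
    rel_err = abs(volume(x) - V_target) / abs(V_target)
    if rel_err < tol:
        break
    lam *= 10.0                               # stiffen the penalty and repeat

print("lambda =", lam, " relative volume error =", rel_err,
      " penalty / energy =", lam * (volume(x) - V_target) ** 2 / energy(x))
```

At convergence the penalty contribution is several orders of magnitude below the "elastic" energy, mirroring the check described above.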
Folding pathways are calculated by sequentially reducing the target volume and using the shape obtained in the previous step of the procedure as the initial guess for the conjugate gradient minimization. The procedure is started with the fully hydrated, spherical shape and its corresponding volume $V_0$. Once the minimal target volume is reached, the procedure is repeated backwards, from the maximally infolded shape towards the fully hydrated shape, by sequentially increasing the target volume. This procedure enables detection of minimal energy states which were missed in the forward minimization but can still be detected in the backward minimization, and vice versa. It also enables us to detect any hysteresis in the folding pathway, which we indeed observe in some cases, especially when the area covered by the apertures is large.
Model of the aperture shape {#model-of-the-aperture-shape .unnumbered}
---------------------------
Apertures are arranged equatorially, and the width of each aperture along the equator is $\phi_c$. The total equatorial angular span of all $n$ apertures is $\phi _{ap} = n \phi _c$. Centres of neighbouring apertures are separated by an azimuthal angle of $\alpha = 2 \pi / n$, where $n$ is the number of apertures in the pollen grain. The azimuthal angle $\varphi \in [0,2\pi]$ is rescaled so that $\tilde{\varphi} = \varphi - {\rm int}(\varphi / \alpha)\,\alpha$, where ${\rm int}(x)$ denotes the integer part of $x$. All points for which the azimuthal angle $\tilde{\varphi}$ and the polar angle $\vartheta \in [0,\pi]$ fulfill the conditions $$\begin{aligned}
&\tilde{\varphi}& < \quad \phi_c \nonumber \qquad{\rm and} \\
\frac{\pi}{2} - \theta_{\Psi}(\tilde{\varphi}, \phi_c) \quad < &\vartheta& < \quad \frac{\pi}{2} + \theta_{\Psi}(\tilde{\varphi}, \phi_c)\end{aligned}$$ belong to the aperture. Note again that the apertures are arranged along the equator of the grain, where they assume the largest azimuthal width. We have also defined $\theta_{\Psi}(\tilde{\varphi}, \phi_c) = \Psi \sqrt{\left(\frac{\phi_c}{2}\right)^2 - \left(\tilde{\varphi} - \frac{\phi_c}{2} \right)^2}$, where the parameter $\Psi$ defines the shape of the aperture. When $\Psi = 1$, the aperture is approximately circular (porus). As $\Psi \rightarrow \infty$, the aperture assumes the shape of a spherical lune (idealized colpus). The polar span of the aperture is largest at $\tilde{\varphi} = \phi_c/2$, where $\theta={\rm min}[\Psi \phi_c, \pi]$; the maximal, pole-to-pole span $\theta=\pi$ is thus obtained when $\Psi \phi_c \geq \pi$. Calculations presented in the main text were performed with $\Psi \rightarrow \infty$. Figure \[fig:S1\] shows equatorial cross-sections (middle row) and 3D shapes (bottom row) of infolded triaperturate grains for different shapes of the apertures (from an idealized colpus to porus; top row).
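The conditions above translate directly into a point-membership test. The sketch below classifies points on the sphere, given by $(\varphi,\vartheta)$, as belonging to an aperture or to the exine; the equatorial width $\phi_c = 0.22\pi$ matches the example of Fig. \[fig:S1\], while $n=3$ and $\Psi=4$ are arbitrary illustrative choices.

```python
import numpy as np

# Membership test for the aperture region defined in the text:
# n apertures of equatorial (azimuthal) width phi_c, shape parameter Psi,
# centred on the equator theta = pi/2.  phi in [0, 2*pi], theta in [0, pi].

def in_aperture(phi, theta, n=3, phi_c=0.22 * np.pi, Psi=4.0):
    alpha = 2.0 * np.pi / n                        # separation of aperture centres
    phi_t = phi - np.floor(phi / alpha) * alpha    # rescaled azimuth, in [0, alpha)
    if phi_t >= phi_c:
        return False                               # outside the azimuthal span
    # polar half-width of the aperture at this azimuth
    theta_psi = Psi * np.sqrt((phi_c / 2.0) ** 2 - (phi_t - phi_c / 2.0) ** 2)
    return abs(theta - np.pi / 2.0) < theta_psi

# Example: fraction of a uniform spherical point set that falls in the apertures.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 20000)
theta = np.arccos(rng.uniform(-1.0, 1.0, 20000))   # uniform sampling on the sphere
frac = np.mean([in_aperture(p, t) for p, t in zip(phi, theta)])
print(f"aperture area fraction (Monte Carlo): {frac:.3f}")
```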
Minimum volume and prolateness of maximally infolded shapes {#minimum-volume-and-prolateness-of-maximally-infolded-shapes .unnumbered}
===========================================================
Bounding box dimensions {#bounding-box-dimensions .unnumbered}
-----------------------
Dimensions of the bounding box of a pollen grain are calculated with respect to fixed coordinate axes. This means that the bounding box should not be considered as the minimal bounding box in some predefined sense (e.g., minimal volume or minimal equatorial area box). While different definitions of the bounding box can produce somewhat different ratios of the box dimensions, any difference would be small in our case, as the cross-section of the grain in the equatorial plane is nearly isometric, i.e., it typically does not show a prominent elongation along any direction in the equatorial plane.
Analytical approximations for the bounding box dimensions and the volume of the maximally infolded shape {#analytical-approximations-for-the-bounding-box-dimensions-and-the-volume-of-the-maximally-infolded-shape .unnumbered}
--------------------------------------------------------------------------------------------------------
The final, minimum volume of a fully infolded shape, $V_{in}$, is a quantity which depends on the total area of the apertures $A_{ap}$, their number $n$, and, in general, also on the elasticity of the grain. A rough estimate of $V_{in}$ can be obtained by assuming that the infolded shape does not stretch significantly and approximating it by an ellipsoid. The cross-section in the equatorial plane can be approximated by a circle whose perimeter can be calculated by formally excising the apertures. The longest axis of the ellipsoid can be obtained from the requirement that the exine does not stretch when the equatorial radius of the grain shrinks. From these considerations, we can calculate the dimensions of the bounding box of the grain at the point of complete infolding as $$\begin{aligned}
\frac{u_x+u_y}{4 R_0} &=& 1 - \tilde{\phi}_{ap} \nonumber \\
\frac{u_z}{2 R_0} &=& 1 + \tilde{\phi}_{ap},\end{aligned}$$ where $\tilde{\phi}_{ap} = \phi _{ap} / 2\pi = A_{ap} / A_0$. For $\tilde{\phi}_{ap} = 1/3$, this gives $(u_x+u_y)/(4 R_0) = 2/3$, and $u_z/(2 R_0) = 4/3$, which compares favourably with the values of $0.75$ and $1.25$ obtained in the numerical simulations (Fig. 2 in the main text). The volume of the grain at the point of maximal infolding is given by $$\frac{\Delta V_{in}}{V_0} \approx \tilde{\phi}_{ap} + \tilde{\phi}_{ap}^2 - \tilde{\phi}_{ap}^3,
\label{eq:S_vin}$$ where $\Delta V_{in} = V_0 - V_{in}$. This rough estimate does not account for the volume subtracted from the grain by the infolded apertures themselves. For $\tilde{\phi}_{ap} = 0.25$, $0.33$, and $0.41$, the equation predicts $\Delta V_{in}/V_0 \approx 0.30$, $0.41$, and $0.51$, respectively, in reasonable agreement with the values obtained for $n=3$ ($0.27$, $0.35$, and $0.5$; see Fig. \[fig:S2\] and Fig. 2 in the main text). Equation \[eq:S\_vin\] less accurately describes grains with $n=1$ and $2$, because the infolded apertures take up a significant volume of the infolded grain. This effect is rather difficult to estimate analytically, and we have found the following approximation to be useful: $$\frac{\Delta V_{in}}{V_0} \approx \tilde{\phi}_{ap} + \tilde{\phi}_{ap}^2 - \tilde{\phi}_{ap}^3 - \frac{\tilde{\phi}_{ap}^2}{2n} ( 1 + \tilde{\phi}_{ap})$$ The last term, subtracted from the approximation in , accounts for the volume taken from the grain by the infolded apertures. The correction is small when $n>2$, but becomes important for $n=1$ and $2$. The smallest infolded volume of a grain is obtained when the entire aperture area is lumped in a single, large aperture ($n=1$).
Maximally infolded shapes of bi- and tetraaperturate grains {#maximally-infolded-shapes-of-bi--and-tetraaperturate-grains .unnumbered}
===========================================================
Figures \[fig:S3\] and \[fig:S4\] show phase diagrams, analogous to the one shown for tricolpate ($n=3$) pollen in Fig. 3 in the main text, for pollen grains with two and four colpi, respectively. The shapes of infolded pollen, shown in these diagrams, were analyzed in order to produce Fig. 4 in the main text. We also analysed the phase diagram for monocolpate ($n=1$) pollen; however, for the case of a single and sufficiently large aperture, the minimal infolded volume can no longer be thought of as a quantity independent of the elasticity of the problem. Consequently, representing the cross-sections of infolded pollen shapes in a diagram for a single volume of the infolded grain, as was done for $n=2$, $3$, and $4$, can thus be somewhat misleading. In order to produce the data for $n=1$ required to produce Fig. 4, we must instead follow the folding pathway all the way to the point of minimum achievable volume or to the point where it becomes obvious that the apertures do not close at all. This analysis was performed for $n=1$ as well as for $n=2$, $3$, and 4, although the information in the latter three cases can be also reliably obtained from Figs. 3, \[fig:S3\], and \[fig:S4\].
![Shapes of desiccated triaperturate grains with $\tilde{\gamma}=7000$ and $f=0.01$ at $\Delta V/V_0=0.35$ for different aperture sizes and shapes. The span of the aperture along the azimuthal angle is the same in all four cases, $\phi_c=0.22\pi$, while the span along the polar angle is gradually reduced from $\theta=\pi$ (pole-to-pole colpus) to $\theta=0.22\pi$ (circular pore). This means that the total proportion of the aperture area $A_{ap}/A_0$ is gradually reduced as well—see the main text and SI Methods for more details.[]{data-label="fig:S1"}](FigS1-small.pdf){width="60.00000%"}
![Regular aperture closing during desiccation of tricolpate pollen with $\tilde{\gamma}=7000$, $f=0.01$, and $A_{ap}/A_0=0.25$ (black line) and $0.41$ (gray line). The elastic energy of the shapes is shown as a function of volume reduction $\Delta V/V_0$ (top panel). The characteristic shapes of the shells are shown in the middle and bottom rows of graphs for $A_{ap}/A_0=0.25$ and $0.41$, respectively, for $\Delta V/V_0=0.012$, $0.13$, $0.26$ (middle row) and $\Delta V/V_0=0.012$, $0.13$, $0.26$, $0.35$, $0.49$ (bottom row). Shape colours indicate the surface curvature, with the brightest yellow and the darkest blue corresponding to largest positive and negative curvatures, respectively.[]{data-label="fig:S2"}](FigS2-small.pdf){width="60.00000%"}
![Phase diagram of bicolpate pollen folding in the $\gamma$-$f$ plane, showing equatorial cross-sections of the pollen shapes. The relative change of volume upon desiccation is $\Delta V/V_0=0.42$ throughout, and the colpi (shown in red in the cross-sections) span $A_{ap}/A_0=1/3$ of the total area of the pollen surface. Numbers next to the pollen shapes denote the elongation of the bounding box of the shapes, $2u_z/(u_x+u_y)$, indicating the prolateness of the grain. The shaded region of the phase diagram corresponds to the elastic parameters where the apertures close completely and nearly symmetrically, without asymmetric deformation of the exine.[]{data-label="fig:S3"}](FigS3-small.pdf){width="60.00000%"}
![Phase diagram of tetracolpate pollen folding in the $\gamma$-$f$ plane, showing equatorial cross-sections of the pollen shapes. The relative change of volume upon desiccation is $\Delta V/V_0=0.35$ throughout, and the colpi (shown in red in the cross-sections) span $A_{ap}/A_0=1/3$ of the total area of the pollen surface. Numbers next to the pollen shapes denote the elongation of the bounding box of the shapes, $2u_z/(u_x+u_y)$, indicating the prolateness of the grain. The shaded region of the phase diagram corresponds to the elastic parameters where the apertures close completely and nearly symmetrically, without asymmetric deformation of the exine.[]{data-label="fig:S4"}](FigS4-small.pdf){width="60.00000%"}
---
abstract: 'The interstellar medium is observed in a hierarchical fractal structure over several orders of magnitude in scale. Aiming to understand the origin of this structure, we carry out numerical simulations of molecular cloud fragmentation, taking into account self-gravity, dissipation and energy input. Self-gravity is computed through a tree code, with fully or quasi-periodic boundary conditions. Energy dissipation is introduced through cloud-cloud inelastic collisions. Several schemes are tested for the energy input. It appears that energy input from the galactic shear makes it possible to achieve a stationary clumped state for the gas, avoiding final collapse. When a stationary turbulent cascade is established, it is possible to carry out meaningful statistical studies on the data, such as the fractal dimension of the mass distribution.'
author:
- 'B. Semelin and F. Combes'
date: 'Received XX XX, 1999; accepted XX XX, 2000'
title: ' N-body simulations of self-gravitating gas in stationary fragmented state '
---
Introduction
============
From many studies over the last two decades, it has been established that the interstellar medium (ISM) has a clumpy hierarchical structure, approaching a fractal structure independent of scale over 4 to 6 orders of magnitude in sizes (Larson 1981, Scalo 1985, Falgarone et al 1992, Heithausen et al 1998). The structure extends up to giant molecular clouds (GMC) of 100 pc scale, and possibly down to 10 AU scale, as revealed by HI absorption VLBI (Diamond et al 1989, Faison et al 1998) or extreme scattering events in quasar monitoring (Fiedler et al. 1987, Fiedler et al. 1994). It is not yet clear which mechanism is the main responsible for this structure; it could be driven by turbulence, since the Reynolds number is very high, or self-gravity, since clouds appear to be virialized on most of the scales, with the help of magnetic fields, differential rotation, etc...
One problem of the ISM turbulence is that the relative velocities of clumps are supersonic, leading to high dissipation and short lifetimes of the structures. An energy source should then be provided to maintain the turbulence. It could be provided by star formation (stellar winds, bipolar flows, supernovae, etc... Norman & Silk 1980). However, the power-law relations observed between size and line-width, for example, are the same in regions of star-formation or quiescent regions, either in the galactic disk, or even outside of the optical disk, in the large HI extensions, where very little star formation occurs. In these last regions alternative processes of energy input must be available. A first possibility is injection by the galactic shear. Additionally, since there are no heating sources in the gas, the dense cold clumps should be bathing at least in the cosmological background radiation, at a temperature of 3K (Pfenniger & Combes 1994). In any case, in a nearly isothermal regime, the ISM should fragment recursively (e.g. Hoyle 1953), and be Jeans unstable at every scale, down to the smallest fragments, where the cooling time becomes of the same order as the collapse time, i.e. when a quasi-adiabatic regime is reached (Rees 1976).
In this work, we try to investigate the effect of self-gravity through N-body simulations. We are not interested in star formation, but essentially in the fractal structure formation, that could be driven essentially by gravity (e.g. de Vega et al. 1996). Our aim is to reach a quasi-stationary state, where there is statistical equilibrium between coalescence and fragmentation of the clouds. This is possible when the cooling (energy dissipated through cloud collisions, and subsequent radiation) is compensated by an energy flux due to external sources : cosmic rays, star formation, differential shear... Previous simulations of ISM fragmentation have been performed to study the formation of condensed cores, some with isolated boundary conditions (where the cloud globally collapses and forms stars, e.g. Boss 1997, Burkert et al. 1997), or in periodic boundary conditions (Klessen 1997, Klessen et al. 1998). The latter authors assume that the cloud at large scale is stable, supported by turbulence, or other processes. They follow the over-densities in a given range of scales, and schematically stop the condensed cores as sink particles when they should form structures below their resolution. We also adopt periodic boundary conditions, since numerical simulations are very limited in their scale dynamics, and we can only consider scales much smaller than the large-scale cut-off of the fractal structure. We are also limited by our spatial resolution: the smallest scale considered is far larger than the physical small-scale cut-off of the structures. Our dynamics is computed within this range of scales.
Our goal is to achieve a long enough integration time for the system to reach a stationary state, stationary in a statistical sense. This state should have its energy confined in a narrow domain, thus presenting the density contrasts of a fragmented medium while avoiding gravitational collapse after a few dynamical times. Only when those conditions are fulfilled can we try to build a meaningful model of the gas. Let us consider, for example, the turbulence concepts.
Theoretical attempts have been made to describe the interstellar medium as a turbulent system. While dealing with a system both self-gravitating and compressible, the standard approach has been to adapt Kolmogorov picture of incompressible turbulence. The first classical assumption is that the rate of energy transfer between scales is constant within the so-called inertial range. This inertial range is delimited by a dissipative scale range at small scales and a large scale where energy is fed into the system. In the case of the interstellar medium, the energy source can be the galactic shear, or the galactic magnetic field, or, on smaller scales, stellar winds and such. In this classical picture of turbulence, one derives the relation $v_L \sim L^{1/3}$, where $v_L$ is the velocity of structures on scale $L$. If we consider now that the structures are virialized at all scales, we get the relation $\rho_L \sim L^{-4/3}$, where $\rho_L$ is the density of structures on scale $L$. This produces a fractal dimension $D=1.66$. A more consistent version is to take compressibility into account in Kolmogorov cascade; then $\rho_L v_L^3 / L \sim cst$. Adding virialization, we get the relations $v_L \sim L^{3/5}$ and $\rho_L \sim L^{-4/5}$. This produces the fractal dimension $D=2.2$. It should be emphasized again that all these scenarios assume a quasi-stationary regime.
However, numerical simulations of molecular cloud fragmentation have been, so far, carried out in dissipative schemes which allow for efficient clumping (e.g. Klessen et al 1998, Monaghan & Lattanzio 1991). As such, they do not reach stationary states. Our approach is to add an energy input making up for the dissipative loss. After relaxation from initial conditions we attempt to reach a fragmented but non-collapsing state of the medium that does not decay into homogeneous or centrally condensed states. It is then meaningful to compute the velocity and density field power spectra and test the standard theoretical assumptions. It is also an opportunity to investigate the possible fractal structure of the gas. Indeed, starting from a weakly perturbed homogeneous density field, the formation time of a fractal density field independent of the initial conditions is at the very least of the order of the free-fall time, and more likely many times longer. A long integration time should permit the full emergence of a fractal mass distribution, independent of the initial conditions.
This program encounters both standard and specific difficulties. Galaxy clustering as well as ISM clumping requires large density contrasts, that is to say high spatial resolution in numerical simulations. This point is tackled, though not easily, by adaptive-mesh algorithms, tree algorithms, or multi-scale schemes ($\hbox{P}^3\hbox{M}$). Another problem is the need for higher time resolution in collapsed regions. This is CPU-time consuming or is dealt with by multiple time steps. This point is particularly sensitive in our simulation since we need to follow accurately the internal dynamics of the clumps and filaments to avoid total energy divergence. Finally, we need a long integration time to reach the stationary state. This is directly in competition for CPU time with spatial resolution for given computing resources.
Numerical methods
==================
Self-gravity
-------------
In our simulation we use N-body dynamics with a hierarchical tree algorithm and multipole expansion to compute the forces, as designed by Barnes & Hut (1986, 1989). The number of particles is between $10\;000$ and $120\;000$. We are limited by the fact that we need long integration times in our study.
Within the N-body framework the tree algorithm is an $N\ln(N)$ search algorithm picking the closest neighbours for exact force computation and sorting more distant particles into hierarchical boxes for multipole expansion. The size of the boxes used in the expansion is set, for the contribution of a given region to the force, by a control parameter $\theta$ defining the maximal angular size of the boxes seen from the point where the force is computed. Typical values used for $\theta$ are 0.5 to 1.0. The multipole expansion is carried out to quadrupole terms. As usual, a short-distance softening is used in the contribution of the closest particles to the interaction. The softening length is taken as $\sim 1/10$ of the inter-particle distance of the homogeneous state.
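Two of the ingredients just described, the opening-angle acceptance test and the softened contribution of close particles, can be sketched as follows; this is a monopole-only illustration with illustrative values, not the actual tree walk with quadrupole terms used in the code.

```python
import numpy as np

# Sketch of two ingredients of the tree force computation:
#  - the opening-angle criterion: a cell of size s seen at distance d is
#    accepted as a single multipole if s/d < theta, otherwise it is opened;
#  - Plummer-type softening of the contribution of close particles, with a
#    softening length eps ~ 1/10 of the mean inter-particle distance.

THETA = 0.8          # opening-angle control parameter (0.5-1.0 in the text)

def cell_accepted(cell_size, distance, theta=THETA):
    return cell_size / distance < theta

def softened_accel(pos, cell_com, cell_mass, eps):
    """Monopole acceleration from a cell (or particle) with Plummer softening."""
    d = cell_com - pos
    r2 = np.dot(d, d) + eps**2
    return cell_mass * d / r2**1.5           # G = 1 units

# Example: a distant cell is accepted, a nearby one would have to be opened.
pos = np.zeros(3)
eps = 0.1                                    # ~1/10 of the inter-particle distance
for size, com in [(1.0, np.array([10.0, 0.0, 0.0])),
                  (1.0, np.array([1.0, 0.0, 0.0]))]:
    d = np.linalg.norm(com - pos)
    print(f"cell size {size}, distance {d}: accepted = {cell_accepted(size, d)},",
          "accel =", softened_accel(pos, com, 1.0, eps))
```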
Boundary conditions
-------------------
There are two ways to erase finite-size effects and avoid spherical collapse in an N-body simulation. We can choose either quasi-periodic boundary conditions or fully periodic boundary conditions following the Ewald method (Hernquist et al 1991). In quasi-periodic conditions the interaction of two particles is computed between the first particle and the closest of all the replicas of the other particle. The replicas are generated by the periodization of the simulation box to the whole space. In the fully periodic conditions, the first particle interacts with all the replicas of the other one. The relevance of each method will be discussed in section \[3dbc\]. We will compare them for simple free-falls using the fractal dimension as a diagnostic.
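The quasi-periodic prescription is the usual minimum-image convention: the pair separation is wrapped into the box so that each particle sees only the nearest replica of the other. A minimal sketch, with an arbitrary unit box:

```python
import numpy as np

# Quasi-periodic (minimum-image) convention: for a pair (i, j) the force is
# computed with the nearest of all periodic replicas of particle j, i.e. the
# separation is wrapped into [-L/2, L/2) in every coordinate.  The fully
# periodic alternative (Ewald summation) would instead sum over all replicas.

L = 1.0   # side of the simulation box (illustrative)

def minimum_image(dr, box=L):
    return dr - box * np.round(dr / box)

xi = np.array([0.05, 0.95, 0.50])
xj = np.array([0.95, 0.05, 0.55])
dr = minimum_image(xj - xi)
print("raw separation :", xj - xi)
print("minimum image  :", dr)          # components near +/-0.1, not +/-0.9
print("distance       :", np.linalg.norm(dr))
```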
Time integration and initial conditions
----------------------------------------
The integration is carried out through a multiple time steps leap-frog scheme. Making a Keplerian assumption about the orbits of the particles, we have the following relation between time step and the local interparticle distance: $\delta\tau \sim (\delta l)^{3 \over 2}$. In a fractal medium, the exponent could be different. We do not take this into account since it would lead us to modify the integration scheme dynamically according to the fractal properties of the system. As the tree sorts particles in cells of size $2^n d_0$ according to the local density, we should use $(2 \sqrt{2})^n \delta
t_0$ time steps. For simplicity we are using $3^n \delta t_0$. Then, $n$ different time steps allow us to follow the dynamics with the same accuracy over regions with density contrasts as high as $2^{3(n-1)}$.
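The mapping from local density to time-step level can be made explicit; the sketch below assigns to a particle the largest step of the form $\delta t_0/3^m$ compatible with the Keplerian estimate $\delta\tau \sim (\delta l)^{3/2}$, where $\delta l$ is the local interparticle distance. The convention here takes $\delta t_0$ as the step at the reference spacing $d_0$ and assigns smaller steps in denser regions (the text writes the same hierarchy the other way around, with larger multiples of $\delta t_0$ for larger cells); $d_0$, $\delta t_0$ and the printed density contrasts are illustrative.

```python
import numpy as np

# Multiple time-step assignment sketch: from the Keplerian scaling
# dtau ~ (dl)^(3/2), a particle whose local interparticle distance is dl
# (smaller in denser regions) is given the largest step of the form
# dt0 / 3**m that does not exceed its Keplerian estimate.

def timestep_level(dl, d0=1.0):
    """Return m such that the particle is integrated with dt0 / 3**m."""
    dtau = (dl / d0) ** 1.5          # Keplerian scaling, in units of dt0
    m = int(np.ceil(-np.log(dtau) / np.log(3.0)))
    return max(m, 0)

for contrast in (1, 8, 64, 512):      # local density contrasts 2**(3k)
    dl = contrast ** (-1.0 / 3.0)     # interparticle distance shrinks as rho^(-1/3)
    m = timestep_level(dl)
    print(f"density contrast {contrast:4d}: dl = {dl:.3f} d0, step = dt0 / 3**{m}")
```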
We will use several types of initial conditions. The usual power law for the density power spectrum will be used for the free falls. It is usually implemented by using the Zel’dovich approximation, which is valid before the first shell crossing for a pressureless perfect fluid. The validity range of this approximation is difficult to assess in an N-body simulation. We use a similar implementation that does not, however, extend to non-linear regimes but also gives density power spectra with the desired shapes. A Gaussian velocity field is chosen with a specified power spectrum $P_v(k)\sim k^{\alpha-2}$, then the particles are positioned at the nodes of a grid and displaced according to the velocity field with one time step. The resulting density fluctuation spectrum is $P_{\rho}(k) \sim k^{\alpha}$, as follows from matter conservation.
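A 2-D sketch of this construction is given below: a Gaussian random velocity field with power spectrum $P_v(k)\sim k^{\alpha-2}$ is drawn in Fourier space, the particles are placed on a regular grid, and each is displaced by one small step of the velocity field, which imprints density fluctuations with $P_\rho(k)\sim k^{\alpha}$. The grid size, normalisation, and displacement amplitude are illustrative, and only small (linear) displacements are intended.

```python
import numpy as np

# Generate initial conditions as described in the text (2-D version):
# draw a Gaussian random velocity field with P_v(k) ~ k**(alpha - 2), place
# particles on a regular grid and displace them by one small time step.

alpha = -2.0
N = 64                      # grid is N x N, box of unit size
dt = 0.2                    # displacement, in grid spacings (kept small)
rng = np.random.default_rng(1)

kx = np.fft.fftfreq(N) * N
ky = np.fft.fftfreq(N) * N
k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
k[0, 0] = 1.0               # avoid division by zero; zero mode removed below

amplitude = k ** ((alpha - 2.0) / 2.0)   # |v_k| ~ sqrt(P_v(k))
amplitude[0, 0] = 0.0

def gaussian_field():
    modes = amplitude * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
    return np.real(np.fft.ifft2(modes))

vx, vy = gaussian_field(), gaussian_field()
vx /= np.std(vx); vy /= np.std(vy)       # normalise to unit velocity dispersion

# Particles on the grid nodes, displaced by one step of the velocity field.
x, y = np.meshgrid(np.arange(N) / N, np.arange(N) / N, indexing="ij")
px = (x + dt * vx / N) % 1.0
py = (y + dt * vy / N) % 1.0
print("rms displacement in grid spacings:", np.std(dt * vx), np.std(dt * vy))
```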
Gas physics
------------
There can be several models for the ISM: either it is considered as a continuous fluid medium and simulated through gas hydrodynamics (with pressure, shocks, etc.), or, given the highly clumped nature of the interstellar medium and its highly inhomogeneous structure (density variations over 6 orders of magnitude), it can be considered as a collection of dense fragments that dissipate their relative energy through collisions. Our choice of particle dynamics assumes that each particle stands for a clump and neglects the interclump medium. Another reason is that we expect a fractal distribution of the masses. This means a non-analytical density field, which is not easily handled by hydrodynamical codes.
The dissipation enters the dynamics through a gridless sticky-particle collision scheme. The collision round is computed periodically, every 1 to 10 time steps. The frequency of this collision round is one way to adjust the strength of the dissipation. Since, as the precise collision schemes will reveal, we adopt a statistical treatment of the dissipation, it is not necessary to compute the collision round at each time step. Two schemes are investigated. The first uses the tree search to find a candidate for collision with a given particle, in a sphere whose radius $r_c$ is a fixed parameter. Inelastic collisions are then computed, dissipating energy but conserving linear momentum. In the second scheme, each particle has a collision probability proportional to the inverse of the local mean collision time. If it passes the probability test, the collision is computed with the most suitable neighbour. In this second scheme, dissipation may occur even in low-density regions; in the first it occurs only above a given density threshold. In practice this implies that, in the first scheme, dissipation happens only at small scales, while in the second, it happens on all scales but is still stronger at small scales.
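A bare-bones version of the second (probabilistic) scheme is sketched below: each particle collides with probability given by the collision-round interval divided by a local mean collision time, and the collision is fully inelastic with its nearest neighbour, conserving momentum and dissipating kinetic energy. The local-density estimate, the cross-section, and the brute-force neighbour search are naive stand-ins for the tree-based machinery of the code, and all parameter values are illustrative.

```python
import numpy as np

# Sketch of the probabilistic sticky-particle round (second scheme): each
# particle collides with probability dt_round / tau_coll, where tau_coll is a
# local mean collision time, and the collision is fully inelastic (velocities
# averaged, momentum conserved, kinetic energy dissipated).

rng = np.random.default_rng(2)
N, r_c, dt_round = 200, 0.05, 0.01
pos = rng.random((N, 2))
vel = rng.normal(0.0, 1.0, (N, 2))
mass = np.ones(N)

def local_collision_time(i):
    d = np.linalg.norm(pos - pos[i], axis=1)
    n_loc = max(np.sum(d < r_c) - 1, 1) / (np.pi * r_c**2)  # local surface density
    sigma = 0.02                                            # clump "cross-section"
    v_disp = np.std(vel)                                    # crude velocity dispersion
    return 1.0 / (n_loc * sigma * v_disp)

e_before = 0.5 * np.sum(mass[:, None] * vel**2)
for i in range(N):
    if rng.random() < dt_round / local_collision_time(i):
        d = np.linalg.norm(pos - pos[i], axis=1)
        d[i] = np.inf
        j = np.argmin(d)                        # most suitable neighbour = nearest
        v_cm = (mass[i] * vel[i] + mass[j] * vel[j]) / (mass[i] + mass[j])
        vel[i] = vel[j] = v_cm                  # inelastic: both keep the c.o.m. velocity
e_after = 0.5 * np.sum(mass[:, None] * vel**2)
print(f"kinetic energy dissipated in one round: {e_before - e_after:.4f}")
```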
Special attention must be paid to the existence of a small-scale cutoff in the physics of the system. Indeed, at very small scales ($\sim 20$ AU), the gas is quasi-adiabatic. Its cooling time is then much longer than the isothermal free-fall time at the same scale. Moreover, the mean collision time between clumps at this scale, in a fractal medium, is much shorter than the cooling time. As a result, collisions completely prevent the already slowed-down processes of collapse and fragmentation in the quasi-adiabatic gas: the fractal structure is broken at the corresponding scale. To take these phenomena into account in our simulation, we need to introduce a high-density cutoff. This is achieved by computing elastic or super-elastic collisions at scales $< 0.5 r_c$, instead of inelastic collisions. In the case of super-elastic collisions, the overall energy balance is still negative at small scales (due to a volume effect); however, we do introduce a mechanism of energy injection at small scale. This choice can furthermore be justified by physical considerations.
When there is no energy provided by star formation (for example in the outer parts of galaxies, where gas extends radially much beyond the stellar disk), the gas is only interacting with the intergalactic radiation field and the cosmic background radiation. The latter provides a minimum temperature for the gas, and plays the role of a thermostat (at a temperature of 2.76 K at zero redshift). The interaction between the background and the gas can only occur at the smallest scale of the fragmentation of the medium, corresponding to our small-scale cutoff; the radiative processes involve hydrogen and other heavier elements (Combes & Pfenniger 1997). Maintaining the gas isothermal therefore requires some energy input at the smallest scale.
Several schemes for the energy input have been tried. Reinjection at small scale through added thermal motion has proved unable to sustain the system in any state other than a homogeneous one. Then, turning to the turbulent point of view, we have tried reinjection at large scale. Two main methods have been tested: reinjection through a large-scale random force field, and reinjection through the action of the galactic shear. Their effects will be discussed in section 5.
Finally, the statistical properties of the system can be described in many different ways. We will use the correlation fractal dimension as a diagnostic. The definition and an example of application of this tool are given in the appendix.
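For reference, a brute-force estimate of the correlation dimension, $C(r)\propto r^{D}$, can be written in a few lines; applied to a homogeneous 2-D point set it returns $D\simeq 2$, slightly lowered by edge effects. The estimator actually used in the paper is the one defined in the appendix and may differ in detail; the point set, scale range, and binning below are illustrative.

```python
import numpy as np

# Correlation-dimension estimate: the fraction of pairs closer than r scales
# as C(r) ~ r**D, and D is the local logarithmic slope of C(r).  Brute-force
# version for a small point set.

rng = np.random.default_rng(3)
pts = rng.random((1000, 2))                    # homogeneous 2-D set: expect D ~ 2

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d = d[np.triu_indices(len(pts), k=1)]          # unique pair separations

radii = np.logspace(-1.6, -0.6, 11)            # scales well inside the unit box
C = np.array([np.mean(d < r) for r in radii])  # correlation integral

# local slope d log C / d log r between successive radii
D = np.diff(np.log(C)) / np.diff(np.log(radii))
for r, Dr in zip(np.sqrt(radii[1:] * radii[:-1]), D):
    print(f"scale {r:.3f}: D = {Dr:.2f}")
```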
Simple 3-D free falls
======================
The choice of the boundary conditions {#3dbc}
--------------------------------------
A fractal structure such as observed in the ISM or in the galaxy distribution obeys a [*statistical*]{} translational invariance. If we run simulations with vacuum boundary conditions, this invariance is grossly broken. On the other hand, the quasi-periodic or fully-periodic conditions restore this invariance to some extent. They have been used in cosmological simulations. Hernquist et al (1991) have compared the two models to analytical results and found that the fully-periodic model simulates more accurately the self-gravitating gas in an expanding universe. This is in agreement with expectations, since uniform expansion is automatically included in a fully-periodic model, while it is not in a quasi-periodic one. Klessen (1996,1998) applied the Ewald method to simulations of the interstellar medium. This is not natural, since expansion is necessary to validate the Ewald method. Nevertheless, one can argue that the dynamics is not altered by the use of the Ewald method in regions with high density contrasts.
Hernquist’s study was on the dynamics of the modes. We are more interested in the fractal properties. As far as we are concerned, the question is whether the same initial conditions produce the same fractal properties in a cosmological framework (fully-periodic conditions) and in an ISM framework (quasi-periodic conditions). We will try to answer this question in section \[3dff\].
The choice of the initial conditions
-------------------------------------
The power spectrum of the density fluctuations is most commonly described as a power law in cosmological simulations. Then, different exponents in the power law produce different fractal dimensions. The cosmologist can hope to find the fractal dimension from observations and go back to the initial power spectrum. In the case of the interstellar medium, however, it is unlikely that the fractal dimension is the result of initial conditions since the life-time of the medium is much longer than the dynamical time of the structures. In this regard, all initial conditions should be erased and produce the same fractal dimension. We will check whether this is the case for power law density spectra with different exponents. Simple free falls with such initial conditions will allow the comparison between the two types of periodic boundary conditions and will allow us to compare between 3D and 2D models and different dissipation schemes.
Simulations and results {#3dff}
-------------------------
The first four simulations are free falls from power law initial conditions in either fully or quasi-periodic boundary conditions for two different power law exponents: $\alpha=-1$ and $\alpha=-2$. About $120\,000$ particles were used in the simulations. The time evolution of the particle distribution for $\alpha=-2$ and fully periodic conditions is plotted in Fig. \[film\]. If we compare the qualitative aspect of the matter distribution with that found in SPH simulations (Klessen 1998), a difference appears. Filaments are less present in this N-body simulation than in SPH simulations. We believe that this is due to the strongly dissipative nature of SPH simulations. Indeed, filaments form automatically, [*even without gravity*]{}, with $P(k) \sim k^{-2}$ initial conditions for the density field. Then, the strong dissipation of SPH codes is necessary to retain them; $N$-body codes seem to show that gravity by itself is not enough.
### Fractal dimension
For each of the four simulations, the fractal dimension as a function of scale has been computed at different times of the free fall. Results are summarized in Fig. \[fracdim\].
The fractal dimension of a mathematical fractal would appear either as a horizontal straight line or as a curve oscillating around the fractal dimension if the fractal has no randomization (like a Cantor set). For our system, it appears that at sufficiently early stages of the free fall, and for scales above the dissipative cutoff, the matter distribution is indeed fractal. However, the dimension goes down with time to reach values of 2 for $\alpha=-2$, and between $2$ and $2.5$ for $\alpha=-1$. This is the fractal dimension for the total density, not for the fluctuations only. Stabilization of the fractal dimension marginally happens before the dissipation breaks the fractal.
One conclusion is however reasonable: as far as the fractal dimension is concerned, the difference between fully-periodic boundary conditions and quasi-periodic boundary conditions is not important. The deviation is a bit stronger in the $\alpha=-1$ case, which is understandable since $\alpha=-1$ produces weaker density contrasts in the initial conditions than $\alpha=-2$, and so the dynamics is less decoupled from the expansion. As a consequence, we will not use the fully-periodic boundary conditions in further simulations, since they require more CPU time and have no physical basis in the ISM framework. Clump mass spectra are given in appendix B.
Simple 2-D free falls
=====================
We consider a 2-D system where particles have only two cartesian space coordinates but still obey a ${1 \over r^2}$ interaction force.
Motivations for the two dimensional study
-----------------------------------------
Turning to 2-D simulations allows access, for given computing resources, to a wider scale range of the dynamics, or, for given computing resources and a given scale range, to larger integration times. The latter is what we will need in the simulations with energy injection. In practice we will choose to use $10^4$ particles in 2-D simulations to access both a larger integration time and a wider scale range. Indeed, between a 3-D simulation with $N=10^5$ particles and a 2-D simulation with $10^4$ particles one improves the scale range from $[1, N^{1 \over 3}=46]$ to $[1,N^{ 1 \over 2}=100]$. At the same time one can increase the integration time by a factor of $\sim 10$ at equal CPU cost.
We can also add a physical reason to these computational justifications. The simulations with an energy injection aim to describe the behaviour of molecular clouds in the thin galactic disk. This medium has a strong anisotropy between the two dimensions in the galactic plane and the third one. As such it is a suitable candidate for bidimensional modelling.
Moreover, studying a bidimensional system will allow us to check how the fractal dimension, dependent or not on the initial conditions, is affected by the dimension of the space.
Simulations with 2 different dissipative schemes
-------------------------------------------------
Here again, initial conditions with a power law density power spectrum are used. The exponent $\alpha=-2$ is chosen to allow a comparison with the 3-D simulations. Boundary conditions are quasi-periodic. We have carried out two simulations with the two different dissipative schemes described in section 2.4. The fractal dimensions at different stages of each of these two simulations are shown in Fig. \[ff2d\].
In both simulations the fractal dimension of the homogeneous initial density field is 2 at scales above mean inter-particle distance and 0 below. Then the density fluctuations develop and the fractal dimension goes down to about $1.5-1.6$ in both simulations. This value was not reached in 3-D simulations. This shows that the dimension of the space has an influence on the fractal dimension of a self-gravitating system evolved from initial density fields with a power law power spectrum. Indeed spherical configurations are favoured in 3 dimensions and may produce a different type of fractal than cylindrical configurations which are favored in two dimensions.
Another point is that the fractal dimension of the system at small scales is indeed sensitive to the dissipative scheme. The second scheme even alters the fractality of large scales. So even if we believe that the fractality of a self-gravitating system is controlled by gravity, we must keep in mind the fact that the dissipation has an influence on the result. In this regard, discrepancies should appear between $N$-body simulations including SPH or other codes.
The clump mass spectra are not conclusive due to the insufficient number of clumps formed with 10609 particles.
2-D simulations with an energy source
======================================
As already stated, the ISM is a dissipative medium. Structures radiate their energy on a time-scale much shorter than the typical GMC lifetime inferred from observations. Therefore we are led to provide an energy source in the system to obtain a lifetime longer than the free-fall time.
The choice of the energy source
--------------------------------
Different types of sources are invoked to provide the necessary energy input: stellar winds, shock waves, heating from a thermostat, galactic shear... We have tested several possibilities.
Our first idea was to model an interaction with a thermostat by periodically adding a small random component to the velocity field. We have tried to add it either to all the particles or only to the particles in the cool regions. If enough energy is put into this random injection, the mono-clump collapse is avoided. However, the resulting final state is not fractal, nor does it show density structures on several different scales. It consists of a few condensed points, like “blackholes”, in a hot homogeneous phase.
The second idea is to model a generic large-scale injection by introducing a large-scale random force field. We have tried stationary and fluctuating fields. It is possible in this scheme, by tuning the input and the dissipation, to lengthen the lifetime of the medium in an “interesting” inhomogeneous state by a factor of 2 or 3. However, the final state, after a few dynamical times, is the same as for the thermal input.
We would like to emphasize that, in all the cases just mentioned, the existence of super-elastic collisions at very short distances is [*not at all*]{} able to prevent the collapse of “blackholes”. They are indeed beaten by inelastic collisions, which happen more frequently. Thus the density cutoff introduced by super-elastic collisions is not able, all by itself, to sustain or destroy clumps.
Only one of the schemes we have tested avoids the blackhole/homogeneity duality. In this scheme, energy is provided by the galactic shear. It produces an inhomogeneous state whose life time does not appear to be limited by a final collapse in the simulations. We now describe this scheme in detail.
Modelisation of the galactic shear
------------------------------------
The idea is to consider the simulation box as a small part of a bigger system in differential rotation. The dynamics of such a sub-system has been studied in the field of planetary systems formation (Wisdom and Tremaine 1988). Toomre (1981) and recently Hubert and Pfenniger (1999) have applied it to galactic dynamics.
The simulation box is a rotating frame with angular speed $\Omega_0$, which is the galactic angular speed at the center of the box. The Coriolis force acts on the particles moving in the box. An additional external force arises from the discrepancy between the inertial force and the local mean gravitational attraction of the galaxy. The balance between these two forces is achieved only at points located at the same distance from the galactic center as the center of the box. Let us consider a 2-dimensional case. If $y$ is the orthoradial direction and $x$ the radial direction, the equations of motion read:
$$\begin{aligned}
\ddot{y} &=& -2 \Omega_0 \dot{x} + F_y \nonumber\\
\ddot{x} &=& 2 \Omega_0 \dot{y} - 2 \Omega_0 r_0
\left.{d \Omega \over d r}\right|_{r_0} \! x + F_x \nonumber\\
\nonumber\end{aligned}$$
The $F_{x,y}$ are the projections of the internal gravitational forces. The shear force acting in the radial direction is able to inject into the system energy taken from the rotation of the galaxy as a whole.
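For a single test particle without self-gravity ($F_x=F_y=0$) and a flat rotation curve, for which $r_0\,d\Omega/dr = -\Omega_0$, these equations can be integrated directly and yield the familiar epicyclic oscillation in $x$ with frequency $\kappa=\sqrt{2}\,\Omega_0$. The sketch below uses a generic ODE integrator from SciPy rather than the leap-frog scheme of the code, and the flat rotation curve is our illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Test-particle integration of the shearing-box equations of motion quoted in
# the text, without self-gravity (F_x = F_y = 0) and for a flat rotation
# curve, for which r0 * dOmega/dr = -Omega0.  The orbit is the usual epicycle
# with frequency kappa = sqrt(2) * Omega0 superposed on a drift along y.

Omega0 = 1.0
shear = -Omega0            # r0 * dOmega/dr for a flat rotation curve (assumption)

def rhs(t, s):
    x, y, vx, vy = s
    ax = 2.0 * Omega0 * vy - 2.0 * Omega0 * shear * x   # + F_x in the full problem
    ay = -2.0 * Omega0 * vx                              # + F_y in the full problem
    return [vx, vy, ax, ay]

s0 = [0.1, 0.0, 0.0, 0.0]                 # small radial offset, at rest in the box
t = np.linspace(0.0, 4.0 * np.pi / Omega0, 2001)
sol = solve_ivp(rhs, (t[0], t[-1]), s0, t_eval=t, rtol=1e-9, atol=1e-12)

x = sol.y[0]
# measure the radial oscillation period and compare with 2*pi/kappa
zero_crossings = np.where(np.diff(np.sign(x - np.mean(x))) != 0)[0]
period = 2.0 * np.mean(np.diff(t[zero_crossings]))
print(f"measured radial period {period:.3f} vs 2*pi/(sqrt(2)*Omega0) = "
      f"{2.0 * np.pi / (np.sqrt(2.0) * Omega0):.3f}")
```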
Some modifications must also be made to the boundary conditions. It is not consistent with the differential rotation to keep a strict spatial periodicity. To take differential rotation into account, layers of cells at different radii must slide according to the variation of the galactic angular speed between them. A particle still interacts with the closest of the replicas of another particle (including the particle itself). This is sketched in Fig. \[shearsch\].
Results
--------
We have computed simulations in two dimensions with 10609 particles. The direction orthogonal to the galactic disk is not taken into account. A tuning between the dissipation rate and the shear strength is necessary to obtain a structured but non-collapsing configuration. However, it is not a fine tuning at all. We found that, in a range of values of the shear, the dissipation adapts itself to compensate the energy input, so that we did not encounter any limitation of the lifetime of the medium. The simulations have been carried out over more than 20 free-fall times. Snapshots from a simulation are plotted in Fig. \[coriol\].
### Formation of persistent structures
We can see that tilted stripes appear in the density field. This structure appears also in simulations by Toomre (1981) and Hubert and Pfenniger (1999). The new feature is the intermittent appearance of dense clumps within the stripes. These clumps remain for a few free-fall times of the total system, then disappear, torn apart by the shear. This phenomenon is unknown in the simple free fall simulations where a clump, once formed, gets denser and more massive with time. It also appeared that the tearing of the clumps by the shear happens only if super-elastic collisions enact an efficient density cut-off. If we use elastic collisions only, below the dissipation scale, thus enacting a less efficient density cut-off, the clumps persist. However, they do not collapse to the dense blackhole states encountered with different large scale energy input schemes. So the combination of the galactic shear and super-elastic collisions below the dissipation scale is necessary to obtain destroyable clumps, while the shear alone produces rather stable clumps, and the super-elastic collisions alone cannot avoid complete collapse.
While we cannot consider the density field as a fractal structure, it is indeed the beginning of a hierarchical fragmentation, since we have two levels of structure. However, the stripes are mainly the result of the shear, whereas the clumps result from the simple gravitational fragmentation of the stripes. These two kinds of structure are thus not created by exactly the same dynamical processes.
It must also be noted that the shear plays a direct role at large scales (of the order of 10 pc), where the galactic tidal forces are comparable to the self-gravity of the corresponding structures. But since the structures fragment hierarchically, its effect is expected to propagate and cascade down to the smallest scales.
### Fractal analysis
The fractal dimension of the medium is plotted at different times in Fig. \[df\]. Two characteristic scales are clearly visible. At large scales, the scale of the stripes, the dimension is about 1.9, with some fluctuations in time. At small scales, the dimension fluctuates strongly due to the appearance and disappearance of clumps; this shows up as a bump at the clump size, which is about the dissipation scale. It is important to mention that the small-scale range, while below the cutoff of the initial homogeneous conditions, is not below our dynamical resolution: the mean inter-particle distance decreases from its initial value and is consistently followed by the $N$-body dynamics.
### Effect of the initial conditions
We have already emphasized that, in GMC simulations, the resulting structure should be independent of the initial conditions. We have performed the simulation for various values of the exponent of the power spectrum. In the range $[-0.5,0.5]$ the long-time behaviour is the one described above, thus satisfying the required independence. However, for steeper spectra, e.g. an exponent of $-2$, the medium tends to collapse early into large clumps, thereby preventing the shear from acting efficiently.
We see that the behaviour is independent of the initial conditions as long as the continuity of the medium at scales larger than the GMC size (the clump size in the simulation) is not broken, so that the shear can act to sustain the GMC against collapse.
Discussion and Conclusion
=========================
We have shown that, in simulations without energy input, the evolution of the fractal dimension depends on the initial conditions and also, to some extent, on the dissipation scheme. We have then investigated the effects of an energy input on the dynamics of the system. Such an input is necessary to prevent the total collapse of the system. Moreover, we do not want the input to destroy the inhomogeneous fractal structure of the medium, as random reinjections (thermal bath, random force field) do. Among the solutions we have tested, only the highly regular force field provided by the galactic shear preserves an inhomogeneous state. However, this state is quite strongly constrained by the geometry of the shear: stripes appear as a result. Interestingly, clumps form in these stripes and can be destroyed if we have an efficient enough density cutoff.
This state does not have a well-defined fractal dimension. The stripes are the effect of a large-scale action on the system, and within our dynamic range in scale we cannot get rid of this boundary effect. If we had more resolution, the clumps could fragment, and we might reach a scale domain free of boundary effects. One of our prospects is to follow the inner dynamics of a clump with simulations in recursive sub-boxes going down in scale. This can produce a large dynamic range in scale, but only in a small region of space.
Computing the fractal dimension
================================
By fractal dimension we mean, more precisely, the correlation fractal dimension. We will use the notation $D_f$, defined as follows: if we consider a fractal set of points of equal weight located at $\hbox{\bf x}_j$, the fractal dimension of the ensemble obeys the relation:
$$\lim_{r \rightarrow 0} r^{D_f}= \lim_{r \rightarrow 0} \left\langle \sum_j \int_
{|\hbox{\bf x}-\hbox{\bf x}_i|<r } \!\!\!\!
\delta(\hbox{\bf x}-\hbox{\bf x}_j)
d\hbox{\bf x} \right\rangle_i \; .$$ The brackets stand for an ensemble average. In computational applications and in physical systems, the $r \rightarrow 0$ limit is not reached. However the relation should hold at scales where the boundary effects created by the finite size of the system are negligible.
In practice, for a non-fractal ensemble of points, $D_f$ usually depends on $r$ (but not always). For a fractal it is independent of $r$, or at least it oscillates around a mean value. The latter case can happen for a set with a strong scale periodicity like the Cantor set. Examples are given in Fig. \[fractanal\]. As the above remarks show, $D_f$ is not a definitive criterion of fractality.
According to the definition, the computation of the fractal dimension is based on the following method: $r^{D_f}$ is the mean mass in a sphere of radius $r$ centered on a particle. This mass is computed for each particle using the tree search, then averaged, and the value $D_f(r)$ is derived. The average can be taken over a subset of points to lower the computational cost; the result is then noisier.
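For concreteness, a brute-force sketch of this estimator follows. The actual computation collects neighbours with the tree search; the $O(N^2)$ pairwise version below is only practical for small particle numbers and is meant purely as an illustration.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Estimate D_f(r) as the local slope d log M(r) / d log r, where M(r)
    is the mean number of neighbours within distance r of a particle
    (the 'mean mass in a sphere of radius r centered on a particle')."""
    points = np.asarray(points)
    # all pairwise distances (brute force; a tree search is used in practice)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    mass = np.array([((dist < r).sum(axis=1) - 1).mean() for r in radii])
    # local logarithmic slope gives D_f at each scale r
    return np.gradient(np.log(mass), np.log(radii))

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))              # uniform 2-D set: expect D_f close to 2
radii = np.logspace(-1.7, -0.7, 10)      # from about 0.02 to 0.2 box lengths
print(correlation_dimension(pts, radii))
```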
Clump mass spectrum
===================
Although the results are not very conclusive, we give the mass distribution of the clumps in the 3-D simple free-fall case, to allow comparison with observational data and other numerical simulations. Plotting $\log_{10}(N)$ against $\log_{10}(m)$, the slope inferred from observational data is $-0.5$; thus the clump mass spectrum behaves as a power law: ${dN \over dm} \sim m^{-1.5}$.
To produce the mass spectrum, the first step is to define individual clumps. We have used an algorithm very similar to the one proposed by Williams et al. (1994). We compute the density field from an interpolation of the particle distribution with a cloud-in-cell scheme. It should be mentioned that this procedure tends to produce an artificially high number of very small clumps associated with isolated particles or pairs of particles, which should not be considered as clumps at all.
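Below is a minimal sketch of the cloud-in-cell assignment step (grid size, box size and units are arbitrary choices for the example); the subsequent clump identification in the spirit of Williams et al. (1994) is not reproduced here.

```python
import numpy as np

def cic_density(positions, ngrid, boxsize=1.0):
    """Cloud-in-cell assignment: each particle of unit mass is shared between
    the four grid cells surrounding it, with weights given by the overlap of a
    cell-sized cloud centred on the particle (periodic wrapping is assumed)."""
    rho = np.zeros((ngrid, ngrid))
    cell = boxsize / ngrid
    for x, y in positions:
        fx, fy = x / cell - 0.5, y / cell - 0.5    # cloud centre in cell units
        ix, iy = int(np.floor(fx)), int(np.floor(fy))
        wx, wy = fx - ix, fy - iy                  # fractional offsets
        for di, w1 in ((0, 1.0 - wx), (1, wx)):
            for dj, w2 in ((0, 1.0 - wy), (1, wy)):
                rho[(ix + di) % ngrid, (iy + dj) % ngrid] += w1 * w2
    return rho / cell ** 2                         # counts -> surface density

rng = np.random.default_rng(1)
field = cic_density(rng.random((5000, 2)), ngrid=64)
print(field.mean())   # should be close to 5000 (particles per unit area)
```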
In Fig. \[cms\] the clump mass spectrum is given at different times of the evolution and for the two types of initial conditions ($\alpha=-1, -2$). Quasi-periodic boundary conditions are used. As time increases, the slope of the clump mass spectrum decreases to reach $-0.38 \pm 0.03$ for $\alpha=-1$ and $-0.18 \pm 0.03$ for $\alpha=-2$. At later times, the power law is broken by the final dissipative collapse. According to this diagnostic, the case $\alpha=-1$ is closer to the observed values. However, once again, this simulation provides only a transient state which is unlikely to be a good model of a GMC.
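The quoted slopes can be measured, for instance, by a least-squares fit of $\log_{10}(N)$ against $\log_{10}(m)$ over logarithmic mass bins. The sketch below illustrates the idea on synthetic clump masses; the binning choices and the sampled mass range are arbitrary.

```python
import numpy as np

def mass_spectrum_slope(masses, nbins=12):
    """Least-squares slope of log10(N) versus log10(m) for clump masses
    collected in logarithmic bins (empty bins are ignored)."""
    masses = np.asarray(masses)
    edges = np.logspace(np.log10(masses.min()), np.log10(masses.max()), nbins + 1)
    counts, _ = np.histogram(masses, bins=edges)
    centres = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centres
    keep = counts > 0
    slope, _ = np.polyfit(np.log10(centres[keep]), np.log10(counts[keep]), 1)
    return slope

# Example: masses drawn from dN/dm ~ m^(-1.5) on [1, 100] give a slope near -0.5
# when counted in logarithmic bins (inverse-CDF sampling).
rng = np.random.default_rng(2)
u = rng.random(20000)
m = (1.0 - u * (1.0 - 100.0 ** -0.5)) ** -2.0
print(mass_spectrum_slope(m))
```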
Barnes J., Hut P.: 1986, Nature 324, 446
Barnes J., Hut P.: 1989, ApJS 70, 389
Boss A.: 1997, ApJ 483, 309
Burkert A., Bate M.R., Bodenheimer P.: 1997, MNRAS 289, 497
Combes F., Pfenniger D.: 1997, A&A 327, 453
de Vega H., Sanchez N., Combes F.: 1996, Nature 383, 53
Diamond P.J., Goss W.M., Romney J.D. et al.: 1989, ApJ 347, 302
Faison M.D., Goss W.M., Diamond P.J., Taylor G.B.: 1998, AJ 116, 2916
Falgarone E., Puget J.-L., Perault M.: 1992, A&A 257, 715
Fiedler R.L., Dennison B., Johnston K., Hewish A.: 1987, Nature 326, 6765
Fiedler R.L., Pauls T., Johnston K., Dennison B.: 1994, ApJ 430, 595
Fleck R.C.: 1981, ApJ 246, L151
Heithausen A., Bensch F., Stutzki J., Falgarone E., Panis J.F.: 1998, A&A 331, L65
Hernquist L., Bouchet F., Suto Y.: 1991, ApJS 75, 231
Hoyle F.: 1953, ApJ 118, 513
Huber D., Pfenniger D.: 1999, astro-ph/9904209, to be published in the proceedings of The Evolution of Galaxies on Cosmological Timescale, eds. J.E. Beckman & T.J. Mahoney, Astrophysics and Space Science.
Klessen R.S.: 1997, MNRAS, 11
Klessen R.S., Burkert A., Bate M.R.: 1998, ApJ 501, L205
Klessen R.S., Burkert A.: 1999, astro-ph/9904090, submitted to ApJ.
Larson R.B.: 1981, MNRAS 194, 809
Monaghan J.J., Lattanzio J.C.: 1991, ApJ 375, 177
Norman C.A., Silk J.: 1980, ApJ 238, 158
Pfenniger D., Combes F.: 1994, A&A 285, 94
Rees M.J.: 1976, MNRAS 176, 483
Scalo J.M.: 1985, in "Protostars and Planets II", eds. D.C. Black and M.S. Matthews, Univ. of Arizona Press, Tucson, p. 201
Toomre A.: 1981, in "The Structure and Evolution of Normal Galaxies", eds. Fall S.M. and Lynden-Bell D., Cambridge Univ. Press, p. 111
Wisdom J., Tremaine S.: 1988, AJ 95, 925
Williams J.P., De Geus E.J., Blitz L.: 1994, ApJ 428, 693
---
abstract: 'We introduce and study the bounded word problem and the precise word problem for groups given by means of generators and defining relations. For example, for every finitely presented group, the bounded word problem is in $\textsf{NP}$, i.e., it can be solved in nondeterministic polynomial time, and the precise word problem is in $\textsf{PSPACE}$, i.e., it can be solved in polynomial space. The main technical result of the paper states that, for certain finite presentations of groups, which include the Baumslag-Solitar one-relator groups and free products of cyclic groups, the bounded word problem and the precise word problem can be solved in polylogarithmic space. As consequences of developed techniques that can be described as calculus of brackets, we obtain polylogarithmic space bounds for the computational complexity of the diagram problem for free groups, for the width problem for elements of free groups, and for computation of the area defined by polygonal singular closed curves in the plane. We also obtain polynomial time bounds for these problems.'
address: |
Department of Mathematics\
University of Illinois\
1409 West Green Street\
Urbana\
IL 61801\
USA
author:
- 'S. V. Ivanov'
title: The bounded and precise word problems for presentations of groups
---
[^1]
Introduction
============
Suppose that a finitely generated group ${\mathcal{G}}$ is defined by a presentation by means of generators and defining relations $$\label{pr3}
{\mathcal{G}}= \langle \ {\mathcal{A}}\ \| \ R=1, \ R \in {\mathcal{R}}\ \rangle \ ,$$ where ${\mathcal{A}}= \{ a_1, \dots , a_m\}$ is a finite alphabet and ${\mathcal{R}}$ is a set of defining [ relators]{} which are nonempty cyclically reduced words over the alphabet ${\mathcal{A}}^{\pm 1} := {\mathcal{A}}\cup {\mathcal{A}}^{-1}$. Let ${\mathcal{F}}({\mathcal{A}})$ denote the free group over ${\mathcal{A}}$, let $|W|$ be the length of a word $W$ over the alphabet ${\mathcal{A}}^{\pm 1}$, and let ${\mathcal{N}}( {\mathcal{R}})$ denote the normal closure of ${\mathcal{R}}$ in ${\mathcal{F}}({\mathcal{A}})$. The notation implies that ${\mathcal{G}}$ is the quotient group ${\mathcal{F}}({\mathcal{A}})/{\mathcal{N}}( {\mathcal{R}})$. We will say that a word $W$ over ${\mathcal{A}}^{\pm 1}$ is equal to 1 in the group ${\mathcal{G}}$, given by , if $W \in {\mathcal{N}}( {\mathcal{R}})$ in which case we also write $W \overset {\mathcal{G}}= 1$. Recall that the presentation is called [*finite*]{} if both the sets ${\mathcal{A}}$ and ${\mathcal{R}}$ are finite, in which case ${\mathcal{G}}$ is called [*finitely presented*]{}. The presentation is called [*decidable*]{} if there is an algorithm that decides whether a given word over ${\mathcal{A}}^{\pm 1}$ belongs to ${\mathcal{R}}$.
The classical word problem for a finite group presentation , put forward by Dehn [@Dehn] in 1911, asks whether, for a given word $W$ over ${\mathcal{A}}^{\pm 1}$, it is true that $W \overset {\mathcal{G}}= 1$. The word problem is said to be [*solvable*]{} (or [*decidable*]{}) for a decidable presentation if there exists an algorithm which, given a word $W$ over ${\mathcal{A}}^{\pm 1}$, decides whether or not $W \overset {\mathcal{G}}= 1$.
Analogously to Anisimov [@A], one might consider the set of [*all*]{} words $W$ over ${\mathcal{A}}^{\pm 1}$, not necessarily reduced, such that $W \overset {\mathcal{G}}= 1$ as a language ${\mathcal{L}}({\mathcal{G}}) := \{ W \mid W \overset {\mathcal{G}}= 1 \}$ over ${\mathcal{A}}^{\pm 1}$ and inquire about a computational complexity class $\textsf K$ which would contain this language. If ${\mathcal{L}}({\mathcal{G}})$ is in $\textsf K$, we say that the word problem for ${\mathcal{G}}$ is in $\textsf K$. For example, it is well known that the word problem is in $\textsf{P}$, i.e., solvable in deterministic polynomial time, for surface groups, for finitely presented groups with small cancellation condition $C'(\lambda)$, where $\lambda \le \tfrac 16$, for word hyperbolic groups, for finitely presented groups given by Dehn presentations etc., see [@Gromov], [@LS]. On the other hand, according to results of Novikov [@NovikovA], [@Novikov] and Boone [@BooneA], [@Boone], based on earlier semigroup constructions of Turing [@Turing] and Post [@Post], see also Markov’s papers [@M1], [@M2], there exists a finitely presented group ${\mathcal{G}}$ for which the word problem is unsolvable, i.e., there is no algorithm that decides whether $W \overset {\mathcal{G}}= 1 $. The proof of this remarkable Novikov–Boone theorem was later significantly simplified by Borisov [@Borisov], see also [@Rt].
In this paper, we introduce and study two related problems which we call the bounded word problem and the precise word problem for a decidable group presentation .
The [*bounded word problem*]{} for a decidable presentation inquires whether, for given a word $W$ over ${\mathcal{A}}^{\pm 1}$ and an integer $n \ge 0$ written in unary, denoted $1^n$, one can represent the word $W$ as a product in ${\mathcal{F}}({\mathcal{A}})$ of at most $n$ conjugates of some words in ${\mathcal{R}}^{\pm 1} := {\mathcal{R}}\cup {\mathcal{R}}^{-1}$, i.e., whether there are $R_1, \dots, R_k \in {\mathcal{R}}^{\pm 1}$ and $S_1, \dots, S_k \in {\mathcal{F}}({\mathcal{A}})$ such that $
W = S_1 R_1 S_1^{-1} \dots S_k R_k S_k^{-1}
$ in ${\mathcal{F}}({\mathcal{A}})$ and $k \le n$. Equivalently, for given an input $(W, 1^n)$, the bounded word problem asks whether there exists a disk diagram over , also called a van Kampen diagram, whose boundary label is $W$ and whose number of faces is at most $n$.
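As a concrete illustration of this definition, the sketch below checks a purported decomposition $R_1, S_1, \dots, R_k, S_k$ by multiplying out $S_1 R_1 S_1^{-1} \dots S_k R_k S_k^{-1}$ in ${\mathcal{F}}({\mathcal{A}})$ and comparing the result with $W$; such a decomposition is one possible certificate for the problem. The encoding of words as lists of signed generator indices is an assumption made only for this example.

```python
# Words over A^{+-1} are lists of nonzero integers: i stands for a_i, -i for a_i^{-1}.

def free_reduce(word):
    """Freely reduce a word by cancelling adjacent x x^{-1} pairs (stack scan)."""
    out = []
    for x in word:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return out

def inverse(word):
    return [-x for x in reversed(word)]

def check_bwp_certificate(w, conjugates, n):
    """Return True if the product of the given conjugates S R S^{-1} equals w
    in the free group and at most n relators are used."""
    if len(conjugates) > n:
        return False
    product = []
    for s, r in conjugates:            # each pair encodes S_i, R_i
        product += s + r + inverse(s)
    return free_reduce(product) == free_reduce(w)

# Example: w = a1 a2 a1^{-1} a2 is the product of the conjugates
# (a1) a2 (a1)^{-1} and a2, so the pair (w, 2) is accepted.
w = [1, 2, -1, 2]
cert = [([1], [2]), ([], [2])]
print(check_bwp_certificate(w, cert, 2))   # True
```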
As above for the word problem, we say that the bounded word problem for a decidable presentation is solvable if there exists an algorithm that, given an input $(W, 1^n)$, decides whether $W$ is a product in ${\mathcal{F}}({\mathcal{A}})$ of at most $n$ conjugates of words in ${\mathcal{R}}^{\pm 1}$. Analogously, if the language of those pairs $(W, 1^n)$, for which the bounded word problem has a positive answer, belongs to a computational complexity class $\textsf K$, then we say that the bounded word problem for ${\mathcal{G}}$ is in $\textsf K$.
The [*precise word problem*]{} for a decidable presentation asks whether, for given a word $W$ over ${\mathcal{A}}^{\pm 1}$ and a nonnegative integer $1^n$, one can represent the word $W$ as a product in ${\mathcal{F}}({\mathcal{A}})$ of $n$ conjugates of words in ${\mathcal{R}}^{\pm 1}$ and $n$ is minimal with this property. Equivalently, for given an input $(W, 1^n)$, the precise word problem asks whether there exists a disk diagram over whose boundary label is $W$, whose number of faces is $n$, and there are no such diagrams with fewer number of faces.
The definitions for the precise word problem for being solvable and being in a complexity class $\textsf K$ are similar to the corresponding definitions for the (bounded) word problem.
In the following proposition we list basic comparative properties of the standard word problem and its bounded and precise versions.
\[prp1\] $(\mathrm{a})$ There exists a decidable group presentation for which the word problem is solvable while the bounded and precise word problems are not solvable.
$(\mathrm{b})$ If the bounded word problem is solvable for , then the precise word problem is also solvable.
$(\mathrm{c})$ For every finite group presentation , the bounded word problem is in ${\textsf{NP}}$, i.e., it can be solved in nondeterministic polynomial time, and the precise word problem is in ${\textsf{PSPACE}}$, i.e., it can be solved in polynomial space.
$(\mathrm{d})$ There exists a finite group presentation for which the bounded and precise word problems are solvable while the word problem is not solvable.
$(\mathrm{e})$ There exists a finitely presented group for which the bounded word problem is ${\textsf{NP}}$-complete and the precise word problem is ${\textsf{NP}}$-hard.
Note that a number of interesting results on solvability of the word problem, solvability of the bounded word problem and computability of the Dehn function for decidable group presentations can be found in a preprint of Cummins [@DC] based on his PhD thesis written under the author’s supervision.
It is of interest to look at the bounded and precise word problems for finitely presented groups for which the word problem could be solved very easily. Curiously, even for very simple presentations such as $ \langle \ a, b \ \| \ a=1 \ \rangle $ and $ \langle \ a, b \ \| \ ab=ba \ \rangle $, for which the word problem is obviously in $\textsf{L}$, i.e., solvable in deterministic logarithmic space, it does not seem to be possible to solve the bounded and precise word problem in logarithmic space. In this article, we will show that the bounded and precise word problems for these “simple" presentations and their generalizations can be solved in polynomial time. With much more effort, we will also prove that the bounded and precise word problems for such presentations can be solved in polylogarithmic space. Similarly to [@ATWZ], we adopt -style notation and denote $\textsf{L}^\alpha := \mbox{DSPACE}((\log s)^\alpha)$, i.e., $\textsf{L}^\alpha$ is the class of decision problems that can be solved in deterministic space $O((\log s)^\alpha)$, where $s$ is the size of input.
\[thm1\] Let the group ${\mathcal{G}}_2$ be defined by a presentation of the form $$\label{pr1}
{\mathcal{G}}_2 := \langle \, a_1, \dots, a_m \ \| \ a_{i}^{k_i} =1, \ k_i \in E_i, \ i = 1, \ldots, m \, \rangle ,$$ where for every $i$, one of the following holds: $E_i = \{ 0\}$ or, for some integer $n_i >0$, $E_i = \{ n_i \}$ or $E_i = n_i \mathbb N = \{ n_i, 2n_i, 3n_i,\dots \}$. Then both the bounded and precise word problems for are in $\textsf{L}^3$ and in $\textsf{P}$. Specifically, the problems can be solved in deterministic space $O((\log|W|)^3)$ or in deterministic time $O( |W|^4\log|W| )$.
It is worth mentioning that, to prove $\textsf{L}^3$ part of Theorem \[thm1\], we will not devise a concrete algorithm that solves the bounded and precise word problems for presentation in deterministic space $O((\log|W|)^3)$. Instead, we will develop a certain nondeterministic procedure that solves the bounded word problem for presentation nondeterministically in space $O((\log|W|)^2)$ and time $O(|W|)$ and then use Savitch’s theorem [@Sav] on conversion of nondeterministic computations in space $S$ and time $T$ into deterministic computations in space $O(S \log T)$.
The proof of $\textsf{P}$ part of Theorem \[thm1\] is much easier. Here our arguments are analogous to folklore arguments [@folk], [@Ril16] that solve the [precise word problem]{} for presentation $\langle \, a, b \, \| \, a=1, b =1 \, \rangle$ in polynomial time and that utilize the method of dynamic programming. Interestingly, these folklore arguments are strikingly similar to the arguments that have been used in computational biology to efficiently solve the problem of planar folding of long chains such as RNA and DNA biomolecules, see [@CB1], [@CB2], [@CB3], [@CB4].
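A sketch of such a folklore dynamic programme for the presentation $\langle \, a, b \, \| \, a=1, b =1 \, \rangle$ is given below: for every subword one records the minimal number of single-letter faces (equivalently, letters left unmatched) needed so that the remaining letters cancel freely in a non-crossing way. It illustrates only the polynomial-time idea, not the polylogarithmic-space procedure developed in this paper, and the letter encoding is an assumption made for the example.

```python
from functools import lru_cache

def precise_word_problem(word):
    """Minimal number of faces in a disk diagram over <a, b | a=1, b=1>
    with boundary label `word` (letters: 'a', 'A' = a^-1, 'b', 'B' = b^-1).
    cost(i, j) is the answer for the subword word[i:j]."""
    inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

    @lru_cache(maxsize=None)
    def cost(i, j):
        if i >= j:
            return 0
        # option 1: the first letter is a face of its own (one relator used)
        best = 1 + cost(i + 1, j)
        # option 2: cancel the first letter against a later inverse letter;
        # planarity forces the two enclosed subwords to be handled separately
        for k in range(i + 1, j):
            if word[k] == inv[word[i]]:
                best = min(best, cost(i + 1, k) + cost(k + 1, j))
        return best

    return cost(0, len(word))

print(precise_word_problem("abAB"))   # the commutator needs 2 faces
print(precise_word_problem("aA"))     # freely trivial: 0 faces
```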
The techniques to prove $\textsf{L}^3$ part of Theorem \[thm1\] and their generalizations, that occupy a significant part of this article and that could be described as calculus of [bracket system]{}s, have applications to other problems. For example, Grigorchuk and Kurchanov [@GK] defined the [*width*]{} of an element $W$ of the free group ${\mathcal{F}}({\mathcal{A}})$ over ${\mathcal{A}}$ as the minimal number $h = h(W)$ so that $$\label{ff11}
W = S_1 a_{j_1}^{k_1} S_1^{-1} \dots S_h a_{j_h}^{k_h} S_h^{-1}$$ in ${\mathcal{F}}({\mathcal{A}})$, where $a_{j_1}, \dots, a_{j_h} \in {\mathcal{A}}$, $S_{1}, \dots, S_{h} \in {\mathcal{F}}({\mathcal{A}})$ and $k_{1}, \dots, k_{h}$ are some integers. Alternatively, the width $h(W)$ of $W$ can be defined as an integer such that the [precise word problem]{} holds for the pair $(W, h(W))$ for the presentation [pr1]{} in which $E_i = \mathbb N$ for every $i$. Grigorchuk and Kurchanov [@GK] found an algorithm that computes the width $h(W)$ for given $W \in {\mathcal{F}}({\mathcal{A}})$ and inquired whether computation of the width $h(W)$ can be done in deterministic polynomial time. Ol’shanskii [@Ol] gave a different geometric proof to this result of Grigorchuk and Kurchanov and suggested some generalizations.
Majumdar, Robbins, and Zyskin [@SL1], [@SL2] introduced and investigated the [*spelling length*]{} $h_1(W)$ of a word $W \in {\mathcal{F}}({\mathcal{A}})$ defined by a similar to [ff11]{} formula in which $k_j=\pm 1$ for every $j$. Alternatively, the spelling length $h_1(W)$ is an integer such that the [precise word problem]{} holds for the pair $(W, h_1(W))$ for the presentation [pr1]{} in which $E_i = \{ 1 \}$ for every $i$.
As a corollary of Theorem \[thm1\], we obtain a positive solution to the problem of Grigorchuk and Kurchanov and also compute both the width and the spelling length of $W$ in cubic logarithmic space. We remark that Riley [@Ril16] gave an independent solution to the Grigorchuk–Kurchanov problem.
\[cor1\] Let $W$ be a word over ${\mathcal{A}}^{\pm 1}$ and $n \ge 0$ be an integer. Then the decision problems that inquire whether the width $ h(W)$ or the spelling length $h_1(W)$ of $W$ is equal to $n$ belong to $\textsf{L}^3$ and $\textsf{P}$. Specifically, the problems can be solved in deterministic space $O((\log|W|)^3)$ or in deterministic time $O( |W|^4\log|W| )$.
Making many technical modifications but keeping the general strategy of arguments unchanged, we will obtain similar to Theorem \[thm1\] results for Baumslag–Solitar one-relator groups.
\[thm2\] Let the group ${\mathcal{G}}_3$ be defined by a presentation of the form $$\label{pr3b}
{\mathcal{G}}_3 := \langle \ a_1, \dots, a_m \ \| \ a_2 a_1^{n_1} a_2^{-1} = a_1^{ n_2} \, \rangle ,$$ where $n_1, n_2$ are some nonzero integers. Then both the bounded and precise word problems for are in $\textsf{L}^3$ and in $\textsf{P}$. Specifically, the problems can be solved in deterministic space $O(\max(\log|W|,\log n) (\log|W|)^2)$ or in deterministic time $O(|W|^4)$.
As another application of our techniques, we will obtain a solution in polylogarithmic space for the (minimal) diagram problem for presentation which includes the case of the free group ${\mathcal{F}}({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle$ over ${\mathcal{A}}$ without relations. Recall that the [*diagram problem*]{} for a decidable presentation is a search problem that, given a word $W$ over ${\mathcal{A}}^{\pm 1}$ with $W \overset {\mathcal{G}}= 1$, asks to algorithmically construct a disk diagram ${\Delta}$ over whose boundary ${\partial}{\Delta}$ is labeled by $W$, denoted ${\varphi}({\partial}{\Delta}) \equiv W$, for the definitions see Sect. 2. Analogously, the [*minimal diagram problem*]{} for a decidable presentation is a search problem that, given a word $W$ over ${\mathcal{A}}^{\pm 1}$ with $W \overset {\mathcal{G}}= 1$, asks to algorithmically construct a disk diagram ${\Delta}$ over such that ${\varphi}({\partial}{\Delta}) \equiv W$ and ${\Delta}$ contains a minimal number of faces.
Recall that, according to Lipton and Zalcstein [@LZ], the word problem for the free group ${\mathcal{F}}({\mathcal{A}})$, given by presentation with ${\mathcal{R}}= \varnothing$, is in $\textsf{L}$. However, construction of an actual diagram ${\Delta}$ over ${\mathcal{F}}({\mathcal{A}})$ for a word $W$ over ${\mathcal{A}}^{\pm 1}$ such that ${\varphi}({\partial}{\Delta}) \equiv W$, is a different matter and it is not known whether this construction could be done in polylogarithmic space (note that it is easy to construct such a diagram, which is a tree, in polynomial time). In fact, many results of this article grew out of the attempts to solve the diagram problem for free groups with no relations in subpolynomial space.
\[thm3\] Both the diagram problem and the minimal diagram problem for group presentation can be solved in deterministic space $O((\log|W|)^3)$ or in deterministic time $O(|W|^4\log|W|)$.
Furthermore, let $W$ be a word such that $W \overset {{\mathcal{G}}_2} =1$ and let $$\tau({\Delta}) = (\tau_1({\Delta}), \ldots, \tau_{s_\tau}({\Delta}))$$ be a tuple of integers, where the absolute value $| \tau_i({\Delta}) |$ of each $\tau_i({\Delta})$ represents the number of certain vertices or faces in a disk diagram ${\Delta}$ over such that ${\varphi}({\partial}{\Delta}) \equiv W$. Then, in deterministic space $O( (\log |W|)^3 )$, one can algorithmically construct such a minimal diagram ${\Delta}$ which is also smallest relative to the tuple $\tau({\Delta})$ (the tuples are ordered lexicographically).
We point out that the case of the free group ${\mathcal{F}}({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle$ with no relations is covered by Theorem \[thm3\] and, since there are no relations, every diagram ${\Delta}$ over ${\mathcal{F}}({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle$ is minimal. Hence, for the free group ${\mathcal{F}}({\mathcal{A}})$, Theorem \[thm3\] implies the following.
\[cor2\] There is a deterministic algorithm that, for given word $W$ over the alphabet ${\mathcal{A}}^{\pm 1}$ such that $W \overset{{\mathcal{F}}({\mathcal{A}})}{=} 1$, where ${\mathcal{F}}({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle$ is the free group over ${\mathcal{A}}$, constructs a pattern of cancellations of letters in $W$ that result in the empty word and the algorithm operates in space $O( (\log |W|)^3 )$.
Furthermore, let ${\Delta}$ be a disk diagram over ${\mathcal{F}}({\mathcal{A}}) $ that corresponds to a pattern of cancellations of letters in $W$, i.e., ${\varphi}({\partial}{\Delta}) \equiv W$, and let $$\tau({\Delta}) = (\tau_1({\Delta}), \ldots, \tau_{s_\tau}({\Delta}))$$ be a tuple of integers, where the absolute value $| \tau_i({\Delta}) |$ of each $\tau_i({\Delta})$ represents the number of vertices in ${\Delta}$ of certain degree. Then, also in deterministic space $O( (\log |W|)^3 )$, one can algorithmically construct such a diagram ${\Delta}$ which is smallest relative to the tuple $\tau({\Delta})$.
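In linear (rather than polylogarithmic) space, a cancellation pattern of the kind mentioned in the corollary is produced by the usual stack scan. The naive sketch below only fixes the notion of a cancellation pattern; it makes no claim of optimality with respect to the tuple $\tau({\Delta})$, and the letter encoding (upper case for inverses) is an assumption made for the example.

```python
def cancellation_pattern(word):
    """Return a list of index pairs (i, j) such that letters i and j cancel,
    provided the word freely reduces to the empty word; otherwise return None.
    Letters are characters; an upper-case letter denotes the inverse of the
    corresponding lower-case one."""
    stack = []          # holds (index, letter) of not-yet-cancelled letters
    pairs = []
    for j, x in enumerate(word):
        if stack and stack[-1][1] == x.swapcase():
            i, _ = stack.pop()
            pairs.append((i, j))
        else:
            stack.append((j, x))
    return pairs if not stack else None

print(cancellation_pattern("abBAaA"))   # [(1, 2), (0, 3), (4, 5)]
print(cancellation_pattern("ab"))       # None: not trivial in F(a, b)
```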
Here is the analogue of Theorem \[thm3\] for presentations of one-relator Baumslag–Solitar groups.
\[thm4\] Suppose that $W$ is a word over the alphabet ${\mathcal{A}}^{\pm 1}$ such that the bounded word problem for presentation holds for the pair $(W, n)$. Then a minimal diagram ${\Delta}$ over such that ${\varphi}( {\partial}{\Delta}) \equiv W$ can be algorithmically constructed in deterministic space $O( \max(\log |W|, \log n)(\log |W|)^2)$ or in deterministic time $O( |W|^4)$.
In addition, if $|n_1 | = |n_2 |$ in , then the minimal diagram problem for presentation can be solved in deterministic space $O( (\log |W|)^3 )$ or in deterministic time $O( |W|^3\log |W|)$.
As further applications of the techniques of the proof of Theorems \[thm2\], \[thm4\], we obtain computational results on discrete homotopy of polygonal closed curves in the plane.
Let ${\mathcal{T}}$ denote a tessellation of the plane $\mathbb R^2$ into unit squares whose vertices are points with integer coordinates. Let $c$ be a finite closed path in ${\mathcal{T}}$ so that edges of $c$ are edges of ${\mathcal{T}}$. Consider the following two types of elementary operations over $c$. If $e$ is an oriented edge of $c$, $e^{-1}$ is the edge with an opposite to $e$ orientation, and $ee^{-1}$ is a subpath of $c$ so that $c = c_1 ee^{-1} c_2$, where $c_1, c_2$ are subpaths of $c$, then the operation $c \to c_1 c_2$ over $c$ is called an [*[elementary homotopy]{} of type 1*]{}. Suppose that $c =c_1 u c_2$, where $c_1, u, c_2$ are subpaths of $c$, and a boundary path ${\partial}s$ of a unit square $s$ of ${\mathcal{T}}$ is ${\partial}s = uv$, where $u, v$ are subpaths of ${\partial}s$ and either of $u, v$ could be of zero length, i.e., either of $u, v$ could be a single vertex of ${\partial}s$. Then the operation $c \to c_1 v^{-1} c_2$ over $c$ is called an [*[elementary homotopy]{} of type 2*]{}.
\[thm5\] Let $c$ be a finite closed path in a tessellation ${\mathcal{T}}$ of the plane $\mathbb R^2$ into unit squares so that edges of $c$ are edges of ${\mathcal{T}}$. Then a minimal number $m_2(c)$ such that there is a finite sequence of [elementary homotopies]{} of type 1–2, which turns $c$ into a single point and which contains $m_2(c)$ [elementary homotopies]{} of type 2, can be computed in deterministic space $O((\log |c|)^3)$ or in deterministic time $O( |c|^3\log |c|)$, where $|c|$ is the length of $c$.
Furthermore, such a sequence of [elementary homotopies]{} of type 1–2, which turns $c$ into a single point and which contains $m_2(c)$ [elementary homotopies]{} of type 2, can also be computed in deterministic space $O((\log |c|)^3)$ or in deterministic time $O( |c|^3\log |c|)$.
We remark that this number $m_2(c)$ defined in Theorem \[thm5\] can be regarded as the area “bounded" by a closed path $c$ in ${\mathcal{T}}$. Clearly, if $c$ is simple, i.e., $c$ has no self-intersections, then $m_2(c)$ is the area of the compact region bounded by $c$. If $c$ is simple then the area bounded by $c$ can be computed in logarithmic space $O(\log |c|)$ as follows from the “shoelace" formula for the area of a simple polygon. More generally, assume that $c$ is a continuous closed curve in $\mathbb R^2$, i.e., $c$ is the image of a continuous map $\mathbb S^1 \to \mathbb R^2$, where $\mathbb S^1$ is a circle. Consider a homotopy $H : \mathbb S^1 \times [0,1] \to \mathbb R^2$ that turns the curve $c = H(\mathbb S^1 \times \{ 0 \} )$ into a point $H(\mathbb S^1 \times \{ 1 \} )$ so that for every $t$, $0\le t <1$, $H(\mathbb S^1 \times \{ t \} )$ is a closed curve. Let $A(H)$ denote the area swept by the curves $H(\mathbb S^1 \times \{ t \} )$, $0\le t <1$, and let $A(c)$ denote the infimum $\inf_H A(H)$ over all such homotopies $H$. As above, we remark that this number $A(c)$ can be regarded as the area defined (or “bounded") by $c$. Note that this number $A(c)$ is different from the signed area of $c$ defined by applying the “shoelace" formula to singular polygons. Other applications that we discuss here involve polygonal (or piecewise linear) closed curves in the plane and computation and approximation of the area defined by these curves in polylogarithmic space or in polynomial time.
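For a simple closed polygon the area computation mentioned above is just the shoelace formula; a sketch is given below. For a self-intersecting curve it returns the signed area which, as noted, in general differs from $A(c)$.

```python
def shoelace_area(vertices):
    """Signed area enclosed by the closed polygon with the given vertices
    (listed in order; the first vertex need not be repeated at the end)."""
    area2 = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
    return area2 / 2

# Unit square traversed counterclockwise: area 1.
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))
```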
We say that $c$ is a [*polygonal*]{} closed curve in the plane $\mathbb R^2$ with given tessellation ${\mathcal{T}}$ into unit squares if $c$ consists of finitely many line segments $c_1, \dots, c_k$, $k >0$, whose endpoints are vertices of ${\mathcal{T}}$, $c= c_1 \dots c_k$, and $c$ is closed. If $c_i \subset {\mathcal{T}}$ then the ${\mathcal{T}}$-length $|c_i|_{{\mathcal{T}}}$ of $c_i$ is the number of edges of ${\mathcal{T}}$ in $c_i$. If $c_i \not\subset {\mathcal{T}}$ then the ${\mathcal{T}}$-length $|c_i|_{{\mathcal{T}}}$ of $c_i$ is the number of connected components in $c_i \setminus {\mathcal{T}}$. We assume that $|c_i|_{{\mathcal{T}}} >0$ for every $i$ and set $|c|_{{\mathcal{T}}} := \sum_{i=1}^k |c_i|_{{\mathcal{T}}}$.
\[thm6\] Suppose that $n \ge 1$ is a fixed integer and $c$ is a polygonal closed curve in the plane $\mathbb R^2$ with given tessellation ${\mathcal{T}}$ into unit squares. Then, in deterministic space $O( (\log |c|_{{\mathcal{T}}} )^3)$ or in deterministic time $O( |c|_{{\mathcal{T}}}^{n+3}\log |c|_{{\mathcal{T}}} )$, one can compute a rational number $r_n$ such that $|A(c) - r_n | < \tfrac {1}{ |c|_{{\mathcal{T}}}^n }$.
In particular, if the area $A(c)$ defined by $c$ is known to be an integer multiple of $\tfrac 1 L$, where $L>0$ is a given integer and $L< |c|_{{\mathcal{T}}}^{n}/2$, then $A(c)$ can be computed in deterministic space $O( (\log |c|_{{\mathcal{T}}} )^3)$ or in deterministic time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}})$.
\[cor3\] Let $K \ge 1$ be a fixed integer and let $c$ be a polygonal closed curve in the plane $\mathbb R^2$ with given tessellation ${\mathcal{T}}$ into unit squares such that $c$ has one of the following two properties $\mathrm{(a)}$–$\mathrm{(b)}$.
$\mathrm{(a)}$ If $c_i, c_j$ are two nonparallel line segments of $c$ then their intersection point, if it exists, has coordinates that are integer multiples of $\tfrac 1 K$.
$\mathrm{(b)}$ If $c_i$ is a line segment of $c$ and $a_{i,x}, a_{i,y}$ are coprime integers such that the line given by an equation $a_{i,x}x + a_{i,y}y = b_{i}$, where $b_i$ is an integer, contains $c_i$, then $\max(|a_{i,x}|,|a_{i,y}|) \le K$.
Then the area $A(c)$ defined by $c$ can be computed in deterministic space $O( (\log |c|_{{\mathcal{T}}} )^3)$ or in deterministic time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$, where $n$ depends on $K$.
In particular, if ${\mathcal{T}}_\ast$ is a tessellation of the plane $\mathbb R^2$ into equilateral triangles of unit area, or into regular hexagons of unit area, and $q$ is a finite closed path in ${\mathcal{T}}_\ast$ whose edges are edges of ${\mathcal{T}}_\ast$, then the area $A(q)$ defined by $q$ can be computed in deterministic space $O( (\log |q|)^3)$ or in deterministic time $O( |q|^5\log |q|)$.
It is tempting to try to lift the restrictions of Corollary \[cor3\] to be able to compute, in polylogarithmic space, the area $A(c)$ defined by an arbitrary polygonal closed curve $c$ in the plane equipped with a tessellation ${\mathcal{T}}$ into unit squares. However, in the general situation, this idea would not work because the rational number $A(c)$ might have an exponentially large denominator, hence, $A(c)$ could take polynomial space just to store (let alone the computations), see an example in the end of Sect. 10.
We remark in passing that there are decision problems in $\textsf{NP}$ that are not known to be $\textsf{NP}$-complete or in $\textsf{P}$, called $\textsf{NP}$-intermediate problems, that are solvable in polylogarithmic space. For example, a restricted version of the $\textsf{NP}$-complete clique problem asks whether a graph on $n$ vertices contains a clique with at most $[ \log n ]$ vertices, where $[k]$ is the integer part of $k$, and this restriction is obviously a problem solvable in nondeterministic space $O([ \log n ]^2)$. More natural examples of such $\textsf{NP}$-intermediate problems would be a decision version of the problem on finding a minimum dominating set in a tournament, [@MV], [@PY], and the problem on isomorphism of two finite groups given by their multiplication tables, [@BKLM], [@LSZ].
Preliminaries
=============
If $U, V$ are words over an alphabet ${\mathcal{A}}^{\pm 1} := {\mathcal{A}}\cup {\mathcal{A}}^{-1}$, then $U \overset 0 = V$ denotes the equality of $U, V$ as elements of the free group ${\mathcal{F}}({\mathcal{A}})$ whose set of free generators is ${\mathcal{A}}$. The equality of natural images of words $U, V$ in the group ${\mathcal{G}}$, given by presentation [pr3]{}, is denoted $U \overset {\mathcal{G}}= V$.
The letter-by-letter equality of words $U, V$ is denoted $U \equiv V$. If $U \equiv a_{i_1}^{{\varepsilon}_1} \dots a_{i_\ell}^{{\varepsilon}_\ell}$, where $a_{i_1}, \dots, a_{i_\ell} \in {\mathcal{A}}$ and ${\varepsilon}_1, \dots, {\varepsilon}_\ell \in \{ \pm 1\}$, then the length of $U$ is $|U| =\ell$. A nonempty word $U$ over ${\mathcal{A}}^{\pm 1}$ is called [*reduced*]{} if $U$ contains no subwords of the form $a a^{-1}$, $a^{-1}a$, where $a \in {\mathcal{A}}$.
Let ${\Delta}$ be a 2-complex and let ${\Delta}(i)$ denote the set of nonoriented $i$-cells of ${\Delta}$, $i=0,1,2$. We also consider the set $\vec {\Delta}(1)$ of oriented 1-cells of ${\Delta}$. If $e \in \vec {\Delta}(1)$ then $e^{-1} $ denotes $e$ with the opposite orientation, note that $e \ne e^{-1}$. For every $e \in \vec {\Delta}(1)$, let $e_-$, $e_+$ denote the initial, terminal, resp., vertices of $e$. In particular, $(e^{-1})_- = e_+$ and $(e^{-1})_+ = e_-$. The closures of $i$-cells of ${\Delta}$ are called [*vertices, edges, faces*]{} when $i=0, 1, 2$, resp.
A path $p = e_1 \dots e_\ell$ in ${\Delta}$ is a sequence of oriented edges $e_1, \dots, e_\ell$ of ${\Delta}$ such that $(e_i)_+ = (e_{i+1})_-$, $i =1,\dots, \ell-1$. The length of a path $p= e_1 \dots e_\ell$ is $|p| = \ell$. The initial vertex of $p$ is $p_- := (e_1)_-$ and the terminal vertex of $p$ is $p_+ := (e_\ell)_+$. We allow the possibility that $|p| = 0$ and $p = p_-$.
A path $p$ is called [*reduced*]{} if $|p| > 0$ and $p$ contains no subpath of the form $e e^{-1}$, where $e$ is an edge. A path $p$ is called [*closed*]{} if $p_- = p_+$.
A [*cyclic* ]{} path is a closed path with no distinguished initial vertex. A path $p= e_1 \dots e_\ell$ is called [*simple*]{} if the vertices $(e_1)_-, \dots, (e_\ell)_-, (e_\ell)_+$ are all distinct. A closed path $p = e_1 \dots e_\ell$ is [*simple*]{} if the vertices $(e_1)_-, \dots, (e_\ell)_-$ are all distinct.
A [*disk diagram*]{}, also called a [*van Kampen diagram*]{}, ${\Delta}$ over a presentation is a planar connected and simply connected finite 2-complex which is equipped with a labeling function $${\varphi}: \vec {\Delta}(1) \to {\mathcal{A}}\cup {\mathcal{A}}^{-1} = {\mathcal{A}}^{\pm 1}$$ such that, for every $e \in \vec {\Delta}(1)$, ${\varphi}(e^{-1}) = {\varphi}(e)^{-1}$ and, for every face $\Pi$ of ${\Delta}$, if ${\partial}\Pi = e_1 \dots e_\ell$ is a boundary path of $\Pi$, where $e_1, \dots, e_\ell \in \vec {\Delta}(1)$, then $${\varphi}({\partial}\Pi) := {\varphi}(e_1) \dots {\varphi}(e_\ell)$$ is a cyclic permutation of one of the words in ${\mathcal{R}}^{\pm 1} = {\mathcal{R}}\cup {\mathcal{R}}^{-1}$.
A disk diagram ${\Delta}$ over presentation is always considered with an embedding ${\Delta}\to \mathbb R^2$ into the plane $\mathbb R^2$. This embedding makes it possible to define positive (=counterclockwise) and negative (=clockwise) orientations for boundaries of faces of ${\Delta}$, for the boundary path ${\partial}{\Delta}$ of ${\Delta}$, and, more generally, for boundaries of disk subdiagrams of ${\Delta}$. It is convenient to assume, as in [@Iv94], [@Ol89], that the boundary path ${\partial}\Pi$ of every face $\Pi$ of a disk diagram $ {\Delta}$ has the positive orientation while the boundary path ${\partial}{\Delta}$ of ${\Delta}$ has the negative orientation.
If $o \in {\partial}{\Delta}$ is a vertex, let ${\partial}|_o {\Delta}$ denote a boundary path of ${\Delta}$ (negatively oriented) starting (and ending) at $o$. Using this notation, we now state van Kampen lemma on geometric interpretation of consequences of defining relations.
\[vk\] Let $W$ be a nonempty word over ${\mathcal{A}}^{\pm 1}$ and let a group ${\mathcal{G}}$ be defined by presentation [pr3]{}. Then $W \overset {\mathcal{G}}= 1$ if and only if there is a disk diagram ${\Delta}$ over presentation [pr3]{} such that ${\varphi}( {\partial}|_o {\Delta}) \equiv W$ for some vertex $o \in {\partial}{\Delta}$.
The proof is straightforward, for details the reader is referred to [@Iv94], [@Ol89], see also [@LS].
A disk diagram ${\Delta}$ over presentation [pr3]{} with ${\varphi}( {\partial}{\Delta}) \equiv W$ is called [*minimal* ]{} if ${\Delta}$ contains a minimal number of faces among all disk diagrams ${\Delta}'$ such that ${\varphi}( {\partial}{\Delta}' ) \equiv W$.
A disk diagram ${\Delta}$ over [pr3]{} is called [*reduced*]{} if ${\Delta}$ contains no two faces $\Pi_1$, $\Pi_2$ such that there is a vertex $v \in {\partial}\Pi_1$, $v \in {\partial}\Pi_2$ and the boundary paths ${\partial}|_v \Pi_1$, ${\partial}|_v \Pi_2$ of the faces, starting at $v$, satisfy ${\varphi}( {\partial}|_v \Pi_1 ) \equiv {\varphi}( {\partial}|_v \Pi_2)^{-1}$. If ${\Delta}$ is not reduced and $\Pi_1$, $\Pi_2$ is a pair of faces that violates the definition of being reduced for ${\Delta}$, then these faces $\Pi_1$, $\Pi_2$ are called a [*reducible pair*]{} (cf. similar definitions in [@Iv94], [@LS], [@Ol89]). A reducible pair of faces $\Pi_1$, $\Pi_2$ can be removed from ${\Delta}$ by a surgery that cuts through the vertex $v$ and identifies the boundary paths ${\partial}|_v \Pi_1$ and $({\partial}|_v \Pi_2)^{-1}$, see Fig. 2.1, more details can be found in [@LS], [@Ol89]. As a result, one obtains a disk diagram ${\Delta}'$ such that ${\varphi}( {\partial}{\Delta}' ) \equiv {\varphi}( {\partial}{\Delta})$ and $|{\Delta}'(2) | = |{\Delta}(2) | -2$, where $|{\Delta}(2)|$ is the number of faces in ${\Delta}$. In particular, a minimal disk diagram is always reduced.
Fig. 2.1: removal of a reducible pair of faces $\Pi_1$, $\Pi_2$ by cutting through the vertex $v$, which splits into two vertices $v'$ and $v''$.
Note that if $o \in {\partial}{\Delta}$ is a vertex and ${\partial}|_o {\Delta}= q_1 q_2$ is a factorization of the boundary path ${\partial}|_o {\Delta}$ of ${\Delta}$, $q_1$ is closed, $0 < |q_1|, |q_2| < | {\partial}{\Delta}|$, then the notation ${\partial}|_{o} {\Delta}$ is in fact ambiguous, because the path $q_2 q_1$ also has the form ${\partial}|_o {\Delta}$. To avoid this (and other) type of ambiguity, for a given pair $(W, {\Delta})$, where $W \overset {\mathcal{G}}= 1$ and ${\Delta}$ is a disk diagram for $W$ as in Lemma \[vk\], we consider a “model" simple path $P_W$ such that $| P_W | = |W|$, $P_W$ is equipped with a labeling function ${\varphi}: \vec P_W(1) \to {\mathcal{A}}^{\pm 1} $ on the set $\vec P_W(1)$ of its oriented edges so that ${\varphi}(e^{-1}) = {\varphi}(e)^{-1}$ and ${\varphi}( P_W) \equiv W$.
It will be convenient to identify vertices of $P_W$ with integers $0, 1, \dots, |W|$ so that $(P_W)_- =0$ and the numbers corresponding to vertices increase by one as one goes along $P_W$ from $(P_W)_- =0$ to $(P_W)_+ =|W|$. This makes it possible to compare vertices $v_1, v_2 \in P_W(0)$ by the standard order $\le$ defined on the integers, consider vertices $v_1\pm 1$ etc.
For a given pair $(W, {\Delta})$, where $W \overset {\mathcal{G}}= 1$, let $${\alpha}: P_W \to {\partial}{\Delta}$$ be a continuous cellular map that preserves dimension of cells, ${\varphi}$-labels of edges, and has the property that ${\alpha}((P_W)_-) = {\alpha}(0) = o$, where $o$ is a fixed vertex of ${\partial}{\Delta}$ with ${\varphi}( {\partial}|_o {\Delta}) \equiv W$.
If $v$ is a vertex of $P_W$, let $$P_W(\operatorname{\textsf{fact}}, v) = p_1p_2$$ denote the factorization of $P_W$ defined by $v$ so that $(p_1)_+ = v$. Analogously, if $v_1, v_2$ are vertices of $P_W$ with $v_1 \le v_2$, we let $$P_W(\operatorname{\textsf{fact}}, v_1, v_2) = p_1p_2p_3$$ denote the factorization of $P_W$ defined by $v_1, v_2$ so that $(p_2)_- = v_1$ and $(p_2)_+ = v_2$. Note that if $v_1 = v_2$, then $p_2 = \{ v_1\}$ and $|p_2 | = 0$. Clearly, $|p_2| = v_2 - v_1$.
Making use of the introduced notation, consider a vertex $v$ of $P_W$ and let $P_W(\operatorname{\textsf{fact}}, v) = p_1p_2$. Define ${\partial}|_v {\Delta}: = {\alpha}(p_2) {\alpha}(p_1)$. This notation ${\partial}|_v {\Delta}$, in place of ${\partial}|_{{\alpha}(v)} {\Delta}$, will help us to avoid the potential ambiguity when writing ${\partial}|_{{\alpha}(v)} {\Delta}$. In particular, if $\bar W$ is a cyclic permutation of $W$ so that the first $k$, where $0 \le k \le |W|-1$, letters of $W$ are put at the end of $W$, then ${\varphi}({\partial}|_{k} {\Delta}) \equiv \bar W$. It is clear that ${\varphi}({\partial}|_{0} {\Delta}) \equiv W$.
Consider the following property.
(A) Suppose that ${\Delta}$ is a [disk diagram]{} over [pr1]{}. If $\Pi$ is a face of ${\Delta}$ and $e \in {\partial}\Pi$ is an edge then $e^{-1} \in {\partial}{\Delta}$.
We now state a lemma in which we record some simple properties of [disk diagram]{}s over [pr1]{} related to property (A).
\[vk2\] Let ${\Delta}$ be a disk diagram over presentation [pr1]{}. Then the following hold true.
$(\mathrm{a})$ If ${\Delta}$ has property (A), then the degree of every vertex of ${\Delta}$ is at most $2|{\partial}{\Delta}|$, the boundary path ${\partial}\Pi$ of every face $\Pi$ of ${\Delta}$ is simple, and $$|{\Delta}(2)| \le | {\partial}{\Delta}| , \quad \sum_{\Pi \in {\Delta}(2)} |{\partial}\Pi | \le | {\partial}{\Delta}| .$$
$(\mathrm{b})$ There exists a disk diagram ${\Delta}'$ over [pr1]{} such that ${\varphi}( {\partial}{\Delta}') \equiv {\varphi}( {\partial}{\Delta})$, $| {\Delta}'(2) | \le |{\Delta}(2)|$ and ${\Delta}'$ has property (A).
\(a) Let $v$ be a vertex of ${\Delta}$. By property (A), $v \in {\partial}{\Delta}$ and if $e$ is an edge such that $e_- = v$, then either $e \in {\partial}{\Delta}$ or $e \in {\partial}\Pi$, where $\Pi$ is a face of ${\Delta}$, and $e^{-1} \in {\partial}{\Delta}$. This implies that $\deg v \le 2|{\partial}{\Delta}|$.
If the boundary path ${\partial}\Pi$ of a face $\Pi$ of ${\Delta}$ is not simple, then there is a factorization ${\partial}\Pi = u_1 u_2$, where $u_1, u_2$ are closed subpaths of ${\partial}\Pi$ and $0 < |u_1|, |u_2| < |{\partial}\Pi|$, see Fig. 2.2. Clearly, the edges of one of the paths $u_1, u_2$ do not belong to the boundary ${\partial}{\Delta}$ of ${\Delta}$, contrary to property (A) of ${\Delta}$.
Fig. 2.2: a face $\Pi$ whose boundary path ${\partial}\Pi = u_1 u_2$ is not simple; the closed subpath $u_2$ bounds the disk subdiagram ${\Delta}_2$.
The inequality $|{\Delta}(2)| \le | {\partial}{\Delta}|$ and its stronger version $\sum_{\Pi \in {\Delta}(2)} |{\partial}\Pi | \le | {\partial}{\Delta}|$ are immediate from property (A).
\(b) Suppose that a disk diagram ${\Delta}$ over [pr1]{} does not have property (A).
First assume that ${\Delta}$ contains a face $\Pi$ whose boundary path ${\partial}\Pi$ is not simple. Then, as in the proof of part (a), there is a factorization ${\partial}\Pi = u_1 u_2$, where $u_1, u_2$ are closed subpaths of ${\partial}\Pi$ and $0 < |u_1|, |u_2| < |{\partial}\Pi|$. Renaming $u_1$ and $u_2$ if necessary, we may assume that $u_2$ bounds a disk subdiagram ${\Delta}_2$ of ${\Delta}$ such that ${\Delta}_2$ does not contain $\Pi$, see Fig. 2.2. If $u_2$ is not simple, then we can replace $u_2$ by its closed subpath $u'_2$ such that $0 < |u'_2| < |u_2|$ and $u'_2$ bounds a disk subdiagram ${\Delta}'_2$ that contains no $\Pi$. Hence, choosing a shortest path $u_2$ as above, we may assume that $u_2$ is simple. By Lemma \[vk\] applied to ${\Delta}_2$ with ${\partial}{\Delta}_2 = u_2$, we have $$\label{ud2}
{\varphi}(u_2) \equiv {\varphi}({\partial}{\Delta}_2) \overset{{\mathcal{G}}_2} = 1.$$
Denote ${\varphi}({\partial}\Pi) \equiv a_i^{{\varepsilon}k}$, where ${\varepsilon}= \pm 1$, $k \in E_i$.
Note that the group ${{\mathcal{G}}_2}$ is the free product of cyclic groups generated by the images of generators $a_1, \dots, a_m$ and the image of $a_j$ has order $n_j >0$ if $E_j \ne \{ 0\}$ or the image of $a_j$ has infinite order if $E_j = \{ 0\}$. Hence, an equality $a_j^\ell \overset{{\mathcal{G}}_2} = 1$, where $\ell \ne 0$, implies that $E_j \ne \{ 0\}$ and $n_j$ divides $\ell$.
It follows from [ud2]{} and $0 < |u_2| < |{\partial}\Pi|$ that ${\varphi}(u_2) \equiv a_i^{{\varepsilon}k_2}$, where $k_2 \in E_i$, $k_2 < k$. Therefore, $E_i = \{ n_i, 2n_i, \dots \}$ and ${\varphi}(u_1) \equiv a_i^{{\varepsilon}(k-k_2)}$, where $k-k_2 \in E_i$. Hence, we can consider a face $\Pi'$ such that ${\varphi}({\partial}\Pi') \equiv {\varphi}(u_1) \equiv a_i^{{\varepsilon}(k-k_2)}$. Now we take the subdiagram ${\Delta}_2$ out of ${\Delta}$ and replace the face $\Pi$ with ${\varphi}({\partial}\Pi) \equiv a_i^{{\varepsilon}k}$ by the face $\Pi'$ with ${\varphi}({\partial}\Pi') \equiv a_i^{{\varepsilon}(k-k_2)}$. Doing this results in a [disk diagram]{} ${\Delta}'$ such that ${\varphi}({\partial}{\Delta}')\equiv {\varphi}({\partial}{\Delta})$ and $| {\Delta}'(2) | < | {\Delta}(2) |$ as $| {\Delta}_2(2) | >0$.
Assume that, for every face $\Pi$ in ${\Delta}$, the boundary path ${\partial}\Pi$ is simple. Also, assume that the property (A) fails for ${\Delta}$. Then there are faces $\Pi_1$, $\Pi_2$, $\Pi_1 \ne \Pi_2$, and an edge $e$ such that $e \in {\partial}\Pi_1$ and $e^{-1} \in {\partial}\Pi_2$. Consider a disk subdiagram $\Gamma$ of ${\Delta}$ that contains $\Pi_1$, $\Pi_2$ and $\Gamma$ is minimal with this property relative to $|\Gamma(2)|+|\Gamma(1)|$. Since ${\partial}\Pi_1$, ${\partial}\Pi_2$ are simple paths, it follows that ${\partial}\Gamma = r_1 r_2$, where $r_1^{-1}$ is a subpath of ${\partial}\Pi_1$ and $r_2^{-1}$ is a subpath of ${\partial}\Pi_2$. Denote ${\varphi}({\partial}\Pi_1) \equiv a_i^{{\varepsilon}_1 k_1}$, where ${\varepsilon}_1 = \pm 1$ and $k_1 \in E_i$. Clearly, ${\varphi}({\partial}\Pi_2) \equiv a_i^{-{\varepsilon}_1 k_2}$, where $k_2 \in E_i$ and $${\varphi}({\partial}\Gamma) \equiv {\varphi}(r_1) {\varphi}(r_2) \overset{0} = a_i^{{\varepsilon}k} ,$$ where ${\varepsilon}= \pm 1$ and $k \ge 0$. As above, we observe that $k \in E_i$ following from $a_i^{{\varepsilon}k} \overset{{\mathcal{G}}_2} = 1$. Hence, we may consider a disk diagram $\Gamma'$ such that ${\partial}\Gamma' = r'_1 r'_2$, where ${\varphi}(r'_1)\equiv {\varphi}(r_1)$, ${\varphi}(r'_2)\equiv {\varphi}(r_2)$, and $\Gamma'$ contains a single face $\Pi$ such that ${\varphi}({\partial}\Pi) \equiv a_i^{-{\varepsilon}k}$ if $k \ne 0$ or $\Gamma'$ contains no faces if $k = 0$. We take the subdiagram $\Gamma$ out of ${\Delta}$ and replace $\Gamma$ with $\Gamma'$, producing thereby a [disk diagram]{} ${\Delta}'$ such that ${\varphi}({\partial}{\Delta}')\equiv {\varphi}({\partial}{\Delta})$ and $| {\Delta}'(2) | < | {\Delta}(2) |$.
We now observe that if property (A) fails for ${\Delta}$ then there is a face $\Pi$ in ${\Delta}$ such that ${\partial}\Pi$ is not simple or there are distinct faces $\Pi_1$, $\Pi_2$ and an edge $e$ such that $e \in {\partial}\Pi_1$ and $e^{-1} \in {\partial}\Pi_2$. In either case, as was shown above, we can find a [disk diagram]{} ${\Delta}'$ such that ${\varphi}({\partial}{\Delta}')\equiv {\varphi}({\partial}{\Delta})$ and $| {\Delta}'(2) | < | {\Delta}(2) |$. Now an obvious induction on $| {\Delta}(2) |$ completes the proof of part (b).
In view of Lemma \[vk2\](b), we will be assuming in Sects. 4–5, 8 that if ${\Delta}$ is a [disk diagram]{} over presentation [pr1]{}, then ${\Delta}$ has property (A).
Proof of Proposition \[prp1\]
=============================
$(\mathrm{a})$ There exists a decidable group presentation for which the word problem is solvable while the bounded and precise word problems are not solvable.
$(\mathrm{b})$ If the bounded word problem is solvable for , then the precise word problem is also solvable.
$(\mathrm{c})$ For every finite group presentation , the bounded word problem is in ${\textsf{NP}}$, i.e., it can be solved in nondeterministic polynomial time, and the precise word problem is in ${\textsf{PSPACE}}$, i.e., it can be solved in polynomial space.
$(\mathrm{d})$ There exists a finite group presentation for which the bounded and precise word problems are solvable while the word problem is not solvable.
$(\mathrm{e})$ There exists a finitely presented group for which the bounded word problem is ${\textsf{NP}}$-complete and the precise word problem is ${\textsf{NP}}$-hard.
\(a) We will use the construction of [@GI08 Example 3] suggested by C. Jockush and I. Kapovich. Consider the group presentation $$\label{jkp}
\langle \ a, b \ \| \ a^i=1, \ a^i b^{k_i}=1, \ \
i \in \mathbb N \, \rangle ,$$ where $\mathbb K = \{ k_1, k_2, \dots \}$ is a recursively enumerable but not recursive subset of the set of natural numbers $\mathbb N$ with the indicated enumeration and $k_1 =1$. It is clear that the set of relations is decidable and this presentation defines the trivial group, hence the word problem is solvable for [jkp]{}. On the other hand, it is easy to see that the bounded word problem for a pair $(b^k, 2)$, where $k \in \mathbb N$, holds true if and only if $k \in \mathbb K$. Analogously, the precise word problem for a pair $(b^k, 2)$ holds true if and only if $k \in \mathbb K$. Since the set $\mathbb K$ is not recursive, it follows that both the bounded word problem and the precise word problem for presentation [jkp]{} are unsolvable.
\(b) Note that the [precise word problem]{} holds true for a pair $(W, n)$ [if and only if]{} the [bounded word problem]{} is true for $(W, n)$ and the [bounded word problem]{} is false for $(W, n-1)$. This remark means that the solvability of the [bounded word problem]{} for [pr3]{} implies the solvability of the [precise word problem]{} for [pr3]{}. On the other hand, the [bounded word problem]{} holds for a pair $(W, n)$ [if and only if]{} the [precise word problem]{} holds for $(W, k)$ with some $k \le n$. This remark means that the solvability of the [precise word problem]{} for [pr3]{} implies the solvability of the [bounded word problem]{} for [pr3]{}, as required.
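Expressed as code, the two reductions in this argument are one-liners; here `bwp` and `pwp` are placeholders for arbitrary procedures deciding the bounded and precise word problems, respectively.

```python
def pwp_from_bwp(bwp, w, n):
    """Precise word problem decided via a bounded-word-problem oracle."""
    return bwp(w, n) and (n == 0 or not bwp(w, n - 1))

def bwp_from_pwp(pwp, w, n):
    """Bounded word problem decided via a precise-word-problem oracle."""
    return any(pwp(w, k) for k in range(n + 1))
```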
\(c) Suppose that presentation [pr3]{} is finite, i.e., both ${\mathcal{A}}$ and ${\mathcal{R}}$ are finite, and we are given a pair $(W, 1^n)$. It follows from definitions and Lemma \[vk\] that the [bounded word problem]{} holds for the pair $(W, 1^n)$ [if and only if]{} there is a disk diagram ${\Delta}$ such that ${\varphi}( {\partial}{\Delta}) \equiv W$ and $ | {\Delta}(2) | \le n$. Observe that $ | \vec {\Delta}(1) | \le M n+ |W|$, where $M = \max \{ |R| \, : \, R \in {\mathcal{R}}\}$ is a constant. Therefore, the size of a disk diagram ${\Delta}$ with ${\varphi}( {\partial}{\Delta}) \equiv W$ is bounded by a linear function in $n +|W|$ and such a diagram ${\Delta}$ can be used as a certificate to verify in polynomial time that the [bounded word problem]{} holds for the pair $(W, 1^n)$. Thus the [bounded word problem]{} for finite presentation [pr3]{} is in $\textsf{NP}$.
Recall that the [precise word problem]{} holds for a pair $(W, 1^n)$ [if and only if]{} the [bounded word problem]{} is true for $(W, 1^n)$ and the [bounded word problem]{} is false for $(W, 1^{n-1})$. As we saw above, the [bounded word problem]{} for [pr3]{} is in $\textsf{NP}$, hence, the complement of the [bounded word problem]{} for [pr3]{} is in $\textsf{coNP}$. Since both $\textsf{NP}$ and $\textsf{coNP}$ are subsets of $\textsf{PSPACE}$, it follows that the [precise word problem]{} for finite presentation [pr3]{} is in $\textsf{PSPACE}$.
\(d) According to Boone [@BooneA], [@Boone] and Novikov [@NovikovA], [@Novikov], see also [@LS], there exists a finite group presentation [pr3]{} such that the word problem for this presentation is not solvable. In view of part (c) both the [bounded word problem]{} and [precise word problem]{} for this presentation are solvable.
\(e) According to Birget, Sapir, Ol’shanskii and Rips [@BORS], there exists a finite group presentation [pr3]{} whose isoperimetric function is bounded by a polynomial $p(x)$ and for which the word problem is $\textsf{NP}$-complete. It follows from definitions that if $W \overset {\mathcal{G}}= 1 $ and ${\Delta}$ is a minimal diagram over presentation [pr3]{} such that ${\varphi}( {\partial}{\Delta}) \equiv W$ then $ | {\Delta}(2) | \le p(|W|)$. Therefore, the [bounded word problem]{}, whose input is $(W, 1^n)$, where $n \ge p(|W|)$, is equivalent to the word problem, whose input is $W$. Since the latter problem is $\textsf{NP}$-complete, it follows that the [bounded word problem]{} for [pr3]{} is $\textsf{NP}$-hard. By part (c), the [bounded word problem]{} for [pr3]{} is in $\textsf{NP}$, whence the [bounded word problem]{} for [pr3]{} is $\textsf{NP}$-complete.
Note that the word problem for given word $W$ is equivalent to the disjunction of the claims that the [precise word problem]{} holds for the pairs $(W, 1^1), (W, 1^2), \dots$, $(W, 1^{p(|W|)})$. Since $p(x)$ is a polynomial, it follows that the [precise word problem]{} for this presentation [pr3]{} is $\textsf{NP}$-hard.
Calculus of Brackets for Group Presentation
============================================
As in Theorem \[thm1\], consider a group presentation of the form $$\tag{1.2}
{\mathcal{G}}_2 = \langle \ a_1, \dots, a_m \ \| \ a_{i}^{k_i} =1, \ k_i \in E_i, \ i = 1, \ldots, m \, \rangle ,$$ where, for every $i$, one of the following holds: $E_i = \{ 0\}$, or, for some integer $n_i >0$, $E_i = \{ n_i \}$, or $E_i = n_i \mathbb N = \{ n_i, 2n_i, 3n_i,\dots \}$.
Suppose that $W$ is a nonempty word over ${\mathcal{A}}^{\pm 1}$, $W \overset {{\mathcal{G}}_2} = 1$ and ${\Delta}$ is a disk diagram over presentation such that ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and ${\Delta}$ has property (A). Recall that the existence of such a diagram ${\Delta}$ follows from Lemmas \[vk\]–\[vk2\](b).
\[lem1\] Suppose that ${\Delta}$ is a disk diagram over presentation (1.2) and ${\Delta}$ contains no faces, i.e., ${\Delta}$ is a tree or, equivalently, ${\Delta}$ is a disk diagram over presentation $F({\mathcal{A}}) = \langle \ a_1, \dots, a_m \ \| \ \varnothing \ \rangle$ of the free group $F({\mathcal{A}}) $ with no relations, and assume that ${\varphi}({\partial}|_0 {\Delta}) \equiv W$, where $|W| > 2$. Then there are vertices $v_1, v_2 \in P_W$ such that $v_1< v_2$, ${\alpha}(v_1) = {\alpha}(v_2)$ and if $P_W(\operatorname{\textsf{fact}},v_1, v_2) = p_1 p_2 p_3$ is the factorization of $P_W$ defined by $v_1, v_2$, then $$\min(|p_2|, |p_1|+|p_3|) \ge \tfrac 13 | {\partial}|_0 {\Delta}| = \tfrac 13 |W| .$$
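Before turning to the proof, consider a small illustrative example (it is not used in the argument). Let ${\Delta}$ be a simple path with three consecutive edges, each labeled $a_1$, so that $W \equiv {\varphi}({\partial}|_0 {\Delta}) \equiv a_1 a_1 a_1 a_1^{-1} a_1^{-1} a_1^{-1}$ and $|W| = 6$. The vertices $v_1 = 1$ and $v_2 = 5$ of $P_W$ satisfy ${\alpha}(v_1) = {\alpha}(v_2)$ and, for the factorization $P_W(\operatorname{\textsf{fact}},v_1, v_2) = p_1 p_2 p_3$, we have $$|p_1| = |p_3| = 1 , \quad |p_2| = 4 , \quad \min(|p_2|, |p_1|+|p_3|) = 2 = \tfrac 13 |W| ,$$ so the bound of Lemma \[lem1\] is attained in this example.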
It is easy to verify that if $|W| \le 6$, then Lemma \[lem1\] is true. Hence, we may assume that $|W| > 6$.
For every pair $v'_1, v'_2$ of vertices of $P_W$ such that $v'_1 < v'_2$ and ${\alpha}(v'_1) = {\alpha}(v'_2)$, consider the factorization $P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3$ and pick such a pair that maximizes $\min(|p'_2|, |p'_1|+|p'_3|)$. Let $v_1, v_2$ be such a maximal pair and denote $P_W(\operatorname{\textsf{fact}}, v_1, v_2) = p_1 p_2 p_3$. Arguing on the contrary, assume that $$\label{p13}
\min(|p_2|, |p_1|+|p_3|) < \tfrac 13 |W| .$$ Denote $q_i = {\alpha}(p_i)$, $i =1,2,3$. Let $e_1, \dots, e_k$, $f_1, \dots, f_\ell$, $k, \ell \ge 1$, be all edges that start at the vertex ${\alpha}(v_1) = {\alpha}(v_2)$ so that $$q_2 = e_1 s_1 e_1^{-1} \dots e_k s_k e_k^{-1} , \quad
q_3q_1 = f_1 t_1 f_1^{-1} \dots f_\ell t_\ell f_\ell^{-1} ,$$ where $s_1, \dots, s_k$ and $t_1, \dots, t_\ell$ are subpaths of $q_2$ and $q_3q_1 $, resp., see Fig. 4.1.
Fig. 4.1 depicts the vertex ${\alpha}(v_1) = {\alpha}(v_2)$ together with the edges $e_1, \dots, e_k$ and the subpaths $s_1, \dots, s_k$ of $q_2$, and the edges $f_1, \dots, f_\ell$ and the subpaths $t_1, \dots, t_\ell$ of $q_3 q_1$.
First we assume that $|p_2|\ge |p_1|+|p_3|$. Then, in view of inequality [p13]{}, $$\label{gt0}
|p_2| > \tfrac 23 |W| , \quad |p_1|+|p_3| < \tfrac 13 |W| .$$ Suppose that for some $i$ we have $$\label{gt12}
| e_i s_i e_i^{-1}| \ge \tfrac 12 |W| .$$
Pick vertices $v'_1, v'_2 \in P_W$ for which if $P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3$ then ${\alpha}(p'_2) = e_i s_i e_i^{-1}$. If $k >1$, then $\tfrac 12 |W| \le | e_i s_i e_i^{-1}| < |p_2|$, whence $\min(|p'_2|, |p'_1|+|p'_3|) > \min(|p_2|, |p_1|+|p_3|)$, and we have a contradiction to the maximality of the pair $v_1$, $v_2$ because ${\alpha}(v'_1) = {\alpha}(v'_2)$. Hence, $k =1$ and $i=1$.
Now we pick vertices $v'_1, v'_2 \in P_W$ for which if $P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3$ then ${\alpha}(p'_2) = s_1$. Note $|p'_2|= |s_1| = |p_2|-2 > \tfrac 23 |W| -2 \ge \tfrac 13 |W| $ for $|W|> 6$ and $|p'_1|+|p'_3|= |p_1|+|p_3|+2 < \tfrac 13 |W| +2 \le \tfrac 23 |W|$ for $|W|> 6$. Hence, either $$\label{ss11}
\min(|p'_2|, |p'_1|+|p'_3|) \ge \tfrac 13 |W|$$ if $|p'_2| \le |p'_1|+|p'_3|$ or $$\label{ss12}
\min(|p'_2|, |p'_1|+|p'_3|) > \min(|p_2|, |p_1|+|p_3|)$$ if $|p'_2| > |p'_1|+|p'_3|$. In either case, we obtain a contradiction to the maximality of the pair $v_1, v_2$ because ${\alpha}(v'_1) = {\alpha}(v'_2)$. Thus it is shown that the inequality [gt12]{} is false, hence, for every $i =1, \dots, k$, we have $| e_i s_i e_i^{-1}| < \tfrac 12 |W|$.
Assume $| e_i s_i e_i^{-1}| \ge \tfrac 13 |W|$ for some $i$. Pick vertices $v'_1, v'_2 \in P_W$ for which if $P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3$ then ${\alpha}(p'_2) = e_i s_i e_i^{-1}$. Since $| e_i s_i e_i^{-1}| < \tfrac 12 |W|$, it follows that $ \min(|p'_2|, |p'_1|+|p'_3|) = |p'_2| \ge \tfrac 13 |W|$. A contradiction to the maximality of the pair $v_1, v_2$ proves that $| e_i s_i e_i^{-1}| < \tfrac 13 |W|$ for every $i =1, \dots, k$. According to [gt0]{}, $| p_2| > \tfrac 23 |W|$, hence, $k \ge 3$ and, for some $i \ge 2$, we obtain $$\tfrac 13 |W| \le | e_1 s_1 e_1^{-1} \dots e_i s_i e_i^{-1} |
\le \tfrac 23 |W| .$$ This means the existence of vertices $v'_1, v'_2 \in P_W$ for which, if $$P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3 ,$$ then the paths $p'_1, p'_2, p'_3$ have the properties that ${\alpha}(p'_2) = e_1 s_1 e_1^{-1} \dots e_i s_i e_i^{-1}$ and $$\min(|p'_2|, |p'_1|+|p'_3|) \ge \tfrac 13 |W| .$$ This contradiction to the maximality of the pair $v_1, v_2$ completes the first main case $|p_2| \ge |p_1|+|p_3|$.
Now assume that $|p_2| < |p_1|+|p_3|$. In this case, we repeat the above arguments with necessary changes. By the inequality [p13]{}, $|p_2| < \tfrac 13 |W|$ and $ |p_1|+|p_3| > \tfrac 23 |W|$. Suppose that for some $j$ we have $$\label{gt12b}
| f_j t_j f_j^{-1}| \ge \tfrac 12 |W|$$
Pick vertices $v'_1, v'_2 \in P_W$ so that ${\alpha}(v'_1) = {\alpha}(v'_2)$, $v'_1 < v'_2$ and, if $$P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3 ,$$ then either ${\alpha}(p'_2) = f_j t_j f_j^{-1}$ in case when $f_j t_j f_j^{-1}$ is a subpath of one of $q_1, q_3$, or ${\alpha}(p'_3) {\alpha}(p'_1)= f_j t_j f_j^{-1}$ in case when $f_j t_j f_j^{-1}$ has common edges with both $q_1$ and $q_3$. In either case, $$\min(|p'_2|, |p'_1|+|p'_3|) > \min(|p_2|, |p_1|+|p_3|)$$ whenever $\ell > 1$. By the maximality of the pair $v_1$, $v_2$, we conclude that $\ell = 1$ and $j=1$.
In the case $\ell=j=1$, we consider two subcases: $\min(|p_1|, |p_3|) > 0$ and $\min(|p_1|, |p_3|) = 0$.
Assume that $\min(|p_1|, |p_3|) > 0$. Then we can pick vertices $v'_1, v'_2 \in P_W$ for which, if $P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3$, then the subpaths $p'_1, p'_2, p'_3$ of $P_W$ have the properties that ${\alpha}(p'_2) = f_1^{-1} q_2 f_1$ and ${\alpha}(p'_3) {\alpha}(p'_1) = t_1$. Similarly to the above arguments that led to inequalities [ss11]{}–[ss12]{}, it follows from the inequality $|W|> 6$ that either $$\min(|p'_2|, |p'_1|+|p'_3|) \ge \tfrac 13 |W|$$ if $|p'_1|+|p'_3| < |p'_2|$ or $$\min(|p'_2|, |p'_1|+|p'_3|) > \min(|p_2|, |p_1|+|p_3|)$$ if $|p'_1|+|p'_3| \ge |p'_2|$. In either case, we obtain a contradiction to the maximality of the pair $v_1, v_2$.
Now assume that $\min(|p_1|, |p_3|) =0$. For definiteness, let $|p_i|=0$, $i \in \{1,3\}$. Then we can pick vertices $v'_1, v'_2 \in P_W$ for which ${\alpha}(v'_1) = {\alpha}(v'_2)$, $v'_1 < v'_2$ and, if $P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3$, then the subpaths $p'_1, p'_2, p'_3$ of $P_W$ have the properties that $p'_2 = p_{4-i}$, $p'_i = p_{2}$, $|p'_{4-i}| =| p_{i}| =0$. Hence, $|p'_2| > |p'_1|+|p'_3|$ and $$\min(|p'_2|, |p'_1|+|p'_3|) = \min(|p_2|, |p_1|+|p_3|) .$$ This means that the subcase $\min(|p_1|, |p_3|) =0$ is reduced to the case $|p'_2| \ge |p'_1|+|p'_3|$ which was considered above.
The case $\ell=j=1$ is complete and it is shown that the inequality [gt12b]{} is false, hence, for every $j =1, \dots, \ell$, we have $| f_j t_j f_j^{-1} | < \tfrac 12 |W|$.
Suppose that $| f_j t_j f_j^{-1} | \ge \tfrac 13 |W|$ for some $j$. Pick vertices $v'_1, v'_2 \in P_W$ so that if $P_W(\operatorname{\textsf{fact}},v'_1, v'_2)= p'_1 p'_2 p'_3$, then either ${\alpha}(p'_2) = f_j t_j f_j^{-1}$ in case when $f_j t_j f_j^{-1}$ is a subpath of one of $q_1, q_3$, or ${\alpha}(p'_3) {\alpha}(p'_1)= f_j t_j f_j^{-1}$ in case when $f_j t_j f_j^{-1}$ has common edges with both $q_1$ and $q_3$. Since $| f_j t_j f_j^{-1} | < \tfrac 12 |W|$, it follows that $$\min(|p'_2|, |p'_1|+|p'_3|) \ge \tfrac 13 |W| .$$ A contradiction to the maximality of the pair $v_1$, $v_2$ proves that $| f_j t_j f_j^{-1} | < \tfrac 13 |W|$ for every $j=1, \dots, \ell$. Since $| p_1|+| p_3| > \tfrac 23 |W|$, we get $\ell \ge 3$ and, for some $j \ge 2$, we obtain $$\tfrac 13 |W| \le | f_1 t_1 f_1^{-1} \dots f_j t_j f_j^{-1} |
\le \tfrac 23 |W| \ .$$ This means the existence of vertices $v'_1, v'_2 \in P_W$ for which ${\alpha}(v'_1)={\alpha}(v'_2)$ and if $P_W(\operatorname{\textsf{fact}},v'_1, v'_2) = p'_1 p'_2 p'_3$ then the subpaths $p'_1$, $p'_2$, $p'_3$ have the following properties: Either ${\alpha}(p'_2) = f_1 t_1 f_1^{-1} \dots f_j t_j f_j^{-1}$ or ${\alpha}(p'_3){\alpha}(p'_1) = f_1 t_1 f_1^{-1} \dots f_j t_j f_j^{-1}$. Then it is clear that $$\min(|p'_2|, |p'_1|+|p'_3|) \ge \tfrac 13 |W| .$$ This contradiction to the choice of the pair $v_1, v_2$ completes the second main case when $|p_2| < |p_1|+|p_3|$.
\[lem2\] Suppose ${\Delta}$ is a disk diagram with property (A) over presentation (1.2) and ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ with $|W| > 2$. Then one of the following two claims holds.
$(\mathrm{a})$ There are vertices $v_1, v_2 \in P_W$ such that ${\alpha}(v_1) = {\alpha}(v_2)$, $v_1 < v_2$, and if $P_W(\operatorname{\textsf{fact}}, v_1, v_2) = p_1 p_2 p_3$ is the factorization of $P_W$ defined by $v_1, v_2$, then $$\min(|p_2|, |p_1|+|p_3|) \ge \tfrac 16 |W| .$$
$(\mathrm{b})$ There exists a face $\Pi$ in ${\Delta}$ with $| {\partial}\Pi | \ge 2$ and there are vertices $v_1, v_2 \in P_W$ such that ${\alpha}(v_1), {\alpha}(v_2) \in {\partial}\Pi$, $v_1 < v_2$, and if $P_W(\operatorname{\textsf{fact}}, v_1, v_2) = p_1 p_2 p_3$ then $$\label{in5}
\min(|p_2|, |p_1|+|p_3|) \ge
\tfrac 16 |W| .$$ In addition, if $({\partial}\Pi )^{-1} = e_1 \dots e_{|{\partial}\Pi |}$, where $e_i \in \vec {\Delta}(1)$, and ${\partial}{\Delta}= e_1 h_1 \dots e_{|{\partial}\Pi |} h_{|{\partial}\Pi |}$, where $h_i$ is a closed subpath of ${\partial}{\Delta}$, then, for every $i$, $h_i$ is a subpath of either ${\alpha}( p_2)$ or ${\alpha}( p_3) {\alpha}( p_1)$ and $|h_i | \le \tfrac 56 |W|$, see Fig. 4.2.
Fig. 4.2 depicts the face $\Pi$ with boundary ${\partial}\Pi$, the edges $e_1, e_2, e_3, \dots, e_{|{\partial}\Pi|}$ and the subpaths $h_1, h_2, h_3, \dots, h_{|{\partial}\Pi|}$ of ${\partial}{\Delta}= {\alpha}(p_1){\alpha}(p_2){\alpha}(p_3)$.
Since ${\Delta}$ has property (A), it follows that if $e \in {\partial}\Pi $, where $\Pi$ is a face of ${\Delta}$, then $e^{-1} \in {\partial}{\Delta}$ and if $e\in {\partial}{\Delta}$, then $e^{-1} \in {\partial}\Pi' \cup {\partial}{\Delta}$, where $\Pi'$ is a face of ${\Delta}$.
Consider a planar graph $\Gamma_{\Delta}$ constructed from ${\Delta}$ as follows. For every face $\Pi$ of ${\Delta}$, we pick a vertex $o_\Pi$ in the interior of $\Pi$. The vertex set of $\Gamma_{\Delta}$ is $$V(\Gamma_{\Delta}) := {\Delta}(0) \cup \{ o_\Pi \mid \Pi \in {\Delta}(2) \} \ ,$$ where ${\Delta}(i)$ is the set of $i$-cells of ${\Delta}$, $i =0,1,2$. For every face $\Pi$ of ${\Delta}$, we delete nonoriented edges of ${\partial}\Pi$ and draw $|{\partial}\Pi |$ nonoriented edges that connect $o_\Pi$ to all vertices of ${\partial}\Pi$, see Fig. 4.3.
Fig. 4.3 depicts a face $\Pi$ of ${\Delta}$ together with the vertex $o_\Pi$ and the edges of $E_\Pi$ that connect $o_\Pi$ to the vertices of ${\partial}\Pi$.
We draw these edges so that their interiors are disjoint and are contained in the interior of $\Pi$. Let $E_\Pi$ denote the set of these $|{\partial}\Pi |$ edges. The set $E(\Gamma_{\Delta})$ of nonoriented edges of $\Gamma_{\Delta}$ is ${\Delta}(1)$, without edges of faces of ${\Delta}$, combined with $\cup_{\Pi \in {\Delta}(2) } E_\Pi$, hence, $$E(\Gamma_{\Delta}) := \cup_{\Pi \in {\Delta}(2) } E_\Pi \cup \left( {\Delta}(1) \setminus \{ e \mid e \in {\Delta}(1), e \in {\partial}\Pi, \Pi \in {\Delta}(2) \} \right) \ .$$ It follows from definitions that $| V(\Gamma_{\Delta})| = |{\Delta}(0)| + |{\Delta}(2)|$, $| E(\Gamma_{\Delta})| = | {\Delta}(1)|$ and that $\Gamma_{\Delta}$ is a tree. Assigning labels to oriented edges of $\Gamma_{\Delta}$, by using letters from ${\mathcal{A}}^{\pm 1}$, we can turn $\Gamma_{\Delta}$ into a disk diagram over presentation $\langle \ a_1, \dots, a_m \ \| \ \varnothing \ \rangle$ of the free group $F({\mathcal{A}})$.
Denote ${\varphi}( {\partial}|_{o'} \Gamma_{\Delta}) \equiv W'$ for some vertex $o' \in {\Delta}(0)$, where $| W'| = |\vec {\Delta}(1)|$, and let ${\alpha}'(P_{W'}) = {\partial}|_{o'} \Gamma_{\Delta}= {\partial}|_{0} \Gamma_{\Delta}$. Since $|W' | \ge |W| > 2$, Lemma \[lem1\] applies to $\Gamma_{\Delta}$ and yields the existence of vertices $u_1, u_2 \in P_{W'}$ such that ${\alpha}'(u_1) = {\alpha}'(u_2)$ in ${\partial}\Gamma_{\Delta}$, $u_1 < u_2$, and if $P_{W'}(\operatorname{\textsf{fact}}, u_1, u_2) = r_1 r_2 r_3$, then $$\label{in5aa}
\min(|r_2|, |r_1|+|r_3|) \ge \tfrac 13 |W'| .$$
First suppose that ${\alpha}'(u_1)$ is a vertex of ${\Delta}$. It follows from the definition of the tree $\Gamma_{\Delta}$ that there is a factorization $P_{W} = p_1 p_2 p_3$ of the path $P_W$ such that the vertex ${\alpha}((p_2)_-) = {\alpha}((p_2)_+)$ is ${\alpha}'(u_1) \in {\Delta}(0)$ and $|p_i| \le |r_i| \le 2|p_i|$, $i=1,2,3$. Indeed, to get from ${\partial}{\Delta}$ to ${\partial}\Gamma_{\Delta}$ we replace every edge $e \in {\partial}\Pi$, $\Pi \in {\Delta}(2)$, by two edges of $E_\Pi$, see Fig. 4.3. Hence, if $r$ is a subpath of ${\partial}\Gamma_{\Delta}$ and $p$ is a corresponding to $r$ subpath of ${\partial}{\Delta}$ with $r_- = p_- \in {\Delta}(0)$, $r_+ = p_+ \in {\Delta}(0)$, then $|p | \le |r| \le 2 |p | $. Then it follows from [in5aa]{} that $$\min(|p_2|, |p_1|+|p_3|) \ge \tfrac 12 \min(|r_2|, |r_1|+|r_3|) \ge \tfrac 16 |W'| \ge \tfrac 16 |W| ,$$ as required.
Now assume that ${\alpha}'(u_1) = {\alpha}'(u_2) =o_\Pi$ for some face $\Pi \in {\Delta}(2)$. Let $e_1, \dots, e_k$, $f_1, \dots, f_\ell$, $k, \ell \ge 0$, be all oriented edges of $\Gamma_{\Delta}$ that start at the vertex ${\alpha}'(u_1) = {\alpha}'(u_2) = o_\Pi$ so that $${\alpha}'(r_2) = e_1 s_1 e_1^{-1} \dots e_k s_k e_k^{-1} , \quad
{\alpha}'(r_3) {\alpha}'(r_1) = f_1 t_1 f_1^{-1} \dots f_\ell t_\ell f_\ell^{-1} ,$$ where $s_1, \dots, s_k$ and $t_1, \dots, t_\ell$ are closed subpaths of ${\alpha}'(r_2) $ and ${\alpha}'(r_3) {\alpha}'(r_1)$, resp. Since ${\alpha}'(u_1) = {\alpha}'(u_2) =o_\Pi$, it follows that $k+\ell = |{\partial}\Pi |$. Since $\min(|r_2|, |r_1|+|r_3|) \ge \tfrac 13 |W'| >0$ is an integer, we also have that $k,\ell \ge 1$ and $|{\partial}\Pi | >1$. If $|r_3| >0$, we consider vertices $u_1' := u_1+1$, $u_2' := u_2+1$. On the other hand, if $|r_3| =0$, then $|r_1| >0$ and we consider vertices $u_1' := u_1-1$, $u_2' := u_2-1$. In either case, denote $P_{W'}(u_1', u_2' ) = r'_1 r'_2 r'_3$. Then $|r'_2| = |r_2|$ and $|r'_1| + |r'_3|= |r_1| + |r_3|$, hence, by virtue of [in5aa]{}, $$\min(|r'_2|, |r'_1|+|r'_3|) \ge \tfrac 13 |W'| .$$ Note that the vertices ${\alpha}'( (r'_2)_- )$, ${\alpha}'( (r'_2)_+ )$ belong to the boundary ${\partial}\Pi$. Hence, as above, there is also a factorization $P_{W}(\operatorname{\textsf{fact}}, v_1, v_2) = p_1 p_2 p_3$ such that ${\alpha}(v_1) = {\alpha}'(u'_1)$, ${\alpha}(v_2) = {\alpha}'(u'_2)$ and $|p_i| \le |r'_i| \le 2|p_i|$, $i=1,2,3$. Therefore, $$\min(|p_2|, |p_1|+|p_3|) \ge \tfrac 12 \min(|r'_2|, |r'_1|+|r'_3|) \ge \tfrac 16 |W'| \ge \tfrac 16 |W| ,$$ as required.
It remains to observe that it follows from the definition of vertices $v_1, v_2$ that every $h_i$ is a subpath of one of the paths ${\alpha}( p_2)$, ${\alpha}( p_3) {\alpha}( p_1)$. In particular, $|h_i| \le \tfrac 56 |W|$, as desired.
In the definitions below, we assume that ${\Delta}$ is a disk diagram over presentation [pr1]{} such that ${\Delta}$ has property (A), ${\varphi}({\partial}|_0 {\Delta}) \equiv W$, $|W| >0$, and that the pair $(W, {\Delta})$ is fixed.
A 6-tuple $$b = (b(1), b(2), b(3), b(4), b(5), b(6))$$ of integers $b(1), b(2), b(3), b(4), b(5), b(6)$ is called a [*bracket* ]{} for the pair $(W, {\Delta})$ if $b(1), b(2)$ satisfy the inequalities $0 \le b(1) \le b(2) \le |W|$ and, in the notation $P_W(\operatorname{\textsf{fact}}, b(1), b(2) ) = p_1 p_2 p_3$, one of the following two properties (B1)–(B2) holds true.
1. $b(3)= b(4)=b(5)=0$, ${\alpha}(b(1))={\alpha}(b(2))$, and the disk subdiagram ${\Delta}_b$ of ${\Delta}$, defined by ${\partial}|_{b(1)} {\Delta}_b = {\alpha}(p_2)$, contains $b(6)$ faces, see Fig. 4.4(B1).
2. $b(3) >0$ and ${\alpha}(b(1)), {\alpha}(b(2)) \in {\partial}\Pi$, where $\Pi$ is a face of ${\Delta}$ such that ${\varphi}({\partial}\Pi) \equiv a_{b(3)}^{{\varepsilon}b(4)}$, $b(4) >0$, ${\varepsilon}= \pm 1$, and if ${\Delta}_b $ is the disk subdiagram of ${\Delta}$, defined by ${\partial}|_{b(1)} {\Delta}_b = {\alpha}(p_2) u$, where $u$ is the subpath of ${\partial}\Pi$ with $u_-= {\alpha}(b(2))$ and $u_+= {\alpha}(b(1))$, then ${\varphi}(u) \equiv a_{b(3)}^{-b(5)}$ and $|{\Delta}_b(2)| = b(6)$, see Fig. 4.4(B2).
Fig. 4.4(B1) depicts a bracket of type B1: the vertex ${\alpha}(b(1)) = {\alpha}(b(2))$, the arc ${\alpha}(p_2) = a(b)$ bounding the subdiagram ${\Delta}_b$, the paths ${\alpha}(p_1)$, ${\alpha}(p_3)$ and the base vertex ${\alpha}(0)$.

Fig. 4.4(B2) depicts a bracket of type B2: the face $\Pi$ with boundary ${\partial}\Pi$, its subpath $u$, the arc ${\alpha}(p_2) = a(b)$, the subdiagram ${\Delta}_b$, the paths ${\alpha}(p_1)$, ${\alpha}(p_3)$ and ${\partial}{\Delta}= {\alpha}(p_1){\alpha}(p_2){\alpha}(p_3)$.
A bracket $b$ is said to have [*type B1*]{} or [*type B2*]{} if the property (B1) or (B2), resp., holds for $b$. Note that the equality $b(4) = 0$ in property (B1) and the inequality $b(4) >0$ in property (B2) imply that the type of a bracket is unique.
The boundary subpath ${\alpha}(p_2)$ of the disk subdiagram ${\Delta}_b$ associated with a bracket $b$ is denoted $a(b)$ and is called the [*arc*]{} of $b$, see Figs. 4.4(B1)–4.4(B2).
For example, $b_v = (v, v, 0, 0, 0,0)$ is a bracket of type B1 for every vertex $v$ of $P_W$, called a [*starting*]{} bracket at $v$. Note that $a(b_v) = {\alpha}( v )= {\alpha}( b_v(1) )$.
The [*final*]{} bracket for $(W, {\Delta})$ is $b_F = (0, |W|, 0, 0, 0, | {\Delta}(2) | )$; it has type B1 and $a(b_F) = {\partial}|_{0} {\Delta}$.
Let $B$ be a set of brackets for the pair $(W, {\Delta})$, perhaps, $B$ is empty, $B = \varnothing$. We say that $B$ is a [*bracket system*]{} if, for every pair $b, c \in B$ of distinct brackets, either $b(2) \le c(1)$ or $c(2) \le b(1)$. In particular, the arcs of distinct brackets in $B$ have no edges in common. A [bracket system]{} consisting of a single final bracket is called a [*final [bracket system]{}*]{}.
Now we describe four kinds of operations over brackets and over [bracket system]{}s: additions, extensions, turns, and mergers. Let $B$ be a [bracket system]{}.
[*Additions.*]{}
Suppose $b$ is a starting bracket, $b \not\in B$, and $B \cup \{ b\}$ is a [bracket system]{}. Then we may add $b$ to $B$ thus making an [addition]{} operation over $B$.
[*Extensions.*]{}
Suppose $b \in B$, $b = (b(1), b(2), b(3), b(4), b(5), b(6))$, and $e_1 a(b) e_2$ is a subpath of ${\partial}|_{0} {\Delta}$, where $a(b)$ is the arc of $b$ and $e_1, e_2$ are edges one of which could be missing.
Assume that $b$ is of type B2, in particular, $b(3), b(4) >0$. Using the notation of the condition (B2), suppose $e_1^{-1}$ is an edge of ${\partial}\Pi$, and ${\varphi}(e_1) = a_{b(3)}^{{\varepsilon}_1}$, where ${\varepsilon}_1 = \pm 1$.
If $| b(5) | \le b(4)-2$ and ${\varepsilon}_1 b(5) \ge 0$, then we consider a bracket $b'$ such that $$\begin{aligned}
& b'(1) = b(1)-1, \ b'(2) = b(2) , \ b'(3) = b(3) , \\
& b'(4) = b(4) , \ b'(5) = b(5)+{\varepsilon}_1 , b'(6) = b(6) .\end{aligned}$$ Note that $a(b') = e_1 a(b)$. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the left). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
On the other hand, if $| b(5) | = b(4)-1$ and ${\varepsilon}_1 b(5) \ge 0$, then we consider a bracket $b'$ such that $$b'(1) = b(1)-1 , \ b'(2) = b(2) , b'(3) = b'(4) = b'(5) = 0 , b'(6) = b(6)+1 .$$ In this case, we say that $b'$ is obtained from $b$ by an extension of type 2 (on the left). Note that $a(b') = e_1 a(b)$ and $b'$ has type B1. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 2.
Analogously, assume that $b$ has type B2, $e_2^{-1}$ is an edge of ${\partial}\Pi$, and ${\varphi}(e_2) = a_{b(3)}^{{\varepsilon}_2}$, where ${\varepsilon}_2 = \pm 1$.
If $| b(5) | \le b(4)-2$ and ${\varepsilon}_2 b(5) \ge 0$, then we consider a bracket $b'$ such that $$\begin{aligned}
& b'(1) = b(1) , \ b'(2) = b(2)+1, \ b'(3) = b(3) , \\
& b'(4) = b(4) , \ b'(5) = b(5)+{\varepsilon}_2 , \ b'(6) = b(6) .\end{aligned}$$ Note that $a(b') = a(b)e_2$ and $b'$ has type B2. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the right). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
On the other hand, if $| b(5) | = b(4)-1$, then we may consider a bracket $b'$ such that $$b'(1) = b(1) , \ b'(2) = b(2)+1 , \ b'(3) =b'(4) = b'(5) = 0 , \ b'(6) = b(6)+1 .$$ Note that $a(b') = a(b) e_2$ and $b'$ has type B1. We say that $b'$ is obtained from $b$ by an extension of type 2 (on the right). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 2.
Assume that $b \in B$ is a bracket of type B1, $e_1 a(b) e_2$ is a subpath of ${\partial}|_{0} {\Delta}$, both $e_1, e_2$ exist, and $e_1 = e_2^{-1}$. Consider a bracket $b'$ of type B1 such that $$b'(1) = b(1)-1 , \ b'(2) = b(2)+1 , \ b'(3) = b'(4) =b'(5) =0 , b'(6) = b(6) .$$ Note that $a(b') = e_1 a(b)e_2$. We say that $b'$ is obtained from $b$ by an extension of type 3. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 3.
[*Turns.*]{}
Let $b \in B$ be a bracket of type B1. Then $b(3) = b(4) =b(5) =0$. Suppose $\Pi$ is a face in ${\Delta}$ such that $\Pi$ is not in the disk subdiagram ${\Delta}_b$, associated with $b$ and bounded by the arc $a(b)$ of $b$, and ${\alpha}(b(1)) \in {\partial}\Pi$. If ${\varphi}({\partial}\Pi) = a_{j}^{{\varepsilon}n_{\Pi}}$, where ${\varepsilon}= \pm 1$, $n_{\Pi} \in E_j$, then we consider a bracket $b'$ with $b'(i) = b(i)$ for $i=1,2,5,6$, and $b'(3) = j$, $b'(4) = n_\Pi$. Note that $b'$ has type B2. We say that $b'$ is obtained from $b$ by a [*turn*]{} operation. Replacement of $b \in B$ with $b'$ in $B$ is also called a [ turn ]{} operation over $B$. Note that $(B \setminus \{ b \}) \cup \{ b' \}$ is automatically a [bracket system]{} (because so is $B$).
[*Mergers.*]{}
Now suppose that $b, c \in B$ are distinct brackets such that $b(2) = c(1)$ and one of $b(3), c(3)$ is 0. Then one of $b, c$ is of type B1 and the other has type B1 or B2. Consider a bracket $b'$ such that $b'(1) = b(1)$, $b'(2) = c(2)$, and $b'(i) = b(i) + c(i)$ for $i =3,4,5,6$. Note that $a(b') = a(b)a(c)$ and $b'$ is of type B1 if both $b, c$ have type B1 or $b'$ is of type B2 if one of $b, c$ has type B2. We say that $b'$ is obtained from $b, c$ by a merger operation. Taking both $b, c$ out of $B$ and putting $b'$ in $B$ is a [*merger* ]{} operation over $B$. Note that $(B \setminus \{ b, c \}) \cup \{ b' \}$ is automatically a [bracket system]{}.
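For instance, with purely illustrative numbers (whether such brackets actually occur depends, of course, on the pair $(W, {\Delta})$): if $b = (2, 5, 0, 0, 0, 3)$ is a bracket of type B1 and $c = (5, 9, j, n_\Pi, 2, 1)$ is a bracket of type B2, where $1 \le j \le m$, $n_\Pi \in E_j$ and $n_\Pi \ge 3$, then $b(2) = c(1) = 5$ and the merger of $b, c$ yields the bracket $$b' = (2, 9, j, n_\Pi, 2, 4)$$ of type B2 with $a(b') = a(b) a(c)$ and $b'(6) = b(6) + c(6) = 4$.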
We will say that additions, extensions, turns and mergers, as defined above, are [*[elementary operation]{}s*]{} over brackets and [bracket system]{}s for the pair $(W, {\Delta})$.
Assume that one [bracket system]{} $B_\ell$ is obtained from another [bracket system]{} $B_0$ by a finite sequence $\Omega$ of [elementary operation]{}s and $B_0, B_1, \dots, B_\ell$ is the corresponding to $\Omega$ sequence of [bracket system]{}s. Such a sequence $B_0, B_1, \dots, B_\ell$ of [bracket system]{}s will be called [*operational*]{}.
We will say that the sequence $B_0, B_1, \dots$, $B_\ell$ has [*size bounded by*]{} $ (k_1, k_2)$ if $\ell \le k_1$ and, for every $i$, the number of brackets in $B_i$ is at most $k_2$. Whenever it is not ambiguous, we will also say that $\Omega$ has size bounded by $(k_1, k_2)$ if so does the corresponding to $\Omega$ sequence $B_0, B_1, \dots, B_\ell$ of [bracket system]{}s.
\[lem3\] There exists a sequence of [elementary operation]{}s that converts the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} and has size bounded by $(8|W|, |W|+1)$.
For every $k$ with $0 \le k \le |W|$, consider a starting bracket $(k,k,0,0,0,0)$ for $(W, {\Delta})$. Making $|W|+1$ additions, we get a [bracket system]{} $$B_{W} = \{ (k,k,0,0,0,0) \mid 0 \le k \le |W| \}$$ of $|W|+1$ starting brackets. Now, looking at the disk diagram ${\Delta}$, we can easily find a sequence of extensions, turns and mergers that converts $B_{W}$ into the final [bracket system]{}, denoted $B_{\ell}$. Note that the inequality $b(4) \le |W|$ for every bracket $b$ of intermediate systems follows from definitions and Lemma \[vk2\]. To estimate the total number of [elementary operation]{}s, we note that the number of additions is $|W|+1$. The number of extensions is at most $|W|$ because every extension applied to a [bracket system]{} $B$ increases the number $$\eta(B) := \sum_{b \in B} (b(2)-b(1) )$$ by 1 or 2 and $\eta( B_{W}) = 0$, whereas $\eta( B_{\ell}) = |W|$. The number of mergers is $ |W|$ because the number of brackets $|B|$ in a [bracket system]{} $B$ decreases by 1 if $B$ is obtained by a merger and $| B_{W} | = |W|+1$, $|B_{\ell} | = 1$. The number of turns does not exceed the total number of additions, extensions, and mergers, because a turn operation is applied to a bracket of type B1 and results in a bracket of type B2 to which a turn operation may not be applied, whence a turn operation succeeds an addition, or an extension, or a merger. Therefore, the number of turns is at most $3|W|+1$ and so $\ell \le 6|W|+2 \le 8|W|$.
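Explicitly, the count of [elementary operation]{}s in the preceding proof adds up as follows: $$\underbrace{(|W|+1)}_{\text{additions}} + \underbrace{|W|}_{\text{extensions}} + \underbrace{|W|}_{\text{mergers}} + \underbrace{(3|W|+1)}_{\text{turns}} = 6|W|+2 \le 8|W| ,$$ where the last inequality holds because $|W| \ge 1$.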
\[lem4\] Suppose there is a sequence $\Omega$ of [elementary operation]{}s that converts the empty [bracket system]{} $E$ for $(W, {\Delta})$ into the final [bracket system]{} $F$ and has size bounded by $(k_1, k_2)$. Then there is also a sequence of [elementary operation]{}s that transforms $E$ into $F$ and has size bounded by $(11|W|, k_2)$.
Assume that the sequence $\Omega$ has an addition operation which introduces a starting bracket $c = (k,k,0,0,0,0)$ with $0 \le k \le |W|$. Since the final [bracket system]{} contains no starting brackets, $c$ must disappear and an [elementary operation]{} is applied to $c$. If a merger is applied to $c$ and another bracket $b$ and the merger yields ${\widehat}c$, then ${\widehat}c = b$. This means that the addition of $c$ and the merger could be skipped without affecting the sequence otherwise. Note that the size of the new sequence $\Omega'$ is bounded by $(k_1-2, k_2)$. Therefore, we may assume that no merger is applied to a starting bracket in $\Omega$.
Now suppose that a turn is applied to $c$, yields $c'$ and then a merger is applied to $c'$, $b$ and the merger produces ${\widehat}c$. Note that $c'$ has type B2 and $b$ has type B1. Then it is possible to apply a turn to $b$ and get $b'$ with $b' = {\widehat}c$. Hence, we can skip the addition of $c$, the turn of $c$, the merger and use, in their place, a turn of $b$. Clearly, the size of the new sequence $\Omega'$ is bounded by $(k_1-2, k_2)$.
Thus, by induction on $k_1$, we may assume that, for every starting bracket $c$ which is added by $\Omega$, an extension is applied to $c$ or an extension is applied to $c'$ and $c'$ is obtained from $c$ by a turn.
Now we will show that, for every starting bracket $c$ which is added by $\Omega$, there are at most 2 operations of additions of $c$ in $\Omega$. Arguing on the contrary, assume that $c_1, c_2, c_3$ are the brackets equal to $c$ whose additions are done in $\Omega$. By the above remark, for every $i=1,2,3$, either an extension is applied to $c_i$, resulting in a bracket ${\widehat}c_i$, or a turn is applied to $c_i$, resulting in $c'_i$, and then an extension is applied to $c'_i$, resulting in a bracket ${\widehat}c_i$.
Let $c_1, c_2, c_3$ be listed in the order in which the brackets ${\widehat}c_1, {\widehat}c_2, {\widehat}c_3$ are created by $\Omega$. Note that if ${\widehat}c_1$ is obtained from $c_1$ by an extension (with no turn), then the extension has type 3 and $${\widehat}c_1(1) = c_1(1)-1 , \ {\widehat}c_1(2) = c_1(2)+1 .$$ This means that brackets ${\widehat}c_2$, ${\widehat}c_3$ could not be created by an extension after ${\widehat}c_1$ appears, as $d(2) \le d'(1) $ or $d(1) \ge d'(2) $ for distinct brackets $d, d' \in B$ of any [bracket system]{} $B$. This contradiction proves that ${\widehat}c_1$ is obtained from $c_1'$ by an extension. Then $c_1'$ has type B2, the extension has type 1 or 2 (and is either on the left or on the right). Similarly to the forgoing argument, we can see that if ${\widehat}c_1$ is obtained by an extension on the left/right, then ${\widehat}c_2$ must be obtained by an extension on the right/left, resp., and that ${\widehat}c_3$ cannot be obtained by any extension. This contradiction proves that it is not possible to have in $\Omega$ more than two additions of any starting bracket $c$. Thus, the number of additions in $\Omega$ is at most $2|W|+2$.
As in the proof of Lemma \[lem3\], the number of extensions is at most $|W|$ and the number of mergers is $\le 2|W|+1$. Hence, the number of turns is $\le 5|W|+3$ and the total number of [elementary operation]{}s is at most $10|W|+6 \le 11|W|$ as desired.
\[lem5\] Let there be a sequence $\Omega$ of [elementary operation]{}s that transforms the empty [bracket system]{} for the pair $(W, {\Delta})$ into the final [bracket system]{} and has size bounded by $(k_1, k_2)$ and let $c$ be a starting bracket for $(W, {\Delta})$. Then there is also a sequence of [elementary operation]{}s that converts the [bracket system]{} $\{ c \}$ into the final [bracket system]{} and has size bounded by $(k_1+1, k_2+1)$.
Let $$\label{bso}
B_0, B_1, \dots, B_\ell$$ be the corresponding to $\Omega$ sequence of [bracket system]{}s, where $B_0$ is empty and $B_\ell$ is final.
First suppose that $c(1) = 0$ or $c(1) = |W|$.
Assume that no addition of $c$ is used in $\Omega$. Setting $B'_i := B_i \cup \{ c \}$, $i=0,1, \dots, \ell$, we obtain an operational sequence of [bracket system]{}s $B'_0, \dots, B'_\ell$ that starts with $\{ c \}$ and ends in the [bracket system]{} $ B'_\ell = \{ c, d_F \}$, where $d_F$ is the final bracket for $(W, {\Delta})$. Note that $B_{i+1}'$ can be obtained from $B_{i}'$ by the same elementary operation that was used to get $B_{i+1}$ from $B_{i}$. A merger operation applied to $B'_\ell$ yields the final [bracket system]{} $ B'_{\ell+1} = \{ d_F \}$ and $B'_0, \dots, B'_\ell, B'_{\ell+1}$ is a desired sequence of [bracket system]{}s of size bounded by $(k_1+1, k_2+1)$.
Now suppose that an addition of $c$ is used in $\Omega$, $B_{i^*+1} = B_{i^*} \cup \{ c \}$ is obtained from $B_{i^*}$ by addition of $c$, and $i^*$ is minimal with this property. Define $B'_j := B_j \cup \{ c \}$ for $j=0, \dots, i^*$ and $B'_{i^*+1} := B_{i^*+2}, \dots, B'_{\ell -1} := B_{\ell}$. Then $B'_0, \dots, B'_{\ell -1}$ is a desired operational sequence of [bracket system]{}s that starts with $\{ c \}$, ends in the final [bracket system]{}, and has size bounded by $(k_1-1, k_2+1)$.
We may now assume that $0 < c(1) < |W|$. Let $B_{i^*}$ be the first [bracket system]{} of the sequence such that $B_{i^*} \cup \{ c \}$ is not a [bracket system]{}. The existence of such $B_{i^*}$ follows from the facts that $B_0 \cup \{ c \}$ is a [bracket system]{} and $B_\ell \cup \{ c \}$ is not. Since $B_0 \cup \{ c \}$ is a [bracket system]{}, it follows that $i^* \ge 1$ and $B_{i^*-1} \cup \{ c \}$ is a [bracket system]{}. Since $B_{i^*-1} \cup \{ c \}$ is a [bracket system]{} and $B_{i^*} \cup \{ c \}$ is not, there is a bracket $b \in B_{i^*}$ such that $b(1) < c(1) < b(2)$ and $b$ is obtained from a bracket $d_1 \in B_{i^*-1}$ by an extension or $b$ is obtained from brackets $d_1, d_2 \in B_{i^*-1}$ by a merger. In either case, it follows from definitions of [elementary operation]{}s that $d_i(j) = c(1)$ for some $i, j \in \{ 1,2\}$. Hence, we can use a merger applied to $d_i(j)$ and $c$ which would result in $d_i(j)$, i.e., in elimination of $c$ from $B_{i^*-1} \cup \{ c \}$ and in getting thereby $B_{i^*-1}$ from $B_{i^*-1} \cup \{ c \}$. Now we can see that the original sequence of [elementary operation]{}s, together with the merger $B_{i^*-1} \cup \{ c \} \to B_{i^*-1}$ can be used to produce the following operational sequence of [bracket system]{}s $$B_{0} \cup \{ c \}, \dots, B_{i^*-1} \cup \{ c \} , B_{i^*-1}, B_{i^*}, \dots, B_\ell .$$ Clearly, the size of this new sequence is bounded by $(k_1+1, k_2+1)$, as required.
\[lem6\] There exists a sequence of [elementary operation]{}s that converts the empty [bracket system]{} for the pair $(W, {\Delta})$ into the final [bracket system]{} and has size bounded by $$( 11|W| , \ C(\log |W| +1) ) ,$$ where $C = (\log \tfrac 65)^{-1}$.
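The choice of the constant $C$ is explained by the following elementary computation, which is used twice in the proof below: if $0 < d \le \tfrac 56 |W|$, then $$C(\log d +1) + 1 \le C(\log \tfrac 56 |W| +1) + 1 = C(\log |W| +1) - C\log \tfrac 65 + 1 = C(\log |W| +1) ,$$ because $C\log \tfrac 65 = 1$.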
First suppose that $|W| \le 2$. Then ${\Delta}$ consists of a single edge, or of a single face $\Pi$ with $|{\partial}\Pi | \le 2$, or of two faces $\Pi_1, \Pi_2$ with $|{\partial}\Pi_1 | = |{\partial}\Pi_2 | = 1$. In each of these three cases, we can easily find a sequence of [elementary operation]{}s that transforms the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} by using a single addition, at most two turns, and at most two extensions. Hence, the size of the sequence is bounded by $$(5,1) \le ( 11|W| , \ C(\log |W| +1) )$$ as $C > 1$.
Assuming $|W| > 2$, we proceed by induction on the length $|W|$. By Lemma \[lem2\] applied to $(W, {\Delta})$, we obtain either the existence of vertices $v_1, v_2 \in P_W$ such that $v_1 < v_2$, ${\alpha}(v_1) = {\alpha}(v_2)$ and, if $P_W(\operatorname{\textsf{fact}},v_1, v_2 ) = p_1 p_2 p_3$, then $$\label{in10}
\min(|p_2|, |p_1|+|p_3|) \ge \tfrac 16|W|$$ or we get the existence of a face $\Pi$ in ${\Delta}$ and vertices $v_1, v_2 \in P_W$ with the properties stated in part (b) of Lemma \[lem2\].
First assume that Lemma \[lem2\](a) holds true and ${\partial}|_0 {\Delta}= q_1 q_2 q_3$, where $q_i = {\alpha}(p_i) $, $i =1,2,3$. Consider disk subdiagrams ${\Delta}_1, {\Delta}_2$ of ${\Delta}$ given by ${\partial}|_{v_2} {\Delta}_2 = q_2$, ${\partial}|_{0} {\Delta}_1 = q_1q_3$. Denote $W_2 \equiv {\varphi}(q_2)$, $W_1 \equiv {\varphi}(q_1){\varphi}(q_3)$ and let $P_{W_i} = P_{W_i}(W_i, {\Delta}_i)$, $i=1,2$, denote the corresponding paths such that ${\alpha}_1(P_{W_1}) = q_1q_3$ and ${\alpha}_2(P_{W_2}) = q_2$.
Since $|W_1|, |W_2| < |W|$, it follows from the induction hypothesis that there is a sequence $\Omega_2$ of [elementary operation]{}s for $(W_2, {\Delta}_2)$ that transforms the empty [bracket system]{} into the final system and has size bounded by $$\label{in11}
(11 |W_2|, C (\log |W_2| +1)) .$$ Let $B_{2,0}, \dots, B_{2, \ell_2}$ denote the corresponding to $\Omega_2$ sequence of [bracket system]{}s, where $B_{2,0}$ is empty and $B_{2, \ell_2}$ is final.
We also consider a sequence $\Omega_1$ of [elementary operation]{}s for $(W_1, {\Delta}_1)$ that transforms the [bracket system]{} $\{ c_0 \}$, where $$\label{c0h}
c_0 := (|p_1|, |p_1|, 0,0,0,0)$$ is a starting bracket, into the final bracket system. It follows from the induction hypothesis and Lemma \[lem5\] that there is such a sequence $\Omega_1$ of size bounded by $$\label{in12}
(11 |W_1|+1, C (\log |W_1| +1)+1) .$$ Let $B_{1,0}, \dots, B_{1, \ell_1}$ denote the corresponding to $\Omega_1$ sequence of [bracket system]{}s, where $B_{1,0} = \{ c_0 \} $ and $B_{1, \ell_1}$ is final.
We will show below that these two sequences $\Omega_2$, $\Omega_1$ of [elementary operation]{}s, the first one for $(W_2, {\Delta}_2)$ and the second one for $(W_1, {\Delta}_1)$, could be modified and combined into a single sequence of [elementary operation]{}s for $(W, {\Delta})$ that transforms the empty [bracket system]{}into the final system and has size with the desired upper bound.
First we observe that every bracket $b = (b(1), \dots, b(6))$ for $(W_2, {\Delta}_2)$ naturally gives rise to a bracket ${\widehat}b = ({\widehat}b(1), \dots, {\widehat}b(6))$ for $(W, {\Delta})$. Specifically, define $${\widehat}b := (b(1)+|p_1|, b(2)+|p_1|, b(3), b(4), b(5), b(6)) .$$ Let ${\widehat}B_{2,j}$ denote the [bracket system]{} obtained from $B_{2,j}$ by replacing every bracket $b \in B_{2,j}$ with ${\widehat}b$. Then ${\widehat}B_{2,0}, \dots, {\widehat}B_{2, \ell_2}$ is a sequence of [bracket system]{}s for $(W, {\Delta})$ that changes the empty [bracket system]{} into $${\widehat}B_{2, \ell_2} = \{ (|p_1|, |p_1|+ |p_2|, 0, 0, 0, |{\Delta}_2(2)|) \} .$$
Define a relation $\succeq$ on the set of pairs $(b,i)$, where $b \in B_{1,i}$, that is the reflexive and transitive closure of the relation $(c, i+1) \succ (b,i)$, where $c \in B_{1,i+1} $ is obtained from $b, b' \in B_{1,i}$ by an [elementary operation]{} $\sigma$, where $b'$ could be missing. It follows from the definitions that $\succeq$ is a partial order on the set of such pairs $(b,i)$ and that if $(b_2, i_2) \succeq (b_1, i_1)$ then $i_2 \ge i_1$ and $$b_2(1) \le b_1(1) \le b_1(2) \le b_2(2) .$$ Note that the converse need not hold, e.g., if $b_1$ is a starting bracket and $i_1 = i_2$, $b_1(1) = b_1(2) = b_2(1)$, $b_1 \ne b_2$, then the above inequalities hold but $(b_2, i_2) \not\succeq (b_1, i_1)$.
Now we observe that every bracket $d = (d(1), \dots, d(6))$, $d \in B_{1,i}$, for $(W_1, {\Delta}_1)$ naturally gives rise to a bracket $${\widehat}d = ({\widehat}d(1), \dots, {\widehat}d(6))$$ for $(W, {\Delta})$ in the following fashion.
If $(d, i)$ is not comparable with $(c_0, 0)$, where $c_0$ is defined in [c0h]{}, by the relation $\succeq$ and $d(1) \le d(2) \le |p_1|$, then $${\widehat}d :=d .$$
If $(d, i) $ is not comparable with $(c_0,0)$ by the relation $\succeq$ and $|p_1| \le d(1) \le d(2) $, then $${\widehat}d := (d(1)+ |p_2|, d(2)+ |p_2|, d(3), d(4),d(5),d(6)) .$$
If $(d, i) \succeq (c_0,0)$, then $${\widehat}d := (d(1), d(2)+ |p_2|, d(3), d(4),d(5),d(6) +|{\Delta}_2(2)|) .$$
Note that the above three cases cover all possible situations because if $(d,i)$ is not comparable with $(c_0,0)$ by the relation $\succeq$, then $d(2) \le |p_1| = c_0(1)$ or $d(1) \ge |p_1| = c_0(2)$.
As above, let ${\widehat}B_{1,i} := \{ {\widehat}d \mid d \in B_{1,i} \}$. Then ${\widehat}B_{1,0}, \dots, {\widehat}B_{1, \ell_1}$ is a sequence of [bracket system]{}s for $(W, {\Delta})$ that changes the [bracket system]{}$${\widehat}B_{1,0} = {\widehat}B_{2,\ell_2} = \{ (|p_1|,|p_1p_2|, 0, 0, 0, |{\Delta}_2(2)| ) \}$$ into the final [bracket system]{}$${\widehat}B_{1, \ell_1} = (0, |p_1|+ |p_2|+ |p_3|, 0, 0, 0, |{\Delta}_1(2)|+ |{\Delta}_2(2)|) .$$ More specifically, it is straightforward to verify that ${\widehat}B_{1,0}, \dots, {\widehat}B_{1, \ell_1}$ is an operational sequence of [bracket system]{}s for $(W, {\Delta})$ which corresponds to an analogue ${\widehat}\Omega_1$ of the sequence of [elementary operation]{}s $\Omega_1$ for $(W_1, {\Delta}_1)$ so that if a bracket $b \in B_{1,i}$, $i \ge 1$, is obtained from brackets $d_1, d_2 \in B_{1,i-1}$, where one of $d_1, d_2$ could be missing, by an [elementary operation]{} $\sigma$ of $\Omega_1$, then ${\widehat}b \in {\widehat}B_{1,i}$ is obtained from ${\widehat}d_1, {\widehat}d_2 \in {\widehat}B_{1,i-1}$ by an [elementary operation]{} ${\widehat}\sigma$ of ${\widehat}\Omega_1$ and ${\widehat}\sigma$ has the same type as $\sigma$.
Thus, with the indicated changes, we can now combine the foregoing sequences of [bracket system]{}s for $(W_2, {\Delta}_2)$ and for $(W_1, {\Delta}_1)$ into a single sequence of [bracket system]{}s for $(W, {\Delta})$ that transforms the empty [bracket system]{} into the [bracket system]{} $\{ (|p_1|,|p_1p_2|, 0, 0, 0, |{\Delta}_2(2)| ) \}$ and then continues to turn the latter into the final [bracket system]{}. It follows from definitions and bounds [in11]{}–[in12]{} that the size of thus constructed sequence is bounded by $$\begin{gathered}
( 11|W_1|+11|W_2|+1,\ \max( C(\log |W_1|+ 1)+1, \ C( \log |W_2|+1 ) ) ) \
\end{gathered}$$ Therefore, in view of Lemma \[lem4\], it remains to show that $$\begin{gathered}
\max( C(\log |W_1| +1)+ 1, C(\log |W_2|+1 ) ) \le
C (\log |W|+1) .\end{gathered}$$ In view of inequality [in10]{}, $$\max( C(\log |W_1| +1)+ 1, C(\log |W_2|+1 ) ) \le C (\log (\tfrac 56|W|)+1) +1 ,$$ and $C (\log (\tfrac 56|W|)+1) +1 \le C (\log |W|+1) $ if $C \ge (\log \tfrac 65)^{-1}$. Thus the first main case, when Lemma \[lem2\](a) holds for the pair $(W, {\Delta})$, is complete.
Now assume that Lemma \[lem2\](b) holds true for the pair $(W, {\Delta})$ and $\Pi$ is the face in ${\Delta}$ with $| {\partial}\Pi | \ge 2$, $v_1, v_2 \in P_W$ are the vertices of $P_W$ such that $v_1 < v_2$, ${\alpha}(v_1), {\alpha}(v_2) \in {\partial}\Pi$, and if $P_W(\operatorname{\textsf{fact}},v_1, v_2) = p_1 p_2 p_3$ then $$\min(|p_2|, |p_1|+|p_3|) \ge
\tfrac 16 |W| .$$ Furthermore, if $({\partial}\Pi )^{-1} = e_1 \dots e_{|{\partial}\Pi |}$, where $e_i \in \vec {\Delta}(1)$, and the cyclic boundary ${\partial}{\Delta}$ of ${\Delta}$ is ${\partial}{\Delta}= e_1 h_1 \dots$ $ e_{|{\partial}\Pi |} h_{|{\partial}\Pi |}$, where $h_i$ is a closed subpath of ${\partial}{\Delta}$, then, for every $i$, $h_i$ is a subpath of either ${\alpha}( p_2)$ or ${\alpha}( p_3) {\alpha}( p_1)$ and $|h_i | \le \tfrac 56 |W|$.
For $i = 2, \dots,|{\partial}\Pi |$, denote $W_i := {\varphi}(h_i)$ and let ${\Delta}_i $ be the disk subdiagram of ${\Delta}$ with ${\partial}|_{(h_i)_-} {\Delta}_i = h_i$. We also consider a path $P_{W_i}$ with ${\varphi}(P_{W_i}) \equiv W_i$ and let ${\alpha}_i : P_{W_i} \to {\Delta}_i$ denote the map whose definition is analogous to that of ${\alpha}: P_{W} \to {\Delta}$ (note that $|W_i| = 0$ is now possible).
By the induction hypothesis on $|W|$ applied to $(W_i, {\Delta}_i)$ (the case $|W_i| = 0$ is vacuous), there is a sequence $\Omega_i$ of [elementary operation]{}s for $(W_i, {\Delta}_i)$ that transforms the empty [bracket system]{} into the final one and has size bounded by $( 11|W_i|, C(\log |W_i|+ 1) )$.
Making a cyclic permutation of indices of $e_i, h_i$ in ${\partial}{\Delta}= e_1 h_1 \dots e_{|{\partial}\Pi |} h_{|{\partial}\Pi |}$ if necessary, we may assume that $$P_W = s_2 f_2 q_2 f_3 q_3 \dots f_{|{\partial}\Pi |} q_{|{\partial}\Pi |} f_1 s_1 ,$$ where ${\alpha}(f_i) = e_i$, $i = 1, \dots,|{\partial}\Pi |$, ${\alpha}(q_j) = h_j$, $j = 2, \dots,|{\partial}\Pi |$, and ${\alpha}(s_1){\alpha}(s_2) = h_1$, see Fig. 4.2. Note that $|s_1| =0$ or $|s_2| =0$ is possible.
Let $B_{i,0}, \dots, B_{i, \ell_i}$ denote the corresponding to $\Omega_i$ sequence of [bracket system]{}s, where $B_{i,0}$ is empty and $B_{i, \ell_i}$ is final. As in the above arguments, we can easily convert every bracket $b \in \cup_j B_{i,j}$, where $i >1$, into a bracket ${\widehat}b$ for $(W, {\Delta})$ by using the rule $${\widehat}b := (b(1) + | s_2 f_2 \dots f_i |, b(2)+ | s_2 f_2 \dots f_i | , b(3), b(4), b(5), b(6)) .$$ Then the sequence ${\widehat}B_{i,0}, \dots, {\widehat}B_{i, \ell_i}$, where ${\widehat}B_{i,j} := \{ {\widehat}b \mid b \in B_{i,j} \}$, becomes an operational sequence of [bracket system]{}s for $(W, {\Delta})$ that transforms the empty [bracket system]{} into ${\widehat}B_{i,\ell_i} = \{ {\widehat}d_i \}$, where $${\widehat}d_i = (| s_2 f_2 \dots f_i |, | s_2 f_2 \dots f_i q_i|, 0, 0,0, |{\Delta}_i(2)| ) \} , \ i >1 .$$ We also remark that the sequence of [bracket system]{}s ${\widehat}B_{i,0}, \dots, {\widehat}B_{i, \ell_i}$ corresponds to an analogue ${\widehat}\Omega_i$ of the sequence of [elementary operation]{}s $\Omega_i$ for $(W_i, {\Delta}_i)$ so that if a bracket $b \in B_{i,j}$, $j \ge 1$, is obtained from brackets $g_1, g_2 \in B_{i,j-1}$, where one of $g_1, g_2$ could be missing, by an [elementary operation]{} $\sigma$ of $\Omega_i$, then ${\widehat}b \in {\widehat}B_{i, j}$ is obtained from ${\widehat}g_1, {\widehat}g_2 \in {\widehat}B_{i,j-1}$ by an [elementary operation]{} ${\widehat}\sigma$ of ${\widehat}\Omega_i$ and ${\widehat}\sigma$ has the same type as $\sigma$.
Denote ${\varphi}( ({\partial}\Pi)^{-1}) = a_{j_\Pi}^{{\varepsilon}n_\Pi}$. Using the operational sequence ${\widehat}B_{2,0}, \dots, {\widehat}B_{2, \ell_2}$, we convert the empty [bracket system]{} into $\{ {\widehat}d_2 \}$. Applying a turn operation to ${\widehat}d_2$, we change ${\widehat}d_2(3)$ from 0 to $j_\Pi$ and $d_2(4)$ from 0 to $n_\Pi$. Note that $n_\Pi \le |W|$ by Lemma \[vk2\], hence this is correct to do so. Then we apply two extensions of type 1 on the left and on the right to increase ${\widehat}d_2(2)$ by 1 and to decrease ${\widehat}d_2(1)$ by 1, see Fig. 4.2. Let $${\widetilde}d_2 = (|s_2|, |s_2 f_2 q_2 f_3|, j_\Pi, n_\Pi, 2{\varepsilon}, |{\Delta}_2(2) | )$$ denote the bracket of type B2 obtained this way. Now, starting with the [bracket system]{} $\{ {\widetilde}d_2 \}$, we apply those [elementary operation]{}s that are used to create the sequence ${\widehat}B_{3,0}, \dots, {\widehat}B_{3, \ell_3}$, and obtain the [bracket system]{} $\{ {\widetilde}d_2, {\widehat}d_3 \}$. Applying a merger to ${\widetilde}d_2, {\widehat}d_3$, we get $${\widehat}d_3' = (| s_2 |, | s_2 f_2 q_2f_3 q_3|, j_\Pi, n_\Pi, 2{\varepsilon}, |{\Delta}_2(2) | + |{\Delta}_3(2)| ) .$$ Let ${\widetilde}d_3$ be obtained from $ {\widehat}d_3'$ by extension of type 1 on the right, so $${\widetilde}d_3 = (| s_2 |, | s_2 f_2 q_2f_3 q_3 f_4|, j_\Pi, n_\Pi, 3{\varepsilon}, |{\Delta}_2(2) | + |{\Delta}_3(2)| ) .$$ Iterating in this manner, we will arrive at a [bracket system]{} consisting of the single bracket $${\widehat}d_{| {\partial}\Pi |}' = (| s_2 |, | s_2 f_2 \dots q_{| {\partial}\Pi |} |, j_\Pi, n_\Pi, {\varepsilon}(| {\partial}\Pi |-1), \sum_{i\ge 2}|{\Delta}_i(2) | ) .$$ Applying to $ {\widehat}d_{| {\partial}\Pi |}' $ an extension of type 2 on the right along the edge $f_1 = {\alpha}(e_1)$, see Fig. 4.2, we obtain the bracket $${\widetilde}d_{| {\partial}\Pi |} = ( | s_2 |, |W| - |s_1|, 0,0,0, 1+ \sum_{i\ge 2}|{\Delta}_i(2) | )$$ of type B1.
For $i =1$, we let $W_1 := {\varphi}( s_2) {\varphi}(s_1) $ and let ${\Delta}_1$ be the disk subdiagram of ${\Delta}$ with ${\partial}|_{(s_2)_-} {\Delta}_1 = s_2 s_1$. Referring to the induction hypothesis on $|W|$ for $(W_1, {\Delta}_1)$ and to Lemma \[lem5\], we conclude that there is a sequence $\Omega'_1$ of [elementary operation]{}s that changes the starting bracket $$\begin{gathered}
\label{c2h}
c_2 := (|s_2|, |s_2|, 0,0,0,0)\end{gathered}$$ into the final [bracket system]{} and has size bounded by $$( 11|W_1| +1, \ C(\log |W_1|+ 1) +1).$$
Let $B'_{1,0}, \dots, B'_{1, \ell'_1}$ denote the corresponding to $\Omega_1'$ sequence of [bracket system]{}s, where $B'_{1,0} = \{ c_2 \}$ and $B'_{1, \ell'_1}$ is final. Similarly to the above construction of ${\widehat}B_{1,i}$ from $B_{1,i}$, we will make changes over brackets $b \in \cup_j B'_{1,j}$ so that every $b$ becomes a bracket ${\widehat}b$ for $(W, {\Delta})$ and if ${\widehat}B'_{1,i} := \{ {\widehat}b \mid b \in B'_{1,i} \} $, then the sequence ${\widehat}B'_{1,0}, \dots, {\widehat}B'_{1, \ell'_1}$ transforms the [bracket system]{}${\widehat}B'_{1, 0} = \{ {\widehat}c_2 \} = \{ {\widetilde}d_{| {\partial}\Pi |} \} $ into the final one for $(W, {\Delta})$.
Specifically, define a relation $\succeq'$ on the set of all pairs $(b,i)$, where $b \in B'_{1,i}$, that is the reflexive and transitive closure of the relation $(c, i+1) \succ' (b,i)$, where a bracket $c \in B'_{1,i+1}$ is obtained from brackets $b, b' \in B'_{1,i}$ by an [elementary operation]{} $\sigma$, where $b'$ could be missing. As before, it follows from the definitions that $\succeq'$ is a partial order on the set of all such pairs $(b,i)$ and that if $(b_2, i_2) \succeq' (b_1, i_1)$ then $i_2 \ge i_1$ and $$b_2(1) \le b_1(1) \le b_1(2) \le b_2(2) .$$
Furthermore, every bracket $b= (b(1), \dots, b(6))$, where $b \in B'_{1,i}$, for $(W_1, {\Delta}_1)$, naturally gives rise to a bracket $${\widehat}b= ({\widehat}b(1), \dots, {\widehat}b(6))$$ for $(W, {\Delta})$ in the following fashion.
If the pair $(b, i)$ is not comparable with $(c_2, 0)$, where $c_2$ is defined in [c2h]{}, by the relation $\succeq'$ and $ b(2) \le |s_2|$, then $${\widehat}b :=b .$$
If $(b, i)$ is not comparable with $(c_2,0)$ by the relation $\succeq'$ and $|s_2| \le b(1)$, then $${\widehat}b := (b(1)+ |W|- |h_1|, b(2)+ |W|- |h_1|, b(3), b(4),b(5),b(6)) .$$
If $(b, i) \succeq' (c_2,0)$, then $${\widehat}b := (b(1), b(2)+ |W|- |h_1|, b(3), b(4), b(5), b(6) +1+\sum_{i\ge 2} |{\Delta}_i(2)| ) .$$
As before, we note that these three cases cover all possible situations because if $(b,i)$ is not comparable with $(c_2,0)$ by the relation $\succeq'$, then either $$b(2) \le |s_2| = c_2(1)$$ or $b(1) \ge |s_2| = c_2(2)$.
Similarly to the foregoing arguments, we check that ${\widehat}B'_{1,0}, \dots, {\widehat}B'_{1, \ell_1}$ is an operational sequence of [bracket system]{}s for $(W, {\Delta})$ which corresponds to an analogue ${\widehat}\Omega_1'$ of the sequence of [elementary operation]{}s $\Omega_1'$ for $(W_1, {\Delta}_1)$ so that if a bracket $b \in B'_{1,i}$, $i \ge 1$, is obtained from brackets $g_1, g_2 \in B'_{1,i-1}$, where one of $g_1, g_2$ could be missing, by an [elementary operation]{} $\sigma$ of $\Omega'_1$, then ${\widehat}b \in {\widehat}B'_{1,i}$ is obtained from ${\widehat}g_1, {\widehat}g_2 \in {\widehat}B'_{1,i-1}$ by an [elementary operation]{} ${\widehat}\sigma$ of ${\widehat}\Omega'_1$ and ${\widehat}\sigma$ has the same type as $\sigma$.
Summarizing, we conclude that there exists a sequence of [elementary operation]{}s $\Omega$, containing subsequences ${\widehat}\Omega_2, \dots$, ${\widehat}\Omega_{|{\partial}\Pi | }$, that transforms the empty bracket system for $(W, {\Delta})$ into the [bracket system]{} $B = \{ {\widetilde}d_{|{\partial}\Pi |} \}$ and then, via subsequence ${\widehat}\Omega'_1$, continues to transform $$B = \{ {\widetilde}d_{|{\partial}\Pi |} \} = \{ {\widehat}c_2 \}$$ into the final [bracket system]{} for $(W, {\Delta})$. It follows from the induction hypothesis for the pairs $(W_i, {\Delta}_i)$ and definitions that the size of thus constructed sequence $\Omega$ is bounded by $$( (11 \! \cdot \! \! \! \! \! \sum_{1 \le i \le |{\partial}\Pi |} |W_i| ) + 2 |{\partial}\Pi | +2, \max_{1 \le i \le |{\partial}\Pi |} \{ C(\log |W_i| +1 ) +1 \} ) .$$ In view of Lemma \[lem4\], it suffices to show that $$\max_{1 \le i \le |{\partial}\Pi |} \{ C(\log |W_i| +1 ) +1 \} \le C(\log |W| +1 ) .$$ Since $|W_i| \le \tfrac 56|W|$, the latter inequality holds true, as in the above case, for $C = (\log \tfrac 65)^{-1}$.
Let $W$ be an arbitrary nonempty word over the alphabet ${\mathcal{A}}^{\pm 1}$, not necessarily representing the trivial element of the group given by presentation [pr1]{} (and $W$ is not necessarily reduced). As before, let $P_W$ be a labeled simple path with ${\varphi}( P_W ) \equiv W$ and let vertices of $P_W$ be identified along $P_W$ with integers $0, \dots, |W|$ so that $(P_W)_- = 0, \dots$, $(P_W)_+ = |W|$.
A 6-tuple $$b = (b(1), b(2), b(3), b(4), b(5), b(6))$$ of integers $b(1), b(2), b(3), b(4), b(5), b(6)$ is called a [*pseudobracket*]{} for the word $W$ if $b(1), b(2)$ satisfy the inequalities $0 \le b(1) \le b(2) \le |W|$ and one of the following two properties (PB1), (PB2) holds for $b$.
1. $b(3)= b(4)=b(5)=0$ and $0 \le b(6) \le |W|$.
2. $b(3) \in \{ 1, \dots, m\}$, $b(4) \in E_{b(3)}$, where the set $E_{b(3)}$ is defined in [pr1]{}, $0 < b(4) \le |W|$, $-b(4) < b(5) < b(4)$, and $0 \le b(6) \le |W|$.
We say that a [pseudobracket]{} $b$ has [*type PB1*]{} if the property (PB1) holds for $b$. We also say that a [pseudobracket]{} $b$ has [*type PB2*]{} if the property (PB2) holds for $b$. Clearly, these two types are mutually exclusive.
Let $p$ denote the subpath of $P_W$ such that $p_- = b(1)$ and $p_+ = b(2)$, perhaps, $p_- = p_+$ and $|p| = 0$. The subpath $p$ of $P_W$ is denoted $a(b)$ and is called the [*arc*]{} of the pseudobracket $b$.
For example, $b_v = (v, v, 0, 0, 0,0)$ is a pseudobracket of type PB1 for every vertex $v \in P_W$ and such $b_v$ is called a [*starting*]{} pseudobracket. Note that $a(b_v) = \{ v \}$. A [*final*]{} pseudobracket for $W$ is $c = (0, |W|, 0, 0,0, k)$, where $k \ge 0$ is an integer. Note that $a(c) = P_W$.
Observe that if $b$ is a bracket for the pair $(W, {\Delta})$ of type B1 or B2, then $b$ is also a [pseudobracket]{} of type PB1 or PB2, resp., for the word $W$.
Let $B$ be a finite set of [pseudobracket]{}s for $W$, perhaps, $B$ is empty. We say that $B$ is a [*pseudobracket system*]{} for $W$ if, for every pair $b, c \in B$ of distinct [pseudobracket]{}s, either $b(2) \le c(1)$ or $c(2) \le b(1)$. It follows from the definitions that every [bracket system]{} for $(W, {\Delta})$ is also a [pseudobracket system]{} for the word $W$. $B$ is called a [*final [pseudobracket system]{} *]{} for $W$ if $B$ consists of a single final [pseudobracket]{} for $W$.
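Since the conditions (PB1)–(PB2) and the definition of a [pseudobracket system]{} are purely arithmetic, they are easy to verify mechanically. The following Python sketch is an illustration added for the reader's convenience; the function names and the encoding of the sets $E_i$ by pairs are ours and are not part of the formal text.

```python
# Illustrative sketch (not part of the formal proofs): checking the purely
# arithmetic conditions (PB1)-(PB2) and the pseudobracket-system condition.
# A pseudobracket is encoded as a 6-tuple b = (b1, b2, b3, b4, b5, b6).
# The set E_i of presentation (1.2) is encoded by a pair (n_i, periodic):
# {0} -> (0, False), {n_i} -> (n_i, False), n_i * N -> (n_i, True).

def in_E(k, E):
    """Return True if the integer k lies in the set E_i encoded as above."""
    n, periodic = E
    if n == 0:
        return k == 0
    return k == n or (periodic and k > 0 and k % n == 0)

def is_pseudobracket(b, W_len, E_sets):
    """Check condition (PB1) or (PB2) for a 6-tuple b; E_sets = [E_1, ..., E_m]."""
    b1, b2, b3, b4, b5, b6 = b
    if not (0 <= b1 <= b2 <= W_len):
        return False
    if b3 == b4 == b5 == 0:                      # type PB1
        return 0 <= b6 <= W_len
    # type PB2
    return (1 <= b3 <= len(E_sets)
            and 0 < b4 <= W_len and in_E(b4, E_sets[b3 - 1])
            and -b4 < b5 < b4
            and 0 <= b6 <= W_len)

def is_pseudobracket_system(B):
    """Arcs of distinct pseudobrackets may not overlap: b(2) <= c(1) or c(2) <= b(1).
    Assumes every element of B already satisfies b(1) <= b(2)."""
    B = sorted(B, key=lambda b: (b[0], b[1]))
    return all(B[i][1] <= B[i + 1][0] for i in range(len(B) - 1))
```

For example, `is_pseudobracket((0, 4, 1, 3, 2, 1), 4, [(3, True)])` returns `True`, in accordance with condition (PB2).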
Now we describe four kinds of [elementary operation]{}s over [pseudobracket]{}s and over [pseudobracket system]{}s: additions, extensions, turns, and mergers. They are analogous to the corresponding operations for brackets and [bracket system]{}s, except that no diagrams and no faces are involved.
Let $B$ be a [pseudobracket system]{} for a nonempty word $W$ over ${\mathcal{A}}^{\pm 1}$.
[*Additions.*]{}
Suppose $b$ is a starting [pseudobracket]{}, $b \not\in B$, and $B \cup \{ b\}$ is a [pseudobracket system]{}. Then we may add $b$ to $B$ thus making an [addition]{} operation over $B$.
[*Extensions.*]{}
Suppose $b \in B$, $b = (b(1), b(2), b(3), b(4), b(5), b(6))$, and $e_1 a(b) e_2$ is a subpath of $P_W$, where $a(b)$ is the arc of $b$ and $e_1, e_2$ are edges one of which could be missing.
Assume that $b$ is of type PB2. Suppose ${\varphi}(e_1) = a_{b(3)}^{{\varepsilon}_1}$, where ${\varepsilon}_1 = \pm 1$.
If $| b(5) | \le b(4)-2$ and ${\varepsilon}_1 b(5) \ge 0$, then we consider a [pseudobracket]{} $b'$ of type PB2 such that $$\begin{aligned}
& b'(1) = b(1)-1, \ b'(2) = b(2) , \ b'(3) = b(3) , \\
& b'(4) = b(4) , \ b'(5) = b(5)+{\varepsilon}_1 , b'(6) = b(6) .\end{aligned}$$ Note that $a(b') = e_1 a(b)$. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the left). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
On the other hand, if $| b(5) | = b(4)-1$ and ${\varepsilon}_1 b(5) \ge 0$, then we consider a [pseudobracket]{} $b'$ of type PB1 such that $$b'(1) = b(1)-1 , \
b'(2) = b(2) , \ b'(3) = b'(4) = b'(5) = 0, \ b'(6) = b(6)+1 .$$ In this case, we say that $b'$ is obtained from $b$ by an extension of type 2 (on the left). Note that $a(b') = e_1 a(b)$ and $b'$ has type PB1. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 2.
Analogously, assume that $b$ has type PB2 and ${\varphi}(e_2) = a_{b(3)}^{{\varepsilon}_2}$, where ${\varepsilon}_2 = \pm 1$.
If $| b(5) | \le b(4)-2$ and ${\varepsilon}_2 b(5) \ge 0$, then we consider a [pseudobracket]{} $b'$ such that $$\begin{aligned}
& b'(1) = b(1), \ b'(2) = b(2)+1 , \ b'(3) = b(3) , \\
& b'(4) = b(4) , \ b'(5) = b(5)+{\varepsilon}_2 , b'(6) = b(6) .\end{aligned}$$ Note that $a(b') = a(b)e_2$ and $b'$ has type PB2. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the right). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
On the other hand, if $| b(5) | = b(4)-1$ and ${\varepsilon}_2 b(5) \ge 0$, then we consider a [pseudobracket]{} $b'$ such that $$b'(1) = b(1) , \ b'(2) = b(2)+1 , \ b'(3) =b'(4) = b'(5) =0 , \ b'(6) = b(6)+1 .$$ Note that $a(b') = a(b) e_2$ and $b'$ has type PB1. We say that $b'$ is obtained from $b$ by an extension of type 2 (on the right). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 2.
Assume that $b \in B$ is a [pseudobracket]{} of type PB1, $e_1 a(b) e_2$ is a subpath of $P_W$, where $a(b)$ is the arc of $b$ and $e_1, e_2$ are edges with ${\varphi}(e_1) = {\varphi}(e_2)^{-1}$. Consider a [pseudobracket]{} $b'$ of type PB1 such that $$b'(1) = b(1)-1 , \ b'(2) = b(2)+1 , \ b'(3) = b'(4) =b'(5) =0 , \ b'(6) = b(6) .$$ Note that $a(b') = e_1 a(b)e_2$. We say that $b'$ is obtained from $b$ by an extension of type 3. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 3.
[*Turns.*]{}
Let $b \in B$ be a [pseudobracket]{} of type PB1. Then, by the definition, $b(3) = b(4) =b(5) =0$. Pick $j \in \{ 1, \dots, m\}$ such that $E_j \ne \{ 0 \}$ and pick an integer $n \in E_j$ with $0 < n \le |W|$. Consider a [pseudobracket]{} $b'$ with $b'(i) = b(i)$ for $i=1,2,5,6$, $b'(3) = j$, and $b'(4) = n$. Note that $b'$ is of type PB2 and $a(b') = a(b)$. We say that $b'$ is obtained from $b$ by a [*turn*]{} operation. Replacement of $b \in B$ with $b'$ in $B$ is also called a [*turn* ]{} operation over $B$. Note that $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{} because so is $B$.
[*Mergers.*]{}
Now suppose that $b, c \in B$ are distinct [pseudobracket]{}s such that $b(2) = c(1)$ and one of $b(3), c(3)$ is 0. Then one of $b, c$ is of type PB1 and the other has type PB1 or PB2. Consider a [pseudobracket]{} $b'$ such that $b'(1) = b(1)$, $b'(2) = c(2)$, and $b'(i) = b(i) + c(i)$ for $i=3,4,5,6$. Note that $a(b') = a(b)a(c)$ and $b'$ is of type PB1 if both $b, c$ have type PB1 or $b'$ is of type PB2 if one of $b, c$ has type PB2. We say that $b'$ is obtained from $b, c$ by a [*merger*]{} operation. Taking both $b, c$ out of $B$ and putting $b'$ in $B$ is a [*merger* ]{} operation over $B$.
As before, we will say that additions, extensions, turns and mergers, as defined above, are [*[elementary operation]{}s*]{} over [pseudobracket]{}s and [pseudobracket system]{}s for $W$.
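Continuing the illustrative encoding from the sketch above, turns and mergers act on a list of pseudobracket 6-tuples as follows (additions and extensions can be written in the same style); this is only an illustration of the bookkeeping, not part of the proof, and the caller is expected to recheck the pseudobracket-system condition before accepting the result.

```python
# Illustrative sketch in the same 6-tuple encoding as above.
def turn(system, b, j, n):
    # b has type PB1; j indexes a generator with E_j != {0}, and n is a chosen
    # element of E_j with 0 < n <= |W|, so that the result has type PB2
    assert b in system and b[2] == b[3] == b[4] == 0
    b_new = (b[0], b[1], j, n, b[4], b[5])
    return [b_new if x == b else x for x in system]

def merger(system, b, c):
    # b, c are distinct pseudobrackets with b(2) = c(1), one of them of type PB1
    assert b[1] == c[0] and (b[2] == 0 or c[2] == 0)
    b_new = (b[0], c[1], b[2] + c[2], b[3] + c[3], b[4] + c[4], b[5] + c[5])
    return [x for x in system if x != b and x != c] + [b_new]
```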
Assume that one [pseudobracket system]{} $B_\ell$ is obtained from another [pseudobracket system]{} $B_0$ by a finite sequence $\Omega$ of [elementary operation]{}s and that $B_0, B_1, \dots, B_\ell$ is the sequence of [pseudobracket system]{}s corresponding to $\Omega$. Such a sequence $B_0, B_1, \dots, B_\ell$ of [pseudobracket system]{}s is called [*operational*]{}. We say that a sequence $B_0, B_1, \dots, B_\ell$ of [pseudobracket system]{}s has [*size bounded by*]{} $ (k_1, k_2)$ if $\ell \le k_1$ and, for every $i$, the number of [pseudobracket]{}s in $B_i$ is at most $k_2$. Whenever it is not ambiguous, we also say that $\Omega$ has size bounded by $(k_1, k_2)$ if so does the sequence $B_0, B_1, \dots, B_\ell$ of [pseudobracket system]{}s corresponding to $\Omega$.
The significance of [pseudobracket system]{}s and [elementary operation]{}s over [pseudobracket system]{}s introduced above is revealed in the following lemma.
\[lem7\] Suppose that the empty [pseudobracket system]{} $B_0$ for $W$ can be transformed by a finite sequence $\Omega$ of [elementary operation]{}s into a final [pseudobracket system]{} $\{ b_F \}$. Then $W \overset{{\mathcal{G}}_2} = 1$, i.e., $W$ represents the trivial element of the group defined by presentation [pr1]{}, and there is a disk diagram ${\Delta}$ over [pr1]{} such that ${\Delta}$ has property (A), ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and $|{\Delta}(2) | = b_F(6)$.
Let $B_0, B_1, \dots, B_\ell$ be the sequence of [pseudobracket system]{}s corresponding to $\Omega$, where $B_0$ is empty and $B_\ell$ is final. Consider the following Claims (C1)–(C2) for a [pseudobracket]{} $b \in B_i$, $i = 1, \dots, \ell$, in which $a(b)$ denotes the arc of $b$.
1. If a [pseudobracket]{} $b \in B_i$ has type PB1, then ${\varphi}(a(b)) \overset {{\mathcal{G}}_2} = 1$ in the group ${\mathcal{G}}_2$ given by presentation (1.2) and there is a disk diagram ${\Delta}_b$ over (1.2) such that ${\Delta}_b$ has property (A), ${\varphi}( {\partial}{\Delta}_b ) \equiv {\varphi}(a(b))$ and $| {\Delta}_b(2) | = b(6)$.
2. If a [pseudobracket]{} $b \in B_i$ has type PB2, then ${\varphi}(a(b)) \overset {{\mathcal{G}}_2} = a_{b(3)}^{b(5)}$ and there is a disk diagram ${\Delta}_b$ over (1.2) such that ${\Delta}_b$ has property (A), ${\partial}{\Delta}_b = p q^{-1}$, where $p$, $q^{-1}$ are subpaths of ${\partial}{\Delta}_b$, ${\varphi}( p ) \equiv {\varphi}(a(b))$, ${\varphi}( q ) \equiv a_{b(3)}^{b(5)}$, $| {\Delta}_b(2) | = b(6)$, and if $e$ is an edge of $q^{-1}$ then $e^{-1} \in p$.
By induction on $i \ge 1$, we will prove that Claims (C1)–(C2) hold true for every [pseudobracket]{} $b' \in B_i$. Note that if $b'$ is a starting [pseudobracket]{}, then $b'$ is of type PB1 and Claim (C1) is obviously true for $b'$ (Claim (C2) is vacuously true for $b'$). Since $B_1$ consists of a starting [pseudobracket]{}, the base of induction is established.
To make the induction step from $i-1$ to $i$, $i \ge 2$, we consider the cases corresponding to the type of the [elementary operation]{} that is used to get $B_i$ from $B_{i-1}$.
Suppose that $B_i$ is obtained from $B_{i-1}$ by an [elementary operation]{} $\sigma$ and $b' \in B_i$ is the [pseudobracket]{} obtained from $b, c \in B_{i-1}$ by application of $\sigma$, here one of $b, c$ or both, depending on type of $\sigma$, could be missing. By the induction hypothesis, Claims (C1)–(C2) hold for every [pseudobracket]{} of $B_i$ different from $b'$ and it suffices to show that the suitable Claim (C1)–(C2) holds for $b'$.
If $B_i$ is obtained from $B_{i-1}$ by an addition, then it suffices to refer to the above remark that Claims (C1)–(C2) hold for a starting [pseudobracket]{}.
Suppose that $B_i$ is obtained from $B_{i-1}$ by an extension of type 1 and let $b' \in B_i$ be the [pseudobracket]{} created from $b \in B_{i-1}$ by an extension of type 1 on the left (the “right" subcase is symmetric).
Note that both $b$ and $b'$ have type PB2. By the induction hypothesis, which is Claim (C2) for $b$, there exists a disk diagram ${\Delta}_b$ such that ${\Delta}_b$ has property (A), $${\partial}{\Delta}_b = p q^{-1}, \ \ {\varphi}( p ) \equiv {\varphi}(a(b)), \ \ {\varphi}( q ) \equiv a_{b(3)}^{b(5)}, \ \
| {\Delta}_b(2) | = b(6),$$ and if $e$ is an edge of $q^{-1}$ then $e^{-1} \in p$. Here and below we use the notation of the definition of an extension of type 1.
Let $e_1$ denote the edge of $P_W$ such that $a(b') = e_1 a(b)$ and ${\varphi}(e_1) = a_{b(3)}^{{\varepsilon}_1}$, ${\varepsilon}_1 = \pm 1$. Consider a “loose" edge $f$ with ${\varphi}(f) = a_{b(3)}^{{\varepsilon}_1}$. We attach the vertex $f_+$ to the vertex $p_- = q_-$ of ${\Delta}_b$ to get a new disk diagram ${\Delta}'_b$ such that ${\partial}{\Delta}'_b = f p q^{-1}f^{-1}$, see Fig. 4.5. Note that property (A) holds for ${\Delta}'_b$, $${\varphi}(fp) \equiv a_{b(3)}^{{\varepsilon}_1} {\varphi}(a(b)) \equiv {\varphi}(e_1 a(b)) \equiv {\varphi}(a(b')) , \quad
{\varphi}(f q) \equiv a_{b(3)}^{b(5)+{\varepsilon}_1} \equiv a_{b(3)}^{b'(5)} ,$$ $| {\Delta}'_b(2) | = | {\Delta}_b(2) | = b(6) = b'(6)$, and if $e$ is an edge of $(fq)^{-1}$ then $e^{-1} \in fp$. Thus, Claim (C2) holds for the [pseudobracket]{} $b'$ with ${\Delta}_{b'} := {\Delta}'_b$ and the factorization ${\partial}{\Delta}_{b'} = (fp)(fq)^{-1}$.
(Fig. 4.5: the disk diagram ${\Delta}'_b$ with boundary $f p q^{-1} f^{-1}$, obtained by attaching the edge $f$ to ${\Delta}_b$ at the vertex $p_- = q_-$.)
Assume that $B_i$ is obtained from $B_{i-1}$ by an extension of type 2 and let $b' \in B_i$ be the [pseudobracket]{} obtained from $b \in B_{i-1}$ by an extension of type 2 on the left (the “right" subcase is symmetric).
Note that $b$ has type PB2, while $b'$ has type PB1. By the induction hypothesis, which is Claim (C2) for $b$, there exists a disk diagram ${\Delta}_b$ such that ${\Delta}_b$ has property (A), $${\partial}{\Delta}_b = p q^{-1}, \ \ {\varphi}( p ) \equiv {\varphi}(a(b)), \ \ {\varphi}( q ) \equiv a_{b(3)}^{b(5)}, \ \
| {\Delta}_b(2) | = b(6),$$ and if $e$ is an edge of $q^{-1}$ then $e^{-1} \in p$. Here and below we use the notation of the definition of an extension of type 2.
Let $e_1$ denote the edge of $P_W$ such that $a(b') = e_1 a(b)$ and ${\varphi}(e_1) = a_{b(3)}^{{\varepsilon}_1}$, ${\varepsilon}_1 = \pm 1$. Consider a “loose" edge $f$ with ${\varphi}(f) = a_{b(3)}^{{\varepsilon}_1}$. We attach the vertex $f_+$ to $p_- = q_-$ and the vertex $f_-$ to $p_+ = q_+$ of ${\Delta}_b$. Since ${\varepsilon}_1 b(5) \ge 0$ and $| b(5)| = b(4) -1$, it follows that ${\varepsilon}_1 + b(5) = {\varepsilon}_1 |b(4)|$. Therefore, ${\varphi}(qf) \equiv a_{b(3)}^{{\varepsilon}_1b(4)}$ and we can attach a new face $\Pi$ with ${\varphi}({\partial}\Pi) \equiv a_{b(3)}^{-{\varepsilon}_1b(4)}$ so that the boundary path ${\partial}\Pi$ is identified with the path $q^{-1}f^{-1}$, see Fig. 4.6. This way we obtain a new disk diagram ${\Delta}'_b$ such that ${\varphi}( {\partial}|_{f_-} {\Delta}'_b ) \equiv {\varphi}( f ) {\varphi}( p)$ and $| {\Delta}'_b(2) | = | {\Delta}_b(2) | +1$, see Fig. 4.6. It follows from construction of ${\Delta}'_b$ that property (A) holds for ${\Delta}'_b$ $${\varphi}(fp) \equiv a_{b(3)}^{{\varepsilon}_1} {\varphi}(a(b)) \equiv {\varphi}(e_1 a(b)) \equiv {\varphi}(a(b')) \overset{{\mathcal{G}}_2} = 1 , \ \
{\varphi}(q f) \equiv a_{b(3)}^{b(5)+{\varepsilon}_1} \equiv a_{b(3)}^{{\varepsilon}_1 b(4)} ,$$ and $| {\Delta}'_b(2) | = | {\Delta}_b(2) |+1 = b(6)+1 = b'(6)$. Therefore, Claim (C1) holds for the [pseudobracket]{} $b'$ if we set ${\Delta}_{b'} := {\Delta}'_b$.
(Fig. 4.6: the disk diagram ${\Delta}'_b$ obtained from ${\Delta}_b$ by attaching the edge $f$ and the face $\Pi$ along the path $q^{-1}f^{-1}$.)
Suppose that an extension of type 3 was applied to get $B_i$ from $B_{i-1}$ and $b' \in B_i$ is the [pseudobracket]{}created from $b \in B_{i-1}$ by the operation.
Note that both $b$ and $b'$ have type PB1. By the induction hypothesis, which is Claim (C1) for $b$, there exists a disk diagram ${\Delta}_b$ such that ${\Delta}_b$ has property (A) and $${\varphi}({\partial}{\Delta}_b) \equiv {\varphi}(a(b)) a_{b(3)}^{-b(5)} , \quad | {\Delta}_b(2) | = b(6) .$$ Here and below we use the notation of the definition of an extension of type 3.
Denote ${\partial}{\Delta}_b = p$, where ${\varphi}(p) \equiv {\varphi}(a(b)) $ and let $e_1, e_2$ be the edges of $P_W$ such that $a(b') = e_1 a(b)e_2$ and ${\varphi}(e_1) ={\varphi}(e_2)^{-1}$. Consider a “loose" edge $f$ with ${\varphi}(f) = {\varphi}(e_1)$. We attach the vertex $f_+$ to the vertex $p_-$ of ${\Delta}_b$ to get a new disk diagram ${\Delta}'_b$ such that ${\partial}{\Delta}'_b = f p f^{-1}$, see Fig. 4.7. Since ${\Delta}_b$ has property (A), $${\varphi}( {\partial}{\Delta}'_b ) \equiv {\varphi}(fpf^{-1}) \equiv {\varphi}(e_1) {\varphi}(a(b)) {\varphi}(e_1)^{-1} \equiv {\varphi}(a(b'))$$ and $| {\Delta}'_b(2) | = | {\Delta}_b(2) | = b(6) = b'(6)$, it follows that Claim (C1) holds for the [pseudobracket]{} $b'$ with ${\Delta}_{b'} := {\Delta}'_b$.
(Fig. 4.7: the disk diagram ${\Delta}'_b$ with boundary $f p f^{-1}$.)
Suppose that $B_i$ is obtained from $B_{i-1}$ by a turn operation and let $b' \in B_i$ be the [pseudobracket]{} created from $b \in B_{i-1}$ by the turn operation. By the definition of a turn, $b$ has type PB1 and $b'$ has type PB2. By the induction hypothesis, which is Claim (C1) for $b$, there exists a disk diagram ${\Delta}_b$ such that ${\Delta}_b$ has property (A) $${\varphi}({\partial}{\Delta}_b) \equiv {\varphi}(a(b)) , \quad | {\Delta}_b(2) | = b(6) .$$ Here and below we use the notation of the definition of a turn operation.
Denote ${\Delta}'_b := {\Delta}_b$ and let ${\partial}{\Delta}'_b := p q^{-1}$, where $p, q^{-1}$ are subpaths of ${\partial}{\Delta}'_b$ such that ${\varphi}(p) \equiv {\varphi}(a(b)) $ and $|q| = 0$. Clearly, ${\Delta}'_b$ has property (A). Since $a(b) = a(b')$, $b'(5) = b(5)=0$, and $$| {\Delta}'_b(2) | = | {\Delta}_b(2) | = b(6) = b'(6) ,$$ it follows that ${\varphi}(p) \equiv {\varphi}(a(b')) , \ {\varphi}(q) \equiv a_{b'(3)}^{b'(5)}$. Hence, Claim (C2) holds for the [pseudobracket]{} $b'$ with ${\Delta}_{b'} := {\Delta}'_b$.
Finally, assume that $B_i$ results from $B_{i-1}$ by a merger operation and let $b' \in B_i$ be the [pseudobracket]{}created from [pseudobracket]{}s $b, c \in B_{i-1}$, where $b(2) = c(1)$, by the operation. By the definition of a merger operation, one of the [pseudobracket]{}s $b, c$ must have type PB1. By the induction hypothesis, there are disk diagrams ${\Delta}_{b}, {\Delta}_{c}$ for $b, c$, resp., as stated in Claims (C1)–(C2). Denote ${\partial}{\Delta}_{b} = p_b q_b^{-1}$, ${\partial}{\Delta}_{c} = p_c q_c^{-1}$, where ${\varphi}(p_b) \equiv {\varphi}(a(b))$, ${\varphi}(p_c) \equiv {\varphi}(a(c))$, and, for $x \in \{ b,c \}$, $| q_x| =0$ if $x$ has type PB1 or ${\varphi}(q_x) \equiv a_{x(3)}^{ x(5)}$ if $x$ has type PB2. Note that $| {\Delta}_x(2) | = x(6)$.
Consider a disk diagram ${\Delta}'$ obtained from ${\Delta}_{b}, {\Delta}_{c}$ by identification of the vertices $(p_b)_+$ and $(p_c)_-$, see Fig. 4.8. Then $$| {\Delta}'(2) | = | {\Delta}_b(2) | +| {\Delta}_c(2) | , \qquad {\partial}{\Delta}' = p_bp_c q_c^{-1} q_b^{-1} .$$ Note that $${\varphi}(p_b p_c) \equiv {\varphi}(a(b)) {\varphi}(a(c)) \equiv {\varphi}(a(b')) , \quad | {\Delta}'(2) | = b(6)+c(6)$$ and $|q_c^{-1} q_b^{-1} | = 0$ if both $b, c$ have type PB1, or ${\varphi}(q_b q_c) \equiv a_{b(3)}^{b(5)}$ if $b$ has type PB2 and $c$ has type PB1, or ${\varphi}(q_b q_c) \equiv a_{c(3)}^{c(5)}$ if $b$ has type PB1 and $c$ has type PB2. Therefore, it follows that Claim (C1) holds true for $b'$ with ${\Delta}_{b'} = {\Delta}'$ if both $b, c$ have type PB1 or Claim (C2) holds for $b'$ with ${\Delta}_{b'} = {\Delta}'$ if one of $b, c$ has type PB2.
(Fig. 4.8: the disk diagram ${\Delta}'$ obtained from ${\Delta}_b$ and ${\Delta}_c$ by identifying the vertices $(p_b)_+$ and $(p_c)_-$.)
All possible cases are discussed and the induction step is complete. Claims (C1)–(C2) are proven.
We can now finish the proof of Lemma \[lem7\]. By Claim (C1) applied to the [pseudobracket]{} $b_F = (0, |W|, 0, 0,0, b_F(6) )$ of the final [pseudobracket system]{} $B_\ell = \{ b_F \}$, there is a disk diagram ${\Delta}_{b_F}$ over [pr1]{} such that ${\Delta}_{b_F}$ has property (A), ${\varphi}( {\partial}|_{0} {\Delta}_{b_F} ) \equiv W$ and $| {\Delta}_{b_F}(2) | = b_F(6)$. Thus, ${\Delta}_{b_F}$ is a desired [disk diagram]{} and Lemma \[lem7\] is proven.
\[lem8\] Suppose $W$ is a nonempty word over ${\mathcal{A}}^{\pm 1}$ and $n \ge 0$ is an integer. Then $W$ is a product of at most $n$ conjugates of words $R^{\pm 1}$, where $R=1$ is a relation of presentation (1.2), if and only if there is a sequence $\Omega$ of [elementary operation]{}s such that $\Omega$ transforms the empty [pseudobracket system]{} for $W$ into a final [pseudobracket system]{} $\{ b_F \}$, where $b_F(6) \le n$, and $\Omega$ has size bounded by $ (11|W|, C(\log|W| +1))$, where $C = (\log \tfrac 65)^{-1}$.
Assume that $W$ is a product of at most $n$ conjugates of words $R^{\pm 1}$, where $R=1$ is a relation of presentation (1.2). By Lemmas \[vk\]–\[vk2\](b), there is a disk diagram ${\Delta}$ over (1.2) such that ${\Delta}$ has property (A), ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and $|{\Delta}(2)| \le \min( n, |W|)$. Then Lemma \[lem6\] applies and yields a sequence $\Omega$ of [elementary operation]{}s over [bracket system]{}s for $(W, {\Delta})$ that converts the empty [bracket system]{} for $(W, {\Delta})$ into the final one and has size bounded by $ (11|W|, C(\log|W| +1))$. It follows from the arguments of Lemma \[lem6\] and Lemma \[vk2\](a) that if $b$ is a bracket of one of the intermediate [bracket system]{}s associated with $\Omega$, then $b(4) \le |W|$. Since every bracket $b \in B$ and every intermediate [bracket system]{} $B$ for $(W, {\Delta})$ associated with $\Omega$ can be considered as a [pseudobracket]{} and a [pseudobracket system]{} for $W$, resp., we automatically obtain a desired sequence of [pseudobracket system]{}s.
Conversely, the existence of a sequence $\Omega$ of [elementary operation]{}s over [pseudobracket system]{}s, as specified in Lemma \[lem8\], implies, by Lemma \[lem7\], that there is a disk diagram ${\Delta}$ over (1.2) such that ${\Delta}$ has property (A), ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and $|{\Delta}(2)| =b_F(6) \le n$. The existence of such a [disk diagram]{} ${\Delta}$ means that $W$ is a product of $|{\Delta}(2)| =b_F(6) \le n$ conjugates of words $R^{\pm 1}$, where $R=1$ is a relation of presentation (1.2).
Proofs of Theorem \[thm1\] and Corollary \[cor1\]
=================================================
Let the group ${\mathcal{G}}_2$ be defined by a presentation of the form $$\tag{1.2}
{\mathcal{G}}_2 := \langle \, a_1, \dots, a_m \ \| \ a_{i}^{k_i} =1, \ k_i \in E_i, \ i = 1, \ldots, m \, \rangle ,$$ where for every $i$, one of the following holds: $E_i = \{ 0\}$ or, for some integer $n_i >0$, $E_i = \{ n_i \}$ or $E_i = n_i \mathbb N = \{ n_i, 2n_i, 3n_i,\dots \}$. Then both the bounded and precise word problems for (1.2) are in $\textsf{L}^3$ and in $\textsf{P}$. Specifically, the problems can be solved in deterministic space $O((\log|W|)^3)$ or in deterministic time $O( |W|^4\log|W|)$.
We start with the $\textsf{L}^3$ part of Theorem \[thm1\]. First we discuss a nondeterministic algorithm which solves the bounded word problem for presentation (1.2) and which is based on Lemma \[lem8\].
Given an input $(W, 1^n)$, where $W$ is a nonempty word (not necessarily reduced) over the alphabet ${\mathcal{A}}^{\pm 1}$ and $n \ge 0$ is an integer, written in unary notation as $1^n$, we begin with the empty [pseudobracket system]{} and nondeterministically apply a sequence of [elementary operation]{}s of size bounded by $( 11|W|, C(\log|W| +1) )$, where $C = (\log \tfrac 65)^{-1}$. If such a sequence of [elementary operation]{}s results in a final [pseudobracket system]{} $\{ (0,|W|,0,0,0,n') \}$, where $n' \le n$, then our algorithm accepts and, in view of Lemma \[lem8\], we may conclude that $W$ is a product of $n' \le n$ conjugates of words $R^{\pm 1}$, where $R=1$ is a relation of . It follows from the definitions and Lemma \[lem8\] that the number of [elementary operation]{}s needed for this algorithm to accept is at most $11|W|$. Hence, it follows from the definition of [elementary operation]{}s over [pseudobracket system]{}s for $W$ that the time needed to run this nondeterministic algorithm is $O(|W|)$. To estimate the space requirements of this algorithm, we note that if $b$ is a [pseudobracket]{} for $W$, then $b(1)$, $b(2)$ are integers in the range from 0 to $|W|$, hence, when written in binary notation, will take at most $C'(\log|W| +1)$ space, where $C'$ is a constant. Since $b(3), b(4), b(5), b(6)$ are also integers that satisfy inequalities $$0 \le b(3) \le m , \ 0 \le b(4) \le |W| , \ |b(5)| \le b(4), \ 0 \le b(6) \le |W| ,$$ and $m$ is a constant, it follows that the total space required to run this algorithm is at most $$5C'(\log|W|+1 +\log m)C(\log|W| +1) = O((\log |W|)^2) .$$ Note that this bound is independent of $n$ because it follows from Lemma \[vk2\](b) that if $W$ is a product of $n'$ conjugates of words $R^{\pm 1}$, where $R=1$ is a relation of , then it is possible to assume that $n' \le |W|$.
Furthermore, according to Savitch’s theorem [@Sav], see also [@AB], [@PCC], the existence of a nondeterministic algorithm that recognizes a language in space $S$ and time $T$ implies the existence of a deterministic algorithm that recognizes the language in space $O(S \log T)$. Therefore, by Savitch’s theorem [@Sav], there is a deterministic algorithm that solves the bounded word problem for presentation (1.2) in space $O((\log |W|)^3)$.
To solve the precise word problem for presentation (1.2), suppose that we are given a pair $(W, 1^n)$ and wish to find out if $W$ is a product of $n$ conjugates of words $R^{\pm 1}$, where $R=1$ is a relation of (1.2), and $n$ is minimal with this property. By Lemma \[vk2\], we may assume that $n \le |W|$. Using the foregoing deterministic algorithm, we consecutively check whether the bounded word problem is solvable for the two pairs $(W, 1^{n-1})$ and $(W, 1^n)$. It is clear that the precise word problem for the pair $(W, 1^n)$ has a positive solution if and only if the bounded word problem has a negative solution for the pair $(W, 1^{n-1})$ and has a positive solution for the pair $(W, 1^n)$ and that these two facts can be verified in deterministic space $O((\log |W|)^3)$.
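Schematically, and writing `bounded_wp(W, n)` for a hypothetical routine implementing the deterministic algorithm above, the reduction just described is simply:

```python
# Sketch of the reduction of the precise word problem to the bounded word problem;
# bounded_wp is a hypothetical oracle for the bounded word problem.
def precise_wp(W, n, bounded_wp):
    # positive answer iff BWP(W, n) holds and BWP(W, n-1) fails (vacuously for n = 0)
    return bounded_wp(W, n) and (n == 0 or not bounded_wp(W, n - 1))
```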
Now we describe an algorithm that solves the [precise word problem]{} for presentation (1.2) in polynomial time. Our arguments are analogous to folklore arguments [@folk], [@Ril16] that solve the precise word problem for presentation $\langle \, a, b \, \| \, a=1, b=1 \, \rangle$ in polynomial time and that are based on the method of dynamic programming.
For a word $U$ over ${\mathcal{A}}^{\pm 1}$ consider the following property.
1. $U$ is nonempty and if $b_1, b_2 \in {\mathcal{A}}^{\pm 1}$ are the first, last, resp., letters of $U$ then $b_1 \ne b_2^{-1}$ and $b_1 \ne b_2$.
\[lema1\] Let ${\Delta}$ be a [disk diagram]{} over presentation [pr1]{} such that ${\Delta}$ has property (A) and the word $W \equiv {\varphi}({\partial}|_0 {\Delta})$ has property (E). Then there exists a factorization ${\partial}|_0 {\Delta}= q_1 q_2$ such that $|q_1|, |q_2| >0$ and $(q_1)_+ = (q_2)_- = {\alpha}(0)$. In particular, ${\varphi}(q_1) \overset {{\mathcal{G}}_2} = 1$, ${\varphi}(q_2) \overset {{\mathcal{G}}_2} = 1$.
Let ${\partial}|_0 {\Delta}= e_1 q e_2$, where $e_1, e_2$ are edges, $q$ is a subpath of ${\partial}|_0 {\Delta}$.
Suppose $e_1^{-1}$ is an edge of ${\partial}|_0 {\Delta}$. Since $e_1^{-1} \ne e_2$ by property (E) of $W$, it follows that $e_1^{-1}$ is an edge of $q$ and $q = r_1 e_1^{-1} r_2 $, where $r_1, r_2$ are subpaths of $q$. Hence, $q_1 = e_1 r_1 e_1^{-1} $ and $q_2 = r_2 e_2$ are desired paths.
Hence, we may assume that $e_1^{-1}$ is an edge of the boundary path ${\partial}\Pi_1$ of a face $\Pi_1$. Arguing in a similar fashion, we may assume that $e_2^{-1}$ is an edge of the boundary path ${\partial}\Pi_2$ of a face $\Pi_2$. If $\Pi_1 = \Pi_2$ then, in view of relations of the presentation , we have ${\varphi}(e_1) = {\varphi}(e_2)$ contrary to property (E) of $W$. Hence, $\Pi_1 \ne \Pi_2$.
Since for every edge $f$ of ${\partial}\Pi_1$ the edge $f^{-1}$ belongs to ${\partial}{\Delta}$ by property (A), we obtain the existence of a desired path $q_1$ of the form $q_1 = e_1$ if $|{\partial}\Pi_1| = 1$ or $q_1 = e_1 r f^{-1}$ if $|{\partial}\Pi_1| > 1$, where $r$ is a subpath of ${\partial}|_0 {\Delta}$, $f$ is the edge of ${\partial}\Pi_1$ such that $e_1^{-1} f$ is a subpath of ${\partial}\Pi_1$, $f_- = {\alpha}(0)$, and $|q_1| < |{\partial}{\Delta}|$.
If $U \overset {{\mathcal{G}}_2} = 1$, let $\mu_2(U)$ denote the integer such that the [precise word problem]{} for presentation (1.2) holds for the pair $(U, \mu_2(U))$. If $U \overset {{\mathcal{G}}_2} \ne 1$, we set $\mu_2(U) := \infty$.
\[lema2\] Let $U$ be a word such that $U \overset {{\mathcal{G}}_2} = 1$ and $U$ has property (E). Then there exists a factorization $U \equiv U_1 U_2$ such that $|U_1|, |U_2| >0$ and $$\mu_2(U) = \mu_2(U_1) +\mu_2(U_2) .$$
Consider a [disk diagram]{} ${\Delta}$ over [pr1]{} such that ${\varphi}({\partial}|_0 {\Delta}) \equiv U$ and $| {\Delta}(2)| = \mu_2(U)$. By Lemma \[vk2\](b), we may assume that ${\Delta}$ has property (A). Hence, it follows from Lemma \[lema1\] applied to ${\Delta}$ that there is a factorization ${\partial}|_0 {\Delta}= q_1 q_2$ such that $|q_1|, |q_2| >0$ and $(q_1)_+ = (q_2)_- = {\alpha}(0)$. Denote $U_i := {\varphi}(q_i)$ and let ${\Delta}_i$ be the subdiagram of ${\Delta}$ bounded by $q_i$, $i=1,2$. Since ${\Delta}$ is a minimal [disk diagram]{} for $U$, it follows that ${\Delta}_i$ is a minimal [disk diagram]{} for $U_i$, $i=1,2$. Hence, $|{\Delta}_i(2) | = \mu_2(U_i)$ and $$\mu_2(U) = |{\Delta}(2) | = |{\Delta}_1(2) |+|{\Delta}_2(2) | = \mu_2(U_1)+\mu_2(U_2) ,$$ as required.
Let $U$ be a nonempty word over ${\mathcal{A}}^{\pm 1}$ and let $U(i,j)$, where $1 \le i \le |U|$ and $0 \le j \le |U|$, denote the subword of $U$ that starts with the $i$th letter of $U$ and has length $j$. For example, $U = U(1, |U|)$ and $U(i, 0)$ is the empty subword.
If $a_j \in {\mathcal{A}}$, let $|U|_{a_j}$ denote the total number of occurrences of letters $a_j, a_j^{-1}$ in $U$. Note that we can decide whether $U \overset {{\mathcal{G}}_2} = 1$ in time $O(|U|^2)$ by cancelling subwords $a_j^{\pm n_j}$, $a_j^{-1} a_j$, $a_j a_j^{-1}$, and checking whether the word obtained by a process of such cancellations is empty.
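For concreteness, here is a minimal stack-based sketch of this cancellation procedure (an illustration, not part of the argument). Letters are encoded as pairs $(j, {\varepsilon})$ standing for $a_j^{{\varepsilon}}$, and each $E_j$ is summarized by the single parameter `order[j]`, equal to $n_j$ when $E_j = \{ n_j\}$ or $E_j = n_j \mathbb N$ and to `None` when $E_j = \{ 0\}$; for deciding triviality this loses nothing, since the relations $a_j^{k n_j} = 1$ with $k > 1$ are consequences of $a_j^{n_j} = 1$. The final stack is a syllable normal form for the free product of the cyclic groups $\langle a_j \rangle$, so it is empty exactly when $U \overset {{\mathcal{G}}_2} = 1$.

```python
# Sketch of the cancellation-based triviality test for presentation (1.2);
# order[j] is n_j when E_j is {n_j} or n_j*N, and None when E_j = {0}.
def is_trivial_in_G2(U, order):
    stack = []
    for letter in U:                       # letter = (j, eps) stands for a_j^eps
        stack.append(letter)
        reduced = True
        while reduced and stack:
            reduced = False
            # cancel an adjacent pair a_j a_j^{-1} or a_j^{-1} a_j
            if len(stack) >= 2 and stack[-1][0] == stack[-2][0] \
                               and stack[-1][1] == -stack[-2][1]:
                del stack[-2:]
                reduced = True
                continue
            # cancel a block a_j^{n_j} or a_j^{-n_j} on top of the stack
            j, eps = stack[-1]
            n = order.get(j)
            if n is not None and len(stack) >= n \
                    and all(x == (j, eps) for x in stack[-n:]):
                del stack[-n:]
                reduced = True
    return not stack

# Example: a_1 a_2^2 a_1^2 is trivial in <a_1, a_2 | a_1^3 = 1, a_2^2 = 1>.
assert is_trivial_in_G2([(1, 1), (2, 1), (2, 1), (1, 1), (1, 1)], {1: 3, 2: 2})
```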
Let $W$ be a nonempty word over ${\mathcal{A}}^{\pm 1}$. Define a parameterized word $W[i, j, k,\ell]$, where $1 \le i \le |W|$, $0 \le j \le |W|$, $1\le k \le m$, and $\ell$ is an integer that satisfies $|\ell | \le |W|_{a_k} - |W(i,j)|_{a_k}$ so that $$\begin{gathered}
\label{eeqq1}
W[i, j, k, \ell] := W(i,j)a_k^\ell\end{gathered}$$ and the word $W(i,j)a_k^\ell $ is not empty, i.e., $j+|\ell| \ge 1$.
Note that the total number of such parameterized words $W[i, j, k,\ell]$ is bounded by $O(|W|^3)$, here $m = |{\mathcal{A}}|$ is a constant as the presentation is fixed. Let $\mathcal S_2(W)$ denote the set of all parameterized words $W[i, j, k,\ell]$. Elements $W[i, j, k,\ell]$ and $W[i', j', k',\ell']$ of $\mathcal S_2(W)$ are defined to be equal [if and only if]{} the quadruples $(i, j, k,\ell)$, $(i', j', k',\ell')$ are equal. Hence, we wish to distinguish between elements of $\mathcal S_2(W)$ and actual words represented by elements of $\mathcal S_2(W)$. It is clear that we can have $W(i,j)a_k^\ell \equiv W(i',j')a_{k'}^{\ell'}$ when $W[i, j, k,\ell] \ne W[i', j', k',\ell']$.
If $U$ is the word represented by $W[i, j, k,\ell]$, i.e., $U \equiv W(i,j)a_k^\ell$, then we denote this by writing $$U \overset \star = W[i, j, k,\ell] .$$
We introduce a partial order on the set $\mathcal S_2(W)$ by setting $$W[i', j', k',\ell'] \prec W[i, j, k,\ell]$$ if $j' < j$ or $j' = j$ and $|\ell'| < |\ell|$. In other words, we partially order the set $\mathcal S_2(W)$ by using the lexicographical order on pairs $(j, |\ell|)$ associated with elements $W[i, j, k,\ell]$ of $\mathcal S_2(W)$.
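In code, this partial order is nothing but a strict lexicographic comparison of the associated pairs $(j, |\ell|)$; a two-line sketch:

```python
# Sketch: the partial order on parameterized words, compared by the pair (j, |l|).
def prec(q1, q2):
    # q = (i, j, k, l); returns True when W[q1] strictly precedes W[q2]
    return (q1[1], abs(q1[3])) < (q2[1], abs(q2[3]))
```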
Define $$\mu_2(W[i, j, k,\ell]) := \mu_2(W(i,j)a_k^\ell) .$$
To compute the number $\mu_2(W) = \mu_2(W[1, |W|, 1,0])$ in polynomial time, we use the method of dynamic programming in which the parameter is $(j, |\ell|)$. In other words, we compute the number $\mu_2(W[i, j, k,\ell])$ by induction on the parameter $(j, |\ell|)$. The base of induction (or initialization) for $j + |\ell| = 1$ is obvious as $\mu_2(W[i, j, k,\ell])$ is $1$ or $\infty$ depending on the presentation (1.2).
To make the induction step, we assume that the numbers $\mu_2(W[i', j', k',\ell'])$ are already computed for all $W[i', j', k',\ell']$ whenever $(j', |\ell'|) \prec (j, |\ell|)$ and we compute the number $\mu_2(W[i, j, k,\ell])$.
If $W(i,j)a_k^\ell \overset {{\mathcal{G}}_2} \ne 1$, we set $\mu_2( W[i, j, k,\ell] ) := \infty$.
Assume that $W(i,j)a_k^\ell \overset {{\mathcal{G}}_2} = 1$.
Suppose that the word $W(i,j)a_k^\ell$ has property (E). Consider all possible factorizations for $W(i,j)a_k^\ell$ of the form $$\begin{gathered}
\label{eeqq2}
W(i,j)a_k^\ell \equiv U_1 U_2 ,\end{gathered}$$ where $|U_1|, |U_2| \ge 1$. Let us show that either of the words $U_1, U_2$ can be represented in a form $W[i', j', k',\ell']$ so that $W[i', j', k',\ell'] \prec W[i, j, k,\ell]$.
First suppose $|U_1| \le |W(i,j)| =j$. If $i +|U_1| \le |W|$, we have $$U_1 \overset \star = W[i, |U_1|, 1,0] , \quad U_2 \overset \star = W[i +|U_1| , j-|U_1|,k,\ell] .$$ On the other hand, if $i +|U_1| = |W|+1$, we have $$U_1 \overset \star = W[i, |U_1|, 1,0] , \quad U_2\overset \star = W[|W| , 0,k,\ell] .$$
Now suppose $|U_1| > |W(i,j)| =j$. Let $\ell_1, \ell_2$ be integers such that $\ell_1, \ell_2, \ell$ have the same sign, $ \ell = \ell_1+\ell_2$, and $j + |\ell_1| = |U_1|$. Then we have $$U_1 \overset \star = W[i, j, k,\ell_1] , \quad U_2 \overset \star = W[ 1,0 , k, \ell_2] .$$
Note that the induction parameter $(j', |\ell'|)$ for indicated representations of $U_1, U_2$ is smaller than the parameter $(j, |\ell|)$ for the original parameterized word $W[i, j, k,\ell]$. It follows from Lemma \[lema2\] applied to the word $W(i,j)a_k^\ell$ that there is a factorization $$W(i,j)a_k^\ell \equiv U'_1 U'_2$$ such that $|U'_1|, |U'_2| \ge 1$ and $\mu_2(W(i,j)a_k^\ell) = \mu_2(U'_1) + \mu_2(U'_2)$. Hence, taking the minimum $$\begin{gathered}
\label{minUU}
\min (\mu_2(U_1) + \mu_2(U_2))\end{gathered}$$ over all factorizations of the form displayed above, we obtain the number $\mu_2(W(i,j)a_k^\ell)$.
Assume that the word $W(i,j)a_k^\ell$ has no property (E), i.e., $W(i,j)a_k^\ell \equiv bUb^{-1}$ or $W(i,j)a_k^\ell \equiv b$ or $W(i,j)a_k^\ell \equiv bUb$, where $b \in {\mathcal{A}}^{\pm 1}$ and $U$ is a word.
First suppose that $W(i,j)a_k^\ell \equiv bUb^{-1}$. Note that the word $U$ can be represented in a form $W[i', j', k',\ell']$ so that $W[i', j', k',\ell'] \prec W[i, j, k,\ell]$. Indeed, if $|\ell | = 0$ then $$U \overset \star = W[i+1, j-2, k, 0] .$$ On the other hand, if $|\ell | > 0$ then $$U \overset \star = W[i+1, j-1, k,\ell_1] ,$$ where $|\ell_1| = |\ell|-1$ and $\ell_1 \ell \ge 0$. Hence, the number $\mu_2(U)$ is available by induction hypothesis. Since $\mu_2(W[i, j, k,\ell]) = \mu_2(U)$, we obtain the required number $\mu_2(W[i, j, k,\ell])$.
In case $W(i,j)a_k^\ell \equiv b$, the number $\mu_2(W[i, j, k,\ell])$ is either one or $\infty$ depending on the presentation (1.2).
Finally, consider the case when $W(i,j)a_k^\ell \equiv bUb$. Denote $b = a_{k_1}^\delta$, where $a_{k_1} \in {\mathcal{A}}$ and $\delta = \pm 1$.
If $j=0$, i.e., $ W(i,j)a_k^\ell \equiv a_k^\ell$, then $k = k_1$ and the number $\mu_2(W[i, j, k,\ell])$ can be easily computed in time $O(\log |W|)$ as $|\ell | \le |W|$ and we can use binary representation for $\ell$.
Suppose $j>0$. If $\ell =0$ then $Ub^2 \overset \star = W[i+1, j-1, k_1,\delta]$. If $|\ell | >0$ then $k=k_1$, $\ell $ and $\delta$ have the same sign, and $Ub^2 \overset \star = W[i+1, j-1, k,\ell + \delta]$.
In either subcase $j=0$ or $j>0$, we obtain $$\begin{aligned}
& Ub^2 \overset \star = W[i', j', k',\ell'] , \quad W[i', j', k',\ell'] \prec W[i, j, k,\ell] , \\
& \mu_2(W[i, j, k,\ell]) = \mu_2(W[i', j', k',\ell']) .\end{aligned}$$ The latter equality holds because cyclic permutations of a word have equal $\mu_2$-values: if $bV$ is a product of $q$ conjugates of words $R^{\pm 1}$, then so is $Vb = b^{-1}(bV)b$, and vice versa.
This completes our inductive procedure of computation of numbers $\mu_2(W[i, j, k,\ell])$ for all $W[i, j, k,\ell] \in \mathcal S_2(W)$.
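As an illustration of the whole procedure, the case analysis above can be condensed into the following memoized recursion. This is only a sketch: it assumes for concreteness that every $E_i$ is either $\{ 0\}$ or a singleton $\{ n_i\}$, encodes letters as pairs $(i, {\varepsilon})$, memoizes on the subword itself instead of on the quadruples $(i, j, k, \ell)$ (so it mirrors the recursion but not the $O(|W|^3)$ bookkeeping behind the polynomial bound), and leaves the triviality test implicit: words that are nontrivial in ${\mathcal{G}}_2$ come out with the value $\infty$.

```python
# A sketch of the dynamic programming above, assuming every E_i is {0} or {n_i}.
from functools import lru_cache

INF = float("inf")

def make_mu2(order):
    # order[i] = n_i if E_i = {n_i}, and None if E_i = {0}; letters are (i, eps)
    @lru_cache(maxsize=None)
    def mu2(U):
        if not U:
            return 0
        i0, e0 = U[0]
        if all(x == (i0, e0) for x in U):          # U is the power a_{i0}^{e0 |U|}
            n = order.get(i0)
            if n is None or len(U) % n != 0:
                return INF                          # such a power is nontrivial
            return len(U) // n                      # |l|/n_i faces suffice and are needed
        b1, b2 = U[0], U[-1]
        if b1[0] == b2[0] and b1[1] == -b2[1]:      # U = b V b^{-1}: mu2(U) = mu2(V)
            return mu2(U[1:-1])
        if b1 == b2:                                # U = b V b: pass to the cyclic shift V b^2
            return mu2(U[1:] + (U[0],))
        # property (E) holds: minimize over all factorizations U = U1 U2
        # (by the factorization lemma above)
        return min(mu2(U[:s]) + mu2(U[s:]) for s in range(1, len(U)))
    return mu2

# Example: in <a_1, a_2 | a_1^3 = 1>, the word a_2 a_1^3 a_2^{-1} a_1^3 needs two faces.
mu2 = make_mu2({1: 3, 2: None})
W = ((2, 1), (1, 1), (1, 1), (1, 1), (2, -1), (1, 1), (1, 1), (1, 1))
assert mu2(W) == 2
```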
Since the length of every word $ W(i,j)a_k^\ell$ is at most $|W|$, it follows that our computation of the number $\mu_2(W[i, j, k,\ell])$ can be done in time $O(|W|\log|W| )$ including additions of binary representations of numbers $\mu_2(U_1), \mu_2(U_2)$ to compute the minimum . Since the cardinality $| \mathcal S_2(W) | $ of the set $\mathcal S_2(W)$ is bounded by $O(|W|^3)$, we conclude that computation of numbers $\mu_2(W[i, j, k,\ell])$ for all words $W[i, j, k,\ell] \in \mathcal S_2(W)$ can be done in deterministic time $O(|W|^4\log|W|)$. This means that both the [bounded word problem]{} and the [precise word problem]{} for presentation can be solved in time $O(|W|^4\log|W|)$.
The proof of Theorem \[thm1\] is complete.
Let $W$ be a word over ${\mathcal{A}}^{\pm 1}$ and $n \ge 0$ be an integer. Then the decision problems that inquire whether the width $ h(W)$ or the spelling length $h_1(W)$ of $W$ is equal to $n$ belong to $\textsf{L}^3$ and $\textsf{P}$. Specifically, the problems can be solved in deterministic space $O((\log|W|)^3)$ or in deterministic time $O( |W|^4\log|W| )$.
Observe that the decision problem asking whether the width (resp. the spelling length) of a word $W$ is $n$ is equivalent to the [precise word problem]{} whose input is $(W, n)$ for a presentation of the form (1.2) in which $E_i = \mathbb N$ (resp. $E_i = \{ 1\}$) for every $i =1, \dots, m$. Therefore, Theorem \[thm1\], proven above, shows that the problem asking whether the width (resp. the spelling length) of $W$ is $n$ belongs to both ${\textsf L}^3$ and ${\textsf P}$ and yields the required space and time bounds for this problem.
Calculus of Brackets for Group Presentation (1.4)
============================================
As in Theorem \[thm2\], consider a group presentation $$\tag{1.4}
{\mathcal{G}}_3 = \langle \, a_1 , a_2 , \dots, a_m \, \| \, a_2 a_1^{n_1} a_2^{-1} = a_1^{n_2} \rangle ,$$ where $n_1, n_2$ are nonzero integers. Suppose $W$ is a nonempty word over ${\mathcal{A}}^{\pm 1}$ such that $W \overset {{\mathcal{G}}_3 } = 1$, $n \ge 0$ is an integer and $W$ is a product of at most $n$ conjugates of the words $(a_2 a_1^{n_1} a_2^{-1} a_1^{-n_2})^{\pm 1}$. Then, as follows from the arguments of the proof of Lemma \[vk\], there exists a disk diagram ${\Delta}$ over presentation (1.4) such that ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and $| {\Delta}(2) | \le n$. Without loss of generality, we may assume that ${\Delta}$ is reduced. Consider the word $\rho_{a_1}(W)$ over ${\mathcal{A}}^{\pm 1} \setminus \{ a_1, a_1^{-1} \}$ which is obtained from $W$ by erasing all occurrences of the letters $a_1^{\pm 1}$.
An oriented edge $e$ of ${\Delta}$ is called an [*$a_i$-edge*]{} if ${\varphi}(e) = a_i^{\pm 1}$. We also consider a tree $\rho_{a_1}({\Delta})$ obtained from ${\Delta}$ by contraction of all $a_1$-edges of ${\Delta}$ into points and subsequent identification of all pairs of edges $e, f$ such that $e_- = f_-$ and $e_+ = f_+$. If an edge $e'$ of $\rho_{a_1}({\Delta})$ is obtained from an edge $e$ of ${\Delta}$ this way, we set ${\varphi}(e') := {\varphi}(e)$. It is easy to check that this definition is correct. This labeling function turns the tree $\rho_{a_1}({\Delta})$ into a disk diagram over presentation $\langle \, a_2 , \dots , a_m \, \| \, \varnothing \, \rangle $.
Two faces $\Pi_1, \Pi_2$ in a disk diagram ${\Delta}$ over (1.4) are termed [*related*]{}, denoted $\Pi_1 \leftrightarrow \Pi_2$, if there is an $a_2$-edge $e$ such that $e \in {\partial}\Pi_1$ and $e^{- 1} \in {\partial}\Pi_2$. Consider the minimal equivalence relation $\sim_2$ on the set of faces of ${\Delta}$ generated by the relation $\leftrightarrow$. An [*$a_i$-band*]{}, where $i >1$, is a minimal subcomplex $\Gamma$ of ${\Delta}$ ($\Gamma$ is not necessarily simply connected) that contains an $a_i$-edge $f$, and, if there is a face $\Pi$ in ${\Delta}$ with $f \in ({\partial}\Pi)^{\pm 1}$, then $\Gamma$ must contain all faces of the equivalence class $[\Pi]_{\sim_2}$ of $\Pi$. Hence, an $a_i$-band $\Gamma$ is either a subcomplex consisting of a single nonoriented edge, denoted $\{ f, f^{- 1} \}$, where $f$ is an $a_i$-edge, $i >1$, and $f, f^{- 1} \in {\partial}{\Delta}$, or $\Gamma$ consists of all faces of an equivalence class $[\Pi]_{\sim_2}$ when $i=2$. Clearly, the latter case is possible only if $i =2$. In this latter case, if $\Gamma$ contains faces but $\Gamma$ has no face in $[\Pi]_{\sim_2}$ whose boundary contains an $a_2$-edge $f$ with $f^{-1} \in {\partial}{\Delta}$, $\Gamma$ is called a [*closed*]{} $a_2$-band.
\[b1\] Suppose that ${\Delta}$ is a reduced disk diagram over presentation [pr3b]{}. Then there are no closed $a_2$-bands in ${\Delta}$ and every $a_i$-band $\Gamma$ of ${\Delta}$, $i >1$, is a disk subdiagram of ${\Delta}$ such that $${\partial}|_{(f_1)_-} \Gamma = f_1 s_1 f_2 s_2 ,$$ where $f_1, f_2$ are edges of ${\partial}{\Delta}$ with ${\varphi}(f_1) = {\varphi}(f_2)^{-1} = a_i$, $s_1, s_2$ are simple paths with ${\varphi}(s_1) \equiv a_1^{k n_1}$, ${\varphi}(s_2) \equiv a_1^{-k n_2}$ for some integer $k \ge 0$, $\Gamma $ contains $k$ faces, and ${\partial}\Gamma$ is a simple closed path when $k >0$.
Since ${\Delta}$ is reduced, it follows that if two faces $\Pi_1, \Pi_2$ are related by the relation $\sim_2$, then $\Pi_1$ is not a mirror copy of $\Pi_2$, i.e., ${\varphi}({\partial}\Pi_1 ) \not\equiv {\varphi}({\partial}\Pi_2 )^{-1}$.
Assume that there is a closed $a_2$-band $\Gamma_0$ in ${\Delta}$. Then it follows from the above remark that there is a disk subdiagram ${\Delta}_0$ of ${\Delta}$, surrounded by $\Gamma_0$, such that ${\varphi}({\partial}{\Delta}_0) \equiv a^{k_0 n_i}_1$, where $i =1$ or $i=2$ and $k_0 = | \Gamma_0(2) | >0$. This, however, contradicts [Magnus's Freiheitssatz]{}, see [@LS], [@MKS]. Recall that [Magnus's Freiheitssatz]{} for a one-relator group presentation $${\mathcal{G}}_0 = \langle \, {\mathcal{A}}\, \| \, R=1 \, \rangle$$ claims that every letter $a \in {\mathcal{A}}^{\pm 1}$ that appears in a cyclically reduced word $R$ over ${\mathcal{A}}^{\pm 1}$ will also appear in $W^{\pm 1}$ whenever $W$ is a nonempty reduced word with $W \overset {{\mathcal{G}}_0} = 1$.
Hence, no $a_2$-band $\Gamma$ in ${\Delta}$ is closed and, therefore, ${\partial}\Gamma = f_1 s_1 f_2 s_2$ as described in Lemma’s statement. If $k >0$ and ${\partial}\Gamma$ is not a simple closed path, then a proper subpath of ${\partial}\Gamma$ bounds a disk subdiagram ${\Delta}_1$ such that ${\varphi}({\Delta}_1)$ is a proper subword of ${\varphi}( {\partial}\Gamma ) \equiv a_2 a_1^{k n_1} a_2^{-1} a_1^{-k n_2}$, where $k >0$. Then it follows from Lemma \[vk\] that either $a_1^{\ell} \overset {{\mathcal{G}}_3} = 1 $, where $\ell \ne 0$, or $a_1^{\ell'} a_2^{\varepsilon}\overset {{\mathcal{G}}_3} = 1 $, where ${\varepsilon}= \pm 1$. The first equality contradicts [Magnus’s Freiheitssatz]{} and the second one is impossible in the abelianization of ${\mathcal{G}}_3$.
The factorization ${\partial}|_{(f_1)_-} \Gamma = f_1 s_1 f_2 s_2$ of Lemma \[b1\] will be called the [*standard boundary*]{} of an $a_i$-band $\Gamma$, where $i >1$. Note that $|s_1| =|s_2| =0$ is the case when $k =0$, i.e., $\Gamma$ contains no faces and ${\partial}\Gamma = f_1 f_2$ with $f_2^{-1} = f_1$.
Alternatively, the tree $\rho_{a_1}({\Delta})$ can be constructed from ${\Delta}$ by contracting the paths $s_1, s_2$ of the standard boundary of every $a_2$-band $\Gamma$ of ${\Delta}$ into points, identifying the edges $f_1, f_2^{-1}$ of $({\partial}\Gamma)^{\pm 1}$, and contracting into points all $a_1$-edges $e$ with $e \in {\partial}{\Delta}$.
Let $s$ be a path in a disk diagram ${\Delta}_0$ over presentation [pr3b]{}. We say that $s$ is a [*special arc*]{} of ${\Delta}_0$ if either $|s| = 0$ or $|s| > 0$, $s$ is a reduced simple path, and $s$ consists entirely of $a_1$-edges. In particular, if $s$ is a special arc then every subpath of $s, s^{-1}$ is also a special arc, and $s_- \ne s_+$ whenever $|s| > 0$.
\[arc\] Suppose that ${\Delta}_0$ is a reduced disk diagram over presentation [pr3b]{} and $s, t$ are special arcs in ${\Delta}_0$ such that $s_+ = t_-$. Then there are factorizations $s = s_1 s_2$ and $t = t_1 t_2$ such that $s_2 = t_1^{-1}$ and $s_1 t_2$ is a special arc, see Fig. 6.1.
(Fig. 6.1: special arcs $s = s_1 s_2$ and $t = t_1 t_2$ with $s_2 = t_1^{-1}$, so that $s_1 t_2$ is a special arc.)
First we observe that if ${\Delta}_1$ is a disk subdiagram of ${\Delta}_0$ such that every edge of ${\partial}{\Delta}_1$ is an $a_1$-edge, then it follows from Lemma \[b1\] that ${\Delta}_1$ contains no faces, i.e., $ | {\Delta}_1(2) | = 0$, and so ${\Delta}_1$ is a tree.
We now prove this Lemma by induction on the length $| t| \ge 0$. If $| t| = 0$ then our claim is true with $s_1 = s$ and $t_2 = t$. Assume that $| t| > 0$ and $t = re$, where $e$ is an edge. Since $r$ is also a special arc with $|r| < |t|$, the induction hypothesis applies to the pair $s, r$ and yields factorizations $s = s'_1 s'_2$ and $r= r_1 r_2$ such that $s'_2 = (r_1)^{-1}$ and $s'_1 r_2$ is a special arc. If $s'_1 r_2 e$ is also a special arc, then the factorizations $s = s'_1 s'_2$ and $t = t_1 t_2$, where $t_1 = r_1$ and $t_2 = r_2 e$, have the desired property and the induction step is complete.
Hence, we may assume that $s'_1 r_2 e$ is not a special arc. Note that if $|r_2| > 0$, then the path $s'_1 r_2 e$ is reduced because $s'_1 r_2 $ and $r_2 e$ are special arcs. Assume that $s'_1 r_2 e$ is not reduced. Then $|r_2| = 0$ and the last edge of $s_1'$ is $e^{-1}$. Denote $s'_1 =s'_{11} e^{-1}$. Then, letting $s_1 := s'_{11}$, $s_2 := t^{-1}$ and $t = t_1 t_2$, where $t_1 := t$ and $| t_2 | = 0$, we have $s_2 = t^{-1}$ and $s_1 t_2$ is a desired special arc. Thus, we may assume that the path $s'_1 r_2 e$ is reduced but not simple. Note that the path $s'_1 r_2 e$ consists entirely of $a_1$-edges. Since $s'_1 r_2$ and $ r_2e$ are simple, it follows that $e_+$ belongs to $s_1'$ and defines a factorization $s'_1 = s_{11} s_{12}$, where $| s_{12}| >0$. Hence, the path $s_{12} r_2 e$ is a simple closed path which bounds a disk subdiagram ${\Delta}_1$ whose boundary ${\partial}{\Delta}_1$ consists of $a_1$-edges. By the observation made above, ${\Delta}_1$ is a tree, which, in view of the fact that ${\partial}{\Delta}_1$ is a simple closed path, implies that ${\partial}{\Delta}_1 = e^{-1} e$. Therefore, $| r_2 | = 0$ and $ s_{12} = e^{-1}$. This means that the path $s'_1 r_2 e$ is not reduced, contrary to our assumption. This contradiction completes the proof.
Let $U$ be a word over ${\mathcal{A}}^{\pm 1}$. If $a \in {\mathcal{A}}$ is a letter, then the [*$a$-length* ]{} $|U|_a$ of $U$ is the number of occurrences of $a$ and $a^{- 1}$ in $U$. We also define the [*complementary $a$-length* ]{} by $|U|_{\bar a} := |U| - |U|_{a}$.
From now on we assume, unless stated otherwise, that $W$ is a nonempty word over ${\mathcal{A}}^{\pm 1}$ such that $W \overset{{\mathcal{G}}_3} = 1$ and ${\Delta}$ is a reduced disk diagram over presentation [pr3b]{} such that ${\varphi}( {\partial}|_0 {\Delta}) \equiv W$.
The definitions of the path $P_W$, of the map ${\alpha}: P_W \to {\Delta}$ and those of related notions, given in Sect. 2, are retained and are analogous to those used in Sects. 4–5.
\[b2\] Suppose that $|W|_{\bar a_1} >2$. Then there exist vertices $v_1, v_2 \in P_W$ such that $v_1 < v_2$ and if $P_W(\operatorname{\textsf{fact}}, v_1, v_2) = p_1 p_2 p_3$, then the following hold true. There is a special arc $r$ in ${\Delta}$ such that $r_- = {\alpha}(v_1)$, $r_+ = {\alpha}(v_2)$, ${\varphi}(r) \equiv a_1^{\ell}$ for some $\ell$, and $$\min(|{\varphi}(p_2)|_{\bar a_1}, |{\varphi}(p_1)|_{\bar a_1}+|{\varphi}(p_3)|_{\bar a_1}) \ge \tfrac 13 |{\varphi}({\partial}|_0 {\Delta})|_{\bar a_1} = \tfrac 13 |W|_{\bar a_1} .$$ In addition, if $| \vec {\Delta}(1) |_{a_1} := |W|_{a_1} + (|n_1| + |n_2|)|{\Delta}(2)| $ denotes the number of $a_1$-edges of ${\Delta}$, then $|r| \le \tfrac 12 | \vec {\Delta}(1) |_{a_1}$.
Consider the tree $\rho_{a_1}({\Delta})$ constructed from ${\Delta}$ as above by collapsing $a_1$-edges $e$ into points and subsequent identification of edges $f, g$ with $f_- = g_-$, $f_+ = g_+$. Then $\rho_{a_1}({\Delta})$ is a disk diagram over with no faces and there is a surjective continuous cellular map $$\beta_1 : {\Delta}\to \rho_{a_1}({\Delta})$$ which preserves labels of edges $e$ with ${\varphi}(e) \ne a_1^{\pm 1}$ and sends $a_1$-edges into vertices. It is also clear that if $${\alpha}_{1} : P_{\rho_{a_1}(W)} \to \rho_{a_1}({\Delta})$$ is the corresponding to the pair $(\rho_{a_1}(W), \rho_{a_1}({\Delta}))$ map, then there is a cellular continuous surjective map $$\beta : P_W \to P_{\rho_{a_1}(W)}$$ which preserves labels of edges $e$ with ${\varphi}(e) \ne a_1^{\pm 1}$, sends $a_1$-edges into vertices, and makes the following diagram commutative.
$$\renewcommand{\arraystretch}{1.5}
\begin{array}{ccccc}
P_W &\xrightarrow{\hskip0.8cm {{\alpha}} \hskip0.6cm} & {\Delta}\\
\Big\downarrow\rlap{$\beta$} &&\Big\downarrow\rlap{$\beta_1$}&\\
P_{\rho_{a_1}(W)} &\xrightarrow{\hskip0.8cm {\alpha}_1 \hskip0.6cm}&\rho_{a_1}({\Delta})\\
\end{array}$$
By Lemma \[lem1\] applied to $\rho_{a_1}({\Delta})$, there are vertices $u_1, u_2 \in P_{\rho_{a_1}(W)}$ such that ${\alpha}_1(u_1) = {\alpha}_1(u_2)$ and if $P_{\rho_{a_1}(W)}(\operatorname{\textsf{fact}}, u_1, u_2) = s_1 s_2 s_3$, then $$\label{in7}
\min(|s_2|, |s_1|+|s_3|) \ge \tfrac 13 | {\partial}\rho_{a_1}({\Delta})| = \tfrac 13 |W|_{\bar a_1} \ .$$ It follows from definitions and Lemma \[b1\] that there are some vertices $v_1, v_2 \in P_W$ such that $\beta(v_j) = u_j$, $j=1,2$, the vertices ${\alpha}(v_1), {\alpha}(v_2)$ belong to ${\partial}\Gamma$, where $\Gamma$ is an $a_i$-band in ${\Delta}$, $i >1$, and the vertices ${\alpha}(v_1), {\alpha}(v_2)$ can be connected along ${\partial}\Gamma$ by a simple reduced path $r$ such that ${\varphi}(r) \equiv a_1^{\ell}$ for some integer $\ell$, where $\ell = 0$ if $i >2$. Clearly, $r$ is a special arc of ${\Delta}$. Denote $P_{W}(\operatorname{\textsf{fact}}, v_1, v_2) = p_1 p_2 p_3$. Then it follows from definitions that $| {\varphi}(p_i) |_{\bar a_1} = |s_i|$, $i=1,2,3$, hence the inequality yields that $$\min( | {\varphi}(p_2) |_{\bar a_1}, | {\varphi}(p_1) |_{\bar a_1} + | {\varphi}(p_3) |_{\bar a_1} ) \ge \tfrac 13 | {\partial}\rho_{a_1}({\Delta})| = \tfrac 13 |W|_{\bar a_1} ,$$ as required.
Furthermore, since $r$ is simple, reduced and $r_- \ne r_+$ unless $|r| =0$, the path $r$ contains every $a_1$-edge $e$ of ${\Delta}$ at most once and if $r$ contains $e$ then $r$ contains no $e^{-1}$. Since the total number $| \vec {\Delta}(1) |_{a_1}$ of $a_1$-edges of ${\Delta}$ is equal to $$| \vec {\Delta}(1) |_{a_1} = |{\varphi}({\partial}{\Delta})|_{a_1} + (|n_1| + |n_2|)|{\Delta}(2)| ,$$ we obtain the desired inequality $$|r| = |{\varphi}(r)|_{a_1} \le \tfrac 12 | \vec {\Delta}(1) |_{a_1} = \tfrac 12 |W|_{a_1} + \tfrac 12(|n_1| + |n_2|)|{\Delta}(2)| .$$
A quadruple $$b = (b(1), b(2), b(3), b(4))$$ of integers $b(1), b(2), b(3), b(4)$ is called a [*bracket* ]{} for the pair $(W, {\Delta})$ if $b(1), b(2)$ satisfy the inequalities $0 \le b(1)\le b(2) \le |W|$, and, in the notation $$P_W(\operatorname{\textsf{fact}}, b(1), b(2)) = p_1p_2p_3 ,$$ the following holds true. There exists a special arc $r(b)$ in ${\Delta}$ such that $r(b)_- = {\alpha}(b(1))$, $r(b)_+ = {\alpha}(b(2))$, and ${\varphi}(r(b)) \overset 0 = a_1^{b(3)}$. Furthermore, if ${\Delta}_b$ is the disk subdiagram of ${\Delta}$ defined by ${\partial}|_{b(1)} {\Delta}_b = {\alpha}(p_2) r(b)^{-1}$ (such ${\Delta}_b$ is well defined as $r(b)$ is a special arc in ${\Delta}$), then $|{\Delta}_b(2)| = b(4)$, see Fig. 6.2.

(Fig. 6.2: the disk subdiagram ${\Delta}_b$ with boundary ${\alpha}(p_2) r(b)^{-1}$, the boundary arcs ${\alpha}(p_1)$, ${\alpha}(p_3)$, and the marked vertices ${\alpha}(0)$, ${\alpha}(b(1))$, ${\alpha}(b(2))$.)
This disk subdiagram ${\Delta}_b$ of ${\Delta}$, defined by ${\partial}|_{b(1)} {\Delta}_b = {\alpha}(p_2) r(b)^{-1}$, is associated with the bracket $b$. The boundary subpath ${\alpha}(p_2)$ of ${\Delta}_b$ is denoted $a(b)$ and called the [*arc*]{} of the bracket $b$. The path $r(b)$ is termed the [*complementary arc*]{} of $b$.
For example, $b_v = (v, v, 0, 0)$ is a bracket for every vertex $v$ of $P_W$, called a [*starting*]{} bracket at $v = b(1)$. Note that $a(b) = {\alpha}(v)$, $r(b) = {\alpha}(v)$, and ${\Delta}_b= \{ {\alpha}(v) \}$.
The [*final*]{} bracket for $(W, {\Delta})$ is $c = (0, |W|, 0, | {\Delta}(2) | )$. Observe that $a(c) = {\partial}|_{0} {\Delta}$, $r(c) = \{ {\alpha}(0) \}$ and $ {\Delta}_c = {\Delta}$.
Let $B$ be a finite set of brackets for the pair $(W, {\Delta})$, perhaps, $B$ is empty. We say that $B$ is a [*bracket system*]{} for $(W, {\Delta})$ if, for every pair $b, c \in B$ of distinct brackets, either $b(2) \le c(1)$ or $c(2) \le b(1)$.
Now we describe three kinds of elementary operations over a [bracket system]{} $B$ for the pair $(W, {\Delta})$: additions, extensions, and mergers.
*Additions*.
Suppose $b$ is a starting bracket, $b \not\in B$, and $B \cup \{ b\}$ is a [bracket system]{}. Then we may add $b$ to $B$ thus making an [*addition*]{} operation over $B$.
*Extensions*.
Suppose $B$ is a [bracket system]{}, $b \in B$ is a bracket and $e_1 a(b) e_2$ is a subpath of the boundary path ${\partial}|_{0} {\Delta}$, where $a(b)$ is the arc of $b$ and $e_1, e_2$ are edges one of which could be missing.
Assume that ${\varphi}(e_1)= a_1^{\varepsilon}$, where ${\varepsilon}= \pm 1$. Since $e_1$ and $r(b)$ are special arcs of ${\Delta}$ and $e_1 r(b)$ is a path, Lemma \[arc\] applies and yields that either $e_1 r(b)$ is a special arc or $r(b) = e^{-1}_1 r_1$, where $r_1$ is a subpath of $r(b)$ and $r_1$ is a special arc. In the first case, define $r(b') := e_1 r(b)$. In the second case, we set $r(b') := r_1$. Note that, in either case, $r(b')$ is a special arc and ${\varphi}( r(b') ) \overset 0 = a_1^{b(3)+{\varepsilon}}$. In either case, we consider a bracket $b'$ such that $$b'(1) = b(1)-1 , \ b'(2) = b(2) , \ b'(3) = b(3)+{\varepsilon}, b'(4) = b(4) .$$ In either case, we have that $a(b') = e_1 a(b)$, $r(b')$ is defined as above, and ${\Delta}_{b'} $ is the disk subdiagram whose boundary is ${\partial}{\Delta}_{b'} = a(b') r(b')^{-1}$. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the left). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
Similarly, assume that ${\varphi}(e_2)= a_1^{\varepsilon}$, where ${\varepsilon}= \pm 1$. Since $r(b)$ is a simple path, it follows as above from Lemma \[arc\] that either $r(b)e_2 $ is a special arc or $r(b) = r_2 e^{-1}_2$, where $r_2$ is a subpath of $r(b)$ and $r_2$ is a special arc. In the first case, define $r(b') := r(b)e_2$. In the second case, we set $r(b') := r_2$. Note that, in either case, $r(b')$ is a special arc and ${\varphi}( r(b') ) \overset 0 = a_1^{b(3)+{\varepsilon}}$. In either case, we consider a bracket $b'$ such that $$b'(1) = b(1) , \ b'(2) = b(2)+1 , \ b'(3) = b(3)+{\varepsilon}, \ b'(4) = b(4) .$$ In either case, we have that $a(b') = a(b)e_2 $, $r(b')$ is defined as above, and ${\Delta}_{b'} $ is the disk subdiagram whose boundary is ${\partial}{\Delta}_{b'} = a(b') r(b')^{-1}$. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the right). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
Now suppose that ${\varphi}(e_1) = {\varphi}(e_2)^{-1}=a_2^{{\varepsilon}}$, ${\varepsilon}= \pm 1$, and there is an $a_2$-band $\Gamma$ whose standard boundary is ${\partial}|_{(e_i)_-} \Gamma = e_i s_i e_{3-i} s_{3-i}$, where $i =1$ if ${\varepsilon}=1$ and $i =2$ if ${\varepsilon}=-1$. Recall that the standard boundary of an $a_2$-band starts with an edge $f$ with ${\varphi}(f) = a_2$.
First we assume that $|b(3)|\ne 0$, i.e., ${\varphi}(s_i) \overset 0 \neq 1$. By Lemma \[b1\], the paths $s_1, s_2$ are special arcs. Moreover, $\Gamma$ consists of $|b(3)|/|n_{i}|$ faces. Recall that ${\Delta}$ is reduced. It follows from Lemma \[arc\] applied to special arcs $r(b)$ and $s_i^{-1}$ that $r(b) = s_i$. Consider a bracket $b'$ such that $b'(1) = b(1)-1$, $b'(2) = b(2)+1$, $b'(3) = (|n_1^{-1} n_2|)^{{\varepsilon}} b(3)$, and $b'(4) = b(4) + |b(3)|/|n_{i}|$. Note that $a(b') = e_1 a(b)e_2$ and $r(b') = s_{3-i}^{-1}$. We say that $b'$ is obtained from $b$ by an extension of type 2. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 2.
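To illustrate the arithmetic of this operation: if, say, $n_1 = 2$, $n_2 = 3$, ${\varepsilon}= 1$ (so $i = 1$) and $b(3) = 4$, then $k = 2$, the band $\Gamma$ consists of $|b(3)|/|n_1| = 2$ faces, and the new bracket has $b'(3) = \tfrac 32 \cdot 4 = 6$ and $b'(4) = b(4) + 2$, in agreement with ${\varphi}(s_2^{-1}) \equiv a_1^{6}$.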
Assume that $e_1 = e_2^{-1}$ and ${\varphi}(e_1) \ne a_1^{\pm 1}$. Then we may consider a bracket $b'$ such that $b'(1) = b(1)-1$, $b'(2) = b(2)+1$, $b'(3) = b(3)$, and $b'(4) = b(4)$. Since the path $a(b)$ is closed, it follows from Lemma \[vk\] that ${\varphi}(a(b)) \overset {{\mathcal{G}}_3} = 1$ and so $b'(3) = b(3)=0$ by [Magnus’s Freiheitssatz]{}. Note that $a(b') = e_1 a(b)e_2$ and $| r(b')| =0$. We say that $b'$ is obtained from $b$ by an extension of type 3. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [bracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 3.
*Mergers*.
Suppose that $b_1, b_2$ are distinct brackets in $B$ such that $b_1(2) = b_2(1)$. Consider the disk diagram ${\Delta}_{b_i}$, associated with the bracket $b_i$, $i=1,2$, and let ${\partial}|_{b_i(1)} {\Delta}_{b_i} = a(b_i) r(b_i)^{-1}$ be the boundary of ${\Delta}_{b_i}$, where $a(b_i), r(b_i)$ are the arcs of $b_i$, $i=1,2$.
Since $r(b_1), r(b_2)$ are special arcs with $r(b_1)_+ = r(b_2)_-$, it follows from Lemma \[arc\] that there are factorizations $r(b_1) = r_1 r_0$, $r(b_2) = r_0^{-1} r_2$ such that the path $r(b') := r_1 r_2$ is a special arc of ${\Delta}$. Note that $r(b')_- = {\alpha}(b_1(1))$, $r(b')_+ = {\alpha}(b_2(2))$, and the disk subdiagram ${\Delta}_{0}$ of ${\Delta}$ defined by ${\partial}{\Delta}_{0} = a(b_1)a(b_2) r(b')^{-1}$ contains $b_1(4) + b_2(4)$ faces. Therefore, we may consider a bracket $b'$ such that $b'(1) = b_1(1)$, $b'(2) = b_2(2)$, $b'(3) = b_1(3) + b_2(3)$, and $b'(4) = b_1(4) + b_2(4)$. Note that $a(b') = a(b_1)a(b_2)$, $r(b') = r_1 r_2$, and ${\Delta}_{b'} = {\Delta}_0$. We say that $b'$ is obtained from $b_1, b_2$ by a merger operation. If $(B \setminus \{ b_1, b_2 \}) \cup \{ b' \}$ is a [bracket system]{}, then taking both $b_1$, $b_2$ out of $B$ and putting $b'$ in $B$ is a [*merger* ]{} operation over $B$.
We will say that additions, extensions, and mergers, as defined above, are [*[elementary operation]{}s*]{} over brackets and [bracket system]{}s for $(W, {\Delta})$.
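For instance (with hypothetical values used only to illustrate the bookkeeping, not taken from a particular diagram), a merger of brackets $b_1 = (0,3,2,1)$ and $b_2 = (3,7,-1,2)$ with $b_1(2) = b_2(1) = 3$ produces $b' = (0,7,1,3)$: the arcs $a(b_1)$, $a(b_2)$ concatenate, the $a_1$-exponents of the complementary arcs add up to $b'(3) = 2 + (-1) = 1$, and the face counts add up to $b'(4) = 1 + 2 = 3$.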
Assume that one [bracket system]{} $B_\ell$ is obtained from another [bracket system]{} $B_0$ by a finite sequence $\Omega$ of [elementary operation]{}s and $B_0, B_1, \dots, B_\ell$ is the corresponding to $\Omega$ sequence of [bracket system]{}s. As before, such a sequence $B_0, B_1, \dots, B_\ell$ of [bracket system]{}s is called [*operational*]{}. We say that the sequence $B_0, B_1, \dots, B_\ell$ has [*size bounded by*]{} $(k_1, k_2)$ if $\ell \le k_1$ and, for every $i$, the number of brackets in $B_i$ is at most $k_2$. Whenever it is not ambiguous, we also say that $\Omega$ has size bounded by $(k_1, k_2)$ if so does the corresponding to $\Omega$ sequence $B_0, B_1, \dots, B_\ell$ of [bracket system]{}s.
We now study properties of brackets and [bracket system]{}s.
\[ca\] If $b$ is a bracket for the pair $(W, {\Delta})$, then $| b(3) | \le \tfrac 12 | \vec {\Delta}(1) |_{a_1}$.
By the definition of a bracket, the complementary arc $r(b)$ of $b$ is a special arc in ${\Delta}$, hence, either $|r(b)| = 0$ or, otherwise, $|r(b)| > 0$ and $r(b)$ is a simple, reduced path consisting entirely of $a_1$-edges. Since the total number of $a_1$-edges in ${\Delta}$ is $| \vec {\Delta}(1) |_{a_1}$, it follows that $| b(3) | \le | r(b) | \le \tfrac 12 | \vec {\Delta}(1) |_{a_1}$.
\[NUP\] Suppose that $b$, $c$ are two brackets for the pair $(W, {\Delta})$ and $b(1) = c(1)$, $b(2) = c(2)$. Then $b =c$.
Consider the complementary arcs $r(b)$, $r(c)$ of brackets $b$, $c$, resp. Since $r(b)_- = r(c)_-$, $r(b)_+ = r(c)_+$, and $r(b)$, $r(c)^{-1}$ are special arcs of ${\Delta}$, it follows from Lemma \[arc\] applied to special arcs $r(b)$, $r(c)^{-1}$ that $r(b) = r(c)$. This means that $b(3) = c(3)$, ${\Delta}_b = {\Delta}_c$ and $b(4) = c(4)$. Therefore, $b =c$, as desired.
\[lem3b\] There exists a sequence of [elementary operation]{}s that converts the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} and has size bounded by $(4|W|, |W|)$.
For every $k$ with $0 \le k \le |W|$, consider a starting bracket $(k,k,0,0)$ for $(W, {\Delta})$. Making $|W|+1$ additions, we get a [bracket system]{}$$B_{W} = \{ (k,k,0,0) \mid 0 \le k \le |W| \}$$ of $|W|+1$ starting brackets. Now, looking at ${\Delta}$, we can easily find a sequence of extensions and mergers that converts $B_{W}$ into the final [bracket system]{} $B_{\ell}$. To estimate the total number of [elementary operation]{}s, we note that the number of additions is $|W|+1$. The number of extensions is at most $|W|$ because every extension applied to a [bracket system]{} $B$ increases the number $$\eta(B) := \sum_{b \in B} (b(2)-b(1) )$$ by 1 or 2 and $\eta( B_{W}) = 0$, while $\eta( B_{\ell}) = |W|$. The number of mergers is $ |W|$ because the number of brackets $|B|$ in a [bracket system]{} $B$ decreases by 1 if $B$ is obtained by a merger and $| B_{W} | = |W|+1$, $|B_{\ell} | = 1$. Hence, $ \ell \le 3|W|+1 \le 4|W|$, as required.
\[lem4b\] Suppose there is a sequence $\Omega$ of [elementary operation]{}s that converts the empty [bracket system]{} $E$ for $(W, {\Delta})$ into the final [bracket system]{} $F$ and has size bounded by $(k_1, k_2)$. Then there is also a sequence of [elementary operation]{}s that changes $E$ into $F$ and has size bounded by $(7|W|, k_2)$.
Assume that the sequence $\Omega$ has an addition operation which introduces a starting bracket $c = (k,k,0,0)$ with $0 \le k \le |W|$. Since the final [bracket system]{} contains no starting bracket, $c$ must disappear and an [elementary operation]{} is applied to $c$. If a merger is applied to $c$ and another bracket $b$ and the merger yields ${\widehat}c$, then ${\widehat}c = b$. This means that the addition of $c$ and the merger could be skipped without affecting the sequence otherwise. Note that the size of the new sequence $\Omega'$ is bounded by $(k_1-2, k_2)$. Therefore, we may assume that no merger is applied to a starting bracket in $\Omega$. We also observe that, by the definition, an extension of type 2 is not applicable to a starting bracket (because of the condition $b(3) \ne 0$).
Thus, by an obvious induction on $k_1$, we may assume that, for every starting bracket $c$ which is added by $\Omega$, an extension of type 1 or 3 is applied to $c$.
Now we will show that, for every starting bracket $c$ which is added by $\Omega$, there are at most 2 operations of addition of $c$ in $\Omega$. Arguing on the contrary, assume that $c_1, c_2, c_3$ are brackets equal to $c$ whose additions are used in $\Omega$. By the above remark, for every $i=1,2,3$, an extension of type 1 or 3 is applied to $c_i$, resulting in a bracket ${\widehat}c_i$.
Let $c_1, c_2, c_3$ be listed in the order in which the brackets ${\widehat}c_1, {\widehat}c_2, {\widehat}c_3$ are created by $\Omega$. Note that if ${\widehat}c_1$ is obtained from $c_1$ by an extension of type 3, then $${\widehat}c_1(1) = c_1(1)-1, \quad {\widehat}c_1(2) = c_1(2)+1 .$$ This means that brackets ${\widehat}c_2$, ${\widehat}c_3$ could not be created by extensions after ${\widehat}c_1$ appears, as $b(2) \le c(1) $ or $b(1) \ge c(2) $ for distinct brackets $b, c \in B$ of any [bracket system]{} $B$. This observation proves that ${\widehat}c_1$ is obtained from $c_1$ by an extension of type 1. Similarly to the foregoing argument, we can see that if ${\widehat}c_1$ is obtained by an extension of type 1 on the left/right, then ${\widehat}c_2$ must be obtained by an extension on the right/left, resp., and that ${\widehat}c_3$ cannot be obtained by any extension. This contradiction proves that it is not possible to have in $\Omega$ more than two additions of any starting bracket $c$. Thus, the number of additions in $\Omega$ is at most $ 2|W|+2$. As in the proof of Lemma \[lem3b\], the number of extensions is $\le |W|$ and the number of mergers is at most $2|W|+1$. Hence, the total number of [elementary operation]{}s is at most $ 5 |W|+3 \le 7|W|$, as desired.
\[lem5b\] Let there be a sequence $\Omega$ of [elementary operation]{}s that transforms the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} and has size bounded by $(k_1, k_2)$ and let $c$ be a starting bracket for $(W, {\Delta})$. Then there is also a sequence of [elementary operation]{}s that converts the [bracket system]{} $\{ c \}$ into the final [bracket system]{} and has size bounded by $(k_1+1, k_2+1)$.
The proof is analogous to the proof of Lemma \[lem5\]. The arguments in the cases when $c(1) = 0$ or $c(1) = |W|$ are retained verbatim.
Assume that $0 < c(1) < |W|$. As before, let $B_0, B_1, \dots, B_\ell$ be the corresponding to $\Omega$ operational sequence of [bracket system]{}s, where $B_0$ is empty and $B_\ell$ is final. Let $B_{i^*}$ be the first [bracket system]{} of the sequence such that $B_{i^*} \cup \{ c \}$ is not a [bracket system]{}. The existence of $B_{i^*}$ follows from the facts that $B_0 \cup \{ c \}$ is a [bracket system]{} and $B_\ell \cup \{ c \}$ is not. Since $B_0 \cup \{ c \}$ is a [bracket system]{}, it follows that ${i^*} \ge 1$ and $B_{{i^*}-1} \cup \{ c \}$ is a [bracket system]{}. Since $B_{{i^*}-1} \cup \{ c \}$ is a [bracket system]{} and $B_{{i^*}} \cup \{ c \}$ is not, there is a bracket $b \in B_{{i^*}}$ such that $b(1) < c(1) < b(2)$ and $b$ is obtained from a bracket $d_1 \in B_{{i^*}-1}$ by an extension or $b$ is obtained from brackets $d_1, d_2 \in B_{{i^*}-1}$ by a merger. In either case, it follows from definitions of [elementary operation]{}s that $d_j(k) = c(1)$ for some $j,k \in \{ 1,2\}$. Hence, we can use a merger applied to $d_j$ and $c$ which would result in $d_j$, i.e., in elimination of $c$ from $B_{{i^*}-1} \cup \{ c \}$ and in getting thereby $B_{{i^*}-1}$ from $B_{{i^*}-1} \cup \{ c \}$. Now we can see that the original sequence of [elementary operation]{}s, together with the merger $ B_{{i^*}-1} \cup \{ c \} \to B_{{i^*}-1}$ can be used to produce the following operational sequence of [bracket system]{}s $$B_{0} \cup \{ c \}, \dots, B_{{i^*}-1} \cup \{ c \}, B_{{i^*}-1}, \dots,
B_\ell .$$ Clearly, the size of this new sequence is bounded by $(k_1+1, k_2+1)$, as desired.
\[2bb\] Let $P_W = p_1 p_2$, let the path ${\alpha}(p_2)$ be a special arc of ${\Delta}$ and let there exist a sequence $\Omega$ of [elementary operation]{}s that transforms the empty [bracket system]{} for $( W, {\Delta})$ into the final [bracket system]{} and has size bounded by $ (\ell_1, \ell_2)$. Then there is also a sequence $\Omega_{p_1}$ of [elementary operation]{}s such that $\Omega_{p_1}$ transforms the empty [bracket system]{} for $( W, {\Delta})$ into the [bracket system]{} $\{ (0, |p_1|, -k, |{\Delta}(2)|) \}$, where ${\varphi}(p_2) \overset 0 = a_1^k$, and $\Omega_{p_1}$ has size bounded by $(\ell_1, \ell_2)$. In addition, for every bracket $b \in B$, where $B$ is an intermediate [bracket system]{} of the corresponding to $\Omega_{p_1}$ operational sequence of [bracket system]{}s, it is true that $b(2) \le |p_1|$.
Let $B_0, B_1, \dots, B_\ell$ be the corresponding to $\Omega$ operational sequence of [bracket system]{}s, where $B_0$ is empty and $B_\ell = \{ (0, |W|, 0, |{\Delta}(2)| ) \}$ is the final [bracket system]{} for $( W, {\Delta})$.
For every [bracket system]{} $B_i$, we will construct an associated [bracket system]{} $B'_i$ as follows. Let $b \in B_i$ be a bracket. If $b(1) > |p_1|$, then we disregard $b$.
Suppose that $b(1) \le |p_1|$ and $b(2) > |p_1|$. Let $p_b$ be a subpath of $p_2$ such that $(p_b)_- = (p_2)_-$, $(p_b)_+ = b(2)$ and let ${\varphi}(p_b) \overset 0 = a_1^{k_p}$, see Fig. 6.3. Since $r(b)$ and ${\alpha}(p_b)$ are special arcs in ${\Delta}$, it follows from Lemma \[arc\] that the reduced path $r_p$, obtained from $r(b) {\alpha}(p_b)^{-1}$ by canceling pairs of edges of the form $e e^{-1}$, is a special arc such that $$(r_p)_- = {\alpha}(b(1)) , \quad (r_p)_+ = {\alpha}((p_b)_+) = {\alpha}(|p_1|) .$$ Hence, we can define a bracket $$b' := (b(1), |p_1|, b(3)-k_p, b(4))$$ whose complementary arc $r(b')$ is $r_p$, $r(b') := r_p$. For every such $b \in B_i$, we include $b'$ in $B'_i$.
[Fig. 6.3: the paths $r(b)$, ${\alpha}(p_1)$, ${\alpha}(p_b)$ and the vertices ${\alpha}(0)$, ${\alpha}(|p_1|)$, ${\alpha}(b(1))$, ${\alpha}(b(2))$ in ${\Delta}$.]
If $b \in B_i$ satisfies $b(2) \le |p_1|$, then we add $b$ to $B'_i$.
Going over all brackets $b \in B_i$ as described above, we obtain a new [bracket system]{} $B'_i$ associated with $B_i$. Observe that $B'_{i+1}$ is either identical to $B'_i$ or $B'_{i+1}$ is obtained from $B'_i$ by a single [elementary operation]{} which is identical to the [elementary operation]{} that is used to get $B_{i+1}$ from $B_i$.
Moreover, if $B'_{i+1} \ne B'_i$ and $B_{i+1}$ is obtained from $B_i$ by application of an [elementary operation]{} $\sigma$ to a bracket $b_1 \in B_i$ or to brackets $b_1, b_2 \in B_i$ (in the case when $\sigma$ is a merger) and this application results in $c$, written $c= \sigma(b_1)$ or $c= \sigma(b_1, b_2)$, then $B'_{i+1}$ is obtained from $B'_i$ by application of $\sigma'$, where $\sigma'$ has the same type as $\sigma$, to the bracket $b'_1 \in B'_i$ or to the brackets $b'_1, b'_2 \in B'_i$ (in the case when $\sigma$ is a merger) and $c' = \sigma'(b'_1) $ or $c'= \sigma'(b'_1, b'_2)$.
Indeed, this claim is immediate for additions, extensions of type 1 and mergers. On the other hand, it follows from the definitions of extensions of type 2 and 3 that an extension of type 2 or 3 can be applied only to a bracket $b_1$ with $b_1(2) \le |p_1|$ and results in a bracket $c = \sigma(b_1)$ with $c(2) \le |p_1|$. Hence, in this case, $b'_1 = b_1$, $c' = c$, and our claim holds true again.
It follows from the foregoing observations that the new sequence $B'_0, B'_1, \dots, B'_\ell$, when repetitions are disregarded, is operational, has size bounded by $(\ell_1, \ell_2)$, $B'_0$ is empty, and $B'_\ell$ consists of the single bracket $(0, |p_1|, -k, |{\Delta}(2)|)$, as desired. The last inequality $b(2) \le |p_1|$ in the lemma’s statement is immediate from the definitions.
\[3bb\] Let $P_W = p_1 p_2 p_3$ be a factorization of the path $P_W$ such that ${\alpha}(p_2)$ is a special arc of ${\Delta}$ and let there exist a sequence of [elementary operation]{}s of size bounded by $(\ell_1, \ell_2)$ that transforms the empty [bracket system]{} for $( W, {\Delta})$ into the final [bracket system]{}. Then there is also a sequence of [elementary operation]{}s that transforms a [bracket system]{} consisting of the single bracket $c_0 := (|p_1|, |p_1p_2|, k_2, 0)$, where ${\varphi}(p_2) \overset 0 = a_1^{k_2}$, into the final [bracket system]{} for $( W, {\Delta})$ and has size bounded by $(\ell_1+1, \ell_2+1)$.
Consider a sequence $\Omega$ of [elementary operation]{}s that transforms the empty [bracket system]{} for $( W, {\Delta})$ into the final [bracket system]{} and has size bounded by $(\ell_1, \ell_2)$. Let $
B_0, B_1, \dots, B_\ell
$ be the corresponding to $\Omega$ sequence of [bracket system]{}s, where $B_0$ is empty and $B_\ell = \{ (0, |W|, 0, |{\Delta}(2)| ) \} $ is the final [bracket system]{} for $( W, {\Delta})$.
Define $i^*$ to be the minimal index so that $B_{i^*}$ contains a bracket $b$ such that $b(1) \le |p_1|$ and $b(2) \ge |p_1p_2|$, i.e., the arc $a(b)$ of $b$ contains the path ${\alpha}(p_2)$. Since $B_\ell$ has this property and $B_0$ does not, such an $i^*$ exists and $0< i^* \le \ell$.
First suppose that $|p_2| =0$. Then $c_0 = (|p_1|, |p_1|, 0, 0)$ and we can define the following operational sequence of [bracket system]{}s $B'_i := B_i \cup \{ c_0 \}$ for $i=0, \dots, i^*-1$. By the minimality of $i^*$, there is a unique bracket $b^* \in B_{i^*}$ such that $b^*(1) = |p_1|$ or $b^*(2) = |p_1|$.
Suppose $b^*(1) =b^*(2) =|p_1|$. Then $ B_{i^*}$ is obtained from $B_{i^*-1}$ by addition of $c_0$, $ B_{i^*}= B_{i^*-1} \cup \{ c_0 \}$, and we can consider the following operational sequence of [bracket system]{}s $$B'_0 = \{ c_0 \}, B'_1, \dots, B'_{i^*-2}, B'_{i^*-1}= B_{i^*}, B_{i^*+1}, \dots, B_{\ell}$$ that transforms $\{ c_0 \}$ into the final [bracket system]{} and has size bounded by $(\ell_1-1, \ell_2+1)$.
Suppose $b^*(1) < b^*(2)$. Then a merger of $b^*$ and $c_0$ yields $b^*$, hence this merger turns $B'_{i^*} = B_{i^*}\cup \{ c_0\}$ into $B_{i^*}$. Now we can continue the original sequence $\Omega$ of [elementary operation]{}s to get $B_{\ell}$. Thus the following $$B'_0 = \{ c_0 \}, B'_1, \dots, B'_{i^*}, B_{i^*}, B_{i^*+1}, \dots, B_{\ell}$$ is an operational sequence of [bracket system]{}s that transforms $\{ c_0 \}$ into the final [bracket system]{} and has size bounded by $(\ell_1+1, \ell_2+1)$.
Now assume that $|p_2| >0$. For every [bracket system]{} $B_i$, where $i < i^*$, we construct an associated [bracket system]{} $B'_i$ in the following manner.
Let $b \in B_i$ be a bracket. If $b(1) \ge |p_1|$ and $b(2) \le |p_1p_2|$ and $b(1) - |p_1| + |p_1p_2|- b(2) >0 $, i.e., the arc $a(b)$ of $b$ is a proper subpath of the path ${\alpha}(p_2)$, then we disregard $b$.
Suppose that $b(1) \le |p_1|$ and $ |p_1| < b(2) \le |p_1p_2|$, i.e., the arc $a(b)$ overlaps with a prefix subpath of the path ${\alpha}(p_2)$ of positive length. Let $p_{21}$ denote a subpath of $p_2$ given by $(p_{21})_- = (p_2)_-$ and $(p_{21})_+ = b(2)$, see Fig. 6.4. Then it follows from Lemma \[arc\] applied to special arcs $r(b)$, ${\alpha}(p_{21})^{-1}$ of ${\Delta}$ that the reduced path, obtained from $r(b) {\alpha}(p_{21})^{-1}$ by canceling pairs of edges of the form $ee^{-1}$, is a special arc of ${\Delta}$. Therefore, we may consider a bracket $$b' = (b(1), |p_1|, b(3)-k_{21}, b(4)) ,$$ where ${\varphi}(p_{21}) \overset 0 = a_1^{k_{21}}$. For every such bracket $b \in B_i$, we include $b'$ in $B'_i$.
[Fig. 6.4: the subdiagram ${\Delta}_b$ bounded by $r(b)$, together with the paths ${\alpha}(p_1)$, ${\alpha}(p_{21})$, ${\alpha}(p_{22})$, ${\alpha}(p_3)$ and the vertices ${\alpha}(0)$, ${\alpha}(|p_1|)$, ${\alpha}(|p_1p_2|)$, ${\alpha}(b(1))$, ${\alpha}(b(2))$ in ${\Delta}$.]
Suppose $ |p_1| \le b(1) < |p_1p_2|$ and $ |p_1p_2| \le b(2)$, i.e., the arc $a(b)$ overlaps with a suffix of the path ${\alpha}(p_2)$ of positive length. Let $p_{22}$ denote a subpath of $p_2$ given by $(p_{22})_- = b(1)$ and $(p_{22})_+ = (p_2)_+$, see Fig. 6.5. Then it follows from Lemma \[arc\] applied to special arcs ${\alpha}(p_{22})^{-1}$, $r(b)$ of ${\Delta}$ that the reduced path, obtained from ${\alpha}(p_{22})^{-1}r(b) $ by canceling pairs of edges of the form $ee^{-1}$, is a special arc of ${\Delta}$. Hence, we may consider a bracket $$b' = (|p_1p_2|, b(2), b(3)-k_{22}, b(4)) ,$$ where ${\varphi}(p_{22}) \overset 0 = a_1^{k_{22}}$. For every such bracket $b \in B_i$, we include $b'$ in $B'_i$.
[Fig. 6.5: the subdiagram ${\Delta}_b$ bounded by $r(b)$, together with the paths ${\alpha}(p_1)$, ${\alpha}(p_{21})$, ${\alpha}(p_{22})$, ${\alpha}(p_3)$ and the vertices ${\alpha}(0)$, ${\alpha}(|p_1|)$, ${\alpha}(|p_1p_2|)$, ${\alpha}(b(1))$, ${\alpha}(b(2))$ in ${\Delta}$.]
If $b \in B_i$ satisfies either $b(2) \le |p_1|$ or $b(1) \ge |p_1 p_2|$, then we just add $b$ to $B'_i$ without alterations.
We also put the bracket $c_0 =(|p_1|, |p_1p_2|, k_2, 0)$ in every $B'_i$, $i =0,\dots, i^*-1$.
Going over all brackets $b \in B_i$, as described above, and adding $c_0$, we obtain a new [bracket system]{} $B'_i$ associated with $B_i$.
Observe that for every $i < i^* -1$ the following holds true. Either $B'_{i+1}$ is identical to $B'_{i}$ or $B'_{i+1}$ is obtained from $B'_{i}$ by a single [elementary operation]{} which is identical to the [elementary operation]{} that is used to get $B_{i+1}$ from $B_i$. Moreover, if $B'_{i+1} \ne B'_i$ and $B_{i+1}$ is obtained from $B_i$ by application of an [elementary operation]{} $\sigma$ to a bracket $b_1 \in B_i$ or to brackets $b_1, b_2 \in B_i$ (in the case when $\sigma$ is a merger) and this application of $\sigma$ results in $c$, written $c=\sigma(b_1)$ or $c=\sigma(b_1, b_2)$, then $B'_{i+1} $ is obtained from $B'_i$ by application of $\sigma'$, where $\sigma'$ has the same type as $\sigma$, to the bracket $b'_1 \in B'_i$ or to the brackets $b'_1, b'_2 \in B'_i$ (in the case when $\sigma$ is a merger) and $c'=\sigma'(b'_1)$ or $c'=\sigma'(b'_1, b'_2)$, resp. Indeed, this claim is immediate for additions, extensions of type 1 and mergers. On the other hand, it follows from the definitions of extensions of types 2–3 and from the definition of $i^*$ that if $\sigma$ is an extension of type 2 or 3 then $\sigma$ can only be applied to a bracket $b_1$ such that $b_1(2) \le |p_1|$ or $ |p_1p_2| \le b_1(1)$ and this application results in a bracket $c = \sigma(b_1)$ with $c(2) \le |p_1|$ or $c(1) \ge |p_1p_2|$. Hence, in this case, $b'_1 = b_1$, $c' = c$ and our claim holds true.
Since the [bracket system]{} $B_{i^*}$ contains a bracket $d$ such that ${\alpha}(p_2)$ is a subpath of $a(d)$ and $i^*$ is the minimal index with this property, it follows from the definition of [elementary operation]{}s and from the facts that ${\alpha}(p_2)$ consists of $a_1$-edges and $|p_2| >0$ that either $d$ is obtained from a bracket $b_1 \in B_{i^*-1}$ by an extension of type 1 or $d$ is obtained from brackets $b_2, b_3 \in B_{i^*-1}$ by a merger.
First suppose that $d$ is obtained from a bracket $b_1 \in B_{i^*-1}$ by an extension of type 1. In this case, we pick the image $b'_1 \in B'_{i^*-1}$ of $b_1$ and use a merger operation over $b'_1, c_0 \in B'_{i^*-1}$ to get a bracket $c_3$. Let $B'_{i^*}$ denote the new [bracket system]{} obtained from $B'_{i^*-1}$ by the merger of $b'_1, c_0 \in B'_{i^*-1}$, hence, $$B'_{i^*} :=
(B'_{i^*-1} \setminus \{ c_0, b'_1 \}) \cup \{ c_3 \} .$$
Since $d$ is obtained from $b_1$ by an extension of type 1, it follows that, for every $b \in B_{i^*-1}$, $b \ne b_1$, we have either $b(1) \ge d(2) \ge |p_1p_2|$ or $b(2) \le d(1) \le |p_1|$. Therefore, every bracket $b \in B_{i^*-1}$, $b \ne b_1$, has an image $b' \in B'_{i^*-1}$ and $b' = b$. This, together with equalities $c_3(1) = d(1)$, $c_3(2) = d(2)$ and Lemma \[NUP\], implies that $c_3 = d$ and $B'_{i^*} = B_{i^*}$. Thus we can consider the following sequence of [bracket system]{}s $$\begin{gathered}
\label{newC}
B'_0, \dots, B'_{i^*-1}, B'_{i^*} = B_{i^*}, B_{i^*+1}, \dots, B_\ell\end{gathered}$$ that starts at $B'_0 = \{ c_0 \}$ and ends in the final [bracket system]{} $B_\ell$ for $(W, {\Delta})$. It is clear that the size of this sequence is bounded by $(\ell_1, \ell_2+1)$. Hence, after deletion of possible repetitions in [newC]{}, we obtain a desired operational sequence.
Now assume that $d$ is obtained from brackets $b_2, b_3 \in B_{i^*-1}$ by a merger and let $b_2(2) = b_3(1)$. To define $B'_{i^*-\tfrac 12} $, we apply a merger operation to $b'_2$ and $c_0$, which results in a bracket $c_2$. To define $B'_{i^*} $, we apply a merger operation to $c_2$ and $b'_3$ which results in a bracket $c_3$.
As above, we observe that since $d$ is obtained from $b_2, b_3$ by a merger, it follows that, for every bracket $b \in B_{i^*-1}$, $b \not\in \{ b_2, b_3 \}$, we have either $b(1) \ge d(2) \ge |p_1p_2|$ or $b(2) \le d(1) \le |p_1|$. Therefore, every bracket $b \in B_{i^*-1}$, $b \not\in \{ b_2, b_3 \}$, has an image $b' \in B'_{i^*-1}$ and $b' = b$. This, together with equalities $c_3(1) = d(1)$, $c_3(2) = d(2)$ and Lemma \[NUP\], implies that $c_3 = d$ and $B'_{i^*} = B_{i^*}$. Thus we can consider the following sequence of [bracket system]{}s $$\begin{gathered}
\label{newC2}
B'_0, \dots, B'_{i^*-1}, B'_{i^*-\tfrac 12}, B'_{i^*} =B_{i^*}, B_{i^*+1}, \dots, B_\ell\end{gathered}$$ that starts at $B'_0 = \{ c_0 \}$ and ends in the final [bracket system]{} $B_\ell$. Clearly, the size of the sequence is bounded by $(\ell_1+1, \ell_2+1)$. Hence, after deletion of possible repetitions in [newC2]{}, we obtain a desired operational sequence.
\[4bb\] There exists a sequence of [elementary operation]{}s that converts the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} and has size bounded by $$\label{esma}
( 7|W| , \ C(\log |W|_{\bar a_1} +1) ) ,$$ where $C = (\log \tfrac 65)^{-1}$ and $\log |W|_{\bar a_1} := 0$ if $|W|_{\bar a_1}=0$.
The arguments of this proof are similar to those of the proof of Lemma \[lem6\]. Now we proceed by induction on $|W|_{\bar a_1} \ge 0$. To establish the basis for induction, we first check our claim in the case when $|W|_{\bar a_1} \le 2$.
Let $|W|_{\bar a_1} \le 2$. Then, looking at the tree $\rho_{a_1}({\Delta})$, we see that $|W|_{\bar a_1} = 0$ or $|W|_{\bar a_1} = 2$. If $|W|_{\bar a_1} = 0$, then it follows from Lemma \[b1\] applied to ${\Delta}$ that ${\Delta}$ is a tree consisting of $a_1$-edges and we can easily convert the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} by using a single addition and $|W|$ extensions of type 1. The size of the corresponding sequence is bounded by $ ( |W| +1 , \ 1 ) $ and the bound [esma]{} holds as $C \ge 1$.
If now $|W|_{\bar a_1} = 2$, then it follows from Lemma \[b1\] applied to ${\Delta}$ that either ${\Delta}$ is a tree consisting of $a_1$-edges and two (oriented) $a_i$-edges, $i >1$, that form an $a_i$-band $\Gamma_0$ with no faces or ${\Delta}$ contains a single $a_2$-band $\Gamma$ that has faces and all edges of ${\Delta}$, that are not in $\Gamma$, are $a_1$-edges that form a forest whose trees are attached to $\Gamma$. In either case, we can convert the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} by using a single addition and $ \le |W|$ extensions of type 1 and 3 if ${\Delta}$ is a tree or of type 1 and 2 if ${\Delta}$ is not a tree. The size of the corresponding sequence of [elementary operation]{}s is bounded by $ ( |W| +1 , \ 1 ) $ and the bound [esma]{} holds as $C \ge 1$. The base step is complete.
Making the induction step, assume $|W|_{\bar a_1} > 2$. By Lemma \[b2\] applied to $(W, {\Delta})$, we obtain vertices $v_1, v_2 \in P_W$ such that if $P_W(\operatorname{\textsf{fact}}, v_1, v_2) = p_1 p_2 p_3$ then there is a special arc $r$ in ${\Delta}$ such that $r_- = {\alpha}(v_1)$, $r_+ = {\alpha}(v_2)$ and $$\begin{aligned}
\label{inq10b}
\min(|{\varphi}(p_2)|_{\bar a_1}, |{\varphi}(p_1)|_{\bar a_1}+|{\varphi}(p_3)|_{\bar a_1}) \ge \tfrac 13 |W|_{\bar a_1} ,\end{aligned}$$
Let ${\partial}|_0 {\Delta}= q_1 q_2 q_3$, where $q_i = {\alpha}(p_i) $, $i =1,2,3$. Consider disk subdiagrams ${\Delta}_1, {\Delta}_2$ of ${\Delta}$ given by ${\partial}|_{v_1} {\Delta}_2 = q_2 r^{-1}$ and by ${\partial}|_{0} {\Delta}_1 = q_1 r q_3$, see Fig. 6.6. Denote $$W_2 \equiv {\varphi}(q_2r^{-1}) , \quad W_1 \equiv {\varphi}(q_1 r q_3)$$ and let $P_{W_i} = P_{W_i}(W_i, {\Delta}_i)$, $i=1,2$, denote the corresponding paths such that ${\alpha}_1(P_{W_1}) = q_1 r q_3$ and ${\alpha}_2(P_{W_2}) = q_2r^{-1}$. We also denote ${\varphi}(r ) \overset 0 = a_1^{k_r}$.
[Fig. 6.6: the subdiagrams ${\Delta}_1$ and ${\Delta}_2$ of ${\Delta}$ separated by the special arc $r$, with the boundary paths ${\alpha}(p_1)=q_1$, ${\alpha}(p_2)=q_2$, ${\alpha}(p_3)=q_3$ and the vertices ${\alpha}(0)$, ${\alpha}(v_1)$, ${\alpha}(v_2)$.]
Since $|W_1|_{\bar a_1}, |W_2|_{\bar a_1} < |W|_{\bar a_1}$ by [inq10b]{}, it follows from the induction hypothesis that there is a sequence ${\widehat}\Omega_i$ of [elementary operation]{}s for $(W_i, {\Delta}_i)$ that transforms the empty [bracket system]{} into the final system and has size bounded by $$\begin{gathered}
\label{esmib}
( 7|W_i| , \ C(\log |W_i|_{\bar a_1} +1 ) ) ,\end{gathered}$$ where $i=1,2$.
Furthermore, it follows from the bound with $i=2$ and from Lemma \[2bb\] applied to ${\Delta}_2$ whose boundary label has factorization ${\varphi}({\partial}|_{0} {\Delta}_2 ) \equiv W_2 \equiv W_{21} W_{22}$, where $W_{21} \equiv {\varphi}(p_2)$ and $W_{22} \equiv {\varphi}(r)^{-1}$, that there exists a sequence $\Omega_2$ of [elementary operation]{}s for the pair $(W_2, {\Delta}_2)$ that transforms the empty [bracket system]{} $B_{2, 0}$ into the [bracket system]{} $B_{2, \ell_2} = \{ (0, |p_2|, k_r, | {\Delta}_2(2) | ) \}$ and has size bounded by $$\begin{aligned}
\label{esm2b}
\left(7|W_2 | , C(\log |W_2|_{\bar a_1} +1 ) \right) .\end{aligned}$$ Let $B_{2,0}, B_{2,1}, \dots, B_{2, \ell_2}$ denote the associated with $\Omega_2$ sequence of [bracket system]{}s.
It follows from the bound with $i=1$ and from Lemma \[3bb\] applied to ${\Delta}_1$ whose boundary label has factorization ${\varphi}({\partial}|_{0} {\Delta}_1 ) \equiv W_1 \equiv W_{11} W_{12} W_{13}$, where $W_{11} \equiv {\varphi}(p_1)$, $W_{12} \equiv {\varphi}(r)$, $W_{13} \equiv {\varphi}(p_3)$, that there exists a sequence $\Omega_1$ of [elementary operation]{}s for $(W_1, {\Delta}_1)$ that transforms the [bracket system]{} $B_{1, 0} = \{ c_r \}$, where $$c_r := ( |p_1|, |p_1| +|r|, k_r, 0 ) ,$$ into the final [bracket system]{} $B_{1, \ell_1}$ and has size bounded by $$\begin{aligned}
\label{esm3b}
\left( 7 |W_1 | +1 , C(\log |W_1|_{\bar a_1} +1 ) \right) .\end{aligned}$$ Let $B_{1,0}, B_{1,1}, \dots, B_{1, \ell_1}$ be the associated with $\Omega_1$ sequence of [bracket system]{}s.
Now we will show that these two sequences $\Omega_2$, $\Omega_1$ of [elementary operation]{}s, the first one for $(W_2, {\Delta}_2)$, and the second one for $(W_1, {\Delta}_1)$, could be modified and combined into a single sequence of [elementary operation]{}s for $(W, {\Delta})$ that converts the empty [bracket system]{} into the final [bracket system]{} and has size with a desired bound.
Note that every bracket $b = (b(1),b(2), b(3), b(4))$, $b \in \cup_j B_{2,j}$, for $(W_2, {\Delta}_2)$, by Lemma \[2bb\], has the property that $b(2) \le |p_2 |$ and $b$ naturally gives rise to the bracket $${\widehat}b := (b(1)+|p_1|, b(2)+|p_1|, b(3), b(4))$$ for $(W, {\Delta})$. Let ${\widehat}B_{2,j}$ denote the [bracket system]{} for $(W, {\Delta})$ obtained from $B_{2,j}$ by replacing every bracket $b \in B_{2,j}$ with ${\widehat}b$. Then it is easy to verify that ${\widehat}B_{2,0}, \dots, {\widehat}B_{2, \ell_2}$ is an operational sequence of [bracket system]{}s for $(W, {\Delta})$ that changes the empty [bracket system]{} into $${\widehat}B_{2, \ell_2} = \{ (|p_1|, |p_1|+ |p_2|, k_r, |{\Delta}_2(2)|) \}$$ and has size bounded by [esm2b]{}.
Consider a relation $\succ_1$ on the set of all pairs $(b,i)$, where $b \in B_{1,i}$, $i=0, \dots, \ell_1$, defined so that $(c, i+1) \succ_1 (b,i)$ if and only if $c \in B_{1,i+1} $ is obtained from brackets $b, b' \in B_{1,i}$ by an [elementary operation]{} $\sigma$ (here $b'$ is missing if $\sigma$ is an extension).
Define a relation $\succeq$ on the set of all pairs $(b,i)$, where $b \in B_{1,i}$, $i=0, \dots, \ell_1$, that is the reflexive and transitive closure of the relation $\succ_1$. Clearly, $\succeq$ is a partial order on the set of such pairs $(b,i)$ and if $(b_2, i_2) \succeq (b_1, i_1)$ then $i_2 \ge i_1$ and $$b_2(1) \le b_1(1) \le b_1(2) \le b_2(2) .$$
Now we observe that every bracket $d = (d(1), d(2), d(3), d(4))$, $d \in B_{1,i}$, for $(W_1, {\Delta}_1)$ naturally gives rise to a bracket ${\widehat}d = ({\widehat}d(1), {\widehat}d(2), {\widehat}d(3), {\widehat}d(4))$ for $(W, {\Delta})$ in the following manner.
If $(d, i)$ is not comparable with $(c_r, 0)$ by the relation $\succeq$ and $d(1) \le |p_1|$, then $${\widehat}d := d .$$
If $(d, i) $ is not comparable with $(c_r,0)$ and $|p_1| + |r| \le d(2)$, then $${\widehat}d := (d(1)+ |p_2|-|r|, d(2)+ |p_2|-|r|, d(3), d(4) ) .$$
If $(d, i) \succeq (c_r,0)$, then $${\widehat}d :=(d(1), d(2)+ |p_2|-|r|, d(3), d(4)+ |{\Delta}_2(2)| ) .$$
Note that the above three cases cover all possible situations.
As before, let ${\widehat}B_{1,i} := \{ {\widehat}d \mid d \in B_{1,i} \}$. Then it is easy to verify that ${\widehat}B_{1,0}, \dots, {\widehat}B_{1, \ell_1}$ is an operational sequence of [bracket system]{}s for $(W, {\Delta})$ that changes the [bracket system]{}$${\widehat}B_{1,0} = {\widehat}B_{2,\ell_2} = \{ (|p_1|,|p_1| + |p_2|, k_r, |{\Delta}_2(2)| ) \}$$ into the final [bracket system]{} ${\widehat}B_{1, \ell_1} = \{ (0, |p_1|+ |p_2|+ |p_3|, 0, |{\Delta}_1(2)|+ |{\Delta}_2(2)|) \}$.
Thus, with the indicated changes, we can now combine the foregoing modified sequences of [bracket system]{}s for $(W_2, {\Delta}_2)$ and for $(W_1, {\Delta}_1)$ into a single operational sequence $${\widehat}B_{2,0}, \dots, {\widehat}B_{2, \ell_2} = {\widehat}B_{1,0}, \dots, {\widehat}B_{1, \ell_1}$$ of [bracket system]{}s for $(W, {\Delta})$ that transforms the empty [bracket system]{} into the [bracket system]{} $\{ (|p_1|,|p_1| + |p_2|, k_r, |{\Delta}_2(2)| ) \}$ and then continues to turn the latter into the final [bracket system]{}. It follows from definitions and bounds [esm2b]{}–[esm3b]{} that the size of thus constructed sequence is bounded by $$\begin{gathered}
( 7|W_1|+7|W_2|+1,\ \max( C(\log |W_1|_{\bar a_1}+ 1)+1, \ C( \log |W_2|_{\bar a_1}+1 ) ) ) \
\end{gathered}$$ Therefore, in view of Lemma \[lem4b\], it remains to show that $$\begin{gathered}
\max( C (\log |W_1|_{\bar a_1} +1)+ 1, C (\log |W_2|_{\bar a_1} +1 ) ) \le
C (\log |W|_{\bar a_1}+1) .\end{gathered}$$ In view of the inequality [inq10b]{}, $$\max( C(\log |W_1|_{\bar a_1} +1)+ 1, C(\log |W_2|_{\bar a_1}+1 ) ) \le C (\log (\tfrac 56|W|_{\bar a_1})+1) +1 ,$$ and $C (\log (\tfrac 56|W|_{\bar a_1})+1) +1 \le C (\log |W|_{\bar a_1}+1) $ for $C = (\log \tfrac 65)^{-1}$.
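To spell out the last step (a one-line computation included only for the reader’s convenience), note that $C \log \tfrac 65 = 1$ for this choice of $C$, so $$C \big(\log (\tfrac 56|W|_{\bar a_1})+1\big) +1 = C\log |W|_{\bar a_1} - C\log\tfrac 65 + C + 1 = C (\log |W|_{\bar a_1}+1) .$$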
Let $W$ be an arbitrary nonempty word over the alphabet ${\mathcal{A}}^{\pm 1}$, not necessarily representing the trivial element of the group given by presentation [pr3b]{}. As before, let $P_W$ be a simple labeled path with ${\varphi}( P_W ) \equiv W$ and let vertices of $P_W$ be identified along $P_W$ with integers $0,1, \dots, |W|$ so that $(P_W)_- = 0, \dots$, $(P_W)_+ = |W|$.
A quadruple $$b = (b(1), b(2), b(3), b(4))$$ of integers $b(1), b(2), b(3), b(4)$ is called a [*pseudobracket*]{} for the word $W$ if $b(1), b(2)$ satisfy the inequalities $0 \le b(1)\le b(2) \le |W|$ and $b(4) \ge 0$.
Let $p$ denote the subpath of $P_W$ with $p_- = b(1)$, $p_+ = b(2)$, where possibly $p_- = p_+$ and $|p| = 0$. The subpath $p$ of $P_W$ is denoted $a(b)$ and called the [*arc*]{} of the pseudobracket $b$.
For example, $b_v = (v, v, 0, 0)$ is a pseudobracket for every vertex $v \in P_W$ and such $b_v$ is called a [*starting*]{} pseudobracket. Note that $a(b_v) = v = b_v(1)$. A [*final*]{} pseudobracket for $W$ is $c = (0, |W|, 0, k)$, where $k \ge 0$ is an integer. Note that $a(c) = P_W$.
Observe that if $b$ is a bracket for the pair $(W, {\Delta})$, then $b$ is also a [pseudobracket]{} for the word $W$.
Let $B$ be a finite set of [pseudobracket]{}s for $W$, possibly empty. We say that $B$ is a [*pseudobracket system*]{} if, for every pair $b, c \in B$ of distinct [pseudobracket]{}s, either $b(2) \le c(1)$ or $c(2) \le b(1)$. As before, $B$ is called a [*final* ]{} [pseudobracket system]{} if $B$ contains a single [pseudobracket]{} which is final. Clearly, every [bracket system]{} for $(W, {\Delta})$ is also a [pseudobracket system]{} for the word $W$.
Now we describe three kinds of [elementary operation]{}s over [pseudobracket]{}s and over [pseudobracket system]{}s: additions, extensions, and mergers, which are analogous to the corresponding definitions for brackets and [bracket system]{}s, except that no diagrams or faces are involved.
Let $B$ be a [pseudobracket system]{} for a word $W$.
*Additions*.
Suppose $b$ is a starting [pseudobracket]{}, $b \not\in B$, and $B \cup \{ b\}$ is a [pseudobracket system]{}. Then we can add $b$ to $B$ thus making an [*addition*]{} operation over $B$.
*Extensions*.
Suppose $B$ is a [pseudobracket system]{}, $b \in B$ is a [pseudobracket]{} and $e_1 a(b) e_2$ is a subpath of $P_W$, where $a(b)$ is the arc of $b$ and $e_1, e_2$ are edges one of which could be missing.
Assume that ${\varphi}(e_1)= a_1^{{\varepsilon}_1}$, where ${\varepsilon}_1 = \pm 1$. Then we consider a [pseudobracket]{} $b'$ such that $$b'(1) = b(1)-1 , \ b'(2) = b(2) , \ b'(3) = b(3)+{\varepsilon}_1, \ b'(4) = b(4) .$$ Note that $a(b') = e_1 a(b)$. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the left). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
Similarly, assume that ${\varphi}(e_2)= a_1^{{\varepsilon}_2}$, where ${\varepsilon}_2 = \pm 1$. Then we consider a [pseudobracket]{} $b'$ such that $$b'(1) = b(1) , \ b'(2) = b(2)+1 , \ b'(3) = b(3)+{\varepsilon}_2 , \ b'(4) = b(4) .$$ Note that $a(b') = a(b)e_2 $. We say that $b'$ is obtained from $b$ by an extension of type 1 (on the right). If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 1.
Now suppose that both edges $e_1, e_2$ do exist and ${\varphi}(e_1) = {\varphi}(e_2)^{-1}=a_2^{{\varepsilon}}$, ${\varepsilon}= \pm 1$. We also assume that $b(3)\ne 0$ and that $b(3)$ is a multiple of $n_1$ if ${\varepsilon}=1$ or $b(3)$ is a multiple of $n_2$ if ${\varepsilon}=-1$. Consider a [pseudobracket]{} $b'$ such that $$b'(1) = b(1)-1 , \ b'(2) = b(2)+1 , \ b'(3) = (|n_1^{-1} n_2|)^{{\varepsilon}} b(3) , \ b'(4) = b(4) + |b(3)|/|n_{i({\varepsilon})}| ,$$ where $i({\varepsilon}) =1$ if ${\varepsilon}=1$ and $i({\varepsilon}) =2$ if ${\varepsilon}=-1$. Note that $a(b') = e_1 a(b)e_2$. We say that $b'$ is obtained from $b$ by an extension of type 2. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 2.
Assume that both $e_1, e_2$ do exist, ${\varphi}(e_1) = {\varphi}(e_2)^{-1}$, ${\varphi}(e_1) \ne a_1^{\pm 1}$, and $b(3)=0$. Consider a [pseudobracket]{} $b'$ such that $$b'(1) = b(1)-1 , \ b'(2) = b(2)+1 , \ b'(3) = b(3) , \ b'(4) = b(4) .$$ Note that $a(b') = e_1 a(b)e_2$. We say that $b'$ is obtained from $b$ by an extension of type 3. If $(B \setminus \{ b \}) \cup \{ b' \}$ is a [pseudobracket system]{}, then replacement of $b \in B$ with $b'$ in $B$ is called an [*extension* ]{} operation over $B$ of type 3.
*Mergers*.
Suppose that $b_1, b_2$ are distinct [pseudobracket]{}s in $B$ such that $b_1(2) = b_2(1)$. Consider a [pseudobracket]{} $b'$ such that $$b'(1) = b_1(1) , \ b'(2) = b_2(2) , \ b'(3) = b_1(3) + b_2(3) , \ b'(4) = b_1(4) + b_2(4).$$ Note that $a(b') = a(b_1)a(b_2)$. We say that $b'$ is obtained from $b_1, b_2$ by a merger operation. Taking both $b_1$, $b_2$ out of $B$ and putting $b'$ in $B$ is a [*merger* ]{} operation over $B$.
We will say that additions, extensions, and mergers, as defined above, are [*[elementary operation]{}s*]{} over [pseudobracket]{}s and [pseudobracket system]{}s for $W$.
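Because an [elementary operation]{} over [pseudobracket]{}s refers only to the integers $b(1), \dots, b(4)$, to the letters of $W$ adjacent to the arc $a(b)$, and to the exponents $n_1, n_2$, these operations are easy to mechanize. The following Python sketch is offered purely as an illustration of the definitions above and is not part of the formal argument; the encoding of $W$ as a list of pairs $(i, s)$ standing for the letters $a_i^{s}$, the function names, and the explicit checks are our own choices, and verifying that the result of an operation is again a [pseudobracket system]{} is left to the caller, exactly as in the definitions.

```python
# Illustrative sketch only (not code from the paper).  A pseudobracket is an
# integer 4-tuple (b1, b2, b3, b4); a word W over A^{+-1} is encoded as a list
# of pairs (i, s) standing for the letter a_i^s with s = +1 or -1; n1, n2 are
# the exponents of the relator a_2 a_1^{n1} a_2^{-1} a_1^{-n2}.

def is_system(B):
    """Pseudobracket-system condition: arcs of distinct members may only touch."""
    return all(b[1] <= c[0] or c[1] <= b[0] for b in B for c in B if b != c)

def add_starting(B, k):
    """Addition: insert the starting pseudobracket (k, k, 0, 0), if permitted."""
    b = (k, k, 0, 0)
    return B | {b} if b not in B and is_system(B | {b}) else None

def extend_type1(b, W, side):
    """Extension of type 1: absorb an adjacent a_1-edge on the given side."""
    b1, b2, b3, b4 = b
    if side == 'left' and b1 > 0 and W[b1 - 1][0] == 1:
        return (b1 - 1, b2, b3 + W[b1 - 1][1], b4)
    if side == 'right' and b2 < len(W) and W[b2][0] == 1:
        return (b1, b2 + 1, b3 + W[b2][1], b4)
    return None

def extend_type2(b, W, n1, n2):
    """Extension of type 2: wrap the arc by mutually inverse a_2-edges,
    updating b3 and the face count b4 as in the definition above."""
    b1, b2, b3, b4 = b
    if b1 == 0 or b2 == len(W) or b3 == 0:
        return None
    (i1, s1), (i2, s2) = W[b1 - 1], W[b2]
    if i1 != 2 or i2 != 2 or s1 != -s2:
        return None
    n_in, n_out = (n1, n2) if s1 == 1 else (n2, n1)   # epsilon = s1
    if b3 % n_in != 0:                                # divisibility condition
        return None
    return (b1 - 1, b2 + 1, b3 * abs(n_out) // abs(n_in), b4 + abs(b3) // abs(n_in))

def extend_type3(b, W):
    """Extension of type 3: wrap the arc by mutually inverse non-a_1 edges, b3 = 0."""
    b1, b2, b3, b4 = b
    if b1 == 0 or b2 == len(W) or b3 != 0:
        return None
    (i1, s1), (i2, s2) = W[b1 - 1], W[b2]
    if i1 == 1 or i1 != i2 or s1 != -s2:
        return None
    return (b1 - 1, b2 + 1, b3, b4)

def merge(b, c):
    """Merger: concatenate pseudobrackets with b(2) = c(1)."""
    if b[1] != c[0]:
        return None
    return (b[0], c[1], b[2] + c[2], b[3] + c[3])

def is_final(B, W, n):
    """Final system {(0, |W|, 0, n')} with n' <= n."""
    return len(B) == 1 and all(b[:3] == (0, len(W), 0) and b[3] <= n for b in B)
```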
Assume that one [pseudobracket system]{} $B_\ell$ is obtained from another [pseudobracket system]{} $B_0$ for $W$ by a finite sequence $\Omega$ of [elementary operation]{}s and $B_0, B_1, \dots, B_\ell$ is the corresponding to $\Omega$ sequence of [pseudobracket system]{}s. As above, we say that the sequence $B_0, B_1, \dots, B_\ell$ is [*operational*]{} and that $B_0, B_1, \dots, B_\ell$ has [*size bounded by*]{} $ (k_1, k_2)$ if $\ell \le k_1$ and, for every $i$, the number of [pseudobracket]{}s in $B_i$ is at most $k_2$. Whenever it is not ambiguous, we also say that $\Omega$ has size bounded by $(k_1, k_2)$ if so does the corresponding to $\Omega$ sequence $B_0, B_1, \dots, B_\ell$ of [pseudobracket system]{}s.
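As a toy sanity check of the sketch above (our own example, not taken from the paper, with $n_1 = n_2 = 1$, so that the defining relation reads $a_2 a_1 a_2^{-1} = a_1$), an operational sequence of length $4$ that keeps at most one [pseudobracket]{} per system already reaches a final [pseudobracket system]{}:

```python
W = [(2, 1), (1, 1), (2, -1), (1, -1)]   # the word a_2 a_1 a_2^{-1} a_1^{-1}
b = (2, 2, 0, 0)                         # addition of a starting pseudobracket
b = extend_type1(b, W, 'left')           # (1, 2, 1, 0)
b = extend_type2(b, W, 1, 1)             # (0, 3, 1, 1): one face is accounted for
b = extend_type1(b, W, 'right')          # (0, 4, 0, 1)
assert is_final({b}, W, 1)               # final system {(0, |W|, 0, 1)}
```

In the terminology just introduced, this sequence has size bounded by $(4, 1)$, and by Lemma \[7bb\] below it certifies that $a_2 a_1 a_2^{-1} a_1^{-1}$ is a product of at most one conjugate of the relator, as expected.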
\[7bb\] Suppose that the empty [pseudobracket system]{} $B_0$ for $W$ can be transformed by a finite sequence $\Omega$ of [elementary operation]{}s into a final [pseudobracket system]{} $B_\ell = \{ (0, |W|, 0, k) \}$. Then $W \overset {{\mathcal{G}}_3} = 1$ and there is a reduced disk diagram ${\Delta}$ over presentation [pr3b]{} such that ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and $| {\Delta}(2) | \le k$.
Let $B_0, B_1, \dots, B_\ell$ be the corresponding to $\Omega$ sequence of [pseudobracket system]{}s, where $B_0$ is empty and $B_\ell$ is final. Consider the following claim.
1. If $c$ is a [pseudobracket]{} of $B_i$, $i =1,\dots,\ell$, then ${\varphi}(a(c)) \overset {{\mathcal{G}}_3} = a_1^{c(3)}$, where $a(c)$ is the arc of $c$, and there is a disk diagram ${\Delta}_c$ over presentation [pr3b]{} such that $ {\partial}|_0 {\Delta}_c = s r^{-1}$, where ${\varphi}(s) \equiv {\varphi}(a(c))$, $r$ is simple, $ {\varphi}( r ) \equiv a_1^{c(3)}$ and $| {\Delta}_c(2) | = c(4)$.
[*Proof of Claim (D1).*]{} By induction on $i \ge 1$, we will prove that Claim (D1) holds for every [pseudobracket]{} $c \in B_i$, $i =1,\dots,\ell$.
The base step of this induction is obvious since $B_1$ consists of a starting [pseudobracket]{} $c$ for which we have a disk diagram ${\Delta}_c$ consisting of a single vertex.
To make the induction step from $i$ to $i+1$, $i \ge 1$, we consider the cases corresponding to the type of the [elementary operation]{} that is used to get $B_{i+1}$ from $B_i$.
Suppose that $B_{i+1}$ is obtained from $B_{i}$ by an [elementary operation]{} $\sigma$ and $c \in B_{i+1}$ is the [pseudobracket]{} obtained from $b_1, b_2 \in B_{i}$ by application of $\sigma$, denoted $c =\sigma(b_1, b_2)$. Here one of $b_1, b_2$ or both, depending on the type of $\sigma$, could be missing. By the induction hypothesis, Claim (D1) holds for every [pseudobracket]{} of $B_{i+1}$ different from $c$ and it suffices to show that Claim (D1) holds for $c$.
If $c \in B_{i+1}$ is obtained by an addition, then Claim (D1) holds for $c$ because it holds for any starting [pseudobracket]{}.
Let $c \in B_{i+1}$ be obtained from a [pseudobracket]{} $b \in B_{i}$ by an extension of type 1 on the left and $a(c) = e_1 a(b)$, where ${\varphi}(e_1) = a_1^{{\varepsilon}_1}$, ${\varepsilon}_1 = \pm 1$, and $a(c), a(b)$ are the arcs of $c, b$, resp. By the induction hypothesis applied to $b$, there is a disk diagram ${\Delta}_b$ over presentation [pr3b]{} such that $ {\partial}|_0 {\Delta}_b = s r^{-1}$, where ${\varphi}(s) \equiv {\varphi}(a(b))$, $r$ is simple, $ {\varphi}( r ) \equiv a_1^{b(3)}$, and $| {\Delta}_b(2) | = b(4)$. First we assume that the integers ${\varepsilon}_1, b(3)$ satisfy ${\varepsilon}_1 b(3) \ge 0$. Then we consider a new “loose” edge $f$ such that ${\varphi}(f) = a_1^{{\varepsilon}_1}$. We attach the vertex $f_+$ to the vertex $s_-$ of ${\Delta}_b$ and obtain thereby a disk diagram ${\Delta}'_b$ such that ${\partial}{\Delta}'_b = fs r^{-1} f^{-1}$, see Fig. 6.7(a). Since ${\varphi}(a(c)) \equiv {\varphi}(fs) \equiv a_1^{{\varepsilon}_1} {\varphi}(a(b))$, $fr$ is simple and $${\varphi}(fr) \equiv a_1^{{\varepsilon}_1+b(3)} , \ \ | {\Delta}'_b(2) | = | {\Delta}_b(2) | = b(4) ,$$ it follows that we can use ${\Delta}'_b$ as a required disk diagram ${\Delta}_c$ for $c$.
[Fig. 6.7(a): the diagram ${\Delta}'_b$ obtained from ${\Delta}_b$ by attaching the edge $f$; the paths $s$ and $r$ are shown. Fig. 6.7(b): the diagram ${\Delta}'_b$ with the paths $s$, $g$, $r'$, where $r = gr'$.]
Now suppose that the integers ${\varepsilon}_1, b(3)$ satisfy ${\varepsilon}_1 b(3) < 0$, i.e., $b(3) \ne 0$ and ${\varepsilon}_1$, $b(3)$ have different signs. Then we can write $r = gr'$, where $g$ is an edge of $r$ and ${\varphi}(g) = a_1^{-{\varepsilon}_1}$. Let ${\Delta}'_b$ denote the diagram ${\Delta}_b$ whose boundary is now factorized as ${\partial}|_0 {\Delta}'_b = (g^{-1} s) (r')^{-1}$, see Fig. 6.7(b). Note that ${\varphi}(a(c)) \equiv {\varphi}(g^{-1} s)$, $r'$ is simple, and $${\varphi}(r') \equiv a_1^{{\varepsilon}_1+b(3)} , \ \ | {\Delta}'_b(2) | = | {\Delta}_b(2) | = b(4) .$$ Hence, it follows that we can use ${\Delta}'_b$ as a desired diagram ${\Delta}_c$ for $c$. The case of an extension of type 1 on the right is similar.
Let $c \in B_{i+1}$ be obtained from a [pseudobracket]{} $b \in B_{i}$ by an extension of type 2 and assume that $a(c) = e_1 a(b) e_2$, where ${\varphi}(e_1) ={\varphi}(e_2)^{-1} = a_2$ (the subcase when ${\varphi}(e_1) ={\varphi}(e_2)^{-1} = a_2^{-1}$ is similar). Let $a(c), a(b)$ denote the arcs of [pseudobracket]{}s $c, b$, resp. According to the definition of an extension of type 2, $b(3)$ is a multiple of $n_1$, say, $b(3) = n_1 \ell$ and $\ell \ne 0$. By the induction hypothesis, there is a disk diagram ${\Delta}_b$ over presentation [pr3b]{} such that $ {\partial}|_0 {\Delta}_b = s r^{-1}$, where ${\varphi}(s) \equiv {\varphi}(a(b))$, $r$ is simple, ${\varphi}( r ) \equiv a_1^{b(3)}$, and $| {\Delta}_b(2) | = b(4)$. Since $b(3) = n_1 \ell$, there is a disk diagram $\Gamma$ over [pr3b]{} such that $${\partial}\Gamma = f_1 s_1 f_2 s_2 , \ {\varphi}(f_1) = a_2 , \ {\varphi}(f_2) = a_2^{-1} ,
\ {\varphi}( s_1 ) \equiv a_1^{b(3)} , \ {\varphi}( s_2^{-1} ) \equiv a_1^{b(3) n_2/n_1} ,$$ and $\Gamma$ consists of $| \ell | >0$ faces. Note that $\Gamma$ itself is an $a_2$-band with $|\ell|$ faces. Attaching $\Gamma$ to ${\Delta}_b$ by identification of the paths $s_1$ and $r$ that have identical labels, we obtain a disk diagram ${\Delta}'_b$ such that $ {\partial}|_0 {\Delta}'_b = f_1 s f_2 s_2$ and $|{\Delta}'_b(2) | = |{\Delta}_b(2) | + |\ell |$, see Fig. 6.8. Then ${\varphi}( a(c) ) \equiv {\varphi}( f_1 s f_2 )$, $s_2^{-1}$ is simple, ${\varphi}(s_2^{-1}) \equiv a_1^{b(3) n_2/n_1}$ and we see that ${\Delta}'_b$ satisfies all desired conditions for ${\Delta}_{c}$.
[Fig. 6.8: the diagram ${\Delta}'_b$ obtained by attaching the $a_2$-band $\Gamma$ with boundary $f_1 s_1 f_2 s_2$ to ${\Delta}_b$ along the path $r = s_1$; the path $s$ is the rest of ${\partial}{\Delta}_b$.]
Now let $c \in B_{i+1}$ be obtained from a [pseudobracket]{} $b \in B_{i}$ by an extension of type 3 and let $a(c) = e_1 a(b) e_2$, where ${\varphi}(e_1) = {\varphi}(e_2)^{-1}$, ${\varphi}(e_1) = a_j^{\varepsilon}$, ${\varepsilon}= \pm 1$ and $j >1$. By the definition of an extension of type 3, we have $b(3) = 0$. Hence, by the induction hypothesis, there is a [disk diagram]{} ${\Delta}_b$ over [pr3b]{} such that $ {\partial}|_0 {\Delta}_b = s r^{-1}$, where ${\varphi}(s) \equiv {\varphi}(a(b))$, $|r|=0$ and $| {\Delta}_b(2) | = b(4)$. Consider a new “loose” edge $f$ such that ${\varphi}(f) = a_j^{{\varepsilon}}$. We attach the vertex $f_+$ to the vertex $s_-=s_+$ of ${\Delta}_b$ and obtain thereby a disk diagram ${\Delta}'_b$ such that ${\partial}{\Delta}'_b = fs f^{-1}r'$, where $|r'| = 0$, see Fig. 6.9. Since ${\varphi}(a(c)) \equiv {\varphi}(fsf^{-1})$ and $ | {\Delta}'_b(2) | = | {\Delta}_b(2) | = b(4)$, it follows that we can use ${\Delta}'_b$ as a desired diagram ${\Delta}_c$ for $c$.
[Fig. 6.9: the diagram ${\Delta}'_b$ obtained from ${\Delta}_b$ by attaching the edge $f$ at the vertex $s_- = s_+$.]
Let $c \in B_{i+1}$ be obtained from [pseudobracket]{}s $b_1, b_2 \in B_{i}$ by a merger and $a(c) = a(b_1) a(b_2)$, where $a(c), a(b_1)$, $ a(b_2)$ are the arcs of the [pseudobracket]{}s $c, b_1, b_2$, resp.
By the induction hypothesis, there are disk diagrams ${\Delta}_{b_1}$, ${\Delta}_{b_2}$ over presentation [pr3b]{} such that $${\partial}|_0 {\Delta}_{b_j} = s_{j} r_j^{-1} , \ {\varphi}(s_j) \equiv {\varphi}(a(b_j)) , \ {\varphi}( r_j ) \equiv a_1^{b_j(3)} , \ | {\Delta}_{b_j}(2) | = b_j(4) ,$$ and $r_j$ is a simple path for $j=1,2$. Assume that the numbers $ b_1(3), b_2(3)$ satisfy the condition $b_1(3) b_2(3) \ge 0$. We attach ${\Delta}_{b_1}$ to ${\Delta}_{b_2}$ by identification of the vertices $(s_1)_+$ and $(s_2)_-$ and obtain thereby a disk diagram ${\Delta}'_b$ such that $ {\partial}{\Delta}'_b = s_1 s_2 (r_1 r_2)^{-1}$ and $|{\Delta}'_b(2) | = |{\Delta}_{b_1}(2) | + |{\Delta}_{b_2}(2) |$, see Fig. 6.10(a). Note that ${\varphi}( a(c) ) \equiv {\varphi}( s_1 s_2 )$, $r_1 r_2$ is a simple path and ${\varphi}(r_1 r_2 ) \equiv a_1^{b_1(3) +b_2(3)}$. Hence, ${\Delta}'_b$ is a desired disk diagram ${\Delta}_{c}$ for $c$.
[Fig. 6.10(a): the diagram ${\Delta}'_b$ obtained by attaching ${\Delta}_{b_1}$ to ${\Delta}_{b_2}$ at a vertex; the paths $s_1$, $r_1$, $s_2$, $r_2$ are shown. Fig. 6.10(b): the diagram ${\Delta}'_b$ obtained by identifying the paths $r_{12}$ and $r_2^{-1}$, where $r_1 = r_{11} r_{12}$.]
Now suppose that $ b_1(3), b_2(3)$ satisfy the condition $b_1(3) b_2(3) < 0$. For definiteness, assume that $ |b_1(3)| \ge | b_2(3)|$ (the case $ |b_1(3)| \le | b_2(3)|$ is analogous). Denote $r_1 = r_{11}r_{12}$, where $| r_{12}| = | r_{2}| = |b_2(3)|$. Then we attach ${\Delta}_{b_2}$ to ${\Delta}_{b_1}$ by identification of the paths $r_{12}$ and $r_{2}^{-1}$ that have identical labels equal to $a_1^{-b_2(3)}$, see Fig. 6.10(b). Doing this produces a [disk diagram]{} ${\Delta}'_b$ such that ${\partial}{\Delta}'_b = s_1 s_2 r_{11}^{-1}$ and $|{\Delta}'_b(2) | = |{\Delta}_{b_1}(2) | + |{\Delta}_{b_2}(2) |$. Note that ${\varphi}( a(c) ) \equiv {\varphi}( s_1 s_2 )$, $r_{11}$ is a simple path and ${\varphi}(r_{11} ) \equiv a_1^{b_1(3) +b_2(3)}$. Hence ${\Delta}'_b$ is a desired disk diagram ${\Delta}_{c}$ for $c$. The induction step is complete and Claim (D1) is proven.
To finish the proof of Lemma \[7bb\], we note that it follows from Claim (D1) applied to the [pseudobracket]{} $b_F = (0, |W|, 0, k)$ of the final system $B_\ell = \{ b_F \}$ that there is a disk diagram ${\Delta}_{b_F}$ such that $ {\partial}|_0 {\Delta}_{b_F} = s r^{-1}$, where ${\varphi}(s) \equiv W$, $| r | =0$ and $| {\Delta}_{{b_F}}(2) | \le k$. It remains to replace ${\Delta}_{b_F}$, if necessary, by a reduced disk diagram ${\Delta}$ with the same boundary label; since reduction does not increase the number of faces, $| {\Delta}(2) | \le k$, as required.
\[8bb\] Suppose $W$ is a nonempty word over ${\mathcal{A}}^{\pm 1}$ and $n \ge 0$ is an integer. Then $W$ is a product of at most $n$ conjugates of the words $(a_2 a_1^{n_1} a_2^{-1} a_1^{-n_2})^{\pm 1}$ if and only if there is a sequence $\Omega$ of [elementary operation]{}s such that $\Omega$ transforms the empty [pseudobracket system]{} for $W$ into a final [pseudobracket system]{} $\{ (0,|W|,0,n') \}$, where $n' \le n$, $\Omega$ has size bounded by $$(7|W|, C(\log|W|_{\bar a_1} +1)) ,$$ where $C= (\log \tfrac 65)^{-1}$, and, if $b$ is a [pseudobracket]{} of an intermediate [pseudobracket system]{}of the corresponding to $\Omega$ sequence of [pseudobracket system]{}s, then $$\label{d1d}
|b(3)| \le \tfrac 12 ( |W|_{a_1} + (|n_1| + |n_2|)n) .$$
Assume that $W$ is a product of at most $n$ conjugates of the words $$(a_2 a_1^{n_1} a_2^{-1} a_1^{-n_2})^{\pm 1} .$$ It follows from Lemma \[vk\] that there is a reduced disk diagram ${\Delta}$ over presentation [pr3b]{} such that ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and $|{\Delta}(2)| \le n$. Applying Lemma \[4bb\] to the pair $(W, {\Delta})$, we obtain a sequence of [elementary operation]{}s over [bracket system]{}s for $(W, {\Delta})$ that converts the empty [bracket system]{} for $(W, {\Delta})$ into the final [bracket system]{} $\{ (0, |W|, 0, |{\Delta}(2)| ) \}$ and has size bounded by $(7|W|, C(\log|W|_{\bar a_1} +1))$. Since every bracket $b$ and every [bracket system]{} $B$ for $(W, {\Delta})$ can be considered as a [pseudobracket]{} and a [pseudobracket system]{} for $W$, resp., we automatically have a desired sequence of [pseudobracket system]{}s. The inequality [d1d]{} follows from the definition of a bracket for $( W, {\Delta})$ and the inequalities $$|b(3) | \le \tfrac 12 | \vec {\Delta}(1) |_{a_1} = \tfrac 12 (|W|_{a_1} + (|n_1| + |n_2|) |{\Delta}(2)| ) , \quad |{\Delta}(2)| \le n .$$
Conversely, the existence of a sequence of [elementary operation]{}s over [pseudobracket system]{}s for $W$, as specified in Lemma \[8bb\], implies, by virtue of Lemma \[7bb\], that $W \overset{{\mathcal{G}}_3} = 1$ and that there exists a reduced disk diagram ${\Delta}$ such that ${\varphi}({\partial}|_0 {\Delta}) \equiv W$ and $|{\Delta}(2)| \le n' \le n$. Hence, $W$ is a product of at most $n$ conjugates of words $(a_2 a_1^{n_1} a_2^{-1} a_1^{-n_2})^{\pm 1}$, as required.
Proof of Theorem \[thm2\]
=========================
Let the group ${\mathcal{G}}_3$ be defined by a presentation of the form $$\tag{1.4}
{\mathcal{G}}_3 := \langle \ a_1, \dots, a_m \ \| \ a_2 a_1^{n_1} a_2^{-1} = a_1^{ n_2} \, \rangle ,$$ where $n_1, n_2$ are some nonzero integers. Then both the bounded and precise word problems for presentation [pr3b]{} are in $\textsf{L}^3$ and in $\textsf{P}$. Specifically, the problems can be solved in deterministic space $O(\max(\log|W|,\log n) (\log|W|)^2)$ or in deterministic time $O(|W|^4)$.
As in the proof of Theorem \[thm1\], we start with the $\textsf{L}^3$ part of Theorem \[thm2\]. First we discuss a nondeterministic algorithm which solves the bounded word problem for presentation [pr3b]{} and which is now based on Lemma \[8bb\].
Given an input $(W, 1^n)$, where $W$ is a nonempty word (not necessarily reduced) over the alphabet ${\mathcal{A}}^{\pm 1}$ and $n \ge 0$ is an integer, written in unary notation as $1^n$, we begin with the empty [pseudobracket system]{} and nondeterministically apply a sequence of [elementary operation]{}s of size $\le( 7|W|, C(\log|W|_{\bar a_1} +1) )$, where $C = (\log \tfrac 65)^{-1}$. If such a sequence of [elementary operation]{}s results in a final [pseudobracket system]{}, consisting of a single [pseudobracket]{} $\{ (0,|W|,0,n') \}$, where $n' \le n$, then our algorithm accepts and, in view of Lemma \[8bb\], we may conclude that $W$ is a product of $n' \le n$ conjugates of the words $(a_2 a_1^{n_1} a_2^{-1} a_1^{-n_2})^{\pm 1}$. It follows from definitions and Lemma \[8bb\] that the number of [elementary operation]{}s needed for this algorithm to accept is at most $7|W|$. Hence, it follows from the definition of [elementary operation]{}s over [pseudobracket system]{}s for $W$ that the time needed to run this nondeterministic algorithm is bounded by $O(|W|)$. To estimate the space requirements of this algorithm, we note that if $b$ is a [pseudobracket]{} for $W$, then $b(1), b(2)$ are integers in the range from 0 to $|W|$, hence, when written in binary notation, will take at most $C'(\log|W| +1)$ space, where $C'$ is a constant. Since $b(3), b(4)$ are also integers that satisfy inequalities $$\begin{aligned}
\label{ineqn1}
& |b(3)| \le \tfrac 12 ( |W|_{a_1} + (|n_1| + |n_2|)n) \le |W|(|n_1| + |n_2|)(n+1) , \\ \label{ineqn2}
& 0 \le b(4) \le n ,\end{aligned}$$ and $|n_1| + |n_2|$ is a constant, it follows that the total space required to run this algorithm is at most $$\begin{aligned}
& ( 2C'(\log|W|+1) +C' \log ( |W|(|n_1| + |n_2|)(n+1) ) ) \cdot C(\log|W|_{\bar a_1} +1) + \\
& + ( C' \log ( n+1) ) \cdot C(\log|W|_{\bar a_1} +1) = O( \max (\log |W| , \log n ) \log |W| ) .\end{aligned}$$
As in the proof of Theorem \[thm1\], it now follows from Savitch’s theorem [@Sav], see also [@AB], [@PCC], that there is a deterministic algorithm that solves the bounded word problem for presentation in space $ O( \max (\log |W|, \log n)(\log |W|)^2)$.
To solve the precise word problem for presentation , assume that we are given a pair $(W, 1^n)$ and we wish to find out whether $W$ is a product of $n$ conjugates of the words $(a_2 a_1^{n_1} a_2^{-1} a_1^{-n_2})^{\pm 1}$ and $n$ is minimal with this property. Using the foregoing deterministic algorithm, we check whether the bounded word problem has a positive solution for the two pairs $(W, 1^{n-1})$ and $(W, 1^n)$ (when $n = 0$, only the pair $(W, 1^0)$ needs to be checked). It is clear that, for $n \ge 1$, the precise word problem for the pair $(W, 1^n)$ has a positive solution if and only if the bounded word problem has a negative solution for the pair $(W, 1^{n-1})$ and has a positive solution for the pair $(W, 1^n)$. Since these facts can be verified in space $ O( \max (\log |W|, \log n)(\log |W|)^2)$, we obtain a solution for the precise word problem in deterministic space $ O( \max (\log |W|, \log n)(\log |W|)^2)$, as desired.
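For illustration only, the reduction just described can be stated as a short routine. In the Python sketch below (not part of the original argument; the names are ours), `bounded_wp(W, n)` stands for the deterministic decision procedure obtained above, treated as a black box that returns `True` exactly when the bounded word problem has a positive solution for the pair $(W, 1^n)$.

```python
def precise_wp(W, n, bounded_wp):
    # True iff W is a product of n, and of no fewer than n, conjugates
    # of the words (a_2 a_1^{n_1} a_2^{-1} a_1^{-n_2})^{+-1}
    if n == 0:
        return bounded_wp(W, 0)
    return bounded_wp(W, n) and not bounded_wp(W, n - 1)
```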
Now we describe an algorithm that solves the [precise word problem]{} for presentation in polynomial time.
Recall that if $a_i \in {\mathcal{A}}$ and $U$ is a word over ${\mathcal{A}}^{\pm 1}$, then $|U|_{a_i}$ denotes the number of all occurrences of letters $a_i$, $a_i^{-1}$ in $U$ and $|U|_{\bar a_i} := |U| - |U|_{a_i}$.
\[lema1a\] Let ${\Delta}$ be a reduced [disk diagram]{} over presentation [pr3b]{}, $W \equiv {\varphi}({\partial}|_0 {\Delta})$ and $|W|_{\bar a_1} >0$. Then at least one of the following claims $\mathrm{(a)}$–$\mathrm{(b)}$ holds true.
$\mathrm{(a)}$ There is a vertex $v$ of the path $P_W$ such that if $P_W(\operatorname{\textsf{fact}}, v) = p_1p_2$ then $|{\varphi}(p_1)|_{\bar a_1} >0$, $|{\varphi}(p_2)|_{\bar a_1} >0$ and there is a special arc $s$ in ${\Delta}$ such that $s_- = {\alpha}(0)$ and $s_+ = {\alpha}(v)$.
$\mathrm{(b)}$ Suppose $P_W(\operatorname{\textsf{fact}}, v_1, v_2) = p_1p_2p_3$ is a factorization of $P_W$ such that $|{\varphi}(p_1)|_{\bar a_1} =0$, $|{\varphi}(p_3)|_{\bar a_1} = 0$, and if $q_i = {\alpha}(p_i)$, $i=1,2,3$, $e_1, e_2$ are the first, last, resp., edges of $q_2$, then neither of $e_1, e_2$ is an $a_1$-edge. Then there exists an $a_k$-band $\Gamma$ in ${\Delta}$ whose standard boundary is ${\partial}\Gamma = e_i r_i e_{3-i} r_{3-i}$, where $i=1$ if ${\varphi}(e_1) \in {\mathcal{A}}$ and $i=2$ if ${\varphi}(e_1)^{-1} \in {\mathcal{A}}$. In particular, the subdiagram ${\Delta}_1$ of ${\Delta}$ defined by ${\partial}{\Delta}_1 = q_1 r_2^{-1} q_3$, see Fig. 7.1, contains no faces and if ${\Delta}_2$ is the subdiagram of ${\Delta}$ defined by ${\partial}{\Delta}_2 = q_2 r_1^{-1}$, see Fig. 7.1, then $| {\Delta}(2) | = | \Gamma(2) | + | {\Delta}_2(2) |$ and ${\varphi}(q_2) \overset {{\mathcal{G}}_3} = a_1^\ell$, where $\ell = 0$ if $k>2$, $\ell = n_1 \ell_0$ for some integer $\ell_0$ if ${\varphi}(e_1) = a_2$, and $\ell = n_2 \ell'_0$ for some integer $\ell'_0$ if ${\varphi}(e_1) = a_2^{-1}$.
[Fig. 7.1: the $a_k$-band $\Gamma$ with the edges $e_1, e_2$ and the sides $r_1, r_2$, the boundary arcs $q_1, q_2, q_3$ of ${\Delta}$, the subdiagrams ${\Delta}_1, {\Delta}_2$, and the base vertex ${\alpha}(0)$.]
Since $|W|_{\bar a_1} >0$, it follows that there is a unique factorization $P_W = p_1p_2p_3$ such that $|{\varphi}(p_1)|_{\bar a_1} =0$, $|{\varphi}(p_3)|_{\bar a_1} = 0$, and the first and the last edges of $p_2$ are not $a_1$-edges. Denote $q_i = {\alpha}(p_i)$, $i=1,2,3$, and let $e_1, f$ be the first, last, resp., edges of $q_2$.
Let ${\varphi}(e_1)= a_k^\delta$, $\delta =\pm 1$. Since $k >1$, there is an $a_k$-band $\Gamma$ whose standard boundary is ${\partial}\Gamma = e_i r_i e_{3-i} r_{3-i}$, where $i=1$ if ${\varphi}(e_1) = a_k$ and $i=2$ if ${\varphi}(e_1)= a_k^{-1}$.
First assume that $e_2 \ne f$. By Lemma \[b1\], $r_2$ is a special arc in ${\Delta}$. Since $|{\varphi}(q_1)|_{\bar a_1} =0$, there is a special arc $r$ in ${\Delta}$ such that $r_- = {\alpha}(0)$, $r_+ = (e_1)_-$. Applying Lemma \[arc\] to special arcs $r, r_2^{-1}$, we obtain a special arc $s$ such that $s_- = {\alpha}(0)$, $s_+ = (r_2)_-$. Now we can see from $e_2 \ne f$ that claim (a) of Lemma \[lema1a\] holds for $v = (p_2)_+$.
Suppose $e_2 = f$. In view of the definition of an $a_k$-band and Lemma \[b1\], we conclude that claim (b) of Lemma \[lema1a\] holds true.
If $U \overset {{\mathcal{G}}_3} = a_1^\ell$ for some integer $\ell$, we let $\mu_3(U)$ denote the integer such that the [precise word problem]{} for presentation holds for the pair $(Ua_1^{-\ell}, \mu_3(U))$. If $U \overset {{\mathcal{G}}_3} \ne a_1^\ell$ for any $\ell$, we set $\mu_3(U) := \infty$.
\[lema2a\] Let $U$ be a word such that $U \overset {{\mathcal{G}}_3} = a_1^\ell$ for some integer $\ell$ and $| U |_{\bar a_1} >0$. Then at least one of the following claims $\mathrm{(a)}$–$\mathrm{(b)}$ holds true.
$\mathrm{(a)}$ There is a factorization $U \equiv U_1 U_2$ such that $| U_1 |_{\bar a_1}, | U_2 |_{\bar a_1} >0$, $U_1 \overset {{\mathcal{G}}_3} = a_1^{\ell_1}$, $U_2 \overset {{\mathcal{G}}_3} = a_1^{\ell_2}$ for some integers $\ell_1, \ell_2$, $\ell = \ell_1 + \ell_2$, and $\mu_3(U) = \mu_3(U_1)+ \mu_3(U_2)$.
$\mathrm{(b)}$ There is a factorization $
U \equiv U_1 a_k^{\delta} U_2 a_k^{-\delta} U_3 ,
$ where $| U_1 |_{\bar a_1} = | U_3 |_{\bar a_1} = 0$, $k \ge 2$, $\delta = \pm 1$, such that $U_2 \overset {{\mathcal{G}}_3} = a_1^{\ell_2}$ for some integer $\ell_2$ and the following is true. If $k >2$, then $\ell_2 = 0$ and $U_1 U_3 \overset {0} = a_1^{\ell}$. If $k =2$ and $\delta = 1$, then $\ell_2/n_1$ is an integer and $U_1 a_1^{\ell_2n_2/n_1} U_3 \overset {0} = a_1^{\ell}$. If $k =2$ and $\delta = -1$, then $\ell_2/n_2$ is an integer and $U_1 a_1^{\ell_2n_1/n_2} U_3 \overset {0} = a_1^{\ell}$.
Then, in case $k >2$, we have $\mu_3(U) = \mu_3(U_2)$; in case $k =2$ and $\delta = 1$, we have $\mu_3(U) = \mu_3(U_2) + |\ell_2/n_1|$; in case $k =2$ and $\delta = -1$, we have $\mu_3(U) = \mu_3(U_2) + |\ell_2/n_2|$.
Consider a [disk diagram]{} ${\Delta}$ over [pr3b]{} such that ${\varphi}({\partial}|_0 {\Delta}) \equiv U a_1^{-\ell}$ and $| {\Delta}(2)| = \mu_3(U)$. Clearly, ${\Delta}$ is reduced and Lemma \[lema1a\] can be applied to ${\Delta}$.
Assume that Lemma \[lema1a\](a) holds for ${\Delta}$. Then there is a special arc $s$ in ${\Delta}$ such that $s_- = {\alpha}(0)$ and $s_+ = (q_1)_+$, ${\partial}|_0 {\Delta}= q_1 q_2$, and $|{\varphi}(q_1)|_{\bar a_1}, |{\varphi}(q_2)|_{\bar a_1} >0$. Cutting ${\Delta}$ along the simple path $s$, we obtain two diagrams ${\Delta}_1, {\Delta}_2$ such that ${\partial}{\Delta}_1 = q_1 s^{-1}$, ${\partial}{\Delta}_2 = q_2 s$. Since $ |{\Delta}(2) | = \mu_3(U)$ and ${\varphi}(s) \overset 0 = a_1^{\ell_0}$ for some $\ell_0$, it follows that if $U_1 := {\varphi}(q_1)$, $U_2a_1^{-\ell} := {\varphi}(q_2)$, $U \equiv U_1 U_2$, then $ |{\Delta}_i(2) |= \mu_3(U_i)$, $i=1,2$. Since $|{\Delta}(2) | = |{\Delta}_1(2) |+|{\Delta}_2(2) | $, we obtain $$\mu_3(U) = |{\Delta}(2) | = |{\Delta}_1(2) |+|{\Delta}_2(2) | = \mu_3(U_1)+\mu_3(U_2) ,$$ as required.
Assuming that Lemma \[lema1a\](b) holds for ${\Delta}$, we derive claim (b) from the definitions and Lemma \[lema1a\](b).
Let $W$ be a word over ${\mathcal{A}}^{\pm 1}$ and let $| W |_{\bar a_1} >0$. As in the proof of Theorem \[thm1\], let $W(i,j)$, where $1 \le i \le |W|$ and $0 \le j \le |W|$, denote the subword of $W$ that starts with the $i$th letter of $W$ and has length $j$. If $W(i,j) \overset {{\mathcal{G}}_3} = a_1^{\ell_{ij}}$ for some integer $\ell_{ij}$, we set $\lambda(W(i,j)) := \ell_{ij}$. Otherwise, it is convenient to define $\lambda(W(i,j)) := \infty$.
We now compute the numbers $\lambda(W(i,j))$, $\mu_3(W(i,j))$ for all $i, j$ by the method of dynamic programming in which the parameter is $| W(i,j) |_{\bar a_1} \ge 0$. In other words, we compute the numbers $\lambda(W(i,j))$, $\mu_3(W(i,j))$ by induction on parameter $| W(i,j) |_{\bar a_1} \ge 0$.
To initialize, or to make the base step, we note that if $| W(i,j) |_{\bar a_1} = 0$ then $$\lambda(W(i,j)) = \ell_{ij} , \quad \mu_3(W(i,j)) = 0$$ where $W(i,j)\overset {0} = a_1^{\ell_{ij}}$.
To make the induction step, assume that the numbers $\lambda(W(i',j'))$, $\mu_3(W(i',j'))$ are already computed for all $W(i',j')$ such that $| W(i',j') |_{\bar a_1} < | W(i,j) |_{\bar a_1}$, where $W(i,j)$ is fixed and $| W(i,j) |_{\bar a_1} >0$.
Applying Lemma \[lema2a\] to the word $U \equiv W(i,j)$, we consider all factorizations of the form $$U \equiv U_1 U_2 ,$$ where $| U_1 |_{\bar a_1}, | U_2 |_{\bar a_1} >0$. Since $U_1 = W(i, j_1)$, where $j_1 = |U_1|$ and $U_2 = W(i+ j_1, j-j_1)$, it follows from $| U_1 |_{\bar a_1}, | U_2 |_{\bar a_1} < | U |_{\bar a_1}$ that the numbers $\lambda(U_1)$, $\mu_3(U_1)$, $\lambda(U_2)$, $\mu_3(U_2)$ are available by the induction hypothesis. Hence, we can compute the minimum $$\begin{gathered}
\label{minM}
M_a = \min ( \mu_3(U_1) + \mu_3(U_2) )\end{gathered}$$ over all such factorizations $U \equiv U_1 U_2$. If $M_a < \infty$ and $M_a = \mu_3(U'_1) + \mu_3(U'_2)$ for some factorization $U \equiv U'_1 U'_2$, then we set $L_a := \lambda(U'_1) + \lambda(U'_2)$. If no factorization of this form exists, or if $M_a = \infty$, we set $L_a = M_a := \infty$.
We also consider a factorization for $U = W(i,j)$ of the form $$\begin{gathered}
\label{eeqq4}
U \equiv U_1 a_k^{\delta} U_2 a_k^{-\delta} U_3 ,\end{gathered}$$ where $| U_1 |_{\bar a_1} = | U_3 |_{\bar a_1} = 0$, and $k \ge 2$, $\delta = \pm 1$.
If such a factorization [eeqq4]{} is impossible, we set $M_b = L_b = \infty$.
Assume that a factorization [eeqq4]{} exists (its uniqueness is obvious). Since $$U_2 = W(i+|U_1|+1, |U_2|) , \quad | U_2 |_{\bar a_1} < | U |_{\bar a_1} ,$$ it follows that the numbers $\lambda(U_2)$, $\mu_3(U_2)$ are available. Denote $U_1 U_3 \overset {0} = a_1^{\ell}$.
If $k >2$ and $\lambda(U_2) =0$, we set $$L_b := \ell , \quad M_b := \mu_3(U_2) .$$
If $k >2$ and $\lambda(U_2) \ne 0$, we set $L_b = M_b := \infty$.
If $k =2$ and $\delta = 1$, we check whether $\lambda(U_2)$ is divisible by $n_1$. If so, we set $$L_b := \ell + \lambda(U_2)n_2/n_1 , \quad M_b := \mu_3(U_2) + |\lambda(U_2)/n_1| .$$
If $k =2$, $\delta = 1$, and $\lambda(U_2)$ is not divisible by $n_1$, we set $L_b = M_b := \infty$.
If $k =2$ and $\delta = -1$, we check whether $\lambda(U_2)$ is divisible by $n_2$. If so, we set $$L_b := \ell + \lambda(U_2)n_1/n_2, \quad M_b := \mu_3(U_2) + |\lambda(U_2)/n_2| .$$
If $k =2$, $\delta = -1$, and $\lambda(U_2)$ is not divisible by $n_2$, we set $L_b = M_b := \infty$.
It follows from Lemma \[lema2a\] applied to the word $U = W(i,j)$ that if $U \overset {{\mathcal{G}}_3} = a_1^{\ell'}$ for some integer $\ell'$, then $\lambda(U) = \min( L_a, L_b)$ and $\mu_3(U) = \min( M_a, M_b)$. On the other hand, it is immediate that if $\lambda(U) = \infty$, then $L_a = L_b = \infty$ and $M_a = M_b = \infty$. Therefore, in all cases, we obtain that $$\lambda(U) = \min( L_a, L_b), \quad \mu_3(U) = \min( M_a, M_b) .$$ This completes our computation of the numbers $\lambda(U), \mu_3(U)$ for $U = W(i,j)$.
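For illustration only, the dynamic programming just described can be rendered as the following Python sketch (it is not part of the proof; the function and variable names are ours, subword positions are $0$-based rather than the $1$-based indices of $W(i,j)$ used above, and the bit-level cost of arithmetic is ignored). A word is encoded as a sequence of pairs $(k, e)$ standing for the letter $a_k^{e}$, $e = \pm 1$, and `float("inf")` plays the role of $\lambda = \infty$, $\mu_3 = \infty$.

```python
INF = float("inf")

def lambda_mu3(word, n1, n2):
    # Dynamic programming over subwords W(i, j), as described above, for the
    # presentation <a_1, ..., a_m || a_2 a_1^{n1} a_2^{-1} = a_1^{n2}>.
    N = len(word)

    def non_a1(u):                # |U|_{bar a_1}: number of letters different from a_1^{+-1}
        return sum(1 for (k, _) in u if k != 1)

    def a1_sum(u):                # exponent sum of a_1 (used only for words in the letters a_1^{+-1})
        return sum(e for (k, e) in u if k == 1)

    lam, mu = {}, {}              # indexed by (start position, length)
    subwords = [(i, j) for j in range(N + 1) for i in range(N - j + 1)]
    subwords.sort(key=lambda ij: non_a1(word[ij[0]:ij[0] + ij[1]]))   # induction on |U|_{bar a_1}

    for (i, j) in subwords:
        u = word[i:i + j]
        if non_a1(u) == 0:        # base of the induction
            lam[i, j], mu[i, j] = a1_sum(u), 0
            continue

        # case (a): U = U_1 U_2, both factors contain a letter different from a_1^{+-1}
        La = Ma = INF
        for j1 in range(1, j):
            if non_a1(u[:j1]) == 0 or non_a1(u[j1:]) == 0:
                continue
            m = mu[i, j1] + mu[i + j1, j - j1]
            if m < Ma:
                Ma, La = m, lam[i, j1] + lam[i + j1, j - j1]

        # case (b): U = U_1 a_k^d U_2 a_k^{-d} U_3 with U_1, U_3 powers of a_1
        Lb = Mb = INF
        first = min(p for p in range(j) if u[p][0] != 1)
        last = max(p for p in range(j) if u[p][0] != 1)
        (k, d), (k2, d2) = u[first], u[last]
        if first != last and k == k2 and d == -d2:
            lam2 = lam[i + first + 1, last - first - 1]
            mu2 = mu[i + first + 1, last - first - 1]
            ell = a1_sum(u[:first]) + a1_sum(u[last + 1:])
            if lam2 != INF:
                if k > 2 and lam2 == 0:
                    Lb, Mb = ell, mu2
                elif k == 2 and d == 1 and lam2 % n1 == 0:
                    Lb, Mb = ell + lam2 * n2 // n1, mu2 + abs(lam2 // n1)
                elif k == 2 and d == -1 and lam2 % n2 == 0:
                    Lb, Mb = ell + lam2 * n1 // n2, mu2 + abs(lam2 // n2)

        lam[i, j], mu[i, j] = min(La, Lb), min(Ma, Mb)

    return lam[0, N], mu[0, N]    # (lambda(W), mu_3(W)); both are INF when W is not ~ a power of a_1
```

In particular, when the returned value of $\lambda(W)$ is $0$, the word $W$ represents the identity of ${\mathcal{G}}_3$ and the returned value of $\mu_3(W)$ is the unique $n$ for which the precise word problem has a positive solution for the pair $(W, 1^n)$.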
It follows by the above inductive argument that, for every word $W(i,j)$ with finite $\lambda(W(i,j) )$, we have $$\begin{aligned}
\label{bnnd}
\max (| \lambda(W(i,j) ) |, \mu_3(W(i,j) ) ) \! \le \! | W(i,j) | \max(|n_1|, |n_2|)^{| W(i,j) |_{\bar a_1} } \! \le 2^{O(|W|)} .\end{aligned}$$ Hence, using binary representation for numbers $\lambda(W(i',j') ), \mu_3(W(i',j') )$, we can run the foregoing computation of numbers $\lambda(U), \mu_3(U)$ for $U = W(i,j)$ in time $O(| W(i,j)|^2)$, including division of $\lambda (U_2)$ by $n_1$ and by $n_2$ to decide whether $\lambda (U_2)/ n_1$, $\lambda (U_2)/ n_2$ are integers and including additions to compute the minimum .
Since the number of subwords $W(i,j)$ of $W$ is bounded by $O(| W|^2)$, it follows that the running time of the described algorithm, which computes the numbers $\lambda(W ), \mu_3(W)$, is $O(| W|^4)$. This implies that both the [bounded word problem]{} and the [precise word problem]{} for presentation can be solved in deterministic time $O(|W|^4)$.
Theorem \[thm2\] is proved.
Minimizing Diagrams over and Proofs of Theorem \[thm3\] and Corollary \[cor2\]
==============================================================================
Both the diagram problem and the minimal diagram problem for group presentation can be solved in deterministic space $O((\log|W|)^3)$ or in deterministic time $O(|W|^4\log|W|)$.
Furthermore, let $W$ be a word such that $W \overset {{\mathcal{G}}_2} =1$ and let $$\tau({\Delta}) = (\tau_1({\Delta}), \ldots, \tau_{s_\tau}({\Delta}))$$ be a tuple of integers, where the absolute value $| \tau_i({\Delta}) |$ of each $\tau_i({\Delta})$ represents the number of certain vertices or faces in a disk diagram ${\Delta}$ over such that ${\varphi}({\partial}{\Delta}) \equiv W$. Then, in deterministic space $O( (\log |W|)^3 )$, one can algorithmically construct such a minimal diagram ${\Delta}$ which is also smallest relative to the tuple $\tau({\Delta})$ (the tuples are ordered lexicographically).
We start by proving the space part of Theorem \[thm3\].
Let $W$ be a nonempty word over ${\mathcal{A}}^{\pm 1}$ and let $\Omega$ be a finite sequence of [elementary operation]{}s that transforms the empty [pseudobracket system]{} for $W$ into a final [pseudobracket system]{}. By Lemma \[lem7\], $W \overset {{\mathcal{G}}_2} = 1$ and there is a diagram ${\Delta}$ over such that ${\varphi}({\partial}|_{0} {\Delta}) \equiv W$. We now describe an explicit construction of such a diagram ${\Delta}= {\Delta}(\Omega)$ with property (A) based on the sequence $\Omega$. Note that this construction is based on the proof of Lemma \[lem7\] and essentially follows that proof. The question on how to construct the sequence $\Omega$ will be addressed later.
Let $B_0, B_1, \dots, B_\ell$ be the sequence of [pseudobracket system]{}s associated with $\Omega$, where $B_0$ is empty and $ B_\ell$ is final. We will construct ${\Delta}(\Omega)$ by induction on $i$. Let $B_i$ be obtained from $B_{i-1}$, $i\ge 1$, by an [elementary operation]{} $\sigma$ and let $b' \in B_i$ be the [pseudobracket]{} obtained from $b, c \in B_{i-1}$ by $\sigma$. Here one of $b, c$ (or both) may be missing.
If $\sigma$ is an addition, then we do not do anything.
Assume that $\sigma$ is an extension of type 1 or type 2 on the left and $a(b') = e_1 a(b)$, where $e_1$ is an edge of $P_W$ and ${\varphi}(e_1) = a_{b(3)}^{{\varepsilon}_1}$. Here we use the notation of the definition of an extension of type 1 or type 2 on the left. Then we construct ${\Delta}(\Omega)$ so that the edge ${\alpha}(e_1) = e$ belongs to the boundary ${\partial}{\Delta}(\Omega)$ of ${\Delta}(\Omega)$ and $e^{-1}$ belongs to ${\partial}\Pi$, where $\Pi$ is a face of ${\Delta}(\Omega)$ such that ${\varphi}( {\partial}\Pi) \equiv a_{b(3)}^{-{\varepsilon}_1b(4)}$. Note that the label ${\varphi}({\partial}\Pi)$ is uniquely determined by either of the [pseudobracket]{}s $b, b'$.
The case of an extension of type 1 or type 2 on the right is analogous.
Suppose that $\sigma$ is an extension of type 3 and $a(b') = e_1 a(b)e_2$, where $e_1, e_2$ are edges of $P_W$ with ${\varphi}(e_1) = {\varphi}(e_2)^{-1}$. Here we use the notation of the definition of an extension of type 3. Then we construct ${\Delta}(\Omega)$ so that ${\alpha}(e_1) = {\alpha}(e_2)^{-1}$ and both ${\alpha}(e_1), {\alpha}(e_2)$ belong to the boundary ${\partial}{\Delta}(\Omega)$ of ${\Delta}(\Omega)$.
If $\sigma$ is a turn or a merger, then we do not do anything to the diagram ${\Delta}(\Omega)$ under construction.
Performing these steps during the process of [elementary operation]{}s $\Omega$, we will construct, i.e., effectively describe, a required diagram ${\Delta}(\Omega)$ such that $| {\Delta}(\Omega)(2) | =b_F(6)$, where $b_F$ is the final [pseudobracket]{} of $B_\ell$, and ${\varphi}({\partial}|_{0} {\Delta}(\Omega)) \equiv W$. Note that this particular diagram ${\Delta}(\Omega)$, depending on the sequence $\Omega$, may not be minimal or even reduced, however, as in the proof of Lemma \[lem7\], our construction guarantees that ${\Delta}(\Omega)$ does have the property (A), i.e., the property that if $e$ is an edge of the boundary path ${\partial}\Pi$ of a face $\Pi$ of ${\Delta}(\Omega)$, then $e^{-1} \in {\partial}{\Delta}(\Omega)$.
We also observe that, in the proof of Lemma \[lem6\], for a given pair $(W, {\Delta}_W)$, where ${\Delta}_W$ is a diagram over such that ${\varphi}({\partial}|_{0} {\Delta}_W) \equiv W$, we constructed by induction an operational sequence $\Omega = \Omega({\Delta}_W)$ of [elementary operation]{}s that convert the empty [bracket system]{} into the final [bracket system]{} and has size bounded by $$\label{estt0}
(11|W|, C(\log |W| +1))) .$$
Recall that every [bracket system]{} is also a [pseudobracket system]{}. Looking at details of the construction of the sequence $\Omega({\Delta}_W)$, we can see that if we reconstruct a diagram ${\Delta}(\Omega({\Delta}_W))$, by utilizing our above algorithm based upon the sequence $\Omega({\Delta}_W)$, then the resulting diagram will be identical to the original diagram ${\Delta}_W$, hence, in this notation, we can write $$\label{dwd}
{\Delta}(\Omega({\Delta}_W)) = {\Delta}_W .$$
Recall that Savitch’s conversion [@Sav], see also [@AB], [@PCC], of nondeterministic computations in space $S$ and time $T$ into deterministic computations in space $O(S\log T)$ simulates possible nondeterministic computations by using tuples of configurations, also called instantaneous descriptions, of cardinality $O(\log T)$. Since a configuration takes space $S$ and the number of configurations that are kept in memory at every moment is at most $O(\log T)$, the space bound $O(S\log T)$ becomes evident. Furthermore, utilizing the same space $O(S\log T)$, one can use Savitch’s algorithm to compute an actual sequence of configurations that transform an initial configuration into a final configuration. To do this, we consider every configuration $\psi$ together with a number $k$, $0 \le k \le T$, thus replacing every configuration $\psi$ by a pair $(\psi, k)$, where $k$ plays the role of counter. We apply Savitch’s algorithm to such pairs $(\psi, k)$, so that $(\psi_2, k+1)$ is obtained from $(\psi_1, k)$ by a single operation whenever $\psi_2$ is obtained from $\psi_1$ by a single command. If $\psi$ is a final configuration, then we also assume that $(\psi, k+1)$ can be obtained from $(\psi, k)$, $0 \le k \le T-1$, by a single operation.
Now we apply Savitch’s algorithm to two pairs $(\psi_0, 0)$, $(\psi_F, T)$, where $\psi_0$ is an initial configuration and $\psi_F$ is a final configuration. If Savitch’s algorithm accepts these two pairs $(\psi_0, 0)$, $(\psi_F, T)$, i.e., the algorithm turns the pair $(\psi_0, 0)$ into $(\psi_F, T)$, then there will be a corresponding sequence $$\label{eqom1}
(\psi_0, 0), (\psi_1, 1), \dots, (\psi_i, i), \dots, (\psi_F, T)$$ of pairs such that $(\psi_i, i)$ is obtained from $(\psi_{i-1}, i-1)$, $i \ge 1$, by a single operation. Observe that we can retrieve any term, say, the $k$th term of the sequence [eqom1]{}, by rerunning Savitch’s algorithm and memorizing a current pair $(\psi, k)$ in a separate place in memory. Note that the current value of $(\psi, k)$ may change many times during computations but in the end it will be the desired pair $(\psi_k, k)$. Clearly, the space needed to run this modified Savitch’s algorithm is still $O(S\log T)$ and, consecutively outputting these pairs $(\psi_0, 0), (\psi_1, 1), \dots$, we will construct the required sequence [eqom1]{} of configurations.
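For illustration only, here is a schematic Python rendering (ours, not part of the proof) of Savitch-style reachability together with one way to output an actual sequence of configurations. The oracles `configurations()` and `moves(psi)`, which enumerate all configurations and the configurations reachable from `psi` in one step, are hypothetical placeholders, and final configurations are assumed to loop to themselves, as in the convention adopted above. The path is recovered here by recursing on midpoints, a standard alternative to the counter-based retrieval described above with the same $O(S \log T)$ space usage (up to the overhead of the host language).

```python
def can_reach(u, v, steps, configurations, moves):
    # Savitch's recursion: is v reachable from u in at most `steps` moves?
    if steps <= 1:
        return u == v or (steps == 1 and v in moves(u))
    half = steps // 2
    return any(can_reach(u, w, half, configurations, moves) and
               can_reach(w, v, steps - half, configurations, moves)
               for w in configurations())

def emit_path(u, v, steps, configurations, moves, out):
    # Append to `out` the configurations strictly between u and v on some path
    # of at most `steps` moves; assumes can_reach(u, v, steps) is True.
    if steps <= 1:
        return
    half = steps // 2
    for w in configurations():
        if (can_reach(u, w, half, configurations, moves) and
                can_reach(w, v, steps - half, configurations, moves)):
            emit_path(u, w, half, configurations, moves, out)
            out.append(w)
            emit_path(w, v, steps - half, configurations, moves, out)
            return

# usage sketch: path = [psi_0]; emit_path(psi_0, psi_F, T, configurations, moves, path); path.append(psi_F)
# (consecutive repeats, coming from the self-loops on final configurations, may simply be discarded)
```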
Coming back to our specific situation, we recall that a [pseudobracket system]{} plays the role of a configuration, the empty [pseudobracket system]{} is an initial configuration and a final [pseudobracket system]{} is a final configuration. Therefore, it follows from Lemmas \[lem6\]–\[lem7\], and the foregoing equality [dwd]{} that picking the empty [pseudobracket system]{} $B_0$ and a final [pseudobracket system]{} $\{ b_F \}$, where $b_F = (0, |W|, 0,0,0, n')$, we can use Savitch’s algorithm to verify in deterministic space $O((\log |W|)^3)$ whether $W \overset {{\mathcal{G}}_2} = 1$ and whether there is a [disk diagram]{}${\Delta}$ over such that ${\varphi}({\partial}{\Delta}) \equiv W$ and $| {\Delta}(2) | = n'$. In addition, if the algorithm accepts, then the algorithm also constructs an operational sequence $B_0, B_1, \dots, B_\ell = \{ b_F \}$ of [pseudobracket system]{}s of size bounded by which, as was discussed above, can be used to construct a [disk diagram]{} ${\Delta}_1$ with property (A) such that ${\varphi}({\partial}{\Delta}_1) \equiv W$ and $| {\Delta}_1(2) | = n'$. It is clear that the entire construction of ${\Delta}_1$ can be done in deterministic space $O((\log |W|)^3)$, as desired.
To construct a minimal disk diagram for $W$, we consecutively choose $n' =0,1,2, \dots$ and run Savitch’s algorithm to determine whether the algorithm can turn the empty [pseudobracket system]{} into a final [pseudobracket system]{} $\{ b_F \}$, where $b_F(6) = n'$. If $n_0$ is the minimal such $n'$, then we run Savitch’s algorithm again to construct an operational sequence of [pseudobracket system]{}s and a corresponding [disk diagram]{} ${\Delta}_0$ such that ${\varphi}({\partial}{\Delta}_0) \equiv W$ and $| {\Delta}_0(2) | = n_0$. Similarly to the above arguments, it follows from Lemmas \[lem6\]–\[lem7\], and from the equality ${\Delta}(\Omega({\Delta}_W)) = {\Delta}_W$, see , that ${\Delta}_0$ is a minimal diagram and the algorithm uses space $O((\log |W|)^3)$. Thus, the minimal diagram problem can be solved for presentation in deterministic space $O((\log |W|)^3)$.
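A compact sketch (ours, for illustration only) of this minimality scan could look as follows, with two hypothetical subroutines: `accepts(W, n)`, the Savitch-based test of whether the empty [pseudobracket system]{} can be turned into a final one $\{ b_F \}$ with $b_F(6) = n$, and `build_diagram(W, n)`, the rerun that extracts an operational sequence and assembles the corresponding [disk diagram]{}.

```python
def minimal_diagram(W, accepts, build_diagram):
    # scan n' = 0, 1, 2, ... for the least number of faces; the loop terminates
    # because W is assumed to satisfy W = 1 in G_2, so some n' is accepted
    n = 0
    while not accepts(W, n):
        n += 1
    return build_diagram(W, n)      # a minimal diagram with exactly n faces
```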
Now we will discuss the additional statement of Theorem \[thm3\]. For a diagram ${\Delta}$ over presentation [pr1]{} such that ${\varphi}({\partial}|_{0} {\Delta}) \equiv W$ and ${\Delta}$ has property (A), we consider a parameter $$\label{tau}
\tau({\Delta}) = ( \tau_1({\Delta}), \dots, \tau_{s_\tau}({\Delta})) ,$$ where $ \tau_i({\Delta})$ are integers that satisfy $0 \le | \tau_i({\Delta})| \le C_\tau |W|$, $C_\tau >0$ is a fixed constant, and that represent numbers of certain vertices, edges, faces of ${\Delta}$ (we may also consider some functions of the numbers of certain vertices, edges, faces of ${\Delta}$ that are computable in space $O((\log |W|)^3)$ and whose absolute values do not exceed $C_\tau |W|$).
For example, if we set $\tau_1({\Delta}) := |{\Delta}(2)|$ then we would be constructing a minimal diagram which is also smallest with respect to $\tau_2({\Delta})$, then smallest with respect to $\tau_3({\Delta})$ and so on. Let us set $\tau_2({\Delta}) := |{\Delta}(2)_{a_1^{\pm n_1}}|$, where $|{\Delta}(2)_{a_1^{\pm n_1}}|$ is the number of faces $\Pi_2$ in ${\Delta}$ such that ${\varphi}( {\partial}\Pi_2) \equiv a_1^{\pm n_1}$, $\tau_3({\Delta}) := |{\Delta}(2)_{a_2^{ n_2}}|$, where $|{\Delta}(2)_{a_2^{ n_2}}|$ is the number of faces $\Pi_3$ in ${\Delta}$ such that ${\varphi}( {\partial}\Pi_3) \equiv a_2^{ n_2}$.
We may also consider functions such as $\tau_4({\Delta}) := \sum_{ k_{4,1} \le |{\partial}\Pi | \le k_{4,2} } |{\partial}\Pi |$, where the summation takes place over all faces $\Pi$ in ${\Delta}$ such that $ k_{4,1} \le |{\partial}\Pi | \le k_{4,2}$, where $k_{4,1}, k_{4,2}$ are fixed integers with $0 \le k_{4,1} \le k_{4,2} \le |W|$. Note that the latter bound follows from Lemma \[vk2\](a).
If $v$ is a vertex of ${\Delta}$, let $\deg v$ denote the [*degree*]{} of $v$, i.e., the number of oriented edges $e$ of ${\Delta}$ such that $e_+ = v$. Also, if $v$ is a vertex of ${\Delta}$, let $\deg^F v$ denote the [*face degree*]{} of $v$, i.e., the number of faces $\Pi$ in ${\Delta}$ such that $v \in {\partial}\Pi$. Note that it follows from Lemma \[vk2\](a) that every face $\Pi$ of a [disk diagram]{} ${\Delta}$ with property (A) can contribute at most 1 to this sum.
We may also consider entries in $\tau({\Delta}) $ that count the number of vertices in ${\Delta}$ of certain degree. Note that doing this would be especially interesting and meaningful when the presentation contains no defining relations, hence, diagrams over are diagrams over the free group $F({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle $ with no relations. For instance, we may set $\tau_5({\Delta}) := |{\Delta}(0)_{\ge 3}|$, where $ |{\Delta}(0)_{\ge 3}|$ is the number of vertices in ${\Delta}$ of degree at least 3, and set $\tau_6({\Delta}) := |{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$, where $|{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$ is the number of vertices $v$ in ${\Delta}$ such that $ k_{6,1}\le \deg^F v \le k_{6,2}$, where $k_{6,1}, k_{6,2}$ are fixed integers with $0 \le k_{6,1} \le k_{6,2} \le |W|$.
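For concreteness, the purely numerical sample entries $\tau_4({\Delta}), \tau_5({\Delta}), \tau_6({\Delta})$ named above could be evaluated from a finished diagram as in the small Python sketch below (ours, for illustration only; it assumes that the multiset of boundary lengths $|{\partial}\Pi|$ of the faces and the pairs $(\deg v, \deg^F v)$ over the vertices have already been collected, which is not how the algorithm described next proceeds).

```python
def tau_examples(face_lengths, vertex_degrees, k41, k42, k61, k62):
    # face_lengths: the numbers |dPi| over all faces Pi of Delta
    # vertex_degrees: the pairs (deg v, deg^F v) over all vertices v of Delta
    tau4 = sum(l for l in face_lengths if k41 <= l <= k42)
    tau5 = sum(1 for (d, _) in vertex_degrees if d >= 3)             # |Delta(0)_{>=3}|
    tau6 = sum(1 for (_, fd) in vertex_degrees if k61 <= fd <= k62)  # |Delta(0)^F_{[k61,k62]}|
    return tau4, tau5, tau6
```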
Our goal is to make some modification of the foregoing algorithm that would enable us to construct a [disk diagram]{} ${\Delta}$ over such that ${\varphi}({\partial}|_{0} {\Delta}_W) \equiv W$, ${\Delta}$ has property (A), and ${\Delta}$ is smallest relative to the parameter $\tau({\Delta})$. Recall that the tuples $\tau({\Delta})$ are ordered in the standard lexicographical way. Note that if $\tau_1({\Delta}) = - |{\Delta}(2)|$ then ${\Delta}$ would be a diagram for $W$ with property (A) which has the maximal number of faces.
Our basic strategy remains unchanged, although we will need more complicated bookkeeping (to be introduced below). We start by picking a value for the parameter ${\widetilde}\tau = ({\widetilde}\tau_1, \dots, {\widetilde}\tau_{s_\tau} )$, where ${\widetilde}\tau_i$ are fixed integers with $|{\widetilde}\tau_i| \le C_\tau |W|$. As before, using Lemmas \[lem6\], \[lem7\], and the identity ${\Delta}(\Omega({\Delta}_W)) = {\Delta}_W$, see [dwd]{}, we can utilize Savitch’s algorithm which verifies, in deterministic space $O( (\log |W|)^3 )$, whether the empty [pseudobracket system]{} $B_0$ can be turned by [elementary operation]{}s into a final [pseudobracket system]{} $B = \{ b_F \}$ which also changes the $\tau$-parameter from $(0,0, \dots, 0)$ to ${\widetilde}\tau$. If this is possible, then the algorithm also computes an operational sequence $B_0, B_1, \dots, B_\ell = B$ of [pseudobracket system]{}s and, based on this sequence, constructs a [disk diagram]{} ${\Delta}$ such that ${\varphi}({\partial}{\Delta}) \equiv W$, ${\Delta}$ has property (A) and $\tau({\Delta}) = {\widetilde}\tau$. To make sure that ${\Delta}$ is smallest relative to the parameter $\tau({\Delta})$, we can use Savitch’s algorithm for every ${\widetilde}\tau' < {\widetilde}\tau$ to verify that the empty [pseudobracket system]{} $B_0$ may not be turned by [elementary operation]{}s into a final [pseudobracket system]{} $B = \{ b_F \}$ so that the $\tau$-parameter concurrently changes from $(0,0, \dots, 0)$ to ${\widetilde}\tau'$. This concludes the description of our algorithm for construction of a diagram ${\Delta}$ over such that ${\varphi}({\partial}|_{0} {\Delta}_W) \equiv W$, ${\Delta}$ has property (A) and ${\Delta}$ is smallest relative to the parameter $\tau({\Delta})$.
Let us describe necessary modifications in bookkeeping. First, we add six more integer entries to each [pseudobracket]{} $b$. Hence, we now have $$b= (b(1), \dots, b(6), b(7), \dots, b(12) ) ,$$ where $b(1), \dots, b(6)$ are defined exactly as before, $b(7), \dots, b(12)$ are integers such that $$\begin{aligned}
\label{cond1}
b(7)+b(8) = b(5) , \quad 0 \le b(7), b(8)\le b(5) \quad & \mbox{if}
\quad b(5) \ge 0 , \\ \label{cond2}
b(7)+b(8) = b(5) , \quad b(5) \le b(7), b(8) \le 0 \quad & \mbox{ if} \quad b(5) \le 0 .\end{aligned}$$ The numbers $b(9), \dots, b(12)$ satisfy the inequalities $$0 \le b(9), \dots, b(12) \le |W|$$ and the equalities $b(7)= b(8) = b(11)= b(12) =0$ whenever $b$ has type PB1. The purpose of these new entries, to be specified below, is to keep track of the information on degrees and face degrees of the vertices of the diagram ${\Delta}(\Omega)$ over being constructed by means of an operational sequence $\Omega$ of [elementary operation]{}s.
We now inductively describe the changes of the entries $b(7), \dots, b(12)$ and of the parameter $\tau({\Delta}) = ( \tau_1({\Delta}), \dots, \tau_{s_\tau}({\Delta}))$ that are made in the course of performing an operational sequence $\Omega$ of [elementary operation]{}s over [pseudobracket system]{}s that changes the empty [pseudobracket system]{} into a final [pseudobracket system]{} for $W$, where $W \overset {{\mathcal{G}}_2} = 1$ and $|W| >0$.
To make the inductive definitions below easier to understand, we will first make informal remarks about meaning of the new entries $b(7), \dots, b(12)$.
Suppose that $b$ is a [pseudobracket]{} of type PB1. Then $b(7)= b(8) =0$ and $b(11)=b(12)=0$. The entry $b(9)$ represents the current (or intermediate) degree $\deg v$ of the vertex $v = {\alpha}(b(1)) = {\alpha}(b(2))$ which is subject to change in process of construction of the diagram ${\Delta}(\Omega)$. The entry $b(10)$ represents the current face degree $\deg^F v$ of the vertex $v = {\alpha}(b(1)) = {\alpha}(b(2))$ which is also subject to change. For example, if a [pseudobracket]{} $b'$ is obtained from [pseudobracket]{}s $b, c$ of type PB1 by a merger, then $b'(9) :=b(9)+c(9)$ and $b'(10) :=b(10)+c(10)$. Note that $b'(9), b'(10)$ are still intermediate degrees of the vertex ${\alpha}(b'(1)) = {\alpha}(b'(2))$, i.e., $b'(9), b'(10)$ are not actual degrees of ${\alpha}(b'(1))$, because there could be more [elementary operation]{}s such as extension of type 3, mergers, and turns that could further increase $b'(9), b'(10)$.
Assume that $b$ is a [pseudobracket]{} of type PB2. Then the entry $b(7)$ represents the exponent in the power $a_{b(3)}^{b(7)} \equiv {\varphi}(u_1^{-1})$, where $u_1$ is the arc of ${\partial}\Pi$, where $\Pi$ is a face such that $(u_1)_- = {\alpha}(v)$, $(u_1)_+ = {\alpha}(b(1))$, see Fig. 8.1, and $v$ is the vertex of $P_W$ at which a turn operation was performed to get a [pseudobracket]{} $\bar b$ of type PB2 which, after a number of extensions of type 1 and mergers with [pseudobracket]{}s of type PB1, becomes $b$.
[Fig. 8.1: the face $\Pi$ with the boundary arc $u = u_2 u_1$ joining ${\alpha}(b(2))$ to ${\alpha}(b(1))$ through ${\alpha}(v)$, where ${\varphi}(u_1^{-1}) \equiv a_{b(3)}^{b(7)}$ and ${\varphi}(u_2^{-1}) \equiv a_{b(3)}^{b(8)}$.]
Similarly, the entry $b(8)$ represents the exponent in the power $a_{b(3)}^{b(8)} \equiv {\varphi}(u_2^{-1})$, where $u_2$ is the arc of ${\partial}\Pi$ such that $(u_2)_- = {\alpha}(b(2))$, $(u_2)_+ = {\alpha}(v)$, and $v$ is the vertex of $P_W$ defined as above, see Fig. 8.1. Since ${\varphi}(u^{-1}) \equiv a_{b(3)}^{b(5)}$, where $u = u_2u_1$ is the arc of ${\partial}\Pi$ defined by $u_- = {\alpha}(b(2))$, $u_+ = {\alpha}(b(1))$, see Fig. 8.1, it follows from the definitions that the conditions [cond1]{}–[cond2]{} hold true.
As in the above case of type PB1, when $b$ has type PB2 the entry $b(9)$ represents the current (or intermediate) degree $\deg v_1$ of the vertex $v_1 = {\alpha}(b(1))$ which is subject to change in the process of construction of ${\Delta}(\Omega)$. As above, the entry $b(10)$ represents the current face degree $\deg^F v_1$ of $v_1 = {\alpha}(b(1))$ which is also subject to change. The entry $b(11)$ is the intermediate degree $\deg v_2$ of the vertex $v_2 = {\alpha}(b(2))$ and $b(12)$ represents the current face degree $\deg^F v_2$ of the vertex $v_2 = {\alpha}(b(2))$.
Our description of the meaning of the new entries $b(7), \dots, b(12)$ in the thus augmented [pseudobracket]{} $b$ is complete, and we now inductively describe the changes of these new entries $b(7), \dots, b(12)$ and of the parameter $\tau({\Delta})$ in the course of performing a sequence $\Omega$ of [elementary operation]{}s over [pseudobracket system]{}s that changes the empty [pseudobracket system]{} into a final [pseudobracket system]{} for a word $W$ such that $W \overset {{\mathcal{G}}_2} = 1$ and $|W| >0$.
As above, assume that $\Omega$ is a sequence of [elementary operation]{}s that change the empty [pseudobracket system]{} into a final [pseudobracket system]{} for $W$. Let $B_0, B_1, \dots, B_\ell$ be the corresponding to $\Omega$ sequence of [pseudobracket system]{}s, where $B_0$ is empty and $B_\ell$ is final.
If $b \in B_{i+1}$, $i \ge 0$, is a starting [pseudobracket]{}, then we set $$b(7)= b(8)= b(9)= b(10)= b(11)= b(12)=0 .$$ As was mentioned above, the entries $b(1), \dots, b(6)$ in every [pseudobracket]{} $b \in \cup_{j=0}^\ell B_{j}$ are defined exactly as before.
Suppose that a [pseudobracket]{} $b' \in B_{i+1}$, $i \ge 1$, is obtained from $b \in B_{i}$ by an extension of type 1 on the left and $a(b') = e_1 a(b)$, where $a(b'), a(b)$ are the arcs of $b', b$, resp., and $e_1$ is an edge of $P_W$, ${\varphi}(e_1) = a_{b(3)}^{{\varepsilon}_1}$, ${\varepsilon}_1 = \pm 1$. Recall that both [pseudobracket]{}s $b', b$ have type PB2.
First we assume that $b(5) \ne 0$, i.e., one of $b(7), b(8)$ is different from 0. Then we set $$\begin{aligned}
\notag
& b'(7) := b(7) + {\varepsilon}_1 , \quad b'(8) := b(8) , \quad b'(9) := 2 , \\
& b'(10) := 1 , \quad b'(11) := b(11) , \quad b'(12) := b(12) .\end{aligned}$$ Note that it follows from the definitions that $|b'(7)| = |b(7)| +1$. We also update those entries in the sequence $\tau({\Delta})$ that are affected by the fact that $v_1 = {\alpha}(b(1))$ is now known to be a vertex of the diagram ${\Delta}(\Omega)$ (which is still under construction) such that $\deg v_1 = b(9)$ and $\deg^F v_1 = b(10)$.
For example, if $\tau_5({\Delta}) = |{\Delta}(0)_{\ge 3}|$, where $ |{\Delta}(0)_{\ge 3}|$ is the number of vertices in ${\Delta}$ of degree at least 3, and $b(9) \ge 3$, then $\tau_5({\Delta})$ is increased by 1. If $\tau_6({\Delta}) = |{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$, where $|{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$ is the number of vertices $u$ in ${\Delta}$ such that $ k_{6,1}\le \deg^F u \le k_{6,2}$, where $k_{6,1}, k_{6,2}$ are fixed integers with $0 \le k_{6,1} \le k_{6,2} \le |W|$, and if $\deg^F {\alpha}(b(1)) = b(10)$ satisfies $k_{6,1} \le b(10) \le k_{6,2}$, then we increase $\tau_6({\Delta})$ by 1.
Suppose that $b(5) = 0$, i.e., $b(7)= b(8) =0$. Then we set $$\begin{aligned}
\notag
& b'(7) := {\varepsilon}_1 , \quad b'(8) := 0 , \quad b'(9) := 2 , \\
& b'(10) := 1 , \quad b'(11) := b(9) , \quad b'(12) := b(10) .\end{aligned}$$ These formulas, see also [eqq5]{}–[eqq6]{} and other formulas below, reflect the convention that, for a [pseudobracket]{} $b$ of type PB2, the information about edge and face degrees of the vertex ${\alpha}(b(2))$ is stored in components $b(9), b(10)$, resp., whenever ${\alpha}(b(1)) = {\alpha}(b(2))$, i.e., $b(5)=0$. However, when ${\alpha}(b(1)) \ne {\alpha}(b(2))$, i.e., $b(5) \ne 0$, this information for the vertex ${\alpha}(b(2))$ is separately kept in components $b(11), b(12)$, whereas this information for the vertex ${\alpha}(b(1))$ is stored in components $b(9), b(10)$. In the current case $b(5) = 0$, no change to $\tau({\Delta})$ is done.
The case of an extension of type 1 on the left is complete.
Assume that a [pseudobracket]{} $b' \in B_{i+1}$, $i \ge 1$, is obtained from $b \in B_{i}$ by an extension of type 1 on the right. This case is similar to its “left” analogue studied above but is not quite symmetric, because of the way we keep information about degrees of vertices, so we write down all the necessary formulas. As above, both [pseudobracket]{}s $b', b$ have type PB2.
Let $a(b') = a(b)e_2$, where $a(b'), a(b)$ are the arcs of $b', b$, resp., and $e_2$ is an edge of $P_W$, ${\varphi}(e_2) = a_{b(3)}^{{\varepsilon}_2}$, ${\varepsilon}_2 = \pm 1$. If $b(5) \ne 0$, then we define $$\begin{aligned}
\label{eqq5}
& b'(7) := b(7) , \quad b'(8) := b(8) + {\varepsilon}_2 , \quad b'(9) := b(9) , \\ \label{eqq6}
& b'(10) := b(10) , \quad b'(11) := 2 , \quad b'(12) := 1 .\end{aligned}$$ Note that it follows from the definitions that $|b'(8)| = |b(8)| +1$. We also update those entries in the sequence $\tau({\Delta})$ that are affected by the fact that $v_2 = {\alpha}(b(2))$ is a vertex of ${\Delta}(\Omega)$ (which is still under construction) such that $\deg v_2 = b(11)$ and $\deg^F v_2 = b(12)$.
For example, if $\tau_5({\Delta}) = |{\Delta}(0)_{\ge 3}|$ and $b(11) \ge 3$, then $\tau_5({\Delta})$ is increased by 1. If $\tau_6({\Delta}) = |{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$ and if $\deg^F {\alpha}(b(2)) = b(12)$ satisfies the inequalities $k_{6,1} \le b(12) \le k_{6,2}$, then we increase $\tau_6({\Delta})$ by 1.
If $b(5) = 0$, then, to define $b'$, we again use formulas [eqq5]{}–[eqq6]{} but we do not make any changes to $\tau({\Delta})$.
The case of an extension of type 1 on the right is complete.
Suppose that a [pseudobracket]{} $b' \in B_{i+1}$, $i \ge 1$, is obtained from $b \in B_{i}$ by an extension of type 2 on the left. Denote $a(b') = e_1 a(b)$, where $a(b'), a(b)$ are the arcs of $b', b$, resp., and $e_1$ is an edge of $P_W$, ${\varphi}(e_1) = a_{b(3)}^{{\varepsilon}_1}$, ${\varepsilon}_1 = \pm 1$. Recall that $b$ has type PB2 and $b'$ has type PB1.
First we assume that $b(5) \ne 0$, i.e., one of $b(7), b(8)$ is different from 0. Then we set $$\begin{aligned}
& b'(7) := 0 , \quad b'(8) := 0 , \quad b'(9) := b(11) , \\
& b'(10) := b(12) , \quad b'(11) := 0 , \quad b'(12) := 0 .\end{aligned}$$ We also update the sequence $\tau({\Delta})$ according to the information that $v_1 = {\alpha}(b(1))$ is a vertex of ${\Delta}(\Omega)$ (which is still under construction) such that $\deg v_1 = b(9)$, $\deg^F v_1 = b(10)$, and that ${\Delta}(\Omega)$ contains a face $\Pi_b$ such that ${\varphi}({\partial}\Pi_b) \equiv a_{b(3)}^{-{\varepsilon}_1 b(4)}$.
For example, if $\tau_5({\Delta}) = |{\Delta}(0)_{\ge 3}|$ and $b(9) \ge 3$, then $\tau_5({\Delta})$ is increased by 1. If $\tau_6({\Delta}) = |{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$ and if $\deg^F {\alpha}(b(1)) = b(10)$ satisfies $k_{6,1} \le b(10) \le k_{6,2}$, then we increase $\tau_6({\Delta})$ by 1. If $\tau_2({\Delta}) = |{\Delta}(2)_{a_1^{\pm n_1}}|$, where $|{\Delta}(2)_{a_1^{\pm n_1}}|$ is the number of faces $\Pi_2$ in ${\Delta}$ such that ${\varphi}( {\partial}\Pi_2) \equiv a_1^{\pm n_1}$, and $b(3) = 1$, $b(4) = n_1$, then we increase $\tau_2({\Delta})$ by 1. If $\tau_3({\Delta}) = |{\Delta}(2)_{a_2^{ n_2}}|$, where $|{\Delta}(2)_{a_2^{ n_2}}|$ is the number of faces $\Pi_3$ in ${\Delta}$ such that ${\varphi}( {\partial}\Pi_3) \equiv a_2^{ n_2}$, and $b(3) = 2$, $-{\varepsilon}_1 b(4) = n_2$, then we increase $\tau_3({\Delta})$ by 1. If $\tau_4({\Delta}) = \sum_{ k_{4,1} \le |{\partial}\Pi | \le k_{4,2} } |{\partial}\Pi |$, where the summation takes place over all faces $\Pi$ in ${\Delta}$ such that $ k_{4,1} \le |{\partial}\Pi | \le k_{4,2}$, where $k_{4,1}, k_{4,2}$ are fixed integers with $0 \le k_{4,1} \le k_{4,2} \le |W|$, and $k_{4,1} \le b(4) \le k_{4,2}$, then we increase $\tau_4({\Delta})$ by 1.
Now assume that $b(5) = 0$, i.e., $b(7)= b(8)=0$, and so $b(4) =1$. Then we set $$\begin{aligned}
& b'(7) := 0 , \quad b'(8) := 0 , \quad b'(9) := b(9) , \\
& b'(10) := b(10) , \quad b'(11) := 0 , \quad b'(12) := 0 .\end{aligned}$$ We also update the tuple $\tau({\Delta})$ according to the information that ${\Delta}(\Omega)$ contains a face $\Pi_b$ such that ${\varphi}({\partial}\Pi_b) \equiv a_{b(3)}^{-{\varepsilon}_1}$.
The case of an extension of type 2 on the left is complete.
Suppose that a [pseudobracket]{} $b' \in B_{i+1}$, $i \ge 1$, is obtained from $b \in B_{i}$ by an extension of type 2 on the right. This case is similar to its “left” analogue discussed above but is not quite symmetric, so we write down all the necessary formulas.
Denote $a(b') = a(b)e_2$, where $a(b'), a(b)$ are the arcs of $b', b$, resp., and $e_2$ is an edge of $P_W$, ${\varphi}(e_2) = a_{b(3)}^{{\varepsilon}_2}$, ${\varepsilon}_2 = \pm 1$. Recall that $b$ has type PB2 and $b'$ has type PB1.
First we assume that $b(5) \ne 0$. Then we set $$\begin{aligned}
\label{eqq1}
& b'(7) := 0 , \quad b'(8) := 0 , \quad b'(9) := b(9) , \\ \label{eqq2}
& b'(10) := b(10) , \quad b'(11) := 0 , \quad b'(12) := 0 .\end{aligned}$$ We also update the sequence $\tau({\Delta})$ according to the information that $v_2 = {\alpha}(b(2))$ is a vertex of ${\Delta}(\Omega)$ (which is still under construction) such that $\deg v_2 = b(11)$, $\deg^F v_2 = b(12)$, and that ${\Delta}(\Omega)$ contains a face $\Pi_b$ such that ${\varphi}({\partial}\Pi_b) \equiv a_{b(3)}^{-{\varepsilon}_2 b(4)}$.
For example, if $\tau_5({\Delta}) = |{\Delta}(0)_{\ge 3}|$ and $b(11) \ge 3$, then $\tau_5({\Delta})$ is increased by 1. If $\tau_6({\Delta}) = |{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$ and if $\deg^F {\alpha}(b(2)) = b(12)$ satisfies $k_{6,1} \le b(12) \le k_{6,2}$, then we increase $\tau_6({\Delta})$ by 1. If $\tau_2({\Delta}) = |{\Delta}(2)_{a_1^{\pm n_1}}|$, as above, and $b(3) = 1$, $b(4) = n_1$, then we increase $\tau_2({\Delta})$ by 1. If $\tau_3({\Delta}) = |{\Delta}(2)_{a_2^{ n_2}}|$, as above, and $b(3) = 2$, $-{\varepsilon}_2 b(4) = n_2$, then we increase $\tau_3({\Delta})$ by 1. If $\tau_4({\Delta}) = \sum_{ k_{4,1} \le |{\partial}\Pi | \le k_{4,2} } |{\partial}\Pi |$, as above, and $k_{4,1} \le b(4) \le k_{4,2}$, then we increase $\tau_4({\Delta})$ by 1.
Now we assume that $b(5) = 0$, i.e., $b(7)= b(8)=0$, and so $b(4) =1$. Then, to define $b'$, we again use formulas [eqq1]{}–[eqq2]{}. We also update the tuple $\tau({\Delta})$ according to the information that ${\Delta}(\Omega)$ contains a face $\Pi_b$ with ${\varphi}({\partial}\Pi_b) \equiv a_{b(3)}^{-{\varepsilon}_2}$.
The case of an extension of type 2 on the right is complete.
Suppose that a [pseudobracket]{} $b' \in B_{i+1}$, $i \ge 1$, results from $b \in B_{i}$ by an extension of type 3. Denote $a(b') = e_1 a(b)e_2$, where $a(b'), a(b)$ are the arcs of $b', b$, resp., and $e_1, e_2$ are edges of $P_W$, ${\varphi}(e_1) = {\varphi}(e_2)^{-1} = a_{j}^{{\varepsilon}}$, ${\varepsilon}= \pm 1$. Recall that both $b$ and $b'$ have type PB1. Then we set $$\begin{aligned}
& b'(7) := 0 , \quad b'(8) := 0 , \quad b'(9) := 1 , \\
& b'(10) := 0 , \quad b'(11) := 0 , \quad b'(12) := 0 .\end{aligned}$$ We also update the tuple $\tau({\Delta})$ according to the information that $v= {\alpha}(b(1)) = {\alpha}(b(2))$ is a vertex of ${\Delta}(\Omega)$ (which is still under construction) such that $\deg v = b(9)+1$ and $\deg^F v = b(10)$.
For example, if $\tau_5({\Delta}) = |{\Delta}(0)_{\ge 3}|$ and $b(9) \ge 3$, then $\tau_5({\Delta})$ is increased by 1. If $\tau_6({\Delta}) = |{\Delta}(0)^F_{[k_{6,1}, k_{6,2} ]}|$ and if $\deg^F v = b(10)$ satisfies $k_{6,1} \le b(10) \le k_{6,2}$, then we increase $\tau_6({\Delta})$ by 1.
The case of an extension of type 3 is complete.
Assume that a [pseudobracket]{} $b' \in B_{i+1}$, $i \ge 1$, is obtained from $b \in B_{i}$ by a turn operation. Recall that $b$ has type PB1 and $b'$ has type PB2. Then we set $$\begin{aligned}
& b'(7) := 0 , \quad b'(8) := 0 , \quad b'(9) := b(9)+2 , \\
& b'(10) := b(10)+1 , \quad b'(11) := 0 , \quad b'(12) := 0 .\end{aligned}$$
No change over $\tau({\Delta})$ is necessary under a turn operation. The case of a turn operation is complete.
Suppose that a [pseudobracket]{} $b' \in B_{i+1}$, $i \ge 1$, is obtained from $b, c \in B_{i}$ by a merger operation. Without loss of generality, we assume that the [pseudobracket]{}s $b, c$ satisfy the condition $b(2)=c(1)$, i.e., $b$ is on the left of $c$. Recall that one of $b, c$ must have type PB1 and the other one has type PB1 or PB2. Consider three cases corresponding to the types of the [pseudobracket]{}s $b, c$.
First assume that both $b, c$ have type PB1. Then we set $$\begin{aligned}
& b'(7) := 0 , \quad b'(8) := 0 , \quad b'(9) := b(9) + c(9) , \\
& b'(10) := b(10) + c(10) , \quad b'(11) := 0 , \quad b'(12) := 0 .\end{aligned}$$ No change over $\tau({\Delta})$ is made.
Assume that $b$ has type PB1 and $c$ has type PB2. Then, keeping in mind that $b(2)=c(1)$, we set $$\begin{aligned}
& b'(7) := c(7) , \quad b'(8) := c(8) , \quad b'(9) := b(9) + c(9) , \\
& b'(10) := b(10) + c(10) , \quad b'(11) := c(11) , \quad b'(12) :=c(12) .\end{aligned}$$ As above, no change over $\tau({\Delta})$ is necessary.
Assume that $b$ has type PB2 and $c$ has type PB1. If $b(5) \ne 0$, then we set $$\begin{aligned}
& b'(7) := b(7) , \quad b'(8) := b(8) , \quad b'(9) := b(9) , \\
& b'(10) := b(10) , \quad b'(11) := b(11) + c(9) , \quad b'(12) := b(12)+ c(10) .\end{aligned}$$
On the other hand, if $b(5) = 0$, i.e., $b(7) = b(8) = 0$, then we set $$\begin{aligned}
& b'(7) := 0 , \quad b'(8) := 0 , \quad b'(9) := b(9)+ c(9) , \\
& b'(10) := b(10)+ c(10) , \quad b'(11) := 0 , \quad b'(12) := 0 .\end{aligned}$$ As before, no change over $\tau({\Delta})$ is made under a merger operation.
The case of a merger operation is complete.
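The merger bookkeeping for the extended entries can be transcribed directly. In the Python fragment below (ours, for illustration only), a [pseudobracket]{} is modelled as a dictionary holding its type and its entries; the entries $b'(1), \dots, b'(6)$, which are defined exactly as before, are omitted.

```python
def merge_extended_entries(b, c):
    # assumes b(2) = c(1), i.e., b is on the left of c; at least one of b, c has type PB1
    new = {}
    if b["type"] == "PB1" and c["type"] == "PB1":
        new[7], new[8], new[11], new[12] = 0, 0, 0, 0
        new[9], new[10] = b[9] + c[9], b[10] + c[10]
    elif b["type"] == "PB1" and c["type"] == "PB2":
        new[7], new[8], new[11], new[12] = c[7], c[8], c[11], c[12]
        new[9], new[10] = b[9] + c[9], b[10] + c[10]
    elif b["type"] == "PB2" and c["type"] == "PB1":
        if b[5] != 0:
            new[7], new[8] = b[7], b[8]
            new[9], new[10] = b[9], b[10]
            new[11], new[12] = b[11] + c[9], b[12] + c[10]
        else:                                # b(5) = 0, hence b(7) = b(8) = 0
            new[7], new[8], new[11], new[12] = 0, 0, 0, 0
            new[9], new[10] = b[9] + c[9], b[10] + c[10]
    return new                               # the tuple tau(Delta) is not changed by a merger
```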
Our inductive definitions of extended [pseudobracket]{}s and modifications of $\tau({\Delta})$ are complete. To summarize, we conclude that the described changes to the extended [pseudobracket]{}s and to the tuple $\tau({\Delta})$ will guarantee that, if our nondeterministic algorithm, which is based on Lemma \[lem8\] and which follows a sequence $\Omega$ of [elementary operation]{}s as above, accepts a pair of configurations $\psi_0$, $\psi_F$, where $\psi_0$ is the empty [pseudobracket system]{} and $\psi_F$ is a final [pseudobracket system]{}, then the final tuple $\tau({\Delta})$, i.e., the tuple associated with $\psi_F$, will be equal to the tuple $\tau({\Delta}(\Omega))$ which represents the tuple of actual parameters $\tau_1({\Delta}(\Omega)), \dots, \tau_{s_\tau}({\Delta}(\Omega))$ of the diagram ${\Delta}(\Omega)$.
Consider augmented configurations $\bar \psi = (B, \tau({\Delta}))$, corresponding to a sequence $\Omega$ of [elementary operation]{}s as above, where $B$ is a system of extended [pseudobracket]{}s and $\tau({\Delta})$ is the tuple associated with $B$. Recall that, by Lemma \[lem8\], we may assume that $\Omega$ has size bounded by $(11 |W|, C (\log |W| +1) )$. As in the proof of Theorem \[thm1\], we note that the entries $b(1), \dots, |b(5)|, b(6)$ are nonnegative and bounded by $\max(|W|, m)$. It follows from the definitions and Lemma \[vk2\](a) that for the new entries $b(7), \dots, b(12)$, we have that $|b(7)|, |b(8)| \le |b(5)| \le |W|$ and $$\begin{aligned}
0 \le b(9), b(10), b(11), b(12) \le 2 |W| .\end{aligned}$$ By the definition, every entry $\tau_i({\Delta})$ in $\tau({\Delta})$ satisfies $| \tau_i ({\Delta}) | \le C_\tau |W|$, $i = 1, \dots, s_\tau$. Since $s_\tau$, $C_\tau$ are constants, we conclude that the space needed to store an augmented configuration $\bar \psi = (B, \tau({\Delta}))$ is $O((\log |W|)^2)$. Therefore, utilizing Savitch’s algorithm as before, which now applies to augmented configurations of the form $\bar \psi = (B, \tau({\Delta}))$, we will be able to find out, in deterministic space $O((\log |W|)^3)$, whether the algorithm accepts a pair $(B_0, \tau^0({\Delta}))$, $(B_F, \tau^F({\Delta}))$, where $B_0$ is empty, $\tau^0({\Delta})$ consists of all zeros, $B_F$ is a final [pseudobracket system]{}, and $\tau^F({\Delta})$ is a final tuple corresponding to $B_F$. Since the space needed to store a final configuration $(B_F, \tau^F({\Delta}))$ is $O(\log |W|)$, we will be able to compute, in deterministic space $O((\log |W|)^3)$, a lexicographically smallest tuple ${\widehat}\tau^F({\Delta})$ relative to the property that the pair of augmented configurations $(B_0, \tau^0({\Delta}))$, $({\widehat}B_F, {\widehat}\tau^F({\Delta}))$, where ${\widehat}B_F$ is some final [pseudobracket system]{}, is accepted by Savitch’s algorithm. Now we can do the same counter trick as in the beginning of this proof, see [eqom1]{}, to compute, in deterministic space $O((\log |W|)^3)$, a sequence $\Omega^*$ of [elementary operation]{}s and the corresponding to $\Omega^*$ sequence of augmented configurations which turns $(B_0, \tau^0({\Delta}))$ into $({\widehat}B_F, {\widehat}\tau^F({\Delta}))$ and which is constructed by Savitch’s algorithm. Finally, as in the beginning of the proof of Theorem \[thm3\], using the sequence $\Omega^*$, we can construct a desired diagram ${\Delta}^* = {\Delta}(\Omega^*)$ so that $\tau({\Delta}^*) = {\widehat}\tau^F({\Delta})$ and this construction can be done in deterministic space $O((\log |W|)^3)$. This completes the proof of the space part of Theorem \[thm3\].
To prove the time part of Theorem \[thm3\], we review the proof of the $\mathsf P$ part of Theorem \[thm1\]. We observe that our arguments enable us, concurrently with the computation of the number $\mu_2(W[i,j,k,l])$ for every parameterized word $W[i,j,k,l] \in \mathcal S_2(W)$ such that $W(i,j)a_k^\ell \overset {{\mathcal{G}}_2} = 1$, to inductively construct a minimal diagram ${\Delta}[i,j,k,l]$ over [pr1]{} such that ${\varphi}( {\partial}|_0 {\Delta}[i,j,k,l]) \equiv W(i,j)a_k^\ell$. Details of this construction are straightforward in each of the subcases considered in the proof of the $\mathsf P$ part of Theorem \[thm1\]. Note that the time needed to run this extended algorithm is still $O( |W|^4 \log |W| )$.
Theorem \[thm3\] is proven.
There is a deterministic algorithm that, for a given word $W$ over the alphabet ${\mathcal{A}}^{\pm 1}$ such that $W \overset{{\mathcal{F}}({\mathcal{A}})}{=} 1$, where ${\mathcal{F}}({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle$ is the free group over ${\mathcal{A}}$, constructs a pattern of cancellations of letters in $W$ that result in the empty word, and the algorithm operates in space $O( (\log |W|)^3 )$.
Furthermore, let ${\Delta}$ be a disk diagram over ${\mathcal{F}}({\mathcal{A}}) $ that corresponds to a pattern of cancellations of letters in $W$, i.e., ${\varphi}({\partial}{\Delta}) \equiv W$, and let $$\tau({\Delta}) = (\tau_1({\Delta}), \ldots, \tau_{s_\tau}({\Delta}))$$ be a tuple of integers, where the absolute value $| \tau_i({\Delta}) |$ of each $\tau_i({\Delta})$ represents the number of vertices in ${\Delta}$ of certain degree. Then, also in deterministic space $O( (\log |W|)^3 )$, one can algorithmically construct such a diagram ${\Delta}$ which is smallest relative to the tuple $\tau({\Delta})$.
Corollary \[cor2\] is immediate from Theorem \[thm3\] and does not need a separate proof. Nevertheless, it is worth mentioning that, in the case of presentation ${\mathcal{F}}({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle$ of the free group ${\mathcal{F}}({\mathcal{A}})$ over ${\mathcal{A}}$, our definitions of brackets, [pseudobracket]{}s, [elementary operation]{}s and subsequent arguments become significantly simpler. Since there are no relations, we do not define brackets of type B2, nor do we define [pseudobracket]{}s of type PB2. In particular, for every bracket or [pseudobracket]{} $b$, the entries $b(3), b(4), b(5)$ are always zero and can be disregarded. In the extended version of a [pseudobracket]{} $b$, defined for minimization of ${\Delta}$ relative to $\tau({\Delta})$, the entries $b(7), b(8)$, $b(10), b(12)$ are also always zero and can be disregarded. Furthermore, there is no need to consider extensions of types 1 and 2 or turn operations. Hence, in this case, we only need [elementary operation]{}s which are additions, extensions of type 3 and mergers over brackets of type B1, [pseudobracket]{}s of type PB1, and over their systems.
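For comparison with the logspace construction provided by Corollary \[cor2\], the obvious linear-space way to produce a pattern of cancellations is the standard stack-based free reduction, sketched below in Python (ours, for illustration only; letters are encoded as pairs $(k, e)$ for $a_k^{e}$, and the output is the list of pairs of positions of letters that cancel each other).

```python
def cancellation_pattern(word):
    stack, pairs = [], []                        # the stack holds pairs (position, letter)
    for pos, (k, e) in enumerate(word):
        if stack and stack[-1][1] == (k, -e):    # the two letters become adjacent and cancel
            prev_pos, _ = stack.pop()
            pairs.append((prev_pos, pos))
        else:
            stack.append((pos, (k, e)))
    if stack:
        raise ValueError("W is not equal to 1 in the free group F(A)")
    return pairs
```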
Construction of Minimal Diagrams over and Proof of Theorem \[thm4\]
====================================================================
Suppose that $W$ is a word over the alphabet ${\mathcal{A}}^{\pm 1}$ such that the bounded word problem for presentation holds for the pair $(W, n)$. Then a minimal diagram ${\Delta}$ over such that ${\varphi}( {\partial}{\Delta}) \equiv W$ can be algorithmically constructed in deterministic space $O( \max(\log |W|, \log n)(\log |W|)^2)$ or in deterministic time $O( |W|^4)$.
In addition, if $|n_1 | = |n_2 |$ in , then the minimal diagram problem for presentation can be solved in deterministic space $O( (\log |W|)^3 )$ or in deterministic time $O( |W|^3\log |W|)$.
First we prove the space part of Theorem \[thm4\].
Let $W$ be a nonempty word over ${\mathcal{A}}^{\pm 1}$ such that $W \overset{{\mathcal{G}}_3}{=} 1$, where ${\mathcal{G}}_3$ is defined by presentation , and there is a [disk diagram]{} ${\Delta}$ over such that ${\varphi}({\partial}{\Delta}) \equiv W$ and $|{\Delta}(2)| \le n$, i.e., the bounded word problem has a positive solution for the pair $(W, n)$.
It follows from Lemma \[8bb\] that there is a finite sequence $\Omega$ of [elementary operation]{}s such that $\Omega$ converts the empty [pseudobracket system]{} for $W$ into a final one and $\Omega$ has other properties stated in Lemma \[8bb\]. As in the proof of Theorem \[thm2\], Lemma \[8bb\] gives us a nondeterministic algorithm which runs in time $O(|W|)$ and space $O(\max(\log|W|,\log n) \log|W|)$ and which accepts a word $W$ over ${\mathcal{A}}^{\pm 1}$ [if and only if]{} the bounded word problem has a positive solution for the pair $(W, n)$. Note that here the big-$O$ constants can be written down explicitly (see the proof of Theorem \[thm2\]).
Furthermore, as in the proof of Theorem \[thm2\], using Savitch’s theorem [@Sav], see also [@AB], [@PCC], we obtain a deterministic algorithm which runs in space $$\label{sp}
O(\max(\log|W|,\log n) (\log|W|)^2) ,$$ and which computes a minimal integer $n(W)$, $0 \le n(W) \le n$, such that there is a [disk diagram]{} ${\Delta}$ over so that ${\varphi}({\partial}{\Delta}) \equiv W$ and $|{\Delta}(2)| = n(W)$. To do this, we can check by Savitch’s algorithm whether the empty [pseudobracket system]{} $B_0$ can be transformed by [elementary operation]{}s into a final [pseudobracket system]{} $\{ b_F \}$, where $b_F(4) = n'$ for $n' = 0,1,2,\dots, n$.
Without loss of generality, we may assume that $n(W) >0$ because if $n(W)=0$ then Corollary \[cor2\] yields the desired result.
Having found this number $n(W) \ge 1$ in deterministic space , we will run Savitch’s algorithm again for the pair $B_0$ and $\{ b^*_F \}$, where $b^*_F = (0, |W|, 0, n(W))$, and use the counter trick, as in the proof of Theorem \[thm3\], to compute an instance of a sequence $\Omega$ of [elementary operation]{}s and the corresponding to $\Omega$ sequence $B_0, B_1, \dots, B_\ell = \{ b^*_F \}$ of [pseudobracket system]{}s. After computing these sequences $\Omega$ and $B_0, B_1, \dots, B_\ell = \{ b^*_F \}$, our algorithm halts. Denote this modification of Savitch’s algorithm by ${\mathfrak{A}}_{n}$.
Denote $$\Omega = (\omega_1, \dots, \omega_\ell),$$ where $\omega_1, \dots, \omega_\ell$ are [elementary operation]{}s and, as above, let $B_0, B_1, \dots, B_\ell$ be the corresponding to $\Omega$ sequence of [pseudobracket system]{}s so that $B_j$ is obtained by application of $\omega_j$ to $B_{j-1}$, so $B_j = \omega_j(B_{j-1})$. We also let $$(\chi_1, \dots, \chi_{\ell_2} )$$ denote the subsequence of $\Omega$ that consists of all extensions of type 2. Also, let $c_i \in B_{j_{i}-1}$ denote the [pseudobracket]{} to which the [elementary operation]{} $\chi_i$ applies and let $d_i\in B_{j_{i}}$ denote the [pseudobracket]{} obtained from $c_i$ by application of $\chi_i$, so $d_i = \chi_i(c_i)$.
According to the proof of Lemma \[7bb\], the sequence $\Omega$ or, equivalently, the sequence $B_0, B_1, \dots, B_\ell$, defines a [disk diagram]{} ${\Delta}(\Omega)$ which can be inductively constructed as in the proof of Claim (D1). Furthermore, according to the proof of Claim (D1), all faces of ${\Delta}(\Omega)$ are contained in $\ell_2$ $a_2$-bands $\Gamma_1, \dots, \Gamma_{\ell_2}$ which are in bijective correspondence with [elementary operation]{}s $\chi_1, \dots, \chi_{\ell_2}$ so that $\Gamma_i$ corresponds to $\chi_i$, $i =1, \dots, \ell_2$. Denote $$\label{bgi}
{\partial}\Gamma_i = f_i t_i g_i u_i ,$$ where $f_i, g_i$ are edges of ${\partial}{\Delta}(\Omega)$, ${\varphi}(f_i) = {\varphi}(g_i)^{-1} \in \{ a_2^{\pm 1} \}$, and $t_i, u_i$ are simple paths whose labels are powers of $a_1$. Hence, $$\sum_{i=1}^{\ell_2} | \Gamma_i(2) | = | {\Delta}(\Omega)(2) | = n(W) .$$
It follows from the definitions that if $a(c_i)$, $a(d_i)$ are the arcs of the [pseudobracket]{}s $c_i, d_i$, resp., and $e_{1,i} a(c_i) e_{2,i}$ is a subpath of the path $P_W$, where $e_{1,i}, e_{2,i}$ are edges of $P_W$, then, renaming $f_i \leftrightarrows g_i$, $t_i \leftrightarrows u_i$ if necessary, we have the following equalities $$\begin{gathered}
{\alpha}(e_{1,i}) = f_i, \ \ {\alpha}(e_{2,i}) = g_i, \ \ {\varphi}(f_{i}) = {\varphi}(g_i)^{-1} = a_2^{{\varepsilon}_i} , \ \ {\varepsilon}_i = \pm 1 , \\
{\varphi}(t_{i}) \equiv a_1^{c_i(3)}, \quad {\varphi}(u_{i}) \equiv a_1^{-d_i(3)} , \quad {\alpha}(a(c_i)) = t_i,
\quad {\alpha}(a(d_i)) = u_i^{-1}\end{gathered}$$ for every $i = 1, \dots, \ell_2$, see Fig. 9.1.
Fig. 9.1 shows the $a_2$-band $\Gamma_i$ with its boundary edges $f_i = {\alpha}(e_{1,i})$ and $g_i = {\alpha}(e_{2,i})$ and its sides $t_i = {\alpha}(a(c_i))$ and $u_i = {\alpha}(a(d_i))^{-1}$.
Note that each of these $a_2$-bands $\Gamma_1, \dots, \Gamma_{\ell_2}$ can be constructed, as a part of the sequence $\Omega$, in deterministic space [sp]{} by running the algorithm $\mathfrak{A}_{n}$. For instance, if we wish to retrieve information about the diagram $\Gamma_i$, we would be looking for the $i$th extension of type 2 in the sequence $\Omega$, denoted above by $\chi_i$. We also remark that the parameters $(c_i, {\varepsilon}_i)$, associated with the [elementary operation]{} $\chi_i$, contain all the information about the diagram $\Gamma_i$.
Observe that we are not able to keep all these pairs $(c_i, {\varepsilon}_i)$, $i =1, \dots, \ell_2$, in our intermediate computations aimed at constructing ${\Delta}(\Omega)$ in polylogarithmic space, because doing this would take polynomial space. However, we can reuse space, so we keep one pair $(c_i, {\varepsilon}_i)$, or a few pairs, in memory at any given time and, when we need a different pair $(c_j, {\varepsilon}_j)$, we erase $(c_i, {\varepsilon}_i)$ and compute the new pair $(c_j, {\varepsilon}_j)$ by running the algorithm $\mathfrak{A}_{n}$ as discussed above.
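The following toy Python sketch illustrates this recompute-instead-of-store pattern. The generator `run_of_A_n` is only a stand-in for a run of $\mathfrak{A}_{n}$ restricted to its extensions of type 2 (here it simply fabricates pairs); the point is that only the pair currently needed is ever held in memory, and any other pair is obtained by rerunning the computation.

```python
def run_of_A_n():
    """Stand-in for a run of the algorithm A_n: yields, one by one, the
    parameters (c_i, eps_i) of the extensions of type 2 in the sequence
    Omega.  The pairs below are fabricated for illustration only."""
    for i in range(1, 8):
        yield ("c_%d" % i, (-1) ** i)

def pair_for_band(i):
    """Return (c_i, eps_i) for the a_2-band Gamma_i by rerunning the
    computation from the beginning; nothing but the current pair is stored."""
    for k, pair in enumerate(run_of_A_n(), start=1):
        if k == i:
            return pair
    raise IndexError("fewer than %d extensions of type 2" % i)

# Whenever a different band is needed, the previous pair is forgotten and
# the new one is recomputed on demand.
print(pair_for_band(3))   # ('c_3', -1)
print(pair_for_band(6))   # ('c_6', 1)
```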
We can also output all these pairs $(c_i, {\varepsilon}_i)$, $i =1, \dots, \ell_2$, as a part of our description of the [disk diagram]{} ${\Delta}(\Omega)$ still under construction.
Thus, in deterministic space [sp]{}, we have obtained the information about $a_2$-bands $\Gamma_1,
\dots, \Gamma_{\ell_2}$ of ${\Delta}(\Omega)$ that contain all of the faces of ${\Delta}(\Omega)$ and it remains to describe, working in space [sp]{}, how the edges of $({\partial}{\Delta}(\Omega))^{\pm 1}$ and those of $({\partial}\Gamma_1)^{\pm 1},
\dots, ({\partial}\Gamma_{\ell_2})^{\pm 1}$ are attached to each other.
Observe that by taking the subdiagrams $\Gamma_1, \dots, \Gamma_{\ell_2}$ out of ${\Delta}(\Omega)$, we will produce $\ell_2 +1$ connected components ${\Delta}_1, \dots, {\Delta}_{\ell_2+1}$ which are [disk diagram]{}s with no faces, i.e., ${\Delta}_1, \dots, {\Delta}_{\ell_2+1}$ are [disk diagram]{}s over the free group $F({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle $. Note that the boundary of each [disk diagram]{} ${\Delta}_i$ has a natural factorization $$\label{Dbi}
{\partial}{\Delta}_i = q_{1,i} r_{1,i} \ldots q_{k_i,i} r_{k_i,i} ,$$ where every $q_{j,i} $ is a subpath of the cyclic path ${\partial}{\Delta}(\Omega)$, perhaps, $|q_{j,i}|=0$, and every $r_{j,i} $ is one of the paths $t_1^{-1}, u_1^{-1}, \dots, t_{\ell_2}^{-1}, u_{\ell_2}^{-1}$, $|r_{j,i} | > 0$, see Fig. 9.2.
(0,0) circle (3); plot\[smooth, tension=.7\] coordinates [(-2.65,-1.4) (-2,0) (-2.8,1.1) ]{}; plot\[smooth, tension=.7\] coordinates [(-2.15,-2.1) (-1,0) (-2.3,1.9)]{}; plot\[smooth, tension=.7\] coordinates [(-1.1,2.8)(0,1.8) (1.1,2.8)]{}; plot\[smooth, tension=.7\] coordinates [(-1.8,2.4)(0,.9) (1.8,2.4)]{}; plot\[smooth, tension=.7\] coordinates [(2.65,-1.4) (2,0) (2.8,1.1) ]{}; plot\[smooth, tension=.7\] coordinates [(2.15,-2.1) (1,0) (2.3,1.9)]{}; (.1,-3) –(-.1,-3); (-.1,3) –(.1,3); (-.1,.9) –(.1,.9); (.06,1.8) –(-.06,1.8); (-3.,-.10) –(-3,.1); (-2,.10) –(-2,-.1); (2.03,2.2) –(2.13,2.1); (-1,-.1) –(-1,.1); (1,.1) –(1,-.1); (2,-.1) –(2,.1); (3,.1) –(3,-.1); (-2.12,2.1) –(-2.05,2.19); at (3.6,0) [$q_{1,2}$]{}; at (2.5,0) [$\Delta_2$]{}; at (1.55,0) [$r_{1,2}$]{}; at (.54,0) [$r_{2,4}$]{}; at (-3.6,0) [$q_{1,3}$]{}; at (-2.5,0) [$\Delta_3$]{}; at (-1.55,0) [$r_{1,3}$]{}; at (-.54,0) [$r_{3,4}$]{}; at (-2.46,2.4) [$q_{1,4}$]{}; at (2.46,2.4) [$q_{2,4}$]{}; at (0,3.34) [$q_{1,1}$]{}; at (0,2.4) [$\Delta_1$]{}; at (0,1.5) [$r_{1,1}$]{}; at (0,.5) [$r_{1,4}$]{}; at (0,-2.6) [$q_{3,4}$]{}; at (-3.6,0) [$q_{1,3}$]{}; at (-2.5,0) [$\Delta_3$]{}; at (-1.55,0) [$r_{1,3}$]{}; at (0,-1.2) [$\Delta_4$]{}; at (0,-3.7) [Fig. 9.2]{}; at (-.9,2) [$\Gamma_2$]{}; at (-1.8,-0.9) [$\Gamma_1$]{}; at (2,0.9) [$\Gamma_3$]{}; at (-2.5,-2.7) [$\Delta$]{};
It follows from the definitions and Lemma \[8bb\] that $$\label{deli}
\sum_{i=1}^{\ell_2} | {\partial}{\Delta}_i | < | {\partial}{\Delta}(\Omega) | +
| {\Delta}(\Omega)(2) |( |n_1| + |n_2|) = O(\max ( |W|, n(W) )) ,$$ as $| {\Delta}(\Omega)(2) | = n(W)$ and $| {\partial}{\Delta}(\Omega)| = |W|$.
Suppose that a vertex $v \in P_W$ is given, $0 \le v \le |P_W| = |W|$. Then there is a unique [disk diagram]{} ${\Delta}_{i_v}$ among ${\Delta}_1, \dots, {\Delta}_{\ell_2+1}$ such that ${\alpha}(v) \in {\partial}{\Delta}_{i_v}$. We now describe an algorithm $\mathfrak{B}_{v}$ that, for an arbitrary edge $e$ of the boundary ${\partial}{\Delta}_{i_v}$, computes in deterministic space [sp]{} the label ${\varphi}(e)$ of $e$ and the unique location of $e$ in one of the paths ${\partial}|_0 {\Delta}(\Omega)$, $t_1^{-1}, u_1^{-1}, \dots, t_{\ell_2}^{-1}, u_{\ell_2}^{-1}$. To do this, we will go around the boundary path ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$, starting at the vertex ${\alpha}(v)$. Initializing the parameters $v^* , d^*$, we set $$v^* := v, \qquad d^* := 1 .$$
If $e$ is the $k$th edge of ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$, $1 \le k \le |{\partial}{\Delta}_{i_v}|$, then we also say that $e$ is the edge of ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$ [*with number*]{} $k$.
Let $e_c$ denote the edge of $P_W$ such that $(e_c)_- = v^*$ if $v^* < |W|$ and $(e_c)_- = 0$ if $v^* = |W|$. We now consider three possible Cases 1–3.
Case 1. Assume that ${\alpha}(e_c) = f_i$ for some $i = 1, \dots, \ell_2$, see the definition [bgi]{} of ${\partial}\Gamma_i$. Note that such an equality ${\alpha}(e_c) = f_i$ can be verified in space [sp]{} by checking, one by one, all $a_2$-bands $\Gamma_1, \dots, \Gamma_{\ell_2}$.
If $v^* = v$, then the first $|t_i|$ edges of the boundary path $${\partial}|_{{\alpha}(v)} {\Delta}_{i_v} = t_i^{-1} \ldots$$ are the edges of the path $t_i^{-1}$, see Fig. 9.3(a).
In the general case when $v^*$ is arbitrary, we obtain that the edges of the boundary path ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$ with numbers $d^*, \dots, d^*+ |t_i|-1$ are consecutive edges of the path $t_i^{-1}$ starting from the first one.
Let $v' \in P_W$ denote the vertex such that ${\alpha}(v') = (t_i)_-$, see Fig. 9.3(a). Also, denote $d' := d^* + |t_i|$.
Case 2. Assume that ${\alpha}(e_c) = g_i$ for some $i = 1, \dots, \ell_2$, see [bgi]{}. As in Case 1, we can verify this equality in space [sp]{}.
If $v^* = v$, then the first $|u_i|$ edges of the boundary path $${\partial}|_{{\alpha}(v)} {\Delta}_{i_v} = u_i^{-1} \ldots$$ are the edges of the path $u_i^{-1}$, see Fig. 9.3(b).
In the general case when $v^*$ is arbitrary, we obtain that the edges of the boundary path ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$ with numbers $d^*, \dots, d^*+ |u_i|-1$ are consecutive edges of the path $u_i^{-1}$ starting from the first one.
Let $v' \in P_W$ denote the vertex such that ${\alpha}(v') = (u_i)_-$, see Fig. 9.3(b). Also, denote $d' := d^* + |u_i|$.
Fig. 9.3(a) (Case 1) and Fig. 9.3(b) (Case 2) show the $a_2$-band $\Gamma_i$ with the edges $f_i, g_i$ and the sides $t_i, u_i$, the relevant part of ${\partial}{\Delta}$, the subdiagram ${\Delta}_{i_v}$ (on the $t_i$ side in Case 1 and on the $u_i$ side in Case 2), and the vertices ${\alpha}(v^*)$ and ${\alpha}(v')$.
Case 3. Suppose that ${\alpha}(e_c)$ is not one of the edges $f_i$, $g_i$ of $a_2$-bands $\Gamma_1,
\dots, \Gamma_{\ell_2}$. As above, we can verify this claim in space [sp]{}.
If $v^* = v$, then the first edge of the boundary path $${\partial}|_{{\alpha}(v)} {\Delta}_{i_v} = {\alpha}(e_c) \ldots$$ is ${\alpha}(e_c)$.
In the general case when $v^*$ is arbitrary, we have that the edge of the boundary path ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$ with number $d^*$ is ${\alpha}(e_c)$, see Fig. 9.4.
Denote $v':= (e_c)_-$ and let $d' := d^* + 1$, see Fig. 9.4.
Fig. 9.4 (Case 3) shows the edge ${\alpha}(e_c)$ on ${\partial}{\Delta}$, the vertices ${\alpha}(v^*)$ and ${\alpha}(v')$, the subdiagram ${\Delta}_{i_v}$, and an $a_2$-band $\Gamma_j$.
The foregoing mutually exclusive Cases 1–3 describe one cycle of the algorithm $\mathfrak{B}_{v}$, including the first cycle, when $v^* = v$; each cycle is finished by the reassignments $v^* := v'$ and $d^* := d'$.
We now repeat the above cycle with the new values of the parameters $v^*, d^*$, whose storage, along with storage of the original vertex $v$, requires additional $$O(\max(\log n, \log |W|))$$ space, as follows from the definitions and inequality [deli]{}. We keep repeating this cycle until, at a certain point, the vertex $v^*$ is again $v$, which means that all the edges of the path ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$ have been considered; we are then back at $v$ and we stop. Note that $ d^* = |{\partial}|_{{\alpha}(v)} {\Delta}_{i_v}|$ when we stop.
We remark that, when running the algorithm $\mathfrak{B}_{v}$, we can stop at once and abort the computation whenever we encounter a value of the parameter $v^*$ that is less than $v$, because, in this case, the [disk diagram]{} ${\Delta}_{i_v}$ which contains the vertex $v$ is identical to ${\Delta}_{i_{v^*}}$ and the information about the edges of ${\Delta}_{i_{v^*}}$ can be obtained by running the algorithm $\mathfrak{B}_{v^*}$ instead. Thus, either the algorithm $\mathfrak{B}_{v}$ aborts at some point or $\mathfrak{B}_{v}$ performs several cycles and stops when $v = v^*$ and $d^* >1$, in which case we say that the algorithm $\mathfrak{B}_{v}$ [*accepts*]{} the diagram ${\Delta}_{i_v}$.
By running the algorithms $\mathfrak{B}_{v}$ consecutively for $v=0, 1, \dots$, with possible stops as described above, we will get the information about the edges of all [disk diagram]{}s ${\Delta}_1, \dots, {\Delta}_{\ell_2+1}$ so that for every $j = 1, \dots, \ell_2+1$, there will be a unique vertex $v(j) \in P_W$ such that ${\Delta}_j = {\Delta}_{i_{v(j)}}$ and the algorithm $\mathfrak{B}_{v(j)}$ accepts ${\Delta}_j$. Recall that, in the proof of Theorem \[thm1\], see also Corollary \[cor2\], we constructed a deterministic algorithm $\mathfrak{C}$ such that, when given a word $U$ such that $U = 1$ in the free group $F({\mathcal{A}})$, the algorithm $\mathfrak{C}$ outputs a diagram ${\Delta}_U$ over $F({\mathcal{A}}) = \langle {\mathcal{A}}\ \| \ \varnothing \rangle $ such that ${\varphi}({\partial}|_{0} {\Delta}_U) \equiv U$ and the algorithm $\mathfrak{C}$ operates in space $O((\log |U|)^3)$. Next, we observe that our ability to deterministically compute, in space [sp]{}, the label ${\varphi}(e)$ of every edge $e$ of the boundary path ${\partial}|_{{\alpha}(v)} {\Delta}_{i_v}$, as well as the location of $e$ in one of the paths ${\partial}|_0 {\Delta}(\Omega)$, $t_1^{-1}, u_1^{-1}, \dots, t_{\ell_2}^{-1}, u_{\ell_2}^{-1}$, combined with the algorithm $\mathfrak{C}$, means that we can construct a [disk diagram]{} ${\widetilde}{\Delta}_{i_v}$ over $F({\mathcal{A}})$ such that ${\varphi}({\partial}{\widetilde}{\Delta}_{i_v} ) \equiv {\varphi}({\partial}{\Delta}_{i_v} )$ in deterministic space $$O( (\log |{\partial}{\Delta}_{i_v}|)^3 ) = O( (\max(\log |W|, \log n(W)))^3 ) ,$$ see [deli]{}. Replacing the [disk diagram]{}s ${\Delta}_1, \dots, {\Delta}_{\ell_2+1}$ in ${\Delta}(\Omega)$ with their substitutes ${\widetilde}{\Delta}_1, \dots, {\widetilde}{\Delta}_{\ell_2+1}$, we will obtain a disk diagram ${\widetilde}{\Delta}(\Omega)$ over such that $${\varphi}({\partial}{\widetilde}{\Delta}(\Omega)) \equiv {\varphi}({\partial}{\Delta}(\Omega)) , \qquad | {\widetilde}{\Delta}(\Omega)(2)| = | {\Delta}(\Omega)(2)| = n(W) ,$$ i.e., ${\widetilde}{\Delta}(\Omega)$ is as good as ${\Delta}(\Omega)$.
Since the [disk diagram]{}s $ {\widetilde}{\Delta}_1, \dots, {\widetilde}{\Delta}_{\ell_2+1}$ along with $a_2$-bands $\Gamma_1, \dots, \Gamma_{\ell_2}$ constitute the entire diagram $ {\widetilde}{\Delta}(\Omega)$, our construction of ${\widetilde}{\Delta}(\Omega)$, performed in space [sp]{}, is now complete.
It remains to prove the additional claim of Theorem \[thm4\]. We start with the following.
\[91\] Suppose ${\Delta}_0$ is a reduced [disk diagram]{} over , where $|n_1| = |n_2|$. Then the number $| {\Delta}_0(2)|$ of faces in ${\Delta}_0$ satisfies $| {\Delta}_0(2)| \le \tfrac {1}{4|n_1|} |{\partial}{\Delta}_0|^2$.
Denote $n_0 := |n_1|$ and set $n_2 = \kappa n_1$, where $\kappa = \pm 1$. Consider the presentation $$\label{pr3bac}
{\mathcal{G}}_4 = \langle \, {\mathcal{A}}, b_1 , \dots, b_{n_0-1} \, \| \, a_2 a_1 b_1^{-1} = a_1^{\kappa},
b_1 a_1 b_2^{-1} = a_1^{\kappa}, \ldots, b_{n_0-1} a_1 a_2^{-1} = a_1^{\kappa}
\rangle$$ that has $n_0$ relations which are obtained by splitting the relation $ a_2 a_1^{n_1} a_2^{-1} = a_1^{\kappa n_1}$ of into $n_0$ “square” relations, see Fig. 9.5 where the case $n_0 = 3$ is depicted.
Fig. 9.5 depicts the case $n_0 = 3$: the three square relations correspond to the three squares of the strip, whose top horizontal edges are labeled $a_1$, whose bottom horizontal edges are labeled $a_1^{\kappa}$, and whose vertical edges are labeled $a_2, b_1, b_2, a_2$ from left to right.
Note that there is a natural isomorphism $\psi : {\mathcal{G}}_4 \to {\mathcal{G}}_3$ defined by the map $\psi(a) = a$ for $a \in {\mathcal{A}}$, and $\psi(b_j) = a_1^{-\kappa j} a_2 a_1^j$ for $j =1, \dots, n_0-1$.
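For instance, the images under $\psi$ of the defining relations of ${\mathcal{G}}_4$ hold in ${\mathcal{G}}_3$: already in the free group one computes $$\psi(b_j)\, a_1 \, \psi(b_{j+1})^{-1} = a_1^{-\kappa j} a_2 a_1^{j} \cdot a_1 \cdot a_1^{-(j+1)} a_2^{-1} a_1^{\kappa (j+1)} = a_1^{\kappa}$$ and $a_2 a_1 \psi(b_1)^{-1} = a_1^{\kappa}$, while $\psi(b_{n_0-1}) a_1 a_2^{-1} = a_1^{-\kappa (n_0-1)} a_2 a_1^{n_0} a_2^{-1}$ is equal to $a_1^{\kappa}$ in ${\mathcal{G}}_3$ in view of the relation $a_2 a_1^{n_1} a_2^{-1} = a_1^{\kappa n_1}$.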
If ${\Delta}_0$ is a reduced [disk diagram]{} over , then we can split faces of ${\Delta}_0$ into “square” faces over , as in Fig. 9.5, and obtain a reduced [disk diagram]{} ${\Delta}_{0, b}$ over such that $$\label{d0b}
{\varphi}({\partial}{\Delta}_{0, b} ) \equiv {\varphi}({\partial}{\Delta}_0) , \ \ \ | {\Delta}_{0, b}(2)| = n_0 | {\Delta}_{0}(2)| .$$
Let ${\Delta}$ be an arbitrary reduced [disk diagram]{} over presentation .
We modify the definition of an $a_i$-band, given in Sect. 6 for diagrams over presentation , so that it becomes suitable for diagrams over .
Two faces $\Pi_1, \Pi_2$ in a disk diagram ${\Delta}$ over are called [*$j$-related*]{}, where $j =1,2$, denoted $\Pi_1 \leftrightarrow_j \Pi_2$, if there is an edge $e$ such that $e \in {\partial}\Pi_1$, $e^{- 1} \in {\partial}\Pi_2$, and ${\varphi}(e) = a_1^{\pm 1}$ if $j =1$ or ${\varphi}(e) \in \{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $j =2$. As before, we consider a minimal equivalence relation, denoted $\sim_j$, on the set of faces of ${\Delta}$ generated by the relation $\leftrightarrow_j$.
An [*$a_i$-band*]{}, where $i \ge 1$ is now arbitrary, is a minimal subcomplex $\Gamma$ of ${\Delta}$ that contains an edge $e$ such that ${\varphi}(e)=a_i^{\pm 1}$ if $i \ne 2$ or ${\varphi}(e) \in \{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $i =2$ and $\Gamma$ has the following property. If there is a face $\Pi$ in ${\Delta}$ such that $e \in ({\partial}\Pi)^{\pm 1}$, then $\Gamma$ must contain all faces of the equivalence class $[\Pi]_{\sim_i}$ of $\Pi$. As before, this definition implies that an $a_i$-band $\Gamma$ is either a subcomplex consisting of a single nonoriented edge $\{ f, f^{- 1} \}$, where ${\varphi}(f)=a_i^{\pm 1}$ if $i \ne 2$ or ${\varphi}(f) \in \{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $i =2$ and $f, f^{- 1} \in {\partial}{\Delta}$, or $\Gamma$ consists of all faces of an equivalence class $[\Pi]_{\sim_i}$, here $i =1,2$.
If an $a_i$-band $\Gamma$ contains faces, then $\Gamma$ is called [*essential*]{}.
If an $a_i$-band $\Gamma$ is essential but $\Gamma$ contains no face whose boundary contains an edge $f$ such that $f^{-1} \in {\partial}{\Delta}$ and ${\varphi}(f)=a_1^{\pm 1}$ if $i =1$ or ${\varphi}(f) \in \{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $i =2$, then $\Gamma$ is called a [*closed*]{} $a_i$-band. It follows from the definitions that if $\Gamma$ is an essential $a_i$-band, then $i=1,2$.
If $\Gamma$ is an essential $a_i$-band and $\Pi_1, \dots, \Pi_k$ are all faces of $\Gamma$, we consider a simple arc or a simple curve $c(\Gamma)$ in the interior of $\Gamma$ such that the intersection $c(\Gamma) \cap \Pi_j$ for every $j = 1, \dots, k$, is a properly embedded arc in $\Pi_j$ whose boundary points belong to the interior of different edges of ${\partial}\Pi_j$ whose ${\varphi}$-labels are in $ \{ a_1^{\pm 1} \}$ if $i =1$ or in $ \{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $i =2$. This arc or curve $c(\Gamma)$ will be called a [*connecting line*]{} of $\Gamma$.
Note that if $\Gamma$ contains a face $\Pi$ whose boundary has an edge $f$ such that $f^{-1} \in {\partial}{\Delta}$ and ${\varphi}(f)=a_1^{\pm 1}$ if $i =1$ or ${\varphi}(f) \in \{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $i =2$, then a connecting line $c(\Gamma)$ of $\Gamma$ connects two points ${\partial}c(\Gamma)$ on the boundary ${\partial}{\Delta}$. On the other hand, if $\Gamma$ contains no face $\Pi$ as above, then $c(\Gamma)$ is a closed simple curve, in which case $\Gamma$ is called a [*closed*]{} $a_i$-band.
If $\Pi$ is a face in ${\Delta}$ over , then there are exactly two bands, $a_1$-band and $a_2$-band, denoted $\Gamma_{\Pi, 1}$, $\Gamma_{\Pi, 2}$, resp., whose connecting lines $c(\Gamma_{\Pi, 1})$, $c(\Gamma_{\Pi, 2})$ pass through $\Pi$. Without loss of generality, we may assume that the intersection $$c(\Gamma_{\Pi, 1}) \cap c(\Gamma_{\Pi, 2}) \cap \Pi$$ consists of a single (piercing) point. The following lemma has similarities with Lemma \[b1\].
\[c1\] Suppose that ${\Delta}$ is a reduced disk diagram over presentation [pr3bac]{}. Then there are no closed $a_i$-bands in ${\Delta}$ and every $a_i$-band $\Gamma$ of ${\Delta}$ is a disk subdiagram of ${\Delta}$ such that $${\partial}|_{(f_1)_-} \Gamma = f_1 s_1 f_2 s_2 ,$$ where $f_1, f_2$ are edges of ${\partial}{\Delta}$ such that $$\begin{aligned}
{\varphi}(f_1), {\varphi}(f_2) & \in \{ a_i^{\pm 1} \} \ \ \mbox{if} \ i \ne 2 , \\
{\varphi}(f_1), {\varphi}(f_2) & \in \{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \} \ \ \mbox{if} \ i =2,\end{aligned}$$ $s_1, s_2$ are simple paths such that $|s_1| = |s_2| = |\Gamma(2)|$, and ${\partial}\Gamma$ is a reduced simple closed path when $|\Gamma(2)|>0$. In addition, if $\Gamma_1$ and $\Gamma_2$ are essential $a_1$-band and $a_2$-band, resp., in ${\Delta}$, then their connecting lines $c(\Gamma_{1}), c(\Gamma_{2})$ intersect in at most one point.
Since ${\Delta}$ is reduced, it follows that if $\Pi_1 \leftrightarrow_j \Pi_2$, $j=1,2$, then $\Pi_1$ is not a mirror copy of $\Pi_2$, i.e., ${\varphi}(\Pi_1 ) \not\equiv {\varphi}(\Pi_2 )^{-1}$. This remark, together with the definition of an $a_i$-band and defining relations of presentation [pr3bac]{}, implies that if $\Pi_1 \leftrightarrow_j \Pi_2$ then the faces $\Pi_1, \Pi_2$ share exactly one nonoriented edge. This, in particular, implies $|s_1| = |s_2| = |\Gamma(2)|$ if $\Gamma$ is an $a_i$-band such that ${\partial}\Gamma = f_1 s_1 f_2 s_2$ as in the statement of Lemma \[c1\].
Assume, on the contrary, that there is an essential $a_i$-band $\Gamma_0$ in ${\Delta}$ such that either $\Gamma_0$ is closed or ${\partial}\Gamma_0 = f_1 s_1 f_2 s_2$ as above, and one of the paths $s_1, s_2$ is not simple. Then there is a disk subdiagram ${\Delta}_2$ of ${\Delta}$ bounded by edges of ${\partial}\Gamma_0$ such that ${\varphi}( {\partial}{\Delta}_2)$ is a nonempty reduced word over the alphabet $\{ a_1^{\pm 1} \}$ if $i=2$ or over the alphabet $\{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $i =1$. Pick such diagrams $\Gamma_0$ and ${\Delta}_2$ so that $|{\Delta}_2(2)|$ is minimal. Since ${\Delta}_2$ contains faces and ${\varphi}({\partial}{\Delta}_2)$ contains no letters either from $\{ a_1^{\pm 1} \}$ if $i=1$ or from $\{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$ if $i=2$, it follows that every $a_{3-i}$-band in ${\Delta}_2$ is closed and bounds a diagram ${\Delta}'_2$, similar to ${\Delta}_2$, such that $|{\Delta}'_2(2)|< |{\Delta}_2(2)|$. This contradiction proves that an $a_i$-band $\Gamma_0$ with the above properties does not exist.
To prove the additional claim of the lemma, suppose, on the contrary, that there are essential $a_1$- and $a_2$-bands $\Gamma_1$ and $\Gamma_2$, resp., in ${\Delta}$ such that the intersection $c(\Gamma_{1})\cap c(\Gamma_{2})$ of their connecting lines $c(\Gamma_{1}), c(\Gamma_{2})$ contains at least two points. We pick two such points that are consecutive along $c(\Gamma_{1})$ and $c(\Gamma_{2})$, and let $\Pi_1, \Pi_2$ be the faces that contain these two points. Then there exists a disk subdiagram ${\Delta}_1$ in ${\Delta}$ such that ${\partial}{\Delta}_1 = u_1u_2$, where $u_1^{-1}$ is a subpath of ${\partial}\Gamma_{1}$, $|u_1| \ge 0$ and ${\varphi}(u_1)$ is a reduced or empty word over $\{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$, while $u_2^{-1}$ is a subpath of ${\partial}\Gamma_{2}$, $|u_2| \ge 0$ and ${\varphi}(u_2)$ is a reduced or empty word over $\{ a_1^{\pm 1} \}$, see Fig. 9.6. The equality $|u_1| + |u_2| =0$ implies that the faces $\Pi_1, \Pi_2$ form a reducible pair. This contradiction to ${\Delta}$ being reduced proves that $|u_1|+ |u_2| >0$, whence $|{\Delta}_1(2)|>0$.
Fig. 9.6 shows the faces $\Pi_1$ and $\Pi_2$, the bands $\Gamma_1$ and $\Gamma_2$, and the subdiagram ${\Delta}_1$ with ${\partial}{\Delta}_1 = u_1 u_2$.
We now pick a disk subdiagram ${\Delta}'_1$ in ${\Delta}$ such that $|{\Delta}'_1(2)|$ is minimal, $${\partial}{\Delta}'_1 = u'_1 u'_2 , \quad |u'_1|+ |u'_2| >0 ,$$ ${\varphi}(u_1')$ is a reduced or empty word over the alphabet $\{ a_2^{\pm 1}, b_1^{\pm 1}, \dots, b_{n_0-1}^{\pm 1} \}$, and ${\varphi}(u'_2)$ is a reduced or empty word over $\{ a_1^{\pm 1} \}$. Consider an $a_1$-band $\Gamma_3$ in ${\Delta}'_1$ if $|u'_2| >0$ or consider an $a_2$-band $\Gamma_4$ in ${\Delta}'_1$ if $|u'_2| =0$ and $|u'_1| >0$ such that a connecting line of $\Gamma_j$, $j=3,4$, connects points on $u'_2$ if $j =3$ or on $u'_1$ if $j =4$, see Fig. 9.7. It is easy to check that, taking $\Gamma_3$, or $\Gamma_4$, out of ${\Delta}'_1$, we obtain two disk subdiagrams ${\Delta}'_{1,1}, {\Delta}'_{1,2}$, one of which has the above properties of ${\Delta}'_1$ and fewer faces, see Fig. 9.7. This contradiction to the minimality of ${\Delta}'_1$ completes the proof of Lemma \[c1\].
Fig. 9.7 shows the subdiagram ${\Delta}'_1$ with ${\partial}{\Delta}'_1 = u'_1 u'_2$, a band $\Gamma_3$ in ${\Delta}'_1$, and the two subdiagrams ${\Delta}'_{1,1}$ and ${\Delta}'_{1,2}$ obtained by taking $\Gamma_3$ out of ${\Delta}'_1$.
Coming back to the diagrams ${\Delta}_{0}$ and ${\Delta}_{0,b}$, see [d0b]{}, we can see from Lemma \[c1\] that the number of $a_1$-bands in ${\Delta}_{0,b}$ is at most $\tfrac 12 |{\partial}{\Delta}_{0,b} |_{a_1} \le \tfrac 12 |{\partial}{\Delta}_{0} |$. Similarly, the number of $a_2$-bands in ${\Delta}_{0,b}$ is at most $\tfrac 12 |{\partial}{\Delta}_{0,b} | = \tfrac 12 |{\partial}{\Delta}_{0} |$. It follows from the definitions and Lemma \[c1\] that the number $|{\Delta}_{0,b}(2) |$ is equal to the number of intersections of connecting lines of $a_1$-bands and $a_2$-bands in ${\Delta}_{0,b}$. Hence, referring to Lemma \[c1\] again, we obtain that $|{\Delta}_{0,b}(2)| \le (\tfrac 12 |{\partial}{\Delta}_{0}| )^2 $. In view of [d0b]{}, we finally have $$|{\Delta}_{0}(2)| = \tfrac{1}{n_0} |{\Delta}_{0,b}(2)| \le \tfrac {1}{4n_0} |{\partial}{\Delta}_{0}|^2 ,$$ as desired. The proof of Lemma \[91\] is complete.
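For example, if ${\Delta}_0$ is the [disk diagram]{} consisting of a single face whose boundary label is $a_2 a_1^{n_1} a_2^{-1} a_1^{-\kappa n_1}$, then $|{\partial}{\Delta}_0| = 2 + 2n_0$, $|{\Delta}_0(2)| = 1$, and the bound of Lemma \[91\] reads $1 \le \tfrac {1}{4 n_0} (2+2n_0)^2 = \tfrac{(1+n_0)^2}{n_0}$.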
Suppose ${\Delta}_0$ is a reduced diagram over presentation [pr3b]{}, where $|n_1| = |n_2|$. It follows from Lemma \[91\] that $|{\Delta}_{0}(2)| \le \tfrac {1}{4n_0} |{\partial}{\Delta}_{0}|^2$. This inequality means that the space bound [sp]{} becomes $O((\log |W|)^3)$ as $n(W) \le \tfrac {1}{4n_0} |{\partial}{\Delta}_0 |^2 = \tfrac {1}{4n_0} |W|^2$. The space part of Theorem \[thm4\] is proved.
To prove the time part of Theorem \[thm4\], we review the proof of $\mathsf P$ part of Theorem \[thm2\]. We observe that our arguments enable us, concurrently with computation of numbers $\lambda(W(i,j))$, $\mu_3(W(i,j))$ for every parameterized subword $W(i,j)$ of $W$ such that $\lambda(W(i,j)) = \ell < \infty$, to inductively construct a minimal diagram ${\Delta}(i,j)$ over [pr3b]{} such that ${\varphi}( {\partial}|_0 {\Delta}(i,j)) \equiv W(i,j)a_1^{-\ell}$. Details of this construction are straightforward in each of the subcases considered in the proof of $\mathsf P$ part of Theorem \[thm2\]. Note that this modified algorithm can still be run in time $O( |W|^4)$ for the general case.
If, in addition, $|n_1| = |n_2|$ then the inequality can be improved to $$| \lambda(W(i,j) ) | \le | W(i,j) |, \quad \mu_3(W(i,j) ) \le | W(i,j) |^2 .$$ Hence, using, as before, binary representation for numbers $\lambda(W(i',j') )$, $\mu_3(W(i',j') )$, we can run inductive computation of numbers $\lambda(U)$, $\mu_3(U)$ for $U = W(i,j)$ and construction of a diagram ${\Delta}(i,j)$ whenever it exists for given $i,j$, in time $$O(| W(i,j)|\log | W(i,j)|) .$$ This improves the bound for running time of our modified algorithm, that computes the numbers $\lambda(W ), \mu_3(W)$ and constructs a minimal diagram ${\Delta}(1, |W|)$ for $W$, from $O( |W|^4)$ to $O( |W|^3\log |W|)$, as desired.
Theorem \[thm4\] is proved.
Polygonal Curves in the Plane and Proofs of Theorems \[thm5\], \[thm6\] and Corollary \[cor3\]
==============================================================================================
Let ${\mathcal{T}}$ denote a tessellation of the plane $\mathbb R^2$ into unit squares whose vertices are points with integer coordinates. Let $c$ be a finite closed path in ${\mathcal{T}}$ so that edges of $c$ are edges of ${\mathcal{T}}$. Recall that we have the following two types of elementary operations over $c$. If $e$ is an oriented edge of $c$, $e^{-1}$ is the edge with an opposite to $e$ orientation, and $ee^{-1}$ is a subpath of $c$ so that $c = c_1 ee^{-1} c_2$, where $c_1, c_2$ are subpaths of $c$, then the operation $c \to c_1 c_2$ over $c$ is called an [*[elementary homotopy]{} of type 1*]{}. Suppose that $c =c_1 u c_2$, where $c_1, u, c_2$ are subpaths of $c$, and a boundary path ${\partial}s$ of a unit square $s$ of ${\mathcal{T}}$ is ${\partial}s = uv$, where $u, v$ are subpaths of ${\partial}s$ and either of $u, v$ could be of zero length, i.e., either of $u, v$ could be a single vertex of ${\partial}s$. Then the operation $c \to c_1 v^{-1} c_2$ over $c$ is called an [*[elementary homotopy]{} of type 2*]{}.
Let $c$ be a finite closed path in a tessellation ${\mathcal{T}}$ of the plane $\mathbb R^2$ into unit squares so that edges of $c$ are edges of ${\mathcal{T}}$. Then a minimal number $m_2(c)$ such that there is a finite sequence of [elementary homotopies]{} of type 1–2, which turns $c$ into a single point and which contains $m_2(c)$ [elementary homotopies]{} of type 2, can be computed in deterministic space $O((\log |c|)^3)$ or in deterministic time $O( |c|^3\log |c| )$, where $|c|$ is the length of $c$.
Furthermore, such a sequence of [elementary homotopies]{} of type 1–2, which turns $c$ into a single point and which contains $m_2(c)$ [elementary homotopies]{} of type 2, can also be computed in deterministic space $O((\log |c|)^3)$ or in deterministic time $O( |c|^3\log |c| )$.
First we assign ${\varphi}$-labels to edges of the tessellation ${\mathcal{T}}$. If an edge $e$ goes from a point of $\mathbb R^2$ with coordinates $(i,j)$ to a point with coordinates $(i+1,j)$, then we set ${\varphi}(e):= a_1$ and ${\varphi}(e^{-1}):= a_1^{-1}$. If an edge $f$ goes from a point of $\mathbb R^2$ with coordinates $(i,j)$ to a point with coordinates $(i,j+1)$, then we set ${\varphi}(f):= a_2$ and ${\varphi}(f^{-1}):= a_2^{-1}$.
Let $c$ be a finite closed path in ${\mathcal{T}}$ whose edges are those of ${\mathcal{T}}$. Without loss of generality, we may assume that $c$ starts at the origin, hence $c_-= c_+$ has coordinates $(0,0)$ (otherwise, we could apply a logspace reduction to achieve this property). Denote $c = {\widetilde}f_1 \dots {\widetilde}f_{|c|}$, where ${\widetilde}f_1, \dots, {\widetilde}f_{|c|}$ are edges of $c$, and let $${\varphi}(c) := {\varphi}({\widetilde}f_1) \dots {\varphi}( {\widetilde}f_{|c|}) ,$$ where $ {\varphi}(c)$ is a word over the alphabet ${\mathcal{A}}^{\pm 1} = \{ a_1^{\pm 1}, a_2^{\pm 1} \}$. Since $c$ is closed, it follows that ${\varphi}(c) \overset{{\mathcal{G}}_5}{=} 1$, where $$\label{pp1}
{\mathcal{G}}_5 := \langle \ a_1, a_2 \ \| \ a_2 a_1 a_2^{-1} a_1^{ -1} = 1 \, \rangle .$$
\[eh\] Suppose that a closed path $c$ in ${\mathcal{T}}$ can be turned into a point by a finite sequence $\Xi$ of [elementary homotopies]{} and $m_2(\Xi)$ is the number of [elementary homotopies]{} of type 2 in this sequence. Then there is a [disk diagram]{} ${\Delta}$ such that ${\varphi}({\partial}{\Delta}) \equiv {\varphi}(c)$ and $|{\Delta}(2) | = m_2(\Xi )$.
Conversely, suppose there is a [disk diagram]{} ${\Delta}$ with ${\varphi}({\partial}{\Delta}) \equiv {\varphi}(c)$. Then there is a finite sequence $\Xi$ of [elementary homotopies]{} such that $\Xi$ turns $c$ into a point and $m_2(\Xi)= |{\Delta}(2) |$.
Given a finite sequence $\Xi$ of [elementary homotopies]{} $\xi_1, \xi_2,\dots$, we can construct a [disk diagram]{} ${\Delta}$ over such that ${\varphi}({\partial}{\Delta}) \equiv {\varphi}(c)$ and $m_2(\Xi)= |{\Delta}(2) |$. Indeed, starting with a simple closed path $q_c$ in the plane $\mathbb R^2$ (without any tessellation) such that ${\varphi}(q_c) \equiv {\varphi}(c)$, we can simulate [elementary homotopies]{} in the sequence $\Xi$ so that an [elementary homotopy]{} of type 1 is simulated by folding a suitable pair of edges of the path $q_c$, see Fig. 10.1(a), and an [elementary homotopy]{} of type 2 is simulated by attachment of a face $\Pi$ over to the bounded region of $\mathbb R^2$ whose boundary is $q_c$, see Fig. 10.1(b).
Fig. 10.1(a) shows how an [elementary homotopy]{} of type 1 is simulated by folding a pair of edges $e_1, e_2$ of $q_c$, so that $e_1 = e_2^{-1}$ in the resulting path $q_{c'}$. Fig. 10.1(b) shows how an [elementary homotopy]{} of type 2 is simulated by attaching a face $\Pi$ with ${\partial}\Pi = u_2 u_1^{-1}$ along the subpath $u_1$ of $q_c$, which results in the path $q_{c'}$.
As a result, we will fill out the bounded region of $\mathbb R^2$ whose boundary path is $q_c$ with faces and obtain a required [disk diagram]{} ${\Delta}$ over .
The converse can be established by a straightforward induction on $|{\Delta}(2)|$.
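As a simple illustration, let $c$ be the boundary path of an $a \times b$ rectangle composed of unit squares of ${\mathcal{T}}$, traversed counterclockwise starting at its lower left corner, so that $${\varphi}(c) \equiv a_1^{a} a_2^{b} a_1^{-a} a_2^{-b} .$$ The $a \times b$ grid of unit squares itself is a [disk diagram]{} with this boundary label and with $ab$ faces, and it is well known that no such diagram has fewer faces; hence, by Lemma \[eh\], $m_2(c) = ab$, which is exactly the number of unit squares enclosed by $c$.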
It follows from Lemma \[eh\] and Theorem \[thm4\] that a minimal number $m_2(c)$ of [elementary homotopies]{} of type 2 in a sequence of [elementary homotopies]{} that turns the path $c$ into a point can be computed in deterministic space $O(( \log |{\varphi}(c)| )^3) = O((\log |c|)^3)$ or in deterministic time $O(|c|^3\log |c|)$.
It remains to show that a desired sequence of [elementary homotopies]{} for $c$ can be computed in deterministic space $O((\log |c|)^3)$ or in deterministic time $O(|c|^3\log |c|)$. We remark that Lemma \[eh\] and its proof reveal a close connection between sequences of [elementary homotopies]{} that turn $c$ into a point and [disk diagram]{}s ${\Delta}$ over such that ${\varphi}({\partial}|_0 {\Delta}) \equiv {\varphi}(c)$. According to Theorem \[thm4\], we can construct a [disk diagram]{} ${\Delta}_c$ such that ${\varphi}({\partial}|_0 {\Delta}_c) \equiv {\varphi}(c)$ and $|{\Delta}_c(2) | = m_2(c)$ in space $O((\log |c|)^3)$ or in time $O(|c|^3\log |c|)$. Since $|{\Delta}_c(2) | \le |{\partial}{\Delta}_c |^2/4 = |c|^2/4 $ by Lemma \[91\], it follows that we can use this diagram ${\Delta}_c$ together with Lemma \[eh\] to construct a desired sequence of [elementary homotopies]{} in deterministic time $O(|c|^2)$. Therefore, a desired sequence of [elementary homotopies]{} can be constructed in deterministic time $O(|c|^3\log |c|)$.
It is also tempting to use this diagram ${\Delta}_c$ together with Lemma \[eh\] to construct, in space $O((\log |c|)^3)$, a desired sequence of [elementary homotopies]{} which turns $c$ into a point. However, this approach does not quite work for the space part of the proof because the inductive arguments of the proof of Lemma \[eh\] use intermediate paths whose storage would not be possible in $O((\log |c|)^3)$ space. For this reason, we have to turn back to the calculus of [pseudobracket]{}s.
Denote $W := {\varphi}(c)$. As in Sects. 2, 6, let $P_W$ be a path such that ${\varphi}(P_W) \equiv W$ and vertices of $P_W$ are integers $0,1, \dots, |W|=|c|$. Denote $P_W = f_1 \dots f_{|c|}$, where $ f_1, \dots, f_{|c|}$ are edges of $P_W$, and let $$\gamma : P_W \to c$$ be a cellular map such that $\gamma( f_i) = {\widetilde}f_i$ for $i=1, \dots, |W|$. Recall that $c = {\widetilde}f_1 \dots {\widetilde}f_{|c|}$. Note that $\gamma(i) = ({\widetilde}f_i)_+$, where $i=1, \dots, |W|$, and $\gamma(0) = ({\widetilde}f_1)_- = ({\widetilde}f_{|c|})_+$.
Suppose $B$ is a [pseudobracket system]{} for $W$ and let $B = \{ b_1, \dots, b_k \}$, where $b_i(2) \le b_j(1)$ if $i <j$. Recall that the arc $a(b)$ of a [pseudobracket]{} $b \in B$ is a subpath of $P_W$ such that $a(b)_- = b(1)$ and $a(b)_+ = b(2)$.
We now define a path $\Gamma(B)$ for $B$ in ${\mathcal{T}}$ so that $\Gamma(B):= \gamma(P_W) = c$ if $k=0$, i.e., $B = \varnothing$, and for $k >0$ we set $$\label{pp2}
\Gamma(B) := c_0 \delta(a(b_1)) c_1 \ldots \delta(a(b_k)) c_k ,$$ where $c_i = \gamma(d_i)$ and $d_i$ is a subpath of $P_W$ defined by $(d_i)_- = b_i(2)$ and $(d_i)_+ = b_{i+1}(1)$, except for the case when $i = 0$, in which case $(d_0)_- = 0$, and except for the case when $i = k$, in which case $(d_k)_+ = |W|$. For every $i=1, \ldots, k$, the path $\delta(a(b_i))$ in is defined by the equalities $$\delta(a(b_i))_- = \gamma(b_{i}(1)) , \qquad \delta(a(b_i))_+ = \gamma(b_{i}(2))$$ and by the equality ${\varphi}( \delta(a(b_i)) ) \equiv a_1^{k_1}a_2^{k_2}$ with some integers $k_1, k_2$.
Suppose that $B_0, B_1, \ldots, B_\ell$ is an operational sequence of [pseudobracket system]{}s for $W$ that corresponds to a sequence $\Omega$ of [elementary operation]{}s that turns the empty [pseudobracket system]{} $B_0$ into a final [pseudobracket system]{} $B_\ell = \{ b_{\ell, 1} \}$. We also assume that $b_{\ell, 1}(4) = |{\Delta}_c(2)|$ and ${\Delta}_c$ is a minimal [disk diagram]{} over such that ${\varphi}({\partial}{\Delta}_c) \equiv W \equiv {\varphi}(c)$. It follows from Lemmas \[91\], \[eh\] that $$\label{inqd}
|{\Delta}_c(2)| = m_2(c) \le |{\partial}{\Delta}_c|^2/4 = |W|^2/4 .$$
Hence, in view of Lemma \[8bb\], see also inequalities –, we may assume that, for every [pseudobracket]{} $b \in \cup_{i=0}^\ell B_i$, it is true that $$\begin{aligned}
\label{ieq1}
0 & \le b(1), b(2) \le |W| , \\ \notag
0 & \le | b(3) | \le (|W|_{a_1} + (|n_1| +|n_2|) |{\Delta}_c(2)|)/2 \\ \label{ieq2}
& \le (|W|_{a_1} + |W|^2/2)/2 \le |W|^2/2 , \\ \label{ieq3}
0 & \le b(4) \le |{\Delta}_c(2)| \le |W|^2/4 .\end{aligned}$$ Thus the space needed to store a [pseudobracket]{} $b \in \cup_{i=0}^\ell B_i$ is $O(\log |W|)$.
For every [pseudobracket system]{} $B_i$, denote $B_i = \{ b_{i, 1}, \ldots, b_{i, k_i} \}$, where, as above, $b_{i, j}(2) \le b_{i, j'}(1)$ whenever $j < j'$.
For every [pseudobracket system]{} $B_i$, we consider a path $c(i) := \Gamma(B_i)$ in ${\mathcal{T}}$ defined by the formula , hence, we have $$\label{pp3}
c(i) := \Gamma(B_i) = c_0(i) \delta(a(b_{i,1})) c_1(i) \ldots \delta(a(b_{i,k_i})) c_{k_i}(i) .$$ As before, for $B_0 = \varnothing$, we set $c(0) := \Gamma(B_0) = c$.
Note that the last path $c(\ell)$ in the sequence $c(0), \ldots, c(\ell)$, corresponding to a final [pseudobracket system]{} $B_\ell = \{ b_{\ell, 1} \}$, where $b_{\ell, 1} = (0, |W|, 0, m_2(c))$, consists of a single vertex and has the form $c(\ell) = c_0(\ell) \delta(a(b_{\ell,1})) c_1(\ell)$, where $c_0(\ell) = c_1(\ell) = c_- = c_+$ and $\delta(a(b_{\ell,1}))$ is the single vertex $c_- = c_+$ as well.
\[f2\] Let $b_{i,j} \in B_i$ be a [pseudobracket]{}. Then ${\varphi}(\delta(a(b_{i,j})) ) \equiv a_1^{b_{i,j}(3)}$, i.e., the points $\gamma(b_{i,j}(1))$, $\gamma(b_{i,j}(2))$ of ${\mathcal{T}}$ have the same $y$-coordinate.
Since the path $\gamma(a(b_{i,j})) \delta(a(b_{i,j}))^{-1}$ in ${\mathcal{T}}$ is closed, it follows that $${\varphi}( \gamma(a(b_{i,j}))) \overset{{\mathcal{G}}_5}{=} {\varphi}( \delta(a(b_{i,j})) ) ,$$ where ${\mathcal{G}}_5$ is given by presentation . On the other hand, ${\varphi}( \gamma(a(b_{i,j}))) \equiv {\varphi}( a(b_{i,j} ))$. Hence, by Claim (D) of the proof of Lemma \[7bb\], we get that $${\varphi}( \delta(a(b_{i,j})) ) \overset{{\mathcal{G}}_5}{=} {\varphi}( a(b_{i,j} )) \overset{{\mathcal{G}}_5}{=} a_1^{b_{i,j}(3)} .$$ Since ${\varphi}( \delta(a(b_{i,j})) ) \equiv a_1^{k_1}a_2^{k_2}$ with some integers $k_1, k_2$, it follows that $k_1 = b_{i,j}(3)$ and $k_2 = 0$, as required.
We now analyze how the path $c(i)$, defined by , changes in comparison with the path $c(i-1)$, $i \ge 1$.
Suppose that a [pseudobracket system]{} $B_i$, $i \ge 1$, is obtained from $B_{i-1}$ by an addition. Then $c(i)=c(i-1)$, hence no change is made to $c(i-1)$. Note that the form does change, by the insertion of a subpath consisting of a single vertex, which is the $\delta$-image of the arc of the starting [pseudobracket]{} added to $B_{i-1}$.
Assume that a [pseudobracket system]{} $B_i$, $i \ge 1$, is obtained from $B_{i-1}$ by an extension of type 1 on the left and $b_{i,j} \in B_i$ is obtained from $b_{i-1,j} \in B_{i-1}$ by this [elementary operation]{}. Let $a(b_{i,j}), a(b_{i-1,j}) $ be the arcs of $b_{i,j}, b_{i-1,j}$, resp., and let $a(b_{i,j}) = e_1 a(b_{i-1,j})$, where $e_1$ is an edge of $P_W$ and ${\varphi}(e_1) = a_1^{{\varepsilon}_1}$, ${\varepsilon}_1 = \pm 1$.
If ${\varepsilon}_1 \cdot b_{i-1,j}(3) \ge 0$, then we can see that $c(i) = c(i-1)$ because $$c_{j-1}(i-1) \delta(a(b_{i-1,j})) = c_{j-1}(i) \delta(a(b_{i,j}))$$ and all other syllables of the paths $c(i)$ and $ c(i-1)$, as defined in , are identical.
On the other hand, if ${\varepsilon}_1 \cdot b_{i-1,j}(3) < 0$, then the subpath $c_{j-1}(i) \delta(a(b_{i,j}))$ of $c(i)$ differs from the subpath $c_{j-1}(i-1) \delta(a(b_{i-1,j}))$ of $c(i-1)$ by cancelation of a subpath $ee^{-1}$, where $e$ is the last edge of $c_{j-1}(i-1)$, $e^{-1}$ is the first edge of $\delta(a(b_{i-1,j}))$, and ${\varphi}(e) =a_1^{- b_{i-1,j}(3) / |b_{i-1,j}(3) | } $. Since all other syllables of the paths $c(i)$ and $ c(i-1)$, as defined in , are identical, the change of $ c(i-1)$, resulting in $c(i)$, can be described as an [elementary homotopy]{} of type 1 which deletes a subpath $ee^{-1}$, where $e$ is an edge of $c$ defined by the equalities $$\label{eab1}
e_+ = \gamma(b_{i-1,j}(1)), \qquad {\varphi}(e) = a_1^{- b_{i-1,j}(3) / |b_{i-1,j}(3) | } .$$
The case when a [pseudobracket system]{} $B_i$, $i \ge 1$, is obtained from $B_{i-1}$ by an extension of type 1 on the right is similar but we need to make some changes. Keeping most of the foregoing notation unchanged, we let $a(b_{i,j}) = a(b_{i-1,j})e_2 $, where $e_2$ is an edge of $P_W$ and ${\varphi}(e_2) = a_1^{{\varepsilon}_2}$, ${\varepsilon}_2 = \pm 1$.
If ${\varepsilon}_2 \cdot b_{i-1,j}(3) \ge 0$, then as above we conclude that $c(i) = c(i-1)$ because $$\delta(a(b_{i-1,j})) c_{j}(i-1) = \delta(a(b_{i,j}))c_{j}(i)$$ and all other syllables of the paths $c(i)$ and $ c(i-1)$, as defined in , are identical.
If ${\varepsilon}_2 \cdot b_{i-1,j}(3) < 0$, then we can see from the definitions and from Lemma \[f2\] that the path $c(i)$ can be obtained from $ c(i-1)$ by an [elementary homotopy]{} of type 1 which deletes a subpath $e^{-1}e$, where $e$ is an edge of $c$ defined by the equalities $$\label{eab2}
e_- = \gamma(b_{i-1,j}(2)), \qquad {\varphi}(e) = a_1^{- b_{i-1,j}(3) / |b_{i-1,j}(3) | } .$$
We conclude that, in the case when $B(i)$ is obtained from $B(i-1)$ by an extension of type 1, it follows from inequalities – and from equations – that the [elementary homotopy]{} of type 1 that produces $c(i)$ from $ c(i-1)$ can be computed in space $O(\log |W|)$ when the [pseudobracket]{}s $b_{i,j} \in B_i$ and $b_{i-1,j} \in B_{i-1}$ are known.
Suppose that a [pseudobracket system]{} $B_i$, $i \ge 1$, is obtained from $B_{i-1}$ by an extension of type 2 and $b_{i,j} \in B_i$ is obtained from $b_{i-1,j} \in B_{i-1}$ by this [elementary operation]{}. Denote $a(b_{i,j}) = e_1 a(b_{i-1,j}) e_2$, where $e_1, e_2$ are edges of $P_W$ and ${\varphi}(e_1) = {\varphi}(e_2)^{-1} =a_2^{{\varepsilon}}$, ${\varepsilon}= \pm 1$. According to the definition of an extension of type 2, $b_{i-1,j}(3) \ne 0$ and, in view of the equalities $n_1=n_2 =1$, see and , we can derive from Lemma \[f2\] that the path $ c(i-1)$ turns into the path $c(i)$ in the following fashion. Denote $c_{j-1}(i-1) = {\widetilde}c_{j-1}(i-1) {\widetilde}e_1$ and $c_{j}(i-1) = {\widetilde}e_2 {\widetilde}c_{j}(i-1)$, where ${\widetilde}c_{j-1}(i-1)$, ${\widetilde}c_{j}(i-1)$ are subpaths of $c_{j-1}(i-1)$, $c_{j}(i-1)$, resp., and ${\widetilde}e_1, {\widetilde}e_2$ are edges such that $\gamma( e_{i'}) = {\widetilde}e_{i'}$, $i' =1,2$. Then $c_{j-1}(i) = {\widetilde}c_{j-1}(i-1) $, $c_{j}(i) = {\widetilde}c_{j}(i-1)$, and the path $\delta(a(b_{i,j}))$ has the properties that $$\begin{aligned}
& \delta(a(b_{i,j}))_- = ({\widetilde}e_1)_-, \qquad \delta(a(b_{i,j}))_+ = ({\widetilde}e_2)_+, \\
& {\varphi}(\delta(a(b_{i-1,j})) ) \equiv a_1^{b_{i-1,j}(3)} \equiv {\varphi}(\delta(a(b_{i,j})) ) ,\end{aligned}$$ see Fig. 10.2, and the other syllables of the paths $c(i)$ and $c(i-1)$, as defined in , are identical.
Fig. 10.2 shows the row of squares $s_1, \dots, s_k$ of ${\mathcal{T}}$ between the paths $\delta(a(b_{i-1,j}))$ and $\delta(a(b_{i,j}))$, the vertical edges ${\widetilde}e_1 = g_1, g_2, \dots, g_{k-1}, g_k = {\widetilde}e_2^{-1}$, and the subpaths ${\widetilde}c_{j-1}(i-1)$ and ${\widetilde}c_{j}(i-1)$ of $c(i-1)$, where $c_{j-1}(i-1) = {\widetilde}c_{j-1}(i-1) {\widetilde}e_1$ and $c_{j}(i-1) = {\widetilde}e_2 {\widetilde}c_{j}(i-1)$.
Denote $k := | b_{i-1,j}(3) | = | b_{i,j}(3) | >0$ and let $$\begin{aligned}
\delta(a(b_{i-1,j})) = d_{i-1,1} \ldots d_{i-1,k}, \qquad
\delta(a(b_{i,j})) = d_{i,1} \ldots d_{i,k} ,\end{aligned}$$ where $d_{i',j'}$ are edges of the paths $\delta(a(b_{i-1,j})) $, $\delta(a(b_{i,j}))$. Also, we let $g_1, \ldots, g_k$ be the edges of ${\mathcal{T}}$ such that ${\varphi}(g_{i'}) = {\varphi}({\widetilde}e_1) = a_2^{{\varepsilon}}$ and $(g_{i'})_- = (d_{i, i'})_-$ for $i' = 1, \ldots, k$, in particular, $g_1 = {\widetilde}e_1$ and $g_k = {\widetilde}e_2^{-1}$, see Fig. 10.2. Then there are $k$ [elementary homotopies]{} of type 2 which turn $c(i-1)$ into $c(i)$ and which use $k$ squares of the region bounded by the closed path ${\widetilde}e_1 \delta(a(b_{i-1,j})) {\widetilde}e_2 \delta(a(b_{i,j}))^{-1}$, see Fig. 10.2. For example, the first [elementary homotopy]{} of type 2 replaces the subpath ${\widetilde}e_1 d_{i-1,1} = g_1 d_{i-1,1}$ by $d_{i,1} g_2$. Note that ${\partial}s_1 = g_1 d_{i-1,1} (d_{i,1} g_2)^{-1}$ is a (negatively orientated) boundary path of a square $s_1$ of ${\mathcal{T}}$. The second [elementary homotopy]{} of type 2 (when $k \ge 2$) replaces the subpath $g_2 d_{i-1,2}$ by $d_{i,2} g_3$, where ${\partial}s_2 = g_2 d_{i-1,2} (d_{i,2} g_3)^{-1}$ is a (negatively orientated) boundary path of a square $s_2$ of ${\mathcal{T}}$, and so on.
In view of inequalities –, these $k = | b_{i-1,j}(3) |$ [elementary homotopies]{} of type 2 that produce $c(i)$ from $ c(i-1)$ can be computed in space $O(\log |W|)$ when the [pseudobracket]{}s $b_{i,j} \in B_i$ and $b_{i-1,j} \in B_{i-1}$ are available.
Suppose that a [pseudobracket system]{} $B_i$, $i \ge 1$, is obtained from $B_{i-1}$ by an extension of type 3 and $b_{i,j} \in B_i$ is obtained from $b_{i-1,j} \in B_{i-1}$ by this [elementary operation]{}. By the definition of an extension of type 3, $b_{i-1,j}(3) = 0$ and $a(b_{i,j}) = e_1 a(b_{i-1,j}) e_2$, where $e_1, e_2$ are edges of $P_W$ and ${\varphi}(e_1) = {\varphi}(e_2)^{-1} \ne a_1^{\pm 1}$. Hence, ${\varphi}(e_1) = {\varphi}(e_2)^{-1} = a_2^{{\varepsilon}}$, ${\varepsilon}= \pm 1$. It follows from Lemma \[f2\] that $$\begin{aligned}
\delta(a(b_{i-1,j})) = c_{j-1}(i-1)_+ = c_{j}(i-1)_- , \quad
\delta(a(b_{i,j})) = c_{j-1}(i)_+ = c_{j}(i)_- .\end{aligned}$$
As above, denote $c_{j-1}(i-1) = {\widetilde}c_{j-1}(i-1) {\widetilde}e_1$ and $c_{j}(i-1) = {\widetilde}e_2 {\widetilde}c_{j}(i-1)$, where ${\widetilde}c_{j-1}(i-1)$, ${\widetilde}c_{j}(i-1)$ are subpaths of $c_{j-1}(i-1)$, $c_{j}(i-1)$, resp., and ${\widetilde}e_1, {\widetilde}e_2$ are edges such that $\gamma( e_{i'}) = {\widetilde}e_{i'}$, $i' =1,2$. Then it follows from the definitions that $c_{j-1}(i) = {\widetilde}c_{j-1}(i-1) $, $c_{j}(i) = {\widetilde}c_{j}(i-1)$ and that all other syllables of the paths $c(i)$ and $c(i-1)$, as defined in , are identical. Hence, the change of the path $c(i-1)$ into $c(i)$ can be described as an [elementary homotopy]{} of type 1 that deletes the subpath ${\widetilde}e_1 {\widetilde}e_2$ of $c(i-1)$. Since $(e_1)_+ = b_{i-1,j}(1)$ and ${\widetilde}e_1 = \gamma(e_1)$, it is easy to see from inequalities – that we can compute this [elementary homotopy]{} of type 1 in space $O(\log |W|)$ when the [pseudobracket]{}s $b_{i,j} \in B_i$ and $b_{i-1,j} \in B_{i-1}$ are given.
Suppose that a [pseudobracket system]{} $B_i$, $i \ge 1$, is obtained from $B_{i-1}$ by a merger operation and $b_{i,j} \in B_i$ is obtained from [pseudobracket]{}s $b_{i-1,j}, b_{i-1,j+1} \in B_{i-1}$ by this merger.
First assume that $b_{i-1,j}(3) \cdot b_{i-1,j+1}(3) \ge 0$. It follows from Lemma \[f2\] and from the definitions that $$\begin{aligned}
c_{j}(i-1) = \delta(a(b_{i-1,j}))_+ , \qquad
\delta(a(b_{i-1,j})) c_{j}(i-1) \delta(a(b_{i-1,j+1})) = \delta(a(b_{i,j}))\end{aligned}$$ and all other syllables of the paths $c(i)$ and $c(i-1)$, as defined in , are identical. Therefore, we have the equality of paths $c(i-1)= c(i)$ in this case. Note that factorizations of $c(i)$ and $c(i-1)$ are different.
Now assume that $b_{i-1,j}(3) \cdot b_{i-1,j+1}(3) < 0$. It follows from Lemma \[f2\] and from the definitions that $c_{j}(i-1) = \delta(a(b_{i-1,j}))_+$ and that the subpath $\delta(a(b_{i,j}))$ of $c(i)$ can be obtained from the subpath $\delta(a(b_{i-1,j})) c_{j}(i-1) \delta(a(b_{i-1,j+1})) $ of $c(i-1)$ by cancelation of $\min(|b_{i-1,j}(3)|, |b_{i-1,j+1}(3)| )$ pairs of edges in $c(i-1)$ so that the last edge of $\delta(a(b_{i-1,j}))$ is canceled with the first edge of $\delta(a(b_{i-1,j+1}))$ as a subpath $ee^{-1}$ in the definition of an [elementary homotopy]{} of type 1 and so on until a shortest path among $\delta(a(b_{i-1,j}))$, $\delta(a(b_{i-1,j+1}))$ completely cancels. Note that all other syllables of the paths $c(i)$ and $c(i-1)$, as defined in , are identical. Thus the path $c(i)$ can be obtained from $c(i-1)$ by $\min(|b_{i-1,j}(3)|, |b_{i-1,j+1}(3)| )$ [elementary homotopies]{} of type 1 which, in view of –, can be computed in space $O(\log |W|)$ when the [pseudobracket]{}s $b_{i,j} \in B_i$ and $b_{i-1,j}, b_{i-1,j+1} \in B_{i-1}$ are known.
Recall that, in the proof of Theorem \[thm4\], we devised an algorithm $\mathfrak{A}_{n}$ that, in deterministic space , constructs a sequence of [elementary operation]{}s $\Omega$ and a corresponding sequence of [pseudobracket system]{}s $B_0, \ldots, B_\ell$ for $W$ such that $B_0$ is empty, $B_\ell = \{ b_{\ell, 1} \}$ is final and $b_{\ell, 1}(4) = | {\Delta}(2) | \le n$, where ${\Delta}$ is a minimal [disk diagram]{} over with ${\varphi}({\partial}{\Delta}) \equiv W$. In view of inequalities , we may assume that $n = |W|^2/4$, hence, the bound becomes $O( (\log |W|)^3)$ and, as we saw in –, every [pseudobracket]{} $b \in \cup_{i=0}^\ell B_i$ requires space $O(\log |W|)$ to store. As was discussed above, when given [pseudobracket system]{}s $B_{i-1}$ and $B_{i}$, we can construct, in space $O(\log |W|)$, a sequence of [elementary homotopies]{} that turns the path $c(i-1)$ into $c(i)$. Since the sequence of [pseudobracket system]{}s $B_0, \ldots, B_\ell$ for $W \equiv {\varphi}(c)$ is constructible in space $O( (\log |W|)^3)$, it follows that a sequence $\Xi$ of [elementary homotopies]{} of type 1–2 that turns the path $c=c(0)$ into a vertex $c(\ell) = c_-=c_+$ can also be constructed in deterministic space $O( (\log |W|)^3)$. This completes the proof of Theorem \[thm5\].
Recall that a [*polygonal*]{} closed curve $c$ in the plane $\mathbb R^2$, equipped with a tessellation ${\mathcal{T}}$ into unit squares, consists of finitely many line segments $c_1, \dots, c_k$, $k >0$, whose endpoints are vertices of ${\mathcal{T}}$, $c= c_1 \dots c_k$, and $c$ is closed, i.e., $c_- = c_+$. If $c_i \subset {\mathcal{T}}$ then the ${\mathcal{T}}$-length $|c_i|_{{\mathcal{T}}}$ of $c_i$ is the number of edges of $c_i$. If $c_i \not\subset {\mathcal{T}}$ then the ${\mathcal{T}}$-length $|c_i|_{{\mathcal{T}}}$ of $c_i$ is the number of connected components in $c_i \setminus {\mathcal{T}}$. We assume that $|c_i|_{{\mathcal{T}}} >0$ for every $i$ and set $|c|_{{\mathcal{T}}} := \sum_{i=1}^k |c_i|_{{\mathcal{T}}}$.
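For example, a line segment $c_i$ with endpoints $(0,0)$ and $(3,2)$ is not contained in ${\mathcal{T}}$; it crosses the grid lines of ${\mathcal{T}}$ at the three interior points $(1, \tfrac 23)$, $(\tfrac 32, 1)$, $(2, \tfrac 43)$, so $c_i \setminus {\mathcal{T}}$ consists of four connected components and $|c_i|_{{\mathcal{T}}} = 4$.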
Suppose that $n \ge 1$ is a fixed integer and $c$ is a polygonal closed curve in the plane $\mathbb R^2$ with given tessellation ${\mathcal{T}}$ into unit squares. Then, in deterministic space $O( (\log |c|_{{\mathcal{T}}} )^3)$ or in deterministic time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$, one can compute a rational number $r_n$ such that $|A(c) - r_n | < \tfrac {1}{ |c|_{{\mathcal{T}}}^n }$.
In particular, if the area $A(c)$ defined by $c$ is known to be an integer multiple of $\tfrac 1 L$, where $L>0$ is a given integer and $L< |c|_{{\mathcal{T}}}^{n}/2$, then $A(c)$ can be computed in deterministic space $O( (\log |c|_{{\mathcal{T}}} )^3)$ or in deterministic time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$.
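Note that the second claim follows from the first one: consecutive multiples of $\tfrac 1L$ are $\tfrac 1L > \tfrac {2}{|c|_{{\mathcal{T}}}^{n}}$ apart, hence $A(c)$ is the unique multiple of $\tfrac 1L$ whose distance to $r_n$ is less than $\tfrac {1}{|c|_{{\mathcal{T}}}^{n}} < \tfrac {1}{2L}$, and it is recovered from $r_n$ by rounding to the nearest multiple of $\tfrac 1L$.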
As above, let $c = c_1 \dots c_k$, where each $c_i$ is a line segment of $c$ of positive length that connects vertices of ${\mathcal{T}}$. For every $c_i$, we define an approximating path $\zeta(c_i)$ such that $\zeta(c_i)_- = (c_i)_-$, $\zeta(c_i)_+ = (c_i)_+$ and $\zeta(c_i) \subset {\mathcal{T}}$. If $c_i \subset {\mathcal{T}}$ then we set $\zeta(c_i) := c_i$.
Assume that $c_i \not\subset {\mathcal{T}}$. Let $R_i$ be a rectangle consisting of unit squares of ${\mathcal{T}}$ so that $c_i$ is a diagonal of $R_i$. Consider the set $N(c_i) \subseteq R_i$ of all squares $s$ of $R_i$ such that the intersection $s \cap c_i$ is not empty and is not a single point. Assuming that the boundary path ${\partial}N(c_i)$ is negatively, i.e., clockwise, oriented, we represent ${\partial}N(c_i)$ in the form $${\partial}N(c_i) = q_{1}(c_i) q_{2}(c_i)^{-1} ,$$ where $q_{1}(c_i), q_{2}(c_i)^{-1}$ are subpaths of ${\partial}N(c_i)$ defined by the equalities $$q_{1}(c_i)_- = q_{2}(c_i)_- = (c_i)_-, \qquad q_{1}(c_i)_+ = q_{2}(c_i)_+ = (c_i)_+.$$ It is easy to see that these equalities uniquely determine the paths $q_{1}(c_i)$, $q_{2}(c_i)$, see Fig. 10.3.
Fig. 10.3 shows the rectangle $R_i$ with the diagonal $c_i$ and the two approximating paths $q_{1}(c_i)$ (above $c_i$) and $q_{2}(c_i)$ (below $c_i$) in ${\partial}N(c_i)$.
\[f3\] Suppose $c_i, R_i$ are defined as above, $c_i \not\subset {\mathcal{T}}$, and $|R_i|_x$, $|R_i|_y$ are the lengths of horizontal, vertical, resp., sides of the rectangle $R_i$. Then $$\label{qq1}
\max(|R_i|_x, |R_i|_y) \le |c_i|_{\mathcal{T}}\le 2 \max(|R_i|_x, |R_i|_y) ,$$ and the area $A(c_i q_{1}(c_i)^{-1})$ bounded by the polygonal closed curve $c_i q_{1}(c_i)^{-1}$ satisfies $$\label{qq2}
A(c_i q_{1}(c_i)^{-1}) \le |c_i|_{\mathcal{T}}.$$
In addition, the paths $q_{1}(c_i), q_{2}(c_i)$ can be constructed in deterministic space $O(\log |c|_{\mathcal{T}})$ or in deterministic time $O( |c_i|_{{\mathcal{T}}} \log |c|_{\mathcal{T}})$.
We say that all unit squares of $R_i$ whose points have $x$-coordinates in the same range compose a [*column*]{} of $R_i$. Similarly, we say that all unit squares of $R_i$ whose points have $y$-coordinates in the same range compose a [*row*]{} of $R_i$. Since $c_i$ is a diagonal of $R_i$, it follows that $c_i$ has an intersection, different from a single point, with a unit square from every row and from every column of $R_i$. Hence, in view of the definition of the ${\mathcal{T}}$-length $|c_i|_{\mathcal{T}}$ of $c_i$, we have $\max(|R_i|_x, |R_i|_y) \le |c_i|_{\mathcal{T}}$.
It follows from the definitions of the region $N(c_i) \subseteq R_i$ and the ${\mathcal{T}}$-length $|c_i|_{\mathcal{T}}$ that the number of unit squares in $N(c_i)$ is $|c_i|_{\mathcal{T}}$, hence, the area $A(N(c_i)) $ is equal to $|c_i|_{\mathcal{T}}$. Since the closed curve $c_i q_{1}(c_i)^{-1}$ is contained in $N(c_i)$ and it can be turned into a simple curve by an arbitrarily small deformation, it follows that $A(c_i q_{1}(c_i)^{-1}) \le A(N(c_i)) = |c_i|_{\mathcal{T}}$, as required in .
Let $\operatorname{\textsf{sl}}(c_i)$ denote the slope of the line that goes through $c_i$. If $|\operatorname{\textsf{sl}}(c_i)| \le 1$ then we can see that $N(c_i)$ contains at most 2 squares in every column of $R_i$. On the other hand, if $|\operatorname{\textsf{sl}}(c_i)| \ge 1$ then $N(c_i)$ contains at most 2 squares in every row of $R_i$. Therefore, $
A(N(c_{i}) ) \le 2\max(|R_i|_x, |R_i|_y) .
$ Since $A(N(c_{i}) ) = |c_i|_{\mathcal{T}}$, the inequalities are proven.
Note that, moving along columns of $R_i$ if $|\operatorname{\textsf{sl}}(c_i)| \le 1$ or moving along rows of $R_i$ if $|\operatorname{\textsf{sl}}(c_i)| > 1$, we can detect all squares of $R_i$ that belong to the region $N(c_{i})$ in space $ O(\log |c|_{\mathcal{T}})$ or in time $ O( |c_i|_{\mathcal{T}}\log |c|_{\mathcal{T}})$ by checking which squares in the current column or row are intersected by $c_i$ in more than one point. This implies that the paths $q_{1}(c_i), q_{2}(c_i)$ can also be constructed in space $O(\log |c|_{\mathcal{T}})$ or in time $ O( |c_i|_{\mathcal{T}}\log |c|_{\mathcal{T}})$, as desired.
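The column scan in the preceding paragraph can be illustrated as follows (a plain Python enumeration for the case $|\operatorname{\textsf{sl}}(c_i)| \le 1$, written for clarity and making no attempt at the logarithmic space bound; the routine and its names are ours). It lists, column by column, the at most two squares per column that meet $c_i$ in more than one point; the paths $q_{1}(c_i), q_{2}(c_i)$ then run along the boundary of the detected region $N(c_i)$.

```python
from math import floor, ceil

def squares_of_N(p, q):
    """Lower-left corners of the unit squares of the region N(c_i) for a segment
    c_i = [p, q] between lattice points that is not contained in the grid and has
    slope |sl(c_i)| <= 1, scanned column by column (at most two squares each).
    For |sl(c_i)| > 1, apply the same routine with the two coordinates swapped."""
    (x0, y0), (x1, y1) = sorted([p, q])          # orient the segment left to right
    assert x1 > x0 and 0 < abs(y1 - y0) <= x1 - x0
    sl = (y1 - y0) / (x1 - x0)
    squares = []
    for cx in range(x0, x1):                     # walk the columns of R_i
        wall = [y0 + sl * (cx - x0), y0 + sl * (cx + 1 - x0)]
        lo, hi = min(wall), max(wall)            # heights of c_i on the column walls
        # squares [cx,cx+1] x [cy,cy+1] that meet c_i in more than one point
        squares.extend((cx, cy) for cy in range(floor(lo), ceil(hi)))
    return squares

# Diagonal of a 7 x 2 rectangle: |c_i|_T = 8, between max(7,2) = 7 and 2*max(7,2) = 14.
print(len(squares_of_N((0, 0), (7, 2))))         # -> 8
```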
For $c_i \not\subset {\mathcal{T}}$, define $\zeta(c_i) := q_{1}(c_i)$. Recall that $\zeta(c_i) = c_i $ if $c_i \subset {\mathcal{T}}$. We can now define an approximating closed path $\zeta(c)$ in ${\mathcal{T}}$ for $c$ by setting $$\zeta(c) := \zeta(c_1) \dots \zeta(c_k) .$$ By Lemma \[f3\], $$\begin{aligned}
|A(c) - A(\zeta(c)) | \le \sum_{i=1}^k A(c_i \zeta(c_i)^{-1}) \le \sum_{i=1}^k |c_i|_{\mathcal{T}}= |c|_{\mathcal{T}}.\end{aligned}$$
Consider a refined tessellation ${\mathcal{T}}_M$, where $M>1$ is an integer, so that every unit square $s$ of ${\mathcal{T}}$ is divided into $M^2$ congruent squares each of area $M^{-2}$. We repeat the foregoing definitions of the lengths $|c_i|_{{\mathcal{T}}_M}$, $|c|_{{\mathcal{T}}_M}$, the paths $q_{j, M}(c_i)$, $j=1,2$, $i=1,\dots,k$, rectangles $R_{i,M}$, regions $N_M(c_i) \subseteq R_{i,M}$, and approximating paths $\zeta_M(c_i), \zeta_M(c)$ with respect to the refined tessellation ${\mathcal{T}}_M$ in place of ${\mathcal{T}}$.
\[f4\] Suppose $c_i \not\subset {\mathcal{T}}$, and $|R_{i,M}|_x$, $|R_{i,M}|_y$ denote the path length of horizontal, vertical, resp., sides of the rectangle $R_{i,M}$. Then $|R_{i,M}|_x = M |R_{i}|_x $, $|R_{i,M}|_y = M |R_{i}|_y$, $$\label{qq3}
\max(|R_{i,M}|_x, |R_{i,M}|_y) \le |c_i|_{{\mathcal{T}}_M} \le 2 \max(|R_{i,M}|_x, |R_{i,M}|_y) ,$$ and the area bounded by the polygonal closed curve $c_i q_{1, M}(c_i)^{-1}$ satisfies $$\label{qq4}
A(c_i q_{1, M}(c_i)^{-1}) \le M^{-2} |c_i|_{{\mathcal{T}}_M} .$$
In addition, we have $$\label{qq5}
M |c_i |_{{\mathcal{T}}}/2 \le |c_i|_{{\mathcal{T}}_M} \le 2 M |c_i |_{{\mathcal{T}}} ,$$ and the paths $q_{1, M}(c_i), q_{2, M}(c_i)$ can be constructed in deterministic space $$O(\log |c|_{{\mathcal{T}}_M}) = O(\log (M |c|_{{\mathcal{T}}}) )$$ or in deterministic time $
O(|c_i|_{{\mathcal{T}}_M} \log |c|_{{\mathcal{T}}_M}) = O( M |c_i|_{{\mathcal{T}}} \log (M |c|_{{\mathcal{T}}}) )
$.
The equalities $|R_{i,M}|_x = M |R_{i}|_x $, $|R_{i,M}|_y = M |R_{i}|_y$ are obvious from the definitions. Proofs of inequalities – are analogous to the proofs of inequalities – of Lemma \[f3\] with the correction that the area of a square of the tessellation ${\mathcal{T}}_M$ is now $M^{-2}$.
The inequalities follow from inequalities , and imply that the space and time bounds $O(\log |c|_{{\mathcal{T}}_M})$, $O(|c_i|_{{\mathcal{T}}_M} \log |c|_{{\mathcal{T}}_M})$, that are obtained as corresponding bounds of Lemma \[f3\], can be rewritten in the form $O(\log (M |c|_{{\mathcal{T}}}) )$, $O( M |c_i|_{{\mathcal{T}}} \log (M |c|_{{\mathcal{T}}}) )$, resp.
It follows from the definitions and Lemma \[f4\] that $$\begin{aligned}
\notag
|A(c) - A(\zeta_M(c)) | & \le \sum_{i=1}^k A(c_i \zeta_M(c_i)^{-1} ) \le M^{-2} \sum_{i=1}^k
|c_i|_{{\mathcal{T}}_M} \\ \label{qq6}
& \le M^{-2} \sum_{i=1}^k
2M |c_i|_{{\mathcal{T}}} = 2M^{-1} |c|_{{\mathcal{T}}}\end{aligned}$$ and that the path $\zeta_M(c_i) \subset {\mathcal{T}}_M$ can be computed in space $O(\log (M |c|_{{\mathcal{T}}}) )$ or in time $O( M |c_i|_{{\mathcal{T}}} \log (M |c|_{{\mathcal{T}}}) )$.
Setting $M := |c|_{{\mathcal{T}}}^{n+2}$, where $n \ge 1$ is a fixed integer, we can see from Lemma \[f4\] that the closed path $\zeta_M(c) \subset {\mathcal{T}}_M$ can be constructed in space $O(\log |c|_{{\mathcal{T}}} ) $ or in time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$. Hence, by Theorem \[thm5\] and Lemma \[f4\], the area $A(\zeta_M(c))$ can be computed in space $O((\log |c|_{{\mathcal{T}}_M})^3 ) = O((\log |c|_{{\mathcal{T}}})^3 )$ or in time $O( |c|_{{\mathcal{T}}_M}^{3} \log |c|_{{\mathcal{T}}_M} ) = O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$. The inequality together with $M = |c|_{{\mathcal{T}}}^{n+2}$ imply that $$\begin{aligned}
\label{qq7}
|A(c) - A(\zeta_M(c)) | \le 2 M^{-1} |c|_{{\mathcal{T}}} < |c|_{{\mathcal{T}}}^{-n} ,\end{aligned}$$ where we may assume $|c|_{{\mathcal{T}}} > 2$, for otherwise $A(c) = 0$ and $r_n = 0$.
Finally, suppose that the area $A(c)$ is known to be an integer multiple of $\tfrac 1L$, where $L >0$ is an integer with $L < |c|_{{\mathcal{T}}}^{n}/2$. Applying the foregoing approximation result, in deterministic space $O((\log |c|_{{\mathcal{T}}})^3 )$ or in deterministic time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$, we can compute a rational number $r_n = A(\zeta_M(c))$, where $M = |c|_{{\mathcal{T}}}^{n+2}$, such that the inequality holds true. It follows from Lemma \[91\], inequalities and the definitions that both numerator and denominator of $r_n$ are nonnegative integers that do not exceed $$\max ( |c|_{{\mathcal{T}}_M}^2/4, M^2 ) \le \max ( M^2 |c|_{{\mathcal{T}}}^{2}, M^2) \le |c|_{{\mathcal{T}}}^{2(n+3)} .$$ Hence, a binary representation of $r_n$ takes space $O(\log |c|_{{\mathcal{T}}})$.
Since it is known that $\tfrac 1 L > \tfrac{2}{|c|_{{\mathcal{T}}}^{n}}$, it follows from that there is at most one integer multiple of $\tfrac 1L $ at the distance $< \tfrac{1}{|c|_{{\mathcal{T}}}^{n}}$ from $r_n$. This means that an integer multiple of $\tfrac 1L $ closest to $r_n$ is the area $A(c)$.
Since $r_n$ and $L$ are available and their binary representations take space $O(\log |c|_{{\mathcal{T}}})$, it follows that the integer multiple of $\tfrac 1L$ closest to $r_n$ can be computed in space $O(\log |c|_{{\mathcal{T}}})$ or in time $O((\log |c|_{{\mathcal{T}}})^2)$, and this will be the desired area $A(c)$.
Thus the area $A(c)$ bounded by $c$ can be computed in space $O( (\log |c|_{{\mathcal{T}}})^3)$ or in time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$. Theorem \[thm6\] is proved.
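The two closing steps of the proof, the choice $M = |c|_{{\mathcal{T}}}^{n+2}$ and the snapping of $r_n$ to the nearest integer multiple of $\tfrac 1L$, amount to elementary exact arithmetic. The toy Python sketch below (with made-up numbers standing in for the log-space arithmetic of the proof) illustrates them.

```python
from fractions import Fraction

def snap_to_grid(r_n, L):
    """Nearest integer multiple of 1/L to the approximation r_n (a Fraction);
    when |A(c) - r_n| < |c|_T^(-n) < 1/(2L), this multiple equals A(c)."""
    return Fraction(round(r_n * L), L)

# Toy numbers: |c|_T = 20 and n = 2 give M = |c|_T^(n+2) = 160000, and the
# approximation error is at most 2*|c|_T/M = 1/4000 < |c|_T^(-n) = 1/400.
c_len, n = 20, 2
M = c_len ** (n + 2)
print(Fraction(2 * c_len, M), Fraction(1, c_len ** n))   # 1/4000 and 1/400

# If A(c) is known to be a multiple of 1/L with L < |c|_T^n / 2 = 200:
r_n = Fraction(123457, 160000)        # a hypothetical value of A(zeta_M(c))
print(snap_to_grid(r_n, L=8))         # -> 3/4, the closest multiple of 1/8
```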
Let $K \ge 1$ be a fixed integer and let $c$ be a polygonal closed curve in the plane $\mathbb R^2$ with given tessellation ${\mathcal{T}}$ into unit squares such that $c$ has one of the following two properties $\mathrm{(a)}$–$\mathrm{(b)}$.
$\mathrm{(a)}$ If $c_i, c_j$ are two nonparallel line segments of $c$ then their intersection point, if it exists, has coordinates that are integer multiples of $\tfrac 1 K$.
$\mathrm{(b)}$ If $c_i$ is a line segment of $c$ and $a_{i,x}, a_{i,y}$ are coprime integers such that the line given by an equation $a_{i,x}x + a_{i,y}y = b_{i}$, where $b_i$ is an integer, contains $c_i$, then $\max(|a_{i,x}|,|a_{i,y}|) \le K$.
Then the area $A(c)$ defined by $c$ can be computed in deterministic space $O( (\log |c|_{{\mathcal{T}}} )^3)$ or in deterministic time $O( |c|_{{\mathcal{T}}}^{n+3} \log |c|_{{\mathcal{T}}} )$, where $n$ depends on $K$.
In particular, if ${\mathcal{T}}_\ast$ is a tessellation of the plane $\mathbb R^2$ into equilateral triangles of unit area, or into regular hexagons of unit area, and $q$ is a finite closed path in ${\mathcal{T}}_\ast$ whose edges are edges of ${\mathcal{T}}_\ast$, then the area $A(q)$ defined by $q$ can be computed in deterministic space $O( (\log |q |)^3)$ or in deterministic time $O( |q|^5 \log |q | )$.
Assume that the property (a) holds for $c$. Let $t$ be a triangle in the plane whose vertices $v_1, v_2, v_3$ are points of the intersection of some nonparallel segments of $c$. Then, using the determinant formula $$\begin{aligned}
A(t) = \tfrac 12 | \det [\overrightarrow{v_1v_2}, \overrightarrow{v_1v_3} ] |\end{aligned}$$ for the area $A(t)$ of $t$, we can see that $A(t)$ is an integer multiple of $\tfrac {1} {2K^2}$. Since $A(c)$ is the sum of areas of triangles such as $t$ discussed above, it follows that $A(c)$ is also an integer multiple of $\tfrac {1} {2K^2}$. Taking $n$ so that $$\begin{aligned}
\label{qq8a}
|c|_{{\mathcal{T}}}^n > 4K^2 ,\end{aligned}$$ we see that Theorem \[thm6\] applies with $L = 2K^2$ and yields the desired conclusion.
Suppose that the property (b) holds for $c$. By Cramer's rule applied to the system of two linear equations $$\begin{aligned}
a_{i,x}x + a_{i,y}y & = b_{i} \\
a_{j,x}x + a_{j,y}y & = b_{j}\end{aligned}$$ which, as in property (b), define the lines that contain nonparallel segments $c_i, c_j$, we have that the intersection point of $c_{i}$ and $c_{j}$ has rational coordinates whose denominators do not exceed $$\begin{aligned}
\left|
\det \begin{bmatrix}
a_{i,x} & a_{i,y} \\
a_{j,x} & a_{j,y} \\
\end{bmatrix} \right| \le 2K^2 .\end{aligned}$$
This means that coordinates of the intersection point of two nonparallel line segments $c_{i}$, $c_{j}$ of $c$ are integer multiples of $\tfrac{1}{ (2K^2)!}$ and the case when the property (b) holds for $c$ is reduced to the case when the property (a) holds for $c$ with $K' = (2K^2)!$.
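For concreteness, the denominator bound obtained from Cramer's rule can be checked with exact rational arithmetic, as in the following sketch (the coefficients are made up and merely satisfy $\max(|a_{i,x}|,|a_{i,y}|) \le K$ with $K=3$; the sketch is an added illustration, not part of the proof).

```python
from fractions import Fraction

def intersection(a1x, a1y, b1, a2x, a2y, b2):
    """Intersection point of the nonparallel lines a1x*x + a1y*y = b1 and
    a2x*x + a2y*y = b2, computed by Cramer's rule as exact rationals."""
    det = a1x * a2y - a1y * a2x
    assert det != 0, "the lines are parallel"
    x = Fraction(b1 * a2y - b2 * a1y, det)
    y = Fraction(a1x * b2 - a2x * b1, det)
    return x, y

# Coefficients bounded by K = 3, so |det| <= 2*K^2 = 18 and both coordinates
# are integer multiples of 1/|det|, hence also of 1/(2*K^2)! .
x, y = intersection(3, -2, 1, 1, 3, 5)
print(x, y)                          # -> 13/11 14/11, with denominator 11 <= 18
print(3 * x - 2 * y, x + 3 * y)      # -> 1 5, so the point lies on both lines
```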
It remains to show the last claim of Corollary \[cor3\]. The case when $q$ is a closed path in the tessellation ${\mathcal{T}}_6$ of the plane $\mathbb R^2$ into regular hexagons of unit area can be obviously reduced to the case when $q$ is a closed path in the tessellation ${\mathcal{T}}_3$ of the plane $\mathbb R^2$ into equilateral triangles of unit area. For this reason we discuss the latter case only.
Consider the standard tessellation $ {\mathcal{T}}= {\mathcal{T}}_4$ of the plane $\mathbb R^2$ into unit squares and draw a diagonal in each square $s$ of ${\mathcal{T}}_4$ which connects the lower left vertex and the upper right vertex of $s$. Let ${\mathcal{T}}_{4,1}$ denote the tessellation of $\mathbb R^2$ into triangles of area 1/2 obtained in this way. Note that ${\mathcal{T}}_3$ and ${\mathcal{T}}_{4,1}$ are isomorphic as graphs and the areas of corresponding triangles differ by a coefficient of $2^{\pm 1}$. This isomorphism enables us to define the image $q'$ in ${\mathcal{T}}_{4,1}$ of a closed path $q$ in ${\mathcal{T}}_3$. It is clear that the area $A( q')$ defined by $ q'$ relative to ${\mathcal{T}}_{4,1}$ is half of the area $A(q)$ defined by $q$ relative to ${\mathcal{T}}_{3}$, $A( q') = A( q)/2$. We can also consider $q'$ as a polygonal closed curve relative to the standard tessellation ${\mathcal{T}}= {\mathcal{T}}_4$ of $\mathbb R^2$ into unit squares. Note that $|q'|_{{\mathcal{T}}_4} = |q'|_{{\mathcal{T}}_{4,1}}= |q|_{{\mathcal{T}}_3}$ and that the property (a) of Corollary \[cor3\] holds for $ q'$ with the constant $K=1$ relative to ${\mathcal{T}}_4$. Thus, by the already proven case (a), the area $A(q')$ can be computed in space $O((\log |q'|_{{\mathcal{T}}_4})^3 )= O((\log |q|_{{\mathcal{T}}_3})^3 ) = O((\log |q|)^3 ) $ or in time $$O( |q'|_{{\mathcal{T}}_4}^{5} \log |q'|_{{\mathcal{T}}_4} )= O( |q|_{{\mathcal{T}}_3}^{5} \log |q|_{{\mathcal{T}}_3} ) =
O( |q|^{5} \log |q| )$$ for the reason that $K=1$ and we can use $n =2$ in [qq8a]{} unless $|q'|_{{\mathcal{T}}_4} = 2$ in which case $A(q') = A(q) =0$. Since $A( q) = 2A( q')$, our proof is complete.
It is tempting to try to lift the restrictions of Corollary \[cor3\] so as to be able to compute, in polylogarithmic space, the area $A(c)$ defined by an arbitrary polygonal closed curve $c$ in the plane equipped with a tessellation ${\mathcal{T}}$ into unit squares. The approximation approach of Theorem \[thm6\], together with the construction of actual [elementary homotopies]{} for the approximating path $\zeta_M(c)$ as in Theorem \[thm5\], which seem to indicate the signs of the pieces $A(c_i \zeta_M(c_i)^{-1})$ of the intermediate area between $c$ and $\zeta_M(c)$, both done in polylogarithmic space, lends certain credibility to this idea.
However, in the general situation, this idea would not work because the rational number $A(c)$ might have an exponentially large denominator, hence, $A(c)$ could take polynomial space just to store (let alone the computations). An example would be provided by a polygonal closed curve $c= c(n)$, where $n \ge 2$ is an integer, such that $|c|_{\mathcal{T}}< (n+1)^2$ and the denominator of $A(c)$ is greater than $2^{k_1 n -1}$, where $k_1 >0$ is a constant. Below are details of such an example.
Let $n \ge 2$ be an integer and let $p_1, \dots, p_\ell$ be all primes not exceeding $n$. Recall that there is a constant $k_1 >0$ such that $p_1 \dots p_\ell > 2^{k_1n}$, see [@HW]. We construct line segments $c_{3i+1}$, $c_{3i+2}$, $c_{3i+3}$ for $i=0, \dots, \ell-1$ and $c_{3\ell+1}$ of a polygonal closed curve $c=c(n)$ by induction as follows. Let the initial vertex $(c_1)_-$ of $c_1$ be the point (with coordinates) $(0,0)$, $(c_1)_-:= (0,0)$. Proceeding by induction on $i\ge 0$, assume that the point $(c_{3i+1})_- =(x_0, y_0)$ is already defined. Then the line segment $c_{3i+1}$ goes from the point $(c_{3i+1})_- $ to the point $(x_0, y_0+1) = (c_{3i+1})_- +(0,1)$, written $c_{3i+1} = [(c_{3i+1})_-, (c_{3i+1})_- +(0,1)]$. Next, we set $$c_{3i+2} := [(c_{3i+1})_+, (c_{3i+1})_+ + (-1,0)] , \qquad c_{3i+3} := [(c_{3i+2})_+, (c_{3i+2})_+ + (p_{i+1},-1)]$$ and, completing the induction step, define $(c_{3(i+1)+1})_- := (c_{3i+3})_+$, see Fig. 10.4, where the case $\ell = 3$ with $p_1 =2, p_2=3, p_3 = 5$ is depicted.
[Fig. 10.4: The polygonal closed curve $c = c_1 \dots c_{10}$ for the case $\ell = 3$ with $p_1 = 2$, $p_2 = 3$, $p_3 = 5$, starting and ending at $(c_{1})_- = (c_{10})_+ = (0,0)$.]
Finally, we define $c_{3\ell+1} := [(c_{3\ell})_+, (c_{1})_-]$ and $c=c(n) :=
c_1 \dots c_{3\ell} c_{3\ell+1}$, see Fig. 10.4 where $c_{3\ell+1} = c_{10}$.
It is not difficult to check that $$|c|_{\mathcal{T}}= \ell + 2\sum_{i=1}^\ell p_i < (n+1)^2$$ and that the area $A(c)$ defined by $c$ is the following sum of areas of $2\ell$ triangles $$\begin{aligned}
A(c) & = \sum_{i=1}^\ell \tfrac 12 \left( \tfrac {1}{p_i} + (p_i-1)(1- \tfrac {1}{p_i}) \right) =
\tfrac 12 \sum_{i=1}^\ell \left( \tfrac {1}{p_i} + (p_i-1) - 1 + \tfrac {1}{p_i} \right)= \\
& = \sum_{i=1}^\ell \tfrac {1}{p_i} + \tfrac 12 \sum_{i=1}^\ell (p_i-2) =
\tfrac { \sum_{i=1}^\ell \tfrac { p_1 \dots p_\ell }{p_i} }{ p_1 \dots p_\ell } + \tfrac 12 \sum_{i=1}^\ell (p_i-2) .\end{aligned}$$ Since $p_1 \dots p_\ell > 2^{k_1 n}$, it follows that, after possible cancelation of 2, the denominator of $A(c)$ is greater than $2^{k_1 n - 1}$.
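The construction can be checked directly: the sketch below builds the vertex list of $c(n)$, computes $|c(n)|_{{\mathcal{T}}}$ segment by segment (using the count $|a|+|b|-\gcd(|a|,|b|)$ for a non-grid segment with displacement $(a,b)$), and evaluates the displayed formula for $A(c)$ with exact rationals; it is an added illustration that evaluates the formula above rather than recomputing the area independently.

```python
from fractions import Fraction
from math import gcd

def primes_up_to(n):
    return [p for p in range(2, n + 1) if all(p % d for d in range(2, p))]

def curve_c(n):
    """Vertex list of the closed curve c(n); the closing segment c_{3l+1} back
    to (0,0) is added implicitly when the segments are formed below."""
    v, verts = (0, 0), [(0, 0)]
    for p in primes_up_to(n):
        for step in [(0, 1), (-1, 0), (p, -1)]:   # c_{3i+1}, c_{3i+2}, c_{3i+3}
            v = (v[0] + step[0], v[1] + step[1])
            verts.append(v)
    return verts

def seg_len(p, q):
    a, b = abs(q[0] - p[0]), abs(q[1] - p[1])
    return a + b if a == 0 or b == 0 else a + b - gcd(a, b)

n = 20
verts, ps = curve_c(n), primes_up_to(n)
length = sum(seg_len(verts[i], verts[(i + 1) % len(verts)]) for i in range(len(verts)))
area = sum(Fraction(1, p) for p in ps) + Fraction(sum(p - 2 for p in ps), 2)
print(length, (n + 1) ** 2)     # |c|_T = l + 2*sum(p_i) = 162 < 441
print(area.denominator)         # 4849845 = p_1*...*p_l / 2 after the cancelation of 2
```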
It would be of interest to study similar problems for tessellations of the hyperbolic plane into regular congruent $2g$-gons, where $g \ge 2$; such problems would be technically close to the precise word problem and to the minimal diagram problem for the standard group presentation $$\langle \, a_1, a_2, \dots, a_{2g-1}, a_{2g} \ \| \ a_1 a_2 a_1^{-1} a_2^{-1} \dots a_{2g-1} a_{2g} a_{2g-1}^{-1} a_{2g}^{-1} =1 \, \rangle$$ of the fundamental group of an orientable closed surface of genus $g \ge 2$. One may hope that a version of the calculus of brackets suitable for these problems could be developed for such presentations.
It is also likely that suitable versions of the calculus of brackets could be developed for problems on efficient planar folding of RNA strands, which would provide polylogarithmic space algorithms for such problems. Recall that the available polynomial time algorithms for these problems, see [@CB1], [@CB2], [@CB3], [@CB4], use polynomial space. This approach would also make it possible to maximize foldings, again in polylogarithmic space, relative to various parameters similar to those discussed in Theorem \[thm3\].
[*Acknowledgements.*]{} The author is grateful to Tim Riley for bringing to the author’s attention folklore arguments that solve the precise word problem for presentation $\langle \, a, b \, \| \, a=1, b=1 \, \rangle$ in polynomial time, the article [@SL1], and the similarity between the [precise word problem]{} for presentation $\langle \, a, b \, \| \, a=1, b=1 \, \rangle$ and the problem on efficient planar folding of RNA strands. The author thanks the referee for a number of useful remarks and suggestions.
A. V. Anisimov, [*The group languages*]{}, Kibernetika (Kiev) [**4**]{}(1971), 18–24.
R. Armoni, A. Ta-Shma, A. Wigderson and S. Zhou, [*A $(\log n )^{4/3}$ space algorithm for $(s,t)$ connectivity in undirected graphs*]{}, J. Assoc. Comput. Mach. [**47**]{}(2000), 294–311.
S. Arora and B. Barak, [*Computational complexity – a modern approach*]{}, Cambridge Univ. Press, 2009.
D. Barrington, P. Kadau, K. Lange and P. McKenzie, [*On the complexity of some problems on groups input as multiplication tables*]{}, J. Comput. System Sci. [**63**]{}(2001), 186–200.
J.-C. Birget, A. Yu. Ol’shanskii, E. Rips, and M. V. Sapir, [*Isoperimetric functions of groups and computational complexity of the word problem*]{}, [Ann. Math.]{} [**156**]{}(2002), 467–518.
W. W. Boone, [*On certain simple undecidable problems in group theory, V, VI*]{}, Indag. Math. [**19**]{}(1957), 22–27, 227–232.
W. W. Boone, [*The word problem*]{}, Ann. Math. [**70**]{}(1959), 207–265.
V. V. Borisov, [*Simple examples of groups with unsolvable word problems*]{}, Mat. Zametki [**6**]{}(1969), 521–532.
D. Cummins, [*Dehn functions, the word problem, and the bounded word problem for decidable group presentations*]{}, preprint, [arXiv:1212.2024 \[math.GR\]]{}.
M. Dehn, [*Über unendliche diskontinuierliche Gruppen*]{}, Math. Ann. [**71**]{}(1911), 116–144.
R. I. Grigorchuk and P. F. Kurchanov, [*On the width of elements in free groups*]{}, Ukrainian Math. J. [**43**]{}(1991), 911–918.
R. I. Grigorchuk and S. V. Ivanov, [*On Dehn functions of infinite presentations of groups*]{}, Geom. Funct. Anal. [**18**]{}(2008), 1841–1874.
M. Gromov, [*Hyperbolic groups*]{}, [Essays in Group Theory]{}(S. Gersten, ed.), MSRI Publ. 8, Springer-Verlag, 1987, 75–263.
G. H. Hardy and E. M. Wright, [*An introduction to the theory of numbers*]{}, Oxford Univ. Press, London, 6th ed., 2008.
S. V. Ivanov, [*The free Burnside groups of sufficiently large exponents*]{}, [Internat. J. Algebra Comp.]{} [**4**]{}(1994), 1–308.
R. J. Lipton and Y. Zalcstein, [*Word problems solvable in logspace*]{}, J. Assoc. Comput. Mach. [**24**]{}(1977), 522–526.
R. J. Lipton, L. Snyder, and Y. Zalcstein, [*The complexity of word and isomorphism problems for finite groups*]{}, Proc. Conf. on Information Sciences and Systems, 1976, pp. 33–35.
R. C. Lyndon and P. E. Schupp, [*Combinatorial group theory*]{}, Springer-Verlag, 1977.
W. Magnus, A. Karrass, D. Solitar, [*Combinatorial group theory*]{}, Interscience Publ., 1966.
A. Majumdar, J. M. Robbins, M. Zyskin, [*Tangent unit-vector fields: nonabelian homotopy invariants and the Dirichlet energy*]{}, C. R. Math. Acad. Sci. Paris [**347**]{}(2009), 1159–1164.
A. Majumdar, J. M. Robbins, M. Zyskin, [*Tangent unit-vector fields: nonabelian homotopy invariants and the Dirichlet energy*]{}, Acta Math. Sci. Ser. B Engl. Ed. [**30**]{}(2010), 1357–1399.
A. A. Markov, [*The impossibility of certain algorithms in the theory of associative systems I*]{}, Dokl. Akad. Nauk [**55**]{}(1947), 583–586.
A. A. Markov, [*The impossibility of certain algorithms in the theory of associative systems II*]{}, Dokl. Akad. Nauk [**58**]{}(1947), 353–356.
N. Megiddo and U. Vishkin, [*On finding a minimum dominating set in a tournament*]{}, Theoret. Comput. Sci. [**61**]{}(1988), 307–316.
P. S. Novikov, [*On algorithmic unsolvability of the problem of identity*]{}, Doklady Akad. Nauk SSSR [**85**]{}(1952), 709–712.
P. S. Novikov, [*On the algorithmic unsolvability of the word problem in group theory*]{}, Trudy Mat. Inst. Steklov [**44**]{}(1955), 1–143.
R. Nussinov, G. Pieczenik, J. Griggs, and D. Kleitman, [*Algorithms for loop matchings*]{}, SIAM J. Appl. Math. [**35**]{}(1978), 68–82.
R. Nussinov and A. B. Jacobson, [*Fast algorithm for predicting the secondary structure of single stranded RNA*]{}, Proc. Natl. Acad. Sci. USA [**77**]{}(1980), 6309–6313.
R. Nussinov, B. Shapiro, S. Le, and J. V. Maizel, [*Speeding up the dynamic algorithm for planar RNA folding*]{}, Math. Biosci. [**100**]{}(1990), 33–47.
A. Yu. Ol’shanskii, [*Geometry of defining relations in groups*]{}, Nauka, Moscow, 1989; English translation: [*Math. and Its Applications, Soviet series*]{}, vol. 70, Kluwer Acad. Publ., 1991.
A. Yu. Ol’shanskii, [*On calculation of width in free groups*]{}, London Math. Soc. Lecture Note Ser. [**204**]{}(1995), 255–258.
C. H. Papadimitriou, *Computational complexity*, Addison-Wesley Publ., 1994.
C. H. Papadimitriou and M. Yannakakis, [*On limited nondeterminism and the complexity of the V-C dimension*]{}, J. Comput. System Sci. [**53**]{}(1996), 161–170.
E. L. Post, [*Recursive unsolvability of a problem of Thue*]{}, J. Symb. Logic [**12**]{}(1947), 1–11.
T. R. Riley, *Private communication*.
T. R. Riley, [*Computing area in presentations of the trivial group*]{}, preprint, [arXiv:1606.08833 \[math.GR\]]{}.
J. J. Rotman, *An introduction to the theory of groups*, Springer-Verlag, 1995.
W. J. Savitch, [*Relationships between nondeterministic and deterministic tape complexities*]{}, J. Comput. System Sci. [**4**]{}(1970), 177–192.
A. M. Turing, [*The word problem in semigroups with cancellation*]{}, Ann. Math. [**52**]{}(1950), 491–505.
M. Zuker and P. Stiegler, [*Optimal computer folding of large RNA sequences using thermodynamics and auxiliary information*]{}, Nucleic Acids Res. [**9**]{}(1981), 133–148.
[^1]: The work on this article was supported in smaller part by NSF grant DMS 09-01782
---
abstract: 'Neutrinos are unique probes of core-collapse supernova dynamics, especially in the case of black hole (BH) forming stellar collapses, where the electromagnetic emission may be faint or absent. By investigating two 3D hydrodynamical simulations of BH-forming stellar collapses of mass $40$ and $75\ M_\odot$, we identify the physical processes preceding BH formation through neutrinos, and forecast the neutrino signal expected in the existing IceCube and Super-Kamiokande detectors, as well as in the future generation DUNE facility. Prior to the abrupt termination of the neutrino signal in correspondence to BH formation, both models exhibit episodes of a long lasting, strong standing accretion shock instability (SASI). We find that the SASI peak in the Fourier power spectrum of the neutrino event rate will be distinguishable at $3\sigma$ above the detector noise for distances up to $\sim \mathcal{O}(30)$ kpc in the most optimistic scenario, with IceCube having the highest sensitivity. Interestingly, given the long duration of the SASI episodes, the spectrograms of the expected neutrino event rate carry clear signs of the evolution of the SASI frequency as a function of time, as the shock radius and post-shock fluid velocity evolve. Due to the thick non-convective layer covering the transiently stable neutron star, any contribution from the lepton emission self-sustained asymmetry (LESA) cannot be diagnosed in the neutrino signal.'
author:
- Laurie Walk
- Irene Tamborra
- 'Hans-Thomas Janka'
- Alexander Summa
bibliography:
- 'BH\_Progenitors.bib'
title: |
Neutrino emission characteristics of black hole formation\
in three-dimensional simulations of stellar collapse
---
Introduction
============
Core-collapse supernovae (SNe) occur when stars with a zero-age main sequence mass roughly between $10$–$150\ M_\odot$ end their lives with the onset of gravitational collapse of their inner core [@Fryer:1999mi; @Colgate:1966ax; @Colgate_1971]. According to the delayed SN mechanism, the electron-degenerate iron core bounces and induces a shock wave in the infalling stellar mantle. As it travels outwards, the shock wave loses energy by dissociating iron nuclei, and stalls. The SN then enters the accretion phase, in which infalling matter continually accretes onto the shock front [@Bethe:1990mw]. In successful SNe, the shock wave is thought to be revived by neutrinos [@Bethe:1984ux]. During the accretion phase, hydrodynamical instabilities such as the standing accretion shock instability (SASI), neutrino-driven convection, and the lepton emission self-sustained asymmetry (LESA) can develop, leading to large-scale asymmetries of mass distribution visible in the neutrino emission [@Blondin:2002sm; @Blondin:2006yw; @Scheck:2007gw; @Iwakami:2008qj; @Marek:2007gr; @Foglizzo:2011aa; @Fernandez:2010db; @Tamborra:2014aua]. Convective overturn and SASI also enhance the rate of neutrino heating, aiding the revival of the SN explosion.
A revived SN explosion, in which the stellar mantle is successfully ejected, will result in the formation of a neutron star. However, a black hole (BH) can be the outcome if the explosion mechanism fails, and matter continues to accrete onto the transient proto-neutron star (PNS), pushing it over its Chandrasekhar mass limit. In $10$–$30\%$ of all core-collapse cases, or likely more, the massive star is expected to end in the direct formation of a BH, as hinted by the observation of disappearing red giants [@Adams:2016ffj; @Adams:2016hit; @Smartt:2015sfa; @Gerke:2014ooa] and foreseen by recent theoretical work [@Ugliano:2012kq; @Sukhbold:2015wba; @OConnor:2012bsj].
The newly-born BH is expected to form after a few fractions of a second, up to several seconds, if it has a relatively long accretion phase. Black hole forming stellar collapses (sometimes also named “failed SNe”) emit a faint electromagnetic signal originating from the stripping of the hydrogen envelope . Sometimes, the BH formation may occur with a considerably longer delay, if not enough energy is released to finally unbind the star. In this case, a fraction of the stellar matter will fall back onto the PNS within minutes to hours and the PNS will be pushed beyond its limit, leading to BH formation [@Wong:2014tca; @Colgate_1971]. These so-called “fallback SNe” represent an intermediate class between ordinary SNe and BH-forming stellar collapses. The signals from fallback SNe will be similar to those of ordinary SNe with possible additional sub-luminous electromagnetic displays [@Zhang:2007nw]. Given the dim emission, BH-forming stellar collapses are difficult to detect electromagnetically. Neutrinos (and possibly gravitational waves) can therefore be unique probes of these cataclysmic events.
Early 1D hydrodynamical simulations of BH-forming models in spherical symmetry were carried out in [@Liebendoerfer:2002xn; @Sumiyoshi:2007pp; @Sumiyoshi:2008zw; @Ugliano:2012kq; @Sukhbold:2015wba]. In [@OConnor:2010moj] the effects of the nuclear equation of state (EoS), mass, metallicity, and rotation on BH formation were studied with a large set of so-called 1.5D progenitor models (i.e., employing spherical symmetry and an “approximate angle-averaged” rotation scheme). Recently, [@Pan:2017tpk] investigated the dependence of the BH formation time on the nuclear EoS in a 2D self-consistent simulation of a non-rotating $40\ M_\odot$ model. Together, these studies provide fundamental predictions of the key features of the neutrino emission properties from BH-forming models.
The detection of a neutrino burst from a nearby BH-forming stellar collapse is bound to yield precious hints on the BH formation, see e.g. [@Yang:2011xd; @Mirizzi:2015eza]. Moreover, the neutrino signal from BH-forming stellar collapses is of relevance in the context of the detection of the diffuse SN neutrino background; the detection of the latter will indeed provide insight on the fraction of BH-forming stellar collapses on cosmological scales [@Lunardini:2009ya; @Keehn:2010pn; @Nakazato:2015rya; @Nakazato:2013maa; @Priya:2017bmm; @Horiuchi:2017qja; @Moller:2018kpn].
The information carried by neutrinos from BH-forming stellar collapses has so far been explored using inputs from 1D hydrodynamical simulations only. In recent years, state-of-the-art 3D simulations of core-collapse SNe with sophisticated energy-dependent neutrino-transport have highlighted the plethora of information carried by neutrinos, especially regarding the pre-explosion dynamics [@Tamborra:2014hga; @Tamborra:2013laa; @Lund:2012vm; @Kuroda:2017trn; @Lund:2010kh; @Walk:2018gaw; @Walk:2019ier; @Tamborra:2014aua]. The first 3D simulation of BH formation and fallback was carried out in [@Chan:2017tdg], motivated by the need to explain astrophysical observations of metal poor stars, natal kicks, and the spin of BHs. It was found that the mass accretion rate remains large even directly after the core bounce. The rapid growth of the PNS causes a contraction which induces violent SASI activity [@Blondin:2002sm; @Mueller:2012ak; @Janka:2016fox] and an enhanced neutrino heating in the post-shock layer. The increased heating may be responsible for an expansion of the shock wave, sometimes even an explosion, followed by the eventual collapse of the transient PNS into a BH [@Chan:2017tdg; @Kuroda:2018gqq; @Pan:2017tpk].
This work aims to identify the dominant features in the observable neutrino signal characterizing BH formation. Our analysis is carried out with two 3D simulations of BH-forming stellar collapse with different progenitor masses and metallicities. The resulting neutrino emission properties are compared and contrasted to those from ordinary core-collapse SNe, and consequently, the neutrino properties which are unique to the post-bounce evolution of a SN into a BH are determined. We provide a first attempt to infer the underlying physical processes governing BH formation, using the features detectable through neutrinos with the IceCube Neutrino Telescope [@Abbasi:2011ss], Super-Kamiokande [@Ikeda:2007sa], and DUNE [@Acciarri:2015uup].
This paper is outlined as follows. In Sec. \[sec:Simulations\] the key features of the BH-forming models used throughout this work are outlined. Section \[sec:Properties\] discusses the neutrino emission properties and how they relate to the various hydrodynamical instabilities arising in the models. The features detectable in the neutrino signal which are unique to BH formation are presented in Sec. \[sec:Detectable\]. Finally, conclusions follow in Sec. \[sec:Conclusions\].
Simulations of Black Hole Forming Stellar Collapses {#sec:Simulations}
===================================================
In this Section, we briefly describe the key features of the two 3D hydrodynamical simulations used throughout this work. The neutrino emission properties will be described in the next Section.
Two simulations of non-rotating $40\ M_\odot$ and $75\ M_\odot$ stars were carried out with the <span style="font-variant:small-caps;">Prometheus-Vertex</span> code [@Rampp:2002bq], which includes three neutrino flavors, energy dependent ray-by-ray-plus neutrino transport, and state-of-the-art modeling of the microphysics [@Marek:2007gr; @Mueller:2012is]. Both simulations were performed on an axis-free Yin-Yang grid [@2004GGG.....5.9005K; @Wongwathanarat:2010yi].
In hydrodynamical simulations of BH-forming models, where the central density and mass of the PNS continue to grow to relatively large values, an implementation of general relativity (GR) is crucial. The VERTEX code approximates the effects of GR by replacing the Newtonian gravitational potential with a modified Tolman-Oppenheimer-Volkoff (TOV) potential, as proposed in [@Marek:2005if] and implemented in [@Pan:2017tpk; @Hudepohl:2014; @Mirizzi:2015eza]. Specifically, by prescribing the TOV potential according to Case A in [@Marek:2005if], the full GR case can be closely reproduced, while keeping a sophisticated treatment of neutrino transport and the simulation costs lower than needed for solutions of the Einstein field equations. Redshift corrections and time dilation are included in the neutrino transport, while relativistic transformations of the spatial coordinates are excluded in order to remain consistent with the Newtonian hydrodynamics equations governing the fluid dynamics in these simulations [@Rampp:2002bq].
The simulation of the $40\ M_\odot$ SN model has an angular resolution of $5$ degrees and a temporal resolution in the data output of $0.5$ ms. The progenitor has solar metallicity ($Z_\odot \sim 0.0134$) [@Woosley:2007as], and the employed nuclear equation of state (EoS) was that of Lattimer and Swesty [@Lattimer:1991nc] with nuclear incompressibility of $220$ MeV. The collapse into a BH occurs at $\sim 570$ ms post bounce.
The simulation of the $75\ M_\odot$ SN model has an angular resolution of $2$ degrees and a temporal resolution of $0.2$ ms in the data output. The progenitor has an ultra-poor metallicity, $Z \sim 10^{-4} Z_\odot$ [@Woosley:2002zz], and we again used the Lattimer and Swesty EoS. In this simulation, the BH formation occurs at $\simeq 250$ ms post bounce, indicating a shorter accretion phase compared to the $40\ M_\odot$ model.
Figure \[fig:simulations\] shows the time evolution of the characteristic quantities for the two models. The $75\ M_\odot$ model displays a generally constant mass accretion rate after the infall of the Si/O interface through the shock (top panel), and a steep increase in the baryon density (third panel) in the instants preceding the BH formation. Due to stabilizing effects of thermal pressure, BH formation sets in at a baryonic mass that is higher than the maximum mass of cold NSs for the employed EoS, which allows a maximum baryonic mass of about $2.3\ M_\odot$ [@Steiner:2012rk]. In both models, the PNS baryonic mass, plotted in the second panel of Fig. \[fig:simulations\], rapidly increases and reaches a value of about $2.5\ M_\odot$ just before the BH formation. As can be seen in the plot of the shock radius evolution (fourth panel), shock expansion occurs in the $75\ M_\odot$ model just prior to the BH formation, similarly to what was observed in [@Pan:2017tpk; @Chan:2017tdg; @Kuroda:2018gqq]. On the other hand, this does not happen for the $40\ M_\odot$ model where the average shock radius reaches a quasi-stationary value, although with considerable excursions with time. The behavior of the shock radius for the $40\ M_\odot$ model differs from that of the $40\ M_\odot$ model simulated in 2D with the same EoS and nuclear incompressibility that was studied in [@Pan:2017tpk]; in fact, the latter showed a shock revival prior to the collapse of the PNS into a BH. In the $40\ M_\odot$ model, episodes of shock contraction and expansion occur just before the BH formation, while in the $75\ M_\odot$ model, the shock radius quickly expands before the collapse into a BH. As expected and shown in the bottom panel of Fig. \[fig:simulations\], the PNS radius rapidly contracts until the BH forms.
![image](Fig1.pdf){width="1\columnwidth"}
Neutrino Emission Properties {#sec:Properties}
============================
The neutrino signal carries unique imprints of the hydrodynamical instabilities governing the accretion phase. To determine those characteristic of the onset of BH formation, we explore the evolution of the emitted neutrino properties of our $40\ M_\odot$ and $75\ M_\odot$ models. The neutrino properties were extracted at a radius of $500$ km and remapped from the Yin-Yang simulation grid onto a standard spherical grid. The neutrino emission properties for each flavor have been projected and plotted to appear as they would for a distant observer located along a specific angular direction, following the procedure outlined in Appendix A of Ref. [@Tamborra:2014hga]. The neutrino emission properties for every angular direction, as well as the $4 \pi$-equivalent ones, can be provided upon request for [both models](https://wwwmpa.mpa-garching.mpg.de/ccsnarchive/data/Walk2019/index.html).
Directionally independent neutrino emission properties {#sec:Other_Features}
------------------------------------------------------
Before investigating the directional dependence of the neutrino emission properties, we focus on characterizing the global features of the $40\ M_\odot$ and $75\ M_\odot$ models that may be unique to BH-forming stellar collapses. We do this by considering the neutrino emission properties obtained after integrating over all observer directions.
![image](Fig2.pdf){width="1.7\columnwidth"}
The top panels of Fig. \[fig:Luminosity\_4pi\] show the luminosity of each neutrino flavor ($\nu_e$, $\bar{\nu}_e$, and $\nu_x = \bar{\nu}_x = \nu_\mu, \nu_\tau$) extracted at 500 km and individually obtained by integrating over all observer directions for both models. The red, blue, and green curves refer to $\nu_e$, $\overline{\nu}_e$, and $\nu_x$, respectively. One can see that large amplitude modulations due to SASI survive in the $40\ M_\odot$ model (left panel), even after averaging over all observer directions, suggesting that SASI is quite strong in this model. This is not the case for the $75\ M_\odot$ model (right panel). Moreover, the $75\ M_\odot$ model has a shorter accretion phase before BH formation and a higher neutrino luminosity.
Comparing the global neutrino properties of these models with those of ordinary SNe, see e.g. [@Tamborra:2014hga], we find that the neutrino luminosity is larger for the BH-forming models, up to a factor of two for the $75\ M_\odot$ model. The neutrino signal suddenly terminates at $570$ and $250$ ms for the $40\ M_\odot$ and $75\ M_\odot$ models, respectively, corresponding to BH formation. At $\sim170$ ms post bounce, the neutrino luminosity drops in the $40\ M_\odot$ model because of the constant mass accretion but rapid PNS contraction (see top and bottom panels of Fig. \[fig:simulations\]). At $\sim450$ ms post bounce, the infall of the Si/O interface in the $40\ M_\odot$ model leads to a temporary decline in the neutrino luminosity and shock expansion. Notably, the neutrino luminosity increases after this drop, until $\sim 510$ ms, where it drops again. After this second drop, the luminosity climbs until the onset of the collapse to a BH. This behavior tracks the contraction and expansion of the shock radius observed in Fig. \[fig:simulations\]. At $\sim 550$ ms, the shock radius contracts again (see fourth panel of Fig. \[fig:simulations\]), leading to an increase in the luminosity and finally to the onset of BH formation.
Another notable feature, present in both BH-forming models, is the steady increase in the $\nu_x$ luminosity in $[150,400]$ ms for the $40\ M_\odot$ model and after the drop due to the crossing of the Si/O interface at $175$ ms for the $75\ M_\odot$ model (these time intervals correspond to the dipolar SASI phases for each model, as will be shown in the following Section). This trend in the $\nu_x$ luminosity is characteristic of the BH-forming models and it is due to the increase in temperature as the PNS contracts, similar to what was found in [@Liebendoerfer:2002xn]. Moreover, a crossing between the $\nu_e$ and $\bar\nu_e$ luminosities occurs. In fact, the $\nu_e$ luminosity tends to drop after the crossing of the Si/O interface at $450$ ms ($175$ ms) for the $40\ M_\odot$ ($75\ M_\odot$) model, while the $\bar\nu_e$ luminosity stays almost stationary. This decrease of the $\nu_e$ luminosity occurs because the mass accretion rate becomes lower and the PNS contracts. In contrast, the $\nu_x$ luminosity increases because of the PNS heating during its contraction and deeper decoupling of $\nu_x$. This trend is less pronounced in the $40\ M_\odot$ model because its PNS is less massive, hence it contracts more slowly and the mass accretion rate is higher. The $\bar\nu_e$’s, on the other hand, decouple in between $\nu_e$ and $\nu_x$ and tend to have a resultant intermediate trend between the increase of the $\nu_x$ luminosity and the decrease of the $\nu_e$ luminosity, which result from the simultaneous increase in the medium temperature due to the PNS contraction. The mean energies in the lower panels of Fig. \[fig:Luminosity\_4pi\] have been estimated by dividing the energy flux by the number flux. No crossing of the mean energies of $\bar\nu_e$ and $\nu_x$ occurs, as was found in core-collapse simulations with lower mass accretion rates and correspondingly less rapidly growing PNS masses (see e.g. [@Marek:2008qi]).
Directionally dependent neutrino emission properties {#sec:Directional_SASI}
----------------------------------------------------
Previous work [@Tamborra:2014hga; @Tamborra:2013laa; @Walk:2018gaw; @Walk:2019ier; @Kuroda:2017trn; @Lund:2012vm; @Lund:2010kh] pointed out that the neutrino emission properties are highly directionally dependent. More specifically, in SN models showing SASI activity, sinusoidal modulations in the neutrino signal associated with the bouncing of the shock wave are visible to an observer sitting along the SASI plane, while these modulations may fully disappear for an observer located perpendicular to the SASI plane [@Tamborra:2013laa; @Tamborra:2014hga]. Building on these findings, we attempt to identify potential SASI episode(s) in the BH-forming models by looking for modulations in the neutrino signal and scanning over all observer directions.
![image](Fig3.pdf){width="2.\columnwidth"}
Figure \[fig:s40\_Properties\] shows the neutrino luminosities and mean energies for the $40\ M_\odot$ model as a function of the post-bounce time along three selected observer directions. The three directions are chosen to highlight the most extreme amplitude modulations, and to show the maximal variation between periods of modulation.
Along Direction 1, a period of high amplitude modulation of the neutrino properties can be observed in the interval $[160, 500]$ ms. These modulations are indicative of a single long SASI phase along the plane of observation (SASI I). By comparing the left and central panels of Fig. \[fig:s40\_Properties\], however, one can see that only the modulations in $[160, 420]$ ms decrease upon changing to an observer along Direction 2 (SASI I, Phase I). The modulations in the second sub-interval, $[420, 500]$ ms, disappear by shifting to an observer placed along Direction 3 (SASI I, Phase II). This suggests a change of the main SASI plane at the interval $[420, 500]$ ms. Whether the shift of the SASI plane occurs gradually or instantaneously cannot be easily inferred by scanning through the observer directions alone, and will be further investigated in the next Section. Finally, a second period of signal modulation can be identified in Fig. \[fig:s40\_Properties\] between $[530, 570]$ ms (SASI II). This indicates the occurrence of a second SASI episode for the $40\ M_\odot$ model, which develops directly after the infall of the Si/O interface, and appears to remain stable until the collapse into a BH.
![image](Fig4.pdf){width="1.5\columnwidth"}
Similarly, Fig. \[fig:u75\_Properties\] shows the neutrino properties of each flavor for the $75\ M_\odot$ model. Upon scanning over all observer directions, two directions are extracted to illustrate the behavior of this model; a direction of “weak modulations," in which there is minimal total variation of the signal compared to the average variation, and a direction of “strong modulations," where the amount of signal variation is maximal. A first phase of signal modulations is apparent in the interval between $[140, 175]$ ms. These modulations remain present in the signal throughout all scanned observer directions and are due to SASI quadrupolar motions, as will be further investigated in the next Section. Following this, a SASI dipolar phase can be identified in the interval of $[175, 230]$ ms. As expected, the latter depends more strongly on the observer direction, thus disappearing when shifting between the directions of strong and weak modulations.
Characterization of SASI in black hole forming models {#sec:Evolution_SASI}
-----------------------------------------------------
In order to obtain a better characterization of SASI in the BH-forming models, and to verify the conjectures of the SASI episodes presented in the previous Section, a detailed investigation of the temporal evolution of SASI is required. This is the focus of this Section.
The full animation of the time evolution of the $\overline{\nu}_e$ luminosity relative to its $4\pi$-average, \[$(L_{\bar{\nu}_e} - \langle L_{\bar{\nu}_e}\rangle)/\langle L_{\bar{\nu}_e}\rangle$\], for the $40\ M_\odot$ model provided as [Supplemental Material](https://wwwmpa.mpa-garching.mpg.de/ccsnarchive/data/Walk2019/index.html) of this paper, shows that SASI spiral motions dominate the dynamics of this model.
Dipolar (quadrupolar) SASI motions are characterized by large-amplitude nonradial deformations of the post-shock layer, reflected by a dominant $l=1$ ($l=2$) mode in the neutrino luminosity [@Hanke:2013jat]. Following the approach introduced in Sec. IV of [@Walk:2019ier], we employ a multipole analysis of the neutrino luminosity, and track the evolution of its dipole in time. For simplicity, we limit our analysis to the $\overline{\nu}_e$ signal; however, similar trends are found for the other (anti)neutrino species.
We estimate the monopole ($A_0$), dipole ($A_1$), and quadrupole ($A_2$) of the $\overline{\nu}_e$ luminosity as described in Sec. IV and Eq. 5 of [@Walk:2019ier]. The left panel of Fig. \[fig:Rel\_Lum\_OBS\] shows the time evolution of $A_1$ and $A_2$ relative to $A_0$ for the $\overline{\nu}_e$ luminosity of the $40\ M_\odot$ BH-forming model. The dashed vertical lines indicate the two SASI intervals described in Sec. \[sec:Directional\_SASI\] (SASI I and SASI II). In both intervals, the dipole moment is dominant confirming the development of SASI in each time window.
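A schematic version of such a multipole analysis is sketched below (our own minimal reading of the procedure: a luminosity map on a regular $(\theta,\phi)$ grid is expanded in spherical harmonics and the power in $l=0,1,2$ is reported; the normalization of Eq. 5 of [@Walk:2019ier] is not reproduced here, so only relative quantities such as $A_1/A_0$ are meaningful).

```python
import numpy as np
from scipy.special import sph_harm

def multipole_power(L, theta, phi, lmax=2):
    """Power A_l = sqrt(sum_m |A_lm|^2) of a map L(theta, phi) on a regular grid,
    with A_lm = integral of L * conj(Y_lm) dOmega; theta is the colatitude in
    [0, pi], phi the azimuth in [0, 2*pi)."""
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    dOmega = np.sin(tt) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    power = []
    for ell in range(lmax + 1):
        coeffs = [np.sum(L * np.conj(sph_harm(m, ell, pp, tt)) * dOmega)
                  for m in range(-ell, ell + 1)]
        power.append(np.sqrt(sum(abs(c) ** 2 for c in coeffs)))
    return power

# Toy map: isotropic emission plus a 10% dipole modulation along the z-axis.
theta = np.linspace(0.0, np.pi, 91)[1:-1]             # crude quadrature, poles excluded
phi = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
tt, _ = np.meshgrid(theta, phi, indexing="ij")
L = 1.0 + 0.1 * np.cos(tt)
A0, A1, A2 = multipole_power(L, theta, phi)
print(A1 / A0, A2 / A0)     # dipole-to-monopole ratio ~ 0.06, quadrupole ~ 0
```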
To investigate the shift of the SASI plane at $\sim 420$ ms during the SASI I episode, the direction of the positive dipole moment is tracked along the SN emission surface over time in the top panel of Fig. \[fig:s40\_Lum\_Dip\_Evo\]. The left and central panels show the first SASI episode split up into the two sub-time intervals identified in Fig. \[fig:s40\_Properties\] (SASI I, Phase I and II), and the right panel shows the second SASI episode (SASI II). The markers indicate the coordinates along which Directions 1, 2, and 3 in Fig. \[fig:s40\_Properties\] lie from left to right, respectively.
![image](Fig5.pdf){width="1.5\columnwidth"}
![image](Fig6.pdf){width="2\columnwidth"}\
It is clear that the first SASI phase kicks in along the equatorial plane. Here, it remains relatively stable until $\sim 420$ ms. After that, the SASI dipole plane morphs from an equatorial one to the almost perpendicular plane spanned by the circular pattern in the central panel of Fig. \[fig:s40\_Lum\_Dip\_Evo\]. Direction 1 (marked in the top left panel of Fig. \[fig:s40\_Lum\_Dip\_Evo\]) lies on both the equatorial plane and this shifted plane, explaining the presence of signal modulations over the full SASI time interval, $[160, 500]$ ms, in the left panel of Fig. \[fig:s40\_Properties\]. Direction 2 (see top middle panel of Fig. \[fig:s40\_Lum\_Dip\_Evo\]), however, lies far away from the equator, and thus, the central panel of Fig. \[fig:s40\_Properties\] does not show strong amplitude modulations in the first sub-time interval. The marker of Direction 3 (top right panel of Fig. \[fig:s40\_Lum\_Dip\_Evo\]) lies away from the plane to which the second phase of the first SASI episode evolves in the interval $[420, 500]$ ms, and thus, the signal modulations in this interval disappear between Direction 2 and Direction 3 in Fig. \[fig:s40\_Properties\].
The evolution of the SASI plane over time is specific to this BH-forming model, probably because of the long duration of the first SASI episode. Based on the neutrino properties discussed in this Section, we infer that the SASI plane in this model evolves possibly as a result of perturbations caused by changes in the mass-accretion rate and PNS radius.
The right panel of Fig. \[fig:Rel\_Lum\_OBS\] shows the corresponding relative dipole and quadrupole strengths of the $\overline{\nu}_e$ luminosity for the $75\ M_\odot$ model as a function of time. Interestingly, in the right panel of Fig. \[fig:Rel\_Lum\_OBS\] the quadrupole moment dominates in the earlier interval, $[140, 175]$ ms, confirming that the corresponding SASI modulations identified in the neutrino luminosity have a quadrupolar nature. In fact, by plotting the $\overline{\nu}_e$ luminosity relative to its $4\pi$-average ($\langle L_{\bar{\nu}_e}\rangle$) on a Molleweide map for the SN emission surface at four consecutive snapshots in time in Fig. \[fig:Quadrupole\_Snapshots\], a clear quadrupolar pattern of SASI can be seen. The SASI quadrupolar activity is even more apparent in the full animation of the $\overline{\nu}_e$ luminosity relative the $4\pi$-average, added as [Supplemental Material](https://wwwmpa.mpa-garching.mpg.de/ccsnarchive/data/Walk2019/index.html) of this paper, from which the snapshots in Fig. \[fig:Quadrupole\_Snapshots\] have been taken.
Once the Si/O shell interface has fallen into the shock front at $\sim 175$ ms, the shock front expands, increasing the volume within the gain layer (see Fig. \[fig:simulations\]). This leads to favorable conditions for the development of a dipolar SASI [@Foglizzo:2006fu], and it is consistent with the dominance of the dipole SASI mode in the $40\ M_\odot$ model, where the shock radius during most of the post bounce evolution (until shortly before BH formation) is larger than in the $75\ M_\odot$ model between $140$ ms and $175$ ms. This is evidenced by the dipole dominance in Fig. \[fig:Rel\_Lum\_OBS\] in $[175, 230]$ ms until the collapse to a BH takes place. The bottom right panel of Fig. \[fig:s40\_Lum\_Dip\_Evo\] shows the evolution of the $\overline{\nu}_e$ luminosity dipole moment during this SASI episode. Clearly, the SASI plane is stationary for the $75\ M_\odot$ model. The moment preceding BH formation is marked by a steep decay of the luminosity dipole at $\simeq 230$ ms. This, in turn, is also visible as a damping of the SASI modulations in Fig. \[fig:u75\_Properties\].
![image](Fig7.pdf){width="1\columnwidth"}
Absence of LESA in black hole forming models {#sec:LESA}
--------------------------------------------
In [@Tamborra:2014aua] a large hemispheric dipolar asymmetry in the electron-lepton number (ELN) flux was found in the 3D simulations of $11.2$, $20$, and $27\ M_\odot$ SN models. These findings have been confirmed in [@Janka:2016fox] and, more recently in [@OConnor:2018tuw; @Vartanyan:2018iah; @Glas:2018vcs]. LESA is thought to originate from hemispherically asymmetric convection in the PNS, which induces regions of excess of $\nu_e$ with respect to $\overline{\nu}_e$ in the PNS convective layer [@Tamborra:2014aua; @Glas:2018vcs]. This in turn leads to the development of large-scale asymmetries in the ELN flux [@Tamborra:2014aua].
In order to quantitatively characterize LESA, for each angular direction $(\theta,\phi)$, we employ the $\rho$ parameter introduced in [@Walk:2019ier]: $$\begin{aligned}
\rho = \frac{1}{T} \int_{t_1}^{t_2} dt \left|\frac{\tilde{L}_{\bar{\nu}_e}(\theta,\phi)-\langle\tilde{L}_{\bar{\nu}_e}\rangle}{\langle\tilde{L}_{\bar{\nu}_e}\rangle}\right|\ ,
\label{eq:deltarho}\end{aligned}$$ with $T = (t_2 - t_1)$ and $\langle\tilde{L}_{\bar{\nu}_e}\rangle$ denoting the time-dependent average of $\tilde{L}_{{\bar{\nu}_e}}(\theta, \phi)$ over all directions; the time-dependent luminosity $\tilde{L}_{{\bar{\nu}_e}}(\theta, \phi)$ is the “mid-line" luminosity found by determining the local maxima and minima of the luminosity signal and interpolating halfway between them (see Ref. [@Walk:2019ier] for more details).
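One possible implementation of this construction is sketched below (our own reading of the prescription: local maxima and minima of the directional light curve are located, the mid-line is interpolated halfway between the two envelopes, and the absolute relative deviation from the $4\pi$ average is averaged over $[t_1, t_2]$; the simple extremum finder and the toy light curve are illustrative choices, not those of Ref. [@Walk:2019ier]).

```python
import numpy as np

def midline(t, L):
    """Mid-line of a modulated light curve: halfway between the upper envelope
    (interpolated through local maxima) and the lower envelope (local minima)."""
    imax = np.where((L[1:-1] > L[:-2]) & (L[1:-1] > L[2:]))[0] + 1
    imin = np.where((L[1:-1] < L[:-2]) & (L[1:-1] < L[2:]))[0] + 1
    return 0.5 * (np.interp(t, t[imax], L[imax]) + np.interp(t, t[imin], L[imin]))

def rho(t, L_dir, L_avg, t1, t2):
    """Time average over [t1, t2] of the absolute relative deviation of the
    mid-line luminosity from the 4pi-averaged one (uniform time grid assumed)."""
    mask = (t >= t1) & (t <= t2)
    dev = np.abs((midline(t, L_dir) - L_avg) / L_avg)
    return dev[mask].mean()

# Toy directional light curve: a 5% offset plus SASI-like 80 Hz modulations.
t = np.linspace(0.15, 0.25, 2000)                        # seconds post bounce
L_avg = 6.0e52 * (1.0 - 0.5 * (t - 0.15))                # mock 4pi-averaged luminosity
L_dir = L_avg * (1.05 + 0.15 * np.sin(2.0 * np.pi * 80.0 * t))
print(rho(t, L_dir, L_avg, 0.16, 0.24))                  # ~ 0.05: the offset survives
```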
![image](Fig8.pdf){width="1.7\columnwidth"}
We use the $\rho$ parameter to estimate the strength of LESA in the BH-forming models, and show it in Fig. \[fig:Rho\_Plot\]. To maximize any effects due to LESA, we choose to integrate over the time intervals in which the ELN dipole is found to be the largest for each model (i.e., $[170,240]$ ms for the $40\ M_\odot$ model, and $[150,200]$ ms for the $75\ M_\odot$ model). It is clear that the $75\ M_\odot$ model exhibits signs of weaker LESA compared to the $40\ M_\odot$ model, and that both exhibit a weaker LESA dipole than previously studied SN models (see, e.g., Fig. 7 of [@Walk:2019ier] for comparison). The low amplitudes of the hot/cold regions present in Fig. \[fig:Rho\_Plot\], corresponding to strong/weak ELN flux, imply that LESA is almost absent in BH-forming stellar collapses. This is due to the very high mass accretion rate of the PNS before collapse to a BH. The mass accumulates in a thick and hot accretion mantle around the core of the PNS, burying the PNS convection layer deep inside the compact object. The neutrino luminosity is strongly dominated by the emission from the convectively stable accretion layer. Any convective asymmetry that may develop in the convection zone deep inside the PNS therefore hardly affects the neutrino emission.
References [@Tamborra:2014aua; @Tamborra:2014hga] found that the development of LESA is characterized by an anti-correlation of the relative luminosities of $\nu_e$ and $\bar{\nu}_e$. Figure \[fig:Rel\_Lum\] shows the evolution of the luminosity of each neutrino flavor ($L_{\nu_\beta}$) for the directions corresponding to the maximum ELN variation shown in Fig. \[fig:Rho\_Plot\] for the $40\ M_\odot$ and $75\ M_\odot$ models, relative to the time-dependent average luminosity over all directions ($\langle L_{\nu_\beta} \rangle$). Figure \[fig:Rel\_Lum\] shows negligible anti-correlation of the luminosity variations of $\nu_e$ and $\bar{\nu}_e$, slightly visible for the $40\ M_\odot$ model and absent in the $75\ M_\odot$ model. This indeed confirms the absence of LESA in these BH-forming models.
![image](Fig9.pdf){width="1.3\columnwidth"}
Detectable Features in the Neutrino Signal {#sec:Detectable}
==========================================
Neutrinos are amongst the only potentially detectable probes of a massive star collapsing into a BH. In this Section, we focus our attention on the detectable characteristics of the emitted neutrino signal, and identify which of the physical features discussed above can be directly inferred from measurable quantities. To this purpose, we estimate the event rate detectable in IceCube, Super-Kamiokande, and in the Deep Underground Neutrino Experiment (DUNE) for the two BH-forming models.
Given the current uncertainties on the flavor conversion of neutrinos in the SN envelope [@Mirizzi:2015eza; @Horiuchi:2017sku], we refrain from considering any specific flavor conversion scenario, and instead rely on the un-oscillated neutrino signal. The real signal detected on Earth will therefore be an intermediate case between the $\bar\nu_e (\nu_e)$ un-oscillated signal (mimicking what one would detect in the absence of flavor conversions) and the $\nu_x$ un-oscillated signal (mimicking what one would detect under the assumption of full flavor conversions).
Expected neutrino event rate {#sec:Event_Rate}
----------------------------
The neutrino detector currently providing the largest event statistics for a Galactic SN is the IceCube Neutrino Observatory [@Abbasi:2011ss]. This Cherenkov detector works mainly through the inverse-beta-decay channel, and is therefore sensitive to the $\overline{\nu}_e$ flux. We estimate the IceCube event rate for each BH-forming model by folding the emitted neutrino flux, whose spectrum is approximated in terms of a Gamma distribution [@Tamborra:2014hga; @Keil:2002in], with the inverse-beta-decay cross section, as outlined in Sec. V of [@Tamborra:2014hga] and references therein.
The signal modulations developing in BH-forming stellar collapse will also be detectable by the water Cherenkov detector Super-Kamiokande [@Ikeda:2007sa]. Super-Kamiokande is also mainly sensitive to $\overline{\nu}_e$ and will collect lower statistics than IceCube, but it has the advantage of being virtually background free. The Super-Kamiokande detector has a fiducial volume of 22.5 kton. For the estimation of the expected event rate, we employ the same inverse-beta-decay cross section as for IceCube, and keep the same energy threshold (5 MeV). A $100\%$ detector efficiency is assumed for Galactic SNe [@Fukuda:2002uc].
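The folding procedure described above can be mimicked schematically for a Super-Kamiokande-like fiducial volume. The sketch below is not the pipeline of Refs. [@Tamborra:2014hga; @Keil:2002in]: the quadratic approximation of the inverse-beta-decay cross section, the target-proton number and the flat efficiency are simplifying assumptions introduced only for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

MeV = 1.602e-6   # erg
kpc = 3.086e21   # cm

def gamma_spectrum(E, Emean, alpha):
    """Normalized Gamma energy distribution (mean energy Emean, pinching alpha)."""
    k = (alpha + 1.0) / Emean
    return k**(alpha + 1) * E**alpha * np.exp(-k * E) / Gamma(alpha + 1)

def sigma_ibd(E):
    """Crude quadratic approximation of the IBD cross section in cm^2 (E in MeV)."""
    return 9.5e-44 * E**2

def ibd_rate(L_nuebar, Emean, alpha, distance=10.0 * kpc,
             n_targets=1.5e33, E_thr=5.0, efficiency=1.0):
    """Events per second from folding the nu_e-bar flux with the IBD cross section.

    L_nuebar  : nu_e-bar luminosity in erg/s at the chosen post-bounce time
    Emean     : mean nu_e-bar energy in MeV
    n_targets : free protons in the fiducial volume (illustrative value)
    """
    # number flux at Earth per unit energy [1/(cm^2 s MeV)]
    def dN_dE(E):
        return L_nuebar / (4.0 * np.pi * distance**2 * Emean * MeV) \
               * gamma_spectrum(E, Emean, alpha)
    integrand = lambda E: dN_dE(E) * sigma_ibd(E) * efficiency
    rate, _ = quad(integrand, E_thr, 100.0)
    return n_targets * rate
```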
![image](Fig10.pdf){width="1.5\columnwidth"}
The top panels of Fig. \[fig:IC\_Event\_Rate\] show the predicted IceCube event rate for the $40\ M_\odot$ (left) and $75\ M_\odot$ (right) models for an observer located along the directions selected in Sec. \[sec:Directional\_SASI\] and for a SN located at a distance of $10$ kpc. Each predicted signal exhibits clear large-amplitude modulations due to SASI for observers located in the proximity of the SASI planes, similar to what was found for the $15$, $20$, and $27\ M_\odot$ models in [@Walk:2018gaw; @Tamborra:2014hga; @Tamborra:2013laa]. Remarkably, however, due to the strong SASI activity, the modulations in the neutrino signal will also be visible for the $40\ M_\odot$ model for observers located away from the main SASI planes. Additionally, the quadrupolar SASI phase of the $75\ M_\odot$ model in the interval $[140, 175]$ ms is also clearly detectable along any given observer direction. The features present in the neutrino signal described above will be detectable by the Super-Kamiokande detector as well, as shown in the middle panels of Fig. \[fig:IC\_Event\_Rate\], although with a reduced event rate.
As visible from Fig. \[fig:IC\_Event\_Rate\], BH-forming stellar collapses are expected to have a slightly higher event rate than ordinary core-collapse SNe (see e.g. [@Tamborra:2013laa; @Tamborra:2014hga] for comparison). The left panel of Fig. \[fig:detection\_sign\] shows the detection significance of the time-integrated neutrino burst preceding the BH formation with the IceCube Neutrino Telescope. One can conclude that a BH-forming event will be detectable by IceCube up to $\mathcal{O}(100)$ kpc. For comparison, the middle panel of Fig. \[fig:detection\_sign\] shows the detection significance for Super-Kamiokande; the burst prior to BH formation will be detectable in neutrinos up to $\mathcal{O}(250)$ kpc.
![image](Fig11.pdf){width="2\columnwidth"}
Given the strong SASI activity of the models investigated in this paper and the high event rate for BH-forming models expected from IceCube and Super-Kamiokande, we foresee that the characteristic neutrino signatures will also be visible in the upcoming DUNE neutrino detector [@Acciarri:2015uup]. DUNE is a $40$ kton liquid-argon time projection chamber planned to be fully operational by 2026. The main detection channel for low energy neutrinos will be the charged current absorption of electron neutrinos on $^{40}$Ar ($\nu_e + ^{40}$Ar $\rightarrow e^- + ^{40}$K$^\ast$) with an energy threshold of $5$ MeV and a planned detection efficiency of $86\%$ [@Acciarri:2015uup].
Similarly to the IceCube and Super-Kamiokande event rates, the DUNE event rate can be estimated by folding the emitted neutrino flux with the $\nu_e$-$^{40}$Ar cross-section provided in [@Acciarri:2015uup] and by taking into account the detection efficiency. The bottom panels of Fig. \[fig:IC\_Event\_Rate\] show the predicted neutrino event rate in DUNE as a function of time for the $40\ M_\odot$ (left) and $75\ M_\odot$ (right) models along the observer directions selected in Sec. \[sec:Directional\_SASI\] and at a distance of $10$ kpc. Note that the main background for the detection of neutrinos from stellar collapse events would be about $122$ solar neutrinos per day [@Acciarri:2015uup]. As we can see from Fig. \[fig:IC\_Event\_Rate\], this background is negligible with respect to the event statistics related to the neutrino burst from BH-forming stellar collapse events. The total expected number of events for the $75\ M_\odot$ model increases by $\simeq 30\%$ on average if the charged current absorption of $\bar\nu_e$ on $^{40}$Ar and the neutral current scattering of all flavors on $^{40}$Ar are included. The gap between the event rates from $\nu_e$ and the heavy lepton flavors becomes increasingly large as the post-bounce time increases, because, in this particular model, the emitted flux of $\nu_e$’s drops continuously after the infall of the Si/O interface while the flux of $\bar\nu_e$ and $\nu_x$ increases.
As shown in the right panel of Fig. \[fig:detection\_sign\], the detection of neutrinos from BH-forming collapse events will occur with a significance larger than $3\sigma$ in DUNE for bursts located up to $170$ kpc for the $75\ M_\odot$ model and $240$ kpc for the $40\ M_\odot$ model. In fact, although Super-Kamiokande and DUNE will collect lower statistics than IceCube (see Fig. \[fig:IC\_Event\_Rate\]), the detection of neutrinos from stellar core-collapse events with these detectors will be more promising than with IceCube at large distances, since these detectors are virtually background free. This is very encouraging for the detection of BH-forming stellar collapses, which will not be visible electromagnetically.
Fourier analysis of the event rate {#sec:Fourier}
----------------------------------
In this Section, we further identify the detectable features of SASI unique to BH formation by studying the frequency content of the neutrino event rate of the two BH-forming models in Fig. \[fig:IC\_Event\_Rate\]. For this, we investigate the spectrograms of the neutrino event rate obtained as detailed in [@Walk:2018gaw], and the power spectrum of the neutrino event rate, computed as detailed in [@Lund:2010kh].
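For orientation, the kind of analysis just described can be reproduced schematically as follows; the windowing choices and the shot-noise normalization are simplified assumptions of this sketch and differ in detail from the procedures of Refs. [@Walk:2018gaw; @Lund:2010kh].

```python
import numpy as np
from scipy.signal import spectrogram, periodogram

def event_rate_spectrogram(rate, dt, window_ms=50.0):
    """Short-time Fourier analysis of a binned event-rate time series.

    rate : events per bin (1-D array); dt : bin width in seconds.
    Returns frequencies [Hz], segment times [s] and power normalized to its maximum.
    """
    nperseg = int(window_ms * 1e-3 / dt)
    f, t_seg, Sxx = spectrogram(rate - rate.mean(), fs=1.0 / dt,
                                nperseg=nperseg, noverlap=nperseg // 2)
    return f, t_seg, Sxx / Sxx.max()

def power_spectrum(rate, dt):
    """Periodogram of the event rate, normalized to a rough shot-noise estimate."""
    f, Pxx = periodogram(rate - rate.mean(), fs=1.0 / dt)
    # Poissonian shot noise is flat in frequency; estimate it from the high-frequency tail
    noise_floor = np.median(Pxx[f > 0.8 * f.max()])
    return f, Pxx / noise_floor
```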
![image](Fig12.pdf){width="2\columnwidth"}
Figure \[fig:s40\_Spec\] shows the spectrograms of the IceCube event rate for the $40\ M_\odot$ model along the three observer directions selected in Sec. \[sec:Directional\_SASI\]. Each spectrogram is normalized to the maximum Fourier power along the selected observer direction. Along Direction 1, the first SASI episode is identifiable through the stripe in hotter colors appearing at the SASI frequency \[$\mathcal{O}(100)$ Hz\]. Remarkably, given the long-lasting SASI phase, the spectrogram clearly highlights an overall increase of the SASI frequency as a function of time. This trend can be explained by taking into account that the SASI frequency depends on the shock radius and on the NS radius in the following way [@Scheck:2007gw] $$f_{\mathrm{SASI}}^{-1} \simeq \int_{R_{\mathrm{NS}}}^{R_{\mathrm{s}}} \frac{dr}{|v|} + \int_{R_{\mathrm{NS}}}^{R_{\mathrm{s}}} \frac{dr}{c_s - |v|}\ ,$$ where $c_s$ is the radius-dependent sound speed and $v$ is the accretion velocity in the post-shock layer. As visible from Fig. \[fig:simulations\], for the $40\ M_\odot$ model, the shock radius contracts, reaching a local minimum around $200$ ms, then it slightly expands around $300$ ms until it reaches a stationary value; correspondingly, $R_{\mathrm{NS}}$ contracts and the post-shock velocity tends to follow a trend opposite to the shock radius (smaller shock radii lead to higher magnitudes of the post-shock velocity and vice versa), see Figs. \[fig:simulations\] and \[fig:s40\_Properties\]. Therefore, the SASI frequency tends to decrease during phases of shock expansion and to increase during periods of shock retraction. Thus, in Fig. \[fig:s40\_Spec\], one can clearly see that $f_{\mathrm{SASI}}$ tracks the shock contraction and expansion preceding the onset of BH formation.
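The displayed estimate for $f_{\mathrm{SASI}}$ can be evaluated directly from angle-averaged radial profiles of the post-shock flow. A minimal sketch is given below; the profile arrays are assumed to be available from the simulation output, and the variable names are placeholders.

```python
import numpy as np

def sasi_frequency(r, v, cs, R_ns, R_shock):
    """Estimate the SASI frequency from the advective-acoustic cycle time.

    r, v, cs : radial grid [cm], accretion velocity and sound speed [cm/s]
               in the (subsonic) post-shock layer, assumed given as arrays.
    """
    mask = (r >= R_ns) & (r <= R_shock)
    r_, v_, cs_ = r[mask], np.abs(v[mask]), cs[mask]
    t_advection = np.trapz(1.0 / v_, r_)           # downward advection time
    t_acoustic  = np.trapz(1.0 / (cs_ - v_), r_)   # upward acoustic time
    return 1.0 / (t_advection + t_acoustic)        # inverse cycle duration
```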
Notably, even the spectrogram of the signal along Direction 2 (i.e., along one of the least optimal directions for observing the modulations in the first sub-interval of the long SASI episode) shows clear signs of the evolution of the SASI frequency in time. The less prominent, but still traceable hot region in the first sub-interval, $[160, 420]$ ms, lines up perfectly with the brighter hot region in the second sub-interval, $[420, 500]$ ms, as it does along Directions 1 and 3. Also, along Direction 3, the colored regions line up at the boundary between the two sub-intervals ($t_{\mathrm{p.b.}} \simeq 420$ ms), indicating that the hot region in $[420, 500]$ ms marks a continuation of the SASI episode along a slightly different direction.
![image](Fig13.pdf){width="1.5\columnwidth"}
Figure \[fig:u75\_Spec\] gives the spectrograms of the IceCube event rate for the $75\ M_\odot$ model along the two observer directions chosen in Sec. \[sec:Directional\_SASI\]. As expected, the hot red region corresponding to the SASI frequency in the interval $[175, 230]$ ms nearly disappears between the Strong (left) and Weak (right) Modulation directions. The frequency of the quadrupolar SASI modulations in the IceCube event rate in the interval $[140, 175]$ ms is represented by a hot red region, visible as expected in both spectrograms due to its directional independence. The left-hand panel shows that the SASI dipole frequency is clearly lower than the frequency of the SASI quadrupolar motion. In fact, as visible from Fig. \[fig:simulations\], for the $75\ M_\odot$ model, the difference $R_{\mathrm{s}} - R_{\mathrm{NS}}$ grows as a function of time as $R_{\mathrm{s}}$ expands and $R_{\mathrm{NS}}$ contracts, and this is responsible for a drop of the SASI frequency from the quadrupolar to the dipolar phase. At the transition between the quadrupolar and the dipolar phase, the shock radius shows a slight expansion followed by a contraction (between $130$ and $160$ ms just before the approach of the Si/O interface) that is also tracked by the drop of the SASI frequency in the same time interval.
![image](Fig14.pdf){width="2.\columnwidth"}
Figure \[fig:PS\_s40\] shows the Fourier power spectra of the IceCube event rate for the $40\ M_\odot$ (left) and the $75\ M_\odot$ model (right), normalized to the power of the detector background noise. On the left hand side, the power spectrum for the $40\ M_\odot$ model has been computed in the interval of the first SASI episode, $[160,500]$ ms, along each of the three observer directions selected as in Sec. \[sec:Directional\_SASI\]. Two different frequency peaks can be clearly identified, one at $\sim 110$ Hz and the other one at $\sim 130$ Hz, corresponding to the SASI frequency in $[160,420]$ ms and $[420,500]$ ms sub-intervals, respectively. Thus, there is an increase in frequency of about $20$ Hz as the shock radius retracts. However, this feature will only be detectable along directions where all SASI peaks rise above the power of the shot noise in IceCube.
The right panel of Fig. \[fig:PS\_s40\] shows the Fourier power spectrum of the $75\ M_\odot$ model. Along the Strong Modulation direction, two peaks can be clearly identified; one corresponding to the dipolar SASI frequency at $\sim 80$ Hz, and one corresponding to the frequency of the quadrupolar SASI at $\sim 160$ Hz, exactly twice the former. As expected, the peak of the dipole SASI frequency disappears along the Weak Modulations direction.
![image](Fig15.pdf){width="2.\columnwidth"}
Figure \[fig:PS\_sigma\] presents the detection significance of the SASI peak in the Fourier spectrum of the IceCube, Super-Kamiokande, and DUNE event rates for the $40\ M_\odot$ and the $75\ M_\odot$ models as a function of the SN distance. For each model, we choose the direction yielding the highest number of total events, i.e., Direction 1 for the $40\ M_\odot$ model, and the Strong Modulations direction for the $75\ M_\odot$ model. The band is found by considering the two scenarios of absence of flavor conversions or full flavor conversion, as shown in Fig. \[fig:IC\_Event\_Rate\]. In IceCube, the SASI peak will be distinguishable over the detector noise for distances up to 35 kpc for the $40\ M_\odot$ model, and $\sim$ 22 kpc for the $75\ M_\odot$ model, while it will be detectable in DUNE and Super-Kamiokande only up to distances of $\mathcal{O}(10-20)$ kpc.
In conclusion, due to the higher neutrino event statistics, the Fourier power spectrum of the IceCube event rate will be able to determine the SASI frequency at greater distances than DUNE and Super-Kamiokande. However, due to the absence of background in DUNE and Super-Kamiokande, these detectors will be able to detect the neutrino burst from a BH-forming stellar collapse event up to impressively large distances.
Conclusions {#sec:Conclusions}
===========
Intriguingly, very little is known about the properties of black hole (BH) forming stellar collapse events. In this work, we have made a first attempt to identify detectable characteristics of the neutrino signal unique to BH formation, by exploring the neutrino emission properties of the 3D hydrodynamical simulations of two BH-forming progenitors with different masses ($40\ M_\odot$ and $75\ M_\odot$) and metallicities.
The two models have different BH formation timescales ($\simeq 570$ ms for the $40\ M_\odot$ model and $250$ ms for the $75\ M_\odot$ model). Interestingly, while the $75\ M_\odot$ model exhibits a shock expansion before the collapse into a BH, similarly to what was found in [@Chan:2017tdg; @Kuroda:2018gqq; @Pan:2017tpk], the shock radius evolves towards a quasi-stationary value in the $40\ M_\odot$ model. The extremely high accretion rate buries the proto-neutron star convection layer deep inside the forming compact object. The luminosity is dominated by the accretion luminosity, and the lepton-number emission self-sustained asymmetry (LESA) is absent in both models.
Extremely strong and long-lasting SASI episodes occur in both models. Since the neutrino event statistics for BH-forming stellar collapse are expected to be one order of magnitude higher than those of ordinary core-collapse supernovae, the SASI frequency will be detectable by the IceCube Neutrino Telescope for BH formation occurring up to distances of 35 kpc ($\sim$ 22 kpc) for the $40\ M_\odot$ ($75\ M_\odot$) model. For Super-Kamiokande and DUNE, the corresponding detection prospects are limited to our own Galaxy.
Notably, given the long-lasting SASI, the evolution of the SASI frequency will be clearly visible, e.g., in the spectrogram of the IceCube event rate. Moreover, SASI imprints will be detectable in neutrinos even for observers located away from the SASI plane because of the strong SASI activity.
The two BH-forming progenitor models explored throughout this work illustrate the phenomenal power of using neutrinos to study the physical processes involved in BH formation. The excellent detectability prospects in neutrinos for these still mysterious astrophysical events have the potential to unveil the inner workings of collapsing massive stars.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to Tobias Melson for providing access to the data set adopted in this paper. This project was supported by the Villum Foundation (Project No. 13164), the Danmarks Frie Forskningsfonds (Project No. 8049-00038B), the Knud Højgaard Foundation, the European Research Council through grant ERC-AdG No.341157-COCO2CASA, the Deutsche Forschungsgemeinschaft through Sonderforschungbereich SFB 1258 “Neutrinos and Dark Matter in Astro- and Particle Physics” (NDM), and the Excellence Cluster “ORIGINS: From the Origin of the Universe to the First Building Blocks of Life” (EXC 2094). The model calculations were performed on SuperMUC at the Leibniz Supercomputing Centre with resources granted by the Gauss Centre for Supercomputing (LRZ project ID: pr74de).
---
abstract: 'The distinguished names in the title have to do with different proofs of the celebrated Soap Bubble Theorem and of radial symmetry in certain overdetermined boundary value problems. We shall give an overview of those results and indicate some of their ramifications. We will also show how more recent proofs uncover the path to some stability results for the relevant problems.'
address: 'Dipartimento di Matematica ed Informatica “U. Dini”, Universit\`a di Firenze, viale Morgagni 67/A, 50134 Firenze, Italy.'
author:
- Rolando Magnanini
title: |
Alexandrov, Serrin, Weinberger, Reilly:\
symmetry and stability by integral identities
---
\[thm\][Proposition]{} \[thm\][Lemma]{} \[thm\][Corollary]{} \[thm\][Remark]{}
Introduction
============
In this short survey, the author wishes to give an overview of some results related to Alexandrov’s Soap Bubble Theorem and Serrin’s symmetry result for overdetermined boundary value problems. The presentation will be as untechnical as possible: we shall give no rigorous proofs — but indicate the relevant references to them — preferring to focus on ideas and their mutual connections. As the title hints, we will mainly concentrate on the method of integral identities.
We will start by presenting the various proofs of the two results, then we shall explain how they benefit from one another, and hence examine their relationship to other areas in mathematical analysis. We will finally report on some recent stability results that detail quantitatively how close to the spherical configuration the solution is, if the relevant data are perturbed.
Alexandrov’s Soap Bubble Theorem and reflection principle
=========================================================
Alexandrov’s Soap Bubble theorem dates back to $1958$ and states:
\[th:SBT\] A compact hypersurface, embedded in $\mathbb{R}^N$, that has constant mean curvature must be a sphere.
The [*mean curvature*]{} $H$ of a hypersurface ${{\mathcal S}}$ of class $C^2$ at a given point on ${{\mathcal S}}$ is the arithmetic mean of its [*principal curvatures*]{} at that point (see [@Re]).
To prove Theorem \[th:SBT\], A. D. Alexandrov introduced what is now known as [*Alexandrov’s reflection principle*]{} (see [@Al1],[@Al2]).
The underlying idea behind Alexandrov’s proof is simple: a compact hypersurface ${{\mathcal S}}$ is a sphere if and only if it is mirror-symmetric in any fixed direction, that is, for any direction ${\theta}\in\mathbb{S}^{N-1}$, there is a hyperplane $\pi_{\theta}$ orthogonal to ${\theta}$ such that ${{\mathcal S}}$ is symmetric in $\pi_{\theta}$. The technical tools to carry out that idea pertain to the theory of elliptic partial differential equations. To understand why, we give a sketch of Alexandrov’s elegant proof.
Let the mean curvature $H$ be constant and suppose by contradiction that ${{\mathcal S}}$ is not symmetric in the direction ${\theta}$ (by a rotation, we can always suppose that ${\theta}$ is the upward vertical direction). Then there exists a hyperplane $\pi_{\theta}$ such that at least one of the following occurrences comes about:
(i) the reflection ${{\mathcal S}}'$ in $\pi_{\theta}$ of the portion of ${{\mathcal S}}$ that stays below $\pi_{\theta}$, touches ${{\mathcal S}}$ internally at some point $p\in{{\mathcal S}}\setminus\pi_{\theta}$;
(ii) $\pi_{\theta}$ is orthogonal to ${{\mathcal S}}$ at some point $p\in{{\mathcal S}}\cap\pi_{\theta}$.
In both cases, around $p$ we can locally write ${{\mathcal S}}$ and ${{\mathcal S}}'$ as graphs of two real-valued functions $u$ and $u'$ of $N-1$ variables. If (i) holds, $u$ and $u'$ can be defined on an $(N-1)$-dimensional ball centered at $p$; if (ii) holds instead, $u$ and $u'$ can be defined on an $(N-1)$-dimensional [*half-ball*]{} centered at $p$, and $p$ belongs to the flat portion of its boundary. In any case, we have that $u'\le u$ since ${{\mathcal S}}'$ stays below ${{\mathcal S}}$, and also $u(p)=u'(p)$ and ${\nabla}u'(p)={\nabla}u(p)$.
A partial differential equation now comes about, since both $u$ and $u'$ satisfy the elliptic equation $$\frac1{N-1}\,{\mathop{\mathrm{div}}}\left(\frac{{\nabla}v}{\sqrt{1+|{\nabla}v|^2}}\right)=H,$$ with $H$ constant, the left-hand side being the formula for the mean curvature of the graph of $v$. A contradiction then occurs because the solutions of that equation satisfy the [*strong comparison principle*]{} and [*Hopf’s comparison lemma*]{}. In fact, in case (i), by the strong comparison principle, it should be $u'<u$, whereas we know that $u'(p)=u(p)$; in case (ii), $p$ is on the boundary of the half-ball (the flat part) and hence, by Hopf’s lemma, it should be that $u_{\theta}'(p)>u_{\theta}(p)$, ${\theta}$ being the normal to the flat part of the boundary of the half-ball. That gives the desired contradiction, since we know that ${\nabla}u'(p)={\nabla}u(p)$.
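As a quick sanity check of the graph formula just quoted, one may verify symbolically that a spherical cap has constant mean curvature. The sketch below, for $N=3$ and using sympy, is purely illustrative; the sign of the result depends on the orientation convention.

```python
import sympy as sp

# Check of the graph mean-curvature formula for N = 3 (a surface in R^3):
# for the upper hemisphere of radius R, v(x, y) = sqrt(R^2 - x^2 - y^2),
# the quantity div(grad v / sqrt(1 + |grad v|^2)) / (N - 1) is constant.
x, y, R = sp.symbols('x y R', positive=True)
v = sp.sqrt(R**2 - x**2 - y**2)
grad = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])
W = sp.sqrt(1 + grad.dot(grad))
div = sp.diff(grad[0] / W, x) + sp.diff(grad[1] / W, y)
H = sp.simplify(div / 2)          # divide by N - 1 = 2
print(H)                          # simplifies to -1/R, a constant
```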
The reflection principle is quite flexible, since its application can be extended to other geometrical settings, such as that of [*Weingarten’s surfaces*]{}, considered by Alexandrov himself.
Serrin’s symmetry result and the method of moving planes
========================================================
Serrin’s symmetry result has to do with certain overdetermined problems for elliptic or parabolic partial differential equations. In its simplest formulation, it concerns a function $u\in C^1({\overline}{{\Omega}})\cap C^2({\Omega})$ satisfying the constraints: $$\begin{aligned}
\label{serrin1}
&{\Delta}u=N \ \mbox{ in } \ {\Omega}, \quad u=0 \ \mbox{ on } \ {\Gamma}, \\
\label{serrin2}
&u_\nu=R \ \mbox{ on } \ {\Gamma}.\end{aligned}$$ Here, ${\Omega}\subset{\mathbb{R}}^N$, $N\ge 2$, is a bounded domain with sufficiently smooth, say $C^2$, boundary ${\Gamma}$, $u_\nu$ is the outward normal derivative of $u$ on ${\Gamma}$, and $R$ is a positive constant.
Since the Dirichlet problem already admits a [*unique*]{} solution, the additional requirement makes the problem [*overdetermined*]{} and - may not admit a solution in general. Thus, the remaining data of the problem — the domain ${\Omega}$ — cannot be given arbitrarily.
In fact, Serrin’s celebrated symmetry result states:
\[th:Serrin\] The problem - admits a solution $u\in C^1({\overline}{{\Omega}})\cap C^2({\Omega})$ if and only if, up to translations, ${\Omega}$ is a ball of radius $R$ and $u(x)=(|x|^2-R^2)/2$.
This result inaugurated a new and fruitful field in mathematical research at the confluence of Analysis and Geometry, that has many applications to other areas of mathematics and natural sciences. To be sure, that same result was actually motivated by two concrete problems in Mathematical Physics regarding the torsion of a straight solid bar and the tangential stress of a fluid on the walls of a rectilinear pipe.
The proof given by Serrin in [@Se] extends and refines the idea of Alexandrov. In Serrin’s setting, the good news is that the hypersurface ${{\mathcal S}}\subset{\mathbb{R}}^{N+1}$ to be considered is already the graph of a function on ${\Omega}$; however, the bad news is that ${{\mathcal S}}$ has now a [*non-empty boundary*]{}. Moreover, the expected spherical symmetry concerns the base-domain ${\Omega}$, rather than the hypersurface ${{\mathcal S}}$: as a matter of fact, Serrin’s statement claims that ${{\mathcal S}}$ has to be (a portion of) a (spherical) [*paraboloid*]{} and not a sphere.
In his 1971 proof, J. Serrin brilliantly adapted the reflection principle, by only considering the reflecting hyperplanes $\pi_{\theta}$ orthogonal to [*horizontal*]{} directions ${\theta}$. The critical occurrences (i) and (ii) then take place in a rather modified fashion: in both cases the point $p$ belongs to ${\partial}{{\mathcal S}}={\overline}{{{\mathcal S}}}\cap({\partial}{\Omega}\times{\mathbb{R}})$ and is not a relatively internal point in ${{\mathcal S}}$. Thus, the strong comparison principle is ruled out. Nevertheless, in case (i), Hopf’s lemma can still be applied, giving the desired contradiction.
Even so, there is an additional difficulty that one has to deal with in case (ii): Hopf’s comparison lemma can no longer be applied. This is due to the fact that $p$ is not only in ${\partial}{{\mathcal S}}$, but its projection ${\overline}{p}$ onto ${\overline}{{\Omega}}$ is placed at a corner on the boundary of the projection ${\Omega}'$ of ${{\mathcal S}}'$ onto ${\overline}{{\Omega}}$ — ${\Omega}'$ being the domain of the possible application of Hopf’s lemma.
To circumvent this obstacle, Serrin established what is now known as [*Serrin’s corner lemma*]{} and concerns the first and second derivatives at ${\overline}{p}$ of $u'$ and $u$ in the directions $\ell$ entering ${\Omega}'$ from ${\overline}{p}$: it must hold that either $u'_\ell({\overline}{p})<u_\ell({\overline}{p})$ or $u'_{\ell\ell}({\overline}{p})<u_{\ell\ell}({\overline}{p})$ for some $\ell$. After further calculations, this lemma provides the desired contradiction.
This modification of Alexandrov’s reflection principle is what is now called the [*method of moving planes*]{}. The method is very general since, as pointed out by Serrin himself, it applies at least to elliptic equations of the form $$a(u,|{\nabla}u|)\,{\Delta}u+h(u,|{\nabla}u|)\,{\langle}{\nabla}^2 u{\nabla}u,{\nabla}u{\rangle}=f(u,|{\nabla}u|),$$ provided some sufficient conditions are satisfied by the coefficients $a, h$, and $f$ (see [@Se] for details) and, more importantly, under the assumption that [*non-positive solutions*]{} are considered (solutions of are automatically negative by the strong maximum principle). Further extensions have also been given over the years by many authors.
Weinberger’s proof of Serrin’s result {#sec:weinberger}
=====================================
In the same issue of the journal in which [@Se] is published, H. F. Weinberger [@We] gave a different proof of Theorem \[th:Serrin\], based on integration by parts and the Cauchy-Schwarz inequality.
Weinberger’s proof profits from the fact that the so-called [*P-function*]{} associated to , defined by $$\label{def-P}
P=\frac12\,|{\nabla}u|^2-u,$$ is sub-harmonic in ${\Omega}$, since $$\label{delta-P}
{\Delta}P=|{\nabla}^2 u|^2-\frac1{N}\, ({\Delta}u)^2\ge 0,$$ by the Cauchy-Schwarz inequality applied, for instance, to the two $N^2$-dimensional vectors formed, respectively, by the entries of the identity matrix and those of the [*hessian matrix*]{} ${\nabla}^2 u$. Since $P=R^2/2$ on ${\Gamma}$, then either $P\equiv R^2/2$ or $P<R^2/2$ on ${\Omega}$, by the strong maximum principle. However, the latter occurrence is ruled out by directly calculating that $$\label{P-Weinberger}
\int_{\Omega}(R^2/2-P)\,dx=0.$$ This formula follows by applying the divergence theorem and integration by parts formulas in various forms. Indeed, from and , we have: $$\label{3-identities}
\begin{array}{c}
{\displaystyle}\int_{\Omega}P\,dx=\left(\frac12+\frac{1}{N}\right)\,\int_{\Omega}|{\nabla}u|^2 dx; \\
{\displaystyle}R\,|{\Gamma}|=\int_{\Gamma}u_\nu\,dS_x=N\,|{\Omega}|; \quad
(N+2) \int_{{\Omega}} |{\nabla}u|^2 \, dx =\int_{\Gamma}u_\nu^2 \,(x\cdot\nu)\,dS_x.
\end{array}$$ The second formula sets the correct value for $R$; the third one is a consequence of the well-known [*Rellich-Pohozaev identity*]{} ([@Po]).
Thus, it must hold that $P\equiv R^2/2$ on ${\overline}{{\Omega}}$, which implies that ${\Delta}P\equiv 0$ in ${\Omega}$. This means, in turn, that the inequality in holds with the sign of equality, that is the hessian matrix ${\nabla}^2 u$ is proportional to the identity matrix. Then, it is easy to show that $u$ must equal a [*quadratic polynomial*]{} of the form $$q(x)=\frac12\,(|x-z|^2-a),$$ for some $z\in{\mathbb{R}}^N$ and $a\in{\mathbb{R}}$. Since $u=0$ on ${\Gamma}$, we can compute that $a=R^2$, and this implies that ${\Gamma}$ is a sphere centered at $z$ and with radius $R$.
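The endgame of Weinberger’s argument can be checked symbolically on the radial solution: for $u(x)=(|x|^2-R^2)/2$ in a ball of radius $R$ centered at the origin, the P-function is identically $R^2/2$ and equality holds in the Cauchy-Schwarz step. A short illustrative verification (here for $N=3$, using sympy) follows.

```python
import sympy as sp

# Radial solution in the ball: u = (|x|^2 - R^2)/2 solves Delta u = N,
# P = |grad u|^2/2 - u is identically R^2/2, and |Hess u|^2 - (Delta u)^2/N = 0.
N = 3
X = sp.symbols('x1:%d' % (N + 1), real=True)
R = sp.Symbol('R', positive=True)
u = (sum(xi**2 for xi in X) - R**2) / 2
grad = [sp.diff(u, xi) for xi in X]
P = sum(g**2 for g in grad) / 2 - u
hess = sp.Matrix(N, N, lambda i, j: sp.diff(u, X[i], X[j]))
lap = sum(sp.diff(u, xi, 2) for xi in X)
print(sp.simplify(P))                                    # R**2/2
print(sp.simplify(sum(hess[i, j]**2 for i in range(N)
                      for j in range(N)) - lap**2 / N))  # 0
```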
Weinberger’s argument is very elegant, but so far it is known to work only for the simple setting - or some restricted extensions of it. In particular, it is not known to work if we replace $N$ in by a non-constant function of $u$.
Reilly’s proof of the Soap Bubble Theorem
=========================================
In 1982, R. C. Reilly found a proof of the Soap Bubble Theorem that bears a resemblance to Weinberger’s argument.
The key idea is to regard the hypersurface ${{\mathcal S}}$ as the zero-level surface of the solution $u$ of , that is ${{\mathcal S}}={\Gamma}$, and to observe that $$\label{Reilly}
{\Delta}u =u_{\nu\nu}+(N-1)\,H\,u_\nu \ \mbox{ on } \ {\Gamma}.$$ In this formula, that holds on any [*regular*]{} level surface of a function $u\in C^2({\overline}{{\Omega}})$, we agree to still denote by $\nu$ the vector field ${\nabla}u/|{\nabla}u|$ (that indeed coincides on ${\Gamma}$ with the unit normal field).
As in Section \[sec:weinberger\], the radial symmetry of ${\Omega}$ is obtained by showing that ${\Delta}P\equiv 0$ in ${\Omega}$. In fact, if $H\equiv H_0$ on ${\Gamma}$ for some constant $H_0$, one can show that $$\int_{\Omega}{\Delta}P\,dx \le 0.$$ This inequality follows by using the divergence theorem and by applying a set of formulas similar to : $$\label{3-identities-SBT}
\begin{array}{c}
{\displaystyle}\int_{\Gamma}P_\nu\,dS_x
=N\,|{\Omega}|-\int_{\Gamma}H\,u_\nu^2\,dS_x; \\
{\displaystyle}N\,|{\Omega}| H_0=|{\Gamma}|; \quad
\bigl(N\,|{\Omega}|\bigr)^2\le |{\Gamma}|\,\int_{\Gamma}u_\nu^2\,dS_x.
\end{array}$$ In the first identity, we use and the divergence theorem. The second formula sets the correct value for the constant $H_0$ and is a consequence of [*Minkowski’s identity*]{}, $$\int_{\Gamma}H\,(x\cdot\nu)\,dS_x=|{\Gamma}|,$$ a well-known result in differential geometry (see [@Re] for an elementary proof). The last inequality is clearly an application of Hölder’s inequality.
Notice that Reilly’s argument leaves open the possibility to extend the Soap Bubble Theorem to more general regularity settings, provided a weaker definition of mean curvature is at hand.
Extensions of Reilly’s ideas to the case of the [*symmetric invariants*]{} of the principal curvatures of ${\Gamma}$ can be found in [@Ro], where a proof of [*Heintze-Karcher inequality*]{}, $$\int_{\Gamma}\frac{dS_x}{H}\ge N|{\Omega}|,$$ is also given, in the same spirit.
The isoperimetric inequality for the torsional rigidity {#sec:torsion}
=======================================================
As an interlude, we present a connection of Serrin’s problem - to a classical result in shape optimization. In fact, as also referred to in [@Se], the solution of has to do with an important quantity in elasticity: the so-called [*torsional rigidity*]{} $\tau({\Omega})$ of a bar of cross-section ${\Omega}$ (see [@So pp. 109-119]) that, with the necessary normalizations, can be defined as the maximum of the quotient $$Q(v)=\frac{\left(N\,\int_{\Omega}v\,dx\right)^2}{\int_{\Omega}|{\nabla}v|^2\,dx},$$ among all the non-zero functions $v$ in the Sobolev space $W^{1,2}_0({\Omega})$. In fact, it turns out that $$\label{torsional-rigidity}
\tau({\Omega})=Q(u)=\int_{\Omega}|{\nabla}u|^2\,dx=-N\,\int_{\Omega}u\,dx.$$
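As a simple consistency check of the last two equalities, one can compute both integrals on the ball $B_R$ in ${\mathbb{R}}^3$, where $u(x)=(|x|^2-R^2)/2$; the following illustrative computation gives $\tau(B_R)=4\pi R^5/5$ either way.

```python
import sympy as sp

# On the ball B_R in R^3, u = (|x|^2 - R^2)/2 and dx = 4*pi*r^2 dr in radial form;
# the two expressions for the torsional rigidity coincide.
r, R = sp.symbols('r R', positive=True)
u = (r**2 - R**2) / 2
tau_1 = sp.integrate(r**2 * 4 * sp.pi * r**2, (r, 0, R))    # int |grad u|^2 dx
tau_2 = -3 * sp.integrate(u * 4 * sp.pi * r**2, (r, 0, R))  # -N int u dx
print(tau_1, tau_2)   # both equal 4*pi*R**5/5
```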
The following statement is known as the [*Saint Venant’s Principle*]{}:
> *Among sets having given volume the ball maximizes $\tau({\Omega})$.*
One proof of this principle hinges on rearrangement techniques (see [@PSz]).
Here, we shall give an account of the relationship between - and the Saint Venant Principle. In fact, once the existence of a maximizing set ${\Omega}_0$ is established, one can show that the solution of in ${\Omega}_0$ also satisfies with ${\Gamma}={\partial}{\Omega}_0$.
One way to see that is to introduce the technique of [*shape derivative*]{}. That consists in hunting for the optimal domain within a one-parameter family $\{{\Omega}_t\}_{t\in{\mathbb{R}}}$ of domains that evolve according to a prescribed rule. Thus, if we agree that ${\Omega}_0$ is the domain that maximizes $\tau({\Omega}_t)$ among all the domains in the family that have prescribed volume $|{\Omega}_t|=V$, then the [*method of Lagrange multipliers*]{} informs us that there is a number ${\lambda}$ such that $$T(t)-{\lambda}\,[V-V(t)]\le T(0) \ \mbox{ for any } t\in{\mathbb{R}},$$ and hence $$\label{Lagrange}
T'(0)+{\lambda}\,V'(0)=0,$$ where we mean that $T(t)=\tau({\Omega}_t)$ and $V(t)=|{\Omega}_t|$.
A convenient way to construct the evolution of the domains ${\Omega}_t$ of the family is to let each of them be the image ${\mathcal{M}}_t({\Omega})$ of a fixed domain ${\Omega}={\Omega}_0$ by a mapping ${\mathcal{M}}_t:{\mathbb{R}}^N\to{\mathbb{R}}^N$ belonging to a family such that: $$\label{mapping}
{\mathcal{M}}_0(x)=x, \quad {\mathcal{M}}'_0(x)={{\mathcal R}}(x),$$ where the “prime” means differentiation with respect to $t$.
Thus, we can consider the solution $u=u(t,x)$ of in ${\Omega}={\Omega}_t$ and obtain: $$T(t)=-N\,\int_{{\Omega}_t} u(t,x)\,dx \quad \mbox{ and } \quad V(t)=\int_{{\Omega}_t} dx.$$ The derivatives of $T$ and $V$ can be computed by applying the theory of shape derivatives, stemmed from [*Hadamard’s variational formula*]{} (see [@HP Chapter 5]). In fact, by a theorem of J. Liouville, we have that $$T'(0)=-N\,\int_{{\Omega}_0} u'(x)\,dx-N\,\int_{{\Gamma}_0} u(x)\,{{\mathcal R}}(x)\cdot\nu(x)\,dS_x$$ and $$\label{der-V}
V'(0)=\int_{{\Gamma}_0} {{\mathcal R}}(x)\cdot\nu(x)\,dS_x,$$ where ${\Gamma}_0={\partial}{\Omega}_0$ and we have set $u(x)=u(0,x)$ and denoted by $u'(x)$ the derivative of $u(t,x)$ with respect to $t$, evaluated at $t=0$. Moreover, $u'$ turns out to be the solution of the Dirichlet problem $${\Delta}u'=0 \ \mbox{ in } \ {\Omega}_0, \quad u'={\nabla}u\cdot{{\mathcal R}}\ \mbox{ on } \ {\Gamma}_0.$$ Also, since $u=0$ on ${\Gamma}_0$ and $u'$ is harmonic in ${\Omega}_0$, we calculate that $$\label{der-T}
T'(0)=-N\,\int_{{\Omega}_0} u'\,dx=-\int_{{\Omega}_0} u'\,{\Delta}u\,dx=-\int_{{\Gamma}_0} u'\,u_\nu\,dS_x,$$ after an application of Gauss-Green’s formula.
Next, we choose $${{\mathcal R}}(x)=\phi(x)\,\nu(x),$$ where $\phi$ is any compactly supported continuous function and $\nu$ is a proper extension of the unit normal vector field to a tubular neighborhood of ${\Gamma}$ (for instance the choice $\nu(x)={\nabla}{\delta}_{\Gamma}(x)$, where ${\delta}_{\Gamma}(x)$ is the distance of $x$ from ${\Gamma}$, will do).
Therefore, by this choice of ${{\mathcal R}}$, putting together , , and gives that $$\int_{{\Gamma}_0} (u_\nu^2-{\lambda})\,\phi\,dS_x=0.$$ Since $\phi$ is arbitrary, we infer that $u_\nu^2\equiv{\lambda}$ on ${\Gamma}_0$ and we compute ${\lambda}=R^2$.
Theorem \[th:Serrin\] thus confirms Saint Venant’s principle. A sufficient regularity assumption that guarantees that this argument goes through is that ${\Gamma}_0$ is locally the graph of a differentiable function with Hölder continuous derivatives.
Dual formulation and quadrature domains
=======================================
In [@PS], the following characterization is proved.
\[th:Payne-Schaefer\] A function $u\in C^1({\overline}{{\Omega}})\cap C^2({\Omega})$ is a solution of - if and only if the following mean value property $$\label{dual-formulation}
\frac1{|{\Omega}|}\int_{\Omega}h\,dx=\frac1{|{\Gamma}|}\int_{\Gamma}h\,dS_x$$ holds for any harmonic function $h\in C^0({\overline}{{\Omega}})\cap C^2({\Omega})$.
The proof is a straightforward consequence of Gauss-Green’s formula for laplacians.
A domain ${\Omega}$ such that holds for any harmonic function is named a [*harmonic domain*]{} (see [@RS]). Thus, the following corollary ensues.
\[cor:PS\] The euclidean ball is the only bounded harmonic domain in ${\mathbb{R}}^N$.
Theorem \[th:Payne-Schaefer\] and Corollary \[cor:PS\] are due to L. Payne and P. W. Schaefer [@PS], who also provide a proof of Theorem \[th:Serrin\] that modifies Weinberger’s argument and gets rid of the use of the maximum principle for $P$.
In [@PS], is regarded as a dual formulation of the overdetermined problem -, since it entails a linear functional, $${{\mathcal H}}({\Omega})\ni h\mapsto L(h)=\frac1{|{\Omega}|}\int_{\Omega}h\,dx-\frac1{|{\Gamma}|}\int_{\Gamma}h\,dS_x,$$ defined on the set ${{\mathcal H}}({\Omega})$ of functions in $C^0({\overline}{{\Omega}})\cap C^2({\Omega})$ that are harmonic in ${\Omega}$.
The identity recalls the well-known Gauss [*mean value theorems*]{} for harmonic functions: if ${\Omega}$ is a ball and $p$ its center, then $$h(p)=\frac1{|{\Omega}|}\int_{\Omega}h\,dx \ \mbox{ and } \ h(p)=\frac1{|{\Gamma}|}\int_{\Gamma}h\,dS_x$$ for any harmonic function $h\in C^0({\overline}{{\Omega}})\cap C^2({\Omega})$. It is interesting to remark that each mean value property characterizes the ball (or the sphere) with respect to the class of harmonic functions, as does (see [@PS], [@Ku]).
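The “if” direction of Corollary \[cor:PS\] can be illustrated numerically on the unit disk: for a harmonic polynomial, the solid average over ${\Omega}$ and the surface average over ${\Gamma}$ agree with the value at the center, in accordance with the Gauss mean value theorems recalled above. The sketch below is a crude Monte Carlo check, with illustrative choices of test function and sample sizes.

```python
import numpy as np

# Numerical check of the mean value property on the unit disk: for the
# harmonic function h(x, y) = x^2 - y^2 + 3x + 2 both averages equal h(0, 0) = 2.
rng = np.random.default_rng(0)
h = lambda x, y: x**2 - y**2 + 3*x + 2

# solid average over Omega: rejection sampling in the unit disk
pts = rng.uniform(-1, 1, size=(2, 1_000_000))
inside = pts[0]**2 + pts[1]**2 <= 1.0
solid_avg = h(pts[0, inside], pts[1, inside]).mean()

# surface average over Gamma
theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
boundary_avg = h(np.cos(theta), np.sin(theta)).mean()

print(solid_avg, boundary_avg)   # both close to 2
```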
The dual formulation can also be connected to the theory of [*quadrature domains*]{} introduced by D. Aharonov and B. Gustafsson in the 1970’s (see [@AS], [@GS]). A bounded domain ${\Omega}$ in the complex plane ${\mathbb{C}}$ is a (classical) quadrature domain if there exist finitely many points $p_1,\dots, p_m\in{\Omega}$ and coefficients $c_{jk}\in{\mathbb{C}}$ so that $$\label{quadrature}
\int_{\Omega}f(z)\,dx dy=\sum_{j=1}^m \sum_{k=0}^{n_j} c_{jk} f^{(k)}(p_j)$$ for any [*holomorphic function*]{} $f$ in ${\Omega}$; here, $f^{(0)}=f$ and $f^{(k)}$ denotes the $k$-th derivative of $f$.
Formula is called a [*quadrature identity*]{}. For instance, the ball centered at $p$ is a quadrature domain that corresponds to $m=1$, $n_1=0$ and $p_1=p$. The term “quadrature” thus refers to the fact that, in a quadrature domain ${\Omega}$, provides an exact [*quadrature formula*]{} to compute the integral on the left-hand side. A remarkable fact is that quadrature domains have applications to problems in Mathematical Physics such as the Hele-Shaw problem in fluid dynamics and other free-boundary and/or inverse problems (see [@GS] and the references therein).
The notion of quadrature domain can be extended in two ways. One can replace the finite sum in by the integral $$\int f(z)\,d\mu,$$ where $\mu$ is some [*signed measure*]{} (e.g., in $\mu$ would be the linear combination of Dirac deltas and their derivatives at given points). Moreover, one can establish a generalization to higher-dimensional domains, by replacing holomorphic functions by harmonic functions — in this case, the domain is often called a [*harmonic quadrature domain*]{}. Therefore, a harmonic domain — that is satisfying for any $h\in{{\mathcal H}}({\Omega})$ — is a harmonic quadrature domain relative to (a multiple of) the [*surface measure*]{} on ${\Gamma}$.
Fundamental identities
======================
In this section, we will show that Weinberger’s and Reilly’s proofs can be further refined to encode all the information given in Serrin’s and Alexandrov’s problems into two identities. In the following two results, we use the definitions: $$\label{def-R-H0}
R=\frac{N |{\Omega}|}{|{\Gamma}|}, \quad H_0=\frac{|{\Gamma}|}{N |{\Omega}|} \ \mbox{ and } \ q(x)=\frac12\,|x-z|^2-a, \ x\in{\mathbb{R}}^N,$$ where $z\in{\mathbb{R}}^N$ and $a\in{\mathbb{R}}$ are given parameters.
\[th:wps\] Let ${\Omega}\subset \mathbb R^N$ be a bounded domain with boundary ${\Gamma}$ of class $C^{1,{\alpha}}$, $0<{\alpha}\le 1$.
Then, the solution $u$ of satisfies identity: $$\label{idwps}
\int_{{\Omega}} (-u)\,\left\{ |{\nabla}^2 u|^2- \frac{ ({\Delta}u)^2}{N} \right\}\,dx=
\frac{1}{2}\,\int_{\Gamma}\left( u_\nu^2 - R^2 \right) \,(u_\nu-q_\nu)\,dS_x.$$
The identity is announced in [@MP1] and proved in [@MP2]. Its proof is obtained by polishing the arguments in [@We] and [@PS] and juggling around with integration by parts.
Let ${\Omega}$ be a bounded domain with boundary ${\Gamma}$ of class $C^2$ and let $u$ be the solution of .
Then, it holds true that $$\begin{gathered}
\label{identity-SBT}
\frac1{N-1}\int_{{\Omega}} \left\{ |{\nabla}^2 u|^2-\frac{({\Delta}u)^2}{N}\right\}dx+
\frac1{R}\,\int_{\Gamma}(u_\nu-R)^2 dS_x = \\
\int_{{\Gamma}}(H_0-H)\,(u_\nu-q_\nu)\,u_\nu\,dS_x+
\int_{{\Gamma}}(H_0-H)\, (u_\nu-R)\,q_\nu\, dS_x.\end{gathered}$$
The identity is proved in [@MP2] by slightly modifying one that was proved in [@MP1]. Its proof is obtained by polishing the argument in [@Re].
From identity it is clear that, if the right-hand side is zero — and that surely occurs if is in force — then ${\Delta}P\equiv 0$ owing to , and hence radial symmetry ensues, as already observed.
Since both summands at the left-hand side are non-negative, the same conclusion results from , if its right-hand side is null — and that holds if $H$ is constant. It should also be noticed that $H\equiv H_0$ implies independently that $u$ satisfies .
One more comment is in order. If we turn back to Section \[sec:torsion\], we see that $$T'(0)+R^2\,V'(0)=\frac12\,(u_\nu^2-R^2)\,(u_\nu-q_\nu),$$ if we choose $\phi=(q_\nu-u_\nu)/2$, and hence can be written as: $$\label{idwps-2}
\int_{{\Omega}} (-u)\,\left\{ |{\nabla}^2 u|^2- \frac{ ({\Delta}u)^2}{N} \right\}\,dx=
\int_{\Gamma}[T'(0)+R^2\,V'(0)]\,dS_x.$$
Ergo, this identity tells us something more about the Saint Venant Principle.
\[th:priviledged-flow\] A domain ${\Omega}$ is a ball if the function $${\mathbb{R}}\ni t\mapsto \tau({\Omega}_t)+R^2\,(V-|{\Omega}_t|)$$ obtained by modifying ${\Omega}$ by the rule with ${{\mathcal R}}={{\mathcal R}}^*$ and $${{\mathcal R}}^*=\frac12\,(q_\nu-u_\nu)\,\nu$$ has a critical point at $t=0$.
Actually, it is enough that the derivative is non-positive at $t=0$. Thus, the flow generated by that ${{\mathcal R}}^*$ is quite a privileged one.
Theorem \[th:priviledged-flow\] seems to be new.
Stability: in the wake of Alexandrov and Serrin
===============================================
In this and the next section, we will present recent results on the stability for the radial configuration in the Soap Bubble Theorem and Serrin’s problem. Roughly speaking, the question is how close a hypersurface ${\Gamma}$ is to a sphere, if its mean curvature $H$ — or, alternatively, the normal derivative on ${\Gamma}$ of the solution $u$ of — is close to a constant in some norm. Technically speaking, one may look for two concentric balls $B_{\rho_i}$ and $B_{\rho_e}$, with radii $\rho_i$ and $\rho_e$, $\rho_i<\rho_e$, such that $$\label{balls}
{\Gamma}\subset {\overline}{B}_{\rho_e}\setminus B_{\rho_i}$$ and $$\label{stability}
\rho_e-\rho_i\le \psi(\eta),$$ where $\psi:[0,\infty)\to[0,\infty)$ is a continuous function vanishing at $0$ and $\eta$ is a suitable measure of the deviation of $u_\nu$ or $H$ from being a constant.
In this section, I will briefly give an account of the results in this direction obtained by means of quantitative versions of the reflection principle or the method of moving planes.
The problem of stability for Serrin’s problem has been considered for the first time in [@ABR]. There, for a $C^{2,{\alpha}}$-regular domain ${\Omega}$, it is proved that, if $u$ is the solution of , there exist constants $C, {\varepsilon}> 0$ such that and hold for $$\psi(\eta)=C\,|\log \eta|^{-1/N} \quad \mbox{ and } \quad \eta={\Vert}u_\nu-c{\Vert}_{C^1({\Gamma})}<{\varepsilon},$$ for some constant $c$. It is important to observe that the validity of that inequality extends to the case in which at the right-hand side of the Poisson’s equation in the number $N$ is replaced by a locally Lipschitz continuous function $f(u)$. In this case, only positive solutions are considered and the constants $C, {\varepsilon}$ also depend on $f$ and the regularity of ${\Gamma}$.
In the same general framework, the stability estimate of [@ABR] has been improved in [@CMV]. There, it is in fact shown that and hold for $$\psi(\eta)=C\,\eta^\tau \quad \mbox{ and } \quad
\eta=\sup_{\substack{x,y \in {\Gamma}\\ \ x \neq y}} \frac{|u_\nu(x) - u_\nu(y)|}{|x-y|}<{\varepsilon}.$$ The exponent $\tau\in (0,1)$ can be computed for a general setting and, if ${\Omega}$ is convex, is proved to be arbitrarily close to $1/(N+1)$.
The only quantitative estimate for symmetry in the Soap Bubble Theorem, based on Alexandrov’s reflection principle, is proved in [@CV] and is optimal. There, it is shown that, if ${\Gamma}$ is an $N$-dimensional, $C^2$-regular, connected, closed hypersurface embedded in ${\mathbb{R}}^{N}$, there exist constants $C, {\varepsilon}> 0$ such that and hold for $$\psi(\eta)=C\,\eta \quad \mbox{ and } \quad \eta=\max_{\Gamma}H-\min_{\Gamma}H<{\varepsilon}.$$ The two constants depend on $N$, upper bounds for the principal curvatures of ${\Gamma}$, and $|{\Gamma}|$. The result is optimal, because is attained for ellipsoids.
We conclude this section by giving a brief outline of the arguments used to obtain stability for Serrin’s problem. The arguments are substantially those of [@ABR], which have been refined in [@CMS] and [@CMV], and adapted to the situation of the Soap Bubble Theorem in [@CV].
The idea is to fix a direction ${\theta}$ and define an approximate set $X_\eta\subset{\Omega}$, mirror-symmetric with respect to a hyperplane $\pi_{\theta}$ orthogonal to ${\theta}$, that fits ${\Omega}$ “well”, in the sense that it is the maximal ${\theta}$-symmetric set contained in ${\Omega}$ and such that its order of approximation of ${\Omega}$ can be controlled by $C\,\psi(\eta)$. It turns out that this approximation process does not depend on the particular direction ${\theta}$ chosen. Thus, one defines an approximate center of symmetry $p$ as the intersection of $N$ mutually orthogonal hyperplanes $\pi_{{\theta}_1}, \dots, \pi_{{\theta}_N}$ of symmetry. It then becomes apparent that, in any other direction ${\theta}$, the approximation deteriorates only by replacing $C$ by a possibly larger constant. It is thus possible to define the desired balls in by centering them at $p$.
Technically, the control of the approximation by $C\,\psi(\eta)$ is made possible by the application of [*Harnack’s inequality*]{} and [*Carleson’s (or boundary Harnack’s) inequality*]{}, which are the quantitative versions of the already mentioned maximum principle, Hopf’s lemma, and Serrin’s corner lemma. The improvement obtained in [@CMV] is the result of a refinement of Harnack’s inequality in suitable cones.
Stability: in the wake of Reilly and Weinberger
===============================================
Quantitative inequalities for the Soap Bubble Theorem and Serrin’s problem can also be obtained by following the tracks of Reilly’s and Weinberger’s proofs of symmetry.
In [@CM], based on the proof of Heintze-Karcher’s inequality given in [@Ro], it is shown that - hold for $$\psi(\eta)=C\,\eta^\frac1{2(N+1)} \quad \mbox{ and } \quad \eta=\max_{\Gamma}|H_0-H|<{\varepsilon},$$ for some positive constants $C, {\varepsilon}$; ${\varepsilon}$ should be sufficiently small so as to guarantee that ${\Gamma}$ is strictly [*mean convex*]{} (that means that $H>0$ on ${\Gamma}$) — the realm of validity of Heintze-Karcher’s inequality. The exponent is not optimal; however, an estimate is also given in [@CM] that gives a finer description of hypersurfaces having their mean curvature close to a constant. In fact, such an estimate specifies how ${\Gamma}$ can be close to the boundary of a disjoint union of balls.
That result has been improved in various directions in [@MP1]. In fact, based on a version of identity , - are shown to hold for some positive constants $C, {\varepsilon}$ and $$\psi(\eta)=C\,\eta^{\tau_N} \quad \mbox{ and } \quad \eta=\int_{\Gamma}(H_0-H)^+\,dS_x<{\varepsilon},$$ where $\tau_N=1/2$ for $N=2, 3$ and $\tau_N=1/(N+2)$ for $N\ge 4$. Here we mean $(t)^+=\max(t,0)$ for $t\in{\mathbb{R}}$.
That approximation is not restricted to the class of strictly mean convex hypersurfaces, but is valid for $C^2$-regular hypersurfaces, as in [@CV]. Differently from [@CV] and [@CM], it replaces the uniform deviation from $H_0$ by a weaker average deviation and yet, compared to [@CM], it improves the relevant stability exponent.
A further advance has been recently obtained in [@MP2]. In fact, it holds that $$\label{optimal-stability-SBT}
\rho_e-\rho_i\le C\,{\Vert}H_0-H{\Vert}_{2,{\Gamma}}^{\tau_N} \quad \mbox{ if } \quad {\Vert}H_0-H{\Vert}_{2,{\Gamma}}<{\varepsilon},$$ where $\tau_N=1$ for $N=2, 3$ and $\tau_N=2/(N+2)$ for $N\ge 4$ — that is the stability exponent doubles. Therefore, for $N=2, 3$ an optimal Lipschitz inequality (as in [@CV]) is established, [*even for a weaker average deviation*]{}; it seems realistic to expect that a similar Lipschitz estimate holds also for $N\ge 4$.
In [@MP2], an inequality involving the following slight modification of the so-called [*Fraenkel asymmetry*]{}, $$\label{asymmetry}
{{\mathcal A}}({\Omega})=\inf\left\{\frac{|{\Omega}{\Delta}B^x|}{|B^x|}: x \mbox{ center of a ball $B^x$ with radius $R$} \right\},$$ has also been proved: $${{\mathcal A}}({\Omega}) \le C\,{\Vert}H_0-H{\Vert}_{2,{\Gamma}}.$$ Here, ${\Omega}{\Delta}B^x$ denotes the symmetric difference of ${\Omega}$ and $B^x$, and $R$ is the constant defined in . This inequality holds for any $N\ge 2$. Under sufficient assumptions, the number ${{\mathcal A}}({\Omega})$ can be linked to the difference $\rho_e-\rho_i$ (see [@MP2]); however, the resulting stability inequality is poorer than .
At the end of this section, we shall explain how these results have been made possible by parallel estimates for Serrin’s problem.
The first improvement of the logarithmic estimate obtained in [@ABR] for problem - has been given in [@BNST]. There, the idea of working on integral identities and inequalities has also been put in action for the first time. By a combination of the ideas of Weinberger and the use of (pointwise) [*Newton’s inequalities*]{} for the hessian matrix ${\nabla}^2 u$ of $u$, the solution $u$ of is shown to satisfy - for $$\psi(\eta)=C\,\eta^{\tau_N} \quad \mbox{ and } \quad \eta=\max_{\Gamma}|u_\nu-c|<{\varepsilon},$$ where $c$ is some reference constant. In [@BNST], the possibility of measuring the deviation of $u_\nu$ from a constant by the $L^1$-norm, that is with $\eta={\Vert}u_\nu-c{\Vert}_{1,{\Gamma}}$, is also considered and, by assuming an appropriate a priori bound for $|{\nabla}u|$ on ${\Gamma}$, it is shown that ${\Omega}$ can be approximated in measure by a finite number of mutually disjoint balls $B_i$. The obtained exponent is $\tau_N=1/(4N+9)$.
Recently, that approach has been greatly improved in [@Fe] where, rather than by and , the closeness of ${\Omega}$ to a ball is measured by the asymmetry ${{\mathcal A}}({\Omega})$: $${{\mathcal A}}({\Omega})\le C\,{\Vert}u_\nu-R{\Vert}_{2,{\Gamma}}.$$
Turning back to an approximation of type -, the following formula is derived in [@MP2]: $$\label{stability-L2-Serrin}
\rho_e-\rho_i\le C\, {\Vert}u_\nu-R{\Vert}_{2,{\Gamma}}^\frac2{N+2} \quad \mbox{ if } \quad {\Vert}u_\nu-R{\Vert}_{2,{\Gamma}}<{\varepsilon}.$$
This inequality clearly improves on those in [@BNST] and [@CMV], even though the uniform norm and the Lipschitz semi-norm are there replaced by a weaker $L^2$-deviation.
As promised, we conclude this section by giving an outline of the proof of and , by drawing the reader’s attention to how the results benefit from the sharp formulas and and their interaction.
To simplify matters, it is convenient to re-write in terms of the [*harmonic*]{} function $h=q-u$: it holds that $$\label{idwps-h}
\int_{{\Omega}} (-u)\, |{\nabla}^2 h|^2\,dx=
\frac{1}{2}\,\int_{\Gamma}( R^2-u_\nu^2)\, h_\nu\,dS_x.$$ Notice that $h=q$ on ${\Gamma}$ and hence the [*oscillation*]{} of $h$ on ${\Gamma}$ can be bounded from below by $\rho_e-\rho_i$: $$\label{oscillation}
\max_{{\Gamma}} h-\min_{{\Gamma}} h=\frac12\,(\rho_e^2-\rho_i^2)\ge \frac12\,(|{\Omega}|/|B|)^{1/N}(\rho_e-\rho_i).$$ Thus, - will be obtained if we can bound that oscillation in terms of the left-hand side of . In fact, its right-hand side can be easily bounded in terms of the desired $L^2$-deviation of $u_\nu$ from $R$.
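For the reader's convenience, the lower bound in the last display follows from the elementary chain $$\rho_e^2-\rho_i^2=(\rho_e+\rho_i)(\rho_e-\rho_i)\ge\rho_e\,(\rho_e-\rho_i)\ge(|{\Omega}|/|B|)^{1/N}(\rho_e-\rho_i),$$ where $B$ denotes a unit ball: indeed, the inclusion ${\Omega}\subset B_{\rho_e}$ gives $|{\Omega}|\le|B|\,\rho_e^N$, that is $\rho_e\ge(|{\Omega}|/|B|)^{1/N}$.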
To carry out this plan, the following inequalities, proved in [@MP1 Lemma 3.3], are decisive: $$\max_{{\Gamma}} h-\min_{{\Gamma}} h \le C\,\left(\int_{\Omega}h^2 dx\right)^\frac1{N+2}\le C\,\left(\int_{\Omega}|{\nabla}h|^2 dx\right)^\frac1{N+2}.$$ In the first inequality, it is crucial that $h$ is harmonic in ${\Omega}$, because the mean value property for harmonic functions on balls is used; the second inequality follows from an application of the Poincaré inequality, since we can choose $a$ so that $h$ has average zero on ${\Omega}$.
Next, notice that the obtained inequalities for the oscillation of $h$ and do not depend on the particular choice of $z\in{\mathbb{R}}^N$. A good choice for $z$ turns out to be a minimum (or any critical) point of $u$, so that it is guaranteed that $z\in{\Omega}$. That choice has a further important benefit: since now ${\nabla}h(z)=0$, the Hardy-Poincaré-type inequality $$\label{boas-straube}
\int_{\Omega}v(x)^2 dx\le C \int_{\Omega}(-u) |{\nabla}v(x)|^2 dx,$$ that holds for any harmonic function $v$ that is zero at some point, can be applied to each first partial derivative of $h$ so as to eventually obtain that $$\max_{{\Gamma}} h-\min_{{\Gamma}} h \le C\,\left(\int_{\Omega}(-u) |{\nabla}^2 h|^2 dx\right)^\frac1{N+2}.$$ Inequality can be derived by using an inequality proved in [@BS] or [@HS] (see [@Fe] or [@MP2] for details).
The last inequality and easily give a stability bound in terms of the deviation $\eta={\Vert}u_\nu-R{\Vert}_{1,{\Gamma}}$. Nevertheless, one can gain a better estimate by observing that, if $u_\nu-R$ tends to $0$, also $h_\nu$ does. Quantitatively, this fact can be expressed by the inequality $$\label{Feldman}
{\Vert}h_\nu{\Vert}_{2,{\Gamma}}\le C\,{\Vert}u_\nu-R{\Vert}_{2,{\Gamma}},$$ that can be derived from [@Fe]. Thus, will follow by using this inequality, after an application of Hölder’s inequality to the right-hand side of .
By keeping track of the constants $C$ and ${\varepsilon}$ in the various inequalities, one can show that those in only depend on $N$, the diameter of ${\Omega}$ and the radii of the optimal interior and exterior touching balls to ${\Gamma}$ (see [@MP2]).
In order to prove , we use the new identity , that also reads as: $$\begin{gathered}
\label{identity-SBT-h}
\frac1{N-1}\int_{{\Omega}} |{\nabla}^2 h|^2 dx+
\frac1{R}\,\int_{\Gamma}(u_\nu-R)^2 dS_x = \\
-\int_{{\Gamma}}(H_0-H)\,h_\nu\,u_\nu\,dS_x+
\int_{{\Gamma}}(H_0-H)\, (u_\nu-R)\,q_\nu\, dS_x.\end{gathered}$$ In fact, discarding the first summand at its left-hand side and applying Hölder’s inequality to the two terms at its right-hand side and thereafter, yields that $${\Vert}u_\nu-R{\Vert}_{2,{\Gamma}}\le C\,{\Vert}H_0-H{\Vert}_{2,{\Gamma}}.$$ Inequality then follows again from and the estimate $$\max_{{\Gamma}} h-\min_{{\Gamma}} h\le C \left(\int_{{\Omega}} |{\nabla}^2 h|^2\,dx\right)^{\tau_N/2}$$ already obtained in [@MP1].
Stability for a harmonic domain
===============================
We conclude this paper by deriving a stability inequality for the mean value property established in , in the spirit of that given in [@CFL] for the classical Gauss mean value property for harmonic functions on balls in ${\mathbb{R}}^N$.
We start by writing an identity, $$\frac1{|{\Omega}|}\int_{\Omega}h\,dx-\frac1{|{\Gamma}|}\int_{\Gamma}h\,dS_x=\frac1{N |{\Omega}|}\,\int_{\Gamma}h\,(u_\nu-R)\,dS_x,$$ that holds at least for any function $h\in C^2({\Omega})\cap C^0({\overline}{{\Omega}})$ which is harmonic in ${\Omega}$, if $u$ is the solution of . The identity can be easily obtained by an application of Gauss-Green’s formula and the use of .
The measure of the deviation of the two mean values from one another can be obtained by taking the norm of the linear functional $L$ defined at the left-hand side of the identity on some relevant normed space. We may for instance consider the [*Hardy-type space*]{} ${{\mathcal H}}^p({\Omega})$ of harmonic functions in ${\Omega}$, whose trace on ${\Gamma}$ is a function in $L^p({\Gamma})$. Thus, we compute that $$\begin{gathered}
{\Vert}L{\Vert}_p=\sup\left\{\left|\frac1{|{\Omega}|}\int_{\Omega}h\,dx-\frac1{|{\Gamma}|}\int_{\Gamma}h\,dS_x\right|: h\in{{\mathcal H}}^p({\Omega}), {\Vert}h{\Vert}_{p,{\Gamma}}\le 1\right\}=\\
\frac1{N |{\Omega}|}\,{\Vert}u_\nu-R{\Vert}_{p',{\Gamma}},\end{gathered}$$ where $p'$, as usual, is the conjugate exponent of $p$. These computations work for any $p\in[1,\infty]$.
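For the upper bound in this computation, a minimal sketch of the Hölder step (for $1<p<\infty$) reads: $$|L(h)|=\frac1{N |{\Omega}|}\left|\int_{\Gamma}h\,(u_\nu-R)\,dS_x\right|\le\frac1{N |{\Omega}|}\,{\Vert}h{\Vert}_{p,{\Gamma}}\,{\Vert}u_\nu-R{\Vert}_{p',{\Gamma}},$$ so that ${\Vert}L{\Vert}_p\le\frac1{N |{\Omega}|}\,{\Vert}u_\nu-R{\Vert}_{p',{\Gamma}}$; equality can then be obtained by testing $L$ on a suitable normalization of the harmonic function whose trace on ${\Gamma}$ equals $|u_\nu-R|^{p'-1}\,\mathrm{sgn}(u_\nu-R)$, the endpoint cases following by approximation.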
Therefore, by choosing $p=2$, we obtain the inequality $$\rho_e-\rho_i\le C\,{\Vert}L{\Vert}_2^\frac2{N+2} \quad \mbox{ if } \quad {\Vert}L{\Vert}_2<{\varepsilon}$$ from . This inequality is new.
[BNST]{}
A. Aftalion, J. Busca and W. Reichel, *Approximate radial symmetry for overdetermined boundary value problems*, Adv. Diff. Eq. 4 (1999), 907-932.
D. Aharonov, H. S. Shapiro, *A minimal-area problem in conformal mapping — preliminary report*, Res. Bull. TRITA-MAT-1973-7, Royal Institute of Technology, 34 pp.
A. D. Alexandrov, *Uniqueness theorem for surfaces in the large. V*, Vestnik, Leningrad Univ. 13, 19 (1958), 5-8, Amer. Math. Soc. Transl. 21, Ser. 2, 412–416.
A. D. Alexandrov, *A characteristic property of spheres*, Ann. Mat. Pura Appl. 58 (1962), 303–315.
B. Brandolini, C. Nitsch, P. Salani, C. Trombetti, *On the stability of the Serrin problem*, J. Differential Equations, 245 (2008), 1566–1583.
H. P. Boas, E. J. Straube, *Integral inequalities of Hardy and Poincaré type*, Proc. Amer. Math. Soc. 103 (1988), 172–176.
G. Ciraolo, F. Maggi, *On the shape of compact hypersurfaces with almost constant mean curvature*, Comm. Pure Appl. Math. 70 (2017), 665–716.
G. Ciraolo, R. Magnanini, S. Sakaguchi, *Solutions of elliptic equations with a level surface parallel to the boundary: stability of the radial configuration*, J. Anal. Math. 128 (2016), 337–353.
G. Ciraolo, R. Magnanini, V. Vespri, *Hölder stability for Serrin’s overdetermined problem*, Ann. Mat. Pura Appl. 195 (2016), 1333–1345.
G. Ciraolo, L. Vezzoni, *A sharp quantitative version of Alexandrov’s theorem via the method of moving planes*, to appear in J. Eur. Math. Soc., preprint arxiv:1501.07845v3.
G. Cupini, N. Fusco, E. Lanconelli, *A sharp stability result for the Gauss mean value formula*, preprint (2017), http://cvgmt.sns.it/media/doc/paper/3515/CFL.2017.pdf.
W. M. Feldman, *Stability of Serrin’s problem and dynamic stability of a model for contact angle motion*, preprint (2017), arxiv:1707.06949.
B. Gustafsson, H. Shapiro, *What is a quadrature domain?*, in “Quadrature Domains and Their Applications”, 1–25, Oper. Theory Adv. Appl., 156, Birkhäuser, Basel, 2005.
A. Henrot, M. Pierre, Variation et Optimisation de Formes: Une Analyse Géométrique. Springer-Verlag Berlin Heidelberg 2005.
R. Hurri-Syrjänen, *An improved Poincaré inequality*, Proc. Amer. Math. Soc. 120 (1994), 213–222.
B. Krummel, F. Maggi, *Isoperimetry with upper mean curvature bounds and sharp stability estimates*, preprint (2016), arxiv:1606.00490.
Ü. Kuran: *On the mean-value property of harmonic functions*, Bull. London Math. Soc. 4 (1972), 311–312.
R. Magnanini, G. Poggesi, *On the stability for Alexandrov’s Soap Bubble theorem*, to appear in J. Anal. Math., preprint arxiv (2016) arxiv:1610.07036.
R. Magnanini, G. Poggesi, *Serrin’s problem and Alexandrov’s Soap Bubble Theorem: enhanced stability via integral identities*, preprint arxiv (2017) arxiv:1708.07392.
S. I. Pohozaev, *On the eigenfunctions of the equation ${\Delta}u + {\lambda}f(u) = 0$*, Dokl. Akad. Nauk SSSR 165 (1965), 1408–1411.
L. E. Payne, P. W. Schaefer, *Duality theorems in some overdetermined boundary value problems*, Math. Meth. Appl. Sciences 11 (1989), 805–819.
G. Pólya, G. Szegö, Isoperimetric Inequalities in Mathematical Physics. Annals of Mathematics Studies, 27, Princeton University Press, Princeton, N. J., 1951.
S. Raulot, A. Savo, *On the first eigenvalue of the Dirichlet-to-Neumann operator on forms*, J. Funct. Anal. 262 (2012), 889–914.
R. C. Reilly, *Mean curvature, the Laplacian, and soap bubbles*, Amer. Math. Monthly, 89 (1982), 180–188.
A. Ros, *Compact hypersurfaces with constant higher order mean curvatures*, Rev. Mat. Iberoamericana 3 (1987), 447–453.
J. Serrin, *A symmetry problem in potential theory*, Arch. Ration. Mech. Anal. 43 (1971), 304–318.
I. S. Sokolnikoff, Mathematical Theory of Elasticity. New York, McGraw-Hill 1956.
H. F. Weinberger, *Remark on the preceding paper of Serrin*, Arch. Ration. Mech. Anal. 43 (1971), 319–320.
---
author:
- François Lique
- Alexandre Zanchet
- Niyazi Bulut
- 'Javier R. Goicoechea'
- Octavio Roncero
date: 'Received ; accepted '
title: Hyperfine excitation of SH$^+$ by H
---
[ SH$^+$ is a surprisingly widespread molecular ion in diffuse interstellar clouds. There, it plays an important role triggering the sulfur chemistry. In addition, SH$^+$ emission lines have been detected at the UV-illuminated edges of dense molecular clouds, photo-dissociation regions (PDRs), and toward high-mass protostars. An accurate determination of the SH$^+$ abundance and of the physical conditions prevailing in these energetic environments relies on knowing the rate coefficients of inelastic collisions between SH$^+$ molecules and hydrogen atoms, hydrogen molecules, and electrons.]{} [In this paper, we derive SH$^+$–H fine and hyperfine-resolved rate coefficients from the recent quantum calculations for the SH$^+$–H collisions, including inelastic, exchange and reactive processes.]{} [The method used is based on the infinite order sudden approach.]{} [State-to-state rate coefficients between the first 31 fine levels and 61 hyperfine levels of SH$^+$ were obtained for temperatures ranging from 10 to 1000 K. Fine structure-resolved rate coefficients present a strong propensity rule in favour of $\Delta j = \Delta N$ transitions. The $\Delta j = \Delta F$ propensity rule is observed for the hyperfine transitions. ]{} [The new rate coefficients will help significantly in the interpretation of SH$^+$ spectra from PDRs and UV-irradiated shocks where the abundance of hydrogen atoms with respect to hydrogen molecules can be significant.]{}
Introduction
============
Submillimeter emission lines from the ground rotational state of SH$^+$ were first detected toward high-mass star-forming region with Herschel/HIFI [@Benz:10]. In parallel, and using APEX telescope, @menten:11 detected rotational absorption lines produced by SH$^+$ in the low-density ($n_{\rm H}$$\lesssim$100cm$^{-3}$) diffuse clouds in the line of sight toward the strong continuum source SgrB2(M), in the Galactic Center. Despite the very endothermic formation route of this hydride ion [for a review see @Gerin:16], subsequent absorption measurements of multiple lines of sight with Herschel demonstrated the ubiquitous presence of SH$^+$ in diffuse interstellar clouds [@Godard:12].
SH$^+$ rotational lines have also been detected in emission toward the Orion Bar photo-dissociation region (PDR) [@Nagy:13], a strongly UV-irradiated surface of the Orion molecular cloud [e.g., @Goicoechea:16]. In warm and dense PDRs ($n_{\rm H}$$\gtrsim$10$^5$cm$^{-3}$) like the Bar, SH$^+$ forms by exothermic reactions of S$^+$ with vibrationally excited H$_2$ [with $v\,\geq\,2$, see details in @Agundez:10; @Zanchet:13; @Zanchet:19]. High angular resolution images taken with ALMA show that SH$^+$ arises from a narrow layer at the edge of the PDR, the photodissociation front that separates the atomic from the molecular gas [@Goicoechea:17]. In these PDR layers, the abundance of hydrogen atoms is comparable to that of hydrogen molecules, which are continuously being photodissociated. Both H and H$_2$, together with electrons [arising from the ionization of carbon atoms, see e.g., @Cuadrado:19] drive the collisional excitation of molecular rotational levels and atomic fine-structure levels.
In addition to PDRs, the SH$^+$ line emission observed toward massive protostars likely arises from the cavities of their molecular outflows [@Benz:10; @Benz16]. In these UV-irradiated shocks, the density of hydrogen atoms can be high as well. All in all, the molecular abundances and physical conditions in these environments, where atomic and molecular hydrogen can have comparable abundances, are not well understood.
In the ISM, molecular abundances are derived from molecular line modeling. Assuming local thermodynamic equilibrium (LTE) conditions in the interstellar media with low densities is generally not a good approximation, as discussed by [@Roueff:13]. Hence, the population of molecular levels is driven by the competition between collisional and radiative processes. It is then essential to determine accurate collisional data between the involved molecules and the most abundant interstellar species, which are usually electrons and atomic and molecular hydrogen, in order to obtain reliably modeled spectra.
The computation of collisional data for SH$^+$ started only recently. First, R-matrix calculations combined with the adiabatic-nuclei-rotation and Coulomb-Born approximations were used to compute electron-impact rotational rate coefficients, and hyperfine-resolved rate coefficients were deduced using the infinite-order-sudden approximation [@Hamilton:18]. Then, time-independent close-coupling quantum scattering calculations were employed by [@Dagdigian:19] to compute hyperfine-resolved rate coefficients for the (de-)excitation of SH$^+$ in collisions with both para- and ortho-H$_2$.
Collisional data with atomic hydrogen are much more challenging to compute because of the possible reactive nature of the SH$^+$–H collisional system. However, recently, we overcame this difficult problem and presented quantum mechanical calculations of cross sections and rate coefficients for the rotational excitation of SH$^+$ by H, including the reactive channels [@Zanchet:19] using new accurate potential energy surfaces.
Unfortunately, it was not possible to include the fine and hyperfine structure of the SH$^+$($^3\Sigma^-$) molecule in the quantum dynamical calculations, whereas they are resolved in the astronomical observations, making that new set of data difficult to use in astrophysical applications.
The aim of this work is to use the quantum state-to-state rate coefficients for the SH$^+$($^3\Sigma^-$)–H inelastic collisions to generate a new set of fine and hyperfine resolved data that can be used in radiative transfer models. The paper is organized as follows: Sec. II provides a brief description of the theoretical approach. In Sec. III, we present the results. Concluding remarks are drawn in Sec. IV.
Computational methodology
=========================
Potential energy surfaces
-------------------------
The collisions between SH$^+$($X^{3}\Sigma^{-}$) and H($^2S$) can take place on two different potential energy surfaces (PESs), the ground quartet ($^4A''$) and doublet ($^2A''$) electronic states of the H$_2$S$^+$ system. In this work, we used the H$_2$S$^+$ quartet and doublet potential energy surfaces (PESs), that were previously generated by [@Zanchet:19].
Briefly, the state-average complete active space (SA-CASSCF) method [@Werner:85] was employed to calculate the first $^4A''$ together with the two first $^2A'$ and the three first $^2A''$ electronic states. The obtained state-average orbitals and multireference configurations were then used to calculate both the lowest $^4A''$ and $^2A''$ states energies with the internally contracted multireference configuration interaction method (ic-MRCI) [@Werner:88] and Davidson correction [@Davidson:75]. For both sulfur and hydrogen atoms, the augmented correlation-consistent quintuple zeta (aug-cc-pV5Z) basis sets were used and all calculations were done using the MOLPRO suite of programs [@MOLPRO]. These energies have then been fitted using the GFIT3C procedure [@Aguado-Paniagua:92; @Aguado-etal:98].
Both PESs exhibit completely different topographies. The $^4A''$ electronic state does not present any minimum apart from the van der Waals wells in the asymptotic channels and does not present any barrier to the $\ch{SH+ + H -> H2 + S+}$ reaction. This reaction is exothermic on this surface, and reactive collisions are likely to occur in competition with the inelastic collisions.
On the other hand, the $^2A''$ state presents a deep HSH insertion well and no barrier either. For this state, in contrast with the previous case, the $\ch{SH+ + H -> H2 + S+}$ reaction is endothermic and only inelastic collisions can occur (pure or involving H exchange).
Time independent and Wave Packet calculations
---------------------------------------------
During a collision between SH$^+$ and H, three processes compete: the inelastic (\[ine\]), reactive (\[rea\]) and exchange (\[ex\]) processes: $$\ch{SH+}(v,N) + \ch{H'} \ch{->} \ch{SH+}(v',N') + \ch{H'}
\label{ine}$$ $$\ch{SH+}(v,N) + \ch{H'} \ch{->} \ch{H'H}(v',N') + \ch{S+}
\label{rea}$$ $$\ch{SH+}(v,N) + \ch{H'} \ch{->} \ch{H'S+}(v',N') + \ch{H}
\label{ex}$$ where $v$ and $N$ designate the vibrational and rotational levels, respectively, of the SH$^+$ molecule (or of the H$_2$ molecule when the reaction occurs). Only collisions with SH$^+$ molecules in their ground vibrational state $v=0$ are considered in this work. Therefore, the vibrational quantum number $v$ will be omitted hereafter.
The spin-orbit couplings between the different H$_2$S$^+$ electronic states were ignored, and the collisions on the ground quartet and doublet electronic states were studied separately. Because of their different topographies, the dynamical calculations were treated differently on the two PESs.
The reaction dynamics on the $^4A''$ state has been studied with a time-independent treatment based on hyperspherical coordinates. On this PES, the $\ch{SH+ + H -> H2 + S+}$ collision is a barrierless and exothermic reaction for which it has been shown [@Zanchet:19] that the reactivity is large ($k > 10^{-10}$ cm$^{3}$ s$^{-1}$), even at low temperatures. Hence, the competition between all the three processes (inelastic, exchange and reactivity) is taken into account rigorously. We used the <span style="font-variant:small-caps;">abc</span> reactive code of @skouteris:2000 to carry out close coupling calculations of the reactive, inelastic and exchange cross sections. The cross sections were obtained following the approach described in @tao:2007 and recently used to study the rotational excitation of [@Desrousseaux:20] by .
We computed the cross sections for the first 13 rotational levels of the SH$^+$ molecule ($0 \le N \le 12$) for collision energies ranging from 0 to 5000 cm$^{-1}$ and for all values of the total angular momentum $J$ leading to a non-zero contribution to the cross sections. More details about the scattering calculations can be found in [@Zanchet:19].
The ground doublet ($^2A''$) electronic state exhibits a large well depth. A time-independent treatment is therefore impractical, and the dynamics was studied with a quantum wave-packet method using the MAD-WAVE3 program [@Zanchet-etal:09b]. On this electronic state, the reactive channels are largely endothermic and are not open at the collision energies considered in this work.
The inelastic and exchange cross sections on the $^2A''$ state were calculated using the usual partial wave expansion as $$\begin{aligned}
\sigma_{\alpha,N\rightarrow \alpha', N'}(E_k)
&=&\frac{\pi}{k^{2}} \frac{1}{2N+1} \sum_{J=0}^{J_{max}} \sum_{\Omega,\Omega'}(2J+1)
\nonumber\\
&& \times P^{J}_{\alpha vN \Omega\rightarrow \alpha' v'N'\Omega'}(E_k) \nonumber\\\end{aligned}$$ where $J$ is the total angular momentum quantum number, and $\Omega, \Omega'$ are the projections of the total angular momentum on the reactant and product body-fixed z-axis, respectively. $\alpha=I, \alpha'=I\ {\rm or}\ E$ denotes the arrangement channels, inelastic or exchange. $k^2=2\mu_rE_k/\hbar^2$ is the square of the wave vector for a collision energy $E_k$, and $P^{J}_{\alpha vN \Omega\rightarrow \alpha' v'N'\Omega'}(E_k)$ are the transition probabilities, $i.e.$ the squares of the corresponding S-matrix elements. We computed the cross sections from the $N=0$ rotational state to the first 13 rotational levels of the molecule ($0 \le N' \le 12$) for collision energies ranging from 0 to 5000 cm$^{-1}$. Because of the high computational cost of these simulations, they were only performed for $J$=0, 10, 15, 20, 25, 30, 40, 50, ..., 110, while for intermediate $J$ values the transition probabilities were interpolated using a uniform $J$-shifting approximation, as recently used for the OH$^+$–H and CH$^+$–H collisional systems [@Werfelli:15; @bulut:15]. The convergence analysis and the parameters used in the propagation for each of the two PESs are described in detail in [@Zanchet:19].
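As an illustration of how the partial-wave sum above is assembled in practice, the following minimal Python sketch evaluates a single state-to-state cross section from a set of opacities $P^J$; here `probs_J` is a hypothetical dictionary, assumed to be already summed over $\Omega$, $\Omega'$ and interpolated over the missing $J$ values as described above.

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J s
CM1_TO_J = 1.986445857e-23  # 1 cm^-1 expressed in joules

def partial_wave_cross_section(E_k_cm1, N, probs_J, mu_kg):
    """Inelastic/exchange cross section (m^2) from partial-wave probabilities.

    probs_J maps the total angular momentum J to the transition probability
    P^J (summed over Omega, Omega'); mu_kg is the collision reduced mass.
    """
    k2 = 2.0 * mu_kg * E_k_cm1 * CM1_TO_J / HBAR**2   # squared wave vector, m^-2
    return math.pi / (k2 * (2 * N + 1)) * sum(
        (2 * J + 1) * P for J, P in probs_J.items())
```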
For both sets of calculations, since the two rearrangement channels, inelastic and exchange, yield the same products, the corresponding cross sections were summed for the doublet and quartet states independently. Finally, the cross sections for the two electronic states were summed with the proper degeneracy factors to give the total collision cross sections as $$\begin{aligned}
\sigma_{N \to N'} (E_{k}) = {2\over 3} \sigma^{S=3/2}_{N \to N'} (E_{k})
+ {1\over 3} \sigma^{S=1/2}_{N \to N'} (E_{k}).\end{aligned}$$ As seen in [@Zanchet:19], the magnitude of the excitation cross sections obtained on the doublet state is larger than that on the quartet state, because of both the non-reactive character of the collision and the deep well, which favor inelastic collisions.
From the total collision cross sections $\sigma_{N \to N'} (E_{k})$, one can obtain the corresponding thermal rate coefficients at temperature $T$ by an average over the collision energy ($E_k$): $$\begin{aligned}
\label{thermal_average}
k_{N \to N'}(T) & = & \left(\frac{8}{\pi\mu k_B^3 T^3}\right)^{\frac{1}{2}} \nonumber\\
& & \times \int_{0}^{\infty} \sigma_{N \to N'}(E_k)\, E_{k}\, \exp(-E_k/k_BT)\, dE_{k}\end{aligned}$$ where $k_B$ is Boltzmann’s constant and $\mu$ the reduced mass. The cross-section calculations, carried out up to a kinetic energy of 5000 cm$^{-1}$, allowed us to compute rate coefficients for temperatures ranging from 10 to 1000 K.
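For completeness, a small numerical sketch of Eq. \[thermal\_average\] is given below; the energy grid, cross sections and reduced mass are placeholders to be supplied by the user.

```python
import numpy as np

KB = 1.380649e-23            # Boltzmann constant, J K^-1
CM1_TO_J = 1.986445857e-23   # 1 cm^-1 expressed in joules

def thermal_rate(T, E_cm1, sigma_m2, mu_kg):
    """Maxwell-Boltzmann average of a state-to-state cross section.

    E_cm1    : collision-energy grid in cm^-1 (here 0 -- 5000 cm^-1)
    sigma_m2 : cross sections on that grid, in m^2
    Returns the rate coefficient in m^3 s^-1 (multiply by 1e6 for cm^3 s^-1).
    """
    E = np.asarray(E_cm1, dtype=float) * CM1_TO_J
    f = np.asarray(sigma_m2, dtype=float) * E * np.exp(-E / (KB * T))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))   # trapezoidal rule
    return np.sqrt(8.0 / (np.pi * mu_kg * KB**3 * T**3)) * integral
```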
In all these calculations, the spin-rotation couplings of SH$^+$ have not been included, and therefore the present set of rate coefficients cannot be directly used to model interstellar SH$^+$ spectra where the fine and hyperfine structure is resolved.
Infinite order sudden (IOS) calculations
----------------------------------------
In this section, we describe how the state-to-state fine and hyperfine rate coefficients for the SH$^+$–H collisional system were computed with the IOS approach [@Goldflam:77; @Faure:12], using the above $k_{0 \to N'}(T)$ rate coefficients as “fundamental” rate coefficients (those out of the lowest level).
For SH$^+$ in its ground electronic $^{3}\Sigma^{-}$ state, the molecular energy levels can be described in the Hund’s case (b) limit[^1]. The fine structure levels are labeled by $Nj$, where $j$ is the total molecular angular momentum quantum number with $\vec{j}=\vec{N}+\vec{S}$. $\vec{S}$ is the electronic spin. For molecules in a $^{3}\Sigma^{-}$ state, $S=1$. Hence, three kinds of levels ($j=N-1$, $j=N$ and $j=N+1$) exist, except for the $N=0$ rotational level which is a single level.
The hydrogen atom also possesses a non-zero nuclear spin ($I=1/2$). The coupling between $\vec{I}$ and $\vec{j}$ results in a splitting of each level into two hyperfine levels (except for the $N=1,j=0$ level which is split into only one level). Each hyperfine level is designated by a quantum number $F$ ($\vec{F}=\vec{I}+\vec{j}$) varying between $| I - j |$ and $I + j$.
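A quick way to enumerate these levels, and to recover the 31 fine and 61 hyperfine levels quoted in the abstract (assuming $N \le 10$, as considered in this work), is sketched below.

```python
from fractions import Fraction

S_e, I_n = 1, Fraction(1, 2)   # electronic spin of SH+(X3Sigma-), nuclear spin of H

fine, hyperfine = [], []
for N in range(0, 11):                          # N = 0 ... 10
    for j in range(abs(N - S_e), N + S_e + 1):  # j = N-1, N, N+1 (only j = 1 for N = 0)
        fine.append((N, j))
        for two_F in range(abs(2 * j - 1), 2 * j + 2, 2):   # F = |j - I| ... j + I
            hyperfine.append((N, j, Fraction(two_F, 2)))

print(len(fine), len(hyperfine))   # -> 31 61
```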
Using the IOS approximation, rate coefficients among fine structure levels can be obtained from the $k_{0 \to L} (T)$ “fundamental” rate coefficients using the following formula [e.g. @corey83]: $$\begin{aligned}
\label{REEQ}
k^{IOS}_{Nj \to N'j'} (T) & = &
(2N+1)(2N'+1)(2j'+1) \sum_{L}\nonumber \\
& & \left(\begin{array}{ccc}
N' & N & L \\
0 & 0 & 0
\end{array}\right)^{2}
\left\{\begin{array}{ccc}
N & N' & L \\
j' & j & S
\end{array}
\right\}^2 \nonumber \\
& & \times k_{0 \to L} (T) \end{aligned}$$ where $\left( \quad \right)$ and $\left\{ \quad \right\}$ are respectively the “3-j” and “6-j” symbols. In the usual IOS approach, $k_{0 \to L}(T)$ is calculated for each collision angle. Here, however, we use the $k_{0 \to L}(T)$ rate coefficients of Eq. (\[thermal\_average\]) obtained with a more accurate quantum method.
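Equation \[REEQ\] is straightforward to evaluate with standard angular-momentum routines. A minimal sketch using SymPy's Wigner symbols is given below; `k_fund` is a hypothetical dictionary holding the fundamental rates $k_{0\to L}(T)$, and $(N,j)$, $(N',j')$ are assumed to be valid $^{3}\Sigma^{-}$ fine-structure levels.

```python
from sympy.physics.wigner import wigner_3j, wigner_6j

def k_ios_fine(N, j, Np, jp, k_fund, S_spin=1):
    """IOS fine-structure rate of Eq. (REEQ) for a valid pair of 3Sigma- levels.

    k_fund : dict {L: k_{0->L}(T)} of fundamental rotational rate coefficients.
    """
    total = 0.0
    for L, kL in k_fund.items():
        w3 = wigner_3j(Np, N, L, 0, 0, 0)
        if w3 == 0 or not (abs(j - jp) <= L <= j + jp):   # vanishing 3j/6j symbol
            continue
        w6 = wigner_6j(N, Np, L, jp, j, S_spin)
        total += float(w3**2 * w6**2) * kL
    return (2 * N + 1) * (2 * Np + 1) * (2 * jp + 1) * total
```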
The hyperfine resolved rate coefficients can also be obtained from the fundamental rate coefficients as follows [@Faure:12]: $$\begin{aligned}
\label{REEQ2}
k^{IOS}_{NjF \to N'j'F'} (T) & = &
(2N+1)(2N'+1)(2j+1)(2j'+1)\nonumber \\
& & \times (2F'+1) \sum_{L} \left(\begin{array}{ccc}
N' & N & L \\
0 & 0 & 0
\end{array}\right)^{2} \nonumber \\
& & \left\{\begin{array}{ccc}
N & N' & L \\
j' & j & S
\end{array}
\right\}^2
\left\{\begin{array}{ccc}
j & j' & L \\
F' & F & I
\end{array}
\right\}^2 \nonumber \\
& & \times k_{0 \to L} (T) \end{aligned}$$
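The hyperfine analogue, Eq. \[REEQ2\], only adds the extra statistical weights and the second 6-j symbol; continuing the sketch above (half-integer $F$ values are passed as exact rationals, e.g. `Rational(7, 2)` for $F=3.5$):

```python
from sympy import Rational
from sympy.physics.wigner import wigner_3j, wigner_6j

def k_ios_hyperfine(N, j, F, Np, jp, Fp, k_fund, S_spin=1, I_spin=Rational(1, 2)):
    """IOS hyperfine rate of Eq. (REEQ2); F, Fp should be Rational half-integers."""
    total = 0.0
    for L, kL in k_fund.items():
        if not (abs(j - jp) <= L <= j + jp and abs(F - Fp) <= L <= F + Fp):
            continue                                    # a 6j symbol vanishes
        w3 = wigner_3j(Np, N, L, 0, 0, 0)
        if w3 == 0:
            continue
        w6a = wigner_6j(N, Np, L, jp, j, S_spin)
        w6b = wigner_6j(j, jp, L, Fp, F, I_spin)
        total += float(w3**2 * w6a**2 * w6b**2) * kL
    weights = (2*N + 1) * (2*Np + 1) * (2*j + 1) * (2*jp + 1) * (2*Fp + 1)
    return float(weights) * total
```

For instance, `k_ios_hyperfine(2, 3, Rational(7, 2), 1, 2, Rational(5, 2), k_fund)` would give the $N=2,j=3,F=3.5 \to N'=1,j'=2,F'=2.5$ rate under these assumptions.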
In addition, we note that the fundamental excitation rates $k_{0\to L}(T)$ were in practice replaced by the de-excitation fundamental rates using the detailed balance relation: $$k_{0\to L}(T)=(2L+1)\,k_{L\to 0}(T)\,e^{-\frac{\varepsilon_L}{k_BT}}$$ where $$k_{L \to 0}(T) = k_{0 \to L}(T) \frac{1}{2L+1} e^{\frac{\varepsilon_L}{k_BT}}$$ and $\varepsilon_{L}$ is the energy of the rotational level $L$.
This procedure was indeed found to significantly improve the results at low temperatures due to important threshold effects.
The fine and hyperfine splittings of the rotational states are of a few cm$^{-1}$ and of a few 0.001 cm$^{-1}$, respectively and can be neglected compared to the collision energy at $T>30-50$ K so that the present approach is expected to be reasonably accurate for all the temperature range considered in this work. [@Lique:16] have investigated the accuracy of the IOS approach in the case of OH$^+$–H collisions. It was shown to be reasonably accurate (within a factor of 2), even at low temperature so that we can anticipate a similar accuracy for the present collisional system. In addition, we note that with the present approach, some fine and hyperfine rate coefficients are strictly zero. This selection rule is explained by the “3-j” and “6-j” Wigner symbols that vanish for some transitions. Using a more accurate approach, these rate coefficients will not be strictly zero but will generally be smaller than the other rate coefficients.
Results
=======
Using the computational methodology described above, we have generated fine and hyperfine resolved rate coefficients for the SH$^+$–H collisional system using the doublet and quartet pure rotational rate coefficients in order to provide the astrophysical community with the first set of data for the SH$^+$–H collisional system. In all the calculations, we have considered all the SH$^+$ energy levels with $N$, $N' \le 10$ and we have included in the calculations all the fundamental rate coefficients with $L \le 12$. The complete set of (de)excitation rate coefficients is available on-line from the LAMDA [@schoier:05] and BASECOL [@Dubernet:13] websites.
Fine and hyperfine structure excitation
---------------------------------------
The thermal dependence of the fine structure resolved state-to-state SH$^+$–H rate coefficients is illustrated in Fig. \[fig2\] for selected $N=2,j \to N'=1,j'$ transitions.
![Temperature variation of the fine structure resolved de-excitation rate coefficients for the SH$^+$ molecule in collision with H for selected $N=2,j \to N'=1,j'$ transitions.[]{data-label="fig2"}](fig2.eps){width="9cm"}
The temperature variation of the de-excitation rate coefficients is relatively smooth except at low temperature ($T < 50 $K) where they increase rapidly. The weak temperature dependence of the rate coefficients (except at low temperature) could have been anticipated, on the basis of Langevin theory for ion–neutral interactions.
A strong propensity rule exists for $\Delta j = \Delta N$ transitions. Such a $\Delta j = \Delta N$ propensity rule was predicted theoretically [@alexander:83] and is general for molecules in a $^{3}\Sigma^{-}$ electronic state. It was also observed previously for the O$_{2}$(X$^3\Sigma^-$)–He [@lique:10], NH(X$^3\Sigma^-$)–He [@Tobola:11] and OH$^+$–H [@Lique:16] collisions.
Figure \[fig3\] presents the temperature variation of the hyperfine structure resolved state-to-state SH$^+$–H rate coefficients for selected $N=2,j=3,F=3.5 \to N'=1,j',F'$ transitions.
![Temperature variation of the hyperfine structure resolved de-excitation rate coefficients for the SH$^+$ molecule in collision with H for the $N=2,j=3,F=3.5 \to N'=1,j',F'$ transitions.[]{data-label="fig3"}](fig3.eps){width="9cm"}
For $\Delta j = \Delta N$ transitions, we have a strong propensity rule in favor of $\Delta j= \Delta F$ hyperfine transitions . This trend is the usual trend for open-shell molecules [@alexander:85; @dumouchel:12; @kalugina12; @Lique:16]. For $\Delta j \ne \Delta N$ transitions, it is much more difficult to find a clear propensity rule. The final distribution seems to be governed by two rules: the rate coefficients show propensity in favor of $\Delta j= \Delta F$ transitions, but are also proportional to the degeneracy ($2F' + 1$) of the final hyperfine level as already found for CN–para-H$_2$ system [@kalugina12].
Comparison with SH$^+$–H$_2$ rate coefficients
----------------------------------------------
We then compare the new SH$^+$–H rate coefficients with those reported recently for the hyperfine excitation of SH$^+$ by H$_2$ [@Dagdigian:19]. The SH$^+$ molecule has been observed in media where both atomic and molecular hydrogen are significant colliding partners, and this comparison should allow us to evaluate the impact of the different collisional partners.
In Fig. \[fig4\], we compare the SH$^+$–H and SH$^+$–H$_2$ (both para- and ortho-H$_2$) rate coefficients for a selected number of transitions.
![Comparison between SH$^+$–H and SH$^+$–H$_2$ (both para- and ortho-H$_2$) rate coefficients for a selected number of hyperfine ($N=2,j=3,F=3.5 \to N'=1,j',F'$) transitions.[]{data-label="fig4"}](fig4.eps){width="9cm"}
In astrophysical applications, when collisional data are not available, it is very common to derive them from rate coefficients calculated for the same molecule in collision with another partner. Such an approach [@lique08b] consists in assuming that the excitation cross sections are similar for both colliding systems and that the rate coefficients differ only by a scaling factor due to the reduced mass that appears in Eq. \[thermal\_average\]. Hence, the following scaling relationship can be used: $$k^{{\rm H}} \simeq 1.4 \times k^{{\rm H_2}}$$
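The factor of 1.4 simply reflects the $\mu^{-1/2}$ dependence of Eq. \[thermal\_average\] under the assumption of identical cross sections; a short check with approximate masses (in atomic mass units) reads:

```python
m_SHp, m_H, m_H2 = 33.0, 1.008, 2.016          # approximate masses, amu

mu_H  = m_SHp * m_H  / (m_SHp + m_H)           # SH+ - H reduced mass
mu_H2 = m_SHp * m_H2 / (m_SHp + m_H2)          # SH+ - H2 reduced mass
print((mu_H2 / mu_H) ** 0.5)                   # ~1.39, i.e. the factor of 1.4
```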
One can see that, at low temperatures, the rate coefficients for collisions with H do not have the highest magnitude as expected from the scaling relationships. They can even be one order of magnitude weaker. We also note that the differences between H and H$_2$ rate coefficients depend on the transitions and on the temperature leading to the impossibility of extrapolating accurate H collisional data from H$_2$ collisional data, or the reverse. Hence, it confirms that it is unrealistic to estimate unknown collisional rate coefficients by simply applying a scaling factor to existing rate coefficients. This result was previously observed for water [@daniel:15] and ammonia [@Bouhafs:17].
However, when the temperature increases, the agreement gets better and scaling techniques would lead to a reasonable estimate of the H or H$_2$ rate coefficients for temperatures above 500 K.
Summary and Conclusion
======================
The fine and hyperfine excitation of SH$^+$ by H has been investigated. We have obtained fine and hyperfine resolved rate coefficients for transitions involving the lowest levels of SH$^+$ for temperatures ranging from 10 to 1000 K. The fine structure resolved rate coefficients present a strong propensity rule in favor of $\Delta j = \Delta N$ transitions. The $\Delta j= \Delta F$ propensity rule is observed for the hyperfine transitions.
Since SH$^+$ can be observed from ground-based observatories [@Muller:14], in the Milky Way and beyond [@Muller:17], we expect that these new data will significantly help in the accurate interpretation of SH$^+$ rotational emission spectra from dense PDRs and massive proto-stars, enabling this molecular ion to act as a tracer of the energetics of these regions and of the first steps of the sulfur chemistry.
We acknowledge the French-Spanish collaborative project PICS (Ref. PIC2017FR7). F.L. acknowledges financial support from the European Research Council (Consolidator Grant COLLEXISM, grant agreement 811363), the Institut Universitaire de France and the Programme National ”Physique et Chimie du Milieu Interstellaire” (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. The research leading to these results has received funding from MICIU under grants No. FIS2017-83473-C2 and AYA2017-85111-P. N.B. acknowledges the computing facilities provided by TUBITAK-TRUBA. This work was performed using HPC resources from GENCI-CINES (Grant A0070411036).
[46]{} natexlab\#1[\#1]{}
Aguado, A. & Paniagua, M. 1992, J. Chem. Phys., 96, 1265
Aguado, A., Tablero, C., & Paniagua, M. 1998, Comput. Physics Commun., 108, 259
, M., [Goicoechea]{}, J. R., [Cernicharo]{}, J., [Faure]{}, A., & [Roueff]{}, E. 2010, , 713, 662
Alexander, M. H. 1985, Chem. Phys., 92, 337
, M. H. & [Dagdigian]{}, P. J. 1983, , 79, 302
, A. O., [Bruderer]{}, S., [van Dishoeck]{}, E. F., [et al.]{} 2016, , 590, A105
, A. O., [Bruderer]{}, S., [van Dishoeck]{}, E. F., [et al.]{} 2010, , 521, L35
Bouhafs, N., Rist, C., Daniel, F., [et al.]{} 2017, Monthly Notices of the Royal Astronomical Society, 470, 2204
, N., [Lique]{}, F., & [Roncero]{}, O. 2015, Journal of Physical Chemistry A, 119, 12082
Corey, G. C. & McCourt, F. R. 1983, J. Phys. Chem, 87, 2723
, S., [Salas]{}, P., [Goicoechea]{}, J. R., [et al.]{} 2019, , 625, L3
, P. J. 2019, , 487, 3427
, F., [Faure]{}, A., [Dagdigian]{}, P. J., [et al.]{} 2015, , 446, 2312
Davidson, E. R. 1975, J. Comput. Phys., 17, 87
, B. & [Lique]{}, F. 2020, , 152, 074303
, M.-L., [Alexander]{}, M. H., [Ba]{}, Y. A., [et al.]{} 2013, , 553, A50
, F., [K[ł]{}os]{}, J., [Tobo[ł]{}a]{}, R., [et al.]{} 2012, , 137, 114306
, A. & [Lique]{}, F. 2012, , 425, 740
, M., [Neufeld]{}, D. A., & [Goicoechea]{}, J. R. 2016, , 54, 181
, B., [Falgarone]{}, E., [Gerin]{}, M., [et al.]{} 2012, , 540, A87
, J. R., [Cuadrado]{}, S., [Pety]{}, J., [et al.]{} 2017, , 601, L9
, J. R., [Pety]{}, J., [Cuadrado]{}, S., [et al.]{} 2016, , 537, 207
, R., [Kouri]{}, D. J., & [Green]{}, S. 1977, , 67, 5661
Gordy, W. & Cook, R. L. 1984, Microwave molecular spectra (Wileys and sons)
, J. R., [Faure]{}, A., & [Tennyson]{}, J. 2018, , 476, 2931
, Y., [Lique]{}, F., & [K[ł]{}os]{}, J. 2012, , 422, 812
, F. 2010, , 132, 044311
, F., [Bulut]{}, N., & [Roncero]{}, O. 2016, , 461, 4477
Lique, F., Spielfiedel, A., Dubernet, M. L., & Feautrier, N. 2005, J. Chem. Phys., 123, 134316
, F., [Tobo[ł]{}a]{}, R., [K[ł]{}os]{}, J., [et al.]{} 2008, , 478, 567
, K. M., [Wyrowski]{}, F., [Belloche]{}, A., [et al.]{} 2011, , 525, A77
, H. S. P., [Goicoechea]{}, J. R., [Cernicharo]{}, J., [et al.]{} 2014, , 569, L5
, S., [M[ü]{}ller]{}, H. S. P., [Black]{}, J. H., [et al.]{} 2017, , 606, A109
, Z., [Van der Tak]{}, F. F. S., [Ossenkopf]{}, V., [et al.]{} 2013, , 550, A96
, E. & [Lique]{}, F. 2013, Chem. Rev., 113, 8906
, F. L., [van der Tak]{}, F. F. S., [van Dishoeck]{}, E. F., & [Black]{}, J. H. 2005, , 432, 369
Skouteris, D., Castillo, J. F., & Manolopoulos, D. E. 2000, Comput. Phys. Commun., 133, 128
, L. & [Alexander]{}, M. H. 2007, , 127, 114301
, R., [Dumouchel]{}, F., [K[ł]{}os]{}, J., & [Lique]{}, F. 2011, , 134, 024305
, G., [Halvick]{}, P., [Honvault]{}, P., [Kerkeni]{}, B., & [Stoecklin]{}, T. 2015, , 143, 114304
Werner, H.-J. & Knowles, P. J. 1985, J. Chem. Phys., 82, 5053
Werner, H.-J. & Knowles, P. J. 1988, J. Chem. Phys., 89, 5803
Werner, H.-J., Knowles, P. J., Knizia, G., Manby, F. R., & Sch[ü]{}tz, M. 2012, WIREs Comput Mol Sci, 2, 242
, A., [Ag[ú]{}ndez]{}, M., [Herrero]{}, V. J., [Aguado]{}, A., & [Roncero]{}, O. 2013, , 146, 125
, A., [Lique]{}, F., [Roncero]{}, O., [Goicoechea]{}, J. R., & [Bulut]{}, N. 2019, , 626, A103
Zanchet, A., Roncero, O., Gonz[á]{}lez-Lezana, T., [et al.]{} 2009, J. Phys. Chem. A, 113, 14488
[^1]: For $^{3}\Sigma^{-}$ electronic ground state molecules, the energy levels are usually described in the intermediate coupling scheme [@Gordy:84; @lique:05]. However, the use of IOS scattering approach implies to use the Hund’s case (b) limit.
---
abstract: 'We have observed 13 methanol maser sources associated with massive star-forming regions; W3(OH), Mon R2, S 255, W 33A, IRAS 18151$-$1208, G 24.78$+$0.08, G 29.95$-$0.02, IRAS 18556$+$0136, W 48, OH 43.8$-$0.1, ON 1, Cep A and NGC 7538 at 6.7 GHz using the Japanese VLBI Network (JVN). Twelve of the thirteen sources were detected at our longest baseline of $\sim $50 M$\lambda $, and their images are presented. Seven of them are the first VLBI images at 6.7 GHz. This high detection rate and the small fringe spacing of $\sim$4 milli-arcsecond suggest that most of the methanol maser sources have compact structure. Given this compactness as well as the known properties of long-life and small internal-motion, this methanol maser line is suitable for astrometry with VLBI.'
author:
- |
Koichiro <span style="font-variant:small-caps;">Sugiyama</span>, Kenta <span style="font-variant:small-caps;">Fujisawa</span>, Akihiro <span style="font-variant:small-caps;">Doi</span>, Mareki <span style="font-variant:small-caps;">Honma</span>, Hideyuki <span style="font-variant:small-caps;">Kobayashi</span>,\
Takeshi <span style="font-variant:small-caps;">Bushimata</span>, Nanako <span style="font-variant:small-caps;">Mochizuki</span>, and Yasuhiro <span style="font-variant:small-caps;">Murata</span>
title: |
Mapping Observations of 6.7 GHz Methanol Masers\
with Japanese VLBI Network
---
Introduction {#section:introduction}
============
The class II methanol masers are well-known as tracers of early stages of high-mass star formation ([@1998MNRAS.301..640W]; ; [@2006ApJ...638..241E]). Classes I and II are defined on the basis of associated sources ([@1987Natur.326...49B]; [@1991ASPC...16..119M]); different pumping mechanisms are proposed for them ([@1992MNRAS.259..203C], [@1997MNRAS.288L..39S]). The class II masers are represented by 6.7 and 12.2 GHz lines ([@1991ApJ...380L..75M]; [@1987Natur.326...49B]). The spot size of class II masers is several AU ([@1992ApJ...401L..39M]; [@1999ApJ...519..244M]), while the class I masers are resolved out with Very Long Baseline Interferometric (VLBI) technique [@1998AAS...193.7101L].
VLBI observations for astrometry with hydroxyl, water and methanol masers have been made using the phase-referencing technique. The accuracy achieved with hydroxyl masers is $\sim$1 milli-arcsecond (mas) , while that achieved with water masers is up to a few tens of micro-arcseconds ($\mu $as) [@2006ApJ...645..337H]. @2006Sci...311...54X have also achieved a positional accuracy of $\sim $10 $\mu $as with the methanol maser at 12.2 GHz. This has been the only astrometric observation with methanol masers so far. Internal proper motions of water masers at 22 GHz are typically a few mas per year (e.g., [@1981ApJ...247.1039G], ), and this motion sometimes prevents the separation of the annual parallax from the internal proper motion. The internal proper motions of class II methanol masers are small and have been measured only for W3(OH) at 12.2 GHz [@2002ApJ...564..813M]. The lifetime of 22 GHz water masers is sometimes too short for measuring the annual parallax. For example, several spots disappear over timescales of 1 month for Cepheus A and IRAS 21391$+$5802 ([@2001ApJ...560..853T], ; [@2000ApJ...538..268P]). A four-year monitoring of the variability of the 6.7 GHz methanol line showed that each spectral feature survives regardless of some variability [@2004MNRAS.355..553G]; namely, the lifetime of this maser is usually long enough for measuring the annual parallax. There are 519 sites of 6.7 GHz methanol maser emission in a list compiled by . Recently, 48 new sources have been detected using the 305 m Arecibo radio telescope [@2007ApJ...656..255P]. For the above reasons, i.e., compactness, small internal proper motions, long lifetimes and the large number of known sources, class II methanol masers may also be a useful probe for astrometry.
The VLBI Exploration of Radio Astrometry (VERA; [@Kobayashi_etal.2003]) is a dedicated VLBI network for astrometry, which mainly observes galactic water masers for measuring their distance and motion. Methanol masers at 6.7 GHz would also contribute to revealing the Galactic structure, as water masers do. The spot size of the methanol maser should be small enough for high precision astrometry. The spot size, however, has been studied in only a few cases so far. The number of sources imaged with baselines of $\geq $50 M$\lambda$ at 6.7 GHz was only four ([@1992ApJ...401L..39M]; ; ; ). discussed the size and structure of individual masing regions in detail based on their 12.2 and 6.7 GHz observations. They showed that the majority of the masing regions consist of a compact maser core surrounded by extended emission (halo), and derived core sizes of 2 to 20 AU.
We have started investigations of methanol masers using the Japanese VLBI Network (JVN) for astrometry. The network is a newly-established one with 50–2560 km baselines across the Japanese islands [@2006astro.ph.12528D] and consists of ten antennas, including four radio telescopes of the VERA. We have installed 6.7 GHz receivers on three telescopes of the JVN and have made a snap-shot imaging survey toward thirteen sources associated with massive star-forming regions. The aims of this observation are to investigate the detectability of 6.7 GHz masers with our longest baseline of $\sim$50 M$\lambda$, corresponding to a fringe spacing of $\sim$4 mas, and to verify the imaging capability of this network.
In this paper, we describe the detail of this observation and data reduction in section \[section:observation\]. In section \[section:result\] we present the images of the detected sources and report the results on each individual source. Finally, we discuss the properties of this maser for astrometry in section \[section:discussion\], based on the results of this observation.
| Source | R.A. | Decl. | Ref. |  | $d$ (kpc) |  |
|:---|:---|:---|:---:|---:|:---:|:---|
| W3(OH) | 02 27 03.820 | 61 52 25.40 | 10 | 3294 | 1.95 | 1 |
| Mon R2 | 06 07 47.867 | $-$06 22 56.89 | 4 | 104 | 0.83 | 6, 7 |
| S 255 | 06 12 54.024 | 17 59 23.01 | 6 | 79 | 2.5 | 6, 7 |
| W 33A | 18 14 39.52 | $-$17 51 59.7 | 13 | 297 | 4.0 | … |
| IRAS 18151$-$1208 | 18 17 58.07 | $-$12 07 27.2 | 13 | 119 | 3.0 | 8 |
| G 24.78$+$0.08 | 18 36 12.57 | $-$07 12 11.4 |  | 84 | 7.7 | … |
| G 29.95$-$0.02 | 18 46 03.741 | $-$02 39 21.43 | 6 | 182 | 9.0 | … |
| IRAS 18556$+$0136 | 18 58 13.1 | 01 40 35 | 12 | 191 | 2.0 | … |
| W 48 | 19 01 45.5 | 01 13 28 | 3 | 733 | 3.4 | 6 |
| OH 43.8$-$0.1 | 19 11 53.987 | 09 35 50.308 | 2 | 51 | 2.8 | … |
| ON 1 | 20 10 09.1 | 31 31 34 | 12 | 107 | 1.8 | … |
| Cep A | 22 56 17.903 | 62 01 49.65 | 13 | 371 | 0.73 | … |
| NGC 7538 | 23 13 45.364 | 61 28 10.55 | 6 | 256 | 2.8 | 5, 6, 7, 9, 11 |
Observations and Data Reduction {#section:observation}
===============================
VLBI Observation {#section:VLBI}
----------------
The $ 5_1\rightarrow6_0A^+ $ methanol transition at 6668.518 MHz was observed on 2005 September 26 from 5:00 to 21:00 UT using three telescopes (Yamaguchi 32 m, VERA-Mizusawa 20 m, VERA-Ishigaki 20 m) of the JVN. The maximum fringe spacing was 9.1 mas (Yamaguchi–Mizusawa, 22 M$\lambda $) and the minimum was 4.1 mas (Mizusawa–Ishigaki, 50 M$\lambda $). Right-circular polarization was received at Yamaguchi with system noise temperature of 220 K, while linear polarization was received at Mizusawa and Ishigaki stations with system noise temperatures of 120 K. The data were recorded on magnetic tapes using the VSOP-terminal system at a data rate of 128 Mbps with 2-bit quantization and 2 channels, and correlated at the Mitaka FX correlator [@Shibata_etal.1998]. From the recorded 32 MHz bandwidth, 8 MHz (6664 MHz to 6672 MHz) was divided into 1024 channels and used for data analysis, yielding a velocity resolution of 0.35 km s$^{-1}$.
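The quoted spectral and angular resolutions follow directly from this setup; a small sketch of the arithmetic is given below (the percent-level offsets with respect to the quoted fringe spacings come from the rounded baseline lengths in M$\lambda$).

```python
import math

C_KMS = 299792.458                      # speed of light, km/s
NU0 = 6668.518e6                        # 5_1 -> 6_0 A+ methanol rest frequency, Hz

dnu = 8.0e6 / 1024                      # channel width: 8 MHz over 1024 channels
print(C_KMS * dnu / NU0)                # ~0.35 km/s velocity resolution

RAD_TO_MAS = math.degrees(1.0) * 3600e3 # radians -> milli-arcseconds
for B_lambda in (22e6, 50e6):           # projected baselines in wavelengths
    print(RAD_TO_MAS / B_lambda)        # ~9.4 and ~4.1 mas fringe spacings
```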
Sources that are bright and have widespread velocity range were selected as targets of this observation. The following thirteen sources were observed; W3(OH), Mon R2, S 255, W 33A, IRAS 18151$-$1208, G 24.78$+$0.08, G 29.95$-$0.02, IRAS 18556$+$0136, W 48, OH 43.8$-$0.1, ON 1, Cep A and NGC 7538. The details of each source are shown in table \[tab:table1\]. Scans of 15 minutes duration were made for 2-4 times on each source at different hour angles in order to improve uv-coverages. The number of scans for each source is shown in table \[tab:table2\]. Strong continuum sources, NRAO 530, 3C 454.3, and 3C 84, were observed every two hours for bandpass and delay calibration.
The data were reduced using the Astronomical Image Processing System (AIPS; [@Greisen2003]). Correlator digitization errors were corrected using the task ACCOR. Clock offsets and clock rate offsets were corrected using strong continuum calibrators in the task FRING. Bandpass calibration was performed using strong continuum calibrators in the task BPASS. Doppler corrections were made by running the tasks SETJY and CVEL. Amplitude calibration parameters were derived from the total-power spectra of maser lines using template-method in the task ACFIT. Fringe-fitting was conducted using one spectral channel including a strong maser feature in the task FRING. The solutions were applied to all the other channels.
We have searched maser spots over an area of $\times $ with the Difmap software [@Shepherd1997]. Structure models were made by model-fitting with point sources and self-calibration algorithms iteratively. The phase solutions of self-calibration were applied to all the other channels.
In addition to the analyses described above, special amplitude calibrations were necessary because different polarizations (circular/linear) were correlated. Since the visibility amplitude is reduced by a factor of $1/\sqrt{2}$ in a correlation between linear and single circular polarization, this factor was corrected. For the case of linear–linear correlation, the amplitude varies with time depending on the position angle difference between the antennas. Amplitude correction factors were calculated for each observational scan and applied to the visibilities; the correction was made baseline by baseline for each observational scan. The correction procedure was confirmed with observations made on different dates by applying the task BLCAL to bright continuum sources. This process indicated that the accuracy of this calibration was $\sim $10 [%]{}.
Spectroscopic Single-Dish Observation {#section:single}
-------------------------------------
We also made a series of single-dish observations of the 6.7 GHz methanol masers with the Yamaguchi 32 m telescope. The single-dish observations were made about one month before the VLBI observation (August 2005). In this paper, we use these spectra as the total-power (not cross-correlated) spectra for comparison with the cross-correlated spectra. The accuracy of the correlated flux density depends on the accuracy of the single-dish observation.
The received signal in dual circular polarizations, with a bandwidth of 4 MHz each, was divided into 4096 channels, yielding a velocity resolution of 0.044 km s$^{-1}$. The flux density calibration was made using an aperture efficiency of 70 [%]{} and a system noise temperature (220 K) measured on the first day of the single-dish observation. The accuracy of the calibration was 10 [%]{}. The rms noise was typically 1.0 Jy for data combined from the dual polarizations with a 14 minute integration. We used source positions listed in the IRAS Point Source Catalog (PSC; [@1994yCat.2125....0J]). The error in the LSR velocity was potentially $\pm 0.3$ km s$^{-1}$.
Results {#section:result}
=======
All the targets except for S 255 were detected. The channel-velocity maps of the detected sources are presented in figures \[fig:fig1\]–\[fig:fig13\]. Only the total-power spectrum is shown in figure \[fig:fig3\] for S 255. The maps indicate the positions of the maser spots relative to that of the reference spot; the size and color of a spot represent its flux density on a logarithmic scale and its radial velocity, respectively. Correlated spectra (total CLEANed components) are shown in addition to total-power spectra for each source. The rms of image noise in a line-free channel ranges from 200 to 670 mJy beam$^{-1}$. The minimum detectable sensitivities of 7 $\sigma $ were in the range from 1.4 to 4.7 Jy beam$^{-1}$. The maximum dynamic range was 196 for W3(OH). The projected baseline ranged from 11 to 50 M$\lambda $ for all sources except for W 33A, IRAS 18151$-$1208 and G 24.78$+$0.08 (7 to 50 M$\lambda $). The flux ratios of the correlated to the total spectra at the peak channel of the total spectra, and those of the integrated spectra, are shown in columns 7 and 8 of table \[tab:table2\], respectively. The ratio of the correlated flux density at the longest baseline of 50 M$\lambda $ to that at zero baseline (total power) was 20 [%]{} on average. We describe the results on each individual source below in Right Ascension order.
| Source | $N_{\mathrm{scan}}$ | $\theta_{\mathrm{maj}} \times \theta_{\mathrm{min}}$ (mas $\times$ mas) | $P.A.$ (deg) | $\sigma$ (Jy beam$^{-1}$) | $V_{\mathrm{ref}}$ (km s$^{-1}$) | $S_{\mathrm{VLBI}}^{\mathrm{p}}/S^{\mathrm{p}}$ (%) | $S_{\mathrm{VLBI}}/S$ (%) |
|:---|:---:|:---:|---:|---:|---:|---:|---:|
| W3(OH) | 3 | $5.5 \times 2.3$ | 131 | 0.24 | $-$45.46 | 31 | 45 |
| Mon R2 | 2 | $19.3 \times 2.1$ | 140 | 0.30 | 10.64 | 24 | 33 |
| S 255 | 2 | $19.4 \times 2.1$ | 144 | … | … | … | … |
| W 33A | 3 | $12.6 \times 2.5$ | 138 | 0.37 | 39.69 | 45 | 51 |
| IRAS 18151$-$1208 | 3 | $10.9 \times 2.5$ | 138 | 0.63 | 27.83 | 84 | 208 |
| G 24.78$+$0.08 | 3 | $9.5 \times 2.4$ | 135 | 0.28 | 113.43 | 33 | 27 |
| G 29.95$-$0.02 | 2 | $19.5 \times 2.2$ | 139 | 0.46 | 96.10 | 86 | 78 |
| IRAS 18556$+$0136 | 3 | $9.4 \times 2.5$ | 135 | 0.32 | 28.57 | 37 | 36 |
| W 48 | 3 | $12.2 \times 3.0$ | 99 | 0.67 | 42.45 | 68 | 91 |
| OH 43.8$-$0.1 | 2 | $16.2 \times 2.2$ | 139 | 0.36 | 39.48 | 18 | 30 |
| ON 1 | 4 | $4.3 \times 2.5$ | 112 | 0.23 | $-$0.07 | 4 | 27 |
| Cep A | 4 | $4.1 \times 2.6$ | 87 | 0.62 | $-$2.56 | 86 | 65 |
| NGC 7538 | 4 | $3.9 \times 2.5$ | 95 | 0.20 | $-$56.10 | 69 | 30 |
W3(OH) {#section:W3(OH)}
------
W3(OH) (figure \[fig:fig1\]) is a well-studied star-forming region at a distance of 1.95 $ \pm $ 0.04 kpc [@2006Sci...311...54X] containing a hot molecular core (HMC) and an UC H region [@1984ApJ...287L..81T]. The central object is thought to be an O9$ - $O7 young star with an estimated mass of $ \simeq $ $ 30~\MO $ [@1981ApJ...245..857D]. W3(OH) has been imaged with the Multi-Element Radio-Linked Interferometer Network (MERLIN, [@2005MNRAS.360.1162E]; [@2006MNRAS.tmpL..65V]; [@2006MNRAS.371.1550H]) and VLBI array at 6.7 GHz [@1992ApJ...401L..39M], and with the VLBA at 12.2 GHz ([@1988ApJ...333L..83M]; [@1999ApJ...519..244M], , ; [@2006Sci...311...54X]).
In our observation, 48 maser spots were detected. The velocity of the reference feature is $-$45.46 km s$^{-1}$ ($-$45.37 km s$^{-1}$ in [@2005MNRAS.360.1162E]). We detected the maser clusters 1, 5, 6 and 7 defined by @1992ApJ...401L..39M, and found a new spot at $-$46.51 km s$^{-1}$, 180 mas west of the reference feature. There are 38 spots within the 200 mas area of cluster 6. The other three clusters are located to the west, south, and south of the main cluster, respectively.
Mon R2 {#section:MonR2}
------
The Monoceros R2 (Mon R2, figure \[fig:fig2\]) molecular cloud has a cluster of seven bright infrared sources [@1976ApJ...208..390B]. The cluster is one of the closest massive star forming regions to the solar system at a distance of 830 pc ([@1968AJ.....73..233R]; [@1976AJ.....81..840H]). The 6.7 GHz methanol maser of Mon R2 has been observed with the Australia Telescope Compact Array (ATCA, [@1998MNRAS.301..640W]) and with the European VLBI Network (EVN, , ). Although there are several peaks in the total-power spectrum, our VLBI observation detected only one spectral feature around 10.64 km s$^{-1}$ as eight maser spots. These eight spots ($V_\mathrm{lsr}=$ 10.29 to 11.34) correspond to ’C’ defined by @1998MNRAS.301..640W, and are also identified with five of the fourteen spots detected by .
Since this source shows an on-going flux variation, the single-dish spectrum was largely different from that of previous observations. It seems that some spectral features disappeared and some others appeared during 1992 to 2005 ([@1995MNRAS.272...96C]; ).
S 255 {#section:S255}
-----
S 255, located at a distance of 2.5 kpc, includes one UC H region, G 192.58-0.04 [@1994ApJS...91..659K]. S 255 has been observed with the EVN at 6.7 GHz (, ), and seven spots were detected with the longest projected baselines of $\sim $30 M$\lambda $. We observed this source and correlated the data using the coordinates adopted by , (), but no spot was detected. This source might be resolved out with our baselines. The single-dish spectrum observed with the Yamaguchi 32 m telescope is shown in figure \[fig:fig3\].
W 33A {#section:W33A}
-----
W 33A (figure \[fig:fig4\]) is a highly luminous object ($ L=1{\times}10^5~\LO $, [@1984ApJ...283..573S]) and coincides with deeply embedded massive young stars [@2000ApJ...537..283V]. The kinematic distance based on CS and $ \textrm{C}^{34}\textrm{S} $ observations is 4 kpc [@2000ApJ...537..283V]. A map consisting of eleven maser spots has been obtained with the ATCA [@1998MNRAS.301..640W].
Our observation, the first VLBI observation for this source, detected 19 maser spots which correspond to ’F’, ’G’, and ’K’ defined by the ATCA observation. The total-power spectrum has several peaks, but the weak spectral peaks were not detected in the VLBI map. This source consists of two clusters, which are separated by more than 1000 AU.
IRAS 18151$-$1208 {#section:18151}
-----------------
IRAS 18151$-$1208 (figure \[fig:fig5\]) is embedded in a high-density cloud and thought to be in a pre-UC H phase . The kinematic distance based on CS line observations is 3.0 kpc . Thirteen maser spots were detected in our observation. This source has been observed with the ATCA and with the EVN [@2002evn..conf..213V]. A spot at 27.83 km s$^{-1}$, 137 mas south of the main cluster, is a newly detected one.
G 24.78$+$0.08 {#section:G24}
--------------
G 24.78$+$0.08 (figure \[fig:fig6\]) is a cluster of massive protostars at a distance of 7.7 kpc . A pair of cores associated with a compact bipolar outflow have been detected . @2004ApJ...601L.187B detected rotating disks associated with high-mass YSOs by observing the 1.4 mm continuum and CH$_{3}$CN ($J=$ 12–11) line emission. Our observation provides the first VLBI image of the 6.7 GHz methanol masers for this source. This source consists of three clusters separated by more than 3000 AU.
G 29.95$-$0.02 {#section:G29}
--------------
G 29.95$-$0.02 (figure \[fig:fig7\]) is at a distance of 9 kpc [@1989ApJ...340..265W] and coincides with an NH$_{3} $ hot core, but is offset by a few arcsec from the continuum peak of a UC H region . The methanol masers of G 29.95$-$0.02 have been observed with the ATCA at 6.7 GHz [@1998MNRAS.301..640W] and with the VLBA at 12.2 GHz (, ). Our observation at 6.7 GHz provides the first VLBI image for this source, in which fourteen spots were detected. Twelve of them were clustered within 10 mas, while the other two spots were isolated to the east of the main cluster. The two clusters detected in our map correspond to ’M’ and ’G’ defined by @1998MNRAS.301..640W, respectively. There are several spectral features between 95 and 105 km s$^{-1}$, in contrast to the 12.2 GHz spectrum; most of the weak features could not be detected by our VLBI observation.
IRAS 18556$ + $0136 {#section:18556}
-------------------
IRAS 18556$ + $0136 (figure \[fig:fig8\]) is at a distance of 2 kpc [@1982MNRAS.201..121B] and is associated with a CO outflow . Our VLBI observation is the first one for this source, and six maser spots were detected. This source consists of two clusters which are separated by more than 5000 AU. Although there are several spectral peaks, most of the weak features could not be detected.
W 48 {#section:W48}
----
W 48 (figure \[fig:fig9\]) is a well-known H region at a distance of 3.4 kpc . This source has a bright UC H region that could be a site of on-going massive star formation [@1989ApJ...340..265W]. W 48 has been observed with the EVN at 6.7 GHz and with the VLBA at 12.2 GHz . The maser emission of this source is strong and the spectrum covers a wide velocity range; 24 spots forming a ring-like structure were detected, as observed by . A spot at 43.86 km s$^{-1}$, 34 mas north and 55 mas west of the reference spot, is newly detected by this observation.
OH 43.8$-$0.1 {#section:OH43.8}
-------------
OH 43.8$-$0.1 (figure \[fig:fig10\]) is a star-forming region at a distance of 2.8 $ \pm $ 0.5 kpc [@2005PASJ...57..595H]. This star-forming region coincides with IRAS 19095$+$0930, which is an UC H region [@1994ApJS...91..659K]. This is the first VLBI observation at 6.7 GHz for this source, and we detected nine maser spots. A spot at 43.00 km s$^{-1}$ was located to the east and south of the reference spot.
ON 1 {#section:ON1}
----
Onsala 1 (ON 1, figure \[fig:fig11\]) is an UC H region located in the densest part of the Onsala molecular cloud [@1983ApJ...266..580I]. This source is known to be associated with a massive star-forming region and IRAS 20081$+$3122. A kinematic distance of 1.8 kpc is used by @1998AJ....116.1897M and . Our observation is the first VLBI observation at 6.7 GHz, and seven maser spots were detected. The redshifted and blueshifted clusters are separated by about 1700 AU from each other. It is surprising that, although the redshifted cluster corresponds to the narrow ($\sim$0.5 km s$^{-1}$) spectral maximum with a flux density of 107 Jy, the correlated flux density is only 4.6 Jy. The flux ratio of the correlated to the total spectrum is 4.3 [%]{}. On the other hand, the blueshifted cluster corresponds to a relatively weak ($\sim$20 Jy) and wide spectral peak, for which the flux ratio is about 50 [%]{}.
Cep A {#section:CepA}
-----
Cepheus A (Cep A, figure \[fig:fig12\]) is a CO condensation at a distance of 730 pc [@1957ApJ...126..121J]. One of the massive star-forming regions in Cep A is an UC H region, CepA-HW 2 [@1984ApJ...276..204H]. Cep A has been observed at 12.2 GHz with the VLBA (, ), while our observation is the first VLBI observation at 6.7 GHz. We found 30 maser spots, 20 of which had $V_\mathrm{lsr}$ ranging from $-$1.15 to $-$3.26 km s$^{-1}$, while the other ten spots had $V_\mathrm{lsr}$ ranging from $-$3.61 to $-$4.67 km s$^{-1}$. It is notable that the spots in the redshifted cluster are aligned in a linear structure. The blueshifted cluster is located to the east of the redshifted cluster.
NGC 7538 {#section:NGC7538}
--------
NGC 7538 (figure \[fig:fig13\]) is a star-forming region at a distance of 2.8 kpc ([@1982ApJS...49..183B]; [@1984ApJ...279..650C]) including at least 11 high-luminosity infrared sources (NGC 7538 IRS 1$-$11), which are probably young massive stars [@1990ApJ...355..562K]. The central star has been classified as O6 ([@1976ApJ...206..728W]; [@1984ApJ...279..650C]), so the luminosity of the central source is $ 8.3{\times}10^4~\LO $ and the mass is $ \simeq 30~\MO $. It is known that this object is associated with an UC H region that has been observed with the Very Large Array (VLA, [@1984ApJ...282L..27C]; [@1995ApJ...438..776G]). The methanol maser of NGC 7538 has been observed with the MERLIN and EVN at 6.7 GHz (; , ; [@2004ApJ...603L.113P], ), and with the VLBA at 12.2 GHz (; , ). The distribution of maser spots in our map corresponds to the IRS 1 region . The map is in good agreement with that of . The redshifted cluster spots, with $V_\mathrm{lsr}$ ranging from $-$56.45 to $-$55.75 km s$^{-1}$, are aligned in a linear structure, as in previous studies (; [@2004ApJ...603L.113P]). The spectral features at $-$53.07 km s$^{-1}$ and $-$48.99 km s$^{-1}$, corresponding to the infrared sources IRS 11 and IRS 9 , respectively, were not detected.
Discussions {#section:discussion}
===========
We have presented maps of twelve sources of methanol maser emission, and seven of them are the first VLBI results at 6.7 GHz. The spatial distributions of the maser spots show various morphologies, such as linear structures (Cep A, NGC 7538), a ring-like structure (W 48), and widely separated clusters (W3(OH), W 33A, G 24.78$+$0.08, G 29.95$-$0.02, IRAS 18556$+$0136, OH 43.8$-$0.1, ON 1, Cep A).
We discuss the properties of the methanol maser at 6.7 GHz as a possible probe for astrometry, in comparison with the methanol maser at 12.2 GHz and the water maser at 22 GHz. We have detected twelve out of thirteen methanol masers at 6.7 GHz with the longest baseline of 50 M$\lambda $ of our array. The integrated flux density of the correlated spectra accounts for $\sim$50 [%]{} of the total flux, and for some sources it accounts for more than 90 [%]{}. This high detection rate, flux recovery, and the small fringe spacing of 4 mas suggest that most of the methanol maser emission has compact structure. This result is consistent with a previous study for W3(OH) by @1992ApJ...401L..39M. We also showed that the correlated flux density at 50 M$\lambda$ is typically 20 [%]{} of the total flux density. The sizes of the maser spots inferred from this flux ratio vary from 2 to 30 AU (at distances from 0.73 to 9 kpc of the sources). This is consistent with the core size of 2 to 20 AU obtained by . The velocity range of typically 10 km s$^{-1}$ [@1995MNRAS.272...96C] is narrow compared to that of water masers. Given this compactness and narrow velocity range, as well as the known long lifetimes and small internal motions, this methanol maser line is suitable for astrometry with VLBI.
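As a rough check of the quoted size range (an indicative estimate only, assuming each spot is a single circular Gaussian, which ignores the core–halo structure mentioned above), the typical correlated-to-total flux ratio of 20 [%]{} at 50 M$\lambda$ translates into the following angular and linear sizes:

```python
import math

ratio, B = 0.2, 50e6     # typical correlated/total flux ratio at 50 Mlambda
theta = math.sqrt(-4.0 * math.log(2) * math.log(ratio)) / (math.pi * B)  # FWHM, rad
theta_arcsec = math.degrees(theta) * 3600
print(theta_arcsec * 1e3)                    # ~2.8 mas angular FWHM

for d_pc in (730.0, 9000.0):                 # nearest and farthest sample distances
    print(theta_arcsec * d_pc)               # linear size in AU: ~2 and ~25 AU
```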
From an observational point of view, atmospheric fluctuation is a significant problem at higher frequencies. Strong absorption by water vapor in the atmosphere makes observations at 22 GHz relatively difficult in summer, which would be a potential problem for annual parallax measurements. This is not the case for observations at 6.7 GHz.
Astrometric VLBI observations use continuum reference sources, which are usually distant quasars. Such continuum sources typically show decreasing power-law spectra, so their flux densities are larger at 6.7 GHz than at 12.2 GHz or 22 GHz. This property makes astrometric observations easier in terms of the detectability of reference sources.
The 6.7 GHz line might have some disadvantages due to its lower frequency in comparison with the 12.2 GHz line. Interstellar scintillation broadens the apparent sizes of the maser and reference sources; this broadening is stronger at lower frequencies and might affect the measurement of precise source positions. In addition, ionospheric density fluctuations change the path length and consequently affect the phase measurements at lower frequencies. These effects scale with frequency as $\nu^{-2}$, i.e., they are 3.3 times larger at 6.7 GHz than at 12.2 GHz. Although previous work argued that the interstellar broadening is not significant on scales larger than 1 mas, we have to take these effects into account for astrometry.
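For reference, the factor of 3.3 quoted above follows directly from the $\nu^{-2}$ scaling:
$$\left(\frac{\nu_{12.2}}{\nu_{6.7}}\right)^{2} = \left(\frac{12.2~\mathrm{GHz}}{6.7~\mathrm{GHz}}\right)^{2} \simeq 3.3 .$$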
We have made a phase-referencing VLBI observation with the JVN and achieved a positional accuracy of $\sim$50 $\mu $as at 8.4 GHz and an image dynamic range of $\sim$50 on a target [@2006PASJ...58..777D]. The Bigradient Phase Referencing (BPR) technique was used for this observation, with a nearby reference calibrator. The following equation (\[equ:equ1\]) can be used to estimate the accuracy $\Delta \pi$ of the annual parallax, $$\begin{aligned}
\Delta \pi = \frac{\theta_{\mathrm{beam}}}{D
\cdot \sqrt{N_{\mathrm{spot}} \cdot N_{\mathrm{obs}}}}
\label{equ:equ1}\end{aligned}$$ where $\theta_{\mathrm{beam}}$ is the minimum fringe spacing, $D$ is the dynamic range of the image, $N_{\mathrm{spot}}$ is the number of spots used in measuring the annual parallax, and $N_{\mathrm{obs}}$ is the number of observations. If we make a series of observations of nine methanol maser spots over five epochs, as in the observation by @2006Sci...311...54X, an accuracy of $\sim$12 $\mu $as in annual parallax is expected.
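Explicitly, substituting the values discussed in this section ($\theta_{\mathrm{beam}} \simeq 4$ mas, $D \simeq 50$, $N_{\mathrm{spot}} = 9$, $N_{\mathrm{obs}} = 5$; this substitution is our own illustration of equation (\[equ:equ1\]), not a calculation spelled out above) gives
$$\Delta \pi \simeq \frac{4~\mathrm{mas}}{50\sqrt{9 \times 5}} \simeq 12~\mu\mathrm{as},$$
in agreement with the expected accuracy quoted above.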
The flux density at 6.7 GHz is typically $\sim$10 times larger than that at 12.2 GHz [@1995MNRAS.274.1126C], and the number of observable sources at 6.7 GHz is much larger than at 12.2 GHz. For these practical reasons we chose the 6.7 GHz line over the 12.2 GHz one.
We have started to improve the JVN at 6.7 GHz in terms of sensitivity and the number of stations in order to detect weak masers and reference sources. The Usuda 64 m antenna is one of the newly participating telescopes. Assuming an aperture efficiency of 50 [%]{} and $T_{\mathrm{sys}}$ of 50 K for all telescopes, the expected sensitivity for fringe detection (7 $\sigma $) is better than 5 Jy. Since the correlated flux density at 50 M$\lambda $ is typically 20 [%]{} of the total flux density, sources with total flux densities larger than 25 Jy would be potential targets for astrometric observations. More than 150 such sources are listed in the published catalog.
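As a rough cross-check of the 5 Jy fringe-detection threshold, the sketch below evaluates the standard two-element thermal-noise formula under the stated assumptions (50 [%]{} aperture efficiency, $T_{\mathrm{sys}} = 50$ K). The partner-antenna diameter, spectral channel width, and coherent integration time are illustrative assumptions of ours rather than values given in the text, so the resulting 7 $\sigma$ limit should be read only as an order-of-magnitude consistency check.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]
JY = 1e-26           # 1 Jy in W m^-2 Hz^-1

def sefd_jy(diameter_m, t_sys_k=50.0, efficiency=0.5):
    """System equivalent flux density of a single dish [Jy]."""
    a_eff = efficiency * math.pi * (diameter_m / 2.0) ** 2
    return 2.0 * K_B * t_sys_k / a_eff / JY

def baseline_rms_jy(sefd1, sefd2, bandwidth_hz, t_int_s, eta_s=0.88):
    """Thermal noise on a single baseline [Jy]; eta_s is a typical
    quantization/system efficiency (assumed, not from the text)."""
    return math.sqrt(sefd1 * sefd2 / (2.0 * bandwidth_hz * t_int_s)) / eta_s

sefd_usuda = sefd_jy(64.0)    # ~86 Jy for Usuda 64 m under the stated assumptions
sefd_other = sefd_jy(25.0)    # hypothetical ~25 m partner antenna
sigma = baseline_rms_jy(sefd_usuda, sefd_other,
                        bandwidth_hz=2e3,   # assumed ~2 kHz spectral channel
                        t_int_s=200.0)      # assumed coherent integration time
print(f"SEFD(64 m) ~ {sefd_usuda:.0f} Jy, SEFD(25 m) ~ {sefd_other:.0f} Jy")
print(f"7-sigma baseline detection limit ~ {7.0 * sigma:.1f} Jy")
# With these illustrative numbers the 7-sigma limit is ~2 Jy, comfortably
# below the <5 Jy quoted above.
```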
The authors wish to thank the JVN team for observing assistance and support. The JVN project is led by the National Astronomical Observatory of Japan (NAOJ) that is a branch of the National Institutes of Natural Sciences (NINS), Hokkaido University, Gifu University, Yamaguchi University, and Kagoshima University, in cooperation with Geographical Survey Institute (GSI), the Japan Aerospace Exploration Agency (JAXA), and the National Institute of Information and Communications Technology (NICT).
Bartkiewicz, A., Szymczak, M., & van Langevelde, H. J. 2005, , 442, L61
Batrla, W., Matthews, H. E., Menten, K. M., & Walmsley, C. M. 1987, , 326, 49
Beckwith, S., Evans, N. J., II, Becklin, E. E., & Neugebauer, G. 1976, , 208, 390
Beltr[á]{}n, M. T., Cesaroni, R., Neri, R., Codella, C., Furuya, R. S., Testi, L., & Olmi, L. 2004, , 601, L187
Beuther, H., Walsh, A., Schilke, P., Sridharan, T. K., Menten, K. M., & Wyrowski, F. 2002, , 390, 289
Blitz, L., Fich, M., & Stark, A. A. 1982, , 49, 183
Brand, J., & Blitz, L. 1993, , 275, 67
Bronfman, L., Nyman, L.-A., & May, J. 1996, , 115, 81
Brown, A. T., Little, L. T., MacDonald, G. H., & Matheson, D. N. 1982, , 201, 121
Campbell, B. 1984, , 282, L27
Campbell, B., & Thompson, R. I. 1984, , 279, 650
Caswell, J. L., Vaile, R. A., Ellingsen, S. P., & Norris, R. P. 1995b, , 274, 1126
Caswell, J. L., Vaile, R. A., Ellingsen, S. P., Whiteoak, J. B., & Norris, R. P. 1995a, , 272, 96
Cesaroni, R., Hofner, P., Walmsley, C. M., & Churchwell, E. 1998, , 331, 709
Cragg, D. M., Johns, K. P., Godfrey, P. D., & Brown, R. D. 1992, , 259, 203
Davis, C. J., Varricatt, W. P., Todd, S. P., & Ramsay Howat, S. K. 2004, , 425, 981
Dent, W. R. F., Little, L. T., Kaifu, N., Ohishi, M., & Suzuki, S.1985, , 146, 375
Doi, A., et al. 2006a, ArXiv Astrophysics e-prints, arXiv:astro-ph/0612528
Doi, A., et al. 2006b, , 58, 777
Dreher, J. W., & Welch, W. J. 1981, , 245, 857
Ellingsen, S. P. 2006, , 638, 241
Etoka, S., Cohen, R. J., & Gray, M. D. 2005, , 360, 1162
Forster, J. R., & Caswell, J. L. 1989, , 213, 339
Furuya, R. S., Cesaroni, R., Codella, C., Testi, L., Bachiller, R., & Tafalla, M. 2002, , 390, L1
Gaume, R. A., Goss, W. M., Dickel, H. R., Wilson, T. L., & Johnston, K. J. 1995, , 438, 776
Genzel, R., et al. 1981a, , 247, 1039
Genzel, R., Reid, M. J., Moran, J. M., & Downes, D. 1981b, , 244, 884
Goedhart, S., Gaylard, M. J., & van der Walt, D. J. 2004, , 355, 553
Goddi, C., Moscadelli, L., Sanna, A., Cesaroni, R., & Minier, V. 2007, , 461, 1027
Greisen, E. W. 2003, Information Handling in Astronomy - Historical Vistas, 109
Hachisuka, K., et al. 2006, , 645, 337
Harvey-Smith, L., & Cohen, R. J. 2006, , 371, 1550
Herbst, W., & Racine, R. 1976, , 81, 840
Honma, M., et al. 2005, , 57, 595
Hughes, V. A., & Wouterloot, J. G. A. 1984, , 276, 204
Israel, F. P., & Wootten, H. A. 1983, , 266, 580
Johnson, H. L. 1957, , 126, 121
Joint Iras Science, W. G. 1994, VizieR Online Data Catalog, 2125, 0
Kameya, O., Morita, K.-I., Kawabe, R., & Ishiguro, M. 1990, , 355, 562
Kobayashi, H., et al. 2003, Astronomical Society of the Pacific Conference Series, 306, 367
Kumar, M. S. N., Tafalla, M., & Bachiller, R. 2004, , 426, 195
Kurtz, S., Churchwell, E., & Wood, D. O. S. 1994, , 91, 659
Lonsdale, C. J., et al. 1998, Bulletin of the American Astronomical Society, 30, 1355
MacLeod, G. C., Scalise, E. J., Saedt, S., Galt, J. A., & Gaylard, M. J. 1998, , 116, 1897
Menten, K. M., Reid, M. J., Moran, J. M., Wilson, T. L., Johnston, K. J., & Batrla, W. 1988, , 333, L83
Menten, K. 1991a, ASP Conf. Ser. 16: Atoms, Ions and Molecules: New Results in Spectral Line Astrophysics, 16, 119
Menten, K. M. 1991b, , 380, L75
Menten, K. M., Reid, M. J., Pratap, P., Moran, J. M., & Wilson, T. L. 1992, , 401, L39
Mezger, P. G., Chini, R., Kreysa, E., Wink, J. E., & Salter, C. J. 1988, , 191, 44
Minier, V., Booth, R. S., & Conway, J. E. 1998, , 336, L5
Minier, V., Booth, R. S., & Conway, J. E. 2000, , 362, 1093
Minier, V., Conway, J. E., & Booth, R. S. 2001, , 369, 278
Minier, V., Booth, R. S., & Conway, J. E. 2002, , 383, 614
Moscadelli, L., Menten, K. M., Walmsley, C. M., & Reid, M. J. 1999, , 519, 244
Moscadelli, L., Menten, K. M., Walmsley, C. M., & Reid, M. J. 2002, , 564, 813
Moscadelli, L., Menten, K. M., Walmsley, C. M., & Reid, M. J. 2003, , 583, 776
Pandian, J. D., Goldsmith, P. F., & Deshpande, A. A. 2007, , 656, 255
Patel, N. A., Greenhill, L. J., Herrnstein, J., Zhang, Q., Moran, J. M., Ho, P. T. P., & Goldsmith, P. F. 2000, , 538, 268
Pestalozzi, M. R., Elitzur, M., Conway, J. E., & Booth, R. S. 2004, , 603, L113
Pestalozzi, M. R., Minier, V., & Booth, R. S. 2005, , 432, 737
Pestalozzi, M. R., Minier, V., Motte, F., & Conway, J. E. 2006, , 448, L57
Racine, R. 1968, , 73, 233
Shepherd, M. C. 1997, ASP Conf. Ser. 125: Astronomical Data Analysis Software and Systems VI, 125, 77
Shibata, K. M., Kameno, S., Inoue, M., & Kobayashi, H. 1998, ASP Conf. Ser. 144: IAU Colloq. 164: Radio Emission from Galactic and Extragalactic Compact Sources, 144, 413
Sobolev, A. M., Cragg, D. M., & Godfrey, P. D. 1997, , 288, L39
Stier, M. T., et al. 1984, , 283, 573
Szymczak, M., Hrynek, G., & Kus, A. J. 2000, , 143, 269
Torrelles, J. M., et al. 2001a, , 560, 853
Torrelles, J. M., et al. 2001b, , 411, 277
Turner, J. L., & Welch, W. J. 1984, , 287, L81
Vallee, J. P., & Avery, L. W. 1990, , 233, 553
van der Tak, F. F. S., van Dishoeck, E. F., Evans, N. J., II, & Blake, G. A. 2000, , 537, 283
Vlemmings, W. H. T., Harvey-Smith, L., & Cohen, R. J. 2006, , L65
Vlemmings, W. H. T., van Langevelde, H. J., Diamond, P. J., Habing, H. J., & Schilizzi, R. T. 2003, , 407, 213
Voronkov, M. A., Slysh, V. I., Palagi, F., & Tofani, G. 2002, Proceedings of the 6th EVN Symposium, 213
Walsh, A. J., Burton, M. G., Hyland, A. R., & Robinson, G. 1998, , 301, 640
Willner, S. P. 1976, , 206, 728
Wood, D. O. S., & Churchwell, E. 1989, , 340, 265
Xu, Y., Reid, M. J., Zheng, X. W., & Menten, K. M. 2006, Science, 311, 54
![image](figure1.eps){width="160mm"}
![image](figure2.eps){width="160mm"}
![image](figure3.eps){width="80mm"}
![image](figure4.eps){width="160mm"}
![image](figure5.eps){width="160mm"}
![image](figure6.eps){width="160mm"}
![image](figure7.eps){width="160mm"}
![image](figure8.eps){width="160mm"}
![image](figure9.eps){width="160mm"}
![image](figure10.eps){width="160mm"}
![image](figure11.eps){width="160mm"}
![image](figure12.eps){width="160mm"}
![image](figure13.eps){width="160mm"}