Dataset schema:
  subfolder     stringclasses (367 values)
  filename      stringlengths (13-25)
  abstract      stringlengths (1-39.9k)
  introduction  stringlengths (0-316k)
  conclusions   stringlengths (0-229k)
  year          int64 (0-99)
  month         int64 (1-12)
  arxiv_id      stringlengths (8-25)
1607
1607.03806_arXiv.txt
We study the conditions for the development of a sigmoid structure under the influence of the magnetic non-potential characteristics of a rotating sunspot in active region (AR) 12158. Vector magnetic field measurements from the Helioseismic and Magnetic Imager and coronal EUV observations from the Atmospheric Imaging Assembly reveal that the erupting inverse-S sigmoid had its roots at the location of the rotating sunspot. The sunspot rotates at a rate of 0-5 deg/h, with an increasing trend in the first half of the interval followed by a decrease. The time evolution of many non-potential parameters corresponds well with the sunspot rotation. The evolution of the AR magnetic structure is approximated by a time series of force-free equilibria. The NLFFF magnetic structure around the sunspot reproduces the observed sigmoid. Field lines from the sunspot periphery constitute the body of the sigmoid, and those from the interior overlie the sigmoid, similar to a flux-rope structure. While the sunspot was rotating, two major CME eruptions occurred in the AR. During the first (second) event, the coronal current concentrations were enhanced (reduced), consistent with the photospheric net vertical current; however, magnetic energy was released in both cases. These results suggest that the magnetic connections of the sigmoid are driven by the slow motion of the sunspot rotation, transforming into a highly twisted flux-rope structure in a dynamical scenario. Exceeding the critical twist in the flux rope probably leads to a loss of equilibrium, thus triggering the onset of the two eruptions.
\label{Intro} It is generally believed that major solar eruptions, including flares and coronal mass ejections, are powered by the free energy stored in the stressed magnetic fields of the so-called active regions (ARs). These stressed fields transport magnetic energy and helicity during the evolution of ARs primarily through flux emergence from the sub-photosphere and footpoint shearing motions at the photosphere. Among the many important features, sunspot rotations are an uncommonly observed form of motion, sometimes lasting for days, during the evolution of ARs \citep{evershed1910,bhatnagar1967, mcintosh1981, brown2003, zhangj2007}, and are suggested to be efficient mechanisms to inject helicity and energy (e.g., \citealt{stenflo1969, barnes1972, amari1996, tokman2002, torok2003}). With the increase of observational capabilities in both sensitivity and resolution, sunspot rotation has drawn considerable attention in attempts to explain its characteristics in association with transient activity. A majority of the observational studies examined the relationship between sunspot rotation and coronal consequences \citep{brown2003, tian2006, tian2008}, flare productivity \citep{yanxl2008, zhangy2008,suryanarayana2010}, the association of flares with abnormal rotation rates \citep{hiremath2003, jiangy2012}, non-potential parameters \citep{zhangj2007, kazachenko2009, vemareddy2012b}, helicity injection \citep{vemareddy2012a}, etc. \begin{figure*}[!htp] \centering \includegraphics[width=.97\textwidth,clip=]{fig1} \caption{Association of the sigmoid structure with the sunspot rotation in AR 12158. First column: snapshots of the coronal sigmoid in composite temperatures prepared from the AIA 94, 335, 193\AA~channels. The rectangular region indicates the rotating sunspot hosting the roots of the sigmoid; contours of $B_z$ ($\pm150$G) are overlaid to identify the photospheric connections of the sigmoid. Second column: vector magnetograms of the rectangular region, showing the magnetic field in the rotating sunspot. The background is the vertical field component and arrows show the direction and magnitude of the horizontal field. Third column: horizontal velocity field overplotted on the HMI continuum intensity map of the rotating sunspot. Note the anticlockwise-oriented velocity vectors in the western portion due to the sunspot rotation. } \label{Fig1} \end{figure*} Numerical MHD investigations have also greatly helped in understanding the relationship between sunspot rotation and eruptive activity by studying the formation and evolution of flux ropes through the twisting of line-tied potential fields \citep{mikic1990, amari1996, galsgaard1997, garrard2002}. The underlying idea of these simulations is to show that photospheric vortex motions can twist the core magnetic field in an active region up to a point where equilibrium can no longer be maintained, and thus the twisted core field, i.e., the flux rope, erupts \citep{tokman2002,torok2003, aulanier2010, amari2010}. Once the twist exceeds a critical value, the flux rope is subject to the helical kink instability \citep{torok2005}. Depending on the decay rate of the restoring force exerted by the overlying field, the progressive injection of twist into the underlying flux rope is shown to produce either a confined flare or a CME. As a secondary possibility, the twisting motions could also weaken the stabilizing field overlying the flux rope.
A recent numerical model by \citet{torok2013} demonstrates that a rotating sunspot can act as a trigger by inflating the field passing over a pre-existing flux rope, thereby weakening the downward tension force of the overlying field. Note, however, that the twisting motions can twist both the overlying field and the flux rope, because there is no purely current-free field to stabilise the entire flux-rope system. A recent observational analysis (e.g., \citealt{vemareddy2014b}) indicates that the kink instability could initiate the eruption by bringing the flux rope into the height range of the inflating field, from where the eruption is further driven by the torus instability. Although the above proof-of-concept simulations strikingly explain and reproduce many observed features of eruptions, few observational studies exist that trace the formation scenario of the flux rope in an active region hosting a rotating sunspot. In the present paper, we study the conditions for the development of a sigmoid structure under the influence of the non-potential characteristics of a rotating sunspot in an active region. Using uninterrupted, high-cadence magnetic field observations of AR 11158 at the photosphere, \citet{vemareddy2012b} reported an unambiguous correspondence of sunspot rotation with many non-potential parameters, including the energy and helicity deposition rates. In that AR, the occurrence of the major flares and CMEs was shown to be co-temporal with the peak rotation rates of the sunspots \citep{jiangy2012, vemareddy2015a}. Importantly, the observed characteristics of those non-potential parameters could have origins in sub-photospheric twist, because AR 11158 was still emerging. To establish the cause-effect relation, it is therefore of great interest to investigate a case of sunspot rotation in the post-emergence phase of an AR, which is the subject of this article. Motivated by these studies, we model the AR magnetic structure with non-linear force-free approximations and examine the coronal field topology and current distribution for evidence of a flux rope. Observations are outlined in section~\ref{sec2}; the results, including the measurement of the sunspot rotation, the non-potential characteristics, and the force-free extrapolation, are described in section~\ref{sec3}. A summary of the results with a discussion is presented in section~\ref{sec4}.
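For reference, the helicity injection through the photospheric surface $S$ that such non-potential analyses track is commonly estimated with the Berger \& Field decomposition (written here only schematically; sign and normalization conventions vary between studies),
\begin{equation}
\left.\frac{dH}{dt}\right|_{S} = 2\int_{S}\left(\mathbf{A}_{p}\cdot\mathbf{B}_{t}\right)v_{z}\,dS - 2\int_{S}\left(\mathbf{A}_{p}\cdot\mathbf{v}_{t}\right)B_{z}\,dS,
\end{equation}
where $\mathbf{A}_{p}$ is the vector potential of the reference potential field; the first term captures flux emergence and the second the horizontal (shearing and rotational) motions, through which sunspot rotation injects helicity.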
\label{sec4} In this work, we investigated the relation of sunspot rotation to the major eruptions that occurred in its vicinity. Vortex-like motions have been modelled as potential triggers of eruptions in ARs \citep{amari1996, torok2003, torok2013} by progressively twisting the line-tied foot points. In particular, they are involved in the formation or development of twisted flux ropes and sigmoids by injecting twist and energy into the AR magnetic structure (e.g., \citealt{ruang2014}). We found that the location of the sunspot rotation hosted the magnetic roots of the erupting sigmoid (associated with two CME eruptions) that existed along the PIL between the sunspot and the surrounding opposite polarity in AR 12158 \citep{chengx2015}. As in earlier reports \citep{vemareddy2012b}, the sunspot rotation is clearly reflected in many non-potential parameters (Figure~\ref{Fig3}) during the evolution of the AR. Unlike the earlier cases, this AR is in the post-emergence phase with decreasing flux content, which reveals a direct role of the observed sunspot rotation, a surface phenomenon of sub-photospheric origin, in the two major sigmoid eruptions. Since the motion driven by the sunspot rotation is slow (of the order of 1 km/s), the evolution of the magnetic structure can be regarded as quasi-static, and therefore the evolution is approximated by a time series of force-free equilibria (e.g., \citealt{xudong2013, vemareddy2014a}). Under this scenario, utilizing HMI 12-minute-cadence vector magnetic field observations, the AR magnetic structure is reconstructed with an NLFFF model, which reproduces the global structure in resemblance to the coronal EUV plasma structure. The modeled magnetic structure around the rotating sunspot appears as a fan-like sheared arcade, manifesting the observed sigmoid. Acknowledging the difficulty of working with noisy observations \citep{hoeksema2014}, and of tracing the same structure in all time snapshots, the modeled field indicates signatures of accumulating strong coronal current concentrations and of a building sigmoid at different times. While the sunspot was observed to be rotating, a moderate CME eruption occurred at 23:00 UT on September 8, 2014. During the onset of the eruption, the AIA multi-thermal observations conspicuously show a continuous trace of a hot flux rope embedded in the middle of the cooler ambient sheared structure \citep{zhangj2012, chengx2015, vemareddy2014b}. The eruption is a partial one, in which the flare reconnection takes place slowly, and accordingly a low-speed CME associated with a long-duration M4.6 flare is observed. Consistent with the photospheric measurement of the net vertical current from before to after the eruption, an increased coronal current concentration is observed across the sigmoid, a consequence of the twisting by the sunspot rotation. Correspondingly, the estimated free-energy change during the eruption is small, which we attribute to an averaging effect, because energy is released locally during the field reconfiguration. A second CME erupted from the AR at 17:30 UT on September 10, 2014. The CME is a halo event heading toward Earth at a high speed (1014 km/s) and follows an X1.6 flare. The appearance of a continuous flux rope amid the sigmoid is evident during the onset of the eruption. Consistent with the net vertical current during this eruption, the coronal current concentrations decreased across the sigmoid, and the free-energy estimate indicates a release of $1.44\times10^{31}$ erg, which is sufficient for an X-class flare.
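For clarity, the quantities quoted above follow the standard definitions: the photospheric vertical current density is derived from the horizontal components of the vector magnetograms as
\begin{equation}
J_{z} = \frac{1}{\mu_{0}}\left(\frac{\partial B_{y}}{\partial x} - \frac{\partial B_{x}}{\partial y}\right),
\end{equation}
with the net vertical current obtained by integrating $J_{z}$ over a given magnetic polarity, and the free magnetic energy is the excess of the NLFFF energy over that of the corresponding potential field,
\begin{equation}
E_{\rm free} = E_{\rm NLFFF} - E_{\rm pot} = \frac{1}{2\mu_{0}}\int_{V}\left(B_{\rm NLFFF}^{2} - B_{\rm pot}^{2}\right)dV .
\end{equation}
(These are the standard expressions; the integration domains adopted in the analysis are not restated here.)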
These results suggest that the magnetic connections of the sigmoid are driven by the slow motion of the sunspot rotation, developing into a highly twisted flux-rope structure in a dynamical scenario. Exceeding the critical twist in the flux rope explains the loss of equilibrium that triggers the onset of the observed eruptions. Although the NLFFF extrapolation performs well in reproducing the highly twisted structure around the sunspot, given the limitations of both the observations and the model, realizing a clear flux-rope structure, which is dynamic in nature, seems to be a difficult task. Data-driven MHD models (e.g., \citealt{wust2006,jiangc2012, jiangc2014}) would help to better explain the observed features of the eruptions under the influence of sunspot rotation, and will be the subject of our future investigations.
16
7
1607.03806
1607
1607.01803_arXiv.txt
{In this paper we investigate induced inflation with two flat regions: a Starobinsky-like plateau in the big-field regime and a shorter plateau around the saddle point of the Einstein-frame potential. This multi-phase inflationary scenario can be used to solve the problems of classical cosmology. Inflation at the saddle-point plateau is consistent with the data and can have an arbitrarily low scale. The results can be useful in the context of Higgs-Axion relaxation, and in a certain limit they are equivalent to the $\alpha$-attractors.}
\label{sec:Introduction} Cosmic inflation \cite{Lyth:1998xn,Liddle:2000dt,Mazumdar:2010sa} is a well-established theory of the early universe, in good agreement with the data \cite{Ade:2015lrj}. It predicts accelerated expansion of space together with a flat power spectrum of the primordial inhomogeneities of the cosmic microwave background, which has been measured by several experiments. In particular, the latest data from the PLANCK experiment put stronger constraints on the tensor-to-scalar ratio $r$, which tells us about the amplitude of primordial gravitational waves and about the scale of inflation. The PLANCK results favour plateau-like potentials (for which the energy density of the potential is bounded from above by the scale of inflation) over big-field models, such as $m^2 \phi^2$ or other chaotic inflationary models (for which the potential is not bounded from above). \\* Examples of models with a plateau are Starobinsky inflation \cite{Starobinsky:1980te,Barrow:1988xh}, Higgs inflation \cite{Bezrukov:2007ep}, and their generalisations \cite{Codello:2014sua,vandeBruck:2015xpa,Artymowski:2014gea,Ben-Dayan:2014isa,Motohashi:2014tra}. In addition, in Ref. \cite{Kallosh:2013hoa} the authors claim that in a particular form of supergravity a flat plateau appears for almost any scalar potential, which suggests that inflation from a plateau may be natural to obtain. In this paper we investigate one of the most general forms of a potential with a plateau, namely induced inflation \cite{Kallosh:2013tua,Kallosh:2014laa,Giudice:2014toa,Kallosh:2014rha}. In this class of models the relation between the non-minimal coupling to gravity $f(\varphi) R$ and the Jordan-frame potential $U(\varphi)$ provides the flat region of the Einstein-frame potential in the $f \gg 1$ limit. \\* In Ref. \cite{Ijjas:2013vea} the authors argue that the suppression of plateau-like inflationary potentials leads to serious fine-tuning of the initial conditions of the pre-inflationary universe. The reason is the following: if one assumes the Planck scale as the scale of the initial conditions of the universe, then the Einstein-frame potential cannot make any significant contribution to the initial energy density, simply because the scale of the plateau is at least 10 orders of magnitude smaller than $M_p^4$. In order to avoid the domination of inhomogeneities, the energy density of which decreases more slowly than the kinetic energy of the field or the radiation energy density, one needs to assume that the pre-inflationary universe contained a homogeneous region consisting of $\sim10^9$ causally disconnected Hubble horizons. \\* A possible solution to that issue could be an additional inflationary era around the Planck scale, which would smooth out the Universe and thereby prepare it for inflation around the GUT scale. This approach can be seen in Ref. \cite{Hamada:2014wna}, where the pre-inflationary phase is generated by topological defects. Another example of a potential that could solve the problem of initial conditions for inflation comes from $\alpha$-attractors \cite{Carrasco:2015rva}, where besides the Starobinsky plateau one obtains a second plateau, possibly at the Planck scale. In both cases the first phase of inflation happens close to the Planck scale, so there is no hierarchy gap between the scale of inflation and the scale of the initial conditions. In this paper we obtain a similar result using induced inflation with two plateaus.
\\* The arguments mentioned above are not fully accepted by the scientific community. For instance, in Ref. \cite{Gorbunov:2014ewa} the authors show that initial conditions set at the Planck scale of the Jordan frame do not lead to an energy gap and fine-tuning of initial conditions. Also, in \cite{Carrasco:2015rva,Linde:2004nz,Linde:2014nna} one finds several arguments against the statements made in Ref. \cite{Ijjas:2013vea}. Therefore we want to stress that, besides addressing the issue of initial conditions for inflation, our primary motivation is to seek generalised forms of previously analysed examples of induced inflation. \\* In what follows we use the convention $8\pi G = M_{p}^{-2} = 1$, where $M_{p} = 2.435\times 10^{18}\,$GeV is the reduced Planck mass. \\* The structure of this paper is as follows. In Sec. \ref{sec:general} we introduce the issue of induced inflation and the possibility of obtaining saddle-point inflation in this class of scalar-tensor theories. We also discuss the equivalence of this model to $\alpha$-attractors with non-minimal coupling to gravity. In Secs. \ref{sec:BD} and \ref{sec:Higgs} we investigate the Brans-Dicke-like and Higgs-inflation-like induced inflation with the saddle point. Finally, we summarise in Sec. \ref{sec:Summary}.
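For orientation, we recall the standard relation assumed throughout (in the units defined above, for a Jordan-frame action with non-minimal coupling of the form $\tfrac{1}{2}f(\varphi)R$, up to convention-dependent factors): the conformal rescaling $\tilde{g}_{\mu\nu}=f\,g_{\mu\nu}$ yields the Einstein-frame potential
\begin{equation}
V(\varphi)=\frac{U(\varphi)}{f(\varphi)^{2}} ,
\end{equation}
so a plateau appears wherever $U\propto f^{2}$ asymptotically, which is precisely what the induced-inflation relation between $f$ and $U$ enforces in the $f\gg1$ limit.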
\label{sec:Summary} In this paper we present a model of induced inflation with two plateaus and therefore with two scales of inflation. In Sec. \ref{sec:general} we require the existence of a stationary point of the Einstein-frame potential $V$ for any function {describing} a non-minimal coupling to gravity, denoted $f(\varphi)$. We apply this requirement to $f = \xi \sum_{k=0}^{n} \lambda_k \varphi^k$ and obtain a simplified form of $f(\varphi)$ with 4 free parameters: $\Lambda := \lambda_0$, $\lambda := \lambda_n$, $\xi$ and $n$. We show that, besides the Starobinsky plateau, $V(\varphi)$ has a saddle point, which generates inflation at a different energy scale. The saddle point depends only on $n$ and $\lambda$. We note that $f$ takes simplified forms for two particular values of $\lambda$, namely for $\lambda_1 := 1/n$ and $\lambda_2 := (\xi/n)^n/\xi$. In the latter case there exists the $n \to \infty$ limit, which gives $f(\varphi) = 1+ \Lambda - e^{-\xi \varphi}$. The requirement of perturbativity of the Jordan-frame potential favours $\lambda \sim \lambda_2$ over $\lambda \sim \lambda_1$. {In Sec. \ref{sec:alphaattractors} we discuss the result of Ref. \cite{Artymowski:2016pjz}, namely that the flatness of the potential, which appears due to the existence of the stationary point at $\varphi_s$, can also be expressed in terms of $\alpha$-attractors. If one defines $f$ as the inflaton, then its kinetic term acquires a pole at $f(\varphi_s)$. The kinetic term is similar to the one from the $\alpha$-attractors and {in the $n \to \infty$ limit} it can take exactly the same form after a further field redefinition.} \\* In Sec. \ref{sec:BD} we perform a numerical and analytical analysis of the Brans-Dicke-like induced inflation, i.e. with $\Lambda = 0$. For $\lambda = \lambda_2$ one obtains only one plateau (the Starobinsky one) and the stationary point is the GR minimum, around which the potential is very flat, but not inflationary. In general the two plateaus can be separated by the GR vacuum or by a cascade. In both cases the GR vacuum is the only vacuum of the model. For $\lambda \sim \lambda_2$ the saddle-point plateau has a very low scale, and for sufficiently large $n$ such saddle-point inflation is fully consistent with the PLANCK data. This kind of inflation can be arbitrarily long-lasting and low-scale, so it can be used in the context of Higgs-Axion relaxation. Another way to obtain a large difference between the scales of inflation is to consider the $n \gg 1$ limit, for which the Starobinsky plateau can have an arbitrarily high scale, e.g. the Planck scale. Pre-inflation could then homogenise the Universe at the Planck scale and solve the problem of initial conditions for inflation. For $\Lambda = 0$, $\lambda = \lambda_2$ and $n \to \infty$ the Einstein-frame potential does not have a GR minimum in the attractive-gravity regime, so this case becomes unphysical. \\* In Sec. \ref{sec:Higgs} we analyse the Higgs-inflation-like scenario, for which $\Lambda = 1$. The main difference compared to the Brans-Dicke-like model is that, apart from the trivial case of $\xi = 0$, one always obtains two plateaus, even for $\xi = n$. The Starobinsky plateau is always higher than the scale of the saddle point, which means that pre-inflation cannot solve the problem of initial conditions for inflation unless $M \sim \mathcal{O}(1)$. For $\lambda \propto \lambda_2$ the $b \to \infty$ limit gives an inflationary model in excellent agreement with the Planck data.
The GR vacuum of the Einstein-frame potential is separated from the repulsive-gravity regime by an infinite potential wall. Taking the $\Lambda = 1$, $\lambda \propto \lambda_2$, $n\to \infty$ model in the limit of minimal coupling to gravity gives the Einstein-frame potential of the Brans-Dicke generalisation of Starobinsky inflation.
16
7
1607.01803
1607
1607.00943_arXiv.txt
We present a simultaneous, multi-wavelength campaign targeting the nearby (7.2 pc) L8/L9 (optical/near-infrared) dwarf WISEP J060738.65+242953.4 in the mid-infrared, radio, and optical. Spitzer Space Telescope observations show no variability at the 0.2\% level over 10 hours each in the 3.6 and 4.5 micron bands. {\it Kepler} K2 monitoring over 36 days in Campaign 0 rules out stable periodic signals in the optical with amplitudes greater than 1.5\% and periods between 1.5 hours and 2 days. Non-simultaneous Gemini optical spectroscopy detects lithium, constraining this L dwarf to be less than $\sim 2$ Gyr old, but no Balmer emission is observed. The low measured projected rotation velocity ($v \sin i < 6$ km s$^{-1}$) and lack of variability are very unusual compared to other brown dwarfs, and we argue that this substellar object is likely viewed pole-on. We detect quiescent (non-bursting) radio emission with the VLA. Amongst radio-detected L and T dwarfs, it has the lowest observed $L_\nu$ and the lowest $v \sin i$. We discuss the implications of a pole-on detection for various proposed radio emission scenarios.
} Brown dwarfs, lacking sustained hydrogen fusion, are doomed to steadily cool and fade, but they nevertheless exhibit a wide variety of non-equilibrium, time-dependent behaviors. Their mineral and metal condensate clouds are not completely uniform, resulting in periodic variability and ``weather" \citep{2014ApJ...782...77B,2015ApJ...799..154M}. The dramatic differences in observed spectra at the L/T transition are typically attributed to changes in the qualitative properties of clouds \citep{1999ApJ...520L.119T,Burrows:2000rt,2001ApJ...556..357A,Ackerman:2001fj,Burgasser:2002lr,2004AJ....127.3553K}. In some cases, rapid, high-amplitude variability is observed \citep{2014ApJ...793...75R}. These cloud changes are so significant that they may even affect the luminosity evolution of the brown dwarf itself \citep{2008ApJ...689.1327S,2015ApJ...805...56D}. Meanwhile, magnetic fields are generated even in L and T dwarfs, resulting in surprisingly strong, variable radio emission \citep{Berger:2002fk,McLean:2012qy,2015ApJ...808..189W,2016ApJ...818...24K}. Deeper understanding of these phenomena requires both surveys of typical ultracool (later than M7) dwarfs and detailed studies of particularly favorable targets. One such target is the nearby brown dwarf WISEP J060738.65+242953.4 (\citealt{Castro:2011W}, hereafter W0607+24). Classified as L8 in the optical and L9 in the near-infrared, with a preliminary trigonometric parallax placing it at a distance of $7.19^{+0.11}_{-0.10}$ pc \citep{2013ApJ...776..126C}, W0607+24 is the nearest known late-L dwarf in the northern hemisphere, and the third-nearest on the whole sky. W0607+24 is therefore a prime target for understanding the physics of the L/T transition, across which mineral condensate clouds are believed to sink below the photosphere (see \citealt{2016ApJ...817L..19T} for an alternate, cloudless model). W0607+24 also lies close to the ecliptic plane, in the K2 mission \citep{2014PASP..126..398H} Campaign 0 field. Although designed to search for transiting planets around bright Sun-like stars, the {\it Kepler} space telescope \citep{2010ApJ...713L..79K} can also obtain long time-series photometry of fainter targets that happen to lie in the field of view. Indeed, during its original mission, it was able to measure rotational modulations due to photospheric spots in late-M \citep{2013A&A...555A.108M} and L1 \citep{2013ApJ...779..172G} very-low-mass stars. Each K2 field offers the opportunity to monitor additional very-low-mass stars or brown dwarfs. K2 Campaign 2 monitoring of Upper Scorpius has detected variability in 16 young, M-type brown dwarfs \citep{2015ApJ...809L..29S}. Motivated by the K2 observations of W0607+24, we observed it simultaneously with the Karl G. Jansky Very Large Array (VLA)\footnote{The VLA is operated by the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} and the Spitzer Space Telescope \citep{2004ApJS..154....1W}. In this paper, we present the results of ground-based spectroscopy and the simultaneous multi-wavelength observations.
Although W0607+24 is a typical late-L dwarf in most respects, its unusually sharp lines and lack of variability are consistent with a viewing angle near pole-on. As the nearest known northern-hemisphere late-L dwarf, it offers many possibilities for future studies. For example, it could serve as a useful standard for line-broadening and polarization studies. If there is a planetary system aligned with the primary's rotation axis, then we would expect very little radial velocity signal but a strong astrometric signal. Further follow-up is needed to confirm and better characterize the radio emission. The {\it Kepler} and K2 studies of the L dwarfs W1906+40 and W0607+24 demonstrate the value of space-based monitoring with long time baselines. We are monitoring additional late-M and L dwarfs in other K2 campaigns. They are more distant than W0607+24, but because they are warmer, some are brighter and yield higher signal-to-noise than W0607+24. In addition, most K2 fields are less crowded than Campaign 0. The most promising potential brown dwarf target for K2 is the nearby L5 dwarf 2MASSW J1507476-162738 \citep{2000AJ....119..369R}, one of the brightest and best-studied L dwarfs. It is known to be a rapid rotator ($P=2.5$ hr), seen to be spotted in some epochs \citep{2015ApJ...799..154M} but not in others \citep{2013MNRAS.428.2824K}. It could be observed in K2 Campaign 15, where we would expect to detect clear signatures of rotational modulation and spot evolution over the three-month campaign.
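As an illustrative order-of-magnitude argument (adopting a nominal radius of $\sim 1\,R_{\rm Jup}\approx 7\times10^{4}$ km and a rotation period of a few hours, typical of L dwarfs but not directly measured here), the equatorial velocity would be $v_{\rm eq}=2\pi R/P\approx 2\pi\times7\times10^{4}\,{\rm km}/3\,{\rm hr}\approx 40$ km s$^{-1}$, so the measured limit $v \sin i < 6$ km s$^{-1}$ would imply $\sin i \lesssim 0.15$, i.e. an inclination within roughly $10^{\circ}$ of pole-on.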
16
7
1607.00943
1607
1607.02494_arXiv.txt
Optical fibers are a key component of high-resolution spectrographs for attaining high precision in radial velocity measurements. We present a custom fiber with a novel core geometry: a 'D' shape. From a theoretical standpoint, such a fiber should provide superior scrambling and modal noise mitigation since, unlike the commonly used circular and polygonal fiber cross sections, it exhibits chaotic ray dynamics. We report on the fabrication process of a test fiber and compare the optical properties, scrambling performance, and modal noise behavior of the D-fiber with those of common polygonal fibers.
\label{sec:intro} Optical fibers play an important role in high-precision spectroscopy. Their use not only allows the spectrograph to be detached from the telescope; fibers also offer superior illumination stability compared to slit spectrographs, which is essential for high-precision radial velocity measurements. In the last couple of years, astronomers have benefited from the availability of fibers with non-circular cross sections, which have been shown to provide a more stable output \cite{Avila2010,Chazelas2010} and which are nowadays used in many modern spectrographs \cite{Mahadevan2012,Stuermer2014,Furesz2014}. This experimental fact has been exploited without a detailed theoretical explanation. Often, fibers are produced for other industrial applications and might therefore not be optimized for astronomical spectrographs. In this paper we discuss the question of whether the currently used fibers can be further optimized in terms of their shape to provide higher scrambling values. In section 2 we present simple ray trace simulations and discuss a theoretical framework that helps to predict the optical behavior of fibers with different cross sections. In section 3 we describe the manufacturing process of a D-shaped fiber that was produced as a prototype of a so-called chaotic fiber. In section 4 we present experimental results on how chaotic fibers compare to other core shapes regarding their scrambling and modal noise behavior.
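As a rough illustration of the type of two-dimensional ray-trace simulation referred to above (a minimal sketch of our own, not the actual simulation code used for this work; the radius, chord position, launch ray, and bounce count are arbitrary choices), specular ray propagation in a D-shaped core can be modelled as a billiard in a circle truncated by a flat chord:
\begin{verbatim}
import numpy as np

# Minimal 2-D billiard sketch of specular ray propagation in a D-shaped
# fiber cross section: a circle of radius R truncated by a flat chord at
# x = d (only the region x <= d is kept). All numbers are arbitrary.
R = 1.0   # core radius (arbitrary units)
d = 0.5   # position of the flat cut, 0 < d < R

def next_bounce(p, v, eps=1e-12):
    """Advance a ray p + t*v to its next wall hit and reflect it there."""
    candidates = []
    # circular arc: |p + t*v| = R  ->  t^2 + 2 t (p.v) + (|p|^2 - R^2) = 0
    b = np.dot(p, v)
    disc = b * b - (np.dot(p, p) - R**2)
    if disc > 0.0:
        for t in (-b - np.sqrt(disc), -b + np.sqrt(disc)):
            q = p + t * v
            if t > eps and q[0] <= d + eps:       # kept part of the arc
                candidates.append((t, q, q / R))  # outward unit normal
    # flat chord: x = d
    if abs(v[0]) > eps:
        t = (d - p[0]) / v[0]
        q = p + t * v
        if t > eps and q[1]**2 <= R**2 - d**2:
            candidates.append((t, q, np.array([1.0, 0.0])))
    t, q, n = min(candidates, key=lambda item: item[0])
    return q, v - 2.0 * np.dot(v, n) * n          # specular reflection

# trace one ray for a few hundred bounces and collect the wall hits
p = np.array([-0.2, 0.1])
v = np.array([0.3, 1.0]); v /= np.linalg.norm(v)
hits = []
for _ in range(500):
    p, v = next_bounce(p, v)
    hits.append(p.copy())
hits = np.array(hits)   # e.g. input for a scrambling/uniformity diagnostic
\end{verbatim}
Launching bundles of nearby rays and comparing how quickly their wall-hit distributions decorrelate gives a quick qualitative impression of the chaotic mixing that is expected to distinguish the D shape from circular and regular-polygon cross sections.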
We have presented ray-tracing simulations and a theoretical approach to better understand the differences in scrambling and modal noise suppression between differently shaped optical fibers. Based on the conclusion that fibers with chaotic dynamics should perform best, we manufactured D-shaped fibers and tested their scrambling and modal behavior. First results show excellent scrambling performance, possibly outperforming polygonal fibers. Further measurements are planned to investigate the significance of these results. Similarly, the modal noise suppression in such fibers is at least as good as in polygonal fibers. Systematic experiments to measure the speckle statistics are planned in order to check for similarities and differences between the speckle statistics of polygonal fibers and the D-shaped fiber.
16
7
1607.02494
1607
1607.00491_arXiv.txt
While the extended main-sequence turn-offs (eMSTOs) found in almost all 1--2 Gyr-old star clusters in the Magellanic Clouds are often explained by postulating extended star-formation histories, the tight subgiant branches (SGBs) seen in some clusters challenge this popular scenario. Puzzlingly, the SGB of the eMSTO cluster NGC 419 is significantly broader at bluer than at redder colors. We carefully assess and confirm the reality of this observational trend. If we were to assume that the widths of the features in color--magnitude space are entirely owing to a range in stellar ages, the star-formation histories of the eMSTO stars and the blue SGB region would be significantly more prolonged than that of the red part of the SGB. This cannot be explained by assuming an internal age spread. We show that rotational deceleration of a population of rapidly rotating stars, a currently hotly debated alternative scenario, naturally explains the observed trend along the SGB. Our analysis shows that a `converging' SGB could be produced if the cluster is mostly composed of rapidly rotating stars that slow down over time owing to the conservation of angular momentum during their evolutionary expansion from main-sequence turn-off stars to red giants.
The once common perception that the member stars of most star clusters originate from `simple stellar populations' (SSPs) is increasingly challenged by discoveries of extended main-sequence turn-offs (eMSTOs) in the color--magnitude diagrams (CMDs) of intermediate-age, 1--2 Gyr-old massive star clusters in the Magellanic Clouds \citep[e.g.,][]{Mack07,Mack08,Milo09,Rube10,Gira13,Li14a}. If solely interpreted in terms of stellar age distributions, such eMSTOs may imply age spreads greater than 300 Myr \citep[e.g.,][]{Rube10,Rube11,Goud14}. However, this strongly contradicts our current understanding of the evolution of SSP-like star clusters, whose maximum age spreads are expected to reach only 1--3 Myr \citep{Long14}. \cite{Li14b} were the first to focus on the subgiant-branch (SGB) morphology of an eMSTO cluster, NGC 1651. Its tight SGB is inconsistent with the presence of a significant age spread. Simultaneously, \cite{BN15} discovered that the SGB and red clump morphologies of two other intermediate-age star clusters, NGC 1806 and NGC 1846, also favor SSP scenarios. Several authors have shown that the presence of a range of stellar rotation rates in a single-age stellar population can produce the observed eMSTO features \citep[e.g.,][]{Bast09,Yang13,Bran15,Nied15}. In addition, a sufficiently low mixing efficiency could generate a tight SGB, resembling the observed SGB morphologies \citep{Yang13}. Recently, the stellar rotation scenario received further support from \cite{Milo16}, who discovered that the split main sequences in the young Large Magellanic Cloud cluster NGC 1755 could best be explained by stellar rotation, and not by adopting an age spread. Here we report the discovery of a `converging' SGB in the Small Magellanic Cloud eMSTO star cluster NGC 419: its SGB is significantly broader on the blue than on the red side. The apparently prolonged star-formation histories (SFHs) implied by the eMSTO stars and the blue region of the SGB contradict that derived for the red SGB region. We conclude that the observed SGB morphology can be explained by rotational deceleration owing to the conservation of angular momentum during the evolutionary expansion of SGB stars.
We obtained PSF photometry from archival {\sl HST} images of the Small Magellanic Cloud star cluster NGC 419 using two independent software packages, {\sc iraf/daophot} and {\sc dolphot}. The CMDs resulting from both stellar catalogs are mutually consistent and show a narrowing trend along the cluster's SGB, i.e., the NGC 419 SGB appears to be significantly narrower at redder than at bluer colors. Initially assuming, for simplicity and in line with previous work, that the widths of our CMD features may be driven entirely by stellar age distributions, we derived SFHs using a sample of eMSTO stars, as well as stars on the blue and red sides of the cluster's SGB. The SFH for the red SGB exhibits a sharp peak at an age of 1.52 Gyr and drops off steeply on either side of the peak, while the blue SGB morphology is consistent with a much larger age range, from 1.32 Gyr to 1.92 Gyr. Moreover, the SFH of the eMSTO stars suggests an even broader age range, from 1.32 Gyr to 2.02 Gyr. These three independently derived SFHs challenge the postulated presence of a significant age spread to explain the observed eMSTO extent. Hence, we considered the presence of a population of stars with different stellar rotation rates. For a synthetic cluster of age $\log(t \mbox{ yr}^{-1})=9.1$ and $Z=0.002$ or $Z=0.006$, the SGB's convergence is most prominent for stars with $\Omega_{\rm ini} / \Omega_{\rm crit} > 0.5$, which suggests that NGC 419 may be composed of a large fraction of rapidly rotating stars. Future improvements of stellar rotation models are urgently needed to reach robust conclusions as to the importance of variations in stellar rotation rates in intermediate-age star clusters.
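The rotational-deceleration argument can be quantified with a simple estimate: if the surface layers of a turn-off star approximately conserve their specific angular momentum while the star expands onto the SGB, then
\begin{equation}
\Omega_{2} \simeq \Omega_{1}\left(\frac{R_{1}}{R_{2}}\right)^{2},
\end{equation}
so an expansion of the radius by a factor of a few already reduces the surface angular velocity by roughly an order of magnitude. (This is only a schematic rigid-shell scaling; the rotating stellar models cited above treat the internal angular-momentum transport self-consistently.)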
16
7
1607.00491
1607
1607.02341_arXiv.txt
{} {We present the discovery and characterisation of the exoplanets WASP-113b and WASP-114b by the WASP survey, {\it SOPHIE} and {\it CORALIE}.} { The planetary nature of the systems was established by performing follow-up photometric and spectroscopic observations. The follow-up data were combined with the WASP photometry and analysed with an MCMC code to obtain the system parameters.} {The host stars WASP-113 and WASP-114 are very similar. They are both early G-type stars with an effective temperature of $\sim5900\,$K, [Fe/H]$\sim 0.12$ and \logg\ $\sim 4.1\,$dex. However, WASP-113 is older than WASP-114. Although the planetary companions have similar radii, WASP-114b is almost 4 times heavier than WASP-113b.\ WASP-113b has a mass of $0.48\,$\Mjup\ and an orbital period of $\sim 4.5\,$days; WASP-114b has a mass of $1.77\,$\Mjup\ and an orbital period of $\sim 1.5\,$days. Both planets have inflated radii, in particular WASP-113b, with a radius anomaly of $\Re=0.35$. The high scale height of WASP-113b ($\sim 950$ km) makes it a good target for follow-up atmospheric observations.} {}
In the last few years there has been a huge increase in the number of known transiting exoplanets, mainly due to the Kepler satellite \citep{Borucki2010}. Currently, 2600 transiting exoplanets are known and there are another thousand unconfirmed candidates. Transiting planet systems are especially valuable because their geometry enables us to derive accurate planetary properties \citep{Charbonneau2000, Henry2000}. Time-series photometry during transit allows estimation of the orbital inclination and the relative radii of the host star and planet. These can be combined with radial velocity measurements and stellar parameters to derive the absolute planetary mass \cite[e.g.][]{Barros2011a}. Hence, the bulk density of the planet can be estimated with good accuracy, giving us insight into its composition \citep{Guillot2005,Fortney2007}, thus placing constraints on planetary structure and formation models. Furthermore, follow-up observations of transiting planets give further insight into their physical properties. Transmission spectroscopy, which consists of measuring the stellar light filtered through the planet's atmosphere during transit, provides information about exoplanet atmospheres \citep{Charbonneau2002, Vidal-Madjar2003}. Moreover, observation of secondary eclipses (i.e. occultations) offers the potential for directly measuring planetary emission spectra \cite[e.g.][]{Deming2005, Charbonneau2008, Grillmair2008}. However, currently these follow-up observations are only feasible for bright stars. The number of transiting exoplanets around stars brighter than V=13 is just a few hundred. These bright host stars enable follow-up observations to better characterise the systems. Therefore, several second-generation surveys are being developed to target bright stars, both ground-based (NGTS, MASCARA and SPECULOOS) and space-based: CHEOPS (ESA), PLATO (ESA) and TESS (NASA). The new Kepler satellite mission K2 \citep{Howell2014} is making a significant scientific impact by monitoring a large number of fields and a significant number of bright stars at a photometric precision only slightly inferior to that of the original Kepler mission \cite[e.g.][]{Vanderburg2014,Barros2016}. In this paper, we report the discovery of WASP-113b and WASP-114b by {\it SOPHIE}, {\it CORALIE} and the WASP project \citep{Pollacco2006}, the leading ground-based transit survey, which has discovered $\sim 150$ exoplanets around stars brighter than V=13 mag. The WASP project consists of two robotic observatories: one at the Observatorio del Roque de los Muchachos, La Palma, Canary Islands, Spain, and the other at the South African Astronomical Observatory in Sutherland, South Africa. The host stars WASP-113 and WASP-114 are similar early G-type stars. Both planets have radii $\sim50$\% larger than Jupiter's, but WASP-113b has less than half of Jupiter's mass while WASP-114b has almost twice the mass of Jupiter. Hence WASP-114b is 4.2 times denser than WASP-113b. We start by describing the photometric and spectroscopic observations of both systems in Section~2 and present the spectroscopic characterisation of the stars in Section~3. In Section~4 we describe the system analysis and present the results, and we finish by discussing our results in Section~5.
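For reference, the standard relations behind this procedure (quoted here in their usual simplified forms, neglecting limb darkening and grazing geometries) are the transit depth and the radial-velocity semi-amplitude,
\begin{equation}
\delta \simeq \left(\frac{R_{p}}{R_{*}}\right)^{2},
\qquad
K = \left(\frac{2\pi G}{P}\right)^{1/3}\frac{M_{p}\sin i}{\left(M_{*}+M_{p}\right)^{2/3}}\frac{1}{\sqrt{1-e^{2}}},
\end{equation}
so that, with $R_{*}$ and $M_{*}$ from the spectroscopic analysis and the inclination $i$ from the transit shape, the planetary radius, mass, and hence bulk density follow.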
We present the discovery and characterisation of WASP-113b, a hot Jupiter in a $4.5\,$day orbit around a G1-type star, and of WASP-114b, a hot Jupiter in a $1.5\,$day orbit around a G0 star. WASP-113, with $\teff=5890\,$K, \logg$=4.2$, and [Fe/H]$=0.10$, is orbited by a $0.475\,$\Mjup\ planet with a radius of $1.409$ \Rjup\ and hence a density of $0.172\, \rho_J$. WASP-114 has $\teff=5940\,$K, \logg$=4.3$, and [Fe/H]$=0.14$, and its planetary companion is more massive, with a mass of $1.769\,$\Mjup\ and a radius of $1.339\,$\Rjup, and hence a density of $0.73\, \rho_J$. A circular orbit is preferred for both planets, which is not surprising given that the circularisation timescale \citep{Goldreich1966,Bodenheimer2001} is short compared with the ages of the host stars. Using equation 2 of \citet{Bodenheimer2001} and assuming $Q_p=10^5-10^6$ \citep{Levrard2009}, we compute circularisation timescales of $\tau_{cir}= 0.016-0.16\,$Gyr for WASP-113b and $\tau_{cir}= 0.00073-0.0073\,$Gyr for WASP-114b. These are much shorter than the host-star ages derived from the spectra ($\sim 6.2\,$Gyr and $\sim 4.3\,$Gyr for WASP-113 and WASP-114, respectively), thus favouring circular orbits. The mass-radius relationship for giant planets depends on their internal composition, since heavier elements decrease the planetary radius. Assuming the coreless models of \citet{Fortney2007} we estimated a radius of 1.05 \Rjup\ for WASP-113b and 1.15 \Rjup\ for WASP-114b. Both of these predicted radii are more than $2\sigma$ smaller than the radii measured for the planets in our analysis. Hence, we conclude that the planets are inflated. Following \citet{Laughlin2011} we compute a radius anomaly of $\Re=0.35$ for WASP-113b and $\Re=0.189$ for WASP-114b. Recently it has become clear that, regardless of the nature of the inflation mechanism, there is a clear correlation between the incident stellar flux and the radius anomaly \citep{Laughlin2011, Weiss2013}. Furthermore, giant planets that receive modest stellar flux do not show a radius anomaly \citep{Demory2011}. At first glance the radius anomalies of WASP-113b and WASP-114b seem to contradict this correlation, since the equilibrium temperature of WASP-114b is higher than that of WASP-113b. In fact, WASP-113b has a radius anomaly above the mean relationship proposed by \citet{Laughlin2011} ($\Re \propto T_{equ}^{1.4}$), while WASP-114b has a radius anomaly slightly below this mean scaling relationship. However, other exoplanets share the same properties as WASP-113b and WASP-114b, and the differences in radius anomaly can be explained by the planetary mass. WASP-114b is 3.7 times more massive than WASP-113b. Therefore, it probably has a much higher content of heavy elements, and its higher gravitational binding energy counteracts the inflation. The known exoplanets in the mass range 0.5--1.5 \Mjup\ have the largest known radii \citep{Lopez2016}, in agreement with the higher radius anomaly of WASP-113b. There are several theories to explain the inflated radii of hot Jupiters: tidal heating \citep{Bodenheimer2001}; enhanced atmospheric opacities \citep{Burrows2007}; kinetic heating due to winds \citep{Guillot2002}; and ohmic dissipation \citep{Batygin2011}. The last two are mechanisms related to the incident stellar flux and are favoured by the recently reported correlation. However, no theory is capable of explaining all the measured radius anomalies \cite[e.g.][]{Spiegel2013}.
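We note that the quoted anomalies are consistent with defining $\Re$ simply as the difference between the measured and predicted radii in Jupiter radii, $\Re = R_{\rm obs}-R_{\rm pred}$: the values above give $1.409-1.05\approx0.36$ for WASP-113b and $1.339-1.15=0.189$ for WASP-114b, in agreement with the quoted 0.35 and 0.189 (the small offset for WASP-113b reflects rounding of the quoted radii).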
Due to the old ages of both WASP-113 and WASP-114 compared with the circularisation timescale, tidal heating is not expected to play an important role; hence a combination of the other mechanisms is more likely. WASP-113 is at the end of its main-sequence life. We estimate that, due to the expansion of the star after the end of the main sequence, the planet will be engulfed in $\sim1.4\,$Gyr. The response of the planetary radius to the increase of the stellar radius and stellar luminosity will distinguish mechanisms that are directly capable of inflating the planet from those that only slow down the cooling of exoplanets \citep{Lopez2016}. An alternative to this long wait is to search for hot Jupiters with periods of 10-30 days around evolved stars \citep{Lopez2016}. Follow-up observations of these planets can help shed light on the radius anomaly and on the atmospheric composition of these planets. The large scale height of WASP-113b, $\sim 950$ km, and its relatively bright host star, V = 11.8, make it a good target for transmission spectroscopy observations to probe its atmospheric composition. The scale height of WASP-114b is smaller, $\sim 310$ km, which, combined with its fainter host star, would make atmospheric studies more challenging.
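For reference, the quoted scale heights follow from the usual expression $H = k_{\rm B}T_{\rm eq}/(\mu m_{\rm H} g_{p})$; as a rough check (assuming $T_{\rm eq}\sim1500$ K, a mean molecular weight $\mu\simeq2.3$ appropriate for a H$_2$/He atmosphere, and $g_{p}\simeq6$ m s$^{-2}$ from the mass and radius of WASP-113b given above, rather than the exact values adopted in the analysis), one obtains
\begin{equation}
H=\frac{k_{\rm B}T_{\rm eq}}{\mu m_{\rm H}\,g_{p}} \approx \frac{(1.38\times10^{-23}\,{\rm J\,K^{-1}})(1500\,{\rm K})}{2.3\,(1.67\times10^{-27}\,{\rm kg})(6\,{\rm m\,s^{-2}})} \approx 9\times10^{5}\,{\rm m},
\end{equation}
of the order of the quoted $\sim950$ km.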
16
7
1607.02341
1607
1607.02388_arXiv.txt
The initial mass of a star on the zero-age main sequence largely determines its fate. Stars born with masses exceeding $\sim 10\,M_\odot$ end their lives in a core-collapse event, e.g. a supernova (SN) explosion, and leave a neutron star or a black hole as remnant \citep{Heger2003}. Such massive stars are luminous, with bolometric luminosities exceeding $L_{\rm bol}\msim 10^4\,L_\odot$. On the main sequence, massive stars have spectral types earlier than B2V. These bright stars live very fast; the most massive of them die within just $\sim 10$\,Myr. Although we see many massive stars with the naked eye in the night sky (e.g.\ the Orion Belt consists of massive stars), these stars are actually very rare and constitute only $\sim 0.4$\%\ of all stars in our Milky Way. Despite their small number, massive stars have an enormous impact on the galactic ecology. Their strong ionizing radiation and stellar winds, as well as their final demise in SN explosions, largely determine the physical conditions in the interstellar medium (ISM) and influence the formation of new generations of stars and planets. Thus, massive stars are among the key players in cosmic evolution. The atmospheres of hot massive stars are usually transparent in the continuum but opaque in many spectral lines. Because the stars are hot, a large fraction of their bolometric luminosity is emitted at ultraviolet (UV) wavelengths. The radiation leaves the star in the radial direction. A photon at the frequency $\nu_0$ of a spectral line may be absorbed by an ion and re-emitted in any direction, transferring its momentum to the ion, which is thereby accelerated. Because of the Doppler effect, the spectral line shifts in the ion's frame, so the ion becomes able to scatter light at frequencies other than $\nu_0$. Hence, photons within a broad frequency range are ``swept up'' by the same spectral line. The Coulomb coupling between particles ensures the collective motion, and a stellar wind develops. Such radiatively driven stellar winds \citep[][CAK]{CAK1975} are ubiquitous in hot non-degenerate stars. The amount of mass removed from the star by its wind is determined by the mass-loss rate, $\dot{M}$. Theory predicts that for O stars the mass-loss rates are in the range $\dot{M}_{\rm CAK}\approx (10^{-7}-10^{-5})$\myr, depending on the fundamental stellar parameters \Teff, \Lbol, and $\log{g}$ \citep{Pau1986, Vink2001}. Hence, during the stellar lifetime, a significant fraction of the mass is removed by the stellar wind. Thus, the mass-loss rate is a crucial factor in stellar evolution.
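A useful order-of-magnitude anchor for these numbers (not a statement about the detailed CAK solution) is the single-scattering momentum limit, in which the wind momentum flux cannot exceed the photon momentum flux,
\begin{equation}
\dot{M}\,v_{\infty} \lesssim \frac{L_{\rm bol}}{c} .
\end{equation}
For a typical O star with $L_{\rm bol}\sim10^{5.5}\,L_\odot$ and a terminal wind speed $v_{\infty}\sim2500$ km\,s$^{-1}$, this gives $\dot{M}\lesssim$ a few $\times10^{-6}$\,\myr, bracketing the theoretical range quoted above; multiple scattering can drive winds somewhat beyond this limit.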
16
7
1607.02388
1607
1607.00344_arXiv.txt
For sufficiently wide orbital separations {\it a}, the two members of a stellar binary evolve independently. This implies that in a wide double white dwarf (DWD), the more massive WD should always be produced first, when its more massive progenitor ends its main-sequence life, and should therefore be older and cooler than its companion. The bound, wide DWD HS 2220$+$2146 ($a\approx500$ AU) does not conform to this picture: the more massive WD is the younger, hotter of the pair. We show that this discrepancy is unlikely to be due to past mass-transfer phases or to the presence of an unresolved companion. Instead, we propose that HS 2220$+$2146 formed through a new wide DWD evolutionary channel involving the merger of the inner binary in a hierarchical triple system. The resulting blue straggler and its wide companion then evolved independently, forming the WD pair seen today. Although we cannot rule out other scenarios, the most likely formation channel has the inner binary merging while both stars are still on the main sequence. This provides us with the tantalizing possibility that Kozai-Lidov (KL) oscillations may have played a role in the inner binary's merger. {\it Gaia} may uncover hundreds more wide DWDs, leading to the identification of other systems like HS 2220$+$2146. There are already indications that other WD systems may have formed through different, but related, hierarchical triple evolutionary scenarios. Characterizing these populations may allow for thorough testing of the efficiency with which KL oscillations induce stellar mergers.
\label{sec:intro} While roughly half of all Galactic field stars have at least one stellar companion, these companions usually have little to no impact on their evolution: the separations between the stars are generally too great for them to interact. \citet{dhital2015} recently used the Sloan Digital Sky Survey \citep[SDSS;][]{york00} to identify 10$^5$ binaries whose distribution of projected physical separations peaks at $>$10$^4$ AU. For $M\lesssim8$~\Msun, stars in such widely separated binaries never interact, independently evolving through the main sequence (MS) and giant branches and becoming white dwarfs (WDs). Wide double WDs (DWDs), the evolutionary endpoints of these wide binaries, were first identified in catalogs of nearby stars as a subset of the common proper motion pairs \citep{sanduleak82,greenstein86,sion91}. As they are difficult to find, the number of wide DWDs has historically remained small, but recently \citet{andrews15} expanded the number of candidate and confirmed wide DWDs to 142. This sample includes two spectroscopically confirmed \emph{triple} degenerate systems, Sanduleak A/B \citep{maxted00} and G 21-15 \citep{farihi05}. These systems are composed of an unresolved pair of WDs with another, widely separated, WD companion. \citet{reipurth12} argued that such hierarchical systems may be the natural evolutionary endpoint of MS triple systems: the inner binary tightens while the outer companion's orbit expands. Hierarchical triples with a sufficiently large mutual inclination angle are subject to a dynamical instability known as Kozai-Lidov (KL) oscillations: the outer star in a hierarchical triple perturbs the inner binary, leading to large oscillations in the inner binary's eccentricity and in the relative inclination of the two orbital planes \citep{kozai62,lidov62}. \citet{harrington68} first pointed out that these large-amplitude eccentricity oscillations would lead to a decreased periastron separation of the inner binary, possibly causing tidal interactions, mass exchanges, or even stellar mergers. These ideas have since been applied to the orbital distribution of triple stellar systems and of hot Jupiters \citep{mazeh79,kiseleva98,eggleton01,eggleton06,fabrycky07,naoz11,naoz12}. In stellar triples, when tidal forces and magnetic braking are included \citep{perets09}, or the KL equations are expanded to octupole order \citep{naoz14}, some of the inner binaries merge, forming blue stragglers with a wide companion. Usually observed in stellar clusters as MS stars that are bluer and more luminous than the MS turn-off \citep{sandage53,johnson55,ferraro99}, blue stragglers are formed when either accretion from, or a merger with, a companion provides fresh fuel to a star, extending its lifetime \citep{mccrea64, hills76}. If the component stars are massive enough to evolve off the MS in a Hubble time, a blue straggler binary formed from the merger of the inner binary in a hierarchical triple will ultimately form a DWD. \citet{andrews15} identified HS 2220$+$2146 (hereafter HS 2220) as an unusual DWD system. The WDs are 6$\farcs$2 apart, which at the spectroscopic distance of 76~pc corresponds to a projected physical separation of 470~AU. This suggests that these two coeval WDs evolved separately. However, spectra obtained with the Ultraviolet and Visual Echelle Spectrograph (UVES) on the Very Large Telescope (VLT) indicated that the more massive WD in this system has a younger cooling age (\tauc) than its less massive companion.
In a binary whose components evolve independently, the initially more massive star should evolve into the more massive WD before its companion does. Since the more massive WD is apparently the younger in this DWD, HS 2220 cannot be explained through standard binary evolution. We show that the properties of HS 2220 are consistent with an evolutionary history in which the inner binary in a hierarchical triple merged to form a blue straggler, which then evolved into a WD. If this scenario is correct, HS 2220 would be the first DWD known to have formed through this evolutionary channel, and would confirm that hierarchical triple systems can indeed lead to stellar mergers. In Section \ref{sec:obs}, we use spectroscopy and gravitational redshift measurements to demonstrate that the hotter WD in HS 2220 is indeed the more massive WD. In Section~\ref{sec:accretion}, we show that these two WDs are far enough apart that they evolved independently. We describe possible formation scenarios for the DWD in Section~\ref{sec:scenario}, and include a discussion of how KL oscillations might be responsible for the formation of the more massive WD via the merger of an inner binary. We put HS 2220 in the greater context of triple systems and conclude in Section~\ref{sec:disc_conc}.
\label{sec:disc_conc} HS 2220$+$2146 is a wide DWD in which the more massive WD is the younger WD in the pair, contradicting the standard expectation of stellar evolution theory.\footnote{The more massive WD in the DWD SDSS J1257$+$5428 \citep{badenes09,marsh11} was shown to have a smaller \tauc\ than its companion \citep{bours15}. However, its relatively short $P_{\rm orb} = 4.6$ hr indicates that the two WDs almost certainly interacted in the past, and it is not clear that this system followed, even qualitatively, a similar evolutionary pathway to HS 2220's.} Multiband photometry and a lack of RV variations rule out the possibility of an unseen companion in the system, and the binary is too widely separated to have had any prior mass accretion that could account for the observed $M_{\rm WD}$ and $\tau_{\rm cool}$ discrepancy. To explain the peculiar characteristics of this system, we suggest that HS 2220 went through an alternative evolutionary channel for a DWD in which it formed as a triple system. The inner binary then merged to create a blue straggler in a wide binary, a process that could have been mediated by KL oscillations. When we reconstruct the evolutionary history of HS 2220, we find that there is a large region of parameter space in which KL oscillations are active. An inner binary initially separated by $\sim$10 AU may have merged after a few $10^3-10^4~t_{\rm quad}$, a somewhat longer merger timescale than that obtained in the simulations of \citet{naoz14}. Alternatively, for inner separations closer to 1~AU, the inner binary's merger may have involved a resonance-like interaction between KL oscillations and GR precession. KL oscillations, combined with dissipation, have already been demonstrated to lead to the observed excess of hierarchical triple systems with an inner binary $P_{\rm orb} \lesssim 7$ days \citep{tokovinin02,tokovinin06,fabrycky07}. A fraction of the inner binaries in triples can merge, forming blue stragglers in wide binaries, when either dissipation due to magnetic braking \citep{perets09} or higher-order terms in the KL equations \citep{naoz14} are included. Sufficiently massive wide binaries will then evolve into DWDs, and since the blue straggler has a longer stellar lifetime than its mass would suggest, the two WDs will have an age discrepancy similar to the one observed in HS 2220. Although ongoing studies of binary evolution may reveal other, exotic evolutionary channels that can form HS 2220, our proposed scenario naturally explains the observations of the system. HS 2220 is not the first WD binary suggested to have formed from a triple system. The WD in the eclipsing WD+dK2 binary V471 Tauri is the most massive known in the Hyades open cluster, but is also the hottest and youngest. \citet{obrien01} suggested that it is the end product of a blue straggler star that formed from the merger of the inner binary in a hierarchical triple. In their paradigm, however, the outer star was close enough to the inner binary that, when the blue straggler evolved into a giant, the system underwent a common-envelope phase and the orbit shrank to the present 12.5 hr period. V471 Tauri could be the end result of one triple-system evolutionary channel; compared with the progenitor of HS 2220, V471 Tauri formed from a lower-mass outer companion in a tighter orbit. In the opposite regime, triple systems in which the inner binary is wider may never merge but can still evolve completely into WDs.
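For orientation, the quadrupole-order KL timescale referred to above scales, up to an order-unity prefactor that differs between authors, as
\begin{equation}
t_{\rm quad}\sim\frac{m_{1}+m_{2}+m_{3}}{m_{3}}\,\frac{P_{\rm out}^{2}}{P_{\rm in}}\left(1-e_{\rm out}^{2}\right)^{3/2},
\end{equation}
where $m_{1,2}$ are the inner-binary masses, $m_{3}$ the mass of the tertiary, and $P_{\rm in}$, $P_{\rm out}$, and $e_{\rm out}$ the inner and outer orbital periods and the outer eccentricity.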
The Sanduleak A/B system \citep{maxted00} and G 21-15 \citep{farihi05} are two examples of spectroscopically confirmed triple WD systems that may have formed through this evolutionary pathway. These systems are composed of spectroscopic DWD binaries with common proper motion companions. Considering their hierarchical nature, and depending on the unknown mutual inclination between the two orbital planes, it is possible that these systems, too, were affected by KL oscillations. The WD systems V471 Tauri, Sanduleak A/B, G 21-15, and HS 2220$+$2146 are evidence for a still largely unexplored variety of higher-order evolutionary channels forming stellar remnants. These systems may represent the vanguard of large populations of triple systems: several authors have demonstrated that spectroscopic binaries have a high likelihood of containing a tertiary companion \citep[e.g.,][]{tokovinin06}, and \citet{raghavan10} reported that some 10\% of nearby solar-type stars are in triple or higher-order systems. Binary evolution studies, particularly with respect to compact objects, have traditionally ignored the complexities introduced by a third companion. However, considering the ubiquity of triple systems in the nearby Galactic neighborhood, it may not be safe to ignore the effect of a wide companion on a close binary, particularly with respect to rare astrophysical phenomena \citep[e.g., for a discussion of how WD triples may be an important type Ia supernova progenitor, see][]{kushnir13}. Regardless of the application, gaining a better understanding of KL oscillations is important. HS 2220 provides a unique setting for advancing toward this goal because we can estimate the merger time of the inner binary. This system, and others like it that are not yet identified, may provide new observations with which to compare simulations. We were confident that HS 2220 was a curious system because of the quality of our spectra. Although this DWD is relatively nearby, these spectra did require large-diameter telescopes. In \citet{andrews15}, we identified five other candidate DWDs in which the more massive WD may be the younger one. In all those cases, higher S/N spectra are needed to confirm the discrepancies in the fitted $\tauc$ values. Even if these systems prove to have formed through a standard binary channel, it is likely that more WD systems with complicated evolutionary histories exist in our Galactic neighborhood. However, had HS 2220 formed 1 Gyr earlier, we likely would not have been able to identify the two WDs as having different ages: the $\tauc$ uncertainties would have been too large. In searching for more of these discrepant systems, one should therefore focus on nearby, hot, wide DWDs. The upcoming {\it Gaia} data are expected to contain $\sim 10^5-10^6$ new WDs with proper motions \citep{carrasco14}. Combining the temperatures and astrometric distances obtained from {\it Gaia}'s observations with a WD mass-radius relation will result in a mass and age estimate for every one of these WDs. Identifying populations of wide DWDs in which the more massive WD is the younger may therefore be possible without spectroscopic follow-up of every DWD.
16
7
1607.00344
1607
1607.01562_arXiv.txt
The Galactic Centre is a hotbed of astrophysical activity, with the injection of wind material from $\sim$30 massive Wolf--Rayet (WR) stars orbiting within 12 arcsec of the supermassive black hole (SMBH) playing an important role. Hydrodynamic simulations of such colliding and accreting winds produce a complex density and temperature structure of cold wind material shocking with the ambient medium, creating a large reservoir of hot, X-ray-emitting gas. This work aims to confront the 3 Ms of \textit{Chandra} X-ray Visionary Program observations of this diffuse emission by computing the X-ray emission from these hydrodynamic simulations of the colliding WR winds, while exploring a variety of SMBH feedback mechanisms. The major success of the model is that it reproduces the spectral shape from the 2--5 arcsec ring around the SMBH, where most of the stellar wind material that is ultimately captured by Sgr A* is shock-heated and thermalized. This agreement naturally indicates that the hot gas comes from colliding WR winds, and that the wind speeds of these stars are, in general, well constrained. The flux level of these spectra, as well as 12 $\times$ 12-arcsec$^2$ images in the 4--9 keV band, shows that the X-ray flux is tied to the SMBH feedback strength; stronger feedback clears out more hot gas, thereby decreasing the thermal X-ray emission. The model in which \SAs produced an intermediate-strength outflow during the last few centuries best matches the observations to within about 10 per cent, showing that SMBH feedback is required to interpret the X-ray emission in this region.
The Galactic Centre is a hotbed of astrophysical activity, hosting myriad stars of varying masses orbiting a supermassive black hole (SMBH) of $\sim$3.5$\times10^6$\,M$_\odot$ \citep[e.g.][]{GhezP05} associated with the radio source \SAs. Although \SAs is classified as an inactive SMBH due to its low accretion rate \citep*{MeliaFalcke01,GenzelEisenhauerGillessen10}, its proximity makes it the only SMBH whose orbiting stars are resolved, and thus the only instance where the stellar, wind, and orbital parameters of the individual stars orbiting an SMBH can be accurately determined. Therefore, the Galactic Centre provides the best opportunity to study the interplay between an SMBH and the stars and ejected wind material orbiting it. To study the accretion rate on to \SAs, \citet{CuadraNayakshinMartins08} computed hydrodynamic simulations of the winds of 30 Wolf--Rayet (WR) stars (evolved massive stars with the highest mass-loss rates around the Galactic Centre) that orbit within $\sim$10 arcsec (=\,0.4\,pc at a distance of 8.25\,kpc) of \SAs. The simulations ran from 1100 yr ago to the present day, and are based on orbital data from \citet{PaumardP06} and \citet{BeloborodovP06} and stellar wind data from \citet{MartinsP07}. The end result predicted the time-dependent accretion rate of material on to \SAs while also producing the density and temperature structure of the hot/shocked and the cold/unshocked wind material ejected from the WR stars. Although the simulation includes neither mass ejected from stars with lower mass-loss rates, like the O stars, nor any material ejected prior to 1100 yr ago, it is the most complete calculation of the material around the Galactic Centre out to $\sim$10 arcsec. X-ray observations of the Galactic Centre provide key insights into the region as these high-energy photons can penetrate the high absorption column through the Galactic plane. The \textit{Chandra} X-ray Visionary Program (XVP) on the Galactic Centre, which performed grating observations of the region for 3 Ms during both flaring and quiescent states, showed that the X-ray properties of \SAs indicate that it is a radiatively inefficient accretion flow (RIAF) and has an outflow \citep{WangP13}. \textit{XMM--Newton} observations of regions several hundred pc away from the Galactic Centre revealed that the X-ray activity, and by extension the total activity, of \SAs was higher several centuries ago \citep{PontiP10}. Motivated by these two results, \citet*{CuadraNayakshinWang15} ran more simulations to account for possible SMBH feedback mechanisms over a range of feedback strengths. As expected, these different feedback mechanisms altered the dynamics of the colliding WR winds, significantly modifying the density and temperature structure around \SAs in the models with the strongest feedback. In addition to studying the point-source emission of the SMBH, the XVP also provided the best observations of the spatially and spectrally resolved X-ray emission out to $\sim$20 arcsec from \SAs. This diffuse emission is thermal, so it is thought to originate from the colliding stellar winds of the stars orbiting the SMBH. Therefore, this observation set provides an excellent test for the validity of the hydrodynamic simulations, as well as a means to distinguish between feedback models.
This work computes the thermal X-ray emission from the aforementioned hydrodynamics simulations and compares the results to the observations with the aim of increasing our understanding of the WR stars, and more generally the full environment, surrounding \SAs. Section~\ref{M} recaps the relevant details of the hydrodynamic simulations, describes the method for computing the X-ray emission from these simulations, and discusses the observed X-ray image and spectrum that the modelling aims to explain. Section~\ref{R} presents the results of the X-ray calculations and compares them with the observations. We discuss the results in Section~\ref{D} and present our conclusions in Section~\ref{C}.
\label{C} We compute the thermal X-ray properties of the Galactic Centre from hydrodynamic simulations of the 30 WR stars orbiting within 12 arcsec of \SAs \citep{CuadraNayakshinMartins08,CuadraNayakshinWang15}. These simulations use different feedback models from the SMBH at its centre. The \textit{Chandra} XVP observations \citep{WangP13} provide an anchor point for these simulations, so we compare the observed 12 $\times$ 12 arcsec$^2$ 4--9 keV image and the 2--5 arcsec ring spectrum with the same observables as synthesized from the models. Remarkably, the shape of the model spectra, regardless of the type of feedback, agrees very well with the data. This indicates that the hot gas around \SAs is primarily from shocked WR wind material, and that the velocities of these winds are well constrained. The ISM absorption column from fitting this diffuse emission broadly agrees with the absorbing column determined from fitting the point-source emission of \SAs \citep{WangP13}. The X-ray flux strongly depends on the feedback mechanism; greater SMBH outflows clear out more WR-ejected material around \SAs, thus decreasing the model X-ray emission. Over 4--9~keV in energy and 2--5 arcsec in projected distance from \SAs (excluding IRS~13E and the nearby PWN), the X-ray emission from all models is within a factor of 2 of the observations, with the best model agreeing to within 10 per cent; this is the intermediate-strength feedback model OB5, which has an SMBH outburst of $\dot{M}_{\rm out}=10^{-4}$\,$M_\odot$\,yr$^{-1}$, $v$\,=\,5000\,km\,s$^{-1}$, and occurring from 400 to 100\,yr ago. Therefore, this work shows that the SMBH outburst is required for fitting the X-ray data, and, by extension, that the outburst still affects the current X-ray emission surrounding \SAs, even though it ended 100\,yr ago. Future work should address the completeness of the hydrodynamic simulations by adding other sources of gas in the simulation volume, such as the O stars and binaries that are located in the region currently modelled, and the `mini-spiral' and circumnuclear disc that are farther out and could prevent the hot gas from escaping the central region as easily. Both these effects could increase the overall X-ray emission, thereby requiring a reduction in mass-loss rates and/or an increase in the SMBH feedback strength to preserve the current level of model-to-observation X-ray agreement.
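As an illustration of the kind of post-processing used to produce such synthetic observables, the sketch below projects simulation cells (position, electron density, temperature) into a band-limited thermal X-ray image using simple emission-measure weighting. It is emphatically not the authors' pipeline: the band emissivity is a crude bremsstrahlung-like stand-in for a proper plasma code (e.g., an APEC-type model), absorption is ignored, and the cell values are randomly generated placeholders.
\begin{verbatim}
import numpy as np

def band_emissivity(T_keV, band=(4.0, 9.0)):
    """Crude stand-in for a 4-9 keV thermal band emissivity [erg cm^3 s^-1].
    A real calculation would use an atomic plasma code (e.g. APEC/SPEX)."""
    E = np.linspace(band[0], band[1], 64)
    return 1e-23 * np.sqrt(T_keV) * np.exp(-E[:, None] / T_keV).mean(axis=0)

def synthetic_image(x_pc, y_pc, n_e, T_keV, dV_cm3,
                    npix=64, fov_arcsec=12.0, pc_per_arcsec=0.04):
    """Bin cell luminosities (n_e^2 * Lambda(T) * dV) onto an image covering
    fov_arcsec x fov_arcsec centred on the SMBH."""
    half = 0.5 * fov_arcsec * pc_per_arcsec
    lum = n_e**2 * band_emissivity(T_keV) * dV_cm3            # erg/s per cell
    img, _, _ = np.histogram2d(x_pc, y_pc, bins=npix,
                               range=[[-half, half], [-half, half]],
                               weights=lum)
    return img                                                # erg/s per pixel

# Toy "snapshot": random cells standing in for a hydrodynamic simulation output
rng = np.random.default_rng(0)
N = 20000
x, y = rng.normal(0.0, 0.1, N), rng.normal(0.0, 0.1, N)      # projected pc
n_e = 10**rng.uniform(2, 4, N)                                # cm^-3
T = 10**rng.uniform(-0.5, 1.0, N)                             # keV
dV = np.full(N, (3.0e17)**3)                                  # cm^3 (placeholder cells)
image = synthetic_image(x, y, n_e, T, dV)
print("toy 4-9 keV luminosity in the field: %.2e erg/s" % image.sum())
\end{verbatim}
A real comparison with the XVP data would additionally fold the resulting spectra through the instrument response and apply the fitted interstellar absorption column.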
16
7
1607.01562
1607
1607.08733_arXiv.txt
{The inner regions of the envelopes surrounding young protostars are characterised by a complex chemistry, with prebiotic molecules present on the scales where protoplanetary disks eventually may form. The Atacama Large Millimeter/submillimeter Array (ALMA) provides an unprecedented view of these regions zooming in on Solar System scales of nearby protostars and mapping the emission from rare species.} {The goal is to introduce a systematic survey, ``Protostellar Interferometric Line Survey (PILS)'', of the chemical complexity of one of the nearby astrochemical templates, the Class~0 protostellar binary IRAS~16293$-$2422, using ALMA, to understand the origin of the complex molecules formed in its vicinity. In addition to presenting the overall survey, the analysis in this paper focuses on new results for the prebiotic molecule glycolaldehyde, its isomers and rarer isotopologues and other related molecules.} {An unbiased spectral survey of IRAS~16293$-$2422 covering the full frequency range from 329 to 363~GHz (0.8~mm) has been obtained with ALMA, in addition to a few targeted observations at 3.0 and 1.3~mm. The data consist of full maps of the protostellar binary system with an angular resolution of 0.5$''$ (60~AU diameter), a spectral resolution of 0.2~\kms\ and a sensitivity of 4--5~mJy~beam$^{-1}$~km~s$^{-1}$ -- approximately two orders of magnitude better than any previous studies. } {More than 10,000 features are detected toward one component in the protostellar binary, corresponding to an average line density of approximately one line per 3~\kms. Glycolaldehyde, its isomers, methyl formate and acetic acid, and its reduced alcohol, ethylene glycol, are clearly detected and their emission well-modeled with an excitation temperature of 300~K. For ethylene glycol both lowest state conformers, $aGg'$ and $gGg'$, are detected, the latter for the first time in the ISM. The abundance of glycolaldehyde is comparable to or slightly larger than that of ethylene glycol. In comparison to the Galactic Center these two species are over-abundant relative to methanol, possibly an indication of formation of the species at low temperatures in CO-rich ices during the infall of the material toward the central protostar. Both $^{13}$C and deuterated isotopologues of glycolaldehyde are detected, also for the first time ever in the ISM. For the deuterated species a D/H ratio of $\approx$5\% is found with no differences between the deuteration in the different functional groups of glycolaldehyde, in contrast to previous estimates for methanol and recent suggestions of significant equilibration between water and -OH functional groups at high temperatures. Measurements of the $^{13}$C-species lead to a $^{12}$C:$^{13}$C ratio of $\approx$30, lower than the typical ISM value. This low ratio may reflect an enhancement of $^{13}$CO in the ice due to either ion-molecule reactions in the gas before freeze-out or differences in the temperatures where $^{12}$CO and $^{13}$CO ices sublimate.} {The results reinforce the importance of low temperature grain surface chemistry for the formation of prebiotic molecules seen here in the gas after sublimation of the entire ice mantle. Systematic surveys of the molecules thought to be chemically related, as well as the accurate measurements of their isotopic composition, hold strong promises for understanding the origin of prebiotic molecules in the earliest stages of young stars.}
Understanding how, when and where complex organic and potentially prebiotic molecules are formed is a fundamental goal of astrochemistry and an integral part of origins of life studies. The recent images from the Atacama Large Millimeter/submillimeter Array (ALMA) of a potentially planet-forming disk around a young star with an age of only 0.5--1~Myr, HL Tau, \citep{hltau} have highlighted the importance of the physics and chemistry of the early protostellar stages: how do stars evolve during their earliest evolutionary stages and in particular, to what degree does the chemistry reflect this early evolution relative to, e.g., the conditions in the environment from which the stars are forming? Already during its first years ALMA has demonstrated enormous potential for addressing these issues with its high angular resolution and sensitivity making it possible to zoom in on solar system scales of young stars and map the chemical complexity in their environments \citep[e.g.,][]{pineda12,jorgensen12,jorgensen13,persson13,codella14,sakai14,friesen14,lindberg14alma,oya14,murillo15,podio15,belloche16,muller16}. A particular focus of ALMA observations is the search for complex molecules in regions of low-mass star formation. Over the last decade it has become clear that the chemical complexity toward the innermost envelopes of solar-type protostars can rival that of more massive hot cores \citep[see, e.g., review by][]{herbst09}. The presence of complex molecules is not solely attributed to such regions, however, but they are also found toward cold prestellar cores \citep[e.g.,][]{oberg10,bacmann12,vastel14} and toward outflow driven shocks \citep[e.g.,][]{arce08,sugimura11,mendoza14}. The big questions that remain include \emph{(a)} what degree of molecular complexity can arise during the protostellar stages, \emph{(b)} how exactly do complex organics form, \emph{(c)} what the roles are of grain-surface/ice-mantle vs. gas-phase reactions at low and high temperatures for specific molecules, and \emph{(d)} what the importance is of external conditions (e.g., cosmic ray induced ionization, UV radiation) and the physical environment (e.g., temperature). Many of these questions can potentially be addressed through systematic surveys with ALMA: with its high angular resolution we can zoom in on the smallest scales of young stars making it possible to unambiguously identify the emitting regions for different molecules. The advantage of studying the hot inner regions of the envelopes around protostars is that the ices there are fully sublimated and all the molecules are present in the gas-phase. ALMA's high sensitivity and spectral resolution allow for identification of faint lines of rare species and also observations of more quiescent sources (e.g., of lower masses) for which line confusion is reached at a much deeper level than for sources of higher masses. One particularly interesting source in this context is the well-studied protostellar binary, IRAS~16293$-$2422, the first low-mass protostar for which complex organic molecules \citep{vandishoeck95,cazaux03} as well as prebiotic species \citep{jorgensen12}, were identified -- the latter already during ALMA science verification.
This paper presents an overview of an unbiased survey, \emph{Protostellar Interferometric Line Survey (PILS)}\footnote{http://youngstars.nbi.dk/PILS}, of IRAS~16293$-$2422 with ALMA covering a wide frequency window from 329 to 363~GHz at 0.5\arcsec\ angular resolution (60~AU diameter) as well as selected other frequencies around 1.3~mm (230~GHz) and 3.0~mm (100~GHz). The paper provides a first overview of the observations and data and presents new results concerning the presence of glycolaldehyde and related species, as well as the first detections of its rarer $^{13}$C and deuterated isotopologues. The paper is laid out as follows: Sect.~\ref{overview} presents a detailed review of studies of the physics and chemistry of IRAS~16293$-$2422 as background for this and subsequent PILS papers and Sect.~\ref{observations} presents an overview of the details of the observations and reduction. Sect.~\ref{results} presents the overall features of the datasets including the continuum emission at the three different wavelengths and information about the line emission, while Sect.~\ref{analysis} focuses on the analysis of the emission from glycolaldehyde and related molecules (Sect.~\ref{glycolaldehydeisomers}) and its isotopologues (Sect.~\ref{isotopologues}) with particular emphasis on the constraints on formation scenarios for these species. Section~\ref{summary} summarises the main findings of the paper.
\label{summary} We have presented an overview and some of the first results from a large unbiased spectral survey of the protostellar binary IRAS~16293$-$2422 using ALMA. The full frequency window from 329 to 363~GHz is covered with a spectral resolution of 0.2~km~s$^{-1}$ and beam size of 0.5$''$ (60~AU diameter at the distance of IRAS~16293$-$2422) in addition to three selected settings at 3.0 and 1.3~mm. The main findings of this paper are: \begin{itemize} \item The continuum is well detected in each band with different signatures toward the two protostars: clear elongated emission seen toward IRAS16293A in the direction of a previously reported velocity gradient, making it appear as a flattened edge-on structure. The previously reported binarity of this source at submillimeter wavelengths is not confirmed in these data. In contrast, the emission for IRAS16293B is clearly optically thick out to wavelengths of approximately 1~mm (and possibly beyond). The optical thickness of the emission toward IRAS16293B confirms that it has its origin in a different component than the larger scale envelope, with a density larger than $3\times 10^{10}$~cm$^{-3}$. \item More than 10,000 lines are seen in spectra toward IRAS16293B. The high gas density implied by the continuum radiation means that lines of typical complex organic molecules are thermalised and LTE is thus a valid approximation for their analysis. \item The spectra at 3.0, 1.3 and 0.8~mm provide strong confirmations of the previous detection of glycolaldehyde as well as the derived excitation temperature of 300~K based on ALMA Science Verification data \citep{jorgensen12}. In addition the spectra show detections of acetic acid (isomer of glycolaldehyde) and two conformers of ethylene glycol (the reduced alcohol of glycolaldehyde) with the detection of one of these, the $gGg'$ conformer, being the first reported in the ISM. The excitation temperatures of these species are consistent, $\approx$~300~K. The abundance of glycolaldehyde is comparable to that of the main conformer of ethylene glycol with the second conformer not much rarer, as one would expect given the high temperatures in the gas. Small differences between the relative glycolaldehyde and ethylene glycol abundances in data of different beam sizes possibly reflect glycolaldehyde being slightly more centrally concentrated than ethylene glycol. \item Relative to methanol (determined from observations of optically thin lines of the CH$_3^{18}$OH isotopologue), the abundances of glycolaldehyde and related species are between 0.03\% and a~few~\%. Glycolaldehyde and ethylene glycol are clearly more abundant relative to methanol toward IRAS16293B compared to the Galactic Center source SgrB2(N) \citep{belloche13}, whereas the abundances of the glycolaldehyde isomers (methyl formate and acetic acid) and acetaldehyde are comparable for the two sources. A possible explanation for this is the formation of glycolaldehyde from CO at low temperatures in ices toward IRAS~16293$-$2422, in agreement with recent laboratory experiments -- a route that is unlikely to apply for the warmer Galactic Center. \item The data also show detections of two $^{13}$C-isotopologues of glycolaldehyde as well as three deuterated isotopologues. These are the first detections of these five isotopologues reported for the ISM, enabled by the narrow line-widths toward IRAS16293B.
The D/H ratio for glycolaldehyde is $\approx$~5\% for all three deuterated isotopologues with no measurable differences for the deuteration of the different functional groups. These ratios are higher than in water, but lower than reported D/H ratios for methanol, formaldehyde and other complex organics, although those should be revisited at the same scales. The $^{12}$C/$^{13}$C ratio of glycolaldehyde derived from our data, $\approx$~30, is lower than the canonical ISM value. This may reflect a low $^{12}$CO/$^{13}$CO ratio in the ice from which it is formed, either due to ion-molecule reactions in the gas or differences in binding energies for the different CO isotopologues (the column-density bookkeeping behind such isotopologue ratios is illustrated schematically below). \end{itemize} This first presentation of data has only scratched the surface of all the information available in the survey, but has already raised a number of new questions concerning, in particular, the formation of complex organic molecules around protostars. Moving forward it is clear that the possibility to systematically derive accurate (relative) abundances of different organic molecules (and their isotopologues) will be an important tool. In this respect, many of the answers to the questions concerning the origin of complex, prebiotic, molecules may be hidden in this rich ALMA dataset.
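To make that bookkeeping explicit, the snippet below applies the standard optically thin LTE relation between an integrated line intensity and a total column density, and then forms an isotopologue ratio. The line parameters, partition function value, and integrated intensities are placeholders chosen only for illustration (they are not entries from this survey), and the actual analysis fits full synthetic spectra rather than single lines.
\begin{verbatim}
import numpy as np

k_B, h, c = 1.380649e-23, 6.62607015e-34, 2.99792458e8   # SI

def column_density(W_K_kms, nu_GHz, A_ul, g_u, E_u_K, Q_Tex, T_ex=300.0):
    """Total column density [cm^-2] from one optically thin line in LTE
    (Rayleigh-Jeans limit):
       N_u   = 8 pi k nu^2 / (h c^3 A_ul) * Int(T_B dv)
       N_tot = N_u * Q(T_ex)/g_u * exp(E_u/T_ex)"""
    W = W_K_kms * 1.0e3                        # K m/s
    nu = nu_GHz * 1.0e9                        # Hz
    N_u = 8.0 * np.pi * k_B * nu**2 * W / (h * c**3 * A_ul)      # m^-2
    return N_u * (Q_Tex / g_u) * np.exp(E_u_K / T_ex) * 1.0e-4   # cm^-2

# Placeholder lines of a main and a deuterated species (illustrative numbers)
N_main = column_density(W_K_kms=5.00, nu_GHz=340.0, A_ul=3e-4,
                        g_u=60, E_u_K=250.0, Q_Tex=2.0e4)
N_deut = column_density(W_K_kms=0.25, nu_GHz=338.0, A_ul=3e-4,
                        g_u=60, E_u_K=250.0, Q_Tex=2.0e4)
print("N(main) ~ %.1e cm^-2" % N_main)
print("D/H     ~ %.1f%%" % (100.0 * N_deut / N_main))
\end{verbatim}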
16
7
1607.08733
1607
1607.06518_arXiv.txt
The relations between star formation and properties of molecular clouds are studied based on a sample of star forming regions in the Galactic Plane. Sources were selected by having radio recombination lines to provide identification of associated molecular clouds and dense clumps. Radio continuum and mid-infrared emission were used to determine star formation rates, while \coo\ and submillimeter dust continuum emission were used to obtain masses of molecular and dense gas, respectively. We test whether total molecular gas or dense gas provides the best predictor of star formation rate. We also test two specific theoretical models, one relying on the molecular mass divided by the free-fall time, the other using the free-fall time divided by the crossing time. Neither is supported by the data. The data are also compared to those from nearby star forming regions and extragalactic data. The star formation ``efficiency,'' defined as star formation rate divided by mass, spreads over a large range when the mass refers to molecular gas; the standard deviation of the log of the efficiency decreases by a factor of three when the mass of relatively dense molecular gas is used rather than the mass of all the molecular gas.
Given the importance of star formation in the evolution of galaxies, understanding the regulation of star formation is crucial. Early work on star formation on galaxy scales relied on empirical star formation laws \citep{Schmidt:1959,Kennicutt:1989}, with little connection to the detailed studies of star formation in our own Galaxy. Recently there has been more focus on integrating the understanding of the process of star formation from the scale of galaxies to the much smaller scales of regions within molecular clouds \citep{Kennicutt:2012,Kruijssen:2014,Krumholz:2014}. While large scale studies provide essential information on the relation between large scale properties of galaxies and star formation, the process of converting gas into stars takes place on a smaller scale. Since molecular clouds (MCs) are the sites of star formation in galaxies, it is essential to establish the key processes and sequences within molecular clouds that regulate the production of newborn stars in order to gain a deeper understanding of processes at galactic scales. While there are some recent high-spatial resolution studies of nearby galaxies that can resolve regions of MCs (e.g., \citealt{2007ApJ...661..830R, 2013ApJ...779...46H}), the Milky Way offers the highest resolution view to investigate the connection between star formation and the local gas properties. There are several recent surveys of dust and molecular line emission in the Milky Way that provide information on the distributions and properties of \gmc s. Ideally, star formation in \gmc s can be directly evaluated by identifying stars or young-stellar objects inside the clouds, which along with the information on their mass and lifetime provide a good estimate of star formation rate (SFR) for the clouds. This direct method of estimating SFR has been applied for nearby ($d < 830$ pc) molecular clouds \citep{Heiderman:2010,Gutermuth:2011,Lada:2010,Evans:2014}, but these have a limited range in properties, making it difficult to test theories for the importance of cloud properties in controlling star formation rates. Furthermore, they are primarily low-mass ($\mean{\mcloud} \sim 3000$ \msun) clouds \citep{Heiderman:2010} whose star formation does not fully sample the IMF. Their star formation activity would be almost entirely invisible to observers in other galaxies. The goal of this paper is to extend this effort to larger clouds where massive stars are formed, both to sample a larger range of cloud properties and to examine regions more comparable to those that can be observed in other galaxies. The challenge is that the more massive clouds with more fully sampled IMFs are all quite distant; even Orion does not sufficiently sample the IMF to use the extragalactic indicators of star formation rate \citep{Kennicutt:2012}. For those distant clouds, counting YSOs is very difficult, both because of sensitivity limits and because of background source confusion \citep{Dunham:2011}, although recent work has been more successful \citep{Heyer:2016}. To study star formation in a larger sample of Galactic \gmc s, we resort to indirect tracers of SFR, such as those commonly used in extragalactic studies including H$\alpha$, UV continuum, total infrared luminosity, mid-infrared emission, and radio continuum emission. The shorter wavelength tracers (H$\alpha$, UV continuum) cannot be used in the plane of the Galaxy because of dust obscuration, and the total far-infrared luminosity awaits full release of surveys with {\it Herschel}. 
In this study, we use mid-infrared and radio continuum emission. It is known that these indirect tracers derived from extragalactic data are problematic when applied to smaller regions such as \gmc s. The problem arises mostly from the assumptions of a fully-sampled IMF and a star formation history that is constant over a long timescale \citep{Kruijssen:2014,Krumholz:2015}. Several recent studies of SFR tracers in regions with different properties suggest that some tracers offer reasonable measures of SFR (although still with large scatter) in regions above a certain minimum SFR \citep{Wu:2005,Vutisalchavakul:2013}. In this paper, we collect data from surveys of radio recombination lines, radio continuum, and mid-infrared emission to measure SFR, \coo\ spectroscopy to evaluate MC properties, and millimeter dust continuum emission to trace the dense gas component. We describe these data sets in \S \ref{data} and summarize our selection of star forming regions and their association with gas in \S \ref{analysis}. Various models for star formation prediction are tested with these data in \S \ref{tests}. In \S \ref{lowcomp}, we compare our results to similar studies of nearby clouds, and in \S \ref{exgal}, we put our results into the context of studies of other galaxies.
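To make the quantities being tested concrete, the sketch below evaluates, for a single illustrative cloud, the free-fall time, the efficiency per free-fall time $\epsff = {\rm SFR}\times\tff/\mcloud$, and the ratio $\tff/\tdyn$ with $\tdyn$ taken as a crossing time $R/\sigma_v$. The cloud numbers are invented for illustration, and the exact definitions of radius, mean density, and $\tdyn$ are assumptions that may differ in detail from those adopted in \S \ref{tests}.
\begin{verbatim}
import numpy as np

G = 6.674e-8                                  # cgs
M_sun, pc, Myr = 1.989e33, 3.086e18, 3.156e13

def cloud_metrics(M_cloud_Msun, R_pc, sigma_v_kms, SFR_Msun_per_Myr):
    """Free-fall time, crossing time, and efficiency per free-fall time
    for a uniform-density spherical cloud (a deliberately simple model)."""
    M, R = M_cloud_Msun * M_sun, R_pc * pc
    rho = 3.0 * M / (4.0 * np.pi * R**3)              # g cm^-3
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho))    # s
    t_dyn = R / (sigma_v_kms * 1.0e5)                 # crossing time, s
    SFR = SFR_Msun_per_Myr / Myr                      # Msun s^-1
    eps_ff = SFR * t_ff / M_cloud_Msun                # dimensionless
    return t_ff / Myr, t_dyn / Myr, eps_ff

# Illustrative cloud: 1e5 Msun, R = 20 pc, sigma_v = 3 km/s, SFR = 100 Msun/Myr
t_ff, t_dyn, eps_ff = cloud_metrics(1.0e5, 20.0, 3.0, 100.0)
print("t_ff = %.1f Myr, t_dyn = %.1f Myr" % (t_ff, t_dyn))
print("eps_ff = %.4f, t_ff/t_dyn = %.2f" % (eps_ff, t_ff / t_dyn))
\end{verbatim}
For these made-up numbers the efficiency per free-fall time comes out at a fraction of a per cent, in the general range found for molecular clouds; the tests in \S \ref{tests} ask whether such quantities actually organize the Galactic Plane sample.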
We compiled a sample of Galactic \gmc s that are associated with \hii\ regions and estimated their properties and SFR. The analysis of \gmc s, \hii\ regions, and SF tracers (both radio continuum and MIR emission) shows different degrees of associations between molecular gas and star formation. We classified the \gmc s into different groups: \gmc s with embedded \hii\ regions, \gmc s with overlapping \hii\ regions, and \gmc s with separated \hii\ regions. We did not use the last group because association between molecular gas and star formation was too uncertain. The sample was used to test relations between SFR and properties of \gmc s. We tested four different models of star formation. No significant correlation was found between \epsff\ and $\tff/\tdyn$. Significant correlations exist between SFR and \mcloud, \md, and $\mcloud/\tff$. The relation between SFR and \md\ is consistent with linear, while the other two are significantly non-linear, unlike extragalactic relations or the theoretical model by \citet{Krumholz:2012} for $\mcloud/\tff$. Combining the data on Galactic Plane clouds presented in this paper with that on nearby clouds shows that the star formation efficiency of the nearby clouds is higher when efficiency is measured versus \mcloud\ or \mcloud/\tff. The efficiency per mass of dense gas is very similar for the nearby clouds and the Galactic plane clouds. Adding extragalactic studies, we can extend the range of relevant mass scales over 7 orders of magnitude. The star formation efficiency for dense gas shows remarkable stability over this range, varying over a factor of 4, while that for total molecular gas varies by a factor of 40. The standard deviation in the log of the SFE(Myr$^{-1}$) decreases by about a factor of 3 to a value of 0.19 when dense gas mass, rather than molecular mass, is used. \medskip We thank the anonymous referee for a careful reading and excellent suggestions which have improved the paper. We also thank C. McKee, C. Federrath, G. Parmentier, M. Fall, and B. Ochsendorf for comments. We are grateful to the BGPS team for sharing ideas and information over many years. We particularly thank B. Svoboda for calculations to characterize the extinction threshold for the BGPS sample. H. Chen kindly provided data on galaxies. This work was supported by NSF grant AST-1109116 to the University of Texas at Austin. MH acknowledges support from NASA ADAP grant NNX13AF08G to the University of Massachusetts.
16
7
1607.06518
1607
1607.07458_arXiv.txt
{ Brown dwarf disks are excellent laboratories to test our understanding of disk physics in an extreme parameter regime. In this paper we investigate a sample of 29 well-characterized brown dwarfs and very low-mass stars, for which Herschel far-infrared fluxes and (sub)-mm fluxes are available. We measured new Herschel PACS fluxes for 11 objects and complement these with (sub)-mm data and Herschel fluxes from the literature. We analyze their spectral energy distributions in comparison with results from radiative transfer modeling. Fluxes in the far-infrared are strongly affected by the shape and temperature of the disk (and hence stellar luminosity), whereas the (sub)-mm fluxes mostly depend on disk mass. Nevertheless, there is a clear correlation between far-infrared and (sub)-mm fluxes. We argue that the link results from the combination of the stellar mass-luminosity relation and a scaling between disk mass and stellar mass. We find strong evidence of dust settling to the disk midplane. The spectral slopes between near- and far-infrared are mostly between $-0.5$ and $-1.2$ in our sample, which is comparable to more massive T Tauri stars; this may imply that the disk shapes are similar as well, although highly flared disks are rare among brown dwarfs. We find that dust temperatures in the range of 7--15\,K, calculated with $T\approx25\,(L/L_\odot)^{0.25}$\,K, are appropriate for deriving disk masses from (sub)-mm fluxes for these low luminosity objects. About half of our sample hosts disks with at least one Jupiter mass, confirming that many brown dwarfs harbor sufficient material for the formation of Earth-mass planets in their midst. }
Brown dwarfs (BDs) are a common outcome of star formation and have been found in large numbers in all nearby star-forming regions \citep[see review by][]{luh12}. Just like their more massive stellar siblings, young brown dwarfs are surrounded by dusty disks. Brown dwarfs have masses of 0.01--0.08$\,M_{\odot}$ and luminosities several orders of magnitude lower than stars, while their accretion rates and the disk masses are also lower. Their disks represent an interesting laboratory for studying the evolutionary processes in disks, particularly those related to planet formation, and our methods for inferring disk properties, in an extreme parameter range. Brown dwarf disks, originally found in near- and mid-infrared surveys \citep{com98,mue01,nat01,nat02, jay03}, are now investigated over the infrared, submillimeter, and millimeter spectral range. For the most part, the general evolutionary blueprint adopted for stellar disks applies to brown dwarf disks as well. The disk lifetimes are 5--10\,Myr and thus comparable to or slightly longer than in the stellar regime \citep{daw13,luh12a}. At a given age, brown dwarf disks show a range of masses and geometries, including flat and flared disks as well as disks with inner opacity holes \citep{moh04}. As in stellar disks, evidence for the presence of large, mm-sized dust grains has been found by various methods \citep[e.g.,][]{apa05,sch07}, and was recently demonstrated based on the first brown dwarf data from the Atacama Large Millimeter Array (ALMA) \citep{ric12,ric14}. Observations of brown dwarfs in the (sub)-mm and millimeter domain show that disk masses scale with stellar mass, at around 1\% of the stellar mass, albeit with large scatter \citep{sch06,kle03,moh13,and13}. Substellar disk masses rarely exceed 0.001$\,M_{\odot}$, thus only the brightest BD disks have been detected at these wavelengths. Only recently has it been possible to investigate brown dwarfs in the far-infrared between 70 and 160\,\mum, thanks to the Herschel space observatory. \citet{har12b,har12} carried out a Herschel survey of disks around $\sim 40$ very low-mass objects (VLMOs) at 70 and 160\,\mum. Comparing their data with radiative transfer models, they infer a wide range of disk masses (from $<$$10^{-6}$ to $10^{-3}\,M_{\odot}$). While the upper limit agrees with the values inferred from (sub)-mm observations, the minimum values are lower than expected. A few other groups have recently published their analyses of VLMO disks based on Herschel/PACS data (i.e., wavelength of 70--160\,\mum): \citet[detections for 12 BDs in $\rho$-Oph]{alv13}, \citet[detections for a few VLM stars in Chamaeleon]{olo13}, \citet[detections for 58 VLMOs in Taurus]{bul14}, and \citet[detections for 5 VLMOs in the TW Hydrae association]{liu15a}, most without (sub)-mm detection. One finding of these studies is that the geometrical parameters of the disk, in particular the flaring angle, seem to be similar in VLMOs compared with more massive stars. Finally, \citet{joe13} report a highly flared disk with a mass of $10^{-4}\,M_{\odot}$ for the isolated planetary-mass object OTS44, using a spectral energy distribution (SED) with a detection at 70\,\mum\ and an upper limit at 160\,\mum, but without (sub)-mm data points. So far, the focus has been on the analysis of VLMO disks in specific wavelength domains (e.g., near-infrared (NIR), far-infrared (FIR), or (sub)-mm), but there is little work on the links between the different parts of the SEDs of these sources. 
In this paper, we set out to study the FIR and sub-mm to mm fluxes for a sample of well-characterized VLMOs with masses below 0.2\,\Msun, for which information is available in both these wavelength domains. This approach allows us to investigate specifically physical properties of the disk that affect the long-wavelength portions of the SED, including the degree of flaring, disk mass, dust temperature, and the interdependence among these parameters.
\subsection{Correlation between FIR and (sub)-mm fluxes}\label{sec:fFIR_vs_fmm} We have shown in Sect.~\ref{sec:phot} (Fig.~\ref{fig:fFIR_vs_fmm}) that the observed FIR fluxes of our VLMO sample scale positively with the mm fluxes. A similar trend is also seen in Herbig Ae/Be stars \citep{pas16}. How can this be interpreted in the light of our modeling result that FIR fluxes do not scale strongly with disk mass for a star of fixed luminosity? Radiation transfer models show that the (sub)-mm flux depends primarily on disk mass and only weakly on the other parameters. In contrast, FIR fluxes depend mostly on the temperature distribution, which itself is controlled by the stellar luminosity and disk shape. The dependence of the FIR fluxes on disk mass is weak, becoming only slightly stronger in more flared disks. In particular, the dependence of FIR on disk mass is much weaker than the linear relation between $F_\mathrm{mm}$ and $M_\mathrm{d}$ as given by eq.~(\ref{eq:eq2}), which accordingly does not serve to correlate FIR and $F_\mathrm{mm}$ disk emission. Self-consistent hydrostatic equilibrium models \citep[e.g.,][]{sic15} show that, for fixed stellar parameters, disks of increasing mass are more flared and, therefore, have stronger FIR fluxes. However, even for fully flared disks the predicted correlation between the FIR flux and the (sub)-mm flux for a star of fixed luminosity is much weaker than linear. It is noteworthy that optical depth apparently plays a subordinate role for the FIR--\Mdisk\ relation; despite the fact that the least massive disks are optically thin at most radii ($>$2\,AU at 100\,\mum) and that optically thin layers always contribute to the overall flux, even in regions where the vertical optical depth is $>$1, our models show no strong correlation of FIR with disk mass. We therefore provide an alternative explanation linking FIR emission directly to the total luminosity of the VLMOs. We suggest that the apparent correlation between $F_\mathrm{FIR}$ and $F_\mathrm{mm}$ is in fact the result of two underlying correlations, namely the mass-luminosity relation for young VLMOs and a correlation of disk mass with stellar mass, paired with the observed strong correlation (Spearman's rank correlation parameters between 0.5 and 0.8 for all detections excluding binaries) of $F_\mathrm{FIR}$ and $L_\star$ (Fig.~\ref{fig:fFIR_vs_Lstar}). \begin{figure}[tb] \centering \includegraphics[width=1\columnwidth]{f08.eps} \caption{\label{fig:fFIR_vs_Lstar} $F_{160}$ flux as a function of target luminosity. Symbols as in Fig.~\ref{fig:fFIR_vs_fmm}. Best linear fit to all values excluding upper limits and binaries shown as dashed line. Measured slopes $m_\mathrm{FIR}$ of best linear fits $\log(F_\mathrm{FIR})\propto m_\mathrm{FIR}\log(L_\star)$ in all FIR bands: $m_{70}$\,=\,1.30$\pm$0.25, $m_{100}$\,=\,1.09$\pm$0.39, and $m_{160}$\,=\,1.14$\pm$0.24.} \end{figure} What is observed in Fig.~\ref{fig:fFIR_vs_fmm} could then be traced back to a correlation $F_\mathrm{mm}$\,$\leftrightarrow$\,\Mdisk, according to eq.~\ref{eq:eq2}, a scaling between \Mdisk\ and $M_\star$ \citep[assumed to be linear, e.g.,][]{and13}, and a $M_\star$--$L_\star$ relation \citep[$L_\star$\,$\approx$\,$M_\star^{1.53}$ determined for 1\,Myr isochrones by][]{bar15} (the result is not sensitive to the particular $M$--$L$ correlation).
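For clarity, the chain of scalings just invoked can be written out explicitly (this is simply the combination of the relations quoted above, with the assumed $M_\mathrm{d}$--$M_\star$ scaling as the main uncertainty):
\begin{equation}
F_\mathrm{mm} \;\propto\; M_\mathrm{d} \;\propto\; M_\star \;\propto\; L_\star^{1/1.53} \;\approx\; L_\star^{0.65},
\end{equation}
so a linear $M_\mathrm{d}$--$M_\star$ scaling predicts $F_\mathrm{mm}\propto L_\star^{0.65}$; steepening the $M_\mathrm{d}$--$M_\star$ relation to a power of 1.7--1.8, as discussed below, would instead give $F_\mathrm{mm}\propto L_\star^{1.1}$--$L_\star^{1.2}$.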
The predicted slope $m$ of the correlation between $F_\mathrm{mm}$ and $L_\star$ of $m$\,=\,0.65\,$[\log(\mathrm{mJy})/\!\log(L_\odot)]$ is, however, shallower than what is seen in the observations ($m$\,=\,1.30$\pm$0.27\,$[\log(\mathrm{mJy})/\!\log(L_\odot)]$). The weak additional dependence between mm fluxes and other parameters (in particular, the temperature) or a stronger dependence between disk mass and stellar mass can make the trend slightly steeper and closer to the observed one. In fact, the most recent study of the \Mdisk--$M_\star$ relation \citep{ans16} suggests that the correlation is steeper than linear with a coefficient of 1.7--1.8 (for regions with ages of 1--2\,Myr). Their sample extends to masses of about 0.1\,\Msun. If this correlation continues into the substellar regime, it would bring our estimate above for the trend between FIR and $F_\mathrm{mm}$ in agreement with the observations. \subsection{Spectral slopes in models and observations}\label{sec:flaring} In Fig.~\ref{fig:IRslope_vs_mm_withmodels} we compare the spectral slopes derived from the observations (in gray) with the numbers produced by the models. \begin{figure}[tb] \centering \includegraphics[width=1\columnwidth]{f09.eps} \caption{\label{fig:IRslope_vs_mm_withmodels} Model predictions of infrared slopes as a function of mm flux. Colors and symbols as in Fig.~\ref{fig:colors}. For reference, our measurements from the bottom panel of Fig.~\ref{fig:IRslope_vs_mm} were included in gray.} \end{figure} This is similar to the bottom panel of Fig.~\ref{fig:IRslope_vs_mm}, but with models overplotted. We show here the $J$$-$160 slopes because most literature objects do not have a 100\,\mum\ flux. Our results do not depend on that particular choice. As explained in Sect.~\ref{sec:models}, the shape of the disk is varied in the models by changing the flaring index and the scale height $H_0$, which are both listed in Table~\ref{tab:tab4}. The models cover three cases, from fully flared to very flat disk. The spectral slope depends strongly on the disk shape, but little on other parameters and, particularly, not on the disk mass and stellar luminosity. From Fig.~\ref{fig:IRslope_vs_mm_withmodels} it is clear that the observed slopes are well reproduced by the models for moderately flared and flat disks. The fully flared disks, on the other hand, with a very high flaring index of 1.35 and large $H_0$, predict spectral slopes that are not seen in our sample. Since our sample contains objects that are bright in the FIR (i.e., they are detected with Herschel), it is unlikely that objects with higher spectral slopes exist but are missing. A comparison with literature samples in Sect.~\ref{sec:slopes} provides further confirmation. We conclude that the overwhelming majority of disks around very low-mass stars and brown dwarfs shows some flattening of the dust geometry compared to the hydrostatic case. Our modeling results are in line with the findings of other groups. In \citet{sch06} it was already found that brown dwarf disks do not have the elevated flux levels expected for the hydrostatic case. Papers by \citet{har12}, \citet{alv13}, and \citet{liu15} fit SEDs with models calculated with a prescription analogous to ours for samples of mostly brown dwarfs (see Sect.~\ref{sec:obs} for more details on their samples). These authors all find that the flaring index ranges from 1.0 to 1.2 and the scaling factor $H_p(100\mathrm{\,AU})$ ranges from a few to 20, which again excludes the fully flared disks. 
\citet{liu15} also point out that the flaring index $\xi$ might be a function of spectral type (and thus stellar mass) in the brown dwarf regime with lower flaring indices for spectral types later than M8. Our sample does not extend to these late spectral types and therefore we cannot test this particular result. \subsection{Disk masses from mm fluxes}\label{sec:Md_vs_fmm} Before Herschel results were available, disk masses for brown dwarfs were mostly derived from single-band measurements at 0.85 or 1.3\,mm \citep[e.g.,][]{kle03,sch06,moh13}, using eq.~(\ref{eq:eq2}) with fixed temperature and opacity. At these wavelengths, the disks are optically thin and the emitted flux can be assumed to be proportional to the dust masses. For this calculation, it is assumed that dust grains emit as blackbodies; realistic values for opacity and dust temperature have to be adopted. This is the method we used in Sect.~\ref{sec:diskmasses} to estimate the disk masses in our sample. With the models presented in Sect.~\ref{sec:models} we can check the validity of the assumptions for the blackbody-based disk masses and evaluate uncertainties. In Fig.~\ref{fig:temps} we show the dust temperature required to reproduce the model disk mass with the blackbody-based prescription used in Sect.~\ref{sec:diskmasses} from the 1300\,\mum\ flux ($T_{1300}$) as a function of the slope $\alpha_{J-160}$ for the BD and TTS models. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{f10.eps} \caption{Values of the temperature needed to recover the model disk mass from the 1300\,\mum\ flux $T_{1300}$ vs. $\alpha_{J-160}$ for BD and TTS models. Colors and symbols as in Fig.~\ref{fig:colors}. $T_{1300}$ increases in more flared disks and decreases with the disk mass. TTS models, which have an $L_\star$ five times larger than the BD models, have higher $T_{1300}$; however, there is a large region of overlap. The dashed lines show the values of $T_{1300}$ according to the \citeauthor{and13} prescription ($T_\mathrm{A} = 25(L_\star/L_\odot)^{1/4}$\,K), green for the BD luminosity and magenta for TTS.} \label{fig:temps} \end{figure} Very similar results are obtained for the 890\,\mum\ flux. The values of $T_{1300}$ reflect the overall temperature distribution in the disk; they are larger for more flared disks, as shown by the positive correlation of $T_{1300}$ with $\alpha_{J-160}$. The value $T_{1300}$ is lower for more massive disks because stellar radiation does not penetrate the disk as deeply and the average temperature (at a given distance from the star) remains low. For some models, $T_{1300}$ is very low, well below 10\,K. Only very flared, low-mass BD and TTS disks have $T_{1300}\gtrsim20$\,K. If we consider the observed range of $\alpha_{J-160}$, we can conclude that for most disks the appropriate values of $T_{1300}$ are in the range 10--15\,K. Higher values do not seem justified. In Fig.~\ref{fig:temps}, the horizontal lines indicate the temperatures expected for the BD and TTS luminosity adopted in our models, based on the scaling law derived by \citet{and13}, which we have also used in Sect.~\ref{sec:diskmasses}. The values are $\sim$10 and 15\,K, respectively. Most models in the observed range of $\alpha_{J-160}$ have $T_{1300}$ within this range and the adoption of their scaling law therefore appears justified.
Fig.~\ref{fig:BBdiskmass_vs_temp} plots, as a function of $\alpha_{J-160}$, the difference between the values of \Mdisk\ obtained with $T_A$ from the \citeauthor{and13} prescription and the model disk mass. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{f11.eps} \caption{Ratio of the disk mass computed from $T_A$ over model disk mass as a function of $\alpha_{J-160}$. Colors and symbols as in Fig.~\ref{fig:colors}.} \label{fig:BBdiskmass_vs_temp} \end{figure} The difference is typically less than a factor of two, which is the maximum error that is made by adopting the blackbody assumption with the luminosity-scaled dust temperature. However, it should be noted that many BD and TTS models have very similar values of $T_{1300}$, in spite of the factor 5 difference in $L_\star$. The \citeauthor{and13} $T_{1300}$ values, which scale only with $L_\star$, may introduce a spurious trend in the disk mass, especially if extended to very low $L_\star$, when $T_{1300}$ may be nominally lower than the value of $\sim$7\,K owing to heating by the IRF. A more accurate value of $T_{1300}$ can be obtained if $\alpha_{J-160}$ or any other FIR slope is known. In the case in which only FIR fluxes and no mm fluxes are available, one requires an accurate stellar luminosity and a careful assessment of the temperature structure of the disk, since FIR fluxes depend strongly on temperature and disk shape and only weakly on disk mass (see Fig.~\ref{fig:colors}). Very recently, \citet{van16} proposed a new prescription to derive the disk mass from single-wavelength (sub)-mm fluxes. They favor a scaling law with $L_\star$ that is flatter than the \citet{and13} law, namely $T_\mathrm{d}=22\,\mathrm{K}\,(L_\star/L_\odot)^{0.16}$, based on a grid of models with fixed shape ($H_0(100\mathrm{AU})=10$\,AU, $\xi=1.125$). The difference between the two laws is larger at lower $L_\star$, becoming a factor $\sim$1.6 for $L_\star=0.001\,L_\odot$. The results obtained with the two scaling laws are compared to the disk mass derived by fitting the SED of eight very low luminosity ($L_\star \lesssim0.01\,L_\odot$) objects in the Upper Scorpius star-forming region. Masses derived with the \citeauthor{and13} scaling law are on average 3.5 times larger than the values assumed in their radiative transfer model, a discrepancy that is significantly reduced when using the new scaling law. It is possible that this is due in large part to the fact that disks in Upper Scorpius tend to be more settled and flatter than disks in younger star-forming regions such as Taurus \citep{sch07,sch12}. If so, the \citeauthor{van16} results confirm our conclusion that the disk shape is an important factor when deciding on the best value of $T_\mathrm{d}$ to adopt. However, the differences in disk masses are in general not very large. We conclude that the major drawback in adopting a single scaling law to measure disk masses from (sub)-mm fluxes for a sample of objects is not so much in the error on individual objects but in the trend that this automatically introduces when comparing disks around stars of different luminosity and mass, such as those shown in Fig.~\ref{fig:disk_mass}.
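As a concrete illustration of the single-temperature prescription discussed in this subsection, the snippet below evaluates $M_\mathrm{d} = F_\nu d^2 / [\kappa_\nu B_\nu(T_\mathrm{d})]$ with the luminosity-scaled temperature $T_\mathrm{d}=25\,(L_\star/L_\odot)^{1/4}$\,K. The dust opacity, gas-to-dust ratio, temperature floor, and example flux are assumptions of this sketch (commonly used values, not numbers taken from this paper).
\begin{verbatim}
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs
M_sun, pc, Jy = 1.989e33, 3.086e18, 1.0e-23

def planck(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def disk_mass(F_mJy, d_pc, L_Lsun, nu_GHz=230.0,
              kappa_dust=2.3, gas_to_dust=100.0, T_floor=7.0):
    """Optically thin, single-temperature (sub)-mm disk mass estimate.
    kappa_dust ~ 2.3 cm^2 per gram of dust at 1.3 mm is a commonly assumed
    value; T_d follows the 25 (L/Lsun)^0.25 K scaling with a floor for
    external heating."""
    T_d = max(25.0 * L_Lsun**0.25, T_floor)                 # K
    F = F_mJy * 1.0e-3 * Jy                                 # erg s^-1 cm^-2 Hz^-1
    d = d_pc * pc
    M_dust = F * d**2 / (kappa_dust * planck(nu_GHz * 1.0e9, T_d))
    return T_d, gas_to_dust * M_dust / M_sun                # total disk mass [Msun]

# Illustrative brown-dwarf disk: 5 mJy at 1.3 mm, d = 140 pc, L = 0.02 Lsun
T_d, M_d = disk_mass(F_mJy=5.0, d_pc=140.0, L_Lsun=0.02)
print("T_d = %.1f K, M_disk ~ %.1e Msun (gas + dust)" % (T_d, M_d))
\end{verbatim}
For these assumed inputs the estimate lands at about $2.5\times10^{-3}\,M_\odot$, i.e., a few Jupiter masses; the sensitivity of the result to the adopted $T_\mathrm{d}$ is what drives the factor-level uncertainties discussed above, and it matters most for the lowest-luminosity objects.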
16
7
1607.07458
1607
1607.07399_arXiv.txt
We describe the development of an ambient-temperature continuously-rotating half-wave plate (HWP) for study of the Cosmic Microwave Background (CMB) polarization by the POLARBEAR-2 (PB2) experiment. Rapid polarization modulation suppresses 1/f noise due to unpolarized atmospheric turbulence and improves sensitivity to degree-angular-scale CMB fluctuations where the inflationary gravitational wave signal is thought to exist. An HWP modulator rotates the input polarization signal and therefore allows a single polarimeter to measure both linear polarization states, eliminating systematic errors associated with differencing of orthogonal detectors. PB2 projects a 365-mm-diameter focal plane of 7,588 dichroic, 95/150 GHz transition-edge-sensor bolometers onto a 4-degree field of view that scans the sky at $\sim$ 1 degree per second. We find that a 500-mm-diameter ambient-temperature sapphire achromatic HWP rotating at 2 Hz is a suitable polarization modulator for PB2. We present the design considerations for the PB2 HWP, the construction of the HWP optical stack and rotation mechanism, and the performance of the fully-assembled HWP instrument. We conclude with a discussion of HWP polarization modulation for future Simons Array receivers.
% \label{sec:intro} Precise characterization of the Cosmic Microwave Background (CMB) polarization anisotropies stands at the forefront of modern cosmology. Of particular importance is the parity-odd, divergence-free B-mode polarization pattern uniquely generated by primordial gravitational waves \cite{seljakGW, zaldarriagaAllSky, kamionkowskiCurl} and gravitational lensing \cite{huOkamoto, lewisChallinor}. The B-mode signal created by the primordial gravitational wave background (GWB) is thought to be the fingerprint of inflation \cite{guth, linde}---an epoch of exponential spatial expansion $\sim 10^{-30}$ seconds after the Big Bang---and peaks at degree angular scales. The B-mode signal created by gravitational lensing (GL) of parity-even E-modes into B-modes encodes information about large scale structure formation \cite{seljakGravPot, zaldProjMatDensity} and peaks at arcminute angular scales. The GWB remains undetected \cite{bicep2} while the GL signal is just beginning to be explored \cite{pb1BB, sptBB, actBB}. Therefore, there exists a wealth of B-mode physics yet to be harnessed, including the behavior of gravity at grand-unification energies \cite{kamionkowskiParticlePhys} and the impact of neutrinos on cosmological evolution \cite{abazajian}. \subsection{The POLARBEAR-2 experiment} % \label{sec:pb2} The POLARBEAR-2 (PB2) receiver will observe the CMB polarization anisotropies from the Atacama Desert of Chile at 5,200 m altitude in 2017 \cite{nate, yuki}. The PB2 receiver mounts onto a telescope identical to the Huan Tran Telescope (HTT) \cite{tran} and observes at 95 GHz and 150 GHz simultaneously with a 4-degree field of view that scans the sky at $\sim$ 1 degree per second \cite{tomoOptics}. The 365-mm-diameter focal plane contains 1,897 dual-polarized, multi-chroic, planar-lithographed pixels that couple to the reimaging optics via an array of synthesized elliptical silicon lenslets \cite{pb2Det}. PB2's beam size of 3.5 arcmin (5.2 arcmin) at 150 GHz (95 GHz) and its 4.1 $\mu\mathrm{K}_{\mathrm{CMB}} \sqrt{\mathrm{s}}$ noise equivalent temperature make it well-suited to probe a wide range of angular scales. Therefore, PB2 aims to both characterize the GL signal and detect the GWB.
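The sketch below illustrates, in schematic form, why a continuously rotating HWP lets a single polarimeter recover both Stokes $Q$ and $U$ and moves the sky polarization to $4f_\mathrm{HWP}$, well above slow atmospheric drifts. The signal model is the ideal-HWP case, and the sample rate, noise levels, and drift amplitude are illustrative assumptions rather than PB2 instrument parameters (only the 2 Hz rotation rate is taken from the text).
\begin{verbatim}
import numpy as np

f_hwp = 2.0                    # HWP rotation frequency [Hz] (quoted in the text)
f_samp = 200.0                 # detector sample rate [Hz] -- illustrative
t = np.arange(0.0, 60.0, 1.0 / f_samp)
chi = 2.0 * np.pi * f_hwp * t  # HWP angle

# Ideal-HWP signal model for a single polarimeter:
#   d(t) = I + Q cos(4 chi) + U sin(4 chi) + slow unpolarized drift + noise
I_sky, Q_sky, U_sky = 1.0, 0.02, -0.01
rng = np.random.default_rng(1)
drift = 0.5 * np.sin(2.0 * np.pi * 0.02 * t)          # 1/f-like atmospheric drift
d = (I_sky + Q_sky * np.cos(4.0 * chi) + U_sky * np.sin(4.0 * chi)
     + drift + 0.005 * rng.standard_normal(t.size))

# Lock-in demodulation at 4 f_hwp; the factor 2 undoes the 1/2 from <cos^2>
Q_hat = 2.0 * np.mean(d * np.cos(4.0 * chi))
U_hat = 2.0 * np.mean(d * np.sin(4.0 * chi))
print("input Q = %+.3f, U = %+.3f" % (Q_sky, U_sky))
print("demod Q = %+.3f, U = %+.3f" % (Q_hat, U_hat))
\end{verbatim}
The unpolarized drift averages away because it has essentially no power at $4f_\mathrm{HWP}$, which is the sense in which rapid modulation suppresses atmospheric 1/f noise in the polarization timestreams.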
\label{sec:future} We have presented the design and validation of an ambient-temperature achromatic sapphire HWP for observation of the CMB polarization at 95 GHz and 150 GHz. Considering the 1/f noise suppression and systematic error mitigation offered by the implementation of a rapidly-rotating HWP, PB2 will deploy the modulator presented in this proceeding when it begins observations in 2017. POLARBEAR-2 is the first installment of the Simons Array, an assembly of three telescopes called ``PB2a'', ``PB2b'', and ``PB2c''. PB2a/b will observe at 95/150 GHz, while PB2c will observe at 220/280 GHz \cite{toki}. The ambient-temperature HWP presented in this proceeding will deploy on PB2a. As we look towards polarization modulation for PB2b/c, we aim to reduce the optical power introduced by the HWP by cooling the modulator to cryogenic temperatures. By moving the HWP into the receiver, we can simultaneously utilize sapphire tan $\delta$'s strong temperature dependence \cite{parshin} and the new location's cryogenic surroundings to dramatically suppress HWP thermal emission and scattering to ambient temperature. A cryogenic HWP prototype is currently under evaluation, and we are in the development stage for the full-scale instrument that will deploy on PB2b/c in 2017/2018.
16
7
1607.07399
1607
1607.05449_arXiv.txt
Protoplanetary disks are a byproduct of the star formation process. In the dense mid-plane of these disks, planetesimals and planets are expected to form. The first step in planet formation is the growth of dust particles from submicrometer-sized grains to macroscopic mm-sized aggregates. The grain growth is accompanied by radial drift and vertical segregation of the particles within the disk. To understand this essential evolutionary step, spatially resolved multi-wavelength observations as well as photometric data are necessary which reflect the properties of both disk and dust.\\ We present the first spatially resolved image obtained with \textit{NACO} at the \textit{VLT} in the L$_\text{p}$ band of the near edge-on protoplanetary disk FS~Tau~B. Based on this new image, a previously published \textit{Hubble} image in H~band and the spectral energy distribution from optical to millimeter wavelengths, we derive constraints on the spatial dust distribution and the progress of grain growth. For this purpose we perform a disk modeling using the radiative transfer code MC3D. Radial drift and vertical sedimentation of the dust are not considered.\\ We find a best-fit model which features a disk extending from $\unit[2]{AU}$ to several hundreds AU with a moderately decreasing surface density and $M_\text{disk}=\unit[2.8\times10^{-2}]{M_\odot}$. The inclination amounts to $i=80^\circ$. Our findings indicate that substantial dust grain growth has taken place and that grains of a size equal to or larger than $\unit[1]{mm}$ are present in the disk. In conclusion, the parameters describing the vertical density distribution are better constrained than those describing the radial disk structure.
The multiple system FS~Tau is located in the Taurus-Auriga star forming region at a distance of \mbox{$\unit[140]{pc}\pm\unit[20]{pc}$} (\citealt{Elias1978}). FS~Tau is a hierarchical triple-system, consisting of the narrow \mbox{T-Tauri}-binary FS~Tau~A (separation: $0.\hspace{-2.5px}\arcsec228 - 0.\hspace{-2.5px}\arcsec27$; \citealt{Simon1992}; \citealt{Hartigan2003}) and the young stellar object (YSO) FS~Tau~B at a projected separation of $\sim20\arcsec$ west. FS~Tau~A and FS~Tau~B are accompanied by a circumbinary and a circumstellar disk, respectively. The disk around FS~Tau~B is in the focus of this study. Alternative designations of FS~Tau~B are Haro~6-5B, HH~157, HBC~381, and IRAS~04189+2650. The YSO has been classified as a Class I-II source (\citealt{Lada1987}) and has lost most of its original surrounding shell (\citealt{Yokogawa2001}). The disk is highly inclined \mbox{($i\approx67^\circ-80^\circ$;} \citealt{Krist1998}; \citealt{Yokogawa2001}) and obscures the central star at shorter wavelengths. Consequently, the disk appears as a bipolar nebula in the near-infrared (NIR), separated by an opaque band with a length of $3\arcsec-4\arcsec$ (\citealt{Padgett1999}) and a position angle of $144^\circ-150^\circ$. Since the disk is not orientated exactly edge-on and the dust particles potentially scatter non-isotropically, the two wings of the nebular structure differ in brightness. The disk mass has been constrained by several observations to $2\times10^{-3}\,\textrm{M}_\odot$ to $4\times10^{-2}\,\textrm{M}_\odot$ (\citealt{Dutrey1996}; \citealt{Yokogawa2001, Yokogawa2002}). Based on the observed low accretion rate, an age of $3.6\times10^{5}-2.4\times10^{6}$ years has been deduced (\citealt{Yokogawa2002}). Moreover, the object features a bipolar jet with a perpendicular orientation towards the opaque band (\citealt{Mundt1984}). Because of its low age and distance, FS~Tau~B is predestined to investigate the growth of dust grains in the context of planet formation. Due to its high inclination, the disk acts as a natural coronagraph reducing observational difficulties and artefacts common in coronagraphy. Furthermore, the vertical disk structure can be observed in the NIR without major disturbances by direct radiation from the stellar source. Comparable YSOs studied in the recent past are e.$\,$g.,~the Butterfly~star IRAS~04302+2247 (\citealt{Wolf2003Butterfly}; \citealt{Graefe2013}), CB~26 (\citealt{Sauter2009}), DG~Tau~B (\citealt{Kruger2011}), and HH~30 (\citealt{Madlener2012}). Since the optical depth decreases in general with increasing wavelength, observations in the mid-infrared (MIR) allow the investigation of deeper regions of the disk and thus the study of thermal \mbox{re}emission of warm dust closer to the midplane. In this context, spatially resolved multi-wavelength observations are required to reduce ambiguities in the data analysis which exist due to the lack of knowledge regarding the dust density, chemical composition, and grain size distribution. In addition, observations in the MIR potentially provide constraints for dust particles in deeper layers and thus for possible settling of larger grains \mbox{(e.$\,$g.,~\citealt{Pinte2008};} \citealt{Graefe2013}). While hot dust and scattered stellar light can be readily observed at NIR wavelengths, spatially resolved MIR observations tracing warm dust are rare. 
This applies also to the system of FS~Tau~B which was observed with the \textit{Hubble Space Telescope} (\textit{HST}) in the optical and NIR domain (\citealt{Krist1998}; \citealt{Padgett1999}) and with lower resolution at millimeter wavelengths (\citealt{Dutrey1996}; \citealt{Yokogawa2002}). The aim of this study is to investigate the density distribution of the protoplanetary disk of FS~Tau~B and to constrain the evolutionary stage of grain growth. We present a new observation in the L$_{\text{p}}\,$band ($\lambda=\unit[3.74]{\text{\textmu} m}$) with $\sim0.\hspace{-2.5px} \arcsec1$ resolution (Sect. \ref{chap2}). Previously published observational data are summarised and presented in Section \ref{1234321}. The modeling campaign is based on the new observation in the MIR, a high resolution image obtained with \mbox{\textit{NICMOS/HST}} in the NIR, and published photometry data (Sect.~\ref{chap3}). The results are presented and discussed in Section~\ref{chap4}.
In this paper, a spatially resolved observation of FS~Tau~B obtained with the instrument \textit{NACO/VLT} in the MIR (L$_\text{p}$~band, $\lambda=\unit[3.74]{\text{\textmu} m}$) was presented. Based on this new image, previously published photometry, and a spatially resolved observation in the NIR taken with \textit{NICMOS/HST}, a parameter study was performed which resulted in new constraints for the disk parameters. The main observables are reproduced satisfactorily by the best-fit model. The disk extends from an inner radius of $R_\text{in}=\unit[2]{AU}$ to an outer radius of several hundred AU. The values for the scale height at radius $R_{100}=\unit[100]{AU}$, $h_{100}=\unit[10_{-1}^{+\,2}]{AU}$, and the geometrical parameters $\alpha=2.1_{-0.6}^{+\,0.5}$ and $\beta=1.20_{-0.01}^{+\,0.06}$ are found to be in the typical range for protoplanetary disks. Moreover, the surface density decreases moderately with $p=0.9_{-0.6}^{+\,0.5}$. In summary, parameters describing the vertical density distribution ($\beta$, $h_{100}$) of the disk are better constrained than those influencing the radial disk structure ($R_\text{in}$, $R_\text{out}$, $\alpha$). The temperature in the midplane at $R=\unit[100]{AU}$ has a value of $T_{100}\approx\unit[24]{K}$. The dust mass is determined to be $M_\text{dust}=\unit[2.8\times10^{-4}]{M_\odot}$. Assuming the canonical ratio of gas to dust, $M_\text{gas}/M_\text{dust}=100$ (e.$\,$g.,~\citealt{Hildebrand1983}), we derive a total disk mass of \mbox{$M_\text{disk}=\unit[2.8\times10^{-2}]{M_\odot}$}. Evaluation of Toomre's criterion suggests gravitational stability throughout the disk. To reproduce the observational data, much larger dust grains ($a_\text{max}=\unit[1]{mm}$) than primordial particles of the ISM are needed. The spectral index $\alpha_\text{mm}\approx2.6$ implies the presence of larger dust particles and therefore grain growth in the disk. The inclination $i=80^\circ$ is constrained by a combination of SED and images, and an extinction screen in the foreground with an optical extinction of $A_\text{V}=12$ gives the best results. The heating source in our best-fit model has a luminosity of $L_\star\approx\unit[9.5]{L_\odot}$. The mass accretion rate is derived to be $\unit[3.2\times10^{-7}]{\nicefrac{\text{M}_\odot}{yr}}-\unit[1.2\times10^{-6}]{\nicefrac{\text{M}_\odot}{yr}}$. The observed SED is well reproduced by the presented best-fit model. While the NIR~maps observed with \mbox{\textit{NICMOS/HST}} consist almost entirely of scattered stellar radiation, the MIR~observation is dominated ($\sim\unit[80]{\%}$) by thermal \mbox{re}emission. The simulated maps show a highly inclined disk. The radial profiles along the major axis are well reproduced, whereas the deviations on the minor axis are larger. The modeling is based on spatially resolved NIR~and MIR~observations. The decreased optical depth in the MIR reveals slightly deeper embedded regions and potentially larger particles which have settled towards the disk midplane and moved closer to the central star. In our study, the modeling requires the presence of larger particles, but the resolution of the MIR map is too low to reveal signs of dust settling or to exclude the occurrence of larger particles in the disk's surface regions. In general, observations at longer wavelengths probe larger grain sizes and deeper disk regions. Therefore, future studies have to verify the presented model by taking into account observations in the far-infrared and at \mbox{(sub-)}millimeter wavelengths.
In addition, the spatial variation of particle size within the disk and thus the spatial dependency of the spectral index can be investigated with observations at these wavelengths. \textit{ALMA}, the largest \mbox{(sub-)}millimeter interferometer (e.$\,$g.,~\citealt{Boley2012}) and other high resolution and sensitive observatories, such as the planned \textit{JWST} (e.$\,$g.,~\citealt{Mather2010}), will enable us to investigate the disk of FS~Tau~B with increased sensitivity on smaller scales, and to obtain a better understanding of disk evolution in general.
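For reference, the structural parameters quoted above are consistent with the widely used parametrization of a flared disk; whether this is the exact form implemented in the MC3D setup of this work is an assumption here:
\begin{equation}
\rho_\text{dust}(r,z)\propto\left(\frac{r}{R_{100}}\right)^{-\alpha}\exp\left[-\frac{z^{2}}{2\,h(r)^{2}}\right],\qquad h(r)=h_{100}\left(\frac{r}{R_{100}}\right)^{\beta},
\end{equation}
for which the surface density falls off as $\Sigma(r)\propto r^{-p}$ with $p=\alpha-\beta$; the quoted best-fit values indeed satisfy $2.1-1.2=0.9$. The total disk mass then follows from the dust mass and the assumed gas-to-dust ratio, $M_\text{disk}=100\,M_\text{dust}$.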
16
7
1607.05449
1607
1607.07166_arXiv.txt
The dynamical impact of Lyman~$\alpha$~(Ly$\alpha$) radiation pressure on galaxy formation depends on the rate and duration of momentum transfer between Ly$\alpha$ photons and neutral hydrogen gas. Although photon trapping has the potential to multiply the effective force, ionizing radiation from stellar sources may relieve the Ly$\alpha$ pressure before appreciably affecting the kinematics of the host galaxy or efficiently coupling Ly$\alpha$ photons to the outflow. We present self-consistent Ly$\alpha$ radiation-hydrodynamics simulations of high-$z$ galaxy environments by coupling the Cosmic Ly$\alpha$ Transfer code ({\sc colt}) with spherically symmetric Lagrangian frame hydrodynamics. The accurate but computationally expensive Monte-Carlo radiative transfer calculations are feasible under the one-dimensional approximation. The initial starburst drives an expanding shell of gas from the centre and in certain cases Ly$\alpha$ feedback significantly enhances the shell velocity. Radiative feedback alone is capable of ejecting baryons into the intergalactic medium~(IGM) for protogalaxies with a virial mass of $M_{\rm vir} \lesssim 10^8~\Msun$. We compare the Ly$\alpha$ signatures of Population~III stars with $10^5$~K blackbody emission to those of direct collapse black holes with a nonthermal Compton-thick spectrum and find substantial differences if the Ly$\alpha$ spectra are shaped by gas pushed by Ly$\alpha$ radiation-driven winds. For both sources, the flux emerging from the galaxy is reprocessed by the IGM such that the observed Ly$\alpha$ luminosity is reduced significantly and the time-averaged velocity offset of the Ly$\alpha$ peak is shifted redward.
16
7
1607.07166
1607
1607.02119_arXiv.txt
We present a light-traces-mass (LTM) strong-lensing model of the massive lensing cluster MACS J2135.2-0102 ($z$=0.33; hereafter MACS2135), known in part for hosting the Cosmic Eye galaxy lens. MACS2135 is also known to multiply-lens a $z=$2.3 sub-mm galaxy near the Brightest Cluster Galaxy (BCG), as well as a prominent, triply-imaged system at a large radius of $\sim$37\arcsec\ south of the BCG. We use the latest available Hubble imaging to construct an accurate lensing model for this cluster, identifying six new multiply-imaged systems with the guidance of our LTM method, so that we have roughly quadrupled the number of lensing constraints. We determine that MACS2135 is amongst the top lensing clusters known, comparable in size to the Hubble Frontier Fields. For a source at $z_{s}=2.32$, we find an effective Einstein radius of $\theta_{e}=27\pm3 \arcsec$, enclosing $1.12 \pm0.16 \times10^{14}$ $M_{\odot}$. We make our lens model, including mass and magnification maps, publicly available\footnote{ftp://wise-ftp.tau.ac.il/pub/adiz/MACS2135/}, in anticipation of searches for high-$z$ galaxies with the {\it James Webb Space Telescope} for which this cluster is a compelling target. \vspace{0.05cm}
Strong gravitational lensing (SL) by galaxy clusters has by now become a reliable, routine tool in Astronomy. Multiply-imaged background galaxies allow us to map in detail the otherwise-invisible dark matter (DM) distribution of the cluster, as well as to detect faint background objects that are highly magnified by the foreground cluster lens \citep[see reviews by][]{Kneib2011review,Bartelmann2010reviewB}. The past decade in particular has seen a dramatic increase in SL-related science, thanks mainly to the continued impressive performance of the {\it Hubble Space Telescope}, from the combination of deep high-resolution optical and NIR imaging, and because of the development of improved lens modeling techniques \citep[e.g.][]{Broadhurst2005a,Diego2005Nonparam,Jullo2007Lenstool,Liesenborgs2006,Zitrin2009_cl0024}. Cluster lensing programs such as the Cluster Lensing and Supernova with Hubble (CLASH; PI: Postman, \citealt{PostmanCLASHoverview}), and the ongoing Hubble Frontier Fields (HFF; PI: Mountain \& Lotz; see \citealt{Lotz2016HFF}) with HST, have proven extremely successful for SL, including the detection of {\it hundreds} of multiply lensed galaxies \citep[e.g.][as a few examples]{Monna2014RXC2248,Jauzac2014M0416,Zitrin2014CLASH25} and of high-redshift, magnified background objects extending into the reionization era above $z\gtrsim6$ \citep{Bradley2013highz,Atek2015HalfHFF_LF,Coe2014FF,Zheng2012NaturZ}, and beyond, to the current limits of detection at z$\sim11$ (\citealt{Coe2012highz,Zitrin2014highz}). Construction of luminosity functions is now feasible to $z\sim9$ (\citealt{Atek2015HalfHFF_LF,Mcleod2016,Livermore2016LF}). Several lensed supernovae have been discovered \citep[e.g.][]{Patel2014SN}, including the first multiply-imaged supernova, seen as a quadruple Einstein cross, and its subsequent reappearance \citep{Kelly2016reappearance}. Detailed studies of large, highly magnified galaxies at $z\sim1-5$ have helped constrain UV-escape fractions below the Ly-limit \citep[e.g.][]{Nicha2016escape}, metallicity gradients and outflows \citep{Tucker2015} and star-formation details \citep[e.g.][]{Wuyts2012}. Cosmological models have been examined with SL through arc and Einstein radius statistics \citep{OguriBlandford2009,Horesh2010,Waizmann2012JeanClaude0717} and multi-wavelength related discoveries have been made of magnified, X-ray, radio or sub-mm galaxies \citep[e.g.][]{van_Weeren2016,Gonzales2016}. \begin{figure*} \begin{center} \includegraphics[width=157mm,trim=0cm 0cm 0cm 0cm,clip]{magOnImVNEW.pdf} \end{center} \caption{The central field of the galaxy cluster MACS2135 (R$=$[F140W+F110W]; G$=$F814W; B$=$F606W). Multiple images and candidates, most of which (aside from systems 1 and 2) were found in this work, are indicated, and the resulting critical curves from our model are overlaid for $z_{s}=2.32$, revealing the large size and relatively high ellipticity of this lensing cluster (critical curve major-to-minor axis ratio of $\sim2.5$).}\vspace{0.5cm} \label{fig1} \end{figure*} This progress in SL is inspiring new campaigns including the reionization cluster survey, RELICS (PI: Coe), informed by the CLASH and HFF programs dedicated to SL, and designed to enhance lensing-enabled science with future facilities and, in particular, the {\it James Webb Space Telescope} (JWST). 
Aside from the immediate science goals, part of the underlying motivation in these programs is to discover and characterize the ``best'' lensing targets for JWST for optimizing the detection of very distant background objects that lie beyond the reach of \emph{Hubble}. Since there are many massive clusters in the sky \citep[e.g.][]{OguriBlandford2009,Waizmann2012JeanClaude0717}, choosing the largest and most powerful lenses requires systematic lens modeling of controlled samples of clusters with continued space imaging for the detection of the multiply lensed images required for this purpose. We are also using the HST archive to make progress in this work through the backlog of numerous unanalyzed massive clusters, including the data analyzed here as well as other X-ray selected clusters from the MAssive Cluster Survey (MACS; \citealt{Ebeling2010FinalMACS}). We begin our systematic analysis with MACS J2135.2-0102 ($z$=0.33; hereafter MACS2135), which exhibits several prominent arcs ranging up to $\gtrsim40\arcsec$ from the Brightest Cluster Galaxy (BCG), but lacks a recent lensing analysis that takes advantage of the archival Hubble data. MACS2135 has been the subject of various previous studies. In particular, it became known as the cluster host of the Cosmic Eye galaxy-galaxy lens \citep{Smail2007CosmicEye,Stark2008NaturCosmicEye}, one of the most distant clear examples of a typical star-forming galaxy at $z=3.1$. MACS2135 was later found to multiply-image a prominent sub-mm galaxy \citep{Swinbank2010Natur,Ivison2010}. In their analysis, \citet{Swinbank2010Natur} constructed a SL model for this cluster, based on the sub-mm galaxy system, for which they measured a spectroscopic redshift of $z=2.3259$ and identified a third counter image on the east side of the cluster. They also used a triply-imaged galaxy at a remarkable distance of $\sim37\arcsec$ south of the BCG, for which they measured a redshift of $z=2.32$, with two of its images straddling the critical curve and merging into a giant arc. We did not find records of other, recent SL models for this cluster. Here we make use of the most recent HST imaging, which significantly extends the coverage of the earlier work described above, to enhance the lens model with many new multiple images and to make it publicly available, given the expected large critical area (the model of \citealt{Swinbank2010Natur} implied an Einstein radius of $\sim35\arcsec$) and relatively high ellipticity, which enhances the cross-section of lensing clusters \citep[][and references therein]{Zitrin2013M0416}. The paper is organized as follows. We present the observations in \S \ref{obs}, and the SL modeling in \S \ref{lensmodel}. We conclude the work and discuss the results in \S \ref{discussion}. Throughout we use a standard $\Lambda$CDM cosmology with $\Omega_{\rm m0}=0.3$, $\Omega_{\Lambda 0}=0.7$, $H_{0}=100$ $h$ km s$^{-1}$Mpc$^{-1}$, $h=0.7$, and magnitudes are given using the AB convention. 1\arcsec\ equals 4.75 kpc at the redshift of the cluster. Errors are $1\sigma$ unless otherwise stated. \vspace{0.5cm}
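As a quick consistency check of the adopted conventions, both the angular scale at the cluster redshift and a simple circularly symmetric Einstein-mass estimate can be evaluated directly. The sketch below uses astropy and is only an illustration; the enclosed mass reported in this work comes from the full LTM mass map, not from this point estimate, although the simple estimate should land near the quoted $\sim1.1\times10^{14}$ $M_{\odot}$.
\begin{verbatim}
import numpy as np
from astropy import units as u, constants as const
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)      # cosmology adopted in the text
z_l, z_s = 0.33, 2.32                      # lens and source redshifts

D_l  = cosmo.angular_diameter_distance(z_l)
D_s  = cosmo.angular_diameter_distance(z_s)
D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)

# Angular scale at the cluster redshift (should be ~4.75 kpc per arcsec)
kpc_per_arcsec = (D_l * (1 * u.arcsec).to(u.rad).value).to(u.kpc)

# Critical surface density and mass inside a circular Einstein radius of 27"
Sigma_cr = const.c**2 / (4 * np.pi * const.G) * D_s / (D_l * D_ls)
R_E = (27 * u.arcsec).to(u.rad).value * D_l
M_E = (np.pi * R_E**2 * Sigma_cr).to(u.Msun)

print(kpc_per_arcsec, M_E)
\end{verbatim}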
\label{discussion} Using HST images coupled with our LTM mass modeling, we have identified, in addition to the two systems previously known \citep{Swinbank2010Natur}, six new multiply-imaged systems in MACS2135, roughly quadrupling the number of constraints to map the matter distribution in this cluster. We have correspondingly constructed a significantly improved mass model for MACS2135, which we present here and make available for the astronomical community. Our model agrees well with this cluster being a large lens, as is perhaps expected given the distance of system 2 from the BCG, and is in broad agreement with the estimate presented in \citet{Swinbank2010Natur}. Only a small fraction of the clusters well-studied in the literature are known to exhibit Einstein radii exceeding $\gtrsim30\arcsec$ (nominally, for sources at redshifts around $z\sim2$). For example, only a few of the 25 X-ray selected CLASH clusters have Einstein radii comparable to, or slightly larger than, that of MACS2135, and only a few clusters previously analyzed have considerably larger critical areas, e.g., Abell 1703 \citep{Limousin2008}; MACS 0717 \citep{Zitrin2009_macs0717}; RXJ1347 \citep{Zitrin2014CLASH25}; Abell 1689 \citep{Broadhurst2005a}; A370 \citep{Richard2010A370}; RCS2 J232727.6-020437 \citep{Sharon2015RSC2Big}; SDSS J120923.7+264047 \citep{Ofek2008Arc}; or CL0024 \citep{Zitrin2009_cl0024}. Indeed, thanks to their large critical areas all of these clusters show numerous multiply-imaged background galaxies, typically revealed in deep HST imaging. Additionally, the HFF clusters, for example, aside from the giant lens MACS0717 \citep{Zitrin2009_macs0717} and perhaps A370 \citep{Richard2010A370}, typically show Einstein radii of $\sim25-30\arcsec$. Here we add to this important list MACS2135, showing that despite its current, relatively shallow imaging, it also lenses an abundance of highly magnified, multiply-lensed background sources, and is comparable in size to the typical HFF cluster. Finding large and prominent lensing clusters is useful for probing the massive end of the cluster mass function \citep{Zitrin2009_macs0717,Waizmann2012JeanClaude0717,Redlich2014}, for constraining cosmological models \citep{OguriBlandford2009}, and also for studying the DM, substructure, morphology and merging properties of the clusters \citep{Merten2011,Harvey2015Sci}. Large lenses also increase the chances of finding very high redshift galaxies, often pushing the redshift limit \citep[e.g.][]{Kneib2004z7,Coe2012highz}, and in the case of multiple images we can use the separation between the images to provide a purely ``geometric" distance for the source as a means of testing the often ambiguous photometric redshift \citep{Zitrin2014highz}. In fact, two high-$z$ candidates have already been reported in MACS 2135 \citep{Repp2016HighzMACS}, one of which our model predicts should lie nearly on top of the critical curves for high redshift, and thus might be highly magnified and potentially multiply imaged. We leave further examination of this candidate for other, dedicated work. The lensing approach to studying high-$z$ galaxies is sensitive to the faint-end slope of the luminosity function, and complements the field work with Hubble that is also uncovering relatively luminous high-$z$ galaxies over wider areas \citep[e.g.][]{Ellis2013Highz,Bouwens2015LF10000}. 
It should be appreciated that the Einstein radius of a lens is not the only quantity of importance in assessing the lensing efficiency of various clusters; as we have shown before, other factors must be considered, such as the magnification distribution (which is related to the gradient of the central mass distribution), substructure and sub-clumps that add non-linearly to the magnification \citep{Redlich2014}, or the ellipticity of the mass distribution, which enhances the lensing cross-section \citep{Zitrin2013M0416}, as well as, of course, the redshifts involved and the magnification bias, which depends on the slope of the luminosity function \citep{Coe2014FF}. We conclude that MACS2135 is amongst the top lenses currently known, especially in terms of its critical area, and will benefit from future attention. This includes deeper space imaging to uncover very distant high-redshift dropouts in the NIR; in this respect, MACS2135 is a compelling candidate target for JWST. \vspace{0.01cm}
16
7
1607.02119
1607
1607.04185.txt
\noindent We study the X-ray variability of \SSobj\ based on data from the ASCA observatory and the MAXI and RXTE/ASM monitoring missions. Based on the ASCA data, we have constructed the power spectrum of \SSobj\ in the frequency range from $10^{-6}$ to 0.1\,Hz, which confirms the presence of a flat portion (flat-topped noise) in the spectrum at frequencies $3\times 10^{-5} - 10^{-3}$\,Hz. The periodic variability (precession, nutation, eclipses) begins to dominate significantly over the stochastic variability at lower frequencies, which does not allow the stochastic variability to be studied reliably. The best agreement with the observations is reached by the model with the flat portion extending to $9.5\times10^{-6}$~Hz and a power-law spectrum with index of 2.6 below that frequency. The jet nutation with a period of about three days suggests that the time for the passage of material through the disk is less than this value. Therefore, at frequencies below $4\times10^{-6}$~Hz, the power spectrum probably does not reflect the disk structure. It may depend on other factors, for example, a variable mass accretion rate supplied by the donor. The flat portion may arise from a rapid decrease in the viscous time in the supercritical or radiative disk zones. It could be related to the variability of the X-ray jets that are formed in the supercritical region.\\
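The broken power-spectrum shape described above can be encoded in a simple piecewise model; the sketch below only illustrates that functional form, with an arbitrary normalisation rather than a fit from this work.
\begin{verbatim}
import numpy as np

def psd_model(f, amp=1.0, f_break=9.5e-6, index=2.6):
    # Flat ("flat-topped") noise above f_break, and a power law with the
    # quoted index below it; the amplitude is a placeholder, not a fit.
    f = np.asarray(f, dtype=float)
    return np.where(f >= f_break, amp, amp * (f / f_break)**(-index))

freqs = np.logspace(-7, -1, 300)   # Hz
power = psd_model(freqs)
\end{verbatim}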
16
7
1607.04185
1607
1607.03059_arXiv.txt
We present the first results on the saturation of the $f$-mode instability in neutron stars, due to nonlinear mode coupling. Emission of gravitational waves drives the $f$-mode (fundamental mode) unstable in fast-rotating, newborn neutron stars. The initial growth phase of the mode is followed by its saturation, because of energy leaking to other modes of the star. The saturation point determines the strain of the generated gravitational-wave signal, which can then be used to extract information about the neutron star equation of state. The parent (unstable) mode couples via parametric resonances with pairs of daughter modes, with the triplets' evolution exhibiting a rich variety of behaviors. We study both supernova- and merger-derived neutron stars, simply modeled as polytropes in a Newtonian context, and show that the parent may couple to many different daughter pairs during the star's evolution through the instability window, with the saturation amplitude changing by orders of magnitude.
\label{sec:Introduction} It has been known since the 1970s that nonradial, stellar oscillation modes can be driven unstable due to the emission of gravitational waves, thanks to the Chandrasekhar-Friedman-Schutz (CFS) mechanism \cite{Chandrasekhar1970,FriedmanSchutz1978,*FriedmanSchutz1978b}. In rapidly rotating neutron stars, low multipoles could be driven unstable, producing a significant amount of gravitational radiation \cite{IpserLindblom1990,*IpserLindblom1991}. Now that the gravitational wave window to space has been opened \cite{AbbottEtAl2016}, these signals may provide useful information about the neutron star interior \cite{AnderssonKokkotas1996,AnderssonKokkotas1998,KokkotasEtAl2001,BenharEtAl2004,GaertigKokkotas2011,DonevaEtAl2013,DonevaKokkotas2015}; gravitational wave asteroseismology is expected to give some answers about the equation of state of matter at supranuclear densities, avoiding the problems that QCD and terrestrial experiments are currently facing. Nevertheless, there is still uncertainty behind the process saturating the instability. Studies on the CFS-unstable $r$-modes (horizontal fluid motions driven by rotation \cite{PapaloizouPringle1978}) showed that nonlinear mode coupling saturates the instability quite efficiently \cite{SchenkEtAl2001,Morsink2002,ArrasEtAl2003,BrinkEtAl2004,*BrinkEtAl2004b,*BrinkEtAl2005}. Other proposed saturation mechanisms, like large-amplitude viscous effects \cite{AlfordEtAl2012,PassamontiGlampedakis2012} or the interaction of superfluid vortices with superconducting flux tubes \cite{HaskellEtAl2014}, may lead to higher or lower saturation amplitudes, respectively. We focus on another class of potentially unstable modes, the $f$-modes (fundamental oscillations), which comprise large-scale density perturbations. Although the $f$-mode instability is active in a much smaller part of the parameter space (instability window) and has a longer growth time, compared to the $r$-mode, the calculation of its saturation amplitude is still important for the evolution of newborn neutron stars \cite{PassamontiEtAl2013}. Recent studies \cite{DonevaEtAl2015} have also shown a very promising scenario, where an $f$-mode instability could develop very fast in post-merger neutron star remnants. So far, no robust estimate has been provided for the saturation amplitude of unstable $f$-modes. Some upper limits have been derived by \refcite{KastaunEtAl2010}, where the effects of nonlinear damping (like wave breaking) were mainly studied, using a general relativistic simulation under the Cowling approximation (fixed spacetime approximation; for a similar study on $r$-modes, see also \refcite{Kastaun2011}). In the aforementioned study, mode coupling was also observed, for the quadrupole $f$-mode ($l=m=2$, where $l$ and $m$ are the degree and order, respectively, of the spherical harmonic $Y_l^m$ that describes the mode). However, the CFS instability sets in on secular time scales, way beyond the current capabilities of nonlinear hydrodynamic simulations. Given the large time needed for the unstable mode to grow, simulations resort to starting with high mode-amplitude values and then tracking the amplitude decay. This process gives an upper bound for the saturation amplitude, which could still be far, though, from the actual value. 
As shown by \refcite{SchenkEtAl2001,Morsink2002,ArrasEtAl2003,BrinkEtAl2004,*BrinkEtAl2004b,*BrinkEtAl2005}, nonlinear mode coupling ought to operate at quite low amplitudes, rendering other effects, like the wave breaking of \refcite{KastaunEtAl2010} and the large-amplitude viscous dissipation of \refcite{AlfordEtAl2012,PassamontiGlampedakis2012}, unimportant. On the other hand, the saturation mechanism described in \refcite{HaskellEtAl2014} may lead to lower saturation values, but is relevant only for mature stars. In this paper, we present results on the $f$-mode saturation, using quadratic mode coupling to other polar modes, for Newtonian polytropic stars. The formalism and its main implications were described in detail in a previous paper \cite{PnigourasKokkotas2015} (henceforth Paper I). As opposed to the linear approximation, which gives rise to the oscillation spectrum of the star (e.g., \refcite{UnnoEtAl1989,AertsEtAl2010}), quadratic perturbations build up a three-mode-interaction network, in which the modes of the star couple in triplets. The (complex) mode amplitudes $Q_i$ of a given triplet are then described by \begin{subequations} \label[subequations]{equations of motion with normalisation choice} \begin{align} \dot{Q}_\alpha &= \gamma_\alpha Q_\alpha+\frac{i\omega_\alpha\mathcal{H}}{E_\mathrm{unit}}Q_\beta Q_\gamma e^{-i\Delta\omega t}, \label{mode alpha equation of motion with normalisation choice} \\ \dot{Q}_\beta &= \gamma_\beta Q_\beta+\frac{i\omega_\beta\mathcal{H}}{E_\mathrm{unit}} Q_\gamma^* Q_\alpha e^{i\Delta\omega t}, \label{mode beta equation of motion with normalisation choice} \\ \dot{Q}_\gamma &= \gamma_\gamma Q_\gamma+\frac{i\omega_\gamma\mathcal{H}}{E_\mathrm{unit}} Q_\alpha Q_\beta^* e^{i\Delta\omega t}, \label{mode gamma equation of motion with normalisation choice} \end{align} \end{subequations} where the parameters $\gamma_i$ are the linear growth/damping rates, $\omega_i$ the mode frequencies, $\mathcal{H}$ the coupling coefficient, $E_\mathrm{unit}$ the mode energy at unit amplitude (based on the normalization choice), and $\Delta\omega=\omega_\alpha-\omega_\beta-\omega_\gamma$ the detuning parameter. The coupling occurs when i) an internal resonance exists between the three modes, of the form $\omega_\alpha\approx\omega_\beta+\omega_\gamma$, and ii) a set of selection rules is satisfied for the degrees $l_i$ and the orders $m_i$ of the modes \cite{Dziembowski1982,PnigourasKokkotas2015}. The first condition guarantees that the oscillatory dependence of the nonlinear term is very slow, so that it contributes to the long-term dynamics of the system, whereas the second condition is what makes the coupling coefficient $\mathcal{H}$ nonzero (see Paper I and references therein). Coupling of an unstable $(\gamma_\alpha>0)$ mode to two other, stable $(\gamma_{\beta,\gamma}<0)$, modes can lead to a \emph{parametric resonance instability}: the unstable (parent) mode grows until its amplitude surpasses the \emph{parametric instability threshold} (PIT). At this point, the stable (daughter) modes, coupled to the parent, start growing by draining energy from it. Eventually, and if certain stability conditions are met, the system will reach an equilibrium and saturate (\cref{fig:parametrically resonant system}\hyperref[fig:parametrically resonant system]{a}). 
The parent mode saturates close to the PIT, which is given by \cite{PnigourasKokkotas2015,Dziembowski1982} \begin{equation} |Q_\mathrm{PIT}|^2=\frac{\gamma_\beta \gamma_\gamma}{\omega_\beta \omega_\gamma}\frac{E_\mathrm{unit}^2}{\mathcal{H}^2}\left[1+\left(\frac{\Delta\omega}{\gamma_\beta+\gamma_\gamma}\right)^2\right]. \label{PIT} \end{equation} As described before, a tacit consequence of quadratic nonlinearities is that modes couple in triplets. This means that individual couplings consist of three modes only, with the daughter modes trying to stop the growth of the parent mode. Of course, the same parent can couple to more than one pair of daughters. However, not all couplings become important. Remember that, until the PIT is crossed, the parent does not really ``feel'' the presence of the daughters. Since each coupled triplet has its own PIT, only the couplings with the lowest PITs will affect the parent's evolution. In fact, as we shall see later on, the triplet with the lowest PIT is usually the one that determines the parent's saturation amplitude. Following this paradigm, we find the couplings of an unstable $f$-mode to other polar modes and then calculate its saturation amplitude throughout the instability window. The setup of this process is presented in \cref{sec:Setup}. The results for both supernova-derived neutron stars and post-merger remnants can be found in \cref{sec:Results}. In \cref{sec:The saturation mechanism} we review the details behind the saturation mechanism of the parametrically resonant system \eqref{equations of motion with normalisation choice}. Comparison with previous work on the saturation of the $r$-mode instability via mode coupling is discussed in \cref{sec:Comparison with r-modes}. We conclude with a summary and some final remarks in \cref{sec:Summary}.
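As an illustration of how the amplitude equations above are evolved in practice, the following sketch integrates a single triplet with a fixed-step RK4 scheme. All parameter values are arbitrary placeholders chosen only to make the dynamics visible; they are not the growth rates, frequencies, or coupling coefficients computed in this work.
\begin{verbatim}
import numpy as np

# Placeholder triplet parameters (illustrative only)
gam = np.array([1e-5, -1e-4, -1e-4])   # parent growth rate, daughter damping rates
om  = np.array([1.0, 0.61, 0.40])      # mode frequencies, omega_a ~ omega_b + omega_c
dom = om[0] - om[1] - om[2]            # detuning
H_E = 1e3                              # coupling coefficient over unit energy

def rhs(t, Q):
    Qa, Qb, Qc = Q
    return np.array([
        gam[0]*Qa + 1j*om[0]*H_E*Qb*Qc*np.exp(-1j*dom*t),
        gam[1]*Qb + 1j*om[1]*H_E*np.conj(Qc)*Qa*np.exp(1j*dom*t),
        gam[2]*Qc + 1j*om[2]*H_E*np.conj(Qb)*Qa*np.exp(1j*dom*t)])

Q = np.array([1e-8, 1e-10, 1e-10], dtype=complex)   # small seed amplitudes
dt, nsteps = 5.0, 400000
for n in range(nsteps):                              # fixed-step RK4 integration
    t = n*dt
    k1 = rhs(t, Q);        k2 = rhs(t + dt/2, Q + dt/2*k1)
    k3 = rhs(t + dt/2, Q + dt/2*k2); k4 = rhs(t + dt, Q + dt*k3)
    Q = Q + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# For suitable parameters |Q_a| settles near the parametric instability threshold
print(np.abs(Q))
\end{verbatim}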
\label{sec:Summary} We have presented the first results about the saturation of the $f$-mode instability in neutron stars, due to quadratic mode coupling. Using Newtonian polytropes to describe both supernova- and merger-derived neutron stars, we calculated all the couplings of the most unstable $f$-mode multipoles to other polar modes and obtained their saturation amplitude throughout their instability windows. Once the fast-rotating, nascent neutron star enters the instability window, the $f$-mode is driven unstable due to the emission of gravitational waves (CFS instability). Coupling of the exponentially growing (parent) mode to damped (daughter) modes leads to a parametric resonance instability, during which a pair of daughters drain the parent's energy and grow, when the latter crosses their characteristic parametric instability threshold. If the daughters are sufficiently dissipated, the triplet reaches an equilibrium and saturates. The efficiency of the coupling among three modes is determined by their coupling coefficient, depending on their eigenfunctions, and by their detuning, which measures how close to resonance they are. The triplet amplitudes approach constant values only if the detuning is larger than some lower limit, thus favoring far-from-exact resonances. As a result of the coupling, however, the mode frequencies are shifted and finally evolve towards an exact nonlinear resonance. On the other hand, a lower detuning usually induces oscillations in the triplet amplitudes, around their equilibrium values, in the form of limit cycles or chaotic orbits. Although it is usually treated as a constant, the saturation amplitude changes throughout the window, due to its temperature dependence and because different daughter pairs may set the lowest PIT at different points. We found that the saturation amplitude is larger near the low- and high-temperature edges of the instability window (as high as $\approx 3\times 10^{-4}$), and gradually decreases at intermediate temperatures (with values as low as $\sim 10^{-9}$; the definition used for the amplitude is $|Q|=\sqrt{E_\mathrm{mode}/Mc^2}$\,). The perturbative nonlinear approach that we use is, in its core, simple and has many advantages. As long as the eigenfrequencies and eigenfunctions of the modes are provided, it allows us to easily identify the important couplings in the system and precisely track their effects on the modes' amplitude evolution. Furthermore, it helps us reveal and understand the richness of possible outcomes and offers a strong insight into the problem, letting us follow every parameter's contribution. The calculation of the eigenfrequencies and eigenfunctions of the modes, however, can be a quite laborious task, with analytic solutions existing only in simple models (e.g., homogeneous star) and with no natural limit on the number of modes that should be considered---for instance, solar observations have shown very high $p$-mode multipole oscillations. In order to obtain as many modes as possible, the slow-rotation approximation was utilized, which is the origin of the major uncertainties in our results (correctness of models aside). The Newtonian formalism provides an accurate qualitative description of the problem, at least for supernova-derived neutron stars. In principle, general relativity should change some key components of the setup (e.g., larger instability windows, shorter growth time scales for the parent), thus affecting the final results. 
Moreover, it is the only appropriate framework for modeling supramassive post-merger remnants, since our Newtonian calculation reflects only their rudimentary properties. Therefore, a relativistic quadratic perturbation scheme needs to be developed in order to obtain conclusive results, especially considering that relativistic hydrodynamic simulations are still far from remaining stable during the secular time scales needed for the instability to grow. Gravitational waves from neutron star oscillations will shed some light on the equation of state of dense matter. Signals generated by the $f$-mode instability might be detectable even with Advanced LIGO, from sources in the Virgo cluster, considering the highest value of the saturation amplitude obtained here \cite{PassamontiEtAl2013,DonevaEtAl2015}. The gravitational-wave era in astronomy has only begun and much work still has to be done, regarding the elimination of the major uncertainties and the improvement of the models, in order to reach confident conclusions.
16
7
1607.03059
1607
1607.06998_arXiv.txt
I construct a model of the inner crust of neutron stars using interactions from chiral effective field theory (EFT) in order to calculate its equation of state (EOS), shear properties, and the spectrum of crustal shear modes. I systematically study uncertainties associated with the nuclear physics input, the crust composition, and neutron entrainment, and estimate their impact on crustal shear properties and the shear-mode spectrum. I find that the uncertainties originate mainly in two sources: The neutron-matter EOS and neutron entrainment. I compare the spectrum of crustal shear modes to observed frequencies of quasi-periodic oscillations in the afterglow of giant $\gamma$-ray bursts and find that all of these frequencies could be described within uncertainties, which are, however, at present too sizable to infer neutron-star properties from observations.
\label{sec:introduction} Neutron stars are remarkable objects: With masses up to 2~M$_{\odot}$~\cite{Demorest2010, Antoniadis2013} and typical radii of the order of 12 km~\cite{Lattimer:2014, Hebeler:2013, Watts:2016uzu}, densities inside neutron stars are higher than densities accessible in experiments on Earth. This makes neutron stars excellent laboratories for physical theories under extreme conditions. A large part of the available observational data on neutron stars is linked to the physics of neutron-star crusts, which can be divided into the outer and the inner crust. The outer crust consists of a lattice of neutron-rich nuclei immersed in a sea of electrons. Deeper in the neutron star, with increasing density and neutron chemical potential, the nuclei become more and more neutron-rich. At densities of $\rho \sim 4\times 10^{11} \text{g/cm}^3$, the neutron chemical potential becomes positive and neutrons begin to drip out of the nuclei. This is where the inner crust begins. In addition to free neutrons, inhomogeneous phases of nuclear matter, the so-called nuclear pasta phases, may appear; see, e.g., Ref.~\cite{Schneider:2013dwa}. At the crust-core transition density, which is roughly half of the nuclear saturation density $\rho_{0}\sim 2.7\times 10^{14} \text{g/cm}^3 \sim 0.16 \fm^{-3}$, the nuclei will dissolve and a phase of uniform nuclear matter in $\beta$ equilibrium will begin. Understanding crustal properties is key to describing various neutron-star observations~\cite{Chamel:2008ca}. In this paper, I focus on shear properties of the neutron-star crusts: the shear modulus $\mu$, shear velocities $v_S$, and the frequency spectrum of crustal shear modes. Crustal shear modes are of particular interest for the description of quasiperiodic oscillations (QPOs) in the afterglow of giant $\gamma$-ray bursts in magnetars~\cite{Israel:2005, Strohmayer:2005, Watts:2006, Strohmayer:2006, Hambaryan:2010}. The shear modulus describes how the neutron-star crust elastically deforms under shear stress, i.e., it describes the stiffness of the crust lattice under shear deformations. These deformations lead to the formation of shear oscillations, which travel through the crust with the shear velocity $v_S$ and have a frequency that depends on $v_S$ and the crust parameters. Giant flares trigger starquakes that cause crustal shear deformations and lead to shear oscillations in the crust~\cite{Duncan:1998my}. These shear oscillations can in principle modulate the surface emission and then be observed as QPOs. However, QPOs are not simply crustal oscillation modes because the global magnetic field couples the neutron star's crust and core and leads to the formation of global oscillation modes~\cite{Levin:2006ck, Gabler:2010rp}. The global magnetic field, thus, plays an important role in the correct description of QPO oscillation spectra. Because the restoring force in the crustal lattice is the Coulomb interaction, the shear modulus depends on the charge number $Z$ of the lattice ions and their density $n_i$. While for the outer crust these are well understood, the composition and structure of the inner crust are not well constrained. 
Furthermore, additional effects in the inner crust are thought to be crucial for the correct description of crustal shear modes, such as neutron superfluidity~\cite{Andersson:2008, Samuelsson:2009xz, Passamonti:2011mc, Sotani:2013jya}, entrainment of neutrons with the crust lattice~\cite{Chamel:2012zn}, or the appearance of pasta phases~\cite{Gearheart:2011qt, Sotani:2011nn, Passamonti:2016jfo}, but these effects are not completely understood. This ignorance of crustal properties is also reflected in the crustal shear spectra. So far, no oscillation model, neither crustal nor global, is able to describe all observed QPO frequencies; see, e.g., Ref.~\cite{SteinerWatts2009}. On the other hand, none of these models include systematic uncertainties. In this paper, I estimate the effects of nuclear-physics uncertainties on the spectrum of crustal shear modes. These uncertainties may be sizable and originate from various sources, e.g., the inner-crust EOS, the crust structure and composition, or neutron entrainment. This paper is structured as follows: In Sec.~\ref{sec:crustEOS} I will determine models for the inner-crust equation of state (EOS) within the Wigner-Seitz approximation, based on realistic interactions with systematic theoretical uncertainties. I use these EOSs in Sec.~\ref{sec:shearspeed} to determine the shear modulus and the shear velocities of the neutron-star inner crust. Finally, in Secs.~\ref{sec:QPOs} and~\ref{sec:n1}, I calculate the frequencies of the fundamental crustal shear oscillations as well as of the first radial overtone, including nuclear-physics uncertainties, with the goal of identifying the largest sources of uncertainty. I summarize and give an outlook in Sec.~\ref{sec:outlook}.
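To make the dependence of the shear modulus on the lattice charge number and ion density noted above explicit, the sketch below evaluates a commonly quoted effective shear modulus of a bcc Coulomb crystal and the corresponding shear speed. The coefficient is the standard literature value, and the density values are rough, assumed illustrative numbers, not the results derived in this paper.
\begin{verbatim}
import numpy as np

# Commonly used effective shear modulus of a bcc Coulomb lattice,
# mu = 0.1194 * n_i * (Z e)^2 / a, with a the Wigner-Seitz radius (cgs units).
e_esu = 4.803e-10          # elementary charge [esu]
Z     = 40.0               # charge number of the lattice nuclei (assumed)
n_i   = 1.0e-5 * 1.0e39    # ion number density [cm^-3] (~1e-5 fm^-3, assumed)
rho   = 1.0e13             # total mass density [g cm^-3] (assumed)

a   = (3.0 / (4.0 * np.pi * n_i))**(1.0 / 3.0)   # Wigner-Seitz radius [cm]
mu  = 0.1194 * n_i * (Z * e_esu)**2 / a          # shear modulus [erg cm^-3]
v_s = np.sqrt(mu / rho)                          # shear speed [cm s^-1]
print(f"mu ~ {mu:.2e} erg/cm^3, v_s ~ {v_s/1e5:.0f} km/s")
\end{verbatim}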
\label{sec:outlook} In this paper I have studied the influence of different sources of uncertainty on the spectrum of shear modes in the neutron-star inner crust. To capture the uncertainties in the nuclear interactions, I used parametrizations for the energy per particle of nuclear matter, which were fit to chiral EFT calculations of pure neutron matter and the empirical saturation point. Using these interactions, I modeled the inner crust in the Wigner-Seitz approximation and studied the impact of uncertainties in the crust composition and the surface energy parameters. While the uncertainty in the crust composition has the largest impact on the geometry of the Wigner-Seitz cell, the uncertainty in the inner-crust EOS is dominated by the nuclear interactions. Using the inner-crust model, I determined the shear modulus and shear velocities in the neutron star crust with uncertainties. For the shear modulus I found that the main uncertainty stems from the crust composition at low densities and from the neutron-matter EOS at higher densities. For the shear velocities, the main source of uncertainty is neutron entrainment, which leads to a variation of up to a factor of 2 in the neutron-star crust. Using free-slip boundary conditions, I calculated the frequencies of the fundamental crustal shear modes and compared the calculation to observed QPO frequencies. I obtained fundamental frequencies ranging from $18$ to $28$ Hz, with a total uncertainty band of $13-52$~Hz. I identified three major sources of uncertainty: first, the EOS of nuclear matter up to saturation density, which sets the inner-crust EOS; second, the EOS above saturation density, which enters the calculation via the radius variation; and third, the entrainment factor. The effects of the uncertainties in the crust composition and surface parameters on the shear-mode frequencies, instead, are small. Both an improved description of the EOS with reduced theoretical uncertainties and a better determination of the entrainment factor are necessary to reliably model crustal shear oscillations. Corrections to the value of entrainment, as suggested in Ref.~\cite{Kobyakov:2013}, lower the number of neutrons locked in the lattice and lead to an increase of the calculated fundamental frequencies to $24-35$ Hz. If the fundamental QPO frequencies can be described in terms of crustal shear modes, an identification of the fundamental shear mode with the $28-29$ Hz QPO thus seems likely. I also performed calculations of oscillation modes with $n=0$ and $2 \leq l \leq 16$ as well as of the $n=1, l=2$ mode. These calculations are very dependent on the neutron-star parameters, but show that every observed QPO frequency could be described by at least one crustal shear mode within uncertainties. While QPOs in principle could be used to infer neutron-star properties, current uncertainties are quite sizable and hinder the clear identification of modes, which, in turn, impedes the extraction of robust constraints. The computation of shear modes in the neutron-star crust would benefit mostly from (a) a reduction of the uncertainty of the EOS of neutron matter at densities below and above saturation density, and (b) a determination of the entrainment factor with robust theoretical uncertainties. In addition, the influence of nuclear pasta phases has to be investigated in detail. 
Together with a global neutron-star oscillation model, which properly includes the effects of the strong magnetic fields, these improvements would allow comparisons with observed frequencies to reliably identify modes and infer properties of neutron stars. On the other hand, additional information on the QPO sources, e.g., masses, would allow one to put constraints on the EOS or the entrainment factor.
16
7
1607.06998
1607
1607.03090_arXiv.txt
Proxima Centauri is an M dwarf approximately 15,000 AU from the Alpha Centauri binary, comoving and likely in a loosely bound orbit. Dynamic simulations show this configuration can form from a more tightly bound triple system. As our nearest neighbors, these stars command great interest as potential planet hosts, and the dynamics of the stars govern the formation of any planets within the system. Here we present a scenario for the evolution of Alpha Centauri A and B and Proxima Centauri as a triple system. Based on N-body simulations, we determine this pathway to formation is plausible, and we quantify the implications for planet formation in the Alpha Centauri binary. We expect this formation scenario may have truncated the circumstellar disk slightly more than a system that formed in the current configuration, but that it most likely does not prevent terrestrial planet formation. We simulate planet formation in this system and find that in most scenarios, two or more terrestrial planets can be expected around either Alpha Centauri A or B, orbiting in a region out to approximately 2 AU, assuming planetesimals and planetary embryos are able to first form in the system. Additionally, terrestrial planet formation and stability in Proxima Centauri's habitable zone is also plausible. However, an absence of planets around these stars may be indicative of highly disruptive stellar dynamics in the past.
Our nearest neighbor, the M-dwarf Proxima Centauri, is thought to be tenuously bound to the Alpha Centauri binary, forming an extremely wide triple system; although measurements are not precise enough to constrain the orbit, its proximity to Alpha Centauri along with its comoving velocity would be very unlikely in a passing, unconnected star \citep{2006AJ....132.1995W}. A detailed study of triple system dynamics \citep{2012Natur.492..221R} hypothesized that the three stars could have formed closer together, as part of a single system, and that dynamical interactions between them could then have led to Proxima's near-ejection onto its current highly eccentric path. There has also been significant interest in the possibility of planets in the Alpha Centauri system. As our nearest neighbors, these stars represent the best candidates for in-depth study, as well as the most likely target for a search for biomarkers or any distant future interstellar contact if habitable planets were to be found there. A hot Earth-sized planet has been reported around Alpha Centauri B \citep{2012Natur.491..207D}, although the detection has been disputed \citep{2013ApJ...770..133H, 2016MNRAS.456L...6R}. Other observational studies have ruled out the possibility of giant planets in the system \citep{2015MNRAS.450.2043D, 2001A&A...374..675E} and put constraints on the detectability of planets within the system \citep{2015IJAsB..14..305E, 2013ApJ...764..130E, 2008ApJ...679.1582G}. The gravitational forces of multiple stars introduce additional complications for planetary formation; however, the prevalence of stellar multiplicity means it is critical to understand such systems if we are to understand planet formation in the Universe as a whole \citep{2014arXiv1406.1357T}. Although planet searches initially avoided binaries due to their additional complications as observational targets as well as the reduced likelihood of planet formation in their more turbulent dynamical environments \citep{2010ASSL..366...19E}, many planets have now been discovered in binary systems, including a multiplanet system in the binary 55 Cancri \citep{2008ApJ...675..790F}. In some cases, planet searches discovered both a planet and a previously unknown companion star \citep{2009A&A...494..373M}. A planet has also been detected in 16 Cygni, a triple system consisting of two Sun-like stars and a red dwarf, similar to Alpha Centauri except that here the smaller star is part of the inner binary \citep{1997ApJ...483..457C}. In addition to the multiple systems we see today, it is likely that even more stars were members of such systems when they formed. There is evidence to suggest many, if not most, stars form in bound multistellar systems which then eject members until they reach stable configurations, resulting mostly in singles and binaries but occasionally in higher-multiplicity systems \citep{2010ApJ...725L..56R, 2007prpl.conf..133G, 2014prpl.conf..267R}. This is consistent with the lower multiplicity rates of smaller stars, as they are more easily ejected than massive stars. The consequences of ``fly-by'' interactions between stars have been studied \citep{2015MNRAS.448..344L}, but multiple bound stars have different outcomes, as they repeatedly interact over multiple orbits. The protoplanetary disks around these stars, from which planets will eventually form, may be truncated or disturbed during these stellar interactions \citep{2007ApJ...660..807Q}. 
Several groups have made theoretical studies of the disk or planet stability \citep{2015ApJ...799..147J, 2015ApJ...798...70R, 2015ApJ...798...69R, 2012A&A...539A..18M, 2012AstL...38..581P, 2009MNRAS.400.1936P, 1994ApJ...421..651A} and formation environment \citep{2010ApJ...708.1566X, 2009MNRAS.393L..21T, 2008MNRAS.388.1528T, 2008ApJ...679.1582G, 2002A&A...396..219B, 2002ApJ...576..982Q, 1997AJ....113.1445W} in Alpha Centauri. However, most of these studies assume the stars were in their current orbits. If Alpha Centauri exchanged energy with Proxima Centauri to allow the latter to reach its current orbit, the Alpha Centauri binary would have lost energy in the process, altering its own orbit as well. These interactions typically take place soon after the stars' formation, as does the formation of any circumstellar disks and protoplanets within them. Therefore, such interactions could have significant consequences for our assumptions about planet formation in this system. In \cite{2009MNRAS.393L..21T}, the possibility of wider initial orbits was examined in the context of formation in a cluster, finding that initially wider orbits could improve the ease of formation of planetesimals. In this work, we seek to characterize the limits of the dynamical history of the Alpha Centauri star system, with particular interest in how it may affect any planet formation within that system. We will assume Proxima Centauri is on an eccentric, bound orbit, and that its current position at around 15,000 AU from the Alpha Centauri binary is likely in the long, slow, portion of its orbit relatively near apocenter. We constructed a large population of triple systems that could evolve into the current arrangement and simulate their interactions, looking for examples of Proxima (a.k.a. Alpha Centauri C) ending up on a wide, highly eccentric orbit. We examine two limiting cases: all stars at their present masses, so that A and B significantly outweigh C, thus minimizing the effects any energy exchanges have on the binary; and an equal-mass case which assumes that all three stars initially grew at similar rates, but that C was ejected when mass was still accreting and all three stars were its size (0.123 M$_\odot$), maximizing the effects on the binary system. This puts bounds on the amount by which the binary orbit could possibly change. Ultimately, we seek to understand whether planets may be present in Alpha Centauri, at what locations, and with what mass. The evolution of Proxima Centauri can affect this, but the effect could range from trivial to quite substantial depending on timing, so we bracket the range of possibilities to consider the range of outcomes.
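A simple Kepler's-third-law estimate illustrates why Proxima, if bound, spends most of its time near apocentre; the semi-major axis and total mass used below are assumed round numbers for illustration, not fitted orbital elements.
\begin{verbatim}
import numpy as np

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun]
a_AU  = 9000.0   # assumed semi-major axis; a ~15,000 AU separation would then lie near apocentre
M_tot = 2.2      # approximate combined mass of Alpha Cen A, B, and Proxima [Msun]
P_yr  = np.sqrt(a_AU**3 / M_tot)
print(f"orbital period ~ {P_yr:.2e} yr")   # of order several 10^5 yr
\end{verbatim}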
In a large suite of N-body simulations, we recreate possible pathways to the formation of the Alpha Centauri-Proxima Centauri system according to the method described in \cite{2012Natur.492..221R}, and explore the effects this would have on planet formation around the individual stars. In typical scenarios, assuming a debris disk at least as dense as the Minimum Mass Solar Nebula, we expect several terrestrial planets to form within a few AU of the central star, with a high probability of a planet in the habitable zone. We find it is plausible that Proxima formed at a distance of a few hundred AU from Alpha Centauri and was thrown out to the current (presumed) orbit at tens of thousands of AU while the stars were still accreting mass. In the process, it most likely caused the Alpha Centauri binary to see a decrease in semi-major axis and an increase in eccentricity, reducing the pericenter. This would cause truncation of the outer protoplanetary disk to a radius that scales with the binary's pericenter. Binaries which have undergone this type of three-body interaction in the past may have gone through a phase with a tighter orbit than the current one; the minimum pericenter could take any value up to the present value, but is typically no more than 20\% smaller. This additional truncation strips more outer material from the disk, but in most simulations enough material remains to form systems of terrestrial planets. Although the process of planetesimal formation, especially in binaries, is not yet fully understood, we make the assumption that the system was able to form planetesimals and planetary embryos, and then examine how planet formation would proceed from that point. In our simulations of the Alpha Centauri system, the disk was typically truncated to near the ice line, likely preventing formation of gas giants and even Neptunes or mini-Neptunes. See \cite{2013MNRAS.431.3444C}, \cite{2014MNRAS.440L..11R}, and \cite{2014ApJ...780...53C} for discussions of in situ versus migration formation pathways of close-in mini-Neptunes. Unless the initial disk had a much higher density than expected, high-mass, close-in planets such as some found by Kepler \citep{2014ApJ...790..146F} are not expected here, with neither an outer region to form planets that migrate in, nor an outer well of material to feed an inner region while planets form in situ. In addition, these interactions do not necessarily rule out the possibility of planets around Proxima, although an absence of planets may indicate a past close encounter and serve as a confirmation of triple system interactions involving a close pass. Finally, we predict that if Proxima is orbiting at high inclinations, it may be inducing Kozai-Lidov oscillations in Alpha Centauri which will change its eccentricity on a Gyr timescale. It is uncertain whether this would increase or decrease the pericenter from the present value, but it most likely will not decrease it further than the minimum pericenter it has experienced previously, and so should not cause significant disturbance beyond the previous truncation. If, however, the outer star is perturbed enough by external forces such as passing stars, it could end up on a new orbit, essentially randomizing the system. Overall, significant uncertainties remain, but our simulations indicate that, despite the possibility of a turbulent past, Alpha Centauri B and its companions are still likely terrestrial planet hosts. 
Missions capable of detecting or conclusively ruling out such planets would yield great insights into the formation of planets within multistellar environments. Any planets found would provide targets for detailed characterization, while a non-detection would be a good indicator of the system having undergone disruptive stellar interactions, which helps constrain the fitness of multistellar systems as planet hosts. Therefore, we look forward to results from current and future searches of this system.
16
7
1607.03090
1607
1607.07881.txt
We present the Cosmic Web Detachment (CWD) model, a conceptual framework to interpret galaxy evolution in a cosmological context, providing a direct link between the star formation history of galaxies and the cosmic web. The CWD model unifies several mechanisms known to disrupt or stop star formation into one single physical process and provides a natural explanation for a wide range of galaxy properties. Galaxies begin accreting star-forming gas at early times via a network of \textit{primordial}, highly coherent filaments. The efficient star formation phase ends when non-linear interactions with other galaxies or elements of the cosmic web \textit{detach} the galaxy from its network of primordial filaments, thus ending the efficient accretion of cold gas. The stripping of the filamentary web around galaxies is the physical process responsible for star formation quenching in gas stripping, harassment, strangulation and starvation. Being a purely gravitational/mechanical process, CWD acts at a more fundamental level than internal feedback processes. We introduce a simple and efficient formalism to identify CWD events in N-body simulations. With it we reproduce and explain, in the context of CWD, several key observations including downsizing, the cosmic star formation rate history, the galaxy mass-color diagram and the dependence of the fraction of red galaxies on mass and local density.
Star formation quenching, its underlying mechanism and what triggers it, is one of the most pressing problems in modern galaxy formation. Galaxies in the Universe are either actively forming stars or in a ``quenched" state of practically no star formation. This dichotomy is reflected in the bi-modality in the color-magnitude diagram \citep{Strateva01,Hogg04,Baldry04,Faber07,Schiminovich07} clearly separating blue star-forming spiral galaxies from red ``quenched" elliptical galaxies (see \citet{Dekel05} for a review). Most galaxies sit in one of the two groups and only a few galaxies are found in the ``green valley" in between \citep{Schawinski14}. The clear gap between the two galaxy populations can be interpreted as star formation quenching either having occurred several Gyr ago and/or being a fast event \citep{Bell04,Blanton06,Wyder07,Salim07}. Several mechanisms have been proposed as responsible for the decrease in star formation activity in galaxies. Currently favored candidates include internal feedback processes such as AGN and supernovae feedback \citep{Silk98,DiMatteo05, Best05}. However, the non-causal connection between black hole mass and bulge mass challenges the predominant place given to AGN in galaxy formation \citep{Peng07,Jahnke11}, allowing other options. Cosmic environment has long been recognized as playing a major role in shaping the properties of galaxies, especially in dense environments, in the form of ram pressure and gas stripping \citep{Gunn72,Balsara94,Abadi99,Quilis00,Hester06,McCarthy08}, harassment \citep{Moore96,Moore98}, strangulation \citep{Bekki02,Fujita04,Kawata08,Peng15}, and preprocessing \citep{Kodama01,Treu03,Goto03,Fujita04}. These processes have in common that they are external influences that affect star formation by preventing gas from reaching galaxies or by removing gas reservoirs. Recent studies show environmentally induced quenching, independent of AGN feedback, from purely geometric constraints \citep{Aragon14b}, and an as-yet-undefined but clearly cosmological origin \citep{Feldmann15,Peng15}. The spatial correlation of galaxy properties, up to scales of several megaparsecs, also points to the dominant role of environment on galaxy evolution \citep{Dressler80,Weinmann06,Kauffmann13}. Both observations and simulations point to star formation quenching being the result of environmental (external) processes and feedback (internal) processes \citep{Hogg03,Blanton05,Cooper07,Coil08,Peng10,Voort11}, but their relative importance is not yet clear. External processes are usually characterized by local density, a first-order environmental descriptor with limited power to encode the intricate geometry of the cosmic web, which is crucial for understanding the geometry and dynamics of gas accretion, galaxy interactions, etc. Internal processes seem to be driven by halo mass \citep{Peng10}. However, halo mass and density are correlated, making this relation not straightforward to interpret. \subsection{The need for a cosmic web-based galaxy formation model} Current galaxy formation models, in particular semi-analytical implementations, can reproduce a wide range of observations and provide useful insight on the physical processes occurring inside galaxies \citep{Rees77,White78,Lacey91,Cole91,White91,Kauffmann93,Somerville99,Cole00,Benson01,Benson02,Hatton03,Monaco07,Somerville08}. 
These models intrinsically consider galaxies as isolated entities, and interactions with other galaxies and their cosmic environment are only indirectly treated via their mass accretion and merger history \citep{Hearing13,Mutch13,Hearing14,Aldo16}. The accretion/merger history of galaxies is insensitive to the particular geometry and dynamics of the cosmic web surrounding galaxies. However, there is extensive evidence of the effect of cosmic environment (beyond the traditional cluster vs. field separation) on halo/galaxy properties \citep{Navarro04,Trujillo06,Aragon07,Aragon07b,Hahn07,Darvish14,Chen15,Alpaslan16,Darvish16}. For instance, star-forming gas is tunneled into the cores of early galaxies via a network of dense filamentary streams \citep{Dekel09}, but this key information is not included in current galaxy formation models, making them blind to environmental effects on the filamentary streams, one of the main paths for gas accretion in galaxies. %Current state-of-the-art of N-body hydrodynamic simulations are able to reproduce to a remarkable degree the observed properties of galaxies, although much effort is still involved in understanding the fine-tuning needed to achieve such results. Recent simulations such as Illustris \citep{Genel14, Vogelsberger14} and Eagle \citep{Schaye15} can probe galaxy formation processes from a few kiloparsecs to tens of megaparsecs, allowing us to study physical processes in the interiors of galaxies and their time evolution. N-body hydrodynamic simulations are basically synthetic observations that act as a black box. While they can reproduce many properties of galaxies, they do not directly explain the origin of such properties. They are as good as our ability to interpret them. A semi-analytic model relevant to the one presented here is the empirical \textit{age matching} approach \citep{Hearing13,Hearing14,Watson15} \citep[see also][for another method based on mass accretion]{Aldo16}, in which the time of star formation quenching is assumed to be the earliest of i) the time when a galaxy reaches $10^{12}$\msun, ii) the formation time of the halo, defined as the time when the slope of the mass accretion history changes, and iii) the time when the halo is accreted into a larger halo, becoming a satellite. The \textit{age} of the halo computed as above is then matched with observed properties of galaxies associated with time evolution, such as color. Age matching is an empirical model which, despite being based on ad-hoc rules, can reproduce several observations \citep{Hearing14}. However, it does not explain the underlying physical process that defines the age of a galaxy, and so its explanatory and predictive power is limited. A successful theory of galaxy formation must not only reproduce a range of observations but also provide a unifying framework that can explain and predict the properties of galaxies. %------------------- %---- FIGURE ----- %------------------- \begin{figure*} \centering \includegraphics[width=0.99\textwidth,angle=0.0]{figure-01.eps} % From: /media/miguel/DATA1/Detachment/32Mpc_LSS/Detach/ADV/ \caption{Coherent vs. turbulent velocity field around haloes. Left: a 1 h$^{-1}$Mpc thick slice through the density field of a 32 h$^{-1}$Mpc box. Center: the velocity field at scales smaller than 125 h$^{-1}$kpc highlighted using the particle advection visualization technique \citep[see][for a description of the particle advection technique]{Aragon13}. The streams indicate the direction of the velocity field. 
The areas where the advected particles accumulate correspond to regions where the velocity field converges. Right: a zoomed region showing details in the velocity field. The red/white circles correspond to sub-haloes inside the slice. Top panels show that the velocity field at early times is highly coherent and closely delineates the location of primordial filaments connected to haloes. At later times (bottom panels) the velocity field is highly turbulent and haloes are not connected to the observed streams. }\label{fig:box_advection} \end{figure*} %----------------------------------------------------------------------------------------------------------- % %----------------------------------------------------------------------------------------------------------- \subsection{Star-forming gas accretion via primordial filaments} Galaxy formation is a complex process that began when, driven by gravity, nodes of a tenuous web of primordial filaments emerged from tiny density fluctuations \citep{Zeldovich70} \citep[see also][]{Hidding14}. At the nodes of this cosmic network proto-galaxies began to grow, fed via coherent filamentary streams of cold (T$ <10^5$K) gas \citep{keres05, Dekel09,Danovich12,Harford16}. The coherence of the streams at early times makes gas accretion highly efficient. At $z \sim 2$ most of the gas in star-forming galaxies is accreted via dense narrow filamentary streams that inject cold gas into the inner regions of galaxies even in the presence of shock heating \citep{Dekel09,Faucher11}. In comparison, far less gas is accreted via the inefficient, isotropic, hot (T$ >10^5$K) accretion mode. The nature of the cold streams connected to a proto-galaxy is closely linked to its surrounding matter distribution, and in particular to surrounding peaks, as described in the \textit{Cosmic Web theory} \citep{Bond96}. The quadrupolar tidal field configuration associated with a pair of peaks results in the formation of a bridge of matter in between. The Cosmic Web theory is usually invoked to explain the large megaparsec-scale filaments observed in the galaxy distribution. However, the same principle can be applied at smaller scales and earlier times to describe the formation of bridges between proto-galaxies. Primordial filaments form from a nearly Gaussian field in a gravitationally young environment characterized by a highly coherent velocity field. Figure \ref{fig:box_advection} shows a comparison between the density and velocity fields at $z=5$ and $z=0$. The velocity field was high-pass filtered at 125 kpc in order to highlight dynamics on galactic scales instead of the large-scale flows that dominate the raw velocity field (see \citet{Aragon13} for details). The differences between the early and late velocity fields are remarkable. The velocity field at $z=5$ is laminar and closely follows the filamentary network surrounding haloes. All haloes can be seen either at the nodes or ridges of the coherent structures (where the advected trajectories converge) in the velocity field. In contrast, the velocity field around haloes at $z=0$ is dominated by turbulent flows that do not correlate with haloes. As shown in \citet{Aragon13}, the magnitude of the velocity field at the small scales characteristic of the primordial velocity field is one order of magnitude smaller than that of the large-scale flows seen in the large present-time filaments. A typical example of such structures is the Pisces-Perseus ridge \citep{Wegner93}, spanning tens of megaparsecs and intersecting several galaxy groups and clusters. 
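A minimal sketch of the kind of high-pass filtering described above (removing bulk flows to expose small-scale velocity structure) is given below; the grid, the Gaussian kernel and the random placeholder field are assumptions chosen for illustration and do not reproduce the paper's exact filter.
\begin{verbatim}
# Minimal sketch: high-pass filter a gridded velocity field by subtracting a
# Gaussian-smoothed copy, keeping only small-scale modes. Grid size, box size
# and the random field below are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

box_size, n_grid = 32.0, 128         # h^-1 Mpc and grid cells (assumed)
cell = box_size / n_grid             # h^-1 Mpc per cell
sigma_cells = 0.125 / cell           # 125 h^-1 kpc filtering scale in cells

# placeholder velocity field, shape (3, n, n, n): vx, vy, vz on a grid
rng = np.random.default_rng(0)
v = rng.normal(size=(3, n_grid, n_grid, n_grid))

# smooth each component with periodic boundaries, then subtract the bulk flow
v_smooth = np.array([gaussian_filter(vi, sigma_cells, mode="wrap") for vi in v])
v_small = v - v_smooth               # high-pass filtered (small-scale) velocity
\end{verbatim}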
The difference in dynamical regime between primordial and non-linear filaments has not been explicitly noted before, and it is key for our understanding of gas accretion. Note that at $z=0$ there are practically no streams connected to haloes, in contrast to $z=5$, where every halo is located at the position of one or multiple streams. Figure \ref{fig:box_advection} shows two different kinds of structures in the cosmic web, separated by their gravitational evolutionary stage. Primordial filaments are formed immediately after haloes begin their collapse and have a marked cellular nature and coherent velocity flows at small scales. The large filaments we see in the present-time galaxy distribution formed hierarchically by the collapse of smaller primordial filaments. They are larger, more massive and dynamically complex. Such large-scale structures also form a cellular system, but at scales of tens of megaparsecs \citep{Joeveer78,Klypin83,Geller89,Icke91,Aragon10c,Einasto11,Aragon13,Aragon14b}. Present-time voids may still contain primordial filaments due to their super-Hubble expansion, which freezes the development of structure at large scales. Such small-scale filaments are prevalent in high-resolution computer simulations of voids \citep{Gottlober03,Sheth04,Park09,Aragon13,Rieder13} and have recently been observed in deep surveys \citep{Kreckel12,Beygu13,Alpaslan14}. Figure \ref{fig:box_advection} highlights the interplay between haloes and their surrounding cosmic web as a function of time and halo mass (since the halo mass function is also a function of time). At $z=5$ the non-linear mass $M_\ast$ is of the order of $10^7$\msun while at the present time it is close to $10^{13}$\msun (see Fig. \ref{fig:M_star}). The non-linear mass gives us a rough estimate of the mass-scale at which haloes become the dominant component of their surrounding cosmic web \citep{Dekel09}. Haloes with masses larger than $M_\ast$ are nodes of their local cosmic web, while less massive haloes will be embedded inside a larger cosmic web element, as is the case for galaxies in large filaments at the present time. %----------------------------------------------------------------------------------------------------------- % %----------------------------------------------------------------------------------------------------------- \subsection{The fate of cold flows and star formation quenching} The stark difference in the velocity field around haloes at early and late times gives us a clue to the fate of the primordial filaments feeding galaxies. The velocity field around haloes in present-time over-dense regions has no memory of the primordial filaments originally connected to the haloes, as a result of non-linear dynamics. If gas accretion through coherent cold filamentary flows is the most efficient mechanism to inject star-forming gas into galaxies, then we should see a clear change in star formation in haloes entering non-linear regions. The change in the accretion mode should be fast (within a few Gyr) given the efficient gas-to-star conversion of cold gas \citep{Dekel09,Voort11}, which makes a gas-starved galaxy stop producing stars within a short time scale \citep{Bauermeister10,Peng15}. Based on our current knowledge of the relation between galaxies, star formation and gas accretion via cold flows we can enumerate the following key assertions: \begin{itemize} \item Proto-galaxies are the nodes of a network of primordial filaments. 
\item Star-forming cold gas is accreted mainly via primordial coherent filamentary streams. \item Star formation closely follows cold gas accretion and ends (to a first approximation) when the gas supply is cut. \end{itemize} \noindent These observations point to a process behind the drop in star formation that acts by affecting the gas infall through primordial filaments. Any mechanism that is able to separate a galaxy from its web of primordial filaments results in star formation quenching. %------------------- %---- FIGURE ----- %------------------- \begin{figure*} \centering \includegraphics[width=0.99\textwidth,angle=0.0]{figure-02.eps} \caption{{\bf Web Detachment of a $2.3\times10^{11}$\msun galaxy by accretion into a $2.4\times10^{14}$\msun cluster}. The cluster is located at the top, outside the figure. From top to bottom we show: {\bf i)} the projected density field centered on the galaxy in a box 3 \mpc on a side (gray background). Small colored circles correspond to individual particles inside the primordial filamentary web surrounding the galaxy at $z=5$ (see text for details). For reference the particles are colored according to their distance from the central galaxy at $z=5$. The same set of particles (keeping their original colors) is tracked and displayed at different times, showing the disruption and detachment of the web of filaments as the galaxy is accreted by the cluster. {\bf ii)} Coherent structures in the dark matter velocity field after removing bulk flows above scales of 250 h$^{-1}$kpc (see Fig. \ref{fig:box_advection} for details). Matter flows from the dark-blue regions and accumulates in the light-yellow structures. At early times the velocity field is highly coherent and filamentary; almost laminar streams feed the central galaxy. After the web detachment event at $z\sim3$ the velocity field around the galaxy becomes turbulent and the filamentary streams are lost. {\bf iii)} Gas accretion rate as a function of the distance from the galaxy, divided into ``cold'' and ``hot'' modes (see text for details). The vertical dotted line shows the virial radius of the galaxy.} \label{fig:detachment_example} \end{figure*} %------------------- %---- FIGURE ----- %------------------- \begin{figure} \centering \includegraphics[width=0.49\textwidth,angle=0.0]{figure-03.eps} \caption{Top: cold gas accretion rate (dashed blue line) and star formation rate (solid black line) as a function of redshift, measured inside a shell spanning $r_{vir}-1.5r_{vir}$ centered on the halo presented in Fig. \ref{fig:detachment_example}. Bottom: mass accretion history. The black line is the raw MAH and the solid area shows the fixed MAH as described in the text.}\label{fig:detachment_example_SFR} \end{figure} %----------------------------------------------------------------------------------------------------------- % %----------------------------------------------------------------------------------------------------------- \subsection{Filament detachment by accretion into a cluster}\label{sec:satellite_accretion} The results presented in this section are based on a zoom simulation of a 64\mpc box. The zoom region is centered on a $2.4\times10^{14}$\msun cluster and has a mass resolution of $2\times10^7$\msun per dark matter particle. The simulation includes gas with cooling and metal enrichment as well as stochastic star formation (see Appendix \ref{sec:simulations}). 
Figure \ref{fig:detachment_example} shows a satellite galaxy being accreted into a large cluster, with present-time masses of $2 \times 10^{11}$ \msun and $2.4\times10^{14}$\msun respectively. This is a case of an extreme interaction between a galaxy and its environment that might be described as ram-pressure gas stripping \citep{Mori00,Quilis00,Mayer06,Kronberger08,Boselli08,Rijcke10}, and it clearly shows non-linear effects not only on the galaxy but also on its surrounding network of filaments. As the galaxy begins to interact with the cluster we see changes in its surrounding matter configuration, mass accretion and star formation. The top panels in Fig. \ref{fig:detachment_example} show that at early times ($z=5$) the satellite galaxy is the central node of a network of thin filaments. The filaments were identified by placing test particles and letting them follow the instantaneous velocity field, interpolated using a Delaunay tessellation linear interpolation scheme \citep{Schaap00} and applying a high-pass filter at the scale of interest \citep{Aragon13}. The places where the particles accumulate were used to construct a filament mask, which was then used to tag particles in the simulation (Aragon-Calvo in preparation). The corresponding velocity field is composed of coherent filamentary streams of gas and dark matter that delineate the location of the primordial filaments in the matter distribution. It is worth noting that the (high-pass filtered) velocity field shows the filamentary structures in the cosmic web more clearly than the density field. The coherent filamentary structures in the velocity field at $z=5$ have a laminar character, reflecting their early evolutionary stage. Gas accretion was computed in shells of thickness $\Delta R_{shell}$ around the center of the halo $\boldsymbol{r_h}$ with velocity $\boldsymbol{v_h}$, following the approach of \citet{Faucher11b}, as follows: \begin{equation} \dot{M} = \sum_i m_i \frac{\boldsymbol{v_i} - \boldsymbol{v_h}}{\Delta R_{shell}} \cdot \frac{\boldsymbol{r_i} - \boldsymbol{r_h}}{\left | \boldsymbol{r_i} - \boldsymbol{r_h} \right |} \end{equation} \noindent where the sum is over the particles with position $\boldsymbol{r_i}$, mass $m_i$ and velocity $\boldsymbol{v_i}$ inside the shell. Gas was divided into hot and cold species by its instantaneous temperature at the threshold $T=10^{5}$ K. The coherent and smooth velocity field provides an efficient mechanism for constant cold gas accretion, producing steady star formation. This is shown in Fig. \ref{fig:detachment_example_SFR}, where we see a steady increase in the star formation rate from $z=8$ up to its peak at $z \sim 4$. The cold gas accretion rate at $z=5$ is of the order of $\sim 1-3 $ M$_{\odot}$/yr. Around this time the galaxy begins to interact with the proto-cluster, and this is reflected in both the cold gas accretion rate and the star formation rate, which show a rapid decline. After its first interaction with the proto-cluster, the galaxy suffers several episodes of cold gas loss. Around $z \sim 1$ the galaxy begins its accretion into the cluster. The higher density and strong tidal field associated with the cluster induce a mechanical stress in the filamentary web around the galaxy, stretching it to the point where it can no longer remain gravitationally attached (top-right panels of Fig. \ref{fig:detachment_example}). After $z \sim 1$, even before the galaxy has been completely accreted by the cluster, there is no recognizable filamentary web around the galaxy. 
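The shell estimator above is straightforward to implement; the following minimal numpy sketch evaluates it for placeholder particle arrays, splitting cold and hot gas at $T=10^5$ K (with this sign convention, negative values correspond to inflow).
\begin{verbatim}
# Minimal sketch of the shell accretion-rate estimator: sum the radial mass
# flux of gas particles in a shell around the halo, split into cold and hot
# components. All particle arrays below are placeholders.
import numpy as np

def accretion_rate(pos, vel, mass, temp, r_h, v_h,
                   r_shell=1.25, dr_shell=0.5, t_cold=1.0e5):
    """pos, vel: (N, 3) arrays; mass, temp: (N,) arrays; r_h, v_h: halo
    position and velocity. Returns (Mdot_cold, Mdot_hot)."""
    d = pos - r_h
    r = np.linalg.norm(d, axis=1)
    sel = np.abs(r - r_shell) < 0.5 * dr_shell
    r_hat = d[sel] / r[sel, None]
    v_rad = np.einsum("ij,ij->i", vel[sel] - v_h, r_hat)
    flux = mass[sel] * v_rad / dr_shell
    cold = temp[sel] < t_cold
    return flux[cold].sum(), flux[~cold].sum()

# toy usage with random placeholder particles
rng = np.random.default_rng(1)
N = 10000
pos = rng.uniform(-2.0, 2.0, size=(N, 3))
vel = rng.normal(scale=100.0, size=(N, 3))
mass = np.full(N, 2.0e7)
temp = 10.0 ** rng.uniform(3.0, 7.0, size=N)
print(accretion_rate(pos, vel, mass, temp, np.zeros(3), np.zeros(3)))
\end{verbatim}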
The local velocity field is highly turbulent and the lack of coherent gas streams connected to the galaxy prevents the accretion of cold gas (lower panels in Fig. \ref{fig:detachment_example}), ending its efficient star formation phase. After the galaxy has been detached from its web of primordial filaments there is no mechanism that can reconnect it. From this point on the galaxy can no longer accrete cold gas, depleting its internal reserves of gas and eventually stopping star formation. %------------------- %---- FIGURE ----- %------------------- \begin{figure*} \centering \includegraphics[width=0.8\textwidth,angle=0.0]{figure-04.eps} \caption{Three toy models of CWD events. From top to bottom: major merger, satellite accretion and cosmic web crossing. These events can be considered different cases of halo accretion. The basic CWD event occurs as follows: A) Initially a galaxy accretes cold gas via its web of filaments. B) The galaxy then suffers a violent detachment event from either a merger, accretion into a larger halo or accretion/crossing of a large-scale structure (here a filament).}\label{fig:toy_model} \end{figure*} %----------------------------------------------------------------------------------------------------------- % %----------------------------------------------------------------------------------------------------------- \subsection{Cosmic Web Detachment events}\label{sec:detachment_events} The star formation regulation and subsequent quenching presented in Fig. \ref{fig:detachment_example} is a purely mechanical/gravitational process, requiring no internal feedback from AGN. The rupture in the cold gas accretion channel is sufficient to stop star formation and, in that sense, can be considered a more fundamental process than internal feedback. Subsequent internal feedback from AGN could further prevent gas from entering the galaxy and/or cooling, removing any residual star formation \citep{DiMatteo05,Springel05,Best05}. Figure \ref{fig:detachment_example} highlights the key role of primordial filaments in the star formation history of galaxies and the decisive role of the large-scale matter distribution around a galaxy. The time when a galaxy {\it detaches} from its primordial filamentary web marks a turning point in its star formation history. The gas stripping by ram pressure and tidal interactions shown in Fig. \ref{fig:detachment_example} is followed by quenching, but it is not the underlying process responsible for ending star formation in the satellite galaxy. The actual process that induces quenching is the removal of the feeding filaments connected to the galaxy. Without them there is no way to effectively accrete cold gas to form stars and the galaxy has to rely on internal reserves. This process, which we call \textit{Cosmic Web Detachment} (CWD), is fundamentally a starvation process \citep{Peng15} triggered by non-linear interactions between galaxies and other galaxies or their environment. In the following sections we describe other non-linear processes that share the same CWD mechanism to trigger star formation quenching. In the rest of this paper we use the terms CWD and web detachment interchangeably. Figure \ref{fig:toy_model} shows toy models of three basic processes that can result in CWD: i) major merger, ii) satellite accretion and iii) cosmic web infall/crossing. 
Note that it is possible that galaxies inside voids and walls could remain attached to their web of filaments while experiencing reduced or even halted gas accretion due to the super-Hubble expansion characteristic of under-dense regions \citep{Weygaert93,Schaap07,Aragon11}. In all cases presented in Fig. \ref{fig:toy_model} the initially star-forming galaxy is connected to a web of primordial filaments from which it accretes gas. Subsequent non-linear interactions between the galaxy and a nearby structure (another galaxy, a group or a large filament/wall) detach the galaxy from its web of feeding filaments. Following the CWD event, the galaxy can no longer accrete cold gas and star formation stops. Note that in the case of a major merger we assume that the newly merged halo could remain connected to its primordial filaments. However, these filaments are not connected to the galaxy's core. A similar situation occurs in the case of satellite accretion. After web detachment the filaments originally attached to the satellite could remain connected to the larger halo but are not able to inject gas into the satellite galaxy. CWD by interaction with filaments and walls may be an important quenching mechanism for dwarf galaxies \citep{Benitez13}. %------------------- %---- FIGURE ----- Detachment spatial distribution %------------------- \begin{figure} \centering \includegraphics[width=0.35\textwidth,angle=0.0]{figure-05.eps} \caption{Assigning a detachment time to a galaxy. The red contour encloses regions that have been identified as multi-streaming using our Lagrangian Sheet prescription (see text for details), in this case a cluster and three large filaments connected to it. A galaxy is first detected inside a void (A), at which point it is assumed to be forming stars, until it enters a multi-streaming region (B); from this point on it is considered quenched (C).}\label{fig:detachment_assignmnet_toy} \end{figure} %========================================================================== % %==========================================================================
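The detachment-time assignment illustrated in the figure above can be written compactly: a galaxy is considered star-forming until its position first falls inside a multi-streaming region. The sketch below assumes gridded boolean masks and a tracked trajectory, all placeholders; it is not the paper's Lagrangian Sheet implementation.
\begin{verbatim}
# Minimal sketch of assigning a CWD (detachment) time: the detachment redshift
# is the first snapshot at which the tracked galaxy sits inside a
# multi-streaming region. Masks, grid and trajectory below are placeholders.
import numpy as np

def detachment_redshift(traj, masks, redshifts, box_size):
    """traj: (n_snap, 3) comoving positions; masks: list of 3D boolean arrays
    (multi-streaming regions); redshifts ordered from high to low z."""
    for pos, mask, z in zip(traj, masks, redshifts):
        n = mask.shape[0]
        i, j, k = np.floor(pos / box_size * n).astype(int) % n
        if mask[i, j, k]:
            return z              # first entry into a multi-streaming region
    return None                   # still attached at the last snapshot

# toy example: two snapshots on a 64^3 grid
n, box = 64, 32.0
masks = [np.zeros((n, n, n), bool), np.zeros((n, n, n), bool)]
masks[1][: n // 2] = True         # half the box is multi-streaming at low z
traj = np.array([[20.0, 5.0, 5.0], [10.0, 5.0, 5.0]])
print(detachment_redshift(traj, masks, np.array([2.0, 0.0]), box))
\end{verbatim}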
The CWD model presented here provides a simple mechanism to stop star formation in galaxies by separating a galaxy from its main star-forming gas supply. CWD is, to first order, purely mechanical/gravitational. The nature of the particular CWD event determines the fate of the remaining gas inside the galaxy: major mergers will result in most of the gas being consumed in a starburst, often followed by a change in the galaxy's morphology, while less violent detachments will allow gas reservoirs to be slowly consumed until depleted within a few Gyr \citep{Bauermeister10}. Star formation histories of high-redshift galaxies support the picture of gas accretion$\to$star formation (web attached), starburst (web detaching) and instantaneous or slow decline (web detached) \citep{Reddy12, Blanton06}. \citet{Schawinski14} found evidence that spiral galaxies move through the green valley when their external gas supply ends but continue to burn their remaining gas into stars. Ellipticals, on the other hand, require a scenario where both the accreting gas and their reservoirs are consumed on a very short time scale. Gas strangulation and quenching are two distinct ways of reducing the SFR. Quenching is in principle instantaneous, although in practice it is inefficient. Strangulation cuts the gas supply and allows star formation to continue at a decreasing rate. Evidence has been found for bimodality in the metallicity--stellar mass relation between passive and star-forming galaxies that points to both processes being in action for ETGs. Strangulation applies to galaxies below $\sim 10^{11}$\msun. These galaxies continue to grow in stellar mass and metallicity for some 4 billion years after the gas supply terminates, which we interpret here as due to web detachment. CWD is a strangulation process. We have shown that it fits the data on SFR histories, the mass-color bimodality and the environmental dependence of the red galaxy fraction. We therefore propose that it is an effective and natural implementation of strangulation. Indeed, AGNs are not needed to drive outflows in our approach, even for massive galaxies. Web detachment naturally regulates the gas supply at all masses. Of course AGNs may well be a consequence both of mass and of gas accretion as well as of mergers, and almost certainly play a role in quenching massive galaxies, above $\sim 10^{11}$ \msun. %========================================================================== %--- Acknowledgements %==========================================================================
16
7
1607.07881
1607
1607.02213_arXiv.txt
2.5D time-dependent ideal magnetohydrodynamic (MHD) models in Cartesian coordinates were used in previous studies to seek MHD equilibria involving a magnetic flux rope embedded in a bipolar, partially open background field. As demonstrated by these studies, the equilibrium solutions of the system are separated into two branches: the flux rope sticks to the photosphere for solutions on the lower branch but is suspended in the corona for those on the upper branch. Moreover, a solution originally on the lower branch jumps to the upper one as the related control parameter increases and reaches a critical value; this jump is referred to here as an upward catastrophe. The present paper advances these studies in three aspects. First, the magnetic field is changed to be force-free. The system still experiences an upward catastrophe with an increase in each control parameter. Secondly, under the force-free approximation, there also exists a downward catastrophe, characterized by a jump of a solution from the upper branch to the lower. Both catastrophes are irreversible processes connecting the two branches of equilibrium solutions so as to form a cycle. Finally, the magnetic energy in the numerical domain is calculated. It is found that magnetic energy is released in both catastrophes. Amp\`{e}re's force, which vanishes everywhere for force-free fields, appears only during the catastrophes and does positive work, which serves as the major mechanism for the energy release. The implications of the downward catastrophe and its relevance to solar activities are briefly discussed.
\label{sec:introduction} \par Coronal magnetic flux ropes are believed to have a close relationship with solar eruptive activities \citep[e.g.][]{Gibson2006a,Labrosse2010a,Chen2011a}, including prominence/filament eruptions, flares, and coronal mass ejections (CMEs), which are widely considered to be different manifestations of a single physical process \citep[e.g.][]{Low1996a,Wu1999a,Torok2011a,Zhang2014a}, corresponding to a sudden destabilization of the coronal magnetic configuration \citep{Archontis2008a}. Flux ropes can be triggered to erupt by many different mechanisms such as magnetic reconnections and various instabilities \citep[e.g.][]{Antiochos1999a,Chen2000a,Moore2001a,Kliem2006a}. It was also suggested by many authors that catastrophe could be responsible for solar eruptive activities \citep{Priest1990a,Forbes1995a,Lin2004b,Zhang2007a}. A catastrophe occurs via a loss of equilibrium as a control parameter of the magnetic system exceeds a critical value \citep{vanTend1978a,Forbes1990a,Isenberg1993a}. Here the control parameter characterizes the physical properties of the magnetic configuration. Any parameter can be selected as a control parameter provided that different values of this parameter result in different configurations \citep{Lin2002b,Wang2003a,Su2011a}, and different kinds of control parameters correspond to different evolutionary scenarios \citep{Kliem2014a}. During a catastrophe, magnetic free energy is quickly released and converted to kinetic and thermal energy \citep{Chen2007a}. Catastrophe and instability are intimately related in the evolution of different magnetic systems \citep{Kliem2014a,Longcope2014a}. \par Many solar eruptive activities originate from active regions \citep{Benz2008a,Chen2011a}, and Cartesian coordinates are widely used to investigate active region activities. In Cartesian coordinates, both analytical and numerical analyses have been performed to explore catastrophic behaviors of flux ropes in a bipolar background field. If the bipolar field is completely closed, no catastrophe occurs for a flux rope of finite cross section \citep{Hu2000a}; only if the radius of the flux rope is small compared to the length scale of the photospheric magnetic field will there exist a catastrophe \citep{Forbes1991a,Lin2002b}. During the catastrophe, the flux rope, originally attached to the photosphere, loses equilibrium at a critical value of the control parameter and reaches a new equilibrium levitating in the corona. In a partially open bipolar field, however, \cite{Hu2001a} found that a catastrophe also occurs for a flux rope of finite cross section. These studies imply that there are two branches of equilibrium states of a flux rope with catastrophe: the lower branch and the upper branch. The flux rope sticks to the solar surface for solutions on the lower branch and levitates in the corona for those on the upper branch, with a vertical current sheet below it. The catastrophe mentioned above corresponds to a jump from the lower branch to the upper, and thus is called ``upward catastrophe'' hereinafter. \par All the previous studies only analysed the upward catastrophe. Some theoretical questions remain open: is there a catastrophe during which the flux rope falls back from the upper branch to the lower (called ``downward catastrophe'' hereinafter)? Will it also release magnetic free energy? To answer these questions, we follow the work by \cite{Hu2001a} to study the equilibrium solutions, but change to force-free fields. 
The motivations for using a force-free field structure rather than a magnetostatic structure are as follows. First, the strong magnetic fields over active regions are usually considered to be force-free \citep{Low1977a}. Second, under force-free conditions, the system is dominated by magnetic fields, and its energy is limited to magnetic energy, which substantially simplifies the energy analysis (see \sect{sec:energy}). By analysing the evolution of the equilibrium solutions versus the control parameters in Cartesian coordinates, the properties of the catastrophes in a partially open bipolar background field under the force-free approximation are investigated. We mainly focus on the existence of the downward catastrophe and the evolution of the magnetic energy during the catastrophes. The sections are arranged as follows: simulation methods are introduced in \sect{sec:equation}; two kinds of catastrophes are demonstrated in \sect{sec:catastrophe}; the variations of magnetic energy during upward and downward catastrophes are analysed in \sect{sec:energy}; by summarizing the simulation results, the whole evolution of a flux rope in a partially open bipolar field is illustrated in \sect{sec:profile}; the mechanism by which magnetic energy is released is analysed in \sect{sec:work}; the significance of catastrophe in both observational and theoretical analyses is discussed in \sect{sec:discuss}. \par
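For reference, the quantity at the heart of the energy analysis, the work done by Amp\`{e}re's force, can be sketched numerically as follows; the finite-difference grid, field values and units are placeholders, and the snippet does not reproduce the paper's numerical scheme.
\begin{verbatim}
# Minimal sketch (not the paper's scheme): compute the Ampere force J x B with
# J = curl(B)/mu0 on a uniform 2.5D grid (d/dz = 0) and the power density F.v,
# whose volume integral measures the rate of magnetic energy conversion.
import numpy as np

mu0 = 4.0e-7 * np.pi
ny, nx, dx, dy = 128, 128, 1.0e5, 1.0e5     # grid and spacing [m] (assumed)

# placeholder magnetic and velocity fields, shape (3, ny, nx)
rng = np.random.default_rng(2)
B = 1.0e-3 * rng.normal(size=(3, ny, nx))   # Tesla (placeholder)
v = 1.0e3 * rng.normal(size=(3, ny, nx))    # m/s (placeholder)

def curl_2p5d(B, dx, dy):
    """Curl of B for fields depending on (x, y) only (d/dz = 0)."""
    dBz_dy = np.gradient(B[2], dy, axis=0)
    dBz_dx = np.gradient(B[2], dx, axis=1)
    dBy_dx = np.gradient(B[1], dx, axis=1)
    dBx_dy = np.gradient(B[0], dy, axis=0)
    return np.array([dBz_dy, -dBz_dx, dBy_dx - dBx_dy])

J = curl_2p5d(B, dx, dy) / mu0              # current density
F = np.cross(J, B, axis=0)                  # Ampere (Lorentz) force density
power = np.sum(F * v) * dx * dy             # work rate per unit length in z
print(power)
\end{verbatim}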
\label{sec:discuss} To investigate the catastrophic behavior of coronal flux ropes, we simulate the evolution of the equilibrium states associated with a flux rope in a force-free, partially open bipolar field versus different control parameters. It is found that, under the force-free approximation, an upward catastrophe still occurs. Moreover, there also exists the possibility that a downward catastrophe takes place, during which a levitating flux rope may fall down to the photosphere. The evolutionary scenario represented by the control parameter might cause a catastrophe to take place. For example, the ``flux-feeding'' procedure, during which chromospheric fibrils rise upward and merge with the prominence above \citep{Zhang2014a}, will result in varying $\Phi_z$, and varying $\Phi_p$ could be caused by twisting or untwisting motions of the flux rope \citep{Torok2010a,Liu2016a}. All these phenomena are possible triggers of catastrophes. Upward and downward catastrophes connect the two branches of equilibrium states so as to form a cycle. These two catastrophes are both irreversible but reproducible processes. Thus there might exist activities during which more than one catastrophe takes place: e.g. a flux rope is suspended in the corona at first, then falls back to the photosphere, and at last jumps upward, resulting in an eruptive activity. \par By calculating the magnetic energy within the numerical domain, the evolution of magnetic energy is analysed semi-quantitatively. Although the moving directions of the rope are opposite for the upward and downward catastrophes, magnetic energy is always released. The released energy is rather large, comparable to that of a medium-class flare (see \sect{sec:energy}). Since there is no magnetic reconnection, magnetic energy is mainly released by the work done by Amp\`{e}re's force. Our calculation demonstrates that the magnetic energy released by the work done by Amp\`{e}re's force during a catastrophe is sufficient for solar eruptive activities, indicating that magnetic reconnection is not always necessary. If magnetic reconnection is included in the simulation, the eruptive speed can be significantly enhanced \citep{Chen2007a}, and magnetic energy is released by both magnetic reconnection and the work done by Amp\`{e}re's force. \par Previous studies have proposed that an upward catastrophe can serve as an effective mechanism for solar eruptive activities. Since a catastrophe occurs via a loss of equilibrium at a critical point, it can be triggered by very small disturbances. In addition, upward catastrophes can not only account for CMEs and flares but also provide sites for fast reconnection \citep{Chen2007a}. Apart from eruptive activities, there are also energetic but non-eruptive activities, such as confined flares \citep{Liu2014a}. The physical mechanism of confined flares has been discussed in many previous studies \citep[e.g.][]{Yang2014a,Joshi2015a}. During a downward catastrophe, although magnetic energy is still released, the flux rope falls back to the photosphere. Therefore, a downward catastrophe might be another possible cause of energetic but non-eruptive activities. Observational evidence is still needed to confirm these conjectures. \par This research is supported by Grants from NSFC 41131065, 41574165, 41421063 and 41222031, MOEC 20113402110001, CAS Key Research Program KZZD-EW-01-4, and the fundamental research funds for the central universities WK2080000077.
16
7
1607.02213
1607
1607.01026_arXiv.txt
{ Simple parameter-free analytic bias functions for the two-point correlation of densities in spheres at large separation are presented. These bias functions generalize the so-called Kaiser bias to the mildly non-linear regime for arbitrary density contrasts as $b(\rho)-b(1) \propto{(1-\rho^{-13/21})\rho^{{1+n}/{3}}}$ with $b(1)=-4/21-n/3$ for a power-law initial spectrum with index $n$. The derivation is carried out in the context of large deviation statistics while relying on the spherical collapse model. A logarithmic transformation provides a saddle approximation which is valid for the whole range of densities and is shown to be accurate against the 30 Gpc cube state-of-the-art Horizon Run~4 simulation. Special configurations of two concentric spheres that allow one to identify peaks are employed to obtain the conditional bias and a proxy for BBKS extrema correlation functions. These analytic bias functions should be used jointly with extended perturbation theory to predict two-point clustering statistics as they capture the non-linear regime of structure formation at the percent level down to scales of about $10$ Mpc$/h$ at redshift $0$. Conversely, the joint statistics also provide us with optimal dark matter two-point correlation estimates which can be applied either universally to all spheres or to a restricted set of biased (over- or underdense) pairs. Based on a simple fiducial survey, this estimator is shown to perform five times better than the usual two-point function estimators. Extracting more information from correlations of different types of objects should prove essential in the context of upcoming surveys like Euclid, DESI, PFS or LSST. }
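A minimal numerical sketch of the bias function quoted in the abstract is given below; the overall normalization is not specified there and is set to unity purely for illustration (an assumption, not the paper's value).
\begin{verbatim}
# Minimal sketch of the large-separation density bias quoted above,
#   b(rho) = b(1) + A * (1 - rho**(-13/21)) * rho**((1 + n)/3),
# with b(1) = -4/21 - n/3 for a power-law spectrum of index n. The amplitude
# A is an assumed placeholder normalization.
import numpy as np

def bias(rho, n=-1.5, amplitude=1.0):
    b1 = -4.0 / 21.0 - n / 3.0
    return b1 + amplitude * (1.0 - rho ** (-13.0 / 21.0)) * rho ** ((1.0 + n) / 3.0)

rho = np.linspace(0.2, 5.0, 9)
print(np.round(bias(rho), 3))     # bias(1.0) reduces to b(1) = -4/21 - n/3
\end{verbatim}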
The large-scale structure of the Universe puts very tight constraints on cosmological models. Deep spectroscopic surveys, like Euclid \citep{Euclid}, DESI \citep{DESI}, PFS \citep{2014PASJ...66R...1T} or LSST \citep{lsst}, will allow astronomers to study the details of structure formation at different epochs, hence to probe cosmic acceleration. Yet, in order to reach the expected precision on the equation of state of dark energy, astronomers must address the following challenges: non-linear gravitational evolution \citep{Bernardeau02}, redshift space distortions \citep{Kaiser87,tns}, bias \citep{Kaiser84,dekel87}, intrinsic alignments \citep{2015SSRv..193...67K} and baryonic physics \citep{2015JCAP...12..049S}. In this context, two-point clustering has generated a lot of interest \cite[e.g.][and references therein]{halo-model}, as it allows one to investigate how the densest regions of space -- where dark halos usually reside -- are clustered, which in turn sheds light on the so-called biasing between dark matter and halos: as halos correspond to peaks of the density field, they are not a fair tracer of that field. \cite{Kaiser84} showed that in the high contrast, $\delta/\sigma \gg 1$, large separation limit, the correlation function, $\xi_{>\delta/\sigma}$, of peaks lying above this threshold reads \begin{equation} \xi_{>\delta/\sigma}\approx \frac{1}{\sigma^2} \left(\frac{\delta}{\sigma}\right)^{2}\xi, \end{equation} so that the correlation function of high density regions decreases more slowly than the density field correlation function, $\xi$, with an amplification factor or {\sl bias} that is proportional to the threshold squared. This analysis can also be restricted to the peaks of the density field above a given threshold following the seminal papers by \cite{BBKS} (hereafter BBKS) and \cite{Regos95}. For two-point functions, the non-linear regime increases the number of modes used to better constrain cosmological parameters. Of particular (partially theoretically unexplored) interest is the possibility of computing conditional two-point correlations, e.g. the two-point correlation between regions that have specific densities, so as to provide more robust estimates of the large-distance two-point correlation. It has been argued \citep{Bernardeau14,Bernardeau15,Uhlemann16,Codis16b} that the statistics of cosmic densities in concentric spheres can constrain cosmological parameters competitively, as the corresponding spherical symmetry allows for analytical predictions in the mildly non-linear regime, beyond what is commonly achievable via other statistics in the context of perturbation theory. Indeed, the zero variance limit of the cumulant generating functions yields estimates of the joint density probability distribution function (PDF hereafter) which seem to match simulations in the regime of variances of order unity \citep{1989A&A...220....1B,1992ApJ...390L..61B,1993ApJ...412L...9J,2002A&A...382..412V,Bernardeau14,Bernardeau15}. This success was shown to originate from a regime of large deviations at play in the mildly non-linear evolution of the large-scale structure \citep{LDPinLSS}. The aim of this paper is to show that the spherically-symmetric framework which led to surprisingly accurate predictions for one-point statistics also accommodates, in the large separation limit, analytic estimates of the two-point statistics and in particular of the bias factor associated with imposed constraints within concentric cosmic densities. 
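As a minimal numerical illustration of the Kaiser amplification above (with a toy power-law correlation function that is an assumption, not a measurement):
\begin{verbatim}
# Minimal illustration of the Kaiser amplification: for peaks above a
# threshold delta/sigma, xi_peaks ~ (delta/sigma)^2 * xi / sigma^2.
import numpy as np

sigma = 0.9                        # rms fluctuation on the smoothing scale (assumed)
nu = 3.0                           # threshold delta/sigma (assumed)
r = np.array([10.0, 20.0, 50.0])   # separations [Mpc/h]
xi = (r / 5.0) ** -1.8             # toy power-law correlation function (assumed)

xi_peaks = (nu / sigma) ** 2 * xi  # amplified correlation of high-density regions
print(np.round(xi_peaks / xi, 2))  # constant bias factor (nu/sigma)^2
\end{verbatim}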
Recently, \citeauthor{Codis16a} (\citeyear{Codis16a}a) studied the two-point statistics of the density within concentric spheres, whose redshift evolution was shown to be accurately predicted by large-deviations theory in the mildly non-linear regime, but relied on numerical integration of highly oscillating complex functions and was therefore subject to possibly significant numerical errors, in particular for large densities. Since \cite{Uhlemann16} showed that very accurate analytic approximations could be found for one-point statistics by using a logarithmic transform of the density field and performing a saddle-point approximation, we propose in this paper to extend the use of the logarithmic transform to two-point statistics. It was shown in \citeauthor{Codis16b} (\citeyear{Codis16b}b) that the one-point PDF can be fully predicted, modulo one parameter, the variance of the density field, which is the driving parameter of the theory, leading to two options: i) higher order perturbation theory can be used to predict the value of this variance as a function of scale and redshift in order to recover the full PDF or ii) this one parameter model can be used to build optimal likelihood estimators for the variance based on the measurement of densities in spheres. Conversely, in the present paper, modulo the unknown underlying two-point correlation function of the dark matter density field, we will show that the same large-deviations formalism provides us with the full statistics of the two-point PDF of the density within concentric spheres separated by a distance $r_e$. Once again, one can i) rely on perturbation theory to predict the underlying dark matter correlation function \citep[e.g.][]{Taruya2012}, or ii) build, from the present theory, optimal estimators for the dark matter correlation function to be applied to measured density in separated spheres. \begin{figure} \centering\includegraphics[width=0.65\columnwidth]{configuration-nplusncell.pdf} % \caption{The two-point configuration with two concentric cells of radii $R_{1}$ and $R_{2}$ in one location (purple) and two other concentric cells of radii $R_{1}$ and $R_{2}$ in another location (red) separated by a distance $r_e\gg R_2$. } \label{fig:configuration-nplusncell} \end{figure} In this paper, following \cite{1996A&A...312...11B}, the focus will be on predicting analytically the density two-point statistics for configurations shown in Figure~\ref{fig:configuration-nplusncell} and specifically the corresponding bias functions (the aforementioned density-dependent scalings of the two-point correlation). We will in particular consider the density field smoothed at two different scales in two concentric spheres which can be turned into an inner density and a slope (difference of density between the two spheres). This will allow us to focus on the conditional density-given-slope bias as a quasi-linear proxy for the BBKS peak correlation function. These bias functions generalize the so-called Kaiser linear bias in the mildly non-linear regime for large separations and arbitrary density contrasts. Hence they provide alternative ways of using gravitational clustering to probe our cosmological model, in particular using specific regions of space (underdense/overdense, small/big slope, etc). Leveraging conditionals on the value of the density at the legs of the correlations will allow for a more robust estimate of the two-point correlation function. 
We will illustrate on a fiducial experiment how the present formalism can be used to estimate optimally the underlying top-hat filtered correlation function. This paper is organized as follows. Section~\ref{sec:LDPS} presents briefly the implementation of a large deviation principle on the joint statistics of concentric cells based on a saddle point approximation. Section~\ref{sec:validation} compares these analytic predictions to the state-of-the-art dark matter simulation Horizon~Run~4 (HR4). Section~\ref{sec:recoverxi} demonstrates how to measure optimally the dark matter correlation function on a given survey. Finally, Section~\ref{sec:conclusion} wraps up. Appendix~\ref{app:LSSfast} shortly describes the accompanying package {\tt LSSFast} for the evaluation of the one-cell PDF and the bias functions. Appendix~\ref{app:LDP} reviews the formalism of large deviations relevant to obtain the density PDF for concentric spheres and the joint PDF at large separations. Appendix~\ref{app:biasPT} provides a description of bias functions in the Gaussian and weakly non-Gaussian regime based on perturbation theory. Appendix~\ref{app:figures} provides a validation of HR4 at redshift $z=4$ together with extended results for redshift $z=0$.
\label{sec:conclusion} This paper presented simple parameter-free analytic bias functions for the two-point correlation of density in spheres (equations~\eqref{eq:densitybias} and \eqref{eq:jointdensityslopebias}). These bias functions generalize the so-called Kaiser bias in the mildly non-linear regime for (not so) large separation and for arbitrary contrast when considering the density smoothed with a top-hat filter (or equivalently measured in spheres). The derivation was carried out using a large-deviation principle, while relying on the spherical collapse model. A logarithmic transformation allowed for a saddle approximation, which was shown to be extremely accurate against the state-of-the-art HR4 N-body simulation throughout the range of measured densities, e.g. extending the match to the theory by a factor of 10 or more on joint PDFs, conditionals and marginals. This is both a success of the theory and an assessment of the quality of this simulation. The conditional density-given-slope and density-given-mass biases were also presented as a quasi-linear proxy to the BBKS extremum correlation functions operating at lower redshifts. As an illustration, Figure~\ref{fig:corrfct-spheres} presented the expected bias modulation of the sphere-sphere correlation function at redshift 0.7 in spheres of 14 Mpc$/h$. \citeauthor{Codis16a} (\citeyear{Codis16a}a) recently showed how such bias functions could be used as a means of mitigating correlation errors when computing count-in-cell statistics on finite surveys. Conversely, based on the knowledge of the joint PDF of the density in spheres separated by $r_{e}$, we presented and implemented in Section~\ref{sec:recoverxi} a maximum likelihood estimator for the underlying top-hat smoothed dark matter density correlation function, which was shown to be unbiased and very accurate for separations above 50 Mpc$/h$. Its variance is up to 5 times smaller than that of the classical sample estimator. Hence these analytic bias functions should be used jointly with analytic models for the two-point function from perturbation theory for cosmic parameter estimation, as they capture the biasing effect of the non-linear regime of structure formation. Let us stress in closing that the saddle point PDFs presented in this work are not arbitrary fitting functions, but a clear prediction of the theory of gravitational clustering which allows for direct comparison with data at low redshift. These PDFs also compare favourably with fits to a lognormal PDF, which provide a much worse match, as illustrated in Figure~\ref{fig:lognormal-PDF-HR4-z0p7}. The saddle point approximation presented here gives, at very little extra cost, a few percent accuracy over about 4 orders of magnitude in the values of the one- and two-cell PDFs and percent accuracy on the bias functions for all densities probed by the simulation, with an explicit dependence on both cosmology, through the initial power spectrum, and the chosen theory of gravitation, through the spherical collapse model. In this paper we ignored redshift-space distortions and galaxy biasing, which will be investigated in Feix et al. (in prep.). \vskip 0.5cm {\bf Acknowledgements:} This work is partially supported by the grants ANR-12-BS05-0002 and ANR-13-BS05-0005 (\url{http://cosmicorigin.org}) of the French {\sl Agence Nationale de la Recherche}. CU is supported by the Delta-ITP consortium, a program of the Netherlands organization for scientific research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). 
She thanks IAP for hospitality when this project was completed. D. Pogosyan thanks the Institut Lagrange de Paris, a LABEX funded by the ANR (under reference ANR-10-LABX-63) within the {\sl Investissements d'Avenir} programme under reference ANR-11-IDEX-0004-02. We thank Mark Neyrinck for comments.
16
7
1607.01026
1607
1607.08088_arXiv.txt
We present the correspondence between the non-interacting multi-hadron fermion star equation of state in the many-flavor limit and the degrees of freedom of a Kaluza\,--\,Klein compact star. Many flavors can be interpreted in this framework as one extra compacti\-fied spatial dimension with various, more-and-more massive hadron state excitations. The effect of increasing the number of degrees of freedom on the equation of state and on the mass-radius relation, $M(R)$, was investigated. The maximum mass of the star, $M_{\mathrm{max}}$, was also calculated as a function of the maximum number of excited states, $n$, and the size of the compactified extra dimension, $R_{\mathrm{c}}$.
\label{sec:intro} Investigation of the phase diagram of hot and dense matter aims to cla\-rify the role of phase transitions between partonic and hadronic states, search for the critical point, and explore the exotic degrees of freedom predicted by nuclear physics and low-energy Quantum Chromodynamics (QCD). Lattice QCD works well in the high-temperature and low-density part of the phase diagram, while high-energy heavy-ion collisions focus on the critical point. For testing the cold, superdense, strongly interacting matter in the zero-temperature and high-density limit, compact stars are the best (celestial) laboratories, especially through the observation of the astrophysical properties of these extreme objects. However, while there is no direct observation of the inner structure of a compact star, physical properties like the measured mass-radius relation, moment of inertia, rotation period, magnetic field and the soon-to-be-available gravitational wave observations help to build and parametrize realistic equations of state (EoS) in the non-perturbative and high-density QCD regime. This is one of the main goals of Working Group 2 of the ``NewCompStar COST Action MP1304'', which developed a continuously evolving online database, CompOSE~\cite{composeweb}, for neutron star EoS. The aim of this paper is to present the correspondence between the non-interacting multi-hadron fermion star equation of state in the many-flavor limit and the degrees of freedom of a Kaluza\,--\,Klein compact star. We introduce the description of a compact star in the Kaluza\,--\,Klein world with one extra compactified dimension and show how excited states can be connected to the hadron spectra, especially in the many-flavor limit, assuming more-and-more massive hadron states. We present the effect of increasing the degrees of freedom on the equation of state and the connection of this concept to the mass-radius relation, $M(R)$, of compact stars. We present the dependence of the maxi\-mum mass of the star, $M_{\mathrm{max}}$, on the maximum number of excited states, $n$, and on the size of the compactified extra dimension, $R_{\mathrm{c}}$.
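A minimal sketch of the construction described above is given below: a cold, non-interacting fermion gas summed over a Kaluza--Klein tower of masses, $m_n^2=m_0^2+(n/R_{\mathrm{c}})^2$, in natural units; the parameter values are illustrative assumptions and are not those used in the paper.
\begin{verbatim}
# Minimal sketch (natural units, hbar = c = 1) of a cold, non-interacting
# fermion EoS summed over a Kaluza-Klein tower m_n^2 = m0^2 + (n/Rc)^2.
# Each state with m_n < mu contributes a standard T = 0 Fermi-gas term,
# integrated numerically. Parameter values are illustrative placeholders.
import numpy as np

def eos_point(mu, m0=0.939, Rc=2.0, n_max=10, g=2, npts=2000):
    """Energy density and pressure [GeV^4] at chemical potential mu [GeV];
    m0 in GeV, Rc in GeV^-1."""
    eps, p = 0.0, 0.0
    for n in range(n_max + 1):
        m_n = np.sqrt(m0**2 + (n / Rc) ** 2)
        if mu <= m_n:
            continue                      # state not populated
        kF = np.sqrt(mu**2 - m_n**2)
        k = np.linspace(0.0, kF, npts)
        Ek = np.sqrt(k**2 + m_n**2)
        eps += g / (2.0 * np.pi**2) * np.trapz(k**2 * Ek, k)
        p += g / (6.0 * np.pi**2) * np.trapz(k**4 / Ek, k)
    return eps, p

for mu in (1.0, 1.5, 2.0, 3.0):
    e, p = eos_point(mu)
    print(f"mu = {mu:.1f} GeV: eps = {e:.3e}, P = {p:.3e} GeV^4")
\end{verbatim}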
\label{sec:summary} The correspondence between the non-interacting multi-hadron fermion star EoS in the infinite-flavor limit and the degrees of freedom of a Kaluza\,--\,Klein compact star has been presented. We found that decreasing $R_{\mathrm{c}}$ results in a stiffer EoS with a saturated $M_{\mathrm{max}} \to 0.7 M_{\odot}$, while a larger $R_{\mathrm{c}}$ leads to saturation at smaller $M_{\mathrm{max}}$, depending on the number of degrees of freedom (flavors).
16
7
1607.08088
1607
1607.01483_arXiv.txt
Studying the average properties of galaxies at a fixed comoving number density over a wide redshift range has become a popular observational method, because it may trace the evolution of galaxies statistically. We test this method by comparing the evolution of galaxies at fixed number density and by following individual galaxies through cosmic time ($z=0-5$) in cosmological, hydrodynamical simulations from OWLS. Comparing progenitors, descendants, and galaxies selected at fixed number density at each redshift, we find differences of up to a factor of three for galaxy and interstellar medium (ISM) masses. The difference is somewhat larger for black hole masses. The scatter in ISM mass increases significantly towards low redshift with all selection techniques. We use the fixed number density technique to study the assembly of dark matter, gas, stars, and black holes and the evolution in accretion and star formation rates. We find three different regimes for massive galaxies, consistent with observations: at high redshift the gas accretion rate dominates, at intermediate redshifts the star formation rate is the highest, and at low redshift galaxies grow mostly through mergers. Quiescent galaxies have much lower ISM masses (by definition) and much higher black hole masses, but the stellar and halo masses are fairly similar. Without active galactic nucleus (AGN) feedback, massive galaxies are dominated by star formation down to $z=0$ and most of their stellar mass growth occurs in the centre. With AGN feedback, stellar mass is only added to the outskirts of galaxies by mergers and they grow inside-out.
Galaxies grow through accretion of diffuse and filamentary gas and through mergers with a wide range of mass ratios. This growth is thought to be regulated by feedback from star formation and active galactic nuclei (AGN) \citep[see e.g.][]{Mo2010}. The material expelled in galactic winds almost balances the gas brought in by accretion. Without outflows, cosmological hydrodynamical simulations overproduce the stellar mass formed in both low-mass and high-mass haloes as well as the baryon fraction in groups and clusters \citep[e.g.][]{White1991, Benson2003, McCarthy2010}. Cosmological simulations try to include the relevant physical processes for galaxy formation, but need to employ simplified subgrid recipes to model stellar and black hole feedback. In recent years, huge progress has been made in understanding the effects of these subgrid models and in capturing important properties of observed galaxies and their haloes \citep[e.g.][]{Schaye2010, Dave2011, Vogelsberger2014, Genel2014, Schaye2015, Crain2015}. The growth of structure is followed from the early Universe until the present day and simulations are therefore a powerful tool to study galaxy formation dynamically. However, there are also limitations to how accurately simulations can produce realistic galaxies and it is therefore vital to directly compare them to observations. Doing so enables us both to validate or invalidate aspects of the simulations and to interpret observations of galaxies. Galaxies are observed and studied at a large range of redshifts ($z=0-10$) and are found to have very different properties at low ($z=0-1$) and high redshift ($z=2-6$). For example, galaxies at high redshift are, on average, clumpier, more compact, and have higher star formation rates than equally-massive galaxies at low redshift \citep[e.g.][]{Damen2009, Genzel2011, Szomoru2012}. An important question is how these different populations are connected. To understand how galaxies at high redshift grow into their present-day counterparts, one cannot compare galaxies at different epochs at fixed mass. Rather, the galaxies need to be linked while taking into account their mass growth. Using hydrodynamical simulations or semi-analytic models, it is straightforward to find progenitors and descendants of galaxies and thus calculate the rate at which they grow \citep[e.g.][]{Voort2011a}. Observationally, however, it is hard to connect populations of galaxies at different epochs. One purely observational approach is to look at the galaxy population at fixed comoving number density. By assuming that the brightest galaxies at high redshift evolve into the brightest galaxies at low redshift, one can infer the rate at which the interstellar medium (ISM) mass and stellar mass grow \citep[e.g.][]{Dokkum2010, Papovich2011, Ownsworth2014} and one can study how the structural properties of galaxies change with time \citep[e.g.][]{Brammer2011, Dokkum2013, Patel2013a, Muzzin2013, Papovich2015}. Some studies have been performed using only star-forming galaxies \citep{Patel2013b}. A hybrid technique, using both observations and simulations, is the abundance matching method in which observed galaxies are linked to simulated dark matter haloes by matching their spatial density \citep[e.g.][]{Conroy2009, Moster2013, Behroozi2013a}. In this method the most massive galaxies are assumed to reside in the most massive haloes, somewhat less massive galaxies inside somewhat less massive haloes, etcetera.
Additional information, such as star formation rates, can be used to break degeneracies. By using merger trees, the growth of haloes can be computed from the dark matter simulations directly, from which the growth of the corresponding galaxies can be derived. Recently, a number of observational studies have used results from abundance matching to correct for mergers by decreasing the number density with decreasing redshift \citep{Marchesini2014, Ownsworth2014, Papovich2015}. Abundance matching, semi-analytic models, and cosmological simulations provide ways to test how well galaxies can be traced by number density selections \citep{Leja2013, Behroozi2013b, Mundy2015}. Vice versa, using number densities, we can directly compare evolving galaxy properties, like star formation rates and quiescent fractions, in simulations to those in observations. Here, we use cosmological, hydrodynamical simulations to compare the growth of galaxies and haloes at fixed comoving number density with the mass evolution of their descendants or progenitors. We show that the difference is reasonably small, which lends support to the use of this method. For the first time we show the evolution of each individual component: halo mass, ISM mass, stellar mass, and supermassive black hole mass. We also include accretion rates and star formation rates and find that there are three regimes of growth for massive galaxies. At high redshift, gas accretion dominates. At intermediate redshift, star formation dominates. At low redshift, mergers dominate. We investigate the importance of AGN feedback and compare simulations with and without AGN feedback. We find that, without AGN feedback, star formation dominates strongly down to $z=0$, resulting in more massive galaxies. With AGN feedback the galaxies become quenched and grow mainly in their outer parts (or `inside-out'). This paper is organized as follows. We describe the simulations used in Section~\ref{sec:sim}. In Section~\ref{sec:method} the methods used to connect galaxies at various redshifts are described. Section~\ref{sec:results} shows the mass growth and accretion rates from $z=5$ to $z=0$ for our fiducial simulation. In Section~\ref{sec:compare} we compare the results from Section~\ref{sec:results} to a simulation without AGN feedback. We discuss how our simulated galaxies relate to certain observations in Section~\ref{sec:obs}. Our conclusions can be found in Section~\ref{sec:concl}.
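As a minimal illustration of the fixed cumulative number density selection described above (our sketch, not the pipeline actually used in this work; the catalogue array and box volume are placeholder inputs), selecting at $n(>M)$ simply amounts to keeping the $N = n(>M)\,V_{\rm box}$ most massive galaxies at each snapshot:
\begin{verbatim}
# Minimal sketch: fixed cumulative number density selection from a simulated box.
# `stellar_mass` (Msun) and `box_volume` (comoving Mpc^3) are placeholder inputs.
import numpy as np

def select_fixed_number_density(stellar_mass, box_volume, n_cum):
    """Indices of the N most massive galaxies, with N = n_cum * box_volume."""
    n_select = int(round(n_cum * box_volume))
    order = np.argsort(stellar_mass)[::-1]      # most massive first
    return order[:n_select]

# Example: n(>M) = 2e-4 Mpc^-3 in a (100 Mpc)^3 box keeps the 200 most massive galaxies
rng = np.random.default_rng(0)
masses = 10.0**rng.uniform(8, 12, size=50000)   # mock stellar masses [Msun]
selected = select_fixed_number_density(masses, box_volume=100.0**3, n_cum=2e-4)
print(len(selected), masses[selected].min())
\end{verbatim}
Tracing true progenitors or descendants instead requires merger trees; the comparison below quantifies how far these two selections drift apart.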
\label{sec:concl} While in simulations the growth of an individual galaxy can be followed throughout cosmic time, no such tracking is possible in observations. Statistically, one might expect the $N$ most massive galaxies at high redshift to evolve into the $N$ most massive galaxies at low redshift \citep[e.g.][]{Papovich2011} or that galaxies stay at the same (cumulative) number density as they grow \citep[e.g.][]{Dokkum2010, Patel2013a, Dokkum2013}. However, galaxy mergers and scatter in gas accretion rates complicate this simplified picture \citep[e.g.][]{Leja2013, Behroozi2013b, Mundy2015}. Motivated by observational studies that investigate the redshift dependence of the properties of galaxies selected at a fixed, comoving cumulative number density, we trace progenitors and descendants in cosmological, hydrodynamical simulations from the OWLS project and compare the derived galaxy growth to the one derived using haloes at fixed number density, $n(>M)$. Below we summarize our results. Our simulations are able to trace the progenitors of the majority of galaxies with $n(>M)\le2\times10^{-5}$~Mpc$^{-3}$ at $z=0$ out to $z=5$ or vice versa. Because of halo mergers between sample members, the number of descendants decreases dramatically towards low redshift, in the most extreme case by a factor of 3 at $z=0$ for galaxies with $n(>M)=2\times10^{-3}$~Mpc$^{-3}$ at $z=5$. The fraction of galaxies that are contained in both the number density and progenitor selection decreases towards higher redshift. About a quarter of the progenitors of the most massive $z=0$ galaxies are amongst the most massive galaxies at $z=5$. The progenitor and descendant samples are about 50 per cent complete over $\Delta z=2$. The median stellar, ISM, and halo masses can be different by as much as a factor of 3 when using fixed comoving number densities to select central galaxies versus tracing progenitors or descendants of these galaxies for the redshift range $z=0-5$. However, a similar difference exists between tracing progenitors or descendants, due to mergers and scatter in accretion histories. Observationally there is no way to know which high-redshift galaxies will merge with larger galaxies and which galaxies will not. Mergers and scatter in the assembly of haloes conspire to give similar growth for descendants of galaxies selected at $z=5$ and number density selected galaxies. The same is not true for descendants of galaxies selected at $z=2$, which have higher masses than number density selected galaxies at $z=0$ due to the decreased star formation and thus increased importance of mergers at low redshift. Median black hole masses show differences of up to a factor of~5 between our various tracing methods. For high number densities ($n(>M)=2\times10^{-3}$~Mpc$^{-3}$), taking into account mergers by decreasing the number density traces descendants better than fixed number density, but this is not the case for $n(>M)\le2\times10^{-4}$~Mpc$^{-3}$. The scatter in ISM mass is much larger than the scatter in stellar mass for massive galaxies when AGN feedback is important, e.g.\ at $z<3$ for $n(>M)=2\times10^{-4}$~Mpc$^{-3}$. Interestingly, without AGN feedback, the scatter in ISM mass is even lower than the scatter in stellar mass. When tracing progenitors and descendants, the scatter is generally somewhat higher as compared to tracing galaxies at fixed number density. This is especially true for progenitors at high redshift, for which the scatter is higher by up to a factor of four.
The scatter in black hole mass is always large and peaks around $z=3$ for our fiducial number density. The lower the number density, i.e.\ the more massive the average galaxy, the faster the median halo mass increases. The same is true for the central black hole mass. This effect is less pronounced for ISM and stellar masses, because AGN feedback reduces gas accretion and star formation preferentially in high-mass objects. The mass evolution of quiescent and star-forming galaxies separately shows minor differences at $z<3$ in total halo and stellar masses, but the median ISM mass (black hole mass) of quiescent galaxies is substantially lower (higher) than that of star-forming galaxies at all redshifts. The gas accretion rate at the virial radius follows the dark matter accretion rate closely, but is about an order of magnitude lower. This is not true close to the central galaxy. At high redshift, the average gas accretion rate is only a factor of two lower at 0.15$R_\mathrm{vir}$ than at 0.95$R_\mathrm{vir}$. At $z=0$, however, it is lower by 1.3~dex. This large decrease is in part caused by longer gas cooling times in higher-mass, hotter haloes at a lower average density of the Universe, but primarily by AGN feedback. At certain intermediate redshifts ($z\sim1-2$), the median gas accretion rate is negative, meaning that for more than half our galaxy sample outflow dominates over inflow at 0.15$R_\mathrm{vir}$. We identify three regimes of galaxy growth. Gas accretion dominates until star formation takes over at $z=2$. Below $z=1$, both the gas accretion rate and the star formation rate have decreased sufficiently for stellar mass brought in by (minor) mergers to dominate the galaxy's mass growth. The gas accretion rate at $n(>M)=2\times10^{-4}$~Mpc$^{-3}$ peaks at high redshift ($z\ge5$) and then drops by more than an order of magnitude. The star formation rate peaks somewhat later ($z=3$) and follows the drop of the gas accretion rate. The black hole accretion rate peaks at an even lower redshift ($z=1.8$) and declines mildly by about 0.5~dex to $z=0$. The star accretion rate, i.e.\ the rate at which stars are brought in by mergers, shows no decline. Its evolution resembles that of the dark matter accretion rate, but is lower by two orders of magnitude. The fraction of quiescent galaxies increases down to $z=0$, consistent with observations \citep[e.g.][]{Brammer2011,Patel2013a,Lundgren2014}. However, without AGN feedback, the vast majority of galaxies remains star forming even at low redshift. In this case, the galaxies in our simulations experience accelerated growth in ISM mass, but especially in stellar mass, when $M_\mathrm{halo}\approx10^{12}$~M$_\odot$. As a result, in the absence of AGN feedback, their stellar masses are almost an order of magnitude higher at $z=0$ for $n(>M)=2\times10^{-4}$~Mpc$^{-3}$. The galaxies still transition from being gas accretion dominated to being star formation dominated at $z=3$, but mergers never become important. Without AGN feedback, there is significant star formation in the centre at low redshift and this is where most of the stellar mass is added. When AGN feedback is included, massive galaxies grow only in their outskirts (inside-out growth), consistent with observations. Following galaxies consistently across a large redshift range is one of the main observational challenges. Within the uncertainties presented here, cumulative number densities are a simple tool with which we can follow galaxy growth.
The number density selection traces the mass evolution as well as following true progenitors or descendants does, since the median masses found with the latter two methods themselves differ by a factor of $\sim3$ (for $z=0-5$). Additionally, it provides an excellent way to select similar populations of galaxies in simulations and observations and to compare them directly.
16
7
1607.01483
1607
1607.01998_arXiv.txt
In this work we have estimated 10 collisional ages for 9 families for which, for different reasons, our previous attempts had failed. In general, these are difficult cases that required dedicated effort, such as new family classifications for asteroids in mean motion resonances, in particular the $1/1$ and $2/1$ with Jupiter, as well as a revision of the classification inside the $3/2$ resonance. Of the families locked in mean motion resonances, by employing a numerical calibration to estimate the Yarkovsky effect in proper eccentricity, we succeeded in determining the ages of the family of (1911) Schubart and of the ``super-Hilda'' family, assuming this is actually a severely eroded original family of (153) Hilda. In the Trojan region we found families with almost no Yarkovsky evolution, for which we could compute only physically implausible ages. Hence, we interpreted their modest dispersions of proper eccentricities and inclinations as implying that the Trojan asteroid families are fossil families, frozen at their proper elements determined by the original ejection velocity field. We have found a new family among the Griquas locked in the 2/1 resonance with Jupiter: the family of (11097) 1994 UD1. We have estimated the ages of 6 families affected by secular resonances: the families of (5) Astraea, (25) Phocaea, (283) Emma, (363) Padua, (686) Gersuind, and (945) Barcelona. By using a numerical calibration method in all these cases, we have shown that the secular resonances do not significantly affect the secular change of proper $a$. For the family of (145) Adeona we could estimate the age only after removal of a number of assumed interlopers. With the present paper we have concluded the series dedicated to the determination of asteroid ages with a uniform method. We have computed the age(s) for a total of 57 families with $>100$ members. For future work there remain families too small at present to provide reliable estimates, as well as some complex families (221, 135, 298) which may have more ages than we could currently estimate. Future improvement of some already determined family ages is also possible by increasing family membership, revising the calibrations, and using more reliable physical data.
\label{s:intro} In our previous work (\cite{bigdata}, hereinafter referred to as Paper I, and \cite{namur_update}, Paper II), we have introduced new methods to classify asteroids into families, applicable to an extremely large dataset of proper elements, to update this classification continuously, and to estimate the collisional ages of large families. In later work (\cite{ages}, Paper III, and \cite{iaus318}, Paper IV), we have systematically applied a uniform method (an improvement of that proposed in Paper I) to estimate asteroid family collisional ages, and solved a number of problems of collisional models, including cases of a complex relationship between dynamical families (identified by clustering in the proper elements space) and collisional families (formed at a single time of collision). In this paper we solve several difficult cases of families for which either a collisional model had not been obtained, e.g., because it was not clear how many separate collisions were needed to form a given dynamical family, or our method based on V-shapes (in the plane with coordinates proper semimajor axis $a$ and inverse of diameter $1/D$) did not appear to work properly. In many cases the difficulty had to do with complex dynamics, such as the effects of resonances (either in mean motion or secular). In other cases the problem was due to the presence of interlopers (members of the family by the automatic classification but not originating from the same parent body), which can be identified by using physical data, including albedo, color indexes and absolute magnitudes. Thus, to assess the level of success of the research presented in this paper, the reader should take into account that all the families discussed here have been selected by the failure of our previous attempts to find a reasonable collisional model and/or to estimate an age. In this paper we discuss new family classifications for asteroids in mean motion resonances, in particular the $1/1$ and $2/1$ with Jupiter, plus a revision and critical discussion of the classification inside the $3/2$ resonance. Overall we have estimated $10$ additional ages for $9$ families. Almost all the successful computations have required some additional effort, on top of using the methods developed in our previous papers. Two examples: first, removal of interlopers has played a critical role in several cases, to the point that $2$ of the $9$ families used for age estimation have seen removal of the namesake asteroid as an interloper, resulting in a change of the family name\footnote{We are using the traditional asteroid family naming convention, by which the family is named after the lowest numbered member: if the lowest numbered member is an interloper, the second lowest numbered becomes the namesake.}; second, the interaction of the Yarkovsky effect with resonances has very different outcomes: outside resonances, the semimajor axis undergoes a secular drift, while inside a mean motion resonance the semimajor axis is locked and other elements can undergo a secular drift. Inside a secular resonance the drift in semimajor axis appears not to be significantly affected, as shown by our numerical tests. The paper is organized as follows: in Section~\ref{s:meanmot} we deal with the specific problems of families formed by asteroids locked inside the strongest mean motion resonances with Jupiter, namely $3/2$ (Hildas), $1/1$ (Trojans), and $2/1$ (Griquas).
In Section~\ref{s:secres} we discuss the families affected by secular resonances (involving the perihelia and nodes of the asteroid and the planets Jupiter and/or Saturn). In Section~\ref{s:complex} we present two successful interpretations of families with strange shapes and many interlopers. In Section~\ref{s:obstest} we discuss the families for which we either have not found a consistent collisional model, or have found a model (and computed an age) but there are still problems requiring dedicated observational efforts. In Section~\ref{s:conclusions} we draw some conclusions, not just from this paper but from the entire series, in particular discussing the limitations to the possibility of further investigating the collisional history of the asteroid belt with the current data set. The numerical data connected with the computation of the ages presented in this paper are collected in Tables~1--8, analogous to those used in Papers III and IV; all the tables are given in the Appendix. Tables~\ref{tab:tablefite} and \ref{tab:tablefita} describe the fit region in the $(e, 1/D)$ and $(a, 1/D)$ planes, respectively, where $D$ is the diameter in km. Table~\ref{tab:tablealbedo} contains the albedo data from various sources, used to estimate $D$. Tables~\ref{tab:tableslopese} and \ref{tab:tableslopesa} contain the results of the fit for the slopes of the V-shapes, again in the two planes. Table~\ref{tab:tableyarkoparam} contains the data to compute the Yarkovsky calibration (see Papers I and III), and finally Tables~\ref{tab:tableages_dedt} and \ref{tab:tableages_dadt} contain the calibrations, the estimated ages and their uncertainties. Tables~\ref{tab:tablefita}, \ref{tab:tablealbedo}, \ref{tab:tableslopesa}, \ref{tab:tableyarkoparam} and \ref{tab:tableages_dadt} are partitioned by horizontal lines into sections for families of fragmentation type, of cratering type, and with one side only. We have found no additional young ages ($< 100$ My). For the sake of brevity, unless absolutely necessary, we do not specifically quote in the text these tables and the data they contain, but the reader is encouraged to consult them whenever either some intermediate result is needed, or some quantitative support of the proposed explanation is required.
\label{s:conclusions} In this paper we have attempted to provide a collisional model for a number of families for which the same attempt had previously failed. Most of these families were either locked in resonances or significantly affected by resonances, both mean motion and secular. Estimating an age for a family required, in each of these resonant cases, the application of a specific calibration for the Yarkovsky effect, which in principle could be different in each case. For the largest families found in the Hilda region, consisting of asteroids locked in the 3/2 resonance with Jupiter, the Yarkovsky effect results in a secular change in eccentricity, thus the V-shape technique had to be applied in the $(e, 1/D)$ plane. There we found family 1911 with a good age determination and family 153 of the \textit{eroded} type, that is, one which can be seen visually in the plots in the proper $(\sin{I}, e)$ plane but cannot be confirmed by the statistical tests of the HCM method. If such a family exists, then its age must be $>3.5$ Gy, extremely ancient or even primordial, which is consistent with the hypothesis that this family is depleted to the point of not having a significant density contrast with the background. For the Trojans, that is the asteroids locked in the 1/1 resonance with Jupiter, we are presenting in this paper a new classification which identifies a number of families by using synthetic proper elements and a full HCM method. HCM is well established and has been successfully applied to the main asteroid belt, but the results in the Trojan swarms indicate that families there have a very different structure. Numerical calibrations have shown that the Yarkovsky perturbations are ineffective in producing secular changes in all proper elements. This implies that all Trojan families are \textit{fossil} families, frozen with the original field of relative velocities, which are small for cratering families and somewhat larger in the fragmentation case, but still limited to the order of the escape velocity from the parent body. Thus we find no way to estimate the ages of these families, while they can be a reliable source of information on the original velocity field immediately after the collision. We have found a new family among the Griquas, locked in the 2/1 resonance with Jupiter. We have analysed 6 large families affected by secular resonances, mostly the nonlinear ones. We have used a numerical calibration method, which has shown in all cases that the secular resonances do not significantly affect the secular change of proper $a$, thus the V-shape method in the $(a,1/D)$ plane can be used to compute the age in the standard way (as in Paper III). The solution of some of the cases has been possible only by removal of a number of assumed interlopers. This applies especially to family 145, which otherwise would appear to have a double V-shape. In conclusion, with the 10 ages computed in this paper, the situation is the following: of the 25 families with $>1000$ members, we have computed at least one age for all but family 490, whose age is already known and too recent to be suitable for our method. Of the 19 families with $300<N<1000$ members, excluding 778, whose already known age is likewise too recent, there is only one case left without an age, namely 179 (see Section~\ref{s:eos}). Of the 24 families with $100<N<300$ members we have computed 6 ages; the others, we believe, could only give low-reliability results. In this range we can compute a reliable slope only for families with a small range in proper $a$.
Thus we can compute only young to medium-old ages ($<200$ My). As an example, the family of (1222) Tina, with only $137$ members, has a short $a$ range and appears to have a well defined V-shape in $(a, 1/D)$, but the inverse slopes on the two sides are incompatible ($1/S=-0.054\pm 0.007$ IN, $0.032\pm 0.003$ OUT), thus we should conclude it is the result of two separate collisions. With so few data points, this does not appear to be a mature result, but rather something to be reanalysed when the number of members is at least doubled. Among the problems with family ages we have left open, there are three complex families which we believe have more ages than we have estimated: 221, 135, and 298. Two apparently complex cases we believe have been solved, 145 and 283, although some confirmation would be useful for 145. Overall, we believe we have completed a useful piece of work, which is based on the dataset of proper elements we have produced, using physical data only as a check (apart from the absolute magnitudes used for the ages). We think it is in any case a good start towards the goal of an absolute chronology of the main collisions in the asteroid main belt; see in Figure~\ref{fig:famage_plotnf} all the family ages we have been able to compute so far (in Papers III, IV, and the present one). For the error bars, in the cases where the two values IN and OUT are available and compatible, we have used standard deviations computed with the formulae from \cite[Section 7.2]{orbdet}, applicable to all cases in which two least squares fits can be merged under the assumption that the parameters to be determined are the same. This needs to be applied to the term in the error budget due to the fit of the two slopes, under the assumption that they are two solutions for the same physical quantity. Thus the two fits for the slopes reinforce each other (if compatible), while the calibrations are affected by the same errors (e.g., in the density). This more complex formula is an improvement with respect to what we did in Paper III: \begin{eqnarray*} \sigma_{FIT}&=&\frac{\sigma_{FITIN}\sigma_{FITOUT}} {\sqrt{\sigma^2_{FITIN}+\sigma^2_{FITOUT}}}\\ \sigma_{CAL}&=&\sqrt{\frac{\sigma_{CALIN}^2+\sigma_{CALOUT}^2}{2}}\ \ \ \ \sigma=\sqrt{\sigma_{FIT}^2+\sigma_{CAL}^2} \end{eqnarray*} Given our completely open data policy, anyone can try for themselves to compute other ages with our proper elements and family classification data\footnote{http://hamilton.dm.unipi.it/astdys/index.php?pc=5}. However, we recommend caution: ages computed with insufficient data could be unreliable. It is also possible to improve the ages (and decrease their uncertainties) by using our computed slopes but revising the Yarkovsky calibration with a specific effort for each individual family. \begin{figure}[h!] \figfigincl{14 cm}{famage_plotnf}{Chronology of the asteroid families; the groupings on the horizontal axis correspond to fragmentation families, cratering families, young families and families with a one-sided V-shape (be they cratering or fragmentation).} \end{figure}
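For concreteness, the error-budget formulae above can be transcribed numerically as follows (our sketch; the input values are hypothetical and only illustrate how the IN and OUT contributions combine):
\begin{verbatim}
# Numerical transcription of the error-budget formulae above (illustrative values).
# The IN/OUT slope fits are combined as two estimates of the same quantity, while
# the IN/OUT calibration errors are averaged in quadrature.
import math

def combined_age_sigma(s_fit_in, s_fit_out, s_cal_in, s_cal_out):
    s_fit = s_fit_in * s_fit_out / math.sqrt(s_fit_in**2 + s_fit_out**2)
    s_cal = math.sqrt((s_cal_in**2 + s_cal_out**2) / 2.0)
    return math.sqrt(s_fit**2 + s_cal**2)

# Example with hypothetical 1-sigma contributions (in units of the age, e.g. Gy)
print(combined_age_sigma(0.3, 0.4, 0.2, 0.25))   # ~0.33
\end{verbatim}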
16
7
1607.01998
1607
1607.03486_arXiv.txt
The inefficiency of star formation in massive elliptical galaxies is widely believed to be caused by the interactions of an active galactic nucleus (AGN) with the surrounding gas. Achieving a sufficiently rapid reddening of moderately massive galaxies without expelling too many baryons has however proven difficult for hydrodynamical simulations of galaxy formation, prompting us to explore a new model for the accretion and feedback effects of supermassive black holes. For high accretion rates relative to the Eddington limit, we assume that a fraction of the accreted rest mass energy heats the surrounding gas thermally, similar to the `quasar mode' in previous work. For low accretion rates, we invoke a new, pure kinetic feedback model that imparts momentum to the surrounding gas in a stochastic manner. These two modes of feedback are motivated both by theoretical conjectures about the existence of different types of accretion flows and by recent observational evidence for the importance of kinetic AGN winds in quenching galaxies. We find that a large fraction of the injected kinetic energy in this mode thermalizes via shocks in the surrounding gas, thereby providing a distributed heating channel. In cosmological simulations, the resulting model produces red, non star-forming massive elliptical galaxies, and achieves realistic gas fractions, black hole growth histories and thermodynamic profiles in large haloes.
In simulations of galaxy formation, feedback from active galactic nuclei (AGNs) is the most commonly invoked physical mechanism to explain the suppression of star formation in massive galaxies, and the observed correlations between black hole masses and properties of their host galaxies. In particular, feedback from luminous quasars has been suggested to limit black hole growth and star formation during mergers at high redshift \citep{2005Natur.433..604D, 2005MNRAS.361..776S, 2006ApJS..163....1H,2010MNRAS.406L..55D, 2014MNRAS.442..440C}. Interacting galaxies trigger a redistribution of angular momentum and thus gas inflows into the nuclear region of galaxies \citep{1989Natur.340..687H, 1996ApJ...471..115B, 1996ApJ...464..641M}. These gas inflows then generate a cascade of gravitational instabilities \citep{2010MNRAS.407.1529H, 2015MNRAS.446.2468E}, through which the supermassive black hole (SMBH) is fuelled and a fraction of the gravitational binding energy is released. This energy is sufficient to lower the star formation rate by several orders of magnitude \citep{2005Natur.433..604D}. However, it is not yet clear whether the released energy has a lasting effect on the whole galaxy and its star formation rate, or just affects the innermost regions \citep{2011MNRAS.412.1341D,2015ApJ...800...19R}. {By applying semi-analytic modelling, \citet{2006MNRAS.365...11C} pointed out that `radio-mode' feedback, which provides an efficient source of energy in systems with hot, hydrostatic atmospheres, can simultaneously explain the low mass drop-out rate in cooling flows, the exponential cutoff at the bright end of the galaxy luminosity function and the increased mean stellar age in massive elliptical galaxies. \citet{2006MNRAS.370..645B} used a similar approach in their semi-analytic model.} \citet{2007MNRAS.380..877S} presented a unified sub-resolution model with energy input from both quasars and radio-mode feedback in hydrodynamical simulations and applied it to galaxy cluster formation. In this model, the second mode of feedback is active once the black hole accretion rate relative to the Eddington limit, $\dot{M}_\text{BH} / \dot{M}_\text{Edd}$, drops below a given value. The feedback energy injection is modelled by heating up spherical bubbles of gas in galaxy haloes, mimicking the observed radio lobes in galaxy clusters. There are various implementations of `quasar mode' feedback in the literature. \citet{2011MNRAS.412.1341D,2012MNRAS.420.2221D} use feedback from radiation pressure from luminous AGN, modelled by depositing momentum in surrounding simulation particles in idealized mergers. \citet{2012ApJ...754..125C,2014MNRAS.442..440C,2015MNRAS.449.4105C} included mechanical and thermal energy and pressure from X-rays in their AGN feedback prescription and studied the effect on idealized mergers of disc galaxies and in cosmological ``zoom'' simulations of elliptical galaxies while \citet{2013MNRAS.431.2513W} performed a comparative study of these AGN models in merger simulations. 
Likewise, many different approaches for `radio mode' activity have been taken, often using bipolar outflows in idealized simulations of hydrostatic haloes \citep{2002MNRAS.332..271R,2003MNRAS.339..353B,2004MNRAS.348.1105O,2004ApJ...611..158R,2005A&A...429..399Z,2006ApJ...643..120B,2007MNRAS.380L..67B,2007MNRAS.376.1547C,2007ApJ...656L...5S,2009MNRAS.395..228S,2011MNRAS.411..349G, 2011MNRAS.415.1549G,2012MNRAS.424..190G,2014ApJ...789...54L,2014ApJ...789..153L,2015ApJ...811...73L,2016MNRAS.455.2139H,2016ApJ...818..181Y,2016arXiv160501725Y}, or in cosmological simulations \citep{2010MNRAS.409..985D,2012MNRAS.420.2662D,2016arXiv160603086D}. These methods assume that quenching is caused by the energy that is released from collimated jets and their associated radio lobes, which can be found in massive systems \citep{2006MNRAS.373..959D}. However, \citet{2016arXiv160303674M} show that these kinetic feedback implementations have a different impact in an idealized galaxy cluster setup compared to pure thermal injection. The extensive body of literature on coupled AGN-galaxy evolution \citep[including][among others]{2004ApJ...600..580G,2005MNRAS.358L..16K,2007MNRAS.380..877S,2008ApJ...676...33D,2008ApJS..175..356H,2008ApJS..175..390H,2008MNRAS.385..161O,2008MNRAS.391..481S,2009MNRAS.398...53B,2010ApJ...717..708C,2010MNRAS.406L..55D,2011MNRAS.414..195T,2012MNRAS.420.2662D,2013arXiv1312.0598R,2014MNRAS.442.2304H,2014MNRAS.442..440C,2015MNRAS.450.1349K,2015ARA&A..53...51S,2015MNRAS.448.1504S,2016MNRAS.460.3925T} has recently been complemented by a new generation of high-resolution cosmological simulations of galaxy formation in large volumes, such as Eagle \citep{2015MNRAS.446..521S} and Illustris \citep{2014Natur.509..177V}. The corresponding implementations for black hole feedback in massive galaxies (in Illustris the radio-mode, while Eagle does not distinguish between modes) gather energy up to a predetermined threshold value, which parametrizes its burstiness and inject it instantaneously as thermal energy \citep[see][for Illustris and Eagle, respectively]{2007MNRAS.380..877S,2009MNRAS.398...53B}. While the Illustris simulation -- which forms the starting point of our work -- has been remarkably successful in matching a wide range of galaxy properties, its results are in tension with a number of properties of observed haloes and galaxies. An important discrepancy arising from the AGN feedback model is the gas fraction of groups of galaxies and poor clusters, which is substantially too low in Illustris \citep{2014MNRAS.445..175G}. At the same time, the stellar masses of the central galaxies in the simulated systems are too high. Employing a yet higher feedback efficiency of the BH radio mode to suppress star formation further would expel even more gas, and hence does not represent a viable solution. Alternatively, as part of our study, we made numerous attempts to improve the impact of the bubble model by adopting different choices for the parameters or by adding non-thermal pressure support in the form of magnetic fields, but without success. We therefore conclude that the particular AGN feedback model in Illustris is disfavoured, and a more radical change is in order. This suggestion is supported by recent observational findings about the possible importance of kinetic winds driven during BH accretion. 
For example, \citet{Cheung2016} find bisymmetric emission features in the centres of quiescent galaxies of stellar mass around $2 \times 10^{10}\,{\rm M}_\odot$, from which they infer the presence of centrally driven winds in typical quiescent galaxies that host low-luminosity active nuclei. They show that such `red geyser' galaxies are very common at this mass scale, and that the energy input from the low activity of the SMBHs of these galaxies is capable of driving the observed winds, which contain sufficient mechanical energy to suppress star formation. This appears to be a feedback channel that is distinct from the radio galaxies at the centres of clusters, but as it affects many more galaxies at lower mass scales, it could well be more important for global galaxy evolution. {Recently, \citet{2016arXiv160702507P} found hot, AGN-driven outflows in post-merger galaxies, using the single-mode thermal AGN feedback model of \citet{2016arXiv160702151T}}. Interestingly {however}, \citet{2014ApJ...796....7G} and \citet{2014ApJ...787...38F} have discovered wide-spread, powerful AGN-driven outflows in the majority ($\sim 70\%$) of massive $z \sim 1-2$ star-forming galaxies. Because this phenomenon is so common, {it likely arises from low-luminosity AGN with low Eddington ratios and thus appears consistent with a kinetic wind mode.} Also, theoretically there is good motivation for hot coronal winds from BH accretion flows. For example, \citet{2014ARA&A..52..529Y} discuss such a scenario, which can be viewed as a small-scale version of the jet model of \citet{Blandford1977}. The motivation of our work is therefore to develop a revised model for black hole growth and feedback that takes these considerations into account. It is important to realize that the relevant time and length scales of the detailed black hole physics are far from being resolved in cosmological simulations. Hence, the corresponding feedback models can only be implemented as so-called sub-resolution treatments that mimic the net effect of feedback on resolved scales. Besides the theoretical uncertainties involved, this approach comes with the drawback that the behaviour of the models can vary between different numerical methods, because the scales at which the gas state is affected by the subgrid treatment are only marginally resolved. This is demonstrated for example in \citet{2015MNRAS.452..575S} for the bubble heating model of \citet{2007MNRAS.380..877S}. We thus also aim to take recent improvements in the accuracy of the hydrodynamical modelling into account \citep{2012MNRAS.423.2558B, 2012MNRAS.425.2027K, 2012MNRAS.424.2999S, 2012MNRAS.425.3024V, 2016MNRAS.455.1134P}. The model presented here conjectures two modes of feedback from AGN in thermal and kinetic form, and in this sense is similar to \citet{2012MNRAS.420.2662D}. While the kinetic part of their model is inspired by the sub-relativistic jet simulations of \citet{2004MNRAS.348.1105O}, our approach does not directly aim to represent jets from AGNs that act on marginally resolved scales. Rather we assume that the physical mechanisms that provide energy and momentum transport from black holes to their surroundings are reasonably efficient, and that their impact on large scales can be captured by depositing energy and momentum in small regions around halo centres.
This approach does not address the microphysics of the origin of AGN feedback but aims to arrive at a robust parametrization of the effects of black holes on galaxy and galaxy cluster formation even at coarse resolution. In what follows, we present a new model for SMBH growth and AGN feedback in cosmological simulations of structure formation implemented in the moving-mesh magnetohydrodynamics code {\small AREPO} \citep{2010MNRAS.401..791S, 2011MNRAS.418.1392P,2016MNRAS.455.1134P}. In Section~\ref{sec:Model}, we describe the model and its free parameters. Because the main modification to previous works lies in feedback injection at low accretion rates, in Section~\ref{sec:Tests} we discuss idealized tests of how the energy in this mode couples to the gas. We then continue in Section~\ref{sec:CosmoSims} with an investigation of its impact on cosmological simulations of galaxy formation. Section~\ref{sec:parameters} is dedicated to a systematic exploration of the influence of the different model parameters on the results. Finally, we describe our findings and present our conclusions in Section~\ref{sec:Conclusion}. Appendix~\ref{app:wind} specifies, for definiteness, details of our supernova feedback model, and Appendix~\ref{app:convergence} discusses numerical resolution dependencies.
\label{sec:Conclusion} In this study, we introduced a new model for SMBH growth and the associated feedback in cosmological simulations of galaxy formation. We distinguish between a state of high and a state of low accretion, which are associated with pure thermal and pure kinetic feedback, respectively. Unlike in previous work, we omit the artificial boost factor $\alpha$ that is often introduced in the accretion rate estimate to account for unresolved ISM structure, and instead adopt an accretion rate given by the Bondi formula throughout. The feedback energy in the high accretion rate state is released with a continuous thermal feedback prescription. In the low accretion state, we instead use pulsed kinetic feedback injection in random directions, which is the primary new element adopted in this study. We have shown in idealized simulations that this mode drives shocks in the surrounding gas, thermalizing a significant fraction of the AGN energy within a Myr. In simulations of cosmological structure formation, our new model is able to significantly reduce star formation in the most massive haloes, leading to a stellar mass fraction in excellent agreement with observations, without overly heating and diluting the central gas. This resolves one of the central problems in the Illustris simulation. It also leads to massive galaxies with a red, old stellar population, living in haloes that have gas fractions in agreement with observations. The star formation efficiency peaks in haloes with a few times $10^{12}\,\text{M}_\odot$, in very good agreement with abundance matching expectations once we use the halo masses from dark matter only simulations for the comparison, as also used in the fits to observations on which the abundance matching models are based. The key to sustained quenching of massive haloes in our simulations is to ensure that the black holes in these systems transition to the low accretion state and remain in it for most of their subsequent evolution. We encourage this behaviour by employing a BH mass-dependent Eddington ratio threshold for determining the accretion state, making it progressively easier for high-mass black holes to be in the kinetic mode. Once the black holes reach this mode, the more efficient coupling of the kinetic feedback and the self-regulated nature of gas accretion will typically keep the black holes accreting at low Eddington rates. Brief interruptions of this with episodes of quasar activity, triggered for example by significant inflows of cold gas during a galaxy merger, may nevertheless occur. We analysed the impact of each of our black hole model parameters on the cosmic star formation rate history and the stellar, gas and black hole masses. To this end we varied each parameter by a factor of $4$ and carried out simulations otherwise identical to our default model. We found that most of the parameters do not alter the global properties severely, but some of them can have a significant impact on a subset of haloes and galaxies over particular mass ranges. In these cases, the changes can be readily understood in terms of the tightly self-regulated nature of black hole growth that occurs in our models. We would like to emphasize that the assumption of the existence of a low accretion rate state with efficient kinetic feedback is more important than the precise value of any of the model parameters.
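As a purely schematic illustration of the accretion-state switch described above (our sketch, not the actual {\small AREPO} implementation; the threshold parametrization and every numerical value are assumptions for illustration only), the mode selection can be written as a comparison of the Bondi-to-Eddington accretion ratio with a black-hole-mass-dependent threshold:
\begin{verbatim}
# Schematic two-mode feedback switch (illustrative parameter values, cgs units).
# Above the Eddington-ratio threshold chi the BH is in the thermal (quasar) mode,
# below it in the kinetic mode; chi grows with BH mass up to a ceiling chi_max.
import numpy as np

G, C, M_P, SIGMA_T = 6.674e-8, 2.998e10, 1.673e-24, 6.652e-25

def mdot_bondi(m_bh, rho, cs):
    return 4.0 * np.pi * G**2 * m_bh**2 * rho / cs**3

def mdot_edd(m_bh, eps_r=0.1):
    return 4.0 * np.pi * G * m_bh * M_P / (eps_r * SIGMA_T * C)

def feedback_mode(m_bh, rho, cs, chi0=2e-3, beta=2.0, chi_max=0.1, m_piv=2e41):
    chi = min(chi0 * (m_bh / m_piv)**beta, chi_max)
    lam = mdot_bondi(m_bh, rho, cs) / mdot_edd(m_bh)
    return "thermal" if lam > chi else "kinetic"

# Example: a ~1e8 Msun black hole embedded in hot, tenuous halo gas -> kinetic mode
print(feedback_mode(m_bh=2e41, rho=1e-26, cs=5e7))
\end{verbatim}
In the kinetic state, the accumulated feedback energy would then be released in discrete pulses as momentum injection in random directions, as described in the text.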
The new AGN feedback model discussed here significantly improves the galaxy formation model explored previously in the Illustris simulation project, particularly at the high-mass end of the galaxy population. It therefore promises to be an excellent starting point for a new generation of hydrodynamical simulations of galaxy formation that allow much improved predictions for the bright end of the galaxy population, and for groups and clusters of galaxies, as well as their thermodynamic scaling relations. Future work with this model in high-resolution simulations of galaxy formation could potentially also shed light on the physical origin of observed centrally concentrated radio emission \citep{2015A&A...576A..38B, 2016AN....337..114B}, AGN driven nuclear outflows \citep{2014ApJ...787...38F, 2014MNRAS.443.2154T} and related phenomena.
16
7
1607.03486
1607
1607.03679_arXiv.txt
Spiral galaxies are thought to acquire their gas through a protracted infall phase resulting in the inside-out growth of their associated discs. For field spirals, this infall occurs in the lower density environments of the cosmic web. The overall infall rate, as well as the galactocentric radius at which this infall is incorporated into the star-forming disc, plays a pivotal role in shaping the characteristics observed today. Indeed, characterising the functional form of this spatio-temporal infall in situ is exceedingly difficult, and one is forced to constrain these forms by confronting the present-day state of galaxies with model or simulation predictions. We present the infall rates used as input to a grid of chemical evolution models spanning the mass spectrum of discs observed today. We provide a systematic comparison with alternative analytical infall schemes in the literature, including a first comparison with cosmological simulations. Identifying the degeneracies associated with the adopted infall rate prescriptions in galaxy models is an important step in the development of a consistent picture of disc galaxy formation and evolution.
Numerical chemical evolution models (CEMs) are one of the most flexible (and long-standing) tools for interpreting the distribution of metals in both the gas and stellar phases of galaxies. The power of CEMs lies in the rapid and efficient coupling of star formation and feedback prescriptions with stellar nucleosynthesis, initial mass function formalisms, and treatments of gas infall/outflow. While they lack a self-consistent (hydro-)dynamical treatment, the ability to explore these parameter spaces on computing timescales of minutes rather than months ensures that CEMs maintain a prominent role in astrophysics today. The initial motivation for the development of CEMs was the identification of what is now known as the {\it G-dwarf Problem} \citep{vdb62,sch63,lyn75}; specifically, there is an apparent paucity of metal-poor stars in the solar neighbourhood, relative to the number predicted to exist should the region behave as a `closed-box', i.e. one in which gas neither enters nor departs. It was recognised that a viable solution to the G-dwarf Problem lay in the relaxation of this closed-box assumption, via the inclusion of a gas infall prescription \citep{lar72,edm90}. The infall of metal-poor gas can then dilute the existing elemental abundances, whilst simultaneously increasing the early star formation rate, thus producing a stellar metallicity distribution shifted to moderately higher metallicities. Moreover, the star formation sustained by metal-poor gas accretion self-regulates to produce a constant gas-phase metallicity close to the stellar yield (i.e., close to the solar metallicity). The G-dwarf problem also appears in external galaxies, such as M~31 \citep{wor96}. Building on this framework, most classical models of disc formation assume a protogalaxy or dark matter halo which acts as the source of the infalling gas \citep{lf83,gus83,lf85,mat89,pcb98}; without such infall, moreover, radial abundance gradients are increasingly difficult to recover\footnote{Some more recent models include two or even three infall phases, each corresponding in turn to the formation of the halo, followed by that of the thick and thin discs \citep{chia01,chia03,fen06,mic13}.}. Extensions of this classical framework include those employing a multiphase representation for the interstellar medium \citep{fer92,fer94,mol96}. In \citet{md05}, we calculated a generic grid of theoretical CEMs, defined in terms of their rotation velocity using the universal rotation curve of \citet{pss96}. In that work we assumed that the infall rate, or its inverse, the collapse time-scale, $\tau_{\textrm{\scriptsize c}}$, depends on the total mass of each theoretical galaxy, with the low-mass galaxies forming on a longer time scale than the massive ones, according to the expression $\tau_{\textrm{\scriptsize c}} \propto M^{-1/9}$ \citep{gal84}. Such a mass dependency mimics the downsizing phenomenon now associated with galaxy formation \citep{heav04,gon08}. These adopted timescale relationships are, however, weakly constrained, in the sense that they were only implemented to ensure that present-day abundance patterns and gas fractions were recovered. We now possess much more information pertaining to the manner by which gas moves from the cosmic web, through halos, and onto discs, and so more realistic prescriptions should be pursued.
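As a concrete, purely illustrative sketch of how such a mass-dependent collapse timescale translates into an infall rate, the snippet below adopts an exponential infall law normalised so that the disc reaches its final mass after a Hubble time; the exponential form, the reference mass and the normalisation of $\tau_{\textrm{\scriptsize c}}$ are our assumptions and not necessarily the exact choices made in MD05:
\begin{verbatim}
# Illustrative sketch: exponential infall with a mass-dependent collapse timescale,
# tau_c ~ M^(-1/9), normalised so the disc reaches m_disc_final at t = T_GAL.
# All numerical values are assumptions chosen for illustration only.
import numpy as np

T_GAL = 13.2   # assumed disc age [Gyr]

def collapse_timescale(m_tot, m_ref=1.0e12, tau_ref=8.0):
    """Collapse timescale in Gyr; more massive galaxies collapse faster."""
    return tau_ref * (m_tot / m_ref)**(-1.0 / 9.0)

def infall_rate(t, m_disc_final, tau_c):
    """Infall rate [Msun/Gyr]; its integral over [0, T_GAL] equals m_disc_final."""
    norm = m_disc_final / (tau_c * (1.0 - np.exp(-T_GAL / tau_c)))
    return norm * np.exp(-t / tau_c)

# Example: a low-mass and a massive galaxy, early (1 Gyr) versus today (13 Gyr)
for m_tot, m_disc in [(5.0e10, 5.0e9), (1.0e13, 1.0e11)]:
    tau = collapse_timescale(m_tot)
    print(m_tot, round(tau, 1), infall_rate(1.0, m_disc, tau), infall_rate(13.0, m_disc, tau))
\end{verbatim}
Dividing the imposed final disc masses by the disc age gives time-averaged rates of order 0.1--10\,\Moy\ for these example masses, the same order of magnitude as the observational estimates discussed next.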
From the observational point of view, observations of the atomic hydrogen \HI\ line at 21~cm in nearby galaxies have revealed the existence of extensive halos containing gas up to 15\,kpc above the plane of the discs \citep[e.g.][]{fra02,bar05,boo08,hea11,gen13}. This gas could be deposited onto the discs, forming at present the reservoirs from which stars form in the outer parts of galaxies. In fact, it seems to be rotating more slowly than the gas in the discs and to be moving slowly inwards \citep{oos04}. However, most of the observational evidence for gas accretion is indirect, since this gas is tenuous. Only the high velocity clouds (HVCs) are well studied \citep[e.g.][]{wak99}, their existence being in agreement with theoretical expectations. \citet{ric06} analyzed the HVCs in the local intergalactic medium, suggesting, in agreement with previous findings from \citet{bli99}, that these HVCs are the building blocks of galaxies, situated within the halo at galactocentric distances of less than $\sim$40~kpc, consistent with the predictions of cosmological hydrodynamical simulations \citep{con06}. These clouds are composed of low-metallicity gas \citep[e.g.][]{gib01}, consistent with the material being the fuel out of which the disc forms. It should also be noted that there now exist recent observations which suggest that the infall of low-metallicity \HI\ gas onto dwarf galaxies triggers star formation therein. The most prominent example is NGC~5253, but additional examples are described by \citet[][and references therein]{arls10,arls12}.\footnote{Similarly, it has been suggested that the high N/O ratio found in some of these galaxies might also be related to the infall of metal-poor gas. See the excellent review by \citet{jorge14} for details.} \citet{oos04} estimated that, if clumps of \HI\ gas have a total mass of order 10$^{8-9}\,\mbox{M}_{\sun}$ and are accreted in 10$^{8-9}$\,yr, the typical accretion rate would be $\sim$1\,\Moy. On the other hand, \citet{san08}, reviewing all the data on gas around galaxies and its possible movement towards them, infer a mean visible accretion rate of cold gas of at least 0.2\,\Moy, which should be considered a lower limit on the infall rate; such a value poses a problem when it is compared with the star formation rate (SFR), since it is roughly one order of magnitude smaller than necessary to sustain the observed SFR \citep[see][and references therein]{jorge14}. The \citet{san08} value is, however, calculated using only the cold gas, neglecting the likely more dominant background reservoir of ionised gas. \citet{leh11}, measuring the mass of this phase, estimate that the infall rate increases to $\sim 0.8$\,\Moy. Later, \citet{ric12} gives an infall rate of 0.5--0.7\,\Moy\ as an estimate for the Milky Way Galaxy (MWG) and the Andromeda galaxy (M\,31) within a radius of 50\,kpc, claiming that `in MWG and other nearby galaxies the infall of neutral gas may be observed directly by \HI\ 21~cm observations of extra-planar gas clouds that move through the halos'. Very recently, \citet{fer16} report the detection of {\sc HI} 21-cm emission at a redshift $z=0.375$ with the COSMOS {\sc HI} Large Extragalactic Survey (CHILES). According to these data, the diffuse gas mass is M(HI)=2$\times$10$^{10}$\,\Msun, the molecular gas mass is M(H$_{2}$)=1.8--9.9$\times$10$^{10}$\,\Msun, and the stellar mass is M$_{*}$=8.7$\times$10$^{10}$\,\Msun.
This implies that the disc of this galaxy accreted [13--22]$\times$10$^{10}$\,\Msun\ in $\sim$8\,Gyr, which yields an average infall rate of 17--27\,\Msun\,yr$^{-1}$. Although these numbers are highly speculative, given the lack of any firmer estimates we will employ them in this work. Additional guidance regarding infall rates can be provided by tracing the spatial and temporal infall of gas onto discs via the use of cosmological simulations. This cosmological gas supply has a strong dependence on redshift and halo mass. However, the interplay between the circumgalactic gas components is not well known, and the gas physics in a turbulent multiphase medium is non-trivial to capture in (relatively) low-resolution hydrodynamical simulations. The consequence is that there remain only select examples in the literature which successfully and simultaneously reproduce (both in terms of size and relative proportions) the characteristics of late-type discs and their spheroids \citep[e.g.][]{brad,vog13,scha15}. Bearing in mind this cautionary statement, it is interesting to note that these simulations suggest that most of the baryons in galaxies are accreted diffusely, with roughly 3/4 due to smooth accretion and 1/4 from mergers. Recently, \citet{bro14} have generated a suite of cosmological simulations which reproduce the gross characteristics of the Local Group. Analysing these simulations, they find a relationship between the stellar mass and the halo mass (their Fig.~2 and Eq.~2), valid for a stellar mass range [$10^{7}-10^{8}$] M$_{\sun}$. In the last decade, several techniques have been developed to obtain such a relation between the dynamical mass of the proto-halos and the baryonic mass of the discs, although the latter is usually identified with the stellar mass. One of these statistical approaches connecting the CDM halos with their galaxies is the sub-halo abundance matching technique, with which the total stellar-to-halo mass relation (SHMR) is obtained \citep{shan06,guo10,beh10,beh13,rod12,rod15}. Other formalisms use the halo occupation distribution (HOD), which specifies the probability that a halo of mass $M$ hosts a given number of galaxies with a certain mass $M_{*}$ (or luminosity, colour or type). As a result, the SHMR is also estimated \citep{mos10,leau10,yang12}. A summary of these results can be found in Fig.\,5 of \citet{beh13}. This relation constrains the possible accretion of gas from the halo onto the discs. One of the questions that arises when cosmological simulations and data are compared is, as \cite{kor16} states, that there is a conflict between the cuspy central density seen in cosmological simulations and the observational evidence that galaxies have flat cores. This tension has existed for some years and is still present. The use of a Navarro, Frenk \& White (NFW) profile, with its $\rho \propto r^{-1}$ behaviour producing cusps at small radii, comes from the era in which cosmological simulations were undertaken primarily with dark matter only, but it continues to be widely used. Over the past years the data for rotation curves have improved immensely, as has the mass modelling, showing that most dwarf disc galaxies have cored halos. Although the situation seemed less clear for giant spirals, \citet{don09} analyzed rotation curves (RCs) for a sample of 1000 galaxies, finding that a core-halo profile fits the data better than the NFW one.
More recently, \citet{nesti13} have carefully analyzed the available data for the Milky Way Galaxy, fitting both Burkert and NFW dark matter profiles. They find that the cored profile produces the best result, and is therefore the preferred one, claiming that this is in agreement with similar fits obtained for other external disc galaxies and with the mass model underlying the Universal Rotation Curve (URC). \citet{ogi14a} and \citet{ogi14b} suggest that this discrepancy between observations and simulations may be due to dynamical processes that transform a cuspy profile into a cored one, probably through the effect of feedback that modifies the star formation process on small scales. In fact, the most recent cosmological simulations \citep[see][]{brad}, which include this feedback in their star formation prescriptions, find that this transformation occurs when there is violent feedback from rapid star formation in the inner regions of disc galaxies. In this work we use the expressions of \citet[][hereinafter SAL07]{sal07}, who adopt the URC formalism assuming that halo distributions follow a cored Burkert isothermal profile. Here we compute the infall rate for a set of theoretical galaxies with total dynamical masses in the range \Mvir $\sim[5\times 10^{10} -10^{13}]$ \Msun. Following the prescriptions of SAL07, we derive the rotation curves for each halo and disc, and their corresponding radial mass distributions. By imposing that gas from the halo falls onto the discs at a rate such that after a Hubble time the systems end with masses as observed in nature, we obtain the infall rate for each galaxy and for each radial region therein. We analyse the infall rates resulting from these prescriptions, comparing them with the rates assumed in previous CEMs, those inferred from cosmological simulations, and those derived from extant empirical data concerning mass accretion. We pay special attention to the redshift evolution of this infall in galaxies of different dynamical masses, and analyse its radial dependence within individual galaxies. We verify that the final halo-disc mass relation follows the prescriptions given by the authors cited above. The chemical evolution itself is beyond the scope of this work; here, we focus specifically on the manner by which gas reaches the disc. The impact on star formation and metal enrichment is the focus of the next phase of our collaboration (Moll\'{a} et al., in preparation). We describe the framework of our models in Section~\ref{model}. The results are outlined in Section~\ref{results}, with the dependence of the infall rate on galactocentric radius discussed in Sub-section~\ref{dep-r}, its dependence on the total mass of the galaxies in Sub-section~\ref{dep-m}, and the resulting growth of the spiral discs in Sub-section~\ref{mdisc}. These results and their implications are discussed in \S~\ref{dis}. Our conclusions are summarised in Section~\ref{conclusions}.
\label{conclusions} In this work we have presented the infall rates used as input to a grid of chemical models for spiral galaxies. We also performed a systematic comparison with data and competing models in the literature, including cosmological simulations. Our main conclusions can be summarised as follows: \begin{enumerate} \item The more massive the galaxy, the higher the absolute value of the infall rate at all redshifts. \item The infall rates necessary to reproduce the \Mvir$-M_{\textrm{\scriptsize D}}$ relationship are smoother than the ones assumed in classical chemical evolution models, including our earlier generation of models (MD05). They are also lower and smoother than the accretion rates produced in cosmological simulations which create spheroids, and are in better agreement with those resulting in realistic late-type discs (employing contemporary prescriptions for star formation and feedback). \item The evolution of the infall rate with redshift is rather flat for discs, decreasing smoothly for $z < 2$ and showing very similar behaviour for all radial regions, with differences only in the absolute value. This smooth evolution is different from the steep decline with redshift shown for bulges. \item The normalised infall rate is essentially the same for all discs until $z=2$, and shows only small differences for $z< 2$. \item The final relationship between disc and halo masses, denoted as the SHMR, obtained with these new prescriptions agrees well with observations and cosmological simulations. \item From redshift $z=2.5$ to today, discs grow in size by a factor of two (except for the lowest mass galaxies), while the disc masses increase by a factor of five to ten. \item These infall rates (decreasing with time and with radius within each theoretical galaxy) are in agreement with the classical inside-out scenario, and also with the infall rates from the cosmological simulations of \citet{courty10}. \item The growth of discs continues to the present time, with gas accretion across the discs of low-mass systems, and in the outer regions of more massive spirals. \end{enumerate}
16
7
1607.03679
1607
1607.00883_arXiv.txt
{Within the Standard Model with non-linearly realised electroweak symmetry, the LHC Higgs boson may reside in a singlet representation of the gauge group. Several new interactions are then allowed, including anomalous Higgs self-couplings, which may drive the electroweak phase transition to be strongly first-order. In this paper we investigate the cosmological electroweak phase transition in a simplified model with an anomalous Higgs cubic self-coupling. We look at the feasibility of detecting gravitational waves produced during such a transition in the early universe by future space-based experiments. We find that for the range of relatively large cubic couplings, $111~{\rm GeV}~ \lesssim |\kappa| \lesssim 118~{\rm GeV}$, $\sim $mHz frequency gravitational waves can be observed by eLISA, while BBO will potentially be able to detect waves in a wider frequency range, $0.1-10~$mHz. } \begin{document}
\label{sec:intro} After the discovery of the LHC Higgs boson, precise determination of its couplings has become imperative. Without this knowledge the nature of electroweak symmetry remains undetermined. Namely, with the current Higgs data at hand, the Standard Model (SM) with a non-linearly realised $SU(2)_L\times U(1)_Y$ gauge symmetry is still a viable option \cite{Binosi:2012cz, Kobakhidze:2012wb}. In the most economical case, the Higgs boson can be considered as an electroweak singlet particle, which admits several additional interactions beyond the conventional SM \cite{Kobakhidze:2012wb}. These new interactions, besides having interesting manifestations at the LHC and future colliders, could have played an important role in the early universe by driving a strongly first-order electroweak phase transition. Electroweak baryogenesis in this framework has been studied in Ref. \cite{Kobakhidze:2015xlz}. This paper is devoted to investigating the production of gravitational waves during the cosmological electroweak phase transition and the feasibility of their detection in upcoming experiments. Nonlinearly realised electroweak gauge theory becomes strongly interacting at high energies, the famous example being $WW\to WW$ scattering in the Higgsless Standard Model. It is expected that at high energies new resonances show up, which unitarise the rapid, power-law growth of scattering amplitudes with energy found in perturbation theory. However, the scale where new physics is expected to emerge crucially depends on the Higgs couplings and could be as high as a few tens of TeV \cite{Kobakhidze:2012wb, Kobakhidze:2015xlz}. New physics at such high energies may escape detection at the LHC. Therefore, alongside the precision measurements of Higgs couplings, complementary information stemming from astrophysical observations of gravitational waves may provide an important hint about the nature of the electroweak symmetry and the cosmological phase transition. With this motivation, we consider a simplified model with only one additional anomalous cubic Higgs coupling $\kappa$, which is the most relevant coupling for the electroweak phase transition and also one of the most difficult to measure at the LHC. Beyond this and simplicity considerations, we have no fundamental reason to stick with this minimalistic scenario. In fact, the model can be extended in various ways without significantly affecting our results. Note that the production of gravitational waves from a first-order phase transition in effective theories of the SM with higher-dimensional operators has been discussed previously in \cite{Delaunay:2007wb,Huang:2016odd,Leitao:2015fmj}. The paper is organised as follows. In the following section (Sec.~\ref{sec:model}) we give a brief account of the non-linear SM; the next section (Sec.~\ref{sec:fin_temp}) is devoted to a discussion of the electroweak phase transition. In Sec.~\ref{sec:gw}, we compute the amplitude of gravitational waves produced during the strongly first-order phase transition and identify the range of $\kappa$ for which they can potentially be detected by eLISA and BBO. The conclusions are presented in Sec.~\ref{sec:conclusion} and some technical details are given in the Appendices.
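To illustrate schematically how an anomalous cubic term can render the transition first order, the toy script below scans a generic high-temperature effective potential with a cubic contribution and locates the temperature range in which a barrier-separated broken minimum appears. This is purely an illustration: the functional form and every coefficient (the thermal-mass parameters $D$ and $T_0$, and the assumed value and sign of $\kappa$) are placeholders, not the potential or parameter values used in this paper.

```python
import numpy as np

# Toy high-temperature effective potential with an anomalous cubic term:
# V(h,T) = D (T^2 - T0^2) h^2 + kappa h^3 / 6 + lambda h^4 / 4   (illustrative only)
v, mh = 246.0, 125.0              # EW vev and Higgs mass [GeV]
lam = mh**2 / (2.0 * v**2)        # tree-level quartic coupling (~0.13)
kappa = -115.0                    # anomalous cubic coupling [GeV] (assumed value)
D, T0 = 0.17, 120.0               # placeholder thermal-mass coefficients

def V(h, T):
    return D * (T**2 - T0**2) * h**2 + kappa * h**3 / 6.0 + lam * h**4 / 4.0

h = np.linspace(1.0, 600.0, 6000)
for T in np.arange(220.0, 99.0, -20.0):
    Vh = V(h, T)
    i = np.argmin(Vh)             # location of the deepest minimum at h > 0
    phase = "broken phase favoured" if Vh[i] < 0.0 else "symmetric phase favoured"
    print(f"T = {T:5.1f} GeV   h_min = {h[i]:6.1f} GeV   {phase}")
```

The temperature at which the printed phase flips brackets the critical temperature of this toy potential; the coexistence of a barrier-separated broken minimum with the symmetric one near that temperature is the hallmark of a first-order transition.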
\label{sec:conclusion} In this paper we have studied the electroweak phase transition within the Standard Model with non-linearly realised electroweak gauge symmetry. Namely, we focused on the anomalous Higgs cubic coupling $\kappa$, which, if sufficiently large, could drive a strongly first-order phase transition. As $|\kappa|$ increases, the nucleation temperature, $T_n$, drops well below the $W/Z$ masses, resulting in a shorter duration of the phase transition and in higher velocities of the nucleated bubbles. However, for very large $|\kappa|$ values the nucleation rate drops substantially and the universe is trapped in the high-temperature phase. Thus, a strongly first-order phase transition is possible only for a limited range of the anomalous cubic coupling, $|\kappa | \in [79,118]$ GeV. We have also found that for $|\kappa|\in [111,118]$ GeV, gravitational waves in the $0.1-10~$mHz frequency range can be produced during the electroweak phase transition with a sizeable enough amplitude to be detectable by the planned eLISA. Recent results from the LISA Pathfinder mission \cite{PhysRevLett.116.231101} are encouraging for the feasibility of the eLISA project, which, if implemented, can provide complementary information on the nature of the electroweak symmetry and the cosmological phase transition. This information will be particularly important since the measurement of the Higgs cubic coupling at the high-luminosity LHC is feasible only with $30-50$\% accuracy \cite{Lu:2015jza}.
16
7
1607.00883
1607
1607.07692_arXiv.txt
Our study attempts to understand the collision characteristics of two coronal mass ejections (CMEs) launched successively from the Sun on 2013 October 25. The estimated kinematics, from three-dimensional (3D) reconstruction techniques applied to observations of the CMEs by the SECCHI/Coronagraphs (COR) and Heliospheric Imagers (HIs), reveal their collision at around 37 $R_\sun$ from the Sun. In the analysis, we take into account the propagation and expansion speeds, impact direction, angular size, as well as the masses of the CMEs. These parameters are derived from imaging observations, but may suffer from large uncertainties. Therefore, by adopting head-on as well as oblique collision scenarios, we have quantified the range of uncertainties involved in the calculation of the coefficient of restitution for expanding magnetized plasmoids. Our study shows that an expansion speed of the following CME larger than that of the preceding CME results in a higher probability of a super-elastic collision. We also infer that a relative approaching speed of the CMEs lower than the sum of their expansion speeds increases the chance of a super-elastic collision. The analysis, under reasonable errors in the observed parameters of the CMEs, reveals a larger probability of an inelastic collision for the selected CMEs. We suggest that the collision nature of two CMEs should be discussed in 3D, and that the calculated value of the coefficient of restitution may suffer from a large uncertainty.
\label{sec:intro} Coronal mass ejections (CMEs), the most energetic events on the Sun, are expanding magnetized plasma blobs in the heliosphere. If they reach the Earth with a southward-directed magnetic field orientation, they can cause intense geomagnetic storms \citep{Dungey1961,Gosling1993,Gonzalez1994}. They are frequently launched from the Sun, especially during solar maximum, when their interaction or collision in the heliosphere is possible. Historically, such interaction was inferred using in situ data from \textit{Pioneer 9} and the twin \textit{Helios} spacecraft \citep{Intriligator1976,Burlaga1987}. However, the first observational evidence was provided by \citet{Gopalswamy2001apj} using the Large Angle and Spectrometric COronagraph (LASCO; \citealp{Brueckner1995}) on-board the \textit{SOlar and Heliospheric Observatory (SOHO)} and long-wavelength radio observations. It has been suggested that some interacting CMEs have long intervals of strong southward magnetic field and can produce major disturbances in the Earth's magnetosphere \citep{Wang2003a,Farrugia2004,Farrugia2006,Lugaz2014}. Before the \textit{Solar TErrestrial RElations Observatory} (\textit{STEREO}) \citep{Kaiser2008} era, CMEs could only be imaged near the Sun from the single viewpoint of SOHO, and we lacked their 3D kinematics. Therefore, understanding of CME-CME interaction was mainly based on magnetohydrodynamic (MHD) numerical simulation studies \citep{Vandas1997,Vandas2004,Gonzalez-Esparza2004,Lugaz2005,Wang2005,Xiong2006,Xiong2007,Xiong2009}. With the availability of wide-angle imaging observations from the Heliospheric Imagers (HIs) on-board \textit{STEREO} from multiple viewpoints, several cases of interacting CMEs have recently been reported in the literature \citep{Harrison2012,Liu2012,Lugaz2012,Mostl2012,Martinez-Oliveros2012,Shen2012,Temmer2012,Webb2013,Mishra2014a,Mishra2015,Colaninno2015}. Simulation-based studies of observed CME events are also being carried out to advance our understanding of such interactions \citep{Lugaz2013,Shen2013,Shen2014,Niembro2015,Shen2016}. Understanding the interaction of CMEs is of interest because of their impact on many areas of heliospheric research. Several CME-CME interaction studies have focused on understanding the nature of the collision, particle acceleration, and geoeffectiveness \citep{Shen2012,Lugaz2014,Ding2014}. Interacting CMEs also provide a unique opportunity to study the evolution of the shock strength and structure and their effect on the plasma parameters of the preceding CME \citep{Wang2003a,Lugaz2005,Mostl2012,Liu2012,Lugaz2015}. It has been suggested that, due to preconditioning of the ambient medium by the preceding CME, a following CME may experience high \citep{Temmer2012,Mishra2015a} or low drag \citep{Temmer2015} before any noticeable collision or merging. We use the terms ``interaction'' and ``collision'' in two different senses, as defined in \citet{Mishra2014a}. By ``interaction'' we mean that a probable exchange of momentum between the CMEs is in progress, although no obvious merging of their features can be noticed in the imaging observations. ``Collision'' stands for the scenario, noticed in imaging observations, where two CMEs moving with different speeds come into close contact with each other and show an exchange of momentum until they achieve approximately equal speeds or separate from each other.
Colliding CMEs can display changes in their kinematics and morphology after the collision, and hence the prediction of their arrival time at Earth becomes challenging. Knowledge about the nature of the collision of CMEs may be utilized to predict their post-collision kinematics. Using twin-viewpoint \textit{STEREO} observations, a more accurate estimation of the kinematics and masses of CMEs is possible; however, recent case studies do not agree about the nature of the collision of the CMEs. This disagreement is possible because each case study has considered different candidate CMEs, probably with different characteristics. Some studies exploiting imaging observations have shown a super-elastic collision of CMEs \citep{Shen2012,Colaninno2015}, while others advocate an inelastic \citep{Mishra2015} or close-to-elastic collision \citep{Mishra2014a,Mishra2015a}. This poses the question of what determines the nature of the collision, i.e., what causes the coefficient of restitution to vary from the super-elastic to the inelastic range. Most of the earlier studies have adopted the simplistic approach that the CMEs propagate in exactly the same direction (i.e., a head-on collision), and have not taken the expansion speed or angular size of the CMEs into account \citep{Mishra2014a,Mishra2015,Mishra2015a}. \citet{Schmidt2004} studied obliquely colliding CMEs using numerical simulations. Some earlier studies have inferred the collision nature of CMEs based on their deflection and the change in their dynamics, without explicitly mentioning the value of the coefficient of restitution \citep{Lugaz2012,Temmer2012,Colaninno2015}. \citet{Shen2012} were the first to study the oblique collision of CMEs using imaging observations and took several uncertainties into account; however, they did not discuss constraining the conservation of momentum. The straightforward use of observed CME characteristics (speed and mass), which may involve large errors, may be the reason why conservation of momentum appears not to hold. We admit that previous studies depart from the real scenario and that each has different limitations. With the exception of \citet{Shen2012}, we are not aware of another study which thoroughly discusses the uncertainties involved in understanding the nature of the collision of CMEs. Hence, we take the next step to address the limitations of previous studies and to investigate the role of CME characteristics, e.g., direction, mass, propagation speed, expansion speed, and angular size, in the collision nature. For this purpose, we selected two CMEs which occurred almost 7 hr apart on 2013 October 25 and collided with each other in the HI-1 field of view. A collision at such a moderate distance from the Sun is well suited to our collision picture. This is because, near the Sun, coronal magnetic structures may interfere with the CME dynamics, while accurate estimation of the dynamics far from the Sun using HI observations is difficult \citep{Howard2011,Davies2012,Davies2013,Mishra2014}. We apply the Graduated Cylindrical Shell (GCS) fitting technique \citep{Thernisien2009} to coronagraphic images and the Self-Similar Expansion (SSE) method \citep{Davies2012} to HI images of the CMEs to estimate their kinematics. This is discussed in Section~\ref{recon}, including a description of the estimation of the true masses of the CMEs using the \citet{Colaninno2009} method and the identification of the collision phase from the kinematic profiles.
Section~\ref{coli} presents the analysis and results from the head-on and oblique collision scenario, and shows the limitations of the approach of simplistic head-on collision undertaken in earlier studies. The various limitations of the present study are discussed in Section~\ref{Dis} and conclusions are presented in Section~\ref{Res}.
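The head-on bookkeeping that serves as the baseline for the analysis above reduces to a few lines once pre- and post-collision speeds and masses are adopted. The sketch below illustrates the definitions only; all numbers are placeholders, not the measured values for the 2013 October 25 CMEs.

```python
# Head-on (1D) collision bookkeeping for two CMEs (illustrative values only).
m1, m2 = 8.0e12, 5.0e12        # masses of CME1 (preceding) and CME2 (following) [kg]
u1, u2 = 400.0, 600.0          # pre-collision radial speeds [km/s]; CME2 is faster
v1, v2 = 500.0, 480.0          # post-collision radial speeds [km/s]

# Coefficient of restitution: separation speed over approaching speed.
e = (v1 - v2) / (u2 - u1)      # e < 1 inelastic, e = 1 elastic, e > 1 super-elastic

# Momentum check: with observed speeds and masses this need not vanish exactly,
# which is why errors in the inputs must be propagated.
dp = (m1 * v1 + m2 * v2) - (m1 * u1 + m2 * u2)

print(f"e = {e:.2f}, momentum mismatch = {dp:.2e} kg km/s")
```

In the oblique scenario the same definitions are applied along the collision direction after projecting the velocity vectors, which is why errors in the longitudes of the CMEs propagate directly into $e$.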
\label{Res} We have made an attempt to understand the uncertainties in the nature of collision of magnetized expanding plasma blobs by analyzing the interacting CMEs of 2013 October 25. Our analysis suggests an inelastic nature of the collision for the selected CMEs. Uncertainties in the collision nature due to errors in direction, mass, angular width, expansion speed, and propagation speed are examined. We show that the mass of the CMEs has almost no effect on deciding their nature of collision. Similar results have also been presented by \citet{Shen2012}. We note that the head-on collision scenario causes the $e$ value to be underestimated relative to that of the oblique collision. For the selected CMEs, the probability of an inelastic collision decreases with increasing errors in the longitudes of the CMEs. The values of $e>1$ corresponding to larger errors in the longitudes lead to a larger inconsistency with the observed dynamics of the CMEs and therefore seem unreliable. This made us acknowledge the spurious effect that errors in the propagation directions of the CMEs can have on their inferred collision nature. To estimate a reliable value of $e$, we emphasize that errors in the CME directions should be considered along with the errors in the CME dynamics. Our analysis of the oblique collision scenario clearly finds that the deflection of interacting CMEs is an inevitable phenomenon. The observed kinematics of the CMEs and their angular half-widths, ranging between 5$^{\circ}$ and 35$^{\circ}$, result in a probability of around 75.6\% for an inelastic collision. The collision nature is found to be super-elastic when the ratio of the CME2 to CME1 angular half-widths is greater than or equal to 1.5. We also noted a super-elastic collision when the expansion speed of CME2 is greater than or equal to twice the expansion speed of CME1. Our study finds that a lower approaching speed of the CMEs results in a greater probability of a super-elastic collision. Further, an uncertainty of 100 km s$^{-1}$ in the initial speeds of the CMEs, together with the variation of their angular half-widths from 5$^{\circ}$ to 35$^{\circ}$, leads to a probability of 72.7\% for an inelastic collision. From our analysis, we establish that a larger expansion speed of CME2 than of CME1, and a larger value of the sum of the expansion speeds relative to the approaching speed of the CMEs, tend to increase the probability of a super-elastic collision \citep{Shen2012,Shen2016}. We conclude that if the expansion speed of the following CME2 is larger than that of the preceding CME1, it gives a relatively low approaching speed before the collision and a relatively high separation speed after the collision, causing the nature of the collision to be super-elastic. From our analysis of the CMEs of 2013 October, the relative expansion speed of the CMEs appears to be a stronger factor than the relative approaching speed in deciding the nature of the collision. Further study is needed to clearly understand the sufficient conditions for an inelastic or super-elastic collision. \\ We acknowledge the UK Solar System Data Centre for providing the processed Level-2 \textit{STEREO}/HI data. The work is supported by NSFC grant Nos. 41131065, 41574165 and 41421063. We also thank the reviewer, whose comments have greatly improved this paper. W.M. is supported by the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) grant No. 2015PE015. \newpage
16
7
1607.07692
1607
1607.07001_arXiv.txt
The different appearances exhibited by coronal mass ejections (CMEs) are believed to be in part the result of different orientations of their main axis of symmetry, consistent with a flux-rope configuration. There are observational reports of CMEs seen along their main axis (axial perspective) and perpendicular to it (lateral perspective), but no simultaneous observations of both perspectives from the same CME have been reported to date. The stereoscopic views of the telescopes onboard the {\it Solar-Terrestrial Relations Observatory} (STEREO) twin spacecraft, in combination with the views from the {\it Solar and Heliospheric Observatory} (SOHO) and the {\it Solar Dynamics Observatory} (SDO), allow us to study the axial and lateral perspectives of a CME simultaneously for the first time. In addition, this study shows that the lateral angular extent (\emph{L}) increases linearly with time, while the angular extent of the axial perspective (\emph{D}) presents this behavior only from the low corona to $\approx$\,5 $R_{\odot}$, where it slows down. The ratio $L/D \approx$\,1.6 obtained here as the average over several points in time is consistent with measurements of \emph{L} and \emph{D} previously performed on events exhibiting only one of the perspectives from the single vantage point provided by SOHO.
\label{S_intro} Coronal mass ejections (CMEs) have been intensively studied since their first detections by space-borne coronagraphs, partly because they constitute the main modifier of heliospheric conditions, and hence of space weather. Given the adverse consequences that geomagnetic storms may unleash on Earth \citep[\eg][]{Lanzerotti2009}, the field of space weather has thrived. Unfortunately, to date it is not possible to predict when and where in the Sun the next eruption will take place. Therefore, current forecasting commences when a CME event has already been launched with a significant propagation component in Earth's direction. Throughout the years, numerous efforts have been undertaken to forecast the time of arrival of an interplanetary CME, the probability of it interacting with the Earth's magnetosphere, and the strength of this interaction \citep[\eg][]{Dryer-Smart1984, Fry-etal2001, Gopalswamy-etal2000, Gopalswamy-etal2001b, Smith-etal2003, Schwenn-etal2005, Gopalswamy-etal2005b, Gopalswamy-etal2005c, Manoharan2006, Taktakishvili-etal2009, Kilpua-etal2009, Moestl-etal2013, Xie-etal2013, Lugaz-etal2014}. As part of these efforts, it is crucial to understand how magnetic fields are organized within CMEs, and how this arrangement relates to the CME sources on the Sun. In this respect, it is fundamental to gain understanding of the general morphology of CMEs, as well as of how this morphology evolves with time. Until the advent of the STEREO Mission at the end of 2006 \citep[{\it Solar-Terrestrial Relations Observatory};][]{Kaiser-etal2008}, the study of the three-dimensional (3D) configuration of CMEs had been speculative to some extent, given the limitations imposed by perspective and projection effects, inherent to bidimensional images obtained from a single vantage point. Some studies dealt with the analysis of observational properties of CMEs to deduce whether they were planar or 3D entities, and in the latter case evaluating whether the 3D overall structure was better approximated by spherically symmetric bubbles, by cylindrically-symmetric arcades, or by curved flux-tubes \citep[\eg][]{Crifo-etal1983, Schwenn1986, Webb1988, MacQueen1993, Vourlidas-etal2000, Plunkett-etal2000, Moran-Davila2004}. On the theoretical side, much progress has been made on 3D magnetohydrodynamic models that describe the initiation, eruption, configuration, and/or evolution of CMEs \citep[\eg][]{Gibson-Low1998, Antiochos-etal1999, Amari-etal2000, Tokman-Bellan2002, Manchester-etal2004, Torok-Kliem2005, Odstrcil-etal2005, Amari-etal2007, Zuccarello-etal2009}. Other models of geometrical basis have also proliferated \citep[\eg][]{Zhao-etal2002, Michalek-etal2003, Xie-etal2004, Cremades-Bothmer2005, Thernisien-etal2006}. The new views of the solar corona from different points of view provided after STEREO's launch certainly meant a step forward toward determining the 3D spatial extent of a CME and its true propagation direction \citep[\eg][]{Webb-etal2009, Mierla-etal2010, Mierla-etal2011, Gopalswamy-etal2012, Feng-etal2013}. The simultaneous two perspectives of the STEREO spacecraft also enabled the development of forward-modeling techniques \citep{Thernisien-etal2009, Wood-etal2010} that match well the appearance exhibited by a CME from both STEREO viewpoints and in some cases from the Earth's perspective as well. However, careless use of these tools may yield misleading reconstructions, given that at times it is possible to match several combinations of parameters to the same CME observations. 
As pointed out by \citet{Mierla-etal2009}, the 3D reconstruction of the CME morphology from currently available coronagraph data is an intrinsically undetermined task, given that a proper tomographic reconstruction requires a large number of images of a CME from many different viewpoints. In this effort we take advantage of coronal images provided by the STEREO spacecraft in quadrature to investigate in detail the dimensions of a particular CME. The analysis relies on the overall 3D configuration scheme of cylindrical symmetry proposed by \citet{Cremades-Bothmer2004}, which approximates the structure of CMEs as organized along a main axis of symmetry, in agreement with the flux rope concept as described by \eg~\citet{Gosling-etal1995}, \citet{Chen-etal1997}, and \citet{Dere-etal1999}. The scheme considers the white-light topology of a CME projected in the plane of the sky (POS) as being primarily dependent on the orientation and position of the source region's neutral line on the solar disk. As a result of the solar differential rotation, the neutral lines associated with bipolar regions that are on the visible side of the solar disk and close to the east limb tend to be perpendicular to the limb, while when close to the west limb, the neutral lines tend to be parallel to it (see Figure~\ref{scheme}). According to these typical orientations, front-side solar sources close to the east limb tend to yield CMEs seen along their main axis, exhibiting a three-part structure and in many cases also helical threads indicative of magnetic flux ropes as in \eg~\citet{Wood-etal1999} and \citet{Dere-etal1999}. On the other hand, front-side solar sources close to the west limb tend to yield CMEs with their main axes oriented parallel to the limb and perpendicular to the observer--Sun line, so that the lateral view of a CME is detected. In view of this cylindrically symmetric configuration, \citet{Cremades-Bothmer2005} measured the lateral angular extent \emph{L} of the cylinder axis and the angular extent of the cylinder cross section \emph{D} on SOHO/LASCO CMEs exhibiting extreme projections, \ie seen solely either in the lateral or in the axial perspective, respectively. These angular extents were dissimilar on average for both groups, \ie CMEs seen along their symmetry axis (axial events) appeared narrower than those seen perpendicular to it (lateral events). However, it remains unknown whether this trend is verified for all CMEs or if it was only fortuitous, given that the measurements of \emph{L} and \emph{D} have thus far not been performed simultaneously on the same CME. As argued in Section \ref{s:ID}, it is very difficult to detect a CME event that exhibits both perspectives simultaneously. To our knowledge, this is the first time that both \emph{L} and \emph{D} are reported to be simultaneously measured for the same event. The next section describes the investigated data sets, while Section \ref{s:ID} addresses the criteria considered to identify this singular event. Section \ref{s:3Dmodel} presents the modeling of this event using the GCS forward model \citep[Graduated Cylindrical Shell;][]{Thernisien-etal2009,Thernisien2011}. The characterization of the angular extents is presented in Section \ref{s:expansion}, while Section \ref{s:conclusions} presents final remarks and conclusions. 
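Since the event analysed here allows \emph{L} and \emph{D} to be followed in time, the characterisation in Section \ref{s:expansion} amounts to fitting linear expansion rates and averaging the ratio of the two extents. The snippet below only sketches that bookkeeping; the measurement series is an invented placeholder, not the values measured for the event studied here.

```python
import numpy as np

# Placeholder time series of the lateral (L) and axial (D) angular extents [deg].
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # time since first measurement [hr]
L = np.array([30.0, 34.0, 38.5, 42.0, 46.5])     # lateral extent (illustrative)
D = np.array([19.0, 21.5, 24.0, 26.0, 29.0])     # axial extent (illustrative)

rate_L, _ = np.polyfit(t, L, 1)                  # linear expansion rate of L [deg/hr]
rate_D, _ = np.polyfit(t, D, 1)                  # linear expansion rate of D [deg/hr]
ratio = np.mean(L / D)                           # average L/D over the series

print(f"dL/dt = {rate_L:.1f} deg/hr, dD/dt = {rate_D:.1f} deg/hr, <L/D> = {ratio:.2f}")
```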
\begin{figure} \centering \includegraphics[width=0.55\textwidth]{nscheme.png}\llap{\parbox[b]{2.265in}{\tiny{\fontfamily{cmss}\selectfont Axial}\\\rule{0ex}{0.76in}}}\llap{\parbox[b]{2.50in}{\tiny{\fontfamily{cmss}\selectfont Perspective}\\\rule{0ex}{0.65in}}}\llap{\parbox[b]{0.50in}{\color{black}{\tiny{\fontfamily{cmss}\selectfont Lateral}}\\\rule{0ex}{0.76in}}}\llap{\parbox[b]{0.50in}{\color{black}{\tiny{\fontfamily{cmss}\selectfont Perspective}}\\\rule{0ex}{0.65in}}} \caption{The scheme of 3D configuration, adapted from \citet{Cremades-Bothmer2004}. NL stands for neutral line.} \label{scheme} \end{figure}
\label{s:conclusions} The hypothesis posed by \cite{Cremades-Bothmer2004}, according to which CMEs are organized along a main axis of symmetry and therefore should exhibit different appearances according to their location, orientation, and vantage point, is directly verified by the simultaneous observation of the two extreme perspectives relative to the same event. The analysis of the event was achieved by combining the stereoscopic views of STEREO and the terrestrial views of SOHO and SDO. With two spacecraft in quadrature, CMEs suitable to exhibit both perspectives are those that arise from polar regions and are directed perpendicular to the Sun--observer line. Such an event was identified in the images provided by the STEREO/SECCHI coronagraphs on 28 March 2013, with the STEREO spacecraft separated by $\approx$\,86$\degree$. The lateral and axial perspectives are unambiguously discerned in the fields of view of the ST-A and ST-B, respectively. The source region of this event could not be observed in detail from chromospheric or low-coronal images for several reasons: i) the source was on the far side for SDO/AIA and extreme limb for ST-A and ST-B, ii) the prominence associated with this CME was presumably suspended high in the low corona prior to eruption, which has been considered by \cite{Robbrecht-etal2009} to explain stealth CMEs, iii) if the latter is the case, the filament was probably too hot to be detected in H$\alpha$. This event has a favorable orientation that allows for the direct detection of the lateral and axial perspectives and enables a temporal analysis of the CME expansion. The expansion of the flux rope angular diameter D measured in the axial perspective, as well as that of the lateral angular extent of the associated prominence, show a linear increase in time, at least up to the outer edge of the COR2 FOV. The full angular widths of the CME as seen in the axial (\textit{AW$_D$}) and lateral (\textit{AW$_L$}) perspectives show a different behavior in time: they show a linear growth with time up to $\approx$\,5$R_{\odot}$, followed by a slower growth rate phase in the case of the lateral perspective, and by a phase of nearly constant AW in the case of the axial view. The average ratio $L/D$ obtained from values at different points in time yielded $\approx\,1.6$. This is the first time that this ratio is deduced for the same single CME, and it agrees with previous analyses obtained from measurements of single perspectives performed on different events. The average \textit{AW$_L$}/\textit{AW$_D$} of the full angular widths in the lateral and axial perspectives yields the same value. A similar analysis performed on a set of nearly polar CMEs is underway. We hope to understand whether there are recurrent patterns regarding the distinct angular extents of the lateral and axial perspectives, as well as the expansion rates in the axial direction and perpendicular to it. As for equatorial CMEs, the simultaneous detection of both perspectives requires the combined analysis of coronagraphic observations offset from the ecliptic, such as those expected to be provided by the \textit{Solar Orbiter} mission, to be launched in October 2018, together with observations from close to the ecliptic plane, \eg~from Earth, SOHO, or \textit{Solar Probe Plus}, to be launched in July 2018. \begin{acks} IC acknowledges a postdoctoral fellowship from \mbox{CONICET}. HC and LB are members of the Carrera del Investigador Cient\'ifico (CONICET). 
The authors acknowledge funding from UTN project PID UTI2218 and thank the anonymous referee for valuable suggestions. The SOHO/LASCO data are produced by an international consortium of the NRL (USA), MPI f\"ur Sonnensystemforschung (Germany), Laboratoire d'Astronomie (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA. The STEREO/SECCHI project is an international consortium of the NRL, LMSAL and NASA/GSFC (USA), RAL and Univ. Bham (UK), MPS (Germany), CSL (Belgium), IOTA and IAS (France). SDO/AIA data are courtesy of the NASA/SDO and the AIA Science Teams. This article uses data from the SOHO/LASCO CME catalog generated and maintained at the CDAW Data Center by NASA and the CUA in cooperation with NRL. \end{acks} \noindent{\bf Disclosure of Potential Conflicts of Interest} The authors declare that they have no conflicts of interest.
16
7
1607.07001
1607
1607.05232_arXiv.txt
Correlations between intrinsic galaxy shapes on large-scales arise due to the effect of the tidal field of the large-scale structure. Anisotropic primordial non-Gaussianity induces a distinct scale-dependent imprint in these tidal alignments on large scales. Motivated by the observational finding that the alignment strength of luminous red galaxies depends on how galaxy shapes are measured, we study the use of two different shape estimators as a multi-tracer probe of intrinsic alignments. We show, by means of a Fisher analysis, that this technique promises a significant improvement on anisotropic non-Gaussianity constraints over a single-tracer method. For future weak lensing surveys, the uncertainty in the anisotropic non-Gaussianity parameter, $A_2$, is forecast to be $\sigma(A_2)\approx 50$, \dAtwoapproxCMB~smaller than currently available constraints from the bispectrum of the Cosmic Microwave Background. This corresponds to an improvement of a factor of $4-5$ over the uncertainty from a single-tracer analysis.
The next generation of galaxy surveys will mine the cosmological information in the large-scale structure of the Universe with unprecedented precision in the quest to constrain the nature of ``dark energy'', the mysterious force behind the accelerated expansion of the Universe. Two of the most promising probes of the growth history of the Universe are the clustering of galaxies and their gravitational lensing by intervening matter along the line of sight. The deviations of photons from their otherwise straight path produced by lensing result in changes in the ellipticities of galaxies of order $1\%$. Aside from gravitational lensing, the tidal field of the large-scale structure of the Universe can also modify the shapes and orientations of galaxies. These ``intrinsic alignments'' have been clearly detected for luminous red galaxies up to $z\sim 1$ \citep{Brown02,Mandelbaum06,Hirata07,Joachimi11,Heymans13,Singh15}, with an alignment bias that depends on luminosity. Recently, Ref.~\cite{Singh16} showed that the alignment strength also changes when different regions of a galaxy are probed. Their results suggest that the outskirts of galaxies are more sensitive to the tidal field, with their isophotes twisting more efficiently in the direction of other galaxies \cite{diTullio79,Kormendy82}. Alignments of red galaxies have also been clearly identified in cosmological hydrodynamical simulations, including the radial dependence of alignment strength \citep{Tenneti15,Velliscig15,Chisari15HzAGN,Chisari16HzAGN}. While intrinsic alignments are widely regarded as contaminants to weak gravitational lensing \citep{Hirata04,Joachimi10,Joachimi10b,Zhang10,Kirk15,Krause15,Krause16}, recent work has started to explore them as a cosmological probe in their own right \citep{chisari/dvorkin,CDS14,Schmidt15}. The ``linear tidal alignment model'' \citep{Catelan01,Blazek11} provides a good description of the scale and redshift dependence of intrinsic alignments on scales $\gtrsim 10$ Mpc$/h$. In \cite{Schmidt15}, hereafter SCD15, we explored the potential of intrinsic alignments as a probe of inflation; in particular, through the scale-dependent bias in the statistics of intrinsic galaxy shapes that arises in the presence of anisotropic non-Gaussianity in the early universe. Constraints on this type of non-Gaussianity are inaccessible through two-point correlations of galaxy clustering, and they probe: primordial curvature perturbations generated by large-scale magnetic fields \cite{ShiraishiB1,ShiraishiB2}, the presence of higher spin (spin $2$) fields during inflation \citep{Arkani-Hamed:2015bza,Chen/Wang:1,Chen/Wang:2,Baumann/Green,Lee16}, inflationary models with a generalized bispectrum from excited Bunch-Davies vacuum \cite{Ganc12,Agullo12,Ashoorioon16}, vector fields \cite{Yokoyama08,Barnaby12,Bartolo13,Bartolo15} and solid inflation \cite{Endlich13}. 
Assuming scale invariance, the squeezed-limit bispectrum of the primordial Bardeen potential perturbation $\phi$ can in general be expressed as \cite{Shiraishi13} \begin{eqnarray} B_\phi(\vk_1,\vk_2,\vk_3=\vk_L) &=& \sum_{\ell=0,2,...} A_\ell P_\ell(\hat\vk_L\cdot\hat\vk_S) \left(\frac{k_L}{k_S}\right)^\Delta \label{eq:BsqI} \\ & &\times P_\phi(k_L) P_\phi(k_S)\left[1 + \O\left(\frac{k_L^2}{k_S^2}\right)\right]\,, \nonumber \end{eqnarray} where $k_3 = k_L \ll k_1,\,k_2$ while $\vk_S = \vk_1 -\vk_L/2$ and $\vk_1 + \vk_2 + \vk_L = 0$ from statistical homogeneity, $P_\ell$ are the Legendre polynomials, and $A_\ell$ are dimensionless amplitudes which are allowed to be non-zero only for even $\ell$ in the squeezed limit. The coefficient $A_0$ is related to the usual local non-Gaussianity parameter $f_{\rm NL}^{\rm loc}$ via $A_0 = 4 f_{\rm NL}^{\rm loc}$. Intrinsic alignments constrain the parameter that governs the quadrupolar dependence of the bispectrum, $A_2$. As in SCD15, we will focus on the ``local'' scaling with $\Delta=0$. Our results are easily generalizable to other values of $\Delta$, which might be of particular relevance for massive higher-spin fields, since in de Sitter space their masses are bounded from below by a unitarity condition known as the ``Higuchi bound" \citep{Lee16}. In SCD15, we showed that constraints on anisotropic non-Gaussianity are complementary to those derived from the Cosmic Microwave Background (CMB) bispectrum \cite{PlanckNG}, but probing smaller scales. Current constrains from Planck for the anisotropic non-Gaussianity parameter are $A_2=-16\pm86$ from temperature information only, or $A_2=6\pm74$ including preliminary polarization information ($1\sigma$, Table 25 of \cite{PlanckNG15}). Ref. \cite{Raccanelli15} explored the potential constraints on $A_2$ from biased tracers in future radio surveys. In their optimistic scenario, an uncertainty of $\Delta A_2=250$ could be reached with the Square Kilometer Array if the redshift distribution of the sources can be inferred through cross-correlations with samples of known redshift. This constraint is not competitive with the CMB because the leading contribution to the scale-dependent bias of tracer counts cancels for anisotropic primordial non-Gaussianity, as mentioned above. Future galaxy surveys, such as Euclid\footnote{\url{http://sci.esa.int/euclid/}} and the {\it Large Synoptic Survey Telescope} (LSST\footnote{\url{http://www.lsst.org/lsst/}}), will gain constraining power on cosmological parameters from applying the so-called ``multi-tracer'' technique \citep{Seljak09,McDonald09}. This technique combines tracers of the same underlying density field, with different bias parameters, to reduce the impact of cosmic variance. Note that this cosmic variance cancellation only applies to scale-dependent features in the statistics of these tracers. The application of this technique to measure ``ultra-large scale'' observables, for example, general relativistic effects and non-Gaussianity of the local type, $f_{\rm NL}$, is particularly promising with future surveys \citep{yoo/etal:2012,Ferramacho14,Fonseca15,Alonso15}. In this work, we explore how the combination of intrinsic alignments measured from different regions of a galaxy, combined in a multi-tracer technique, can enhance constraints on anisotropic non-Gaussianity. We show that error bars can be significantly smaller than when estimated from a single-tracer method. 
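For concreteness, the angular structure of the squeezed-limit template above---an isotropic piece proportional to $A_0$ and a quadrupolar piece proportional to $A_2 P_2(\hat\vk_L\cdot\hat\vk_S)$---can be evaluated directly. The sketch below uses a simple power-law stand-in for $P_\phi$ and illustrative amplitudes; it is not the full calculation entering the forecasts.

```python
import numpy as np

# Power-law stand-in for the primordial potential power spectrum; the amplitude,
# tilt, and pivot are the fiducial values quoted below, and the zeta-to-phi
# conversion factor is ignored for simplicity.
def P_phi(k, A_s=2.2e-9, n_s=0.9645, k_p=0.05):
    return (2.0 * np.pi**2 / k**3) * A_s * (k / k_p)**(n_s - 1.0)

def B_squeezed(kL, kS, mu, A0=0.0, A2=100.0, Delta=0.0):
    """Leading squeezed-limit bispectrum, keeping the ell = 0 and ell = 2 terms."""
    P2 = 0.5 * (3.0 * mu**2 - 1.0)             # Legendre polynomial P_2(mu)
    angular = A0 + A2 * P2                     # sum over even ell up to 2
    return angular * (kL / kS)**Delta * P_phi(kL) * P_phi(kS)

mu = np.linspace(-1.0, 1.0, 5)                 # cosine of the angle between k_L and k_S
print(B_squeezed(kL=1e-3, kS=0.1, mu=mu))      # quadrupolar modulation with mu
```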
The main result of this paper is the reduction of the uncertainty in the anisotropic non-Gaussianity parameter, $A_2$, to \dAtwoapprox~of the single-tracer value when the multi-tracer technique is applied to red and blue galaxies in LSST. This corresponds to a \dAtwoapproxCMB~smaller uncertainty on $A_2$ than currently available CMB constraints on anisotropic non-Gaussianity. We also show that constraints from Euclid will attain a similar precision. This work is organized as follows. In Section \ref{sec:theory}, we summarize the tidal alignment model and its relation to anisotropic non-Gaussianity during inflation. In Section \ref{sec:surveys}, we describe the future surveys we consider for forecasting constraints on anisotropic non-Gaussianity from alignments. Section \ref{sec:fisher} describes the forecasting method, followed by the results in Section \ref{sec:results}. In Section \ref{sec:discuss}, we discuss the assumptions of our work and we suggest directions for future improvement. We conclude in Section \ref{sec:conclusion}. Throughout, we assume the following Planck \cite{Ade:2015} fiducial flat $\Lambda$CDM cosmology: $\Omega_{\rm b}h^2=0.022$, $\Omega_{\rm CDM}h^2=0.12$, $h=0.67$, $\Omega_K=0$, $\mathcal{A}_s=2.2\times10^{-9}$, $n_s=0.9645$, $k_p=0.05$ Mpc$^{-1}$ and we define $\Omega_m=\Omega_b+\Omega_{\rm CDM}$.
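As a toy illustration of the cosmic-variance cancellation that drives the single- versus multi-tracer comparison quoted above, the script below computes the Gaussian Fisher information on a scale-dependent amplitude for one and for two correlated tracers of the same density field. All biases, responses, noise levels, and mode counts are invented placeholders; this is not the survey Fisher forecast of Section \ref{sec:fisher}.

```python
import numpy as np

k = np.logspace(-3, -1, 40)                 # wavenumbers [h/Mpc]
P = 2.0e4 * (k / 0.01)**-2.0                # toy power spectrum [(Mpc/h)^3]
Nmodes = 1.0e6 * (k / 0.01)**3              # toy number of modes per k bin

b = np.array([0.8, 1.6])                    # alignment biases of the two shape tracers
beta = np.array([1.0, 2.5])                 # response of each tracer to A2
N = np.array([0.3, 0.3])                    # shape-noise power of each tracer
fk = (k / 0.01)**-2.0                       # assumed ~1/k^2 scale dependence from A2

def fisher(idx):
    """Gaussian Fisher information on A2 (evaluated at A2 = 0) for the tracers in idx."""
    F = 0.0
    for j in range(k.size):
        A = b[idx]                                       # signal amplitudes at A2 = 0
        dA = beta[idx] * fk[j]                           # derivative dA/dA2
        C = np.outer(A, A) * P[j] + np.diag(N[idx])      # tracer covariance matrix
        dC = (np.outer(dA, A) + np.outer(A, dA)) * P[j]  # dC/dA2
        Ci = np.linalg.inv(C)
        F += 0.5 * Nmodes[j] * np.trace(Ci @ dC @ Ci @ dC)
    return F

for idx, label in [([0], "single tracer"), ([0, 1], "two tracers")]:
    print(f"{label}: sigma(A2) ~ {1.0 / np.sqrt(fisher(np.array(idx))):.3g}")
```

Because the two tracers respond differently to the same density modes, part of the sample variance cancels in their combination, shrinking $\sigma(A_2)$ relative to either tracer alone.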
\label{sec:conclusion} In the next decade, intrinsic alignments of galaxies could provide constraints on inflation complementary to the CMB bispectrum. Intrinsic shape estimators can be more sensitive to tidal alignments towards the outskirts of a galaxy, and different estimators can effectively be used in a multi-tracer technique in the spirit of \cite{Seljak09} to constrain anisotropic non-Gaussianity. We have forecasted the impact of this method for two future weak lensing surveys, LSST and Euclid. Our results demonstrate that multi-traced intrinsic alignments, combined with lensing and clustering of blue galaxies, can yield constraints on the anisotropic non-Gaussianity parameter as low as \dAtwo\AtwoLSSTredxblue, corresponding to \dAtwoLSSTredxblue of the single tracer constraint. However, the impact of the atmospheric point-spread function on LSST might make it difficult to obtain two different alignment tracers, or it could result in a dependence of the relative alignment bias on apparent magnitude. Euclid is in a better position, due to the absence of the atmosphere, to perform shape measurements at different galactic radii. On the other hand, a tomographic approach has the potential of further shrinking the uncertainties in the non-Gaussianity parameters, as demonstrated by Ref. \cite{Alonso15} for $A_0$.
16
7
1607.05232
1607
1607.00692.txt
Observations of coronal jets increasingly suggest that local fragmentation and intermittency play an important role in the dynamics of these events. In this work we investigate this fragmentation in high-resolution simulations of jets in the closed-field corona. We study two realizations of the embedded-bipole model, whereby impulsive helical outflows are driven by reconnection between twisted and untwisted field across the domed fan plane of a magnetic null. We find that the reconnection region fragments following the onset of a tearing-like instability, producing multiple magnetic null points and flux-rope structures within the current layer. The flux ropes formed within the weak-field region in the center of the current layer are associated with ``blobs" of density enhancement that become filamentary threads as the flux ropes are ejected from the layer, whereupon new flux ropes form behind them. This repeated formation and ejection of flux ropes provides a natural explanation for the intermittent outflows, bright blobs of emission, and filamentary structure observed in some jets. Additional observational signatures of this process are discussed. Essentially all jet models invoke reconnection between regions of locally closed and locally open field as the jet-generation mechanism. Therefore, we suggest that this repeated tearing process should occur at the separatrix surface between the two flux systems in all jets. A schematic picture of tearing-mediated jet reconnection in three dimensions is outlined.
Observations show the solar atmosphere to be highly dynamic, with impulsive, energetic events occurring over a broad range of spatial and temporal scales. Magnetic reconnection, the process whereby stored magnetic energy is released via a reconfiguration of the magnetic connectivity, is generally believed to be central to the majority of such events \citep{PriestForbes2000}. In recent years, our perceived understanding of how magnetic reconnection proceeds in the corona has shifted away from the idea that reconnection occurs smoothly in a single, well defined current layer \citep[e.g.][]{Parker1957,Sweet1958,Petschek1964} towards a picture of more intermittent, fragmented dynamics involving multiple current layers and energy release sites \citep[e.g.][]{Huang2013}. Observationally, a growing number of cases exhibit such intermittency amongst the largest and best-resolved events. Bright blobs are observed in the ray-like features that form beneath erupting coronal mass ejections (CMEs) when viewed along their axes \citep[e.g.][]{Lin2005,Guo2013}. Dark, void-like supra-arcade downflows \citep{McKenzie1999,McKenzie2009} are observed in post-CME {rays} when viewed from the {side}. Additionally, radio pulsations \citep{Kliem2000}, plasma blobs \citep[e.g.][]{Ohyama1998}, and wave-like motions of the flare ribbons \citep{Brannon2015} suggest that bursty reconnection occurs in solar flares. Intermittent plasma outflows and blobs have also been observed in filament eruptions \citep{Reeves2015}. {All of these features suggest an intermittent, bursty reconnection process.} The onset and nonlinear evolution of the tearing instability \citep{Furth1963} provides a natural explanation for much of this fragmentation and intermittent reconnection. Indeed, tearing and the associated formation of magnetic islands/flux ropes have been observed in numerical simulations of CMEs and flares \citep{Barta2011,Karpen2012,Guo2013b,Lynch2013} and surges \citep{Karpen1996}, as well as during more gentle quasi-steady interchange reconnection \citep{Edmondson2010}. In a self-consistently evolving system, where current layers form dynamically over time, tearing is initiated when a stable {current} layer becomes sufficiently long and thin. In two dimensions, numerical studies show that this typically occurs when $S = Lv_{a}/\eta > 10^{4}$, where $S$ is the Lundquist number based on the length $L$ of the current layer, $v_{a}$ is the inflow Alfv\'{e}n speed, and $\eta$ is the plasma resistivity \citep[e.g.][]{Biskamp1986}. In the context of such lengthening and thinning current layers, the tearing instability is typically referred to as the ``plasmoid instability.'' \citet{Loureiro2007} were the first to develop a two-dimensional linear theory describing how this instability grows in a pre-existing Sweet-Parker sheet. Subsequently, \citet{Pucci2014} argued that such a sheet is unattainable in nature, and that any developing current layer will disrupt before it reaches the aspect ratio consistent with the Sweet-Parker scaling. Regardless of the exact nature of the linear phase, if the global evolution is sufficiently slow, the subsequent nonlinear dynamics will be dominated by the formation, coalescence and ejection of magnetic islands. In the corona, $S$ is orders of magnitude higher than $10^{4}$, and long, thin current layers are expected to form in non-potential magnetic fields on the basis of ideal modeling \citep[e.g.][]{Syrovatskii1971,Longcope1996}. 
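The statement that coronal values of $S$ vastly exceed the tearing threshold can be checked with an order-of-magnitude estimate. The plasma parameters below are generic ``typical corona'' numbers and the Spitzer diffusivity is the usual rough scaling, not values taken from the simulations described later:

```python
import math

# Order-of-magnitude Lundquist number for a coronal current layer.
L_sheet = 1.0e7        # current-layer length [m] (~10 Mm, assumed)
B = 1.0e-3             # magnetic field strength [T] (~10 G, assumed)
n = 1.0e15             # number density [m^-3] (assumed)
T = 1.0e6              # temperature [K] (assumed)

mu0, mp = 4.0e-7 * math.pi, 1.67e-27
v_a = B / math.sqrt(mu0 * n * mp)          # Alfven speed [m/s]
eta = 1.0e9 * T**-1.5                      # Spitzer magnetic diffusivity [m^2/s] (rough)
S = L_sheet * v_a / eta                    # Lundquist number

print(f"v_A ~ {v_a:.1e} m/s, eta ~ {eta:.1f} m^2/s, S ~ {S:.1e}")
print("exceeds the S > 1e4 tearing threshold:", S > 1.0e4)
```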
Consequently, tearing-mediated reconnection appears to be inevitable. In this work, we explore the role of tearing and the formation of fine structure in closed-field coronal jets. Coronal jets are transient, impulsive, collimated plasma outflows originating from bright regions low in the solar atmosphere. They are smaller than typical flares or CMEs, but {can} share some characteristic features \citep[e.g.][]{Shibata1997}. The most energetic jets are observed in X-rays \citep[e.g.][]{Cirtain2007,Shimojo1996}, but jets are also observed at EUV and optical wavelengths \citep[e.g.][]{Filippov2013,Savcheva2007,Guo2013}. Typically, a brightening of the base occurs first, followed by rapid, often supersonic, plasma outflows guided by the ambient field. The morphological appearance of {the source region of} many jets is that of a sea anemone \citep{Shibata1994}, with the outflows forming a bright spire extending from a compact quasi-circular base. A large fraction of coronal jets also exhibit a helical structure to their outflows and a wandering of the jet spire when viewed against the plane of the sky \citep{Patsourakos2008}. X-ray and EUV jets are observed prolifically in coronal holes \citep[e.g.][]{Cirtain2007,Savcheva2007}, where the ambient field is quasi-unidirectional and the jets appear as extended radial spires, sometimes extending out far into the heliosphere \citep[e.g.][]{Patsourakos2008,Filippov2013}. Such jets are also observed (although less readily against the brighter background plasma) in closed-field regions, particularly near active regions \citep[e.g.][]{Torok2009,Yang2012,Guo2013,Lee2013,Schmieder2013,Zheng2013,Cheung2015}. In these cases, the jet material propagates along the ambient coronal loops and the spire often has a curved appearance. Closed-field jets also have been associated with brightening at the distant footpoint of the connecting coronal loop \citep[e.g.][]{Torok2009,Zhang2013}. As in flares and CMEs, there is some observational evidence that intermittent, fragmented reconnection plays a role in jets. Recent {\it{Interface Region Imaging Spectrograph}} (IRIS) observations of an active-region jet revealed filamentary fine-scale structure in the emission from the reconnection region \citep{Cheung2015}. {\it{Solar Dynamics Observatory}} observations have shown blobs forming in both small \citep{Zhang2014} and large \citep{Filippov2015} open-field EUV jets. Recent {\it{Solar TErrestrial RElations Observatory}} observations have also revealed trains of plasma blobs within jets in the closed-field corona near active regions \citep{Zhang2016}. Using a new imaging technique, \citet{Chen2013} analyzed the moving sources of type III radio bursts in an active-region jet. They found multiple reconnection sites within the jet region and filamentary structures in the jet outflow. Jets originating from the cooler solar chromosphere have also been reported to contain plasma blobs and to have a multi-treaded structure \citep{Singh2011,Singh2012}. Finally, the formation of islands/flux ropes has also been reported in several jet simulations \citep{Yokoyama1994,Karpen1995,Yokoyama1996,Moreno-Insertis2013,Yang2013}. The magnetic field associated with these events consists of a parasitic polarity patch, with a field component normal to the photosphere of one sign, embedded within a region of weaker field of the opposite sign. 
The field of the parasitic polarity closes down to the photosphere and is separated from the background, locally open field by a dome-shaped separatrix surface, topped with a three-dimensional (3D) magnetic null point. In open-field jets, the background field connects to the distant heliosphere, whereas in closed-field jets, the field closes back to the photosphere at a distant footpoint. This domed configuration can form as a result of flux emergence \citep[e.g.][]{Torok2009}, or be pre-existing {\citep[e.g.][]{Zhang2012,Cheung2015}}. {Such a two-flux system readily allows the displacement of the field lines near the null, forming a strong, fully 3D current {layer} that eventually will begin to reconnect} \citep{Antiochos1996,PriestTitov1996,Pontin2007}. {If the current layers formed in jet source regions become sufficiently long and thin, it is to be expected that they will become highly fragmented following the onset of a tearing-like instability, in a manner similar to 2D Sweet-Parker-like layers.} Indeed, \citet{Wyper2014a} studied the stability of the current layers formed self-consistently at 3D null points (as occurs in solar jets) through external boundary driving. They found that rapid tearing does occur beyond a critical Lundquist number $S_{c} \approx 2\times 10^{4}$, indicating that current layers in coronal jets should be highly unstable to tearing. However, in contrast to 2D studies, the nonlinear dynamics were dominated by the complex interplay of multiple flux ropes and null points within a nearly turbulent current layer \citep{Wyper2014b}. Additionally, the finite extent of the 3D null current layer allowed twist and mass within the flux ropes to escape {in the direction perpendicular to the two outflow jets,} so that the ropes rarely grew significantly wider than the thickness of the main layer. {Using static models for the domed anemone field of coronal jets, \citet{Pontin2015} demonstrated that such flux ropes also {create structure} in the open/closed boundary.} The aim of this work is to explore the occurrence of such fully 3D tearing in coronal jets and the role that it plays in the jetting behavior. The paper is structured as follows. In $\S\S 2$ and $3$ we introduce the model and the numerical setup. In $\S 4$ we summarize the overall evolution of two jets chosen for study, whilst in $\S 5$ we investigate the tearing-driven dynamics and fine structure during each jet. We summarize and discuss our findings in $\S 6$. %%%%%%% \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig1-eps-converted-to.pdf} \caption{Initial magnetic field in two configurations with aspect ratios $L/N=2.40$ (top) and $L/N=1.46$ (bottom). The bottom planes are color-shaded according to the sign ($+,-$) and strength of the field component normal to the surface ($B_x$). Selected field lines outline the fan separatrix surface and the inner and outer spine lines emanating from the magnetic null point (red sphere).} \label{fig:fields} \end{figure}
\label{sec:discussion} In this paper, we have analyzed the evolution of tearing-mediated reconnection in two high-resolution numerical simulations of coronal jets. Our jets were at the extremes of the parameter range explored at lower resolution in WD16 and exhibited similar macroscopic behaviors {to those earlier calculations}. By tracking the null points in the volume and analyzing the magnetic connectivity, we were able to pinpoint when tearing began in the simulations and to follow the evolution of the fragmented reconnection region in the jets. In agreement with the {idealized 3D null-point reconnection} studies of \citet{Wyper2014a,Wyper2014b}, we find that tearing in the jet current layer leads to the formation of multiple null points and of interacting flux-rope structures. The onset of tearing occurred before the onset of the jet, which resulted from the triggering of a kink-like instability in the twisted field beneath the separatrix dome. The kinking of the twisted field generated favorable conditions for fast reconnection and the rapid release of stored magnetic energy. These dynamics did not occur due to the fragmentation of the current layer. Consequently, the macroscopic behavior of our high-resolution jets is fully consistent with the less well-resolved jets studied by WD16. The tearing in our jets appears to occur once the current layer becomes sufficiently long and thin, i.e.\ after it reaches a high aspect ratio. We are relying upon numerical resistivity to facilitate reconnection, so the current-layer thickness is set by the grid spacing. At still higher resolution, therefore, tearing might occur even earlier, since any critical aspect ratio can be reached sooner for a fixed sheet length. However, we believe it unlikely that this would substantially alter either the fast reconnection initiated by the kink instability or the subsequent jet generation. To test this requires even more extensive simulations at still higher 3D resolution, which is beyond the scope of this work. In our jets, the current layer was most fragmented midway through the jet, when the layer reached its most elongated state and the reconnection rate peaked. This tearing generated flux ropes both in the weak-field region at the center of the current layer and along the flanks of the separatrix surface. Using a rough proxy for soft X-ray/EUV emission, we showed that the largest of such flux ropes may be visible as swirls of emission near the base of the jet. We estimate the expected size of such swirls by using their relative size compared to the dome. The largest one that we identified is shown in Figure \ref{fig:bp5LOS}(h) and corresponds to the large swirl in Q in Figure \ref{fig:qbp5}(d). It has a width of around $1$, or about $1/6$ of the width of the dome initially. Assuming a typical dome width of $6~{\rm Mm}$, this swirl has a width of $1 {\rm ~Mm}$, or $1.4$ arc seconds. Such swirls are larger than the limit of resolution with IRIS ($\approx 0.33$ arc seconds) for typical jets, so they may be identifiable in the largest jet events if they appear in cooler chromospheric lines. Once the flux ropes formed, their inherent twist spread along the field lines as they were ejected from the current layer. They then became torsional wave packets within the main jet outflow. The flux ropes in the weak-field region also had associated density enhancements, forming plasma ``blobs'' that were localized to these structures. 
Once the flux ropes were ejected, new ones formed in their place, and the process was repeated as the jet proceeded. This repeated formation and ejection of plasma blobs provides a natural explanation for the intermittent outflows, bright blobs of emission, and quasi-periodic intensity fluctuations observed in some jets \citep[e.g.][]{Singh2011,Singh2012,Morton2012,Zhang2014,Filippov2015}. The thread-like nature of the tearing-mediated outflows may also explain the filamentary structure often observed in jets \citep[e.g.][]{Singh2011,Cheung2015}. However, such filamentary structure may also be the result of thermal effects including condensation and evaporation, which are not treated in our simulations. In our scenario, the jets were confined along a coronal loop where they transferred twist from beneath the separatrix dome to the larger-scale magnetic field surrounding it. We introduced a simple method for estimating the flux affected by the jet, based on fitting a circle that contains the same amount of flux as the parasitic polarity around the base of the separatrix just prior to the jet. We noted that this method may also be useful for estimating the heliospheric flux affected by coronal-hole jets, along which fast outflows and high-energy particles are expected. The method worked well for us because the outer flux is quite evenly distributed in our simulations. In more realistic fields, where the photospheric flux is distributed in patches, a more complex method involving some form of weighting may be required. We also showed that, after a period of relaxation, the final states in each simulation were not simple, uniformly twisted loops as might be expected based on \citet{Taylor1986} relaxation theory. Rather, the loops contained many twisted threads that were remnants of the flux ropes formed by tearing during the jet (Fig.\ \ref{fig:ropeend}). Between the threads were multiple extended current layers that stretched along the loop. This is the 3D version of the reconnection-driven current filamentation described by \citet{Karpen1996}. It is a reminder that coronal reconnection can produce multiple current layers that may heat the coronal loop plasma as they dissipate. Using our rough proxy for emission, we showed that these cooling threads may be observable as criss-crossing thread-like features in the connecting coronal loop. {\citet{Sun2013} described a large coronal jet similar in morphology to our $L/N = 2.40$ case. They observed criss-crossing threads within the loop and a two-phase emission suggestive of shuffling reconnection and possible heating within the cooling loop, supporting the picture we have deduced from our simulations. To test this fully requires a more comprehensive treatment of the plasma energetics than the simple model adopted here. Other possible avenues for future research include a realistic stratification of the background atmosphere; an exploration of how particles are accelerated during the jet; where and when jet flows, heat fluxes, and/or particles precipitate at the photosphere to generate remote brightenings; and the resultant occurrence of chromospheric evaporation flows into the corona.} Finally, we note that essentially all jet models invoke reconnection between regions of locally closed and locally open field. Such models implicitly assume that reconnection is occurring at one or more 3D magnetic nulls, so we conclude that this repeated tearing process is likely to occur in all coronal jets. 
Figure \ref{fig:cartoon} shows a schematic of this repeated process in a generic jet scenario, which could arise due to flux emergence \citep[e.g.][]{Moreno-Insertis2013}, the eruption of a mini-filament \citep[e.g.][]{Filippov2015,Sterling2015}, or an instability following photospheric twisting \citep[][]{Pariat2009}. As the spatial resolution and temporal cadence of observing instruments increase, it seems inevitable that such structures will be detected increasingly frequently.
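The flux-budget estimate described earlier in this section (growing a circle until it encloses as much background flux as the parasitic polarity contains) can be illustrated with a minimal numerical sketch. The synthetic field, grid, and variable names below are purely illustrative and are not taken from the simulation code used in the paper.
\begin{verbatim}
# Minimal sketch of the flux-budget estimate described above: find the radius
# of a circle, centred on the parasitic polarity, whose enclosed background
# flux equals the (unsigned) flux of the parasitic polarity itself.
import numpy as np

# Synthetic bottom-boundary normal field: uniform positive background plus a
# compact negative (parasitic) patch at the origin (illustrative only).
nx = ny = 256
x = np.linspace(-10.0, 10.0, nx)
y = np.linspace(-10.0, 10.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
dA = (x[1] - x[0]) * (y[1] - y[0])

B_bg  = 1.0                                      # background field (arbitrary units)
B_par = -8.0 * np.exp(-(X**2 + Y**2) / 1.5**2)   # parasitic polarity
Bz = B_bg + B_par

# Unsigned flux of the parasitic (negative) polarity.
phi_parasitic = -np.sum(Bz[Bz < 0.0]) * dA

# Grow a circle until the enclosed *positive* flux matches phi_parasitic.
r_grid = np.sqrt(X**2 + Y**2)
radii = np.linspace(0.5, 10.0, 200)
enclosed_pos = np.array(
    [np.sum(np.where((r_grid <= r) & (Bz > 0.0), Bz, 0.0)) * dA for r in radii]
)
r_match = radii[np.argmin(np.abs(enclosed_pos - phi_parasitic))]

print(f"parasitic flux = {phi_parasitic:.1f}, matching circle radius = {r_match:.2f}")
\end{verbatim}
In a realistic photospheric flux distribution the background is patchy rather than uniform, which is why the text notes that a weighted version of this procedure may be required.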
16
7
1607.00692
1607
1607.03622_arXiv.txt
We present $Spitzer$/IRS mid-infrared spectral maps of the Galactic star-forming region M17 as well as IRSF/SIRIUS Br$\gamma$ and Nobeyama 45-m/FOREST $^{13}$CO ($J$=1--0) maps. The spectra show prominent features due to polycyclic aromatic hydrocarbons (PAHs) at wavelengths of 6.2, 7.7, 8.6, 11.3, 12.0, 12.7, 13.5, and 14.2~$\micron$. We find that the PAH emission features are bright in the region between the HII region traced by Br$\gamma$ and the molecular cloud traced by $^{13}$CO, supporting that the PAH emission originates mostly from photo-dissociation regions. Based on the spatially-resolved $Spitzer$/IRS maps, we examine spatial variations of the PAH properties in detail. As a result, we find that the interband ratio of PAH 7.7~$\micron$/PAH 11.3~$\micron$ varies locally near M17SW, but rather independently of the distance from the OB stars in M17, suggesting that the degree of PAH ionization is mainly controlled by local conditions rather than the global UV environments determined by the OB stars in M17. We also find that the interband ratios of the PAH 12.0~$\micron$, 12.7~$\micron$, 13.5~$\micron$, and 14.2~$\micron$ features to the PAH~11.3~$\micron$ feature are high near the M17 center, which suggests structural changes of PAHs through processing due to intense UV radiation, producing abundant edgy irregular PAHs near the M17 center.
Spectral bands due to polycyclic aromatic hydrocarbon (PAH) emissions are dominant features in the near- and mid-infrared (IR; 3--20~$\micron$). Since PAHs are excited by far-UV photons (6--13.6~eV), the PAH emission features are characteristic of photo-dissociation regions (PDRs). The PAH emission features are observed at wavelengths of 3.3, 6.2, 7.7, 8.6, 11.3, 12.0, 12.7, 13.5, 14.2, 15.8, 16.4, 17.4, 17.8 and 18.9~$\micron$, which are attributed to vibrations of C-H or C-C bonds in hydrocarbons. Those features, especially the main features at 6.2, 7.7, 8.6, 11.3, and 12.7~$\micron$, are theoretically and observationally well-studied (e.g., \citealt{Chan01, Draine07, Tielens08, Bauschlicher09}). Past studies have shown that the PAH interband ratios are useful probes to study the properties of PAHs. Among these, the degree of PAH ionization is best studied through interband ratios involving the 6.2, 7.7, and 11.3~$\micron$ features. Here the 6.2 and 7.7~$\micron$ bands, due to C-C vibrations, are representative of ionized PAHs, and the 11.3~$\micron$ band, due to C-H vibrations, is representative of neutral PAHs (e.g., \citealt{Allamandola99, Peeters02}). It is expected that the degree of PAH ionization is relatively high near exciting sources while it is low in molecular clouds. Another important property is the edge structure of PAHs, which is probed by using the interband ratios of the PAH features at 11.3, 12.0, 12.7, 13.5, and 14.2~$\micron$; the likely origins of these features are all C-H out-of-plane bending, but the numbers of adjacent C-H bonds in a benzene ring are different (PAH~11.3~$\micron$: solo; PAH~12.0~$\micron$: duo; PAH~12.7~$\micron$: trio; PAH~13.5~$\micron$ and PAH~14.2~$\micron$: quartet; \citealt{Draine03, Tielens05}). For example, edgy PAHs are expected to show strong PAH~12.0, 12.7, 13.5, and 14.2~$\micron$ features relative to the PAH~11.3~$\micron$ feature. The structures of PAHs may change from region to region depending on the surrounding radiation field (\citealt{Boersma13, Kaneda14}). Therefore, examining the PAH ionization together with the PAH edge structure may be helpful in discussing variations of the PAH properties. Recently, variations in the PAH properties have been intensively studied for a variety of targets, mainly with $Spitzer$ and $AKARI$ (e.g., \citealt{Peeters02, Smith07a, Boersma12, Yamagishi12}). Most such studies, however, discussed PAHs in individual areas, and did not intensively examine spatial variations of PAHs. In order to examine the effects of the surrounding interstellar environment on the PAH properties, spatially-resolved observations are essential (e.g., \citealt{Crete99, Rapacioli05, Berne07, Sakon07, Kaneda05, Kaneda08, Fleming10, Yamagishi10, Berne12, Pilleri12, Egusa13, Croiset16}). One of the most intensive spatially-resolved studies of PAHs was carried out by \citet{Boersma13, Boersma14, Boersma15}. They analyzed $Spitzer$/IRS spectral maps of the reflection nebula NGC~7023 and decomposed the observed spectra into the emission features from ionized and neutral PAHs. As a result, they found clear spatial variations in the degree of PAH ionization along the direction from the exciting B-type star to the PDR and molecular-cloud regions. \citet{Stock16} also examined the PAH interband ratios of seven Galactic HII regions and three reflection nebulae with $Spitzer$/IRS spectral maps, although they did not discuss spatial variations of the ratios in the target regions.
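As an illustrative aside, the way an interband-ratio map is turned into a rough ionized-PAH fraction can be sketched with a simple two-component mixing model. The per-band intrinsic strengths below (e77_ion, e113_ion, e77_neu, e113_neu) are placeholder values chosen only to show the algebra; the analysis in this paper instead relies on PAHFIT decompositions and literature band strengths.
\begin{verbatim}
# Minimal sketch: turn a PAH 7.7/11.3 micron interband-ratio map into a rough
# "fraction of emission from ionized PAHs" map with a two-component mixing
# model.  The per-band emissivities below are placeholders chosen only to
# illustrate the algebra.
import numpy as np

# Placeholder intrinsic band strengths (arbitrary units) for ionized and
# neutral PAHs: ionized PAHs are strong at 7.7 um, neutral PAHs at 11.3 um.
e77_ion, e113_ion = 4.0, 0.8     # assumed
e77_neu, e113_neu = 1.0, 2.0     # assumed

def ionized_fraction(ratio_77_113):
    """Solve R = [f*e77_ion + (1-f)*e77_neu] / [f*e113_ion + (1-f)*e113_neu] for f."""
    R = np.asarray(ratio_77_113, dtype=float)
    num = R * e113_neu - e77_neu
    den = (e77_ion - e77_neu) + R * (e113_neu - e113_ion)
    return np.clip(num / den, 0.0, 1.0)

# Example: a toy ratio map built from (already decomposed) band-intensity maps.
I77  = np.array([[2.0, 3.5], [1.0, 4.5]])    # toy 7.7 um intensity map
I113 = np.array([[2.0, 1.5], [2.5, 1.0]])    # toy 11.3 um intensity map
print(ionized_fraction(I77 / I113))
\end{verbatim}
With these placeholder strengths, a fully neutral mixture gives a 7.7/11.3 ratio of 0.5 and a fully ionized one a ratio of 5, so observed ratios map smoothly onto fractions between 0 and 1.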
\citet{Haraguchi12} examined the degree of PAH ionization in the Orion nebula based on their ground-based near-IR narrow-band observations. The number of such spatially-resolved studies of PAHs is, however, still limited. In this paper, we present $Spitzer$/IRS spectral maps of the mid-IR PAH features in the Galactic star-forming region M17 as well as Br$\gamma$ and $^{13}$CO($J$=1--0) maps covering the same region. M17 is young ($\sim$1~Myr; \citealt{Hanson97}) and one of the well-studied active Galactic star-forming regions (e.g., \citealt{Stutzki90, Giard92, Giard94, Cesarsky96, Povich07}), which contains more than 100 OB stars in the central cluster, NGC~6618 (\citealt{Lada91}). Among them, the most active ionizing source is CEN1, a binary of O4+O4 stars (\citealt{Chini80}). Assuming the distance of 2~kpc (\citealt{Xu11}), the area of the spectral maps in the present study is $1.2\times1.2$~pc$^2$ which is 80 times larger than that in the study of NGC~7023 (\citealt{Boersma13}). Based on the wide-area spectral maps, we examine the effects of the intense star-forming activity on the properties of PAHs in detail.
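A quick consistency check of the map geometry quoted above, using only the small-angle approximation and the assumed 2 kpc distance (the implied NGC 7023 map area is an inference from the quoted factor of 80, not a number stated in the text):
\begin{verbatim}
# Quick consistency check of the map geometry: the angular size corresponding
# to 1.2 pc at an assumed distance of 2 kpc, via the small-angle approximation.
ARCSEC_PER_RAD = 206265.0

d_pc    = 2000.0          # assumed distance to M17 [pc]
side_pc = 1.2             # physical side of the Spitzer/IRS map [pc]

theta_arcsec = side_pc / d_pc * ARCSEC_PER_RAD
print(f"map side ~ {theta_arcsec:.0f} arcsec ~ {theta_arcsec/60:.1f} arcmin")
# ~124 arcsec, i.e. ~2.1 arcmin on a side.

# The quoted factor of 80 relative to the NGC 7023 map implies (as an
# inference, not a number from the text) an NGC 7023 map area of:
area_ngc7023 = side_pc**2 / 80.0
print(f"implied NGC 7023 map area ~ {area_ngc7023:.3f} pc^2")
\end{verbatim}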
Based on $Spitzer$/IRS spectral mapping observations, we have examined spatial variations of the PAH properties around the M17SW region. We analyzed 990 independent spectra, which show prominent PAH features at wavelengths of 6.2, 7.7, 8.6, 11.3, 12.0, 12.7, 13.5, and 14.2~$\micron$ as well as fine-structure lines. We decomposed all the spectral features using PAHFIT (\citealt{Smith07a}). As a result, the derived PAH emissions are bright in regions between HII regions traced by Br$\gamma$ and molecular cloud regions traced by $^{13}$CO. Additionally, the PAH intensity maps show a large-scale gradient from the north-east to south-west, indicating that PAHs are irradiated by UV from the OB stars in the M17 center. By contrast, the PAH interband ratio maps show no clear large-scale gradient but vary locally. These results suggest that the PAH ionization is mainly controlled by local conditions rather than the large-scale UV environment determined by the OB stars in the M17 center. The degree of PAH ionization estimated from the PAH 7.7~$\micron$/PAH 11.3~$\micron$ ratios ranges from 19 to 81~\% with a median value of 48~\%, which is comparable to the values found in previous studies of NGC~7023 and the Orion Bar. We also find that the degree of PAH ionization is high near the peak of the $^{13}$CO emission. We discuss the ionization balance using the $G_0$ and [CII] maps and find that the $G_0$/[CII] ratios show a local maximum inside the molecular cloud. We conclude that buried B-type or later stars may be important in determining the degree of PAH ionization locally. Additionally, the PAH edge structures are examined through the ratios of the PAH~12.0, 12.7, 13.5, and 14.2~$\micron$ features to the PAH~11.3~$\micron$ feature, which suggests that PAHs are processed into edgy, irregular structures by the intense (and/or hard) UV radiation, especially near the M17 center.
16
7
1607.03622
1607
1607.06007_arXiv.txt
We present and analyze Subaru/IRCS $L^\prime$and $M^\prime$ images of the nearby M dwarf VHS J125601.92-125723.9 (VHS 1256), which was recently claimed to have a $\sim$11 M$_{J}$ companion (VHS 1256 b) at $\sim$102 au separation. Our adaptive optics images partially resolve the central star into a binary, whose components are nearly equal in brightness and separated by 0$\farcs$106 $\pm$ 0$\farcs$001. VHS 1256 b occupies nearly the same near-infrared color-magnitude diagram position as HR 8799 bcde and has a comparable $L^\prime$ brightness. However, it has a substantially redder $H$ - $M^\prime$ color, implying a relatively brighter $M^\prime$ flux density than for the HR 8799 planets and suggesting that non-equilibrium carbon chemistry may be less significant in VHS 1256 b. We successfully match the entire SED (optical through thermal infrared) for VHS 1256 b to atmospheric models assuming chemical equilibrium, models which failed to reproduce HR 8799 b at 5 $\mu m$. Our modeling favors slightly thick clouds in the companion's atmosphere, although perhaps not quite as thick as those favored recently for HR 8799 bcde. Combined with the non-detection of lithium in the primary, we estimate that the system is at least older than 200 Myr and the masses of the stars comprising the central binary are at least 58 $M_{J}$ each. Moreover, we find some of the properties of VHS 1256 are inconsistent with the recent suggestion that it is a member of the AB Dor moving group. Given the possible ranges in distance (12.7 $pc$ vs. 17.1 $pc$), the lower mass limit for VHS 1256 b ranges from 10.5 $M_J$ to 26.2 $M_J$. Our detection limits rule out companions more massive than VHS 1256 b exterior to 6--8 au, placing significant limits on and providing some evidence against a second, more massive companion that may have scattered the wide-separation companion to its current location. VHS 1256 is most likely a very low mass (VLM) hierarchical triple system, and could be the third such system in which all components reside in the brown dwarf mass regime.
Exoplanet surveys have recently measured the frequency of exoplanets as a function of a host of parameters like stellar mass, metallicity, orbital separation, and planetary mass \citep{win15}. These parameters help to inform our understanding of how and where exoplanets form. The observed frequency of gas giants at small ($<$2 au) separations rises from $\sim$3\% for dM stars to $\sim$14\% for solar metallicity A-type stars; this dearth of massive planets around dM stars is consistent with theoretical predictions \citep{lau04} of the core accretion model \citep{pollack1996}. At larger orbital separations (10s-100s of au), the frequency of gas giants around dM stars is $<$6\% \citep{bowler2015}. Recent results from Kepler have shown both that the frequency of small-mass planets at short orbital periods increases around low-mass stars \citep{bor11,how12} and that there is a lack of planets larger than 2.5 R$_{\earth}$ surrounding dM stars at short orbital periods. High contrast imaging investigations have similarly begun to discover gas giant exoplanets located at large orbital separations from their stars (Fomalhaut b, \citealt{kal08}; $\beta$ Pictoris b, \citealt{lagrange2009,lag10}; HR 8799 bcde, \citealt{mar10}; $\kappa$ And. b, \citealt{car13}; 51 Eri b, \citealt{mac15}; HD 100546 bc, \citealt{quanz13}, \citealt{cur15}). While most of these directly imaged gas giants surround early-type stars, detections have been reported around solar analogs (GJ 504b, \citealt{kuz13}) and dM stars (e.g. ROXs42B, \citealt{cur14}; GU Psc, \citealt{nau14}). Yet the formation mechanisms responsible for these systems are still under debate. A growing number of objects with wide orbits and modest mass ratios (e.g. HD 106906b, \citealt{bai14}; ROXs42Bb, \citealt{cur14}; 1RXSJ1609, \citealt{Laf2008}; 2M J044144, \citealt{trodorv2010}) have led to suggestions that these planetary companions formed via a binary star-like process rather than the core accretion process (\citealt{Low1976}, \citealt{bate2009}, \citealt{bra14}). Although binary stars are common (e.g. \citealt{rag10}), our understanding of the frequency of exoplanets around binaries and higher order systems remains limited. Since the discovery of the first exoplanet surrounding a binary host (Kepler-16b, \citealt{doy11}), fewer than a dozen similar systems have been discovered by Kepler (\citealt{win15} and references therein). The analysis of publicly available Kepler data led \citet{arm14} to conclude that the frequency of planets with R $>$ 6R$_{\earth}$ on periods of less than 300 days was similar to single-star rates; however, this conclusion is critically dependent on the assumed planetary inclination distribution. While at least one bona fide planetary mass companion orbiting a binary has been imaged \citep[ROXs 42Bb][]{sim95,rat05,cur14}, most dedicated direct imaging surveys for gas giant planets around binaries have not yielded any firm detections to date \citep{tha14}. Recently, \citet{gau15} reported the detection of a planetary mass (11.2 $^{+9.7}_{-1.8}$ M$_{J}$) companion at a projected separation of 102 $\pm$ 9 au from its host star VHS J125601.92-125723.9 (hereafter VHS 1256), described as an M7.5 object with an inferred mass from its bolometric luminosity of 73$^{+20}_{-15}$ M$_{J}$, placing it near the hydrogen burning limit. The primary was estimated to have an age of 150-300 Myr from both kinematic membership in the Local Association and lithium abundance.
At a distance of $12.7 \pm 1.0$ $pc$ measured from trigonometric parallax \citep{gau15}, this made VHS 1256 the closest directly imaged planetary mass system to the Earth. \citet{Stone2016} reported a greater distance to VHS 1256 of $17.1 \pm 2.5$ $pc$ based on spectrophotometry of the system. From the standpoint of substellar atmospheres and atmospheric evolution, VHS 1256 b is a particularly unique object. Its near-infrared properties resemble those of the HR 8799 planets and a select few other young (t $\lesssim$ 30 Myr) and very low mass (M $\lesssim$ 15 $M_{\rm J}$) substellar objects, occupying roughly the same near-infrared color-magnitude space \citep{gau15,Faherty2016}: a continuation of the L dwarf sequence to fainter magnitudes and cooler temperatures. Indeed, as shown by atmosphere modeling, the near-infrared properties of objects like HR 8799 bcde and 2M 1207 B reveal evidence for thicker clouds than field brown dwarfs of the same effective temperatures \citep{Currie2011}. VHS 1256 b then offers a probe of clouds at ages intermediate between these benchmark objects and Gyr-old field objects and thus some insights into the atmospheric evolution of low-mass substellar objects. Furthermore, non-equilbrium carbon chemistry can be probed by new thermal infrared photometry, in particular at $M^\prime$ \citep[e.g.][]{Galicher2011}\footnote{The existing W2 photometry reported in \citet{gau15} covers a far wider bandpass (4--5 $\mu m$). Much of this wavelength range is far less sensitive to carbon monoxide opacity at relevant temperatures that is a tracer of non-equilbrium carbon chemistry, while $M^\prime$ is far more (uniquely) sensitive (e.g. see Figure 7 in \citealt{cur14B}).}. New thermal infrared data for VHS 1256 b allows us to assess the evidence for non-equilibrium chemistry for the objects at/near the deuterium burning limit and at ages older than HR 8799 bcde. In this work, we present new adaptive optics imagery of VHS 1256, providing the first detections of its wide-separation companion in major thermal IR broadband filters, $L^\prime$ and $M^\prime$. We use these mid-infrared photometric points and optical and near-infrared photometry from \citet{gau15} to perform the first atmospheric (forward) modeling of VHS 1256 b and the first assessment of how its thermal IR properties (e.g. carbon chemistry) compare to younger planet-mass objects with similar near-IR colors. Additionally, we report our independent determination of the primary's binarity, also reported in \citet{Stone2016}, following our original work \citep{rich2015} with additional analyses. We will adopt the same nomenclature as Stone et al. \citeyear{Stone2016}, referring to the close partially resolved binary as VHS 1256 A and B, and the wide companion as VHS 1256 b. After discussing our observations in Section \ref{sec:obs}, we search for new companions around VHS 1256 and investigate the binarity of VHS 1256 in Section \ref{sec:point}. Next, we discuss improved photometry of VHS 1256 A, B, and b at $L^\prime$and $M^\prime$ in Section \ref{sec:photometry}. Using the new L$^\prime$ and M$^\prime$ photometry, we assess the atmospheric properties of VHS 1256 b in Section \ref{sec:atmo}. Finally, we discuss the implications of our study in Section \ref{sec:Discussion}.
\label{sec:Discussion} \subsection{Binarity of the Central Source} Subaru/IRCS AO $L^\prime$ and $M^\prime$ imagery has clearly revealed that the central source of the VHS 1256 system is composed of two objects (Figure \ref{fig:L_M}) that have similar relative brightness ($L^\prime$ = 10.5 and 10.54 magnitudes, respectively). Such binarity is observed in 22$^{+6}_{-4}$\% of very low mass stars \citep{duc13}. \citet{gau15} assigned the central source a spectral classification of M7.5, based on optical (M7.0) and IR (M8.0) spectral classifications. We speculate that the minor differences in the optical versus IR spectral classifications derived by \citet{gau15} could be caused by minor differences in the spectral classifications of the binary components. At the observed distance to VHS 1256 ($12.7 \pm 1.0$ $pc$; \citealt{gau15}), the $0\farcs103 \pm 0\farcs001$ projected separation between the binary components corresponds to a projected physical separation of $\sim$1.3 au. Our results on the binarity of the central source are consistent with those independently and recently reported by \citet{Stone2016}. \subsection{System Age and Component Masses} \label{sec:Masses} \citet{gau15} suggested a system age of 150-300 Myr, based on the lack of observed Li in the system and kinematic age constraints from being a Local Association member. However, with the discovery that the central source is a binary \citep{Stone2016} and our independent verification of VHS 1256 A and B in the $L^\prime$-band, we can reassess the age limits of the system. Using the nominal distance (12.7 pc) and the absolute magnitude (M$_{L'}$; $10.0 \pm 0.2$), the 300 Myr upper limit age suggested by \citet{gau15} results in an inferred mass for VHS 1256 A or B of 47 M$_{J}$. However, such a mass would be too small to destroy Li \citep{all14} and produce the non-detection of this line (Figure \ref{fig:lithium}). Rather, at this adopted distance the lower limit age of VHS 1256 must be $>$ 400 Myr to produce VHS 1256 A and B with our observed M$_{L'}$ and the lack of Li in the system's spectra. If one assumes the new distance of 17.1 pc proposed by \citet{Stone2016} and the corresponding absolute magnitude of the central components (M$_{L'}$; $\sim9.4 \pm 0.3$), the lower age limit is $>$ 200 Myr (Figure 5). This is broadly consistent with the lower age limit of $280^{+40}_{-50}$ Myr proposed by \citet{Stone2016}. Note that we used models from \citet{all14}, while Stone et al. used models from \citet{Chabrier2000}. Stone et al. (\citeyear{Stone2016}) suggested that VHS 1256 was consistent with being a member of the AB Dor moving group, based on analysis of its UVW kinematics and a 66.85\% membership probability predicted by the BANYAN II software tool \citep{mal13,gag14}. Our own investigation suggests that it still has a 28\% chance of being in the ``young field'' (age up to 1 $Gyr$). Additionally, VHS 1256 b is a clear outlier in UVW space ($\sim$ 8 $\pm$ 1.7 km $s^{-1}$ from the core of AB Dor; J. Gagne, pvt. comm.). Furthermore, membership in the 149$^{+51}_{-19}$ Myr AB Dor moving group \citep{bel15} is inconsistent with the lower age limit of $280^{+40}_{-50}$ Myr proposed by \citet{Stone2016} and marginally inconsistent with our lower limit of 200--400 $Myr$. Moreover, the near-to-mid infrared colors of VHS 1256 A appear indistinguishable from those in the field and potentially bluer than AB Dor members (Figure \ref{fig:cmd}). Thus, it is not clear that VHS 1256 is a member of the AB Dor moving group, as suggested by \citet{Stone2016}.
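As a quick check of the numbers quoted in this subsection, the projected separation and absolute $L^\prime$ magnitude follow from two standard relations, $\theta[{\rm arcsec}] \times d[{\rm pc}] = {\rm separation}[{\rm au}]$ and $m - M = 5\log_{10}(d/10~{\rm pc})$; the sketch below reproduces them at both candidate distances.
\begin{verbatim}
# Quick check of the numbers quoted above for the central binary: projected
# separation in au and the absolute L' magnitude at the two candidate
# distances.  Only standard relations are used.
import math

sep_arcsec = 0.103          # projected separation from this section [arcsec]
m_Lp       = 10.5           # apparent L' magnitude of one component

for d_pc in (12.7, 17.1):
    sep_au = sep_arcsec * d_pc                      # theta[arcsec] * d[pc] = sep[au]
    M_Lp   = m_Lp - 5.0 * math.log10(d_pc / 10.0)   # distance modulus
    print(f"d = {d_pc:5.1f} pc : sep ~ {sep_au:.2f} au, M_L' ~ {M_Lp:.2f}")
# d = 12.7 pc gives ~1.3 au and M_L' ~ 10.0; d = 17.1 pc gives ~1.8 au and
# M_L' ~ 9.3, consistent with the ~9.4 +/- 0.3 quoted above.
\end{verbatim}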
As shown in Figure \ref{fig:lithium}, the minimum mass of each central component of VHS 1256 (A and B) is $>$ 58 M$_{J}$ for both of the distances discussed above. This implies that the wide companion, VHS 1256 b, has a minimum mass ranging from 10.5 to 26.2 M$_{J}$, as shown in Table \ref{tbl:absolute}. The large range is due to the uncertainty in the distance (12.7 or 17.1 pc) and the range in bolometric luminosities from atmospheric fitting (section \ref{sec:atmo}). Though the lower estimate does dip below the deuterium burning limit, the companion is most likely in the brown dwarf regime. \subsection{Additional Companions and Formation} \label{sec:Morecompanions} We detected no other point source companions in our field of view (FOV), $\sim$16$\farcs$5 x $\sim$16$\farcs$5 in $L^\prime$ and $\sim$9$\farcs$3 x $\sim$9$\farcs$3 in $M^\prime$, down to our 5-$\sigma$ sensitivity limits of 13.2 ($L^\prime$; 12.5 mag at 17.1 pc) shown in Figure \ref{fig:lims}. Assuming a distance of 12.7 (17.1) $pc$, a minimum system age of $>$ 400 (200) Myr, and no flux reversal at $L^\prime$ (i.e. that less massive objects are fainter), we can therefore exclude the presence of additional companions more massive than VHS 1256 b beyond 6 (8) au. For most of the semi-major axis space we probe, comparisons with \citet{Baraffe2003} imply that companions down to 3--5 $M_{\rm J}$ can be excluded if the system is 200--400 $Myr$ old. Because we have failed to identify other substellar companions orbiting the primary, this severely restricts the possibility that VHS 1256 b was scattered to its present orbit by dynamical interactions with another, unseen planet. Thus far, searches for close-in substellar companions to stars with imaged (near) planet-mass companions at 100--500 au have failed to identify potential scatterers, suggesting that this class of objects formed in situ either from protostellar disk or molecular cloud fragmentation \citep{Bryan2016}. Furthermore, the mass ratio (q) of VHS 1256 b (M$\sim$18.4 M$_{J}$; median lower limit between 10.2-26.2 M$_{J}$) to VHS 1256 A+B (M$\geq$116 $M_J$) is $\sim$0.16. This mass ratio is substantially larger than that observed for other imaged planetary systems such as HR 8799 (q $\sim$ $5*10^{-3}$; \citealt{fabrycky2010}) and ROXs 42B (q $\sim$ 0.008-0.01; \citealt{cur14}). Rather, it is more similar to that observed for low mass BDs (q $\sim$ 0.01-0.9; e.g., see Figure 4 of \citealt{cur14} and citations therein). We suggest that this indicates the system formed via some form of fragmentation, i.e. a binary-star-like formation mechanism, rather than core accretion \citep{pollack1996}. Stone et al. (\citeyear{Stone2016}) reached a similar conclusion regarding binary-star-like formation. \subsection{Atmospheric Modeling} Although VHS 1256 b occupies a similar near-IR color-magnitude space to HR 8799 bcde \citep{gau15}, its significantly older age than the HR 8799 system enables one to probe a different time frame in planet/brown dwarf atmospheric evolution. VHS 1256 b and HR 8799 bcd(e?) have different spectral energy distributions at the longest wavelengths probed ($M^\prime$/4.7 $\mu m$). In the now-standard picture of understanding the atmospheres of the youngest and lowest-mass L/T objects, thick clouds and non-equilibrium carbon chemistry are both due to the objects' low surface gravity \citep[e.g.][]{Marley2012}.
That VHS 1256 b thus far lacks evidence for non-equilibrium carbon chemistry may complicate this picture, suggesting some decoupling of gravity's two effects or that VHS 1256 b's gravity is high enough that non-equilibrium effects are less obvious than they are for, say, HR 8799 b. Higher signal-to-noise detections in $M^\prime$ and photometry in the 3--4 $\mu m$ range probing methane will allow us to better clarify VHS 1256 b's carbon chemistry. Multiple lines in $J$ band resolvable at medium resolution could better clarify the companion's surface gravity. With other, similar objects detected at a range of ages, we can better map out the atmospheric evolution of objects of a given mass as well as the diversity of objects occupying the same reddened L/T transition region where VHS 1256 b and bona fide planets like HR 8799 bcde reside. \subsection{System Architecture} The VHS 1256 hierarchical triple system is poised to become an important contributor to our understanding of VLM systems. It represents the third known hierarchical triple system composed solely of brown dwarf-mass components \citep{bou05,rad13}. Given the projected separation (1.3 au) and associated approximate orbital period ($\sim$4.7 years) of the central binary in VHS 1256, future AO spectroscopic monitoring of the system is well placed to determine dynamical masses of all components of the triple system, which should help constrain evolutionary models (see e.g. \citealt{dupuy2010}). Since at least some brown dwarf binaries are believed to form via the disintegration of triple systems, and the third body in such systems is most likely also a brown dwarf-mass object \citep{rei15}, robustly determining the fundamental properties of the few known triple systems like VHS 1256 could help test the predictions of dynamical simulations of BD formation and evolution. \\ We thank Sarah Schmidt for thoughtful discussions about L/T dwarfs and multiplicity and Jonathan Gagne for helpful comments on VHS 1256's possible membership in different moving groups. We acknowledge support from NSF-AST 1009314 and NASA's Origins of Solar Systems program under NNX13AK17G. This work was performed in part under contract with the Jet Propulsion Laboratory (JPL) funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. This work is also based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. The authors recognize and acknowledge the significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \nocite{*}
16
7
1607.06007
1607
1607.01761_arXiv.txt
The next generation weak lensing surveys (i.e., LSST, Euclid and WFIRST) will require exquisite control over systematic effects. In this paper, we address shear calibration and present the most realistic forecast to date for LSST/Euclid/WFIRST and CMB lensing from a stage 4 CMB experiment (``CMB S4''). We use the \textsc{CosmoLike} code to simulate a joint analysis of all the two-point functions of galaxy density, galaxy shear and CMB lensing convergence. We include the full Gaussian and non-Gaussian covariances and explore the resulting joint likelihood with Monte Carlo Markov Chains. We constrain shear calibration biases while simultaneously varying cosmological parameters, galaxy biases and photometric redshift uncertainties. We find that CMB lensing from CMB S4 enables the calibration of the shear biases down to $0.2\%-3\%$ in 10 tomographic bins for LSST (below the $\sim 0.5\%$ requirements in most tomographic bins), down to $0.4\%-2.4\%$ in 10 bins for Euclid and $0.6\%-3.2\%$ in 10 bins for WFIRST. For a given lensing survey, the method works best at high redshift where shear calibration is otherwise most challenging. This self-calibration is robust to Gaussian photometric redshift uncertainties and to a reasonable level of intrinsic alignment. It is also robust to changes in the beam and the effectiveness of the component separation of the CMB experiment, and slowly dependent on its depth, making it possible with third generation CMB experiments such as AdvACT and SPT-3G, as well as the Simons Observatory.
\label{sec:intro} Understanding the physics of cosmic acceleration is the aim of many ongoing and upcoming imaging surveys such as the Kilo-Degree Survey (KiDS)\footnote{\url{http://www.astro-wise.org/projects/KIDS/}}, the Dark Energy Survey (DES)\footnote{\url{http://www.darkenergysurvey.org}} \cite{DES}, the Subaru Hyper Suprime-Cam (HSC) survey\footnote{\url{http://www.naoj.org/Projects/HSC/index.html}} \cite{Miyazakietal:06}, the Subaru Prime Focus Spectrograph (PFS)\footnote{\url{http://sumire.ipmu.jp/en/2652}}\cite{Takadaetal:12}, the Dark Energy Spectroscopic Instrument (DESI)\footnote{\url{http://desi.lbl.gov}}, the Large Synoptic Survey Telescope (LSST) \cite{LSSTScienceBook}, the ESA satellite mission Euclid \cite{EuclidDefinitionStudyReport}, and NASA's Wide-Field Infrared Survey Telescope (WFIRST) \cite{WFIRST}. Through gravitational lensing, images of distant sources such as galaxies or the cosmic microwave background (CMB) are distorted by the presence of foreground mass. In the weak regime, lensing produces small distortions, arcminute deflections or $\sim1\%$ shear, coherent on degree scales, which are detected statistically. Weak gravitational lensing is sensitive to the growth of structure and the geometry of the universe, making it a powerful probe of dark energy, modifications to General Relativity and the sum of the neutrino masses (see \cite{2013PhR...530...87W} and references therein for a review). Realizing the full potential of the stage 4 weak lensing surveys (i.e. LSST, Euclid and WFIRST) requires an exquisite understanding and control of systematics effects \cite{2006astro.ph..9591A}. In the case of LSST, the bias and scatter in photometric redshifts need to be controlled to better than a percent \cite{2006MNRAS.366..101H}, which may require more than $\sim 10^5$ galaxy spectra for calibration \cite{2008ApJ...682...39M}. Interpreting cosmic shear, i.e. the power spectrum of the weak lensing of galaxies by the large-scale structure in the universe, requires knowledge of the matter power spectrum, down to scales where non-linear evolution and baryonic effects are important \cite{2008ApJ...672...19R, 2008PhRvD..77d3507Z, 2011MNRAS.418..536E, 2011MNRAS.415.3649V, 2014MNRAS.440.2997V, 2014MNRAS.442.2641V, 2015MNRAS.454.2451E, 2015ApJ...806..186O, 2016arXiv160303328H}. Intrinsic alignments of galaxies, if not mitigated, could contaminate the cosmic shear signal by up to $1-10\%$ (see \cite{2015SSRv..193...67K, 2015SSRv..193..139K, 2015SSRv..193....1J, 2015PhR...558....1T} for a review). Finally, estimating the shear from galaxy shapes may lead to additive and multiplicative biases, typically redshift dependent, which have to be controlled to a high accuracy \cite{2006MNRAS.366..101H, 2013MNRAS.429..661M}. The shear multiplicative bias is degenerate with the amplitude of the signal and its time evolution can hide the true evolution of the growth of structure, which probes dark energy and possible modifications to general relativity. \citet{2013MNRAS.429..661M} found that fully exploiting the statistical power of a stage 4 cosmic shear survey requires a shear multiplicative bias of $\lesssim 0.4\%$. The focus of this paper is to show how CMB lensing contributes to reaching this goal. Many effects contribute to the shear multiplicative bias \cite{2013MNRAS.429..661M, 2016MNRAS.tmp..827J}, such as inaccuracies in the point-spread function (PSF) or detector effects (e.g. charge transfer inefficiency in CCDs or the brighter-fatter effect). 
Model biases may occur when estimating galaxy shapes with an inaccurate galaxy profile. Since lensing couples the short and long wavelength modes of a galaxy image, knowing the response of a galaxy image to shear requires knowing the galaxy image to a better resolution than the PSF, leading in practice to a ``noisy deconvolution'' shape bias. Furthermore, the galaxies used for shear estimation do not form a homogeneous sample, and their detection signal-to-noise depends on their shape, leading to a shape selection bias. Correcting this bias perfectly would require knowledge of the galaxy population below the detection threshold \cite{2013MNRAS.429..661M, 2016MNRAS.tmp..827J}. State-of-the-art shape algorithms calibrated on simulations (e.g. \cite{2016MNRAS.tmp..827J}) reach few-percent accuracy on the shear multiplicative bias. They currently approach the requirements of stage 4 surveys (e.g. \cite{2015MNRAS.450.2963M}) in the absence of selection bias (i.e. when the galaxy population is known perfectly) and for images with slightly higher signal-to-noise. Because the shear multiplicative bias is such a critical and difficult systematic for stage 4 weak lensing surveys, alternative methods to calibrate it provide a valuable redundancy. These consistency checks will be crucial in trusting the results, e.g. in the optimistic event that stage 4 weak lensing surveys find deviations from $\Lambda$CDM. The lensing convergence reconstructed from temperature and polarization of the cosmic microwave background (CMB) does not rely on galaxy shape estimation \cite{1999PhRvD..59l3507Z, 2002ApJ...574..566H, 2003PhRvD..67h3002O}. Although CMB lensing is most sensitive to the matter distribution at higher redshift, the lensing efficiencies for CMB and galaxy lensing overlap, thus potentially allowing calibration of the shear multiplicative bias \cite{2012ApJ...759...32V, 2013ApJ...778..108V, 2013arXiv1311.2338D}. Using a Fisher forecast, \citet{2012ApJ...759...32V, 2013ApJ...778..108V, 2013arXiv1311.2338D} combine all the 2-point correlations of galaxy positions, galaxy shapes and CMB lensing convergence. They show that adding CMB lensing and galaxy spectroscopic data to a shape catalog allows one to approach the requirements on shear calibration for stage 4 weak lensing surveys. This method is appealing because it is a self-calibration: it relies on the data alone and not on image simulations of the whole galaxy sample. It is a practical example of real synergy between overlapping surveys, where combining probes leads to improved systematics and not just a marginally higher signal-to-noise. CMB lensing has been measured by the WMAP satellite \cite{2007PhRvD..76d3510S, 2008PhRvD..78d3520H}, the Atacama Cosmology Telescope (ACT) \cite{2011PhRvL.107b1301D, 2014JCAP...04..014D, 2015PhRvL.114o1302M, 2015ApJ...808....7V}, the South Pole Telescope (SPT) \cite{2012ApJ...756..142V, 2015ApJ...806..247B, 2015ApJ...810...50S}, POLARBEAR \cite{2014PhRvL.113b1301A} and the Planck satellite \cite{2014A&A...571A..17P, 2015arXiv150201591P}. In the future, Advanced ACT, SPT-3G and a stage 4 ground-based CMB experiment (CMB S4) \cite{2015APh....63...66A, 2015APh....63...55A}, as well as the Simons Observatory, will provide high fidelity maps of the CMB lensing convergence over a large fraction of the sky. Recent work has applied this method to existing stage 3 data. \citet{2016arXiv160105720L} correlated galaxy positions from CFHT with galaxy shapes from CFHT and CMB lensing from Planck.
Assuming a fixed cosmology (WMAP or Planck parameters) and known photometric redshift errors, they constrain the shear bias in the CFHT shape catalog, finding a shear bias lower than unity at $2-4\sigma$. However, a shift in the cosmological parameters, a different redshift-evolution of the galaxy bias or uncharacterized photometric redshift errors might explain this tension. Similarly, \citet{2016arXiv160207384B} correlated galaxy positions from DES with galaxy shapes from DES and CMB lensing from SPT, to constrain the shear bias and an overall additive photometric redshift bias. The constraints on shear bias are obtained by fixing cosmology and the photo-z bias to fiducial values. Cross-correlations between galaxy shear and CMB lensing have been measured \cite{2015PhRvD..91f2001H, 2015PhRvD..92f3517L, 2016MNRAS.459...21K}. In \cite{2015MNRAS.449.2205K}, the combination of galaxy shear and CMB lensing is forecasted to improve dark energy and neutrino mass constraints. In \cite{2016arXiv160505337M}, the CMB lensing from Planck and the galaxy lensing from CFHTLenS is measured around CMASS halos, yielding a $15\%$ measurement of a cosmographic distance ratio. Finally, \cite{2016arXiv160608841S} measured all the two-point correlations of galaxy positions and shear from SDSS and CMB convergence from Planck, and constrained in turn galaxy bias to $2\%$ accuracy, cosmology, shear multiplicative bias to $15\%$ and distance ratio to $10\%$. While these studies \cite{2016arXiv160105720L, 2016arXiv160207384B, 2016arXiv160608841S} currently constrain the shear bias to $\gtrsim10\%$, far from current state-of-the-art calibrations from simulations, they constitute very encouraging first steps in using CMB lensing and spectroscopic data to calibrate the shear bias. In this paper, we address the following questions: can CMB lensing calibrate the shear bias down to the requirements of stage 4 surveys? Is this method competitive with image simulations? Is this possible without arbitrarily fixing cosmological and nuisance parameters? How robust is this calibration to photometric redshift uncertainties and intrinsic alignments? To answer these questions in a realistic way, we simulate the full joint analysis of weak lensing, photometric clustering and CMB lensing. We compute the full covariances for all the two-point auto- and cross-correlations, including the non-Gaussian covariances which dominate on small scales. We analyze the full likelihood function with Monte Carlo Markov Chains (MCMC), allowing for a potentially non-Gaussian posterior distribution. This is particularly relevant when assessing non-linear degeneracies between parameters. We compare these MCMC forecasts with Fisher forecasts, thus assessing potential departures from Gaussianity of the posterior distributions. We test the robustness of the method by simultaneously varying shear biases, cosmological parameters, galaxy biases, photometric redshift uncertainties (bias in each tomographic bin and overall scatter), and by contaminating the simulated data with intrinsic alignment. We extend the \textsc{CosmoLike} \cite{2014MNRAS.440.1379E} framework to include CMB lensing and produce the most realistic forecast to date for LSST/Euclid/WFIRST and CMB lensing from a stage 4 CMB experiment. Given the importance of this result, and as an input for the design of CMB S4, we vary the depth, resolution and maximum multipole of the CMB experiment and show the robustness of this shear calibration method. This paper is organized as follows. 
In Sec.~\ref{sec:method} we describe the observables considered, the survey specifications, the systematic effects included and the simulated likelihood. In Sec.~\ref{subsec:lsst_alone}, we revisit the LSST requirements and show that self-calibration of the shear biases with LSST alone is possible down to $1-2\%$. The calibration of shear multiplicative bias down to the LSST requirements by using stage 4 CMB lensing is presented in Sec.~\ref{subsec:lsst_cmbs4}, along with the impact of intrinsic alignments. We show the importance of sensitivity and resolution for CMB S4 in Sec.~\ref{subsec:varying_cmbs4}, the robustness to photometric redshift uncertainties in Sec.~\ref{subsec:photoz}, to non-linearities and baryonic effects in Sec.~\ref{subsec:varyinglmaxgks}, and present forecasts for Euclid and WFIRST instead of LSST in Sec.~\ref{subsec:euclid_wfirst}.
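To make the role of the CMB-lensing cross-correlation concrete, the toy sketch below shows how a single multiplicative bias $m$ enters the observed spectra ($C^{\gamma\gamma} \propto (1+m)^2$, $C^{\gamma\kappa_{\rm CMB}} \propto (1+m)$) and what a one-parameter Fisher estimate of $\sigma(m)$ looks like when the convergence spectra are treated as known. This is emphatically not the \textsc{CosmoLike} analysis of the paper: all spectra, noise levels, and the sky fraction are placeholder values, and no marginalization over cosmology, galaxy bias or photo-z is included.
\begin{verbatim}
# Toy sketch (not CosmoLike): how a multiplicative shear bias m enters the
# observed spectra and how well the galaxy-shear x CMB-lensing cross-spectrum
# pins it down when the true convergence spectra are assumed known.
# All spectra, noise levels and survey parameters below are placeholders.
import numpy as np

ell   = np.arange(100, 2000)
f_sky = 0.4                                   # assumed overlap sky fraction

# Placeholder convergence spectra (power-law shapes, arbitrary normalisation).
C_kk    = 1e-7 * (ell / 100.0) ** -1.3        # galaxy-lensing convergence auto
C_kkcmb = 6e-8 * (ell / 100.0) ** -1.2        # galaxy x CMB-lensing convergence
C_cmb   = 1e-7 * (ell / 100.0) ** -1.1        # CMB-lensing convergence auto

# Placeholder noise spectra.
N_shear = 3e-9 * np.ones_like(ell, dtype=float)   # shape noise
N_cmb   = 5e-9 * np.ones_like(ell, dtype=float)   # CMB-lensing reconstruction noise

# Observables with a multiplicative bias m on the shear:
#   C^{gamma gamma} = (1+m)^2 C_kk ,   C^{gamma kappa_CMB} = (1+m) C_kkcmb ,
# so the cross-spectrum responds linearly to m.
dC_dm = C_kkcmb                                # derivative at m = 0

# Gaussian covariance of the cross-spectrum per multipole.
var_cross = ((C_kk + N_shear) * (C_cmb + N_cmb) + C_kkcmb**2) / ((2 * ell + 1) * f_sky)

F_mm    = np.sum(dC_dm**2 / var_cross)         # one-parameter Fisher "matrix"
sigma_m = 1.0 / np.sqrt(F_mm)
print(f"toy forecast: sigma(m) ~ {sigma_m:.3f}")
# With these placeholders sigma(m) comes out at the sub-percent level; the
# real forecast of this paper additionally marginalizes over cosmology,
# galaxy bias and photo-z parameters.
\end{verbatim}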
In this study, we answer the following questions: can CMB lensing calibrate the shear bias down to a useful accuracy, competitive with image simulations and comparable with the LSST requirements? Is this possible while marginalizing over cosmological and nuisance parameters? How robust is this calibration to intrinsic alignments, photo-z uncertainties, non-linear and baryonic effects, and assumptions on the CMB S4 experiment? To do so, we extend the \textsc{CosmoLike} framework to include CMB lensing. We jointly analyze all the two-point correlation functions of galaxy positions, shear and CMB lensing convergence. We include the non-Gaussian covariances and explore the posterior distribution with MCMC sampling and the Fisher approximation. Our forecasts simultaneously vary cosmological parameters, galaxy biases, photo-z uncertainties for each source and lens bin, and shear calibration for each source bin. We make conservative choices of galaxy samples and scales. We therefore expect our forecast to be realistic and robust. We show that CMB lensing from S4 can calibrate the shear multiplicative biases for LSST down to $0.3\%-2\%$ in 10 tomographic bins, surpassing the LSST requirements of $\sim 0.5\%$ in most of the redshift range. This method performs best in the highest redshift bins, where shear calibration is otherwise most challenging. We show a shear calibration of $0.4\%-2.4\%$ for Euclid's 10 tomographic source bins and $0.6\%-3.2\%$ for WFIRST's 10 bins. For a reasonable level of intrinsic alignments and Gaussian photo-z uncertainties, the shear calibration from CMB S4 lensing is only biased at a fraction of the statistical uncertainty. This shear calibration is sensitive to the noise level in CMB S4 maps, but insensitive to the beam and maximum multipole at which component separation is performed, within sensible values. Thus stage 3 CMB surveys such as AdvACT and SPT-3G, as well as the Simons Observatory, will already provide a meaningful shear calibration. It is mildly dependent on the photo-z priors for Gaussian photo-z errors, and on the maximum multipole included in the analysis, beyond $\ell_\text{max}\sim 1,000$. We did not explicitly consider photo-z outliers \cite{2010MNRAS.401.1399B} or potential biases in the CMB lensing reconstruction \cite{2014ApJ...786...13V, 2014JCAP...03..024O}. In conclusion, we find that shear calibration from CMB lensing will be possible at a level competitive with or even exceeding the LSST requirements. This method is a powerful alternative to simulation-based calibration techniques, because it relies on the data directly. In the systematics-limited era of stage 4 weak lensing surveys, this method will provide redundancy and serve as a cross check, in order to reliably measure the properties of dark energy, the neutrino masses and possible modifications to general relativity. \subsection*{Acknowledgements} We thank Patricia Burchat, Scott Dodelson, Jo Dunkley, Simone Ferraro, Colin Hill, Gil Holder, Bhuvnesh Jain, Alexie Leauthaud, Jia Liu, Mathew Madhavacheril, Rachel Mandelbaum, Roland de Putter, Uro\v s Seljak, Blake Sherwin, Sukhdeep Singh and Martin White for useful discussion about shear calibration with CMB lensing. We thank Bob Armstrong, Eric Huff and Peter Melchior for useful discussion about shear systematics. We thank Elisa Chisari, Rachel Mandelbaum and Sukhdeep Singh for useful discussion about intrinsic alignments.
We thank Scott Dodelson, Jo Dunkley, Simone Ferraro, Bhuvnesh Jain, Rachel Mandelbaum, Peter Melchior and Sukhdeep Singh for feedback on an earlier version of this paper. Numerical calculations in this work were carried out using computational resources supported by the Princeton Institute of Computational Science and Engineering. We thank Jim Stone and the Computational Science and Engineering Support for access to these resources and invaluable help. ES was supported, in part, by a JPL Strategic Universities Research Partnership grant. JR, TE and HM were supported by JPL, which is operated by Caltech under a contract from NASA. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Copyright 2016. All rights reserved. \newpage
16
7
1607.01761
1607
1607.06970_arXiv.txt
We report the discovery and orbit of a new dwarf planet candidate, 2015 RR$_{245}$, by the Outer Solar System Origins Survey (OSSOS). \obj's orbit is eccentric ($e$=0.586), with a semi-major axis near 82~au, yielding a perihelion distance of 34~au. \obj\ has $g-r = 0.59 \pm 0.11$ and absolute magnitude $H_{r} = 3.6 \pm 0.1$; for an assumed albedo of $p_V = 12$\% the object has a diameter of $\sim670$~km. Based on astrometric measurements from OSSOS and Pan-STARRS1, we find that 2015 RR$_{245}$ is securely trapped on ten-Myr timescales in the 9:2 mean-motion resonance with Neptune. It is the first TNO identified in this resonance. On hundred-Myr timescales, particles in \obj-like orbits depart and sometimes return to the resonance, indicating that \obj\ likely forms part of the long-lived metastable population of distant TNOs that drift between resonance sticking and actively scattering via gravitational encounters with Neptune. The discovery of a 9:2 TNO stresses the role of resonances in the long-term evolution of objects in the scattering disk, and reinforces the view that distant resonances are heavily populated in the current Solar System. This object further motivates detailed modelling of the transient sticking population.
\label{sec:intro} The Outer Solar System Origins Survey (OSSOS) was designed to provide a set of $500+$ very precise trans-Neptunian object (TNO) orbits by the end of its 2013--2017 observations with the Canada-France-Hawaii Telescope (CFHT) \citep{Bannister:2015}. As OSSOS covers 155 square degrees of sky on and near the Solar System mid-plane, the Kuiper belt's steep luminosity function \citep{2001AJ....122.1051G, Petit:2011p3938, Fraser:2014vt} was used to predict that the brightest target expected to be found over the course of the survey would have apparent magnitude $m_r\sim 21.5$. At $m_r=21.8$, \obj\ is the brightest target discovered by OSSOS. At a current heliocentric distance of 65 au, this bright OSSOS detection is also far beyond the median distance of TNO detections in sky surveys. Its substantial distance requires \obj\ to be sizeable.
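Before the dynamical discussion, a quick consistency check of the orbital elements quoted in the abstract ($a \approx 82$ au, $e = 0.586$, $q \approx 34$ au) using Kepler's third law; Neptune's semi-major axis of $\sim$30.07 au is a standard value rather than a number taken from this paper.
\begin{verbatim}
# Quick check of the orbital numbers quoted in the abstract: the location of
# Neptune's 9:2 mean-motion resonance and the perihelion distance implied by
# e = 0.586.  Neptune's semi-major axis (~30.07 au) is a standard value.
a_neptune = 30.07                    # au
period_ratio = 9.0 / 2.0             # TNO period : Neptune period

a_res = a_neptune * period_ratio ** (2.0 / 3.0)   # Kepler's third law
e = 0.586
q = a_res * (1.0 - e)                             # perihelion distance
Q = a_res * (1.0 + e)                             # aphelion distance

print(f"a(9:2) ~ {a_res:.1f} au, q ~ {q:.1f} au, Q ~ {Q:.0f} au")
# ~82 au, ~34 au and ~130 au, consistent with the values quoted in the
# abstract and with the ~65 au discovery distance lying between q and Q.
\end{verbatim}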
\label{sec:discussion} Two main possibilities appear likely for how \obj\ came to be in its current orbit: first, that it was scattered off Neptune and is presently `sticking' to a resonance, with the scattering event either recent or early in Solar System history. Secondly, \obj\ could have been captured into the resonance during Neptune's migration. We consider each in turn. Metastable resonant TNOs that are emplaced by `transient sticking' are an established phenomenon. The transient sticking slows orbital evolution, providing a mechanism necessary to maintain the current scattering disk, which would otherwise decay on timescales much shorter than the age of the Solar System \citep{Duncan:1997hg}. Several studies of transient sticking report temporary captures in the 9:2 resonance \citep{Fernandez:2004kh, Lykawka:2007dj, Almeida:2009bib}. These studies found most periods spent in the resonance are short ($\sim 10$ Myr) and with large libration amplitudes $L_{92}>130^\circ$. However, occasionally their modelled particles attained smaller libration amplitudes, which lengthened their occupation in the resonance. The low-libration `sticker' objects provide an enhanced contribution to the steady-state transient population. Indeed, the simulations reported in \citet{Lykawka:2007dj} include stickers in the 9:2 resonance with $L_{92}$ as small as the $\simeq115^\circ$ observed for \obj. Of the particles in \citet{Lykawka:2007dj} surviving to the present time, roughly half had experienced a trapping in the 9:2 at some time, and one case kept $L_{92}<120^{\circ}$ for 700~Myr (P. Lykawka, 2016 private communication). \obj\ plausibly fits into this `metastable TNO' paradigm. Given the $\sim$100 Myr median resonant lifetime of our orbital clones, we suggest that \obj\ is likely to be transiently alternating between Neptune mean-motion resonances and the actively scattering component of the trans-Neptunian region. This conclusion is bolstered by \obj's perihelion distance of $q=34$ au; roughly\footnote{Orbital integration is required; see discussions in \citet{Lykawka:2007ff} and \citet{Gladman2008}.} $q<37$~au results in the continuing orbital interactions with Neptune that are shared by almost all active scatterers. In fact, non-resonant TNOs with $q$ only 4 au from Neptune's orbital semi-major axis of 30 au typically experience sufficiently numerous strong encounters with Neptune (when the longitude of Neptune matches that of the TNO while the TNO is at perihelion) for their orbits to rapidly evolve \citep{Morbidelli:2004dh}. Such rapidly evolving orbits, when observed at the current epoch, are classified in the scattering population \citep{Gladman2008}. Residence in the 9:2 or other resonances can temporarily shield TNOs from scattering, but eventually their orbital evolution will lead such TNOs to leave the resonance and resume active scattering. Note that there can be resonant objects (eg.~Pluto) which do not participate at all in this process on 4~Gyr time scales. Though we consider transient sticking the most plausible origin for \obj, other emplacement scenarios are possible. For example, Nice model-type histories \citep{2008Icar..196..258L} could in principle emplace objects in the 9:2 resonance directly during an early Solar System upheaval event. If fortunate enough to remain stable for the subsequent $\sim$4 Gyr solar system age, these objects might still be present today. 
A numerical simulation of TNO sculpting under a Nice model-like Solar System history \citep{Pike2016nice} does produce a population of 9:2 resonant TNOs. However, the objects reported in \citet{Pike2016nice}, drawn from the simulations of \citet{Brasser:2013dw}, may also be transient captures. Further work is needed to determine whether these objects were caught early and retained, or whether they are in fact transiently sticking TNOs captured later in the 4 Gyr numerical evolution. Alternately, resonance capture during smooth migration of Neptune, even over a relatively large 10 au distance, would require an initial disk extending beyond 55 au to provide a source of TNOs for capture into resonance. While there are low-inclination TNOs beyond the 2:1 resonance that suggest that the cold classical TNO population did extend to at least 50 au \citep{Bannister:2015}, 55 au would be at the larger end of the observed debris disk population \citep{Hilenbrand2008}. Because they appeal to capture early in the Solar System's history, both the Nice-type and the smooth migration scenarios would require that future observations of \obj\ push its orbit to a subset of phase space more stable than that currently explored by our orbital clones; this seems unlikely given the extent of our numerical exploration. We next consider whether our detection of a large TNO in the resonant phase of the metastable population is consistent with the population ratios between the two phases (scattering/resonant). Our initial numerical experiments suggest that---summed over all resonances--- the transiently stuck population may be comparable to the population of active scatterers. This is similar to the behaviour seen for the known 5:1 resonant TNOs, which typically spend half their lifetimes in various resonances and half in a scattering state \citep{Pike:2015gn}. If so, a single transiently-stuck dwarf planet candidate detection by OSSOS is consistent with our lack of detection of similarly-sized active scatterers. Only a few of the well-populated distant resonances are known to contain $H<4$ TNOs \citep{2011AJ....142...98S}. Additionally, the classification methods of \citet{Elliot:2005ju} and \citet{Gladman2008} both agree that with its current astrometric measurements, 2007 OR$_{10}$ is securely in the 10:3 resonance. Because dynamical timescales are longer at large semi-major axes, transiently stuck TNOs spend more time in more distant (low-order) resonances, making the 9:2 a reasonable resonance in which to find \obj. When viewed in absolute magnitude $H$ space, detection of an $H_r=3.6$ TNO by OSSOS is naively a $\sim4$\% probability using the TNO sky density estimates of \citet[figure 9]{Fraser:2014vt}. However, the $H_r$ frequency distribution reported in \citet{Fraser:2014vt} utilizes an empirical formulation that adjusts for the increased albedos of many large TNOs \citep{Brown:2008tp, Fraser:2008p103, Fraser:2014vt}. Use of that relation\footnote{The $H_r$ mag of the \obj\ is used to estimate a size given an estimated intrinsic albedo of $p_V = 12$\% and then an 'effective' $H_r$ mag is computed for that size using an effective albedo of 6\%.} to compute an `effective' $H_r$ for \obj\ results in a value of $H_{r, eff} = 4.35$. At this $H_{r, eff}$, our detection of one TNO in 155 square degrees of survey coverage is in good agreement with the measured $H_{r, eff}$ frequency distribution. 
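The footnote's two-step conversion (a size from $H_r$ and $p_V = 12$\%, then an effective $H_r$ at 6\% albedo) reduces to a single magnitude offset, since the reflected flux of a body of fixed diameter scales with its albedo. A quick check:
\begin{verbatim}
# Quick check of the "effective" absolute magnitude used above.  At fixed
# diameter the reflected flux scales with the albedo, so converting H_r from
# an intrinsic albedo of 12% to the 6% effective albedo of the reference
# luminosity function is just a magnitude offset.
import math

H_r    = 3.6
p_true = 0.12     # albedo assumed for 2015 RR245
p_eff  = 0.06     # effective albedo of the reference luminosity function

H_r_eff = H_r + 2.5 * math.log10(p_true / p_eff)
print(f"H_r,eff ~ {H_r_eff:.2f}")     # ~4.35, as quoted in the text
\end{verbatim}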
Concentrating on such `large' TNOs, \obj\ spends approximately two-thirds of its orbit brighter than the shallowest magnitude limit $m_r \sim 24.5$ of any OSSOS block; even at aphelion its sky motion of $\sim1"$/hour would be easily detectable by our survey. With such a substantial visibility fraction, a trivial estimate of the number of comparable TNOs within about 10 degrees of the ecliptic is $(360\times20 / 155) \simeq 50$ $H_r < 3.6$ TNOs over the sky, with only a small upward correction (of $<$50\%) for the fraction of the visibility. Demanding that these TNOs be also in the 9:2 should be viewed as dangerous `post-facto' reasoning (in that the argument would apply to any sub-population in which the single TNO was found). Instead, the perspective should be that there are 50-100 $H_r < 3.6$ TNOs in the volume inside 100 au, which seems completely plausible. Its dynamics suggest \obj\ is one of the objects that survived the population decay in the initially scattered disk after experiencing scattering and temporary capture in multiple resonances. If of order 100 $H\lesssim4$ TNOs exist and the ``retention efficiency" over the entire outer Solar System is $\sim$1\% \citep{Duncan:1997hg, Nesvorny:2016dd}, then there would have been $\sim 10,000$ such objects present in the outer Solar System at the time that the giant planets began to clear the region. This is in line with primordial estimates \citep{1991Icar...90..271S, 1997AJ....114..841S} of $\sim1000$ Plutos, when one takes into account that Pluto-scale TNOs are only a fraction of the $H<4$ inventory. Viewed another way, there may still be an issue due to the puzzling fact that \obj\ is roughly 3 magnitudes brighter than the OSSOS detection limits. That is, OSSOS detects many TNOs with $m_r<24.9$, and none are in the 9:2 resonance. We find that the $H_{v} = 4.7$ TNO 2003 UA$_{414}$, recently re-found by Pan-STARRS,\footnote{Four new oppositions of observation in MPS 719376: \url{http://www.minorplanetcenter.net/iau/ECS/MPCArchive/2016/MPS_20160719.pdf}, which changed the semi-major axis of 2003 UA$_{414}$'s orbit by nearly a factor of two.} is securely in the 9:2, and stably resonant on a 100 Myr timescale, with 100 clones evolved as in \S~\ref{sec:orbit} all remaining resonant. No other published surveys suggest any smaller TNOs being detected in the 9:2. If one anchors a normal exponential magnitude distribution to \obj, even restricting to its discovery distance of 65 AU, there should be $\sim100$ TNOs up to three magnitudes fainter, yet only one has been found. The problem is worsened when considering that near the $q=34$ au perihelion distance, TNOs as faint as $H_r\simeq9$ are visible to OSSOS, and detection of those TNOs is far more likely than finding \obj. A plausible resolution of this apparent paradox is most likely that \obj\ has an albedo that is higher than that of smaller TNOs \citep{2008ssbn.book..161S}, as suggested above, and thus this TNO does not anchor a steep exponential distribution. Considering known large TNOs on potentially `metastable' orbits, for an albedo like that of the substantially larger Eris (at a current heliocentric distance of 96 au) of $p\simeq0.96$ \citep{Sicardy:2011gq}, the effective $H_r$ becomes nearly 6, and the non-detection of smaller TNOs even at perihelion is not statistically alarming. We point out, however, that the 1500~km diameter 2007 OR$_{10}$'s visual albedo is only 9\% \citep{2016arXiv160303090P}, raising doubt on whether all large TNOs have high albedos \citep{Brown:2008tp}. 
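The back-of-envelope scaling in the paragraph above can be written out explicitly (a sketch; the $\pm$10 degree ecliptic band and the two-thirds visibility fraction are taken from the text):
\begin{verbatim}
# Back-of-envelope version of the sky-count estimate above: scale the single
# OSSOS detection to a band within ~10 degrees of the ecliptic, then apply a
# modest correction for the fraction of the orbit on which the object is
# brighter than the survey limits.
band_area_deg2   = 360.0 * 20.0      # +/- 10 deg ecliptic band [deg^2]
ossos_area_deg2  = 155.0             # OSSOS coverage [deg^2]
visible_fraction = 2.0 / 3.0         # fraction of orbit brighter than m_r ~ 24.5

n_naive     = band_area_deg2 / ossos_area_deg2
n_corrected = n_naive / visible_fraction
print(f"~{n_naive:.0f} objects from area scaling, ~{n_corrected:.0f} after visibility correction")
# ~46 and ~70, i.e. the "50-100 H_r < 3.6 TNOs" range quoted in the text.
\end{verbatim}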
Future thermal measurements and spectral studies of \obj, which will steadily brighten as it approaches its 2090 perihelion, will inform the open question of its albedo and surface composition.
16
7
1607.06970
1607
1607.00147_arXiv.txt
\noindent We investigate the use of closure phase as a method to detect the HI 21cm signal from the neutral IGM during cosmic reionization. Closure quantities have the unique advantage of being independent of antenna-based calibration terms. We employ realistic, large-area sky models from Sims et al. (2016). These include an estimate of the HI 21cm signal generated using 21cmFAST, plus continuum models of both the diffuse Galactic synchrotron emission and the extragalactic point sources. We employ the CASA simulator and adopt the Dillon-Parsons HERA configuration to generate a uv measurement set. We then use AIPS to calculate the closure phases as a function of frequency ('closure spectra'), and Python scripts for subsequent analysis. We find that the closure spectra for the HI signal show dramatic structure in frequency, and based on thermal noise alone, the redundant HERA-331 array should detect these fluctuations easily. By comparison, the frequency structure in the continuum closure spectra is much smoother than that seen in the HI closure spectra. Unfortunately, when the line and continuum signals are combined, the continuum dominates the visibilities by a factor of $10^3$ to $10^4$, and the line signal is lost. We have investigated fitting and removing smooth curves in frequency from the line plus continuum closure spectra, and find that the continuum itself shows enough frequency structure in the closure spectra to preclude separation of the continuum and line by such a process. We have also considered subtraction of the continuum from the visibilities using a sky model, prior to calculation of the closure spectra. We find that if 99\% of the continuum can be subtracted from the visibilities, then the line signal can be seen in the residuals after subsequent smooth-curve fitting and removal, although the advantages of such an approach are not clear at this point.
Detecting the HI 21cm signal from the neutral intergalactic medium during cosmic reionization, and into the preceding dark ages, has been one of the paramount goals of modern astrophysics for the last decade (Morales \& Wyithe 2010). However, this task is complicated by the much stronger foreground continuum emission. A powerful distinguishing property of the foregrounds is that they are dominated by spectrally smooth emission. This is in stark contrast with the 21-cm emission, which is expected to fluctuate rapidly in both its spatial and spectral dimensions. A naive solution\footnote{ Without a priori knowledge of the covariance between the foregrounds and the 21-cm signal in the data, independent subtraction of a foreground model from the data prior to estimation of the quantity of interest will produce biased estimates of said quantity. As such, joint estimation of the foregrounds and 21-cm signal is essential for obtaining statistically robust estimates of the 21-cm signal (Sims et al. 2016).}, therefore, would be to attempt to remove the foregrounds in the spectral domain by fitting a smoothly varying function (such as a polynomial) in frequency to either the visibilities or the spectral image cubes. However, in his PhD thesis work, A. Datta showed that the chromatic response of an interferometer for very wide-field imaging imprints a spectral signature on the visibility data which is impossible to remove using standard continuum subtraction techniques via smooth curve fitting to the visibility spectra, such as UVLIN in AIPS or uvcontsub in CASA, or point-by-point smooth curve fitting to a spectral image cube. The continuum can still be removed properly, in theory, through a frequency-dependent subtraction from the visibilities of an accurate continuum model generated from the data themselves. However, Datta et al. showed that such a subtraction requires remarkably accurate complex gain calibration as a function of frequency (0.1\%; Datta et al. 2009; 2010). These facts have led to consideration of alternative methods for detecting the HI 21cm signal, through 'foreground avoidance' in delay spectra (Parsons et al. 2012; Morales et al. 2012). The method involves separating HI from continuum in the line-of-sight vs. sky-plane power-spectral space. In this space, the maximum wave number (or spectral frequency) for flat-spectrum continuum emission due to the chromatic response of a given interferometric baseline is set by the maximum delay of the baseline for sources at the horizon. Hence, the line signal in the line-of-sight direction (frequency) emerges from 'the wedge' of continuum emission at large wave number (e.g., Datta et al. 2010). This avoidance method still requires tight control of the spectral response of other parts of the array, such as the antennas and data transmission system, and/or very accurate calibration of the spectral response (bandpass) with time, to avoid coupling the continuum signal to the line, and hence causing 'bleeding' of the continuum signal into the EoR window (e.g., Pober et al. 2016). In this memo, we consider an alternative approach for discovering the HI signal using the closure phases of the interferometer. The closure phase is the phase of the product of the three visibilities measured around a closed triangle of antennas, i.e., of the bispectrum (Jennison 1958). It was recognized early in the field of radio interferometry that closure quantities are independent of antenna-based phase and amplitude calibration terms.
Hence, to the degree that array calibration is separable into antenna-based terms, closure phases are independent of calibration and calibration errors, i.e., closure quantities are a robust `observable' of the true sky signal. This fact was exploited in early radio interferometry, and in particular in VLBI, when maintaining phase coherence was problematic. Closure quantities are still used extensively in optical interferometry, as well as being the primary diagnostic for antenna-based calibration errors in phase-connected radio interferometers (Perley 1999). Note that in this memo the goal is not to characterize the HI 21cm signal from reionization, nor to consider its physical interpretation and implications for the physics of reionization. These are early days in HI 21cm cosmology, when mere detection of the signal remains paramount. Given the robustness of closure phase against antenna-based calibration errors, herein we consider the simple questions: is the HI 21cm line signal from reionization obvious in the closure phase behaviour as a function of frequency? Does the behaviour of the closure phases due to the line signal as a function of frequency differ substantially from that of the continuum? And are the two separable in a simple way?
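Because the closure phase is the memo's central observable, a minimal sketch of how it is formed on an antenna triangle, and why antenna-based gain errors cancel, may be useful; the visibility spectra and gains below are arbitrary illustrations, not HERA quantities.
\begin{verbatim}
# Sketch: closure phase on a triangle of antennas (1,2,3) and its immunity to
# antenna-based gains. True visibilities V_ij are corrupted as V'_ij = g_i conj(g_j) V_ij,
# so the bispectrum V_12 V_23 V_31 picks up |g_1 g_2 g_3|^2, a positive real factor,
# and its argument (the closure phase) is unchanged.
import numpy as np

rng = np.random.default_rng(1)
nchan = 64
V12, V23, V31 = (np.exp(1j * rng.uniform(-np.pi, np.pi, nchan)) for _ in range(3))

# arbitrary complex antenna gains, varying with frequency channel
g = [np.exp(rng.normal(0, 0.1, nchan) + 1j * rng.uniform(-np.pi, np.pi, nchan))
     for _ in range(3)]
V12c = g[0] * np.conj(g[1]) * V12
V23c = g[1] * np.conj(g[2]) * V23
V31c = g[2] * np.conj(g[0]) * V31

closure_true = np.angle(V12 * V23 * V31)        # closure spectrum of the true sky
closure_corr = np.angle(V12c * V23c * V31c)     # closure spectrum after gain corruption
print(np.allclose(closure_true, closure_corr))  # True: the gains cancel channel by channel
\end{verbatim}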
16
7
1607.00147
1607
1607.05886_arXiv.txt
We report constraints on the sources of ultra-high-energy cosmic rays (UHECRs) above $10^{9}$\;GeV, based on an analysis of seven years of IceCube data. % This analysis efficiently selects very high energy neutrino-induced events which have deposited energies from $5 \times 10^5$ GeV to above $10^{11}$\;GeV. Two neutrino-induced events with an estimated deposited energy of $(2.6 \pm 0.3) \times 10^6$\;GeV, the highest neutrino energy observed so far, and $(7.7 \pm 2.0) \times 10^5$\;GeV were detected. The atmospheric background-only hypothesis of detecting these events is rejected at 3.6$\sigma$. The hypothesis that the observed events are of cosmogenic origin is also rejected at $>$99\% CL because of the limited deposited energy and the non-observation of events at higher energy, while their observation is consistent with an astrophysical origin. Our limits on cosmogenic neutrino fluxes disfavor the UHECR sources having cosmological evolution stronger than the star formation rate, e.g., active galactic nuclei and $\gamma$-ray bursts, assuming proton-dominated UHECRs. Constraints on UHECR sources including mixed and heavy UHECR compositions are obtained for models of neutrino production within UHECR sources. Our limit disfavors a significant part of parameter space for active galactic nuclei and new-born pulsar models. These limits on the ultra-high-energy neutrino flux models are the most stringent to date.
\vspace{5mm} In a recent letter \cite{icecube2016}, we stated that two neutrino-induced events were detected. The observed events were, because of their estimated energies, interpreted as background in the original analysis searching for neutrinos above 10 PeV. One of two events was described as a particle shower with a deposited energy of $(7.7\pm2.0) \times 10^5$ GeV. Later investigation revealed that this event was a detector artifact caused by a spurious flash from the in-ice calibration laser during the warm-up period before a calibration run. We have updated the current analysis excluding all the runs overlapping with the laser warm-up period. The total live time difference with the update is less than 0.5\%. The other neutrino-induced event, an upward going track with a deposited energy of $(2.6\pm0.3) \times 10^6$ GeV, is unaffected. A further search identified no other high-energy neutrino candidates affected by the calibration laser. The atmospheric background-only hypothesis of detecting the one surviving event is rejected at $2.2\sigma$. The observed event is compatible with a generic astrophysical $E^{-2}$ power-law flux with a p-value of 86.4\% and the hypothesis that this event is of cosmogenic origin is rejected with a p-value of 2.2\%. The corresponding evaluation of representative models is given in Table \ref{table:CL} and \ref{table:CL2} as well as the model-dependent limits presented in Fig.\;\ref{fig:MD}. The quasi-differential limit and a model-dependent upper limit on an unbroken $E^{-2}$ power-law flux shown in Figure 4 of the original letter become stronger. An updated version of this plot can be found in Fig. \ref{fig:differential}. An updated exclusion contour from a generic scanning of the parameter space for the source evolution function, $\varPsi_s(z)\propto (1+z)^m$, up to the maximum source extension in redshift $z\leq z_{\rm max}$, is shown in the upper panel of Fig.\;\ref{fig:contours}. The lower panel of Fig.\;\ref{fig:contours} provides a generic constraint on these astrophysical fluxes as an exclusion region in the parameter space of $E^{-2}$ power-law neutrino flux normalization $\phi_0$ and spectral cutoff energy $E^{cut}_{\nu}$. \begin{table}[b] \begin{center} \begin{tabular}{lcrc} \hline \hline $\nu$ Model & Event rate & p-value & MRF\\ & per livetime & &\\ \hline Kotera {\it et al.} & && \\ SFR & $3.6^{+0.5}_{-0.8}$ & $6.0^{+2.9}_{-1.0}$\% & 1.04\\ \hline Kotera {\it et al.} & && \\ FRII & $14.7^{+2.2}_{-2.7}$ & $<$0.1\% &0.23\\ \hline Aloisio {\it et al.}&&&\\ SFR & $4.8^{+0.7}_{-0.9}$ & $3.2^{+2.8}_{-0.7}$\% & 0.80\\ \hline Aloisio {\it et al.}&&&\\ FRII & $24.7^{+3.6}_{-4.6}$ & $<$0.1\% & 0.15\\ \hline Yoshida {\it et al.}& & & \\ $m = 4.0,z_{max}=4.0$& $7.0^{+1.0}_{-1.0}$ & $0.1^{+0.4}_{-0.1}$\% &0.43\\ \hline Ahlers {\it et al.}& && \\ best fit, 1 EeV& $2.8^{+0.4}_{-0.4}$ & $13.4^{+9.2}_{-2.2}$\% &1.33\\ \hline Ahlers {\it et al.}& && \\ best fit, 3 EeV & $4.4^{+0.6}_{-0.7}$ & $3.2^{+1.8}_{-1.4}$\% &0.76\\ \hline Ahlers {\it et al.}& && \\ best fit, 10 EeV & $5.3^{+0.8}_{-0.8}$ & $1.1^{+2.5}_{-0.3}$\% &0.63\\ \hline \hline \hline \end{tabular} \caption{Cosmogenic neutrino model tests: Expected number of events in the effective livetime, p-values from model hypothesis test, and 90\%-CL model-dependent limits in terms of the model rejection factor (MRF). 
See the caption of the original letter for full citations.} \label{table:CL} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{lcrc} \hline \hline $\nu$ Model & Event rate & p-value & MRF\\ & per livetime & &\\ \hline Murase~{\it et al.}& & & \\ $s=2.3$, $\xi_{CR}$=100& $7.4^{+1.1}_{-1.8}$ & $0.3^{+1.3}_{-0.2}$\% & 0.62 ($\xi_{CR}\leq$62)\\ \hline Murase~{\it et al.}& & & \\ $s=2.0$, $\xi_{CR}$=3& $4.5^{+0.7}_{-0.9}$ & $4.8^{+4.9}_{-2.2}$\% & 1.32 ($\xi_{CR}\leq$4.0)\\ \hline Fang~{\it et al.}& && \\ SFR& $5.5^{+0.8}_{-1.1}$ & $1.6^{+3.0}_{-0.8}$\% &0.88\\ \hline Fang~{\it et al.}& && \\ uniform& $1.2^{+0.2}_{-0.2}$ & $78.2^{+2.4}_{-3.9}$\% & 4.0\\ \hline Padovani~{\it et al.}& && \\ $Y_{\nu\gamma}=0.8$& $37.8^{+5.6}_{-8.3}$ & $<$0.1\% & 0.12 ($Y_{\nu\gamma}\leq$0.13)\\ \hline \hline \end{tabular} \caption{Astrophysical neutrino model tests: Same as Table \ref{table:CL}. See the caption of the original letter for full citations.} \label{table:CL2} \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[width=3.4in]{ErratumFig2_v1.pdf} \end{center} \caption{ Model-dependent 90\% confidence-level limits (solid lines) for cosmogenic and astrophysical neutrino predictions. The range of limits indicates the central 90\% energy region. See the caption of the original letter for full details. } \label{fig:MD} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.2in]{ErratumFig4_v1.pdf} \end{center} \caption{All-flavor-sum neutrino flux quasi-differential 90\%-CL upper limit on one energy decade $E^{-1}$ flux windows (solid line). A model-dependent upper limit on an unbroken $E^{-2}$ power-law flux from the current analysis ($E_{\nu}^2\phi < 5.9\times10^{-9}$ GeV/cm$^2$\;s\;sr) is also shown (dotted line). See the caption of the original letter for full details. } \label{fig:differential} \end{figure} \begin{figure} \begin{center} \includegraphics[height=1.5in]{ErratumFig3a_v1.pdf}\\ \includegraphics[height=1.5in]{ErratumFig3b_v1.pdf} \end{center} \caption{Constraints on UHECR source evolution model and all flavor $E^{-2}$ power-law flux model parameters. The colored areas represent parameter space excluded by the current analysis. (Top) Cosmogenic flux parameters $m$ and $z_{max}$ of UHECR-source cosmological evolution function of the form $\psi_s(z) \propto (1+z)^m$. (Bottom) Upper limits on $E^{-2}$ power-law neutrino flux normalization $\phi_0$ and spectral cutoff energy $E^{cut}_{\nu}$. See the caption of the original letter for full details. } \label{fig:contours} \end{figure} \begin{thebibliography}{9} \bibitem{icecube2016} M.~G.~Aartsen {\it et al.} (IceCube Collaboration), Phys.\ Rev.\ Lett.\ {\bf 117}, 241101 (2016). \end{thebibliography} \clearpage \ifx \standalonesupplemental\undefined \setcounter{page}{1} \setcounter{figure}{0} \setcounter{table}{0} \fi \newcolumntype{L}[1]{>{\arraybackslash}p{#1}} \newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}} \newcolumntype{R}[1]{>{\hfill\arraybackslash}p{#1}} \renewcommand{\thepage}{Supplementary Methods and Tables -- S\arabic{page}} \renewcommand{\figurename}{SUPPL. FIG.} \renewcommand{\tablename}{SUPPL. TABLE} \vspace{50mm}
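For readers unfamiliar with the model rejection factor (MRF) quoted in the tables above, the sketch below illustrates its definition, MRF $= \mu_{90}/N_{\rm pred}$, using a background-free classical Poisson upper limit; this is only an approximation, since the actual analysis incorporates backgrounds and systematic uncertainties, so the illustrative values differ slightly from the tabulated ones.
\begin{verbatim}
# Illustrative model rejection factor: MRF = mu_90 / N_pred, where mu_90 is the 90%-CL
# upper limit on the number of signal events given the observation, and N_pred is the
# number of events a model predicts. Background-free classical Poisson limit only.
from scipy.optimize import brentq
from scipy.stats import poisson

def mu90(n_obs):
    """Solve P(N <= n_obs | mu) = 0.10 for mu (background-free 90% CL upper limit)."""
    return brentq(lambda mu: poisson.cdf(n_obs, mu) - 0.10, 1e-6, 50.0)

n_obs = 1   # one surviving event in the updated analysis
for name, n_pred in [("Kotera et al. SFR", 3.6), ("Kotera et al. FRII", 14.7)]:
    print(name, "MRF ~", round(mu90(n_obs) / n_pred, 2))
# mu90(1) ~ 3.89, giving ~1.08 and ~0.26: close to, but not identical to, the
# tabulated 1.04 and 0.23, which come from the full statistical treatment.
\end{verbatim}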
16
7
1607.05886
1607
1607.05248_arXiv.txt
We report the discovery of two super-Earth-sized planets transiting the bright (V = 8.94, K = 7.07) nearby late G-dwarf \thisstar, using data collected by the K2 mission. The inner planet, \thisfirstplanet, has a radius of 1.6 \rearth\ and an ultra-short orbital period of only 0.96 days. The outer planet, \thissecondplanet, has a radius of 2.9 \rearth\ and orbits its host star every 29.85 days. At a distance of just 45.8 $\pm$ 2.2 pc, \thisstar\ is one of the closest and brightest stars hosting multiple transiting planets, making HD 3167 b and c well suited for follow-up observations. The star is chromospherically inactive with low rotational line-broadening, ideal for radial velocity observations to measure the planets' masses. The outer planet is large enough that it likely has a thick gaseous envelope which could be studied via transmission spectroscopy. Planets transiting bright, nearby stars like \thisstar\ are valuable objects to study leading up to the launch of the James Webb Space Telescope.
Transiting exoplanets are benchmark objects. Like an eclipsing binary star, a transiting exoplanet offers unique opportunities for a rich variety of follow-up studies due to its favorable orbital geometry. It is possible to measure a planet's fundamental bulk properties like mass and radius \citep[e.g.][]{dai, gettel}, study its atmosphere photometrically or spectroscopically \citep[][]{diamondlowe, knutson, kreidberg}, and measure the alignment between the planet's orbit and the host star's spin axis \citep[e.g.][]{sanchisojeda}. More so than eclipsing binary stars, transiting exoplanets are difficult to detect. Wide-field ground-based transit surveys have detected hundreds of hot Jupiters \citep[e.g.][]{wasps}, but the sensitivity of these surveys falls off quickly at longer orbital periods \citep{gaudiperiod} and smaller planet radii \citep{gaudiradius}. Space telescopes like NASA's \Kepler\ observatory \citep{koch}, with smaller fields of view but better photometric precision have been highly successful at detecting small planets \citep{coughlin,morton16}, but these planets typically orbit faint stars due to the narrow survey design. \Kepler\ revolutionized our knowledge of exoplanets, but only a handful of \Kepler's discoveries orbit stars bright enough for detailed follow-up observations. In its extended K2 mission, \Kepler\ is surveying a larger area of sky and is finding more exciting planets suitable for follow-up observations \citep{crossfield}, but planets transiting stars brighter than 9th magnitude remain a rare prize. Pencil-beam ground-based transit surveys like MEarth \citep{irwin}, APACHE \citep{sozzetti}, and TRAPPIST \citep{trappist} have found what are likely among the best planets in the sky for atmospheric characterization, but the number of transiting planets detected by those surveys to date is small enough to count on one hand \citep{charbonneau,gj1132,trappist1}. Today, planets transiting the brightest known host stars in the sky were uncovered by years or decades of precise radial velocity (RV) measurements \citep{winn55cnc, dragomir, motalebi}. While RV searches have been fruitful, the observations are challenging, the surveys require many nights over many years on large telescopes, and the success rate of finding transiting planets is low. \begin{figure*}[ht!] % \centering \includegraphics[width=6.5in]{lc2.eps} \caption{K2 light curve of \thisstar. Top: the full K2 light curve. Both the numerous, shallow transits of \thisfirstplanet\ and three deeper transits of \thissecondplanet\ are evident in the light curve by eye. Bottom left: K2 light curve (grey dots) phase folded on the transits of \thisfirstplanet, and best-fit transit model (thick purple line). Bottom right: K2 light curve (grey dots) phase folded on the transits of \thissecondplanet, and best-fit transit model (thick purple line).} \label{lc} \end{figure*} NASA's TESS mission \citep{ricker}, an all-sky space-based transit survey, is expected to address the need for planets transiting bright host stars by discovering hundreds of the nearest and brightest transiting exoplanets in the sky \citep{sullivan}, but TESS won't begin collecting data until early 2018. There is an immediate need for small planets transiting bright stars that can be studied before the launch of missions like TESS and the James Webb Space Telescope (JWST). The lessons learned from these first planets to be examined in detail will help inform decisions about how to use resources like JWST to efficiently learn about exoplanets. 
Here, we announce the discovery of two super-Earth-sized planets transiting the bright (V = 8.94, K = 7.07) nearby dwarf star \thisstar\ using data from the K2 mission. The inner planet, \thisfirstplanet, is a 1.6 \rearth\ super-Earth that orbits its host every 0.96 days. \thisfirstplanet\ has likely lost most of any atmosphere it once possessed due to the intense radiation environment in its short-period orbit \citep{robertousp}. The outer planet, \thissecondplanet, orbits in a 29.85 day period, and with a radius of 2.9 \rearth\ likely has a thick gaseous envelope. The host star is both bright enough in visible wavelengths for precise RV follow-up to measure the planets' masses, and bright enough in infrared wavelengths to spectroscopically interrogate \thissecondplanet's atmosphere. In Section \ref{observations} we describe our observations of \thisstar, our data reduction, and our analysis. In Section \ref{validation}, we describe our statistical validation of the planet candidates, and in Section \ref{discussion}, we discuss the \thisstar\ planets and their importance in the context of the thousands of known transiting exoplanets.
\label{discussion} The main importance of the \thisstar\ planetary system is due to the brightness and proximity of the host star. With a V-magnitude of 8.94, slow projected rotation of less than 2 \kms, and low activity, \thisstar\ is highly suitable for precise RV observations to measure the planets' masses. If \thisfirstplanet\ is rocky with a mass of about 4 \mearth, it should induce RV variations with a semiamplitude of about 3 \ms. Depending on its composition, \thissecondplanet\ could induce RV variations with a semiamplitude of anywhere between 1 \ms (for a roughly 5 \mearth\ planet) and 3 \ms (for a roughly 15 \mearth\ planet). These signals should be readily detectable with modern spectrographs. \thissecondplanet\ is one of the best currently known small planets for atmospheric characterization with transit transmission spectroscopy. We downloaded a list of transiting planets with radii smaller than 4 \rearth\ from the NASA Exoplanet Archive \citep{akeson} and calculated the expected signal-to-noise ($S/N$) one could hope to accumulate per transit compared to the expected scale height of each planets' atmosphere. In particular, we calculated: \begin{equation} S/N \propto \frac{R_{p} H \sqrt{Ft_{14}}}{R_{\star}^2} \end{equation} \begin{equation} H = \frac{k_{b}T_{eq}}{\mu g} \end{equation} \noindent where $R_p$ is the planet's radius, $R_\star$ is the star's radius, $H$ is the atmosphere's scale height, $k_{b}$ is Boltzmann's constant, $T_{eq}$ is the planet's equilibrium temperature, $\mu$ is the atmosphere's mean molecular weight, $g$ is the planets' surface gravity, $t_{14}$ is the transit duration, and $F$ is the flux from the star. We calculated $F$ from the host stars' H-band magnitudes to test suitability for observations with the Hubble Space Telescope's Wide Field Camera 3 instrument, and we assumed that planets less dense than rocky structural models \citep{zeng} have atmospheres dominated by molecular hydrogen, while planets consistent with rocky structural models have atmospheres dominated by heavier molecules (like oxygen). \begin{deluxetable}{llccl} \tablewidth{0pt} \tablehead{ \colhead{ } & \colhead{Planet} & \colhead{$R_{\rm P}$ (\rearth)}& \colhead{Predicted $S/N ^{a}$} &\colhead{Discovery}} \startdata 1. & GJ 1214 b &$2.85 \pm 0.20$ & 3.9$^{c}$ & MEarth \\ 2. &GJ 3470 b &$3.88 \pm 0.32$ & 1.9$^{c}$ & RV \\ 3. &55 Cnc e & $1.91 \pm 0.08$ & 1.6$^{~}$ & RV \\ 4. &HD 97658 b & $2.25 \pm 0.10$& 1.1$^{c}$ & RV \\ 5. &\thissecondplanet\ & $\rplc \pm \urplc$& 1.0$^{b}$ & K2 \\ \enddata \tablecomments{$a$: the signal-to-noise ratios for transmission spectroscopy per transit are given relative to \thissecondplanet. We note that the predicted signal-to-noise ratios for the five planets listed here are all calculated assuming low mean molecular weight atmospheres. $b$: planet mass used in the $S/N$ calculation estimated using the relation given by \citet{weissmarcy}. $c$: the transmission spectra of these planets are flat, indicating either obscuring clouds/haze layers, or atmospheres with high molecular weights.} \label{best} \end{deluxetable} Table \ref{best} ranks the small transiting exoplanets most amenable to atmospheric characterization. If \thissecondplanet\ has a thick gaseous envelope, as expected based on mass measurements of similarly sized exoplanets \citep{weissmarcy}, only four known small planets are more amenable to atmospheric characterization. 
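To make the ranking metric above concrete, a short sketch implementing Equations (1) and (2) is given below; the planet and stellar parameters in the example call are round-number placeholders rather than the fitted values from this paper, and only relative $S/N$ values are meaningful.
\begin{verbatim}
# Sketch of the transmission-spectroscopy ranking metric defined above:
#   H = k_B T_eq / (mu g)   and   S/N  proportional to  R_p H sqrt(F t14) / R_star^2,
# with F taken proportional to the H-band flux, e.g. 10**(-0.4 * m_H).
import math

k_B = 1.380649e-23       # J / K
amu = 1.6605e-27         # kg
G = 6.674e-11            # m^3 kg^-1 s^-2
R_earth, M_earth = 6.371e6, 5.972e24
R_sun = 6.957e8

def scale_height_km(T_eq, mu_amu, M_p_earth, R_p_earth):
    g = G * M_p_earth * M_earth / (R_p_earth * R_earth) ** 2
    return k_B * T_eq / (mu_amu * amu * g) / 1e3

def snr_metric(R_p_earth, H_km, m_H, t14_hr, R_star_sun):
    F = 10 ** (-0.4 * m_H)   # relative H-band flux
    return R_p_earth * R_earth * H_km * 1e3 * math.sqrt(F * t14_hr) / (R_star_sun * R_sun) ** 2

# e.g. a 2.9 R_earth, ~9 M_earth planet with an H2-dominated (mu ~ 2.3) atmosphere at ~500 K:
print(round(scale_height_km(500.0, 2.3, 9.0, 2.9)))   # ~170 km for these placeholder inputs
\end{verbatim}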
This doesn't necessarily mean atmospheric features will be detected in \thissecondplanet\ -- GJ 1214 b is by far the most amenable small planet for transit spectroscopy, but its transmission spectrum is masked by clouds or hazes \citep{kreidberg}. Indeed, three of the four small planets more amenable to atmospheric characterization than \thissecondplanet\ have flat transmission spectra inconsistent with a hydrogen-dominated atmosphere. A major goal for transmission spectroscopists is understanding which planets form thick clouds or hazes, and on which planets clear skies permit transit spectroscopy. When TESS launches, it will likely find about 80 planets comparable to or better than \thissecondplanet\ for transmission spectroscopy (as calculated above for existing planets). Studying \thissecondplanet\ now could inform the choice of which TESS planets should be observed to most efficiently learn about the atmospheres of small planets. Unlike most known multi-transiting systems, the planets in the \thisstar\ system are widely separated in orbital period. The period ratio of $P_c/P_b$ = 31.1 is larger than the period ratios of 99\% of all pairs of adjacent planets in the \Kepler\ Data Release 24 planet candidate catalog \citep{coughlin}. This could suggest the presence of additional, non-transiting, planets in the \thisstar\ system which might be revealed by RV observations. With a period of just 0.96 days, \thisfirstplanet\ is an example of an ultra-short period (USP) planet, as defined by \citet{robertousp} \citep[although its period is longer than the 12 hour cutoff chosen by][]{jackson}. \citet{robertousp} found that in the \Kepler\ sample, essentially all USP planets have radii smaller than 2 \rearth, indicating that the intense radiation so close to their host stars has stripped the planets of thick gaseous envelopes. Even though RV studies have shown that 1.6 \rearth\ planets often have thick gaseous envelopes, \thisfirstplanet's radiation environment makes it likely its composition is also predominantly rocky. Finally, we note that the short period of \thisfirstplanet\ makes it likely that spectroscopic observations of \thissecondplanet's atmosphere might overlap with a transit of the inner planet \citep[see, for example,][]{dewit}. This could be an efficient way to rule out a hydrogen-dominated atmosphere for \thisfirstplanet. Observers should be cautious, however, to ensure that a transit of \thisfirstplanet\ not interfere with out-of-transit observations necessary for calibration.
16
7
1607.05248
1607
1607.05281_arXiv.txt
The exponential scale length ($L_d$) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and helping us to understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and often are statistically incompatible with one another. Here, we perform a Bayesian meta-analysis to determine an improved, aggregate estimate for $L_d$, utilizing a mixture-model approach to account for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery we explore a variety of ways of modeling the nature of problematic measurements, and then employ a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of $L_d$ available in the literature; these involve a broad assortment of observational datasets, MW models and assumptions, and methodologies, all tabulated herein. Analyzing the visible and infrared measurements separately yields estimates for $L_d$ of $2.71^{+0.22}_{-0.20}$ kpc and $2.51^{+0.15}_{-0.13}$ kpc, respectively, whereas considering them all combined yields $2.64\pm0.13$ kpc. The ratio between the visible and infrared scale lengths determined here is very similar to that measured in external spiral galaxies. We use these results to update the model of the Galactic disk from our previous work, constraining its stellar mass to be $4.8^{+1.5}_{-1.1}\times10^{10}$ M$_\odot$, and the MW's total stellar mass to be $5.7^{+1.5}_{-1.1}\times10^{10}$ M$_\odot$.
\label{sec:intro} Since the invention of the first telescopes, astronomers have been trying to explain the distribution of stars that make up the Milky Way (MW). The earliest maps of our Galaxy were developed by keeping a simple tally of the number of stars one could see as a function of their apparent brightness and position on the sky and then interpreting these star counts with a few basic assumptions \citep{Herschel1785,Kap20,Kapteyn22,Seares25,Bok37,Oort38}. Remarkably, while the accessibility and quality of data have been drastically transformed by advancing technology, this same basic methodology has underlain most present-day photometric models of the MW \citep{Bahcall86}, with only a small subset of recent studies utilizing more sophisticated techniques. Today, a wealth of high-quality, well-calibrated observational data for stars has been accumulated from a wide array of multi-band photometric surveys carried out over the past three decades, using both visible and infrared (IR) light. As a result, the literature is rich with studies on the geometrical structure of the Galaxy. The current picture of the MW has been radically transformed since the first pioneering papers, which followed the advent of detailed studies of extragalactic spiral galaxies \citep{deV59,Freeman70,Kormendy77}. Today, it is well understood that the major stellar components of the MW include a bar with a bulge or pseudobulge at its center and a flattened disk that is much more visibly extended \citep[see, e.g.,][and references therein]{Licquia1}. Generally, current models assume that the distribution of stars comprising the disk \emph{to first order} is axisymmetric and follows an exponentially declining density profile, both radially and vertically, such that the volume density may be written as \begin{equation} \rho_\star(R,\phi,Z) = \frac{\Sigma_\star(0)}{2H_d}\exp\left(-\frac{R}{L_d}-\frac{|Z|}{H_d}\right), \label{eq:rho} \end{equation} where $R$, $\phi$, and $Z$ are the Galactocentric cylindrical coordinates, $\Sigma_\star(0)$ is the central stellar surface density, $H_d$ is the (vertical) disk scale height, and $L_d$ is the (radial) disk scale length. In some cases, authors alternatively employ an isothermal-sheet model for the vertical structure that replaces the exponential $Z$-dependence in Equation \eqref{eq:rho} with a $\sech^2$ dependence, with the appropriate renormalization factors \citep[cf.][]{Spitzer42,vdKSearle,Freeman78}; however, the details of this are not of interest here. More importantly, $L_d$ represents the radius containing the first $e$-folding of starlight within the disk in projection, or in other words where the surface density declines to $\sim$37\% of $\Sigma_\star(0)$, and hence provides a standard measure of the absolute physical size of the Galactic disk. Dozens of attempts have been made to determine $L_d$ over the past few decades, making it one of the most investigated characteristics of our Galaxy. Here, we focus exclusively on those estimates from photometric models of the MW in order to enable direct comparisons to measurements of extragalactic objects. We have collected a total of 29 different measurements from the literature since 1990: 15 of them are based on optical data and 14 on IR data. Altogether, these values lie in the range of $\sim$2--6 kpc, and upon close inspection reveal little consensus on the true size of the Galactic disk.
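For reference, the interpretation of $L_d$ quoted above (the projected surface density falls to $e^{-1}\simeq37\%$ of its central value at $R=L_d$) follows from integrating Equation \eqref{eq:rho} vertically, which gives $\Sigma_\star(R)=\Sigma_\star(0)\exp(-R/L_d)$; the short numerical check below, written in units of $L_d$ and purely illustrative, also evaluates the fraction of the disk light enclosed within a given radius.
\begin{verbatim}
# Check of the interpretation of L_d quoted above. Vertically integrating the
# double-exponential density gives Sigma(R) = Sigma(0) exp(-R/L_d), and the cumulative
# light (or mass) within projected radius R = x * L_d is  1 - (1 + x) exp(-x).
import math

def enclosed_fraction(x):
    """Fraction of an exponential disk's light inside projected radius R = x * L_d."""
    return 1.0 - (1.0 + x) * math.exp(-x)

print(round(math.exp(-1.0), 3))          # 0.368: Sigma(L_d)/Sigma(0), the ~37% quoted above
print(round(enclosed_fraction(1.0), 3))  # 0.264: light fraction inside R = L_d
print(round(enclosed_fraction(4.0), 3))  # 0.908: light fraction inside R = 4 L_d
\end{verbatim}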
At least some of this disparity is likely due to variations in the assumptions that go into each MW model, which typically include between one and five stellar components that are fit by up to a dozen free parameters. Other possible issues are that unidentified substructures present in the data are biasing models fit to particular lines of sight, or that there are a multitude of very different models that are roughly equally successful in fitting the data \citep{Juric08}. To account for these complications many authors have incorporated substructure features into their models (e.g., spiral arms and rings), while others test a variety of functional forms for the assumed density law. In this paper, we address the question: given the measurements available in the literature, what is the best photometric estimate of the MW's disk scale length? In \citet[][in preparation]{Licquia3}, a companion paper to this study, we have found that scale lengths measured from optical photometry of other massive spiral galaxies \citep{Hall12}, which employed the same exponential density model as Equation \eqref{eq:rho}, span a range of $\sim$1--10 kpc. This is rather comparable to the range of values for the MW described above. To determine more precisely where the MW falls within this range, we can perform a Bayesian mixture model (BMM) meta-analysis \citep{Licquia1} of Galactic disk scale length estimates. This method will enable us to investigate and remedy any sources of tension amongst disk scale length measurements. Simultaneously, it will provide a single aggregate result that is built upon the rich assortment of photometric survey data that is available, but that also accounts for the possibility that any of the included estimates are offset due to systematics or bear an underestimated error bar, and then incorporate that information into the overall uncertainties in the combined result. A BMM analysis will yield improved constraints on the Galactic $L_d$, which in turn will help us to better understand how our Galaxy measures up to its extragalactic peers. The structure of this paper is as follows. In \S\ref{sec:data}, we begin by describing the sample of Galactic $L_d$ estimates we have obtained, emphasizing the variety of observational data, MW models, and analysis techniques that they involve. In \S\ref{sec:methods}, we explain the first-order corrections we make in order to place these estimates on an equal footing, and the BMM meta-analysis we subsequently perform on the resulting dataset. Here, we also introduce a Bayesian model averaging technique that we use to produce our final posterior distributions and explain why it is appropriate to use in this work. In \S\ref{sec:results} we present the aggregate results for $L_d$, including those from both segregating and combining the IR and visible data. Here, we also present the results for parameters which characterize the overall consistency of the estimates in our dataset, as well as the different ways we have tested for robustness. In \S\ref{sec:discussion}, we provide comparisons to $L_d$ estimates that have been determined from dynamical modeling, as well as to visible-to-IR scale length ratios measured for external galaxies. In this section we also construct an updated model of the stellar disk using the results found here in order to revise our previous estimate of the total stellar mass from \citet{Licquia1}. Lastly, in \S\ref{sec:summary} we summarize this study and highlight our conclusions.
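A minimal sketch of the kind of good/bad mixture likelihood that underlies the BMM approach described above is given below. It is purely illustrative: a grid posterior with invented measurements and a single, fixed error-inflation factor, whereas the analysis in this paper uses the 29 literature values, several bad-measurement parameterizations, and Bayesian model averaging.
\begin{verbatim}
# Illustrative Bayesian mixture-model combination of scale-length measurements:
# each measurement is "good" (Gaussian with its quoted error) with probability f,
# or "bad" (error inflated by a factor k). The posterior for L_d is evaluated on a grid.
import numpy as np

meas = np.array([2.3, 2.6, 2.5, 3.8, 2.7])   # invented L_d estimates [kpc]
err  = np.array([0.2, 0.3, 0.2, 0.3, 0.4])   # invented 1-sigma errors [kpc]
f, k = 0.7, 3.0                              # prob. of being "good"; error-inflation factor

L_grid = np.linspace(1.5, 5.0, 701)

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (np.sqrt(2.0 * np.pi) * sig)

logL = sum(np.log(f * gauss(m, L_grid, s) + (1.0 - f) * gauss(m, L_grid, k * s))
           for m, s in zip(meas, err))
post = np.exp(logL - logL.max())
post /= np.trapz(post, L_grid)               # flat prior on L_d over the grid

print(round(np.trapz(L_grid * post, L_grid), 2))  # posterior mean; the 3.8 kpc outlier is downweighted
\end{verbatim}
Unlike an inverse-variance-weighted mean, the mixture likelihood lets a discrepant point be absorbed into the broad component rather than dragging the combined estimate.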
\label{sec:summary} In this study, we have set out to determine a combined, robust estimate of the scale length of the Galactic disk, $L_d$, measured at visible and IR wavelengths, given the large array of data available in the literature. Upon thoroughly investigating the previous estimates of $L_d$ (see Table 1), we find that the set of Galactic models that are employed display as much variety as the observational datasets they are optimized to match, typically containing around a dozen free parameters. Aside from the wide assortment of model assumptions involved, given that measurements of $L_d$ are produced from fitting the smooth underlying structure of the disk, these estimates are also susceptible to systematic error due to undetected substructures present in the true distribution of stars along any particular line of sight through the Galaxy (J08). As a result of these variations in methodology and data, the estimates we have collected fall anywhere in the range $2\lesssim L_d\lesssim6$ kpc. This is comparable to the dynamical range of scale lengths measured for other galaxies of similar mass to the MW in the local Universe ($1\lesssim L_d\lesssim10$ kpc; cf. Licquia et al. 2016, in preparation), and hence an improved determination of the scale length will also improve our understanding of how the MW fits amongst the broader population of galaxies. Foregoing less sophisticated meta-analysis techniques (e.g., the inverse variance-weighted mean; IVWM), we have opted for a robust analysis method that has proven powerful in many applications inside and outside of astronomy. More specifically, we have produced our results by statistically combining the literature estimates using a Bayesian mixture-model (BMM) approach, which allows us to account for the possibility that one or more of these estimates have not properly accounted for all statistical or systematic errors. Through Monte Carlo techniques we have ensured that all the estimates used are rescaled to reflect current knowledge of the Sun's distance from the Galactic center, $R_0$. Lastly, we have implemented a Bayesian model averaging (BMA) technique to obtain the posterior for $L_d$ marginalized over all the bad-measurement models we have investigated, taking into account both goodness-of-fit and model complexity. Ultimately, we find the Galactic scale length to be $L_d = 2.71^{+0.22}_{-0.20}$ kpc for visible starlight, $L_d = 2.51^{+0.15}_{-0.13}$ kpc for IR starlight, and $L_d = 2.64\pm0.13$ kpc when integrating visible and IR starlight measurements (see \S\ref{sec:r0dependence} for discussion on how these results depend on $R_0$). In Table \ref{table:HB_results} we have listed a full summary of results from our bad-measurement models, the result from applying BMA, as well as the IVWM for comparison. We have demonstrated in \S\ref{sec:robust} that our results are robust to varying many of the assumptions we have made in our analyses, and in \S\ref{sec:comp_extragalactic} that they are consistent with passband-to-passband variations measured for external disks. We have also used our results to revise the estimate of the MW's total stellar mass from \citet{Licquia1} in \S\ref{sec:stellar_mass}. Using the IR scale length measurement found here, we find that the mass of the stellar disk is $4.8^{+1.5}_{-1.1}\times10^{10}$ \massunits. 
Combining this with the BMM estimate for the bulge+bar mass in a model-consistent manner using the framework of \citet{Licquia1}, we have determined the MW's total stellar mass to be $5.7^{+1.5}_{-1.1}\times10^{10}$ \massunits. For convenience, we have compiled in Appendix \ref{sec:updated_props} several tables displaying the updated constraints we have produced for the structural and mass properties of the MW, as well as updates to the results from \citet{Licquia2} using the stellar mass derived here. The remaining literature estimates of $L_d$ to which we have compared our results are generically constrained by modeling stellar kinematic observations, and hence describe the radial distribution of \emph{total} mass in the Galaxy. Nevertheless, the majority of such estimates compare well with the values we have presented in this study based on visible/IR starlight, though there are a few that are in significant tension, favoring instead values of $L_d$ far below or above our constraints. It is beyond the scope of this paper to comment on whether a dynamically constrained $L_d$ is a fair comparison with those based on starlight, but this agreement adds some credence to our stellar scale length estimates. Interestingly, it appears that dynamical estimates are as prevalent \emph{and} as disparate as photometric estimates, and astronomers in need of adopting a value from the literature would likely benefit from applying the type of BMM analysis we have employed here to that set as well. In totality, the results of this study, in combination with those of our previous works which we have updated here, provide a much improved, comprehensive picture of the MW. More specifically, we have determined tight constraints on a variety of the Galaxy's global properties, including its total stellar mass, star formation rate, photometric disk scale length, and optical luminosity and color index, using methods that either circumvent or correct for many of the major systematics that have traditionally affected them. Moreover, all of these values have been produced from a single, consistent model of the MW that reflects our best-to-date knowledge of its structural parameters, and which rests upon the same basic assumptions that are used for studying extragalactic objects. All of this work culminates in a newfound ability to assess accurately how the properties of our Galaxy compare to scaling relations found for external spiral galaxies. In a companion paper to this one (Licquia et al. 2016; in preparation), we will present new comparisons of the MW to both the Tully-Fisher relation and 3-dimensional luminosity-velocity-size relations for other massive spiral galaxies in order to assess how our Galaxy truly fits in a variety of extragalactic contexts. \clearpage
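As a rough illustration of how the scale length feeds into a disk stellar-mass estimate, the snippet below evaluates $M_{\rm disk}=2\pi\,\Sigma_\star(R_0)\,e^{R_0/L_d}\,L_d^2$ for representative round numbers; these are not the inputs of the model-consistent calculation actually performed in this work, so the result should be read only as an order-of-magnitude consistency check.
\begin{verbatim}
# Rough illustration: total mass of an exponential stellar disk from the local surface
# density Sigma(R0), the solar Galactocentric radius R0, and the scale length L_d:
#   M_disk = 2 pi Sigma(R0) exp(R0 / L_d) L_d^2
# Inputs are representative round numbers, not the adopted values of this paper.
import math

Sigma_R0 = 35.0     # Msun / pc^2, illustrative local stellar surface density
R0 = 8.3e3          # pc
L_d = 2.51e3        # pc, the IR scale length found in this work

M_disk = 2.0 * math.pi * Sigma_R0 * math.exp(R0 / L_d) * L_d ** 2
print(f"{M_disk:.1e} Msun")   # a few x 10^10 Msun, the same order as the quoted 4.8e10 Msun
\end{verbatim}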
16
7
1607.05281
1607
1607.05762_arXiv.txt
The size, shape, and degree of emptiness of void interiors shed light on the details of galaxy formation. A particularly interesting question is whether void interiors are completely empty or contain a dwarf population. However, the nearby voids that are most conducive to dwarf searches have large angular diameters, on the order of a steradian, making it difficult to redshift-map a statistically significant portion of their volume to the magnitude limit of dwarf galaxies. As part of addressing this problem, we investigate here the usefulness of number counts in establishing the best locations to search inside nearby ($d <$ 300 Mpc) galaxy voids, utilizing Wolf plots of $\log(n < m)$ vs. $m$ as the basic diagnostic. To illustrate the expected signatures, we consider three void profiles, ``cut out'', ``built up'', and ``universal profile'', carved into Monte-Carlo Schechter-function models. We then investigate the signatures of voids in the Millennium Run dark matter simulation and the Sloan Digital Sky Survey. We find in all of these that the evidence for cut-out and built-up voids is most discernible when the void diameter is similar to the distance to its center. However, the density distribution of the universal profile that is characteristic of actual voids is essentially undetectable at any distance. A useful corollary of this finding is that galaxy counts are a reliable measure of survey completeness and stellar contamination even when sampling through significant voids.
It is well-known that galaxies are assembled into a ``cosmic web'' of clusters, filaments, and sheets surrounding under-dense voids. The distribution of galaxies within it, particularly in the low density volumes, gives insight into galaxy formation and evolution and the role dark matter plays in it (e.g. \citealp{Benson03} and references therein). Lambda cold dark matter ($\Lambda$CDM) models predict the existence of many low-mass dark-matter halos in voids (e.g. \citealp{Dek86,Peb01,Hoff92,Tik09}). If galaxy formation has proceeded in these halos then voids should have a population of smaller galaxies in their interior (e.g. \citealp{Tik09}). But studies like \citet{Hoy05}, do not find them. This problem was termed the ``void phenomenon'' by \citet{Peb01}. \citet{Tink09}, using an approach to galaxy biasing similar to that of \citet{Croton06}, found a good match between theory and observation for the low-density luminosity function (LMF), nearest neighbor statistics for dwarfs, and the void probability function of faint galaxies. They predict that voids should be empty of dwarf galaxies fainter than seven magnitudes below $M^*$ or about $M_{r'} = -14$. They further predict that the drop-off in galaxy absolute magnitudes when transitioning from a filament to a void can be as steep as five magnitudes over $1 h^{-1}$ Mpc $(h = H_o/100)$. Therefore the presence or absence of dwarf galaxies in voids and the abruptness of void boundaries differentiate between these two models. \citet{Fos09} have compiled a list of nearby voids in the Updated Zwicky Catalog. The closest of these are excellent places to search for dwarfs. But their angular diameters are large, on the order of a steradian. Searching for dwarfs in their centers requires obtaining spectroscopic redshifts for a complete sample fainter than $r'$ of 20, a prohibitive task. For this reason surveys have concentrated on small fields or a subset of the galaxy population such as galaxies with emission (e.g. \citealp{Sand84,Mood88,Kiss}). To help survey more efficiently we are revisiting the use of galaxy number counts to find the best possible places to search. Galaxy number count (GNC) analysis is an accessible tool with a venerable history in clustering studies. Its many applications include galaxy luminosity evolution \citep{Metcalfe06, Bershady98}, mapping galactic extinction \citep{Fukugita04, Yasuda07}, mapping the extent of external galaxies \citep[e.g.][]{Ellis07}, delineating large-scale structure \citep{Frith06, Dolch05, Fukugita04}, exploring galaxy star-formation evolution \citep{Kong06}, mapping galaxy x-ray evolution \citep{Georgakakis06}, determining the AGN fraction \citep{Treister06}, and better understanding galaxy formation processes \citep{Berrier06, Lopez04}. In this report we investigate the suitability of using GNC analysis to map the extent of emptiness of nearby galaxy voids. As a diagnostic, we utilize plots of $\log(n < m)$ vs. $m$, called ``Wolf plots'' after the work of \citet{Wolf23,Wolf24,Wolf26,Wolf32}, who used them to study stellar extinction. If space is uniformly filled with objects of an arbitrary but unchanging magnitude distribution, it is straight-forward to show that the slope of $\log(n < m)$ vs $m$ has a constant value of 0.6 in a complete magnitude-limited survey of objects that do not evolve with depth. A slope greater than 0.6 indicates a spatial overdensity in the survey volume, while a slope less than 0.6 indicates an underdensity or void. 
\citet{Koo86} has shown that evolutionary effects cause the slope to vary between 0.43 and 0.68 depending on depth and survey filter but the slope itself is reasonably constant over magnitude spans of interest here. In this report we consider how voids can be understood using Wolf plots. We first look at their behavior in Monte-Carlo models of uniform galaxy distributions within which a number of different shaped voids were carved. We then examine three voids from the Millennium Run simulation \citep{Springel05} and the semi-analytic galaxy catalog of \citet{Croton06}. Finally we examine a section of the Sloan Digital Sky Survey (SDSS) Data Release 7 \citep{Abazajian09} photometric data set.
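The Euclidean value of 0.6 for the slope of $\log(n<m)$ vs. $m$ quoted above can be verified with a few lines of Monte Carlo, as sketched below; the luminosity function, units, and survey depth are arbitrary and purely illustrative.
\begin{verbatim}
# Monte-Carlo check of the Euclidean slope d log N(<m) / dm = 0.6 for a homogeneous,
# non-evolving population; the result is independent of the luminosity function chosen.
import numpy as np

rng = np.random.default_rng(0)
n_gal, R_max = 2_000_000, 1000.0                     # galaxies filling a sphere of 1000 Mpc
d = R_max * rng.uniform(size=n_gal) ** (1.0 / 3.0)   # uniform in volume
M = rng.uniform(-22.0, -18.0, size=n_gal)            # arbitrary flat luminosity function
m = M + 5.0 * np.log10(d * 1e6 / 10.0)               # apparent magnitude (d in Mpc)

# fit the cumulative counts at magnitudes bright enough to be unaffected by the
# outer boundary of the simulated sphere (complete here for m < -22 + 40 = 18)
m_grid = np.arange(14.0, 17.01, 0.25)
logN = np.log10([np.sum(m < mg) for mg in m_grid])
print(round(np.polyfit(m_grid, logN, 1)[0], 2))      # ~0.60
\end{verbatim}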
Both simple and more complicated simulations suggest that Wolf plots are capable of revealing voids in the universe closer than 300 Mpc, if those voids have a cut-out-type density profile. That some may have this type of profile is suggested by the Millennium Run data. This suggests that it is possible to find corridors of emptiness by making a grid of Wolf plots in the area of interest and looking for the signature of a cut-out void. But accumulating statistics large enough for a well-formed Wolf plot requires sampling large areas of the sky. Features smaller than the areal extent are quickly averaged out to the point where they cannot be detected. On top of this, the universal profile of \cite{Nadathur14} is effectively masked in Wolf plots. Therefore, Wolf plots of the entire galaxy population, as constructed here, do not appear to be an effective way to locate or map typical voids. The consolation is that they are a reasonable way to check for survey contamination and completeness. A central question of this study is whether Wolf plots are capable of revealing the best places to look for dwarf galaxies inside voids. Unfortunately, the lack of a signature from the universal profile, together with the faintness of dwarfs, militates against this. Wolf plots in particular, and GNC analysis in general, average across too great a population spread too far in space to reveal a signature by themselves. We conclude that spectroscopic analysis of individual candidates chosen through other means is still a superior approach.
16
7
1607.05762
1607
1607.00237_arXiv.txt
{In this work, we study the possibility of setting up a Bell's-inequality-violating experiment in the context of cosmology, based on the basic principles of quantum mechanics. We first present the physical motivation for implementing a violation of Bell's inequality in a cosmological setting. To set up the cosmological Bell-violating test experiment, we then introduce a model-independent theoretical framework within which we study the creation of new massive particles, applying the WKB approximation to the scalar fluctuations in the presence of an additional time-dependent mass contribution in cosmological perturbation theory. For completeness, we compute the total number density and energy density of the newly created particles in terms of the Bogoliubov coefficients obtained from the WKB approximation. Next, using the background scalar fluctuation in the presence of the new time-dependent mass contribution, we explicitly compute the one-point and two-point correlation functions. Furthermore, using the result for the one-point function, we introduce a new theoretical cosmological parameter which can be expressed in terms of other known inflationary observables and can be treated as a future theoretical probe to break the degeneracy among various models of inflation. Additionally, we fix the scale of inflation in a model-independent way without any prior knowledge of primordial gravitational waves. Using the input from the newly introduced cosmological parameter, we also give a theoretical estimate for the tensor-to-scalar ratio in a model-independent way. Next, we comment on the technicalities of measurements involving isospin-breaking interactions and on the future prospects of the newly introduced massive particles in the cosmological Bell-violating test experiment. Further, we give a concrete example of this setup in the context of the string-theory-motivated axion monodromy model. We then comment on the explicit roles of decoherence effects and of high spin in the cosmological Bell-violating test experiment. Finally, we provide a theoretical bound on the heavy-particle mass parameter for scalar fields, the graviton, and other high-spin fields from our proposed setup.}
In the year \textcolor{red}{\bf 1935}, \textcolor{violet}{\bf Einstein, Podolsky and Rosen (EPR)} in ref.~\cite{epr:1935ja} mentioned that, \textcolor{blue}{{\it ``If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity''}}. This work also claimed that quantum mechanics cannot be a complete theoretical framework, therefore there has to be some element exists using which it is not possible to describe within the basic principles of quantum mechanics. Furthermore the authors also added that, \textcolor{blue}{{\it ``While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible''}}. Based on all such statements one can ask a question regarding the existence of all such missing elements in quantum physics theory. Later \textcolor{violet}{\bf J. S. Bell} introduced the existence of \textcolor{blue}{\it``hidden''} variables which directly implies that in spin correlation measurements the measurable probabilities must satisfy the proposed \textcolor{blue}{\it Bell's inequality} \cite{Bell:1964kc} within the framework of quantum mechanics. For completeness here we also mention some of the remarkable works in the area of quantum mechanics proposed upto Bell test experiment: \begin{itemize} \item \textcolor{red}{\bf 1927} \textcolor{blue}{{\it Copenhagen interpretation of Quantum Mechanics (Bohr, Heisenberg)}}, \item \textcolor{red}{\bf 1935} \textcolor{blue}{{\it Einstein-Podolsky-Rosen (EPR) paradox}}, \item \textcolor{red}{\bf 1952} \textcolor{blue}{{\it De Broglie-Bohm nonlocal hidden variable theory (Bohmian Mechanics)}}, \item \textcolor{red}{\bf 1964} \textcolor{blue}{{\it Bell’s Theorem on local hidden variables}}, \item \textcolor{red}{\bf 1972} \textcolor{blue}{{\it First experimental Bell test (Freedman and Clauser)}}. \end{itemize} Later the actual version of the Bell's inequality have been proved incorrect by many experiments performed till date, which in turn proves that the nature is non-local and hence all the particles can interact with each other without bothering about the underlying interaction scale and the corresponding distance (length scale) between all of them. This underlying principle of violation of Bell's inequality is thoroughly used in our work to setup the cosmological experiment and to study some of the unexplored important features in the context of early universe. It is a very well known fact that our present understanding of the large scale structure formation of universe is that, it is actually originated from the small scale perturbations and once the universe became matter dominated then gravitational effects mimics its role in cosmological evolution, which we observe today through various cosmological observations. For the formation of the structure due to gravitational instability of what we observe today, there has to be pre-existing small fluctuations on physical length scales. In the model of Big Bang it is almost impossible to produce fluctuations in any arbitrary length scale, so in such a case we put these small perturbations by hand. 
The proper physical explanation for these small scale perturbations is that these perturbations arises due to density fluctuations in the inflationary epoch \cite{juan:2015ja,Mukhanov:1981ja,Hawking:1981ja,Starobinsky:1982ja,bardeen:1983jj}, which have a quantum mechanical origin. In the context of modern cosmology, as we know one of the main basic idea is that there occurred an event namely epoch in the very early time of the universe where the universe is vacuum dominated matter or radiation. Therefore during this era the scale factor grew almost exponentially in time. We can also understand why the observable universe is homogeneous and isotropic if this quasi exponential expansion occurred in the very early age of universe. This epoch is commonly known as inflation. This theory was first introduced by \textcolor{violet}{\bf A. Guth} in ref.~\cite{Alan:1981ja}. Primordial density perturbation is actually the vacuum fluctuation which survived after the period of inflation which may be the most possible reason for the large scale structure formation of our universe and CMB anisotropy. In the present context we are primarily interested in the specific type of inflation theory which removes the shortcomings of standard Big Bang theory, also helps us to the get the mostly favored possible explanation of the homogeneity and isotropic of CMB and to construct a Bell's inequality violating cosmological setup. Therefore the inflationary paradigm predicts that the origin of large scale structure, which we actually observe is nothing but the outcome of quantum mechanical fluctuations after the inflationary period. Such quantum fluctuations make the inflationary paradigm consistent with various cosmological observations compared to the other classical statistical fluctuations appearing in the present context by following the same epoch \cite{Mukhanov:1981ja,Hawking:1981ja,Starobinsky:1982ja,bardeen:1983jj,Alan:1981ja}. Here it is important to note that, in case of classical statistical approach frictional force acts as a external source using which inflaton energy is converted to the other forms of energy and finally produce fluctuations. Now further using this information one can compute, also compare and constrain two and three point correlation functions from quantum fluctuations and classical statistical fluctuations and check the consistency relations from any higher point correlation functions. Here additionally it is important to note that, during the quantum mechanical interpretation of the required fluctuations, highly entangled quantum mechanical wave function of the universe plays a significant role. Due to this fact, quantum fluctuations can be theoretically demonstrated as well as implemented in the context of primordial cosmology, iff we can perform a Bell's inequality violating cosmological experiment using the highly quantum mechanical entangled wave function of the universe defined in the inflationary period. Throughout this paper we will develop a theoretical setup to address various fundamental questions related to general aspect of Bell's inequality violation and also study the various unexplored physical consequences from cosmological Bell's inequality violating experiment. 
To describe the theoretical framework and background methodology in detail, recall that in quantum mechanics a Bell test experiment is described by the measurement of two non-commuting physical operators associated with two distinct locations in space-time. By analogy, in primordial cosmology one can perform observations on two spatially separated regions which are causally disconnected up to the epoch of reheating. In cosmological observations one measures the numerical values of various observables (along with their cosmic variance), which can be computed from the scalar curvature fluctuation. Despite this observational success, it is impossible to measure the associated canonically conjugate momentum for any of these observables, and consequently it is impossible to measure the imprints of two non-commuting operators in primordial cosmology. Taken at face value, this serious drawback implies that a Bell's inequality violating experiment cannot be set up in cosmology. Before drawing such a strong conclusion, however, one needs to investigate the decoherence effect and its impact on cosmological observations \cite{Burgess:2006jn,Nelson:2016kjm,Dpaas:1996st,Fcldn:2005st,Marti:2007st,Blhu:1995st,Rlaflamme:1990st, Prokopec:2007st,Gdmoore:2007st,Holmann:2008st,franco:2011st}. If the cosmological observables satisfy the basic requirements of decoherence, then measurements of two commuting cosmological observables become possible and one can design a Bell's inequality violating cosmological experimental setup. In quantum mechanics such an experiment requires repeated measurements on the same object (here the same quantum state), so that each measurement can be attributed to a single quantum state. The same idea can be implemented in cosmology: two spatially separated patches of the full sky play the role of repeated Bell's inequality violating measurements performed on the same quantum mechanical state, and one can choose the appropriate properties of these patches to set up the experiment. Most importantly, if a direct link can be established between these non-commuting cosmological observables and a classical probability distribution function originating from the inflationary paradigm, then a Bell's inequality violating cosmological experimental setup is certainly possible. In this work we address the following points, through which the underlying framework and the consequences of the proposed Bell's inequality violating experimental setup in cosmology can be understood.
These issues are: \begin{itemize} \item Setting up a cosmological Bell's inequality violating experiment within the framework of inflation in the presence of new heavy fields, which appear as additional field content along with the inflaton. We show that a time dependent mass profile for such heavy fields plays a significant role in setting up the Bell's inequality violating experiment. \item The explicit role of the one point and two point correlation functions, which quantify the effect of Bell's inequality violation in the presence of a non-negligible heavy field mass profile. \item The particle creation mechanism for such heavy fields for different time dependent mass profiles, which is responsible for Bell's inequality violation in the cosmological setup. \item The exact connection between such heavy fields and the axion field appearing in the monodromy model of string theory. \item The specific role of isospin breaking phenomenological interactions of the heavy fields during the Bell's inequality violating measurement. \item The exact role of high spin heavy particles in the particle creation mechanism and in quantifying the amount of Bell's inequality violation in the cosmological setup. \item A generic mass bound on scalar heavy fields and on high spin heavy fields within a model independent framework of the inflationary paradigm. For this purpose we use the Effective Field Theory (EFT) framework for inflation \cite{lopez:2012jj,Behbahani:2011it,Choudhury:2014wsa,Choudhury:2015eua, Choudhury:2015pqa,Cheung:2007st,noumi:2013jj}. \item The connection between the scale of inflation, or more precisely the underlying theory of inflation, and the amount of Bell's inequality violation in the proposed cosmological experimental setup. \item A model independent quantification of primordial gravitational waves, through the tensor-to-scalar ratio, in terms of the amount of Bell's inequality violation in cosmology. If the amount of Bell's inequality violation in the cosmological setup is known from some other probe, this model independent relation can be used to put stringent constraints on inflationary models. Conversely, if the amount of Bell's inequality violation cannot be quantified by any other experimental probe but the tensor-to-scalar ratio is measured by future observations, the same relation can be used to quantify the amount of Bell's inequality violation in cosmology. \item The exact role of the initial conditions, i.e. the choice of inflationary vacuum, in violating Bell's inequality in de Sitter and quasi de Sitter cosmological setups. \item A specific cosmological observable, proposed within the inflationary paradigm, through which the effect of Bell's inequality violation can be explicitly quantified~\footnote{In ref.~\cite{juan:2015ja} the author also mentioned this possibility in the context of a baroque inflationary model in which the cosmological Bell's inequality violating experiment can be performed. In this paper we explore further possibilities in detail by proposing various time dependent mass profiles for the heavy fields with an arbitrary choice of initial conditions, i.e. choice of vacuum. We quote the results for the Bunch Davies vacuum and the $\alpha$ vacuum for the sake of completeness. In addition, we provide an explicit form of the new inflationary observable through which the effect of Bell's inequality violation in the cosmological setup can be quantified.}. We also express various known inflationary observables in terms of this newly proposed observable. This conversion is only possible if the heavy fields are massive compared to the Hubble scale and follow a mass profile as mentioned earlier. \end{itemize} Before going into further details, let us state clearly the underlying assumptions of the background setup used in this paper: \begin{enumerate} \item The UV cut-off of the effective theory is given by the scale $\Lambda_{UV}$. For our purpose we fix $\Lambda_{UV}=M_p$, where $M_p$ is the reduced Planck mass. \item The inflaton and the heavy fields are minimally coupled to the Einstein gravity sector. \item The effective sound speed is $c_{S}\neq 1$. Within the EFT framework one always has $c_{S}\leq 1$: for canonical slow roll models $c_S=1$, and for all other cases $c_S<1$. \item Various choices of initial conditions are taken into account in our computation. We first derive the results for an arbitrary choice of vacuum and then quote the results for the Bunch Davies, $\alpha$ and a special type of vacuum. \item To express the scale of inflation in terms of the amount of Bell's inequality violation in the cosmological experimental setup, we assume that the slow-roll prescription holds in the EFT sector. Consequently we use the slow-roll consistency conditions, for example $r=16\epsilon c_{S}$, to obtain the tensor-to-scalar ratio in terms of the Bell's inequality violating observable. Without assuming slow roll, one can still express the first Hubble slow roll parameter $\epsilon=-\dot{H}/H^2$ in terms of the Bell's inequality violating observable within the EFT framework. \item For the computation of the Bogoliubov coefficients we introduce a cut-off in the conformal time scale to obtain the regularized, finite analytical contribution for each time dependent mass profile. Consequently the other parameters derived from the Bogoliubov coefficients, i.e. the reflection and transmission coefficients and the number and energy densities, follow the same approximation during massive particle creation. \item To use the analogy with the axion monodromy model of string theory we neglect the effect of back-reaction and restrict the effective potential to its mass term. This allows us to identify cleanly the correspondence between the heavy fields and the axion. \item We use approximate WKB solutions to quantify particle creation for arbitrary time dependent mass profiles of the heavy fields, since it is not always possible to obtain the exact mode functions in Fourier space by solving the equation of motion for the heavy fields. In some cases, where the time dependence of the mass parameter is slowly varying, we provide the exact solution. We use these results to compute the one point and two point correlation functions. \item To study the role of arbitrary spin fields with spin ${\cal S}>2$ in Bell's inequality violation, we assume that the dynamics of all such fields is similar to that of the scalar field and the graviton.
\end{enumerate} \begin{table*} \centering \footnotesize\begin{tabular}{|c|c|c| } \hline \hline {\bf Properties}& {\bf Relativistic Quantum Theory} & {\bf Cosmology} \\ \hline\hline\hline {\bf Importance}&Theory of entanglement & Important hidden features of \\ &came into the picture &the early universe can be revealed \\ \hline {\bf Fluctuation} & Helps to produce & Helps to produce\\ &virtual particles (pairs & hot and cold spots in the CMB \\ &of particles and antiparticles) & \\ \hline {\bf Assumptions}&Concepts of locality and reality & Slow-roll prescription \\ \hline {\bf Decoherence } &Provides reasons for & Primordial non-Gaussianity \\ &the collapse of the wave function &can be enhanced\\ \hline {\bf Applications } &Quantum information, computing & Origin of large scale structure formation \\&and many more & \\ \hline \hline \hline \end{tabular} \caption{ Table showing the connection between relativistic quantum theory and cosmology in the context of Bell's inequality violation.}\label{fig:cosmoscv} \vspace{.4cm} \end{table*} \begin{figure*}[htb] \centering { \includegraphics[width=17.2cm,height=19cm] {bell_setup.pdf} } \caption{Flow chart of the Bell's inequality violating cosmological setup.} \label{sqz2cv} \end{figure*} \begin{figure*}[htb] \centering { \includegraphics[width=17.2cm,height=19cm] {FlowChart.pdf} } \caption{Flow chart of the basic structural setup of this paper.} \label{sqz2} \end{figure*} In table~(\ref{fig:cosmoscv}) we summarise the connection between relativistic quantum theory and cosmology in the context of Bell's inequality violation. In fig.~(\ref{sqz2cv}) and fig.~(\ref{sqz2}) we show schematically the flow chart of the Bell's inequality violating cosmological setup and the basic structure of the present paper, which is organised as follows: \begin{itemize} \item \underline{\textcolor{violet}{\bf Section }\ref{sec2}}: Here we review Bell's inequality in quantum mechanics and its implications. We first review the proof of Bell's inequality, followed by an example of Bell's inequality for a spin system. We then briefly discuss the violation of Bell's inequality in quantum mechanics, the explanation for this violation, and the consequences which give rise to new physical concepts such as quantum entanglement. \item \underline{\textcolor{violet}{\bf Section } \ref{sec3}}: In subsection \ref{sec3a} we briefly discuss the setup of the Bell's inequality violating test experiment in the context of primordial cosmology. In subsection \ref{sec3b} we then study the creation of the new massive particles introduced within the inflationary paradigm, for various choices of time dependent mass profile. We present the calculation in the three limiting situations: {\bf (1)} $m\approx H$, {\bf (2)} $m\gg H$ and {\bf (3)} $m\ll H$. To describe the very small fraction of particles created after inflation we need the Bogoliubov coefficient $\beta$ in FLRW space-time, which characterizes the amount of mixing between the two types of WKB solutions (the standard relations used for this purpose are recalled at the end of this outline). We therefore provide detailed mathematical derivations of the Bogoliubov coefficient $\beta$ for each of the different cases. Using the results for the Bogoliubov coefficients we further calculate the reflection and transmission coefficients and the number and energy densities of the created particles for the various mass profiles, in two equivalent representations.
Since the exact analytical expressions for the integrals involved in all of these parameters are not always computable, we use approximations in three physical sub-regions. We provide the results for the three specific cases: \begin{enumerate} \item $|kc_{S}\eta|=c_{S}k/aH\ll1$ (\textcolor{blue}{super horizon}), \item $|kc_{S}\eta|=c_{S}k/aH\approx 1$ (\textcolor{blue}{horizon crossing}), \item $|kc_{S}\eta|=c_{S}k/aH\gg1$ (\textcolor{blue}{sub horizon}). \end{enumerate} In subsection \ref{sec3c} we then study the cosmological scalar curvature fluctuations in the presence of the new massive particles, for an arbitrary choice of initial condition and for an arbitrary time dependent mass profile. We explicitly derive the expressions for the one point and two point correlation functions using the in-in formalism, and quote the results for the three limiting situations {\bf (1)} $m\approx H$, {\bf (2)} $m\gg H$ and {\bf (3)} $m\ll H$ in the \textcolor{blue}{super horizon}, \textcolor{blue}{sub horizon} and \textcolor{blue}{horizon crossing} regimes. Here we introduce a new cosmological observable which captures the effect of Bell's inequality violation in cosmology, and we express the scale of inflation in terms of the amount of Bell's inequality violation in the cosmological experimental setup. Additionally we derive model independent expressions for the first Hubble slow roll parameter $\epsilon=-\dot{H}/H^2$ and for the tensor-to-scalar ratio in terms of the Bell's inequality violating observable within the framework of EFT. In the same subsection \ref{sec3c} we give an estimate of the inflaton mass parameter $m_{inf}/H$~\footnote{Here $m_{inf}$ is the mass of the inflaton field and $H$ is the Hubble scale.}. We further consider the special phenomenological case in which the inflaton mass is comparable to the new particle mass, $m_{inf}\approx m$, and use it to estimate the heavy field mass parameter $m/H$~\footnote{Here $m$ is the mass of the heavy field and $H$ is the Hubble scale.}, which is an important ingredient for violating Bell's inequality within the cosmological setup. \item \underline{\textcolor{violet}{\bf Section } \ref{sec4}}: In subsection \ref{sec4a} we give an example of an axion model with a time dependent decay constant, as appearing in the context of string theory, and in subsection \ref{sec4b} we discuss the effective interactions of the axion field. To draw the analogy between the newly introduced massive particles and the axion, we discuss the creation of axions in the early universe in subsection \ref{sec4c}. In subsections \ref{sec4d} and \ref{sec4e} we establish the one to one correspondence between the heavy field and the axion by comparing the particle creation mechanisms and the one and two point correlation functions. In subsection \ref{sec4e} we also give an estimate of the axion mass parameter $m_{axion}/f_{a}H$~\footnote{Here $m_{axion}$ is the axion mass, $f_{a}$ is the time dependent decay constant for the axion and $H$ is the Hubble scale.}, which is an important ingredient for violating Bell's inequality within the cosmological setup. Finally, in subsection \ref{sec4f} we discuss the specific role of isospin breaking phenomenological interactions of axion-like heavy fields in measuring the effect of Bell's inequality violation in primordial cosmology. \item \underline{\textcolor{violet}{\bf Section } \ref{sec5}}: Here we conclude and discuss the future prospects of this work.
\item \underline{\textcolor{violet}{\bf Appendix} \ref{sec6}}: In appendix \ref{sec6a} we explicitly show the role of quantum decoherence in the cosmological setup for violating Bell's inequality, and we also point out a possibility of enhancing the primordial non-Gaussianity in the Bell's inequality violating setup in the presence of a massive field with a time dependent mass profile. Further, in appendix \ref{sec6b} we discuss the role of three specific time dependent mass profiles in producing massive particles and generating quantum fluctuations, as well as the role of arbitrary spin heavy fields in violating Bell's inequality. We provide bounds on the mass parameter for a massive scalar with spin ${\cal S}=0$, an axion with spin ${\cal S}=0$, the graviton with spin ${\cal S}=2$ and particles with higher spin ${\cal S}>2$ in the \textcolor{blue}{horizon crossing}, \textcolor{blue}{super horizon} and \textcolor{blue}{sub horizon} regimes. We then present the extended class of Bell's inequality known as the \textcolor{blue}{\it CHSH inequality}. Finally, we give a very brief discussion of \textcolor{blue}{\it quantum cryptography} in relation to the present topic. \end{itemize}
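As background for the particle creation discussion summarised in the outline above, we recall the standard Bogoliubov relations for a bosonic field in an expanding background; this is textbook material quoted only to fix notation, while the detailed derivations for each mass profile are given in section \ref{sec3b}. Writing the early-time (in) WKB mode in terms of the late-time (out) modes,
\begin{equation}
u^{\rm in}_{k}(\eta)=\alpha_{k}\,u^{\rm out}_{k}(\eta)+\beta_{k}\,\left(u^{\rm out}_{k}(\eta)\right)^{*},
\qquad |\alpha_{k}|^{2}-|\beta_{k}|^{2}=1,
\end{equation}
the number of created quanta per comoving mode is $n_{k}=|\beta_{k}|^{2}$, so that the total number and energy densities follow by integrating $|\beta_{k}|^{2}$ (weighted appropriately by the mode energy) over all modes.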
\label{sec5} To summarize, in this article we have addressed the following points: \begin{itemize} \item First, we have briefly reviewed Bell's inequality in quantum mechanics and its implications, including the proof of the inequality, its violation in quantum mechanics, and the explanation of this violation, which gives rise to new physical concepts and phenomena. \item Next, we have discussed the setup for a Bell's inequality violating test experiment in the context of primordial cosmology. We have studied the creation of the new massive particles introduced within the inflationary paradigm for various choices of time dependent mass profile, presenting the calculation in the three limiting situations {\bf (1)} $m\approx H$, {\bf (2)} $m\gg H$ and {\bf (3)} $m\ll H$. To describe the very small fraction of particles created after inflation we have computed the Bogoliubov coefficient $\beta$ in FLRW space-time, which characterizes the amount of mixing between the two types of WKB solutions. Using these results we have calculated the reflection and transmission coefficients and the number and energy densities of the created particles for the various mass profiles. We have provided the results for the three specific cases: \begin{enumerate} \item $|kc_{S}\eta|=c_{S}k/aH\ll1$ (\textcolor{blue}{super horizon}), \item $|kc_{S}\eta|=c_{S}k/aH\approx 1$ (\textcolor{blue}{horizon crossing}), \item $|kc_{S}\eta|=c_{S}k/aH\gg1$ (\textcolor{blue}{sub horizon}). \end{enumerate} We have then studied the cosmological scalar curvature fluctuations in the presence of the new massive particles for an arbitrary choice of initial condition and an arbitrary mass profile, explicitly deriving the one point and two point correlation functions using the in-in formalism and quoting the results for the three limiting situations {\bf (1)} $m\approx H$, {\bf (2)} $m\gg H$ and {\bf (3)} $m\ll H$ in the \textcolor{blue}{super horizon}, \textcolor{blue}{sub horizon} and \textcolor{blue}{horizon crossing} regimes. In this computation we have introduced a new cosmological observable which captures the effect of Bell's inequality violation in cosmology, and we have expressed the scale of inflation in terms of the amount of Bell's inequality violation in the cosmological experimental setup using a model independent prescription, namely EFT. We have also derived model independent expressions for the first Hubble slow roll parameter $\epsilon=-\dot{H}/H^2$ and for the tensor-to-scalar ratio in terms of the Bell's inequality violating observable within the EFT framework, and we have given an estimate of the heavy field mass parameter $m/H$ required to violate Bell's inequality within the cosmological setup. \item It is important to note that when all the EFT interactions are absent, both $c_{S}\sim \tilde{c}_{S}=1$ and one recovers the results for canonical slow-roll models. When the EFT interactions are switched on, the present description can accommodate non-canonical as well as non-minimal interactions; in that case both $c_{S}$ and $\tilde{c}_{S}$ are less than unity, and one can constrain the sound speed parameter as well as the strength of the EFT interactions using observational probes (Planck 2015 data).
One can straightforwardly compare the present setup, with its effective time varying mass parameter, to axions with a time varying decay constant. In the $m\ll H$ case the last term in the effective action is absent, and the reduced action describes the EFT of inflation in the presence of the previously mentioned non-trivial effective interactions. Once all such interactions are switched off, the action reduces to that of single field slow-roll inflation. \item Further, we have given an example of an axion model with a time dependent decay constant as appearing in the context of string theory, and we have discussed the effective interactions of the axion field. To draw the analogy between the newly introduced massive particles and the axion we have discussed the creation of axions in the early universe, and we have established the one to one correspondence between the heavy field and the axion by comparing the particle creation mechanisms and the one and two point correlation functions. Additionally, we have given an estimate of the axion mass parameter $m_{axion}/f_{a}H$ required to violate Bell's inequality within the cosmological setup. Finally, we have discussed the specific role of isospin breaking interactions of axion-like heavy fields in measuring the effect of Bell's inequality violation in primordial cosmology. \item Next, we have explicitly shown the role of quantum decoherence in the cosmological setup for violating Bell's inequality, and we have pointed out a possibility of enhancing the primordial non-Gaussianity in the Bell's inequality violating setup in the presence of a massive field with a time dependent mass profile. We have discussed the role of three specific time dependent mass profiles in producing massive particles and generating quantum fluctuations, and the role of arbitrary spin heavy fields in violating Bell's inequality, providing bounds on the mass parameter for a massive scalar with spin ${\cal S}=0$, an axion with spin ${\cal S}=0$, the graviton with spin ${\cal S}=2$ and particles with higher spin ${\cal S}>2$ in the \textcolor{blue}{horizon crossing}, \textcolor{blue}{super horizon} and \textcolor{blue}{sub horizon} regimes. \end{itemize} The future prospects of this work are as follows: \begin{itemize} \item In this work we have not explored in detail the possibility of enhancing the primordial non-Gaussianity through the violation of Bell's inequality, nor the exact role of the time dependent mass profile of the heavy fields in this enhancement. In the appendix we have touched upon some of these issues without giving a detailed calculation; we plan to address them in the near future. \item One can also study how the inflationary consistency relations depend on the time dependent mass profile of the heavy field. Given the enhancement of the primordial non-Gaussian amplitude, one expects that the presence of such a non-negligible Bell violating contribution will modify all the inflationary consistency relations significantly. We plan to derive these modified consistency relations in future work. \item In this work we have implemented the idea of Bell violation in the context of inflationary cosmology, but the explicit role of alternatives to inflation in designing a Bell's inequality violating experiment in cosmology has not yet been studied. One can check whether such a construction is possible.
If this is possible, one can also study the resulting consequences, including non-Gaussianities. \item The explicit role of entanglement entropy is very important for understanding the underlying physical principles in the present context, both for the de Sitter and the quasi de Sitter case. We have not addressed this issue in detail in this paper, and it can be taken up in the future. \item One can also carry our analysis forward in the context of higher derivative gravity setups, which has not been addressed so far. \item To give a model independent bound on the scale of inflation and on primordial gravitational waves we have defined a new cosmological observable which explicitly captures the effect of the violation of Bell's inequality in cosmology, but this requires prior knowledge of such an observable. We have only mentioned the explicit role of isospin breaking interactions in this context, and have not studied the exact connection between such isospin breaking interactions of the heavy fields and the newly defined Bell violating cosmological observable. Addressing this issue would allow one to comment on the measurement of such observables with various observational probes. \end{itemize}
16
7
1607.00237
1607
1607.03002_arXiv.txt
Using the narrowband all-sky imager mode of the LWA1 we have now detected 30 transients at 25.6 MHz, 1 at 34 MHz, and 93 at 38.0 MHz. While we have only optically confirmed that 37 of these events are radio afterglows from meteors, the evidence suggests that most, if not all, are. Using the beam-forming mode of the LWA1 we have also captured the broadband spectra of four events between 22.0 and 55.0 MHz. We compare the smooth spectral components of these four events and fit the frequency-dependent flux density to a power law, finding that the spectral index is time variable, with the spectrum steepening over time for each meteor afterglow. Using these spectral indices along with the narrowband flux density measurements of the 123 events at 25.6 and 38.0 MHz, we predict the expected flux densities and rates for meteor afterglows potentially observable by other low-frequency radio telescopes.
Recently, \citet{Obenberger14} discovered that the trails left by bright meteors (fireballs) occasionally emit a radio afterglow in the upper High Frequency (HF; 3 to 30 MHz) and lower Very High Frequency (VHF; 30 to 300 MHz) bands. This self-emission, which has been observed to last up to several minutes, is distinct from the well understood phenomenon of meteor trail reflections. While the radio emission has been observed between 25.6 and 55.0 MHz, continuous broadband measurements have only been made between 37 and 55 MHz. Most events are recorded using the narrowband ($\sim$ 100 kHz) Prototype All-Sky Imager (PASI), a backend correlator/imager of the first station of the Long Wavelength Array (LWA1) \citep{Taylor12,Ellingson13,Obenberger15a}. PASI images have an extremely large field of view of $ 1 \pi$ sr ($\sim$ 10$^{4}$ deg$^{2}$ ) but have limited sensitivity due to the narrow bandwidth, which also prevents any spectral characterization of the events. At the time this paper was written, PASI had detected 30 events at 25.6 MHz, 1 at 34 MHz, and 93 at 38.0 MHz; the majority of these events are thought to be afterglows from meteors. Currently the only way to make broadband measurements is with the beamformer mode of the LWA1. This mode can create up to 4 simultaneous beams, each with two tunings of up to 19.6 MHz ($\sim$18 MHz usable), but each beam only has a field of view of $\sim$ 50 deg$^{2}$. This is considerably smaller than the field of view of the all-sky imager mode, and therefore results in far fewer detections. Beamformed measurements of two meteor radio afterglows recorded on October 17, 2014 (M1) and October 26, 2014 (M2) were reported in \citet{Obenberger15b}, and showed that the spectrum is steep, increasing in flux density at lower frequencies. It was also shown that the spectrum of M1 contained dynamic structure in the form of repeating frequency/time dispersed pulses. While the bulk of the emission is largely unpolarized, these sweeps did contain significant linear polarization (Stokes Q). Unfortunately, full polarization parameters were not recorded, so the amounts of Stokes U and V are not known. The sweeps were also not smooth in spectrum; rather, they displayed narrow (several MHz) regions of enhanced emission. On the other hand, M2 only contained smooth-spectrum, unpolarized emission, with no sign of the polarized type observed in M1. The measurements from \citet{Obenberger15b} were made using two tunings, one centered at 45.45 MHz and the other at 65.05 MHz. The upper tunings of each observation were rendered unusable because the beamformer did not have enough dynamic range to properly record the extremely bright forward scatter of analog TV channels. Any channel that did not contain a transmitter was compressed to almost zero value; therefore no useful broadband data were recorded. The lower tuning, however, made relatively clean measurements of the spectrum between 36.5 and 54.0 MHz, with only a few moments of compression. The steady increase of flux density at lower frequencies presumably flattens out or turns over somewhere below the 36.5 MHz lower boundary of the M1 and M2 measurements. If the Langmuir wave hypothesis introduced in \citet{Obenberger15b} is correct, then waves with frequency below the electron/neutral collision rate would be critically damped. At 90 km the collision frequency is $\sim$ 0.5 MHz, and one would expect a sharp cutoff near that frequency.
At higher frequencies the unpolarized emission for M1 and M2 decayed to the point of undetectability above 52 MHz. Since there is no evidence of a sharp cutoff, it is not unreasonable to assume that the spectra follow a slow approach to zero, perhaps following a power law. Fitting the spectrum would provide numerical parameters that could be used as a test for any future theoretical model. Furthermore, a fitted spectrum would allow for a higher frequency extrapolation of the flux density, which would be useful for other facilities, such as the Murchison Wide Field Array (MWA) \citep{Tingay13} in Australia, the Amsterdam-ASTRON Radio Transients Facility and Analysis Center (AARTFAAC), based on the Low Frequency Array (LOFAR) \citep{Prasad14,Haarlem13} in the Netherlands, and two additional Long Wavelength Array stations located at Owens Valley Radio Observatory (LWA-OVRO; www.tauceti.caltech.edu/lwa/array.html) in California and Sevilleta National Wildlife Refuge (LWA-SV) in New Mexico. These telescopes operate at or just above the LWA1 frequency range, have extremely large fields of view, and have comparable or better sensitivity than the LWA1. In addition, MWA, AARTFAAC, and LWA-OVRO all have higher angular resolution than the LWA1, creating an excellent opportunity to further the study of meteor radio emission. An extrapolated spectrum along with event rates would give researchers at these facilities an idea of what sensitivities and amount of observing time they might need in order to detect meteor radio afterglows. Smooth spectrum radio sources are typically fit to a power law, where the spectrum is parameterized with a spectral index $\alpha$. The spectral index is related to the flux density $S$ by: \begin{equation} S \propto \nu^{\alpha}, \end{equation} where $\nu$ is the frequency. At the LWA1 frequency range, a typical spectral index for an astrophysical source is $\sim$ -1, but the spectral index is not necessarily constant over all frequencies. Rather, it can be a function of frequency, given by: \begin{equation} \alpha(\nu) = \frac{\partial \log S(\nu)}{\partial \log \nu} \end{equation} \citet{Obenberger15b} did not include a power law fit for the spectra of the two radio afterglows. This analysis was excluded from that paper because the spectra of M1 and M2 contained several effects that made comparison difficult. As mentioned above, the extremely bright forward scatter of several transmitters put the beamformer into compression, introducing a number of broadband dips in the spectrum. Furthermore, the polarized emission from M1 added spectral structure to several seconds of the event, rendering nearly half of the event impossible to fit. Considering these sources of error, we determined that any results would be difficult to interpret. On December 18, 2015 and February 12, 2016 we measured the spectra of two more meteor afterglows (M3 and M4) at slightly lower frequencies, with significantly cleaner spectra and no beamformer compression. We can compare the smooth spectral components of all four events, and by fitting these spectra to a power law we can predict what flux densities MWA, LOFAR, LWA-OVRO, LWA-SV, and AARTFAAC should expect to see for meteor afterglows at higher frequencies. In addition, we can use the large number of events recorded by PASI to compute rates for events brighter than our worst sensitivity thresholds at 38.0 and 25.6 MHz, and extrapolate these thresholds to predict rates at higher frequencies. A schematic illustration of this power-law fitting and extrapolation procedure is given below.
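As a schematic illustration of the fitting procedure described above, the spectral index can be estimated by a least-squares fit in log--log space and then used to extrapolate the spectrum to a higher frequency. This sketch is not the code used for the LWA1 data, and the flux densities in it are placeholder values chosen only for illustration:
\begin{verbatim}
import numpy as np

# Minimal illustration of a power-law fit S ~ nu^alpha in log-log space.
# The flux densities below are placeholder values, not LWA1 measurements.
nu = np.array([38.0, 42.0, 46.0, 50.0, 54.0])      # MHz
S = np.array([900.0, 700.0, 560.0, 450.0, 370.0])  # Jy (illustrative only)

# Least-squares fit of log10(S) = alpha * log10(nu) + const
alpha, log_const = np.polyfit(np.log10(nu), np.log10(S), 1)
print(f"fitted spectral index alpha = {alpha:.2f}")

# Extrapolate the fitted spectrum to a higher frequency, e.g. 80 MHz
nu_new = 80.0
S_new = 10**log_const * nu_new**alpha
print(f"extrapolated flux density at {nu_new} MHz: {S_new:.0f} Jy")
\end{verbatim}
In practice one would repeat such a fit for each time step of the dynamic spectrum, which is what allows a time-variable spectral index to be tracked.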
These predictions will provide useful values for other observatories to probe the physics of this mysterious phenomenon.
We have reported yearly averaged meteor radio afterglow rate densities of 130, 40, and 15 events yr$^{-1}$ $\pi$\,sr$^{-1}$ for events brighter than 1,700 Jy at 25.6 MHz and 540 and 1,000 Jy at 38.0 MHz, respectively. These rates are based on 1,135 hours of observing at 25.6 MHz and 12,876 hours at 38.0 MHz. We have also fit the dynamic spectra of four events to time dependent power laws. The spectral indices of all four events steepen with time. These observations are qualitatively consistent with a diffusing plasma, assuming that the radiation comes from Langmuir waves, where higher plasma frequencies would decay to lower plasma frequencies as the electrons diffuse into the ambient atmosphere. A comparison of these observations to a numerical model is warranted. Using the spectral indices of the power law fits, we extrapolated higher frequency spectra for typical meteors occurring at the rates mentioned above. We then compared the expected flux densities to the sensitivities of other radio telescopes. In particular, the MWA telescope has a unique opportunity to study the high frequency extension of meteor radio afterglow spectra, although high angular resolution and near-field effects will no doubt pose a challenge. We have also reported on the linear polarization detected in M4; steady polarization across all observed frequencies has not been observed in any other meteor radio afterglow. The majority of the observed polarization ($\sim$ 50\%) is most likely intrinsic to the afterglow itself, rather than caused by instrumental leakage. M4 was distinct from most meteors, being dim (480 Jy at 34 MHz) and short in duration ($\sim$ 6 seconds). By observing at 34 MHz, PASI is optimized to see dim events. Further observations will prove useful for polarization studies of meteor radio afterglows.
16
7
1607.03002
1607
1607.03658_arXiv.txt
All types of single WR stars emit X-rays. Their X-ray luminosities are orders of magnitude lower than those of persistent high-mass X-ray binaries. The X-ray spectra appear to be thermal, with very hot plasma of a few $\times 10$\,MK present along with cooler components. The mechanisms responsible for the X-ray generation are not yet understood. Promising scenarios include LDI-like mechanisms, interactions of wind streams with blobs, and CIRs. New observations and modeling should uncover how X-rays are generated in the winds of single WR stars.
16
7
1607.03658
1607
1607.06282_arXiv.txt
Upcoming observations of the 21-cm signal from the Epoch of Reionization will soon provide the first direct detection of this era. This signal is influenced by many astrophysical effects, including long-range X-ray heating of the intergalactic gas. During the preceding Cosmic Dawn era the impact of this heating on the 21-cm signal is particularly prominent, especially before spin temperature saturation. We present the largest-volume (349\,Mpc comoving $= 244~h^{-1}$Mpc) full numerical radiative transfer simulations to date of this epoch that include the effects of helium and multi-frequency heating, both with and without X-ray sources. We show that X-ray sources contribute significantly to early heating of the neutral intergalactic medium and, hence, to the corresponding 21-cm signal. The inclusion of hard, energetic radiation yields an earlier, extended transition from absorption to emission compared to the stellar-only case. The presence of X-ray sources decreases the absolute value of the mean 21-cm differential brightness temperature. These hard sources also significantly increase the 21-cm fluctuations compared to the common assumption of temperature saturation. The 21-cm differential brightness temperature power spectrum is initially boosted on large scales, before decreasing on all scales. Compared to the case of the cold, unheated intergalactic medium, the signal has lower rms fluctuations and increased non-Gaussianity, as measured by the skewness and kurtosis of the 21-cm probability distribution functions. Images of the 21-cm signal with resolution around 11~arcmin still show fluctuations well above the expected noise for deep integrations with SKA1-Low, indicating that direct imaging of the X-ray heating epoch could be feasible.
The Epoch of Reionization (EoR), a major global phase transition during which the hydrogen in the Universe went from almost fully neutral to largely ionized, remains one of the cosmological eras least constrained by observations. Although no direct measurements of this transition currently exist, multiple observations indicate that reionization was completed by $z \approx 5.7$, and possibly earlier. These observations include high-redshift quasar spectra \citep[e.g.][]{Fan2006,McGreer2015}, the decrease in the fraction of Lyman~$\alpha$ (\Lya) emitting galaxies \citep[e.g.][]{Stark2011,Schenker2012, Pentericci2014, Tilvi2014}, and measurements of the temperature of the intergalactic medium \citep[IGM; e.g.][]{Theuns2002, Raskutti2012, Bolton2012}. The start of substantial reionization is constrained by the Thomson optical depth measured from the anisotropies and polarisation of the Cosmic Microwave Background, CMB \citep[e.g.][]{Komatsu2011, Planck2015, Planck2016}. \cite{Planck2016} find that the Universe was less than 10 per cent ionized at $z \approx 10$, that the average redshift at which reionization would have taken place had it been an instantaneous process lies in the range $7.8 \leq z \leq 8.8$, and that the upper limit on the duration of the process is $\Delta z < 2.8$. At high redshifts, 21-cm radiation from hydrogen atoms in the IGM contains a treasure trove of information about the physical conditions both during the EoR and the preceding epochs. In particular, the 21-cm signal probes the \textit{Dark Ages}, the epoch after recombination during which the formation of baryonic large scale structure began, and the \textit{Cosmic Dawn}, the period of preheating from the first ionizing sources before reionization was significantly underway. Several experiments are attempting to measure the 21-cm signal from the EoR using low-frequency radio interferometry. These include the ongoing GMRT\footnote{\url{http://gmrt.ncra.tifr.res.in/}}, LOFAR\footnote{\url{http://www.lofar.org/}}, MWA\footnote{\url{http://www.mwatelescope.org/}}, and PAPER\footnote{\url{http://eor.berkeley.edu/}} and the future HERA\footnote{\url{http://reionization.org/}} and SKA\footnote{\url{https://www.skatelescope.org/}}. The main sources powering reionization are likely early galaxies, with Population III (Pop.~III; metal-free) and Population II (Pop.~II; metal-enriched) stars providing the bulk of ionizing photons. However, sources of higher energy X-ray photons may also be present, contributing non-trivially to the photon budget. Although their abundance is uncertain, high-mass X-ray binaries (HMXBs) likely exist throughout reionization \citep{Glover2003}. Other hard radiation sources, such as QSOs and supernovae, may have also contributed. Very little is known about these objects in terms of their abundances, clustering, evolution, and spectra, especially at these high redshifts. The high-energy photons from these hard radiation sources have a much smaller cross section for interaction with atoms and, hence, far longer mean free paths than lower energy ionizing photons. Therefore, these photons are able to penetrate significantly further into the neutral IGM. While not sufficiently numerous to contribute significantly to the ionization of the IGM (although recently there has been some debate about the level of the contribution from quasars; see e.g. \citealt{Khaire2016}), their high energies result in a non-trivial amount of heating.
Along with variations in the early \Lya\ background, variations in the temperature of the neutral IGM caused by this non-uniform heating constitute an important source of 21-cm fluctuations before large-scale reionization patchiness develops \citep[see e.g.][for a detailed discussion]{2012RPPh...75h6901P}. Once a sufficient \Lya\ background due to stellar radiation has been established in the IGM, the spin temperature of neutral hydrogen will be coupled to the kinetic temperature, $T_\mathrm{K}$, due to the Wouthuysen-Field (WF) effect. The 21-cm signal is then expected to appear initially in absorption against the CMB, as the CMB temperature ($T_{\mathrm{CMB}}$) is greater than the spin temperature of the gas. Once the first sources have heated the IGM and brought the spin temperature, $T_\mathrm{S}$, above $T_{\mathrm{CMB}}$, the signal transitions into emission (see Section~\ref{sec:dbtTheory} for more details; the standard approximate expression for the differential brightness temperature is also recalled below). The timing and duration of this transition are highly sensitive to the type of sources present, as they determine the quantity and morphology of the heating of the IGM \citep[e.g.][]{Pritchard2007,Baek2010,Mesinger2013,Fialkov2014,2014MNRAS.443..678P,Ahn2015}. Considerable theoretical work exists regarding the impact of X-ray radiation on the thermal history of reionization and the resulting observational signatures. Attempts have been made to understand the process analytically \citep[e.g.][]{Glover2003,Furlanetto2004}, semi-numerically \citep[see e.g.][]{Santos2010,Mesinger2013, Fialkov2014, Knevitt2014}, and numerically \citep[e.g.][]{Baek2010,Xu2014,Ahn2015}. However, due to the computationally challenging, multi-scale nature of the problem, numerical simulations have not yet been run over a sufficiently large volume -- a few hundred comoving Mpc per side -- to properly account for the patchiness of reionization \citep{Iliev2013}, while at the same time resolving the ionizing sources. In this paper, we present the first full numerical simulation of reionization including X-ray sources and multi-frequency heating over hundreds of Mpc. Using multi-frequency radiative transfer (RT) modelling, we track the morphology of the heating and the evolution of ionized regions using density perturbations and haloes obtained from a high-resolution, $N$-body simulation. The size of our simulations ($349\,$Mpc comoving on a side) is sufficient to capture the large-scale patchiness of reionization and to make statistically meaningful predictions for future 21-cm observations. We compare two source models, one with and one without X-ray sources, that otherwise use the same underlying cosmic structures. We also test the limits of validity of the common assumption that at late times the spin temperature is much greater than the CMB temperature. The outline of the paper is as follows. In Section~\ref{sec:sims}, we present our simulations and methodology. In Section \ref{sec:theory}, we describe in detail the theory behind our generation of the 21-cm signatures. Section~\ref{sec:results} contains our results, which include the reionization and temperature history and morphology. We also present our 21-cm maps and various statistics of the 21-cm signal. We then conclude in Section~\ref{sec:conclusions}.
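For reference, the 21-cm differential brightness temperature discussed above takes the following standard approximate form, neglecting peculiar velocity gradients (the precise conventions adopted in this work are those of Section~\ref{sec:dbtTheory}):
\begin{equation}
\delta T_{\rm b} \simeq 27\,x_{\rm HI}\,(1+\delta)\left(1-\frac{T_{\rm CMB}}{T_{\rm S}}\right)\left(\frac{1+z}{10}\right)^{1/2}\left(\frac{\Omega_{\rm b}h^{2}}{0.023}\right)\left(\frac{0.15}{\Omega_{\rm m}h^{2}}\right)^{1/2}\,{\rm mK},
\end{equation}
where $x_{\rm HI}$ is the neutral hydrogen fraction and $\delta$ is the matter overdensity. The signal appears in absorption for $T_{\rm S}<T_{\rm CMB}$, in emission for $T_{\rm S}>T_{\rm CMB}$, and becomes insensitive to the gas temperature in the saturated limit $T_{\rm S}\gg T_{\rm CMB}$.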
The cosmological parameters we use throughout this work are ($\Omega_\Lambda$, $\Omega_\mathrm{M}$, $\Omega_\mathrm{b}$, $n$, $\sigma_\mathrm{8}$, $h$) = (0.73, 0.27, 0.044, 0.96, 0.8, 0.7), where the notation has the usual meaning and $h = \mathrm{H_0} / (100 \ \mathrm{km} \ \rm{s}^{-1} \ \mathrm{ Mpc}^{-1}) $. These values are consistent with the latest results from WMAP \citep{Komatsu2011} and Planck combined with all other available constraints \citep{Planck2015,Planck2016}.
\label{sec:conclusions} We present the first large-volume, fully numerical structure formation and radiative transfer simulations of the IGM heating during the Cosmic Dawn by the first stellar and X-ray sources. We simulate the multi-frequency transfer of both ionizing and X-ray photons and solve self-consistently for the temperature state of the IGM. While the exact nature and properties of the first X-ray sources are still quite uncertain, our results demonstrate that, under a reasonable set of assumptions, these sources produce significant early and inhomogeneous heating of the neutral IGM and, thus, considerably impact the redshifted 21-cm signal. Throughout this paper we focus on the expected 21-cm signals from this epoch and their statistics. In this work, we consider relatively soft-spectrum X-ray sources, which trace the star formation at high redshift. At these high redshifts, such sources are still fairly rare and, for reasonable assumed efficiencies, the addition of X-rays does not significantly affect the evolution of the mean fractions of H~II and He~II. The fraction of He~III, however, is boosted by almost an order of magnitude compared to the stellar-only case, although it remains quite low overall. The high energies and long mean free paths of the hard X-ray radiation make it the dominant driver of the heating of the neutral IGM. Pop.~II stars, even massive ones, do not produce a significant amount of such hard radiation. Therefore, both the morphology and the overall amount of heating change dramatically when X-ray sources are present. The mean and the median temperature both increase considerably compared to the stellar-only case, with the mean eventually reaching $\sim10^3$~K by $z\sim13$ (the median, which only reaches $\sim200$~K, better reflects the neutral IGM state as it is less sensitive to the very high temperatures in the ionized regions). The X-ray heating is long-range and, therefore, widely distributed throughout the IGM. This heating is also highly inhomogeneous, as evidenced by the temperature PDFs, maps, and evolution seen in the lightcone visualisations. The neutral regions are heated by the X-ray sources and go fully into 21-cm emission with respect to the CMB before $z=13$, while with stellar-only sources the IGM remains in absorption throughout the Cosmic Dawn. The presence of X-rays, therefore, results in an early, but extended ($\Delta z\sim 7$) transition into emission. The 21-cm fluctuations initially ($z>20$) track the density fluctuations due to the still insignificant heating and ionization fluctuations. However, the temperature fluctuations due to X-ray heating quickly boost the large-scale 21-cm fluctuations to much higher values. At a resolution of $\sim 10-12$ arcmin for redshifts 15 -- 17, the fluctuations are large enough to be a factor of several above the expected noise level of SKA1-Low, which implies the possibility of observing not only power spectra, but also coarse images of the 21-cm signal from the Cosmic Dawn. For the same resolution, the $\delta T_{\rm b}$ rms in the presence of X-rays peaks at $\sim11.5$~mK around $z\sim16.5$. As the X-rays heat the neutral IGM, a broad peak develops in the power spectrum at $k\sim 0.1$~Mpc$^{-1}$, corresponding to a spatial scale of about 43 Mpc, at $z \sim14-15$.
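For clarity about the units used here and in the comparisons below, the 21-cm fluctuation amplitudes are quoted in terms of the commonly adopted dimensionless power spectrum of $\delta T_{\rm b}$ (this is a standard convention in the field; the paper's own definition may differ in normalisation details),
\begin{equation}
\Delta^{2}_{\rm 21cm}(k)\equiv\frac{k^{3}P_{\delta T_{\rm b}}(k)}{2\pi^{2}}\,,
\end{equation}
so that $\Delta_{\rm 21cm}$ carries units of mK and quoted values refer to the square root of this quantity at the given scale.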
As the IGM heats up and the absorption gradually turns into emission, the 21-cm fluctuations for the HMXB case decrease and asymptote to the high-$T_\mathrm{K}$ limit, which is not fully reached by the end of our simulation ($z\sim12.7$), even though by that time the mean IGM is heated well above the CMB temperature. In contrast, the stellar-only case fluctuations are still increasing steeply by $z\sim12$, as they are driven by the cold IGM. In the HMXB case, the distribution of the $\delta T_{\rm b}$ fluctuations shows a clear non-Gaussian signature, with both the skewness and kurtosis peaking when the fluctuations start rising. By the end of the simulation, the skewness and kurtosis approach, but do not reach, the high-$T_\mathrm{K}$ limit. For soft radiation sources, the non-Gaussianity is driven by density fluctuations only, producing a smooth evolution. The often-used high spin temperature limit, $T_\mathrm{S} \gg T_{\mathrm{CMB}}$, is not valid throughout the X-ray heating epoch as long as any IGM patches remain cold. When X-rays are present, even after the IGM temperature rises above the CMB everywhere (and thus the 21-cm signal transitions into emission), significant temperature fluctuations remain and contribute to the 21-cm signal. The neutral regions do not asymptote to the high-temperature limit until quite late in our model, at $z\sim12$. This asymptotic behavior can readily be seen in the power spectra and statistics of the 21-cm signal. Soft, stellar-only radiation has short mean free paths and, therefore, never penetrates into the neutral regions, leaving a cold IGM. Previous work in this area has largely been limited to approximate semi-analytical and semi-numerical modelling \citep[e.g.][]{Pritchard2007,Mesinger2013,2014MNRAS.445..213F,2015MNRAS.451..467S, 2015MNRAS.454.1416W,2016arXiv160202873K}. By their nature, such approaches do not apply detailed, multi-frequency RT, but rely on counting the photons produced in a certain region of space and comparing this to the number of atoms (with some correction for the recombinations occurring). The difference between the two determines the ionization state of that region. The X-ray heating is done by solving the energy equation using integrated, average optical depths and photon fluxes, and often additional approximations are employed as well \citep[e.g.][]{2015MNRAS.454.1416W}. These methods typically do not take into account nonlinear physics, spatially varying gas clumping or absorbers, or Jeans mass filtering of low mass sources. These differences make detailed comparisons with previous results difficult, given the very different modelling employed, and would require further study. Nonetheless, we find some commonalities and some disparities with our results, summarised below. Our thermal history is similar to that of the relevant cases in \citet{Pritchard2007} (their Case A) and \citet{2015MNRAS.454.1416W} (their case `$\log\zeta_{\rm X}=55$'). We find a quite extended transition between 21-cm absorption and emission, from the formation of the first ionizing and X-ray sources at $z\sim21$ all the way to $z\sim13$. This transition is somewhat more protracted than the one in the most similar scenarios ($f_{\rm x}=1$ and $5$) considered in \citet{Mesinger2013}, likely due to the higher star formation efficiencies assumed in that work.
We find a clear X-ray heating-driven peak in the 21-cm power spectra at $k=0.1-0.2\,{\rm Mpc}^{-1}$, similar to the soft X-ray spectrum peak found in \citet{2014MNRAS.443..678P} and at similar redshift ($z\sim 15$--$16$; though this depends on the uncertain source efficiencies). Their peak power, at $\Delta_{\rm 21cm}\sim14\,$mK, is in rough agreement with our results. The general evolution of the power spectra found in \cite{Pritchard2007} appears similar, with the fluctuations at $k=0.1\,\rm Mpc^{-1}$ also peaking at $z\sim 15-16$ (although that only occurs at $z\sim 12-13$ for the scenario with fewer X-rays, again suggesting a strong dependence on the source model). The power spectra they find are in reasonable agreement with our results, with peak values of $\Delta_{\rm 21cm}\sim19\,$mK or $\Delta_{\rm 21cm}\sim11.5\,$mK depending on the source model used by them, compared to $\Delta_{\rm 21cm}\sim14\,$mK for our HMXB case. The 21-cm skewness from the X-ray heating epoch is rarely calculated, but \citet{2015MNRAS.454.1416W} recently found a very similar evolution to ours (though shifted to somewhat higher redshifts), with a positive peak roughly coinciding with the initial rise of the 21-cm fluctuations due to the temperature patchiness. Their corresponding 21-cm $\delta T_{\rm b}$ PDF distributions during the X-ray heating epoch differ significantly from ours, however. At the epoch when $T_{\rm S}$ reaches a minimum, the semi-numerical model predicts a long tail of positive $\delta T_{\rm b}$, which does not exist in the full simulations. Around the $T_{\rm S}\sim T_{\rm CMB}$ epoch, our distribution is quite Gaussian, while \citet{2015MNRAS.454.1416W} find an asymmetric one (though, curiously, one with close to zero skewness, indicating that skewness alone provides a very incomplete description). Finally, in the $T_{\rm S}\gg T_{\rm CMB}$ epoch, the two results both yield Gaussian PDFs, but the simulated one is much narrower. Our models confirm that, for reasonable assumptions about the presence of X-ray sources, there is a period of substantial fluctuations in the 21-cm signal caused by the patchiness of this heating, and that this period precedes the one in which fluctuations are mostly caused by patchiness in the ionization. However, since the nature and properties of X-ray sources remain unconstrained by observations, other scenarios in which the heating occurs later are also allowed. The currently ongoing observational campaigns of both LOFAR and MWA should be able to put constraints on the presence of spin temperature fluctuations for the range $z < 11$, which would then have clear implications for the required efficiency of X-ray heating at those and earlier redshifts. In the future, we will use simulations of the kind presented here to explore other possible scenarios, for example heating caused by rare, bright sources, as well as the impact of spin temperature fluctuations on all aspects of the 21-cm signal, such as redshift space distortions.
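As an aside for readers who wish to reproduce one-point statistics of the kind quoted above, the rms, skewness and kurtosis are typically computed directly from the distribution of $\delta T_{\rm b}$ values in the simulated volume. The following minimal sketch is not part of the simulation pipeline used in this work, and the random cube in it is only a stand-in for real output:
\begin{verbatim}
import numpy as np
from scipy.stats import skew, kurtosis

# Stand-in for a simulated delta T_b cube in mK (replace with real data).
rng = np.random.default_rng(0)
dTb = rng.normal(loc=5.0, scale=10.0, size=(64, 64, 64))

cells = dTb.ravel()
print(f"mean     = {cells.mean():.2f} mK")
print(f"rms      = {cells.std():.2f} mK")      # fluctuation amplitude
print(f"skewness = {skew(cells):.3f}")         # third standardized moment
print(f"kurtosis = {kurtosis(cells):.3f}")     # excess kurtosis (0 for a Gaussian)
\end{verbatim}
Applied to lightcone slices or coeval cubes at different redshifts, such statistics trace the rise and decay of the non-Gaussianity driven by the temperature patchiness.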
16
7
1607.06282
1607
1607.03144_arXiv.txt
We investigate the cosmological implications of studying galaxy clustering using a tomographic approach applied to the final BOSS DR12 galaxy sample, including both auto- and cross-correlation functions between redshift shells. We model the signal of the full shape of the angular correlation function, $\omega(\theta)$, in redshift bins using state-of-the-art modelling of non-linearities, bias and redshift-space distortions. We present results on the redshift evolution of the linear bias of BOSS galaxies, which cannot be obtained with traditional methods for galaxy-clustering analysis. We also obtain constraints on cosmological parameters, combining this tomographic analysis with measurements of the cosmic microwave background (CMB) and type Ia supernovae (SNIa). We explore a number of cosmological models, including the standard $\Lambda$CDM model and its most interesting extensions, such as deviations from $w_{\rm{DE}}=-1$, non-minimal neutrino masses, spatial curvature and deviations from general relativity using the growth-index $\gamma$ parametrisation. These results are, in general, comparable to the most precise present-day constraints on cosmological parameters, and show very good agreement with the standard model. In particular, combining CMB, $\omega(\theta)$ and SNIa, we find a value of $w_{\rm{DE}}$ consistent with $-1$ to a precision better than $5\%$ when it is assumed to be constant in time, and better than $6\%$ when we also allow for a spatially-curved Universe.
Along with measurements of the cosmic microwave background (CMB) and distant type Ia supernovae (SNIa), large galaxy-catalogues tracing the large-scale structure (LSS) of the Universe have become one of the fundamental observables in observational cosmology. The most widely used tools for the analysis of the LSS are the so-called two-point statistics: the correlation function, and its Fourier counterpart, the power spectrum. These measurements of the clustering of galaxies encode information on both the expansion history of the Universe and the growth of structure. In particular, the baryon acoustic oscillation (BAO) signal imprinted onto these two-point statistics provides a very robust distance measurement, relative to the sound horizon scale, that can be used to measure the distance-redshift relation, probing the expansion history of the Universe. The BAO signature in the galaxy distribution was simultaneously measured for the first time in 2005 by \cite{Eisenstein:2005aa}, using a spectroscopic subsample of luminous red galaxies (LRGs) of the Sloan Digital Sky Survey \citep[SDSS;][]{York:2000aa}, and by \cite{Cole:2005aa} in the Two-degree Field Galaxy Redshift Survey \citep[2dFGRS;][]{Colless:2001aa}. Since then, due to the wealth of information that galaxy surveys provide, much effort has been devoted to designing and performing ever larger galaxy-surveys, such as the Baryon Oscillation Spectroscopic Survey \citep[BOSS;][]{Dawson:2013aa}, WiggleZ \citep{Drinkwater:2010aa} and the Dark Energy Survey \citep[DES;][]{The-Dark-Energy-Survey-Collaboration:2005aa}. Supported by this increasing amount of data, substantial work has been devoted to modelling and detecting the BAO signal in two-point statistics and to using it for cosmological constraints \citep[e.g.][]{Percival:2007aa,Spergel:2007aa,Reid:2010aa,Blake:2011aa,Sanchez:2014aa,Samushia:2013aa,Anderson:2014aa,Alam:2016aa,Beutler:2016aa}. Future projects, such as the Hobby-Eberly Telescope Dark Energy Experiment \citep[HETDEX; ][]{Hill:2008aa}, the Dark Energy Spectroscopic Instrument \citep[DESI;][]{Levi:2013aa}, the Large Synoptic Survey Telescope \citep[LSST; ][]{LSST-Science-Collaboration:2009aa} and the {\it Euclid} mission \citep{Laureijs:2011aa}, will continue on this path, further improving our understanding of the Universe. There are two important issues related to the traditional study of LSS that need to be considered. First, in order to use the 3D positions of galaxies, it is necessary to assume a fiducial cosmological model to transform the measured angular positions on the sky and redshifts of galaxies into comoving coordinates or distances, a process which could bias the parameter constraints if not treated carefully (see e.g. \citealt{Eisenstein:2005aa} and \citealt{Sanchez:2009aa}). Secondly, in order to obtain a precise measurement of either the correlation function or the power spectrum, the whole galaxy sample is usually used to obtain a single measurement, typically averaging over a wide redshift range and assuming that the measurement at the mean redshift is representative of the entire sample, washing out information on the redshift evolution of the structures. A simple way to avoid the first issue is to use two-point statistics based only on direct observables, i.e. only angular positions and/or redshifts, such as the angular correlation function $\omega(\theta)$ or the angular power spectrum $C_\ell$.
This is done by dividing the sample into redshift bins, or shells, in order to recover information along the line of sight, which otherwise would be lost due to projection effects. Using the clustering in redshift shells solves the second issue of the 3D analysis, providing information on the redshift-evolution of the galaxy-clustering signal, which can be leveraged to put constraints on time-evolving quantities such as the galaxy bias and the growth of structures. Recently, a large amount of effort has been committed to developing, testing and applying different variations of this methodology \citep[e.g.][]{Crocce:2011aa,Crocce:2011ab,Ross:2011ab,Sanchez:2011aa,de-Simoni:2013aa,Asorey:2012aa,Asorey:2014aa,Di-Dio:2014aa,Salazar-Albornoz:2014aa,Eriksen:2015aa,Eriksen:2015ab,Eriksen:2015ac,Eriksen:2015ad,Carvalho:2016aa}. This paper extends and applies the {\it clustering tomography} analysis in \cite{Salazar-Albornoz:2014aa} to the final galaxy sample of BOSS. It complements a series of companion papers analysing this sample \citep{Alam:2016aa, Beutler:2017aa, Beutler:2017ab, Chuang:2016aa, Grieb:2017aa, Pellejero-Ibanez:2016aa, Ross:2017aa, Sanchez:2017aa, Sanchez:2017ab, Satpathy:2016aa, Tinker:2016aa, Vargas-Magana:2016aa, Wang:2016aa, Zhao:2017aa}, and is organised in the following manner: Section \ref{sec:data} outlines our galaxy sample, our measurements and the complementary datasets included in this study. In Section \ref{sec:method} we describe our methodology, including the modelling of the full shape of the angular correlation function in redshift shells, its analytical full covariance matrix, the optimisation of our binning scheme and the performance of this tomographic approach on our set of mock galaxy catalogues. Section \ref{sec:bias} presents our measurements of the redshift evolution of the linear bias of the BOSS galaxy sample, and the impact on cosmological constraints of assuming different models for its evolution. Section \ref{sec:constraints} shows our constraints on cosmological parameters for different parameter spaces, obtained by combining our measurements of the angular clustering signal in redshift shells with other datasets. Final conclusions are presented in Section \ref{sec:conclusions}.
\label{sec:conclusions} We applied a tomographic technique to analyse galaxy clustering, based on \cite{Salazar-Albornoz:2014aa}, to the final BOSS galaxy sample. For this purpose, we extended our description of the full shape of $\omega(\theta)$ to use state-of-the-art modelling of non-linearities, galaxy bias and RSD. We also extended the analysis to include cross-correlation measurements between redshift shells. In order to maximise the constraining power of our measurements, we optimised the number of redshift shells used in the analysis, by means of maximising the FoM in the $\Omega_{\rm m}-w$ plane. We did this by exploring three different cases: (i) a Fisher-matrix approach that resulted in a monotonic increase in the FoM as a function of the number of shells; (ii) an MCMC analysis using synthetic data, where we only varied $\Omega_{\rm m}$ and $w$, which showed a clear maximum in the FoM; and (iii) an analogous MCMC test, where we also included the nuisance parameters of the model, which resulted in the same behaviour as (ii), but with a smaller value for the FoM. We defined our binning scheme on the basis of the last case, where our final configuration consisted of 18 redshift shells of different widths, containing $\sim70000$ galaxies each, plus as many cross-correlations with subsequent shells as necessary to surpass the BAO scale along the line of sight. We tested our methodology against a set of $1000$ {\sc md-patchy} mock catalogues, which are designed to match the characteristics of the final BOSS galaxy sample, following its angular and radial selection function, as well as including the redshift evolution of bias and RSD. Using the mean of the $1000$ mock catalogues, we ran an MCMC analysis constraining very general cosmologies, using three different models for the evolution of the linear galaxy-bias. We were able to recover unbiased cosmological information for two of these models, and biased results at the $1\sigma$ level for the constant galaxy-clustering (CGC) model. Also, we repeated this test on a subset of $100$ mocks using one of the galaxy-bias models that resulted in unbiased constraints, and performed an MCMC analysis on each mock catalogue individually. In these tests we found excellent agreement between the statistical errors and those estimated by our model for the full covariance matrix of $\omega(\theta)$. Next, we analysed the redshift evolution of the linear bias of BOSS galaxies. Fixing the cosmological parameters to the best-fitting $\Lambda$CDM model to the final Planck CMB observations, we fit the linear bias parameter of our model for the galaxy-clustering signal, marginalising over the other nuisance parameters and $\sigma_8$ with a Planck prior. Also, using the same three models for the redshift evolution of the linear galaxy-bias used in the previous tests, we fit the clustering amplitude of $\omega(\theta)$ in all redshift shells simultaneously. We found that all three models are able to reproduce the observed redshift evolution of the linear bias well up to redshift $z\sim0.6$, where the BOSS sample is close to a volume-limited one. However, none of them were able to reproduce the observed scatter in the measurements within $0.6\lesssim z\lesssim 0.75$, where the BOSS sample behaves as flux-limited.
For this reason, and because two of the three bias models depend on the linear growth factor $D(z)$, in order to avoid biased cosmological constraints, we decided not to include the measurements in these high-redshift shells in our tomographic analysis. We tested the impact that assuming these three models for the redshift evolution of the linear galaxy-bias has on the obtained constraints on cosmological parameters. Combining our measurements of $\omega(\theta)$ from BOSS with the CMB measurements from Planck, we obtained constraints on the $w$CDM parameter-space using each of the three galaxy-bias models, and found no significant difference between them, showing that this analysis provides robust constraints. Finally, combining the information obtained from the application of our tomographic approach to the final BOSS galaxy-sample with the latest Planck CMB observations and type Ia supernovae (SNIa), we constrain the parameters of the standard $\Lambda$CDM cosmological model and its most important extensions, including non-flat universes, more general dark-energy models, neutrino masses, and possible deviations from the predictions of general relativity. In general, these constraints are comparable to the most precise present-day cosmological constraints in the literature, further consolidating the $\Lambda$CDM model as the standard cosmological paradigm. In particular, in all the cases where we allow $w_{\rm{DE}}$ to deviate from its fiducial value of $-1$, either as a constant or as time-dependent, our final constraints are in good agreement with those cases where $w_{\rm{DE}}$ is fixed to $-1$. For the simplest $w$CDM extension we obtain $w_{\rm{DE}} = -0.958^{+0.063}_{-0.055}$ for the combination of our $\omega(\theta)$ measurements with Planck, and $w_{\rm{DE}} = -0.991\pm 0.046$ for the full $\rm{Planck}+\omega(\theta)+\rm{SNIa}$ combination. For models including $\Omega_{\rm K}$, with $w$ fixed to $-1$ or treated as a free parameter, we find $|\Omega_{\rm K}|\sim10^{-3}$, consistent with no curvature within the errors. Although we do not find a clear detection of the total sum of neutrino masses, we obtain upper limits that can be considered among the tightest ones available at present: in the $\nu\Lambda$CDM case, we obtain $\sum m_\nu/\rm{eV}< 0.207(0.400)$ $68\%$ ($95\%$) confidence interval (C.I.) upper limits for the $\rm{Planck}+\omega(\theta)$ combination, while for the full $\rm{Planck}+\omega(\theta)+\rm{SNIa}$ case, we find $\sum m_\nu/\rm{eV}< 0.169(0.330)$ $68\%$ ($95\%$) C.I. upper limits. Furthermore, we see no significant deviations from the GR predictions for the linear growth of structures, parametrised by the growth-index parameter $\gamma$, either when assuming $\Lambda$CDM as the background cosmological model or when $w_{\rm{DE}}$ is also treated as a free parameter. In summary, the methodology of analysing the large-scale structure of the Universe presented in this work, using angular galaxy-clustering measurements in thin redshift-shells, is an excellent alternative to the traditional 3D clustering analysis. It avoids the two main issues of the traditional approach, by using cosmology-independent measurements, and by being able to trace the redshift evolution of the clustering signal. Furthermore, this technique is able to provide precise constraints on cosmological parameters, proving to be a valid and very robust method to analyse present and future large galaxy-surveys.
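As an illustrative aside, the figure of merit used above for the binning optimisation can be sketched as the inverse square root of the determinant of the marginalised $\Omega_{\rm m}-w$ covariance obtained from a Fisher matrix; the matrix below is a toy example, not one of the actual analysis products.

\begin{verbatim}
# Minimal sketch (toy numbers): FoM in the Omega_m-w plane from a Fisher
# matrix, marginalising over any additional (nuisance) parameters.
import numpy as np

def figure_of_merit(fisher, idx=(0, 1)):
    cov = np.linalg.inv(fisher)            # marginalised covariance
    sub = cov[np.ix_(idx, idx)]            # 2x2 block for (Omega_m, w)
    return 1.0 / np.sqrt(np.linalg.det(sub))

# Toy Fisher matrix for (Omega_m, w, one nuisance parameter):
F = np.array([[4.0e4, 1.2e3, 5.0e2],
              [1.2e3, 2.5e2, 8.0e1],
              [5.0e2, 8.0e1, 9.0e2]])
print("FoM(Omega_m, w) =", figure_of_merit(F))
\end{verbatim}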
16
7
1607.03144
1607
1607.04552_arXiv.txt
Finding the optimal ordering of $k$-subsets with respect to an objective function is known to be an extremely challenging problem. In this paper we introduce a new objective for this task, rooted in the problem of star identification on board spacecraft: subsets of detected spikes are to be generated in an ordering that minimizes the time to detection of a valid star constellation. We carry out an extensive analysis of the combinatorial optimization problem, and propose multiple algorithmic solutions, offering different quality-complexity trade-offs. Three main approaches are investigated: exhaustive search (branch and prune), goal-driven (greedy scene elimination, minimally intersecting subsets), and stateless algorithms which implicitly seek to satisfy the problem's goals (pattern shifting, base unrank). In practical terms, these last algorithms are found to provide satisfactory approximations to the ideal performance levels, at small computational costs.
In this paper, we introduce a novel objective for the generation of all $k$-subsets of $n$ elements and we discuss the structure of the resulting combinatorial optimization task. In general, improving the order of elements in a sequence towards some objective is recognized to be a complex optimization task \cite{dewar}, with interesting applications in computer science such as unit test coverage \cite{bryce,rothermel}. Star trackers (see Figure \ref{fig:hydra}) are common sensors used by a spacecraft to determine its attitude by looking at fixed stars. The problem we formalize and tackle here was suggested by the work of Mortari et al. \cite{MortariEtAl2004} on the design of efficient algorithms for star identification. In that paper, the authors consider a ``lost-in-space'' spacecraft attitude identification problem: find the orientation of a spacecraft in deep space using a single star tracker image. Such a problem corresponds to that of identifying $k$ stars in a ``scene'' (i.e. a picture taken by the star tracker) containing $n$ spikes, of which $t$ are unknown stars and the rest are artifacts due to various disturbances present in harsh space environments. Part of the algorithm proposed in that paper, called the Pyramid Algorithm and today widely used in many star trackers in orbit, needs to generate all $\binom{n}{3}$ combinations in a smart order that allows the discovery of three true stars in the scene from a minimal number of star catalog queries.
We consider a star identification problem and map it to a $k$-subsets (queries) optimal ordering problem. We provide an in-depth analysis of its structure, proving interesting mathematical properties that are used in the design and assessment of solution algorithms. A number of algorithms with different complexity and performance, covering a wide spectrum of the parameters $n$ and $k$, are proposed and shown to advance the state of the art in star identification research. For small $n<7$ we are able to provide the optimal solutions, while for higher $n$ ($< 32$) our algorithms are only able to compute and score suboptimal solutions. We released our best solutions online\footnotemark[1]. The problem complexity is such that for even higher $n$ it is extremely challenging to even compute the considered scoring function. Nevertheless, we present a class of algorithms with polynomial complexity in the output size $N$, producing better sequences than the average random sequence, at least for the parameter range that could be tested.
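For concreteness, the snippet below is a standard lexicographic $k$-subset unranking routine, given as a generic illustration of stateless subset generation; it is not the paper's pattern-shifting or base-unrank algorithm.

\begin{verbatim}
# Minimal sketch: return the r-th (0-based) k-subset of {0,...,n-1} in
# lexicographic order, without enumerating the preceding subsets.
from math import comb

def unrank_combination(n, k, r):
    combo, x = [], 0
    for i in range(k, 0, -1):
        # advance x while skipping all subsets that start below it
        while comb(n - x - 1, i - 1) <= r:
            r -= comb(n - x - 1, i - 1)
            x += 1
        combo.append(x)
        x += 1
    return combo

# Example: the triple of rank 6 out of C(5,3) = 10 spike triples
print(unrank_combination(5, 3, 6))   # -> [1, 2, 3]
\end{verbatim}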
16
7
1607.04552
1607
1607.08060_arXiv.txt
NGC~5548 is the best-observed reverberation-mapped active galactic nucleus with long-term, intensive monitoring. Here we report results from a new observational campaign between January and July, 2015. We measure the centroid time lag of the broad H$\beta$ emission line with respect to the 5100 \AA\, continuum and obtain $\tau_{\rm cent} = 7.20^{+1.33}_{-0.35}$ days in the rest frame. This yields a black hole mass of $\mbh=8.71^{+3.21}_{-2.61} \times 10^{7}M_{\odot}$ using a broad H$\beta$ line dispersion of $3124\pm302$ \kms\ and a virial factor of $\fblr=6.3\pm1.5$ for the broad-line region (BLR), consistent with the mass measurements from previous H$\beta$ campaigns. The high-quality data allow us to construct a velocity-binned delay map for the broad H$\beta$ line, which shows a symmetric response pattern around the line center, a plausible kinematic signature of virialized motion of the BLR. Combining all the available measurements of H$\beta$ time lags and the associated mean 5100~{\AA} luminosities over 18 campaigns between 1989 and 2015, we find that the H$\beta$ BLR size varies with the mean optical luminosity, but, interestingly, with a possible delay of $2.35_{-1.25}^{+3.47}$\,yrs. This delay coincides with the typical BLR dynamical timescale of NGC~5548, indicating that the BLR undergoes dynamical changes, possibly driven by radiation pressure.
Reverberation mapping (RM) is a powerful tool to probe the geometry and structure of broad-line regions (BLRs) in active galactic nuclei (AGNs) (\citealt{Bahcall1972, Blandford1982, Peterson1993}). Over the past four decades, great efforts on RM monitoring have yielded a precious sample of $\sim60$ nearby Seyfert galaxies and quasars with measurements of H$\beta$ time lags (e.g., \citealt{Bentz2013, Du2016a}), among them NGC~5548, the best-observed source, which has been intensively monitored by 17 individual RM campaigns, including the recent Space Telescope and Optical Reverberation Mapping Project (AGN STORM; \citealt{DeRosa2015, Edelson2015, Fausnaugh2015arXiv}; see \citealt{Peterson2002} for a summary of the first 13 campaigns; \citealt{Bentz2007, Bentz2009}). NGC~5548 therefore serves as a valuable laboratory to study in detail the long-term variations of the BLR (\citealt{Wanders1996, Sergeev2007}), as well as the consistency and reliability of RM-based black hole (BH) mass measurements (\citealt{Peterson1999, Collin2006}). NGC 5548 follows the relation $\rblr\propto L_{5100}^{0.79\pm0.2}$ (\citealt{Eser2015}), where $L_{5100}$ is the optical luminosity at 5100 \AA. This relation for NGC 5548 is significantly different from the well-known radius$-$luminosity relation $\rblr\propto L_{5100}^{0.53_{-0.03}^{+0.04}}$ for the overall RM sample (\citealt{Kaspi2000}; \citealt{Bentz2013}). This difference needs to be understood. On the other hand, the geometry and kinematics of the BLR in NGC 5548 have been investigated by velocity-resolved mapping in several studies (e.g., \citealt{Denney2009b}; \citealt{Bentz2010}; \citealt{DeRosa2015}), and by recently developed dynamical modelling \citep{Pancoast2014} using the data taken by the 2008 Lick AGN Monitoring Project (LAMP; \citealt{Bentz2009}). However, the inferred BLR dynamics seems diverse, and there is no consensus\footnote{This can be seen by comparing Figure 3 from Denney et al. (2009b) with Figure 19 from Bentz et al. (2010), who present observations from 2007 and 2008, respectively. Such a difference cannot be caused by intrinsic variations of the BLR because the time separation between the two campaigns is significantly shorter than the BLR dynamical timescale (see Equation 9).}. To investigate the above issues, we conducted a new observational campaign for NGC 5548 in 2015. This paper presents the results of our new RM campaign. In Section 2, we describe the observations and the data reduction in detail. In Section 3, we perform the time series analysis, measure the H$\beta$ time lags, and construct the velocity-resolved lags of the broad $\rm H\beta$ line. We investigate the structure and dynamics of the BLR in Section 4, and discuss the BH mass measurements, accretion rates, and the long-term variations of BLR size in Section 5. We draw our conclusions in Section 6. Throughout the paper, a cosmology with $H_0=67{\rm ~km~s^{-1}~Mpc^{-1}}$, $\Omega_{\Lambda}=0.68$, and $\Omega_{\rm M}=0.32$ is adopted (\citealt{Ade2014}).
We present results of a new RM campaign on NGC~5548 based on high-quality optical spectra taken in 2015. We measure a centroid time lag for the broad $\rm H\beta$ line of $\tau_{_{\rm H\beta}} = 7.20^{+1.33}_{-0.35}$ days in the rest frame. Adopting a virial factor of $\fblr=6.3\pm1.5$ and an H$\beta$ line dispersion of $\sigma_{\rm line} = 3124\pm302$ \kms, we measure a BH mass of $\mbh=8.71^{+3.21}_{-2.61} \times 10^7M_{\odot}$. We obtain the following results: \begin{itemize} \item The velocity-resolved delay map of the broad H$\beta$ line shows a symmetric structure, consistent with the previous results of \cite{Denney2009b}. \item The relation between H$\beta$ line width and H$\beta$ time lag is consistent with virial motions. The virial product varies only weakly with luminosity and is largely constant. \item The BLR size of NGC 5548 follows $R_{\rm H\beta} \propto L_{\rm 5100}^{0.86}$, steeper than the slope of $\sim 0.5$ for the global $R_{\rm H\beta}-L_{\rm 5100}$ relation for all RM AGNs. \item Examining the variation patterns of $\rblr$ and $\bar{L}_{5100}$, we find tentative evidence that $\rblr$ follows $\bar{L}_{5100}$ with a delay of $2.35^{+3.47}_{-1.25}$\,yrs. This is consistent with the dynamical timescale of the BLR, implying that the long-term variations of the BLR may be driven by radiation pressure. \end{itemize}
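For reference, the black-hole mass quoted above follows from the standard virial estimate $\mbh = \fblr\, c\,\tau\, \sigma_{\rm line}^2 / G$; the short numerical sketch below simply reproduces the quoted value from the measured lag and line dispersion (small differences come from rounding the inputs).

\begin{verbatim}
# Minimal sketch: virial BH mass M = f_BLR * (c*tau) * sigma^2 / G (cgs units),
# using the measured values quoted in the text.
G     = 6.674e-8        # cm^3 g^-1 s^-2
c     = 2.998e10        # cm s^-1
M_sun = 1.989e33        # g

tau   = 7.20 * 86400.0  # rest-frame centroid lag [s]
sigma = 3124.0e5        # H-beta line dispersion [cm s^-1]
f_blr = 6.3             # adopted virial factor

M_bh = f_blr * c * tau * sigma**2 / G
print("M_BH ~ %.2e M_sun" % (M_bh / M_sun))   # ~8.6e7, consistent with the text
\end{verbatim}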
16
7
1607.08060
1607
1607.01188_arXiv.txt
{The two currently largest all-sky photometric datasets, WISE and SuperCOSMOS, have been recently cross-matched to construct a novel photometric redshift catalogue on 70\% of the sky. Galaxies were separated from stars and quasars through colour cuts, which may leave imperfections because different source types may overlap in colour space.} {The aim of the present work is to identify galaxies in the WISE $\times$ SuperCOSMOS catalogue through an alternative approach of machine learning. This allows us to define more complex separations in the multi-colour space than is possible with simple colour cuts, and should provide a more reliable source classification.} {For the automatised classification we used the support vector machines (SVM) learning algorithm and employed SDSS spectroscopic sources that we cross-matched with WISE $\times$ SuperCOSMOS to construct the training and verification set. We performed a number of tests to examine the behaviour of the classifier (completeness, purity, and accuracy) as a function of source apparent magnitude and Galactic latitude. We then applied the classifier to the full-sky data and analysed the resulting catalogue of candidate galaxies. We also compared the resulting dataset with the one obtained through colour cuts.} {The tests indicate very high accuracy, completeness, and purity ($>95\%$) of the classifier at the bright end; this deteriorates for the faintest sources, but still retains acceptable levels of $\sim85\%$. No significant variation in the classification quality with Galactic latitude is observed. When we applied the classifier to all-sky WISE $\times$ SuperCOSMOS data, we found 15 million galaxies after masking problematic areas. The resulting sample is purer than the one produced by applying colour cuts, at the price of a lower completeness across the sky.} {The automatic classification is a successful alternative approach to colour cuts for defining a reliable galaxy sample. The identifications we obtained are included in the public release of the WISE $\times$ SuperCOSMOS galaxy catalogue.\thanks{Available from \url{http://ssa.roe.ac.uk/WISExSCOS}.}}
Modern wide-field astronomical surveys include millions of sources, and future catalogues will increase these numbers to billions. As most of the detected objects cannot be followed up spectroscopically, research done with such datasets will heavily rely on photometric information. Without spectroscopy, however, an appropriate identification of various source types is complicated. In the seemingly most trivial case of star-galaxy separation in deep-imaging catalogues, we quickly reach the limit where this cannot be done based on morphology: we lack resolution, and distant faint galaxies become unresolved or point-like, similar to stars \citep[e.g.][]{vasconcellos11}. Additional information is then needed to separate out these and sometimes other classes of sources (such as point-like but extragalactic quasars). This has traditionally been done with magnitude and colour cuts; however, when the parameter space is multidimensional, such cuts become very complex. Additionally, noise in photometry scatters sources from their true positions in the colour space. This, together with the huge numbers of sources, most of which are usually close to the survey detection limit, precludes reliable overall identification with any manual or by-eye methods. For these reasons, the idea of automatised source classification has recently gained popularity and was applied to multi-wavelength datasets such as AKARI \citep{solarz12}, the Panoramic Survey Telescope and Rapid Response System \citep[Pan-STARRS,][]{saglia12}, the VIMOS Public Extragalactic Redshift Survey \citep[VIPERS,][]{malek13SVM}, the cross-match of the Wide-field Infrared Survey Explorer -- Two Micron All Sky Survey (WISE--2MASS) \citep{KoSz15}, the Sloan Digital Sky Survey \citep[SDSS,][]{Brescia15}, and WISE-only data \citep{Kurcz16}, and it has been tested in view of the Dark Energy Survey data \citep{soumagnac13}. The present paper describes an application of a machine-learning algorithm to identify galaxies in a newly compiled dataset, based on the two currently largest all-sky photometric catalogues: WISE in the mid-infrared, and SuperCOSMOS in the optical. This work is a refinement of a simpler approach to source classification that was applied in \cite{WISC16}, hereafter \tcb{B16}, where stars and quasars were filtered out on a statistical basis using colour cuts to obtain a clean galaxy sample for the purpose of calculating photometric redshifts. The two parent catalogues we use here, described in detail below and in Sect.\ \ref{Sec: Data}, each include about a billion detections, of which a large part are in common. For various reasons, however, the available data products from these two surveys offer limited information on the nature of the catalogued objects, which indeed presents a challenge to the classification task. WISE \citep{WISE}, which is the more sensitive of the two, suffers from low native angular resolution resulting from the small aperture of the telescope (40 cm): it is equal to $6.1''$ in its shortest $W1$ band (3.4 $\mu$m), increasing to $12''$ at the longest wavelength $W4$ of 23 $\mu$m. This leads to severe blending in crowded fields, such as at low Galactic latitudes, and the original photometric properties of the blended sources become mixed.
In addition, proper isophotal photometry has not been performed for the majority of WISE detections, and no WISE all-sky extended source catalogue is available as yet (see, however, \citealt{Cluver14} and \citealt{Jarrett16} for descriptions of ongoing efforts to improve on this situation). Finally, WISE-based colours provide limited information for classification purposes: at rest frame, the light in its two most sensitive passbands, $W1$ and $W2$ (3.4 and 4.6 $\mu$m), is emitted from the photospheres of evolved stars (Rayleigh-Jeans tail of the spectrum), and the catalogue is dominated by stars and galaxies of relatively low redshift, which typically have similar $W1-W2$ colours. The two other WISE filters, $W3$ and $W4$, centred on 12 and 23 $\mu$m, respectively, which might serve to reliably separate out stars from galaxies and QSOs when combined with $W1$ and $W2$ \citep{WISE}, offer far too low detection rates to be applicable for most of the WISE sources. SuperCOSMOS (hereafter SCOS), on the other hand, which is based on the scans of twentieth century photographic plates \citep{SCOS1}, does offer point and resolved source identification \citep{SCOS2}. This classification, although quite sophisticated, is based mostly on morphological information, however: on the one hand, unresolved galaxies and quasars are classified as point sources, and on the other, blending in crowded fields (Galactic Plane and Bulge, Magellanic Clouds) leads to spurious extended source identifications (see also \citealt{Peacock16}). A cross-match of the WISE and SCOS catalogues improves the classification of different types of sources in a way that is useful for extragalactic applications, as shown in \tcb{B16}. However, although only extended SCOS sources were considered in B16, blends mimicking resolved objects dominated at Galactic latitudes as high as $\pm30\degree$ and had to be removed on a statistical basis. In the present paper we improve on that work by generating a wide-angle (almost full-sky) galaxy catalogue from the \WISC\ cross-match through machine learning. For this purpose we use the support vector machines (SVM) supervised algorithm. A similar task for other WISE-based datasets was undertaken in two recent works. \cite{KoSz15}, who used a cross-match of WISE $W1<15.2$ sources with the 2MASS Point Source Catalogue \citep[PSC,][]{2MASS} and performed an SVM analysis in multicolour space, showed that a cut in the $W1_\mrm{WISE}-J_\mrm{2MASS}$ colour efficiently separates stars and galaxies. Based on these results, they produced a galaxy catalogue containing 2.4 million objects with an estimated star contamination of 1.2\% and a galaxy completeness of 70\%. The separation was only made for stars and galaxies, with no information regarding quasars. A limitation of a WISE -- 2MASS cross-match is the much smaller depth of the latter with respect to the former. Most of the 2MASS galaxies are located within $z<0.2$ \citep{2MPZ,Rahman2MASS}, while WISE extends well beyond this, detecting $L_*$ galaxies at $z\sim0.5$ \citep{Jarrett16}. Using only photometric information from WISE, \cite{Kurcz16} employed SVM and attempted to classify all unconfused WISE sources brighter than $W1<16$ into three classes: stars, galaxies, and quasars. This led to the identification of 220 million candidate stars, 45 million candidate galaxies, and 6 million candidate QSOs. The latter sample is, however, significantly contaminated with what was interpreted as a possibly very local foreground, such as asteroids or zodiacal light.
The present paper is laid out as follows: the data are described in Sect.~\ref{Sec: Data}; Sect.~\ref{Sec:classifiaction_method} explains the principles of the support vector machines learning algorithm and introduces the training sample used here; and in Sect.~\ref{Sec:classification_performance} we present various tests that allowed us to quantify the performance of the SVM algorithm. Section~\ref{Sec: Final classification} contains the description and properties of the final galaxy catalogue, as well as a comparison with the results of \tcb{B16}. In Sect.~\ref{Sec:summary} we summarise our analysis.
\label{Sec:summary} The \WISC\ galaxy sample is currently the largest in terms of size and sky coverage at $z\sim0.2$, giving access to angular scales that cannot be probed with samples such as SDSS. At the same time, it is much deeper than other all-sky datasets available from IRAS or 2MASS. Here we presented an approach to identifying galaxies in the \WISC\ photometric data that is an alternative to the colour cuts applied in \cite{WISC16}. By using the support vector machines algorithm, trained and tested on a cross-match of spectroscopic SDSS data with \WISC, we identified about 15~million galaxy candidates over 70\% of the sky. This number is smaller than the 18.5~million obtained by \tcb{B16}, mostly because our sample is of higher purity but lower completeness than the colour-selected sample. The resulting source probabilities assigned by SVM are provided in the photometric redshift \WISC\ dataset released together with the publication of \tcb{B16}, available from the Wide Field Astronomy Unit, Institute for Astronomy, Edinburgh at \url{http://ssa.roe.ac.uk/WISExSCOS.html}. We focused on galaxies because we used only extended (resolved) sources from SuperCOSMOS. Still, this work might be continued to obtain a more general identification of stars, galaxies, and quasars in the full \WISC\ sample. This would, however, require SCOS point-source photometry to be calibrated all-sky in a similar way to the aperture-based measurements \citep{Peacock16}, which is currently not the case. The successful machine-learning galaxy identification in \WISC\ shows that a similar approach will be worthwhile for other samples based on WISE, cross-matched with forthcoming wide-angle datasets such as Pan-STARRS, SkyMapper, or VHS. For WISE itself, the first efforts at all-sky star, galaxy, and QSO separation in that catalogue have been reported in \cite{Kurcz16}.
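As a purely illustrative sketch of the kind of supervised classification described above (generic scikit-learn usage; the file names and feature choices are hypothetical placeholders, not the pipeline actually used for the catalogue):

\begin{verbatim}
# Illustrative sketch only: train an SVM on photometric features of
# spectroscopically labelled sources and apply it to a verification set.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X = np.load("features.npy")   # (N, n_features) colours/magnitudes, hypothetical
y = np.load("labels.npy")     # (N,) 1 = galaxy, 0 = other, from a spectro match

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale", probability=True))
clf.fit(X_tr, y_tr)
print("accuracy on the verification set:", clf.score(X_te, y_te))

p_gal = clf.predict_proba(X_te)[:, 1]   # probability of being a galaxy
\end{verbatim}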
16
7
1607.01188
1607
1607.06858_arXiv.txt
A dark radiation term arises as a correction to the energy momentum tensor in the simplest five-dimensional RS-II brane-world cosmology. In this paper we revisit the constraints on dark radiation based upon the newest results for light-element nuclear reaction rates, observed light-element abundances and the power spectrum of the Cosmic Microwave Background (CMB). Adding dark radiation during big bang nucleosynthesis alters the Friedmann expansion rate, causing the nuclear reactions to freeze out at a different temperature. This changes the final light-element abundances at the end of BBN. Its influence on the CMB is to change the effective expansion rate at the surface of last scattering. We find that our adopted BBN constraints reduce the allowed range for dark radiation to between $-12.1\%$ and $+6.2\%$ of the ambient background energy density. Combining this result with fits to the CMB power spectrum, the range decreases to $-6.0\%$ to $+6.2\%$. Thus, we find that the ratio of dark radiation to the background total relativistic mass-energy density, $\rho_{\rm DR}/\rho$, is consistent with zero, although in the BBN analysis there could be a slight preference for a negative contribution. However, the BBN constraint depends strongly upon the adopted primordial helium abundance.
One of the proposed solutions to the hierarchy problem among the fundamental forces is the introduction of compact extra dimensions. However, this creates a new hierarchy problem between the weak forces and the size of the compact extra dimensions. A possible solution was suggested by Randall and Sundrum \cite{Randall} by introducing a non-compact large extra dimension. In that model, the observed universe is a four-dimensional spacetime embedded in a five-dimensional anti-de Sitter space (AdS5). The projected three-space Friedmann equation of the five-dimensional universe reduces to \cite{Langlois}: \begin{equation} \left(\frac{\dot{a}}{a}\right)^2 =\frac{8 \pi G_{\rm N}}{3} \rho -\frac{K}{a^2}+\frac{\Lambda_{4}}{3} +\frac{\kappa_{5}^4}{36}\rho^2 + \frac{\mu}{a^4}~~. \label{Friedmann} \end{equation} Here $a(t)$ is the usual scale factor for the three-space at time $t$, while $\rho$ is the energy density of matter in the normal three-space. $G_{\rm N}$ is the four-dimensional gravitational constant and is related to its five-dimensional counterpart $\kappa_5$ by \begin{equation} G_{\rm N} = \kappa_{5}^4 \lambda / 48 \pi~~, \end{equation} where $\lambda$ is the intrinsic tension on the brane and $\kappa_5^{2}= M_5^{-3}$, with $M_5$ the five-dimensional Planck mass. The $\Lambda_4$ in the third term on the right-hand side is the four-dimensional cosmological constant and is related to its five-dimensional counterpart by \begin{equation} \Lambda_{4}=\kappa_{5}^4 \lambda^2 /12 + 3 \Lambda_{5}/4~~. \end{equation} Note that for $\Lambda_{4}$ to be close to zero, $\Lambda_{5}$ should be negative. Hence the spacetime is AdS5. In standard Friedmann cosmology only the first three terms arise. The fourth term is probably negligible during most of the radiation-dominated epoch since $\rho^{2}$ decays as $a^{-8}$ in the early universe. However, this term could be significant at the beginning of the epoch of inflation \cite{Maartins00, Okada16,Mayukh}. The last term is the dark radiation \cite{Binetruy00, Mukohyama00}. It is called radiation since it scales as ${a^{-4}}$. It is a constant of integration that arises from the projected Weyl tensor describing the effect of graviton degrees of freedom on the dynamics of the brane. One can think of it, therefore, as a projection of the curvature in higher dimensions. In principle it could be either positive or negative. Although it is dubbed dark radiation, it is not related to relativistic particles. Since it does not gravitate, flow or scatter as would a light neutrino species, its effect on the cosmic microwave background (CMB) is different from that of normal radiation. Nevertheless, since it scales like radiation, its presence can alter the expansion rate during the radiation-dominated epoch. This effect has been studied previously by several authors \cite{Ichiki, Bratt}. Here we update those previous studies in the context of newer constraints on light-element abundances, BBN nuclear reaction rates and the CMB.
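To make the role of the last term concrete, the following minimal sketch (under the stated simplification of pure radiation domination, neglecting curvature, $\Lambda_4$ and the $\rho^2$ term) shows how a dark-radiation fraction rescales the expansion rate; the listed fractions correspond to the limiting values discussed in this paper.

\begin{verbatim}
# Minimal sketch: with mu/a^4 written as (8 pi G_N / 3) rho_DR, both rho and
# rho_DR scale as a^-4 during radiation domination, so
#   H / H_std = sqrt(1 + rho_DR/rho).
import numpy as np

for frac in (-0.121, -0.060, 0.0, 0.062):
    print("rho_DR/rho = %+6.3f  ->  H/H_std = %.4f" % (frac, np.sqrt(1.0 + frac)))
\end{verbatim}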
In conclusion, we deduce that, based upon our adopted $2 \sigma$ (95\% C.L.) BBN constraints, brane-world dark radiation is allowed in the range of $-12.1\%$ to $+6.2\%$ ($\Delta N_\nu = -0.19 \pm 0.56$), compared to the range deduced in Ref.~\cite{Ichiki} of $-123\%$ to $10.5\%$ based upon constraints available at the time of that paper. After taking into account the $2\sigma$ limits on the dark radiation from the fit to the CMB power spectrum, this region shrinks to a range of $-6.0\%$ to $+6.2\%$ ($\Delta N_\nu = -0.19 _{-0.18}^{+0.56}$). The $1 \sigma$ BBN constraint ($2.63 \leq N_{\nu} \leq 3.38$, $\Delta N_\nu = -0.19 \pm 0.28$) is comparable to the values deduced by \cite{bbnreview,PlanckXIII}. For $\eta$ fixed by the {\it Planck} analysis, the constraint on positive dark radiation comes from the upper bound on the $^4$He mass fraction and the upper bound on D/H. The limit on negative dark radiation arises from the constraint on the cosmic expansion rate at the epoch of last scattering (the CMB) and the lower bound on D/H. We caution, however, that a larger value of the Hubble parameter \cite{Riess16} could shift the allowed CMB range, or a larger primordial helium abundance \cite{Izotov14} could shift the BBN range, towards a higher positive contribution of dark radiation. For example, if the higher helium abundance of \cite{Izotov14} were adopted, the allowed range increases to $+3.2$ \% $< \rho_{\rm DR} /\rho < 13.5$ \%. In this case the lower bound is from BBN and the upper bound is from the CMB. We also checked the corresponding $^{7}$Li/H abundance for the allowed ranges of dark radiation. For dark radiation of $+6.2\%$ and $\eta$ of $6.1 \times 10^{-10}$, the lithium abundance is $5.19 \times 10^{-10}$, slightly alleviating the lithium problem. The $3\sigma$ CMB contour combined with D/H corresponds to a lower limit on dark radiation of $-9.1\%$. In this case the corresponding lithium abundance is increased to $5.64 \times 10^{-10}$, exacerbating the lithium problem. Hence, although a positive dark radiation slightly reduces the lithium abundance, it is not sufficient to solve the lithium problem.
16
7
1607.06858
1607
1607.01964_arXiv.txt
We investigate the evolution of an accretion disc in binary black hole (BBH) systems and possible electromagnetic counterparts of the gravitational waves from mergers of BBHs. Perna et al. (2016) proposed a novel evolutionary scenario of an accretion disc in BBHs in which a disc eventually becomes ``dead'', i.e., the magnetorotational instability (MRI) becomes inactive. In their scenario, the dead disc survives until {\it a few seconds before} the merger event. We improve the dead disc model and propose another scenario, taking into account more carefully the effects of the tidal torque from the companion and the critical ionization degree for MRI activation. We find that the mass of the dead disc is much lower than that in Perna's scenario. When the binary separation becomes sufficiently small, the mass inflow induced by the tidal torque reactivates MRI, restarting mass accretion onto the black hole. We also find that this disc ``revival'' happens {\it more than thousands of years before} the merger. The mass accretion induced by the tidal torque increases as the separation decreases, and a relativistic jet could be launched before the merger. The emission from these jets is too faint compared to GRBs, but detectable if the merger events happen within $\lesssim 10$ Mpc or if the black holes are as massive as $\sim 10^5 M_{\odot}$.
\label{sec:intro} The Laser Interferometer Gravitational-wave Observatory (LIGO) detected gravitational wave (GW) signals from merger events of binary black hole (BBH) systems \citep{LIGO16a,LIGO16d}. Although electromagnetic counterparts of GWs from mergers of BBHs were unexpected, the {\it Fermi} Gamma-ray Burst Monitor (GBM) reported the detection of gamma-rays from a direction consistent with GW150914 \citep{FermiGBM16a}, which indicates the possibility of a short gamma-ray burst (GRB) coincident with the merger of a BBH \footnote{No electromagnetic counterparts have been reported for GW151226 \citep{Fermi16a,Pan-STAR16a}}. While some claim that the GBM event is likely to be a false signal \citep{Lyu16a,GBS16a,Xio16a}, some models have been proposed to explain it \citep{PLG16a,JBC16a,Loe16a}. Possible electromagnetic counterparts in other wavelengths are also discussed \citep{MKM16a,YAO16a}. For producing powerful radiation, it is necessary to leave a sufficient amount of material around the merging black holes (BHs). Here, we study disc accretion in BBHs. The evolution of an accretion disc is determined by the efficiency of angular momentum transport. It is believed that turbulent stress induced by the magnetorotational instability (MRI) can efficiently transport the angular momentum \citep{bh91,BH98a}. This instability is active for ionized plasma, whereas it is suppressed when the ionization degree is sufficiently low \citep{Gam96a,MPH01a}. If the disc is MRI ``dead'', the disc material can remain around the BHs for a long time. \citet{PLG16a} argued that the dead disc can remain until a few seconds before the merger, and can supply enough energy to explain the GBM event. However, their model seems to ignore or misestimate a few processes that affect the evolution of an accretion disc in binary systems. One important process is the effect of the tidal torque, which prevents the disc material from expanding outward beyond the tidal truncation radius \citep{IO94a}. This causes the disc mass to decrease faster than that of the well-known self-similar solutions \citep[e.g.][]{CLG90a,MPH01a,PLG16a}. The tidal torque also heats up the outer edge of the dead disc in the late phase of evolution, which can eventually reactivate MRI. Another important point is the critical ionization degree for MRI activation. MRI is usually active even for a very low ionization degree \citep[e.g.][]{Gam96a,SM99a}, and the critical temperature for MRI activation is very low, typically less than a few thousand K. This causes the MRI activation to occur tens of thousands of years before the merger. In this paper, we improve the dead disc model and propose another scenario, which predicts electromagnetic counterparts of GWs whose luminosity increases with time. In Figure \ref{fig:evolution}, we show the schematic evolutionary tracks of the disc mass $m_{\rm d}$, the mass accretion rate $\dot M$, and the binary separation $R_{\rm sep}$. The disc experiences three phases. At first, the disc forgets its initial condition through viscous evolution. Then, the disc mass and the accretion rate decrease with radiative cooling, which leads to a decrease of the ionization degree (phase I). This eventually suppresses MRI, forming a dead disc that remains around the BH until the binary separation sufficiently decreases (phase II). Then, the heating by the tidal torque from the companion becomes effective, which reactivates MRI in the entire region of the disc, restarting accretion onto the BH (phase III-i).
This disc ``revival'' happens many years before the merger \footnote{\citet{PLG16a} mentioned a low-luminosity and long-lasting transient preceding the merger, caused by MRI reactivation due to photons from the outer rim, although they did not discuss it in detail.}. We describe this model in detail in Section \ref{sec:evolv}. The mass accretion rate increases as the separation decreases, and a relativistic jet could be launched owing to the high accretion rate (phase III-ii). We estimate the flux of the electromagnetic emission from the jet and discuss its detectability in Section \ref{sec:detect}. Section \ref{sec:summary} is devoted to summary and discussion. \begin{figure} \includegraphics[width=\linewidth]{f1.eps} \caption{Schematic evolutionary tracks of the disc mass (red), the mass accretion rate (blue), and the binary separation (magenta). Note that this is a double logarithmic plot and that phase II is much longer than the other phases.} \label{fig:evolution} \end{figure}
\label{sec:summary} We study the evolution of an accretion disc in BBH systems and propose an evolutionary track of the disc, which leads to a different conclusion from the previous work \citep{PLG16a}. At first, the disc viscously expands outward, but the companion prevents the disc from expanding beyond $r_{\rm out}$ due to the tidal torque. The viscous evolution of the disc results in a decrease of the disc mass and temperature. When the disc sufficiently cools down (typically to less than 3000 K), the dead disc forms because MRI becomes inactive. Since the thermal instability causes a rapid drop of the disc temperature, the disc becomes dead when the temperature becomes less than a few tens of thousands of K. This dead disc remains until the binary separation sufficiently decreases. As the binary separation decreases, the position at which the tidal torque is effective moves inward, and the mass of the outer rim increases. Then, the angular momentum is transported by the tidal torque, which induces a mass inflow from the outer rim to the dead disc. When the mass inflow by the tidal torque becomes higher than $\dot M_{\rm dead}$, the accretion heating activates MRI, restarting the mass accretion from the disc to the central black hole (the disc revival). This disc revival typically happens tens of thousands of years before the merger event. The evolution of the revived disc is determined by the tidal torque, keeping $t_{\rm vis}\sim t_{\rm GW}$. The mass accretion rate of the revived disc increases with time. In the late phase of the revived disc evolution, the mass accretion rate can exceed the Eddington rate, and a relativistic jet is expected to be launched. We estimate the electromagnetic flux from the jet and discuss its detectability. Since the jet luminosity increases with time, so does the X-ray flux from the internal shock. This flux can be detectable before the merger event. The afterglow can typically be luminous a few thousand seconds after the merger. The estimated flux from the jet is too low to explain the GBM event, but it is detectable by optical transient surveys or X-ray monitoring systems if the merger events happen in the local universe ($\lesssim10\rm~Mpc$) or if the BHs are very massive ($\sim10^5~M_{\odot}$). In Section \ref{sec:initial}, the disc physical quantities are mildly inconsistent with the thin-disc approximation in the early phase. When $H/r>1$ and $p_{\rm rad}>p_{\rm gas}$, we should use the slim disc solution, which has different features from the standard thin disc \citep{abr+88,CG09a}. In this regime, the disc mass decreases more rapidly than in the standard thin disc, which shortens $t_{\rm dead}$. When the mass accretion rate becomes lower than the Eddington rate, the disc state changes from the slim disc to the thin and radiation-pressure dominant disc \citep{abr+88,KFM08a}. This regime is thermally unstable \citep{SS76a}. Some models with a different expression of the stress can avoid this instability \citep{SC81a,HBK09a}. However, the most recent simulation with a wide calculation range and a better radiative transfer scheme shows that the solution is thermally unstable \citep{jia+13}, and it is unlikely to be realized. Thus, the disc state is expected to change to the thin and gas-pressure dominant disc soon after the slim disc regime ends. Since $\dot M_{\rm dead}$ is much less than the Eddington rate, the thin and gas-pressure dominant disc is realized whenever the disc becomes dead.
Therefore, even if we take into account the disc evolution discussed above, our estimate in Section \ref{sec:evolv} would not change, except that $t_{\rm dead}$ would be shortened. The shortened $t_{\rm dead}$ would not affect our statement that the disc evolution time is much shorter than the decreasing time of the binary separation. In Section \ref{sec:dead}, we ignore ionization by cosmic rays (CRs), although their effect on the accretion process is still under debate \citep[e.g.,][]{BS13a}. The CRs ionize the disc surface layer of $\Sigma_z=\int_z^\infty\rho(z)dz\lesssim100~\rm~g~cm^{-2}$, where $\rho(z)$ is the density \citep{UN81a}. Assuming the density of CRs is the same as that in the interstellar medium of the Galaxy, we write the ionization rate as $\zeta_{\rm cr}\sim10^{-17}\rm~s^{-1}$ \citep{UN81a}. The equilibrium condition between the ionization by CRs and recombination is $\zeta_{\rm cr}n_H=\beta_{\rm rec}n_en_p$, where $\beta_{\rm rec}=6.22\times10^{-13}T_{3.5}^{-3/4}~{\rm cm^3~s^{-1}}$ is the radiative recombination rate \citep[the UMIST database,][]{UMIST12}. Assuming $n_H=\Sigma/(2m_p H)$, $n_e=n_p$, and $n_e=\chi_e n_H$, we obtain the equilibrium ionization degree $\chi_{\rm cr}$. The instability condition for MRI is $\chi_{\rm dead}\le\chi_{\rm cr}$. We calculate the critical $\beta_{\rm pl}$ below which the MRI is active, whose values are 26, 2.2$\times10^2$, and $8.9\times10^3$ for models A, B, and C in Section \ref{sec:detect}, respectively. Since the value of $\beta_{\rm pl}$ expected from the MRI turbulence ranges from 10 to 100, the layered accretion is likely for models B and C. In this case, the surface layer of $\Sigma_{\rm active}\sim 100\rm~g~cm^{-2}$ accretes onto the BHs \citep{Gam96a}. The mass loss by the layered accretion in $t_{\rm mer}$ is estimated to be $M_{\rm lay}\sim3\pi\nu\Sigma_{\rm active}t_{\rm mer}$, the values of which are 46 $M_{\odot}$, 68 $M_{\odot}$, and 1.6$\times10^3M_{\odot}$ for models A, B, and C, respectively. These values are much higher than the mass of the dead disc. Therefore, some mechanism to reduce the CR density is necessary for the dead disc to survive until the disc revival in models B and C. In Section \ref{sec:revival}, we use some assumptions, such as a constant separation parameter ($\asep=0.3$) and $\widetilde{\Sigma}\sim m_{\rm dead}/(\pi r_{\rm out}^2)$. In order to verify these assumptions, we should perform long-term non-ideal magneto-hydrodynamical simulations with cold fluid and non-axisymmetric gravity. This is because (a) the viscous time is much longer than the dynamical time, (b) the resistivity is essential for the death and revival of the disc, (c) the sound speed in the disc is much slower than the Keplerian velocity, and (d) the tidal torque is a non-axisymmetric effect. Such simulations remain as future work.
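As a small numerical sketch of the cosmic-ray ionization balance discussed above (using the quoted $\zeta_{\rm cr}$ and $\beta_{\rm rec}$; the density and temperature in the example are arbitrary illustrative values, not the model parameters):

\begin{verbatim}
# Minimal sketch: equilibrium ionization degree from
#   zeta_cr * n_H = beta_rec * n_e * n_p  with  n_e = n_p = chi * n_H,
# giving chi_cr = sqrt( zeta_cr / (beta_rec * n_H) ).
import numpy as np

def chi_cr(n_H, T, zeta_cr=1e-17):
    """n_H in cm^-3, T in K, zeta_cr in s^-1 (interstellar CR value)."""
    beta_rec = 6.22e-13 * (T / 10**3.5) ** (-0.75)   # cm^3 s^-1, as in the text
    return np.sqrt(zeta_cr / (beta_rec * n_H))

# Arbitrary illustrative values for a cold, dense dead-disc midplane:
print(chi_cr(n_H=1e12, T=1.0e3))   # ~ a few times 1e-9
\end{verbatim}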
16
7
1607.01964
1607
1607.01363_arXiv.txt
{ In the presence of anisotropic sources during the inflationary era, the trispectrum of the primordial curvature perturbation has a very specific angular dependence between each wavevector, described by Legendre polynomials, that is distinguishable from the one encountered when only scalar fields are present. We examine the imprints left by such curvature trispectra on the $TT\mu$ bispectrum, generated by the correlation between temperature anisotropies (T) and chemical potential spectral distortions ($\mu$) of the Cosmic Microwave Background (CMB). Due to the angular dependence of the primordial signal, the corresponding $TT\mu$ bispectrum strongly differs in shape from the $TT\mu$ sourced by the usual $g_{\rm NL}$ or $\tau_{\rm NL}$ local trispectra, enabling us to obtain an unbiased estimation. From a Fisher matrix analysis, we find that, in a cosmic-variance-limited (CVL) survey of $TT\mu$, the minimum detectable value of the quadrupolar Legendre coefficient is $d_2 \sim 0.01$, which is 4 orders of magnitude better than the best value attainable from the $TTTT$ CMB trispectrum. In the case of an anisotropic inflationary model with an $f(\phi)F^2$ interaction (coupling the inflaton field $\phi$ with a vector kinetic term $F^2$), the size of the curvature trispectrum is related to that of the quadrupolar power spectrum asymmetry, $g_*$. In this case, a CVL measurement of $TT\mu$ makes it possible to measure $g_*$ down to $10^{-3}$. }
Measurements of higher-order correlators of the primordial curvature fluctuation can play a crucial role in understanding the initial conditions of our Universe. In the usual single-field slow-roll inflationary scenario, the induced curvature perturbation is a nearly Gaussian field, and all the statistical information is then confined to the 2-point correlator or the power spectrum \cite{Acquaviva:2002ud,Maldacena:2002vr}. In contrast, higher-order correlators, such as the bispectrum and the trispectrum, are direct indicators of non-Gaussianity (NG), and their presence provides evidence for, e.g., some other source fields or some nonlinear interactions. Detailed analyses of their features, such as the shape and the scale dependence, or tests of the consistency relations between $n$-point and $(n-1)$-point correlators, thus provide essential information to select observationally viable Early Universe models (see e.g., \cite{Bartolo:2004if,Komatsu:2010hc,Chen:2010xka,Ade:2013ydc,Ade:2015ava} and references therein for review). Primordial higher-order correlation functions have been deeply investigated using observational data of the Cosmic Microwave Background (CMB) anisotropies \cite{Komatsu:2008hk,Komatsu:2010fb,Bennett:2012zja}. Recent analyses using {\it Planck} data give constraints on primordial NGs with nearly cosmic-variance-limited (CVL) accuracy \cite{Ade:2013ydc,Feng:2015pva,Ade:2015ava}, as far as CMB temperature anisotropies are concerned. Higher-order correlators related to Large Scale Structure (e.g., \cite{Giannantonio:2011ya, Maartens:2012rh, Byun:2014cea, Raccanelli:2015oma}) or 21-cm fluctuations (e.g., \cite{Cooray:1999kg,Cooray:2004kt,Cooray:2008eb, Pillepich:2006fj,Munoz:2015eqa, Shimabukuro:2015iqa}) are expected as future NG observables. This paper focuses on another observable that has been shown to be particularly promising to constrain primordial NG, namely the correlation between CMB temperature (T) fluctuations and CMB $\mu$-type chemical potential spectral distortions, induced by heat release due to the diffusion of acoustic waves at redshifts from $2 \times 10^6 $ to $5 \times 10^4$. $\mu$-distortions display a quadratic dependence on the primordial curvature perturbation, while the temperature depends linearly on it. The curvature bispectrum and trispectrum can therefore source $T\mu$ and $\mu\mu$ correlations, respectively \cite{Pajer:2012vz}. Detectability analyses, based on futuristic $\mu$-distortion anisotropy surveys, have been carried out for several theoretically-motivated NG templates \cite{Pajer:2012vz,Ganc:2012ae,Biagetti:2013sr, Miyamoto:2013oua, Kunze:2013uja, Ganc:2014wia, Ota:2014iva, Emami:2015xqa, Shiraishi:2015lma,Dimastrogiovanni:2016aul,Ota:2016mqd}. Observational constraints on the usual local NG parameters, $f_{\rm NL}$ and $\tau_{\rm NL}$, based on {\it Planck} estimates of $T\mu$ and $\mu\mu$, are already available \cite{Khatri:2015tla}. In \cite{Bartolo:2015fqz}, we recently analyzed $TT\mu$ as an observable which depends on the curvature trispectrum. Our main finding was that, contrary to $\mu \mu$, $TT \mu$ is sensitive not only to $\tau_{\rm NL}$ but also to the other local trispectrum parameter, $g_{\rm NL}$, potentially improving on the constraints that can be obtained with the trispectrum of CMB anisotropies.
An important difference between $\mu\mu$ and $TT\mu$ lies in the number of degrees of freedom: the angular power spectrum of $\mu\mu$ depends on only one $\ell$ mode, while $TT\mu$ varies in 3D harmonic space $(\ell_1, \ell_2, \ell_3)$. It is therefore expected that $TT\mu$ is more sensitive to some details of the NG shapes and has an advantage in discriminating between different primordial trispectrum shapes. In this paper, we examine $TT\mu$ generated from curvature trispectra with angular dependence \cite{Shiraishi:2013oqa}, characterized by \begin{eqnarray} \Braket{\prod_{n=1}^4 \zeta_{{\bf k}_n}} &=& (2\pi)^3 \delta^{(3)}\left(\sum_{n=1}^4 {\bf k}_n \right) \sum_{L} d_L \left[ {\cal P}_L(\hat{k}_1 \cdot \hat{k}_3) + {\cal P}_L(\hat{k}_1 \cdot \hat{k}_{12}) + {\cal P}_L(\hat{k}_3 \cdot \hat{k}_{12}) \right] \nonumber \\ && \times P(k_1) P(k_3) P(k_{12}) + (23~{\rm perm}) ~, \label{eq:zeta4_def} \end{eqnarray} where ${\bf k}_{12} \equiv {\bf k}_1 + {\bf k}_2$, $P(k)$ denotes the power spectrum of the curvature perturbation, and ${\cal P}_L(x)$ are the Legendre polynomials. This exactly expresses the angular dependence arising from the presence of anisotropic sources,% \footnote{In this paper, ``anisotropic sources'' mean objects sourcing a nontrivial angle dependence between each wavevector in the angle-averaged observables or the isotropized curvature correlators like Eqs.~\eqref{eq:zeta4_def} and \eqref{eq:zeta3_def}.} such as primordial vector fields present during inflation (see e.g.~\cite{Dimastrogiovanni:2010sm,Soda:2012zm,Maleknejad:2012fw,Bartolo:2012sd,Naruko:2014bxa,Bartolo:2014hwa,Bartolo:2015dga}). In addition to $d_0$, a nonzero $d_2$ appears in inflationary models where the inflaton field couples to a vector field via a $f(\phi)F^2$ interaction \cite{Shiraishi:2013vja,Abolhasani:2013zya,Shiraishi:2013oqa,Rodriguez:2013cj} (note that, for the $L = 0$ case, Eq.~\eqref{eq:zeta4_def} is independent of any angle and hence equivalent to a $\tau_{\rm NL}$-type trispectrum, with the replacement $d_0 = \tau_{\rm NL} / 6$). The same model also predicts nonzero $c_0$ and $c_2$ in the curvature bispectrum template \cite{Shiraishi:2013vja}: \begin{eqnarray} \Braket{\prod_{n=1}^3 \zeta_{{\bf k}_n} } = (2\pi)^3 \delta^{(3)}\left(\sum_{n=1}^3 {\bf k}_n \right) \sum_{L} c_L {\cal P}_L(\hat{k}_1 \cdot \hat{k}_2) P(k_1) P(k_2) + (2~{\rm perm}) ~, \label{eq:zeta3_def} \end{eqnarray} where the $L=0$ case is equivalent to the usual local NG template and hence $c_0 = (6/5) f_{\rm NL}$.% \footnote{ Other examples that give rise to bispectra and trispectra shapes of the type described in Eqs.~(\ref{eq:zeta4_def}) and~(\ref{eq:zeta3_def}) are the so-called solid inflation models~\cite{Endlich:2012pz, Bartolo:2013msa, Endlich:2013jia, Bartolo:2014xfa}, which are based on a specific internal symmetry obeyed by the inflaton fields, and which, e.g., produce in the bispectrum $c_2 \gg c_0$. Recently a model with a $f(\phi) (F^2 + F\tilde{F})$ coupling has been proposed as the first example of an inflationary model where $c_1$ is generated~\cite{Bartolo:2015dga}. Large-scale non-helical and helical magnetic fields in the radiation-dominated era do also generate $c_0$, $c_2$ and $c_1$~\cite{Shiraishi:2012rm, Shiraishi:2012sn, Shiraishi:2013vja}. See \cite{Ashoorioon:2016lrg} for other possibilities of generating anisotropic NGs.} Later we show that $T\mu$ due to $c_L$ and $\mu\mu$ due to $d_L$ vanish except for $L=0$, while $TT\mu$ due to $d_L$ becomes nonzero for any even $L$.
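As a concrete numerical illustration of the template in Eq.~\eqref{eq:zeta4_def}, the short Python sketch below evaluates the Legendre-dependent part of the trispectrum for a single wavevector configuration. It is meant purely to display the angular structure: the power-spectrum normalization, the wavevectors and the values of $d_0$ and $d_2$ are placeholder choices, and only one of the permutations in the template is evaluated.
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def power_spectrum(k, A=2.1e-9, ns=0.965, kp=0.05):
    # Nearly scale-invariant curvature power spectrum:
    # P(k) = 2 pi^2 A (k/kp)^(ns-1) / k^3
    return 2.0 * np.pi**2 * A * (k / kp)**(ns - 1.0) / k**3

def legendre_trispectrum_term(k1, k2, k3, dL={0: 1.0, 2: 0.01}):
    """One permutation of the Legendre-type template:
    sum_L d_L [P_L(k1.k3) + P_L(k1.k12) + P_L(k3.k12)] P(k1) P(k3) P(k12),
    with k12 = k1 + k2; the remaining permutations follow by relabelling."""
    k12 = k1 + k2
    n1, n3, n12 = (v / np.linalg.norm(v) for v in (k1, k3, k12))
    angular = sum(d * (eval_legendre(L, n1 @ n3) + eval_legendre(L, n1 @ n12)
                       + eval_legendre(L, n3 @ n12)) for L, d in dL.items())
    return angular * (power_spectrum(np.linalg.norm(k1))
                      * power_spectrum(np.linalg.norm(k3))
                      * power_spectrum(np.linalg.norm(k12)))

# Placeholder configuration (Mpc^-1); the sign of the L=2 term depends on the angles
k1 = np.array([0.010, 0.000, 0.000])
k2 = np.array([0.000, 0.010, 0.000])
k3 = np.array([-0.005, -0.005, 0.008])
print(legendre_trispectrum_term(k1, k2, k3))
\end{verbatim}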
This is due to the difference in the number of degrees of freedom mentioned above. The structure of this paper follows that of previous papers about $TTT$ from $c_L$ \cite{Shiraishi:2013vja} and $TTTT$ from $d_L$ \cite{Shiraishi:2013oqa}. We start by computing $TT\mu$ using the flat-sky approximation, and see how the 3D ${\bf k}$-space angular dependence in Eq.~\eqref{eq:zeta4_def} is projected onto the 2D $\boldsymbol{\ell}$ space. After that, we recompute $TT\mu$ on the full sky and show, both via visual inspection and by actually computing correlation coefficients, that $TT\mu$ from $d_2$ has a very different shape compared to those induced by $d_0$ (or equivalently $\tau_{\rm NL}$) and $g_{\rm NL}$. We then forecast error bars with a Fisher matrix analysis, showing that $d_2 \sim 0.01$, which is 4 orders of magnitude below the smallest detectable value from $TTTT$, is accessible by a CVL measurement of $TT\mu$. Finally, we focus on the $f(\phi) F^2$ model. In this case, due to the model-dependent consistency relations, $c_{0, 2}$ and $d_{0, 2}$ are expressed in terms of the parameter of the quadrupolar power spectrum asymmetry, $g_*$ \cite{Shiraishi:2013vja,Shiraishi:2013oqa,Bartolo:2012sd}. The sensitivities to $d_{0, 2}$ indicate that $g_* \sim 10^{-3}$ is, in principle, accessible by $TT\mu$, and that the 1D correlators $T\mu$ and $\mu\mu$ could further improve the sensitivity to $g_*$. This paper is organized as follows. In the next section we compute $TT\mu$ from $d_L$ in both the flat-sky and full-sky formalisms, and discuss the residual angular dependence projected onto $\boldsymbol{\ell}$-space. In Sec.~\ref{sec:fisher} we analyze the sensitivity to $d_L$ and some related parameters, and estimate the correlation coefficients between each shape, by employing the Fisher matrix. Section~\ref{sec:conclusions} contains our conclusions.
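For orientation, the Fisher analysis of Sec.~\ref{sec:fisher} reduces, schematically, to summing the squared derivative of the $TT\mu$ bispectrum with respect to each NG amplitude over all multipole triplets, weighted by the inverse of the Gaussian (CVL) variance. The sketch below shows only this bookkeeping, with a toy bispectrum shape and placeholder power spectra; it does not include the radiation transfer functions, the $\mu$ window function or the Wigner symbols of the full-sky calculation.
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

lmax = 100
ells = np.arange(2, lmax + 1)

# Placeholder CVL spectra with arbitrary shapes; the real analysis uses the lensed
# TT spectrum and the mu-distortion auto-spectrum from Silk damping.
Cl_TT = 1.0 / ells.astype(float)**2
Cl_mumu = 1e-4 / ells.astype(float)**1.5

def dB_dparam(l1, l2, l3):
    """Derivative of a toy TTmu bispectrum with respect to a single NG amplitude;
    a stand-in for the actual template built from transfer functions."""
    return 1.0 / float(l1 * l2 * l3)

F = 0.0
for l1, l2 in combinations_with_replacement(ells, 2):   # l1 <= l2 are the two T legs
    for l3 in ells:                                      # l3 is the mu leg
        if not (abs(l1 - l2) <= l3 <= l1 + l2):          # triangle condition
            continue
        if (l1 + l2 + l3) % 2:                           # parity condition (schematic)
            continue
        delta = 2.0 if l1 == l2 else 1.0                 # symmetry factor for identical T legs
        var = delta * Cl_TT[l1 - 2] * Cl_TT[l2 - 2] * Cl_mumu[l3 - 2]
        F += dB_dparam(l1, l2, l3)**2 / var

print("toy 1-sigma forecast:", 1.0 / np.sqrt(F))
\end{verbatim}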
\label{sec:conclusions} If some anisotropic source is present in the very Early Universe, the primordial trispectra of curvature perturbations can display a characteristic, non-trivial angular dependence between different wavenumbers ${\bf k}$, which can be expressed in terms of Legendre polynomials. This paper discussed the possibility to observe such angular dependence using a new type of observable, recently found in \cite{Bartolo:2015fqz}, namely the $TT\mu$ correlation function generated from CMB temperature and $\mu$-distortion anisotropies. For the sake of intuitive understanding, we started our calculation of angular-dependent $TT \mu$ by employing the flat-sky approximation (in analogy with the previous CMB temperature trispectrum analysis done in \cite{Shiraishi:2013oqa}), and verified that the specific angular dependence in ${\bf k}$--space gets directly projected to $\boldsymbol{\ell}$-space. Therefore, $TT\mu$ changes its amplitude and sign, depending on the angle between each $\boldsymbol{\ell}$. After this preliminary calculation, we performed a more accurate full sky quantitative analysis. Using a Fisher matrix approach, we found that $TT\mu$ from the $L=2$ mode in the Legendre-type template \eqref{eq:zeta4_def} is nearly orthogonal to $TT\mu$ from the $L=0$ mode (or equivalently the $\tau_{\rm NL}$-type trispectrum) and $TT\mu$ from the $g_{\rm NL}$-type trispectrum. This is an important feature when it comes to discriminating between shapes. Our parameter forecasts showed that, in the absence of the $L=0$ mode (i.e., $d_0 = 0$), a CVL-level measurement of $\mu$-distortion fluctuations enables us to detect the $L=2$ mode with $d_2 \sim 0.01$ sensitivity, which is $4$ orders of magnitude smaller than the value accessible by the temperature trispectrum ($TTTT$). Even in more realistic cases, $TT\mu$ could outperform $TTTT$, although instrumental uncertainties and additional cosmic variance, generated by nonzero $d_0$, reduce the sensitivity to $d_2$. Once fixing the inflationary model, the parameters of the power spectrum, bispectrum and trispectrum are related to each other. Considering the $f(\phi)F^2$ model, and employing the consistency relation \eqref{eq:dL_gstar_fFF}, we reach the conclusion that a quadrupolar power asymmetry with $g_* \sim 10^{-3}$ could, in principle, be detected from $TT\mu$.
16
7
1607.01363
1607
1607.06296_arXiv.txt
The extensive catalog of $\gamma$-ray selected flat-spectrum radio quasars (FSRQs) produced by \emph{Fermi} during a four-year survey has generated considerable interest in determining their $\gamma$-ray luminosity function (GLF) and its evolution with cosmic time. In this paper, we introduce the novel idea of using this extensive database to test the differential volume expansion rate predicted by two specific models, the concordance $\Lambda$CDM and $R_{\rm h}=ct$ cosmologies. For this purpose, we use two well-studied formulations of the GLF, one based on pure luminosity evolution (PLE) and the other on a luminosity-dependent density evolution (LDDE). Using a Kolmogorov-Smirnov test on one-parameter cumulative distributions (in luminosity, redshift, photon index and source count), we confirm the results of earlier works showing that these data somewhat favour LDDE over PLE; we show that this is the case for both $\Lambda$CDM and $R_{\rm h}=ct$. Regardless of which GLF one chooses, however, we also show that model selection tools very strongly favour $R_{\rm h}=ct$ over $\Lambda$CDM. We suggest that such population studies, though featuring a strong evolution in redshift, may nonetheless be used as a valuable independent check of other model comparisons based solely on geometric considerations.
The discovery of quasars at redshifts $\gtrsim 6$ \citep{Fan2003,Jiang2007,Willott2007,Jiang2008, Willott2010a,Mortlock2011,Venemans2013,Banados2014,Wu2015} suggests that $\gtrsim 10^{9-10}\;M_\odot$ supermassive black holes emerged only $\sim 900$ Myr after the big bang, and only $\sim 500$ Myr beyond the formation of Population II and Population III stars \citep{Melia2013a}. Such large aggregates of matter constitute an enduring mystery in astronomy because these quasars could not have formed so quickly in $\Lambda$CDM without an anomalously high accretion rate \citep{Volonteri06} and/or the creation of unusually massive seeds \citep{Yoo2004}; neither of these has actually ever been observed. For example, \cite{Willott2010b} have recently demonstrated that no known high-$z$ quasar accretes at more than $\sim 1-2$ times the Eddington rate (see Figure~5 in their paper; see also Melia 2014). This paper will feature two specific cosmologies---the aforementioned $\Lambda$CDM (the `standard,' or concordance model) and another Friedmann-Robertson-Walker solution known as the $R_{\rm h}=ct$ Universe \citep{Melia2007,Melia2012c,Melia2016}. Our focus will be to explain the luminosity function of these quasars, particularly as they evolve towards lower redshifts. Part of the motivation for this comparative study is that, unlike $\Lambda$CDM, the $R_{\rm h}=ct$ model does not suffer from the time compression problem alluded to above \citep{Melia2013a}. In this cosmology, cosmic reionization (starting with the creation of Population III stars) lasted from $t\sim 883$ Myr to $\sim 2$ Gyr ($6\lesssim z\lesssim 15$), so $\sim 5-20\;M_\odot$ black hole seeds formed (presumably during supernova explosions) shortly after reionization had begun, would have evolved into $\sim 10^{10}\; M_\odot$ quasars by $z\sim 6-7$ simply via the standard Eddington-limited accretion rate. The $R_{\rm h}=ct$ Universe has thus far passed all such tests based on a broad range of cosmological observations, but already, this consistency with the age-redshift relationship implied by the early evolution of supermassive black holes suggests that an optimization of the quasar luminosity function might serve as an additional powerful discriminator between these two competing expansion scenarios. The class of flat-spectrum radio quasars (FSRQs) is ideally suited for this purpose. FSRQs are bright active galactic nuclei (AGNs) that belong to a subcategory of Blazars. These represent the most extreme class of AGNs, whose radiation towards Earth is dominated by the emission in a relativistic jet closely aligned with our line-of-sight. The discovery of $\gamma$-ray emission from these sources was an important confirmation of the prediction by \cite{Melia1989} that the particle dynamics in these jets ought to be associated with significant high-energy emission along small viewing angles with respect to the jet axis. It is still an open question exactly what powers the jet activity, but it is thought that the incipient energy is probably extracted from the black hole's spin, and is perhaps also related to the accretion luminosity. Major mergers might have enhanced the black-hole growth rate and activity, which would have occurred more frequently in the early Universe. In this context, the Blazar evolution may be connected with the cosmic evolution of the black-hole spin distribution, jet activity and major merger events themselves, all of which may be studied via the luminosity function (LF) and its evolution with redshift. 
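A quick way to appreciate the time-compression argument outlined above is to compare the $e$-folding time for Eddington-limited accretion with the cosmic time available in each cosmology. The sketch below performs that back-of-the-envelope estimate; the seed masses, the radiative efficiency and the $\sim 450$ Myr Eddington timescale are indicative values used only for illustration, not quantities fitted anywhere in our analysis.
\begin{verbatim}
import numpy as np

def eddington_growth_time(M_seed, M_final, eps=0.1, t_edd_myr=450.0):
    """Time (Myr) to grow from M_seed to M_final at the Eddington rate, with
    radiative efficiency eps; the e-folding (Salpeter) time is eps/(1-eps)*t_edd."""
    t_fold = eps / (1.0 - eps) * t_edd_myr
    return t_fold * np.log(M_final / M_seed)

for M_seed in (5.0, 20.0):
    t = eddington_growth_time(M_seed, 1e10)
    print(f"{M_seed:5.1f} Msun seed -> 1e10 Msun in ~{t:.0f} Myr "
          "of Eddington-limited accretion")
\end{verbatim}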
Recently, the \emph{Fermi} Gamma-ray Space Telescope has detected hundreds of blazars from low redshifts out to $z = 3.1$, thanks to its high sensitivity \citep{Abdo10}. Based on previous analyses of the FSRQ $\gamma$-ray luminosity function (GLF), it is already clear that the GLF evolution is positive up to a redshift cut-off that depends on the luminosity (see, e.g., Padovani et al. 2007; Ajello et al. 2009; Ajello et al. 2012). But all previous work with this sample ignored a very important ingredient in this discussion---the impact of the assumed cosmological expansion itself on the inferred GLF evolution with redshift. Our main goal in this paper is to carry out a comparative analysis of the standard $\Lambda$CDM and $R_h = ct$ models using the most up-to-date sample of 408 FSRQs detected by the \emph{Fermi}-LAT over its four-year survey. We wish to examine the influence of the assumed background cosmology on the results and, more importantly, we wish to demonstrate that the current sample of $\gamma$-ray emitting FSRQs is already large enough for us to carry out meaningful cosmological testing. Throughout this paper, we will be directly comparing the flat $\Lambda$CDM cosmology with $\Omega_{\rm m} = 0.315$ and $H_0=67.3$ km s$^{-1}$ Mpc$^{-1}$, based on the latest \emph{Planck} results \citep{Planck14}, and the $R_{\rm h}=ct$ Universe, whose sole parameter---the Hubble constant---will for simplicity be assumed to have the same value as that in $\Lambda$CDM. We will demonstrate that these data already emphatically favour $R_{\rm h}=ct$ over $\Lambda$CDM, even without an optimization of $H_0$ for $R_{\rm h}=ct$. The outline of this paper is as follows. In \S~2, we will summarize the observational data, specifically the 3FGL catalog \citep{Acero15}, and describe how the $\gamma$-ray luminosity is determined for each specific model. \S~3 will provide an account of the critical differences between these two cosmologies that directly impact the calculation of the GLF, and we discuss the currently preferred ansatz for this luminosity function based on the most recent analysis of these data in \S~4. We present and discuss our results in \S~5, and conclude in \S~6.
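As a preview of the luminosity conversion of \S~2, the sketch below shows where the background cosmology enters: the observed energy flux is converted into a rest-frame luminosity using the luminosity distance of each model, here with a K-correction that assumes a pure power-law photon spectrum. The flux, redshift and photon index are placeholder values; the cosmological parameters are simply those quoted above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KM_S = 2.99792458e5      # speed of light in km/s
H0, OM = 67.3, 0.315       # km/s/Mpc and matter density (flat LCDM)
MPC_CM = 3.0857e24         # cm per Mpc

def dl_lcdm(z):
    """Luminosity distance (cm) in flat LCDM."""
    integrand = lambda zp: 1.0 / np.sqrt(OM * (1 + zp)**3 + 1.0 - OM)
    dc, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C_KM_S / H0) * dc * MPC_CM

def dl_rhct(z):
    """Luminosity distance (cm) in the R_h = ct universe:
    d_L = (1+z) (c/H0) ln(1+z)."""
    return (1 + z) * (C_KM_S / H0) * np.log(1 + z) * MPC_CM

def gamma_ray_luminosity(flux_erg, z, photon_index, dl_func):
    """Rest-frame luminosity from the observed energy flux, with a power-law
    K-correction (1+z)^(Gamma-2) for photon index Gamma."""
    return 4.0 * np.pi * dl_func(z)**2 * flux_erg * (1 + z)**(photon_index - 2.0)

z, flux, Gamma = 1.0, 1e-11, 2.4          # placeholder source
print("L_gamma (LCDM):   %.3e erg/s" % gamma_ray_luminosity(flux, z, Gamma, dl_lcdm))
print("L_gamma (R_h=ct): %.3e erg/s" % gamma_ray_luminosity(flux, z, Gamma, dl_rhct))
\end{verbatim}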
The extensive, high-quality sample of $\gamma$-ray emitting FSRQs observed by \emph{Fermi} has generated considerable interest in identifying the $\gamma$-ray luminosity function and its evolution with cosmic time. The number density of such objects has changed considerably during the expansion of the Universe, growing dramatically up to redshift $\sim 0.5-2.0$ and declining thereafter. Aside from the obvious benefits one may derive from better understanding this evolution as it relates to supermassive black-hole growth and its connection to the halos of host galaxies, its strong dependence on redshift all the way out to $z\sim 3$ offers the alluring possibility of using it to test different cosmological models. In this paper, we have introduced this concept by directly comparing two specific expansion scenarios, chiefly to examine the viability of the method. To do so, we have opted to use prior values for the model parameters themselves, and instead focus on the optimization of the parameters characterizing the chosen ansatz for the luminosity function. In doing so, one may question whether the choice of GLF unduly biases the fit for one model or the other. This is a legitimate concern, and considerable work still needs to be carried out to ensure that one is not simply customizing the GLF for each background cosmology. For this reason, we have opted in this paper to use two different forms of the GLF, one for pure luminosity evolution and the second for a luminosity-dependent density evolution, even though earlier work had already established a preference by the data for the latter over the former. We have found that selecting either of these GLFs has no influence at all on the outcome of model comparison tools. In both cases, information criteria, such as the AIC, KIC, and BIC, show quite conclusively that the evolution of the GLF for FSRQs very strongly favours $R_{\rm h}=ct$ over the concordance $\Lambda$CDM model. Cosmic evolution is now studied using a diversity of observational data, including high-$z$ quasars (Melia 2013a, 2014), Gamma-ray bursts (Wei et al. 2013), the use of cosmic chronometers (Melia \& Maier 2013; Melia \& McClintock 2015), type Ia supernovae (Wei et al. 2015) and, most recently, an application of the Alcock-Paczy\'nski test using model-independent Baryon Acoustic Oscillation (BAO) data \citep{Font2014,Delubac2015,Melia2015b}, among others. The BAO measurements are particularly noteworthy because, with their $\sim 4\%$ accuracy, they now rule out the standard model in favour of $R_{\rm h}=ct$ at better than the $99.34\%$ C.L. \begin{figure*} \begin{center} \begin{tabular}{cc} \hspace{-1cm} \includegraphics[width=\columnwidth]{fig10a} & \hspace{0cm} \includegraphics[width=\columnwidth]{fig10b} \end{tabular} \end{center} \caption{Differential local (left) and z=1 (right) $\gamma$-ray luminosity functions for FSRQs assuming LDDE. The solid and dashed curves represent the results in this paper. Stars are the results of \citet{Singal2014}, who restricted their analysis solely to FSRQs with a $\gtrsim 7 \sigma$ detection threshold in the first-year catalog of the \emph{Fermi} LAT. The dotted curves are the FSRQ LFs reported by \citet{Ajello12}.} \label{fig:figure10} \end{figure*} In this paper, we have provided a compelling confirmation of these other results by demonstrating that population studies, though featuring a strong evolution in redshift, may also be used to independently check the outcome of model comparisons based purely on geometric considerations. 
We emphasize, however, that much work still needs to be done to properly identify how to best characterize the number density function for this type of analysis. This would be critically important in cases, unlike $\Lambda$CDM and $R_{\rm h}=ct$, where cosmological models are so different that an appropriate common ansatz may be difficult to find.
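For readers unfamiliar with the model selection tools used above, the information criteria all penalize the maximum likelihood with a term that grows with the number of free parameters. The sketch below assumes the common definitions AIC $= -2\ln\mathcal{L} + 2k$, KIC $= -2\ln\mathcal{L} + 3k$ and BIC $= -2\ln\mathcal{L} + k\ln N$, and uses entirely fictitious likelihood values simply to show how the comparison is assembled; it does not reproduce the fits reported in this paper.
\begin{verbatim}
import numpy as np

def information_criteria(lnL_max, k, N):
    """AIC, KIC and BIC from the maximum log-likelihood, the number of free
    parameters k and the number of data points N (definitions as assumed above)."""
    return {"AIC": -2 * lnL_max + 2 * k,
            "KIC": -2 * lnL_max + 3 * k,
            "BIC": -2 * lnL_max + k * np.log(N)}

# Placeholder fits: two cosmologies fitted with the same GLF parameterization
N = 408
fits = {"LCDM":  {"lnL": -1502.0, "k": 9},
        "Rh=ct": {"lnL": -1489.0, "k": 9}}

ic = {name: information_criteria(f["lnL"], f["k"], N) for name, f in fits.items()}
for crit in ("AIC", "KIC", "BIC"):
    delta = ic["LCDM"][crit] - ic["Rh=ct"][crit]
    # The relative likelihood of the disfavoured model is exp(-|Delta|/2)
    print(f"{crit}: Delta = {delta:+.1f}, "
          f"relative likelihood = {np.exp(-abs(delta) / 2):.2e}")
\end{verbatim}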
16
7
1607.06296
1607
1607.04293_arXiv.txt
We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to help SPH better address problems noted in modeling the classical gravitation only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Rotating disks are one of the most ubiquitous astrophysical phenomena in the universe. Besides being the primary component of spiral galaxies, they appear as quasi-stable, thermalized disks following compact object mergers and collisions \citep{rosswog2009wd,rosswog2010,vanKerkwijk2010,raskin2010,raskin2012,raskin2014}, as nucleating disks during stellar and planetary formation\citep{cassen1981,lissauer1987,lubow1999,boss2001,imaeda2002,saitoh2004,saitoh2008,krumholz2009,wada2011}, and in the form of accretion disks around a variety of objects, such as black holes \citep{chakrabarti1993,murray1996,fryer1999,rosswog2008,rosswog2009bh,rosswog2010bh}. All of these scenarios are topics of great interest in the astrophysical community, and as a consequence, a considerable amount of computational effort has been expended modeling these objects using a variety of methods \citep{flebbe1994,owen1998,gadget,boley2007,cullen2010,hopkins2015,beck2016}. One important class of such computational methods are the various forms of Smoothed Particle Hydrodynamics \citep[SPH,][]{gingold1977,lucy1977}. It is therefore no surprise that many papers describing astrophysical hydrodynamics methods -- SPH included -- employ a simplified, rotating astrophysical disk as a test of the method \citep{owen1998,owen2004,lodato2004,cullen2010,hopkins2015,beck2016}. The most common example of this sort of rotating astrophysical test is the classic Keplerian disk \citep{masuda1997,cartwright2010,binney2011,hosono2016}. However, the Keplerian disk scenario is a purely gravitational problem, such that this is more a test of how well the gravitational portion of the algorithm can properly model the orbital motion of the fluid, with no role for hydrodynamics other than that it not interfere with the gravitational problem. While such a test is an important limit to examine, it does not represent a rigorous test of astrophysical hydrodynamics, and neglects to test an important physical regime. Pressure plays a role in virtually every astrophysical disk to various degrees (either in their formation or providing some degree of support/equilibrium), and the applicability of any hydrodynamic method for the study of astrophysical disks depends greatly on the degree to which the method is able to properly simulate these disks in the presence of non-negligible pressure. Including pressure in a rotating disk simulation can expose potential weaknesses of a method, especially in the presence of shearing flows which are typical of a rotationally supported astrophysical disk. The improper activation of the SPH artificial viscosity in the presence of shearing flows is one such well-known weakness that can degrade an SPH simulation of a rotating disk. Because the purely gravitational Keplerian disk is the most common test problem used to investigate how well SPH models can represent astrophysical disks, a considerable amount of effort in the literature has been expended to ensure that SPH methods act more like $n$-body codes in these kinds of tests \citep{balsara1995,cullen2010}. This focus unfairly neglects the full hydrodynamic role of pressure, and tells us nothing of how applicable SPH might (or might not) be to truly model the formation and evolution of such disks. 
SPH is a full-featured hydrodynamics method, and as hydrodynamics plays a non-negligible role in nearly all physical scenarios involving a gravitating disk, we would like a broader class of test problems -- a treatment where both the kinematics and hydrodynamics are modeled accurately and robustly. In this paper, we describe a simple gravitationally bound, rotating disk test case that permits an arbitrary fraction of pressure vs.~rotational support, and we demonstrate how a variety of formulations of SPH fare on this problem, including a new variation based on a reproducing kernel formulation \citep[CRKSPH,][]{frontiere2016}. In \S\ref{sec:Qforms}, we briefly review the role of artificial viscosity in SPH and the various modifications that exist to ameliorate its effects on shear flows. We also provide in \S\ref{sec:Qforms} a brief review of the framework upon which CRKSPH is built. In \S\ref{sec:disktest}, we outline a generalization of the Keplerian disk problem we use to create a general class of pressure-supported, rotating disks. In \S\ref{sec:results} we discuss the performance of the various competing artificial viscosity forms against CRKSPH, and in \S\ref{sec:conclusion}, we give our conclusions.
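To make the construction concrete before \S\ref{sec:disktest}, the sketch below sets up the kind of midplane equilibrium the generalized test problem requires: a fraction $f_p$ of the support against the point-mass gravity is supplied by the radial pressure gradient and the remaining $1-f_p$ by rotation, so that $v_\phi^2 = (1-f_p)\,GM/r$. The power-law density profile, code units and outer boundary condition used here are illustrative assumptions, not the exact prescription adopted in the paper.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

G, M = 1.0, 1.0          # code units for the gravitational constant and point mass

def disk_equilibrium(r, f_p, rho0=1.0, n=1.0):
    """Midplane equilibrium around a point mass in which a fraction f_p of gravity
    is balanced by the pressure gradient and (1 - f_p) by rotation:
        -(1/rho) dP/dr = f_p GM/r^2 ,   v_phi^2 / r = (1 - f_p) GM/r^2 .
    An illustrative power-law density rho = rho0 r^-n is assumed, with P -> 0 at
    the outer edge of the radial grid."""
    rho = rho0 * r**(-n)
    v_phi = np.sqrt((1.0 - f_p) * G * M / r)
    integrand = f_p * G * M * rho / r**2                 # = -dP/dr
    P_inner = cumulative_trapezoid(integrand, r, initial=0.0)
    P = P_inner[-1] - P_inner                            # integral from r to r_out
    return rho, P, v_phi

r = np.linspace(0.5, 10.0, 512)
for f_p in (0.05, 0.50, 0.95):
    _, _, v_phi = disk_equilibrium(r, f_p)
    print(f"f_p = {f_p:.2f}: v_phi(r=1) = {v_phi[np.argmin(np.abs(r - 1.0))]:.3f}"
          "  (Keplerian value 1.000)")
\end{verbatim}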
\label{sec:conclusion} In this paper, we have developed a simple test problem to appraise the treatment of hydrodynamics and gravitation for a realistic, rotating disk simulation based on the work of \cite{owen1998}. We believe this prescription is a more appropriate test of the performance of any hydrodynamical method for applications modeling astrophysical disks as compared with the ordinary, purely gravitational Keplerian disk. We have tested the performance of three popular SPH formulations: one using the ordinary M\&G viscosity, one employing the Balsara viscosity multiplier, and one using the more sophisticated C\&D viscosity formulation. We also examined the performance of a new meshfree method based on SPH: CRKSPH \citep{frontiere2016}. For each choice of the relative ratio of pressure to rotational support ($f_p$), CRKSPH outperformed all other methods with the least systematic error and spurious momentum transport. The addition of pressure support into the treatment of a (partially) rotationally supported disk reveals many weaknesses in some popular flavors of artificial viscosity developed for SPH. When the linear term of the artificial viscosity is not ameliorated via a negligible sound speed, the ability of the artificial viscosity scheme to distinguish shear from compression becomes the central governing mechanism by which the disk evolves. While these specific results are interesting, more importantly, this scenario of pressure and rotational support playing important roles in the steady-state solution more closely matches realistic disk scenarios that might arise from more complicated simulations. When pressure is eliminated entirely, these sorts of tests essentially only test the gravitational solver end of an algorithm, with some contribution from the hydrodynamics in that it should not distort the purely gravitational rotational flow. As our examples here (and previous investigations of this Keplerian limit) demonstrate, many popular forms of SPH fail this limiting test due to the activation of the quadratic term of the artificial viscosity. However, as we have also demonstrated, this is not the full story, and scenarios where both hydrodynamics and gravitational motion play non-negligible roles should be tested. As stated previously, the momentum transport errors we observe in these tests are not trivial. At $t=200$, the $f_p=0.95$ disk has only experienced 7 orbits at its fastest annulus, and while this timescale is not the dynamically important one when pressure constitutes 95\% of the support against gravity, it nevertheless demonstrates that even just a few dynamical times are sufficient to entirely disrupt the balance between pressure and orbital motion. This is best seen in \crefrange{fig:0p95v200t}{fig:0p50v200t} (for $f_p=0.95$ and $f_p=0.50$, respectively), where the M\&G simulations no longer resemble a $v^2\propto r^{-1}$ disk, and the Balsara and C\&D simulations are well on their way to the same fate. Had we included secondary physics effects like radiation transport or chemical cooling, these errors would have resulted in the complete collapse of the disk. Since artificial viscosity converts momentum into thermal energy, and if thermal energy is rapidly being removed from the disk, along with wholesale momentum transport to the outer edges of the disk as must happen to maintain pressure balance, the disk would experience rapid orbital decay after only a few cooling timescales. 
Any real application that is designed to look for disk collapse, such as in galaxy formation simulations \citep[\eg][]{zurek1986,saitoh2008,inoue2014}, would therefore feature overly vigorous collapse arising purely from numerical errors, rather than actual, physical processes. In the case of compact object merger disks, for which our $f_p=0.50$ simulations offer the greatest insight, the disks are assumed to be optically thick, and so cooling mechanisms are often excluded from the simulations. For those simulations, artificial momentum transport away from the compact object near the center of the disk still results in an unphysically short time-scale for disk collapse. Many investigators performing simulations of this kind \citep{rosswog2010,vanKerkwijk2010,raskin2010,raskin2012,raskin2014,moll2014} have looked for conditions for carbon ignition during the post-merger, disk relaxation phases, and this effect may play an outsized role in the conclusions they reach. Specifically, by observing the density curves of \cref{fig:0p50v200t}, we can conclude that any disk simulation employing some form of M\&G, Balsara, or C\&D viscosity will have experienced as much as a 50-75\% over-density near the center of the disk after only a handful of orbits. This has serious consequences for the expected lifetimes of these disks against carbon ignition. Even for nearly pressure-free disks, as in our $f_p=0.05$ simulations, the small errors in the quadratic term of the artificial viscosity accumulate over time, and this effect may be difficult to examine in simulations where pressure plays no role at all. In that case, the spurious heating from the artificial viscosity will not have destabilized the disk very much, and so such a disk will be more resilient against collapse. Instead, the errors will accumulate to such a degree that the disk essentially falls apart, as has been witnessed by other investigators using negligible pressure \citep[\eg][]{cullen2010,hopkins2015,hosono2016}. To conclude this investigation, we believe the generalized Keplerian disk including non-negligible pressure support is an important test problem that should be examined by any code intended for modeling astrophysical problems involving both gravitation and hydrodynamics. Our specific results here -- testing how SPH and CRKSPH fare on these problems -- are interesting, but more broadly we would be interested in seeing many more astrophysical hydrodynamics methods tested against these scenarios, such as Eulerian mesh methods, moving-mesh Lagrangian methods, etc.
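For reference, the shear switches compared throughout this work all hinge on estimating how compressive versus rotational the flow is around each particle. The minimal sketch below assumes the standard Balsara-type multiplier, $f_i = |\nabla\cdot\mathbf{v}|/(|\nabla\cdot\mathbf{v}| + |\nabla\times\mathbf{v}| + \epsilon\,c_i/h_i)$, which scales the artificial viscosity toward zero in pure shear; the toy input values are arbitrary, and the C\&D scheme additionally evolves its viscosity coefficient in time.
\begin{verbatim}
import numpy as np

def balsara_limiter(div_v, curl_v, c_s, h, eps=1e-4):
    """Balsara-type viscosity multiplier: ~1 in compressive flow, ~0 in pure shear.
    div_v, curl_v: magnitudes of the velocity divergence and curl at the particle;
    c_s: sound speed; h: smoothing length (form as stated in the lead-in text)."""
    return np.abs(div_v) / (np.abs(div_v) + np.abs(curl_v) + eps * c_s / h)

# A shear-dominated disk flow barely triggers the viscosity...
print(balsara_limiter(div_v=0.01, curl_v=1.0, c_s=0.1, h=0.05))   # ~0.01
# ...while a strongly compressive flow does.
print(balsara_limiter(div_v=1.0, curl_v=0.01, c_s=0.1, h=0.05))   # ~1
\end{verbatim}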
16
7
1607.04293
1607
1607.01013_arXiv.txt
\label{sec:intro_knot} It is hard to characterize the atmospheres of transiting exoplanets because the atmospheric signal is $10^{-3}$--$10^{-5}$ of the stellar flux \citep{seager2010exoplanet}. Unfortunately, most current telescopes and instruments were not designed for these precisions. Consider the Spitzer Space Telescope \citep{werner2004spitzer}: many planets have been observed with its InfraRed Array Camera \citep[IRAC;][]{fazio2004infrared}, and these light curves are a large part of the available data \citep[e.g.][]{Agol_2010,nymeyer2011spitzer,mahtani2013warm,wong2015W14b}. The pixels in IRAC are not uniformly sensitive and the target centroid (i.e. stellar position) moves on timescales of minutes to days \citep{ingalls2016repeatability}. That means IRAC can distort the light we see \citep[e.g.][]{crossfield2012spitzer}. Many detector models have been used to deal with sensitivity variations on a pixel. Early analyses of \emph{Spitzer} light curves used polynomials \citep{charbonneau2005detection,knutson20083}. \citet{ballard2010search,ballard2011kepler} used Kernel Regression to analyze IRAC and Kepler Space Telescope data; improved versions of this method were used by \citet{knutson20123}, \citet{lewis2013orbital}, and \citet{wong2015W14b,wong20163}. \citet{morello2014new} used Independent Component Analysis \citep[ICA;][]{waldmann2012cocktail} to reanalyze IRAC transit light curves. More recently, \citet{deming2015spitzer} used Pixel-Level Decorrelation (PLD) to remove red noise from IRAC data. The authors state this method is better than modeling the sensitivity with centroids for a few reasons, including that PLD is analytically sound and runs fast. In recent years, many researchers have used BiLinearly-Interpolated Subpixel Sensitivity mapping \citep[BLISS hereafter;][]{stevenson2012transit}. This routine works quickly in a Markov Chain Monte Carlo (MCMC) because no \emph{explicit} parameters are used for the detector sensitivity. Instead, BLISS divides the light curve by the current astrophysical signal at each MCMC step, averages the leftover residuals at many locations on the pixel (``knots"), then interpolates to find the sensitivity at each centroid. This means BLISS optimizes the sensitivity at each knot---it runs efficiently because the weight of each knot at the centroids' locations can be calculated ahead of time. Many studies have used BLISS to model the intra-pixel sensitivity in \emph{Spitzer} data, as shown in Table \ref{tab:BLISS_planets}. \citet{lanotte2014global} and \citet{demory2016variability,demory2016map} also included the full-width half-maximum of the pixel response function in their analyses. A recent study by \citet{ingalls2016repeatability} found that BLISS, PLD, and ICA are the most accurate and reliable ways to model IRAC sensitivity for real and synthetic observations of XO-3b. These methods can usually fit eclipse depths to within $3\times$ the photon limit of the true values. \begin{table}[h!] 
\centering \caption[Works that model \emph{Spitzer} intra-pixel sensitivity with BLISS.]{Works that use BLISS to model the intra-pixel sensitivity in \emph{Spitzer} IRAC data.} \footnotesize \label{tab:BLISS_planets} \begin{tabular}{l l} \toprule \multicolumn{1}{c}{\bfseries Reference} & \bfseries Planet/System \\ \midrule \citet{stevenson2012transit} & HD 149026b\\ \citet{stevenson2012two} & GJ 436\\ \citet{lanotte2014global} & ...\\ \citet{blecic2013thermal} & WASP-14b\\ \citet{cubillos2013wasp} & WASP-8b\\ \citet{blecic2014spitzer} & WASP-43b\\ \citet{cubillos2014spitzer} & TrES-1\\ \citet{diamondlowe2014new} & HD 209458b\\ \citet{gillon2014search} & GJ 1214\\ \citet{stevenson2014deciphering} & WASP-12b\\ \citet{stevenson2014transmission} & ...\\ \citet{motalebi2015harps} & HD 219134b\\ \citet{triaud2015wasp} & WASP-80b\\ \citet{yu2015tests} & PTFO 8-8695 b\\ \citet{demory2016variability} & 55 Cnc e\\ \citet{demory2016map} & ...\\ \citet{stevenson2016search} & HAT-P-26b\\ \bottomrule \end{tabular} \end{table} However, BLISS does not fit for the detector sensitivity---it merely optimizes it. The BLISS maps vary during an MCMC, but they always do so jointly with the astrophysical model. Thus, one cannot explore the full parameter space because the BLISS map and astrophysical model are not chosen independently (Section \ref{sec:marg_opt}). With large numbers of BLISS knots, one can also end up fitting noise in the light curve. Both of these issues mean BLISS may give astrophysical uncertainties that are too small \citep{hansen2014broadband}. BLISS was introduced to side-step the computational challenge of a fully Bayesian approach \citep{stevenson2012transit}. However, nobody has tested the impact of this shortcut, nor has anybody published a rigorous study of BLISS using synthetic \emph{Spitzer} observations, for which one knows the ground truth. \citet{ingalls2016repeatability} tested seven techniques for removing correlated noise from IRAC data using real and synthetic observations---but only for a single hot Jupiter, XO-3b. We will therefore investigate BLISS by using a simple model of \emph{Spitzer} IRAC light curves. \citet{stevenson2012transit} created BLISS to handle the intra-pixel sensitivity in IRAC data because fitting $\sim10^{5}$ measurements with $\sim10^{3}$ model parameters in an MCMC was not feasible. This is still true, so we test light curves that have a modest number of data by using $\sim25$--$150$ BLISS knots (but see Sections \ref{sec:choose_mesh} and \ref{sec:more_synth_fits}). These sets of parameters are small enough that we can \emph{directly} fit each knot. We organize our work as follows: in Section \ref{sec:marg_opt}, we describe how properly marginalizing a parameter differs from optimizing it, and use examples to show that this can affect the fits on other parameters. Then, in Sections \ref{sec:toy_model} and \ref{sec:toy_fits}, we use a toy model to show that optimizing may cause problems even with simple posteriors and Gaussian uncertainties. We describe our model of the \emph{Spitzer} IRAC detector in Section \ref{sec:D_model}, including how we make mock centroids, then introduce our astrophysical model and synthetic light curves in Section \ref{sec:A_model}. In Section \ref{sec:B_routine}, we briefly review BLISS, and in Section \ref{sec:map_compare}, we compare BLISS knots and maps to the true pixel sensitivity. 
We then fit our light curves with MCMC and three different models for the pixel sensitivity, including two versions of BLISS, in Section \ref{sec:synthetic_fits}. We discuss our results in Section \ref{sec:discuss_knot} and summarize our work in Section \ref{sec:conclude_knot}. For those interested, the details about how we choose parameters for the pixel's sensitivity and the astrophysical signal are given in Appendices \ref{sec:D_sens_explain} and \ref{sec:A_param_explain}, respectively.
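For readers unfamiliar with the routine, the essential BLISS step is: divide the data by the current astrophysical model, average the residuals assigned to their nearest knot on a regular grid of pixel positions, and bilinearly interpolate those knot values back to every centroid. The NumPy-only sketch below reproduces this bookkeeping in simplified form, with a toy sensitivity map and a placeholder knot grid; it is an illustration of the algorithm as summarized here, not the published implementation.
\begin{verbatim}
import numpy as np

def bliss_sensitivity(flux, astro_model, x, y, knots_x, knots_y):
    """One BLISS step: (i) divide out the current astrophysical model, (ii) average
    the residuals assigned to their nearest knot on the regular (knots_x, knots_y)
    grid, (iii) bilinearly interpolate the knot values back to every centroid."""
    resid = flux / astro_model
    # (ii) nearest-knot binning; knots with no data default to unit sensitivity
    kx = np.argmin(np.abs(x[:, None] - knots_x[None, :]), axis=1)
    ky = np.argmin(np.abs(y[:, None] - knots_y[None, :]), axis=1)
    knot_val = np.ones((len(knots_x), len(knots_y)))
    for i in range(len(knots_x)):
        for j in range(len(knots_y)):
            sel = (kx == i) & (ky == j)
            if sel.any():
                knot_val[i, j] = resid[sel].mean()
    # (iii) bilinear interpolation of the knot grid at each centroid position
    dx, dy = knots_x[1] - knots_x[0], knots_y[1] - knots_y[0]
    i0 = np.clip(((x - knots_x[0]) / dx).astype(int), 0, len(knots_x) - 2)
    j0 = np.clip(((y - knots_y[0]) / dy).astype(int), 0, len(knots_y) - 2)
    tx, ty = (x - knots_x[i0]) / dx, (y - knots_y[j0]) / dy
    return ((1 - tx) * (1 - ty) * knot_val[i0, j0] + tx * (1 - ty) * knot_val[i0 + 1, j0]
            + (1 - tx) * ty * knot_val[i0, j0 + 1] + tx * ty * knot_val[i0 + 1, j0 + 1])

# Toy usage: flat astrophysical model and a weak sinusoidal intra-pixel sensitivity
rng = np.random.default_rng(1)
x, y = 15.0 + 0.1 * rng.standard_normal((2, 5000))
true_sens = 1.0 + 0.01 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
flux = true_sens * (1.0 + 0.003 * rng.standard_normal(5000))
knots = np.linspace(14.7, 15.3, 8)
recovered = bliss_sensitivity(flux, np.ones_like(flux), x, y, knots, knots)
\end{verbatim}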
\label{sec:conclude_knot} We have performed MCMC fits on synthetic eclipse data to test how accurate and precise BLISS mapping is for modeling intra-pixel sensitivity variations in \emph{Spitzer} IRAC light curves. BLISS mapping is a non-parametric method, meaning it uses no jump parameters during the MCMC to model the detector signal. This is an expedient approximation that is not statistically sound in principle. Nonetheless, BLISS mapping has been widely used without rigorous testing on synthetic data. Optimizing nuisance parameters, instead of marginalizing over them, can give both imprecise and inaccurate estimates for other parameters of interest. Even in our toy example with simple posteriors, we find that fitted uncertainties can still be too small, by a factor of 2. In BLISS mapping, the estimated sensitivities at the knots---and so the interpolated maps---become inaccurate for the data when the photon noise is low. The maps also start fitting noise when the average data per knot is $\sim10$ or less. However, in many reasonable cases, the knot values match the intrinsic sensitivity to within the photon noise and the maps give good fits to the detector signal. Furthermore, standard BLISS mapping is a viable shortcut to the rigorously Bayesian Jump-BLISS mapping. Both methods return similar estimates for the astrophysical model, and the knots in standard BLISS mapping behave like actual jump parameters. Curiously, our low-order polynomial model is often as precise as BLISS mapping at fitting eclipse depths, yet the latter is preferred for high-significance eclipses and more featured sensitivity variations. We also find that using the $\beta$ method to inflate uncertainties does not always increase the predictive power of fits. In our tests, BLISS mapping does not predict the centering precision of a data set. Selecting a proper number of knots can require fine-tuning---proposed methods may not work in general. But, we find that BLISS mapping usually fits eclipse depths more accurately than precisely (i.e. conservatively), a potential benefit against low levels of extra red noise in the light curve. Overall, therefore, BLISS mapping can be an acceptable way to model \emph{Spitzer} IRAC sensitivity variations.
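The $\beta$ method mentioned above is commonly implemented by binning the best-fit residuals over a range of bin widths and comparing the measured scatter with the $1/\sqrt{N}$ scaling expected for white noise; the resulting ratio is then used to inflate the parameter uncertainties. A minimal sketch under that assumed definition is given below, with a white-noise example for which $\beta$ should be close to unity.
\begin{verbatim}
import numpy as np

def beta_factor(residuals, bin_sizes):
    """Red-noise inflation factor: ratio of the binned residual scatter to the value
    expected from pure white noise, maximized over the tested bin sizes."""
    sigma1 = residuals.std(ddof=1)
    ratios = []
    for n in bin_sizes:
        nbins = len(residuals) // n
        binned = residuals[:nbins * n].reshape(nbins, n).mean(axis=1)
        expected = sigma1 / np.sqrt(n) * np.sqrt(nbins / (nbins - 1.0))
        ratios.append(binned.std(ddof=1) / expected)
    return max(ratios)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
print("white noise, beta ~", round(beta_factor(white, [8, 16, 32, 64]), 2))   # ~1
\end{verbatim}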
16
7
1607.01013
1607
1607.05406_arXiv.txt
As the first paper in a series on the study of galaxy-galaxy lensing from Sloan Digital Sky Survey Data Release 7 (SDSS DR7), we present our image processing pipeline that corrects the systematics primarily introduced by the Point Spread Function (PSF). Using this pipeline, we processed SDSS DR7 imaging data in the $r$ band and generated a background galaxy catalog containing the shape information of each galaxy. Based on our own shape measurements of the galaxy images from SDSS DR7, we extract the galaxy-galaxy (GG) lensing signals around foreground spectroscopic galaxies binned in luminosity and stellar mass. The overall signals are in good agreement with those obtained by \citet{Mandelbaum2005, Mandelbaum2006} from the SDSS DR4. Our results, which have a higher signal-to-noise ratio owing to the larger survey area relative to SDSS DR4, confirm that more luminous/massive galaxies produce stronger GG lensing signals. We also divide the foreground galaxies into red/blue and star-forming/quenched subsamples and measure their GG lensing signals separately. We find that, at a given stellar mass/luminosity, the red/quenched galaxies have relatively stronger GG lensing signals than their counterparts, especially at large radii. These GG lensing signals can be used to probe the galaxy-halo mass relations and their environmental dependences in the halo occupation or conditional luminosity function framework. Our data are made publicly available at \url{http://gax.shao.ac.cn/wtluo/weak\_lensing/wl\_sdss\_dr7.tar.gz}.
\label{sec_intro} The nature of dark matter remains a mystery in the current paradigm of structure formation \citep[see][for a review]{Bertone2005}. Although many experiments have been proposed to directly detect signatures of dark matter, such as particle annihilation, particle decay, and interaction with other particles \citep[see][for a review]{Feng2010}, the main avenue to probe the existence and properties of dark matter is still through the gravitational potentials associated with the structures in the dark matter distribution. One promising way to detect the gravitational effects of dark matter structures is through their gravitational lensing effect, in which light rays from distant sources are bent by foreground massive objects such as galaxies or clusters of galaxies residing in massive dark matter halos. In the case of galaxies, the prediction of multiple images was first observationally confirmed by \citet{Walsh1979}. Since then, many more strong lensing systems have been found and analyzed \citep[e.g.][]{Oguri2002, Kneib2004, Broadhurst2005, Treu2006, Cabanac2007, Bolton2008, Coe2013}. In addition, smaller distortions in galaxy images have been detected in large surveys, such as the SDSS, CFHTLS, and SUBARU weak lensing surveys. These are referred to as weak lensing effects and have been studied very extensively in the past decade \citep{Kaiser1995, Sheldon2004, Mandelbaum2005, Mandelbaum2006, Wittman2006, Fu2008, Bernstein2009,Cacciato2009, Oguri2009, George2012, Li2013, Mandelbaum2013, Li2014}. Weak gravitational lensing studies are further sub-divided into two categories: lensing by individual massive systems, such as clusters of galaxies, and galaxy-galaxy lensing, which relies on the stacking of lensing signals around many galaxies. For deep surveys such as CFHTLenS \citep{Heymans2012}, DES \citep{Jarvis2015}, DLS \citep{Wittman2006}, EUCLID \citep{Refregier2010}, LSST \citep{LSST2009}, KIDS \citep{Kuijken2015} and the SUBARU weak lensing survey \citep{Kaifu1998, Umetsu2007}, the number density of background galaxies around a single cluster is sufficient to measure the weak lensing signals with high S/N ratio, so that the mass and shape of the dark matter distribution can be obtained \citep{Oguri2010}. For shallower surveys and for less massive systems, such as SDSS \citep{York2000}, stacking lensing signals around many systems is the only way to measure the weak lensing effects with sufficient S/N ratio. Although unable to give dark matter distributions associated with individual systems, galaxy-galaxy lensing provides a powerful tool to estimate the average mass and profile of dark matter halos around galaxies with certain properties, such as luminosity, stellar mass, etc. In principle, weak gravitational lensing can provide a clean measurement of the total mass distribution of the lens system. However, the lensing signals are weak and a number of effects need to be understood and modeled accurately to obtain reliable results. These include uncertainties in photometric redshifts, intrinsic alignments, source selection bias and mask effects \citep{Yang2003,Mandelbaum2005, Mandelbaum2006, Yang2006a, Mandelbaum2008, Mandelbaum2009a, Mandelbaum2009b, Li2009, Sheldon2009, Liu2015}. In addition, accurate image measurements are absolutely essential in galaxy-galaxy lensing studies.
Thus, for any weak lensing survey, an image processing pipeline has to be developed first and validated by a series of test simulations, such as STEP (Shear TEsting Program) \citep{Heymans2006, Massey2007}, Great08 \citep{Bridle2009}, Great10 \citep{Kitching2010}, GREAT3 \citep{Mandelbaum2014} or Kaggle -- the dark matter mapping competition\footnote{Supported by NASA \& the Royal Astronomical Society.}. Other independent software packages, such as SHERA \citep[hereafter M12]{Mandelbaum2012}, have also been designed for specific surveys. Many groups have developed image processing pipelines devoted to improving the accuracy of shape measurements for weak lensing studies \citep{Kaiser1995, Bertin1996, Maoli2000, Rhodes2000, vanWaerbeke2001, Bernstein2002,Bridle2002, Refregier2003, Bacon2003, Hirata2003, Heymans2005, Zhang2010, Zhang2011,Bernstein2014, Zhang2015}. Among these, Lensfit \citep{Miller2007, Miller2013, Kitching2008} applies a Bayesian model-fitting approach; the BFD (Bayesian Fourier Domain) method \citep{Bernstein2014} carries out a Bayesian analysis in the Fourier domain, using the distribution of un-lensed galaxy moments as a prior; and the Fourier\_Quad method developed by \citet{Zhang2010, Zhang2011, Zhang2015} uses image moments in the Fourier domain. In this paper we attempt to develop an image processing pipeline for weak lensing studies by combining the \citet[hereafter BJ02]{Bernstein2002} method (see Appendix \ref{bj02_detail} for details) with the re-Gaussianization method introduced in \citet[hereafter HS03]{Hirata2003}. We test the performance of our pipeline using a number of commonly adopted simulations, and we apply our method to the SDSS data. The structure of the paper is as follows. In Section \ref{sec_method}, we describe the procedures used to construct our image processing pipeline. The pipeline is tested using simulations in Section \ref{sec_test}. Section \ref{sec_application} presents the application of our pipeline to the SDSS DR7 data, along with the galaxy-galaxy lensing results obtained for galaxies of different luminosities and colors. Finally, we summarize our results in Section \ref{sec_summary}. In addition, some details of our method are given in Appendix~\ref{bj02_detail}, some tests on systematic errors are made in Appendix \ref{sec_systematic}, and our main results for the SDSS data are listed in Tables presented in Appendix \ref{sec_ESD}. All the galaxy-galaxy lensing data shown in this paper can be downloaded from \href{http://gax.shao.ac.cn/wtluo/weak\_lensing/wl\_sdss\_dr7.tar.gz} {http://gax.shao.ac.cn/wtluo/weak\_lensing/wl\_sdss\_dr7.tar.gz}.
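In all of the galaxy-galaxy lensing measurements that follow, the stacked quantity is the excess surface density, $\Delta\Sigma(R) \simeq \Sigma_{\rm crit}\,\gamma_t(R)$, accumulated in projected radial bins around each lens with weights that downweight noisy and distant sources. The sketch below shows the core of such a stacking estimator for a single lens; the cosmology, the $\Sigma_{\rm crit}^{-2}$ weighting and the toy source catalog are generic placeholder choices rather than the exact ones adopted in our pipeline (which, for example, also involves a shear responsivity correction).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C = 2.99792458e5               # km/s
H0, OM = 70.0, 0.3             # illustrative flat cosmology
G = 4.301e-9                   # G in Mpc Msun^-1 (km/s)^2

def chi(z):
    """Comoving distance in Mpc (flat universe)."""
    return C / H0 * quad(lambda zp: 1.0 / np.sqrt(OM * (1 + zp)**3 + 1 - OM), 0.0, z)[0]

def sigma_crit(zl, zs):
    """Critical surface density (Msun/Mpc^2) for a lens-source pair."""
    Dl, Ds = chi(zl) / (1 + zl), chi(zs) / (1 + zs)
    Dls = (chi(zs) - chi(zl)) / (1 + zs)
    return C**2 / (4.0 * np.pi * G) * Ds / (Dl * Dls)

def stack_delta_sigma(R, e_t, zl, zs, w_shape, R_edges):
    """Weighted stack of Sigma_crit * e_t in projected radial bins around one lens.
    R: projected radii (Mpc); e_t: tangential ellipticities; zs: source redshifts;
    w_shape: shape-noise weights. The Sigma_crit^-2 weighting is a common generic choice."""
    sc = np.array([sigma_crit(zl, z) for z in zs])
    w = w_shape / sc**2
    dsig = np.full(len(R_edges) - 1, np.nan)
    idx = np.digitize(R, R_edges) - 1
    for b in range(len(R_edges) - 1):
        sel = idx == b
        if sel.any():
            dsig[b] = np.sum(w[sel] * sc[sel] * e_t[sel]) / np.sum(w[sel])
    return dsig

# Toy usage with fake source galaxies behind a z = 0.1 lens
rng = np.random.default_rng(2)
nsrc = 1000
R = rng.uniform(0.02, 2.0, nsrc)
e_t = 0.001 / R + 0.3 * rng.standard_normal(nsrc)     # signal plus shape noise
print(stack_delta_sigma(R, e_t, 0.1, rng.uniform(0.3, 0.8, nsrc),
                        np.ones(nsrc), np.logspace(-1.5, 0.3, 8)))
\end{verbatim}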
\label{sec_summary} In weak lensing studies, obtaining a reliable measurement of the lensing signals requires highly accurate image processing. In this paper, we build our image processing pipeline to achieve accurate shape measurement for weak lensing studies based on \citet[ (BJ02)]{Bernstein2002} and \citet[(HS03)]{Hirata2003} methods. This pipeline is then applied to SDSS DR7 to measure the galaxy shapes, as well as the galaxy-galaxy lensing signals for lens galaxies of different luminosities, stellar masses, colors, and SFRs. The main results of this paper are summarized as follows. \begin{itemize} \item We have developed a new image processing pipeline, and tested it on SHERA and GREAT3 simulations. Our pipeline works well on PSF correction in the absence of sky background noise. The corrected PSF multiplicative errors are far below the 1\% requirements ($0.009\%$ for $\gamma_1$ and $-0.053\%$ for $\gamma_2$) for PSF correction only. \item An non-convergence problem occurs for $\sim 40\%$ galaxies when more realistic simulations with sky background noise are being used. In addition, to have a sufficient image resolution ${\cal R}>1/3$ an additional 20\% have to be discarded. Despite these, our method achieves a lensing reconstruction accuracy that is similar to other methods as shown in the GREAT3 competition \citep{Mandelbaum2015}. \item Our pipeline was applied to the SDSS DR7 $r$ band imaging data and create a catalog containing 41,631,361 galaxies with information about position, photometric redshift, ellipticity and ellipticity measurement error due to sky background and Poisson noise. \item Using these galaxy images, we calculated the galaxy-galaxy lensing signals around foreground lens galaxies binned in different luminosities and stellar masses. Our results show good agreement with the previous studies of \citet[M05]{Mandelbaum2005} and \citet[M06]{Mandelbaum2006}, with significantly reduced error bars. \item We have also separate the galaxies in different luminosity/ stellar mass bins into red/blue or star-forming/quenched subsamples. The galaxy-galaxy lensing signals show quite different scale dependences among these subsamples. While red and quenched galaxies show stronger galaxy-galaxy lensing signals than their counterparts in the same luminosity or stellar mass bins, the enhancement is the strongest at relatively large separations. \end{itemize} As the first paper of our galaxy-galaxy lensing series, here we have focussed on testing the reliability of our image processing pipeline and presented some general results of the galaxy-galaxy lensing in the SDSS DR7. In addition, we have performed a number of tests on possible systematics in our pipeline, using the $\gamma_{45}$ component, foreground sources, and random samples. Our pipeline and the galaxy-galaxy lensing signals obtained prove to be reliable against these tests. Our data can be used to study the dark matter contents associated with SDSS galaxies and the structures they represent. In a forthcoming paper, we will use the data to carry out a number of analyses. We will separate galaxies into centrals and satellites so as to model the mass distributions around them and their links to dark matter halos. We will also obtain the mass distribution around galaxy groups \citep{Yang2007} to test the reliability of the mass assignments based on other mass estimates, and to study how halo masses depend on the intrinsic properties of galaxy groups, such as the colors of members of galaxy groups. 
Finally we will stack the lensing signals around groups with different X-ray properties \citep[e.g.][]{Wang2014} to test how X-ray gas in galaxy groups is related to their dark matter contents. As we have found, our pipeline is unable to fully deal with images that are noisy. This limitation is the main drawback of our pipeline and needs to be addressed. Fourier space based methods seem to be superior in this regard as they can process asymmetric systems and much noisier images. For this reason, we intend to improve our methodology by implementing the Fourier space method of \citet{Zhang2015}.
16
7
1607.05406
1607
1607.00824_arXiv.txt
Several recent studies have demonstrated how large-scale vortices may arise spontaneously in rotating planar convection. Here we examine the dynamo properties of such flows in rotating Boussinesq convection. For moderate values of the magnetic Reynolds number ($100 \lesssim \Rm \lesssim 550$, with $\Rm$ based on the box depth and the convective velocity), a large-scale (\ie system-size) magnetic field is generated. The amplitude of the magnetic energy oscillates in time, nearly out of phase with the oscillating amplitude of the large-scale vortex. The large-scale vortex is disrupted once the magnetic field reaches a critical strength, showing that these oscillations are of magnetic origin. The dynamo mechanism relies on those components of the flow that have length scales lying between that of the large-scale vortex and the typical convective cell size; smaller-scale flows are not required. The large-scale vortex plays a crucial role in the magnetic induction despite being essentially two-dimensional; we thus refer to this dynamo as a large-scale-vortex dynamo. For larger magnetic Reynolds numbers, the dynamo is small scale, with a magnetic energy spectrum that peaks at the scale of the convective cells. In this case, the small-scale magnetic field continuously suppresses the large-scale vortex by disrupting the correlations between the convective velocities that allow it to form. The suppression of the large-scale vortex at high $\Rm$ therefore probably limits the relevance of the large-scale-vortex dynamo to astrophysical objects with moderate values of $\Rm$, such as planets. In this context, the ability of the large-scale-vortex dynamo to operate at low magnetic Prandtl numbers is of great interest.
Understanding the generation of system-size magnetic fields in natural objects, \ie fields with a significant component on the scale of the objects themselves, remains an outstanding problem in geophysical and astrophysical fluid dynamics. Such fields are maintained by dynamo action, whereby the magnetic induction produced by the motions of an electrically conducting fluid compensates the losses due to Ohmic dissipation. Typically, in planetary and stellar interiors, the inductive motions are driven by thermal or compositional convection. Numerical simulations have demonstrated that rotating convection can indeed generate magnetic fields on a scale large compared with that of the convective cells --- see, for example, the spherical shell simulations of \cite{Olson1999}, \cite{Christensen2006}, \cite{Soderlund2012}, and the plane layer computations of \cite{Stellmach2004}. However, relating the findings of numerical models to dynamos in the convective cores of rapidly rotating bodies, such as planets, is not entirely straightforward. In computational models, it is not currently feasible to achieve values of the Ekman number ($\Ek$, a measure of the viscous to the Coriolis force) smaller than $\Ek = O(10^{-6})$, whereas in the Earth, for example, $\Ek = O(10^{-15})$. The horizontal extent of convective cells, which depends on the Ekman number as $\Ek^{1/3}$, is therefore expected to be much smaller in nature than in the numerical models; this has important consequences for magnetic field generation \citep[e.g.][]{Jones2000b}. For rapidly rotating planets, the magnetic Reynolds numbers ($\Rm$, the ratio of Ohmic diffusion time to induction timescale) are expected to be of the order of \mbox{$10^3-10^5$} at the system size; calculated on the small convective scale though, $Rm$ is much less than unity, \ie Ohmic diffusion acts much faster than magnetic induction. In this case, a large-scale magnetic field can still be generated by the small-scale convective vortices if they act collectively to produce a mean-field $\alpha$-effect \citep{Childress1972, Soward1974}. However, the large-scale magnetic field sustained by this process tends to be spatially uniform \citep[e.g.][]{Favier2013b}, unlike the observed geomagnetic field. In computational models, which necessarily have to consider much higher values of $\Ek$ than the true planetary values, the fundamental problem of very small $Rm$ on the convective scale is therefore implicitly avoided. An important challenge of planetary dynamo theory is thus to explain the generation of system-size magnetic fields of strong amplitude and complex spatio-temporal variations, while $Rm$ at the convective scale is smaller than unity. One plausible solution to this problem is that the generated magnetic field strongly modifies the convective flows such that the convective scale increases, as predicted by the linear theory of magnetoconvection \citep{Chandrasekhar61}. The influence of the magnetic field on the convective flow, and in particular on its lengthscale, has indeed been observed in a number of dynamo simulations in which strong magnetic fields are sustained \citep[\eg][]{Stellmach2004, Tak08, Hor10, HC_2016}. In this paper we explore an alternative solution based on a hydrodynamical argument: in rapidly rotating non-magnetic convection, the small-scale convective vortices may transfer part of their energy to larger-scale flows; if $Rm$ is sufficiently high, based on this increased scale, then the dynamo could operate at these larger scales. 
The possible formation of large-scale flows is therefore of great interest for the dynamics of planetary interiors. Computationally, this represents a challenging problem, with large domains required that can accommodate many convective cells, together with any large-scale structure that may emerge; as such, studies to date have concentrated on the computationally economical planar geometry. Over the last few years, a number of independent numerical studies of plane layer, non-magnetic, rapidly rotating convection --- both Boussinesq and compressible --- have demonstrated how large-scale vortices (LSVs) can form through the long-term concerted action of the Reynolds stresses resulting from the small-scale convective cells \citep{Chan07, Kapyla11, Rubio14, Favier2014, Guervilly2014}. The LSVs are long-lived, box-size, depth-invariant vortices, which form by the clustering of small-scale convective vortices; their horizontal flows are of much larger amplitude than the underlying convective flows. Figure~\ref{fig:diagram_LSV}, based on the results of \citet{Guervilly2014}, shows the domain of existence of LSVs for rotating, plane layer, Boussinesq convection, in the parameter space ($\Ek, \Ra/\Ra_c$); the Rayleigh number, $\Ra$, measures the ratio of buoyancy driving to dissipative effects, with $\Ra_c$ the critical value at the onset of convection. The area of the circles provides a measure of the relative amplitude of the LSVs, quantified by the ratio $\Gamma=|\vel|^2/3|u_z|^2$, where $|\vel|$ is the root mean square (rms) value of the total flow and $|u_z|$ is the rms value of the vertical flow. Since the flows in LSVs are essentially horizontal, they are characterised by values of $\Gamma$ larger than unity. The colour of the circles denotes the value of the local Rossby number, defined as $\Ro_l = |u_z|/(2\Omega l)$, where $\Omega$ is the rotation rate and $l$ is the typical horizontal lengthscale of the convection \citep[\eg][]{Christensen2006}. $\Ro_l$ is an inverse measure of the rotational constraint on the convective flow; it thus increases with $\Ra$. There are two essential conditions for the formation of an LSV, highlighted by figure~\ref{fig:diagram_LSV}. One is that the convective flows must be sufficiently energetic to cluster; for the parameter values considered in \citet{Guervilly2014}, this may be expressed by the condition $\Ra /\Ra_c \gtrsim 3$. The other is that the convective flows are rotationally constrained and anisotropic, \ie narrow in the horizontal directions and tall in the vertical direction; this may be expressed by the condition $\Ro_l \lesssim 0.1$. LSVs thus appear for low Ekman numbers and large Reynolds numbers ($\Rey$, the ratio of the viscous timescale to the convective turnover time), precisely the conditions under which convection takes place in planetary cores. LSVs could therefore be good candidates to drive planetary dynamos if they can efficiently generate magnetic fields on scales comparable with or larger than that of the LSVs themselves. \begin{figure} \centering \includegraphics[clip=true,width=10cm]{fig1.pdf} \caption{Location of the hydrodynamical simulations of \citet{Guervilly2014} in the parameter space $(\Ek,\Ra/\Ra_c)$, for $\Pran=1$ and an aspect ratio of $1$. The colour scale gives the value of the local Rossby number, $\Ro_l$, and the area of the circle is proportional to $\Gamma$, which is a measure of the relative strength of the horizontal flows. 
The grey area indicates the region where LSVs form.} \label{fig:diagram_LSV} \end{figure} The question of whether LSVs can indeed drive a dynamo was addressed in the short paper by \citet{Guervilly2015}. By extending the hydrodynamic study of \citet{Guervilly2014}, it was found that rotating convection in the presence of LSVs can indeed generate a magnetic field with a significant large-scale component. The field is concentrated in the shear layers surrounding the LSVs and is mainly horizontal. A coherent mean (\ie horizontally averaged) magnetic field is also maintained by the flow. This large-scale dynamo process operates only for a range of magnetic Reynolds numbers; $\Rm$ must be large enough for dynamo action to ensue, but small enough that a small-scale magnetic field cannot be permanently sustained by the convective flows. The latter is an essential condition for this particular type of dynamo, since small-scale magnetic fields appear to suppress systematically the formation of the LSV. Indeed, the ability of a small-scale field to disrupt large-scale coherent flows would seem to be a fairly robust characteristic of magnetohydrodynamic turbulence; for example, in a two-dimensional $\beta$-plane model, \citet{Tobias07} found that small-scale fields --- resulting from the distortion of a very weak large-scale field --- suppress the generation of the zonal flows that would otherwise form spontaneously. In the large-scale dynamos considered by \citet{Guervilly2015}, the influence of the small-scale magnetic field varies in time, resulting in sizeable temporal oscillations of the kinetic and magnetic energies. A large-scale magnetic field is first generated by the joint action of the LSV and the smaller-scale convective flow; this large-scale field is then distorted by the convective flows into a small-scale field, which, subsequently, quenches the LSV and triggers the decay of the entire field; once the field is small enough, however, the LSV is able to regenerate and the cycle starts again. Since LSVs consist essentially of horizontal flows, they cannot of themselves act as dynamos \citep{Zeldovich1957}; nonetheless, for convenience, we shall refer to this type of dynamo as an `LSV dynamo', even though it does not rely solely on the LSV. The present paper builds extensively on \citet{Guervilly2015} by exploring the LSV dynamo in depth. We concentrate on the lowest value of $\Ek$ considered in \citet{Guervilly2014}, thus allowing a wide-ranging exploration in $\Ra$ and $\Rm$. The goals of our study are: (i) to investigate in detail how the LSV dynamo mechanism operates; (ii) to determine the parameter region in which this dynamo operates, in order to determine its relevance for planetary dynamos; and (iii) to explain the mechanism by which the small-scale magnetic field suppresses the LSV. The layout of the paper is as follows. The mathematical formulation of the problem is given in \S\,\ref{sec:Math}. In \S\,\ref{sec:TTD}, we consider the three very different types of dynamo that can exist in rapidly rotating, plane layer convection, and describe where in parameter space each may be found. Through the application of spectral filters to the convective flows, the key ingredients of the LSV dynamo mechanism are presented in \S\,\ref{sec:Mechanism}. The means by which the LSV can be suppressed and the resulting temporal evolution of the dynamo are described in \S\,\ref{sec:Suppression}. A concluding discussion is contained in \S\,\ref{sec:ccl}.
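For concreteness, the two LSV diagnostics used in figure~\ref{fig:diagram_LSV} can be evaluated directly from simulation output. The short Python sketch below is our own illustrative code, not part of the published analysis; the routine name and the way the convective lengthscale $l$ is supplied are assumptions made here for clarity.
\begin{verbatim}
import numpy as np

def lsv_diagnostics(ux, uy, uz, Omega, l):
    """Gamma = |u|^2 / (3 |u_z|^2) and the local Rossby number
    Ro_l = |u_z| / (2 Omega l), with |.| denoting rms values."""
    u2  = np.mean(ux**2 + uy**2 + uz**2)   # mean square of the total flow
    uz2 = np.mean(uz**2)                   # mean square of the vertical flow
    Gamma = u2 / (3.0 * uz2)
    Ro_l  = np.sqrt(uz2) / (2.0 * Omega * l)
    return Gamma, Ro_l

# Indicative criteria quoted in the text (Ra/Ra_c >~ 3 must also hold):
# Gamma > 1   -> horizontal flows dominate (an LSV is present)
# Ro_l <~ 0.1 -> the convection is rotationally constrained
\end{verbatim}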
\label{sec:ccl} In \citet{Guervilly2014}, we showed how large-scale vortices (LSVs) can be spontaneously generated from a hydrodynamical process that consists of the clustering of small-scale, rotationally constrained convective vortices. In this paper, we have explored the dynamo action that may result from turbulent rotating convection in the presence of LSVs. For a range of magnetic Reynolds numbers above the critical value for the onset of dynamo action ($100 \lesssim \Rm \lesssim 550$ for $\Ek=5\times10^{-6}$ and $\Pran=1$), the flow acts to generate a magnetic field with a significant component on scales large compared with the small-scale convective vortices --- we denote this as an LSV dynamo. The dynamo generates magnetic fields that are concentrated in the shear layers surrounding the LSV, together with a coherent mean (\ie horizontally-averaged) magnetic field. From considering the kinematic dynamo problem with spectrally filtered versions of the velocity, we find that the dynamo mechanism requires only the presence of the LSV together with velocity modes of scale intermediate between the box size and the dominant convective scale. When the LSV is artificially filtered out from the induction equation but the effect of the LSV on the flow is retained in the momentum equation, the dynamo fails. Consequently, the LSV plays a crucial role in the magnetic induction; importantly, this is not simply via its action on the smaller-scale flows. The filtered simulations indicate that the magnetic field is generated in the core of the LSV and in the surrounding shear layers, having been expelled from the core by small-scale vortices. These results are deduced from the filtering exercise performed in the kinematic phase of the dynamo, so they could be specific to this phase. However, the qualitative similarities of the magnetic field in the filtered kinematic dynamo simulations and in the full dynamic regime suggest that the dynamo mechanism operates in a similar manner in both cases. For $\Rm \gtrsim 550$, the convective flows generate a small-scale dynamo. In this case, the continuous production of the small-scale magnetic field acts to suppress the LSV completely. LSVs are produced by the Reynolds stresses resulting from the interactions of depth-dependent vortices; the small-scale magnetic field hinders these interactions, leading to a loss of coherence of the Reynolds stresses, and hence an inability to create an LSV. The LSV dynamo can therefore operate only below the threshold for small-scale dynamo action. The transition from the LSV dynamo to the small-scale dynamo is continuous. In the LSV dynamo, oscillations of the kinetic and magnetic energies are associated with cycles of suppression and regeneration of the LSV. These oscillations are of magnetic origin, and are due to the amplification of the small-scale magnetic field from the interactions between the large-scale field and the convective flows. The suppression of the LSV by small-scale magnetic fields at high $\Rm$ therefore probably limits the relevance of the LSV dynamo mechanism to astrophysical objects with moderate $\Rm$. Such is the case of planetary dynamos, and in this context, the ability of the LSV dynamo to operate at low magnetic Prandtl numbers is of great interest. Although the values of $\Pm$ in our simulations are a long way from realistic values of the order of $10^{-6}$, it is entirely possible that, at smaller $\Ek$, LSV dynamos might be found at lower $\Pm$. 
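The spectral filtering invoked above can be illustrated schematically. The sketch below is a minimal stand-alone example, not the spectral code used for the simulations; the grid layout, wavenumber normalisation and cut-off values are illustrative assumptions. It removes a band of horizontal wavenumbers from a velocity field before that field would be used in the induction equation, which is the kind of operation needed to test whether the LSV itself is essential for the magnetic induction.
\begin{verbatim}
import numpy as np

def filter_horizontal_band(u, k_min, k_max, remove=True):
    """Filter the horizontal wavenumbers k_min <= k_h <= k_max of a field
    u(x, y, z) defined on a horizontally periodic box of unit size.
    With remove=True the band is suppressed; otherwise only it is kept."""
    nx, ny, nz = u.shape
    kx = np.fft.fftfreq(nx) * nx           # integer horizontal wavenumbers
    ky = np.fft.fftfreq(ny) * ny
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    kh = np.sqrt(KX**2 + KY**2)
    band = (kh >= k_min) & (kh <= k_max)
    mask = ~band if remove else band
    u_hat = np.fft.fft2(u, axes=(0, 1))    # horizontal Fourier transform
    u_hat *= mask[:, :, None]              # zero the selected modes
    return np.real(np.fft.ifft2(u_hat, axes=(0, 1)))

# e.g. suppressing the largest horizontal scales (1 <= k_h <= 2, which
# contain the LSV) in the velocity passed to the induction equation only:
# u_for_induction = filter_horizontal_band(u, 1, 2, remove=True)
\end{verbatim}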
In the Cartesian geometry employed here, the LSV dynamo produces a mean horizontal magnetic field with a coherent `staircase' structure that overall rotates clockwise. The spatial and temporal variations of this mean field are more complex than those of the mean field generated by the large-scale dynamo that operates near the onset of convection (compare, for instance, figure~\ref{fig:LSdynamo_c} and figure~\ref{fig:LSVD_Bmean_b}). This latter dynamo relies on the coordinated action of the convective vortices, and so can operate only for slightly supercritical convection (up to 50\% above the critical Rayleigh number for the parameters considered here). Its mean field always has a simple spatial structure with a regular time dependence, quite unlike the spatial and temporal variations of planetary magnetic fields. From this point of view, the greater complexity of the mean magnetic field of the LSV dynamo is thus an interesting feature, although the Cartesian geometry is inadequate for a comparison with planetary magnetic fields. The choice of boundary conditions may have an influence on the formation of LSVs and any subsequent dynamo action. The role of the velocity boundary conditions in LSV formation has been addressed in the recent paper by \citet{Stellmach2014}. The magnetic boundary conditions used here enforce a purely horizontal magnetic field at the top and bottom boundaries and these boundary conditions might have an effect on the LSV dynamo. To examine the role of the magnetic boundary conditions, we have also performed a simulation with boundary conditions that enforce a purely vertical magnetic field at the top and bottom boundaries ($B_x=B_y=\partial_z B_z=0$ at $z=0,1$) for the same parameters as the dynamo of figure~\ref{fig:LSVD_Bmean} ($\Ra=5\times10^8$ and $\Pm=0.2$). We find that the generated magnetic field retains its main characteristics: a similar saturation level of the magnetic energy, concentration of the magnetic field in the shear layers around the LSV and a coherent mean horizontal magnetic field. The main differences are that the mean horizontal field tends to be symmetric with respect to the mid-plane and that a stronger vertical magnetic field is produced, with an average energy about half that of the horizontal field. Thus, overall, the magnetic boundary conditions (at least the two types tested here) have only a minor influence on the LSV dynamo mechanism. The choice of the aspect ratio of the numerical domain might also influence the LSV dynamo, since the width and strength of the LSV both increase with aspect ratio \citep{Guervilly2014}. To explore this question, however, requires a series of lengthy calculations, going beyond the scope of the present study; it is though an issue that we propose to examine in future work. One of the most interesting aspects of the recent work on LSV formation in rotating convection, together with the resulting dynamos considered in this paper, is the discovery of new physical phenomena at the small values of the Ekman number that can now be tackled numerically. It is therefore instructive to consider how the formation of LSVs and the ensuing dynamos might be affected at yet lower values of $\Ek$ --- though clearly one cannot rule out the appearance of yet further novel behaviour in the considerable gap that exists between what is currently computationally feasible ($\Ek =O(10^{-6})$) and what is appropriate for the Earth ($\Ek =O(10^{-15})$), for example. 
In terms of the LSVs, it is clear that they require vigorous, yet rotationally constrained turbulence; as such we can be confident that the range of Rayleigh numbers at which LSVs will be found will increase as $\Ek$ is decreased, as portrayed in figure~\ref{fig:diagram_LSV}. The issue of the LSV dynamo is, however, less clear-cut. Although we may expect the LSV dynamo to operate for $\Ra$ just above the value at which LSV can be formed, it is less straightforward to predict the onset of small-scale dynamo action, and hence the demise of the LSV dynamo. The set-up considered here --- i.e., a Boussinesq fluid in Cartesian geometry --- forms the simplest system in which to study dynamos driven by rotating convection; it thus allows for a relatively wide exploration of parameter space. However, spherical geometry is clearly more appropriate for the study of planetary cores. To date, LSVs have not been reported in numerical simulations of rotating convection in spherical geometry; large-scale coherent flows \textit{are} observed for low $\Ek$ and large $\Rey$, but these are zonal flows --- \ie axisymmetric and azimuthal jets \citep[e.g.][]{Heimpel05, Gastine2012}. Interestingly, these zonal flows are also known to be disrupted by magnetic fields in spherical convective dynamos \citep[e.g.][]{Aub05,Yad16}. In terms of rotating convection, one of the major differences between Cartesian and spherical geometries is the anisotropy caused by spherical geometry in the plane normal to the rotation axis. This anisotropy constrains the geostrophic flows to be azimuthal, unlike in Cartesian geometry, where there is no preferred horizontal direction. Consequently, zonal flows tend to be dominant in rotating spherical convection, provided that the viscous damping from the boundary layers is not too large, \ie for stress-free boundary conditions. Interestingly, however, models of barotropic flows on a $\beta$-plane with forced stirring have shown that non-zonal coherent structures of low azimuthal wavenumber can co-exist with dominant zonal jets \citep{Galerpin10, Bakas14, Constantinou15}. These large-scale coherent structures take the form of propagating waves, which are believed to be sustained by nonlinear interactions between Rossby waves. One may thus speculate that at sufficiently small values of $\Ek$, large-scale vortices, and the resultant dynamos, may indeed play a role in spherical geometry; if so, we might expect them to have a propagating nature, a feature that cannot be recovered in plane parallel geometry.
16
7
1607.00824
1607
1607.00015_arXiv.txt
{Recent improvements in the age dating of stellar populations and single stars allow us to study the ages and abundances of stars and galaxies with unprecedented accuracy. We here compare the relation between age and $\alpha$-element abundances for stars in the solar neighborhood to that of local, early-type galaxies. We find both relations to be very similar. Both fall into two regimes with a flat slope for ages younger than $\sim$ 9 Gyr and a steeper slope for ages older than that value. This quantitative similarity seems surprising, given the different types of galaxies and scales involved. For the sample of early-type galaxies we also show that the data are inconsistent with literature delay time distributions of either single or double Gaussian shape. The data are consistent with a power law delay time distribution. We thus confirm that the delay time distribution inferred for the Milky Way from chemical evolution arguments must also apply to massive early-type galaxies. We also offer a tentative explanation for the seeming universality of the age-\afe~relation as the manifestation of averaging of different stellar populations with varying chemical evolution histories. }
\label{s:intro} It has long been recognized \citep{tinsley79,matteucci86} that the element abundance ratio \afe~is a powerful estimator of the duration of star formation events in galaxies. This is because of the different explosion timescales and yields of different types of supernovae. A direct consequence of this insight is the expectation of a correlation between the ages of stars in galaxies and their \afe~ratios. A recent example is Figure 1 in \citet{chiappini15}, which shows the generic prediction for single stars in the Milky Way. It is unclear, however, how this relation would translate into galaxy-wide average properties. Generally, one expects that galaxies that have stopped forming stars at an earlier time in the history of the universe (equivalent to having a shorter star formation timescale) would show a smaller contribution of light from Fe-enriched stars in their spectra and would thus show a higher overall \afe~enrichment. There does not seem to be a good reason why the relations between age and \afe~should be quantitatively the same for entire galaxies and single stars; one may therefore hope that possible differences could be used to study their different star formation histories. However, the exploration of this expected correlation has been hampered by uncertainties in stellar and galaxy ages, related both to model uncertainties and to intrinsic degeneracies, such as the age-metallicity degeneracy. We have recently been able to take a significant step forward by showing the existence of this correlation for early-type galaxies (ETGs) \citep[][hereafter W15]{walcher15}. Indeed, earlier work such as \citet{jorgensen99} found no correlation between \afe~and age. The correlation was first tentatively seen by \citet{gallazzi06}. A correlation of age and \afe~was unambiguously shown by \cite{graves10} from stacked spectra (their Fig.~4), but the very nature of stacked spectra made it impossible to study the scatter in the relation. The relation was shown on a per galaxy basis by \cite{kuntschner10} (their Fig.~6), but in this case small sample size and continued large uncertainties on age made an interpretation difficult. Other recent work, such as \citet{thomas10} and \citet{johansson12}, shows and discusses the parameters age, \feh~and \afe, but does not directly address the age-\afe~relation explored here. The W15 results are nevertheless qualitatively in agreement with these earlier papers and reinforce and expand on them. We emphasize that for this same correlation, it is important to heed the warnings of \cite{thomas05}, who discuss the importance of degeneracies when using age as a parameter. We quantitatively show in W15 that the age-metallicity degeneracy does not give rise to the observed correlation. An interesting parallel development has been the verification of the expected similar correlation in the stars of the Milky Way. A unique age-metallicity relation in the Galactic disk was first suggested by \citet{twarog80} using multi-band photometry data. However, \citet{edvardsson93} and later studies \citep{feltzing01, nordstrom04, holmberg07, holmberg09, casagrande11} have found that there is no one-to-one relationship between the ages and metallicities of stars, and that the large scatter at any age may have an astrophysical cause. 
Finally, the most recent work by \citet[][hereafter B14]{bergemann14}, using the high-resolution spectra from the Gaia-ESO stellar survey, has conclusively established the weak age-metallicity relation in the solar vicinity of the Galactic disk. This is the first study to carefully analyze the survey target selection effects and their impact on the age-metallicity diagram. For the stars with ages below 8 Gyr and for the solar vicinity, the observed age-metallicity relation was found to be nearly flat, and the majority of older stars turned out to be metal-poor and enhanced in $\alpha$ elements. Similar conclusions were reached by \citet[][hereafter H13]{haywood13} and \citet{bensby14}. As discussed in \citet{bensby14}, the H13 analysis led to a very tight \afe-age relation due to problems with the spectroscopic analysis and sample selection biases. Generally, B14 established that \afe~is a good proxy for the age of a star, even though they see a significant dispersion of [Mg/Fe], especially at ages above 9 Gyr. This paper attempts to establish two new statements. First, the correlation between age and \afe~as expected from chemical evolution is seen in ETGs and is quantitatively similar to the one for stars in the solar neighborhood. This is true despite the very different star formation histories of these two different kinds of stellar systems. Second, this universality allows us to explore the dependence on the yields and delay time distributions of SNe Ia and II. When the yields are fixed, the age-\afe~relation of ETGs thus provides additional interesting constraints on the delay time distribution of SNe Ia.
\label{s:concl} We have compared the age-\afe~relation between ETGs and the solar neighborhood, for data and models. We find that the relation is quantitatively the same, and that both Milky Way and early-type galaxy data require a DTD with a small prompt component ($<$30\% of SNe-Ia exploding within 100 Myr). For example, a power-law DTD, such as those commonly derived from observations of the SN-Ia rate, matches this requirement. We also suggest that the observed scatter in the age-\afe~relation for ETGs could be driven by differences in the onset of star formation in those systems. For the range of galactic systems, and therefore of star formation histories, studied in the present paper, the age-\afe~relation is self-similar on widely different scales. A tentative explanation for this seeming universality of the age-\afe~relation is that it results from the averaging of different stellar populations with varying chemical evolution histories. It thus does not seem to be a useful tool to understand the star formation histories of galaxies, contrary to the more widely used \feh-\afe~relations.
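As an illustrative check (the truncation times below are assumptions made only for this sketch and are not taken from our analysis), a power-law DTD $\propto t^{-1}$, of the kind commonly found in SN-Ia rate studies, truncated between $t_{\rm min}=0.04$ Gyr and $t_{\rm max}=10$ Gyr, yields a prompt fraction
\[
 f(<0.1\,{\rm Gyr}) = \frac{\ln(0.1/0.04)}{\ln(10/0.04)} \simeq 0.17 ,
\]
i.e. roughly 17\% of SNe-Ia exploding within 100 Myr, comfortably below the 30\% limit quoted above.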
16
7
1607.00015
1607
1607.08609.txt
In multi-field inflation one or more non-adiabatic modes may become light, potentially inducing large levels of isocurvature perturbations in the cosmic microwave background. If in addition these light modes are coupled to the adiabatic mode, they influence its evolution on super horizon scales. Here we consider the case in which a non-adiabatic mode becomes approximately massless (``ultralight") while still coupled to the adiabatic mode, a typical situation that arises with pseudo-Nambu-Goldstone bosons or moduli. This ultralight mode freezes on super-horizon scales and acts as a constant source for the curvature perturbation, making it grow linearly in time and effectively suppressing the isocurvature component. We identify a St\"uckelberg-like emergent shift symmetry that underlies this behavior. As inflation lasts for many $e$-folds, the integrated effect of this source enhances the power spectrum of the adiabatic mode, while keeping the non-adiabatic spectrum approximately untouched. In this case, towards the end of inflation all the fluctuations, adiabatic and non-adiabatic, are dominated by a single degree of freedom.
Light scalar fields, with masses much smaller than the Hubble expansion rate $H$, are known to produce potentially large levels of entropy perturbations during inflation~\cite{Guth:1980zm, Starobinsky:1980te, Mukhanov:1981xt, Linde:1981mu, Albrecht:1982wi} that could lead to (1) non-adiabatic features in the Cosmic Microwave Background (CMB)~\cite{Langlois:1999dw, Amendola:2001ni, Bartolo:2001rt}, and (2) the super-horizon evolution of curvature perturbations~\cite{Gordon:2000hv, GrootNibbelink:2000vx, GrootNibbelink:2001qt, Wands:2002bn, Tsujikawa:2002qx, Byrnes:2006fr, Choi:2007su, Gao:2009qy}. In this article we study the extreme situation in which a non-adiabatic mode is approximately massless, and its interaction with the curvature perturbation persists during the whole period of inflation, from horizon crossing until reheating. In this case, the non-adiabatic mode freezes on super-horizon scales, acting as a constant source for the growth of the adiabatic mode. Because inflation lasts for many $e$-folds, the adiabatic mode eventually becomes dominated by the particular solution sourced by the light mode, which is a linearly growing function of time on super-horizon scales. By the end of inflation, the adiabatic mode may then be completely determined by the value of the light field on super-horizon scales. In this case, the associated power spectrum of curvature perturbations is enhanced by a factor proportional to the number of $e$-folds, squared. Interestingly however, the non-adiabatic perturbations do not experience the same cumulative effect, reducing the potential impact of isocurvature perturbations (relative to curvature perturbations) on the CMB~\cite{Planck:2013jfk, Ade:2015lrj}. The mechanism discussed here could be shared by many specific models involving light scalar fields such as pseudo-Nambu-Goldstone bosons and moduli. We will offer a concrete example in which an ultra-light field emerges within a well-studied class of models~\cite{GarciaBellido:1995fz, DiMarco:2002eb, DiMarco:2005nq, Lalak:2007vi, Cremonini:2010sv, Cremonini:2010ua, vandeBruck:2014ata} consisting of a multi-field action with a non-canonical kinetic term, typical of supergravity and string theory compactifications. As we will show, the appearance of the ultra-light field is related to a symmetry under field reparametrizations, mildly broken by slow-roll. We emphasize that the phenomenology of models with ultra-light fields differs from that of curvaton scenarios~\cite{Lyth:2001nq, Lyth:2002my}, where primordial fluctuations are generated at the very end of inflation. Here, instead, primordial fluctuations are generated during the whole process of inflation. \setcounter{equation}{0}
\label{sec:conclusions} Multi-field models of inflation with several light fields such as pseudo-Nambu-Goldstone bosons or moduli are fairly common in string theory and in beyond-the-standard-model cosmological scenarios. It is important to understand to what extent they are compatible with the observations. In this paper we have considered multi-field theories of inflation in which the inflaton fluctuations interact with light fields in such a way that (1) isocurvature modes are generated during inflation, and (2) curvature perturbations $\R$ experience a large super-horizon growth sourced by the light fields. Contrary to expectations, there is a very interesting regime in which the prolonged combination of these two properties can lead to an effective suppression of isocurvature perturbations at the end of inflation, while still retaining a nearly scale invariant power spectrum. As a byproduct, we show that these models also provide a natural mechanism to suppress the tensor-to-scalar ratio compared to the single-field realization with the same spectral index and slow-roll parameters. Previous works~\cite{Kobayashi:2010fm, Cremonini:2010sv, Cremonini:2010ua} have analyzed the effects of light non-adiabatic fields on the evolution of curvature perturbations and found similar results. In this work, however, we have attempted to take several steps forward in understanding this class of models in a more systematic and generic way. In particular, (a) we have identified a novel, non-standard shift symmetry relating the entropy and curvature perturbations, which emerges in the ultra-light limit $\mu \to 0$. (b) We presented a concrete model (discussed in Section~\ref{concrete-example}) where this symmetry is realised. (c) We have discussed the role of the underlying symmetries of the scalar field manifold in ensuring that both the shift symmetry and the sourcing of curvature by entropy perturbations persist during the entire history of inflation. (d) We have analytically solved the system of perturbations in the regime in which this shift symmetry is valid (see Appendix~\ref{app:quantization}). The key mechanism discussed here is the following: Since the entropy mode is approximately massless, it freezes on super-horizon scales, acting as a constant source for the curvature perturbation. If inflation lasts for enough $e$-folds, the adiabatic mode then grows linearly in time. Remarkably, this implies that at late times, both the curvature and the isocurvature perturbations are effectively dominated by a single degree of freedom. It is interesting to note that the effective field theory description of this regime is expected to be quite different from~\cite{Senatore:2010wk} and/or the curvaton scenario, and may appear in the context of scenarios with a large number of fields, where a given combination of them could couple to the adiabatic curvature perturbation in the form of a light field~\cite{Dias:2016slx}. If the entropy mode remains massless for a sufficiently large number of $e$-folds $\Delta N\gg 1$, and the coupling is perturbatively small, the power spectrum of isocurvature perturbations evolves in the standard way while the power spectrum of curvature perturbations, $\P_\R$, is enhanced by a factor proportional to $\Delta N^2$. 
Effectively, the primordial isocurvature modes are suppressed relative to the curvature modes as \be \P_\R / \P_\S \simeq (\P_\R / \P_\C)^2 \propto \Delta N^2 \ee where $\P_\S$ is the power spectrum of the normalized entropy perturbation~\cite{Amendola:2001ni}, related to $\sigma$ by $\S\equiv \sigma / \sqrt{2\epsilon}$, and $\P_\C$ the cross-correlation spectrum $\langle \R \S \rangle$, calculated in Appendix~\ref{sec:pheno}, which is also suppressed. These features seem to be tightly connected to an emergent symmetry: Besides the usual shift symmetry that is associated with the masslessness of $\R$ (the adiabatic mode), we have identified in the quadratic action a second one involving both the curvature and the ultra-light (entropy) mode $\sigma$. Specifically, to reach the regime we are interested in, the time derivative $\dot \R$ must appear in the following combination \be \dot \R - \lambda\frac{H}{\sqrt{2 \epsilon}}\sigma ,\label{DR} \ee where in the case of multi-field inflation the dimensionless coupling $\lambda$ is proportional to the rate of bend $\Omega$ of the inflationary trajectory ($\lambda = - \Omega /H$). The combination \eqref{DR} is invariant under the St\"uckelberg-like transformation \be\label{stu2} \dot \R\rightarrow \dot \R + \lambda\frac{H}{\sqrt{2 \epsilon}}\delta C_2 , \qquad \sigma\rightarrow\sigma+\delta C_2\ . \ee There is some evidence, in concrete multi-field examples, that this structure is inherited from a symmetry of the parent theory. It is tempting to speculate that the combination of eq.~(\ref{DR}) appears not only at quadratic order but also at higher orders, with interesting consequences for non-Gaussianity. We leave this for future work.
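The super-horizon growth underlying these results can be summarised schematically (treating $\lambda$, $H$ and $\epsilon$ as constant, which is only approximate since the symmetry is mildly broken by slow-roll): if the dynamics reduces to $\dot\R \simeq \lambda (H/\sqrt{2\epsilon})\,\sigma$ with $\sigma$ frozen, then in terms of the number of $e$-folds, $dN = H\,dt$,
\[
 \frac{d\R}{dN} \simeq \lambda\,\S , \qquad
 \R(N) \simeq \R_* + \lambda\,\S\,\Delta N ,
\]
so that for $\Delta N \gg 1$ one finds $\P_\R \simeq \lambda^2 \Delta N^2\, \P_\S$ and $\P_\C \simeq \lambda \Delta N\, \P_\S$, reproducing the scalings quoted above.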
16
7
1607.08609
1607
1607.07545_arXiv.txt
We study the possibility of using quadrupole moments of auto-convolved galaxy images to measure cosmic shear. The autoconvolution of an image corresponds to the inverse Fourier transformation of its power spectrum. The new method has the following advantages: the smearing effect due to the Point Spread Function (PSF) can be corrected by subtracting the quadrupole moments of the auto-convolved PSF; the centroid of the auto-convolved image is trivially identified; the systematic error due to noise can be directly removed in Fourier space; the PSF image can also contain noise, the effect of which can be similarly removed. With a large ensemble of simulated galaxy images, we show that the new method can reach a sub-percent level accuracy in general conditions, albeit with increasingly large stamp size for galaxies of less compact profiles.
\begin{equation}\label{Estimator Centroid} \begin{split} g_1&=\frac{<(Q_a^{20}-Q_a^{02})R_a^{00}-(R_a^{20}-R_a^{02})Q_a^{00}>}{2<(Q_a^{20}+Q_a^{02})R_a^{00}-(R_a^{20}+R_a^{02})Q_a^{00}>}\\ g_2&=\frac{<Q_a^{11}R_a^{00}-R_a^{11}Q_a^{00}>}{<(Q_a^{20}+Q_a^{02})R_a^{00}-(R_a^{20}+R_a^{02})Q_a^{00}>}. \end{split} \end{equation} The math details are shown in Appendix \ref{App-AC}. \begin{figure}[!t] \centering \includegraphics[width=8cm,height=6cm]{cut-off-F.eps} \caption{ Same as fig.\ref{fig_cut_A}, but for the method of ZLF15. }\label{fig_cut_F} \end{figure} \subsubsection{Noise Correction} The terms on the right side of eq.(\ref{Estimator Centroid}) are contaminated by noise, including both background noise and source Poisson noise. As shown in ZLF15, the background noise contribution can be estimated and directly subtracted using a background noise image near the galaxy location, because it is not correlated with the source shapes. The source Poisson noise statistically exhibits a scale-independent power spectrum in Fourier space; its contribution can therefore be estimated at large wave-numbers and subtracted from the source power spectrum on all scales. These procedures can similarly be applied in our new method not only for the galaxy, but also for the PSF, owing to the fact that the new shear estimators linearly depend on the power spectra of both the galaxy and the PSF. In practice, the PSF power at the position of the galaxy can be constructed as a weighted sum of the power spectra of its neighboring stars, from each of which the power of its companion background noise image in the neighborhood is subtracted. It is in this sense that the new method allows the presence of noise in the PSF, an important feature that is not shared by ZLF15. Let us denote the auto-convolved images of background noise for the galaxy and the PSF as $G(\vec{x})$ and $V(\vec{x})$ respectively. To remove the noise bias, the moments of the galaxy and the PSF can be redefined as \begin{equation} \begin{split}\label{qm Discrete} S^{ij}=&\int{x^iy^j[F_o(\vec{x})-G(\vec{x})]d^2x},\\ T^{ij}=&\int{x^iy^j[W(\vec{x})-V(\vec{x})]d^2x}. \end{split} \end{equation} The shear estimators are updated accordingly: \begin{equation}\label{Estimator final} \begin{split} g_1&=\frac{<(S^{20}-S^{02})T^{00}-(T^{20}-T^{02})S^{00}>}{2<(S^{20}+S^{02})T^{00}-(T^{20}+T^{02})S^{00}>}\\ g_2&=\frac{<S^{11}T^{00}-T^{11}S^{00}>}{<(S^{20}+S^{02})T^{00}-(T^{20}+T^{02})S^{00}>}. \end{split} \end{equation} \subsection{General Setup} \label{S-test general} The pipeline of the new method is summarized as follows:\\ 1) Fourier transform the galaxy and PSF images and calculate their power spectra according to eq.(\ref{spectral density});\\ 2) Remove the source Poisson noise according to ZLF15;\\ 3) Inverse-transform the power spectra of the galaxy and PSF to get the autoconvolutions according to eq.(\ref{inverse fourier transformation});\\ 4) Repeat steps 1) and 3) on the neighbouring background images;\\ 5) Construct the shear estimators according to eq.(\ref{Estimator final}).
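To make the pipeline above concrete, the following stand-alone Python sketch is our own illustrative code, not the pipeline actually used in this work; it assumes square postage stamps of equal size, represents the data as lists of 2-D NumPy arrays, and omits the Poisson-noise correction of step 2) for brevity. It builds the autoconvolutions from the power spectra, removes the background-noise bias with companion background stamps as in eq.(\ref{qm Discrete}), and combines the quadrupole moments into the estimators of eq.(\ref{Estimator final}).
\begin{verbatim}
import numpy as np

def autoconvolve(stamp):
    # inverse Fourier transform of the image power spectrum
    power = np.abs(np.fft.fft2(stamp)) ** 2
    return np.fft.fftshift(np.real(np.fft.ifft2(power)))

def moments(image):
    # unweighted moments x^i y^j about the stamp centre; the
    # autoconvolution is centred there by construction
    n = image.shape[0]
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x, indexing='xy')
    return {(i, j): np.sum(X**i * Y**j * image)
            for (i, j) in [(0, 0), (2, 0), (0, 2), (1, 1)]}

def shear_estimators(gals, psfs, gal_bkgs, psf_bkgs):
    # ensemble sums in numerator and denominator, as in the text
    num1 = num2 = den = 0.0
    for g, p, gb, pb in zip(gals, psfs, gal_bkgs, psf_bkgs):
        S = moments(autoconvolve(g) - autoconvolve(gb))
        T = moments(autoconvolve(p) - autoconvolve(pb))
        num1 += (S[2, 0] - S[0, 2]) * T[0, 0] - (T[2, 0] - T[0, 2]) * S[0, 0]
        num2 += S[1, 1] * T[0, 0] - T[1, 1] * S[0, 0]
        den  += (S[2, 0] + S[0, 2]) * T[0, 0] - (T[2, 0] + T[0, 2]) * S[0, 0]
    return num1 / (2.0 * den), num2 / den
\end{verbatim}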
\label{S-Summary} Auto-convolved galaxy images can be used for shear estimation because the auto-convolved image transforms in the same way as the intrinsic image under the distortion matrix of cosmic shear (see eq.(\ref{shear auto-convolved})). The PSF effect can be corrected by subtracting the quadrupole moments of the auto-convolved PSF directly. The effect of noise can be statistically removed using neighbouring images containing only background, similar to what is done in ZLF15. The PSF image is also allowed to contain noise, the effect of which can be similarly removed using neighbouring background images, as shown in \S\ref{noise_c}. These convenient features of the new method are all due to the linearity of the relation between the shear estimators and the multipole moments of the auto-convolved galaxy/PSF images. The major advantages of the new method are: 1) It does not make assumptions on the morphologies of the galaxy or PSF; 2) It has an accurate treatment of noise for both galaxy and PSF; 3) The centroids of the auto-convolved images are trivially identified; 4) It only requires the quadrupole moments of the galaxy/PSF images; 5) The image processing of the method is very fast, as it only involves Fast Fourier Transformation. We choose to use a circular aperture to define the boundary of the galaxy image, and to determine the proper aperture radii for galaxies of different Sersic profiles. The results show that the required aperture size (in terms of the galaxy's half-light radius) for the new method is more strongly dependent on the galaxy morphology than in the method of ZLF15. We also show that the new method works well in the presence of noise, albeit with a larger statistical error than ZLF15. These issues remain to be improved. A possible solution to these problems is to downweight the contribution of pixels at large distances from the image center. However, it seems that a nontrivial weighting function (compared to our current circular top-hat function) necessarily requires more complicated PSF correction procedures, as discussed in many other works. This is a possible direction for the future development of this method.
16
7
1607.07545
1607
1607.01543_arXiv.txt
Successful phenomenological models of pulsar wind nebulae assume efficient dissipation of the Poynting flux of the magnetized electron-positron wind as well as efficient acceleration of the pairs in the vicinity of the termination shock, but how this is realized is not yet well understood. The present paper suggests that the corrugation of the termination shock, at the onset of non-linearity, may lead towards the desired phenomenology. Non-linear corrugation of the termination shock would convert a fraction of order unity of the incoming ordered magnetic field into downstream turbulence, slowing down the flow to sub-relativistic velocities. The dissipation of turbulence would further preheat the pair population on short length scales, close to equipartition with the magnetic field, thereby reducing the initial high magnetization to values of order unity. Furthermore, it is speculated that the turbulence generated by the corrugation pattern may sustain a relativistic Fermi process, accelerating particles close to the radiation reaction limit, as observed in the Crab nebula. The required corrugation could be induced by the fast magnetosonic modes of downstream nebular turbulence; but it could also be produced by upstream turbulence, either carried by the wind or seeded in the precursor by the accelerated particles themselves.
\label{sec:intr} Pulsar wind nebulae (PWNe) have long been recognized as outstanding laboratories of astro-plasma physics in extreme conditions, see e.g. \citet{2009ASSL..357..421K} and \cite{2012SSRv..173..341A}, and the Crab nebula, as a result of its proximity, plays a very special role among those objects. At the price of non-trivial assumptions regarding the physical conditions behind the termination shock of the pulsar wind, which separates the free streaming wind from the hot shocked wind in the nebula, phenomenological models have been very successful in explaining the general spectral energy distribution and morphological features of the Crab nebula, using analytical calculations \citep[e.g.][]{1984ApJ...283..694K,1984ApJ...283..710K,1996MNRAS.278..525A} or increasingly sophisticated numerical simulations \citep[e.g.][]{2003MNRAS.344L..93K,2004MNRAS.349..779K,2003A&A...405..617B,2004A&A...421.1063D,2014MNRAS.438..278P}, see also \citet{2015arXiv150302402A} and \citet{2015SSRv..191..391K} for reviews. Nonetheless, various puzzles plague the current understanding of the microphysics of PWNe; among them, two are particularly noteworthy: how the wind converts its Poynting flux -- which supposedly far dominates the particle kinetic energy at the base of the wind -- into particle thermal energy behind the termination shock, reaching rough equipartition between these components; and how particle acceleration takes place behind the termination shock. The present paper examines a speculative scenario, which could potentially solve part of the above puzzles; it specifically assumes that the termination shock of the pulsar wind is non-linearly corrugated, the precise meaning of this being given in Sec.~\ref{sec:def}. It then shows that such corrugation efficiently converts an incoming ordered magnetic energy into turbulence, thereby slowing down appreciably the flow velocity behind the termination shock. A significant part of the turbulence can be further dissipated through collisionless effects on short length scales, leading to pre-acceleration of the pairs, up to close equipartition with the incoming magnetic energy. The corrugation of the shock may thus achieve efficient dissipation of the incoming Poynting flux, in a way that is reminiscent of the dissipation through reconnection of a striped wind in the equatorial plane \citep{2003MNRAS.345..153L}. Finally, it is speculated (and argued) that the turbulence seeded by corrugation may also sustain an efficient Fermi process, leading to a particle spectrum close to what is observed. This paper is organized as follows: Sec.~\ref{sec:corr} discusses the physics of a corrugated shock wave in the MHD limit; Sec.~\ref{sec:acc} recalls some results on the collisionless damping of relativistic MHD waves in a relativistic plasma, then discusses the physics of particle pre-acceleration in the resulting turbulence and the development of a relativistic Fermi process at high energies. Sec.~\ref{sec:disc} discusses various possible sources of corrugation and examines how the present results can be applied to PWNe. Finally, Sec.~\ref{sec:conc} provides a summary of the discussion and some conclusions.
\label{sec:conc} The present paper has speculated that the termination shock of pulsar winds might be strongly corrugated. It has discussed possible sources of corrugation and exhibited various interesting phenomenological consequences. Corrugation could in principle be excited by the interaction of downstream fast magnetosonic modes catching up the shock front, through the advection of upstream turbulence modes, or through the generation in the shock precursor of turbulence by particle acceleration. As discussed here and in \citet{LRG16}, the latter two possibilities are more interesting in the present context because of the existence of a resonance in the response of the shock to upstream perturbations, leading to possible large amplification of turbulent modes. Once corrugation is excited on a range of scales, a fraction of order unity of the incoming ordered magnetic energy is converted into turbulence, immediately downstream of the shock. This conversion slows down appreciably the flow velocity along the shock normal, which could help understand why the post-shock nebula moves so slowly in the pulsar rest frame, in accord with the seminal discussion of \citet{1984ApJ...283..694K}. Various dissipative effects could then transfer a sizable fraction of the turbulence energy density into the pair population. In particular, slow magnetosonic modes are rapidly dissipated in a relativistic plasma. A direct consequence is that, independently of the upstream magnetization of the flow, the downstream magnetization beyond this dissipative layer would decrease to values of order unity. This, of course, has significant virtues for understanding the phenomenological properties of the Crab nebula, which indeed reveals a rough equipartition between the pairs and the magnetic energy content. The pre-acceleration of the pairs in the dissipative layer through stochastic acceleration leads to the emergence of a power-law, with an index $s$ typically between $1$ and $2$, because stochastic acceleration is balanced by escape losses due to advection outside of the dissipative layer. The present paper has also speculated that the excitation of turbulence on a broad range of scales behind the shock could sustain a relativistic Fermi process with a Bohm-type acceleration timescale; this point remains to be demonstrated however, using for instance dedicated test-particle simulations. It has then been shown that this combination of stochastic pre-acceleration followed by Fermi acceleration could potentially help understand the spectral features of the Crab nebula, provided the Lorentz factor of the termination shock is $\gamma_1\,\sim\,4\times 10^3$ in the nebula rest frame (assuming a pair multiplicity $\kappa\,\sim\,10^6$). Further work is required along several lines to test this speculative model. In particular, dedicated numerical simulations are needed to understand the physics of corrugation in the non-linear regime through the interaction of a relativistic magnetized shock with upstream perturbations. As mentioned above, dedicated numerical simulations would also be needed to understand how such a corrugated shock can accelerate particles, and with what efficiency. It would be interesting to understand how the accelerated particles could themselves seed perturbations in the upstream plasma, and how such perturbations could influence the shock corrugation pattern. 
Finally, such simulations would have to be properly placed in a global context to understand the impact of the nebular turbulence on the shock itself.\bigskip {\bf Acknowledgments:} it is a pleasure to thank A. Bykov, L. Gremillet, R. Keppens, G. Pelletier and O. Ramos for insightful discussions and advice. This work has been financially supported by the Programme National Hautes \'Energies (PNHE) of the C.N.R.S. and by the ANR-14-CE33-0019 MACH project.
16
7
1607.01543
1607
1607.03293_arXiv.txt
{In July 2015, the high-mass X-ray binary \vo underwent a giant outburst, a decade after the previous one. \vo hosts a strongly magnetized neutron star. During the 2004--2005 outburst, an anti-correlation between the centroid energy of its fundamental cyclotron resonance scattering features (CRSFs) and the X-ray luminosity was observed.} % {The long ($\approx 100$\,d) and bright ($L_{\rm x} \approx 10^{38}$\,erg\,s$^{-1}$) 2015 outburst offered the opportunity to study during another outburst the unique properties of the fundamental CRSF and its dependence on the X-ray luminosity.} % {The source was observed by the \inte satellite for $\sim 330$\,ks. We exploit the spectral resolution at high energies of the SPectrometer on \inte (SPI) and the Joint European X-ray Monitors to characterize its spectral properties, focusing in particular on the CRSF-luminosity dependence. We complement the data of the 2015 outburst with those collected by SPI in 2004--2005 and left unpublished so far.} % {We find a highly significant anti-correlation of the centroid energy of the fundamental CRSF and the $3-100$\,keV luminosity of $E_1 \propto -0.095(8)L_{37}$\,keV. This trend is observed for both outbursts. We confirm the correlation between the width of the fundamental CRSF and the X-ray luminosity previously found in the JEM-X and IBIS dataset of the 2004--2005 outburst. By exploiting the RXTE/ASM and Swift/BAT monitoring data we also report on the detection of a $\sim 34$\,d modulation superimposed on the mean profiles and roughly consistent with the orbital period of the pulsar. We discuss possible interpretations of such variability.} % {} %
\label{sect intro} \vo is a transient X-ray pulsar orbiting around a Be star. It spends most of the time in a low luminosity state ($L_{\rm x} \la 10^{36} \ergs$), sporadically interrupted either by normal type I outbursts, ($L_{\rm x} \sim10^{36-37}\,\ergs$) associated with the passage of the neutron star (NS) at the periastron, or by giant type II outbursts, which last several orbital periods. During these episodes, \vo becomes one of the most luminous X-ray sources of the Galaxy, achieving X-ray luminosities up to $L_{\rm x}\approx 10^{38}$\,erg\,s$^{-1}$. \vo was first detected during a long ($\sim 100$ days) giant outburst caught by Vela~5B in 1973, when it reached a peak intensity of 1.4 Crab in $3-12$\,keV ($\sim 2.9 \times 10^{-10}$~erg~cm$^{-2}$~s$^{-1}$; \citealt{terrell1984}). A spin period of 4.37\,s and an orbital period of 34\,days (eccentricity $e = 0.37$) were discovered \citep{stella1985,zhang2005} during three small outbursts observed by \emph{EXOSAT} and \emph{Tenma} \citep{Tanaka1983}. Owing to the precise source localization obtained with EXOSAT, the companion was identified as an O8-9Ve star, BQ Cam \citep{honeycutt1985,Negueruela1999}, and its distance estimated to be 2.2--5.8\,kpc \citep{corbet1986}. This value was later increased to 7\,kpc \citep{Negueruela1999}. \vo was detected again in X-rays by \emph{Ginga} in 1989, leading to the discovery of an absorption line feature at $\sim$28.5\,keV and quasi periodic oscillations at $\sim$0.051\,Hz \citep{Makishima1990, Takeshima1994}. A particularly bright giant outburst occurred in 2004 November, and was followed by \inte and \xte through dedicated target of opportunity observations (TOOs). The X-ray spectrum could be fitted using a power law with high-energy cut-off, a model which is typically adopted to describe the X-ray emission of accreting pulsars in Be X-ray binary systems. Interestingly, the X-ray spectrum shows three absorption-like features at energies $\sim28$\,keV, $\sim50$\,keV and $\sim70$\,keV \citep{coburn2005,Pottschmidt2005,kreykenbohm2005}. These are produced by the resonant scattering of photons on electrons in the accretion column of X-ray pulsars and are called cyclotron resonance scattering features (CRSFs; see, e.g., \citealt{Isenberg1998}; \citealt{Schoenherr2007} and references therein). If detected, CRSFs provide a direct measurement of the magnetic field of a neutron star through the relation $E_{\rm cyc} \approx 11.6B_{12}\times(1+z)^{-1}$\,keV, where $E_{\rm cyc}$ is the centroid energy of the fundamental CRSF, $B_{12}$ is the magnetic field strength in units of $10^{12}$\,G, and $z$ is the gravitational redshift of the line-forming region. The higher harmonics have a centroid energy approximately $n$-times that of the fundamental line. The equation above can be used to derive a neutron star magnetic field strength of $\sim2.7\times10^{12}$\,G in \vo. An anti-correlation between the centroid energy of the fundamental CRSF and the X-ray luminosity during the outbursts has been reported by several authors using different data sets \citep{Mihara1998,Mowlavi2006,Tsygankov2006,Tsygankov2010}. So far, this is the only X-ray pulsar for which an anti-correlation is firmly established, as the presence of this phenomenon in \fu is still debated \citep[see][and references therein]{Mueller2013}. 
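For orientation, the relation quoted above can be inverted (an illustrative estimate; the gravitational redshift of the line-forming region is treated here as a free parameter since its adopted value is not stated):
\[
 B_{12} \simeq \frac{E_{\rm cyc}\,(1+z)}{11.6\,{\rm keV}} , \qquad
 E_{\rm cyc}\simeq 28\,{\rm keV} \;\Rightarrow\; B \simeq 2.4\,(1+z)\times 10^{12}\,{\rm G},
\]
so that a modest redshift, $z\simeq 0.1$, reproduces the $\sim2.7\times10^{12}$\,G quoted above, while $z\simeq0.3$ would give $\simeq3.1\times10^{12}$\,G.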
For several other systems, a positive correlation between X-ray luminosity and CRSF centroid energy has been reported (Her~X-1, \citealt{Staubert2007}; GX~304$-$1, \citealt{Klochkov2012}; Vela~X-1, \citealt{Furst2013}; A\,0535$+$26, \citealt{Sartore2015}). The anti-correlation observed in \vo has been interpreted as due to an increase of the height of the accretion column, which induces either the upward migration of the line-forming region into a region where the magnetic field weakens \citep[][and references therein]{becker2012}, or the progressive illumination of a larger portion of the neutron star surface, where cyclotron scattering is assumed to take place in regions progressively further from the magnetic poles \citep{Poutanen2013}. In June 2015, \vo underwent a new giant outburst \citep{Nakajima2015,Doroshenko2015b}, anticipated by a brightening in the optical band, probably associated with the donor star disc \citep{Camero-Arranz2015}. During this outburst, INTEGRAL carried out four observations between July 17 and October 9, covering both the rise and the decay of the outburst. In this work, we present the spectral results on the cyclotron line luminosity dependence, obtained using the SPectrometer on Integral (SPI) data collected during both the source outbursts, in 2004--2005 and 2015. We complemented the analysis using JEM-X and IBIS data collected during the same periods, when possible (see the next section). The observation and data analysis are described in Sect.~\ref{sect. obs}. In Sect.~\ref{sect. spectral analysis}, we present and discuss the results from the spectral analysis. In Sect.~\ref{sect. lightcurve}, we report the identification of a $\sim 34$\,d modulation superimposed on the profiles of the 2004--2005 and 2015 outbursts. Finally, in Sect.~\ref{sec:summary}, we summarize our findings.
\label{sec:summary} We have reported results on the spectral properties of \vo during the 2004--2005 and 2015 giant outbursts, based on SPI, JEM-X, and IBIS observations. We showed that the spectral parameters describing the continuum are consistent with previous results. The anti-correlation between the centroid energy of the fundamental CRSF and the 3$-$100\,keV luminosity is $E_1 \propto -0.095\pm 0.008 L_{37}$\,keV. The value of the slope lies between the slopes determined by \citet{Cusumano2016} for the ascending and descending phases of the 2015 outburst, while a direct comparison with the results by \citet{Tsygankov2006} is hampered by the different spectral model adopted by these authors. Using SPI data of the 2004--2005 outburst, left unpublished so far, together with JEM-X and IBIS data, we confirmed a significant correlation between the absorption line width of the fundamental CRSF and the 3--100\,keV luminosity previously found by \citet{Mowlavi2006} in the JEM-X+ISGRI dataset of the same outburst. We found a modulation at $\sim34$ days superimposed on the 2004--2005 and 2015 outburst profiles in RXTE ASM and \swift/BAT lightcurves, with maxima shifted by 10$-$15 days after the periastron passage. Such a modulation in the lightcurve of a neutron star powered by an accretion disk can be explained by an enhanced amount of matter gravitationally captured at periastron and spiraling inwards (on a timescale that depends on the size of the disk and its viscosity properties) until it is accreted on the surface of the neutron star.
16
7
1607.03293
1607
1607.03636_arXiv.txt
We present new observations of the galaxy cluster 3C\,129 obtained with the Sardinia Radio Telescope in the frequency range 6000$-$7200\,MHz, with the aim to image the large-angular-scale emission at high-frequency of the radio sources located in this cluster of galaxies. The data were acquired using the recently-commissioned ROACH2-based backend to produce full-Stokes image cubes of an area of 1\degree$\times$1\degree~centered on the radio source 3C\,129. We modeled and deconvolved the telescope beam pattern from the data. We also measured the instrumental polarization beam patterns to correct the polarization images for off-axis instrumental polarization. Total intensity images at an angular resolution of 2.9\arcmin~were obtained for the tailed radio galaxy 3C\,129 and for 13 more sources in the field, including 3C\,129.1 at the galaxy cluster center. These data were used, in combination with literature data at lower frequencies, to derive the variation of the synchrotron spectrum of 3C\,129 along the tail of the radio source. If the magnetic field is at the equipartition value, we showed that the lifetimes of radiating electrons result in a radiative age for 3C\,129 of $t_{\rm syn}\simeq 267\pm 26$ Myrs. Assuming a linear projected length of 488\,kpc for the tail, we deduced that 3C\,129 is moving supersonically with a Mach number of $M=v_{\rm gal}/c_{\rm s}=1.47$. Linearly polarized emission was clearly detected for both 3C\,129 and 3C\,129.1. The linear polarization measured for 3C\,129 reaches levels as high as 70\% in the faintest region of the source where the magnetic field is aligned with the direction of the tail.
The Sardinia Radio Telescope (SRT; Grueff et al. 2004) is a new 64-m diameter radio telescope located at San Basilio, about 35\,km north of Cagliari, Italy. The telescope is designed for astronomy, geodesy, and space science, either as a single dish or as part of European and global networks. The SRT has a shaped Gregorian optical configuration with a 7.9-m secondary mirror and supplementary Beam-WaveGuide (BWG) mirrors. With six focal positions (primary, Gregorian, and four BWGs), the SRT is capable of allocating up to 20 receivers. Eventually, once all of the planned devices are installed, it will operate with high efficiency in the 0.3-116 GHz frequency range. One of the most advanced technical features of the SRT is its active surface: the primary mirror is composed of 1008 panels supported by electromechanical actuators that are digitally controlled to compensate for gravitational deformations. The antenna officially opened on September 30th 2013, upon completion of the technical commissioning phase (Bolli et al. 2015). In its first light configuration, the SRT is equipped with three receivers: a 7-beam K-band (18-26.5 GHz) receiver (Gregorian focus; Orfei et al. 2010), a single-feed C-band (5.7-7.7 GHz) receiver (BWG focus), and a coaxial dual-feed L/P-band (0.305-0.41/1.3-1.8 GHz) receiver (primary focus; Valente et al. 2010). The technical commissioning phase was followed by the Astronomical Validation activity aimed at validating the telescope for standard observing modes and at transforming the SRT from a technological project into a real general-purpose astronomical observatory. The Astronomical Validation results are presented by Prandoni et al. (in preparation). The suite of backends currently available on site includes the recently-commissioned ROACH2 FPGA board-based\footnote{The Reconfigurable Open Architecture Computing Hardware (ROACH) processing board is developed by the Center for Astronomy Signal Processing and Engineering Research, see http://casper.berkeley.edu. The ROACH2 is a stand-alone FPGA-based board that represents the successor to the original ROACH board.} backend SARDARA (SArdinia Roach2-based Digital Architecture for Radio Astronomy; Melis et al., in preparation). In this work, we exploited for the first time the capabilities of the SARDARA backend in spectral-polarimetric mode to perform total intensity and polarization observations in C-band. For this purpose, we observed the galaxy cluster 3C\,129 field. 3C\,129 is a luminous X-ray galaxy cluster at a redshift of $z=0.0208$ (Spinrad 1975), located close to the Galactic plane. We selected this field because of the homonymous radio galaxy 3C\,129, whose spectral and polarization properties have been very well studied in the literature. Indeed, these observations serve as an important reference to assess the reliability of the calibration of the new spectral-polarimetric data acquired with the SRT. Moreover, they provide new information on the high-frequency large angular scale emission of the radio sources located in this cluster of galaxies. Like many other radio sources hosted in galaxy clusters, 3C\,129 has a distorted head-tail morphology where the jets are bent into a tail due to the ram-pressure caused by the motion of the parent galaxy in the intracluster medium. The radio galaxy 3C\,129 was among the first radio galaxies discovered to have a head-tail structure. It is located at the cluster periphery and was identified with an E-galaxy by Hill \& Longair (1971). 
Since then, it has been studied extensively in radio by several authors (e.g. Miley 1973, Perley \& Erickson 1979, Owen et al. 1979, van Breugel 1982, Downes 1984, J{\"a}gers 1987, Feretti et al. 1998, Taylor et al. 2001). The source stretches over $\sim$20\arcmin~in length, and the spectacular structure of its narrow-angle tail has been shown at low frequencies with Very Large Array (VLA) and Giant Metrewave Radio Telescope (GMRT) observations (e.g. Kassim et al. 1993, Lane et al. 2002, Lal \& Rao 2004). The importance of mapping the radio galaxy 3C\,129 with a single dish is that interferometers suffer from the technical problem of the missing zero spacings. Indeed, they filter out structures larger than the angular size corresponding to their shortest spacing, limiting the synthesis imaging of extended structures. Single dish telescopes are optimal for recovering all of the emission on large angular scales, especially at high frequencies. Although single dishes typically have a low resolution, the 3C\,129 tail stretches enough to be well resolved at the resolution of about 2.9\arcmin~of the SRT in C-band. In addition to the head-tail 3C\,129, we mapped other radio sources in the field, including the radio source 3C\,129.1 near the projected center of the X-ray galaxy cluster. This radio source also has dual radio jets but it is weaker and smaller than 3C\,129, and the radio jets extend over about 2\arcmin~(e.g. Downes 1984, J{\"a}gers 1987, Kassim et al. 1993). The paper is organized as follows. In Sect.\,2, we illustrate the SRT observations. In Sect.\,3, we present the data reduction. In Sect.\,4, we show the total intensity and polarization imaging. In Sect.\,5, we present the spectral analysis. In Sect.\,6, we present the polarization analysis. Finally, in Sect.\,7 we draw our summary. In the Appendix, we provide details about the data reduction. Throughout this paper, we assume a $\Lambda$CDM cosmology with $H_0$ = 71 km s$^{-1}$Mpc$^{-1}$, $\Omega_m$ = 0.3, and $\Omega_{\Lambda}$ = 0.7. At the distance of 3C\,129, 1\arcsec~corresponds to 0.415 kpc.
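As a side note (not part of the original analysis), the quoted angular-to-linear scale follows directly from the adopted cosmology. A minimal Python sketch, using astropy as our own choice of tool, reproduces it:
\begin{verbatim}
# Minimal sketch of the angular-to-linear scale for the adopted cosmology.
# The use of astropy here is our own choice, not part of the original analysis.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=71, Om0=0.3)   # flat, so Omega_Lambda = 0.7 follows
z = 0.0208                              # redshift of the 3C 129 cluster
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(scale)   # ~0.42 kpc/arcsec, consistent with the 0.415 kpc quoted above
\end{verbatim}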
We have presented a wide-band spectral-polarimetric study of the galaxy cluster 3C\,129 performed with the new SARDARA backend installed at the SRT. The results are summarized in the following. Total intensity images at an angular resolution of about 2.9\arcmin~were obtained for the tailed radio galaxy 3C\,129 and for 13 more sources in the field, including 3C\,129.1 at the galaxy cluster center. We modeled the total intensity telescope beam pattern and the off-axis instrumental polarization patterns. We used these models to deconvolve the telescope beam from the total intensity images and to correct the polarized images for the off-axis instrumental polarization. The SRT data were used, in combination with literature data at lower frequencies, to derive the variation of the synchrotron spectrum of 3C\,129 along the tail of the radio source. We computed the global spectra of 3C\,129 and 3C\,129.1, which we used to compare the SRT total intensity measurements with the literature data at other frequencies. We found that the new SRT data points are in remarkable agreement with both data from the literature at close frequencies and the best model fit to the global synchrotron spectrum. This result confirms the accuracy of the flux density scale calibration at the SRT. In addition, we found that the radio spectrum in 3C\,129 progressively steepens with distance from the core, and at each location it is steeper at higher frequencies. The local spectra are well fitted by models involving synchrotron energy losses, assuming a continuous isotropization of the pitch-angle distribution (JP model). The break frequency obtained by spectral fits decreases with increasing distance from the host galaxy. Assuming that the magnetic field is at the equipartition value, we showed that the lifetimes of the radiating electrons, after synchrotron and inverse Compton losses, result in a radiative age for 3C\,129 of the order of $t_{\rm syn}\simeq 267\pm 26$ Myrs. Assuming a linear projected length of 488\,kpc for the tail, this implies a relative average advancing velocity for the galaxy of $v_{\rm gal}\simeq (1785\pm 175)\cdot \sec(i)$\,km/s (where $i$ is the inclination of the tail with respect to the plane of the sky). Given the sound speed of $c_{\rm s}=1217\pm 22$ km/s, we deduced that 3C\,129 is moving supersonically with a Mach number as high as $M=v_{\rm gal}/c_{\rm s}=1.47$, which is very close to the speed invoked by Lane et al. (2002) to explain the Mach cone opening angle of $\sim 90$\degree~for the shock-wave front that would be associated with the ``crosspiece'' observed in front of the host galaxy. Linearly polarized emission is clearly detected for both 3C\,129 and 3C\,129.1. The linear polarization determined for 3C\,129 with the SRT at 6600\,MHz reaches levels as high as 70\% in the faintest region of the source where the magnetic field is aligned with the direction of the tail. This result can be interpreted as the combination of the spectral steepening due to the radiative losses of the synchrotron electrons, which leads to an intrinsically higher fractional polarization, and of an ordering of the magnetic field that increases with distance from the head.
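As a rough consistency check of the kinematics summarized above (the unit conversions and rounding are ours, not part of the original analysis), the advance velocity and Mach number follow directly from the projected tail length, the radiative age, and the sound speed:
\begin{verbatim}
# Back-of-the-envelope check of the advance speed and Mach number
# (input values from the text; unit conversions are our own).
KPC_IN_KM = 3.086e16   # km per kpc
MYR_IN_S  = 3.156e13   # s per Myr

tail_kpc = 488.0       # projected tail length
t_syn    = 267.0       # radiative age in Myr
c_s      = 1217.0      # sound speed of the intracluster medium in km/s

v_gal = tail_kpc * KPC_IN_KM / (t_syn * MYR_IN_S)   # ~1.8e3 km/s
print(v_gal, v_gal / c_s)                           # Mach ~1.47 (for i = 0)
\end{verbatim}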
16
7
1607.03636
1607
1607.05683_arXiv.txt
We investigated the formation of the \ion{Mg}{2} h--k doublet in a weakly magnetized atmosphere (20--100\,G) using a newly developed numerical code for polarized radiative transfer (RT) in a plane-parallel geometry, which implements a recent formulation of partially coherent scattering by polarized multi-term atoms in arbitrary magnetic field regimes. Our results confirm the importance of partial redistribution effects in the formation of the \ion{Mg}{2} h and k lines, as pointed out by previous work in the non-magnetic case. We show that the presence of a magnetic field can produce measurable modifications of the broadband linear polarization even for relatively small field strengths (${\sim}10$\,G), while the circular polarization remains well represented by the classical magnetograph formula. Both of these results open an important new window for the weak-field diagnostics of the upper solar atmosphere.
\label{sec:intro} One of the big challenges faced these days by the solar physics community is to gain a solid understanding of the solar chromosphere, and how it magnetically connects to the underlying photosphere and the corona above. Within the chromosphere, the structure and dynamics of the magnetized plasma undergo dramatic changes. This region spans approximately nine pressure scale heights, and the gas temperature goes through a minimum of only a few thousand K, before suddenly rising to the million K temperatures of the solar corona. As the gas density decreases, the intrinsic three-dimensional distribution of the solar radiation becomes increasingly important, because the excitation of the chromospheric ions becomes more strongly correlated with the degree of anisotropy of the radiation. At the same time, the reduced role of particle collisions in thermalizing the atomic populations allows for subtle quantum effects (e.g., atomic polarization, level-crossing coherence, the magnetic and electric Hanle effects in the presence of deterministic as well as turbulent fields) to become apparent in the spectral and polarization signatures of the solar chromosphere \citep{TB01,LL04,CL08}. \begin{figure*}[!t] \centering \includegraphics[width=.495\hsize]{MgII-PRD-Cols-20G.eps} \includegraphics[width=.495\hsize]{MgII-PRD-Cols-20G-zoom.eps} \caption{\label{fig:both} Stokes profiles of the \ion{Mg}{2} h--k doublet modeled in a weakly magnetized FAL-C atmosphere ($B=20$\,G, $\vartheta_B=30^\circ$, $\varphi_B=180^\circ$) and for various directions of the LOS (corresponding to $\mu=0.1,0.3,0.5,0.8$, respectively, for the black, red, blue, and green curves). The dashed curves correspond to the Stokes profiles for the non-magnetic case and $\mu=0.1$. \emph{Left:} note the remarkable presence of broadband Stokes-$U$ polarization due to the combination of upper-term quantum interferences and magneto-optical effects. \emph{Right:} finer details of the polarization of the h and k line cores and of the quantum interference pattern between them. We note in particular the reversal of the sign of Stokes $V$ (and more subtly, of Stokes $U$) for $\mu=0.8$, in accordance with the sign of the LOS projection of the magnetic field vector. We also note the complete absence of a magnetic signature in the core of the h-line at 280.35\,nm, as expected for an intrinsically non-polarizable transition in the weak-field limit. } \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=.495\hsize]{MgII-PRD-Cols-100G.eps} \includegraphics[width=.495\hsize]{MgII-PRD-Cols-100G-zoom.eps} \caption{\label{fig:both.100} Same as Figure~\ref{fig:both}, but for a magnetic field strength $B=100$\,G. We note how the separation of the lobes in the broadband polarization structure of Stokes $Q$ and $U$ increases with the magnetic strength because of the M-O effects. The changes in the line cores are instead dominated by the depolarization associated with the larger field strength. } \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=.495\hsize]{MgII-PRD-Cols-mu08-Vk.eps} \caption{\label{fig:only_k} Fractional circular-polarization profiles $v = V/I$ of the \ion{Mg}{2} k-line, for the magnetic models of Figure~\ref{fig:both} (top) and Figure~\ref{fig:both.100} (center), and corresponding intensity profiles (bottom) for the CRD (dashed) and PRD (solid) regimes. 
Only the case of $\mu=0.8$ is shown here (cf.\ green curves of Figures~\ref{fig:both} and \ref{fig:both.100}), showing the fully resolved line core and the near wings of the line. The dashed-dotted curves show the weak-field approximation to Stokes $V$ (magnetograph formula). We note the extremely good fit of this approximation in the spectral region where the line formation is dominated by the CRD regime. The reported values of the inverted LOS field component were estimated by restricting the use of the magnetograph formula to the inner lobes of the k-line, where the $v$ fractional polarization is larger than 0.04\% and 0.2\%, respectively, for the 20\,G and 100\,G field strengths. } \end{figure*} Another increasingly important aspect of the modeling of chromospheric spectral lines in realistic solar scenarios is the ability to account for the higher temporal coherence between the processes of absorption and re-emission of the solar radiation, which is fostered by the particular physical state of the tenuous chromospheric plasma. This condition of \emph{partially coherent scattering} gives rise to a plethora of phenomena (commonly dubbed \emph{partial frequency redistribution} or PRD), which must be taken into account for a proper diagnosis of the plasma and magnetic properties of the chromosphere. In particular, PRD effects are fundamental for the interpretation of many quantum interference patterns that are observed in the solar spectrum even between widely separated multiplet lines. This was originally demonstrated by \cite{St80} in the case of the \ion{Ca}{2} H--K doublet around 395\,nm. Such effects are thus essential also for the modeling of the linear polarization of the \ion{Mg}{2} h--k doublet around 280\,nm \citep{Au80,HS87,BT12}, which has a quantum structure similar to that of the \ion{Ca}{2} H and K lines. Recently, polarized radiative transfer (RT) with PRD in the non-magnetic case has been applied to the modeling of a variety of chromospheric line multiplets showing quantum interferences \citep{Sm12,BT14}, as well as in the case of a uniformly magnetized slab \citep{Sm13}. Independently, \citeauthor{Ca14}\ (\citeyear{Ca14}; see also \citeauthor{CM16} \citeyear{CM16}) have attacked the problem of the formation of spectral line polarization by partially coherent scattering in a magnetized medium, with sufficient generality to enable the modeling of many resonance lines of the solar spectrum that show complex linear polarization patterns \citep{Wi75,SK96,SK97,St00,Ga00}. In particular, that formalism allows one to fully take into account the role of atomic polarization in the lower state of an atomic transition, a feature that has been neglected by previous works in PRD modeling. On the other hand, lower-level polarization is important for the interpretation of many chromospheric diagnostics, as demonstrated by \cite{MT03} in the case of the \ion{Ca}{2} IR triplet. In order to apply the formalism of \cite{Ca14} under realistic chromospheric conditions, we have developed a 1-D RT code for the polarized multi-term atom in an arbitrary magnetic field, which takes into account the effects of PRD, as well as the contribution of (isotropic) inelastic and elastic collisions. The code is based on a straightforward $\Lambda$-iteration scheme \citep[e.g.,][]{Mi78}, and it arrives at the solution for the polarized PRD transfer problem in a magnetized atmosphere in two steps.
In the first stage we assume \emph{complete frequency redistribution} (CRD), and solve the non-LTE problem of the second kind \citep{LL04} for zero magnetic field and including only inelastic collisions. In order to facilitate the convergence of the CRD problem, we initialize the level populations with the non-LTE solution from the RH code \citep{Ui01}. In the second stage, this converged CRD solution is used to initialize the iteration for the magnetized PRD problem, with the further addition of elastic collisions. The effects of collisions are taken into account in the RT equation by implementing physically consistent branching ratios between the first-order (CRD) and second-order (PRD) emissivity terms of the theory \citep[cf.][]{Ca14}. As a result, the emissivity term in the RT equation takes the form \begin{equation} \label{eq:emissivity} \varepsilon_i(\omega_k,\hat{\bm{k}})= \left[\varepsilon_i\apx{1}(\omega_k,\hat{\bm{k}}) -\varepsilon_i\apx{2}(\omega_k,\hat{\bm{k}})_{\rm f.s.}\right] +\varepsilon_i\apx{2}(\omega_k,\hat{\bm{k}}) \end{equation} where $\varepsilon_i\apx{2}(\omega_k,\hat{\bm{k}})_{\rm f.s.}$ corresponds to the expression of $\varepsilon_i\apx{2}(\omega_k,\hat{\bm{k}})$ in the limit of flat-spectrum (f.s.) illumination. Using equation~(15) of \cite{Ca14}, it is straightforward to verify that the emissivity (\ref{eq:emissivity}) converges to the one of \cite{Ui01}, in the case of the unpolarized multi-level atom.
\label{sec:concl} The following conclusions can be drawn from the above results. First of all, the importance of PRD effects for the formation of the linear polarization profiles of the \ion{Mg}{2} h and k lines \citep{Au80,HS87,BT12,BT14} is confirmed by these new calculations in the presence of a magnetic field. Secondly, M-O effects are found to be responsible for the appearance of important levels of \emph{broadband} Stokes-$U$ polarization. This result contrasts with the common belief that magnetic fields produce significant polarization only in the line core, and in particular that the manifestation of M-O effects requires the presence of strong magnetic fields \citep[e.g.,][]{SL87}. The fact that M-O effects are so prominent in the polarization profiles of the \ion{Mg}{2} h--k lines is due to the peculiar combination of a strong opacity in the far wings and a significant level of scattering polarization (${\sim}2$\%; see Figure~\ref{fig:both}) induced by radiation anisotropy, which are produced in the adopted atmospheric model. In our two-term model atom, these M-O effects induce the appearance of a polarization signal that encompasses the entire spectral range of the h--k doublet (spanning several nm; see Figures~\ref{fig:both} and \ref{fig:both.100}), and which is dominated by the signature of quantum interferences in the upper term of the \ion{Mg}{2} atomic model. A very remarkable result is the manifestation of these polarization transfer effects already for relatively weak fields---in the modeled case of the \ion{Mg}{2} h and k lines, for field strengths of only a few gauss. This opens a completely new diagnostic window for the magnetism of the quiet-Sun upper atmosphere, since these effects should be detectable also in other notable chromospheric lines, such as the \ion{Ca}{2} H--K doublet around 395\,nm, the \ion{Na}{1} D-doublet around 590\,nm, and the \ion{H}{1} H$\alpha$ line at 656\,nm. In the context of this work, the presence of broadband signals in the Stokes $Q$ and $U$ polarizations of the \ion{Mg}{2} h--k doublet makes this line set a very attractive and potentially powerful diagnostic for synoptic magnetic studies of the solar chromosphere and upper photosphere. The relatively large amplitude of the signals over a spectral range spanning many Doppler widths should facilitate the design of high-throughput and fast-cadence imaging polarimeters, which could rely on relatively low polarimetric sensitivity and spectral resolution, at least for the diagnosis of the upper photosphere and lower chromosphere, where these broadband signatures are produced. The inner cores of the k and h lines, where the signatures of the Hanle and Zeeman effects dominate the polarization signal, probe instead the low transition region (above ${\sim}2000$\,km in the FAL-C model). Hence, \emph{the broadband Stokes profiles of the \ion{Mg}{2} h and k lines offer an opportunity to study simultaneously the magnetic structure at the base and the top of the chromosphere.} Our simulations show that the amplitude of the Hanle-effect polarization in the core of the \ion{Mg}{2} k-line can be as large as ${\sim}1$\%, and is therefore relatively easy to detect using narrowband (${\sim}0.25$\AA) filter polarimeters. In fact, a systematic study of the variation of narrowband-integrated Stokes $Q$ and $U$ polarizations as a function of the vector magnetic field should be conducted in order to assess the feasibility of filter-based, full-disk polarimeters for the \ion{Mg}{2} h--k doublet.
Finally, \emph{the magnetograph formula applied to the Stokes $V$ profiles of the \ion{Mg}{2} h--k doublet retains its diagnostic value as a proxy for the magnetism of the low transition region,} although our modeling also shows that its applicability breaks down when PRD effects in the line formation region become important. This conclusion reinforces the relevance of these lines for the diagnosis of chromospheric magnetic fields; in particular, it provides a direct and inexpensive tool for the quick-look inversion of large spectro-polarimetric datasets of chromospheric lines, adding further evidence of the importance and feasibility of full-disk observations of the solar chromosphere at these wavelengths.
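To make the weak-field diagnostic discussed above concrete, the following Python sketch implements a generic least-squares form of the magnetograph formula. The numerical constant (for wavelengths in \AA~and fields in gauss), the choice of estimator, and the masking of the inner lobes are our own illustrative assumptions; this is not an excerpt of the code described in this paper.
\begin{verbatim}
# Least-squares weak-field estimate of the line-of-sight field from Stokes I, V,
# using V = -C * lambda0^2 * g_eff * B_LOS * dI/dlambda (lambda in Angstrom,
# B in gauss, C = 4.6686e-13).  Illustrative sketch only.
import numpy as np

def blos_magnetograph(wav_A, stokes_I, stokes_V, lambda0_A, g_eff, vmin=0.0):
    dI = np.gradient(stokes_I, wav_A)
    C  = 4.6686e-13 * lambda0_A**2 * g_eff
    # keep only points with |V/I| above a threshold, e.g. the inner lobes
    mask = np.abs(stokes_V / stokes_I) > vmin
    num = -np.sum(stokes_V[mask] * dI[mask])
    den = C * np.sum(dI[mask]**2)
    return num / den      # B_LOS in gauss
\end{verbatim}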
16
7
1607.05683
1607
1607.00290_arXiv.txt
The origin of ultra high energy cosmic rays (UHECR) is still unknown. However, great progress has been achieved in past years due to the good quality and large statistics of the experimental data collected by the current observatories. The data of the Pierre Auger Observatory show that the composition of the UHECRs becomes progressively lighter starting from $10^{17}$ eV up to $\sim 10^{18.3}$ eV and then, beyond that energy, it becomes increasingly heavy. These analyses are subject to important systematic uncertainties due to the use of hadronic interaction models that extrapolate lower energy accelerator data to the highest energies. Although proton models of UHECRs are disfavored by these results, they cannot be completely ruled out. It is well known that the energy spectra of gamma rays and neutrinos, produced during the propagation of these very energetic particles through the intergalactic medium, are a useful tool to constrain the spectrum models. In particular, it has recently been shown that the neutrino upper limits obtained by IceCube challenge the proton models at 95\% CL. In this work we study the constraints imposed by the extragalactic gamma-ray background, measured by Fermi-LAT, on proton models of UHECRs. In particular, we make use of the extragalactic gamma-ray background flux, integrated from 50 GeV to 2 TeV, that originates in point sources, which has recently been obtained by the Fermi-LAT collaboration, in combination with the neutrino upper limits, to constrain the emission of UHECRs at high redshifts ($z>1$), in the context of the proton models.
Despite the large experimental effort made in past years, the origin of ultra high energy cosmic rays (UHECRs), i.e., those with energies larger than $10^{18}$ eV, is still unknown. The Pierre Auger \cite{Auger:15} and Telescope Array \cite{TA:12} observatories are currently taking data in this energy range. The Southern hemisphere Pierre Auger Observatory, in the Province of Mendoza, Argentina, is the largest cosmic ray observatory in the world. The Telescope Array observatory, located in Utah, USA, is the largest in the northern hemisphere. The UHECR energy spectrum presents two main features: the ankle, found at an energy of $\sim10^{18.7}$ eV, which consists of a hardening of the flux, and a suppression at $10^{19.6}-10^{19.7}$ eV \cite{Valino:15,Jui:15}. Note that, even though the spectra observed by the Pierre Auger and Telescope Array observatories present some differences, these two features are observed in both measurements. UHECRs generate atmospheric air showers when they interact with the molecules of the atmosphere. These air showers can be observed by using ground detectors and/or fluorescence telescopes. The ground detectors detect the secondary particles, produced during the shower development, that reach the ground, whereas the fluorescence telescopes detect the fluorescence light emitted by the interaction of the secondary charged particles with the molecules of the atmosphere. The Pierre Auger and Telescope Array observatories have both types of detectors, and a subsample of the events is observed by both detector types. The arrival direction, the energy, and the composition of the primary UHECR have to be inferred from the air shower observations. In particular, the composition is obtained by comparing observables sensitive to the primary mass with simulations of the atmospheric air showers. This is subject to large systematic uncertainties because the hadronic interactions at the highest energies are unknown. There are models that extrapolate the lower energy accelerator data to the highest energies. Some of these models have recently been updated with the Large Hadron Collider data. As a consequence, the differences among the predictions of the different models decreased but did not disappear. One of the parameters most sensitive to the primary mass is the atmospheric depth of the shower maximum, $X_{max}$. This parameter can be reconstructed with the data taken by fluorescence telescopes. The composition analyses based on the $X_{max}$ values observed by Auger, performed using the updated hadronic interaction models, show a decrease of the primary mass from $10^{17}$ eV up to $\sim 10^{18.3}$ eV; beyond that energy, the mass becomes increasingly heavy \cite{Porcell:15,AugerXmax:14}. On the other hand, the $X_{max}$ data obtained by Telescope Array are consistent with protons \cite{TAXmax:15,Fujii:15}. However, it has recently been shown that the composition data observed by the two experiments are consistent with each other \cite{Abbasi:15}. It is worth mentioning that the statistics of Telescope Array are smaller than those of Auger. The interpretation of the UHECR spectrum depends strongly on the mass composition. If protons are the dominant component, the ankle originates from pair production in the interactions of the protons with the low energy photons of the extragalactic background light (EBL) and the cosmic microwave background (CMB) during propagation through the intergalactic medium.
The suppression observed at the highest energies can originate from the photopion production of protons interacting with the CMB photons, from an intrinsic cutoff of the spectrum in the sources, or from a combination of both effects. The proton model (known as the dip model) of the UHECR spectrum has been extensively studied in the literature (see Refs.~\cite{Hill:85,DeMarco:03,Bere:04,Bere:05,Bere:06,Aloisio:07,Aloisio:08}). On the other hand, if the UHECR spectrum is composed of heavy nuclei besides protons, the ankle can be interpreted as the transition between the galactic and extragalactic cosmic rays (see Ref.~\cite{AloisioRev:12} for a review). However, this possibility is disfavoured by the Auger data \cite{Auger:12}: large scale anisotropy studies show that the extragalactic component should continue below the ankle. Recently, a new light extragalactic component that dominates the flux below the ankle, originating in different sources than the ones responsible for the flux at the highest energies, has been proposed \cite{Aloisio:14}. This low energy component could also originate from the photodisintegration of nuclei on the photon fields in the vicinity of the acceleration region or the source \cite{Unger:15,Allard:15}. In these scenarios the suppression is due to the photodisintegration process undergone by the nuclei when they interact with photons of the EBL and CMB, to an intrinsic spectral cutoff in the injected spectrum, or to a combination of both effects. Besides composition information, the observation of secondary gamma rays and neutrinos, generated in the interactions undergone by the cosmic rays during propagation through the universe, can constrain the different models of the UHECR spectrum. In particular, a smaller number of gamma rays and neutrinos are predicted in scenarios with larger fractions of heavy nuclei at the highest energies. In Ref.~\cite{Heinze:15} the dip model has been rejected at 95\% CL by using the upper limit on the neutrino spectrum obtained by the IceCube experiment \cite{Ishihara:15}. In that work the source evolution is assumed to be the one corresponding to the star formation rate times $(1+z)^m$, where $z$ is the redshift and $m$ is a parameter fixed by fitting the Telescope Array data. However, it is also shown that models with no emission at $z>1$ cannot be rejected with present neutrino data. Extragalactic gamma-ray background (EGB) observations impose quite restrictive constraints on models of the UHECR spectrum \cite{Bere:75,Fodor:03,Semikoz:04,Ahlers:10,Decerprit:11,Bere:11,Hooper:11,Gelmini:12}. The EGB has recently been measured, from 100 MeV to 820 GeV, by Fermi-LAT \cite{Ackermann:15}. Part of the EGB originates in gamma-ray point sources. The gamma rays generated by the propagation of UHECRs can contribute to a diffuse component. By using the 2FHL catalog \cite{Ackermann:16} of 360 point sources (mostly blazars) detected by Fermi-LAT, it has been shown that $86^{+16}_{-14}\%$ of the integrated EGB spectrum from 50 GeV to 2 TeV originates in point sources \cite{AckermannPS:16}. By using this result, in Ref.~\cite{Liu:16} it is found that only a group of nearby sources, contributing in the energy range below the ankle, can be responsible for the light component observed in that energy range. Also, in Ref.~\cite{Kalashev:16}, a more restrictive upper limit to the energy density of the electromagnetic cascades that develop in the intergalactic medium is found by using this new result.
In this work we study the impact of this new estimate on proton models of UHECRs. Instead of using the upper limit to the energy density found in Ref.~\cite{Kalashev:16}, we calculate an upper limit to the component of the integrated gamma-ray flux, in the energy range from 50 GeV to 2 TeV, that does not originate in point sources. This upper limit is used to study the constraints that this new result imposes on proton models of the UHECR spectrum, assuming a more relaxed source evolution than the one used in Ref.~\cite{Heinze:15}. We also compare the results obtained by using the gamma-ray information with the ones obtained by using the neutrino upper limit.
In this work we have studied the constraints imposed on proton models of UHECRs by the results recently obtained by the Fermi-LAT collaboration about the origin of the EGB. For that purpose, we have first calculated an upper limit to the component of the integrated EGB flux, from 50 GeV to 2 TeV, that does not originate in point sources. The obtained value at 90\% CL is $I_\gamma^{UL} = 9.354 \times 10^{-10}\: \textrm{cm}^{-2} \textrm{s}^{-1} \textrm{sr}^{-1}$. We have assumed that the UHECR emission as a function of redshift $z$ follows a broken power law in $(1+z)$, with a break at $z=1$ and an end point at $z=6$. We have found, by fitting the TA data, a very fast increase of the UHECR emission for $z \in [0,1]$, such that the index of the power law takes the values $m=6.65-8.34$, depending on the cutoff energy and EBL model considered. This result is consistent with previous work. We have also found that for $E_{cut} = 10^{21}$ eV the best fit, corresponding to the case in which there is no UHECR emission beyond $z=1$, is rejected at 99\% CL by using the upper limit inferred from the gamma-ray observations. However, part of the 68.27\% CL region of the fit is in the allowed region inferred from that upper limit. For $E_{cut} = 10^{19.7}$ eV the best fit is in the allowed regions obtained from the upper limits inferred from the gamma-ray and neutrino observations. When the UHECR emission beyond $z=1$ is included, the gamma-ray information becomes less restrictive. This is due to the larger attenuation, in the energy interval considered for the integration, of the gamma-ray flux originating at $z>1$. The process responsible for this attenuation is pair production on the EBL. In this case, the gamma-ray and neutrino observations become complementary. By using both types of observations we have found that the index of the evolution function for $z \in [1,6]$ should be smaller than $n=1.5-2.3$ in order not to be in tension with the 99.73\% CL region of the fit. Therefore, only proton models of the UHECRs with a much slower redshift evolution in the interval $z \in [1,6]$, compared with the one corresponding to $z \in [0,1]$, are still compatible with the neutrino and gamma-ray upper limits. A more precise determination of the different EGB components or more restrictive neutrino upper limits are required to reject, at a given significance level, all proton models of UHECRs by using these types of analyses, which are independent of composition measurements. {\bf Note added:} As our paper was being completed, another manuscript appeared on the arXiv considering the impact of the EGB measurements on proton models of UHECRs, which contains results complementary to ours \cite{BereKala:16}.
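For illustration only, the assumed redshift evolution of the UHECR emission (a broken power law in $(1+z)$ with a break at $z=1$ and an end point at $z=6$) can be sketched as follows; the normalization and the indices used in the example call are placeholders within the ranges quoted above, not fitted values:
\begin{verbatim}
# Sketch of the assumed broken-power-law source evolution in (1+z);
# m and n in the example are placeholders, not fitted values.
import numpy as np

def source_evolution(z, m, n, z_break=1.0, z_end=6.0):
    z = np.asarray(z, dtype=float)
    evo = np.where(z <= z_break,
                   (1.0 + z)**m,
                   (1.0 + z_break)**(m - n) * (1.0 + z)**n)  # continuous at z_break
    return np.where(z <= z_end, evo, 0.0)   # no emission beyond z_end

# Example: fast rise up to z=1 (m ~ 7) and a much flatter evolution above it.
print(source_evolution([0.5, 1.0, 2.0, 6.5], m=7.0, n=1.5))
\end{verbatim}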
16
7
1607.00290
1607
1607.00897_arXiv.txt
Pre-DECIGO (DECihertz laser Interferometer Gravitational wave Observatory) consists of three spacecraft arranged in an equilateral triangle with 100\,km arm lengths orbiting 2000\,km above the surface of the earth. It is hoped that the launch date will be in the late 2020s. Pre-DECIGO has one clear target: binary black holes (BBHs) like GW150914 and GW151226. Pre-DECIGO can detect $\sim 30M_\odot$--$30M_\odot$ BBH mergers like GW150914 up to redshift $z\sim 30$. The cumulative event rate is $\sim 1.8\times 10^{5}\,{\rm events~yr^{-1}}$ in the Pop III origin model of BBHs like GW150914, and it saturates at $z\sim 10$, while in the primordial BBH (PBBH) model, the cumulative event rate is $ \sim 3\times 10^{4}\,{\rm events~ yr^{-1}}$ at $z=30$ even if only $0.1\%$ of the dark matter consists of PBHs, and it is still increasing at $z=30$. In the Pop I/II model of GW150914-like BBHs, the cumulative event rate is $(3$--$10)\times10^{5}\,{\rm events~ yr^{-1}}$ and it saturates at $z \sim 6$. We present the requirements on orbit accuracy, drag-free techniques, laser power, frequency stability, and the interferometer test mass. For BBHs like GW150914 at 1\,Gpc ($z\sim 0.2$), SNR~$\sim 90$ is achieved with the definition of Pre-DECIGO in $0.01$--$100$\,Hz band. Since for $z\gg 1$ the characteristic strain amplitude $h_c$ for a fixed frequency band weakly depends on $z$ as $z^{-1/6}$, $\sim 10\%$ of BBHs near face-on have SNR~$> 5 \ (7)$ even at $z\sim 30 \ (10)$. Pre-DECIGO can measure the mass spectrum and the $z$-dependence of the merger rate to distinguish various models of BBHs like GW150914, such as Pop III BBH, Pop II BBH and PBBH scenarios. Pre-DECIGO can also predict the direction of BBHs at $z = 0.1$ with an accuracy of $\sim 0.3\,\deg^2$ and a merging time accuracy of $\sim 1\,$s at about a day before the merger so that ground-based GW detectors further developed at that time as well as electromagnetic follow-up observations can prepare for the detection of merger in advance like a solar eclipse. For intermediate mass BBHs such as $\sim 640 M_\odot$--$640 M_\odot$ at a large redshift $z > 10$, the quasinormal mode frequency after the merger can be within the Pre-DECIGO band so that the ringing tail can also be detectable to confirm the Einstein theory of general relativity with SNR~$\sim 35$.
The first direct detection of a gravitational wave (GW) has been done by O1 of aLIGO~\cite{Abbott:2016blz}. The event called GW150914 was a binary black hole (BBH) of mass $\sim 30 M_\odot$--$30 M_\odot$. Such high mass black hole (BH) candidates had not been confirmed although several suggestions existed~\cite{Kalogera:2006uj,Dominik:2012kk,Kinugawa:2014zha,Kinugawa:2015nla, Spera:2015vkd,Amaro-Seoane:2015umi,Mandel:2015qlu,Marchant:2016wow, Belczynski:2016obo} so that its origin is not known at present, while for the merger of neutron star (NS) binary there exists several systems with merger time less than the age of the universe so that the event rate has been estimated~\cite{Kim:2002uw,Kalogera:2003tn, Kim:2013tca}, and this was the most plausible GW source for the first direct detection of GWs before GW150914~\cite{Abadie:2010cf}. For BBHs, no observation of electromagnetic counterparts exists so that analyzing theoretical models is the only way to provide methods to study their properties. One method is population synthesis, in which the Monte Carlo simulations have been performed to evolve binaries starting from binary ZAMS (Zero Age Main Sequence) stars. The mass of the observed BH candidate in X-ray binaries is at most $\sim 15 M_\odot$, which is about half of the mass of GW150914. This suggests that the progenitors of GW150914 are low-metal stars with little or no mass loss such as Pop II or Pop III stars~\cite{Baraffe:2000dp,Inayoshi:2013rfa}. In particular, the predicted mass of Pop III BBHs by Kinugawa et al.~\cite{Kinugawa:2014zha} (see also Refs.~\cite{Kinugawa:2015nla,Kinugawa:2016mfs,Kinugawa:2016ect}) agrees astonishingly well with GW150914~\cite{TheLIGOScientific:2016htt}. However, a single event is not enough to restrict the origin of BBHs like GW150914, around $6$--$8$ of which will be found in O2. The cumulative chirp mass and the total mass as well as the distribution of the spin parameter $a/M$ of the merged BH will help to distinguish the plausible model among the various population synthesis models. Here, in the population synthesis Monte Carlo simulations of Pop II and III stars, there are so many unknown functions and parameters such as initial mass function, initial eccentric distribution function, the distribution of initial separation, the distribution of mass ratio, the Roche lobe overflow parameters, the common envelope parameters and so on, so that the observed cumulative chirp mass and the total mass as well as the distribution of the spin parameter will only give constraints among these undetermined functions and parameters. There are other completely different formation scenarios of BBHs like GW150914, such as primordial BBH (PBBH)~\cite{Sasaki:2016jop}~\footnote{ They simply applied the method in Ref.~\cite{Nakamura:1997sm} for the case with mass from $0.5M_\odot$ to $30M_\odot$.} and the three-body dynamical formation model~\cite{Amaro-Seoane:2015umi}. In particular, in the PBBH model the mass spectrum primarily reflects the primordial density perturbation spectrum. This means that it is difficult to distinguish models only by observations of small--$z$ BBHs like GW150914 since the average detectable range of GW150914 with the design sensitivity of aLIGO is at most $z \sim 0.3$ whose luminosity distance is $\sim 1.5\,{\rm Gpc}$. It is highly requested to observe GW150914-like events for larger $z$ to distinguish various models. 
For this purpose, DECIGO (DECihertz laser Interferometer Gravitational wave Observatory) is suitable, although it was originally proposed by Seto, Kawamura, and Nakamura~\cite{Seto:2001qf} to measure the acceleration of the universe through GWs from binary NS--NS at $z\sim 1$ and observe the chirp signal $1$--$10$ years before the final merger. The GW frequency from such NS-NS binaries is $\sim 0.1$\,Hz ($=$decihertz), which is the origin of the name of DECIGO.~\footnote{ Another origin of the name is ``DECIde and GO project.''} DECIGO consists of at least three spacecraft arranged in a triangle in a heliocentric orbit, forming 1000\,km Fabry--P\'{e}rot laser interferometers, in particular to observe the GW background (GWB) from inflation at a frequency of $0.1$\,Hz~\cite{Kawamura:2006up}. Pre-DECIGO~\footnote{ Pre-DECIGO was initially the pathfinder for DECIGO. That is, Pre-DECIGO was a smaller version of DECIGO, and the design of Pre-DECIGO had not been defined, including the selection of possible targets, while DECIGO has a definite design and clear targets. In particular, the sensitivity of Ultimate-DECIGO is limited only by the uncertainty principle so that it can detect the inflation-origin background GWs even if $\Omega_{\rm GW}=10^{-20}$.} is a smaller DECIGO which consists of three spacecraft arranged in an equilateral triangle with 100\,km arm lengths, orbiting 2000\,km above the surface of the earth. The orbit of Pre-DECIGO is geocentric and is different from that of DECIGO, whose orbit is heliocentric. We are hoping that the launch date will be in the late 2020s. In this paper, we show that Pre-DECIGO can detect events like GW150914 up to $z\sim 30$ where the cumulative event rate is $\sim 1.8\times 10^{5}\,{\rm events~yr^{-1}}$ in the Pop III origin model of GW150914, while in the PBBH model, it is $\sim 3\times 10^{4}\,{\rm events~yr^{-1}}$ even if only 0.1\% of the dark matter consists of primordial BHs (PBHs). This paper is organized as follows: In Sect. 2, the requirements on the orbit accuracy, drag-free techniques, laser power, frequency stability, and interferometer test mass will be shown. In Sect. 3, we show that Pre-DECIGO can measure the mass spectrum and the $z$-dependence of the merger rate up to $z\sim 30$ to distinguish various models such as Pop III BBH, Pop II BBH, PBBH, three-body dynamical formation models, and so on. For small $z = 0.1$, Pre-DECIGO can also predict the direction of BBHs with an accuracy of $\sim 0.3\,\deg^2$, and the merger time with an accuracy of $\sim 1$\,s at about a day before the merger so that the Einstein Telescope (ET)~\cite{Punturo:2010zz} and the enhanced version of aLIGO as well as electromagnetic follow-up observations can prepare for the detection of the merger in advance, like a solar eclipse. For large $z > 10$ the quasinormal mode (QNM or ringing tail) frequency after the merger can be within the Pre-DECIGO band so that the ringing tail can also be detectable to confirm or refute the Einstein theory of general relativity (GR) with signal-to-noise ratio (SNR) $\sim 35$ for intermediate mass BBHs such as $\sim 640 M_\odot$--$640 M_\odot$. Section 4 is devoted to discussions.
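As a rough, order-of-magnitude illustration of the last point (our simplifications, not the analysis of Sect. 3): assuming a non-spinning remnant, about 5\% of the total mass radiated, and the fundamental $l=2$ Schwarzschild quasinormal mode, the observed ringdown frequency of a $\sim 640 M_\odot$--$640 M_\odot$ BBH at $z=10$ falls inside the Pre-DECIGO band.
\begin{verbatim}
# Rough estimate of the observed ringdown frequency of an intermediate-mass
# BBH, under our simplifying assumptions (non-spinning remnant, ~5% of the
# total mass radiated, fundamental l=2 Schwarzschild quasinormal mode).
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

def f_qnm_observed(m1_msun, m2_msun, z, radiated_fraction=0.05):
    m_rem = (m1_msun + m2_msun) * (1.0 - radiated_fraction) * M_SUN
    f_source = 0.3737 * C**3 / (2.0 * math.pi * G * m_rem)   # Re(M*omega)=0.3737
    return f_source / (1.0 + z)     # cosmological redshift of the frequency

print(f_qnm_observed(640.0, 640.0, z=10.0))   # ~0.9 Hz: inside 0.01-100 Hz
\end{verbatim}
A remnant spin would raise the source-frame frequency by a factor of order unity, which does not change the conclusion that the ringing tail lies within the band.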
One of the observed binary NSs, PSR2127+11C (see Ref.~\cite{Jacoby:2006dy} and references therein), has parameters similar to those of the Hulse--Taylor binary pulsar, although PSR2127+11C is in the globular cluster (GC) M15. This suggests the possibility of the formation of BBHs in GCs. A BH of mass $\sim 30 M_\odot$ is much more massive than the typical constituent stars of mass $\sim 1 M_\odot$, so that it will sink down to the center of the GC or star cluster due to dynamical friction. Then BBHs can be formed in the central high density region of GCs. Since the escape velocity from GCs is $10\,{\rm km\, s^{-1}}$ or so, the kick velocity in the formation process of BHs or the kick when BBHs are formed by three-body interaction is high enough for BBHs to escape from GCs. Rodriguez, Chatterjee, and Rasio~\cite{Rodriguez:2016kxx} performed such a simulation to show that the event rate is at most $\sim 1/7$ of that of Pop I and II origin BBHs. If we take their result at face value, the dynamical formation of binaries in GCs gives only a minor contribution compared with the Pop I and II origin of BBHs. \begin{figure}[!t] \begin{center} \includegraphics[width=0.6\textwidth,clip=true]{./combine3.eps} \end{center} \caption{The event rates for Pop III (standard), Pop I and II (OLD), and PBBH mergers as a function of $z$. These rates are derived by differentiating the cumulative event rate in Fig.~\ref{fig:CER} with respect to $\ln z$. Note here that the detectability may change depending on the mass distribution of each model.} \label{fig:combine3} \end{figure} From only the chirp mass, total mass and spin angular momentum, it will be difficult to distinguish the origin of GW150914-like BBHs. This is because the number of parameters that can be determined by the distribution function of the GW data is much smaller than that of the unknown model parameters and the distribution functions assumed in each model. However, the redshift distribution of GW events differs robustly among the models. Namely, the maximum possible redshift is $\sim 6,\,10$, and $> 30$ for the Pop I/II, Pop III, and PBBH models, respectively (see Fig.~\ref{fig:combine3}). In Fig.~\ref{fig:combine3}, we show the event rates for each model. These event rates are derived by differentiating the cumulative event rate in Fig.~\ref{fig:CER} with respect to $\ln z$. To observe the maximum redshift as a smoking gun to identify the origin of GW150914-like events, the construction of Pre-DECIGO seems to be the unique possibility. Pre-DECIGO can observe NS--NS and NS--BH mergers. However, no detection of GWs from the mergers of these systems has been made so far, though many simulations exist. For the same source distance, the SNRs for NS--NS and NS--BH (30$M_\odot$) mergers are 0.08 and 0.25 times that for $30M_\odot$--$30M_\odot$ BBHs, respectively. We here postpone discussing what we can do with Pre-DECIGO for these sources until the first observations of GWs from these systems, since their event rates are still uncertain and might be very small compared with that of BBH mergers.
16
7
1607.00897
1607
1607.07190_arXiv.txt
We construct maps of the oxygen abundance distribution across the disks of 88 galaxies using CALIFA data release 2 (DR2) spectra. The position of the center of each galaxy (coordinates on the plate) was also taken from the CALIFA DR2. The galaxy inclination, the position angle of the major axis, and the optical radius were determined from the analysis of the surface brightnesses in the SDSS $g$ and $r$ bands of the photometric maps of SDSS data release 9. We explore the global azimuthal abundance asymmetry in the disks of the CALIFA galaxies and the presence of a break in the radial oxygen abundance distribution. We found that there is no significant global azimuthal asymmetry for our sample of galaxies, i.e., the asymmetry is small, usually lower than 0.05 dex. The scatter in oxygen abundances around the abundance gradient has a comparable value, $\lesssim 0.05$ dex. A significant (possibly dominant) fraction of the asymmetry can be attributed to the uncertainties in the geometrical parameters of these galaxies. There is evidence for a flattening of the radial abundance gradient in the central parts of 18 galaxies. We also estimated the geometric parameters (coordinates of the center, the galaxy inclination and the position angle of the major axis) of our galaxies from the analysis of the abundance maps. The photometry-map-based and the abundance-map-based geometrical parameters are relatively close to each other for the majority of the galaxies, but the discrepancy is large for a few galaxies with a flat radial abundance gradient.
It has been well known for a long time that the disks of spiral galaxies show negative radial abundance gradients, in the sense that the abundance is higher at the centre and decreases with galactocentric distance \citep{Searle1971,Smith1975}. It is a common practice to describe the nebular abundance distribution across the disk of a galaxy by the relation between oxygen abundance O/H and galactocentric distance $R{_g}$ and to specify this distribution by the characteristic abundance (the abundance at a given galactocentric distance, e.g., abundance at the centre of the disk) and by the radial abundance gradient. Relations of this type were determined for many galaxies by different authors \citep[][among many others]{VilaCostas1992,Zaritsky1994,Pilyugin2006,Pilyugin2007,Pilyugin2014a,Pilyugin2015,Moustakas2010,Gusev2012,Sanchez2014,SanchezMenguiano2016}. Such relations are based on the assumption that the abundance distribution across the disk is axisymmetric. The azimuthal abundance variations across the disk of a galaxy were discussed in a number of papers. The dispersion in abundance at fixed radius (the scatter around the general O/H -- $R_{g}$ trend, residuals) is used as the measure of the azimuthal abundance variations \citep[e.g.][]{Kennicutt1996,Martin1996,Bresolin2011,Li2013,Berg2015,Croxall2015}. The number of data points in such investigations is usually small. The two-dimensional abundance distribution was analyzed for the galaxy NGC 628 \citep{RosalesOrtega2011}. The observations obtained by the CALIFA survey \citep[Calar Alto Legacy Integral Field Area survey;][]{Sanchez2012} provide the possibility to construct abundance maps for disk galaxies. This allows one to investigate the distribution of abundances across the disk in detail, in particular the global azimuthal asymmetry of abundance in the disks of galaxies. We define the global azimuthal asymmetry in abundance in the following way. We divide the target galaxy into two semicircles and determine the difference between the arithmetic means of the residuals for the opposite semicircles. The differences are determined for the different position angles of the dividing line. The maximum absolute value of this difference is adopted as a measure of the global azimuthal asymmetry of abundance in the disk of a galaxy. In the present work, we will determine the values of the global azimuthal abundance asymmetry in the disks of the CALIFA galaxies and compare them with the level of azimuthal abundance variations. Thus, the goal of this investigation is to construct maps of the oxygen abundance distribution in the disks of the selected CALIFA galaxies and to use those maps to explore the presence (or lack) of a global azimuthal asymmetry in the oxygen abundance as well the existence of a break in the radial oxygen abundance distribution, e.g., a flattening of the radial abundance gradient at the central part of a galaxy. Evidence for a decrease of the oxygen abundances in the central parts of a number of the CALIFA galaxies was found by \citet{Sanchez2014}. We also examine whether the geometric parameters of a galaxy (coordinates of the center, inclination and position angle of the major axis) can be estimated from the analysis of the abundance map. 
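The asymmetry measure defined above can be summarized in a short sketch: given the abundance residuals and the azimuthal angles of the spaxels, the two semicircle means are differenced for a grid of dividing position angles, and the maximum absolute difference is retained. The following Python illustration is ours and is not the code used in the paper.
\begin{verbatim}
# Illustrative implementation of the global azimuthal asymmetry measure:
# maximum |mean(residuals in one semicircle) - mean(residuals in the other)|
# over all positions of the dividing line.  Not the original code.
import numpy as np

def azimuthal_asymmetry(azimuth_deg, residuals_dex, step_deg=5.0):
    azimuth = np.mod(azimuth_deg, 360.0)
    best = 0.0
    for a in np.arange(0.0, 180.0, step_deg):
        in_first = np.mod(azimuth - a, 360.0) < 180.0   # semicircle [a, a+180)
        d1 = residuals_dex[in_first].mean()
        d2 = residuals_dex[~in_first].mean()
        best = max(best, abs(d1 - d2))
    return best   # in dex
\end{verbatim}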
The geometric parameters of a galaxy are usually determined from the analysis of the photometric and/or velocity map of the galaxy under the assumption that the surface brightness of the galaxy (or the velocity field) is a function of the radius only, i.e., that there is no azimuthal asymmetry in brightness (or velocity). Since the metallicity in the disk is also a function of the galactocentric distance, one may expect that the abundance map can also be used for the determination of the geometric parameters of a galaxy. We will determine the ``chemical'' (abundance-map-based) geometrical parameters of our galaxies and compare them to their ``photometric'' (canonical, photometry-map-based) geometrical parameters. The paper is organized in the following way. The algorithm employed in our study is described in Section 2. The results and discussion are given in Section 3. Section 4 is a brief summary. Throughout the paper, we will use the following notations for the line fluxes: $R_2$ = $I_{\rm \rm [OII] \lambda 3727+ \lambda 3729} /I_{\rm {\rm H}\beta }$, $N_2$ = $I_{\rm \rm [NII] \lambda 6548+ \lambda 6584} /I_{\rm {\rm H}\beta }$, $S_2$ = $I_{\rm \rm [SII] \lambda 6717 + \lambda 6731} /I_{\rm {\rm H}\beta }$, $R_3$ = $I_{\rm {\rm [OIII]} \lambda 4959 + \lambda 5007} /I_{\rm {\rm H}\beta }$. The units of the wavelengths are angstroms.
We construct maps of the oxygen abundances across the disks of 88 CALIFA galaxies. The oxygen abundances were determined through the $C$ method using CALIFA DR2 spectra. Hence, here we use the empirical metallicity scale defined by H\,{\sc ii} regions with oxygen abundances derived through the direct method ($T_e$ method). The position of the center of a galaxy (coordinates on the plate) were taken from the CALIFA DR2. The galaxy inclination and position angle of the major axis were determined from our analysis of $r$ band photometric maps of SDSS data release 9. The optical radii were determined from the radial surface brightness profiles in the SDSS $g$ and $r$ bands constructed on the basis of the photometric maps and converted to the Vega $B$ band. We examine the global azimuthal asymmetry of the abundances in the disks of our target galaxies. The arithmetic mean of the deviations from the O/H $- R_{G}$ relation $d(O/H)_{1}$ for spaxels with azimuthal angles from $A$ to $A+180$ and mean deviation $d(O/H)_{2}$ for spaxels with azimuthal angles from $A+180$ to $A+360$ are determined for different positions of $A$. The maximum absolute value of the difference $d(O/H)_{1} - d(O/H)_{2}$ is used to specify the global azimuthal asymmetry in the abundance distribution across the galaxy. The scatter around the O/H $- R_{g}$ relation for our sample of CALIFA galaxies is in the range of $\sim 0.02$ to $\sim 0.06$ dex. There is no significant global azimuthal asymmetry for our CALIFA sample. The values of the global azimuthal asymmetry are small, less than 0.05 dex for the bulk of target galaxies. These values can be attributed to the uncertainties in the photometry-map-based geometrical parameters of the galaxies, i.e., the uncertainties in the photometry-map-based geometrical parameters of a galaxy can make an appreciable (and possibly dominant) contribution to the obtained values of the azimuthal asymmetry. We have found that the radial abundance distribution across the disks of the majority of the galaxies of our sample can be well fitted by a single relation. However, in eighteen of the galaxies in our sample, the oxygen abundances in the central part of the galaxies are systematically lower as compared to the general radial abundance trend. Although the decrease is rather well defined, its value is small, within $\sim$0.1 dex, and can be questioned taking into account the uncertainties of the abundances in the reference H\,{\sc ii} regions. We note that the decrease of the oxygen abundances in the central parts of a number of the CALIFA galaxies was first revealed by \citet{Sanchez2014} and recently confirmed by \citet{SanchezMenguiano2016}. We estimated the geometric parameters of our galaxies (coordinates of the center, inclination and position angle of the major axis) from the analysis of the abundance map. The geometrical parameters are determined on the condition that the coefficient of the correlation between oxygen abundance and galactocentric distance is maximized or the scatter around the O/H -- $R_{g}$ relation is minimized. Both these conditions result in the same values of the geometrical parameters. The photometry-map-based and the abundance-map-based geometrical parameters are relatively close to each other for the majority of our galaxies but the discrepancy is large for a few galaxies with a flat radial abundance gradient.
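A minimal sketch of the abundance-map-based geometry determination described above is given below: trial geometric parameters are used to deproject galactocentric radii with the standard thin-disk formula, and the scatter around a linear O/H -- $R_{g}$ fit is minimized. The optimizer and implementation details are our own choices, not the algorithm of Section 2.
\begin{verbatim}
# Sketch of determining (x0, y0, PA, inclination) from an abundance map by
# minimizing the scatter around a linear O/H -- R_g relation.
import numpy as np
from scipy.optimize import minimize

def deprojected_radius(x, y, x0, y0, pa_deg, inc_deg):
    pa, inc = np.radians(pa_deg), np.radians(inc_deg)
    xr = (x - x0) * np.cos(pa) + (y - y0) * np.sin(pa)
    yr = -(x - x0) * np.sin(pa) + (y - y0) * np.cos(pa)
    return np.hypot(xr, yr / np.cos(inc))

def abundance_scatter(params, x, y, oh):
    rg = deprojected_radius(x, y, *params)
    slope, intercept = np.polyfit(rg, oh, 1)          # linear gradient fit
    return np.std(oh - (slope * rg + intercept))      # scatter to be minimized

def fit_geometry(x, y, oh, guess):
    # guess = (x0, y0, pa_deg, inc_deg) from, e.g., the photometric values
    return minimize(abundance_scatter, guess, args=(x, y, oh),
                    method="Nelder-Mead").x
\end{verbatim}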
16
7
1607.07190
1607
1607.00409_arXiv.txt
We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name \texttt{VULCAN}. It is constructed for gaseous chemistry from 500 to 2500 K using a reduced C-H-O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate \texttt{VULCAN} by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer \& Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use \texttt{VULCAN} to examine the theoretical trends produced when the temperature-pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore, we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using \texttt{VULCAN}, typically takes several minutes to complete. \texttt{VULCAN} is part of the Exoclimes Simulation Platform (ESP; \url{exoclime.net}) and publicly available at \url{https://github.com/exoclime/VULCAN}.
Atmospheric chemistry is a nascent subdiscipline of exoplanet science that is rapidly gaining attention, because of its importance in deciphering the abundances of atoms and molecules in exoplanetary atmospheres. Unlike for the Earth and Solar System bodies, the bulk of the focus is on currently observable exoplanetary atmospheres, which fall into the temperature range of 500 to 2500 K (for reviews, see \citealt{sd10,madhu14,hs15}). There exists a diverse body of work on the atmospheric chemistry of exoplanets, and the published models fall into two basic groups: chemical equilibrium \citep{bs99,lodders02,madhu12,blecic16} and photochemical kinetics \citep{koppa12,line11,vm11,moses11,hu12,moses13a,moses13b,line13,agundez14,zahnle14,hu15,venot12,venot15,rimmer16}. Some of this work traces its roots back to the study of brown dwarfs and low-mass stars \citep{bs99,lodders02}. The two types of models take on very distinct approaches. Chemical-equilibrium models seek to minimize the Gibbs free energy of the system and do not require a knowledge of its chemical pathways. They are hence able to deal with a large number of species with different phases \citep{eq_book}. Chemical-kinetics models employ a network to calculate the change of every reaction rate with time, and requires the solution of a large set of stiff differential equations. Chemical-equilibrium models are a simple starting point, but we expect hot exoplanets to host disequilibrium chemistry \citep{ks10,moses11}. In the current work, we have constructed, from scratch, a computer code to calculate the chemical kinetics of hot exoplanetary atmospheres using a flexible chemical network. In theory, one could construct a single, complete chemical network that is valid for all temperatures. In practice, chemical networks are constructed with a limited subset of reactions specifically for low or high temperatures, with the former being relevant for Earth and the Solar System bodies. A low-temperature network typically omits the endothermic reactions (e.g., \citealt{liang03}), because they are very slow and hence do not affect the outcome, but their inclusion would slow down the calculation unnecessarily. Furthermore, extrapolating reaction rates measured at low temperatures to higher temperatures may result in errors at the order-of-magnitude level. In constructing this code, which we name \texttt{VULCAN}\footnote{Named after the Roman god of alchemy.}, we have built a reduced chemical network consisting of about 300 reactions for the temperature range from 500--2500 K, which is compatible with the currently characterizable exoplanets. We perform a twofold validation of our network: by reproducing chemical equilibrium and by reproducing the disequilibrium-chemistry models of HD 189733b and HD 209458b by \cite{moses11} and also models by \cite{rimmer16}. Initially, our results disagreed with those of \cite{rimmer16}, but upon further investigation we were able to show that this is due to the different chemical networks used. Chemical kinetics codes are typically proprietary (e.g., \citealt{allen81,line11}), which raises questions of scientific reproducibility. \texttt{VULCAN} is constructed to be completely open-source under the GNU Free Documentation License in the hope that this will accelerate scientific progress. Furthermore, the user may elect to use a chemical network that is different from what we provide. 
\texttt{VULCAN} is also constructed as part of a long-term hierarchical approach, which started with the re-examination of the theoretical foundations of atmospheric chemistry in \cite{hlt16}, analytical carbon-hydrogen-oxygen (C-H-O) networks of equilibrium chemistry in \cite{hl16} and analytical carbon-hydrogen-oxygen-nitrogen (C-H-O-N) networks of equilibrium chemistry in \cite{ht16}. In the current study, we focus on C-H-O networks of chemical kinetics that take into account disequilibrium chemistry due to atmospheric mixing, which we approximately describe by diffusion in the one-dimensional (1D) limit. We consider only the gas phase and do not include photochemistry. We investigate the validity of the quenching approximation and how it is affected by the carbon-to-oxygen ratio (denoted by C/O). In \S\ref{sect:theory}, we state the governing equations and boundary conditions used. In \S\ref{sect:method}, we describe our numerical methods. In \S\ref{sect:rates}, we provide a detailed description of the chemical rate coefficients used in our default, reduced network, as they are a key ingredient of any chemical kinetics code. In \S\ref{sect:benchmarking}, we subject \texttt{VULCAN} to several tests, thereby validating it. In \S\ref{sect:results}, we use \texttt{VULCAN} to study theoretical trends, including varying C/O, and also to revisit the quenching approximation. In \S\ref{sect:discussion}, we provide a concise summary of our current work, compare it to previous work and suggest opportunities for future work. Appendix \ref{append:euler} describes our Rosenbrock method, which we use for temporal integration. Appendix \ref{append:sensitivity} describes a method we developed to identify the key chemical reactions involved in producing a specific atom or molecule. Appendix \ref{append:generate} describes how one may implement a different set of chemical reactions in \texttt{VULCAN}. Appendix \ref{appendix:rates} provides the full set of forward rate coefficients. Appendix \ref{append:nasapoly} describes the thermodynamics data we used to reverse our forward reaction rates.
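To make the stiffness argument above concrete, the following minimal Python sketch (not taken from the \texttt{VULCAN} source; the toy reaction, rate coefficients and step size are invented for illustration) applies a first-order Rosenbrock step, $y_{n+1} = y_n + (I - h\,J)^{-1} h\,f(y_n)$, to a reversible reaction A $\leftrightarrow$ B whose forward and reverse rates differ by four orders of magnitude. An explicit integrator would need $h \lesssim 10^{-6}$ here, while the semi-implicit update remains stable with a much larger step and relaxes to the expected equilibrium ratio.
\begin{verbatim}
# Minimal sketch (not VULCAN itself) of a stiff kinetics update:
# a first-order Rosenbrock (semi-implicit Euler) step
#   y_{n+1} = y_n + (I - h J)^{-1} h f(y_n)
# applied to a toy reversible reaction A <-> B with very different
# rate constants. All names and numbers are illustrative assumptions.
import numpy as np

kf, kr = 1.0e6, 1.0e2          # forward/reverse rate coefficients (assumed)

def rhs(y):
    a, b = y
    r = kf * a - kr * b        # net rate of A -> B
    return np.array([-r, +r])

def jacobian(y):
    return np.array([[-kf,  kr],
                     [ kf, -kr]])

def rosenbrock_euler_step(y, h):
    J = jacobian(y)
    A = np.eye(len(y)) - h * J
    return y + np.linalg.solve(A, h * rhs(y))

y = np.array([1.0, 0.0])       # initial mixing ratios (assumed)
h = 1.0e-3                     # far larger than 1/kf, yet stable
for _ in range(100):
    y = rosenbrock_euler_step(y, h)

print(y, "equilibrium ratio B/A ~", kf / kr)
\end{verbatim}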
\label{sect:discussion} \subsection{Overall Conclusions} We have constructed an open-source and validated chemical kinetics code, named \texttt{VULCAN}, for studying the gaseous chemistry of hot (500--2500 K) exoplanetary atmospheres using a reduced C-H-O network of about 300 reactions for 29 species. We have provided a full description of the rate coefficients and thermodynamic data used. We have demonstrated that \texttt{VULCAN} is able to reproduce chemical equilibrium as a limiting case and also compared our calculations to the disequilibrium-chemistry models of \cite{moses11} and \cite{rimmer16}. Specifically, we are able to reproduce the models of HD 189733b and HD 209458b by \cite{moses11}, despite using a reduced chemical network (300 versus nearly 1600 reactions), thus demonstrating that the accuracy of a chemical kinetics calculation is not determined by the sheer size of the network alone. We further examine trends associated with varying the temperature-pressure profile. We demonstrate that the quenching approximation cannot always be employed and may result in large errors of several orders of magnitude. Finally, we show that the abundances of \ce{CH4} and \ce{H2O} depend sensitively on the presence of atmospheric mixing at low and high values of C/O, respectively. \subsection{Comparison to Previous Work and Opportunities for Future Work} In terms of its technical setup, \texttt{VULCAN} shares some similarities with the work of \cite{moses11}, \cite{hu12}, \cite{moses13a,moses13b}, \cite{hu15} and \cite{venot12,venot15}. These codes solve a set of mass continuity equations with chemical source and sink terms, and approximate atmospheric motion by diffusion. They differ in some details: we have used the Rosenbrock (semi-implicit) method, while \cite{hu12} used the backward Euler method. Among these studies, we have employed the smallest chemical network, but we have demonstrated that our results are equivalent. The approach of \cite{rimmer16} is somewhat different from this body of work (see \S\ref{subsect:rimmer}). An obvious opportunity for future work is to include nitrogen, sulphur and phosphorus in our chemical network. Another missing ingredient is photochemistry. It would also be insightful to include the effects of condensation in a setting with disequilibrium chemistry. \cite{bs99} have included condensation in their calculations, but these are restricted to being in chemical equilibrium. In the long term, it will be necessary to couple radiative transfer, chemistry and atmospheric dynamics, since the temperature-pressure profile of the atmosphere changes with the chemistry, because the relative abundances of the molecules alter the opacities, which in turn change the temperature \citep{drummond16}. Atmospheric dynamics should be properly represented, instead of being crudely approximated by eddy diffusion, which does not apply to situations where the length scale of atmospheric motion exceeds a pressure scale height. Initial investigations of the coupling of atmospheric dynamics with chemistry have been performed by \cite{cs06}, \cite{burrows10} and \cite{agundez12}, albeit with (severe) approximations. \cite{cs06} used a single rate coefficient to describe the conversion between carbon monoxide and methane. \cite{burrows10} post-processed the output of general circulation models to study the relative abundance of methane between the dayside and nightside hemispheres of hot Jupiters.
\cite{agundez12} used a simple dynamical model with solid-body rotation to study the effects of a uniform zonal jet on the chemical kinetics; \cite{agundez14} added photochemistry to the model of \cite{agundez12}. A fully self-consistent calculation is still missing from the literature.
16
7
1607.00409
1607
1607.00912_arXiv.txt
{Intermediate-velocity clouds (IVCs) are HI halo clouds that are likely related to a Galactic fountain process. In-falling IVCs are candidates for the re-accretion of matter onto the Milky Way.} {We study the evolution of IVCs at the disk-halo interface, focussing on the transition from atomic to molecular IVCs. We compare an atomic IVC to a molecular IVC and characterise their structural differences in order to investigate how molecular IVCs form high above the Galactic plane.} {With high-resolution HI observations of the Westerbork Synthesis Radio Telescope and $^{12}$CO(1$\rightarrow$0) and $^{13}$CO(1$\rightarrow$0) observations with the IRAM 30\,m telescope, we analyse the small-scale structures within the two clouds. By correlating HI and far-infrared (FIR) dust continuum emission from the \textit{Planck} satellite, the distribution of molecular hydrogen (H$_2$) is estimated. We conduct a detailed comparison of the HI, FIR, and CO data and study variations of the $X_\mathrm{CO}$ conversion factor.} {The atomic IVC shows no detectable CO emission. Its atomic small-scale structure, as revealed by the high-resolution HI data, shows low peak HI column densities and low HI fluxes compared to the molecular IVC. The molecular IVC exhibits a rich molecular structure and most of the CO emission is observed at the eastern edge of the cloud. There is observational evidence that the molecular IVC is in a transient and, thus, non-equilibrium phase. The average $X_\mathrm{CO}$ factor is close to the canonical value of the Milky Way disk.} {We propose that the two IVCs represent different states in a gradual transition from atomic to molecular clouds. The molecular IVC appears to be more condensed, allowing the formation of H$_2$ and CO in shielded regions all over the cloud. Ram pressure may accumulate gas and thus facilitate the formation of H$_2$. We show evidence that the atomic IVC will also evolve into a molecular IVC in a few Myr.}
\label{sec:introduction} \begin{table*}[!t] \caption{Characteristics of the different data sets used in this study. For IRAM and \textit{Planck}, the data is gridded to FITS-maps with a Gaussian kernel, degrading the spatial resolution slightly. The angular resolution is that of the final gridded maps.} \label{tab:data} \small \centering \begin{tabular}{cccccc} \hline\hline Data & $\nu$ & Angular resolution & Spectral channel width & Noise & Reference \\ & [GHz] & & [km\,s$^{-1}$] & & \\ \hline EBHIS & 1.42 & 10.8\arcmin & 1.29 & 90\,mK & (1) \\ WSRT aIVC & 1.42 & $75.1\arcsec\times23.0\arcsec$ & 1.03 & 1.1\,mJy\,(beam)$^{-1}$ & (3) \\ WSRT mIVC & 1.42 & $49.1\arcsec\times17.9\arcsec$ & 1.03 & 1.4\,mJy\,(beam)$^{-1}$ & (3) \\ \textit{Planck} $\tau_{353}$ & 353 & 5.27\arcmin & -- & -- & (2) \\ IRAM FTS $^{12}$CO(1$\rightarrow$0) & 115.27 & 23.0\arcsec & 0.53 & 0.20\,K & (3) \\ IRAM FTS $^{13}$CO(1$\rightarrow$0) & 110.20 & 24.1\arcsec & 0.53 & 0.10\,K & (3) \\ IRAM VESPA $^{13}$CO(1$\rightarrow$0) & 110.20 & 24.1\arcsec & 0.13 & 0.15\,K & (3) \\ \hline \end{tabular} \tablebib{ (1) \citet{Winkel2016,Kerp2011,Winkel2010}; (2) \citet{Planckcollaboration2014XI}; (3) This work. } \end{table*} In the evolution of a star-forming galaxy like the Milky Way, a cycle of matter is established by the expulsion from the disk and accretion from the halo \citep[e.g.][]{Ferriere2001}. One of the dominant mechanisms is the Galactic fountain process, which is driven by massive stars and their feedback onto the Galactic interstellar medium (ISM) \citep{Shapiro1976,Bregman1980}: Stellar winds and supernovae expel gas and dust into the Galactic halo where a reservoir of metal-enriched material is sustained. The expelled gas cools down and condenses eventually into atomic clouds, which are observable by their HI 21\,cm line emission. These clouds are thought to fall back and refuel the Milky Way disk \citep[e.g.][]{Putman2012}. Usually, HI halo clouds are identified by their observed radial velocities which are incompatible with simple models of Galactic rotation \citep[e.g.][]{Wakker1991}. \citet{Wakker2001} uses a velocity range relative to the local standard of rest (LSR) between $40\,$km\,s$^{-1}\lesssim |v_{\mathrm{LSR}}| \lesssim90\,$km\,s$^{-1}$ to define intermediate-velocity clouds (IVCs). Most of the IVCs show metallicities close to solar, contain dust as seen by their far-infrared (FIR) continuum emission, and have distances below 5\,kpc \citep[e.g.][]{Wakker2001}. All these properties favour a connection of IVCs to Galactic fountains \citep{Bregman2004,Putman2012,Sancisi2008}. For the evolution of IVCs in the Galactic halo, not only their atomic but also their molecular content is important. The most efficient formation mechanism of molecular hydrogen (H$_2$) is the formation on the surfaces of dust grains \citep{Hollenbach1971b}. Dust is present in IVCs as is evident from their FIR emission \citep[e.g.][]{Planckcollaboration2011XXIV}. As a likely product of a Galactic fountain, not only gas but also dust is expelled into the Galactic halo \citep{Putman2012}. Molecular hydrogen is observed in IVCs, either as a diffuse low column density component with $N_{\mathrm{H}_{2}} = 10^{14}-10^{16}\,\mathrm{cm}^{-2}$ \citep{Richter2003,Wakker2006} or as intermediate-velocity molecular clouds (IVMCs). These clouds contain significant molecular fractions such that $^{12}$CO(1$\rightarrow$0) emission is detectable \citep{Magnani2010}.
The state of the clouds when they impact the disk at the end of the fountain cycle is important. If the clouds are destroyed and ionised, they cannot contribute to star formation for which cold gas is required \citep{Putman2012}. In-falling cold and dense clouds may integrate into the disk and feed star formation or even trigger the formation of molecular clouds and stars, for instance in the Gould Belt \citep{Comeron1992}. \citet{Roehser2014} propose a natural evolutionary sequence from pure atomic to molecular IVCs in the fountain cycle. During the in-fall of IVCs, ram pressure perturbs the clouds as they move through the surrounding halo medium. Enhanced pressure leads to the faster formation of H$_2$ which is related to the compression and accumulation of the gas \citep{Guillard2009,Hartmann2001}. These effects are most important during the final stages of accretion of the clouds because the surrounding halo medium is densest. \citet{Roehser2014} base their discussion on two prototypical IVCs at high Galactic latitudes that show an in-falling motion. These IVCs appear as twins in HI single-dish data but are completely different in terms of the correlation with the FIR dust emission: One cloud is purely atomic, the other is a rare IVMC. \citet{Roehser2014} focus on the HI-FIR correlation on large angular scales. Here, we present new high-resolution observations of these two clouds in HI and CO. The different chemical properties of the two clouds are expected to be imprinted in their spatial small-scale structure \citep{Hennebelle2012}. We study the connection between the atomic and molecular gas. By correlating the HI emission to the FIR dust continuum, we estimate the distribution of H$_2$ at smaller angular scales than \citet{Roehser2014}. Thus, variations of the conversion factor between CO and H$_2$, the $X_{\mathrm{CO}}$-factor \citep{Bolatto2013}, are derived for the IVMC. This paper is organised as follows. In Section \ref{sec:data} we present the data sets that are used in this study. In Section \ref{sec:methods} we describe how we infer H$_2$ column densities. In Section \ref{sec:results} we present the characteristics of the two IVCs that are obtained from the new high-resolution data. In Section \ref{sec:discussion} we discuss our results and summarise in Section \ref{sec:summary}.
\label{sec:discussion} \subsection{The molecular IVC} \label{sec:disc-mivc} \subsubsection{The $X_\mathrm{CO}$ factor} \label{sec:disc-mivc-xco} The quantitative comparison of H$_2$ column densities and $^{12}$CO(1$\rightarrow$0) emission results in conversion factors $0.5\times10^{20}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}\lesssim X_\mathrm{CO}\lesssim11\times10^{20}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}$ (Section \ref{sec:xco-factor}). The mean conversion factor $\bar{X}_\mathrm{CO}\simeq1.8\times10^{20}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}$ across the entire mIVC is consistent with the canonical value \citep{Bolatto2013}. The lowest conversion factors are derived at the locations of the CO peaks. Thus, CO-dark H$_2$ gas \citep{Grenier2005,Wolfire2010,Planckcollaboration2011XIX} is found mostly in regions of low CO abundances where $X_\mathrm{CO}$ is strongly enhanced. Several effects contribute to apparent changes in the $X_\mathrm{CO}$ factors that cannot be disentangled here. Firstly, there are real variations of the conversion factor due to different relative amounts of H$_2$ and CO across the cloud. Secondly, there is CO-dark H$_2$ gas mostly towards lower column densities \citep{Wolfire2010}. Thirdly, changes in the dust properties may change the HI-FIR correlation and the inferred amount of H$_2$. Such changes are not only inferred at high column densities \citep[e.g.][]{Ormel2011}. Recent studies \citep{Ysard2015,Fanciullo2015} show that even within the diffuse ISM, at HI column densities that we probe here, the dust emission properties vary. However, these variations cannot account for the FIR excess in total because they are too small. Studies of high-latitude IVMCs have in common that the authors attribute bright CO emission and large molecular abundances in general to dynamical phenomena in the Galactic halo \citep[e.g.][]{Herbstmeier1993,Herbstmeier1994,Moritz1998,Weiss1999,Lenz2015}. However, according to \citet{Herbstmeier1994} the excitation conditions of CO, for example line ratios, are similar to other molecular clouds. They propose that either the CO abundances are unusually high or that CO is more efficiently excited. \subsubsection{Excitation conditions of CO} \label{sec:disc-excitation} For the mIVC we obtain similar values for the excitation temperature $T_\mathrm{ex}$, optical depth $\tau_{^{13}\mathrm{CO}(1\rightarrow0)}$, and $^{13}$CO column density $N_{^{13}\mathrm{CO}}$ as, for example, \citet{Pineda2008,Pineda2010}. For Perseus, \citet{Pineda2008} find that about 60\% of the $^{12}$CO(1$\rightarrow$0) emission is sub-thermally excited corresponding to volume densities below $1\times10^3\,\mathrm{cm}^{-3}$ \citep[e.g.][]{Snow2006}. Sub-thermal excitation may also be very important for the mIVC since the RADEX grid calculations reproduce the observed peak $^{13}$CO(1$\rightarrow$0) emission best for $T_\mathrm{kin}\simeq45\,\mathrm{K}$ and $n_{\mathrm{H}_2}\simeq440\,\mathrm{cm}^{-3}$ (Section \ref{sec:radiative-transfer}), which is significantly below the critical density of $^{12}$CO(1$\rightarrow$0) or $^{13}$CO(1$\rightarrow$0). According to \citet{Liszt2010}, the specific brightness $W_\mathrm{CO}/N_\mathrm{CO}$ is larger in warm and sub-thermally excited gas. Such environments correspond to kinetic temperatures that are much higher than the CO(1$\rightarrow$0) excitation temperature, in agreement with our findings.
Generally, CO chemistry is more sensitive to the environmental conditions than H$_2$ is \citep[e.g.][]{Liszt2012}. For some of their lines-of-sight \citet{Liszt2012} describe strongly over-pressured molecular clumps which are likely transient. \subsubsection{Evidence for non-equilibrium conditions} \label{sec:disc-mivc-non-equi} There are observational indications that the formation of H$_2$ and CO in the mIVC does not occur in formation-dissociation equilibrium: \begin{itemize} \item There are spatial displacements between all the different data sets (Figs.~\ref{fig:mivc-maps} and \ref{fig:mivc-spatial-cuts}). Also the velocity distribution of the HI and CO emission is different (Fig.~\ref{fig:average-spectra}), suggesting varying H$_2$ formation efficiencies for different spectral components. \item The brightest CO emission is found at the edge of the integrated CO emission map, located eastwards of the nearby HI maximum close to the cloud's rim (Fig.~\ref{fig:mivc-spatial-cuts}). The $^{12}$CO(1$\rightarrow$0)/$^{13}$CO(1$\rightarrow$0) ratios are lowest at one particular spot at the eastern side (Fig.~\ref{fig:mivc-excitation}), probably linked to the largest molecular abundances and column density contrasts. This is reminiscent of the Draco molecular cloud \citep[e.g.][]{MivilleDeschenes2016}. \item In general, the spectral and spatial properties of the mIVC are complicated. We observe several velocity components in both HI and CO and a rich clumpy structure (Figs.~\ref{fig:mivc-maps}, \ref{fig:mivc-renzo}). In $^{12}$CO(1$\rightarrow$0), there is a bimodal velocity distribution across the cloud but no coherent velocity gradient. \item The mIVC, with an observed radial velocity of $\sim$$-40\,$km\,s$^{-1}$, moves through a thin halo medium. This situation in itself may be unstable and subject to instabilities \citep[e.g.][]{MivilleDeschenes2016}. \end{itemize} \subsubsection{Is the mIVC able to form stars?} \label{sec:star-formation} None of the known star-forming high-latitude clouds are classified as IVCs \citep{McGehee2008}. Thus, it would be surprising to find evidence of star formation in the mIVC. Within the innermost part of the mIVC, which we measured with IRAM, the total combined single-dish HI and H$_2$ mass is about $M_\mathrm{H}\simeq42\,M_\odot$. This mass is low compared to other star-forming high-latitude clouds like MBM\,12 \citep{Pound1990,Luhman2001} or MBM\,20 \citep{Liljestrom1991,Hearty2000}. However, the mIVC is connected to more extended HI structures that contain significantly more mass \citep{Roehser2014}. Using the virial parameter \citep{Bertoldi1992} \begin{equation} \alpha_\mathrm{vir}=\frac{M_\mathrm{vir}}{M} = \frac{5\sigma^2R}{GM} \simeq 1.2 \left( \frac{\sigma_\mathrm{v}}{\mathrm{km}\,\mathrm{s}^{-1}} \right)^2 \left( \frac{R}{\mathrm{pc}} \right) \left( \frac{M}{10^3\,\mathrm{M}_\odot} \right)^{-1}, \end{equation} we estimate the importance of self-gravity for the individual clumps. For clumps with $\alpha_\mathrm{vir}\gg1$ gravity is unimportant, while for $\alpha_\mathrm{vir}\simeq1$ the gravitational energy is comparable to the kinetic energy. We calculate the size of each clump by summing over the number of its individual pixels and converting it to the radius of a sphere with the same angular size. Using the estimated radius and gas mass for a typical line-width of CO of $\Delta v \simeq 1\,\mathrm{km}\,\mathrm{s}^{-1}$, the typical virial parameters are $\alpha_\mathrm{vir}\simeq120$ with a minimum value of approximately six.
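As a purely illustrative cross-check of the virial estimate above, the short Python sketch below evaluates the approximate form of the equation for hypothetical clump sizes and masses (the radius and mass values are invented placeholders; only the $\sim$1 km\,s$^{-1}$ CO line width is taken from the text). Sub-parsec, sub-solar-mass clumps indeed give $\alpha_\mathrm{vir}$ of order one hundred, echoing the typical value of $\simeq$120 quoted above.
\begin{verbatim}
# Minimal sketch of the virial-parameter scaling above,
# alpha_vir ~ 1.2 (sigma_v/km s^-1)^2 (R/pc) (M/10^3 Msun)^(-1).
# Radii and masses below are invented placeholders, not measured values.
def alpha_vir(sigma_v_kms, radius_pc, mass_msun):
    """Virial parameter in the Bertoldi & McKee-style approximation."""
    return 1.2 * sigma_v_kms**2 * radius_pc / (mass_msun / 1.0e3)

# FWHM ~ 1 km/s corresponds to sigma ~ FWHM / 2.355 for a Gaussian line,
# although the text does not state which convention enters the formula.
fwhm = 1.0
sigma = fwhm / 2.355

for radius_pc, mass_msun in [(0.05, 0.1), (0.1, 1.0)]:   # hypothetical clumps
    print(radius_pc, mass_msun, alpha_vir(sigma, radius_pc, mass_msun))
\end{verbatim}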
Hence, the combined atomic and molecular gas within the spatial regions of the CO clumps is unlikely to be gravitationally bound. The result is similar when we consider the cloud globally. For the entire region covered with IRAM we get $\alpha_\mathrm{vir}\simeq6$. Thus, the cloud also seems to be gravitationally unbound globally. The mIVC appears not to form stars, which is expected because the densities and masses are not sufficiently large to form gravitationally bound structures. \subsection{The formation of molecular clouds} \label{sec:disc-formation-molecular-clouds} Turbulence is thought to be important for the formation of molecular clouds and for subsequent star formation \citep{MacLow2004}. Simulations of turbulent colliding flows of initially warm neutral medium (WNM) show the formation of non-linear density perturbations that lead to gaseous structures of cold neutral medium (CNM) due to dynamical and thermal instabilities \citep[e.g.][]{Audit2005,Heitsch2005,Glover2007}. The H$_2$ formation is rapid with time-scales of $\sim$$1\,$Myr or less. The main limiting factor for H$_2$ formation is the time-dependent column density distribution which continuously re-exposes the molecular material to the radiation field \citep{Heitsch2006a}. In their Fig.~7, \citet{Glover2007} show that the chemical abundances are not in equilibrium, indicating that molecular cores are likely transient features \citep{VazquezSemadeni2005}. We infer comparably shallow slopes of the PSDs computed for the interferometric HI data. Possible explanations are high Mach numbers, which appear to flatten the PSD profiles \citep{Burkhart2013b}, and the thermal instability of atomic gas \citep{Field1965}. Molecules form in small and dense environments, which suggests a flattening of the PSD for smaller spatial scales of a molecular cloud. However, a steeper PSD is inferred for the molecular cloud than for the atomic one. This may be interesting for the mechanisms that form structures in general. The considered single-dish and interferometric HI fluxes reveal that there is apparently more atomic gas retrieved by the interferometer for the mIVC than for the aIVC as compared to the total amount of gas. Accordingly, the HI column densities are higher within the mIVC. These may account for the different amplitudes of the PSD profiles \citep{Gautier1992,MivilleDeschenes2007}. Apparently, this additional atomic gas is in the form of CNM rather than WNM, since the EBHIS peak spectra directly show that the mIVC contains more cold atomic gas than the aIVC \citep[][their Fig.~8]{Roehser2014}. The retrieved HI fluxes and the PSD profiles suggest the same finding but from the spatial distribution of the HI gas. Similarly, these additional CNM structures appear to be connected to brighter FIR emission in the mIVC, which reflects the presence of molecular hydrogen. Hence, we may have found an observational connection between CNM and molecular gas. Nevertheless, one may argue that there should be more pronounced differences in the atomic structure of the two IVCs. These may be revealed better in proper interferometric mosaics of the targets. Otherwise, this implies that CNM and diffuse H$_2$ are rather similar. High-latitude clouds and IVMCs may be thought of as flows of warm and cold gas through the surrounding halo medium \citep[also][]{MivilleDeschenes2016}. Thus, it is a natural next step to assume that, given time, such objects develop cold small-scale structures in which molecular hydrogen can form.
The bimodal velocity distribution within the mIVC can be thought of as an imprint of flows of different gas components. The motion of halo clouds through the ambient medium creates ram pressure. In the mIVC most of the H$_2$ and CO are found at the eastern side of the cloud, where a sharp column density contrast at the cloud's rim is evident. We propose that the cloud moves in this direction and the largest molecular abundances are located at the leading front of the cloud (compare also with Fig.~\ref{fig:mivc-spatial-cuts}). Hence, ram pressure appears to accumulate gas and facilitate the formation of small-scale structures and molecules. We note that the unknown tangential velocity component of the cloud is likely substantial if the IVC originates from a Galactic fountain process \citep[e.g.][]{Melioli2008}. For typical densities of the WNM, \citet{Saury2014} similarly find that turbulent motions of the neutral gas alone do not cause the transition from WNM to CNM. Instead, an increase of the WNM density is required in the first place to trigger the rapid formation of CNM structures out of the WNM by turbulence. Thus, for the aIVC and mIVC, ram pressure appears to be responsible for over-pressuring the WNM, pushing the gas to the thermally unstable regime from which the CNM is formed. The general conditions are identical in both the aIVC and mIVC: The observed radial velocity is the same, there is cold gas with $\mathrm{FWHM}\simeq3\,\mathrm{km}\,\mathrm{s}^{-1}$, the total HI mass is even larger for the aIVC and substructure has evolved. Thus, we confirm the conclusion of \citet{Roehser2014} that the aIVC should evolve into a molecular IVC similar to the mIVC. This transition can occur rapidly, possibly within 1\,Myr \citep{Saury2014}. In simulations of the turbulent ISM, no particular triggering mechanism is required; structures emerge gradually. This is in contrast to possible interactions between IVCs and other halo clouds as the reason for the formation of molecules \citep{Herbstmeier1993,Weiss1999,Lenz2015}. We present high-resolution WSRT HI and IRAM CO observations of two high-latitude intermediate-velocity clouds (IVCs). These are studied in the context of the transition from atomic to molecular clouds at the disk-halo interface. Our analysis elaborates on \citet{Roehser2014}, who compared the two IVCs by using the most recent large-scale surveys in HI and the FIR, EBHIS and \textit{Planck}. The molecular IVC (mIVC) exhibits a pronounced structure consisting of many clumps in HI and CO. This clumpy, high-column density substructure on sub-parsec scales provides the shielding needed for molecules like H$_2$ and CO. Across those parts that are surveyed with IRAM, CO emission is detected, indicating that the whole cloud is condensed enough to allow the local formation of CO. Statistically, there is only weak evidence that the small-scale structures within the atomic IVC (aIVC) are different from the mIVC. In terms of HI column density, the interferometric observations detect less HI in clumps but more in a diffuse and smooth distribution. Consequently, no CO emission is detected near the largest HI column densities of the aIVC. The excess of interferometric HI flux for the mIVC relative to the aIVC may be considered as evidence for the larger abundance of CNM from its spatial distribution, which is reflected by the large amount of molecular gas within the mIVC.
The estimated slopes of the PSD profiles are shallower for the atomic cloud, which is opposite to the naive expectation that molecular clouds have more substructure. This may be connected to the formation mechanisms of molecular gas in general or to noise in our high-resolution HI data. Using the dust optical depth from \textit{Planck}, we infer the column densities of molecular hydrogen within the mIVC. The $X_\mathrm{CO}$ conversion factor varies significantly across the cloud with an average $\bar{X}_\mathrm{CO}\simeq1.8\times10^{20}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}$. The lowest $X_\mathrm{CO}$ values are found at the FIR peaks, increasing outwards. Thus, most of the CO-dark H$_2$ gas is found in regions of low CO abundances. A thorough study of similarities and differences between all high-latitude molecular and non-molecular IVCs would shed more light on a possible triggering mechanism and the requirements for the formation of such objects. It would give important insights into the Galactic fountain cycle, the fate of in-falling material, and the evolution of the Milky Way as a whole. One may have anticipated statistically more pronounced differences between the two clouds since a molecular cloud is expected to exhibit compact small-scale structures in which molecules have formed. The 1D power spectral densities do not reveal significant differences between the two clouds. This may be interpreted as indicating that CNM and diffuse H$_2$ are not so different after all. This inconsistency may be related to the incomplete mapping of the two clouds with the radio interferometer and perhaps an insufficient angular resolution. With the upcoming HI surveys conducted with Apertif \citep{Oosterloo2010}, proper interferometric imaging of large fractions of the sky is expected. These will allow a more detailed and quantitative analysis of the high-Galactic latitude sky and, in particular, of the two IVCs of interest.
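For reference, the per-pixel conversion-factor estimate used above amounts to a simple ratio of the inferred H$_2$ column density map and the velocity-integrated CO brightness. The following minimal Python sketch illustrates the operation on placeholder arrays (the random maps and the detection threshold are assumptions for illustration, not the actual data products of this study):
\begin{verbatim}
# Minimal sketch of a per-pixel conversion factor, X_CO = N(H2) / W_CO,
# with N(H2) inferred from the HI-FIR correlation and W_CO the integrated
# 12CO(1-0) brightness. The arrays below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_h2 = rng.uniform(0.5e20, 4e20, size=(64, 64))     # cm^-2 (placeholder)
w_co = rng.uniform(0.0, 3.0, size=(64, 64))         # K km/s (placeholder)

detected = w_co > 0.5                                # crude detection mask
x_co = np.full(n_h2.shape, np.nan)
x_co[detected] = n_h2[detected] / w_co[detected]     # cm^-2 (K km/s)^-1

print("median X_CO / 1e20:", np.nanmedian(x_co) / 1e20)
\end{verbatim}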
16
7
1607.00912
1607
1607.08232_arXiv.txt
Ultra-high energy (UHE) neutrino astrophysics sits at the crossroads of particle physics, astronomy, and astrophysics.\footnote{Here, we loosely define the UHE regime to be the energy range near $10^{18}-10^{21}$~eV. A few radio experiments have sensitivities that reach to lower energies, while others have energy thresholds at higher energies.} Through neutrino astrophysics, we can uniquely explore the structure and evolution of the universe at the highest energies at cosmic distances and test our understanding of particle physics at energies greater than those available at particle colliders. The detection of UHE neutrinos would shed light on the nature of the astrophysical sources that produce the highest energy particles in the universe. Astrophysical sources almost certainly produce UHE neutrinos in hadronic processes. Also, neutrinos above $10^{17}$ eV should be produced through the GZK effect~\cite{g,zk}, where extragalactic cosmic rays above $10^{19.5}$~eV interact with the cosmic microwave background within tens of Mpc of their source. It was first pointed out by Berezinsky and Zatsepin~\cite{Berezinsky:1969zz, Berezinsky:1970} that this cosmogenic neutrino flux, sometimes called ``BZ'' neutrinos, could be observable. These neutrinos would point close to the cosmic ray production site both because of the close proximity of the interaction to the source and because the decay products are Lorentz boosted along the line of sight. The latter effect constrains the direction the most strongly ($1~\rm{MeV}/1~\rm{EeV}=10^{-12}$ whereas $100~\rm{Mpc}/1~\rm{Gpc}=0.1$). Since cosmic rays do not follow straight paths in magnetic fields and get attenuated above the GZK threshold, and high energy photons (E$>10^{14}$~eV) are attenuated by the cosmic infrared background, neutrinos offer the unique ability to point to the location of the highest energy cosmic accelerators in the sky. UHE neutrino measurements will also have important implications for high energy particle physics and determining neutrino properties. A sample of UHE neutrinos would allow for a measurement of the $\nu$-p cross section~\cite{Connolly:2011vc,Klein:2013xoa} and a direct test of weak interaction couplings at center-of-mass (CM) energies beyond the Large Hadron Collider (a 10$^{18}$~eV neutrino collides with a proton at rest at 45~TeV CM). A strong constraint on the UHE neutrino flux can even shed light on models of Lorentz invariance violation~\cite{anitaLorentz,Anchordoqui:2014hua}. IceCube has recently discovered neutrinos with energies up to a few $10^{15}$~eV that are likely produced by astrophysical sources directly~\cite{bertErnie,bigBird}. Therefore, there is now a pressing motivation to measure the energy spectrum above $10^{15}$~eV with improved sensitivity, to provide insight into the origin of the seemingly cosmic events and the particle acceleration mechanisms that give rise to them and to determine the high-energy extent of the spectrum. \begin{wrapfigure}{r}{0.65\textwidth } \begin{center}\includegraphics[width=0.60\textwidth]{limits.pdf} \end{center} \captionsetup{width=0.60\textwidth} \caption{The current most competitive experimental constraints on the all-flavor diffuse flux of the highest energy neutrinos compared to representative model predictions~\cite{ahlers,kotera}. Limits are from IceCube, Auger, RICE, ANITA and NuMoon ~\cite{augerLimit,rice,anita2,anita2err,icecube2015,buitinkconstraints2010}. Also shown is the astrophysical neutrino flux measured by IceCube~\cite{icecube2015}. 
\label{fig:limit}} \end{wrapfigure} Figure~\ref{fig:limit} shows the current best limits on the high energy diffuse neutrino flux compared with a variety of GZK production models. Beyond the astrophysical neutrino events measured up to $\sim10^{15}$~eV, IceCube sets the best limits on the high energy neutrino flux up to $10^{17.5}$~eV, and now shares the claim for the best constraints in the region up to $10^{19.5}$~eV with the Pierre Auger Observatory (Auger)~\cite{icecube2015,augerLimit}. The current best limit on the flux of neutrinos above $10^{19.5}$~eV comes from the Antarctic Impulsive Transient Antenna (ANITA) experiment~\cite{anita2,anita2err}. Note that the vertical axis of Figure~\ref{fig:limit} shows the differential flux $dN/dE$ multiplied by one power of $E$; the product is proportional to $dN/d\log_{10}E$. On this plot, an experiment that increases its sensitivity in an energy-independent way, for example by increasing its live time, will have its limits move only downward. An experiment that decreases its energy threshold with no other change will have its constraints move only to the left on this plot. Despite the competitive limits currently imposed by optical detection experiments, it would be prohibitively costly to build a detector utilizing the optical signature that would be sensitive to the full range of predicted possible cosmogenic neutrino populations above $10^{17}$~eV, due to the detector spacings set by the absorption and scattering lengths of optical light in ice~\cite{Ackermann:2006pva}. IceCube-Gen2 is a proposed IceCube expansion to 100-300~km$^2$ scale for the detection of neutrinos above $10^{13.5}$~eV, focusing on energies above which the atmospheric neutrino background is not overwhelmingly dominant, and with the discovery potential for BZ neutrinos~\cite{Aartsen:2014njl}. However, to achieve sensitivity to the full range of BZ neutrino models, we must instead turn to a different detection mechanism that allows us to instrument larger volumes for comparable cost. Neutrino telescopes that utilize the radio detection technique search for the coherent, impulsive radio signals that are emitted by electromagnetic particle cascades induced by neutrinos interacting with a dielectric. Radio UHE neutrino detection requires a volume of $\mathcal{O}(100)$~km$^3$ of dielectric material that allows radio signals to pass through without significant attenuation over lengths of $\mathcal{O}(1)$~km, which limits the detection medium to naturally occurring materials. Current and proposed experimental efforts in this field monitor or plan to monitor immense volumes of glacial ice, whose radio attenuation properties have been directly measured at multiple locations in Antarctica and Greenland, and have the desired clarity~\cite{Barrella:2010vs,besson,barwickSouthPole,araWhitepaper,avva,Barwick_Berg_Besson_Duffin_2014}. We note that although IceCube-Gen2 is nominally an expansion of the array of optical sensors, a radio array component may be considered for enhancement of sensitivity in the energy range~$10^{16}$-$10^{20}$~eV~\cite{Aartsen:2014njl}. Even some of the most pessimistic models predicting BZ neutrino fluxes are within reach of planned experiments using the radio technique. These more pessimistic models tend to have heavy cosmic-ray composition or a weak dependence of source densities on redshift, within constraints set by other measurements.
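As a brief numerical aside, two bookkeeping facts used in this introduction (the centre-of-mass energy of a $10^{18}$~eV neutrino striking a proton at rest, and the $\ln 10$ factor relating $E\,dN/dE$ to $dN/d\log_{10}E$) can be checked with a few lines of Python; this is our own arithmetic, not code from any of the cited experiments:
\begin{verbatim}
# Quick cross-check of quantities quoted in this introduction.
import math

m_p = 0.938e9            # proton rest energy in eV
E_nu = 1.0e18            # neutrino energy in eV
sqrt_s = math.sqrt(2.0 * E_nu * m_p)          # neutrino mass neglected
print("sqrt(s) ~ %.0f TeV" % (sqrt_s / 1e12)) # ~43 TeV, consistent with ~45 TeV

# E dN/dE = dN/d(ln E) = (1/ln 10) dN/d(log10 E),
# so the two axis conventions differ only by a factor ln(10).
print("ln(10) =", math.log(10.0))
\end{verbatim}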
Recent measurements with Auger and the Telescope Array disfavor a significant iron fraction in the cosmic-ray composition at energies up to and even exceeding $10^{19.5}$~eV, which in turn favors higher fluxes of BZ neutrinos than if the highest energy cosmic rays were pure iron~\cite{auger_2014_composition,telescopeArray}. An experiment with a factor of $\sim$50 improvement over the best sensitivity currently achieved by IceCube near $10^{18}-10^{19}$~eV, or one that reduces the energy threshold of an ANITA-level sensitivity by a similar factor, would reach these pessimistic neutrino flux expectations. For most models, such an experiment would observe enough events to make an important impact on our understanding of UHE astrophysics and particle physics utilizing the highest energy observable particles in the universe. We note, however, that even these pessimistic models can be evaded through alternate explanations for the cosmic ray data, or more exotic scenarios~\cite{Unger:2015laa,Globus:2015xga,anitaLorentz}.
16
7
1607.08232
1607
1607.06533_arXiv.txt
A bright optical flare was detected in the high-redshift ($z=2.133$) quasar CGRaBS J0809+5341 on 2014 April 13. The absolute magnitude of the object reached $-30.0$ during the flare, making it the brightest one (in flaring stage) among all known quasars so far. The 15 GHz flux density of CGRaBS J0809+5341 monitored in the period from 2008 to 2016 also reached its peak at the same time. To reveal any structural change possibly associated with the flare in the innermost radio structure of the quasar, we conducted a pilot very long baseline interferometry (VLBI) observation of CGRaBS J0809+5341 using the European VLBI Network (EVN) at 5 GHz on 2014 November 18, about seven months after the prominent optical flare. Three epochs of follow-up KaVA (Korean VLBI Network and VLBI Exploration of Radio Astrometry Array) observations were carried out at 22 and 43 GHz frequencies from 2015 February 25 to June 4, with the intention of exploring a possibly emerging new radio jet component associated with the optical flare. However, these high-resolution VLBI observations revealed only the milliarcsecond-scale compact ``core'' that was known in the quasar from earlier VLBI images, and showed no sign of any extended jet structure. Neither the size, nor the flux density of the ``core'' changed considerably after the flare according to our VLBI monitoring. The results suggest that any putative radio ejecta associated with the major optical and radio flare could not yet be separated from the ``core'' component, or the newly-born jet was short-lived.
\label{sect:intro} Blazars are active galactic nuclei (AGN) with relativistic jets closely aligned with our line of sight according to the radio-loud AGN unification \citep{Urry95}. As a result, the Doppler-boosted relativistic jet emission dominates their non-thermal spectrum from the radio \citep{Blandford79} through optical \citep{Whiting01} to the $\gamma$-rays \citep{Ackermann11}. Phenomenological relations between optical flaring and radio properties in blazars have been investigated spanning a duration of more than four decades \citep[e.g.,][]{Hac72,Pom76,Babadzhanyants84,Tor94}. However, the physics of the inter-relating properties across the electromagnetic spectrum remains enigmatic. Recent studies found a significant positive correlation between the optical nuclear luminosity and the radio flux density of the compact core in quasars, indicating that both the radio and optical emissions originate from the innermost part of the relativistically beamed pc-scale jets \citep{Ars10}. Correlations between the optical and $\gamma$-ray variability have also been found in blazars \citep{Hovatta14,Cohen14}, supporting the single-zone leptonic models in which the optical seed synchrotron photons are up-scattered by relativistic electrons to $\gamma$-ray energy bands via the inverse Compton process. As for possible correlations between $\gamma$-ray flares and the emergence of new superluminal VLBI components, \citet{Jor01} found a correspondence between these events in about half of the cases in their blazar sample, suggesting that the $\gamma$-ray emission is closely related to the relativistic jet. The physical mechanism producing the $\gamma$-ray flares is either synchrotron self-Compton or external Compton scattering of photons by relativistic electrons in the pc-scale regions of the jet. The location of the seed photon sources however may span two orders of magnitude in distance from the black hole, from the broad-line region (BLR, $\sim$0.1~pc), the molecular torus ($\sim$1$-$few pc), or the radio core ($\sim$10~pc) \citep{Dotson15}. CGRaBS J0809+5341 (J0809+5341, hereafter) is a flat-spectrum radio quasar \citep[a blazar;][]{Massaro09} at high redshift, $z = 2.133$ \citep{Healey08}. Recently it showed a bright flare in unfiltered optical observations \citep{Shumkov14,Balanutsa14}. The observations were made with the MASTER-Tunka auto-detection system on the nights of 2014 April 13 and April 19. The absolute magnitude of the flaring source was extremely high, $M=-30.0$ on April 13 and $M=-30.5$ on April 19, making it (during the short flaring period) possibly the brightest among all known quasars. In the subsequent observation on 2014 May 2, the source became significantly fainter, but it was still about 3 magnitudes brighter than in its quiescent state \citep{Wiersema14}. The source has recently been detected at high energies in the decaying part after the 2014 optical flare as well. It has been detected with the {\em Fermi} Large Area Telescope (LAT) \citep{Fermi15}; this observation indicated variability in the high-energy bands. It remained undetected during the first 2 years of {\em Fermi} operations but became active and continuously detected in the last year \citep{Paliya15}. In the same study, the source was also tracked in the X-rays with the {\em Nuclear Spectroscopic Telescope Array (NuSTAR)} and {\em Swift} satellites. 
The strong optical flare and the recent high $\gamma$-ray state in J0809+5341 would be expected to cause a radio flux density outburst with the emergence of a new jet component \citep[e.g.,][]{Jor01,Mar08,Mar10,Ori13}. High-resolution VLBI imaging observations are essential to confirm this. Motivated by the discovery of the prominent optical flare, we have carried out a short exploratory VLBI observation of J0809+5341 with the European VLBI Network (EVN), with the aim of searching for possible structural changes and radio flux density variation associated with the event. The experiment was conducted on 2014 November 18, seven months after the optical flare. Then, we continued monitoring the source with the KaVA, a VLBI array combining the Korean VLBI Network (KVN) and the Japanese VLBI Exploration of Radio Astrometry (VERA) array, at frequencies of 22 and 43 GHz. The dual-frequency KaVA observations of J0809+5341 were conducted at three epochs (2015 February 25/26, 2015 April 2/3, and 2015 June 3/4), in order to trace possible structural variations, flux density variability, and the change of radio spectral index with time. Here we report on the results of our VLBI observations of J0809+5341. The paper is organized as follows: Section \ref{sect:obs} describes the EVN and KaVA observations and the data analysis. Section \ref{sect:res} presents the results which are then discussed in Section \ref{sect:disc}. A summary of the current study is then presented in Section \ref{sect:sum}.
\label{sect:disc} \subsection{Source variability} As was described in Sect.~\ref{variability}, the source has a flat radio spectrum and shows strong variability in the radio. This is consistent with the blazar classification of J0809+5341. Following \citet{Ars10}, we calculated the radio-loudness parameter $R$, defined as the ratio of the radio flux density at 5~GHz to the nuclear optical flux density at 4400~\AA{} \citep{Kel89}. For a source at $z = 2.133$, the above bands used for calculating $R$ correspond to 1.6~GHz and 13800~\AA{} in the observer's frame. J0809+5341 shows a practically flat radio spectrum, allowing us to assume a 1.6~GHz flux density of 160~mJy in the quiescent state, and 320~mJy at the flare peak. In the flaring state, the MASTER OT observatory detected an unfiltered R-band magnitude of 16.2 \citep{Shumkov14,Balanutsa14}. Compared to the historical R-band data, this is 3.4 mag brighter. When converting to flux density at 13800~\AA, it corresponds to about 4.5~mJy. Then, the radio-loudness of J0809+5341 is $R = 70$ during the 2014 optical flare. As a comparison, $R = 1800$ is estimated during quiescence, reinforcing the classification of the object as a radio-loud quasar. Optical flaring of blazars has been studied for over five decades \citep[e.g.,][]{GK65,Pol79,AS80}, and correlations between optical and radio flares have been detected in some cases, e.g., in the prominent radio AGN AO~0235+164 and 3C~345 \citep{BD80,Babadzhanyants84}. Long-term multi-band monitoring of a sample of blazars shows a tight correlation between the radio and optical luminosities \citep{Ars10,Wie15}. At the moment, it is not yet clear whether a similar correlation exists for J0809+5341, but we note that the major optical flare in 2014 took place at the same time as the 15 GHz radio flux density reached its peak in April (Fig.~\ref{fig:lc}). Our VLBI observations were performed several months later, when the total radio flux density had already dropped considerably. The VLBI flux densities at various frequencies and epochs reported in this paper show a good consistency with the total flux density of the 15 GHz OVRO light curve (Fig.~\ref{fig:lc}), indicating that the total flux density is dominated by the compact VLBI core, and the core has a flat radio spectrum during this relatively quiet stage. Recently, \citet{Paliya15} reported the first detection of J0809+5341 in the X-ray and $\gamma$-ray bands. This increase in high-energy emission is coincident with, and likely associated with, the giant optical flare. As mentioned earlier, the seed photon sources for inverse Compton scattering may originate in the BLR, the molecular torus, or the radio core. \citet{Paliya15} found that the $\gamma$-ray properties are consistent with an emission region outside of the BLR. In this case the flaring radio emission is expected to be completely synchrotron self-absorbed, and the observed maximum in radio flux density is likely a chance coincidence. When the shocked ejecta travel along the jet, we expect to see an increase of radio emission as it becomes transparent, first at the highest frequencies, as predicted by the shock-in-jet model \citep{MG85,Val92}. The fact that there has been no increase at 43~GHz in our monitoring implies that the flaring radio emission was either very short-lived, or the shocked ejecta have not yet propagated to the optically thin region. Alternatively, the optical flare and the radio outburst, as well as the increase in the high-energy flux, are physically related.
Most blazar outbursts are known to occur at pc-scale distances from the central engine, around the radio core region \citep[see e.g.][for a review]{Mar13}. This can be confirmed by long-term monitoring observations with dense time sampling, from radio to $\gamma$-rays, supplemented with high-resolution VLBI monitoring in the radio. Such programmes are being undertaken for some of the most prominent blazars \citep[e.g.,][]{Mar08,Mar10,Agudo11a,Agudo11b,Ori13,Jor13} but not for J0809+5341. However, it is also possible that no new jet component was associated with the flare of J0809+5341, as, e.g., found for the blazar Mrk 421 by \citet{PE05}. This would suggest that the jet rapidly loses its kinetic energy and does not reach the region that can be imaged with the resolution offered by VLBI. \subsection{The brightness temperature and the implications for the Doppler-boosting factor} Based on the VLBI-measured flux density and source size presented in Sect.~\ref{size}, we calculated the apparent brightness temperature of J0809+5341 using the following equation \citep{K&O88}: \begin{equation} T_{\rm b} = 1.22 \times 10^{12} \frac{S_{\rm core}}{\nu^2 \theta^2} (1+z) , \end{equation} where $T_{\rm b}$ is the brightness temperature in Kelvin, $S_{\rm core}$ [Jy] is the flux density of the ``core'' at the observing frequency $\nu$ [GHz], $\theta$ [mas] is the FWHM size of the best-fit circular Gaussian model. The redshift is $z= 2.133$ \citep{Healey08}. The calculated brightness temperatures for the different VLBI experiments are listed in Table~\ref{tab:mf}. The brightness temperatures are in the range of $(0.2 - 5.3) \times 10^{11}$\,K. These values are typical for most other radio-loud quasars observed with VLBI at around $z = 3$ \citep{Gurvits92,Gurvits94,Frey97,Par99}. The $T_{\rm b}$ values at 43~GHz appear consistently smaller than those measured at lower frequencies (Table~\ref{tab:mf}). This phenomenon has also been found in previous high-frequency VLBI surveys \citep{Lee08}. The difference derived from the statistical investigation of large samples is not simply due to source variability or other observing effects. A possible reason might be related to non-zero gradients in the physical conditions in the jet flow, and high-frequency (43- and 86-GHz) VLBI observations probe the optically thin region where $T_{\rm b}$ is intrinsically lower \citep{Lee08}. On the other hand, as we pointed out in Sect. 3.1, the core is not completely resolved, and in our case the fitted component size represents an upper limit at the highest frequencies. This means that the calculated $T_{\rm b}$ values at 43~GHz are in fact lower limits. Therefore we cannot independently confirm the decrease of $T_{\rm b}$ with frequency for J0809+5341. The brightness temperature of blazars is amplified by the Doppler boosting effect as the approaching jets are oriented close to the line of sight. Usually, the equipartition brightness temperature $T_{\rm b,eq} \simeq 5 \times 10^{10}$\,K \citep{Readhead94} is considered to be a reasonable estimate of the intrinsic value $T_{\rm b,int}$. The Doppler boosting factor is thus derived from the observed brightness temperature as $\delta = T_{\rm b}/T_{\rm b,eq}$. The estimated Doppler factors (lower limits in cases where the $T_{\rm b}$ values are lower limits as well) for J0809+5341 listed in Table~\ref{tab:mf} range from at least 0.4 to 10.6. 
The observations of $\delta<1$ at some epochs might indicate a non-stationary flow of plasma resulting in deviations from equipartition as well as projection effects of a curved plasma flow trajectory. However, we note that the $\delta$ values somewhat below unity in Table~\ref{tab:mf} are all estimated at 43 GHz and, as discussed above, are lower limits because the source is unresolved at this frequency. Therefore the lower values are not inconsistent with the presence of Doppler boosting in the jet. \subsection{Comparison to jet parameters derived from high-energy observations} The spectral energy distribution (SED) of the source during the flare was fitted by \citet{Paliya15} with a synchrotron self-Compton model, confirming that J0809+5341 is a powerful blazar. \citet{Paliya15} note, however, that the high-energy properties of J0809+5341 are reminiscent of low-redshift blazars rather than high-redshift ones. Its optical spectrum is dominated by synchrotron emission from the jet rather than an extremely luminous accretion disk; its $\gamma$-ray spectrum is flat rather than steep; and it hosts a relatively low-mass black hole ($10^{8.4}\,M_{\odot}$) \citep[cf.][]{Ghisellini11,Ghisellini13}. Our VLBI result reveals a relativistic jet with a moderate Doppler boosting factor, consistent with typical blazar radio properties in general. From the SED, \citet{Paliya15} estimate a bulk Lorentz factor of $\Gamma=20$ in the jet, and suggest that the jet becomes radiatively efficient during the flare. Assuming the jet parameters obtained from SED fitting by \citet{Paliya15}, we independently estimate the Doppler factor $\delta$, following, e.g., \citet{Urry95}: \begin{equation} \delta=[\Gamma (1 - \beta \cos \vartheta)]^{-1}, \end{equation} where the bulk velocity, measured in units of the speed of light $c$, is \begin{equation} \beta=(1 - \Gamma^{-2})^{\frac{1}{2}}. \end{equation} Substituting $\Gamma=20$ and the jet viewing angle $\vartheta=3.0\degree$ \citep{Paliya15}, we get $\delta=19.1$. This is higher than the values we derived from the VLBI data (Table~\ref{tab:mf}). A possible reason is that we overestimate the intrinsic brightness temperature $T_{\rm b,int}$ by a factor of $\sim$2 by adopting the equipartition value $T_{\rm b,eq}$ \citep[cf.][]{Homan06}, and thus underestimate the Doppler factor by the same factor. In the relativistic beaming model applied to the parameters of J0809+5341, the observed transverse speed of a radio-emitting blob in the jet, expressed in units of $c$, is \begin{equation} \beta_{\rm app}=\frac{\beta \sin \vartheta}{1 - \beta \cos \vartheta} = 19.95. \end{equation} Assuming the jet model for J0809+5341 proposed by \citet{Paliya15}, using $\delta=19.1$ for the Doppler factor, we can estimate the expected apparent proper motion $\mu$ of a putative newly-ejected superluminal jet component possibly associated with the optical flare (and the coincident radio and high-energy emission peak) in 2014 April, following \citet{Bach05}: \begin{equation} \mu=\beta_{\rm app} c (1+z) D_{\rm L}^{-1}. \end{equation} Here $D_{\rm L}=16809.4$~Mpc \citep{Wright06}, and thus $\mu=0.23$~mas~yr$^{-1}$. This slow apparent proper motion is consistent with our results, in particular with the fact that we did not detect any sign of a new jet component in our follow-up VLBI observations within 1.1~yr after the flare, with angular resolutions $\gtrsim 0.6$~mas (see Table~\ref{tab:obs}). If there was an emerging blob in the jet, then it was still blended with the ``core''.
Another possibility is that the flare did not generate a jet component. Follow-up VLBI imaging over a sufficiently long time interval may eventually reveal a jet ejection, unless the blob is expanding and fading too rapidly to be detected several years after the flare.
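For completeness, the beaming relations quoted above can be evaluated numerically. The short Python sketch below reproduces $\delta\simeq19.1$, $\beta_{\rm app}\simeq19.95$ and $\mu\simeq0.23$~mas~yr$^{-1}$ from $\Gamma=20$, $\vartheta=3\degree$, $z=2.133$ and $D_{\rm L}=16809.4$~Mpc; the unit conversions are standard kinematics, not code used in this work:
\begin{verbatim}
# Numerical sketch of the Doppler factor, apparent speed and proper motion.
import math

Gamma, theta_deg = 20.0, 3.0
z, D_L_Mpc = 2.133, 16809.4

beta = math.sqrt(1.0 - 1.0 / Gamma**2)
theta = math.radians(theta_deg)

delta = 1.0 / (Gamma * (1.0 - beta * math.cos(theta)))
beta_app = beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

# mu = beta_app * c * (1 + z) / D_L, converted to mas per year
c_km_s, yr_s, Mpc_km = 299792.458, 3.156e7, 3.0857e19
c_Mpc_per_yr = c_km_s * yr_s / Mpc_km
mu_rad_per_yr = beta_app * c_Mpc_per_yr * (1.0 + z) / D_L_Mpc
mu_mas_per_yr = mu_rad_per_yr * math.degrees(1.0) * 3600.0e3

print("delta       = %.1f" % delta)            # ~19.1
print("beta_app    = %.2f" % beta_app)         # ~19.95
print("mu [mas/yr] = %.3f" % mu_mas_per_yr)    # ~0.235, i.e. ~0.23 mas/yr
\end{verbatim}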
16
7
1607.06533
1607
1607.04193_arXiv.txt
We develop a galaxy cluster finding algorithm based on spectral clustering technique to identify optical counterparts and estimate optical redshifts for X-ray selected cluster candidates\footnote{\url{https://github.com/1680/Journal-manuscript-code.git}}. As an application, we run our algorithm on a sample of X-ray cluster candidates selected from the third XMM-Newton serendipitous source catalog (3XMM-DR5) that are located in the Stripe 82 of the Sloan Digital Sky Survey (SDSS). Our method works on galaxies described in the color-magnitude feature space. We begin by examining 45 galaxy clusters with published spectroscopic redshifts in the range of 0.1 to 0.8 with a median of 0.36. As a result, we are able to identify their optical counterparts and estimate their photometric redshifts, which have a typical accuracy of 0.025 and agree with the published ones. Then, we investigate another 40 X-ray cluster candidates (from the same cluster survey) with no redshift information in the literature and find that 12 candidates are confirmed as galaxy clusters in the redshift range from 0.29 to 0.76 with a median of 0.57. These systems are newly discovered clusters in both X-ray and optical data. Among them 7 clusters have spectroscopic redshifts for at least one member galaxy.
\label{Sec:Intro} The study of galaxy clusters is an important field in astronomy since clusters trace the highest peaks of the matter density field and their properties provide one of the ways to test cosmic expansion and structure-growth models \citep[e.g.][]{Voit05, Allen11}. Therefore, various cluster surveys have been conducted at X-ray, optical, and mm wavelengths, exploiting the multiple components of their baryonic matter (galaxies and intracluster gas). Detecting galaxy clusters in optical data is not an easy task since the available spatial information is precise only in two dimensions, the right ascension ($\alpha$) and declination ($\delta$). The third important spatial dimension, along the line of sight, unfortunately has a large uncertainty, which leads to confusion in detecting galaxy clusters and determining their memberships \citep[e.g.][]{Gal2008, Hao10}. X-ray cluster surveys provide an accurate selection method for detecting galaxy clusters over a wide redshift range, up to 1.7 \citep[e.g.][]{Piffaretti11, Takey11, Mehrtens12, Clerc12, Takey13, Takey14, Clerc14, Pierre15}. Although X-ray observations are not available for a large area of the sky and do not provide redshifts for the majority of the selected clusters, they determine the cluster centers precisely and provide observable parameters (X-ray luminosity and temperature) that correlate well with the cluster mass. Optical and NIR observations provide the main data to estimate cluster redshifts. Galaxies in clusters are tightly grouped in space ($\alpha$,~$\delta$ and redshift) as well as in color. Therefore, optical cluster detection algorithms try to make use of these properties to identify galaxy clusters and their members as well as to estimate cluster redshifts \citep[e.g.][]{Koester07, Hao10, Rykoff14, Durret15, Wen15}. In this paper, we use the spectral clustering algorithm to optically confirm and estimate photometric redshifts of X-ray selected cluster candidates from the 3XMM/SDSS Stripe 82 galaxy cluster survey \citep{Takey2016}. Our cluster finding algorithm identifies galaxy clusters in the multi-dimensional color-magnitude space of galaxies. An important goal of the algorithm is to minimize both false positive detections (projection effects) and false negative detections (missing cluster galaxies), which in turn allows a precise estimate of the cluster redshift. Applying the algorithm to a subsample of 45 clusters with available spectroscopic redshifts in the literature recovers all of them, with consistent photometric redshift estimates. In addition, we optically confirm and estimate redshifts of 12 X-ray cluster candidates which had no available redshifts in the literature. These are thus newly identified galaxy clusters. The paper is organized as follows. Section~\ref{Sec:RelatedWork} gives a brief description of related work in the optical detection of galaxy clusters. We briefly describe the 3XMM/SDSS Stripe 82 galaxy cluster survey and its X-ray and optical data in Section~\ref{Sec:DataDescr}, while the cluster samples studied in this work are presented in Section~\ref{Sec:DataInvest}. Section~\ref{Sec:Method} presents the spectral clustering-based galaxy cluster finding algorithm, which we develop for photometric redshift estimation and cluster confirmation. Section~\ref{Sec:Results} gives the results for the studied cluster samples and a comparison with the published ones. We conclude in Section~\ref{Sec:Dis} with a summary and future work.
Throughout this paper we use the following values for the cosmological parameters: $H_0 = 70\;\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_M = 0.3$, and $\Omega_\Lambda = 0.7$.
\label{Sec:Dis} A galaxy cluster finding algorithm based on the spectral clustering technique is developed ({\url{https://github.com/1680/Journal-manuscript-code.git}}). The spectral clustering algorithm searches for clusters using the magnitude and color features. We apply the algorithm to a sample of 45 clusters with spectroscopic redshifts in the range of 0.1-0.8. These clusters are identified in the framework of the 3XMM/SDSS Stripe 82 galaxy cluster survey \citep{Takey2016}. Our cluster finding algorithm identified the optical counterparts of these systems and reported their photometric redshifts, which agree well with the published ones. We also apply our algorithm to a sample of 40 X-ray cluster candidates from the same cluster survey and optically confirm 12 systems with photometric redshifts in the range from 0.29 to 0.76 and a median of 0.57. The remaining candidates are expected to be at high redshift, and therefore more data than those used in our analysis are needed to confirm them. The spectral clustering algorithm is simple to implement and solve, which makes it a good choice for a wide range of problems. Also, this algorithm does not cluster the observations (galaxies) directly, as the $K$-means algorithm does, but works on the similarity matrix built from the relations between galaxies in a chosen feature space. This gives the algorithm the ability to identify galaxy clusters by the connectivity of their members as well as by their densities \citep{Lux2007, Kannan2004}. The internal parameters of the spectral clustering algorithm, such as the local scaling parameter $\sigma$ and the number of output clusters $k$, can be made adaptive to self-tune the algorithm \citep{Manor2004}. The algorithm can also learn the similarity matrix from the data \citep{Bach2004}. As a first application of spectral clustering to finding galaxy clusters, we use the basic algorithm of \citet{Ng2002}. The self-tuning and learning versions can be used in future work.
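As an illustration of the clustering step described above (a minimal, hedged sketch, not the authors' released code), galaxies can be grouped in a color-magnitude feature space with the basic spectral clustering of \citet{Ng2002} via its scikit-learn implementation; the feature columns, the RBF scale and the number of output clusters below are assumptions, not the values adopted in the paper.
\begin{verbatim}
# Sketch only: spectral clustering of galaxies in color-magnitude space.
# Feature columns, gamma and n_clusters are assumed values, not the paper's.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

def cluster_galaxies(features, n_clusters=4, gamma=1.0):
    """features: (N, M) array, e.g. columns [r, g-r, r-i, i-z].
    Returns one cluster label per galaxy."""
    X = StandardScaler().fit_transform(features)  # common scale for colors/magnitude
    sc = SpectralClustering(n_clusters=n_clusters,
                            affinity="rbf", gamma=gamma,  # A_ij = exp(-gamma|x_i-x_j|^2)
                            assign_labels="kmeans", random_state=0)
    return sc.fit_predict(X)

# toy example: a tight red-sequence-like clump plus uniform field galaxies
rng = np.random.default_rng(1)
clump = rng.normal([19.5, 1.6, 0.8, 0.4], 0.05, size=(60, 4))
field = rng.uniform([17.0, 0.2, 0.1, 0.0], [22.0, 2.0, 1.2, 0.8], size=(300, 4))
labels = cluster_galaxies(np.vstack([clump, field]))
\end{verbatim}
In a survey application, the resulting group of candidate members would then be matched against the X-ray centroid and its photometric redshifts combined into a cluster redshift; those steps are survey-specific and are not reproduced here.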
16
7
1607.04193
1607
1607.00535_arXiv.txt
Reconnection outflows are highly energetic directed flows that interact with the ambient plasma or with flows from other reconnection regions. Under these conditions the flow becomes highly unstable and chaotic, as does any jet interacting with a medium. We report here massively parallel simulations of two cases: the interaction between outflow jets and the interaction of a single outflow with an ambient plasma. We find in both cases the development of a chaotic magnetic field, subject to secondary reconnection events that further complicate the topology of the field lines. The focus of the present analysis is on the energy balance. We compute each energy channel (electromagnetic, bulk, thermal, for each species) and find where the most energy is exchanged and in what form. The main finding is that the largest energy exchange is not at the reconnection site proper but in the regions where the outflowing jets are destabilized.
The reader will not be surprised to discover that reconnection is the process where magnetic energy is converted into kinetic energy, in the form of bulk and thermal energy~\cite{biskamp}. The last decades have seen great advances in understanding the fundamental physics behind reconnection, a progress that has convinced NASA to launch the Magnetospheric Multiscale (MMS) mission \cite{mms-web}. With MMS, four closely spaced spacecraft can measure the local processes enabling reconnection with unprecedented spatial and temporal resolution \cite{Burchaaf2939}. Alongside the progress made in understanding local reconnection physics, we also need to understand how reconnection acts on macroscopic scales to convert large fractions of energy. The current understanding of reconnection at the kinetic level is based on the formation of reconnection sites where two scales are present \cite{birn-priest}. In an outer region the ions become decoupled from the field lines, and in an inner region the electrons also become decoupled, allowing the topological break and reconnection of field lines. The scale of the inner reconnection region is the electron skin depth or gyroradius, depending on conditions~\cite{gembeta,hesse-guide}. These electron scales are tiny in most systems of interest. In the Earth's magnetosphere the electron scales are at the km scale, in the solar corona at the cm scale and in some laboratory plasmas at the micrometer to nanometer scale (depending on the type of experiment). How can large fractions of energy be converted if the regions of reconnection are so tiny? Do we have to imagine that all the energy needs to pass via these tiny portals? Obviously not. Past research has already indicated two ways out of this conundrum. First, the energy conversion is not limited to the region of field line breakage. Most of the energy is in fact processed as the plasma crosses the so-called separatrices~\cite{lapenta2014separatrices,marty-review}. These are the magnetic surfaces connected to the central reconnection site. In 2D models, this means that reconnection is topologically happening at a central point called the x-point; in 3D this can extend to a so-called x-line or to more complex 3D topologies with null points \cite{priest-forbes}. A well known implementation of this solution is that of Petschek reconnection \cite{petschek}, based on the presence of anomalous resistivity in MHD models: energy is converted by standing slow shocks at the separatrices. The process has an analogy in kinetic reconnection with strong energy conversion produced by the electric field caused by the Hall term at the separatrices \cite{shay2007two}. As the system size is enlarged to macroscopic scales, the kinetic process leads naturally to the formation of slow shocks and tangential discontinuities, vindicating the validity of the Petschek model even within a full kinetic treatment that does not rely on any ad hoc anomalous resistivity \cite{innocenti2015evidence}. Second, there is the possibility of the presence of multiple reconnection sites. If each reconnection site is relatively small and can only process a small amount of energy, large energies can be processed if many reconnection sites are present. This can happen for example in turbulent systems where turbulence breaks up the reconnection process into a myriad of reconnection sites \cite{matthaeus1986turbulent,1999ApJ...517..700L}.
Macroscopically laminar systems can also have many reconnection sites, via a process of instability of a single reconnection site \cite{bulanov1979tearing,loureiro09,uzdensky10,skender}. A single reconnection site has a tendency to break up into many sites and progressively fill larger portions of the system \cite{lapenta2008self,bhattacharjee09}. The different reconnection sites not only multiply the amount of energy processed but can also feed each other, speeding up reconnection and increasing the energy conversion rate even further \cite{lapenta2008self}. We present here a third possible explanation. The outflows of reconnection are highly energetic. The ions travel at Alfvenic speeds and the electrons at highly super-Alfvenic speeds \cite{shay2011super,lapenta2013propagation}. As these flows interact with the ambient plasma or with outflows from neighboring reconnection sites, instabilities develop \cite{vapirev2013formation}. We have recently shown that these instabilities can create new reconnection sites where secondary reconnection processes start \cite{lapenta2015secondary}. We consider here the question of how much energy is converted, locally and on average, in the reconnection outflows when instabilities are observed. Of course the three mechanisms described above are not mutually exclusive. We can certainly expect that reconnection converts large amounts of energy near the separatrices of multiple reconnection regions, formed by the break-up of an initial seed reconnection region, and through the instabilities developing in the outflow from each reconnection site. The obvious tendency is for a spontaneous transition to turbulent reconnection \cite{lapenta-lazarian,bhattacharjee09}, a process we urgently need to investigate. Below, section 2 shows examples of secondary reconnection sites in outflows from isolated reconnection sites interacting with pristine ambient plasmas. Section 3 shows the case of the interaction of outflows from neighboring reconnection sites. Section 4 describes the methodology applied in Sect. 5 to demonstrate that the outflow instabilities are powerful energy converters. Section 6 draws the conclusions of the present investigation.
The process of reconnection produces a highly energetic jet. When reconnection happens at an isolated site, the jet interacts with the surrounding plasma; when multiple reconnection sites are in proximity, jets from neighbouring sites interact. In both situations, a jet becomes destabilized, leading to a chaotic magnetic field topology in the region of interaction. Within it, secondary reconnection develops and strong energy exchange takes place. The local rate of change of the magnetic energy and the work done by the electric field on the particles are stronger by one order of magnitude in the region of interaction of the outflows than in the reconnection region proper. The energy exchanges in the outflows are very strong but also fluctuating, with energy going both from the particles to the field (dynamo or generator) and from the field to the particles (reconnection or load). To assess the actual energy budget, we analyzed the integrated energy balance at each distance away from the reconnection site along the direction of the outflow (direction $x$), integrated in the other directions. Two sets of equations are written: in a Lagrangian and in an Eulerian frame. The Lagrangian frame is of more direct physical relevance as it determines how the plasma species are actually exchanging energy. The Eulerian frame is convenient for comparison with in situ local measurements. The main conclusion is that the front is the region of most intense energy exchange. All the main indicators of net energy transfer reach their peak there: the work done by the electromagnetic fields, the rate of magnetic energy decrease and the divergence of the Poynting flux, and the divergence of the bulk energy and thermal energy fluxes (together forming the enthalpy flux) of each species.
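As an illustration of the kind of diagnostics summarized above (a hedged sketch, not the actual post-processing pipeline used for these simulations), the local exchange term $\mathbf{J}\cdot\mathbf{E}$, the Poynting-flux divergence and the bulk/thermal/magnetic energy densities can be computed from gridded field and moment data and reduced to profiles along the outflow direction $x$; the array layout, normalized units and isotropic-pressure closure are assumptions.
\begin{verbatim}
# Sketch only: energy-exchange proxies from gridded simulation output.
import numpy as np

def energy_profiles(E, B, J, n_s, u_s, P_s, m_s, dx, dy, dz, mu0=1.0):
    """E, B, J, u_s: arrays of shape (3, nx, ny, nz); n_s, P_s: (nx, ny, nz);
    m_s: species mass.  Returns 1D profiles along x (integrated over y, z)."""
    JdotE = np.einsum('i...,i...->...', J, E)          # field-particle exchange
    S = np.cross(E, B, axis=0) / mu0                    # Poynting flux
    divS = (np.gradient(S[0], dx, axis=0) +
            np.gradient(S[1], dy, axis=1) +
            np.gradient(S[2], dz, axis=2))
    bulk = 0.5 * m_s * n_s * np.einsum('i...,i...->...', u_s, u_s)
    thermal = 1.5 * P_s                                 # isotropic-pressure proxy
    magnetic = np.einsum('i...,i...->...', B, B) / (2.0 * mu0)
    integrate = lambda f: f.sum(axis=(1, 2)) * dy * dz  # reduce to x-profiles
    return {k: integrate(v) for k, v in
            dict(JdotE=JdotE, divS=divS, bulk=bulk,
                 thermal=thermal, magnetic=magnetic).items()}
\end{verbatim}
Comparing the profiles returned by such a reduction is what allows the statement above that the energy exchange peaks at the front rather than at the reconnection site proper.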
16
7
1607.00535
1607
1607.03700_arXiv.txt
The early part of a supernova (SN) light-curve is dominated by radiation escaping from the expanding shock-heated progenitor envelope. For polytropic Hydrogen envelopes, the properties of the emitted radiation are described by simple analytic expressions and are nearly independent of the polytropic index, $n$. This analytic description holds at early time, $t<$~few days, during which radiation escapes from shells initially lying near the stellar surface. We use numerical solutions to address two issues. First, we show that the analytic description holds at early time also for non-polytropic density profiles. Second, we extend the solutions to later times, when the emission emerges from deep within the envelope and depends on the progenitor's density profile. Examining the late time behavior of polytropic envelopes with a wide range of core to envelope mass and radius ratios, $0.1\le\Mc/\Menv\le10$ and $10^{-3}\le\Rc/R\le10^{-1}$, we find that the effective temperature is well described by the analytic solution also at late time, while the luminosity $L$ is suppressed by a factor, which may be approximated to better than $20[30]\%$ accuracy up to $t=t_{\rm tr}/a$ by $A\exp[-(at/t_{\rm tr})^\alpha]$ with $t_{\rm tr} = 13 (\Menv/M_\odot)^{3/4}(M/\Menv)^{1/4}(E/10^{51}{\rm erg})^{-1/4}$~d, $M=\Mc+\Menv$, $A=0.9[0.8]$, $a=1.7[4.6]$ and $\alpha=0.8[0.7]$ for $n=3/2[3]$. This description holds as long as the opacity is approximately that of a fully ionized gas, i.e. for $T>0.7$~eV, $t<14(R/10^{13.5}{\rm cm})^{0.55}$~d. The suppression of $L$ at $t_{\rm tr}/a$ obtained for standard polytropic envelopes may account for the first optical peak of double-peaked SN light curves, with first peak at a few days for $\Menv<1M_\odot$.
\label{sec:intro} During a supernova (SN) explosion, a strong radiation mediated shock wave propagates through and ejects the stellar envelope. As the shock expands outwards, the optical depth of the material lying ahead of it decreases. When the optical depth drops below $\approx c/\vt_{\rm sh}$, where $\vt_{\rm sh}$ is the shock velocity, radiation escapes ahead of the shock and the shock dissolves. In the absence of an optically thick circum-stellar material, this breakout takes place once the shock reaches the edge of the star, producing an X-ray/UV flash on a time scale of $R/c$ (seconds to a fraction of an hour), where $R$ is the stellar radius. The relatively short breakout is followed by UV/optical emission from the expanding cooling envelope on a day time-scale. As the envelope expands its optical depth decreases, and radiation escapes from deeper shells. The properties of the breakout and post-breakout cooling emission carry unique information on the structure of the progenitor star (e.g. its radius and surface composition) and on its pre-explosion evolution, which cannot be directly inferred from observations at later time. The detection of SNe on a time scale of a day following the explosion, which was enabled recently by the progress of wide-field optical transient surveys, yielded important constraints on the progenitors of SNe of type Ia, Ib/c and II. For a recent comprehensive review of the subject see \citet[][]{WK16BOrev}. At radii $r$ close to the stellar surface, $\delta\equiv (R-r)/R\ll1$, the density profile of a polytropic envelope approaches a power-law form, \begin{equation}\label{eq:rho0} \rho_0=f_\rho \bar{\rho}_0\delta^n, \end{equation} with $n=3$ for radiative envelopes and $n=3/2$ for efficiently convective envelopes. Here, $\overline{\rho}_0\equiv M/(4\pi/3)R^3$ is the average pre-explosion ejecta density, $M$ is the ejecta mass (excluding the mass of a possible remnant), and $f_\rho$ is a numerical factor of order unity that depends on the inner envelope structure \citep[see][and \S~\ref{sec:cond}, fig.~\ref{fig:frho}]{MM99,Calzavara04}. The propagation of the shock wave in this region is described by the Gandel'Man-Frank-Kamenetskii--Sakurai self similar solutions \citep{GandelMan56,Sakurai60}, \begin{equation}\label{eq:vs} \vt_{\rm sh}=\vt_{\rm s*} \delta^{-\beta n}, \end{equation} with $\beta=0.191[0.186]$ for $n=3/2[3]$. The value of $\vt_{\rm s*}$ depends not only on $E$ and $M$, the ejecta energy and mass, but also on the inner envelope structure, and is not determined by the self-similar solutions alone. Based on numerical calculations, \citet{MM99} have suggested the approximation \begin{equation}\label{eq:vstar} \vt_{\rm s*} \approx 1.05f_\rho^{-\beta}\sqrt{E/M}. \end{equation} For large Hydrogen-dominated envelopes the plasma is nearly fully ionized at early time and the opacity $\kappa$ is nearly time and space independent. 
In this case, the post-breakout photospheric temperature and bolometric luminosity are given, after significant envelope expansion, by \citep[][hereafter RW11]{WaxmanCampana07,RW11} \begin{eqnarray} \label{eq:RW11} \Tph &=& 1.61[1.69]\left(\frac{\vt^2_{\rm s*,8.5}t_{\rm d}^2}{f_\rho M_0\kappa_{0.34}}\right)^{\epsilon_1} \frac{R_{13}^{1/4}}{\kappa^{1/4}_{0.34}}t_{\rm d}^{-1/2}\,{\rm eV}, \nonumber\\ \LRW &=& 2.0~[2.1]\times10^{42} \left(\frac{\vt_{\rm s*,8.5}t_{\rm d}^2}{f_\rho M_0\kappa_{0.34}}\right)^{-\epsilon_2} \frac{\vt^2_{\rm s*,8.5}R_{13}}{\kappa_{0.34}}\,{\rm \frac{erg}{s}}, \end{eqnarray} where $\kappa=0.34\kappa_{0.34}{\rm cm^2/g}$, $\vt_{\rm s*}=10^{8.5}\vt_{\rm s*,8.5}{\rm cm/s}$, $M=1M_0M_\odot$, $R=10^{13}R_{13}{\rm cm}$, $\epsilon_1=0.027[0.016]$, and $\epsilon_2=0.086[0.175]$ for $n=3/2[3]$. This analytic description holds at times \begin{eqnarray}\label{eq:t_limits} t&>&0.2\frac{R_{13}}{\vt_{\rm s*,8.5}}\max\left[0.5,\frac{R_{13}^{0.4}}{(f_\rho\kappa_{0.34}M_0)^{0.2}\vt_{\rm s*,8.5}^{0.7}} \right]\,{\rm d}, \nonumber\\ t&<& t_\delta=3f_\rho^{-0.1}\frac{\sqrt{\kappa_{0.34}M_0}}{\vt_{\rm s*,8.5}}\,{\rm d}. \end{eqnarray} The first part of the lower limit, $t>R/5\vt_*$, is set by the requirement for significant expansion \citep[the shock accelerates to $>5\vt_{\rm s*}$ near the surface,][]{WK16BOrev}, while the second part is set by the requirement that the photosphere penetrates beyond the thickness of the shell at which the initial breakout takes place (where the hydrodynamic profiles deviate from the self-similar ones due to the escape of photons; see eq.~(16) of RW11). The upper limit is set by the requirement for emission from shells carrying a fraction $\delta M/M<10^{-2.5}$ of the ejecta mass, corresponding approximately to $\delta\lesssim 0.1$ (RW11). The approximation of constant opacity holds for $T>0.7$~eV (at lower temperatures the effect of recombination becomes significant, see RW11 and fig.~\ref{fig:opacity}). At $T>0.7$~eV, the ratio of color to photospheric temperature may be approximated by (RW11) $\Tcol/T_{\rm ph}\approx 1.2$. \begin{figure}[h] \epsscale{1} \plotone{Opac_H_HE_70_30.eps} \caption{Scattering opacity for a 30:70 (by mass) He:H mixture, at the relevant temperatures and densities. Recombination leads to opacity reduction. Similar results are obtained for solar metallicity.} \label{fig:opacity} \end{figure} In RW11, $L$ and $T$ are given as functions of $E/M$ using the approximation of eq.~(\ref{eq:vstar}). Here we give $L$ and $T$ as functions of $\vt_{\rm s*}$, since this is the quantity that determines directly the emission properties, and hence constrained directly by observations, and since our numerical solutions allow us to determine $\vt_{\rm s*}$ directly, and hence to quantify the accuracy of the approximation of eq.~(\ref{eq:vstar}). Also, since our discussion is limited to the regime of time and space independent opacity, we use for $L$ the exact self-similar solution, which is available for this case \citep[][eqs. 19-20 of RW11]{Chevalier92,ChevalierFransson08}, instead of the approximate expressions (eqs. 14-15 of RW11), which differ slightly from the expressions given for $L$ in eqs.~(\ref{eq:RW11}) (in the approximate expressions, the numerical coefficients are $1.8[2.4]\times10^{42}$ and $\epsilon_2=0.078[0.15]$ for $n=3/2[3]$, and the dependence on $\vt_{\rm s*}$ is $L\propto\vt_{\rm s*}^{2-2\epsilon_2}$ instead of $\vt_{\rm s*}^{2-\epsilon_2}$, see \S~\ref{sec:LateForm}). 
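For orientation, eqs.~(\ref{eq:RW11}) are simple enough to transcribe directly; the following hedged sketch evaluates the $n=3/2$ branch for an illustrative choice of parameters (the fiducial numbers are assumptions used only to exercise the formulae, not fits to any event).
\begin{verbatim}
# Hedged transcription of eqs. (RW11), n = 3/2 branch; coefficients from the text.
def rw11_n32(t_d, R13=1.0, vs85=1.0, M0=1.0, f_rho=1.0, kappa034=1.0):
    """t_d in days; returns (T_ph in eV, L in erg/s)."""
    eps1, eps2 = 0.027, 0.086
    T_ph = (1.61 * (vs85**2 * t_d**2 / (f_rho * M0 * kappa034))**eps1
            * R13**0.25 / kappa034**0.25 / t_d**0.5)
    L = (2.0e42 * (vs85 * t_d**2 / (f_rho * M0 * kappa034))**(-eps2)
         * vs85**2 * R13 / kappa034)
    return T_ph, L

# illustrative call: R ~ 500 R_sun (R13 ~ 3.5), vs85 ~ 0.6, M ~ 15 M_sun, t = 1 d
T_ph, L = rw11_n32(1.0, R13=3.5, vs85=0.6, M0=15.0)  # ~2 eV and a few 1e42 erg/s
\end{verbatim}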
A comment is in order here regarding the use of eqs.~(\ref{eq:RW11}), following \cite{WaxmanCampana07} and RW11, versus the use of the rather similar results of \citet[][NS10]{NS10}. NS10 derived approximate expressions for the luminosity and temperature of both the breakout and post-breakout cooling emission. For the planar breakout phase, their estimates of $L$ and $T$ exceed those of the exact solutions \citep{SKW11Planar,KSW12Bol,SKW13Spec} by factors of a few \citep[leading to an overestimate of the optical/UV flux, which is in the Rayleigh-Jeans regime at this time, by 1-2 orders of magnitude, e.g.][]{Ganot16}. For the spherical post-breakout cooling phase, the NS10 estimate of $L(t)$ is similar to that of RW11 (similar functional dependence on parameters, with normalization lower[higher] by 10[40]\% compared to the exact self-similar solution of eq.~\ref{eq:RW11} for $n=3/2[3]$). The temporal and parameter dependence of the color temperature estimate of NS10 differs from that of RW11, mainly due to neglecting the bound-free absorption contribution to the opacity, which is the dominant contribution at the relevant temperatures (even for low metallicity). Due to the bound-free contribution, which is not described by Kramers' opacity law (and therefore does not follow the parameter dependence of the free-free opacity), $T_{\rm col}$ is closer to $T_{\rm ph}$ than predicted by NS10 \citep[see also the results of][showing a constant ratio of $\Tcol/T_{\rm ph}$ at late times]{1992ApJ...393..742E}. For example, for a red supergiant explosion with typical parameters ($M=15 M_\odot$, $R=500 R_{\odot}$, $E=10^{51} \erg$), the NS10 color temperature exceeds that of RW11 by 50\% at 1~d (3~eV instead of 2~eV), which implies that inferring $R$ from the observed $T_{\rm col}$ using the NS10 model would lead to an over-estimate of the radius by a factor of $\approx4$. Thus, while the approximate NS10 results may be used for a rough description of the emission, the more accurate results of RW11 are more appropriate for inferring progenitor parameters from observations. In this paper we use numerical solutions of the post-breakout emission to address two issues. First, we study the applicability of the analytic solution, given by eqs.~(\ref{eq:RW11}), to non-polytropic envelopes. Eqs.~(\ref{eq:RW11}) imply that $T$ is nearly independent of $n$ and essentially determined by $R$ alone, while $L$ is only weakly dependent on $n$ and determined mainly by $\vt_{\rm s*}^2R$. The near independence of $n$ suggests that the properties of the post-breakout cooling emission are nearly independent of the density profile, and therefore that eqs.~(\ref{eq:RW11}) hold also for non-polytropic envelopes. We use numerical solutions of the post-breakout emission from non-polytropic envelopes to demonstrate that this is indeed the case. In particular, we show that deviations from polytropic profiles, which are obtained by numerical stellar evolution models such as those explored by \cite{Morozova16IIP_numeric_BO}, do not lead to significant deviations from the predictions of eqs.~(\ref{eq:RW11}). Second, we extend the analysis to $t\sim t_{\rm tr}$, when the envelope becomes transparent and emission is not limited to $\delta\ll1$ shells. At this stage, the emission is expected to depend on the envelope density structure.
We present numerical solutions for progenitors composed of compact cores of radius $10^{-3}\le\Rc/R\le10^{-1}$ and mass $10^{-1}\le\Mc/M\le10^{1}$, surrounded by extended H-dominated $n=3/2$ and $n=3$ polytropic envelopes of mass $\Menv=M-\Mc$, and provide analytic approximations describing the deviation from eqs.~(\ref{eq:RW11}) at late time (in our numerical calculations the entire core mass $\Mc$ is ejected; the results are not sensitive to the presence of a remnant). As explained in \S~\ref{sec:LateForm}, $T_{\rm ph}$ and $L$ are given at $t\gg R/\vt_{\rm s*}$ by \begin{eqnarray} \label{eq:T_L_dimensions} T_{\rm ph} &=& f_T\left(\xi,c/\vt_{\rm s*},\alpha_i\right) \left(\frac{R}{\kappa t^2}\right)^{1/4}, \nonumber\\ L &=& f_L\left(\xi,c/\vt_{\rm s*},\alpha_i\right) \left(\frac{c\vt_{\rm s*}^2R}{\kappa}\right), \end{eqnarray} where $f_T$ and $f_L$ are $R$-independent dimensionless functions of the dimensionless variable $\xi\equiv c\vt_{\rm s*}t^2/\kappa \Menv$, of $c/\vt_{\rm s*}$ and of a set of dimensionless parameters $\alpha_i$ determining the progenitor structure ($n,\Mc/M$,$\Rc/R$). We use our numerical calculations to determine $f_L$ and $f_T$ and to study their dependence on $\alpha_i$. Our approach is complementary to that using numerical calculations to derive the post breakout emission properties for progenitor structures ($\alpha_i$), which are determined by stellar evolution calculations under specific assumptions regarding processes (like convection and mass loss), for which a basic principles theory does not yet exist. Uncertainties in $\alpha_i$ arise due to the absence of such a theory, as reflected in the varying results obtained by different numerical calculations. Our analysis enables us to explore a wide range of progenitor parameters, to determine which characteristics of the emission are not sensitive to uncertainties in $\alpha_i$ (due to uncertainties in stellar evolution models), and to determine the dependence on $\alpha_i$ of the characteristics which are sensitive to these uncertainties. This paper is organized as follows. The equations solved and the initial conditions used are described in \S~\ref{sec:EqCond}. We solve the radiation hydrodynamics equations, using the diffusion approximation with constant opacity. The general form of the solutions at $t\gg R/\vt_{\rm s*}$ (eq.~\ref{eq:T_L_dimensions}) is derived in \S~\ref{sec:LateForm}. The numerical results are presented in \S~\ref{sec:results}. A summary of the analytic formulae, which provide an approximate description of the post-breakout cooling emission, is given in \S~\ref{sec:analytic}. Double-peaked SN light curves are discussed in \S~\ref{sec:2peak}. In \S~\ref{sec:discussion} our results are summarized and discussed, with a focus on the implications for what can be learned about the progenitors from post-breakout emission observations.
\label{sec:discussion} \subsection{Early time, $t<t_\delta=$few days} We have used numerical calculations to demonstrate that the early, $t<t_\delta=$few days (see eq.~(\ref{eq:t_limits})), envelope cooling emission is not sensitive to the details of the density profile of the envelope (see figs.~\ref{fig:Teff32}-\ref{fig:Lq3}). The emission is well described by eqs.~(\ref{eq:RW11}), with $T_{\rm ph}$ determined mainly by $R$, and $L$ determined mainly by $\vt_{\rm s*}^2R$. For $\Mc/\Menv\le1$, the ratio of $\Tcol$ (see \S~\ref{sec:rad}, eqs.~(\ref{eq:L_spec},\ref{eq:gBB})), obtained from the numerical calculations, to $T_{\rm ph}$, given by eq.~(\ref{eq:RW11}), is $1.1[1.0]\pm0.05$ for $n=3/2[3]$, with weak sensitivity to metallicity in the relevant temperature range (this value is somewhat lower than that obtained in RW11, 1.2, who considered a pure He:H mixture; for large radii, $R>10^{13.5}$~cm, and large values of $\Mc/\Menv$, $\Mc/\Menv=10$, $\Tcol/T_{\rm ph}$ is lower by $\approx10\%$; see figs.~\ref{fig:Tc32} and ~\ref{fig:Tc3}). The weak dependence of the early emission on the density structure, reflected in the very weak dependence of $\Tcol/T_{\rm ph}$ and of $L$ and $T$ in eqs.~(\ref{eq:RW11}) on $n$ and model parameters other than $R$ and $\vt_{\rm s*}^2$, implies that $R$ and $\vt_{\rm s*}^2$ may be inferred accurately and robustly from the observations of the early UV/optical emission. The approximate relation between $\vt_{\rm s*}$ and $E/M$, given by eq.~(\ref{eq:vstar}), holds to better than 10\% for $0.3<\Mc/M_{\rm env}<3$ (see fig.~\ref{fig:vst}). The dependence of $f_\rho$ on $n$ and on $\Mc/M_{\rm env}$, approximately given $\Rc/R\ll1$ by $f_\rho=(\Menv/\Mc)^{1/2}$ and $f_\rho=0.08(\Menv/\Mc)$ for $n=3/2$ and $n=3$ (see fig.~\ref{fig:frho}), implies that the relation between $\vt_{\rm s*}$ and $E/M$ depends on the ejecta structure. $E/M$ may be inferred from $\vt_{\rm s*}$ by $E/M=0.9[0.3]\vt_{\rm s*}^2$ for $n=3/2[3]$ with $5[30]\%$ accuracy for $0.3<\Mc/M_{\rm env}<3$ (for progenitors with $n=3$ envelopes and large core radii, $\Rc/R\approx0.1$, $f_\rho$ is larger and $E/M=0.5\vt_{\rm s*}^2$ is a better approximation; see fig.~\ref{fig:frho}). Conversely, a comparison of $\vt_{\rm s*}$, determined by early UV observations, and $E/M$, determined by other late time observations (e.g. spectroscopic ejecta velocity), will constrain the progenitor structure. \subsection{Late time, $t>t_\delta$} We have extended the solutions to $t\sim t_{\rm tr}$, see eq.~(\ref{eq:tstar}), when the emission emerges from deep within the envelope and depends on the progenitor's density profile. The expression given in the abstract for $t_{\rm tr}$ is obtained from eq.~(\ref{eq:tstar}) using the approximation of eq.~(\ref{eq:vstar}) for $\vt_{\rm s*}$, dropping for the sake of simplicity the dependence on $f_\rho$, $t_{\rm tr} = 13f_\rho^{\beta/2}(\Menv/M_\odot)^{3/4}(M/\Menv)^{1/4}(E/10^{51}{\rm erg})^{-1/4}$~d. We have shown (see \S~\ref{sec:LateForm}) that the dependence of $L$ and $T$ on the progenitor parameters is of the general form of eq.~(\ref{eq:T_L_dimensions}), and used the numerical solutions to determine the dimensionless functions $f_T$ and $f_L$ for polytropic, $n=3/2$ and $n=3$, envelopes with a wide range of core to envelope mass and radius ratios, $0.1<\Mc/(M-\Mc)<10$, $0.001<\Rc/R<0.1$. 
We have found that $T$ is well described by the analytic solution also at late time (for low mass envelopes $T$ may drop at late time by $\sim20\%$ below the analytic prediction, see eq.~\ref{eq:Mevn_min_rec} and fig.~\ref{fig:Tc32_Menv1}), while $L$ is suppressed by a factor which depends mainly on $n$ (and only weakly on $\Rc/R$ and $\Mc/M$), and may be approximated to $\approx20\%$ accuracy up to $t=t_{\rm tr}/a(n)$ by the analytic approximations of eq.~(\ref{eq:Lq}). For very large progenitors, $R>10^{13.5}$~cm, with low mass envelopes, $\Menv\le1M_\odot$, the separation of the time scales $R/\vt_{\rm s*}$ and $t_{\rm tr}/a$ is not large, and the analytic expression for $L$ given by eqs.~(\ref{eq:RW11}), which holds for $R/\vt_{\rm s*}\ll t\ll t_{\rm tr}/a$, is not accurate at any time. However, as demonstrated in fig.~\ref{fig:Lq32}, the approximation for $L$ obtained using eqs.~(\ref{eq:RW11}) with the suppression factor of eq.~(\ref{eq:Lq}) is accurate to better than $10\%$ up to $t=0.1t_{\rm tr}$ also in this case. This implies that $R\vt_{\rm s*}^2$ (and hence $\vt_{\rm s*}^2$) may be accurately determined from the bolometric luminosity $L$ at early time also for very large progenitors, $R>10^{13.5}$~cm, with low mass envelopes. It is worth noting that the suppression of $L$ at $t>t_\delta$ implies that using eqs.~(\ref{eq:RW11}) to infer $R$ from the luminosity observed at $t>t_\delta$ would lead to an underestimate of $R$ due to the overestimate of $L$, as demonstrated in fig.~\ref{fig:2peak} (compare the solid and dashed curves) and as discussed by \citet{Rubin15}. We have shown (see fig.~\ref{fig:2peak}) that the suppression of $L$ at $t_{\rm tr}/a(n)$ obtained for standard polytropic envelopes may account for the first optical peak of double-peaked SN light curves, with the first peak at a few days for $\Menv<1M_\odot$. The suppression of the bolometric luminosity is consistent with the observed behavior, and does not require a non-polytropic envelope with a special structure, e.g. where the mass is initially concentrated at $r\sim R$. The time at which the bolometric luminosity is suppressed corresponds to $t_{\rm tr}/a(n)$ and hence constrains $\Menv/\vt_{\rm s*}$ (see eq.~(\ref{eq:tstar})), while the luminosity constrains $\vt_{\rm s*}^2R$. It is important to emphasize that these parameters cannot be determined accurately from the observations, since the emission at $t>t_\delta$ depends on the detailed structure of the progenitor (see discussion at the end of \S~\ref{sec:2peak}). Finally, we emphasize that our analysis holds as long as the opacity is approximately that of a fully ionized gas, i.e. for $T>0.7$~eV, $t<14R_{13.5}^{0.55}$~d. At lower temperatures, recombination leads to a strong decrease of the opacity (see fig.~\ref{fig:opacity}) and the photosphere penetrates deep into the ejecta, to a depth where the temperature is sufficiently high to maintain significant ionization and large opacity, implying that $T$ does not drop significantly below $\sim 0.7$~eV. This enhances the dependence on the details of the envelope structure and implies that detailed radiation transfer models are required to describe the emission (our simple approximations for the opacity no longer hold). \subsection{The importance of early UV observations} An accurate determination of $R$ requires an accurate determination of $T$ at a time when eq.~(\ref{eq:RW11}) holds and $T$ depends mainly on $R$, i.e. when $T>0.7$~eV.
An accurate determination of $T$ requires, in turn, observations at $\lambda<hc/4T=0.3(T/1{\rm eV})^{-1}\mu$, in order to identify the peak in the light curve, which is obtained when $T$ crosses $T_\lambda\approx hc/4\lambda$ (or by identifying the spectral peak provided reddening can be corrected for, RW11). Since the emission peaks below $0.3\mu$ for $T>1$~eV, UV observations at $\lambda<0.3~\mu$ (which must be carried out from space) will enable one to reliably determine $T$ and $R$ (and hence also $\vt_{\rm s*}$). On the other hand, observations at $\lambda\ge0.44\mu$ (B-band or longer), corresponding to $T_\lambda=hc/4\lambda\le0.7$~eV, will not enable one to accurately determine $T$ and $R$. Observations in the U-band, $\lambda=0.36\mu$ corresponding to $hc/4\lambda=0.8$~eV, will provide less accurate results than UV observations due to the strong temperature dependence of the opacity at slightly lower temperatures.
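As a compact summary of the late-time behaviour quoted in the abstract and discussed above, the following hedged transcription evaluates $t_{\rm tr}$ and the luminosity suppression factor $A\exp[-(at/t_{\rm tr})^\alpha]$ that multiplies the early-time $L$ of eqs.~(\ref{eq:RW11}); the example envelope parameters are arbitrary choices.
\begin{verbatim}
# Sketch: transparency time and late-time luminosity suppression from the text.
import math

def t_transparent(Menv_Msun, M_Msun, E51):
    """t_tr in days, with the f_rho dependence dropped as in the abstract."""
    return 13.0 * Menv_Msun**0.75 * (M_Msun / Menv_Msun)**0.25 * E51**-0.25

def suppression(t_d, t_tr, n=1.5):
    """Factor multiplying the analytic L; n = 1.5 (3/2) or 3 polytrope."""
    A, a, alpha = (0.9, 1.7, 0.8) if n == 1.5 else (0.8, 4.6, 0.7)
    return A * math.exp(-(a * t_d / t_tr)**alpha)

# example: Menv = 0.5 Msun, M = 4 Msun, E = 1e51 erg -> t_tr ~ 13 d,
# and L is already suppressed to ~60% of the analytic value at t = 3 d
ttr = t_transparent(0.5, 4.0, 1.0)
print(ttr, suppression(3.0, ttr, n=1.5))
\end{verbatim}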
16
7
1607.03700
1607
1607.04316_arXiv.txt
We study the possibility of obtaining velocity spectra of turbulence in an optically thick medium using velocity centroids (VCs). We find that the regime of universal fluctuations, i.e. fluctuations independent of the underlying turbulence statistics, discovered originally within the velocity channel analysis (VCA), carries over to the statistics of VCs. In other words, for large absorption the VCs lose their ability to reflect the spectra of turbulence. Combining our present study with the earlier studies of centroids in Esquivel \& Lazarian, we conclude that centroids are applicable to studies of subsonic/transsonic turbulence over a range of scales that is limited by absorption effects. We also consider VCs based on absorption lines and define the range of their applicability. We address the problem of an analytical description of the spectra and anisotropies of fluctuations that are available through studies using VCs. We obtain the spectra and anisotropy of VC fluctuations arising from Alfv\'en, slow and fast modes that constitute the compressible MHD cascade to address the issue of anisotropy of VC statistics, and show how the VC anisotropy can be used to find the magnetization of the medium as well as to identify and separate contributions from Alfv\'en, slow and fast modes. Our study demonstrates that VCs are complementary to the tools provided by the VCA. In order to study turbulent volumes for which the resolution of single dish telescopes is insufficient, we demonstrate how the studies of anisotropy can be performed using interferometers.
\begin{align}\label{centroidcorrm} \xi(\bm{R})=\frac{1}{2\upi}\int_{-S}^S\mathop{\mathrm{d}z}\left(1-\frac{|z|}{2S}\right)\int_{-\infty}^\infty\mathop{\mathrm{d}v}\int_{-\infty}^\infty\mathop{\mathrm{d}v_+}\left(v_+^2-\frac{v^2}{4}\right)\nonumber\\ \frac{\xi_\rho(\bm{r})}{\sqrt{D_z(\bm{r})+2\beta_{\text{T}}}} \exp\left[-\frac{v^2}{2(D_z(\bm{r})+2\beta_{\text{T}})}\right]\sqrt{\frac{2}{D^+(S, \bm{r})}}\nonumber\\ \exp\left[-\frac{v_+^2}{D^+(S, \bm{r})}\right]~, \end{align} where \begin{equation} D^+(S, \bm{r})\equiv \beta_{\text{T}}+D_z(S)-D_z(\bm{r})/2~, \end{equation} and $D_z(\bm{r})$ is the $z$-projected velocity structure function, $\beta_{\text{T}}\equiv k_BT/m$ is the thermal broadening, $m$ being the mass of the atoms, $T$ being the temperature and $k_B$ being the Boltzmann constant. After performing the integration over $v$, we finally obtain \begin{equation}\label{maincorr} \xi(\bm{R})=\frac{1}{2}\int_{-S}^S\mathop{\mathrm{d}z}\left(1-\frac{|z|}{2S}\right)\xi_\rho(\bm{r})(D_z(S)-D_z(\bm{r}))~. \end{equation} Notice that our formalism cleanly shows how thermal effects drop out in centroids upon carrying out the integral in Eq. \eqref{centroidcorrm} to obtain Eq. \eqref{maincorr}. This shows that the turbulence velocity spectrum can be recovered with centroids regardless of the temperature\footnote{For very hot plasmas, noise levels can distort centroid statistics. See Sec. \ref{finiteres} for more clarification.}, which is distinct from other techniques (e.g. VCA). The centroid structure function is defined as \begin{equation}\label{eq:defcenstr} \mathcal{D}(R)=\left\langle\left[C(\bm{X}_1+\bm{R})-C(\bm{X}_1)\right]^2\right\rangle~. \end{equation} Utilising Eqs. \eqref{maincorr} and \eqref{eq:defcenstr}, we finally obtain the centroid structure function \begin{align}\label{correlation} \mathcal{D}(\bm{R})\approx\int_{-S}^S\mathop{\mathrm{d}z}\bigg\{D_z(S)\left(\xi_\rho(0,z)-\xi_\rho(\bm{r})\right)+[\xi_\rho(\bm{r})D_z(\bm{r})\nonumber\\ -\xi_\rho(0,z)D_z(0,z)]\bigg\}~. \end{align} With the assumption of zero correlation between density and velocity, the above result for an optically thin line is identical to that obtained in LE03, where the same result was obtained by directly utilising Eq. \eqref{firstdef}. Working from first principles in the PPV space, as is done in this paper, is especially useful to deal with centroids in the presence of self-absorption. For a constant density field and at $R\ll S$, the centroid structure function is \begin{equation} \mathcal{D}(R)\propto R^{1+\nu}~, \end{equation} which is the regular centroid scaling. We use this scaling further in this paper.
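As a purely illustrative sketch (not taken from the paper), unnormalised centroids and the structure function of eq.~(\ref{eq:defcenstr}) can be estimated from a PPV cube as follows; the periodic boundaries and the averaging over only the two axis directions are simplifying assumptions.
\begin{verbatim}
# Sketch only: unnormalised centroids from a PPV cube and their structure function.
import numpy as np

def centroid_map(ppv, v_axis):
    """C(X) = integral of v * I(X, v) dv; ppv has shape (nx, ny, nv)."""
    dv = v_axis[1] - v_axis[0]
    return np.tensordot(ppv, v_axis, axes=([2], [0])) * dv

def centroid_structure_function(C, max_lag=32):
    """Isotropised D(R) = <[C(X+R) - C(X)]^2> from pixel shifts along x and y.
    Periodic boundaries are assumed for simplicity (np.roll wraps around)."""
    lags = np.arange(1, max_lag)
    D = np.empty(lags.size)
    for i, R in enumerate(lags):
        dx = C - np.roll(C, R, axis=0)
        dy = C - np.roll(C, R, axis=1)
        D[i] = 0.5 * (np.mean(dx**2) + np.mean(dy**2))
    return lags, D
\end{verbatim}
Fitting the returned $D(R)$ with a power law then gives the slope $1+\nu$ referred to above; an anisotropy study would instead keep the $x$ and $y$ (or field-parallel and field-perpendicular) lags separate.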
\label{discuss} \subsection{Foundations of the technique} In this paper, we improve the understanding of centroids by studying unnormalised centroids in the presence of self-absorption, carrying out an absorption line study and studying the effects of anisotropies in MHD turbulence. Unlike past works, we explicitly use PPV space for the absorption line study and the study of self-absorption. This work shows the strength and usefulness of the PPV space formalism developed in \citetalias{lazarian2000velocity}. In view of the modern understanding of MHD turbulence (see \citealt{beresnyak2015mhd} for a review), we study the anisotropy of the centroid correlation through explicit calculations on a mode-by-mode basis. Theoretical and numerical research (\citetalias{goldreich1995toward}, \citealt{lithwick2001compressible}, \citealt{cho2002compressible, cho2003compressible}, \citealt{kowal2010velocity}) suggests that MHD turbulence can be viewed as a superposition of the cascades of Alfv\'en, slow and fast modes. The statistical properties of these cascades in the global frame of reference were obtained in \citetalias{lazarian2012statistical} for the magnetic fields, while a similar study was carried out in \citetalias{kandel2016extending} for the velocity field. In particular, \citetalias{kandel2016extending} studied turbulence anisotropies by making use of the VCA technique. \subsection{Areas of applicability of centroids} One of the major advantages of centroids is their ability to study turbulence in the subsonic regime using the main species, e.g. hydrogen. On the other hand, previous studies suggested that centroids are not reliable in the supersonic regime (\citealt{esquivel2005velocity}; \citealt{esquivel2007statistics}). This is distinct from the VCA, which is applicable to supersonic turbulence. While thermal broadening does not affect the centroid statistics in the case of an optically thin medium, as shown in Sec. \ref{othick}, thermal broadening might be an important parameter that determines whether or not we can recover the velocity statistics in the case of an optically thick medium. Originally the centroids were developed for emission lines. However, our study in Sec. \ref{abline} extends the UVC technique to study turbulence using absorption lines, which could come from a collection of point sources or from a spatially extended source. In fact, as we have shown in Sec. \ref{abline}, one may be able to recover turbulence statistics at sufficiently small scales using absorption lines if one uses the centroids of the logarithm of intensity. The prime objects of study using centroids are turbulence in the diffuse ISM of the Milky Way and other galaxies and in intergalactic gas in clusters of galaxies, using multi-wavelength single dish and interferometric measurements. The advantage of an interferometric study is that one just needs a few measurements rather than restoring the entire PPV cubes to be able to perform the studies. \subsection{Model assumptions} In this paper, we adopted several model assumptions to make our analysis possible. One of the main assumptions is that the fluctuations are Gaussian. This assumption is satisfied by the velocity field to an appreciable degree (see \citealt{MoninYaglomLumley197504}). We do not make any assumption about the Gaussianity of the density field in the case when the turbulent medium is optically thin, as well as for absorption line studies, but we use the Gaussian approximation for the PPV-space density to understand the main effects of self-absorption.
While LE03 derived a general expression for the centroid structure function keeping in mind that velocity and density might be correlated, we assumed that they are not. In fact, \cite{esquivel2007statistics} investigated the effects of the density-velocity correlation and showed that this correlation is not important if $\sigma_\rho/\rho_0\lesssim 1$. If this condition is not fulfilled, one should develop a model of the density-velocity correlation, perhaps based on numerical simulations, to retrieve full information from the centroids. Our analysis of centroid anisotropy was based on the decomposition of MHD turbulence into Alfv\'en, slow and fast modes. This decomposition is reasonable only when the coupling between the modes is marginal. \cite{cho2002compressible} showed the degree of coupling between different modes to be moderate as long as the sonic Mach number is not very high. Since the main regime where centroids are reliable is subsonic turbulence, this condition may not be restrictive for our purpose. \subsection{New power of centroids} This paper improves the usefulness of the UVC technique by providing an analytical description of the technique in the presence of self-absorption as well as for the absorption line study. We believe that our study will be complementary to the study in \citetalias{lazarian2004velocity}, where the effects of self-absorption were studied in the context of the VCA. We also extend the ability of the centroids technique to study the magnetization of a medium and the direction of the magnetic field, and explore the possibility of separating contributions of Alfv\'en, slow and fast modes. The separation of modes is important because different modes have different astrophysical impacts. As an example, Alfv\'en modes are essential for magnetic field reconnection (\citealt{lazarian1999reconnection}, see also \citealt{lazarian2015turbulent} and ref. therein), superdiffusion of cosmic rays (\citealt{lazarian2014superdiffusion}), etc. On the other hand, fast modes are important for resonance scattering of cosmic rays (\citealt{yan2002scattering}). The possible ability of centroids to obtain the relative contributions of these different modes complements the corresponding ability of the techniques introduced in \citetalias{lazarian2012statistical} and \citetalias{0004-637X-818-2-178} for synchrotron data and in \citetalias{kandel2016extending} for spectroscopic data. \subsection{Centroids and other techniques} In this paper, we studied centroids by extensively making use of the PPV space formalism. We have also discussed and compared centroids with the VCA and VCS, techniques that were also developed using the PPV space formalism. Centroids have been used to analytically study anisotropies in this paper, while anisotropies were studied using the VCA in \citetalias{kandel2016extending}. Another technique called principal component analysis (PCA, see \citealt{brunt2002interstellar}) can also be used to study turbulence anisotropies. However, unlike centroids and the VCA, it is not easy to quantify PCA using PPV data. Nevertheless, recent studies have shown the sensitivity of PCA to the phase information (\citealt{0004-637X-818-2-118}), although the trend is not yet clear. Another important technique to study turbulence using velocity slices of PPV space is the spectral correlation function (SCF, see \citealt{rosolowsky1999spectral}). The SCF is very similar to the VCA if one removes the adjustable parameters from the SCF.
In fact, both SCF and VCA measure correlations of intensity in velocity slices of PPV, but the SCF treats outcomes empirically. There also exist numerous techniques identifying and analyzing clumps and shells in PPV (see \citealt{houlahan1992recognition}; \citealt{williams1994determining}; \citealt{stutzki1990high}; \citealt{pineda2006complete}; \citealt{ikeda2007survey}). Besides the VCA and centroids, there are also some other techniques to study sonic and Alfv\'en Mach numbers. Some of these techniques include so called Tsallis statistics (see \citealt{esquivel2010tsallis}; \citealt{tofflemire2011interstellar}), bi-spectrum (see \citealt{burkhart2009density}), genus analysis (see \citealt{chepurnov2008topology}), etc. Using different available techniques allows one to obtain a comprehensive picture of MHD turbulence.
16
7
1607.04316
1607
1607.01019_arXiv.txt
The majority of galaxies in the local Universe exhibit spiral structure with a variety of forms. Many galaxies possess two prominent spiral arms, some have more, while others display a many-armed flocculent appearance. Spiral arms are associated with enhanced gas content and star-formation in the disks of low-redshift galaxies, so are important in the understanding of star-formation in the local universe. As both the visual appearance of spiral structure, and the mechanisms responsible for it vary from galaxy to galaxy, a reliable method for defining spiral samples with different visual morphologies is required. In this paper, we develop a new debiasing method to reliably correct for redshift-dependent bias in Galaxy Zoo 2, and release the new set of debiased classifications. Using these, a luminosity-limited sample of $\sim$18,000 Sloan Digital Sky Survey spiral galaxies is defined, which are then further sub-categorised by spiral arm number. In order to explore how different spiral galaxies form, the demographics of spiral galaxies with different spiral arm numbers are compared. It is found that whilst all spiral galaxies occupy similar ranges of stellar mass and environment, many-armed galaxies display much bluer colours than their two-armed counterparts. We conclude that two-armed structure is ubiquitous in star-forming disks, whereas many-armed spiral structure appears to be a short-lived phase, associated with more recent, stochastic star-formation activity.
\label{sec:intro} Spiral galaxies are the most common type of galaxy in the local Universe, with as many as two-thirds of low-redshift galaxies exhibiting disks with spiral structure \citep{Lintott_11,Willett_13,Nair_10,Kelvin_14b}. As star-formation is enhanced in gas-rich disk galaxies \citep{Kennicutt_89,Schmidt_59,Kelvin_14b}, understanding spiral structure holds the key to understanding star-formation in the local Universe, yet formulating a single theory to account for all spiral structure still remains elusive. The main theories for the occurrence of spiral arm features in local galaxies initially focused on the idea that they are caused by density waves in galaxy disks \citep{Lindblad_63,Lin_64}, but have since been superseded by theories that consider the effects of gravity and disk dynamics \citep{Toomre_81,Sellwood_84}, with most of the work to advance the field of spiral structure theory driven by simulation (eg. \citealt{Dobbs_14} and references therein; discussed further in Sec.~4). Using observational studies to test these theories remains a challenge, as visual classifications of both the presence of spiral structure and the details of its features are required, which are difficult to obtain when considering the large samples provided by galaxy survey data. An approach that has been successfully employed to visually classify galaxies in large surveys is citizen science, which asks many volunteers to morphologically classify galaxies rather than relying on a small number of experts. Sophisticated automated methods have also been developed for this purpose (eg. \citealt{Huertas_11,Davis_14,Dieleman_15}). However, these methods cannot currently completely reproduce the results of visual classifications, particularly in low signal-to-noise images. They also require training sets, meaning that `by eye' inspection methods are still a requirement. Galaxy Zoo 1 \citep[GZ1;][]{Lintott_08,Lintott_11} was the first project to collect visual morphologies using citizen science, by classifying galaxies from the Sloan Digital Sky Survey (SDSS) as either `elliptical' or `spiral'. Using this method, each galaxy is classified by several individuals, and a likelihood or `vote fraction' of each galaxy having a particular feature is assigned as the fraction of classifiers who saw that feature. GZ1 classifications collected in this way have been used to compare galaxy morphology with respect to colour \citep{Bamford_09,Masters_10b,Masters_10a}, environment \citep{Bamford_09,Skibba_09,Darg_10a,Darg_10b}, and star formation properties \citep{Tojeiro_13,Schawinski_14,Smethurst_15}. Following from the success of GZ1, more detailed visual classifications were sought, including the presence of bars, and spiral arm winding and multiplicity properties. Thus, Galaxy Zoo 2 (GZ2) was created \citep[][hereafter W13]{Willett_13}, in which volunteers were asked more questions about a subsample of GZ1 SDSS galaxies. The main difference between GZ2 and GZ1 was that visual classifications were collected using a `question tree' in GZ2, to gain a more exhaustive set of morphological information for each galaxy. GZ2 has already been used to compare the properties of spiral galaxies with or without bars \citep{Masters_11,Masters_12,Cheung_13}, to look for interacting galaxies \citep{Casteels_13}, and to look for relationships between spiral arm structure and star formation \citep{Willett_15}.
This `question tree' method has since been used in a similar way to measure the presence of detailed morphological features in higher redshift galaxy surveys (eg. \citet{Melvin_14,Simmons_14}), and other \textsc{Zooniverse}\footnote{https://www.zooniverse.org/} citizen science projects. An issue that arises in both visual and automated methods of morphological classification is that detailed features are more difficult to observe in lower signal-to-noise images (ie. observed from a greater distance). In Galaxy Zoo, this has been termed as classification bias. It is imperative that classification bias is removed from morphological data, as it leads to sample contamination from galaxies being incorrectly assigned to some categories. This means that any observational differences between samples can be significantly reduced. Classification bias manifested itself in GZ1 with galaxies at higher redshift having lower `spiral' vote fractions, which were corrected using a statistical method \citep{Bamford_09}. The application of a question tree in GZ2 to look for more detailed features means that correcting for biases is more complicated than in GZ1. In particular, there are questions with several possible answers, and debiasing one answer with respect to each of the others is therefore a more difficult process for GZ2. The paper is organised as follows. In Sec.~\ref{sec:data}, the sample selection and galaxy data are described. In Sec.~\ref{sec:redshift_bias}, we describe a new debiasing method that has been created to account for the classification bias in the GZ2 questions with multiple possible answers. In Sec.~\ref{sec:results}, samples of GZ2 spiral galaxies are defined and sorted by arm multiplicity. This is a case where the new debiasing method is required as there are multiple responses to that question. After reviewing relevant theoretical and observational literature, we examine the demographics of spiral galaxies with respect to arm multiplicity, and begin to explore the processes that influence the formation and evolution of spiral arms in Sec.~\ref{sec:results}. The results are summarised in Sec.~\ref{sec:conclusions}. This paper assumes a flat cosmology with $\Omega_\mathrm{m} = 0.3$ and $H_0 = 70\;\mathrm{km\,s^{-1}\,Mpc^{-1}}$.
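As an illustration of the kind of sample definition used later in the paper (a hedged sketch only; the column names and the 0.5 thresholds below are placeholders, not the GZ2 catalogue columns or the cuts adopted here), arm-number subsamples can be drawn from a table of debiased vote fractions as follows.
\begin{verbatim}
# Placeholder sketch: splitting a debiased catalogue into arm-number subsamples.
# Column names and thresholds are hypothetical, not the GZ2 catalogue values.
import pandas as pd

ARM_COLUMNS = ["p_arms_1", "p_arms_2", "p_arms_3", "p_arms_4", "p_arms_5plus"]

def arm_number_samples(cat, p_features=0.5, p_spiral=0.5):
    """Return {arm-number column: subsample} for a luminosity-limited catalogue."""
    spirals = cat[(cat["p_features_debiased"] > p_features) &
                  (cat["p_spiral_debiased"] > p_spiral)].copy()
    # assign each spiral to the arm-number answer with the largest debiased fraction
    spirals["arm_class"] = spirals[ARM_COLUMNS].idxmax(axis=1)
    return {c: spirals[spirals["arm_class"] == c] for c in ARM_COLUMNS}
\end{verbatim}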
\label{sec:conclusions} In this paper, the demographics of local spiral galaxies have been compared with respect to spiral arm number, in order to gain an understanding of any significant differences in the physical processes responsible for their spiral structure. We make use of visual classifications of SDSS galaxies from GZ2. In order to obtain complete and clean samples, we have developed a new method to account for redshift-dependent bias. This corrects the vote fractions to ensure sample completeness and to avoid contamination between separate classes of galaxies. The method will also be applicable to further studies of Galaxy Zoo data, and potentially other citizen science projects. A new debiasing method has been developed to remove the effects of redshift-dependent classification bias in Galaxy Zoo data. The method was required for the multiple-answer questions in Galaxy Zoo, where the previously defined debiasing method did not effectively remove redshift bias, leading to sample contamination from incorrectly classified galaxies. In this paper, we studied the arm-number question, which is a multiple-answer question, where the rarer many-armed samples were incomplete and the two-armed category suffered from sample contamination. The new method was successful in making the samples more complete with redshift in this case. Using the resulting classifications, the distributions of environment, stellar mass and colour were compared for spiral galaxies with different numbers of arms. We found that the most massive galaxies favour many-armed spiral structure, which may be indicative that their disks have not been sufficiently perturbed to induce two-armed spiral structure. An enhancement in the fraction of two-armed spiral galaxies was observed in the highest density environments, indicating that galaxy-galaxy interactions could play a role in inducing two-armed spiral structure. By comparing optical colours, we find that two-armed galaxies are much redder in colour than galaxies with many spiral arms. Although the two populations display similar $u-r$ colours, the $r-z$ colours are distinctly redder in the two-armed galaxy population. These colours are indicative of a recent, rapidly quenched ($\lesssim$0.1 Gyr) burst of star-formation, suggesting that many-armed spiral structure is a short-lived phase in galaxy disks, whereas star-formation in two-armed spiral structure persists over much longer timescales.
16
7
1607.01019
1607
1607.08412_arXiv.txt
{% We intensively studied the flare activity of the stellar object KIC011764567. The star was thought to be of solar type, with a temperature of $T_{eff}\approx(5640\pm200)\:$K, $\log(g)=(4.3\pm0.3)\:$dex and a rotational period of $P_{rot}\approx22\:$d \citep{brown}. High resolution spectra reveal the target to be an evolved object with $T_{eff}=(5300 \pm 150)\:$K, a metallicity of ${\rm [m/H]} = (-0.5\pm 0.2)$, a surface gravity of $\log(g) = (3.3 \pm 0.4)\:$dex, and a projected rotational velocity of $v \sin i =(22 \pm 1)\:$\kms. Within an observing time span of 4 years we detected 150 flares in {\it Kepler} data in an energy range of $10^{36} - 10^{37}\:$erg. From a dynamical Lomb-Scargle periodogram we have evidence for differential rotation as well as for stellar spot evolution and migration. Analysing the occurrence times of the flares, we found hints of a periodic flare frequency cycle of $430 - 460\:$d; the significance increases with an increasing threshold on the flares' equivalent duration. One explanation is a very short activity cycle of the star with that period. Another possibility, also proposed by others in similar cases, is that the larger flares may be triggered by external phenomena, such as magnetic interaction with an unseen companion. Our high resolution spectra show that KIC011764567 is not a short-period binary star.}
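As an illustration of the periodogram analysis mentioned above (a hedged sketch, not the authors' pipeline), a standard Lomb-Scargle periodogram of a Kepler-like light curve can be computed with astropy; a "dynamical" periodogram simply repeats this on sliding time windows. The synthetic light curve and frequency grid are assumptions.
\begin{verbatim}
# Illustrative only: Lomb-Scargle period search on an unevenly sampled light curve.
import numpy as np
from astropy.timeseries import LombScargle

def best_period(time_d, flux, min_p=1.0, max_p=100.0, n_freq=5000):
    freq = np.linspace(1.0 / max_p, 1.0 / min_p, n_freq)
    power = LombScargle(time_d, flux).power(freq)
    return 1.0 / freq[np.argmax(power)]

# synthetic spotted-star light curve with a 22 d modulation
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 400.0, 4000))
f = 1.0 + 0.01 * np.sin(2 * np.pi * t / 22.0) + rng.normal(0.0, 0.002, t.size)
print(best_period(t, f))   # close to 22 d
\end{verbatim}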
16
7
1607.08412
1607
1607.08248_arXiv.txt
We present the results of optical, near-infrared, and mid-infrared observations of M101 OT2015-1 (PSN J14021678+5426205), a luminous red transient in the Pinwheel galaxy (M101), spanning a total of 16 years. The lightcurve showed two distinct peaks with absolute magnitudes $M_r\leq-12.4$ and $M_r \simeq-12$, on 2014 November 11 and 2015 February 17, respectively. The spectral energy distributions during the second maximum show a cool outburst temperature of $\approx$3700 K and low expansion velocities ($\approx-$300 \kms) for the H I, Ca II, Ba II and K I lines. From archival data spanning 15 to 8 years before the outburst, we find a single source consistent with the optically discovered transient which we attribute to being the progenitor; it has properties consistent with being an F-type yellow supergiant with $L$~$\sim$~8.7~$\times\ 10^4$ \Lsun, $T_{\rm{eff}}\approx$7000~K and an estimated mass of $\rm{M1}= 18\pm 1$ \Msun. This star has likely just finished the H burning phase in the core, started expanding, and is now crossing the Hertzsprung gap. Based on the combination of observed properties, we argue that the progenitor is a binary system, with the more evolved system overfilling the Roche lobe. Comparison with binary evolution models suggests that the outburst was an extremely rare phenomenon, likely associated with the ejection of the common envelope. The initial mass of the binary progenitor system fills the gap between the merger candidates V838 Mon (5$-$10 \Msun) and NGC~4490-OT~(30~\Msun).
The discovery of an unusually bright and red nova in M31 (M31 RV) in September 1988 \citep{Rich1989} drew the attention of astronomers towards an uncommon type of object. Its peak absolute magnitude, $M_{\rm V} = -9.95$, was brighter than that of a regular nova ($M_{\rm V} = -6$ to $-8$), but fainter than that of a supernova ($M_{\rm V}<-14$ mag). The surprisingly cool temperature, similar to that of an M0-type supergiant, and the high ejected mass placed the object in a potentially different category from known cataclysmic variable eruptions, prompting further theoretical exploration. Since then, transient surveys and discoveries led by amateurs have contributed to further populating this luminosity ``gap'' between classical novae and supernovae \citep[SNe;][]{Kasliwal2011iptf}. As of today, the observational diversity of such intermediate luminosity events on long timescales ($>$20 days) encompasses three main categories: (1) SN impostors, due to eruptions in massive stars such as luminous blue variables (LBV), (2) intermediate luminosity optical (red) transients (ILOT/ILRT), explained as faint terminal explosions, and (3) luminous red novae (LRNe), which are potential stellar mergers. Luminous non-terminal outbursts of massive stars may sometimes mimic the observational signature of a SN. Consequently, this class of events was named ``SN impostors''. Among these, eruptions of LBVs such as Eta Carinae and P Cygni are known to produce intermediate luminosity transients \citep{Humphreys1994}. These classical examples generally inhabit the upper part of the Hertzsprung-Russell (HR) diagram, having bolometric magnitudes brighter than $M_{\rm{Bol}}=-9.5$ mag, in the supergiant region. Generally, LBV progenitors exhibit giant eruptions with visual changes $>$2 mag, but they also show non-periodic variability consistent with the behaviour of known LBVs in the LMC: R127 and S Doradus \citep[][and references therein]{Wolf1989,Walborn2008}. As a consequence, the progenitor stars generally reside in dusty environments created by previous episodes of mass ejection. ILRTs, such as SN 2008S \citep{Prieto2008,Botticella2009,Thompson2009}, NGC 300 2009OT-1 \citep{Bond2009,Smith2011} and iPTF10fqs \citep{Kasliwal2011iptf}, also inhabit the luminous part of the ``gap'' transient family \citep{Kasliwal2011}. Such events have been interpreted as faint terminal explosions associated with dusty progenitors \citep{Prieto2008,Prieto2009,Kochanek2011}. The electron-capture SN scenario has been suggested as a possible mechanism \citep{Botticella2009}. Late-time observations reveal the complete disappearance of their progenitors, suggesting that the outbursts were terminal events \citep{Adams2016}. However, NGC 300-OT has also been interpreted as being due to accretion onto a secondary \citep{Kashi2010}. A survey of massive stars in M33 revealed that the rate of SN 2008S- and NGC 300-OT-like transient events is of the order of $\sim20$\% of the CCSN rate in star-forming galaxies in the local Universe (D$_{\rm{L}} \lesssim 10$ Mpc) \citep{Thompson2009}. However, the fraction of massive stars with colours similar to the progenitors of these transients is only $\lesssim 10^{-4}$. \cite{Khan2010} showed that similar stars are as rare as one per galaxy. 
The direct implication is that the heavily dust-enshrouded phase is a very short transition for many massive stars during their final $10^4$ years. Violent interactions in binary systems (including stellar mergers) have been suggested as a plausible scenario to explain the nature of the outbursts of LRNe \citep{Iben1992,Soker2003,Tylenda2011,Ivanova2013Sci}. Nova Scorpii 2008 (V1309 Sco) currently provides the most compelling evidence for a merger scenario in our own Galaxy, as the exponential period decay of the progenitor system could be witnessed from observations spanning several years before the outburst \citep{Mason2010,Tylenda2011,Tylenda2013,Nandez2014}. V838 Mon, at 6.1$\pm$0.6 kpc \citep{Sparks2008}, is another remarkable example of a low-mass stellar merger candidate \citep{Soker2003}, notable also for a spectacular light echo revealed by observations with the Hubble Space Telescope \citep{Bond2003}. Some extragalactic examples of discoveries consistent with the merger scenario are M85-OT2006-1\footnote{Although M85-OT2006-1 is observationally similar to other LRNe, its nature is more controversial: \cite{Kulkarni2007} (see also \cite{Ofek2008}, \cite{Rau2008}) supported the idea of a low-to-moderate mass merger, while \cite{Pastorello2007} favored the weak core-collapse SN explosion scenario.}, the luminous red nova in M31, reported in \cite{Kurtenkov2015} and \cite{Williams2015}, and the massive stellar merger NGC 4490 2011OT-1 \citep{Smith2016}. Pre-explosion photometry has allowed the mass and evolutionary stage of several progenitor systems to be estimated. To date, the literature reports a wide range of cases, from 1.5 $\pm$ 0.5 \Msun for V1309 Sco to 20$-$30 \Msun for NGC 4490 2011OT-1 \citep{Smith2016}. In agreement with the progenitor mass function, the estimated observed Galactic rate of such events is one every few years ($\sim$ 3 yr) for low luminosity events ($M_V \geq -4$) and one every 10$-$30 yr for intermediate luminosity ($-10 \leq M_V \leq -7$) \citep{Kochanek2014}. Events on the bright end such as NGC 4490 2011OT-1 and M101-OT are expected to be far less common, at most one per century. \begin{figure*} \includegraphics[width=0.75\textwidth]{Figure1a.pdf} \includegraphics[width=0.15\textwidth]{Figure1b.pdf} \caption{Left: M101-OT and the reference stars used to calibrate the photometric zero-point. Because the historical images of M101-OT differ in field of view, pointing and position angle, different subsets of the field stars were used according to their visibility. A minimum of three stars was used for any given image. The square region around M101-OT is shown in detail on the right hand side. Right: Images of M101-OT at four epochs: $\approx$10 yr and 22.3 months before the reference epoch, and 22 days and 12.6 months after the second outburst. The field of view size is $1'\times1'$ centred on the position of M101-OT. The red dashes show the location of the transient. The telescope, instrument, and magnitude of the object are listed for each image. } \label{fig:field} \end{figure*} \begin{figure*} \includegraphics[width=\linewidth]{Figure2.pdf} \caption{Left: Historic light curve for M101-OT spanning fifteen years of observations until 120 days before the date of the second peak. Pan-STARRS1 and iPTF data reveal a brightening of the baseline flux of the transient starting about 5.5 years before the eruption. Dashed lines are used to guide the eye. Downward-pointing arrows indicate upper limits. 
Right: Close-up of the lightcurve from $-$120 to +550 days relative to the outburst. For each data point, the marker shape shows the telescope and the colour indicates the filter. Note the difference in time scale between the left and right hand side plots. Vertical tickmarks below the lightcurve show the epochs when this object was observed by \textit{Gaia} (still proprietary data). Upper vertical lines show the epochs when spectra were taken. The lightcurve shows two maxima at $\sim -100$ and 0 days.} \label{fig:lightcurve} \end{figure*} \begin{figure*} \includegraphics[width=\linewidth]{Figure3.pdf} \caption{Left: Evolution of the black-body temperature and radius for M101-OT derived from photometry fits for the same time span as the lightcurve. We refer to the analysis in Section \ref{sec:analysis}. Right: Zoom from $-$120 to +550 days. } \label{fig:bb_fit} \end{figure*} In this work, we will discuss the observations and nature of M101 OT2015-1 (hereafter M101-OT), also designated as PSN J14021678+5426205 and iPTF13afz \citep{ATel7070}, an extragalactic transient in the luminosity gap. The discovery of M101-OT was publicly announced via the IAU Central Bureau for Astronomical Telegrams (CBAT) by Dimitru Ciprian Vintdevara on the night of 10 to 11 February 2015; the transient lies in the outskirts of NGC 5457 (M101)\footnote{ \url{http://www.cbat.eps.harvard.edu/unconf/followups/J14021678+5426205.html} }. Shortly afterwards it was confirmed as an optical transient by Stu Parker at an unfiltered magnitude of 16.7. The source was also independently discovered within the intermediate Palomar Transient Factory (iPTF) survey in 2013, when the progenitor was identified as a slowly rising source \citep{ATel7070}. This paper is organized as follows: in Section \ref{sec:data}, we report both pre- and post-discovery optical, near-infrared (NIR) and mid-infrared (MIR) photometry and spectroscopy of M101-OT. In Section \ref{sec:analysis}, we examine the spectroscopic measurements and the characteristics of the progenitor. We discuss possible similarities with other objects and the nature of M101-OT in Section \ref{sec:discussion}. Finally, we present a summary and our conclusions in Section \ref{sec:conclusions}. \section[]{Observations}\label{sec:data} M101-OT is located ($\alpha_{J2000}=14^{\rm{h}}02^{\rm{m}}16^{\rm{s}}.78$, $\delta_{J2000}=+54^{\circ}26'20''.5$) in the outer reaches of a spiral arm of M101, at 3$'$.41\,N and 8$'$.12\,W of the measured position of the galaxy nucleus. The surrounding region shows signs of a young stellar population, displaying bright unresolved emission in the Galaxy Evolution Explorer (GALEX) survey at 135 nm to 280 nm. We adopt the Cepheid distance to M101 of D$_{\rm{L}}~=~6.4~\pm~0.2$\, Mpc, corresponding to a distance modulus of $\mu~=~29.04~\pm~0.05$ (random) $\pm~0.18$ (systematic) mag \citep{ShappeeStanek2011}. The estimated Galactic reddening at the position of the transient is $E(B-V)~=~0.008~\pm~0.001$\ mag (from NED\footnote{The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.} adopting \cite{Schlafly2011}), with R$_{\rm V}~=~3.1$, which corresponds to a mean visual extinction of A$_{\rm V}$ = 0.024 mag. The magnitudes reported in the text and figures of this paper have been corrected for Galactic reddening, but the Tables in the Appendix list the observed magnitudes, i.e. not corrected for extinction. 
The extinction within the host galaxy is not included. Local extinction towards the progenitor is unlikely, as archival NIR photometry of M101-OT agrees well with the Rayleigh$-$Jeans tail of the single black-body emission derived from the optical measurements. Therefore, we argue that there is no evidence of a strong warm dust emission component in the environment around the progenitor star. \subsection{Photometry} \label{sec:phot} The location of M101-OT has been serendipitously imaged by numerous telescopes and instruments over the last 15 years (from 2000 to 2015). For example, in 2011, this galaxy received special attention, as it hosted one of the youngest SNe Ia discovered to date: SN 2011fe \citep{Nugent2011}. In an attempt to piece together the past evolution of M101-OT, we retrieved all available data (see description below) covering the location of the transient. The left panel of Figure \ref{fig:field} shows the location of M101-OT and the reference stars used for calibration. The right panel shows images of the field at $-$10 yr, $-$1.8 yr and an early follow-up epoch 22 days after the second peak. The source had faded below the detection limits by +383 days. Throughout this work, we will use as a reference epoch the date of the second peak in $r$-band, MJD~57070. Our best-quality pre-discovery image (seeing of 0$\farcs$55) is an $r$-band exposure taken at $-$3625 days from the Canada France Hawaii Telescope (CFHT). We aligned this image with our +22-d post-peak image using 18 stars in common. There is one point source (see right hand side of Figure \ref{fig:field}) in the image within a 2$''$ radius of the position during the outburst, and the central positions of the point spread functions (PSFs) are coincident to within 180 mas (with an alignment precision of 250 mas). We identify this point source as the progenitor of M101-OT. Imaging in $I$-band taken at late times with Keck confirms the disappearance of the progenitor star. The historical optical data for M101-OT were retrieved from CFHT MegaPrime and CFHT12K/Mosaic, using single and combined exposures \citep{Gwyn2008}, Pan-STARRS-1/GPC1 \citep[PS1;][]{Magnier2013,Schlafly2012,Tonry2012}, the Isaac Newton Telescope/Wide Field Camera (INT/WFC) and the Sloan Digital Sky Survey (SDSS) DR 10 \citep{SDSS10}. Unfortunately, there are no Hubble Space Telescope (HST) images covering the location of the source. Post-discovery optical magnitudes were obtained from the reported follow-up Astronomer's Telegrams (ATels), the Liverpool Telescope (LT), the Nordic Optical Telescope (NOT) and the Palomar P48 and P60 telescopes. The infrared data were retrieved from CFHT/WIRCam, UKIRT/WFCAM and the Spitzer Infrared Array Camera \citep{Fazio2004} at 3.6 and 4.5 $\mu$m as part of the SPitzer InfraRed Intensive Transients Survey (SPIRITS; Kasliwal et al., in prep.). Details of the pre-discovery and post-discovery optical photometry may be found in Appendix Tables \ref{table:preoutburst} and \ref{table:followup}, respectively. iPTF photometry is reported in Table \ref{table:ptf}. The NIR and MIR observations are summarized in Appendix Table \ref{table:irphot}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure4.pdf} \caption{Post-outburst colour evolution for M101-OT. The data points have been binned in groups of 10 days. The abscissa is the average MJD of the bin relative to the reference epoch. 
} \label{fig:colour_evolution} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=0.82\linewidth]{Figure5.pdf} \caption{Spectral evolution of M101-OT. The spectra have been flux calibrated using the interpolated photometric measurements. Telluric absorption features were corrected or otherwise marked with the $\oplus$ symbol. Because the response of the detector drops at the extremes, some spectra are only shown for the valid wavelength range. The spectra are colour coded by instrument. Blue: P200+DBSP, green: 1.82m+AFOSC, red: WHT+ISIS, orange: NOT+ALFOSC, black: GTC+OSIRIS, grey: comparison spectra. The Keck/LRIS spectrum of M85-OT is from \cite{Kulkarni2007}, that of NGC 4490-OT from \cite{Smith2016}, the UGC2773-OT2009 spectrum was taken with TNG/DOLORES (A. Pastorello, private communication), and the spectrum of V838 Mon was taken with the Kast spectrograph at Lick Observatory on 2002 Mar. 11 \citep{Smith2016}.} \label{fig:spec} \end{figure*} We measured the brightness of the source coincident with M101-OT using the \textsc{IRAF} (Image Reduction and Analysis Facility) SNOoPY \footnote{SNOoPY is a package developed by E. Cappellaro, based on \textsc{daophot}, but optimized for SN magnitude measurements.} package for PSF photometry. The zero-point in the SDSS photometric system was calibrated using aperture photometry on three to nineteen different sequence stars in the M101-OT field. Figure \ref{fig:field} shows the position of the sequence stars. Their coordinates and magnitudes are reported in Table \ref{table:seq}. The magnitudes of the sequence stars in the \textit{grizy} bands were obtained from the PS1 catalogue, with a photometric accuracy better than 0.01 mag. Measurements for \textit{u}-band were obtained from the SDSS DR10 catalogue. Johnson photometry was calibrated using the same PS1 catalogue and the transformations provided by \cite{Tonry2012}, with a root-mean-square (RMS) scatter below 0.1 mag. Photometry from the iPTF survey was obtained with the Palomar Transient Factory Image Differencing Extraction (PTFIDE) pipeline for the 48-inch telescope \citep{Masci2016} and with a custom difference-imaging pipeline for the 60-inch telescope at Mount Palomar \citep{Cenko2006}. PTF Mould-$R$ and $g$-band measurements were transformed to SDSS-equivalent photometry using the relations in \cite{Ofek2012}. The NOT NIR reductions were carried out with the external \textsc{IRAF} package \textsc{notcam} version 2.5\footnote{ \url{http://www.not.iac.es/instruments/notcam/guide/observe.html}}, and a custom pipeline was used for the WIRC data. The zero-point for IR photometry was calibrated using the Two Micron All Sky Survey (2MASS) photometry. The full historical lightcurve of M101-OT is shown in Figure \ref{fig:lightcurve}, left panel. The earliest detection of the progenitor was obtained on 2000 February 05 with CFHT. From these first single epoch observations, we get $B=21.9\pm$0.1, $V=21.8\pm$0.3, $R=21.1\pm$0.2 (corrected for Milky Way reddening), which at the distance of M101 yield absolute magnitudes of $M_B=-7.1$, $M_V=-7.2$, and $M_R=-7.9$ for the progenitor star. INT observations in the $r$-band, $r=21.2\pm0.15$, taken only three days after CFHT, are consistent within 0.1 mag with the $R$-band measurements. Within the first period, from approximately 15 to 5.5 years before the outburst, the brightness of the progenitor shows only minor variations. The magnitude in the $r$-band remained constant to within 0.2 mag, with an average value of $r$=21.1. 
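As a quick editorial cross-check of the absolute magnitudes quoted above (not part of the original text), the apparent magnitudes can be converted with the adopted distance modulus $\mu = 29.04$ mag:
\begin{verbatim}
# Sketch: apparent -> absolute magnitudes with the adopted distance modulus.
mu = 29.04                              # Cepheid distance modulus of M101
apparent = {"B": 21.9, "V": 21.8, "R": 21.1}
absolute = {band: round(m - mu, 1) for band, m in apparent.items()}
print(absolute)                         # {'B': -7.1, 'V': -7.2, 'R': -7.9}
\end{verbatim}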
Roughly 5.5 years before the outburst, the lightcurve began to rise smoothly across all bands. The $r$-band brightness increased to 19.6 mag at $-$180 days, i.e. 1.5 mag above the historical median value. Reported magnitudes prior to mid-2012 ($-$2.7 years) taken with the Large Binocular Telescope (LBT) agree with these values: the source was reported as being variable, with mean magnitudes of $U=21.33\pm$0.19, $B=21.30\pm$0.19, $V=20.97\pm$0.17, and $R=20.69\pm$0.17 \citep{ATel7069}. During this slow rise, the transient was detected internally by iPTF on 2013 April 4 ($-$684 days) as a slowly brightening source. On 2014 November 10, after emerging from behind the Sun, it was detected at 16.6 mag in the $R$-band during its first outburst \citep{ATel7070}, $\sim$3 months prior to the public discovery. At approximately $-$29 days, in between the first and second outbursts, it was also detected by the LBT at a considerably fainter magnitude of $R~=~18.22\pm$0.02 \citep{ATel7069}. At the time of the public announcement on 2015 February 10, the object was close to its second peak, estimated to occur a week later, on 2015 February 17 (MJD 57070). The Gaia satellite \citep{Perryman2001} (a European Space Agency mission) serendipitously observed the region containing M101-OT during the time of the first peak. Unfortunately, these data have not been made available to us. Since this prevents us from constraining the time of the first peak, we adopt the epoch of maximum brightness of the second peak, MJD 57070, as our reference epoch. The follow-up photometry for M101-OT is shown in the right panel of Figure \ref{fig:lightcurve}. The most remarkable feature of the lightcurve is the existence of two maxima. The object was observed during the decay phase of the first peak, having an absolute magnitude of $M_r\ \leq -12.6$ mag (we only have data on the declining part of the first peak, so the outburst could have been brighter). The second maximum, $\sim$100 days later, shows $M_r \simeq -12.0$ mag and is followed by a fast decline phase lasting $\sim$40 days, during which the object fades by 2 magnitudes in the $r$-band. The lightcurve then makes a transition into a plateau phase of $\sim$60 days: the redder $riz$-bands flatten, while the bluer $Bg$-bands continue to decline. After the end of the plateau, around +110 days, the transient resumes the initial decline rate in the $r$-band. The first NIR follow-up data show magnitudes of J=15.45 $\pm$ 0.3, H=15.07 $\pm$ 0.06 and K=14.94 $\pm$ 0.09 at +17 days. The evolution in the IR bands is slow, and only after day +200 does the object start to decline in the IR as well. Between +200 and +256 days it fades by $\sim$1 mag in the $K$-band. However, later-epoch observations reported in \cite{ATel7206} and our follow-up with P200 and NOT suggest a re-brightening of the object in the IR bands. Multi-band photometry allows us to derive the black-body temperature and radius of the object, shown in Figure \ref{fig:bb_fit} (see Section \ref{sec:blackbody} for details). The colour evolution between $-$29 and +272\,d for M101-OT is shown in Figure \ref{fig:colour_evolution}. Coincident with the end of the first phase of the lightcurve, at $\sim 50$ days, the object becomes slightly bluer in the $B$ and $g$ bands. This period is associated with a decrease of the photospheric radius. At approximately +130 days, around the end of the plateau, the colour evolution shows a second temporary ($\sim$20 days) enhancement of the relative flux in the blue bands. 
The last multi-band epoch (+154 days) shows that the object becomes increasingly red, i.e. $g-r=1.9\ \pm\ 0.4$, $g-z=3.8\ \pm\ 0.4$ and $V-K=5.6\ \pm\ 0.2$. \subsection{Optical Spectroscopy} \label{sec:spec} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure6.pdf} \caption{The continuum-subtracted and peak-normalized H$\alpha$ region for the DBSP (500 \kms), WHT (320 \kms), NOT (747 \kms) and GTC (380 \kms) spectra. The spectra are colour coded by instrument, as in Figure \ref{fig:spec}. } \label{fig:halpha} \end{figure} We obtained spectra of M101-OT using a range of facilities. The log of the spectroscopic observations is given in Table \ref{table:speclog}. The data were reduced using \textsc{IRAF} and \textsc{PyRAF} standard routines. The wavelength calibration was done by fitting low-order polynomials to arc lamp spectra. Sky lines were used to check the accuracy of the calibration, which is within 1 \angstrom. We calibrated the flux using spectro-photometric standard stars and subsequently adjusted it using interpolated photometry at the epoch of each spectrum. The spectra of M101-OT are made public via WISeREP \citep{YaronGal-Yam2012}. We adopted the heliocentric recession velocity of M101 of 241 $\pm$ 2 \kms \citep{deVaucouleurs1991}. Figure \ref{fig:spec} shows the spectral evolution of M101-OT. All spectra show a cool photospheric continuum, fitted by black-body emission with temperatures of 3000 $-$ 3600 K. The blue part of the M101-OT spectrum is dominated by the absorption forest of Fe~II (at around 5400 \AA), Ti~II (below 4700 \AA) and Sc~II lines. P-Cygni profiles are displayed by intermediate-mass elements. Ca~II is identified with an expansion velocity of $v \simeq -356 \pm 9$ \kms for the absorption component at +2 days, slowing to $v \simeq -283 \pm 2$ \kms at +22 days and $v \simeq -207 \pm 17$ \kms at +116 days. Ba~II $\lambda\lambda$ 6134, 6489 also shows a P-Cygni profile, with an expansion velocity of $-$180 \kms and a full width at half maximum (FWHM) of 367 \kms. Other elements present in the early-time spectra are Mg~I (around $\lambda$ 5150) and Na~I at $\lambda\lambda$ 5890, 5896 and $\lambda\lambda$ 8183, 8195. Resonance lines of K~I $\lambda\lambda$ 7665, 7699 are also found in the spectrum, although their P-Cygni profiles are much weaker. These lines are rare and have been seen in the extreme supergiant VY CMa \citep{Smith2004MNRASVY} and the Type IIn SN 2009kn \citep{Kankare2012}. We do not detect strong [Ca II] $\lambda\lambda$7291, 7325 lines in the spectrum, which have been associated with a dense and compact gas disk and the presence of dust \citep{Smith2010,Liermann2014}. Figure \ref{fig:spec} shows comparison spectra for similar red transients. M85-OT2006-1 has been assigned to the SN 2008S-like observational class, showing strong emission in the Ca~II and [Ca~II] lines. UGC2773-OT2009-1 is considered to be an example of a dust-enshrouded LBV. NGC4490-OT2011-1 and V838 Mon are examples of LRNe. There is an important resemblance between all three groups, implying that the nature of the outburst cannot be determined from the spectra alone. The spectra of M101-OT show significant evolution of the H$\alpha$ profile. Figure \ref{fig:halpha} shows the different morphologies of the profile at different epochs. At early times, its expansion velocity, derived from the FWHM, is around 500 \kms, slightly larger than that of the intermediate-mass elements. 
The profile is asymmetric and shows a small blueshifted absorption component. However, at +22.9 days the absorption evolves into an emission profile, suggesting the existence of asymmetry in the outflow. The implications of this are further discussed in Section \ref{sec:halpha}. Similar behaviour was observed in the high resolution spectra of NGC4490-OT2011-1 reported in \cite{Smith2016}.
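The expansion velocities quoted in this section follow from the Doppler shift of the blueshifted absorption minima relative to the host rest frame. The short sketch below is an editorial illustration only; the measured wavelength is an assumed number chosen to give a $\approx -300$ \kms example, not a value from the paper.
\begin{verbatim}
# Sketch: line-of-sight expansion velocity from a blueshifted
# absorption minimum, corrected for the recession velocity of M101.
C_KMS = 299792.458        # speed of light [km/s]
V_HOST = 241.0            # adopted recession velocity of M101 [km/s]

def expansion_velocity(lambda_obs, lambda_rest):
    """Velocity of the absorption minimum in the host rest frame."""
    v_obs = C_KMS * (lambda_obs - lambda_rest) / lambda_rest
    return v_obs - V_HOST

# Assumed example: Ca II 8542.09 A minimum measured at 8540.41 A.
print(f"{expansion_velocity(8540.41, 8542.09):.0f} km/s")   # ~ -300 km/s
\end{verbatim}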
\label{sec:conclusions} M101-OT is a transient with LRN characteristics discovered in a star-forming region in a spiral arm of M101. A summary of its most relevant observational characteristics is given below: \begin{itemize} \renewcommand\labelitemi{--} \item The historic evolution of M101-OT shows no major variations within 0.2 mag in $R$-band until approximately 5.5 years before the outburst. \item The pre-outburst SED suggests no IR excess, implying the lack of an old existing dust emission component. \item The object slowly brightened by 1.5 mag over the 6 years prior to the outburst. The estimated radius appeared to increase from $230 \pm 13$ \Rsun at 6 years before the outburst to 3100 \Rsun during the second outburst maximum. \item The lightcurve shows two peaks, detected in $R$-band, separated by $\geq$ 100 days. The magnitude of the first peak is $M_r \leq -12.4$ mag (the peak could have been brighter because of an observation gap) and $M_r \simeq -12.0$ during the second peak. The colour of the object during the second maximum is $g-r=1.4$ mag, which corresponds to an estimated temperature of 3600 K. \item Late-time follow-up photometry suggests a re-brightening of the object at IR wavelengths after one year. \item The bolometric luminosity at the second peak is $L = 2.7\times 10^{40}$ erg s$^{-1}$ (a rough Stefan--Boltzmann consistency check is sketched after this section) and the total energy released during the outburst is $E > 4.1 \times 10^{47}$ erg. This is only a lower limit, as the first outburst is not covered well enough to put a tight constraint on the energy. \item At peak, the spectrum shows a cool photospheric continuum, combined with low expansion velocities ($\sim 300$ \kms) for H$\alpha$, Fe~II and low-ionization elements, which display P-Cygni profiles. \item The lightcurve after the second outburst is defined by a short decline phase ($\sim$40 days), a ``plateau'' phase ($\sim$60 days) in the $riz$ bands and a second decline phase. The photospheric radius at the beginning of each phase was $\sim$6500 \Rsun, 4300 \Rsun and 5800 \Rsun, respectively. \item The H$\alpha$ line initially shows a blueshifted absorption component at $-$500 \kms, which develops into an emission profile at epochs of +30 days or later. \item The spectrum shows the formation of molecular bands from 100 days after the outburst, which suggests rapid dust formation in the system. \item The best fit for the progenitor is an F-type giant with a luminosity of $L \sim 8.7 \times 10^4$ \Lsun and an initial mass of $18 \pm 1$ \Msun. The estimated age of the star is 9.89 Myr, which places it in the Hertzsprung gap. The age is qualitatively consistent with the young stellar population surrounding the progenitor, although high-accuracy photometry will be needed to provide a quantitative answer. \item In the binary scenario, assuming that the primary is overfilling its Roche lobe, the system is initially on a wide orbit, with periods between 600 and 270 days (for $q=1$ and $q=18$, respectively). By the end of the common envelope phase, the fate of the system depends on the model. While the simple energy formalism anticipates the complete merger of the system, binary evolution models favor the survival of a stellar binary with a 260$-$290 day period. \end{itemize} Although the nature of the object is not entirely clear, its resemblance to other transients from the same LRN family points towards a possible binary origin. 
The unusual location of the progenitor star in the Hertzsprung gap supports the hypothesis that the more massive component had expanded beyond its Roche lobe, initiating the CE phase. The outbursts detected for M101-OT suggest that this CE was ejected on dynamical timescales, likely leaving a surviving close binary pair. We have discussed the past and present evolution of this unusual transient in M101; a discussion of its future and the fate of its remnant will have to await further observations in the IR bands.
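As referenced in the summary above, the quoted peak bolometric luminosity can be roughly recovered from the quoted photospheric radius and temperature with the Stefan--Boltzmann law (an editorial consistency check, not a calculation from the paper):
\begin{verbatim}
# Sketch: L = 4 pi R^2 sigma T^4 around the second maximum, using the
# photospheric radius (~6500 Rsun) and temperature (~3700 K) quoted above.
import math

SIGMA_SB = 5.670e-5    # erg s^-1 cm^-2 K^-4
RSUN_CM = 6.957e10     # cm

R = 6500 * RSUN_CM
T = 3700.0
L = 4.0 * math.pi * R**2 * SIGMA_SB * T**4
print(f"L ~ {L:.1e} erg/s")   # ~2.7e40 erg/s, matching the quoted value
\end{verbatim}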
16
7
1607.08248
1607
1607.04299_arXiv.txt
AGN exhibit rapid, high amplitude stochastic flux variations across the entire electromagnetic spectrum on timescales ranging from hours to years. The cause of this variability is poorly understood. We present a Green's-function-based method for using variability to (1) measure the time-scales on which flux perturbations evolve and (2) characterize the driving flux perturbations. We model the observed light curve of an AGN with a linear differential equation driven by stochastic impulses. We analyze the light curve of the \Kepler AGN Zw 229-15 and find that the observed variability behavior can be modeled as a damped harmonic oscillator perturbed by a colored noise process. The model power spectrum turns over on a time-scale of $385$~d. On shorter time-scales, the log power spectrum slope varies between $2$ and $4$, explaining the behavior noted by previous studies. We recover and identify both the $5.6$~d and $67$~d timescales reported by previous work using the Green's function of the C-ARMA equation, rather than by directly fitting the power spectrum of the light curve. These are the timescales on which flux perturbations grow, and on which flux perturbations decay back to the steady-state flux level, respectively. We make the software package \href{https://github.com/AstroVPK/kali}{\textsc{k\={a}l\={i}}}, used to study light curves with our method, available to the community.
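To make the quoted growth and decay timescales concrete, the following editorial sketch (not the \textsc{k\={a}l\={i}} implementation) evaluates the Green's function of an overdamped harmonic oscillator whose two real roots correspond to 5.6 d and 67 d; an impulsive flux perturbation then rises on the short timescale and relaxes on the long one.
\begin{verbatim}
# Sketch: Green's function of an overdamped harmonic oscillator with
# (real, negative) roots set by the 5.6 d and 67 d timescales quoted above.
import numpy as np

tau_grow, tau_decay = 5.6, 67.0            # days
r1, r2 = -1.0 / tau_grow, -1.0 / tau_decay

def greens_function(t):
    """Response at time t (days, t >= 0) to a unit impulse at t = 0."""
    return (np.exp(r1 * t) - np.exp(r2 * t)) / (r1 - r2)

t = np.linspace(0.0, 400.0, 4001)
g = greens_function(t)
print(f"peak response at t ~ {t[np.argmax(g)]:.0f} d")  # ~15 d
# The rise to the peak is governed by the 5.6 d root; the subsequent
# relaxation back to the steady state decays as exp(-t / 67 d).
\end{verbatim}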
16
7
1607.04299
1607
1607.06077_arXiv.txt
The possibility that the dark matter comprises primordial black holes (PBHs) is considered, with particular emphasis on the currently allowed mass windows at $10^{16}$ -- $10^{17}\,$g, $10^{20}$ -- $10^{24}\,$g and $1$ -- $10^{3}\,M_{\odot}$. The Planck mass relics of smaller evaporating PBHs are also considered. All relevant constraints (lensing, dynamical, large-scale structure and accretion) are reviewed and various effects necessary for a precise calculation of the PBH abundance (non-Gaussianity, non-sphericity, critical collapse and merging) are accounted for. It is difficult to put all the dark matter in PBHs if their mass function is monochromatic but this is still possible if the mass function is extended, as expected in many scenarios. A novel procedure for confronting observational constraints with an extended PBH mass spectrum is therefore introduced. This applies for arbitrary constraints and a wide range of PBH formation models, and allows us to identify which model-independent conclusions can be drawn from constraints over all mass ranges. We focus particularly on PBHs generated by inflation, pointing out which effects in the formation process influence the mapping from the inflationary power spectrum to the PBH mass function. We then apply our scheme to two specific inflationary models in which PBHs provide the dark matter. The possibility that the dark matter is in intermediate-mass PBHs of $1$ -- $10^{3}\,M_{\odot}$ is of special interest in view of the recent detection of black-hole mergers by LIGO. The possibility of Planck relics is also intriguing but virtually untestable.
\label{sec:Introduction} \noindent Primordial black holes (PBHs) have been a source of intense interest for nearly 50 years \cite{ZeldovichNovikov69}, despite the fact that there is still no evidence for them. One reason for this interest is that only PBHs could be small enough for Hawking radiation to be important \cite{Hawking:1974rv}. This has not yet been confirmed experimentally and there remain major conceptual puzzles associated with the process, with Hawking himself still grappling with these \cite{Hawking:2016msc}. Nevertheless, this discovery is generally recognised as one of the key developments in 20th century physics because it beautifully unifies general relativity, quantum mechanics and thermodynamics. The fact that Hawking was only led to this discovery through contemplating the properties of PBHs illustrates that it can be useful to study something even if it may not exist! PBHs smaller than about $10^{15}\,\grm$ would have evaporated by now with many interesting cosmological consequences. Studies of such consequences have placed useful constraints on models of the early Universe and, more positively, evaporating PBHs have been invoked to explain certain features: for example, the extragalactic \cite{Page:1976wx} and Galactic \cite{Lehoucq:2009ge} $\gamma$-ray backgrounds, antimatter in cosmic rays \cite{Barrau:1999sk}, the annihilation line radiation from the Galactic centre \cite{Bambi:2008kx}, the reionisation of the pregalactic medium \cite{Belotsky:2014twa} and some short-period $\gamma$-ray bursts \cite{Cline:1996zg}. For more comprehensive references, see recent articles by Khlopov \cite{Khlopov:2008qy} and Carr \textit{et al.} \cite{Carr:2009jm} and the book by Calmet \textit{et al.} \cite{Calmet:2014dea}. However, there are usually other possible explanations for these features, so there is no definitive evidence for evaporating PBHs. Attention has therefore shifted to the PBHs larger than $10^{15}\,\grm$, which are unaffected by Hawking radiation. Such PBHs might have various astrophysical consequences, such as providing seeds for the supermassive black holes in galactic nuclei \cite{Bean:2002kx}, the generation of large-scale structure through Poisson fluctuations \cite{Afshordi:2003zb} and important effects on the thermal and ionisation history of the Universe \cite{Ricotti:2007au}. For a recent review, in which a particular PBH-producing model is shown to solve these and several other observational problems, see Ref.~\cite{Dolgov:2016qsm}. But perhaps the most exciting possibility{\,---\,}and the main focus of this paper{\,---\,}is that they could provide the dark matter which comprises $25\%$ of the critical density, an idea that goes back to the earliest days of PBH research \cite{1975Natur.253..251C}. Since PBHs formed in the radiation-dominated era, they are not subject to the well-known big-bang nucleosynthesis (BBNS) constraint that baryons can have at most $5\%$ of the critical density \cite{Cyburt:2003fe}. They should therefore be classed as non-baryonic and from a dynamical perspective they behave like any other form of cold dark matter (CDM). There is still no compelling evidence that PBHs provide the dark matter, but nor is there for any of the more traditional CDM candidates. 
One favored candidate is a Weakly Interacting Massive Particle (WIMP), such as the lightest supersymmetric particle \cite{Jungman:1995df} or the axion \cite{Preskill:1982cy}, but 30 years of accelerator experiments and direct dark-matter searches have not confirmed the existence of these particles \cite{DiValentino:2014zna}. One should not be too deterred by this{\,---\,}after all, the existence of gravitational waves was predicted 100 years ago, the first searches began nearly 50 years ago \cite{Weber:1969bz} and they were only finally detected by LIGO a few months ago \cite{Abbott:2016blz}. Nevertheless, even some theorists have become pessimistic about WIMPs \cite{Frampton:2015xza}, so this does encourage the search for alternative candidates. There was a flurry of excitement around the PBH dark-matter hypothesis in the 1990s, when the Massive Astrophysical Compact Halo Object (MACHO) microlensing results \cite{Alcock:1996yv} suggested that the dark matter could be in compact objects of mass $0.5\,M_{\odot}$ since alternative MACHO candidates could be excluded and PBHs of this mass might naturally form at the quark-hadron phase transition at $10^{-5}\,\srm$ \cite{Jedamzik:1998hc}. Subsequently, however, it was shown that such objects could comprise only $20\%$ of the dark matter and indeed the entire mass range $10^{-7}\,M_{\odot}$ to $10\,M_{\odot}$ was excluded from providing the dark matter \cite{Tisserand:2006zx}. At one point there were claims to have discovered a critical density of $10^{-3}\,M_{\odot}$ PBHs through the microlensing of quasars \cite{1993Natur.366..242H} but this claim was met with scepticism \cite{Zackrisson:2003wu} and would seem to be incompatible with other lensing constraints. Also femtolensing of $\gamma$-ray bursts excluded $10^{17}$ -- $10^{20}\,\grm$ PBHs \cite{Nemiroff:2001bp}, microlensing of quasars constrained $10^{-3}$ -- $60\,M_{\odot}$ PBHs \cite{1994ApJ...424..550D} and millilensing of compact radio sources excluded $10^{6}$ -- $10^{9}\,M_{\odot}$ PBHs \cite{Wilkinson:2001vv} from explaining the dark matter. Dynamical constraints associated with the tidal disruption of globular clusters, the heating of the Galactic disc and the dragging of halo objects into the Galactic nucleus by dynamical friction excluded PBHs in the mass range above $10^{5}\,M_{\odot}$ \cite{Carr:1997cn}. About a decade ago, these lensing and dynamical constraints appeared to allow three mass ranges in which PBHs could provide the dark matter \cite{Barrau:2003xp}: the subatomic-size range ($10^{16}$ -- $10^{17}\,\grm$), the sublunar mass range ($10^{20}$ -- $10^{26}\,\grm$) and what is sometimes termed the intermediate-mass black hole (IMBH) range ($10$ -- $10^{5}\,M_{\odot}$).\footnote{This term is commonly used to describe black holes intermediate between those which derive from the collapse of ordinary stars and the supermassive ones which derive from general relativistic instability, perhaps the remnants of a first generation of Population III stars larger than $10^{2}\,M_{\odot}$. 
Here we use it in a more extended sense to include the $\Ocal(10)\,M_{\odot}$ black holes detected by LIGO.} The lowest range may now be excluded by Galactic $\gamma$-ray observations \cite{Carr:2016hva} and the middle range{\,---\,}although the first to be proposed as a PBH dark-matter candidate \cite{1975Natur.253..251C}{\,---\,}is under tension because such PBHs would be captured by stars, whose neutron star or white-dwarf remnants would subsequently be destroyed by accretion \cite{Capela:2013yf}. One problem with PBHs in the IMBH range is that such objects would disrupt wide binaries in the Galactic disc. It was originally claimed that this would exclude objects above $400\,M_{\odot}$ \cite{Quinn:2009zg} but more recent studies may reduce this mass \cite{Monroy-Rodriguez:2014ula}, so the narrow window between the microlensing and wide-binary bounds is shrinking. Nevertheless, this suggestion is topical because PBHs in the IMBH range could naturally arise in the inflationary scenario \cite{Frampton:2010sw} and might also explain the sort of massive black-hole mergers observed by LIGO \cite{Bird:2016dcv}. The suggestion that LIGO could detect gravitational waves from a population of IMBHs comprising the dark matter was originally proposed in the context of the Population III ``VMO'' scenario by Bond \& Carr \cite{1984MNRAS.207..585B}. This is now regarded as unlikely, since the precursor stars would be baryonic and therefore subject to the BBNS constraint, but the same possibility applies for IMBHs of primordial origin. Most of the PBH dark-matter proposals assume that the mass function of the black holes is very narrow (\ie~nearly monochromatic). However, this is unrealistic and in most scenarios one would expect the mass function to be extended. In particular, this arises if they form with the low mass tail expected in critical collapse \cite{Niemeyer:1997mt}. Indeed, it has been claimed that this would allow PBHs somewhat above $10^{15}\,\grm$ to contribute to both the dark matter and the $\gamma$-ray background \cite{Yokoyama:1998xd}. However, this assumes that the ``bare'' PBH mass function (\ie~without the low-mass tail) has a monochromatic form and recently it has been realised that the tail could have a wider variety of forms if one drops this assumption \cite{Kuhnel:2015vtw}. There are also many scenarios (\eg~PBH formation from the collapse of cosmic strings) in which even the bare mass function may be extended. This raises two interesting questions: (1) Is there still a mass window in which PBHs could provide all of the dark matter without violating the bounds in other mass ranges? (2) If there is no mass scale at which PBHs could provide all the dark matter for a nearly monochromatic mass function, could they still provide it by being spread out in mass? In this paper we will show how to address these questions for both a specific extended mass function and for the more general situation. As far as we are aware, this issue has not been discussed in the literature before and we will apply this methodology to the three mass ranges mentioned above. There are subtleties involved when applying differential limits to models with extended mass distributions, especially when the experimental bounds come without a mention of the bin size or when different limits using different bin sizes are combined. 
In order to make model-independent statements, we also discuss which physical effects need to be taken into account in confronting a model capable of yielding a significant PBH abundance with relevant constraints. This includes critical collapse, non-sphericity and non-Gaussianity, all of which we investigate quantitatively for two specific inflationary models. We also discuss qualitatively some other extensions of the standard model. In principle, this approach could constrain the primordial curvature perturbations even if PBHs are excluded as dark-matter candidates \cite{Josan:2009qn}. The plan of this paper is as follows: In Sec.~\ref{sec:ModelsInRelevantRanges} we review the PBH formation mechanisms. In Sec.~\ref{sec:SpecificModelsInRelevantRanges} we give a more detailed description of two inflationary models for PBH formation, later used to demonstrate our methodology. In Sec.~\ref{sec:RealisticMassFunctions} we consider some issues which are important in going from the initial curvature or density power spectrum to the PBH mass function. Many of these issues are not fully understood, but they may have a large impact on the final mass function, so their proper treatment is crucial in drawing conclusions about inflationary models from PBH constraints. In Sec.~\ref{sec:Constraints} we review the constraints for PBHs in the non-evaporating mass ranges above $10^{15}$g, concentrating particularly on the weakly constrained region around $\Ocal(10)\,M_{\odot}$. In Sec.~\ref{sec:ExtendedPBH} we explore how an extended mass function can still contain all the dark matter. In order to make model-independent exclusions, we develop a methodology for applying arbitrary constraints to any form of extended mass function. In Sec.~\ref{sec:LIGO} we discuss the new opportunities offered by gravitational-wave astronomy and the possible implications of the LIGO events. In Sec.~\ref{sec:Summary-and-Outlook} we summarise our results and outline their implications for future PBH searches.
\label{sec:Constraints} \noindent We now review the various constraints associated with PBHs which are too large to have evaporated yet, updating the equivalent discussion which appeared in Carr \textit{et al.}~\cite{Carr:2009jm}. All the limits assume that PBHs cluster in the Galactic halo in the same way as other forms of CDM. In this case, the fraction $f( M )$ of the halo in PBHs is related to $\beta'( M )$ by Eq.~\eqref{eq:f}. Our limits on $f( M )$ are summarised in Fig.~\ref{fig:large}, which is an updated version of Fig.~8 of Ref.~\cite{Carr:2009jm}. A list of approximate formulae for these limits is given in Tab.~\ref{tab:ConstraintSummary}. Both Fig.~\ref{fig:large} and Tab.~\ref{tab:ConstraintSummary} are intended merely as an overview and are not exact. A more precise discussion can be found in the original references. Many of the constraints depend on other physical parameters, not shown explicitly. In general, we show only the most stringent constraints in each mass range, although constraints are sometimes omitted when they are contentious. Further details of these limits and similar figures can be found in other papers: for example, Tab.~1 of Josan \textit{et al.} \cite{Josan:2009qn}, Fig.~4 of Mack \textit{et al.} \cite{Mack:2006gz}, Fig.~9 of Ricotti \textit{et al.} \cite{Ricotti:2007au}, Fig.~1 of Capela \textit{et al.} \cite{Capela:2013yf} and Fig.~1 of Clesse \& Garcia-Bellido \cite{Clesse:2016vqa}. We group the limits by type and discuss those within each type in order of increasing mass. Since we are also interested in the mass ranges for which the dark-matter fraction is small, where possible we express each limit in terms of an analytic function $f_{\mathrm{max}}( M )$ over some mass range. We do not treat Planck-mass relics, since the only constraint on these is that they must have less than the CDM density, but we do discuss them further in Sec.~\ref{sec:ExtendedPBH}. \begin{figure} \begin{center} \includegraphics{figure3} \end{center} \caption{ Constraints on $f( M )$ for a variety of evaporation (magenta), dynamical (red), lensing (cyan), large-scale structure (green) and accretion (orange) effects associated with PBHs. The effects are extragalactic $\gamma$-rays from evaporation (EG) \cite{Carr:2009jm}, femtolensing of $\gamma$-ray bursts (F) \cite{Barnacka:2012bm}, white-dwarf explosions (WD) \cite{2015PhRvD..92f3007G}, neutron-star capture (NS) \cite{Capela:2013yf}, Kepler microlensing of stars (K) \cite{Griest:2013aaa}, MACHO/EROS/OGLE microlensing of stars (ML) \cite{Tisserand:2006zx, Novati:2013fxa} and quasar microlensing (broken line) (ML) \cite{2009ApJ...706.1451M}, survival of a star cluster in Eridanus II (E) \cite{Brandt:2016aco}, wide-binary disruption (WB) \cite{Quinn:2009zg}, dynamical friction on halo objects (DF) \cite{Carr:1997cn}, millilensing of quasars (mLQ) \cite{Wilkinson:2001vv}, generation of large-scale structure through Poisson fluctuations (LSS) \cite{Afshordi:2003zb}, and accretion effects (WMAP, FIRAS) \cite{Ricotti:2007au}. Only the strongest constraint is usually included in each mass range, but the accretion limits are shown with broken lines since they are highly model-dependent. Where a constraint depends on some extra parameter which is not well-known, we use a typical value. Most constraints cut off at high $M$ due to the incredulity limit. 
See the original references for more accurate forms of these constraints.} \label{fig:large} \end{figure} \subsection{Evaporation Constraints} A PBH of initial mass $M$ will evaporate through the emission of Hawking radiation on a timescale $\tau \propto M^{3}$ which is less than the present age of the Universe for $M$ less than $M_{*} \approx 5 \times 10^{14}$\,g \cite{Carr:2016hva}. PBHs with $M > M_{*}$ could still be relevant to the dark-matter problem, although there is a strong constraint on $f( M_{*} )$ from observations of the extragalactic $\gamma$-ray background \cite{Page:1976wx}. Those in the narrow band $M_{*} < M < 1.005\,M_{*}$ have not yet completed their evaporation but their current mass is below the mass $M_{q} \approx 0.4\,M_{*}$ at which quark and gluon jets are emitted. For $M > M_{q}$, there is no jet emission. For $M > 2 M_{*}$, one can neglect the change of mass altogether and the time-integrated spectrum $\drm N^{\gamma}/ \drm E$ of photons from each PBH is just obtained by multiplying the instantaneous spectrum $\drm \dot{N}^{\gamma}/ \drm E$ by the age of the Universe $t_{0}$. From Ref.~\cite{Carr:2009jm} this gives \begin{equation} \frac{\drm N^\gamma }{\drm E} \propto \begin{cases} E^{3}\,M^{3} & ( E < M^{-1} )\, , \\ E^{2}\,M^{2}\,\erm^{-E M} & ( E > M^{-1} ) \, . \end{cases} \end{equation} This peaks at $E \sim M^{-1}$ with a value independent of $M$. The number of background photons per unit energy per unit volume from all the PBHs is obtained by integrating over the mass function: \begin{equation} \Ecal( E ) = \int_{M_{\mathrm{min}}}^{M_{\mathrm{max}}} \!\drm M\, \frac{\drm n}{\drm M}\, \frac{\drm N^\gamma }{\drm E}(M,E) \, , \end{equation} where $M_{\mathrm{min}}$ and $M_{\mathrm{max}}$ specify the mass limits. For a monochromatic mass function, this gives \begin{equation} \Ecal( E ) \propto f( M ) \times \begin{cases} E^{3}\,M^{2} & ( E < M^{-1} )\, ,\\ E^{2}\,M\,\erm^{-E M} & ( E > M^{-1} )\, , \end{cases} \end{equation} and the associated intensity is \begin{equation} I( E ) \equiv \frac{ c \, E \, \Ecal( E )}{4 \pi} \propto f( M ) \times \begin{cases} E^{4}\,M^{2} & ( E < M^{-1} )\, ,\\ E^{3}\,M\,\erm^{- E M} & ( E > M^{-1} )\, , \end{cases} \end{equation} with units $ \mathrm s^{-1}\, \mathrm{sr}^{-1}\,\mathrm{cm}^{-2}$. This peaks at $E \sim M^{-1} $ with a value $I^{\mathrm{max}} ( M ) \propto f( M ) M^{-2}$. The observed extragalactic intensity is $I^\mathrm{obs} \propto E^{-(1+\epsilon)}\propto M^{1+\epsilon} $ where $\epsilon$ lies between $0.1$ (the value favoured in Ref.~\cite{Sreekumar:1997un}) and $ 0.4 $ (the value favoured in Ref.~\cite{Strong:2004ry}). Hence putting $I^{\mathrm{max}}( M ) \le I^\mathrm{obs} ( M ) $ gives \cite{Carr:2009jm} \begin{equation} f( M ) \lesssim 2 \times 10^{-8}\, \left( \frac{ M }{ M_{*} } \right)^{\!\!3 + \epsilon} \quad ( M > M_{*} = 5 \times 10^{14} \grm ) \; . \label{photon2} \end{equation} In Fig.~\ref{fig:large} we plot this constraint for $\epsilon = 0.2$. The Galactic $\gamma$-ray background constraint could give a stronger limit \cite{Carr:2016hva} but this requires the mass function to be extended and depends sensitively on its form, so we do not discuss it here. The reionising effects of $10^{16}$ -- $10^{17}\,$g PBHs might also be associated with interesting constraints \cite{Belotsky:2014twa}. \subsection{Lensing Constraints} \label{sec:Lensing-Constraints} Constraints on MACHOs with very low $M$ come from the femtolensing of $\gamma$-ray bursts. 
Assuming the bursts are at a redshift $z \sim 1$, early studies~\cite{Marani:1998sh,Nemiroff:2001bp} excluded $f = 1$ in the mass range $10^{-16}$ -- $10^{-13}\,M_{\odot}$ but more recent work~\cite{Barnacka:2012bm} gives a limit which can be approximated as \begin{equation} f( M ) < 0.1 \quad ( 5 \times 10^{16} \grm < M < 10^{19}\,\grm ) \, . \label{femto} \end{equation} The precise form of this limit is shown in Fig.~\ref{fig:large}. Microlensing observations of stars in the Large and Small Magellanic Clouds probe the fraction of the Galactic halo in MACHOs of a certain mass range \cite{Paczynski:1985jf}. The optical depth of the halo towards LMC and SMC, defined as the probability that any given star is amplified by at least $1.34$ at a given time, is related to the fraction $f$ by \begin{equation} \tau^{(\mathrm{SMC})}_{\mathrm L} = 1.4\,\tau^{(\mathrm{LMC})}_{\mathrm L} = 6.6 \times 10^{-7}\,f \end{equation} for the S halo model \cite{Alcock:2000ph}. Although the initial motivation for microlensing surveys was to search for brown dwarfs with $0.02\,M_{\odot} < M < 0.08\,M_{\odot}$, the possibility that the halo is dominated by these objects was soon ruled out by the MACHO experiment \cite{Alcock:2000ke}. However, MACHO observed $17$ events and claimed that these were consistent with compact objects of $M \sim 0.5\,M_{\odot}$ contributing $20\,\%$ of the halo mass \cite{Alcock:2000ph}. This raised the possibility that some of the halo dark matter could be PBHs formed at the QCD phase transition \cite{Jedamzik:1996mr,Widerin:1998my, Jedamzik:1999am}. However, later studies suggested that the halo contribution of $M \sim 0.5\,M_{\odot}$ PBHs could be at most 10\%~\cite{Hamadache:2006fw}. The EROS experiment obtained more stringent constraints by arguing that some of the MACHO events were due to self-lensing or halo clumpiness \cite{Tisserand:2006zx} and excluded $ 6 \times 10^{-8}\,M_{\odot} < M < 15\,M_{\odot}$ MACHOs from dominating the halo. Combining the earlier MACHO \cite{Allsman:2000kg} results with the EROS-I and EROS-II results extended the upper bound to $ 30\,M_{\odot}$ \cite{Tisserand:2006zx}. The constraints from MACHO and EROS about a decade ago may be summarised as follows: \begin{equation} f( M ) < \begin{cases} 1 & ( 6 \times 10^{-8}\,M_{\odot}< M < 30\,M_{\odot} )\, ,\\ 0.1 & ( 10^{-6}\,M_{\odot}< M < 1\,M_{\odot} )\, ,\\ 0.04 & ( 10^{-3}\,M_{\odot}< M < 0.1\,M_{\odot} )\,. \end{cases} \end{equation} Similar limits were obtained by the POINT-AGAPE collaboration, which detected $6$ microlensing events in a survey of the Andromeda galaxy \cite{CalchiNovati:2005cd}. Since then further limits have come from the OGLE experiment. The OGLE-II data \cite{2009MNRAS397, Novati:2009kq, Wyrzykowski:2010bh} yielded somewhat weaker constraints but data from OGLE-III \cite{Wyrzykowski:2010mh} and OGLE-IV \cite{Wyrzykowski:2011tr} gave stronger results for the high mass range: \begin{equation} f( M ) < \begin{cases} 0.2 & ( 0.1\,M_{\odot}< M < 20\,M_{\odot} )\, ,\\ 0.09 & ( 0.4\,M_{\odot}< M < 1\,M_{\odot} )\, ,\\ 0.06 & ( 0.1\,M_{\odot}< M < 0.4\,M_{\odot} )\,. \end{cases} \end{equation} We include this limit in Fig.~\ref{fig:large} and Tab.~\ref{tab:ConstraintSummary}, but stress that it depends on some unidentified detections being attributed to self-lensing. Later (comparable) constraints combining EROS and OGLE data were presented in Ref.~\cite{Novati:2013fxa}. 
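As an editorial aside (not a calculation from the paper), the optical-depth relation quoted above translates a halo fraction $f$ directly into the instantaneous probability that a given Magellanic Cloud star is magnified by more than 1.34:
\begin{verbatim}
# Sketch: microlensing optical depth towards the LMC/SMC for halo fraction f,
# using the S-model relation tau_SMC = 1.4 * tau_LMC = 6.6e-7 * f quoted above.
def optical_depths(f):
    tau_smc = 6.6e-7 * f
    return tau_smc / 1.4, tau_smc   # (tau_LMC, tau_SMC)

for f in (1.0, 0.2, 0.04):
    tau_lmc, tau_smc = optical_depths(f)
    print(f"f = {f:4.2f}:  tau_LMC = {tau_lmc:.1e},  tau_SMC = {tau_smc:.1e}")
\end{verbatim}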
Recently, Kepler data have improved the limits considerably in the low-mass range \cite{Griest:2013esa,Griest:2013aaa}: \begin{equation} f( M ) < 0.3 \qquad ( 2 \times 10^{-9}\,M_{\odot}< M < 10^{-7}\,M_{\odot} ) \, . \end{equation} It should be stressed that many papers give microlensing limits on $f( M )$ but it is not easy to combine these limits because they use different confidence levels. One must also distinguish between limits based on positive detections and those based on null detections. The only positive detection in the high mass range comes from Dong \textit{et al.} \cite{Dong:2007px}. Early studies of the microlensing of quasars \cite{1994ApJ...424..550D} seemed to exclude all the dark matter being in objects with $10^{-3} M_{\odot} < M < 60 M_{\odot}$. However, this limit does not apply in the $\Lambda$CDM picture and so is not shown in Fig.~\ref{fig:large}. More recent studies of quasar microlensing suggest a limit \cite{2009ApJ...706.1451M} \begin{equation} f( M ) < 1 \quad ( 10^{-3}\,M_{\odot}< M < 60\,M_{\odot} ) \, . \end{equation} However, this limit might not apply in the $\Lambda$CDM picture, and furthermore the paper reports only three data points, so the limit is shown as a dashed line in Fig.~\ref{fig:large}. In this context, Hawkins \cite{1993Natur.366..242H} once claimed evidence for a critical density of Jupiter-mass objects from observations of quasar microlensing and associated these with PBHs formed at the quark-hadron transition. However, the status of his observations is no longer clear \cite{Zackrisson:2003wu}, so this is not included in Fig.~\ref{fig:large}. Millilensing of compact radio sources \cite{Wilkinson:2001vv} gives a limit which can be approximated as \begin{equation} f( M ) < \begin{cases} ( M / 2 \times 10^{4}\, M_{\odot} )^{-2} & ( M < 10^{5}\,M_{\odot} )\, , \\ 0.06 & ( 10^{5}\,M_{\odot}< M < 10^{8}\,M_{\odot} )\, , \\ (M / 4 \times 10^{8}\,M_{\odot} )^{2} & ( M > 10^{8}\,M_{\odot} )\, . \end{cases} \end{equation} Though it is weaker than other constraints in this mass range, we include this limit in Fig.~\ref{fig:large} and Tab.~\ref{tab:ConstraintSummary}. The lensing of fast radio bursts could imply strong constraints in the range above $10\, M_{\odot}$ but these are not shown in Fig.~\ref{fig:large}, since they are only potential limits \cite{Munoz:2016tmg}. \subsection{Dynamical Constraints} \label{sec:Dynamical-Constraints} The effects of PBH collisions on astronomical objects{\,---\,}including the Earth \cite{1973Natur.245...88J}{\,---\,}have been a subject of long-standing interest \cite{Carr:1997cn}. For example, Zhilyaev \cite{Zhilyaev:2007rx} has suggested that collisions with stars could produce $\gamma$-ray bursts and Khriplovich \textit{et al.} \cite{Khriplovich:2008er} have examined whether terrestrial collisions could be detected acoustically. Gravitational-wave observatories in space might detect the dynamical effects of PBHs. For example, eLISA could detect PBHs in the mass range $10^{14}$ -- $10^{20}\,{\grm}$ by measuring the gravitational impulse induced by any nearby passing one \cite{Adams:2004pk, Seto:2004zu}. However, we do not show these constraints in Fig.~\ref{fig:large} since they are only potential. Roncadelli \textit{et al.} \cite{Roncadelli:2009qj} have suggested that halo PBHs could be captured and swallowed by stars in the Galactic disc. The stars would eventually be accreted by the holes, producing a large amount of radiation and a population of subsolar black holes which could only be of primordial origin. 
They argue that every disc star would contain such a black hole if the dark matter were in PBHs smaller than $3 \times 10^{26}$\,g and the following analytic argument \cite{Carr:2009jm} gives the form of the constraint. Since the time-scale on which a star captures a PBH scales as $\tau_{\mathrm{cap}} \propto n_{\mathrm{PBH}}^{-1} \propto M\.f( M )^{-1}$, requiring this to exceed the age of the Galactic disc implies \begin{equation} f < ( M / 3 \times 10^{26}\,\grm ) \, , \label{neutron} \end{equation} which corresponds to a \emph{lower} limit on the mass of objects providing the dark matter. A similar analysis of the collisions of PBHs with main-sequence stars, red-giant cores, white dwarfs and neutron stars by Abramowicz \textit{et al.} \cite{Abramowicz:2008df} suggests that collisions are too rare for $M > 10^{20}\,\grm$ or produce too little power to be detectable for $M < 10^{20}\,$g. However, in a related argument, Capela {\it et al.} have constrained PBHs as dark-matter candidates by considering their capture by white dwarfs \cite{Capela:2012jz} and neutron stars \cite{Capela:2013yf}. The survival of these objects implies a limit which can be approximated as \begin{equation} f( M ) < \frac{M}{4.7\times 10^{24}\,\grm} \left( 1 - \exp \left[ - \frac{ M }{ 2.9 \times 10^{23}\,\grm } \right] \right)^{\!\!-1} \qquad \left( 2.5 \times 10^{18} \grm < M < 10^{25}\,\grm \right) . \end{equation} This is similar to Eq.~\eqref{neutron} at the high-mass end, the upper cut-off at $10^{25}$\,g corresponding to the condition $f = 1$. There is also a lower cut-off at $2 \times 10^{18}$\,g because PBHs lighter than this will not have time to consume the neutron stars during the age of the Universe. This argument assumes that there is dark-matter at the centers of globular clusters and is sensitive to the dark-matter density there (taken to be $10^{4}$\,GeV$\,$cm$^{-3}$). Pani \& Loeb \cite{Pani:2014rca} have argued that this excludes PBHs from providing the dark matter throughout the sublunar window, although this has been disputed \cite{Capela:2014qea, Defillon:2014wla}. In fact, the dark-matter density is limited to much lower values than assumed above for particular globular clusters \cite{Ibata:2012eq, Bradford:2011aq}. Binary star systems with wide separation are vulnerable to disruption from encounters with MACHOs \cite{1985ApJ...290...15B,1987ApJ...312..367W}. Observations of wide binaries in the Galaxy therefore constrain the abundance of halo PBHs. By comparing the results of simulations with observations, Yoo \textit{et al.} \cite{Yoo:2003fr} originally ruled out MACHOs with $M > 43\,M_{\odot}$ from providing the dark matter. However, a careful analysis by Quinn \textit{et al.} \cite{Quinn:2009zg} of the radial velocities of these binaries found that the widest-separation one was spurious, so that the constraint became \begin{equation} f( M ) < \begin{cases} ( M / 500\,M_{\odot} )^{-1} & ( 500\,M_{\odot}< M \lesssim 10^{3}\,M_{\odot} )\, , \\ 0.4 & ( 10^{3}\,M_{\odot}\lesssim M < 10^{8}\,M_{\odot} )\,. \end{cases} \end{equation} It flattens off above $10^{3}\,M_{\odot}$ because the encounters are non-impulsive there. Although not shown in Fig.~\ref{fig:large}, more recent studies by Monroy-Rodriguez \& Allen reduce the mass at which $f$ can be $1$ from $500\,M_{\odot}$ to $21$ -- $78\,M_{\odot}$ or even $7$ -- $12\,M_{\odot}$ \cite{Monroy-Rodriguez:2014ula}. 
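For orientation, the wide-binary limit just quoted can be evaluated numerically as in the sketch below, which merely transcribes the break points given in the text (the Quinn \textit{et al.} form) and is not the analysis of the original papers.
\begin{verbatim}
import numpy as np

def f_wide_binary(m):
    """Wide-binary disruption limit on f(M), Quinn et al. form; m in M_sun."""
    if 500.0 < m <= 1e3:
        return 500.0 / m        # (M / 500 M_sun)^(-1)
    if 1e3 < m < 1e8:
        return 0.4              # flattens off where encounters are non-impulsive
    return np.inf               # no constraint quoted outside this range

for m in [6e2, 1e3, 1e5, 1e7]:
    print(f"M = {m:8.1e} M_sun   f < {f_wide_binary(m):.3g}")
\end{verbatim}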
The narrow window between the microlensing lower bound and the wide-binary upper bound is therefore shrinking and may even have been eliminated altogether (see Sec.~\ref{sec:ExtendedPBH}). A variety of dynamical constraints come into play at higher mass scales. These have been studied by Carr and Sakellariadou \cite{Carr:1997cn} and apply providing there is at least one PBH per galactic halo. This corresponds to the condition \begin{equation} f( M ) > ( M / M_{\mathrm{halo}} ), \quad M_{\mathrm{halo}} \approx 3 \times 10^{12}\,M_{\odot} \, , \label{incredulity} \end{equation} which they term the ``incredulity limit''. An argument similar to the binary disruption one shows that the survival of globular clusters against tidal disruption by passing PBHs gives a limit (not shown in Fig.~\ref{fig:large}) \begin{equation} f( M ) < \begin{cases} ( M / 3 \times 10^{4}\,M_{\odot} )^{-1} & ( 3 \times 10^{4}\,M_{\odot} < M < 10^{6}\,M_{\odot} )\, , \\ 0.03 & ( 10^{6}\,M_{\odot} < M < 10^{11}\,M_{\odot} )\, , \\ ( M / M_{\mathrm{halo}} ) & ( M > 10^{11}\,M_{\odot} ) \, , \end{cases} \end{equation} although this depends sensitively on the mass and the radius of the cluster. The limit flattens off above $ 10^{6}\,M_{\odot}$ because the encounter becomes non-impulsive (\cf~the binary case). The upper limit of $ 3 \times 10^{4}\,M_{\odot}$ on the mass of objects dominating the halo is consistent with the numerical calculations of Moore \cite{1993ApJ...413L..93M}. In a related limit, Brandt \cite{Brandt:2016aco} claims that a mass above $5\, M_{\odot}$ is excluded by the fact that a star cluster near the centre of the dwarf galaxy Eridanus II has not been disrupted by halo objects. His constraint can be written as \begin{equation} f( M ) \lesssim \begin{cases} ( M / 3.7\,M_{\odot} )^{-1} / [ 1.1 - 0.1\ln( M / M_{\odot} ) ] & ( M < 10^{3} M_{\odot} ) \, ,\\ ( M / 10^{6} M_{\odot} ) & ( M > 10^{3} M_{\odot} ) \, , \end{cases} \label{eq:eribound} \end{equation} where the density of the dark matter at the center of the galaxy is taken to be $0.1\,M_{\odot}\.\mathrm{pc}^{-3}$, the velocity dispersion there is taken to be $5\.{\rm km}\.{\rm s}^{-1}$, and the age of the star cluster is taken to be $3\,{\rm Gyr}$. The second expression in Eq.~\eqref{eq:eribound} was not included in Ref.~\cite{Brandt:2016aco} but is the incredulity limit, corresponding to having one black hole for the dwarf galaxy. Halo objects will overheat the stars in the Galactic disc unless one has \cite{Carr:1997cn} \begin{equation} f( M ) < \begin{cases} ( M / 3 \times 10^{6}\,M_{\odot} )^{-1} & ( M < 3 \times 10^{9}\,M_{\odot} ) \, , \\ ( M / M_{\mathrm{halo}} ) & ( M > 3 \times 10^{9}\,M_{\odot} ) \, , \end{cases} \label{disc} \end{equation} where the lower expression is the incredulity limit. The upper limit of $3 \times 10^{6}\,M_{\odot}$ agrees with the more precise calculations by Lacey and Ostriker \cite{1985ApJ...299..633L}, although they argued that black holes with $2 \times 10^{6}\,M_{\odot}$ could \emph{explain} some features of disc heating. Constraint \eqref{disc} bottoms out at $M \sim 3 \times 10^{9}\,M_{\odot}$ with a value $f \sim 10^{-3}$. Evidence for a similar effect may come from the claim of Totani \cite{Totani:2009af} that elliptical galaxies are puffed up by dark halo objects of $10^{5}\,M_{\odot}$. These disk-heating limits are not shown in Fig.~\ref{fig:large} because they are smaller than other limits in this mass range. 
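The Eridanus II bound of Eq.~\eqref{eq:eribound} is similarly easy to evaluate; the sketch below simply transcribes that expression, including the incredulity branch, and is not the calculation of Ref.~\cite{Brandt:2016aco}.
\begin{verbatim}
import numpy as np

def f_eridanus(m):
    """Approximate Eridanus II limit, Eq. (eq:eribound); m in solar masses."""
    if m < 1e3:
        return (3.7 / m) / (1.1 - 0.1 * np.log(m))
    return m / 1e6              # incredulity limit: one black hole per dwarf galaxy

for m in [5.0, 10.0, 100.0, 1e3, 1e5]:
    print(f"M = {m:8.1e} M_sun   f < {f_eridanus(m):.3g}")
\end{verbatim}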
Another limit in this mass range arises because halo objects will be dragged into the nucleus of our own Galaxy by the dynamical friction of the spheroid stars and halo objects themselves (if they have an extended mass function), leading to excessive nuclear mass unless \cite{Carr:1997cn} \begin{equation} f( M ) < \begin{cases} ( M / 2 \times 10^{4}\,M_{\odot} )^{-10/7}\,( r_{\mathrm c} / 2\,\mathrm{kpc} )^{2} & ( M < 5 \times 10^{5}\,M_{\odot} )\, , \\ ( M / 4 \times 10^{4}\,M_{\odot} )^{-2}\,( r_{\mathrm c} / 2\,\mathrm{kpc} )^{2} & ( 5 \times 10^{5}\,M_{\odot} < M < 2 \times 10^{6}\,( r_{\mathrm c} / 2\,\mathrm{kpc} )\,M_{\odot} )\, , \\ ( M / 0.1\,M_{\odot} )^{-1/2} & ( 2 \times 10^{6}\,(r_{\mathrm c} / 2\,\mathrm{kpc} )\,M_{\odot} < M < 10^{7} M_{\odot} )\, , \\ ( M / M_{\mathrm{halo}} ) & ( M > 10^{7} M_{\odot} ) \, . \end{cases} \end{equation} The last expression is the incredulity limit and the first three correspond to the drag being dominated by spheroid stars (low $M$), halo objects (high $M$) and some combination of the two (intermediate $M$). The limit bottoms out at $M \sim 10^{7} \, M_{\odot}$ with a value $f \sim 10^{-5}$ but is sensitive to the halo core radius $r_{\mathrm c}$. There is also a caveat here, in that holes drifting into the nucleus might be ejected by the slingshot mechanism if there is already a binary black hole there \cite{Hut:1992iy}. This possibility was explored by Xu and Ostriker \cite{Xu:1994vb}, who obtained an upper limit of $3 \times 10^{6}\,M_{\odot}$. Each of these dynamical constraints is subject to certain provisos, but it is interesting that they all correspond to an upper limit on the mass of the objects which dominate the halo in the range $500\,-\,2 \times 10^{4}\,M_{\odot}$, the binary-disruption limit being the strongest. This is particularly relevant for constraining models in which the dark matter is postulated to comprise IMBHs. Apart from the Galactic disc and elliptical galaxy heating arguments of Refs.~\cite{1985ApJ...299..633L,Totani:2009af}, it must be stressed that none of these dynamical effects gives {\it positive} evidence for MACHOs. Furthermore, none of them requires the MACHOs to be PBHs. Indeed, they could equally well be clusters of smaller objects \cite{1987ApJ...316...23C,Belotsky:2015psa} or Ultra-Compact Mini-Halos (UCMHs) \cite{Bringmann:2011ut}. This is pertinent in light of the claim by Dokuchaev {\it et al.} \cite{Dokuchaev:2004kr} and Chisholm \cite{Chisholm:2005vm} that PBHs could form in tight clusters, giving a local overdensity well in excess of that provided by the halo concentration alone. It is also important to note that the UCMH constraints on the density perturbations may be stronger than the PBH limits in the higher-mass range \cite{Bringmann:2011ut}. This is relevant if one wants to consider the effect of an extended mass function. \subsection{Large-Scale Structure Constraints} \label{sec:Large--Scale-Structure-Constraints} Sufficiently large PBHs could have important consequences for large-scale structure formation because of the Poisson fluctuations in their number density. This effect was first pointed out by M\'esz\'aros \cite{Meszaros:1975ef} and subsequently studied by various authors \cite{1983ApJ...275..405F, 1977A&A....56..377C, 1983ApJ...268....1C}. In particular, Afshordi \textit{et al.} \cite{Afshordi:2003zb} used observations of the Lyman-$ \alpha $ forest to obtain an upper limit of about $ 10^{4}\,M_{\odot}$ on the mass of any PBHs which provide the dark matter.
Although this conclusion was based on numerical simulations, Carr \textit{et al.} \cite{Carr:2009jm} obtained this result analytically and extended it to the case where the PBHs only provide a fraction $ f( M ) $ of the dark matter. Since the Poisson fluctuation in the number of PBHs on a mass-scale $M_{\mathrm{Ly}\alpha} \sim 10^{10}\,M_{\odot}$ grows between the redshift of CDM domination ($z_{\mathrm{eq}} \sim 4000$) and the redshift at which Lyman-$ \alpha $ clouds are observed ($z_{\mathrm{Ly}\alpha} \sim 4$) by a factor $ z_{\mathrm{eq}} / z_{\mathrm{Ly}\alpha} \sim 10^{3} $, the clouds will bind too early unless \begin{equation} f( M ) < \begin{cases} ( M / 10^{4}\,M_{\odot} )^{-1} ( M_{\mathrm{Ly}\alpha} / 10^{10}\,M_{\odot} ) & ( M < 10^{7} M_{\odot} ) \, , \\ ( M / 10^{10}\,M_{\odot} ) ( M_{\mathrm{Ly}\alpha} / 10^{10}\,M_{\odot} )^{-1} & ( M > 10^{7}\,M_{\odot} ) \, . \end{cases} \label{eq:cluster} \end{equation} The lower expression corresponds to having at least one PBH per Lyman-$\alpha$ mass, so the limit bottoms out at $M \sim 10^{7}\,M_{\odot}$ with a value $f \sim 0.001$. The data from SDSS are more extensive \cite{McDonald:2004eu}, so the limiting mass may now be reduced. A similar effect can allow clusters of large PBHs to evolve into the supermassive black holes in galactic nuclei \cite{1984MNRAS.206..801C, Duechting:2004dk, Khlopov:2004sc}; if one replaces $M_{\mathrm{Ly}\alpha}$ with $ 10^{8}\,M_{\odot}$ and $z_{\mathrm{Ly}\alpha}$ with $10$ in the above analysis, the limiting mass in Eq.~\eqref{eq:cluster} is reduced to $ 600\,M_{\odot}$. Recently, Kashlinsky has been prompted by the LIGO observations to consider the effects of the Poisson fluctuations induced by a dark-matter population of $30\,M_{\odot}$ black holes \cite{Kashlinsky:2016sdv}. This can be seen as a special case of the general analysis presented above. However, he adds an interesting new feature to the scenario by suggesting that the black holes might also lead to the cosmic infrared background (CIB) fluctuations detected by the {\it Spitzer/Akari} satellites \cite{Kashlinsky:2014jja, Helgason:2015ema}. This is because the associated Poisson fluctuations would allow more abundant early collapsed halos than in the standard scenario. It has long been appreciated that the CIB and its fluctuations would be a crucial test of any scenario in which the dark matter comprises the black-hole remnants of Population III stars \cite{Bond:1985pc}, but in this case the PBHs are merely triggering high-redshift star formation and not generating the CIB directly. We do not attempt to derive constraints on the PBH scenario from the CIB observations, since many other astrophysical parameters are involved. \subsection{Accretion Constraints} \label{sec:Accretion-Constraints} There are good reasons for believing that PBHs cannot grow very much during the radiation-dominated era. Although a simple Bondi-type argument suggests that they could grow as fast as the horizon \cite{1967SvA....10..602Z}, this does not account for the background cosmological expansion, and a fully relativistic calculation shows that such self-similar growth is impossible \cite{Carr:1974nx,1978ApJ...219.1043B,1978ApJ...225..237B}. Consequently there is very little growth during the radiation era.
The only exception might be if the Universe were dominated by a ``dark energy'' fluid with $p < - \rho\.c^{2} / 3$, as in the quintessence scenario, since self-similar black-hole solutions do exist in this situation \cite{Harada:2007tj, Maeda:2007tk, Carr:2010wk}. This may support the claim of Bean and Magueijo \cite{Bean:2002kx} that intermediate-mass PBHs might accrete quintessence efficiently enough to evolve into the SMBHs in galactic nuclei. Even if PBHs cannot accrete appreciably in the radiation-dominated era, massive ones might still do so in the period after decoupling and the Bondi-type analysis \emph{should} then apply. The associated accretion and emission of radiation could have a profound effect on the thermal history of the Universe, as first analysed by Carr \cite{1981MNRAS.194..639C}. This possibility was investigated in more detail by Ricotti \textit{et al.} \cite{Ricotti:2007au}, who studied the effects of such accreting PBHs on the ionisation and temperature evolution of the Universe. The emitted X-rays would produce measurable effects in the spectrum and anisotropies of the CMB. Using FIRAS data to constrain the first and WMAP data to constrain the second, they improve the constraints on $f( M )$ by several orders of magnitude for $ M > 1\,M_{\odot}$\,. The WMAP limit can be approximated as \begin{equation} f( M ) < \begin{cases} ( M / 30\,M_{\odot} )^{-2} & ( 30\,M_{\odot}< M \lesssim 10^{4}\,M_{\odot} )\, ,\\ 10^{-5} & ( 10^{4}\,M_{\odot}\lesssim M < 10^{11}\,M_{\odot} )\, ,\\ M / M_{\ell = 100} & ( M > 10^{11}\,M_{\odot} )\, ,\\ \end{cases} \end{equation} where the last expression is not included in Ref.~\cite{Ricotti:2007au} but corresponds to having one PBH on the scale associated with the CMB anisotropies; for $\ell = 100$ modes, this is $M_{\ell = 100} \approx 10^{16} M_{\odot}$. The FIRAS limit can be approximated as \begin{equation} f( M ) < \begin{cases} ( M / 1\,M_{\odot} )^{-2} & (1\,M_{\odot} < M \lesssim 10^{3}\,M_{\odot} )\, , \\ 0.015 & ( 10^{3}\,M_{\odot}\lesssim M < 10^{14}\,M_{\odot} )\, ,\\ M / M_{\ell = 100} & ( M > 10^{14}\,M_{\odot} )\,. \\ \end{cases} \end{equation} Although these limits appear to exclude $f = 1$ down to masses as low as $1\,M_{\odot}$, they are model-dependent (spherically symmetric Bondi accretion etc.) and therefore not as secure as the dynamical ones. In particular, they depend on the duty-cycle parameter; we assume a smaller value for this than Ref.~\cite{Carr:2009jm}, which is why our limits are somewhat weaker. Mack \textit{et al.} \cite{Mack:2006gz} have considered the growth of large PBHs through the capture of dark-matter halos and suggested that their accretion could give rise to ultra-luminous X-ray sources. The latter possibility has also been explored by Kawaguchi \textit{et al.} \cite{Kawaguchi:2007fz}. In Ref.~\cite{Eroshenko:2016yve} it is claimed that dark matter will cluster around PBHs from very early times, causing sharp density spikes. These would be observable as bright $\gamma$-ray sources from the annihilation of dark-matter particles in orbit around the PBHs. Very stringent constraints on $f$ are obtained using Fermi-LAT data \cite{Abdo:2010nz} for $M > 10^{-8}\,M_{\odot}$. As this constraint depends on the assumption that the dark-matter density is dominated by WIMPs, we do not include it here. However, such PBH limits must be taken into account if they are to be used to constrain models of inflation. 
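As with the earlier limits, the approximate WMAP and FIRAS expressions above can be transcribed directly; the sketch below is for orientation only and is not the calculation of Ref.~\cite{Ricotti:2007au}.
\begin{verbatim}
import numpy as np

M_L100 = 1e16   # mass (in M_sun) associated with one l = 100 CMB patch

def f_wmap(m):
    """Approximate WMAP accretion limit on f(M); m in solar masses."""
    if 30.0 < m <= 1e4:
        return (m / 30.0)**(-2)
    if 1e4 < m <= 1e11:
        return 1e-5
    if m > 1e11:
        return m / M_L100       # one PBH per l = 100 scale
    return np.inf

def f_firas(m):
    """Approximate FIRAS accretion limit on f(M); m in solar masses."""
    if 1.0 < m <= 1e3:
        return m**(-2)
    if 1e3 < m <= 1e14:
        return 0.015
    if m > 1e14:
        return m / M_L100
    return np.inf

for m in [1e2, 1e4, 1e8, 1e12]:
    print(f"M = {m:8.1e} M_sun  WMAP: f < {f_wmap(m):.3g}  FIRAS: f < {f_firas(m):.3g}")
\end{verbatim}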
\begin{table} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{ | l | l | p{5cm} |} \hline \bf Mass range & \bf Constraint & \bf Source\\ \hline \hline $M < 10^{18}\,\grm$ & $f( M ) < 2 \times 10^{-8} \left( \frac{M}{5 \times 10^{14}\,\grm} \right)^{\!3 + \epsilon}$ & extragalactic $\gamma$-ray background\\ \hline $ 5 \times10^{16}\,\grm < M < 10^{19}\,\grm$ & $f( M ) < 0.1$ & femtolensing of GRB from Fermi\\ \hline $ 2.5 \times 10^{18}\,\grm < M < 10^{25}\,\grm$ & $f( M ) < \frac{M}{4.7\times 10^{24}\,\grm} \left( 1 - \exp\!\left[ - \frac{M}{2.9\times 10^{23}\,\grm} \right] \right)^{\!-1}$ & neutron-star capture\\ \hline $2 \times 10^{-9} M_{\odot} < M < 10^{-7}\.M_{\odot}$ & $f( M ) < 0.3$ & microlensing from Kepler\\ \hline $ 10^{-6} M_{\odot} < M < M_{\odot}$ & $f( M ) < 0.1$ & MACHO and EROS (OGLE II)\\ \hline $10^{-3} M_{\odot} < M < 0.1 M_{\odot}$ & $f( M ) < 0.04$ & MACHO and EROS (OGLE II)\\ \hline $0.1 M_{\odot} < M < 0.4 M_{\odot}$ & $f( M ) < 0.06$ & OGLE III and OGLE IV\\ \hline $0.1 M_{\odot} < M < 20 M_{\odot}$ & $f( M ) < 0.2$ & OGLE III and OGLE IV\\ \hline $M > M_{\odot}$ & $f( M ) < 3.7\,\frac{M_{\odot}}{M} \left( 1.1 + 0.1 \ln\!\left[ \frac{M_{\odot}}{M} \right] \right)^{\!-1}$ & Eridanus II star cluster\\ \hline $500\.M_{\odot} < M < 10^{3} M_{\odot}$ & $f( M ) < \frac{500\,M_{\odot}}{M}$ & wide-binary stability\\ \hline $10^{3} M_{\odot} < M < 10^{8} M_{\odot}$ & $f( M ) < 0.4$ & wide-binary stability\\ \hline $ 10\.M_{\odot} < M < 10^{4} M_{\odot}$ & $f( M ) < \Big( \frac{M}{10\,M_{\odot}} \Big)^{\!-2}$ & WMAP3 accretion\\ \hline $M > 10^{4} M_{\odot}$ & $f( M ) < \mathrm{max}\!\left[ 10^{-5}, \frac{M}{10^{16}\,M_{\odot}} \right]$ & WMAP3 accretion\\ \hline $M > 10^{4} M_{\odot}$ & $f( M ) < {\rm max}\!\left[ \frac{10^{4} M_{\odot}}{M}\,, \frac{M}{10^{10}\,M_{\odot}} \right]$ & Lyman-$\alpha$ clouds\\ \hline $M < 5\times 10^{5} M_{\odot}$ & $f( M ) < \!\left( \frac{M}{2\times 10^{4}\,M_{\odot}} \right)^{\!-10 / 7}$ & dynamical friction\\ \hline $5\times 10^{5} M_{\odot} < M < 2 \times 10^{6} M_{\odot}$ & $f( M ) < \!\left( \frac{M}{4\times 10^{4}\,M_{\odot}} \right)^{\!-2}$ & dynamical friction\\ \hline $M > 2 \times 10^{6} M_{\odot}$ & $f( M ) < \mathrm{max}\!\left[ \Big( \frac{M}{0.1\,M_{\odot}} \Big)^{\!-1/2}, \frac{M}{3\times 10^{12}\,M_{\odot}} \right]$ & dynamical friction\\ \hline $M < 10^{5} M_{\odot}$ & $f( M ) < \!\left( \frac{M}{2 \times 10^{4}\,M_{\odot}} \right)^{\!-2}$ & millilensing of quasars\\ \hline $10^{5} M_{\odot} < M < 10^{8} M_{\odot}$ & $f( M ) < 0.06$ & millilensing of quasars\\ \hline $M > 10^{8} M_{\odot}$ & $f( M ) < \!\left( \frac{M}{4 \times 10^{8}\,M_{\odot}} \right)^{\!2}$ & millilensing of quasars\\ \hline \end{tabular} \caption{Summary of dominant constraints on the fraction of dark matter in PBHs in various mass ranges.
These correspond to, or are special cases of, the constraints in Fig.~\ref{fig:large}; see main text for details. Only limits stronger than $f( M ) < 1$ are listed.} \vs{-3mm} \label{tab:ConstraintSummary} \end{table} \label{sec:Summary-and-Outlook} \noindent In this work we have studied the possibility that PBHs constitute the dark matter, focussing on the three mass ranges where PBHs were considered plausible dark-matter candidates around a decade ago. These include (A) black holes in the intermediate-mass range $1\,M_{\odot} < M < 10^{3}\,M_{\odot}$, (B) sublunar black holes in the range $10^{20}$ -- $10^{24}\,$g, and (C) sub-atomic size black holes in the range $10^{16}$ -- $10^{17}\,$g. In addition, we have discussed (D) Planck-mass relics in the range around $10^{-5}\,$g. All relevant constraints in these mass windows were reviewed in Sec.~\ref{sec:Constraints}, including those from microlensing, dynamical effects, large-scale structure, accretion and black-hole mergers of the kind observed by LIGO. We have found that scenarios (A) and (B) can still produce all the dark matter, although this depends on the exact values of the astrophysical parameters involved in the constraints. So these windows may be closed in the near future. Scenario (C) is already excluded for all practical purposes, while (D) is completely unconstrained and will remain so for the foreseeable future. Since the precision of the constraints has improved significantly in recent years, a more refined treatment of PBH formation appears to be mandatory. In order to tackle this issue, we discussed in Sec.~\ref{sec:RealisticMassFunctions} all the necessary ingredients for a precise calculation of the PBH abundance from a fundamental early-Universe source, such as non-Gaussianity, non-sphericity, criticality, merging, the choice of the appropriate variables, and the different approaches for estimating the black-hole number density. Regarding non-Gaussianity, non-sphericity and criticality, we have performed quantitative calculations, showing how these effects are expected to change the PBH distribution. In all cases the mass spectrum will be lowered, while critical collapse will cause significant broadening, as well as a shift towards lower masses. In Sec.~\ref{sec:ExtendedPBH} we introduced a novel scheme for investigating the compatibility of a general extended PBH mass function with arbitrary constraints. We also showed which model-independent conclusions can be drawn from the constraints for an unknown extended mass function, illustrating this by the application to constraints in the intermediate-mass region. Our procedure demonstrated, on the one hand, that extended mass spectra are more difficult to analyse than the commonly (and wrongly) used monochromatic ones. On the other hand, we showed that there are situations in which PBH dark matter is excluded in the monochromatic case but allowed in the extended mass case. We have given explicit examples of this. For definiteness, we introduced in Sec.~\ref{sec:ModelsInRelevantRanges} two inflationary models{\,---\,}the axion-like curvaton model and the running-mass model{\,---\,}which are capable of producing PBHs in the relevant mass ranges. In Sec.~\ref{sec:ExtendedPBH} we confronted these models with the latest constraints in these mass ranges and discussed under what circumstances they can produce PBHs containing all the dark matter.
Even though we have presented a rather complete picture of PBH formation, more work is required for the concrete implementation of this approach. In particular, for non-sphericity, precision simulations of fully general-relativistic collapses for ellipsoidal overdensities are necessary. Additional clarification of the interplay between ellipticity and non-Gaussianity, as well as a more thorough understanding of merger rates and accretion are needed before the observational constraints on PBHs can be translated into constraints on early-Universe physics. Even before all issues of PBH formation are settled, model-independent exclusion of PBHs as dark-matter candidates may be possible in the near future. If care is taken when applying observational constraints to allow for uncertainties in the various astrophysical processes (\eg~the growth of the PBH through accretion), then one may be able to exclude even PBHs with extended mass functions. However, this must be done by considering constraints in the way described in this paper, rather than by focusing on monochromatic mass functions which contain all the dark matter.
16
7
1607.06077
1607
1607.01182_arXiv.txt
We cross-match the two currently largest all-sky photometric catalogs, mid-infrared WISE and SuperCOSMOS scans of UKST/POSS-II photographic plates, to obtain a new galaxy sample that covers $3\pi$ steradians. In order to characterize and purify the extragalactic dataset, we use external GAMA and SDSS spectroscopic information to define quasar and star loci in multicolor space, aiding the removal of contamination from our extended-source catalog. After appropriate data cleaning we obtain a deep wide-angle galaxy sample that is approximately 95\% pure and 90\% complete at high Galactic latitudes. The catalog contains close to 20 million galaxies over almost 70\% of the sky, outside the Zone of Avoidance and other confused regions, with a mean surface density of over 650 sources per square degree. Using multiwavelength information from two optical and two mid-IR photometric bands, we derive photometric redshifts for all the galaxies in the catalog, using the \ANNz\ framework trained on the final GAMA-II spectroscopic data. Our sample has a median redshift of $z_\mrm{med} = 0.2$ but with a broad $\de N/\de z$ reaching up to $z>0.4$. The photometric redshifts have a mean bias of $|\delta z|\sim10^{-3}$, normalized scatter of $\sigma_{z} = 0.033$ and less than 3\% outliers beyond $3\sigma_{z}$. {Comparison with external datasets shows no significant variation of \phz\ quality with sky position.} Together with the overall statistics, we also provide a more detailed analysis of photometric redshift accuracy as a function of magnitudes and colors. The final catalog is appropriate for `all-sky' 3D cosmology to unprecedented depths, in particular through cross-correlations with other large-area surveys. It should also be useful for source pre-selection and identification in forthcoming surveys such as TAIPAN or WALLABY.
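The redshift-quality statistics quoted above can be reproduced from any set of matched spectroscopic and photometric redshifts with a few lines of code. The sketch below is purely illustrative (it is not the pipeline used to build the catalog) and assumes the common convention in which the normalized residual is $\delta z = (z_{\rm phot} - z_{\rm spec})/(1 + z_{\rm spec})$; it is run here on synthetic redshifts for demonstration.
\begin{verbatim}
import numpy as np

def photoz_quality(z_spec, z_phot):
    """Mean bias, normalized scatter and 3-sigma outlier rate of photo-z.

    Assumes the convention dz = (z_phot - z_spec) / (1 + z_spec)."""
    z_spec, z_phot = np.asarray(z_spec), np.asarray(z_phot)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    bias = np.mean(dz)
    sigma = np.std(dz)
    outliers = np.mean(np.abs(dz - bias) > 3.0 * sigma)
    return bias, sigma, outliers

# toy example with synthetic redshifts (for illustration only)
rng = np.random.default_rng(0)
z_true = rng.uniform(0.0, 0.4, 10000)
z_est = z_true + 0.033 * (1.0 + z_true) * rng.standard_normal(10000)
print(photoz_quality(z_true, z_est))
\end{verbatim}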
\setcounter{footnote}{0} Direct mapping of the three-dimensional distribution of galaxies in the Universe requires their angular coordinates and redshifts. Dozens of such wide-angle galaxy redshift catalogs now exist, the most notable of which include the Sloan Digital Sky Survey \citep[SDSS,][]{SDSS}, the Two-degree Field Galaxy Redshift Survey \citep[2dFGRS,][]{2dF}, and the Six-degree Field Galaxy Survey \citep[6dFGS,][]{6dF}. For some applications, it is an advantage if the survey can cover the majority of the sky: for example, searches for violation of the Copernican principle in the form of large-scale inhomogeneities or anisotropies \citep{GH12,AS14,Alonso15,YH15} and coherent motions \citep{BCJM11,BDN12,Carrick15}, as well as cross-correlations of galaxy data with external wide-angle datasets. Examples of the latter are studies of the integrated Sachs-Wolfe effect \citep[see][for a review]{Nishizawa15}, of gravitational lensing of the cosmic microwave background (CMB) by the large-scale structure \citep{LC06}, or searches for sources of the extragalactic $\gamma$-ray background \citep[e.g.][]{Xia15}, including constraints on annihilating or decaying dark matter \citep{Cuoco15}. These analyses are limited by cosmic variance, and much of the signal frequently lies at large angular scales -- both factors that make it desirable to have the largest possible sky coverage. But there is a practical limit to the number of spectroscopic redshifts that can be measured in a reasonable time. Spectroscopic galaxy catalogs covering the \textit{whole} extragalactic sky, \eg the IRAS Point Source Catalog Redshift Survey \citep[PSCz,][]{PSCz} and the 2MASS Redshift Survey \citep[2MRS,][]{2MRS}, thus tend to be relatively shallow ($z < 0.1$) -- and the same applies to hemispherical samples such as the 6dFGS. This problem can be addressed by using only rare tracers, as with the highly successful BOSS program \citep{BOSS} or planned projects such as the Dark Energy Spectroscopic Instrument (DESI, \citealt{DESI}) or the Wide Area VISTA Extragalactic Survey (WAVES, \citealt{WAVES}) within the 4MOST program; but for many applications it is desirable to have a fully-sampled galaxy density field. For that reason, new wide-field surveys such as the Dark Energy Survey \citep{DES}, Pan-STARRS \citep{Pan-STARRS} or the Kilo-Degree Survey \citep{KiDS} focus on measuring the photometric properties of objects, with only a partial spectroscopic follow-up. In the longer term, the same will apply to forthcoming multi-billion-object facilities including Euclid \citep{Euclid} and the Large Synoptic Survey Telescope \citep{LSST}. {Lying somewhat in between the spectroscopic and photometric surveys, the currently starting Javalambre-Physics of the Accelerated Universe Astrophysical Survey \citep[J-PAS,][]{J-PAS} is expected to reach sub-percent redshift precision over $\sim8000$ \dsqu, thanks to its use of $56$ narrow-band filters. Of a similar nature, but aiming to cover 100 \dsqu\ to a greater depth than J-PAS, is the Physics of the Accelerating Universe survey \citep[PAU,][]{PAU}.} In order for such surveys to yield cosmological information of comparable or even better quality than that from traditional spectroscopic samples, one needs to resort to the technique of \textit{photometric redshifts} (\phzs). In the near future, this approach will dominate those cosmological analyses where the benefit from larger volumes outweighs the loss of redshift accuracy.
Although some small-scale analyses are not feasible with the coarse accuracy of \phz\ estimation (typically a few \% precision), there are many applications where this level of measurement is more than adequate. This is particularly true when there is an angular signal that changes slowly with redshift, requiring a \textit{tomographic} analysis in broad redshift bins (\eg \citealt{FP10}); but until recently the necessary \phz\ information has only been available for relatively shallow subsamples of all-sky catalogs. To improve on this situation, in \citet[][hereafter \tcb{B14}]{2MPZ} we combined three all-sky photometric samples -- optical SuperCOSMOS, near-infrared 2MASS and mid-infrared WISE -- into a multiwavelength dataset. We used various spectroscopic calibration samples to compute photometric redshifts for almost 1 million galaxies over most of the extragalactic sky: the 2MASS Photometric Redshift catalog (2MPZ)\footnote{Available for download from the Wide Field Astronomy Unit, Edinburgh, at \url{http://surveys.roe.ac.uk/ssa/TWOMPZ}}. The 2MPZ is currently the deepest three-dimensional full-sky galaxy dataset, with a median redshift of $z\simeq0.1$ and a typical uncertainty in photometric redshift of about $12\%$ (scatter $\sigma_{z} = 0.013$). Ideally, these estimates should be superseded by actual spectroscopy -- and recently prospects have emerged for this to happen, thanks to the new hemispherical TAIPAN survey \citep{TAIPAN} in the South, starting in 2016, as well as the recently proposed LoRCA \citep{LORCA} in the North. These efforts, if successful, will provide spectroscopic information for all the 2MASS galaxies which do not have redshifts, although at their planned depths ($r\lesssim 18$ for the former and $K_s<14$ for the latter) they will not replace the need for the catalog presented in the current paper. We note, however, the \textit{SPHEREx} concept by \cite{SPHEREx} to probe much deeper on most of sky. The depth of 2MPZ is limited by the shallowest of the three photometric surveys combined for its construction, the 2MASS Extended Source Catalog \citep[XSC,][]{2MASSXSC,XSCz}. However, as was shown in \tcb{B14}, one can go beyond the 2MASS data and obtain a much deeper all-sky \phz\ catalog based on WISE and SuperCOSMOS only. In \tcb{B14} we predicted that such a sample should have a typical redshift error of $\sigma_{z}\simeq0.035$ at a median $z\simeq0.2$ (median relative error of $14\%$). The construction of this catalog is the focus of the present paper, and indeed we confirm and even exceed these expectations on the \phz\ quality. We note that in a related effort \cite{KoSz15} presented a wide-angle sample deeper than the 2MASS XSC, based on WISE and the 2MASS \textit{Point} Source Catalog (PSC). Its depth is, however, still limited by 2MASS: PSC has an order of magnitude smaller surface density than WISE \citep{Jarrett16}. Overall, the \cite{KoSz15} sample includes 2.4 million sources at $\zmed\simeq0.14$ over half of the sky, of which 1/3 are in common with the 2MASS XSC. Here we map the cosmic web to much higher redshifts than can be accessed with 2MASS, yielding a third shell of presently available all-sky redshift surveys. The first, with exact spectroscopic redshifts at $\zmed=0.03$, is provided by the 2MRS, flux-limited to $K_s\leq11.75$ (Vega) and containing 44,000 galaxies at $|b|>5\degree$ ($|b|>8\degree$ by the Galactic Bulge). 
The second is the 2MPZ, which includes almost a million 2MASS galaxies at $K_s<13.9$ with precise \phzs\ at $\zmed=0.07$, based on 8-band 2MASS\ti{}WISE\ti{}SuperCOSMOS photometry. The present work concerns 20 million galaxies with $\zmed=0.2$, thus reaching three times deeper than 2MPZ, over $3\pi$ steradians of the sky outside the Galactic Plane. This paper is organized as follows. In \S\ref{Sec: Contributing catalogs} we provide a detailed description of the catalogs contributing to the sample and their cross-matching. In \S\ref{Sec: x-matches with GAMA} we analyze the properties of the input photometric datasets by pairing them up with GAMA spectroscopic data. \S\ref{Sec: Catalog cleanup} describes the use of external GAMA and SDSS spectroscopic information to remove quasars and stellar blends from the cross-matched catalog; the construction of the angular mask to be applied to the data is also presented there (\S\ref{Sec: Mask}). Next, in \S\ref{Sec: Photometric redshifts} we show how photometric redshifts were obtained for the sample and discuss several tests of their performance; \S\ref{Sec: All-sky photo-z catalog} discusses the properties of the final all-sky catalog. In \S\ref{Sec: Summary} we summarize and list selected possible applications of our dataset. \vspace{2mm}
\label{Sec: Summary} In this paper we presented a novel photometric redshift galaxy catalog based on the two largest existing all-sky photometric surveys, WISE and SuperCOSMOS. A union of these two samples, once cleaned of stellar contamination, provides access to redshifts of $z<0.4$ on unmatched angular scales. Its angular coverage ($\simeq 3\pi$~sr) is a major advance with respect to existing surveys covering these redshifts \citep[e.g.][]{DAbrusco07,Oyaizu08,Brescia14,Beck16}. We envisage manifold possible applications of our catalog, most simply by improving the statistics of analyses based on shallower datasets, such as 2MPZ \citep{AS14,XWH14,Alonso15}, 2MRS and 2MASS \citep{GH12} or the 2MASS PSC -- WISE combination \citep{Yoon14}. Our catalog can also be regarded as a testbed for the deeper and more precise wide-angle samples currently being compiled or forthcoming, such as those from DES, SKA or Euclid. A particularly interesting class of application involves `tomography': slicing the dataset into redshift bins. Cross-correlations with other wide-angle astrophysical probes at various wavelengths should be especially fruitful, owing to their insensitivity to any remaining small systematics. Such analyses include, for instance, CMB temperature maps for integrated Sachs-Wolfe effect searches \citep[e.g.][]{G08,FP10}; CMB lensing measurements to constrain non-Gaussianity \citep{GP14} or neutrino mass \citep{PZ14}; or the gamma-ray background provided by the Fermi satellite to constrain the sources of this emission \citep{Xia15} or to search for dark matter \citep{Cuoco15}. In addition, we expect the \WISC\ sample to be useful for studies of Faraday rotation of extragalactic sources \citep{Vacca15}, identification of galaxies in the SKA pathfinder WALLABY \citep{Popping12} or in planned CMB missions such as CoRE+ \citep{deZotti15}. {It should also be appropriate for searches for electromagnetic counterparts of extragalactic gravitational wave sources, as -- together with 2MPZ (cf.\ \citealt{AH16}) -- it extends well beyond the catalogs currently used for that purpose \citep[e.g.][]{GWGC}. In addition, both 2MPZ and the present catalog provide two crucial parameters for such studies: the $B$-band magnitude (a proxy for black hole and neutron star merger rate), and the $W1$ magnitude, directly related to the galaxy's stellar mass.} In the nearer future, its bright end ($R\lesssim18$) may be employed as one of the input catalogs for the forthcoming TAIPAN survey \citep{TAIPAN}. The fact that the median redshift of WISE galaxies is much higher than that of SCOS (\S\ref{Sec: x-matches with GAMA}; \citealt{Jarrett16}) makes it desirable to extend the present analysis beyond the latter sample. However, WISE on its own will not allow for precise photometric redshifts, as at its full depth it provides only two mid-IR bands. To obtain \phz\ coverage beyond the SDSS area, it will be necessary to combine WISE with forthcoming catalogs, such as Pan-STARRS \citep{Pan-STARRS} or the VISTA Hemisphere Survey \citep{VHS}. A supplementary approach to deriving redshift estimates for WISE is that of \cite{Menard13}, which is indeed already being undertaken (A.\ Mendez, priv.\ comm.). One of the requirements for such studies to succeed will be the ability to reliably separate galaxies from stars and quasars in WISE; a report on ongoing machine-learning efforts towards this goal is presented in \cite{Kurcz16}.
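As a schematic illustration of the kind of color-space separation of stars, quasars and galaxies discussed above, the toy filter below applies simple mid-IR and optical color cuts; the band combinations, variable names and thresholds are placeholders chosen for illustration only and are not the cuts adopted for the catalog presented here.
\begin{verbatim}
import numpy as np

def flag_galaxy_candidates(w1, w2, b, r):
    """Toy source classification in WISE/SuperCOSMOS colors.

    w1, w2: WISE magnitudes; b, r: SuperCOSMOS photographic magnitudes.
    The thresholds below are illustrative placeholders only, not the
    cuts used to build the actual catalog."""
    w1, w2, b, r = map(np.asarray, (w1, w2, b, r))
    likely_star = (w1 - w2 < 0.2) & (b - r < 1.0)   # blue, star-like colors
    likely_qso = (w1 - w2 > 0.8)                    # very red mid-IR color
    return ~(likely_star | likely_qso)              # keep the rest as galaxies

# example with random toy photometry
rng = np.random.default_rng(1)
n = 1000
w1 = rng.uniform(13, 17, n); w2 = w1 - rng.uniform(-0.2, 1.2, n)
b = rng.uniform(16, 21, n);  r = b - rng.uniform(0.0, 2.0, n)
print("galaxy candidates:", flag_galaxy_candidates(w1, w2, b, r).sum(), "of", n)
\end{verbatim}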
{The WISE~$\times$~SuperCOSMOS photometric redshift catalog is made publicly available through the Wide Field Astronomy Unit at the Institute for Astronomy, Edinburgh at \url{http://ssa.roe.ac.uk/WISExSCOS}.}
16
7
1607.01182
1607
1607.08776_arXiv.txt
{ We describe the execution and data reduction of the European Southern Observatory Large Programme ``Quasars and their absorption lines: a legacy survey of the high-redshift universe with VLT/XSHOOTER'' (hereafter `XQ-100'). XQ-100 has produced and made publicly available a homogeneous and high-quality sample of echelle spectra of $100$ quasars (QSOs) at redshifts $z\simeq3.5$--$4.5$ observed with full spectral coverage from $315$ to $2\,500$ nm at a resolving power ranging from $R\sim 4\,000$ to $7\,000$, depending on wavelength. The median signal-to-noise ratios are $33$, $25$ and $43$, as measured at rest-frame wavelengths $1\,700$, $3\,000$ and $3\,600$ \AA, respectively. This paper provides future users of XQ-100 data with the basic statistics of the survey, along with details of target selection, data acquisition and data reduction. The paper accompanies the public release of all data products, including 100 reduced spectra. XQ-100 is the largest spectroscopic survey to date of high-redshift QSOs with simultaneous rest-frame UV/optical coverage, and as such enables a wide range of extragalactic research, from cosmology and galaxy evolution to AGN astrophysics. }
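As a quick orientation (not taken from the paper itself), the simple $(1+z)$ scaling below shows where the rest-frame wavelengths quoted above fall within the $315$--$2\,500$ nm XSHOOTER coverage for the XQ-100 redshift interval.
\begin{verbatim}
# Where the quoted rest-frame wavelengths fall in the observed XSHOOTER range
# (315-2500 nm) for the XQ-100 redshift interval; a simple (1 + z) scaling.
rest_nm = [170.0, 300.0, 360.0]      # 1700, 3000, 3600 Angstrom, in nm
for z in (3.5, 4.5):
    obs = [(1.0 + z) * lam for lam in rest_nm]
    covered = all(315.0 <= lam_o <= 2500.0 for lam_o in obs)
    print(f"z = {z}: observed wavelengths (nm) =",
          [round(lam_o, 1) for lam_o in obs], "| inside 315-2500 nm:", covered)
\end{verbatim}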
\label{sect_intro} In the era of massive quasar (QSO) surveys, already encompassing hundreds of thousands of confirmed sources~\citep[e.g.,][]{paris2014,flesch2015}, there is a relative shortage of follow-up echelle quality spectroscopy. Moderate to high resolving power ($R \approx 5\,000$--$40\,000$) and wide spectral coverage are key to many absorption line diagnostics that probe the interplay between galaxies and the intergalactic medium (IGM) at all redshifts. However, such observations are time-consuming and require large telescopes, even more so for high-redshift QSOs, which tend to be faint. Another challenge for QSO absorption line science is that as the redshift increases, more of the rest-frame UV and optical transitions become shifted into the observationally more challenging near-infrared (NIR; $1\,\mu{\rm m} \la \lambda \la 2.5\,\mu$m). Presently, public archives contain echelle spectra of roughly a few thousand unique QSOs, of which just a small fraction has NIR coverage. In addition, these data arise primarily from the cumulative effort of single (and heterogeneous) observing programs, so one would expect such databases to be inhomogeneous in nature and suffer from selection biases by construction \citep{brunner2002,djorgovski2005}. Thus, new homogeneous and statistically significant echelle data sets, with as wide a range of uses as possible, are always welcome. In this paper we present ``XQ-100'', a new legacy survey of 100 QSOs at emission redshifts $z_{\rm em}\simeq 3.5$--$4.5$ observed with full optical and NIR coverage using the echelle spectrograph XSHOOTER~\citep{Vernet2011} on the European Southern Observatory (ESO) Very Large Telescope (VLT). The context and the scientific motivation of the survey are as follows. The largest QSO echelle samples in the optical come from Keck/HIRES~\citep[``KODIAQ'' database; ][]{omeara2015} and VLT/UVES (ESO UVES public archive), each providing between $300$ and $400$ QSO spectra with $R\approx 40\,000$. At moderate resolving power, $R\approx 10\,000$, Keck/ESI has observed around a thousand QSOs (John O'Meara, private communication), and a search in the VLT/XSHOOTER public archive reveals spectra of almost $300$ sources to date. Other large optical facilities with echelle capabilities, such as Subaru or Magellan, have either acquired a smaller data volume or do not manage public archives. In addition to ``smaller'' programs ($\la 10$ targets), these data sets, public or not, have been fed over the years by a few dedicated QSO surveys~\citep[e.g.,][]{bergeron2004} aimed at a variety of astrophysical probes of galaxy evolution and cosmology: metals in damped \lya\ systems~\citep[DLAs; e.g.,][]{lu1996, prochaska2003,ledoux2003,rafelski2012} and in the IGM~\citep[e.g.,][]{aguirre2004,songaila2005,scannapieco2006,dodorico2010}; light elements in Lyman-limit Systems~\citep[e.g.,][]{kirkman2003}; DLA galaxies~\citep[e.g.,][]{peroux2011,noterdaeme2012a,zafar2013}; the low- and high-$z$ circumgalactic medium~\citep[e.g.,][]{chen2010,rudie2012}; thermal state of the IGM~\citep[e.g.,][]{schaye2000,kim2002}; reionization~\citep[e.g.,][]{becker2007,becker2012,becker2015}; matter power spectrum~\citep[e.g.,][]{croft2002,viel2004,viel2009,viel2013}; and fundamental constants~\citep[e.g.,][]{murphy2003,srianand2004,molaro2013}. In the NIR, the largest QSO spectroscopic survey so far has been conducted using the FIRE IR spectrograph at Magellan~\citep{Matejek2012}.
Focused on the incidence of \mgii\ at $z\approx2$--$5$, this survey comprises NIR observations of around 50 high-$z$ QSOs at $R\approx 6\,000$ and median signal-to-noise ratio, S/N $=13$. Other surveys at moderate to high resolution have focused on the \civ\ mass density at $z>4$ using Magellan/FIRE~\citep{Simcoe2011}, Keck/NIRSPEC~\citep{becker2009,ryan2009,becker2012}, or VLT/XSHOOTER~\citep{dodorico2013}, albeit comprising only a handful of sightlines, given the paucity of very high-$z$ QSOs. Near-IR spectroscopy is also needed to study the rest-frame optical emission lines of high-$z$ QSOs, which constrain broad-line region metallicities and black hole masses; however, in this case spectral coverage is more important than resolution. For instance, surveys have used VLT/ISAAC~\citep{sulenticetal06,sulenticetal04,marziani2009}, NTT/SofI~\citep{dietrich2002,dietrich2009}, or Keck/NIRSPEC and Blanco/OSIRIS~\citep{dietrich2003}. There are also samples at higher resolution obtained with Gemini/GNIRS~\citep{jiang2007}, or VLT/XSHOOTER~\citep{ho2012,derosa2014}. The largest samples have been acquired using Palomar Hale 200-inch/TripleSpec~\citep[][32 QSOs at $3.2<z<3.9$]{zuo2015} and, at lower redshifts, VLT/XSHOOTER~\citep[][30 QSOs at $z\approx1.5$]{capellupo2015}. The present XQ-100 survey builds on observations made with VLT/XSHOOTER within the ESO Large Programme entitled ``Quasars and their absorption lines: a legacy survey of the high redshift universe with X-shooter'' (PI S. L\'opez; $100$ hours of Chilean time). XSHOOTER provides complete coverage from the atmospheric cutoff to the NIR in one integration at $R\approx 6\,000$--$9\,000$, depending on wavelength. The full spectral coverage, along with a well-defined target selection and the high S/N achieved (median S/N $=30$), clearly make XQ-100 a unique data set to study the rest-frame UV/optical of high-$z$ QSOs in a single, homogeneous, and statistically significant sample. Our program was based on the following scientific themes: \begin{enumerate} \item {\it Galaxies in absorption:} determining the cosmic density of neutral gas in DLAs, the main reservoirs of neutral gas in the Universe~\citep[e.g.,][]{wolfe2005, prochaska2009a,noterdaeme2012b} at $z>3.5$~\citep{sanchez2015}; studying individual DLA abundances at $2.0\la z\la 4.5$~\citep{berg2016}; constraining the \mgii\ incidence $(dN/dz)_\mgii$ at $z>2.5$ with $\sim 2$--$3$ times better sensitivity and $\sim 2$ times longer redshift path than the sample by~\citet{Matejek2012} to test predictions from the cosmic star formation rate~\citep{zhu2013,2011MNRAS.417..801M}. \item {\it Intergalactic-Medium science:} measuring the cosmic opacity at the Lyman limit~\citep{prochaska2009,worseck2014} and providing an independent census of Lyman-limit systems~\citep[LLS;][]{prochaska2010,songaila2010} at $z\simeq1.5$--$4.5$; constraining the UV background via the proximity effect~\citep[e.g.,][]{dodorico2008,dallaglio2008,calverley2011}. \item {\it Active-Galactic-Nuclei science:} making the first $z> 3.5$ accurate measurements of black hole masses using the rest-frame UV emission lines of \civ$\lambda 1549$ and \mgii$\lambda 2800$ and the rest-frame optical \hb\ line \citep[from line widths and continuum luminosities; e.g.,][]{vestergaard2006,vestergaard2009}; examining broad-line region metallicity estimates (from emission line ratios; e.g., Hamann \& Ferland 1999; Hamann et al. 
2002) and their relationship with other QSO properties, including, but not limited to, luminosity and black hole mass; using associated absorption lines to study the co-evolution of galaxies and black holes by measuring metallicities in the interstellar medium of the QSO host galaxies~\citep[][]{perrotta2016,dodorico2004}; studying the broad QSO-driven outflow absorption lines that are found serendipitously in the spectra. \item {\it Cosmology:} measuring the matter power spectrum with the Ly$\alpha$ forest~\citep{croft1998} at high redshift~\citep[e.g.,][]{viel2009,palanque2013}, including an independent measurement of cosmological parameters with a joint analysis of these and the Planck publicly released data~\citep{irsic2016}. \end{enumerate} The sample size of 100 QSOs was defined by the objectives of these science goals. The choice of emission redshifts was determined by the absorption line searches: $z \ga 3.5$ means that every QSO contributes a redshift path of at least $0.5$ for $(dN/dz)_\mgii$ in the NIR, while $z \la 4.5$ avoids excessive line crowding in the Ly$\alpha$ forest. Clearly, the combination of a well-defined target selection, echelle resolution, high S/N, and full wavelength coverage benefits all of the above science goals. XQ-100 was designed as a legacy survey, and this paper accompanies the public release of all data products, including a uniform sample of 100 reduced XSHOOTER spectra (available at {\tt http://archive.eso.org}). We note that this data volume increases the XSHOOTER QSO archive by $\approx 30 \%$. The following sections provide an in-depth description of the survey, along with its basic statistics. A description of our target selection and the observations can be found in \S~\ref{sect_target_and_observations}; details of the data reduction, along with a comparison between our own custom pipeline and the one provided by ESO, are given in \S~\ref{section_data_reduction}; details of data post-processing (telluric corrections and continuum fits) are given in \S~\ref{section_post}; and, finally, a description of the publicly released data products is given in \S~\ref{section_products}. For a technical description of the instrument, we refer the reader to~\citet{Vernet2011} and to the online XSHOOTER documentation.\footnote{\tt http://www.eso.org/sci/facilities/paranal/instruments/xshooter/doc.html}\footnote{\tt http://www.eso.org/observing/dfo/quality/XSHOOTER/qc/problems/problems$\_$xshooter.html} \section[]{Target selection and observations} \label{sect_target_and_observations} \subsection{Target selection} XQ-100 targets were selected initially from the NASA/IPAC Extragalactic Database (NED) to have emission redshifts $z > 3.5$ and declinations $\delta < +15$ degrees. To fill some right-ascension gaps lacking bright $z > 3.5$ targets, twelve additional targets with $+15 < \delta < +30$ were selected from literature sources. Then the Sloan Digital Sky Survey Data Release 7 database~\citep[SDSS DR7;][]{schneider2010} was screened with the further criterion of having SDSS magnitude $r < 20$. Finally, these candidates were cross-correlated with the Automatic Plate Measuring (APM) catalog\footnote{\tt http://www.ast.cam.ac.uk/$\sim$mike/apmcat/} to obtain uniform magnitudes in a single pass-band ($R$), which we also use throughout the present paper. Our primary selection is thus biased toward bright sources; however, as explained below, we made our best effort to minimize biases affecting the absorption line statistics.
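The primary cuts described above can be compressed into a toy filter such as the sketch below; the dictionary keys and example entries are hypothetical placeholders, and the sketch deliberately ignores the catalog cross-matching steps.
\begin{verbatim}
def passes_primary_selection(src):
    """Toy version of the XQ-100 primary cuts quoted in the text.

    `src` is a dict with hypothetical keys; this is an illustrative sketch,
    not the actual selection code."""
    in_main_dec_range = src["dec_deg"] < 15.0
    in_fill_in_range = 15.0 < src["dec_deg"] < 30.0   # RA-gap fill-in targets
    bright_enough = src["r_mag"] < 20.0
    high_z = src["z_em"] > 3.5
    return high_z and bright_enough and (in_main_dec_range or in_fill_in_range)

candidates = [
    {"name": "QSO A", "z_em": 3.9, "dec_deg": -12.0, "r_mag": 18.4},
    {"name": "QSO B", "z_em": 3.2, "dec_deg": 5.0, "r_mag": 17.9},
    {"name": "QSO C", "z_em": 4.2, "dec_deg": 22.0, "r_mag": 19.5},
]
print([c["name"] for c in candidates if passes_primary_selection(c)])
\end{verbatim}
The additional precautions taken against selection biases are described next.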
We avoided targets with known broad absorption line features, and targets with an intrinsic color selection bias from the SDSS. The SDSS color selection is biased at the lower redshift end of our survey~\citep[$z<3.6$, see][]{Worseck2011}. Here, we required SDSS QSOs to be radio-selected or previously discovered with other techniques such as slitless spectroscopy. Without these precautions, our goal of obtaining a truly blind and unbiased target selection would have been undermined, despite the relatively small number of targets impacted. For example the SDSS color bias would result in (1) underestimates of the mean free path~\citep{prochaska2009}; (2) overestimates of the DLA --and also the LLS-- incidence~\citep{prochaska2010}; (3) a higher metal $dN/dz$ due to the higher incidence of LLSs and partial LLSs; (4) a higher fraction of proximate LLSs that affect proximity effect studies; and (5) potentially a slight bias in the mean QSO spectral energy distribution towards red QSOs~\citep{Worseck2011}. We should also note that although earlier color survey designs (Palomar Spectroscopic Survey, APM BR, APM BRI) considered color selection effects at the low-z end~\citep{irwin1991,storrie1994}, these were never well quantified. Thus, follow-up on color-selected QSOs close to the stellar locus should be done with care (or avoided altogether), as the sightlines are potentially biased in their LLS statistics. During program execution we replaced four targets in our original list that had been observed by~\citet{Matejek2012} using Magellan/FIRE; however, we intentionally observed three other FIRE targets in order to have a reference in characterizing absorption line detection limits: J1020+0922 at $z=3.640$, J1110+0244 $z=4.146$, and J1621-0042 at $z=3.711$. Our final sample, taking into account the various selections described above and also considering the relative paucity of high redshift QSOs, has emission redshifts ranging from $3.508$ to $4.716$. Since the most distant QSO in our sample is the only target with $z_{\rm em} > 4.5$, for simplicity we refer to the redshift range of the survey as $z_{\rm em}\simeq 3.5$--$4.5$ throughout this paper. Figure~\ref{fig_sky} shows the sky distribution of the observed XQ-100 sample. A color scale depicts emission redshifts. Figures~\ref{fig_z} and~\ref{fig_histo} show the final distribution of QSO emission redshifts and $R$-magnitudes, respectively. The full target list is provided in Table~\ref{table_targets} of the Appendix, along with basic target properties (see Section~\ref{section_data_reduction}). A full catalog with all observed target properties (listed in Table~\ref{table_parameters}) is provided online along with the data at {\tt http://archive.eso.org/eso/eso\_archive\_main.html}. \begin{figure} \centering \includegraphics[width=90mm,angle=0,clip]{fig_sky_new.pdf} \caption{Sky distribution of XQ-100 sources. The color scale indicates emission redshifts. \label{fig_sky}} \end{figure} \begin{figure}[b] \includegraphics[width=84mm,clip]{fig_z_new.pdf} \caption{XQ-100 emission redshifts. \label{fig_z}} \end{figure} \begin{figure}[b] \includegraphics[width=84mm,clip]{fig_histo_new.pdf} \caption{XQ-100 $R$-magnitudes (APM). \label{fig_histo}} \end{figure} \subsection{Observations} \label{sect_observations} The observations were carried out in ``service mode'' between April 1, 2012, and March 26, 2014. During this time XSHOOTER was mounted on unit 2 of the VLT. 
Service mode allows the user to define the Observation Blocks (OBs), which contain the instrument setup and are carried out by the observatory under the required weather conditions. Table~\ref{table_weather} summarizes the requested conditions of XQ-100. The airmass constraint was set according to each target's declination such that the target was observable above the set constraint for at least 2 hours. The requested constraints on sky brightness were a fraction of lunar illumination $<0.5$ and a minimum moon distance of 45 degrees. The targets were split into two samples, brighter and fainter than magnitude $R_{\rm APM}=18.0$. The seeing constraint was set to $1.0$\arcsec\ for the bright sample and $0.8$\arcsec\ for the faint sample. ESO Large Programmes are granted high priority status, which means that observations out of specifications are repeated and eventually carried over to the following semester until the constraints are met (to within $\approx 10$ \%). In our case 13 targets were observed more than once because of interrupted OBs or because of ADC issues (\S~\ref{sect_ADC}).\footnote{The number of OB executions is listed in column 5 of Table~\ref{table_parameters}.} As a consequence of this process, 88 XQ-100 targets were observed within specifications, and 12 almost within specifications (i.e., the constraints were worse by $\lesssim 10$ \%). \begin{table} \centering \begin{minipage}{140mm} \caption{Requested observing conditions\label{table_weather}} \begin{tabular}{lr} \hline Seeing & $1.0$\arcsec\ (bright), $0.8$\arcsec\ (faint) \\ Sky transparency & Clear \\ Airmass & $\delta > +20$: $< 1.6$\\ & $+10<\delta <+20$: $< 1.5$\\ & $0<\delta < +10$: $< 1.4$\\ & $\delta <0$: $< 1.3$\\ Lunar illumination & $< 50$\% \\ Moon distance & $> 45$ degrees\\ \hline \end{tabular} \end{minipage} \end{table} \begin{table*} \centering \begin{minipage}{160mm} \caption{Instrument setup\label{table_observations}} \begin{tabular}{cccccccc} \hline Arm & Wavelength range & Slit width & Resolving power &\multicolumn{2}{l}{Num. of exposures} & \multicolumn{2}{l}{Integration time (s)}\\ & [nm] & (\arcsec) & $\lambda/\Delta\lambda$ & bright & faint & bright & faint\\ \hline UVB & 315--560 & 1.0 & 4\,350 & 2 & 4 & 890 & 880 \\ VIS & 540--1\,020 & 0.9 & 7\,450 & 2 & 4 & 840 & 830 \\ NIR & 1\,000--2\,480$^a$ & 0.9 & 5\,300 & 2 & 4 & 900 & 900 \\ \hline \end{tabular} $^a${\footnotesize 1\,000--1\,800 nm when the $K$-band filter was used; see~\S~\ref{sect_observations}.} \end{minipage} \end{table*} Table~\ref{table_observations} summarizes the instrument setup. XSHOOTER has three spectroscopic arms, UVB, VIS and NIR, each with its own set of shutter, slit mask, cross-dispersive element, and detector. In order to obtain signal-to-noise ratios that are as uniform as possible, the XQ-100 integration times varied across the samples and also across the three spectroscopic arms. The bright sample had two integrations, each with $T_{\rm exp}=890$\,s in the UVB, $T_{\rm exp}=840$\,s in the VIS and $T_{\rm exp}=900$\,s in the NIR. The faint sample had four exposures, each with $T_{\rm exp}=880$\,s in the UVB, $T_{\rm exp}=830$\,s in the VIS, and $T_{\rm exp}=900$\,s in the NIR. These conditions defined two classes of OBs, which -- including acquisition -- had a total duration of $39$ and $70$ minutes, respectively. In order to optimize the sky subtraction in the NIR, the exposures were nodded along the slit by $\pm 2.5$\arcsec\ from the slit center.
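As a rough consistency check of the quoted OB lengths, one can assume that the three arms integrate simultaneously, so that each exposure lasts as long as the slowest arm, plus a fixed per-OB overhead; the overhead value in the sketch below is an inferred round number rather than a figure taken from observatory documentation.
\begin{verbatim}
# Rough check of the quoted OB lengths, assuming the UVB/VIS/NIR arms expose
# simultaneously, so each exposure lasts as long as the slowest arm.  The
# per-OB overhead below is an inferred round number, not an official figure.
setups = {"bright": (2, max(890, 840, 900)), "faint": (4, max(880, 830, 900))}
overhead_min = 9.0
for name, (n_exp, t_exp_s) in setups.items():
    total_min = n_exp * t_exp_s / 60.0 + overhead_min
    # reproduces the quoted 39 and 70 minute OB durations to within a minute
    print(f"{name} OB: {n_exp} x {t_exp_s} s + overhead ~ {total_min:.1f} min")
\end{verbatim}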
The adopted slit widths were 1.0\arcsec\ in the UVB and 0.9\arcsec\ in the VIS and NIR, to match the requested seeing and account for its wavelength dependence. These slit widths provide a nominal resolving power of $4\,350$, $7\,450$, and $5\,300$, respectively. The slit position was always set along the parallactic angle, except for five targets for which it was necessary to avoid contamination by a nearby bright object in the slit; these cases are related to a problem with the atmospheric dispersion corrector system (see the next Section). Target acquisition was done in the $R$ filter. The UVB and VIS detectors were binned by a factor of 2 in the dispersion direction. For emission redshifts $z > 4$, the [OIII]$\lambda$5007 emission line falls redward of the $K$-band. For $4.0 \la z \la 4.5$, [OII]$\lambda$3727 falls in the gap between the $H$- and $K$-bands. Therefore, the 53 XQ-100 sources at $z>4$ were observed using a $K$-band blocking filter that lowers the sky background, since scattered light from the $K$-band primarily affects the $J$-band~\citep{Vernet2011}. No blocking filter was used for the 47 sources at $z<4$, in order to include [OIII]$\lambda$5007 in the wavelength range. We note that \mgii$\lambda\lambda$2796,2803 is always in the wavelength range. See Fig.~\ref{fig_spec} for an example of a spectrum presenting the above-mentioned emission lines. For each exposure, the standard calibration plan of the observatory was used to observe a hot star for telluric corrections. This plan foresees the observation of a telluric standard within 2 hours and 0.2 airmasses of each science observation (but see~\S~\ref{section_telluric}). \subsubsection{ADC issues} \label{sect_ADC} In March 2012 ESO reported that the atmospheric dispersion correctors (ADCs) of the UVB and VIS arms started to fail occasionally, leading to possible wavelength-dependent slit losses, potentially worse than if no ADCs were used. In August 2012 the ADCs were disabled for the rest of the observations (at the time of writing the causes of these failures are being investigated). By August 2012, around 30\% of the XQ-100 observations had been executed. After checking our spectra carefully, we noticed that the ADC problem had possibly affected 12 of the spectra, which showed an unusually large flux mismatch between the arms (see the example in the top panel of Fig.~\ref{fig_ADC}, explained below). The reason for such a mismatch was probably that these targets had been observed at a high enough airmass for a malfunctioning ADC to lead to strong chromatic slit losses. \begin{figure} \includegraphics[width=88mm,clip]{fig_ADC.pdf} \caption{XQ-100 spectra of the same QSO, J1126$-$0124, taken with the faulty ADCs in April 2012 (top panel) and repeated with the disabled ADCs in February 2014 (middle panel). Both observations were executed at a similar airmass of $\approx 1.3$ at the parallactic angle. The dashed lines indicate the boundaries of the XSHOOTER arms. The match is better between the VIS and NIR arms in the middle panel. The bottom panel shows the same February 2014 XQ-100 spectrum but smoothed and rebinned to SDSS resolution (blue line), and rescaled by a factor of 1.3 to match the corresponding SDSS spectrum (overlaid in red). The good match suggests that slit losses in the XQ-100 data are roughly achromatic. \label{fig_ADC}} \end{figure} Five out of these 12 OBs were executed a second time with the disabled ADCs and using the parallactic angle. The improvement was evident.
The two upper panels of Fig.~\ref{fig_ADC} show XQ-100 spectra of the same OB executed before and after the ADC disabling. The faulty ADCs affect the flux levels and slope in the UVB and VIS arms only (top panel), while the NIR arm is not affected, as expected since this arm does not use an ADC. Conversely, without the ADCs (middle panel) the flux levels match better between the arms (spectra were always taken at the parallactic angle). The bottom panel of Fig.~\ref{fig_ADC} shows the XQ-100 spectrum from the middle panel but smoothed and rebinned to SDSS resolution (blue line), and rescaled by a factor of 1.3 to match the corresponding SDSS spectrum (overlaid in red). The good match across wavelengths suggests that slit losses, at least in the SDSS spectral region, are roughly achromatic in the XQ-100 spectra. Since the accuracy of flux calibrations is unimportant for many of the science applications described in the introduction and an extra exposure might be helpful to increase the S/N, we provide reduced spectra of both observations in these 13 cases and flag them in our database (see Section~\ref{section_products}). The remaining observations in the queue proceeded without the ADCs, making sure that the parallactic angle and the lowest possible airmass were chosen.
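The check illustrated in the bottom panel of Fig.~\ref{fig_ADC} amounts to degrading an XQ-100 spectrum to SDSS resolution, rebinning it onto the SDSS wavelength grid and fitting a single grey rescaling factor. The short Python sketch below shows one possible implementation of such a comparison; the adopted SDSS resolving power ($R\sim2000$), the assumption of approximately uniform velocity sampling and the use of a simple median scale factor are illustrative choices and do not describe the exact procedure used to produce the figure.
\begin{verbatim}
# Minimal sketch (not the actual pipeline): degrade an XSHOOTER spectrum to
# SDSS resolution and fit a single grey rescaling factor between the two.
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 2.998e5
R_XSH, R_SDSS = 7450.0, 2000.0      # VIS-arm vs. assumed SDSS resolving power

def degrade_and_rescale(wl_xsh, fl_xsh, wl_sdss, fl_sdss):
    # Gaussian kernel (FWHM in km/s) needed to go from R_XSH to R_SDSS
    fwhm_kms = C_KMS * np.sqrt(1.0 / R_SDSS**2 - 1.0 / R_XSH**2)
    # assume approximately constant velocity sampling of the input spectrum
    dv_kms = C_KMS * np.median(np.diff(np.log(wl_xsh)))
    fl_smooth = gaussian_filter1d(fl_xsh, fwhm_kms / 2.355 / dv_kms)
    # rebin (here simply interpolate) onto the SDSS grid, then fit one factor
    fl_rebin = np.interp(wl_sdss, wl_xsh, fl_smooth)
    good = np.isfinite(fl_rebin) & np.isfinite(fl_sdss) & (fl_rebin > 0)
    scale = np.median(fl_sdss[good] / fl_rebin[good])
    return scale, scale * fl_rebin
\end{verbatim}
A single, wavelength-independent factor matching the two spectra over the SDSS range, as found for J1126$-$0124, is what supports the statement that the residual slit losses are roughly achromatic.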
We have presented XQ-100, a legacy survey of $100$ $z_{\rm em}\simeq3.5$--$4.5$ QSOs observed with VLT/XSHOOTER. We have provided a basic description of the sample, along with details of the observations and of the data reduction process. We have also described the format and organization of the publicly available data, which include spectra corrected for atmospheric absorption and a continuum fit. XQ-100 provides the first large uniform sample of high-redshift QSOs at intermediate resolution and with simultaneous rest-frame UV/optical coverage. In terms of the number of QSOs, this represents a $30 \%$ increase over the whole extant XSHOOTER sample. The released spectra are of superb quality, having median S/N $\sim 30$, $25$, and $40$ at resolutions of $\sim 30$--$50$ \kms, depending on wavelength. We have indicated that these properties enable a wide range of high-redshift research, and we look forward to soon seeing the results of this three-year effort in the form of new discoveries and contributions to the field.
16
7
1607.08776
1607
1607.08630_arXiv.txt
Chaotic dynamics are expected during and after planet formation, and a leading mechanism to explain large eccentricities of gas giant exoplanets is planet-planet gravitational scattering. The same scattering has been invoked to explain misalignments of planetary orbital planes with respect to their host star's spin. However, an observational puzzle is presented by Kepler-56, which has two inner planets (b and c) that are nearly coplanar with each other, yet are more than 45 degrees inclined to their star's equator. Thus the spin-orbit misalignment might be primordial. Instead, we further develop the hypothesis of the discovery paper that planets on wider orbits generated misalignment through scattering and, as a result, gently torqued the inner planets away from the equatorial plane of the star. We integrated the equations of motion for Kepler-56 b and c along with an unstable outer system initialized with either two or three Jupiter-mass planets. We address here whether the violent scattering that generates large mutual inclinations can leave the inner system intact, tilting it \emph{gently}. In almost all of the cases that begin with two outer planets, either the inner planets remain nearly coplanar with each other in the star's equatorial plane, or they are scattered violently to high mutual inclination and high spin-orbit misalignment. By contrast, in the systems with three unstable outer planets, a spin-orbit misalignment large enough to explain the observations is generated 28\% of the time while the inner planets remain coplanar, which is consistent with the frequency of this phenomenon observed so far. We conclude that multiple-planet scattering in the outer parts of the system may account for this new population of coplanar planets hosted by oblique stars. \vspace{0.3 in}
As part of the great diversity of known planetary systems, hot Jupiters are frequently observed with spin-orbit misalignment \citep{2008Hebrard, fabrycky09, triaud10, morton11, moutou11, albrecht12, hebrard13}. Planets with even slightly more widely spaced orbits have only rarely allowed spin-orbit measurement, due to observational difficulties. Kepler-56 is one of the few known systems that contain several more widely spaced planets (planets b and c have periods of 10.5 days and 21.4 days) and for which a spin-orbit measurement exists. In fact, it was the first such system to show spin-orbit misalignment \citep{2013Huber}. Both inner planets are misaligned with their host star's spin axis by at least $45^\circ$, while being mutually aligned to within about $10^\circ$. The geometry of this system, as well as its eventual fate, has been detailed by \cite{2014Li}. In Kepler-56, \cite{2013Huber} also found a radial acceleration consistent with a third giant planet (call it planet d) in a several-AU orbit. That detection inspired them to propose the following scenario to explain the misalignment (following \citealt{2010Mardling} and \citealt{2011Kaib}). Suppose that a fourth planet is initially in the outer parts of the system, and all planets and the stellar spin are coplanar to within a few degrees. The orbits of the two outer planets may become unstable, initiating an epoch of gravitational scattering. Eventually planet d could eject the additional planet, leaving planet d on an eccentric orbit with an inclination $i_d$. Thus the chaotic dynamics could begin with a relatively flat system of planets and inject inclination into its outer parts. Two-planet scattering simulations leave behind a planet with an inclination at or above $i_0 = 22.5^\circ$ about 1\% of the time \citep{2001Ford}, whereas three-planet simulations do so $\sim 30$\% of the time \citep{2008Chatterjee}. From then on, this inclined outer planet d would slowly cause the inner planets to precess, periodically sampling spin-orbit angles between $0$ and $2 i_d$, assuming planet d's angular momentum dominates the rest of the system. That tilt would be ``gentle'' in the sense that the inner planets would maintain coplanarity \citep{1997Innanen, 2011Kaib, 2013Huber, Boue2014}. In the specific case of Kepler-56, the inner planets also have low eccentricities ($<0.1$) as determined by the transit timing variation analysis \citep{2013Huber}, further evidence that they did not directly participate in the scattering. Interestingly, similar dynamics have recently been noted in the Solar System, supposing the solar obliquity is due to a distant perturbing planet \citep{2016Gomes,2016Bailey}, a revisitation of an old idea \citep{1972GoldreichWard}. Given that spin-orbit misalignment seems to be a generic feature of different kinds of planetary systems, it is of great importance to understand the mechanism(s) that can lead to such an outcome. The weakest part of the scattering scenario seems to be the need for the scattering planets to leave the inner planets undisturbed. This aspect can only be checked via numerical simulations. The plan for this paper is as follows. In section~\ref{sec:method}, we describe the suite of numerical simulations: the method and initial conditions. In section~\ref{sec:ex}, we give a few examples that lead to ejections or collisions of the outer planets, yielding a system with misaligned inner planets. Section~\ref{sec:res} will be dedicated to the statistical outcomes and their interpretation.
We will discuss absolute inclinations as well as mutual inclinations. Finally, we conclude the paper with a summary of our results in section~\ref{sec:concl}.
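For concreteness, the sketch below shows how a system of this kind can be set up with the open-source \texttt{REBOUND} N-body package: Kepler-56 b and c plus three closely packed, Jupiter-mass outer planets, whose inclinations with respect to the initial reference plane (taken as a proxy for the stellar equator) are then followed in time. The stellar and planetary masses, the outer spacings, the integrator choice and the integration time are illustrative assumptions only and do not reproduce the initial conditions of our production runs, which are specified in section~\ref{sec:method}.
\begin{verbatim}
# Illustrative setup (not our production configuration): two inner planets
# plus three packed Jupiter-mass outer planets, integrated with REBOUND.
import numpy as np
import rebound

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.integrator = 'ias15'                  # adaptive, high-order integrator

sim.add(m=1.3)                            # host star (illustrative mass)
sim.add(m=7.0e-5, a=0.10)                 # planet b (illustrative mass and a)
sim.add(m=5.5e-4, a=0.17)                 # planet c
m_jup = 9.5e-4
for a_out in (2.0, 2.6, 3.4):             # close spacing -> likely unstable
    sim.add(m=m_jup, a=a_out, e=0.05, inc=np.radians(1.0),
            f=np.random.uniform(0.0, 2.0 * np.pi))
sim.move_to_com()

for t in np.linspace(0.0, 1.0e6, 101):    # 1 Myr, sampled at 101 epochs
    sim.integrate(t)
    b, c = sim.particles[1], sim.particles[2]
    # osculating inclinations of b and c relative to the initial plane
    print(t, np.degrees(b.inc), np.degrees(c.inc))
\end{verbatim}
The quantities monitored here -- the tilt of planets b and c away from the initial plane and their mutual inclination -- are the same ones used throughout the paper to distinguish gentle tilting from violent disruption of the inner pair.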
\label{sec:concl} We have run simulations attempting to implement the idea of \cite{2013Huber} for tilting the inner planets via scattering of outer planets, and found it to be unlikely in the case of two-planet scattering but plausible in the case of three-planet scattering. We ran $173$ simulations with two outer planets, of which only one produced a spin-orbit misalignment of the inner two planets high enough to match the observations. However, in that case the mutual inclination of the inner planets is about $20^\circ$ for most of the evolution, too high for a reasonable match with Kepler-56 b and c's mutual inclination \citep{2013Huber}. We did not find another system with similarly high inclinations, even though we did find a few systems where one of the inner planets ended up with a high inclination. We speculate that the ejection of the second-outermost planet in system S3 \emph{and} the highest inner-planet inclinations came from the same source -- an epoch of prolonged scattering which allowed access to these dynamically rarer outcomes. Judging from our simulations, however, our hypothesis that the outer planet(s) generate a high spin-orbit misalignment requires particularly violent scattering. This is apparently possible with three equal-mass outer planets, but not with two. In runs with three planets in the exterior parts, scattering can tilt the inner system dramatically. For inner systems that are not disrupted, 28\% showed misalignment from the original plane of the outermost planet by more than $45^\circ$ at a randomly selected time in the future secular evolution of the system. Our runs showed that usually two outer planets remain after the scattering, whereas in Kepler-56, only one has been found. New data and analysis appear to confirm the existence of a third planet in the Kepler-56 system (Otor, Montet et al., in prep.). They do not exclude the existence of a fourth planet in a wide orbit, which could still be hiding. Observationally, the current 95\% upper limit on a long-term radial velocity acceleration from a fourth planet is 3.2 m/s/yr (Otor, Montet et al., in prep.), which is why we quoted results with respect to this benchmark above. The second outer planet in our simulations almost always had a much smaller effect -- it could easily evade that limit. A useful avenue for future work would be quantifying whether three unequal-mass planets can achieve large enough misalignments. Also, our focus was on one particular system (Kepler-56), but one would ideally model a population of systems, with an initial distribution that is plausible from planet formation theories, to see what distributions of spin-orbit and mutual-inclination outcomes are expected for the inner planets. Misalignment of inner planets is likely not rare. Kepler-56 was the 6th system of multiple transiting planets whose stellar obliquity was measured \citep{2013Albrecht} -- the search turned up an oblique star unexpectedly quickly. Indeed, another system, KOI-89, has recently been found to feature large-angle spin-orbit misalignment of its two inner, coplanar planets \citep{ahlers15}. In contrast to Kepler-56, there is no known additional object orbiting further out. Nevertheless, this configuration is common enough that, if scattering indeed explains this population, we would suggest multi-planet scattering is more common than two-planet scattering. There are observational clues that scattering is probably not the sole mechanism generating misalignments.
\cite{2015Mazeh} have found stellar misalignment to be a strong function of stellar temperature, but not of planetary multiplicity or coplanar architecture. A third planet does not appear to be needed to produce systems with characteristics similar to those of Kepler-56, thus weakening scattering as a major mechanism for spin-orbit misalignments of two or more coplanar planets. A host of other mechanisms may also be in play. The protoplanetary disk may have been tilted from its inception \citep{1991Tremaine, 2010Bate, 2015Fielding}, or due to magnetic torques in its early stages \citep{2011Lai}. It could have endured a torque from a previously bound stellar companion \citep{2012Batygin} or from a passing star \citep{2016XiangGruess}. More exotically, internal convection might even have tilted the stellar surface \citep{2013Rogers} relative to the planetary plane. Most of these mechanisms would likely leave the non-transiting planet in roughly the same plane as the transiting planets. Such a configuration will eventually be testable, as orbital precession of the inner planets due to the outer one will become observable in transit data through slow but steady drifts of the transit durations \citep{2002MiraldaEscude}. Our main conclusion is that three outer planets are necessary for scattering to cause the amount of misalignment inferred for Kepler-56's planets b and c. Two-planet scattering does not seem sufficient, because the excitation is rarely dramatic enough. Apparently, in these cases scattering in part of the planetary system propagates chaos to all other parts as well. This conclusion is probably much more general than our attempts to model the Kepler-56 system. In particular, it has been the upshot of attempts to model the early days of the Solar System \citep{2009Brasser,2012Agnor, 2016Kaib}. This conclusion may more broadly apply to exoplanets as well. For instance, since many or most planetary systems of small planets exhibit dynamically packed and rather calm orbits \citep{2014Fabrycky}, and most systems of giant planets have large eccentricities \citep{2010Cumming}, this may suggest that these two types of systems are truly separate, expressing two distinct outcomes of the planet formation process.
16
7
1607.08630
1607
1607.04302_arXiv.txt
The Large Synoptic Survey Telescope (LSST) will provide precise ground-based photometric monitoring of billions of stars in the Galactic field and in open star clusters. The light curves of these stars will give an unprecedented view of the evolution of rotation and magnetic activity in cool, low-mass main-sequence dwarfs of spectral type GKM, allowing precise calibration of rotation-age and flare rate-age relationships, and opening a new window on the accurate age dating of stars in the Galaxy. Previous surveys have been hampered by small sample size, poor photometric precision and/or short time baselines, so LSST data are essential for obtaining new, robust age calibrations. The evolution of the rotation rate and that of the magnetic activity in solar-type stars are intimately connected. Stellar rotation drives a magnetic dynamo, producing a surface magnetic field and magnetic activity which manifests as starspots, chromospheric (Ca II HK, H$\alpha$) and coronal (X-ray) emission and flares. The magnetic field also drives a stellar wind causing angular momentum loss (``magnetic braking'') which slows the rotation rate over time, leading to decreased magnetic activity. More magnetically active stars (larger spots, stronger Ca II HK, H$\alpha$ and X-ray emission, more flares) therefore tend to be younger and to rotate faster. The rotation-age relationship is known as gyrochronology, and the correlation between rotation, age and magnetic activity for solar-type stars was first codified by \citet{skumanich1972}. However, the decrease in rotation rate and magnetic field strength over long time-scales is poorly understood and, in some cases, hotly contested \citep[{e.g.}][]{angus2015, van-saders2016}. Recent asteroseismic data from the Kepler spacecraft have revealed that magnetic braking may cease at around the solar Rossby number, implying that gyrochronology is not applicable to older stars \citep{van-saders2016}. In addition, the rotational behavior of lower mass stars is largely unknown due to the faintness of mid-late type M dwarfs. There is reason to believe that M dwarfs cooler than spectral type $\sim$ M4 may behave differently from the G, K and early M stars, since that spectral type marks the boundary where the star becomes fully convective, and a solar-type shell dynamo (which requires an interface region between the convective envelope and radiative core of the star) can no longer operate. Using chromospheric H$\alpha$ emission as a proxy, \citet{west2008} studied a large sample of M dwarfs from SDSS and showed that magnetic activity in mid-late M dwarfs lasts much longer than in the earlier type stars. LSST will provide photometric rotation periods for a new region of period-mass-age parameter space. The Kepler spacecraft focused on Earth-like planets with Sun-like hosts; thus the majority of its targets were G type, with fewer K and M dwarfs. Unlike Kepler, however, any target falling within LSST's field of view will be observed --- not just those on a predetermined target list. In addition, due to the large collecting area of LSST, it will be sensitive to a significant population of distant K and M dwarfs. LSST will operate for 10 years, more than double the length of the Kepler prime mission. This long time baseline will enable rotation signatures of faint, slowly rotating stars to be detected, populating both low-mass and old regions of the age-rotation parameter space. Thus, LSST will provide an important complementary data set to Kepler (and the upcoming TESS mission).
The LSST data will also allow an unprecedented view of stellar flares, and the calibration of flare rate-age relations that may provide an additional method for age-dating M dwarf populations. \citet{kowalski2009} used sparsely sampled SDSS light curves in Stripe 82 to quantify M dwarf flare rates as a function of height above the Galactic plane, and showed that flare stars may comprise a younger population than active stars (those showing H$\alpha$ emission). Long time baselines and monitoring of large numbers of stars are required to obtain good flare statistics, so LSST will be perfectly suited for this study. As coeval, equidistant, and chemically homogeneous collections of stars, open star clusters with different ages are ideal for studying the dependencies of astrophysical phenomena on the most fundamental stellar parameters - age and mass. Indeed, there are few fields in astronomy that do not rely on results from cluster studies, and clusters play a central role in establishing how stellar rotation and magnetic activity can be used to constrain the ages of stars and stellar populations. In particular, clusters provide essential calibration for rotation-age-activity relations, since each cluster gives a snapshot of stellar evolution at a single age, for all masses \citep[e.g.][]{meibom2009,meibom2011,mms+11,mbp+15,ghr+06,gondoin2012,gondoin2013,wdm+11}. LSST will also enable the use of cluster and field stars as laboratories for investigating magnetic activity cycles (such as the 11 year cycle on the Sun). There is some evidence that younger, more active stars are less likely to show regular cycle behavior, while older stars such as the Sun typically do show regular cycles \citep{baliunas1995}. Due to the long monitoring times that are required to diagnose activity cycles, it has previously been difficult to carry out a large-scale survey of activity cycle behavior, and thus quantify the changes that occur with magnetic dynamo evolution (e.g. as the star spins down). LSST will easily rectify this situation and indeed the cadence will be well-suited to observing cyclic behavior both in the field and in clusters. The changes that occur in the surface magnetic field (both strength and topology) as a star ages are also not well understood, with fewer than 100 (nearby, bright) stars presently having good measurements. Followup observations of stars from a large LSST sample covering a range of ages and masses with well-determined cycle periods will open a new window on the study of magnetic field evolution. While LSST light curves have the potential to answer some of the most fundamental questions regarding the evolution of stellar rotation and magnetism, it is essential that the properties of the target stars be accurately determined. In order to understand the scope of the followup observations that will be needed to characterize magnetically active stars that exhibit starspots and flares, we performed a number of simulations. We first describe simulations of field stars at several galactic latitudes, sampled with an LSST cadence, and examine the target densities of stars with detected rotation periods (due to starspot modulation) and detected flares. We then consider open clusters at different ages, predict rotation and flare rates, and discuss the complications of determining cluster membership. Finally we look at the constraints for determining activity cycles and measuring magnetic fields directly.
Using the results of these simulations, we outline the followup requirements necessary to fully exploit the LSST data set for stellar rotation and magnetism studies both in the Galactic field and in open clusters. Although we focused our study here on magnetically active stars, in order to provide well-defined estimates for followup capabilities, we note that similar observing strategies and followup resources will be valuable for investigations of a wide variety of variable stars, including eclipsing binaries, pulsating stars at a wide range of periods from RR Lyrae to Cepheids to LPVs, cataclysmic variables and novae, and also for other Galactic variability phenomena such as microlensing and planetary transits. Wide field followup imaging and spectroscopic (both moderate and high resolution) facilities will be useful for all of these stellar science topics.
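As an indication of the kind of measurement these simulations assume, the sketch below recovers a rotation period from an irregularly sampled, starspot-modulated light curve with a Lomb-Scargle periodogram (here via \texttt{astropy}); the cadence, modulation amplitude, noise level and period search range are invented for illustration and are not drawn from our LSST simulations.
\begin{verbatim}
# Minimal sketch: recover a rotation period from a sparsely sampled,
# spot-modulated light curve. All numbers here are illustrative only.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)
p_true = 18.0                                # days (assumed rotation period)
t = np.sort(rng.uniform(0.0, 3650.0, 900))   # ~900 visits over 10 years
amp, sigma = 0.010, 0.005                    # 1% spot modulation, 5 mmag noise
mag = amp * np.sin(2.0 * np.pi * t / p_true) + rng.normal(0.0, sigma, t.size)

ls = LombScargle(t, mag, sigma)
freq, power = ls.autopower(minimum_frequency=1.0 / 100.0,   # P <= 100 d
                           maximum_frequency=1.0 / 0.5)     # P >= 0.5 d
p_best = 1.0 / freq[np.argmax(power)]
fap = ls.false_alarm_probability(power.max())
print(f"recovered period: {p_best:.2f} d   false-alarm prob.: {fap:.1e}")
\end{verbatim}
Periodogram-based recovery of this kind, applied to realistic LSST cadences and spot amplitudes, is what we mean by ``detected rotation periods'' in the simulations described above.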
The followup resources and time needed to carry out this particular science program on rotation and magnetic activity in low mass stars are listed in the following tables. High priority resources include wide field (one square degree) imaging with both broad band ($ugriz$) and narrow band (Ca II H and K) filters and highly multiplexed (1000 fiber) spectrographs at moderate (R=5000) and high (R=20,000-100,000) resolution. Polarimetry is also extremely valuable for magnetic field measurements and requires a high resolution spectrograph that has been designed for polarimetric observations. A range of telescopes from $<$ 3m to 25-30m will be needed, with the smaller telescopes mainly used for imaging and the larger ones for spectroscopy. Implementing the entire (ambitious!) program outlined here will require significant resources over the ten year period of LSST operations: 4000 hours on $<$3m telescopes; 11,000 hours on 3-5m telescopes; 6500 hours on 8-10m telescopes; and 1200 hours on 25-30m telescopes. We noted two special infrastructure requests: first, in \S2.4, that the individual 15 second images be made available as Level 3 data products (the same as the standard LSST data); and second, in \S3.2, that several LSST special survey (``deep drilling'') fields be focused on specific open clusters, including especially the iconic solar age cluster M67, which lies just outside the nominal LSST footprint. After carrying out this exercise in designing a program to follow up on LSST photometry of magnetically-active low-mass stars, we are truly excited about the new frontiers that LSST will open in the investigation of gyrochronology, flares, activity cycles and magnetic fields, in both open clusters and the Galactic field. The results of this program will not only advance stellar astrophysics in a transformative way but they will synergistically inform other fields such as the nature and evolution of exoplanet system environments and the evolution of the Galaxy. We look forward to pursuing this investigation for real starting in 2022!
16
7
1607.04302
1607
1607.05903_arXiv.txt
Gravitational lensing has long been considered a valuable tool to determine the total mass of galaxy clusters. The shear profile inferred from the statistics of the ellipticity of background galaxies allows one to probe the intermediate and outer regions of the cluster, thus yielding an estimate of the virial mass. However, the mass sheet degeneracy and the need for a large number of background galaxies motivate the search for alternative tracers which can break the degeneracy among model parameters and hence improve the accuracy of the mass estimate. Lensing flexion, i.e. the third derivative of the lensing potential, has been suggested as a good answer to the above quest since it probes the details of the mass profile. We investigate here whether this is indeed the case by jointly using weak lensing shear, magnification and flexion. We use a Fisher matrix analysis to forecast the relative improvement in the mass accuracy for different assumptions on the shear and flexion signal\,-\,to\,-\,noise (S/N) ratio, also varying the cluster mass, redshift, and ellipticity. It turns out that the error on the cluster mass may be reduced by up to a factor $\sim 2$ for reasonable values of the flexion S/N ratio. As a general result, we find that the improvement in mass accuracy is larger for more flattened haloes, but extracting general trends is difficult because of the many parameters at play. We nevertheless find that flexion is as efficient as magnification at increasing the accuracy of both the mass and the concentration determination.
In a hierarchical bottom\,-\,up scenario, structure formation proceeds by the merging of low mass objects into high mass systems. Being the most massive gravitationally bound structures in the universe, galaxy clusters emerge as the end point of this process, which makes them ideal probes of the structure formation history. Both their density profile and their mass function are intimately related to the underlying background cosmological model and to the growth of structures, thus offering the intriguing possibility to constrain both the kinematics and the dynamics of the universe. In particular, they trace the exponential cutoff of the halo mass function, which makes them particularly sensitive to the details of the theory of gravity. Weighing galaxy clusters has therefore become a rewarding yet daunting business, leading astronomers to look for different methods to achieve this difficult goal. Weak gravitational lensing soon emerged as one of the most valuable candidates to end this quest \citep{BS2001,Ref2003,Sch2006,Mun2008}. Massive clusters bend the optical path of light rays, causing a distortion and a magnification of the image of a background source, so that a supposedly circular object looks like an elliptical one. Needless to say, sources are not circular, so that the lensing effect can only be measured statistically, which calls for a large source number density. As difficult as this task is, it is nevertheless possible to use this effect to trace the shear profile of clusters, as first shown by \cite{Ty90}, who measured the statistical apparent alignment of galaxies due to gravitational lensing. Since their initial measurements, different methods have been proposed to convert these data into a cluster mass profile and hence a determination of its total mass (see, e.g., \citealt{KS93,SS2001}). Although mass measurements through weak lensing are now routinely performed, the situation is far from being problem free. Indeed, the shear profile only traces the cluster mass distribution in the intermediate to outer regions and is thus unable to constrain the concentration parameter $c_{vir} = R_{vir}/R_s$, with $R_{vir}$ $(R_s)$ the virial (scale) radius of the mass profile. The mass\,-\,concentration degeneracy then transfers part of the uncertainty on $c_{vir}$ to the mass $M_{vir}$, thus weakening the constraints on this latter quantity. Moreover, it is not always possible to get a sufficiently high source number density, which prevents one from narrowing down the range for the mass. As a possible way out of both these problems, one has to add a second tracer to the shear profile, with strong lensing as the most common candidate. Indeed, lensing in the strong regime traces the inner regions, thus pushing down the statistical error on $c_{vir}$ and hence on the mass. Unfortunately, strong lensing features are difficult to detect and their modeling is strongly sensitive to the presence of substructures, whose amount and distribution are far from being well known. An alternative probe is represented by gravitational lensing flexion, defined as the third derivative of the lensing potential \cite{GN02,GB05,Betal06}. Spin\,-\,1 and spin\,-\,3 flexion terms are responsible for the mapping of intrinsically round sources into off-centred and arc\,-\,like images, thus adding further features to the lensing phenomenon.
Flexion can be measured from higher order shape moments \citep{Ok2007} or from image model fitting \citep{Ref12003,RB2003,Cain2011}, thus allowing one to probe small scale features of the mass profile, as first demonstrated by \cite{Leo07} for Abell\,1689. As a bonus, third order galaxy moments are expected to be much smaller than second order ones. As a consequence, the dispersion of the intrinsic flexion should be much smaller than that of the intrinsic ellipticity, thus lowering the shape noise which severely degrades the potential of the shear method. Following preliminary analyses, \cite{Er12} have investigated how adding flexion to shear improves the cluster mass determination for two different clusters using mock data. However, the difficulty in simulating data prevented them from exploring both the mass\,-\,redshift parameter space and the assumptions on the shear and flexion S/N. We therefore extend their analysis here by resorting to a Fisher matrix analysis. This makes it possible to investigate how the constraints on the mass change as a function of both the mass itself and the cluster redshift. Moreover, we also investigate the dependence on the axial ratio. Since measuring flexion is a daunting task yet to be satisfactorily achieved, we investigate how the results change as a function of the shear and flexion S/N in order to examine which strategy is more rewarding from the point of view of getting a significant boost in mass accuracy. The plan of the paper is as follows. Basic lensing quantities and the relevant formulae for the particular cluster model adopted are summarized in Sect.\,2, while Sect.\,3 presents the mass determination methodology combining shear, magnification and flexion. Details on the computation of the Fisher matrix are given in Sect.\,4, while Sect.\,5 presents the main results of the paper. Conclusions are then summarized in Sect.\,6, while in the Appendix we qualitatively address the impact of substructures.
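To make the structure of the forecast explicit, the snippet below sketches a generic Fisher matrix computation for a radial observable, with derivatives estimated by finite differences and a diagonal noise model; the toy power-law profile and all numerical values are placeholders, not the actual shear, magnification and flexion expressions introduced in Sects.\,2\,--\,4.
\begin{verbatim}
# Schematic Fisher-matrix construction for a radial lensing observable.
# The model and the noise are toy placeholders for the real profiles.
import numpy as np

def fisher_matrix(model, theta, radii, sigma, step=1.0e-4):
    # F_ij = sum_k dO_k/dtheta_i * dO_k/dtheta_j / sigma_k^2,
    # with derivatives from central finite differences
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = step * max(abs(theta[i]), 1.0)
        derivs.append((model(theta + d, radii) -
                       model(theta - d, radii)) / (2.0 * d[i]))
    derivs = np.array(derivs)                 # shape (n_par, n_bins)
    return derivs @ np.diag(1.0 / sigma**2) @ derivs.T

def toy_shear(theta, radii):                  # toy power-law "shear" profile
    amp, slope = theta
    return amp * radii**(-slope)

radii = np.logspace(-1.0, 0.5, 20)            # radial bins (arbitrary units)
sigma = 0.1 * toy_shear([1.0, 0.8], radii)    # i.e. S/N = 10 per bin
F = fisher_matrix(toy_shear, [1.0, 0.8], radii, sigma)
cov = np.linalg.inv(F)                        # parameter covariance matrix
print(np.sqrt(np.diag(cov)))                  # marginalised 1-sigma errors
\end{verbatim}
Since the errors on shear, magnification and flexion are taken to be uncorrelated, each probe contributes an additive term of this form to the total Fisher matrix, which is then inverted to forecast the accuracy on the halo mass and concentration.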
The search for an efficient way to weigh galaxy clusters dates back to the first understanding of their relevant role in constraining cosmological parameters and structure formation scenarios. As soon as the first shear measurements became possible, weak lensing emerged as a valid tool to achieve this goal, thanks to its ability to probe the matter content regardless of whether the cluster is relaxed and of its state of thermal equilibrium. Although cluster masses from weak lensing have been measured for hundreds of clusters (see, e.g., the compilation in \citealt{S15} and refs. therein), there is still room for improvement, since the typical uncertainty on the mass is far from negligible. Flexion has been proposed as a possible way to boost the mass measurement accuracy, so we have here investigated under which conditions and to what extent this is indeed the case. Our Fisher matrix analysis convincingly shows that the use of flexion can indeed reduce the error on the mass determination by a significant amount under realistic assumptions on the overall S/N ratio of $(g, {\cal{F}}, {\cal{G}})$. The gain in accuracy is larger for more flattened haloes as a result of the larger flexion signal, while the scaling with the cluster mass and redshift is non-monotonic as a consequence of the complicated dependence of the Fisher matrix elements on both the derivatives of the observed quantities and the way the S/N ratios scale with the halo parameters. We have also explored how these results change if future developments in shear and/or flexion measurement codes allow the overall S/N to be improved. It turns out that the gain in accuracy saturates with the flexion S/N, so that investing effort in improving $\nu_{{\cal{F}}}$ by more than a factor $\sim 50$ is not a rewarding strategy. Moreover, Fig.\,\ref{fig: smuratiobgchangebfchange} shows that pushing up the flexion S/N has a lower and lower impact as the shear S/N gets larger and larger. However, while there is ample room for improving flexion measurement techniques (this field being still in its infancy), it is unlikely that order of magnitude improvements in the shear S/N are possible in the near future, so that the use of flexion will likely be a valid help to narrow down the mass confidence range. Although quite encouraging, our results need to be further tested since they are based on some reasonable yet qualitative assumptions. First, we have combined shear, magnification and flexion by simply defining the full likelihood function (whose derivatives enter the Fisher matrix) as the product of the three single likelihood functions. This is the same as assuming that the errors on the three probes are uncorrelated. While this is true for shear and magnification, the question is still open for shear and flexion. Indeed, \cite{VMB12} pointed out a correlation between shear and flexion noise which can bias flexion measurements if not correctly taken into account. The same authors proposed a method to reduce both the bias and the correlation, but a precise accounting of this effect in a Fisher matrix analysis requires a model of the covariance matrix which is still unavailable at the moment. Working out such a model requires empirical knowledge of the flexion noise from image simulations and mock measurements, so we have here assumed the ideal situation in which future methods will be able to reduce the correlation between these two different probes of the lensing potential.
Should this not be the case, the Fisher matrix analysis must be repeated including the relevant cross-talk terms, possibly reducing the boost in mass accuracy we find here. It is, however, worth noting that the increase in accuracy we have forecast is so large that we are confident flexion will still stand out as a useful complement to shear for cluster mass determination. A second simplifying assumption concerns the dependence of the shear and flexion S/N on the radial distance from the cluster centre. The lack of any hint on the functional expression of $\nu_X(R)$ (with $X = g, {\cal{F}}, {\cal{G}}$ for shear, first and second flexion, respectively) has forced us to work out a reasonable procedure to find a scaling with both $R$ and the halo parameters. However, given the key role of these quantities in the forecast analysis, it is worth readdressing this issue in more detail. Real and mock data can give us a hint on how to model $\nu_g(R)$, while much more work is needed for $\nu_{{\cal{F}}}(R)$ and $\nu_{{\cal{G}}}(R)$, the flexion measurement method being still to be decided. As a concluding comment, we remark that our results, admittedly preliminary and based on a simplified treatment of errors, nevertheless strengthen the idea that flexion can represent the ideal complement to shear in solving the problem of weighing clusters. Although working out algorithms for flexion measurements with sufficiently high S/N can be a tremendously difficult task, the promise of reducing the mass error makes it definitely worth the trouble.
16
7
1607.05903
1607
1607.07137_arXiv.txt
In this paper we find new solutions for the so-called Einstein-Chern-Simons Friedmann-Robertson-Walker field equations studied in refs. \cite{gms,cgqs}. We consider three cases:\begin{inparaenum}[(i)] \item in the first case we find some solutions of the five-dimensional ChS-FRW field equations when the $h^a$ field is a perfect fluid that obeys a barotropic equation of state; \item in the second case we study the solutions, for the cases $\gamma =1/2,\ 3/4$, when the $h^a$ field is a five-dimensional polytropic fluid that obeys the equation $P^{(h)}=\omega ^{(h)}\rho ^{(h)\gamma }$; \item in the third case we find the scale factor and the state parameter $\omega (t)$ when the $h^a$ field is a variable modified Chaplygin gas. \end{inparaenum} We also consider a space-time metric which contains the usual four-dimensional FRW metric as a subspace, and we then study the same three cases considered in the five-dimensional setting, namely when\begin{inparaenum}[(i)] \item the $h^a$ field is a perfect fluid, \item the $h^a$ field is a five-dimensional polytropic fluid and \item the $h^a$ field is a variable modified Chaplygin gas. \end{inparaenum}
The principles underlying the general theory of relativity state that the space-time is a dynamical object which has independent degrees of freedom, and is governed by the Einstein field equations. This means that in General Relativity (GR) the geometry is dynamically determined. Therefore, the construction of a gauge theory of gravity requires an action that does not consider a fixed space-time background. An action for gravity fulfilling these conditions, albeit only in odd-dimensional space-time, $d=2n+1$, was proposed long ago by Chamseddine~\cite{champ1,champ2,zan1}. If Chern-Simons theories are the appropriate gauge theories to provide a framework for the gravitational interaction, then these theories must satisfy the correspondence principle, namely they must be related to General Relativity. In ref. \cite{salg1} it was shown that standard five-dimensional General Relativity (without a cosmological constant) can be obtained from Chern-Simons gravity theory for a certain Lie algebra $\mathfrak{B}$. The Chern-Simons Lagrangian is built from a $\mathfrak{B}$-valued, one-form gauge connection $\boldsymbol A$ which depends on a scale parameter $l$ that can be interpreted as a coupling constant characterizing different regimes within the theory. The $\mathfrak{B}$ algebra, on the other hand, is obtained from the AdS algebra and a particular semigroup $S$ by means of the S-expansion procedure introduced in refs. \cite{salg2,salg3,azcarr}. The field content induced by $\mathfrak{B}$ includes the vielbein $e^{a}$, the spin connection $\omega ^{ab}$ and two extra bosonic fields $h^{a}$ and $k^{ab}$. In ref. \cite{salg1} it was then shown that it is possible to recover odd-dimensional Einstein gravity theory from a Chern-Simons theory in the limit where the coupling constant $l$ tends to zero while keeping the effective Newton's constant fixed. In ref. \cite{gms}, a 5-dimensional Lagrangian $\mathcal{L=L}_\text{EChS}^{(5)}+\kappa \mathcal{L}_\text{M}$ was considered, composed of a gravitational sector and a matter sector, where the gravitational sector is given by the so-called Einstein-Chern-Simons gravity action \begin{equation} \mathcal{L}_{\mathrm{EChS}}^{\left( 5\right) }=\alpha _{1}l^{2}\epsilon _{abcde}R^{ab}R^{cd}e^{e}+\alpha _{3}\epsilon _{abcde}\left( \frac{2}{3} R^{ab}e^{c}e^{d}e^{e}+2l^{2}k^{ab}R^{cd}T^{e}+l^{2}R^{ab}R^{cd}h^{e}\right) \label{eins} \end{equation} instead of the Einstein-Hilbert action, and where the matter sector is given by the so-called perfect fluid. In this reference, the implications for the cosmological evolution of replacing the Einstein-Hilbert action by the Chern-Simons action in the gravitational sector were studied, for a Friedmann-Robertson-Walker (FRW) metric. Using a compactification procedure known as dynamic compactification, it was found that the cosmological field equations obtained from the Chern-Simons gravity theory lead, in a certain limit, to the usual 4-dimensional FRW equations. It is the purpose of this work to find some new cosmological solutions for the so-called Einstein-Chern-Simons-Friedmann-Robertson-Walker field equations, and to show how such new solutions lead, in a certain limit, to the usual cosmological solutions of Einstein's theory of gravitation.
To find the new cosmological solutions we interpret the $h^{a}$ field as the dark energy in three different cases:\begin{inparaenum}[(i)] \item in the first case we interpret the $h^a$ field as a perfect fluid that obeys a barotropic equation of state; \item in the second case we interpret the $h^a$ field as a polytropic fluid that obeys the equation $P^{(h)}=\omega ^{(h)}\rho^{(h)\gamma }$ and \item in the third case we interpret the $h^a$ field as a variable modified Chaplygin gas. \end{inparaenum} This paper is organized as follows: In section \ref{sec02} we briefly review the so-called Einstein-Chern-Simons Friedmann-Robertson-Walker field equations and their solutions when the standard energy-momentum tensor is modeled as a barotropic fluid. In section \ref{sec03} we study three cases where the $h^{a}$ field is interpreted as the dark energy and the standard energy-momentum tensor is modeled as a barotropic fluid with variable state parameter $\omega(t)$. In the first case, we assume that the $h^a$ field is a perfect fluid which obeys the barotropic equation of state $P^{(h)}=\omega ^{(h)}\rho ^{(h)}$, with $\omega$ a constant, and then we find some solutions of the five-dimensional ChS-FRW field equations. In the second case, we study the case when the $h^a$ field is a five-dimensional polytropic fluid which obeys $P^{(h)}=\omega ^{(h)}\rho^{(h)\gamma}$, with $\omega =\omega(t)$, where some solutions for $\gamma =1/2,\ 3/4$ are found. In the third case, the scale factor and the state parameter $\omega (t)$ are found when the $h^a$ field is a variable modified Chaplygin gas. In section \ref{sec06} a space-time metric which contains the usual four-dimensional FRW metric as a subspace is found, and we then study the same three cases considered in the five-dimensional case. A summary concludes this work.
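Before presenting these solutions, it is useful to recall the standard result that they generalize: if the matter sector obeys the usual continuity equation of an FRW universe with $n$ spatial dimensions and a constant state parameter $\omega$, then
\begin{equation}
\dot{\rho}+nH\left( 1+\omega \right) \rho =0\qquad \Longrightarrow \qquad \rho \propto a^{-n\left( 1+\omega \right) },
\end{equation}
with $H=\dot{a}/a$, so that $n=3$ reproduces the familiar four-dimensional scaling and $n=4$ its five-dimensional analogue. The solutions found below depart from this reference behavior both through the variable state parameter $\omega (t)$ and through the coupling to the $h^{a}$ field; we quote the constant-$\omega$ scaling only as a benchmark, not as part of the Einstein-Chern-Simons field equations themselves.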
We have found some new cosmological solutions for the so-called Einstein-Chern-Simons-Friedmann-Robertson-Walker field equations, taking the $h^a$ field as modelling the dark energy component, represented by three different types of equations of state: barotropic, polytropic, and varying generalized modified Chaplygin gas. For the three types of equations of state, it was found that the behavior of the matter field is described by a barotropic equation of state with variable state parameter, $P=\omega (t)\rho$. A comment about the differences and similarities between the results found in this work and the known results of standard cosmology could be of interest. \begin{enumerate} [(i)] \item If the behavior of two fluids is studied and we want to describe, in the context of standard cosmology, a universe accelerated at a late stage, then one of the fluids may represent dark matter described by the equation of state $P_{1}=\omega_{1}\rho _{1}$ (with $\omega_{1}=0$), while the other may represent dark energy described by the equation of state $P_{2}=\omega_{2}\rho_{2}$ (with $\omega_{2}<-1/3$). In this case we can either consider that each fluid evolves independently of the other, or consider that there is an interaction term $Q$ between the two fluids; in the latter case the fluids do not evolve independently and the behavior of one fluid depends on the behavior of the other. \item If, in the context of Einstein-Chern-Simons cosmology, we consider that the matter field of density $\rho$ represents dark matter (or dark energy) and that the $h^a$ field of density $\rho^{(h)}$ represents dark energy (or dark matter), then, if the two fluids evolve independently of each other, we find that the behavior of one fluid nevertheless depends on the behavior of the other through the geometric term $H^{4}$. This term plays the role of the interaction term $Q$ present in the case of standard cosmology. \item If in standard cosmology we consider two fluids, one with a constant state parameter and the other with a variable state parameter, then, to solve the system dynamics, it is necessary to specify the equation of state of each fluid and to consider an ansatz for the variable state parameter, for the scale factor, or for an energy density. \item In Einstein-Chern-Simons cosmology, this situation can be resolved without considering an ansatz, closing the system and obtaining the form of the different cosmological variables. \end{enumerate}
16
7
1607.07137
1607
1607.06086.txt
We present a new model of the nebular emission from star-forming galaxies in a wide range of chemical compositions, appropriate to interpret observations of galaxies at all cosmic epochs. The model relies on the combination of state-of-the-art stellar population synthesis and photoionization codes to describe the ensemble of \hii\ regions and the diffuse gas ionized by young stars in a galaxy. A main feature of this model is the self-consistent yet versatile treatment of element abundances and depletion onto dust grains, which allows one to relate the observed nebular emission from a galaxy to both gas-phase and dust-phase metal enrichment. We show that this model can account for the rest-frame ultraviolet and optical emission-line properties of galaxies at different redshifts and find that ultraviolet emission lines are more sensitive than optical ones to parameters such as the \CO\ abundance ratio, hydrogen gas density, dust-to-metal mass ratio and upper cutoff of the stellar initial mass function. We also find that, for gas-phase metallicities around solar to slightly sub-solar, widely used formulae to constrain oxygen ionic fractions and the \CO\ ratio from ultraviolet and optical emission-line luminosities are reasonably faithful. However, the recipes break down at non-solar metallicities, making them inappropriate for studying chemically young galaxies. In such cases, a fully self-consistent model of the kind presented in this paper is required to interpret the observed nebular emission.
\label{sec:intro} The emission from interstellar gas heated by young stars in galaxies contains valuable clues about both the nature of these stars and the physical conditions in the interstellar medium (ISM). In particular, prominent optical emission lines produced by \hii\ regions, diffuse ionized gas and a potential active galactic nucleus (AGN) in a galaxy are routinely used as global diagnostics of gas metallicity and excitation, dust content, star formation rate and nuclear activity \citep[e.g.,][]{izotov99,kobulnicky99,kauffmann03,nagao06b,kewley08}. Near-infrared spectroscopy enables such studies in the optical rest frame of galaxies out to redshifts $z\sim1$--3 \citep[e.g.,][]{pettini2004,hainline09,richard11,guaita13,steidel14,shapley15}. While the future {\it James Webb Space Telescope} ({\it JWST}) will enable rest-frame optical emission-line studies out to the epoch of cosmic reionization, rapid progress is being made in the observation of fainter emission lines in the rest-frame ultraviolet spectra of galaxies in this redshift range \citep[e.g.,][]{shapley03,erb10,stark14,stark15a,stark15b,stark16,sobral15}. The interpretation of these new observations requires the development of models optimised for studies of the ultraviolet -- in addition to optical -- nebular properties of chemically young galaxies, in which heavy-element abundances \citep[for example, the C/O ratio;][]{erb10,cooke11} are expected to differ substantially from those in star-forming galaxies at lower redshifts. Several models have been proposed to compute the nebular emission of star-forming galaxies through the combination of a stellar population synthesis code with a photoionization code (e.g., \citealt{garcia95,stasinska96,charlot01}, hereafter CL01; \citealt{zackrisson01,kewley02,panuzzo03,dopita13}; see also \citealt{anders03,schaerer09}). These models have proved valuable in exploiting observations of optical emission lines to constrain the young stellar content and ISM properties of star-forming galaxies \citep[e.g.,][]{brinchmann04,blanc15}. A limitation of current models of nebular emission is that these were generally calibrated using observations of \hii\ regions and galaxies in the nearby Universe, which increasingly appear inappropriate for studying the star formation and ISM conditions of chemically young galaxies (e.g., \citealt{erb10,steidel14,steidel16,shapley15}; see also \citealt{brinchmann08,shirazi14}). We note that this limitation extends to chemical abundance estimates based on not only the so-called `strong-line' method, but also the `direct' (via the electron temperature \Te) method, since both methods rely on the predictions of photoionization models (see Section~\ref{sec:icf}). Another notable limitation of current popular models of the nebular emission from star-forming galaxies is that these do not incorporate important advances achieved over the past decade in the theories of stellar interiors \citep[e.g.,][]{eldridge08,bressan12,ekstrom12,georgy13,chen15} and atmospheres \citep[e.g.,][]{hauschildt99,hillier99,pauldrach01,lanz03,lanz07,hamann04,martins05,puls05,rodriguez05,leitherer10}. In this paper, we present a new model of the ultraviolet and optical nebular emission from galaxies in a wide range of chemical compositions, appropriate to model and interpret observations of star-forming galaxies at all cosmic epochs.
This model is based on the combination of the latest version of the \citet{bruzual03} stellar population synthesis code (Charlot \& Bruzual, in preparation; which incorporates the stellar evolutionary tracks of \citealt{bressan12,chen15} and the ultraviolet spectral libraries of \citealt{lanz03,lanz07,hamann04,rodriguez05,leitherer10}) with the latest version of the photoionization code \cloudy\ (c13.03; described in \citealt{ferland13}). We follow CL01 and use effective (i.e. galaxy-wide) parameters to describe the ensemble of \hii\ regions and the diffuse gas ionized by successive stellar generations in a galaxy. We take special care in parametrizing the abundances of heavy elements and their depletion onto dust grains in the ISM, which allows us to model in a self-consistent way the influence of `gas-phase' and `interstellar' (i.e. gas+dust-phase) abundances on emission-line properties. We build a comprehensive grid of models spanning wide ranges of interstellar parameters, for stellar populations with a \citet{chabrier03} stellar initial mass function (IMF) with upper mass cutoffs 100 and 300\,\msol. We show that these models can reproduce available observations of star-forming galaxies in several line-ratio diagrams at optical (\oiid, \hb, \oiii, \ha, \nii, \siid) and ultraviolet (\nvd, \civd, \heii, \oiiis, \ciiid, \siliiid) wavelengths. We further exploit this model grid to quantify the limitations affecting standard recipes based on the direct-\Te\ method to measure element abundances from emission-line luminosities. The model presented in this paper has already been used successfully to interpret observations of high-redshift star-forming galaxies \citep{stark14,stark15a,stark15b,stark16} and to define new ultraviolet and optical emission-line diagnostics of active versus inactive galaxies \citep{feltre16}. We present our model in Section~\ref{sec:modelling}, where we parametrize the nebular emission of a star-forming galaxy in terms of stellar population, gas and dust parameters. In Section~\ref{sec:optical}, we compute a large grid of photoionization models and show that these succeed in reproducing observations of galaxies from the Sloan Digital Sky Survey (SDSS) in standard optical line-ratio diagrams. We investigate the ultraviolet properties of these models and compare them with observations of star-forming galaxies at various cosmic epochs in Section~\ref{sec:uv}. In Section~\ref{sec:icf}, we investigate the limitations of standard recipes based on the direct-\Te\ method to measure element abundances from emission-line luminosities, focusing on the \CO\ ratio as a case study. We summarise our conclusions in Section~\ref{sec:concl}. 
\section[]{Modelling} \label{sec:modelling} To model the stellar and nebular emission from a star-forming galaxy, we adopt the isochrone synthesis technique introduced by \citet{charlot91} and express the luminosity per unit wavelength $\lambda$ emitted at time $t$ as \begin{equation} L_{\lambda}(t)=\int_0^t dt^\prime\, \psi(t-t^\prime) \, S_{\lambda}[t^\prime,Z(t-t^\prime)] \, T_{\lambda}(t,t^\prime)\,, \label{eq:flux_gal} \end{equation} where $\psi(t-t^\prime)$ is the star formation rate at time $t-t^\prime$, $S_\lambda[t^\prime,Z(t-t^\prime)]$ the luminosity produced per unit wavelength per unit mass by a single stellar generation of age $t^\prime$ and metallicity $Z(t-t^\prime)$ and $T_\lambda(t,t^\prime)$ the transmission function of the ISM, defined as the fraction of the radiation produced at wavelength $\lambda$ at time $t$ by a generation of stars of age $t^\prime$ that is transferred by the ISM. We describe below the prescriptions we adopt for the functions $S_\lambda$ and $T_\lambda$ in equation~\eqref{eq:flux_gal}. We do not consider in this paper the potential contributions to $L_{\lambda}(t)$ by shocks nor an AGN component. \subsection{Stellar emission} \label{sec:stellar_code} We compute the spectral evolution of a single stellar generation $S_\lambda[t^\prime,Z(t-t^\prime)]$ in equation~\eqref{eq:flux_gal} above using the latest version of the \citet{bruzual03} stellar population synthesis model (Charlot \& Bruzual, in preparation; see also \citealt{wofford16}). This incorporates stellar evolutionary tracks computed with the recent code of \citet{bressan12} for stars with initial masses up to 350\,\msol\ \citep{chen15} and metallicities in the range $0.0001\leq{Z}\leq0.040$ (the present-day solar metallicity corresponding to $\zavsol=0.01524$; see also Section~\ref{sec:abunddepl} below). These tracks include the evolution of the most massive stars losing their hydrogen envelope through the classical Wolf-Rayet phase (i.e., stars more massive than about $25\,\msol$ at $Z=\zavsol$, this threshold increasing as metallicity decreases). To compute the spectral energy distributions of stellar populations, the above evolutionary tracks are combined with different stellar spectral libraries covering different effective-temperature, luminosity-class and wavelength ranges \citep{pauldrach01,rauch02,lanz03,hamann04,martins05,rodriguez05,sanchez06,lanz07,leitherer10}. Of major interest for the present study are the prescriptions for the ionizing spectra of hot stars. For O stars hotter than 27,500\,K and B stars hotter than 15,000\,K, the ionizing spectra come from the library of metal line-blanketed, non-local thermodynamic equilibrium (non-LTE), plane-parallel, hydrostatic models of \citet{lanz03,lanz07}, which reach effective temperatures of up to 55,000\,K and cover a wide range of metallicities (from zero to twice solar). The ionizing spectra of cooler stars come from the library of line-blanketed, LTE, plane-parallel, hydrostatic models of \citet{rodriguez05}. For O stars hotter than 55,000\,K, the ionizing spectra are taken from the library of line-blanketed, non-LTE, plane-parallel, hydrostatic models of \citet[][which are also used to describe the radiation from faint, hot post-asymptotic-giant-branch stars]{rauch02}. These are available for two metallicities, \zavsol\ and 0.1\zavsol, and interpolated in between. The lowest-metallicity spectra are used down to $\zav=0.0005$, below which pure blackbody spectra are adopted. 
Finally, for Wolf-Rayet stars, the spectra come from the library of line-blanketed, non-LTE, spherically expanding models of \citet[][see also \citealt{graefener02, hamann03,hamann06,sander12,hainich14,hainich15,todt15}]{hamann04},\footnote{Available from \url{http://www.astro.physik.uni-potsdam.de/~PoWR}} available at four metallicities, $0.07\zavsol$, $0.2\zavsol$, $0.5\zavsol$ and \zavsol. The closest of these metallicities is selected to describe the emission from Wolf-Rayet stars at any metallicity in the stellar population synthesis model. \subsection{Transmission function of the ISM} \label{sec:photo_code} To compute the transmission function $T_{\lambda}(t,t^\prime)$ of the ISM in equation~\eqref{eq:flux_gal}, we follow CL01 (see also \citealt{pacifici12}) and write this as the product of the transmission functions of the ionized gas, $T_{\lambda}^{+}(t,t^\prime)$, and the neutral ISM, $T_{\lambda}^{0}(t,t^\prime)$, i.e. \begin{equation} T_{\lambda}(t,t^\prime)=T_{\lambda}^{+}(t,t^\prime) \, T_{\lambda}^{0}(t,t^\prime)\,. \label{eq:trans_funct_tot} \end{equation} If the ionized regions are bounded by neutral material, $T_\lambda^{+} (t,t^\prime)$ will be close to zero at wavelengths blueward of the H-Lyman limit but greater than unity at the wavelengths corresponding to emission lines. In this paper, we focus on the nebular emission from star-forming galaxies, which is controlled primarily by the function $T_{\lambda}^{+}$. We assume for simplicity that this depends only on the age $t^\prime$ of the stars that produce the ionizing photons. Since 99.9 per cent of the H-ionizing photons are produced at ages less than 10\,Myr by a single stellar generation \citep[e.g.][]{charlot93, binette94}, as in CL01, we write \begin{equation} T_{\lambda}^{+}(t,t^\prime)=\left\{ \begin{array}{l l} T_{\lambda}^{+}(t^\prime) \, & \mathrm{for} \hspace{3mm} t^\prime \leqslant 10\,\mathrm{Myr}\,,\\ 1 & \mathrm{for} \hspace{3mm} t^\prime > 10\,\mathrm{Myr} \,. \end{array}\right. \label{eq:T_ionized} \end{equation} We do not consider in this paper the attenuation by dust in the neutral ISM, which is controlled by the function $T_{\lambda}^{0}$. We refer the reader to the `quasi-universal' prescription by \citet[][see also section~2.5 of \citealt{chevallard16}]{chevallard13} to express this quantity as a function of stellar age $t^\prime$ and galaxy inclination $\theta$, while accounting for the fact that young stars in their birth clouds are typically more attenuated than older stars in galaxies \citep[e.g.][]{silva98,charlot00}. We use the approach proposed by CL01 to compute the transmission function of the ionized gas in equation~\eqref{eq:T_ionized}. This consists in describing the ensemble of \hii\ regions and the diffuse gas ionized by a single stellar generation in a galaxy with a set of effective parameters (which can be regarded as those of an effective \hii\ region ionized by a typical star cluster) and appealing to a standard photoionization code to compute $T_{\lambda}^{+}(t^\prime)$ at ages $t^\prime \leqslant 10\,\mathrm{Myr}$ for this stellar generation. By construction, the contributions by individual \hii\ regions and diffuse ionized gas to the total nebular emission are not distinguished in this prescription. 
This is justified by the fact that diffuse ionized gas, which appears to contribute around 20--50 per cent of the total H-Balmer-line emission in nearby spiral and irregular galaxies, is observed to be spatially correlated with \hii\ regions and believed to also be ionized by massive stars \citep[e.g.,][and references therein; see also \citealt{hunter90,martin97,oey97,wang97,ascasibar16}]{haffner09}. The effective parameters describing a typical \hii\ region in this approach therefore reflect the global (i.e. galaxy-wide) properties of the gas ionized by a stellar generation throughout the galaxy. To compute $T_{\lambda}^{+}(t^\prime)$ in this context, we appeal to the latest version of the photoionization code \cloudy\ (c13.03; described in \citealt{ferland13}).\footnote{Available from \url{http://www.nublado.org}} We link this code to the spectral evolution of a single stellar generation, $S_\lambda[t^\prime,Z(t-t^\prime)]$, as achieved by CL01, to whom we refer for details. In brief, we compute the time-dependent rate of ionizing photons produced by a star cluster with effective mass $M_\ast$ as \begin{equation} Q(t^\prime)=\frac{M_\ast}{hc}\int_0^{\lambda_{\rm L}}{d\lambda}\,\lambda{S_\lambda}(t^\prime)\,, \label{eq:qlyc} \end{equation} where $h$ and $c$ are the Planck constant and the speed of light, $\lambda_{\rm L}=912\,\AA$ is the wavelength at the Lyman limit and, for the sake of clarity, we have dropped the dependence of $S_\lambda$ on stellar metallicity \zav. In the \cloudy\ code, the gas is described as spherical concentric layers centred on the ionizing source (assumed to be pointlike). From the expression of $Q(t^\prime)$ in equation~\eqref{eq:qlyc}, we compute the time-dependent ionization parameter -- defined as the dimensionless ratio of the number density of H-ionizing photons to that of hydrogen -- at the distance $r$ from the ionizing source, \begin{equation} U(t^\prime,r)=Q(t^\prime)/(4\pi r^2 \nh{c})\,, \label{eq:ur} \end{equation} where we have assumed for simplicity that the effective density \nh\ does not depend on the age $t^\prime$ of the ionizing stars. The dependence of $U$ on $t^\prime$ in the above expression implies that a galaxy containing several stellar generations is modelled as a mix of gas components characterised by different effective ionization parameters. As in CL01, we assume for simplicity in this paper that galaxies are ionization-bounded. It is useful to characterise a photoionization model in terms of the ionization parameter at the Str\"omgren radius, defined by \begin{equation} R_{\rm S}^3(t^\prime)={3Q(t^\prime)}\slash{(4\pi n_{\rm H}^2 \epsilon \alpha_{\rm B})}\,, \label{eq:rs} \end{equation} where $\epsilon$ is the volume-filling factor of the gas (i.e., the ratio of the volume-averaged hydrogen density to \nh), assumed not to depend on $t^\prime$, and $\alpha_{\rm B}$ is the case-B hydrogen recombination coefficient \citep{osterbrock06}. The geometry of the model is set by the difference between $R_{\rm S}$ and the inner radius of the gaseous nebula, $r_{\rm in}$. For $r_{\rm in}\ll R_{\rm S}$, the thickness of the ionized shell is of the order of $R_{\rm S}$, implying spherical geometry, while for $r_{\rm in} \gtrsim R_{\rm S}$, the geometry is plane-parallel. 
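To make these relations concrete, the short numerical sketch below evaluates the Str\"omgren radius of equation~\eqref{eq:rs} and the local ionization parameter of equation~\eqref{eq:ur} for one set of inputs. The adopted values of $Q$, \nh\ and $\epsilon$, and the case-B recombination coefficient at $10^4$\,K, are illustrative assumptions rather than parameters of the model grid.

```python
import numpy as np

# Physical constants (cgs)
C_LIGHT = 2.998e10    # speed of light [cm s^-1]
ALPHA_B = 2.59e-13    # case-B H recombination coefficient at ~10^4 K [cm^3 s^-1] (assumed)

def stromgren_radius(Q, n_H, eps=1.0):
    """Stroemgren radius R_S for ionizing-photon rate Q [s^-1], hydrogen
    density n_H [cm^-3] and volume-filling factor eps (the R_S equation above)."""
    return (3.0 * Q / (4.0 * np.pi * n_H**2 * eps * ALPHA_B))**(1.0 / 3.0)

def ionization_parameter(Q, n_H, r):
    """Local ionization parameter U(r) = Q / (4 pi r^2 n_H c)."""
    return Q / (4.0 * np.pi * r**2 * n_H * C_LIGHT)

# Illustrative (assumed) inputs: a young cluster emitting Q ~ 1e49 photons/s
# into gas with n_H = 100 cm^-3 and filling factor eps = 1.
Q, n_H, eps = 1e49, 1e2, 1.0
R_S = stromgren_radius(Q, n_H, eps)
U_S = ionization_parameter(Q, n_H, R_S)      # U evaluated at the Stroemgren radius

print(f"R_S = {R_S:.2e} cm ({R_S / 3.086e18:.1f} pc)")
print(f"log U_S = {np.log10(U_S):.2f}")      # about -2.5 for these inputs

# Volume-averaged U over r_in << r <= R_S comes out close to 3 U_S:
r = np.linspace(1e-3 * R_S, R_S, 100001)
U_mean = np.trapz(ionization_parameter(Q, n_H, r) * r**2, r) / np.trapz(r**2, r)
print(f"<U> / U_S = {U_mean / U_S:.2f}")     # ~3
```

For these illustrative inputs, $\log U(R_{\rm S})\approx-2.5$ and the volume-averaged ionization parameter is very close to three times its value at the Str\"omgren radius, anticipating the relation used in the following.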
We adopt here models with spherical geometry, in which case the ionization parameter at the Str\"omgren radius, given by (equations~\ref{eq:ur} and \ref{eq:rs}) \begin{equation} U_{\rm S}(t^\prime) =\frac{\alpha_{\rm B}^{2/3}}{3c} \left[ \frac{3 Q(t^\prime)\epsilon^{2} n_{\rm H}}{4\pi} \right]^{1/3}, \label{eq:us} \end{equation} is nearly proportional to the volume-averaged ionization parameter, \begin{equation} U_{\rm S}(t^\prime) \approx\langle U \rangle(t^\prime)/3\,. \label{eq:usav} \end{equation} The above expression is valid for $r_{\rm in}\ll R_{\rm S}$ and neglects the weak dependence of $\alpha_{\rm B}$ on $r$ through the electron temperature. Following CL01, we parametrize models for $T_{\lambda}^{+}(t^\prime)$ in terms of the zero-age ionization parameter at the Str\"omgren radius,\footnote{In reality, CL01 parametrized their models in terms of the zero-age, volume-averaged ionization parameter $\langle U \rangle(0)\approx3U_{\rm S}(0)$.} \begin{equation} U_{\rm S} \equiv U_{\rm S}(0). \label{eq:usdef} \end{equation} As noted by CL01, the effective star-cluster mass $M_\ast$ in equation~\eqref{eq:qlyc} has no influence on the results other than that of imposing a maximum \Us\ at fixed \nh\ (equation~\ref{eq:ur}). In practice, we adopt values of $M_\ast$ in the range from $10^4$ to $10^7\msol$ for models with ionization parameters in the range from $\log\Us=-4.0$ to $-1.0$. The exact choice of the inner radius $r_{\rm in}$ also has a negligible influence on the predictions of models parametrized in terms of \Us, for $r_{\rm in}\ll R_{\rm S}$. As CL01, we set $r_{\rm in }\lesssim0.01\,$pc to ensure spherical geometry for all models. To evaluate $T_{\lambda}^{+}(t^\prime)$, we stop the photoionization calculations when the electron density falls below 1 per cent of \nh\ or if the temperature falls below 100\,K. \subsection{Interstellar abundances and depletion factors} \label{sec:abunddepl} The chemical composition of the ISM has a primary influence on the transmission function $T_{\lambda}^{+}(t^\prime)$ of equation~\eqref{eq:T_ionized}. In this work, we adopt the same metallicity for the ISM, noted \zism, as for the ionizing stars, i.e., we set \zism=\zav. A main feature of our model is that we take special care in rigorously parametrizing the abundances of metals and their depletion onto dust grains in the ISM, to be able to model in a self-consistent way the influence of `gas-phase' and `interstellar' (i.e., total gas+dust-phase) abundances on the emission-line properties of a star-forming galaxy. \subsubsection{Interstellar abundances} \label{sec:abund} The interstellar metallicity is the mass fraction of all elements heavier than helium, i.e. \begin{equation} \zism =\left( \sum_{Z_i\geq3}n_iA_i\right)\Bigg/\left( \sum_{Z_i\geq1}n_iA_i\right)\,, \label{eq:zism} \end{equation} where $Z_i$, $n_i$ and $A_i$ are, respectively, the atomic number, number density and atomic mass of element $i$ (with $n_1=\nh$). For all but a few species, we adopt the solar abundances of chemical elements compiled by \citet{bressan12} from the work of \cite{grevesse98}, with updates from \citet[][see table~1 of \citealt{bressan12}]{caffau11}. This corresponds to a present-day solar (photospheric) metallicity $\zavsol=0.01524$, and a protosolar (i.e. before the effects of diffusion) metallicity $Z_\odot^0=0.01774$. 
After some experimentation, we found that a minimal fine-tuning of the solar abundances of oxygen and nitrogen within the $\sim$1$\sigma$ uncertainties quoted by \citet{caffau11} provides a slightly better ability for the model to reproduce the observed properties of SDSS galaxies in several optical line-ratio diagrams (defined by the \oiid, \hb, \oiii, \ha, \nii\ and \siid\ emission lines; Section~\ref{sec:obs_SDSS}). Specifically, we adopt a solar nitrogen abundance 0.15~dex smaller and an oxygen abundance 0.10~dex larger than the mean values quoted in table~5 of \citet[][see also \citealt{nieva12}]{caffau11}. For consistency with our assumption $\zism=\zav$, we slightly rescale the abundances of all elements heavier than helium (by about $-0.04$~dex, as inferred using equation~\ref{eq:zism}) to keep the same total present-day solar metallicity, $\zavsol=0.01524$. Table~\ref{tab:abund_depl} lists the solar abundances adopted in this work for the elements lighter than zinc. For reference, the solar \NO\ ratio in our model is $\NOsol=0.07$. We also wish to explore the nebular emission from star-forming galaxies with non-solar abundances. For $\zism=\zav\neq\zavsol$, we take the abundances of primary nucleosynthetic products to scale linearly with \zism\ (modulo a small rescaling factor; see below). We adopt special prescriptions for nitrogen and carbon, which are among the most abundant species:\footnote{\label{foot:cno} Carbon and nitrogen represent $\sim$26 and $\sim$6 per cent, respectively, of all heavy elements by number at solar metallicity. The most abundant element is oxygen ($\sim$48 per cent by number), often used as a global metallicity indicator.} %\begin{itemize} %\item {\it Nitrogen}: \begin{description} \item \textit{\textbf{Nitrogen}}: abundance studies in Galactic and extragalactic \hii\ regions suggest that N has primary and secondary nucleosynthetic components \citep[e.g.,][]{garnett95,henry00b, garnett03}. Both components are thought to be synthesised primarily through the conversion of carbon and oxygen during CNO cycles in stars with masses in the range from about 4 to 8\,\msol. While the production of primary N (in CNO cycles of H burning) does not depend on the initial metallicity of the star, that of secondary N (from CO products of previous stellar generations) is expected to increase with stellar metallicity (essentially CO; see footnote~\ref{foot:cno}). Based on the analysis of several abundance datasets compiled from emission-line studies of individual \hii\ regions in the giant spiral galaxy M101 \citep{kennicutt03} and different types of starburst galaxies \citep[\hii, starburst-nucleus and ultraviolet-selected; see the compilation by][]{mouhcine02}, \citet{groves04b} find that the abundance of combined primary+secondary nitrogen can be related to that of oxygen through an expression of the type\footnote{We have introduced in equation~\eqref{eq:nitrogen} a scaling factor of 0.41 to account for the difference in solar abundances adopted here and in \citet{groves04b}.} \begin{equation} \NH\approx0.41\,\OH\,\left[10^{-1.6} + 10^{(2.33 + \log\small\OH)}\right]\,, \label{eq:nitrogen} \end{equation} where H, N and O correspond to $n_1$, $n_7$ and $n_8$, respectively, in the notation of equation~\eqref{eq:zism}. We implement the above prescription in our models, after which we rescale the abundances of all elements heavier than He to preserve the same \zism\ (using equation~\ref{eq:zism}). 
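As a concrete illustration (not code from the published model), the sketch below implements equation~\eqref{eq:nitrogen}; the solar oxygen abundance used in the check, $12+\log\OH=8.83$, is the value tabulated in Table~\ref{tab:abund_depl} below.

```python
import numpy as np

def nitrogen_abundance(o_h):
    """Total (primary + secondary) N/H as a function of interstellar O/H."""
    return 0.41 * o_h * (10.0**(-1.6) + 10.0**(2.33 + np.log10(o_h)))

# Check at the solar oxygen abundance adopted in this work,
# 12 + log(O/H) = 8.83, i.e. O/H = 10^(-3.17) (Table 1):
o_h_sun = 10.0**(-3.17)
n_h_sun = nitrogen_abundance(o_h_sun)

print(f"log(N/H) = {np.log10(n_h_sun):.2f}")   # -4.33, i.e. the tabulated -4.32 to within rounding
print(f"(N/O)_sun = {n_h_sun / o_h_sun:.3f}")  # 0.070, the quoted solar N/O of 0.07
```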
It is important to note that equation~\eqref{eq:nitrogen}, which includes our fine tuning of the solar O and N abundances, provides excellent agreement with the observational constraints on the gas-phase abundances of O and N in nearby \hii\ regions and galaxies originally used by \citet[][see references above]{groves04b}. We show this in Section~\ref{sec:depl} below (Fig.~\ref{fig:NO_OH}). %\item {\it Carbon}: \item \textit{\textbf{Carbon}}: The production of C is thought to arise primarily from the triple-$\alpha$ reaction of helium in stars more massive than about 8\,\msol\ \citep[e.g.,][]{maeder92,prantzos94,gustafsson99,henry00b}, although less massive stars also produce and expel carbon \citep[e.g.,][]{vdhoek97,marigo96,marigo98,henry00a,marigo02}. Observations indicate that the \CO\ ratio correlates with the \OH\ ratio in Galactic and extragalactic \hii\ regions as well as Milky-Way stars, presumably because of the dependence of the carbon yields of massive stars on metallicity \citep[e.g.,][]{garnett95,garnett99,gustafsson99,henry00b}. Intriguingly, this trend appears to turn over at the lowest metallicities, where the \CO\ ratio increases again \citep[e.g.,][]{akerman04}. The difficulty of characterising the dependence of carbon production on metallicity has resulted in the secondary component to be either ignored (e.g., CL01, \citealt{kewley02}) or assigned a specific dependence on the \OH\ ratio \citep[e.g.,][]{dopita13} in previous models of nebular emission from \hii\ regions and galaxies. In our model, we prefer to account for the uncertainties in this component by keeping the \CO\ ratio (i.e., $n_6/n_8$ in the notation of equation~\ref{eq:zism}) as an adjustable parameter at fixed interstellar metallicity \zism\ (see Section~\ref{sec:grid} for details). Once a \CO\ ratio is adopted (in practice, by adjusting $n_6$ at fixed $n_8$), we rescale the abundances of all elements heavier than He to preserve the same \zism\ (using equation~\ref{eq:zism}). For reference, the solar \CO\ ratio in our model is $\COsol=0.44$. \end{description} %\end{itemize} To complete the parametrization of interstellar abundances, we must also specify that of helium. We follow \cite{bressan12} and write the He abundance by mass ($\propto n_2A_2$; equation~\ref{eq:zism}) as \begin{equation} Y = Y_{\mathrm P} + (Y_{\odot}^0 - Y_{\mathrm P}) \, \zism/Z_{\odot}^0=0.2485+1.7756\,\zism\,, \label{eq:helium1} \end{equation} where $Y_{\mathrm P} =0.2485$ is the primordial He abundance and $Y_{\odot}^{0}=0.28$ the protosolar one. This formula enables us to compute the helium mass fraction $Y$ at any given metallicity \zism. The hydrogen mass fraction is then simply $X=1-Y-\zism$. 
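A minimal numerical companion to equation~\eqref{eq:helium1}, given here purely for illustration, returns the hydrogen, helium and metal mass fractions for any \zism:

```python
Y_P = 0.2485      # primordial helium mass fraction
DY_DZ = 1.7756    # slope (Y_sun^0 - Y_P) / Z_sun^0 adopted in the text

def mass_fractions(z_ism):
    """Hydrogen, helium and metal mass fractions for a given interstellar metallicity."""
    y = Y_P + DY_DZ * z_ism
    x = 1.0 - y - z_ism
    return x, y, z_ism

for z in (0.0001, 0.001, 0.01524, 0.040):
    x, y, _ = mass_fractions(z)
    print(f"Z = {z:.5f}:  X = {x:.4f},  Y = {y:.4f}")
# At Z = Z_sun = 0.01524 this gives Y ~ 0.276 and X ~ 0.709.
```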
\begin{table} \begin{threeparttable} \centering %\begin{tabular}{r l r l} \begin{tabular*}{0.45\textwidth}{r l r l} \toprule Z$_i$\tnote{a} & Element & $\log(n_i/\nh)$\tnote{b} & $(1-f_{\rm dpl}^i$)\tnote{c}\\ \midrule 2 & He & $-1.01$ & 1 \\ 3 & Li & $-10.99$ & 0.16\\ 4 & Be & $-10.63$ & 0.6 \\ 5 & B & $-9.47$ & 0.13 \\ 6 & C & $-3.53$ & 0.5 \\ 7 & N & $-4.32$ & 1 \\ 8 & O & $-3.17$ & 0.7 \\ 9 & F & $-7.47$ & 0.3 \\ 10 & Ne & $-4.01$ & 1 \\ 11 & Na & $-5.70$ & 0.25 \\ 12 & Mg & $-4.45$ & 0.2 \\ 13 & Al & $-5.56$ & 0.02 \\ 14 & Si & $-4.48$ & 0.1 \\ 15 & P & $-6.57$ & 0.25 \\ 16 & S & $-4.87$ & 1 \\ 17 & Cl & $-6.53$ & 0.5 \\ 18 & Ar & $-5.63$ & 1 \\ 19 & K & $-6.92$ & 0.3 \\ 20 & Ca & $-5.67$ & 0.003 \\ 21 & Sc & $-8.86$ & 0.005 \\ 22 & Ti & $-7.01$ & 0.008 \\ 23 & V & $-8.03$ & 0.006 \\ 24 & Cr & $-6.36$ & 0.006 \\ 25 & Mn & $-6.64$ & 0.05 \\ 26 & Fe & $-4.51$ & 0.01 \\ 27 & Co & $-7.11$ & 0.01 \\ 28 & Ni & $-5.78$ & 0.04 \\ 29 & Cu & $-7.82$ & 0.1 \\ 30 & Zn & $-7.43$ & 0.25 \\ \bottomrule %\end{tabular} \end{tabular*} \begin{tablenotes} \item [a] Atomic number \item [b] Abundance by number relative to hydrogen \item [c] $f_{\rm dpl}^i$ is the fraction of element $i$ depleted onto dust grains (the non-refractory elements He, N, Ne, S and Ar have $f_{\rm dpl}^i=0$) \end{tablenotes} \caption{Interstellar abundances and depletion factors of the 30 lightest chemical elements for $\zism=\zavsol=0.01524$ and $\xid=\xidsol=0.36$ (see text for details).} \label{tab:abund_depl} \end{threeparttable} \end{table} \subsubsection{Depletion factors} \label{sec:depl} In our model, we account for the depletion of refractory metals onto dust grains. Observational determinations of depletion factors in Galactic interstellar clouds show a large dispersion depending on local conditions \citep[e.g.,][]{savage96}. For simplicity, we adopt the default ISM depletion factors of \cloudy\ for most elements, with updates from \citet[][see their table~1]{groves04b} for C, Na, Al, Si, Cl, Ca and Ni. By analogy with our fine-tuning of the N and O abundances in Section~\ref{sec:abund}, we slightly adjust the fraction of oxygen depleted from the gas phase (from 40 to 30 per cent for $\xid=\xidsol$) to improve the model agreement with observed properties of SDSS galaxies in several optical line-ratio diagrams (Section~\ref{sec:grid}). The depletion factors adopted in this work are listed in Table~\ref{tab:abund_depl} for the elements lighter than zinc. The elements depleted from the gas phase make up the grains, for which in \cloudy\ we adopt a standard \citet{MRN} size distribution and optical properties from \cite{martin91}. These grains influence radiative transfer via absorption and scattering of the incident radiation, radiation pressure, collisional cooling and photoelectric heating of the gas \citep[see, e.g.,][for a description of the influence of these effects on nebular emission]{shields95,dopita02,groves04a}. In addition, the depletion of important coolants from the gas phase reduces the efficiency of gas cooling through infrared fine-structure transitions, which causes the electron temperature to rise, thereby increasing cooling through the more energetic optical transitions. Following CL01, we explore the influence of metal depletion on the nebular emission from star-forming galaxies by means of the dust-to-metal mass ratio parameter, noted \xid. For $\zism=\zavsol$, the values in Table~\ref{tab:abund_depl} imply that 36 per cent by mass of all heavy elements are in the solid phase, i.e., $\xidsol=0.36$. 
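To illustrate how the entries of Table~\ref{tab:abund_depl} translate into the quantities used in the text, the sketch below evaluates the interstellar metallicity of equation~\eqref{eq:zism} and the total and gas-phase oxygen abundances from a subset of the tabulated values. Only the most abundant elements are included and the atomic masses are supplied by hand, so the recovered \zism\ agrees with the quoted $\zavsol=0.01524$ only to within a few per cent.

```python
import numpy as np

# Subset of Table 1: log10(n_i/n_H), atomic mass A_i, and undepleted gas fraction (1 - f_dpl).
TABLE1 = {
    # element: (log n/nH,   A_i, 1 - f_dpl)
    'He': (-1.01,  4.003, 1.0),
    'C':  (-3.53, 12.011, 0.5),
    'N':  (-4.32, 14.007, 1.0),
    'O':  (-3.17, 15.999, 0.7),
    'Ne': (-4.01, 20.180, 1.0),
    'Na': (-5.70, 22.990, 0.25),
    'Mg': (-4.45, 24.305, 0.2),
    'Al': (-5.56, 26.982, 0.02),
    'Si': (-4.48, 28.086, 0.1),
    'S':  (-4.87, 32.060, 1.0),
    'Ar': (-5.63, 39.948, 1.0),
    'Ca': (-5.67, 40.078, 0.003),
    'Fe': (-4.51, 55.845, 0.01),
    'Ni': (-5.78, 58.693, 0.04),
}
A_H = 1.008

# Interstellar metallicity, equation (eq:zism): mass in metals over total mass (per H atom).
metal_mass = sum(10.0**logn * A for el, (logn, A, _) in TABLE1.items() if el != 'He')
total_mass = A_H + 10.0**TABLE1['He'][0] * TABLE1['He'][1] + metal_mass
print(f"Z_ism ~ {metal_mass / total_mass:.4f}")   # ~0.015; the quoted 0.01524 is recovered once
                                                  # all Table-1 elements and unrounded abundances are used

# Total and gas-phase oxygen abundances (cf. the solar values quoted in the text):
logO, _, gasfrac_O = TABLE1['O']
print(f"12 + log(O/H)     = {12 + logO:.2f}")                        # 8.83
print(f"12 + log(O/H)_gas = {12 + logO + np.log10(gasfrac_O):.2f}")  # 8.68
```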
To compute the depletion factor $f_{\rm dpl}^i$ of a given refractory element $i$ for other dust-to-metal mass ratios in the range $0\leq\xid\leq1$, we note that this must satisfy $f_{\rm dpl}^i=0$ and $1$ for $\xid=0$ and 1, respectively. We use these boundary conditions and the data in Table~\ref{tab:abund_depl} to interpolate linearly $f_{\rm dpl}^i$ as a function of $\xid$ for $\xid\neq\xidsol$. \begin{figure}\includegraphics[width=\columnwidth]{./fig1.pdf} \caption{$\log\NOgas$ as a function of $12+\log\OHgas$, for models with interstellar metallicity $\zism=0.0001$, 0.0002, 0.0005, 0.001, 0.002, 0.004, 0.006, 0.008, 0.010, 0.014, 0.017, 0.020, 0.030 and 0.040 (colour-coded as indicated), carbon-to-oxygen abundance ratio $\CO=0.1$ (triangle), 1.0 (star) and 1.4 (circle) times \COsol, and dust-to-metal mass ratio $\xid=0.1$, 0.3 and 0.5 (in order of increasing symbol size). The data (identical to those in fig.~2 of \citealt{groves04b}) are abundance datasets compiled from emission-line studies of individual \hii\ regions in the giant spiral galaxy M101 \citep[][open diamonds]{kennicutt03} and different types of starburst galaxies (\hii\ [crosses] and starburst-nucleus [dots]; see also \citealt{mouhcine02}).} \label{fig:NO_OH} \end{figure} It is instructive to examine the differences in gas-phase oxygen abundance, \OHgas, corresponding to different plausible values of \xid\ at fixed interstellar metallicity \zism\ in this context. Table~\ref{tab:12logOH} lists \OHgas\ for the 14 metallicities at which stellar population models are available in the range $0.0001\leq{Z}\leq0.040$ (and for the present-day solar metallicity \zavsol=0.01524; Section~\ref{sec:stellar_code}), for 3 dust-to-metal mass ratios, $\xid=0.1$, 0.3 and 0.5, and fixed \COsol\ ratio. The gas-phase oxygen abundance can change by typically 0.2\,dex depending on the adopted $\xid$, the difference with the interstellar (gas+dust-phase) abundance, \OH, reaching up to 0.25\,dex for $\zism=\zavsol$, for example. Fig.~\ref{fig:NO_OH} shows \NOgas\ as a function \OHgas\ for the 14 metallicities and 3 dust-to-metal mass ratios in Table~\ref{tab:12logOH}, and for 3 values of the \CO\ ratio, 0.1, 1.0 and 1.4 times \COsol. The models compare well with observational constraints on these quantities in nearby \hii\ regions and star-forming galaxies, also shown on the figure (the data in Fig.~\ref{fig:NO_OH} are the same as those in fig.~2 of \citealt{groves04b}). For reference, for solar values of the metallicity, \zavsol, dust-to-metal mass ratio, \xidsol, and \CO\ ratio, \COsol, the gas-phase abundances are $12+\log\OHgassol=8.68$, $\NOgassol=0.10$ and $\COgassol=0.31$. The large spread in \NOgas\ and \OHgas\ for models with different \xid\ and \CO\ at fixed metallicity in Fig.~\ref{fig:NO_OH} emphasizes the importance of distinguishing gas-phase from interstellar metallicity when studying the metal content of star-forming galaxies. 
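The interpolation just described can be written down explicitly. The sketch below (an illustration, not the released model code) forces $f_{\rm dpl}^i=0$ at $\xid=0$, $f_{\rm dpl}^i=1$ at $\xid=1$ and the Table~\ref{tab:abund_depl} value at $\xid=\xidsol=0.36$, and recovers the gas-phase oxygen abundances of Table~\ref{tab:12logOH} at solar metallicity to within about 0.01~dex.

```python
import numpy as np

XI_D_SUN = 0.36   # dust-to-metal mass ratio implied by Table 1

def depletion_fraction(xi_d, f_dpl_sun):
    """Linearly interpolate f_dpl^i(xi_d) through (0, 0), (0.36, f_dpl_sun) and (1, 1)."""
    return np.interp(xi_d, [0.0, XI_D_SUN, 1.0], [0.0, f_dpl_sun, 1.0])

# Gas-phase oxygen abundance at solar interstellar metallicity for different xi_d.
log_OH_tot = -3.17   # total (gas+dust) O/H from Table 1, i.e. 12 + log(O/H) = 8.83
f_dpl_O_sun = 0.30   # 30 per cent of O depleted at xi_d = 0.36 (Table 1)

for xi_d in (0.1, 0.3, 0.5):
    f = depletion_fraction(xi_d, f_dpl_O_sun)
    log_OH_gas = 12.0 + log_OH_tot + np.log10(1.0 - f)
    print(f"xi_d = {xi_d}:  12 + log(O/H)_gas = {log_OH_gas:.2f}")
# Output: 8.79, 8.71 and 8.57, matching the Z_sun row of Table 2 to ~0.01 dex.
```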
\begin{table} \begin{threeparttable} \centering %\begin{tabular}{l c c c c} \begin{tabular*}{0.45\textwidth}{l c c c c} \toprule \zism\ & $12+\log\OH$ & \multicolumn{3}{c}{$12+\log\OHgas$} \\ \cmidrule{3-5} & & \xid=0.1 & \xid=0.3 & \xid=0.5 \\ \midrule 0.0001 & 6.64 & 6.61 & 6.53 & 6.41 \\ 0.0002 & 6.94 & 6.91 & 6.83 & 6.71 \\ 0.0005 & 7.34 & 7.30 & 7.23 & 7.11 \\ 0.001 & 7.64 & 7.61 & 7.53 & 7.41 \\ 0.002 & 7.94 & 7.91 & 7.83 & 7.71 \\ 0.004 & 8.24 & 8.21 & 8.14 & 8.02 \\ 0.006 & 8.42 & 8.39 & 8.31 & 8.19 \\ 0.008 & 8.55 & 8.52 & 8.44 & 8.32 \\ 0.010 & 8.65 & 8.61 & 8.54 & 8.42 \\ 0.014 & 8.80 & 8.76 & 8.69 & 8.56 \\ 0.01524 (\zavsol) & 8.83 & 8.80 & 8.71 & 8.58 \\ 0.017 & 8.88 & 8.85 & 8.77 & 8.65 \\ 0.020 & 8.96 & 8.92 & 8.85 & 8.72 \\ 0.030 & 9.14 & 9.11 & 9.03 & 8.90 \\ 0.040 & 9.28 & 9.24 & 9.16 & 9.03 \\ \bottomrule \end{tabular*} %\end{tabular} \caption{Oxygen abundances for interstellar metallicities \zism\ corresponding to the 14 metallicities at which stellar population models are available in the range $0.0001\leq{Z}\leq0.040$ (and for the present-day solar metallicity \zavsol=0.01524), assuming $\CO=\COsol$. The second column lists the total interstellar (i.e. gas+dust-phase) oxygen abundance, $12+\log\OH$, while the three rightmost columns indicate the gas-phase oxygen abundances, $12+\log\OHgas$, corresponding to three choices of the dust-to-metal mass ratio, $\xid=0.1$, 0.3 and 0.5.} \label{tab:12logOH} \end{threeparttable} \end{table}
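Pulling the parameters of this section together, a grid of the kind described in the text can be enumerated as in the sketch below. The metallicities, the range of $\log\Us$, the \xid\ and \CO\ values and the IMF upper mass cutoffs are those quoted in the text and figure captions; the 0.5~dex sampling in $\log\Us$ and the hydrogen densities are illustrative assumptions, since the corresponding values are not listed in this part of the text.

```python
import itertools

# Parameter values quoted in the text and figure captions of this section.
Z_ISM = [0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.004, 0.006, 0.008,
         0.010, 0.014, 0.017, 0.020, 0.030, 0.040]          # interstellar metallicities
LOG_US = [-4.0, -3.5, -3.0, -2.5, -2.0, -1.5, -1.0]         # zero-age ionization parameter (sampling assumed)
XI_D   = [0.1, 0.3, 0.5]                                    # dust-to-metal mass ratio
C_O    = [0.1, 1.0, 1.4]                                    # C/O in units of (C/O)_sun = 0.44
M_UP   = [100, 300]                                         # IMF upper mass cutoff [Msun]
N_H    = [1e2, 1e3]                                         # hydrogen density [cm^-3]; illustrative values only

grid = list(itertools.product(Z_ISM, LOG_US, XI_D, C_O, M_UP, N_H))
print(f"{len(grid)} photoionization models to run")

# Each tuple fixes one CLOUDY calculation of the ionized-gas transmission
# function T_lambda^+ (Section 2.2); e.g. the first entry:
z, log_us, xi_d, c_o, m_up, n_h = grid[0]
print(z, log_us, xi_d, c_o, m_up, n_h)
```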
\label{sec:concl} We have presented a new model of the ultraviolet and optical nebular emission from star-forming galaxies, based on a combination of state-of-the-art stellar population synthesis and photoionization codes to describe the \hii\ regions and the diffuse gas ionized by successive stellar generations (following the approach of CL01). A main feature of this model is the self-consistent yet versatile treatment of element abundances and depletion onto dust grains, which allows one to relate the observed nebular emission from a galaxy to both gas-phase and dust-phase metal enrichment, over a wide range of chemical compositions. This feature should be particularly useful to investigate the early chemical evolution of galaxies with non-solar carbon-to-oxygen abundance ratios (e.g., \citealt{erb10,cooke11}; see also \citealt{garnett99}). In our model, the main adjustable parameters pertaining to the stellar ionizing radiation are the stellar metallicity, \zav, the IMF (the upper mass cutoff can reach $\mup=300\,\msol$) and the star formation history. The main adjustable parameters pertaining to the ISM are the interstellar metallicity, \zism\ (taken to be the same as that of the ionizing stars), the zero-age ionization parameter of a newly born \hii\ region, \Us, the dust-to-metal mass ratio, \xid, the carbon-to-oxygen abundance ratio, \CO, and the hydrogen gas density, \nh. These should be regarded as `effective' parameters describing the global conditions of the gas ionized by young stars throughout the galaxy. We have built a comprehensive grid of photoionization models of star-forming galaxies spanning a wide range of physical parameters (Table~\ref{tab:parameters}). These models reproduce well the optical (e.g. \oiid, \hb, \oiii, \ha, \nii, \siid) and ultraviolet (e.g. \nv, \civd, \heii, \oiiis, \ciiid, \siliiid) emission-line properties of observed galaxies at various cosmic epochs. We find that ultraviolet emission lines are more sensitive than optical ones to parameters such as the \CO\ ratio, the hydrogen gas density, the upper IMF cutoff and even the dust-to-metal mass ratio, \xid. This implies that spectroscopic studies of the redshifted rest-frame ultraviolet emission of galaxies out to the reionization epoch should provide valuable clues about the nature of the ionizing radiation and early chemical enrichment of the ISM. In fact, the model presented in this paper has already been combined with a model of the nebular emission from narrow-line emitting regions of active galaxies to identify ultraviolet line-ratio diagnostics of photoionization by an AGN versus star formation \citep{feltre16}. It has also been used successfully to constrain the ionizing radiation and ISM parameters of galaxies at redshifts $2\lesssim z\lesssim9$, based on observed ultraviolet emission-line properties \citep{stark14,stark15a,stark15b,stark16}. It is worth mentioning that the stellar population synthesis code used to generate the ionizing radiation in this paper (Section~\ref{sec:stellar_code}) does not incorporate binary stars, while models including binary stars \citep[e.g.,][]{eldridge08,eldridge12} have been preferred over existing `single-star' models to account for the observed rest-frame far-ultraviolet and optical composite spectra of a sample of 30 star-forming galaxies at redshift around $z=2.40$ \citep{steidel16}. 
We find that our new model can account remarkably well for the observed ultraviolet and optical emission-line properties of the composite \citet{steidel16} spectrum, with best-fitting metallicity $\zism=0.006$ (i.e. $0.4\zavsol$), ionization parameter $\log\Us=-3.0$ and carbon-to-oxygen ratio $\CO=0.52\COsol$.\footnote{Specifically, the predicted emission-line ratios of the best-fitting model are $\oiii/\hb=4.14$ (to be compared with the observed $4.25\pm0.09$), $\nii/\ha=0.09$ ($0.10\pm0.01$), $\siid/\ha=0.18$ ($0.18\pm0.01$), $\oiit/\oiii=0.68$ ($0.66\pm0.01$), $\siliiit/\ciiit=0.16$ ($0.33\pm0.07$), $\ciiit/\oiiit=4.05$ ($4.22\pm0.53$) and $\oiiit/\heii=6.84$ ($4.47\pm2.69$).} We further find that the predictions of our model for the \heii\ emission luminosity in low-metallicity star-forming galaxies are comparable to those of \citet{schaerer03}. For example, for a \citet{chabrier03} IMF truncated at 1 and 100\,\msol\ (very similar to the \citealt{salpeter55} IMF with same lower and upper cutoffs used by \citealt{schaerer03}) and for the metallicity $\zav=0.001$, we find line luminosities between $\rm 2.7\times10^{38}\,erg\,s^{-1}$ and $\rm 4.04\times10^{39}\,erg\,s^{-1}$ per unit star formation rate (depending on \Us\ and \xid), consistent with the value $\rm 8.39\times10^{38}\,erg\,s^{-1}$ in table~4 of \citet[][his IMF `A']{schaerer03}. The calculations of \citet{schaerer03} extend all the way down to zero metallicity. For the smallest metallicity investigated here, $\zav=0.0001$, our model spans a range of \heii\ luminosities between $\rm 1.03\times10^{39}\,erg\,s^{-1}$ and $\rm 9.70\times10^{39}\,erg\,s^{-1}$, which can be compared to the values $\rm 2.91\times10^{37}$, $\rm 1.40\times10^{39}$ and $\rm 1.74\times10^{40}\,erg\,s^{-1}$ at $\zav=10^{-5}$, $10^{-7}$ and $0.$ in table~4 of \citet{schaerer03}. Our fully self-consistent modeling of the nebular emission and chemical composition of the ISM in star-forming galaxies provides a unique way to test the reliability of standard recipes based on the direct-\Te\ method to measure element abundances from emission-line luminosities \citep[e.g.,][]{aller84,garnett95icf,izotov99,izotov06,shapley03,erb10}. We find that, for gas-phase metallicities around solar to slightly sub-solar, widely used formulae to constrain oxygen ionic fractions and the \CO\ ratio from ultraviolet and optical emission-line luminosities are reasonably faithful. However, the recipes break down at non-solar metallicities (both low and high; see Section~\ref{sec:COicf}), making their application inappropriate to studies of chemically young galaxies. In such cases, a fully self-consistent model of the kind presented in this paper is required to interpret the observed nebular emission. This can be achieved in an optimal way by appealing to a dedicated spectral analysis tool, such as the \beagle\ tool of \citet{chevallard16}, which already incorporates our model. Finally, we note that, while all the calculations presented in this paper pertain to ionization-bounded galaxies, our model provides a unique means of investigating the spectral signatures of the escape of ionizing photons from density-bounded galaxies. This will be the subject of a forthcoming study. The model grid presented in this paper is available electronically from \url{http://www.iap.fr/neogal/models.html}.
arXiv:1607.06086

arXiv:1607.02148
Merging neutron star binaries are prime candidate sources for heavy $r$-process nucleosynthesis. The amount of heavy $r$-process material in the Galaxy is consistent with the mass ejection and rates of mergers, and abundances of relic radioactive materials suggest that heavy $r$-process material is produced in rare events. Observations of possible macronovae provide further support for this model. Still, some concerns remain. One is the observation of heavy $r$-process elements in Ultra Faint Dwarf (UFD) galaxies. The escape velocities from UFDs are so small that the natal kicks imparted at the neutron stars' birth might eject such binaries from their UFD hosts. Furthermore, the old stellar populations of UFDs require that $r$-process nucleosynthesis took place very early on, whereas it may take several Gyr for compact binaries to merge. This last problem arises also within the Milky Way, where heavy $r$-process material has been observed in some low-metallicity stars. We show here that a fraction $\gtrsim 0.5$ of neutron star binaries form with a proper motion small enough to remain bound even in a UFD. Furthermore, approximately $90\%$ of DNSs with an initial separation of $10^{11}$\,cm merge within 300\,Myr and $\approx 15\%$ merge in less than 100\,Myr. This population of ``rapid mergers'' explains the appearance of heavy $r$-process material both in UFDs and in the early Milky Way.
\label{sec:Introduction} Merging double neutron stars (DNSs) that eject highly neutron-rich material are prime candidates for the production sites of heavy $r$(apid)-process elements \citep{lattimer1976ApJ, eichler1989Nature,freiburghaus1999ApJ}. The overall amount of heavy $r$-process material in the Milky Way is consistent with the expectations of mass ejection in numerical merger simulations \cite[e.g.][]{sekiguchi2015PRD,radice2016}, combined with their expected rates as estimated from Galactic DNSs \cite[see e.g.][]{kim2015MNRAS} or from the rate \citep{GuettaPiran06,wanderman2015MNRAS} of short Gamma-Ray Bursts (sGRBs). Discoveries of $r$-process driven macronova~(kilonova) candidates associated with sGRBs~\citep{tanvir2013Nature, berger2013ApJ,yang2015NatCo, jin2016} provided further observational evidence for the DNS merger scenario \cite[e.g.][]{piran2014}. Following these developments, several recent works \citep{shen2015ApJ, vandevoort2015MNRAS,wehmeyer2015MNRAS,Montes16} have shown that, under reasonable assumptions, DNS mergers can account for the history of $r$-process enrichment in the Galaxy. However, \cite{Bramante(2016)} have recently used observations of $r$-process elements in dwarf satellite galaxies to question the DNS merger scenario for $r$-process production. \cite{ji2015} and, independently, \cite{roederer2016} reported the discovery of an $r$-process enriched ultra-faint dwarf~(UFD) galaxy, Reticulum II. The total stellar luminosity of Reticulum II is $\sim 1000L_{\odot}$ and its line-of-sight velocity dispersion is $\sim 4$~km/s~\citep{walker2015ApJ}. \cite{Bramante(2016)} suggested that the kick given to the DNS during the second collapse would eject the binary from such a small galaxy. A second problem is that UFDs are composed of very old stellar populations~\citep{brown2014ApJ,weisz2015ApJ}, suggesting that their chemical abundances have been frozen since $\approx 13$~Gyr ago. This requires that the $r$-process enrichment took place relatively soon after the formation of the first stars, and it raises the question of whether mergers could occur sufficiently rapidly for their $r$-process material to enrich the old stellar population. \cite{Bramante(2016)} suggested, therefore, that a different mechanism must have produced the observed $r$-process material in these galaxies. A significant population of ``rapid mergers'' ($<$Gyr) is natural, and in fact is expected from observations of DNS systems in our Galaxy. Two of the ten observed DNS systems in our Galaxy (that do not reside in globular clusters and for which the masses are well constrained), the double pulsar, J0737-3039, and the original binary pulsar, B1913+16, will merge in less than a few hundred Myr. Given the spin-down times of the pulsars in these systems, we can constrain the lifetimes of these systems since the formation of the DNS to less than 140\,Myr and 460\,Myr, respectively. Indeed, the existence of rapid mergers has previously been suggested using population synthesis models \citep{Belczynski(2002),Belczynski2007,O'Shaughnessy2008ApJ}. Furthermore, the observed small proper motion of J0737-3039, $\lesssim 10 \mbox{~km s}^{-1}$ \citep{Kramer2006Sci}, implies that some ``rapid mergers'' move slowly enough to remain confined even within UFDs\footnote{Evolving J0737-3039 backwards in time, we obtain an upper limit on the semi-major axis and eccentricity right after the second collapse, $a_1$ and $e_1$.
This, in turn, constrains the separation before the collapse, $a_0$, to $\min[a(1-e),a_1(1-e_1)]<a_0<a_1(1+e_1)\approx9.5\times10^{10}$\,cm.} \citet[][hereafter BP16]{BP(2016)} used the observed orbital parameters of the Galactic DNS population to constrain the distributions of mass ejection and kick velocities associated with the formation of the second neutron star in the binary. While the smallness of the sample and unknown selection effects do not allow an accurate estimate of these distributions, a clear picture emerges: there are two distinct types of neutron star formation. The majority of the systems, about two-thirds, involve minimal mass ejection ($\Delta M\lesssim 0.5M_\odot$) and low kick velocities ($v_{k}\lesssim 30$~km\,s$^{-1}$). The double pulsar system, PSR J0737-3039, is a prime example of this kind of collapse, with $\Delta M =0.1-0.2M_\odot$ and $v_{k}=3-30 \mbox{\,km s}^{-1}$ \citep{Piran(2005),Dall'Osso2014}. Such a population of collapses with low mass ejection and kicks has been suggested on both observational \citep{Piran(2005),Wang2006,Wong2010,Dall'Osso2014} and theoretical \citep{DewiPols2003,Ivanova2003,VossTauris2003,Tauris2015} grounds. The subsequent addition of a low-mass-ejection channel of neutron star formation (via electron-capture SNe) to population synthesis models \citep{Belczynski2008} improved the fit of the models to the observed DNS population. A large fraction of DNSs born via this mechanism remain bound to their dwarf hosts. On a related topic, \cite{RamirezRuiz2015ApJ} have argued that many DNSs are expected to remain confined and merge within globular clusters, which have escape velocities comparable to those of UFDs. To explore these ideas, we begin in \S \ref{sec:kicks} with a simulation of the typical velocities of DNS systems using the distributions of mass ejection and kicks implied by BP16. We then address the delay times between formation and merger in \S \ref{rapid}. We show that a significant fraction of DNS systems will remain confined in UFDs and merge rapidly, demonstrating the viability of DNS mergers as sources of $r$-process material in UFDs. In \S \ref{sec:impGalaxy} we consider the implications of these findings for the related problem of the observation of heavy $r$-process material in some very low-metallicity stars in the Galaxy. We summarize our results and the case for DNS mergers as the source of heavy $r$-process nucleosynthesis in \S \ref{sec:summary}.
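Before turning to the summary, it is useful to put the quoted merger timescales in context with a back-of-the-envelope estimate; this is not the population synthesis performed in this work, which follows the full orbital-parameter distributions. The sketch below evaluates the standard Peters (1964) gravitational-wave merger time for a circular binary of two assumed $1.4\,M_\odot$ neutron stars, together with the commonly used approximate $(1-e^2)^{7/2}$ shortening for an eccentric orbit.

```python
# Constants (SI)
G = 6.674e-11          # gravitational constant
C = 2.998e8            # speed of light
M_SUN = 1.989e30       # solar mass [kg]
YR = 3.156e7           # year [s]

def t_merge_circular(a, m1=1.4 * M_SUN, m2=1.4 * M_SUN):
    """Peters (1964) gravitational-wave merger time for a circular orbit of
    semi-major axis a [m]; component masses of 1.4 Msun are assumed by default."""
    return 5.0 * C**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

def t_merge(a, e, **kwargs):
    """Approximate merger time for an eccentric orbit: the circular-orbit value
    shortened by roughly (1 - e^2)^(7/2)."""
    return t_merge_circular(a, **kwargs) * (1.0 - e**2)**3.5

a0 = 1e11 * 1e-2   # initial separation of 10^11 cm, converted to metres
print(f"circular:  t_merge ~ {t_merge_circular(a0) / YR / 1e6:.0f} Myr")   # ~117 Myr
for e in (0.3, 0.6):
    print(f"e = {e}:   t_merge ~ {t_merge(a0, e) / YR / 1e6:.0f} Myr")
# A circular 1.4+1.4 Msun binary born at 1e11 cm merges in roughly 1e8 yr, and even
# moderate eccentricity shortens this further, consistent with most such systems
# merging well within a few hundred Myr.
```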
\label{sec:summary} We have examined here DNS mergers as sources of heavy $r$-process material in dwarf galaxies. We have shown that both arguments raised by \cite{Bramante(2016)}, namely that DNS systems may be ejected from their galaxies by the strong kicks received at formation, and that DNSs would not be able to merge rapidly enough before star formation in these galaxies stopped, are naturally overcome. First, due to a significant population of DNSs receiving weak kicks at birth (BP16), a large fraction of DNS systems would have CM velocities $<15$\,km\,s$^{-1}$ (comparable to the lowest escape velocities from UFDs). Second, given the limits on the separation of the double pulsar system before its second collapse, a significant fraction of systems are expected both to remain confined in UFD galaxies and to merge within less than a Gyr. Moving from UFDs to the Galaxy, where small velocities are not required to remain confined, the fraction of rapid mergers becomes even larger due to the contribution of ``confining kicks''. Limits on the time between the second collapse and the eventual merger for two of the ten observed DNS systems in the Galaxy imply that many DNS systems are expected to merge within less than a few hundred Myr. This is also supported by observations of a rapid decay in the sGRB rate following the peak of star formation in the universe. This population of rapid mergers is composed mainly of systems that were formed with small amounts of mass ejection, $\approx 0.1M_{\odot}$ (BP16). Moreover, the mergers take place far away from the sites of the progenitor SNe. This implies that DNS mergers can naturally account for the observations of significant amounts of $r$-process material in some low-metallicity halo stars in the Galaxy, invalidating the argument used against DNS mergers as the origin of $r$-process nucleosynthesis in our Galaxy \citep{argast2004A&A}. In fact, DNS mergers, being rare events, are naturally compatible with the large fluctuations in the Eu/Fe ratio observed in metal-poor stars. In a companion paper \cite{Beniamini2016} we provide further quantitative evidence that $r$-process material is produced in rare events in both our own Galaxy and UFDs. We thank Nicholas Stone and Todd Thompson for useful discussions and comments. This work was supported in part by an Israel Space Agency (SELA) grant, the Templeton foundation and the I-Core center for excellence ``Origins'' of the ISF.
arXiv:1607.02979
We explore the dynamics and evolution of the Universe at early and late times, focusing on both dark energy and extended gravity models and their astrophysical and cosmological consequences. Modified theories of gravity not only provide an alternative explanation for the recent expansion history of the universe, but they also offer a paradigm fundamentally distinct from the simplest dark energy models of cosmic acceleration. In this review, we perform a detailed theoretical and phenomenological analysis of different modified gravity models and investigate their consistency. We also consider the cosmological implications of well motivated physical models of the early universe with a particular emphasis on inflation and topological defects. Astrophysical and cosmological tests over a wide range of scales, from the solar system to the observable horizon, severely restrict the allowed models of the Universe. Here, we review several observational probes---including gravitational lensing, galaxy clusters, cosmic microwave background temperature and polarization, supernova and baryon acoustic oscillations measurements---and their relevance in constraining our cosmological description of the Universe. \keyword{scalar-tensor gravity; modified theories of gravity; inflationary models; topological defects; observational cosmology; cosmic microwave background radiation; weak lensing; spacetime inhomogeneities}
During the last few decades Cosmology has evolved from being mainly a theoretical area of physics to become a field supported by high precision observational data. Recent experiments call upon state of the art technology in Astronomy and Astrophysics to provide detailed information on the contents and history of the Universe, which has led to the measurement of parameters that describe our Universe with increasing precision. The standard model of Cosmology is remarkably successful in accounting for the observed features of the Universe. However, a number of fundamental open questions remains at the foundation of the standard model. In particular, we lack a fundamental understanding of the recent acceleration of the Universe \cite{Perlmutter:1998np,Riess:1998cb}. What is the so-called ``dark energy'' that is driving the cosmic acceleration? Is it vacuum energy or a dynamical field? Or is the acceleration due to infra-red modifications of Einstein's theory of General Relativity (GR)? How is structure formation affected in these alternative scenarios? What are the implications of this acceleration for the future of the Universe? The resolution of these fundamental questions is extremely important for theoretical cosmology. Dark energy models are usually assumed to be responsible for the acceleration of the cosmic expansion in most cosmological studies. However, it is clear that these questions involve not only gravity, but also particle physics. String theory provides a synthesis of these two branches of physics and is widely believed to be moving towards a viable quantum gravity theory. One of the key predictions of string theory is the existence of extra spatial dimensions. In the brane-world scenario, motivated by recent developments in string theory, the observed 3-dimensional universe is embedded in a higher-dimensional spacetime \cite{Maartens:2003tw}. The new degrees of freedom belong to the gravitational sector, and can be responsible for the late-time cosmic acceleration \cite{Dvali:2000hr,deRham:2007rw}. On the other hand, generalizations of the Einstein-Hilbert Lagrangian, including quadratic Lagrangians which involve second order curvature invariants have also been extensively explored \cite{Sotiriou:2008rp,DeFelice:2010aj,Capozziello:2011et,Nojiri:2010wj,Lobo:2008sg}. These modified theories of gravity not only provide an alternative explanation for the expansion history of the Universe \cite{Capozziello:2002rd,Nojiri:2003ft,Carroll:2003wy}, but they also offer a paradigm fundamentally distinct from the simplest dark energy models of cosmic acceleration \cite{Copeland:2006wr}, even from those that perfectly mimic the same expansion history. Nevertheless, it has been realized that a large number of modified gravity theories are amenable to a scalar-tensor formulation by means of appropriate metric re-scalings and field redefinitions. It is, therefore, not surprising that we can think of scalar-tensor gravity theories as a first stepping stone to explore modifications of GR. They have the advantage of apparent simplicity and of a long history of examination. First proposed in its present form by Brans and Dicke for a single scalar field \cite{BD 61}, they have been extensively generalised and have maintained the interest of researchers until the present date. For instance, an extensive field of work has been developed in the cosmological dynamics of scalar-tensor theories. 
This can be elegantly summarised by carrying out a unified qualitative analysis of the dynamical system for a single scalar field. We also have an established understanding of the observational bounds in these models, where we can use the Parametrized Post-Newtonian formalism to constrain the model parameters. Finally, with a conformal transformation, these theories can be recast as matter interacting scalar fields in General Relativity. In this format, they can still play an important role in dark energy modelling, such as in coupled quintessence models. The consideration of a multi-scalar fields scenario, which can be perceived as the possible reflection of a multi-scalar tensor gravity theory allows for cooperative effects between the fields yielding assisted quintessence. Indeed, scalar fields are popular building blocks used to construct models of present-day cosmological acceleration. They are appealing because such fields are ubiquitous in theories of high energy physics beyond the standard model and, in particular, are present in theories which include extra spatial dimensions, such as those derived from string theories. Recently, relative to scalar-tensor theory, much work has been invested in the Galileon models and their generalizations \cite{deRham:2014zqa}. The latter models allow nonlinear derivative interactions of the scalar field in the Lagrangian and lead to second order field equations, thus removing any ghost-like instabilities. The Lagrangian was first written down by Horndeski in 1974 \cite{Horndeski:1974wa}, which contains four arbitrary functions of the scalar field and its kinetic energy. The form of the Lagrangian is significantly simplified by requiring specific self-tuning properties (though it still has four arbitrary functions), however, the screening is too effective, and will screen curvature from other matter sources as well as from the vacuum energy \cite{Charmousis:2011ea}. An alternative approach consists of searching for a de Sitter critical point for any kind of material content \cite{stmodels}. These models might alleviate the cosmological constant problem and can deliver a background dynamics compatible with the latest observational data. A promising alternative to explain the late-time cosmic acceleration is to assume that at large scales Einstein's theory of GR breaks down, and a more general action describes the gravitational field. Thus, one may generalize the Einstein-Hilbert action by including second order curvature invariants such as $R^2$, $R^{\mu\nu}R_{\mu\nu}$, $R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta}$, $C^{\mu\nu\alpha\beta}C_{\mu\nu\alpha\beta}$, etc. Some of the physical motivations for these modifications of gravity were inspired on effective models raised in string theory, which indeed may lead to the possibility of a more realistic representation of the gravitational fields near curvature singularities \cite{Nojiri:2003rz}. Moreover, the quantization of fields in curved space-times tell us that the high-energy completion of the Einstein-Hilbert Lagrangian of GR involves higher-order terms on the curvature invariants above \cite{QFT}. This is in agreement with the results provided from the point of view of treating GR as an effective field theory \cite{Cembranos-effective}. 
Among these extensions of GR the so-called $f(R)$ gravity has drawn much attention over the last years, since it can reproduce late-time acceleration and in spite of containing higher order derivatives, it is free of the Ostrogradsky instability, as can be shown by its equivalence with scalar-tensor theories (for a review on $f(R)$ gravity see Refs.\cite{DeFelice:2010aj,Sotiriou:2008rp,Capozziello:2011et,Nojiri:2010wj}). Moreover, $f(R)$ gravities have been also proposed as solutions for the inflationary paradigm \cite{Bamba:2015uma}, where the so-called Starobinsky model is a successful proposal, since it satisfies the latest constraints released by Planck \cite{Planck:2013jfk}. In addition, the equivalence of $f(R)$ gravities to some class of scalar-tensor theories has provided an extension of the so-called chameleon mechanism to $f(R)$ gravity, leading to some viable extensions of GR that pass the solar system constraints \cite{Hu:2007nk,Nojiri:2007as}. Other alternative formulations for these extensions of GR have been considered in the literature, namely, the Palatini formalism, where metric and affine connection are regarded as independent degrees of freedom, which yields an interesting phenomenology for Cosmology \cite{Olmo}; and the metric-affine formalism, where the matter part of the action now depends and is varied with respect to the connection \cite{Sotiriou:2006qn}. Recently, a novel approach to modified theories of gravity was proposed that consists of adding to the Einstein-Hilbert Lagrangian an $f({\cal R})$ term constructed {\it a la} Palatini \cite{Harko:2011nh}. It was shown that the theory can pass the Solar System observational constraints even if the scalar field is very light. This implies the existence of a long-range scalar field, which is able to modify the cosmological and galactic dynamics, but leaves the Solar System unaffected. Note that these modified theories of gravity are focussed on extensions of the curvature-based Einstein-Hilbert action. Nevertheless, one could equally well modify gravity starting from its torsion-based formulation and, in particular, from the Teleparallel Equivalent of General Relativity (TEGR) \cite{Linder:2010py}. The interesting point is that although GR is completely equivalent with TEGR at the level of equations, their modifications (for instance $f(R)$ and $f(T)$ gravities, where $T$ is the torsion) are not equivalent and they correspond to different classes of gravitational modifications. Hence, $f(T)$ gravity has novel and interesting cosmological implications, capable of describing inflation, the late-time acceleration, large scale structure, bouncing solutions, non-minimal couplings to matter, etc \cite{Cai:2015emx,Harko:2014sja,Harko:2014aja}. Another gravitational modification that has recently attracted much interest is the massive gravity paradigm, where instead of introducing new scalar degrees of freedom, such as in $f(R)$ gravity, it modifies the graviton itself. Massive gravity is a well-defined theoretical problem on its own and has important cosmological motivations, namely, if gravity is massive it will be weaker at large scales and thus one can obtain the late-time cosmic acceleration. Fierz and Pauli presented the first linear massive gravity. However, it was shown to suffer from the van Dam-Veltman-Zakharov (vDVZ) discontinuity \cite{vanDam:1970vg,Zakharov:1970cc}, namely the massless limit of the results do not yield the massless theory, namely, GR. 
The incorporation of nonlinear terms cured the problem but introduced the Boulware-Deser (BD) ghost. This fundamental problem puzzled physicists until recently, where a specific nonlinear extension of massive gravity was proposed by de Rham, Gabadadze and Tolley (dRGT), in which the BD ghost is eliminated by a Hamiltonian constraint \cite{deRham:2014zqa}. This new nonlinear massive gravity has interesting cosmological implications, for instance, it can give rise to inflation, late-time acceleration \cite{deRham:2014zqa}. However, the basic versions of this theory exhibit instabilities at the perturbative level, and thus suitable extensions are necessary. These could be anisotropic versions, $f(R)$ extensions, bigravity generalizations, partially-massive constructions. The crucial issue is whether one can construct a massive gravity and cosmology that can be consistent as an alternative to dark energy or other models of modified gravity, and whether this theory is in agreement with high-precision cosmological data, such as the growth-index or the tensor-to-scalar ratio, remains to be explored in detail. Quantum field theory predicts that the universe underwent, in its early stages, a series of symmetry breaking phase transitions, some of which may have led to the formation of topological defects. Different types of defects may be formed depending on the (non-trivial) topology of the vacuum manifold or the type of symmetry being broken. For instance, {\it Domain Walls}---which are surfaces that separate domains with different vacuum expectation values---may arise due to the breaking of a discrete symmetry, whenever the vacuum manifold is disconnected. Line-like defects, or \textit{Cosmic strings}, are formed if the vacuum is not simply connected or, equivalently, if it contains unshrinkable loops. This type of vacuum manifold results, in general, from the breaking of an axial symmetry. Moreover, if the vacuum manifold contains unshrinkable surfaces, the field might develop non-trivial configurations corresponding to point-like defects, known as \textit{Monopoles}. The spontaneous symmetry breaking of more complex symmetry groups may lead to the formation of textures, delocalized topological defects which are unstable to collapse. The production of topological defects networks as remnants of symmetry breaking phase transitions is thus predicted in several grand unified scenarios and in several models of inflation. Moreover, recent developments in the braneworld realization of string theory suggest that its fundamental objects---p-dimensional D-branes and fundamental strings---may play the cosmological role of topological defects. Topological defects networks, although formed in the early universe, may in most instances survive throughout the cosmological history and leave a variety of imprints on different observational probes. The observational consequences of topological defect networks can be very diverse, depending both on the type of defects formed and on the evolution of the universe after they are generated. Although the possibility of a significant contribution to the dark energy budget has been ruled out both dynamically \cite{PinaAvelino:2006ia} and observationally \cite{Ade:2015xua}, light domain walls may leave behind interesting astrophysical and cosmological signatures. For instance, they may be associated to spatial variations of the fundamental couplings of nature (see, e.g., \cite{Avelino:2014xsa}). 
On the other hand, cosmic strings may contribute significantly to small-scale cosmological perturbations and have consequently been suggested to have significant impact on the formation of ultracompact minihalos \cite{Anthonisen:2015tda}, globular clusters \cite{Barton:2015zra}, super-massive black holes \citep{Bramberger:2015kua} and to provide a significant contribution to the reionization history of the Universe \cite{Avelino:2003nn}. Both cosmic strings and domain walls may be responsible for significant contributions to two of the most significant observational probes: the temperature and polarization anisotropies of the cosmic microwave background and the stochastic gravitational wave background. This fact---alongside the possibility of testing string theory through the study of topological defects---greatly motivates the interest on the astrophysical and cosmological signatures of topological defects. An extremely important aspect of modern cosmology is the synergy between theory and observations. Dark energy models and modified gravity affect the geometry of the universe and cosmological structure formation, impacting the background expansion and leaving an imprint on the statistical properties of the large-scale structure. There are a number of well-established probes of cosmic evolution, such as type Ia supernovae, baryon acoustic oscillations (BAO), weak gravitational lensing, galaxy clustering and galaxy clusters properties \cite{weinberg:probes}. Different methods measure different observables, probing expansion and structure formation in different and often complementary ways and have different systematic effects. In particular, joint analyses with Cosmic Microwave Background (CMB) data are helpful in breaking degeneracies by constraining the standard cosmological parameters. Indeed, CMB has revolutionized the way we perceive the Universe. The information encoded in its temperature and polarization maps provides one of the strongest evidences in favour of the hot Big-Bang theory and has enabled ways to constrain cosmological models with unprecedented accuracy \citep{2015arXiv150201589P}. The CMB also encodes additional information about the growth of cosmological structure and the dynamics of the Universe through the secondary CMB anisotropies. These are originated by physical effects acting on the CMB after decoupling \citep{2008RPPh...71f6902A}, such as the integrated Sachs-Wolfe effect and the Sunyaev-Zel'dovich (SZ) effect, manifest respectively on the largest and arc-minute scales of the CMB. In this review, we will discuss in some detail both a well-established acceleration probe (weak lensing) and a few promising ones related to galaxy cluster properties and the SZ effect. Galaxy clusters are the largest gravitationally bound objects in the Universe and are among the latest bound structures forming in the Universe. For this reason, their number density is highly sensitive to the details of structure formation as well as to cosmological background parameters and cluster abundance is a well established cosmological probe. The Sunyaev-Zel'dovich effect \cite{1972CoASP...4..173S, 1999PhR...310...97B} is the scattering of CMB photons by electrons in hot reservoirs of ionized gas in the Universe, such as galaxy clusters. In particular, SZ galaxy cluster counts, profiles, scaling relations, angular power spectra and induced spectral distortions are promising probes to confront model predictions with observations. 
Weak gravitational lensing \cite{bands} describes the deflection of light by gravity in the weak regime. Its angular power spectrum is a direct measure of the statistical properties of gravity and matter on cosmological scales. Weak lensing, together with galaxy clustering, is the core method of the forthcoming Euclid mission to map the dark universe. Euclid \cite{redbook} will provide us with weak lensing measurements of unprecedented precision. To obtain high-precision and high-accuracy constraints on dark energy or modified gravity properties, with both weak lensing and SZ clusters, non-linearities in structure formation must be taken into account. While linear covariant perturbation equations may be evolved with Boltzmann codes, non-linearities require dedicated N-body cosmological simulations \cite{nbodymodgrav}. A number of simulations currently exist for various modified gravity and dark energy models, together with a set of formulas that fit a non-linear power spectrum from a linear one. Hydrodynamic simulations, commonly used in cluster studies, are increasingly needed in weak lensing applications to model various baryonic effects on lensing observables, such as supernova and AGN feedback, star formation or radiative cooling \cite{semboloni}. The $\Lambda$CDM framework provides a very good fit to various datasets, but it contains some open issues \cite{bulloslo}. As an example, there are inconsistencies between probes, such as the tension between the CMB primary signal (Planck) and weak lensing (CFHTLenS) \cite{joudaki2016}, as well as problems with the interpretation of large-scale CMB measurements (the so-called CMB anomalies) \cite{anomalies}. Alternatives to $\Lambda$CDM or deviations from General Relativity are usually confronted with data using one of two approaches: model selection or parameterizations. In model selection a specific model is analyzed and its parameters constrained. Such analyses have a narrower scope but may be better physically motivated. Parameterizations are good working tools and are helpful in highlighting a particular feature and in ruling out larger classes of models; however, they must be carefully defined in a consistent way. Parameterizations are commonly applied to the dark energy equation of state and to deviations from General Relativity. An example of the latter is the gravitational slip, which provides an unambiguous signature of modified gravity and can be estimated by combining weak lensing measurements of the lensing potential with galaxy clustering measurements of the Newtonian potential. Model parameters in both approaches are usually estimated with Monte Carlo techniques, while the viability of different models may be compared using various information criteria. Besides model testing, cosmological data are also useful for testing foundational assumptions, such as the (statistical) cosmological principle and the inflationary paradigm. The common understanding is that cosmological structures are the result of primordial density fluctuations that grew through gravitational instability. These primordial density perturbations would have originated during the inflationary phase in the early universe. Most single field slow-roll inflationary models produce nearly Gaussian distributed perturbations, with possible deviations from Gaussianity so weak as to be beyond detection \cite{2003NuPhB.667..119A,2003JHEP...05..013M,2004PhR...402..103B}. 
However, non-Gaussianities may arise in inflationary models in which any of the conditions leading to the slow-roll dynamics fail \cite{2009astro2010S.158K}, such as the curvaton scenario \cite{2004PhRvD..69d3503B,2006PhRvD..74j3003S,2002PhLB..524....5L}, the ekpyrotic inflationary scenario \cite{2010AdAst2010E..67L,2008PhRvD..77f3533L}, vector field populated inflation \cite{2008JCAP...08..005Y,2009PhRvD..80b3509K, 2010AdAst2010E..65D} and multi-field inflation \cite{2010AdAst2010E..76B,1998PhLB..422...52M,1994PhRvD..50.6123P,2006PhRvD..73h3522R}. Tests of non-Gaussianity are thus a way to discriminate between inflation models and to test the different proposed mechanisms for the generation of primordial density perturbations. Likewise, the assumption of statistical homogeneity may be tested. Locally, matter is distributed according to a pattern of alternating overdense and underdense regions. Since averaging inhomogeneities in the matter density distribution yields a homogeneous description of the Universe, the apparent homogeneity of the cosmological parameters could also result from the averaging of inhomogeneities in the cosmological parameters, which would reflect the inhomogeneities in the density distribution. The theoretical setup closest to this reasoning is that of backreaction models, where the angular variations in the parameters could also source a repulsive force and potentially emulate cosmic acceleration. Hence the idea is to look for these inhomogeneities, not in depth, but rather across the sky, and then to use an adequate toy model to compare the magnitude of the acceleration derived from angular variations of the parameters with the acceleration driven by a cosmological constant \cite{carvalho_2015}. This work will focus on all of the above-mentioned topics. More specifically, this article is outlined in the following manner: In Section \ref{sec2}, we present scalar-tensor theories, and in Section \ref{sec3}, we consider Horndeski theories and the self-tuning properties. In Section \ref{sec4}, $f(R)$ modified theories of gravity and extensions are reviewed. In Section \ref{sec5}, an extensive review of topological defects is carried out. The following sections are dedicated to observational cosmology. In particular, in Section \ref{sec6}, cosmological tests with galaxy clusters at CMB frequencies are presented, and in Section \ref{sec7}, gravitational lensing will be explored. In Section \ref{sec8}, the angular distribution of cosmological parameters as a measurement of spacetime inhomogeneities will be presented. Finally, in Section \ref{sec:concl} we conclude.
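For reference, the parametrization of primordial non-Gaussianity most commonly adopted in the studies cited above is the local form, quoted here in standard notation as background material rather than as a result of the works reviewed:
\begin{equation*}
  \Phi(\mathbf{x}) = \phi(\mathbf{x}) + f_{\rm NL}\left[\phi^{2}(\mathbf{x}) - \langle \phi^{2} \rangle\right],
  \qquad \phi \ \text{Gaussian},
\end{equation*}
where $f_{\rm NL}$ sets the amplitude of the quadratic correction to the Gaussian potential $\phi$.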
\label{sec:concl} In this work, we have explored the dynamics and evolution of the Universe at early and late times, focusing on both dark energy and extended gravity models and their astrophysical and cosmological consequences. More specifically, we presented scalar-tensor theories, focussing on a brief review of the general formalism and the conformal picture; we discussed the experimental and observational status of the theory, presented a detailed dynamical system analysis of the cosmological dynamics, and concluded with a quintessence scenario where a multi-scalar-tensor theory is responsible for the present accelerated expansion of the Universe. Furthermore, we considered Horndeski theories and the self-tuning properties, and explored realistic cosmologies that can be constructed by studying the properties of this intriguing theory. We also reviewed $f(R)$ modified theories of gravity and extensions, where the foundational questions and astrophysical/cosmological issues of the Palatini approach were extensively explored; we also focussed on recently proposed theories such as curvature-matter couplings and the hybrid metric-Palatini theory. The inflationary epoch was also presented in $f(R)$ gravity, where the so-called Starobinsky inflation is one of the more reliable candidates for explaining the inflationary paradigm, as supported by the recent Planck/Bicep2 constraints. Moreover, we have reviewed the evolution and cosmological consequences of topological defect networks that may be formed in the early universe. In particular, we focused on two of the most significant signatures of these networks: the cosmic microwave background anisotropies and the stochastic gravitational wave background. As mentioned in the Introduction, given the importance of the synergy between theory and observations, the second part of this review was dedicated to observational cosmology. In particular, cosmological tests with galaxy clusters at CMB frequencies were presented. More specifically, galaxy clusters and the thermal and kinetic Sunyaev-Zel'dovich (SZ) effects were reviewed, and the cosmological tests using SZ cluster surveys were presented, focussing on SZ cluster counts, SZ power spectra and the possibility of probing new physics with SZ clusters; a review of probing primordial non-Gaussianities with galaxy clusters was also presented, with an emphasis on the parametrization of primordial non-Gaussianities, the non-Gaussian mass function, biased cosmological parameter estimation with cluster counts, and the impact of primordial non-Gaussianities on galaxy cluster scaling relations. Furthermore, gravitational lensing was also explored, with a focus on cosmological scales and on the possibility of testing deviations from General Relativity, in particular with future cosmic shear data. Finally, the angular distribution of cosmological parameters as a measurement of spacetime inhomogeneities was presented, using a method of averaging over the local parameter estimation. Thus, a broad set of cosmological tests was discussed, together with their relevance in constraining our cosmological description of the Universe. \vspace{6pt}
16
7
1607.02979
1607
1607.03843_arXiv.txt
Extreme-ultraviolet and X-ray jets occur frequently in magnetically open coronal holes on the Sun, especially at high solar latitudes. Some of these jets are observed by white-light coronagraphs as they propagate through the outer corona toward the inner heliosphere, and it has been proposed that they give rise to microstreams and torsional Alfv\'{e}n waves detected {\em in situ} in the solar wind. To predict and understand the signatures of coronal-hole jets, we have performed a detailed statistical analysis of such a jet simulated with an adaptively refined magnetohydrodynamics model. The results confirm the generation and persistence of three-dimensional, reconnection-driven magnetic turbulence in the simulation. We calculate the spatial correlations of magnetic fluctuations within the jet and find that they agree best with the M\"{u}ller - Biskamp scaling model including intermittent current sheets of various sizes coupled via hydrodynamic turbulent cascade. The anisotropy of the magnetic fluctuations and the spatial orientation of the current sheets are consistent with an ensemble of nonlinear Alfv\'{e}n waves. These properties also reflect the overall collimated jet structure imposed by the geometry of the reconnecting magnetic field. A comparison with Ulysses observations shows that turbulence in the jet wake is in quantitative agreement with that in the fast solar wind.
\label{sec:intro} Magnetohydrodynamic (MHD) turbulence plays a fundamental role in numerous systems of critical interest to heliophysics: the solar convective zone and atmosphere; the interplanetary medium; the magnetotails of the Earth and other magnetized planets; and the heliopause, at the interface between the heliosphere and the interstellar medium. In plasmas characterized by moderate to high values of plasma $\beta \equiv n k_B T /(B^2 / 2 \mu_0)$, where $n$ and $T$ are plasma number density and temperature, $B$ is magnetic field strength, $k_B$ is Boltzmann's constant, and $\mu_0$ is the permeability of free space, thermal pressure reaches or exceeds the pressure exerted by the magnetic field, and the turbulent flows readily bend and fold the field to produce current sheets throughout the volume. Magnetic reconnection or resistive diffusion across these sheets converts magnetic free energy to kinetic and thermal energies of the bulk plasma and to kinetic energy of highly accelerated particles. In low-$\beta$ plasmas, where the magnetic field is dominant, the current sheets form and the associated reconnection/diffusion processes occur preferentially near null points of the field. Null regions therefore act as generators of reconnection-driven turbulence in highly conducting, low-$\beta$ plasmas such as the solar corona. In general, turbulence occurs naturally in moving fluids characterized by a high Reynolds number, $Re = v_l l / \nu$, where $v_l$ is a typical ambient flow speed at the scale $l$ and $\nu$ is the kinematic viscosity. (In resistive MHD plasmas, the magnetic Reynolds or Lundquist number is defined using the magnetic diffusivity $\eta$ in place of the viscosity.) The condition $Re \gg 1$, satisfied in many astrophysical systems, ensures that a broad range of inertial spatial scales $l$, bounded by the large driving scale $l_D$ and the small dissipative scale $l_d$, $l_d \ll l \ll l_D$, is supported. While the physical mechanisms underlying energy dissipation at the dissipative scale $l_d$ play a critical role by providing a sink for the energy injected into the system at the driving scale $l_D$, the inertial-range behavior in between is, to a large extent, independent of the details of the dissipation process. This leads to a statistical self-similarity of fluctuations in the velocity and magnetic fields across those scales, even in weakly collisional to collisionless plasmas where the dissipation is dominated by wave-particle interactions or other kinetic effects \citep{schekochihin2009,daughton2011,leonardis13}. The turbulent outflows from reconnection-driven systems such as coronal jets can provide important clues about the geometry of the reconnection region and enable remote sensing of the driving mechanism through its characteristic signatures in the flow. Such features include unstable velocity shear and ensuing multiscale vorticity in the reconnection exhaust of terrestrial substorms \citep{keiling09}, fragmented current sheets and filaments embedded in larger-scale outflow from field-reversed configurations \citep{uritsky01, klimas04}, and topological markers of the underlying magnetic-field configuration through the three-dimensional (3D) geometry of the turbulent flows \citep[e.g.\ ][]{biskamp03}. 
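As a purely illustrative check of the two dimensionless numbers defined above, the short script below evaluates $\beta$ and the magnetic Reynolds (Lundquist) number for round, assumed low-corona values; none of these numbers are taken from the jet simulation itself.
\begin{verbatim}
# Illustrative evaluation of plasma beta and the magnetic Reynolds number,
# using the definitions given in the text and assumed (not simulated) values.
import numpy as np

k_B  = 1.380649e-23      # J/K, Boltzmann constant
mu_0 = 4e-7 * np.pi      # T m / A, permeability of free space

n = 1e15                 # m^-3, number density (assumed)
T = 1e6                  # K, temperature (assumed)
B = 1e-3                 # T (10 G), field strength (assumed)

beta = n * k_B * T / (B**2 / (2 * mu_0))
print("plasma beta ~", round(beta, 3))          # << 1: low-beta corona

v_l = 1e5                # m/s, typical flow speed at scale l (assumed)
l   = 1e7                # m, driving scale (assumed)
eta = 1.0                # m^2/s, illustrative Spitzer-like diffusivity
print("magnetic Reynolds number ~ %.1e" % (v_l * l / eta))   # >> 1
\end{verbatim}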
These and other observational hallmarks of the reconnection process have been identified in extreme ultraviolet (EUV) images of the corona from {\em Solar and Heliospheric Observatory} and {\em Solar Terrestrial Relations Observatory } spacecraft \citep{uritsky07, uritsky13}, flyby time-series data from {\em MErcury Surface, Space ENvironment, GEochemistry, and Ranging} probe \citep{uritsky11}, and numerical data from high-resolution 3D simulations of prescribed, vorticity-laden MHD flows \citep{uritsky10a}. Solar EUV and X-ray jets \citep[][and references therein]{raouafi2016} are transient, highly dynamic brightenings of the low solar corona that produce fast collimated outflows of plasma. When these events occur in the open magnetic fields of coronal holes, the quasi-radial jet outflows sometimes are observed by white-light coronagraphs to propagate several solar radii from the Sun into the inner heliosphere. \citet{neugebauer2012} suggested that these jets may be the origin of small-scale microstreams detected {\em in situ} in the solar wind \citep{neugebauer1995}. Because many jets sensed remotely in the corona exhibit a distinctively helical structure that traverses the corona at highly supersonic speeds, leading to its identification as a nonlinear Alfv\'en wave, it is plausible that coronal hole jets also are the source of such waves detected in the interplanetary medium \citep{gosling10,marubashi10}. In this paper, we analyze a first-principles, 3D numerical simulation of the initiation and propagation of a coronal hole jet \citep{karpen16} to establish and characterize its turbulent nature. The physical model underlying the simulation is null-point magnetic reconnection occurring at the interface between the ambient coronal-hole flux of one polarity and a concentrated patch of opposite-polarity flux provided by an embedded bipole \citep{lau1990,antiochos1990}. The topology of this configuration, in which an inner system of flux that closes to the solar surface is embedded within an outer system of flux that opens to the heliosphere, supports strong electric currents associated with steep gradients in the magnetic-field direction at the interface between the two flux systems \citep{antiochos1996}. Previous Cartesian, gravity-free, uniform-background 3D numerical simulations of such configurations \citep{pariat09, pariat10, pariat15, pariat16} demonstrated that this model produces explosive jets with helical structure, density-enhanced outflows, and Alfv\'enic wave fronts, in accord with observations. The energy source for the jets is the twisted magnetic flux under the separatrix. The new work \citep{karpen16} extends those investigations by including the effects of spherical geometry, solar gravity, density and magnetic-field stratification, and an isothermal solar wind. The ensuing reconnection-driven jet wave front propagates unhindered into the outer corona, reaching 5 solar radii in 1250 s, and its duration, length, diameter, plasma-outflow speed, leading-front speed, plane-of-the-sky transverse speed, kinetic energy, and helical morphology are all typical of observed coronal hole jets. We review the relevant statistical hierarchical models of turbulent hydrodynamic and magnetohydrodynamic cascades, with and without intermittency, in \S \ref{sec:turb}. A concise summary of the numerical simulation, which is described in detail by \citet{karpen16}, is given in \S \ref{sec:model}. The main results of our analysis of the jet are presented in the succeeding three sections. 
We discuss noteworthy large-scale features of its radial structure in \S \ref{sec:spat}, analyze the statistics of its small-scale velocity- and magnetic-field fluctuations in \S \ref{sec:stat}, and illustrate its filamentary electric-current structures in \S \ref{sec:cs}. The paper concludes with a brief discussion of the implications of our results in \S \ref{sec:disc_conc}.
\label{sec:disc_conc} Our analysis of adaptively refined 3D MHD simulations confirms the occurrence of reconnection-driven turbulence in a supersonic coronal hole jet, and explores its structure and evolution. We have found that spatial correlations of magnetic fluctuations inside the jet are in quantitative agreement with a scaling ansatz of intermittent MHD turbulence proposed by \citet{muller00}. The MB scaling model implies that the turbulent cascade inside the jet is supported by filamentary structures representing fluid vortices, whereas the energy dissipation takes place in intermittent current sheets. The current sheets in the simulated coronal jet obey this scenario and exhibit a scale-dependent geometry, with the largest sheets stretched into highly elongated structures parallel to the large-scale flow and the smallest sheets being more isotropic. Our results show that the turbulent wake of the jet contains three radially stratified regions (Figure \ref{fig_jet_sketch}): (1) the immediate wake behind the leading edge of the jet, dominated by shear Alfv\'{e}n turbulence; (2) the remote wake characterized by both Alfv\'{e}nic and compressible turbulence; and (3) the dense portion of the jet adjacent to the reconnection driver and dominated by compressible, non-Alfv\'{e}nic plasma motions. Our additional analysis (not shown) indicates that the dense region is dynamically rather important as it carries the largest momentum and kinetic energy densities compared to the other two jet regions. The prevailing role of Alfv\'{e}nic turbulence in the immediate wake is confirmed by the Wal\'{e}n ratio $R_w \approx 1$ and by the spatial alignment of the transverse velocity and magnetic field perturbations, which maintain their multiscale structure as the jet propagates. The remote wake, in turn, exhibits a considerable degree of compression in addition to Alfv\'{e}nic fluctuations found in the immediate wake. The nature of stochastic motions in the dense jet remains to be understood. It is likely that the 3D geometry of the reconnected field plays a major role in this region but how exactly it couples to the multiscale plasma flow is not clear. \begin{figure}% \begin{center} \includegraphics[width=7.9cm]{fig_jet_sketch_} \vspace{10mm} \caption{\label{fig_jet_sketch} Schematic diagram showing the internal structure of the jet according to our analysis. Regions of Alfv\'{e}nic and compressible wave activity are marked with yellow and blue curves, correspondingly.} \end{center} \end{figure} Coronal hole jets are most commonly observed at the Sun's polar regions. These jets can couple directly to the fast solar wind, where the lack of significant dynamical interaction between flows at different speeds creates conditions for an unperturbed propagation of extremely low frequency Alfv\'{e}nic fluctuations across large heliocentric distances \citep[see, e.g.,][and references therein]{bruno13}. Most of what is currently known about turbulence in the polar wind is based on the observations by the Ulysses spacecraft in 1994-1995. These observations show that polar Alfv\'{e}nic turbulence evolves similarly to solar wind in the ecliptic plane but on a significantly slower time scale, due to the absence of strong velocity shears and interplanetary shocks \citep[e.g.,][]{bavassano00}. 
Ulysses magnetic field and plasma measurements above $\pm 30^\circ$ latitude demonstrate the presence of strong statistical correlations between the transverse components of magnetic and velocity fields characteristic of Alfv\'{e}nic turbulence \citep{smith95, goldstein95}. The spectral index of magnetic field variations in the polar wind is close to $\alpha = 5/3$ at frequencies above $\sim 10^{-3}$Hz \citep{horbury95, horbury96}. Within statistical uncertainty, these estimates are indistinguishable from the prediction of the MB model $\alpha \approx 1.741$ and the flows in the immediate wake of our jet. Higher order structure-function analysis of magnetic field fluctuations above the Sun's polar coronal holes \citep{nicol08} provides extra support for the MB scenario showing a stable exponent ratio $\zeta_2/\zeta_3 \approx 0.75$ consistent with Equation (\ref{eq_MB}). For a nominal flow speed of the order of 700 km s$^{-1}$ and the lowest frequency $\sim 10^{-3}$Hz, the radial scales of the MHD turbulence in the polar wind should be no larger than $7 \times 10^5$km. Assuming that a volume of plasma transported by the polar wind expands in the transverse directions but not along the radial direction \citep{dong14}, this upper scale limit can be compared with jet fluctuations. Figure \ref{fig_VB_img} shows that the largest radial scale of jet fluctuations is on the order of $10^5$ km, which agrees with polar wind measurements. To summarize, observations suggest that the polar wind turbulence is dominated by the same type of energy cascade as the one found in the coronal hole jet simulation studied here. It should be noted that the compact size and the transient nature of coronal hole jets make it difficult to trace their individual contributions to the solar wind dynamics. However, a cumulative effect of many such events organized into relatively large and long-lived formations such as coronal plumes could strongly influence magnetic and velocity field fluctuations in the adjacent heliosphere and explain much of its stochastic structure. The upcoming {\it Solar Probe Plus} and {\it Solar Orbiter} solar missions will likely provide more insight into the physics of the jet turbulence and its relevance to the solar wind. Among indicative single-spacecraft turbulence tests that could be conducted by these missions are the higher-order time-domain structure functions converted into the spatial domain using appropriate dispersion relations, the Wal\'{e}n ratio and the velocity - magnetic field orientation analysis as local markers of shear-Alfv\'{e}n modes, and compressibility analysis using mass density fluctuations. In this context, additional simulations involving multiple coronal hole jets could be instrumental for clarifying the mechanism of the coupling of the jet flows with the fast wind. The fact that turbulent characteristics of a { \it single} jet agree with polar solar wind measurements, as has been established here, may indicate that a limited number of jets can in fact be sufficient to reproduce a realistic turbulent solar wind outflow.
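The quantitative comparison above can be reproduced with a few lines of code. The snippet below evaluates the structure-function exponents in their standard M\"{u}ller-Biskamp form, which we assume corresponds to Equation (\ref{eq_MB}), together with the largest advected radial scale for the nominal polar-wind parameters quoted in the text.
\begin{verbatim}
# Numerical check of the numbers quoted above: the spectral index implied by
# the Mueller-Biskamp (MB) scaling and the upper limit on the radial scale of
# polar-wind fluctuations.  The MB exponents are written in their standard
# form, assumed here to match Eq. (eq_MB) of this paper.
import numpy as np

def zeta_mb(p):
    """MB structure-function exponents zeta_p for intermittent MHD turbulence."""
    return p / 9.0 + 1.0 - (1.0 / 3.0) ** (p / 3.0)

zeta2, zeta3 = zeta_mb(2.0), zeta_mb(3.0)
print("zeta_2            =", round(zeta2, 3))           # ~0.74
print("alpha = 1 + zeta_2 =", round(1.0 + zeta2, 3))    # ~1.74
print("zeta_2 / zeta_3    =", round(zeta2 / zeta3, 3))  # ~0.74, cf. Ulysses ~0.75

# Largest radial scale advected past the spacecraft at the lowest frequency:
v_wind = 700e3        # m/s, nominal polar wind speed (from the text)
f_min  = 1e-3         # Hz, lowest frequency considered
print("l_max ~ v/f =", v_wind / f_min / 1e3, "km")      # ~7e5 km
\end{verbatim}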
16
7
1607.03843
1607
1607.03303.txt
We investigate an extension of the Standard Model containing two Higgs doublets and a singlet scalar field (2HDSM). We show that the model can have a strongly first-order phase transition and give rise to the observed baryon asymmetry of the Universe, consistent with all experimental constraints. In particular, the electron and neutron electric dipole moments are less constraining here than in the pure two-Higgs-doublet model (2HDM). The two-step, first-order transition in the 2HDSM, induced by the singlet field, may lead to strong supercooling and low nucleation temperatures in comparison with the critical temperature, $T_n \ll T_c$, which can significantly alter the usual phase-transition pattern in 2HD models with $T_n \approx T_c$. Furthermore, the singlet field can be the dark matter particle. However, in models with a strong first-order transition its abundance is typically only about a thousandth of the observed dark matter abundance.
\label{sec:intro} The matter-antimatter asymmetry in the universe presents one of the major quests for particle cosmology. Due to cosmic inflation, such an asymmetry cannot be an initial condition for the thermal history of the universe, but calls for a dynamical explanation. The Standard Model (SM) of elementary particle interactions fails in providing a successful mechanism for baryogenesis, and one must look at different extensions of the SM. In this paper we address these issues in the context of a 2HDSM featuring an extended scalar sector with two gauged Higgs doublets and an extra singlet. Generation of the matter-antimatter asymmetry in connection with the electroweak phase transition, i.e. electroweak baryogenesis, is a particularly appealing scenario due to the possibility of connecting it with collider experiments. Generic 2HDMs have been studied earlier in connection with the electroweak baryogenesis problem~\cite{Turok:1990zg, Turok:1991uc, Funakubo:1993jg, Davies:1994id, Cline:1995dg, Laine:2000rm, Fromme:2006cm, Cline:2011mm, Dorsch:2013wja}. They provide both a new source of CP violation arising from complex parameters in the 2HDM potential and a strong first-order phase transition arising from the one-loop effective potential. However, observational constraints are placing stringent limits also on 2HDMs~\cite{Cline:2011mm}. Here we show that these constraints are alleviated when the model is further extended by a real scalar singlet field. A generic feature of the 2HDM, also inherited by the 2HDSM, is the danger of generating large flavour changing neutral currents. To avoid these, one has to constrain the Higgs-fermion couplings in one way or another. Here we choose to work in the context of universal Yukawa alignment, which may be argued for by a requirement that the whole Lagrangian is invariant under the group GL(2,$\mathbbm{C}$) of linear reparametrization transformations in the doublet space. We also use the reparametrization invariance to develop an elegant way to explore the vacuum stability and the phase-transition pattern in the model. In the 2HDM context large CP violation requires that the scalar couplings have large complex phases, and a strong transition requires that the couplings are large in magnitude. When combined, these requirements tend to give too large electron and neutron electric dipole moments (EDMs). We will show that the presence of the additional scalar allows for a strong two-step electroweak phase transition, which does not rely on large radiative corrections to the effective potential. This alleviates the burden on the scalar self-couplings and significantly increases the phase space consistent with EDM constraints in the 2HDSM. The singlet scalar can also be a dark matter (DM) candidate when a discrete $Z_2$ symmetry is imposed to stabilize it. However, we will find that a strong first-order phase transition is not consistent with a dominant singlet scalar DM particle. The problem is that a strong two-step transition requires a large coupling between the singlet and doublet sectors, and this implies such a large annihilation rate for the DM that its relic abundance becomes too small to account for the full observed DM density. This conclusion is generic for all models of this type. 
We observe that two-step transitions may also give rise to {\em too strong} transitions. It is possible that fields get trapped in the metastable minimum so that electroweak symmetry remains unbroken. Also, the latent heat released in the transition may be so large that the transition walls necessarily become supersonic. However, we also find parameters for which the walls may be subsonic, consistent with the electroweak baryogenesis scenario. Overall, we are able to find models that satisfy all observational and experimental constraints and can also give rise to a successful electroweak baryogenesis, accompanied by a subleading DM in the 2HDSM context. The structure of the paper is as follows: In Sec.~\ref{sec:model} we introduce the model and discuss the most general GL(2,$\mathbbm{C}$)-reparametrization invariant 2HDSM Lagrangian including Yukawa couplings. Here we also develop methods to study the vacuum stability and the phase-transition patterns in the theory. In Sec.~\ref{sec:results} we first go through the experimental constraints on the model and evaluate the DM relic abundance and the DM search limits on model parameters. We then evaluate the strength of the transition and compute the baryon asymmetry created in the electroweak phase transition. The section is concluded by a study of bubble nucleation in the 2HDSM and in the singlet extension of the SM. In Sec.~\ref{sec:conclusions} we conclude and outline some directions for future research.
\label{sec:conclusions} We have studied the viability of a two-Higgs-doublet and inert-singlet model for EWBG and for DM, taking into account all existing observational and collider constraints. Our model is based on the maximal GL(2,$\mathbbm{C}$) reparametrization symmetry. This implies a universal Yukawa-alignment scheme, where both Higgs fields couple similarly to all fermions and there are no FCNCs. Exploiting the GL(2,$\mathbbm{C}$) symmetry, the model can, in a particular basis, be written with a type-I Yukawa sector combined with the most general CP-violating potential. Following~\cite{Ivanov:2006yq,Ivanov:2007de,Ivanov:2008er} we implemented a novel way to construct potentials with tree-level stability and to study the symmetry breaking patterns at finite temperatures. This construction was based on the Lorentz symmetry induced by the reparametrization symmetry on bilinears formed from the Higgs doublets. These techniques are applicable to all 2HD models, and they proved extremely useful when performing large-scale parametric scans over the multidimensional phase space of the model. Dark matter and the strength of the electroweak phase transition in the model follow a similar pattern to the pure singlet extension: in accordance with~\cite{Cline:2012hg}, we find that strong two-step phase transitions are easily found, but they are consistent only with a subleading DM. Likewise, we find that experimental and observational constraints are fairly easy to avoid, with the outstanding exception of the electron-EDM bound, which strongly constrains the CP-violating parameters of the model. EDM constraints are particularly important because creating a large baryon asymmetry during the electroweak phase transition requires large CP-violating parameters; we found that the electron EDM indeed strongly constrains the phase space consistent with baryogenesis. Yet the bounds are not as strong here as in the pure 2HDM case~\cite{Cline:2011mm}, and we found a number of models consistent with all requirements. Finally, we observed that two-step transitions may suffer from an unexpected problem of providing a {\em too strong} phase transition. We found that fields may get trapped in the metastable minimum, and transition walls may not be subsonic as required by a successful baryogenesis scenario. However, our analysis in the full model was restricted to the thin-wall approximation. We then studied the bubble nucleation in full generality in the case of a pure singlet extension of the SM. While the generic problem of too strong transitions remained, we found that the thin-wall limit tends to overestimate the strength of the transition. Based on these results we argued that, when corrected for the thin-wall bias, the walls may well remain subsonic in the full 2HDSM. In a revised scan concentrating on models with a large critical temperature, we found many models potentially consistent with all available constraints. While our results are not a definite proof, they provide a strong indication of successful baryogenesis in the two-Higgs-doublet and inert-singlet model. Settling the issue beyond any doubt would require two significant improvements. First is a detailed analysis of the bubble wall dynamics, including a microscopic computation of the friction on the wall. 
Second is going beyond the $T_c$-bounce solution when solving for the scalar field profiles across the bubble wall, in order to compute the top quark mass profile and eventually the baryogenesis source. These are both very interesting topics that deserve to be studied in detail in the future.
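For orientation, the bilinear construction referred to above packages the two doublets into a gauge-invariant four-vector; the expression below is the standard form used in Ivanov's formalism and is quoted here as background, not as a result derived in this paper:
\begin{equation*}
  r^{\mu} = \left( \Phi_{1}^{\dagger}\Phi_{1} + \Phi_{2}^{\dagger}\Phi_{2},\;
                   2\,\mathrm{Re}\,\Phi_{1}^{\dagger}\Phi_{2},\;
                   2\,\mathrm{Im}\,\Phi_{1}^{\dagger}\Phi_{2},\;
                   \Phi_{1}^{\dagger}\Phi_{1} - \Phi_{2}^{\dagger}\Phi_{2} \right),
  \qquad r^{\mu}r_{\mu} \ge 0,
\end{equation*}
on which doublet reparametrizations act as Lorentz (and scale) transformations, so that vacuum stability and the location of minima can be analysed geometrically in the space of bilinears.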
16
7
1607.03303
1607
1607.03244_arXiv.txt
In this paper we study, using Monte Carlo simulations, the possibility of discriminating the mass of the Ultra High Energy Cosmic Rays (UHECRs) by combining information obtained from the maximum $X_{max}^{\mu}$ of the muon production rate longitudinal profile of Extensive Air Showers (EAS) and the number of muons, $N^{\mu}$, which hit an array of detectors located in the horizontal plane. We investigate the sensitivity of the 2D distribution of $X_{max}^{\mu}$ versus $N^{\mu}$ to the mass of the primary particle generating the air shower. To this purpose we analyze a set of CORSIKA showers induced by protons and iron nuclei at energies of $10^{19}$eV and $10^{20}$eV, at five angles of incidence, $0^{\circ}$, $37^{\circ}$, $48^{\circ}$, $55^{\circ}$ and $60^{\circ}$. Using the simulations we obtain the 2D Probability Functions $Prob(X_{max}^{\mu},N^{\mu} \ | \ p)$ and $Prob(X_{max}^{\mu},N^{\mu} \ | \ Fe)$, which give the probability that a shower induced by a proton or an iron nucleus contributes to a specific point on the plane ($X_{max}^{\mu}$, $N^{\mu}$). Then we construct the probability functions $Prob(p\ | \ X_{max}^{\mu},N^{\mu})$ and $Prob(Fe \ | \ X_{max}^{\mu},N^{\mu})$, which give the probability that a certain point on the plane ($X_{max}^{\mu}$, $N^{\mu}$) corresponds to a shower initiated by a proton or an iron nucleus, respectively. Finally, a test of this procedure using a Bayesian approach confirms an improved accuracy of the primary mass estimation in comparison with the results obtained using only the $X_{max}^{\mu}$ distributions.
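The Bayesian step described above can be illustrated schematically. In the sketch below the two-dimensional probability maps are built from made-up Gaussian toy showers rather than from the CORSIKA library used in this work, and the binning, toy parameters and prior are arbitrary; only the structure of the calculation is meant to be representative.
\begin{verbatim}
# Schematic Bayesian inversion: from Prob(Xmax_mu, Nmu | primary) histograms
# to Prob(p | Xmax_mu, Nmu).  Toy Gaussian showers, NOT the CORSIKA set.
import numpy as np

rng = np.random.default_rng(1)
n_show = 20000
# Toy observables (arbitrary units): Fe showers are assumed to have a
# shallower Xmax_mu and more muons than proton showers, qualitatively only.
xmax_p,  nmu_p  = rng.normal(560., 40., n_show), rng.normal(1.0, 0.15, n_show)
xmax_fe, nmu_fe = rng.normal(500., 25., n_show), rng.normal(1.4, 0.12, n_show)

xedges = np.linspace(400., 700., 31)
nedges = np.linspace(0.5, 2.0, 31)
h_p,  _, _ = np.histogram2d(xmax_p,  nmu_p,  bins=[xedges, nedges], density=True)
h_fe, _, _ = np.histogram2d(xmax_fe, nmu_fe, bins=[xedges, nedges], density=True)

def prob_proton(xmax, nmu, prior_p=0.5):
    """Prob(p | Xmax_mu, Nmu) for one shower, assuming a p/Fe mixture."""
    i = np.searchsorted(xedges, xmax) - 1
    j = np.searchsorted(nedges, nmu) - 1
    lp, lfe = h_p[i, j], h_fe[i, j]
    denom = lp * prior_p + lfe * (1.0 - prior_p)
    return lp * prior_p / denom if denom > 0 else 0.5

print(prob_proton(575., 0.95))   # proton-like shower -> close to 1
print(prob_proton(505., 1.45))   # iron-like shower   -> close to 0
\end{verbatim}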
The mass composition of the primary UHECRs, together with their energy spectrum and arrival directions, constitutes the fundamental data when searching for the sources and the acceleration mechanisms of the cosmic rays. Various detection techniques, such as surface detectors (scintillation modules \cite{::2013dga} or water Cherenkov tanks \cite{Ave:2007zz}), fluorescence detectors \cite{Abraham:2009pm}, \cite{Tokuno:2012mi}, radio antennas \cite{Abreu:2012pi}, and microwave detection \cite{PhysRevLett.113.221101}, have been proposed to study these observables. Despite concerted efforts in many experiments, such as the Pierre Auger Observatory \cite{ThePierreAuger:2015rma}, the Telescope Array \cite{AbuZayyad201287}, HiRES \cite{Sokolsky201174}, and AGASA \cite{Chiba:1991nf}, to answer these fundamental questions, a clear answer has not yet been given. In the present work we focus on the problem of determining the properties of the primary particle which initiates the EAS using the information from the ground particle detectors. One observable which is sensitive to the mass of the primary particle is the atmospheric depth where the density of the secondary charged particles reaches its maximum. This observable decreases roughly proportionally with the logarithm of the mass $A$ of the primary particle. Its sensitivity to $A$ is illustrated by the difference of about $100$ g\,cm$^{-2}$ between the values for $p$- and $Fe$-induced showers at the same energy \cite{Aab:2015bza}. It can be obtained experimentally by measuring the shower UV light with fluorescence detectors (FD) \cite{ThePierreAuger:2015rma}, \cite{Abraham:2009pm}, \cite{AbuZayyad201287}, \cite{Sokolsky201174}. Indeed, the intensity of UV light emitted from an elementary volume, following the excitation of the nitrogen molecules in the atmosphere by the secondary charged particles in the EAS, is proportional to the charged particle density. Thus, with the FDs the dependence of the charged particle density on atmospheric depth can be obtained. The drawback of this technique is the low duty cycle of FD measurements (up to $\sim 15\%$ \cite{ThePierreAuger:2015rma}), due to the fact that the UV light from an EAS can be measured only during moonless nights and only in good atmospheric conditions. This fact, combined with the low statistics of the UHECRs at $E > 10^{19}$ eV, contributes significantly to the uncertainty of mass reconstruction from FD measurements. To increase the observational duty cycle, the reconstruction of the primary mass on the basis of the signal of the surface detectors (duty cycle $\sim 100 \%$) would be advantageous. This can be done using the reconstructed profile of the muon production depth (MPD) from EAS on the basis of the signal of the surface detectors, as proposed by Cazon et al. \cite{Cazon:2004zx, Cazon:2003ar} in the case of the Pierre Auger Observatory. The individual muon production depth (the muon production point expressed in units of atmospheric depth) can be calculated using the muon arrival time in the detectors and the arrival time of the shower core. Then, the longitudinal profile of the muon production rate can be obtained as the depth dependence of the number of muons produced per unit of atmospheric depth. The maximum $X_{max}^{\mu}$ of this profile was proposed as an observable sensitive to the primary mass. The number of muons in the shower is also sensitive to the primary mass. 
However, it has a stronger dependence on the energy of the primary particle than on the primary mass, and due to this fact the uncertainty of energy determination has a high impact on mass discrimination using this observable. In a preliminary study \cite{Arsene,Arsene_Sima} we have shown that by using the information included in the correlation $X_{max}^{\mu}$ versus $N^{\mu}$, the accuracy of the primary mass reconstruction can be improved in comparison with the method which uses only the $X_{max}^{\mu}$ distribution. This correlation could also be used to test the high energy interaction models. Our preliminary study was based on simulations done with the CORSIKA code \cite{corsika,corsika1} using the thinning option, without applying a resampling scheme. In the present work the study is extended by applying the resampling scheme proposed by Billoir \cite{Billoir:2008zz}. Also, the parametrization of the 2D distribution $X_{max}^{\mu}$ versus $N^{\mu}$ is improved. The study is done both in the case when $N^{\mu}$ corresponds to all the muons from a given radial range where the muon production depth is reconstructed from the arrival times of all these muons and in the realistic case when $N^{\mu}$ and the production depth correspond to the muons which hit the detectors from an array like AMIGA surface detector array \cite{Wainberg:2013koa}, \cite{Videla:2015xia}, \cite{Aab:JINST2016} of the Pierre Auger Observatory. In order to test the principle of the method, in this exploratory work the experimental uncertainties are not included and the detector simulation is not done. However, some results of the effects of uncertainties in the arrival time and in the reconstruction of the shower parameters are presented. In Section II the observables $X_{max}^{\mu}$ and $N^{\mu}$ are introduced. % In Section III the simulations used are presented and the data analysis for obtaining the muon production depth and the muon number is discussed; the resampling scheme applied is briefly described. In Section IV the 2D distribution $X_{max}^{\mu}$ versus $N^{\mu}$ is presented and parameterized. In Section V a Bayesian approach is applied in order to test the mass discrimination performance on the basis of this 2D distribution. Section VI concludes the paper.
In this work, using CORSIKA simulations, we evaluated the possibility of discriminating the mass of the primary cosmic rays on the basis of the MPD, taking into account the 2D distribution of $X_{max}^{\mu}$ versus $N^{\mu}$. Because both $N^{\mu}$ and $X_{max}^{\mu}$ depend on the mass of the primary particle, but in different ways, the 2D distribution may contain more information on the mass of the primary cosmic ray than the individual distributions. Using this distribution we constructed the Probability Functions $Prob(p \ | \ X_{max}^{\mu},N^{\mu})$ and $Prob(Fe \ | \ X_{max}^{\mu},N^{\mu})$, which give the probability that a certain point in the plane $(X_{max}^{\mu}, N^{\mu})$ corresponds to a proton or an iron shower. We found, qualitatively, that the mass reconstruction accuracy improves when the information from the correlation of $X_{max}^{\mu}$ versus $N^{\mu}$ is used, in comparison with the method based only on the $X_{max}^{\mu}$ distribution.
16
7
1607.03244
1607
1607.03134_arXiv.txt
Several studies have shown that the occultation of stellar active regions by the transiting planet can generate anomalies in the high-precision transit light curves, and these anomalies may lead to an inaccurate estimate of the planetary parameters (e.g., the planet radius). Since the physics and geometry behind the transit light curve and the Rossiter-McLaughlin effect (spectroscopic transit) are the same, the Rossiter-McLaughlin observations are expected to be affected by the occultation of stellar active regions in a similar way. In this paper we perform a fundamental test of the spin-orbit angles as derived from Rossiter-McLaughlin measurements, and we examine the impact of the occultation of stellar active regions by the transiting planet on the spin-orbit angle estimations. Our results show that the inaccuracy in the spin-orbit angle estimate due to stellar activity can be quite significant (up to $\sim 30$ degrees), particularly for the edge-on, aligned, and small transiting planets. Therefore, our results suggest that the aligned transiting planets are the ones that can be easily misinterpreted as misaligned owing to stellar activity. In other words, the biases introduced by ignoring stellar activity are unlikely to be the culprit for the highly misaligned systems.
As a star rotates, the part of its surface that rotates toward the observer will be blue-shifted and the part that rotates away will be red-shifted. During the transit of a planet, the corresponding rotational velocity of the portion of the stellar disk that is blocked by the planet is removed from the integration of the velocity over the entire star, creating the radial velocity (RV) signal which is known as the Rossiter-McLaughlin (RM) effect \citep{Rossiter-24, McLaughlin-24}. This effect has been used to determine the projected rotation velocity of a star ($v \sin i$) and the angle between the sky-projections of the stellar spin axis and the planetary (or the eclipsing binary's secondary) orbital plane (hereafter spin-orbit angle)\footnote{The planet-to-star radius ratio can also be estimated through RM measurements; however, this parameter has been frequently derived from the photometric transit light curve.} \citep[e.g.,][]{Winn-05, Hebrard-08, Winn-10, Simpson-10, Triaud-10, Hirano-11, Albrecht-12, Hirano-16}. In the era of high-precision RV measurements, like those provided by HARPS and HARPS-N and the upcoming spectrograph ESPRESSO, the determination of the spin-orbit angle and $v \sin i$ can be influenced by second-order effects such as the convective blueshift \citep{Shporer-11, Cegla-16}, differential stellar rotation \citep{Albrecht-12}, and the microlensing effect due to the transiting planet's mass \citep{Oshagh-13c}. It has been shown in several studies that the occultation of stellar active regions (i.e., stellar spots and plages) by the transiting planet can generate anomalies in the high-precision transit light curves and may lead to an incorrect estimate of the planetary parameters \citep[e.g.,][]{Sanchis-Ojeda-11a, Oshagh-13b, Sanchis-Ojeda-13, Barros-13, Oshagh-15a, Oshagh-15b}. The detection of these anomalies in the transit light curves became the norm in the exoplanet community after the high-precision photometric observations achieved by space-based telescopes (such as CoRoT and Kepler). Since the physics and geometry behind the transit light curve and the RM effect are the same, they are both expected to be affected by the occultation of active regions by the transiting planet in a similar way. Probing the impact of occulted stellar active-region anomalies in high-precision RM observations on the spin-orbit angle measurements, and quantifying their influence, is the main objective of this paper\footnote{There have been several studies that assess the impact of non-occulted stellar active regions on the transit light curves and the planetary parameter estimations \citep[e.g.,][]{Czesla-09,Ioannidis-16}. The non-occulted stellar active regions would also affect RM measurements; however, probing this effect is beyond the scope of this paper and will be pursued in forthcoming publications.}. In Sect. 2 we present the details of our models that are used to produce mock RM observations. In Sect. 3 we discuss to what extent stellar activity can affect spin-orbit angle measurements. In Sect. 4, we present the possible interpretation of our results, and we conclude our study by summarizing the results in Sect. 5. \begin{figure} \includegraphics[width=95mm, height=140mm]{plot_transit_rm.pdf} \caption{Top: Simulated transit light curve anomaly due to the occultation of a dark spot or a plage by the transiting planet, shown by the blue and green dots, respectively. The red dots show the simulated transit light curve without the occultation of any active regions. 
The simulation was done for an aligned Jupiter-size transiting planet overlapping a dark spot with a filling factor of 0.5\% and a plage with a filling factor of 2.25\%. The error bars were obtained by adding random Gaussian noise at the level of 300 ppm (which is consistent with the photometric precision achievable by Kepler for an 11th-magnitude star on short cadence). Bottom: Same as the top panel for the simulated RM observations. The error bars were obtained by adding random Gaussian noise at the level of 1 m s$^{-1}$ (which is consistent with the RV precision achievable by current spectrographs such as HARPS and HARPS-N). } \label{sample-figure} \end{figure}
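To make the mechanism described in this section concrete, a toy flux-weighted calculation over a pixelated stellar disc already reproduces the qualitative behaviour: removing the light blocked by the planet shifts the disc-integrated velocity, and crossing a dark spot perturbs that shift. The sketch below is illustrative only; the grid size, limb-darkening law, spot contrast, and geometry are assumptions and do not correspond to the models used in this paper.
\begin{verbatim}
# Toy RM anomaly from a flux-weighted velocity over a pixelated stellar disc.
# All parameters are hypothetical and chosen for illustration only.
import numpy as np

n = 501
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
on_star = x**2 + y**2 <= 1.0

vsini = 3.0                               # km/s, projected rotation (assumed)
vel = vsini * x                           # solid-body line-of-sight velocity
mu = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, 1.0))
inten = np.where(on_star, 1.0 - 0.6 * (1.0 - mu), 0.0)  # linear limb darkening

# Dark spot (contrast 0.5) fixed on the disc; planet crossing the equator.
spot = (x - 0.3)**2 + y**2 <= 0.07**2
inten = np.where(spot, 0.5 * inten, inten)

def flux_weighted_rv(x_p, y_p, r_p=0.1):
    """Disc-integrated RV (km/s) with the planet centred at (x_p, y_p)."""
    blocked = (x - x_p)**2 + (y - y_p)**2 <= r_p**2
    w = np.where(blocked, 0.0, inten)
    return np.sum(w * vel) / np.sum(w)

for x_p in np.linspace(-1.2, 1.2, 13):
    print(f"x_p = {x_p:+.2f}  RV = {1e3 * flux_weighted_rv(x_p, 0.0):+7.2f} m/s")
\end{verbatim}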
In this paper we performed a fundamental test of the spin-orbit angles as derived from RM measurements, and we examined the impact of the occultation of stellar active regions by the transiting planet on the spin-orbit angle estimations. We generated mock RM observations with RM anomalies for different configurations, and then analyzed them to quantify the impact of stellar activity. Our results showed that the inaccuracy in the spin-orbit angle estimate can be quite significant (up to $\sim 30$ degrees), particularly for the edge-on, aligned, and small transiting planets. Our results also suggested that the aligned transiting planets are the ones that can be easily misinterpreted as misaligned owing to stellar activity. These results can be used to explain the conflicting spin-orbit angles reported for a small number of cases in the literature. We also found that the inaccuracy in the spin-orbit angle estimation increases, up to $\sim 40$ degrees, for the cases with long time-sampling. Moreover, we found that increasing the RV precision improves the accuracy of the spin-orbit angle estimation. Finally, our results suggest that performing RM observations at long wavelengths (e.g., NIR) will help to improve the accuracy of the spin-orbit angle estimation.
16
7
1607.03134
1607
1607.02304_arXiv.txt
We present observations of the 85.69~GHz continuum emission and H42$\alpha$ line emission from the central 30~arcsec within NGC~4945. Both sources of emission originate from nearly identical structures that can be modelled as exponential discs with a scale length of $\sim$2.1~arcsec (or $\sim$40 pc). An analysis of the spectral energy distribution based on combining these data with archival data imply that 84\%$\pm$10\% of the 85.69~GHz continuum emission originates from free-free emission. The electron temperature is 5400$\pm$600~K, which is comparable to what has been measured near the centre of the Milky Way Galaxy. The star formation rate (SFR) based on the H42$\alpha$ and 85.69~GHz free-free emission (and using a distance of 3.8~Mpc) is 4.35$\pm$0.25 M$_\odot$ yr$^{-1}$. This is consistent with the SFR from the total infrared flux and with previous measurements based on recombination line emission, and it is within a factor of $\sim$2 of SFRs derived from radio data. The {\it Spitzer} Space Telescope 24~$\mu$m data and Wide-field Infrared Survey Explorer 22~$\mu$m data yield SFRs $\sim$10$\times$ lower than the ALMA measurements, most likely because the mid-infrared data are strongly affected by dust attenuation equivalent to $A_V=150$. These results indicate that SFRs based on mid-infrared emission may be highly inaccurate for dusty, compact circumnuclear starbursts.
\label{} \addtocounter{footnote}{5} The Atacama Large Millimeter/submillimeter Array (ALMA) is capable of detecting two different forms of emission from photoionized gas in the star forming regions within other galaxies. First, ALMA can measure continuum emission at 85-100~GHz, where the spectral energy distributions (SEDs) of galaxies are dominated by free-free emission \citep[e.g.][]{peel11}. Second, ALMA is sensitive enough to detect recombination line emission that appears at millimetre and submillimetre wavelengths. Both free-free and millimetre recombination line emission as star formation tracers have advantages over ultraviolet, optical, and near-infrared tracers in that they are unaffected by dust attenuation. Unlike infrared or radio continuum emission, the millimetre continuum and recombination line emission directly traces photoionized gas and therefore should be more reliable for measuring accurate star formation rates (SFRs). For additional discussion about this, see \citet{murphy11}. Millimetre continuum observations of nearby galaxies have been relatively straightforward, but the recombination line emission has been more difficult to detect. Before ALMA, the millimetre recombination lines had been detected in multiple star forming regions within the Milky Way \citep[e.g. ][]{waltman73, wilson84, gordon89, gordon90}, but extragalactic millimetre recombination line emission had only been detected in M82 \citep{seaquist94, seaquist96}, NGC 253 \citep{puxley97}, and Arp 220 \citep{anantharamaiah00}. ALMA is capable of reaching sensitivity levels at least an order of magnitude better than other telescopes \citep[see][for a technical overview]{remijan15} and can therefore lead to detections in many more nearby infrared-luminous sources than was previously possible \citep{scoville13}. At this time, however, ALMA detections of specifically recombination line emission have been limited. \citet{bendo15b} and \citet{meier15} reported the detection of millimetre recombination line emission from the nearby starburst galaxy NGC~253, and \citet{bendo15b} used the 99.02~GHz continuum and H40$\alpha$ (99.02 GHz) line emission to measure electron temperatures, SFRs, and (along with near-infrared data) dust attenuation in the centre of the galaxy. \citet{scoville15} reported the marginal detection of H26$\alpha$ emission from Arp 220, but the presence of the nearby HCN(4-3) line made it difficult to perform any analysis with this line detection. In this paper, we report on the detection of 85.69~GHz continuum and H42$\alpha$ recombination line emission from the centre of NGC~4945, a nearby \citep[3.8$\pm$0.3~Mpc;][]{karachentsev07, mould08} spiral galaxy with an optical disc of 20.0$\times$3.8~arcmin \citep{devaucouleurs91}. While the galaxy contains a composite active galactic nucleus (AGN) and starburst nucleus, and while the AGN is one of the brightest hard X-ray sources as seen from Earth \citep{done96}, analyses based on near- and mid-infrared emission lines imply that the starburst is the primary energy source for exciting the photoionized gas and that the AGN is heavily obscured \citep{marconi00, spoon00, spoon03, perezbeaupuits11}. The central region is very dusty, so even near-infrared star formation tracers such as the Pa$\alpha$ line are strongly attenuated \citep{marconi00}. In such a source, millimetre star formation tracers would provide a much more accurate measurement of the SFR. 
Previously, the only detections of millimetre or radio recombination line emission from NGC~4945 were the H91$\alpha$ and H92$\alpha$ lines observed by \citet{roy10}. However, these low-frequency lines are significantly more affected by masing and collisional broadening effects than millimetre lines \citep{gordon90}, and the photoionized gas is typically optically thick at radio wavelengths but optically thin at millimetre wavelengths. Given this, the millimetre continuum and line data should provide a better measure of the SFR than the radio line data. After a description of the data in Section~\ref{s_data}, we present the analysis and the discussion of the results in five sections. Section~\ref{s_image} presents the images and also shows simple models fit to the data. Section~\ref{s_spec} shows the spectra as well as maps of the parameters describing the recombination line profiles. Section~\ref{s_sed} presents the SED of the emission from the central source between 3 and 350~GHz, which we used to determine the fraction of continuum emission originating from free-free emission. Section~\ref{s_te} presents the derivation of electron temperatures ($T_e$) from the ratio of the millimetre recombination line to the free-free continuum emission and discusses the results in the context of other measurements within NGC 4945, the Milky Way Galaxy, and other galaxies. Section~\ref{s_sfr} describes the measurement of SFRs from the millimetre data, the measurement of SFRs from infrared data, and SFR measurements based on published radio continuum photometry as well as previously-published SFR measurements based on other radio data. We then perform a comparison of these various SFR measurements to identify consistencies between the measurements and to understand the reasons why some measurements may appear inconsistent with others. A summary of all of the results is presented in Section~\ref{s_conclu}.
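As an illustration of the kind of decomposition performed in Section~\ref{s_sed}, the sketch below fits a three-component model (synchrotron, free-free, and thermal dust) to an entirely hypothetical set of flux densities; the spectral indices are assumed canonical values, and neither they nor the photometry are taken from this paper.
\begin{verbatim}
# Illustrative three-component SED decomposition around 85.69 GHz.
# Photometry and spectral indices below are hypothetical, for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def sed_model(nu_ghz, s_sync, s_ff, s_dust):
    """Component flux densities pinned at 85.69 GHz (indices assumed)."""
    nu0 = 85.69
    return (s_sync * (nu_ghz / nu0) ** -0.8 +   # synchrotron (assumed index)
            s_ff   * (nu_ghz / nu0) ** -0.1 +   # optically thin free-free
            s_dust * (nu_ghz / nu0) ** 3.8)     # dust Rayleigh-Jeans tail

nu  = np.array([3., 8., 23., 85.69, 230., 345.])   # GHz (hypothetical)
s   = np.array([2.1, 1.1, 0.62, 0.50, 1.9, 9.0])   # Jy  (hypothetical)
err = 0.1 * s

popt, _ = curve_fit(sed_model, nu, s, sigma=err, p0=[0.2, 0.4, 0.01])
s_sync, s_ff, s_dust = popt
print("free-free fraction at 85.69 GHz ~ %.2f" % (s_ff / sed_model(85.69, *popt)))
\end{verbatim}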
\label{s_conclu} We have presented here ALMA observations of 85.69~GHz continuum emission and H42$\alpha$ line emission from the centre of NGC~4945. These data are among only a small number of currently-existing ALMA observations that include the detection of recombination line emission from an extragalactic source, and our analysis is one of the earliest comparisons of SFR measurements from ALMA data with SFR measurements from infrared data. In summary, we have obtained the following results: \vspace{0.625em} \setlength{\leftskip}{1em} \noindent $\bullet$ The 85.69~GHz continuum and H42$\alpha$ line emission originates from a structure that can be modelled as an exponential disc with a scale length of $\sim$2.1~arcsec ($\sim$40~pc). The spatial extent of the emission, the absence of any enhancement in the centre, and the absence of any broad line emission all suggest that the emission originates primarily from photoionized gas associated with the circumnuclear starburst and not from the AGN. \vspace{0.625em} \noindent $\bullet$ The SED for the central source implies that 84\%$\pm$10\% of the 85.69~GHz continuum emission from the central disc originates from free-free emission. \vspace{0.625em} \noindent $\bullet$ The $T_e$ for the central star forming disc based on the ratio of the H42$\alpha$ line emission to the 85.69~GHz free-free emission is $5400\pm600$~K. This is similar to what is measured near the centre of the Milky Way. These results also imply that the AGN contributes $\ltsim$10\% of the total continuum emission from the central disc. \vspace{0.625em} \noindent $\bullet$ The SFR for the central source derived from both the 85.69~GHz continuum and H42$\alpha$ line emission is $4.35\pm0.25$~M$_\odot$~yr$^{-1}$. This is comparable to what we obtain using the total infrared flux, and it is consistent with the range of SFR values estimated from previous radio recombination line measurements. \vspace{0.625em} \noindent $\bullet$ The SFR measurements from either previously-published radio continuum data or from radio observations of supernovae are a factor of $\sim$2 higher than what is obtained from the ALMA data. This is potentially related to calibration issues with the SFR estimates based on the radio data, to changes in the SFR between 3-40~Myr ago and the present, or to a combination of the two. \vspace{0.625em} \noindent $\bullet$ The SFR measurements from {\it Spitzer} 24~$\mu$m and WISE 22~$\mu$m data are $\sim$10$\times$ lower than the measurements based on the ALMA data as well as the measurement based on the total infrared flux or the measurements based on radio data. This probably occurs because the central starburst is optically thick at mid-infrared wavelengths, a condition under which the conversion between mid-infrared flux and SFR no longer yields reliable results. \setlength{\leftskip}{0pt} \vspace{0.625em} This analysis demonstrates not only how effective ALMA can be for studying star formation in the centres of starburst galaxies but also how such ALMA observations can be used to cross-check SFR measurements from both infrared and radio data. Mid-infrared flux has been favoured for use as a star formation tracer because of how well the emission has been correlated with other ultraviolet, optical, and near-infrared star formation tracers and because it had been assumed that mid-infrared emission is not affected by the same dust extinction effects as star formation tracers at shorter wavelengths. 
The results here demonstrate that mid-infrared fluxes may not be reliable star formation tracers in compact starbursts. Additional ALMA observations of star forming regions in other nearby galaxies should be used to explore the reliability of infrared emission as a star formation metric for such dusty systems.
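As an illustration of the kind of line-to-continuum temperature diagnostic summarised above, the short sketch below inverts a standard LTE relation between recombination line brightness, free-free continuum brightness and $T_e$. The numerical coefficient and exponents are the commonly quoted LTE approximation for radio and millimetre recombination lines, and the assumed He$^+$ fraction, line width and line-to-continuum ratio are illustrative values chosen here, not quantities taken from the analysis itself.
\begin{verbatim}
# Minimal sketch: LTE electron temperature from a millimetre recombination
# line-to-continuum ratio. The coefficient 6.985e3 and the exponents follow
# the commonly quoted LTE approximation; the He+/H+ fraction, line width and
# line-to-continuum ratio are illustrative assumptions, not measured values.

def electron_temperature(line_to_cont, delta_v_kms, nu_ghz=85.69, y_plus=0.08):
    """Invert T_L/T_C = 6.985e3 nu^1.1 Te^-1.15 / (dv (1 + y+))."""
    rhs = 6.985e3 * nu_ghz**1.1 / (delta_v_kms * (1.0 + y_plus))
    return (rhs / line_to_cont) ** (1.0 / 1.15)

# Hypothetical inputs chosen so that Te comes out near the ~5400 K quoted above.
print(electron_temperature(line_to_cont=0.22, delta_v_kms=200.0))
\end{verbatim}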
16
7
1607.02304
1607
1607.02132_arXiv.txt
The leading theory for the formation of the Earth's moon invokes a collision between a Mars-sized body and the proto-Earth to produce a disk of orbiting material that later condenses to form the Moon. Here we study the early evolution of the protolunar disk. First, we show that the disk opacity is large and cooling is therefore inefficient ($t_{cool}\Omega \gg 1$). In this regime angular momentum transport in the disk leads to steady heating unless $\alpha < (t_{cool}\Omega)^{-1} \ll 1$. Following earlier work by Charnoz and Michaut, and Carballido et al., we show that once the disk is completely vaporized it is well coupled to the magnetic field. We consider a scenario in which turbulence driven by magnetic fields leads to a brief, hot phase where the disk is geometrically thick, with strong turbulent mixing. The disk cools by spreading until it decouples from the field. We point out that approximately half the accretion energy is dissipated in the boundary layer where the disk meets the Earth's surface. This creates high entropy material close to the Earth, driving convection and mixing. Finally, a hot, magnetized disk could drive bipolar outflows that remove mass and angular momentum from the Earth-Moon system.
The giant impact theory for the origin of the Earth's moon invokes a collision with a Mars-sized impactor \citep{Hartmann1975, Cameron1976}. Such collisions are expected to be common in young planetary systems \citep[e.g.][]{Chambers1998, Kenyon2006, Meng2014}. The giant impact has been modeled numerically \citep[e.g.][]{Benz1985, Canup2004, Wada2006, Canup2013, Nakajima2014}. It typically leads to the formation of a circumterrestrial disk and, in the giant impact scenario, the disk eventually condenses to form the Moon at a radius comparable to the Roche radius $\simeq 2.9 \Rea$. The initial conditions for the giant impact are characterized by a minimum of 10 parameters, including two masses, a relative velocity in the plane of the collision, and the spin angular momentum vector of each body. The two bodies may also differ in their chemical composition, isotopic composition, and magnetic field strength and geometry. While a great deal is now understood about the outcome of these collisions, the collision parameters remain uncertain. Early simulations were constructed under the assumption that the angular momentum of the system was approximately conserved from the impact to the present day \citep[e.g.][]{Canup2004}, when tidal coupling has transferred most of the angular momentum of the Earth-Moon system to the Moon's orbit. Later work has challenged this assumption. In particular, the Earth-Moon system may pass through an ``evection'' resonance where the Moon's apsidal precession period is close to one year. The resulting resonant coupling then removes angular momentum from the Earth-Moon system \citep{Cuk2012}. Uncertainties in the initial conditions for the impact translate into uncertainties in physical conditions in the post-impact protolunar disk. \cite{Nakajima2014}, for example, consider post-impact disks that vary widely in their mass, angular momentum, and entropy (and hence vapor fraction). A common outcome, however, is a disk with mass $M_D \simeq 2 \Mm$, typical radius $(L_D/M_D)^2/(G \Mea) \simeq 2.5 \Rea$ ($L_D \equiv$ total disk angular momentum), and a distribution of temperatures from $3000-7000\K$.\footnote{Temperatures tend to be lower in models that assume a slow spinning Earth and angular momentum conservation \citep[e.g.][]{Canup2004} and higher in models with an initially fast-spinning Earth with later resonant angular momentum removal \citep{Cuk2012}; see \cite{Nakajima2014}.} The chemical and isotopic composition of the present-day Earth and Moon potentially provide strong constraints on giant impact models \citep[e.g.][]{Jones2000, Wiechert2001, Dauphas2014, Melosh2014}. Earth differs sharply in chemical composition from the Moon, and in particular has a substantial iron core. The bulk lunar density $\rho_{\moon} \approx 3.34 \gcm$ \citep{Bills1977} compares to $\rho_\oplus \approx 5.5 \gcm$, which implies the Moon has a small iron core with $1-10\%$ of its mass, in contrast to Earth's iron core, which contains $\approx 30\%$ of its mass \citep{Canup2004b}. The Earth and Moon are surprisingly similar in isotopic abundance, however, in light of the differences between the Earth and Mars and between Earth and some meteorites. The lunar oxygen isotope ratio ($\delta ^{17} {\rm O} / \delta ^{18} {\rm O}$) lies very close to the terrestrial fractionation line, but far from Mars and other solar system bodies. A similar trend is also found in other elements, including refractory elements such as Ti. 
There are at least two ways of producing a similar isotopic composition in the Moon and the Earth \citep[see review of][]{Melosh2014}: mix material between the Earth's mantle and the protolunar disk either after the impact \citep[e.g.][]{Pahlevan2007} or during the impact \citep{Cuk2012}, or invent a scenario in which the impactor and the Earth begin with nearly identical isotopic composition \citep{Belbruno2005,Mastro2015}. Our ability to assess the consequences of the giant impact and its aftermath relies on numerical models of the impact. In this paper we ask whether treating the impact and post-impact disk using an ideal hydrodynamics model is self-consistent. In \S 2 we introduce a reference disk model, evaluate its opacity, and show that cooling is inefficient, so that unless $\alpha$ is very small the disk will experience runaway heating. In \S 3 we evaluate the conductivity of a hot vapor disk and show that it is well coupled to the magnetic field. In \S 4 we investigate implications of magnetic coupling for development of magnetorotational instability driven turbulence. In \S 5 we describe a scenario in which the disk and boundary layer are strongly magnetized and heat up to the virial temperature, producing rapid accretion, spreading, mixing, and potentially outflows. \S 6 contains a summary and discussion.
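As a rough numerical illustration of the heating argument outlined for \S 2, the sketch below evaluates the Keplerian frequency at a representative disk radius of $2.5\,R_\oplus$ and compares the turbulent heating time $(\alpha\Omega)^{-1}$ with the thin-disk spreading time $(\alpha\Omega)^{-1}(r/H)^2$. The values of $\alpha$ and of the aspect ratio $H/r$ are illustrative assumptions, not results of this paper.
\begin{verbatim}
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
R_earth = 6.371e6      # m

# Representative protolunar-disk radius of ~2.5 Earth radii (from the text);
# alpha and the aspect ratio H/r below are illustrative assumptions only.
r = 2.5 * R_earth
omega = math.sqrt(G * M_earth / r**3)        # Keplerian angular frequency
orbital_period_hr = 2.0 * math.pi / omega / 3600.0

for alpha, h_over_r in [(1e-2, 0.3), (1e-1, 0.3)]:
    t_heat = 1.0 / (alpha * omega)           # turbulent heating timescale
    t_spread = t_heat / h_over_r**2          # thin-disk spreading timescale
    print(f"alpha={alpha:5.0e}  P={orbital_period_hr:4.1f} hr  "
          f"t_heat={t_heat/3600:7.1f} hr  t_spread={t_spread/3600:8.1f} hr")
\end{verbatim}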
In this paper we have investigated the early evolution of a remnant disk formed by a giant impact with Earth. We estimated that a vapor disk has large Rosseland mean optical depth. Cooling is ineffective, even if the disk is convective. Any form of turbulent angular momentum transport, characterized by the Shakura-Sunyaev parameter $\alpha$, will heat it on a timescale $(\alpha\Omega)^{-1}$, which is short compared to the thin disk evolution timescale $(\alpha\Omega)^{-1} (r/H)^2$. We showed, following \cite{Carballido2016}, that if the disk contains a vapor component with $T \gtrsim 4000\K$ then that component is well coupled to the magnetic field. The precise lower limit for coupling depends on composition, particularly the abundance of K and Na. Once the critical temperature is exceeded--and this may happen during the initial collision--there is the possibility of MHD turbulence driven by the magnetorotational instability. The evolution of a magnetically coupled disk depends on the initial field strength and geometry. If MHD turbulence is present, the numerical evidence suggests that it will heat the disk still further and transport angular momentum efficiently. Assuming that angular momentum {\em is} transported efficiently, we have put forward a scenario in which the disk first heats to the virial temperature and then spreads on a timescale of $\sim 600$ hrs. The outcome is a ring of material at the outer edge of the disk that spreads and cools until it decouples from the magnetic field. Using a simple model, we estimate that the decoupled remnant disk has radius $\sim 10 \Rea$. We estimated that the disk mass declines as $\sim t^{-1/3}$, and that by the time the disk decouples $\sim 1/2$ of the original disk mass is left. A large fraction of the protolunar disk's power is dissipated in the boundary layer where the disk meets Earth's surface. The boundary layer will produce high entropy material, and we have speculated that this material might mix back into the disk, or become unbound and leave in the form of a powerful, possibly magnetized wind originating from the Earth or from the disk itself \citep{Blandford1982}. Any wind from the disk is likely enhanced in volatiles: inefficient heat transport from the disk interior implies that the disk surface temperature cannot be sustained above the grain condensation temperature, so the disk will outgas as condensation and settling are driven by radiative cooling at the surface. Two uncertainties hang over our scenario. First, what is the distribution of temperatures in the post-impact disk? Most numerical simulations of the collision generate some hot material in the disk, with the final temperature distribution dependent on initial conditions. Nevertheless, even if the initial disk is cool, any angular momentum diffusion in the post-collision disk will heat the disk, and the inefficiency of heat transport guarantees that the disk will heat before it spreads, reaching $Re_M \gg 1$. Second, what is the initial field strength? If the field is weak enough then resistive diffusion damps the magnetorotational instability, and (if $Re_M > 1$) differential rotation will provide only a modest, linear-in-time field amplification. The post-impact temperature distribution depends on details of the impact dynamics. Simulations of merging magnetized neutron stars \citep[e.g.][]{Kiuchi2015}--also a merger of degenerate objects--exhibit fields that are amplified by at least a factor of $10^3$ in turbulence driven by shear discontinuities formed in the collision. 
In \cite{Kiuchi2015} the amplification increases with resolution, with no sign of convergence. It is reasonable to think that the field will saturate when the magnetic energy is comparable to the turbulent kinetic energy, and that saturation requires only a few shear times, a time comparable to the duration of the collision. In sum, it is plausible that magnetic coupling alters the dynamics of the collision itself, the subsequent circularization of the disk, and the initial thermal state of the disk. The magnetic field strength and geometry of the pre-impact Earth and impactor will likely never be known. Still, one can ask how weak a field is required to initiate runaway heating of the disk. The boundary layer may be particularly constraining because $\sim 0.3 \eV$ per nucleon is dissipated in the layer, suggesting that the boundary layer will immediately generate hot, well-coupled vapor even if coupling is initially poor elsewhere in the disk. Turbulence associated with the boundary layer might then provide a large amplification factor for the initial field and mix it outward into the disk. Our work follows the recent interesting paper by \cite{Carballido2016} (hereafter CDT), who demonstrate that the protolunar disk is likely to be magnetically coupled (\cite{Charnoz2015} also suggested that the protolunar disk might be well coupled, but they do not provide a detailed evaluation of the ionization fraction or instability conditions). CDT also consider mixing in the protolunar disk. We have performed a less careful evaluation of the ionization fraction, but CDT's work suggests that ionization is, in any event, dominated by Na and K. While CDT use an unstratified shearing box model to estimate a lower limit on the angular momentum transport efficiency due to MHD turbulence of $\alpha \sim 7 \times 10^{-6}$, these zero-net-flux, unstratified, shearing boxes are known to be nonconvergent \citep{Fromang2007}. Simulations of stratified shearing box models, global models, and models with explicit dissipation tend to produce $\alpha \sim $few$ \times 10^{-2}$. The weight of numerical evidence therefore suggests much higher $\alpha$ and more rapid evolution of a magnetized protolunar disk. Interestingly, stratified shearing box models \citep[e.g.][]{Stone1996, Davis2010, Ryan2016} show that $\alpha$ depends on distance from the midplane, with $\alpha \sim 1$ at $z \sim 2 H$. If this obtains for a near-virial protolunar disk, and the ratio of turbulent angular momentum diffusion to turbulent mixing is of order unity \citep{Carballido2005}, then mixing would occur on a small multiple of the {\em dynamical} timescale. The same efficient mixing might also transport magnetic fields outward from the boundary layer into the bulk of the disk. \cite{Charnoz2015} (hereafter CM) recently considered several scenarios for the long-term evolution of the protolunar disk with the aid of a numerical model. CM's models typically have a hot disk near the inner edge, close to the boundary layer, with $T \sim 5000$K, consistent with magnetic coupling. Following \cite{Thompson1988} and others, CM assume that the disk cools from the surface with a photospheric effective temperature $\sim 2000$K. CM do not solve self-consistently for the temperature of the disk photosphere, although this is exceedingly difficult because a cool surface would consist of a mixture of vapor, liquids, and solids. As in CM, viscous heating and cooling do not balance in our scenario. 
Our scenario is unorthodox in that it assumes a hot initial disk, with the cold disk forming later at a radius of order ten Earth radii. In the canonical picture the Moon forms just outside the Roche radius. The early evolution of the Moon's orbit is very poorly constrained, however. Certainly the tidal coupling of Earth and Moon is too poorly known for this to constrain the initial semimajor axis of the Moon \citep{Bills1999}.
16
7
1607.02132
1607
1607.02903_arXiv.txt
{% In Keplerian accretion disks, turbulence and magnetic fields may be jointly excited through a subcritical dynamo process involving the magnetorotational instability (MRI). High-resolution simulations exhibit a tendency towards statistical self-organization of MRI dynamo turbulence into large-scale cyclic dynamics. Understanding the physical origin of these structures, and whether they can be sustained and transport angular momentum efficiently in astrophysical conditions, represents a significant theoretical challenge. The discovery of simple periodic nonlinear MRI dynamo solutions has recently proven useful in this respect, and has notably served to highlight the role of turbulent magnetic diffusion in the seeming decay of the dynamics at low magnetic Prandtl number Pm (magnetic diffusivity larger than viscosity), a common regime in accretion disks. The connection between these simple structures and the statistical organization reported in turbulent simulations remained elusive, though. Here, we report the numerical discovery in moderate aspect ratio Keplerian shearing boxes of new periodic, incompressible, three-dimensional nonlinear MRI dynamo solutions with a larger dynamical complexity reminiscent of such simulations. \textcolor{black}{These ``chimera'' cycles are characterized by multiple MRI-unstable dynamical stages, but their basic physical principles of self-sustainment are nevertheless identical to those of simpler cycles found in azimuthally elongated boxes}. In particular, we find that they are not sustained at low Pm either due to subcritical turbulent magnetic diffusion. These solutions offer a new perspective into the transition from laminar to turbulent instability-driven dynamos, and may prove useful to devise improved statistical models of turbulent accretion disk dynamos.}
The magnetorotational instability (MRI) is considered one of the main sources of angular momentum-transporting turbulence in astrophysical accretion disks. It requires a magnetic field and a differentially rotating flow whose angular velocity decreases with distance from the rotation axis \citep{velikhov59,chandra60,balbus91}. Numerical studies have shown that in the presence of a constant net vertical magnetic flux, the MRI acts as a powerful linear instability which amplifies arbitrarily small perturbations that break down nonlinearly into MHD turbulence \citep{hawley95,stone96}. The efficiency of the turbulence at transporting angular momentum, however, remains a matter of debate and may depend on dissipative processes. This transport may notably be limited in the astrophysically relevant regime of low magnetic Prandtl number (Pm), where Pm denotes the ratio between the kinematic viscosity and magnetic diffusivity of the fluid \citep{lesur07,balbus08,meheut15}. A distinct but related problem is the origin of the magnetic field that supports the MRI in such astrophysical systems. In the absence of an externally imposed field, one possibility is that the field is created and sustained inside the disk by a turbulent dynamo process. A good candidate is the so-called subcritical MRI dynamo process, by which magnetohydrodynamic (MHD) perturbations excited by the MRI nonlinearly sustain the large scale field that made the instability possible in the first place \citep{hawley95,hawley96,rincon07b,lesur08b,rincon08}. Simulations of "zero net magnetic flux" configurations have shown that this process can self-sustain and lead to MHD turbulence that transports significant angular momentum. Many simulations of MRI dynamo turbulence also exhibit self-organized large-scale dynamics characterised by chaotic reversals of the large-scale magnetic field, somewhat reminiscent of the "butterfly" diagram of the solar dynamo \citep{branden95,davis10,simon11,gressel15}. The viability of an MRI dynamo process in disks remains unclear though, as numerical studies explicitly taking into account dissipative processes suggest that it cannot be sustained for $\text{Pm}\lesssim 1$ \citep{fromang07b}. The highest-resolution incompressible simulations to date indicate that no MRI dynamo can be excited at $\text{Pm}=1$ for magnetic Reynolds numbers (Rm) as large as $\text{Rm}=45000$ \citep{walker16}, but other studies have found some dependence of this effect on geometry and stratification \citep{oishi11,shi16}. A fundamental physical understanding of the different processes at work in the subcritical MRI dynamo appears to be required to make sense of these different numerical observations. Simple three-dimensional nonlinear periodic dynamo solutions of the MHD equations \citep[hereafter H11]{Herault2011} have recently been shown to provide the first germs of nonlinear MRI dynamo excitation in the Keplerian differential rotation regime in azimuthally elongated shearing box numerical configurations at transitional kinematic (Re) and magnetic Reynolds numbers \citep[hereafter R13]{riols2013}. These solutions represent an interesting avenue of research to investigate the inner workings and transitional properties of this dynamo. A recent analysis of the energy budget of a few such cycles has notably proven useful to pinpoint the possible role of a ``subcritical'' turbulent magnetic diffusion in the seeming decay of the dynamics at low Pm \citep[hereafter R15]{riols15}. 
One possible caveat with this approach so far, though, has been the difficulty of connecting these fairly ordered (yet fully three-dimensional and nonlinear) periodic solutions to the developed turbulent MHD states produced in generic moderate aspect ratio simulations at larger Re and Rm, and in particular to statistically self-organized MRI dynamo butterflies. The aim of this paper is to bridge part of this gap by presenting several new three-dimensional nonlinear periodic MRI dynamo solutions computed in moderate aspect ratio shearing boxes. As we shall see, the dynamical complexity of these cycles, which we call ``MRI dynamo chimeras'' \textcolor{black}{because they involve multiple MRI-unstable dynamical stages}, is significantly larger than that of the simpler solutions discussed previously, and is reminiscent of the statistical \textcolor{black}{behaviour} observed in generic numerical simulations. Yet, their sustainment rests on the exact same few linear and nonlinear dynamical processes underlying the dynamics of simpler cycles. The mathematical and numerical frameworks of our study are presented in Sect.~\ref{framework}, which also provides a discussion of the effects of changing the aspect ratio in this problem. Section~\ref{cycles} documents the dynamical properties of two new pairs of dynamically complex chimera MRI dynamo cycles computed in moderate aspect ratio configurations with a Newton-Krylov algorithm, and shows that their existence is limited to Pm larger than a few. Section~\ref{budget} extends the magnetic energy budget analysis of R15 to these structures, and shows that the enhancement of nonlinear transfers of magnetic energy to small scales at large Re (``subcritical turbulent magnetic diffusion'') prevents the sustainment of these structures at lower Pm, just as in the case of simpler cycles. In Sect.~\ref{dynamostat}, we discuss the semi-statistical nature of these new cycles and the perspectives that their computation opens for the development of improved statistical models of Keplerian dynamos, and more generally instability-driven dynamos. A summary of the main conclusions and a short discussion of how the results may fit into the wider astrophysical context concludes the paper.
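For orientation, the sketch below simply encodes the standard dimensionless control parameters used in shearing-box studies of this kind, $\mathrm{Re}=SL^2/\nu$, $\mathrm{Rm}=SL^2/\eta$ and $\mathrm{Pm}=\mathrm{Rm}/\mathrm{Re}=\nu/\eta$. The normalisation by the shear rate $S$ and box size $L$ is a common convention assumed here for illustration and may differ from the exact definitions adopted in the paper.
\begin{verbatim}
# Minimal sketch of the dimensionless control parameters used in shearing-box
# MRI dynamo studies. The normalisation (shear rate S and box size L set to 1)
# is an illustrative assumption and may differ from the paper's conventions.

def control_parameters(nu, eta, shear_rate=1.0, box_size=1.0):
    re = shear_rate * box_size**2 / nu    # kinetic Reynolds number
    rm = shear_rate * box_size**2 / eta   # magnetic Reynolds number
    pm = nu / eta                         # magnetic Prandtl number (= Rm/Re)
    return re, rm, pm

# Example: a Pm = 4 run at Re ~ 10^3, typical of transitional cycle studies.
print(control_parameters(nu=1e-3, eta=2.5e-4))
\end{verbatim}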
\label{conclusions} Motivated by the quest for a physically-grounded description of the nonlinear process of turbulent dynamo action and angular momentum transport in astrophysical accretion disks, we have computed several new periodic, three-dimensional, fully nonlinear incompressible magnetorotational dynamo solutions in moderate aspect ratio shearing boxes. The dynamical complexity of these ``chimera'' solutions is significantly larger than that of solutions identified earlier by H11, R13 and R15, and is reminiscent of the seemingly complex statistical magnetic organization observed in many fully turbulent simulations of the problem. Yet, we have shown that their sustainment can be understood in terms of the same few linear and nonlinear dynamical processes underlying simpler cycles. These solutions, like their simpler counterparts (R15), are also not sustained for magnetic Prandtl numbers smaller than a few. In order to understand the dynamics in a physically transparent way, we have introduced a decomposition into active and slaved modes. The former include a large-scale axisymmetric MRI-supporting field component, and non-axisymmetric MRI-unstable energy-injecting \textcolor{black}{perturbations}. The latter consist of \textcolor{black}{perturbations} passively excited through nonlinear interactions that drain energy from larger scales. Using this decomposition, we have been able to understand how the magnetic energy of the system can be sustained via the MRI, and to confirm the results of R15 regarding the role of turbulent magnetic dissipation in the seeming disappearance of the MRI dynamo at low Pm. With this basic effect identified and confirmed, it may now be possible to better understand how it affects the dynamo and turbulent angular momentum transport in different geometric and physical configurations \citep{shi16,walker16}. The results presented in this paper are obviously not in the astrophysically asymptotic regimes and do not accommodate all \textcolor{black}{the relevant physics in this context}, such as magnetic buoyancy and stratification effects. However, we have shown that our physically transparent, fully three-dimensional, nonlinear magnetorotational dynamo chimeras share some interesting properties with existing effective two-dimensional statistical models of accretion disk dynamo cycles and as such seem to offer an interesting path in parameter space towards statistical asymptotic regimes. This raises the prospect that improved effective statistical models of the MRI dynamo and other instability-driven dynamos \citep{spruit02,cline03,miesch07,rincon08} can be derived from physical first principles, and may in the near future provide trustworthy insights into magnetic field generation and turbulent transport processes in a variety of stellar and circumstellar environments.
16
7
1607.02903
1607
1607.02768_arXiv.txt
We relate the information entropy and the mass variance of any distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. We show that the relation between information entropy and mass variance can be used to determine the linear bias on large scales and detect the signatures of non-Gaussianity on small scales in galaxy distributions.
Understanding the formation and evolution of the large scale structures in the Universe is one of the most complex issues in cosmology. Galaxies are the basic building blocks of the large scale structures and their spatial distribution reveals how the luminous matter is distributed in the Universe. The study of the distribution of galaxies is one of the most direct probes of the large scale structures. Modern galaxy redshift surveys like the Sloan Digital Sky Survey (SDSS) \citep{york} have now mapped the distribution of more than a million galaxies and quasars, providing the most detailed three-dimensional maps of the Universe ever made. The maps reveal that the galaxies are distributed in an interconnected, complex filamentary network, namely the cosmic web. The cosmic web emerges naturally from the gravitational amplification of the primordial density fluctuations seeded in the early universe. The distribution of galaxies in the cosmic web encodes a wealth of information about the formation and evolution of the large scale structures. A large number of statistical tools have been developed so far to quantify the galaxy distribution and unravel the large scale structures. The correlation functions \citep{peeb80} characterize the statistical properties of the galaxy distributions. The two-point correlation function and its Fourier space counterpart, the power spectrum, remain some of the most popular measures of galaxy clustering to date. These statistics provide a complete description of the primordial density perturbations which are assumed to be Gaussian in the linear regime. But in subsequent stages of non-linear gravitational evolution, the phase coupling of the Fourier modes produces non-vanishing higher-order correlation functions and polyspectra. In principle a full hierarchy of N-point statistics is required to provide a complete description of the distribution. The void probability function \citep{sdm} provides a characterization of voidness that combines many higher moments of the distribution. Other methods to quantify the cosmic web include the percolation analysis \citep{shand1,einas1}, the genus statistics \citep{gott}, the minimal spanning tree \citep{barrow}, the Voronoi tessellation \citep{ike,weygaert}, the Minkowski functionals \citep{mecke,smal}, the Shapefinders \citep{sahni}, the critical point statistics \citep{colombi}, the marked point process \citep{stoi1}, the multiscale morphology filter \citep{arag}, the skeleton formalism \citep{sous} and the local dimension \citep{sarkar}. The popularity of the two-point correlation function and the power spectrum lies in the fact that they can be easily measured and related to the theories of structure formation whereas it is hard to do so for most of the other statistics. The variance of the mass distribution smoothed with a sphere of radius $r$ is a simple and powerful statistical measure which is directly related to the power spectrum. Information entropy is a statistical measure which can help us to study the formation and evolution of structures from an information theoretic viewpoint. Recently information entropy has been used as a measure of homogeneity \citep{pandey13,pandey15,pandey16a} and isotropy \citep{pandey16b} of the Universe. Both the information entropy and the mass variance can be used as measures of the non-uniformity of a probability distribution. 
The entropy uses more information about the probability distribution as it is related to the higher order moments of a distribution. The variance can be treated as an equivalent measure only when the probability distribution is fully described by the first two moments, such as in a Gaussian distribution. However a highly tailed distribution is not uniquely determined even by all of its higher order moments \citep{patel,romano,carron2,carron1}. Different statistical tools have been designed to explore different aspects of the galaxy distribution and, when possible, it is important to relate these statistical measures to each other for a better interpretation of different cosmological observations. In the future, information theory may find many applications in cosmology. It would be highly desirable to relate the information entropy to the other conventional measures such as the mass variance and the power spectrum. In this paper we particularly explore the relation between the discrete Shannon entropy and the mass variance of a distribution and test it using N-body simulations and Monte Carlo simulations of different distributions. We also explore a few possible applications of this relation in cosmology, particularly to constrain the large-scale linear bias and non-Gaussianity in galaxy redshift surveys. Galaxies are known to be biased tracers of the underlying dark matter distribution. Currently there exist several methods to determine the large scale linear bias. The bias can be directly determined from the two-point correlation function and power spectrum \citep{nor,teg,zehavi10}, the three-point correlation function and bispectrum \citep{feldman, verde,gaztanaga}, the redshift space distortion parameter $\beta=\Omega_m^{0.6}/b$ \citep{haw,teg} and the filamentarity of the galaxy distribution \citep{bharad,pandeyb}. We employ the information entropy-mass variance relation to propose a new method which can be used to determine the large scale linear bias from galaxy redshift surveys. Non-Gaussianity of the cosmic density field is one of the most interesting issues in cosmology. In the current paradigm, the primordial density fluctuations are assumed to be Gaussian and signatures of non-Gaussianities in these fluctuations can be used to constrain different inflationary models in cosmology \citep{bartolo}. As the density fluctuations grow, the probability distribution function of the cosmic density field develops extended tails in the overdense regions and gets truncated in the underdense regions. These non-Gaussianities induced by structure formation are much stronger and dominate any primordial non-Gaussianities from the early Universe. In the present work we do not address the primordial non-Gaussianities but the non-Gaussianities introduced by the nonlinear evolution of the cosmic density field. We investigate if the information entropy-mass variance relation can be used to detect the signatures of non-Gaussianity in the galaxy distribution. A brief outline of the paper follows. In section 2 we describe the relation between information entropy and mass variance followed by a discussion of some possible applications of this relation in section 3. We describe the data in section 4 and finally present the results and conclusions in section 5.
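Before turning to the data, the relation can be illustrated with a quick Monte Carlo experiment of the type described above: draw a Poisson point process, measure counts-in-spheres around random centres, and compare the entropy deficit $(H_{r})_{max}-H_{r}$ with the mass variance. The sketch below assumes a base-10 logarithm in the entropy definition (so that the proportionality constant comes out near $2\ln 10\simeq4.6$, the value fitted later) and uses illustrative box and sphere sizes rather than those of the actual simulations.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)

# Homogeneous Poisson point process in a periodic cube (illustrative units).
box, n_points, n_centres, radius = 200.0, 200000, 5000, 10.0
points = rng.uniform(0.0, box, size=(n_points, 3))
centres = rng.uniform(0.0, box, size=(n_centres, 3))

# Counts-in-spheres around random centres, using a periodic KD-tree.
tree = cKDTree(points, boxsize=box)
counts = np.array([len(idx) for idx in tree.query_ball_point(centres, r=radius)],
                  dtype=float)

# Shannon entropy of the normalised counts (base-10 log, an assumption of
# this sketch) and its maximum value for n_centres measuring spheres.
p = counts / counts.sum()
H = -(p * np.log10(p)).sum()
H_max = np.log10(n_centres)

# Mass variance of the counts-in-spheres; for a Poisson process this is
# dominated by shot noise, ~1/<N>.
sigma2 = counts.var() / counts.mean()**2

print("H_max - H         =", H_max - H)
print("sigma^2 / (2ln10) =", sigma2 / (2.0 * np.log(10.0)))
\end{verbatim}
In the small-fluctuation regime the two printed numbers should agree closely, which is the content of the linear relation tested in the following section.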
We first investigate the relationship between the information entropy and the mass variance for all the distributions described in section 4. We find that, although entropy and $\log \sigma_{r}$ show some correlation in all the distributions, it is difficult to find any general relation between them. But when we study the relation between the deviation of entropy from its maximum value, i.e. $(H_{r})_{max}-H_{r}$, and the mass variance $\sigma_{r}^2$ for the same distributions, interestingly we find that they are very tightly correlated (\autoref{fig:result1}). In \autoref{fig:result1} we find that for a wide range in their values, the relation between these two quantities in all these distributions can be described by a straight line of the form $(H_{r})_{max}-H_{r}=\frac{\sigma_{r}^2}{a}$, where $a$ is a constant to be determined. We determine the value of $a$ for each of these distributions by fitting the data with this straight line. We find that the best fit value of $a=4.65\pm0.03$ is the same for all the distributions when the data are fitted over $(H_{r})_{max}-H_{r}$ in the range $10^{-5}-10^{-1}$. Some deviations from this relation are noticed beyond this range. It is interesting to note that in this regime the data exactly follow the relation given in \autoref{eq:shannvar}. The relation is found to be independent of the nature of the distribution, as predicted by \autoref{eq:shannvar}. Further, we carry out the analysis with different sampling rates and find that the relation also does not depend on the number density of the distributions in the appropriate regime, as predicted by the same relation. In \autoref{fig:result2} we show $(H_{r})_{max}-H_{r}$ and $\frac{\sigma_{r}^2}{4.6}$ for different distributions as a function of length scale $r$. In the top left panel we show the results for a homogeneous and isotropic Poisson point process. We find that for this distribution the relation holds very well for nearly the entire length scale range. This is related to the fact that for a homogeneous and isotropic Poisson distribution the only source of fluctuations is shot noise, which is only important on small scales. We show the results for the radially inhomogeneous distributions with $\lambda(r)=\frac{1}{r}$ in the middle left panel and $\lambda(r)=\frac{1}{r^{2}}$ in the bottom left panel. Here we find that there are significant deviations from this relation on scales $r\leq 40 \hmpc$. Interestingly the differences decrease with increasing length scales and the results are in excellent agreement with the relation for these distributions beyond length scales of $r>50 \hmpc$. In the top right, middle right and bottom right panels of \autoref{fig:result2} we show the results for the $\Lambda$CDM model with different linear bias values as indicated in each panel. We find small departures from the relation in each of these distributions on smaller length scales $r\leq 20 \hmpc$ but the relation holds astonishingly well on length scales of $r>20 \hmpc$. It may be noted in different panels of \autoref{fig:result2} that the shapes of the $(H_{r})_{max}-H_{r}$ curves are quite different from each other, reflecting the characteristics of the respective distributions. But the relation given in \autoref{eq:shannvar} holds quite well irrespective of the nature of the distributions. The deviations of the results from \autoref{eq:shannvar} in all these distributions originate from the presence of larger fluctuations on those length scales. 
This retains the non-vanishing higher order terms in \autoref{eq:shannon3}, giving rise to those differences. But the higher order terms become negligible in the small fluctuation regime where \autoref{eq:shannvar} becomes exact. Therefore the deviation from \autoref{eq:shannon3} provides a measure of the degree of non-linearity present and of the length scales where the non-linearities become important in a distribution. We now investigate if the information entropy-mass variance relation can be used to determine the linear bias and characterize the non-Gaussianities. In the top left panel of \autoref{fig:result3} we show the $(H_{r})_{max}-H_{r}$ values as a function of length scales for the unbiased ($b=1$) $\Lambda$CDM simulation and its biased counterpart with the bias value $b=1.5$. We see that when the $(H_{r})_{max}-H_{r}$ values for the unbiased distributions are scaled by a factor of $1.5^{2}$, this exactly reproduces the $(H_{r})_{max}-H_{r}$ values for the biased distributions on scales $r>30 \hmpc$. The top right and bottom left panels of \autoref{fig:result3} similarly show that on scales $r>30 \hmpc$ the $(H_{r})_{max}-H_{r}$ values for the biased distributions with $b=2$ and $b=2.5$ are simply $2^{2}$ and $2.5^{2}$ times the $(H_{r})_{max}-H_{r}$ values of the unbiased distributions on those scales. However it can be clearly seen in all these panels that this simple scaling does not work on smaller scales. The assumption of a scale-independent linear bias does not hold on small scales where the non-linearities play an important role. The particles are distributed in diverse environments in an unbiased distribution whereas they are preferentially selected from the density peaks in a biased distribution. In a biased distribution the measuring spheres centered on the particles would encompass regions with similar densities at smaller radii. But the measuring spheres would encompass varying degrees of empty regions beyond the characteristic scales of the density peaks, leading to non-uniformity in the measurements with increasing radii. On the other hand, in an unbiased distribution, the measuring spheres would pick up regions of diverse densities at smaller radii, reflecting a less uniform behaviour on those scales. The unbiased distribution would be more uniform at larger radii when the measuring spheres would encompass statistically similar numbers of sites from different environments. This explains why the biased distributions appear to be more uniform on smaller scales and less uniform on larger scales as compared to the unbiased distributions. These characteristic behaviours of the biased distributions give rise to the observed differences from \autoref{eq:biasval} on smaller scales. Despite these differences it is clear that one can use the ratios of $(H_{r})_{max}-H_{r}$ values of different distributions on large scales to determine their relative bias parameters. The method can also be used to determine the linear bias for galaxies with different physical properties. In the future we plan to study the luminosity-bias relation for the galaxies in the SDSS using this method. In the bottom right panel of \autoref{fig:result3} we show $(H_{r})_{max}-H_{r}-\frac{\sigma_{r}^{2}}{4.6}$ as a function of length scales for the biased and the unbiased distributions considered here. We find that $(H_{r})_{max}-H_{r}-\frac{\sigma_{r}^{2}}{4.6}<0$ for all the distributions up to length scales of $\sim 40 \hmpc$, suggesting that all of them are non-Gaussian. 
We see that it is more negative in the unbiased distributions as compared to the biased distributions. It may also be noted that the measure becomes less negative with increasing bias. This behaviour is possibly related to the fact that a biased distribution becomes more uniform on small scales with increasing bias. However as this measure is a combination of different odd and even moments with alternating signs, it is difficult in general to compare in absolute terms the degree of non-Gaussianity present in these distributions. In this work we present a relation between the information entropy and the mass variance and show that on large scales the relation can be used to determine the linear bias from galaxy surveys. The relation may also be employed to constrain the growth rate of density fluctuations and the time evolution of the linear bias on large scales. On small scales one can use the relation to characterize the non-Gaussianities present in the galaxy distribution. Finally we note that the present analysis suggests that the information entropy can serve as an important tool for the study of large scale structures in the Universe.
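In practice, the large-scale bias estimate described above reduces to taking the square root of the ratio of the entropy deficits of a tracer and a reference distribution, measured on scales where the quadratic scaling holds. A minimal sketch with hypothetical measured values is shown below; the numbers are invented purely for illustration.
\begin{verbatim}
import numpy as np

def relative_bias(deficit_tracer, deficit_reference):
    """Relative linear bias from the ratio of large-scale entropy deficits,
    assuming (H_max - H) scales as b^2 on scales where the relation holds."""
    return np.sqrt(np.asarray(deficit_tracer) / np.asarray(deficit_reference))

# Hypothetical entropy deficits measured at a few large smoothing radii
# (values invented for illustration only).
deficit_unbiased = np.array([4.0e-3, 2.5e-3, 1.6e-3])
deficit_biased   = np.array([9.1e-3, 5.5e-3, 3.7e-3])

print(relative_bias(deficit_biased, deficit_unbiased))   # ~1.5 at each radius
\end{verbatim}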
16
7
1607.02768
1607
1607.01395_arXiv.txt
{The leading order dynamics of the type IIB Large Volume Scenario is characterised by the interplay between $\alpha'$ and non-perturbative effects which fix the overall volume and all local blow-up modes, leaving (in general) several flat directions. In this paper we show that, in an arbitrary Calabi-Yau with at least one blow-up mode resolving a point-like singularity, any remaining flat directions can be lifted at subleading order by the inclusion of higher derivative $\alpha'$ corrections. We then focus on simple fibred cases with one remaining flat direction which can behave as an inflaton if its potential is generated by both higher derivative $\alpha'$ and winding loop corrections. Natural values of the underlying parameters give a spectral index in agreement with observational data and a tensor-to-scalar ratio of order $r=0.01$, which could be observed by forthcoming CMB experiments. Dangerous corrections from higher dimensional operators are suppressed due to the presence of an approximate non-compact shift symmetry.}
The type IIB Large Volume Scenario (LVS) for closed string moduli stabilisation \cite{Balasubramanian:2005zx,Conlon:2005ki} has led to a theoretically well motivated class of phenomenological models of beyond the Standard Model physics \cite{Blumenhagen:2009gk,deAlwis:2009fn,Baer:2010uy,deAlwis:2012cz,Aparicio:2014wxa}. According to the general analysis of \cite{Cicoli:2008va}, LVS supersymmetry breaking AdS minima exist whenever the compactification manifold is a Calabi-Yau (CY) with negative Euler number ($h^{1,2}> h^{1,1}$) and at least one blow-up mode resolving a point-like singularity.\footnote{LVS vacua with positive Euler number might be obtained by allowing some tuning in string loop corrections to the K\"ahler potential \cite{Cicoli:2012fh}.} Each blow-up mode (together with its axionic partner) is fixed by non-perturbative effects at a small size of order the inverse string coupling, $\tau_s\sim g_s^{-1}$, while the overall volume is stabilised at exponentially large values, $\vo \sim e^{-1/g_s}$, by the interplay between $\alpha'^3$ corrections to the K\"ahler potential $K$ \cite{Becker:2002nn} and non-perturbative contributions to the superpotential $W$ \cite{Kachru:2003aw}.\footnote{The dilaton and the complex structure moduli are instead fixed by background fluxes as in \cite{Giddings:2001yu}. For references to earlier work see the references therein and the reviews \citep{Grana:2005jc,Douglas:2006es}.} In the presence of $N_{\rm small}$ small blow-up modes, at leading order in an expansion in inverse powers of the overall volume, there are $N_{\rm flat}= h^{1,1}-N_{\rm small}-1$ flat directions left over. In principle these can be lifted via either non-perturbative effects in $W$ or perturbative corrections to $K$. However \cite{Cicoli:2008va} showed that only perturbative contributions to $K$ can be used since non-perturbative effects are either subdominant if the $N_{\rm flat}$ moduli are larger than the $N_{\rm small}$ blow-up modes, or cannot yield a minimum in a region where the effective field theory can be trusted if the $N_{\rm flat}$ moduli are small, i.e. of the same size as the $N_{\rm small}$ blow-up modes. Hence ref. \cite{Cicoli:2008va} focused on the case where each of these remaining $N_{\rm flat}$ moduli is larger than the $N_{\rm small}$ blow-up modes (but not necessarily as large as the overall volume), i.e. $N_{\rm flat}= N_{\rm large}-1$, and argued that all the $N_{\rm flat}$ directions should be lifted via the inclusion of string loop corrections to the K\"ahler potential \cite{Berg:2007wt, Cicoli:2007xp} since they generically introduce a dependence on all K\"ahler moduli at subleading order and the overall volume has already been fixed at leading order (hence we do not expect any destabilisation due to subdominant effects). Some explicit examples where remaining flat directions are lifted by string loops are given in \cite{Cicoli:2007xp,Cicoli:2008va} but, as we discuss later, it is difficult to give a general argument. As pointed out in \cite{Cicoli:2011it}, blow-up modes resolving a point-like singularity diagonalise the volume form, and so the case where $N_{\rm flat}=0$ corresponds to so-called `strong Swiss-cheese' CY manifolds whose volume looks like \cite{Gray:2012jy}: \be \vo = \tau_b^{3/2}-\sum_{i=1}^{N_{\rm small}}\lambda_i\tau_i^{3/2}\,. \ee Globally consistent LVS chiral compactifications with branes and fluxes and this type of CY manifold have been constructed in \cite{CYembedding}. 
On the other hand cases with $N_{\rm flat}>0$ involve `weak Swiss-cheese' CY manifolds with volume \cite{Cicoli:2011it}: \be \vo = f_{3/2}(\tau_j)-\sum_{i=1}^{N_{\rm small}}\lambda_i\tau_i^{3/2}\,, \label{weak} \ee where $f_{3/2}(\tau_j)$ is a homogeneous function of degree $3/2$ in the variables $\tau_j$, $j=1,...,N_{\rm large}=N_{\rm flat}+1 = h^{1,1}-N_{\rm small}$. Globally consistent LVS chiral brane models with this type of CY manifolds have been built in \cite{Cicoli:2011qg}. The simplest examples with $N_{\rm flat}=1$ are K3 or $T^4$ fibrations over a $\mathbb{P}^1$ base where $f_{3/2}(\tau_1,\tau_2)=\sqrt{\tau_1}\tau_2$ \cite{Cicoli:2011it}. These simple `weak Swiss-cheese' CY manifolds have been used both in \cite{Cicoli:2008gp} to develop a promising inflationary scenario called `Fibre Inflation' which predicts a tensor-to-scalar ratio $r$ of order $r\simeq 0.006$, and in \cite{Cicoli:2011yy} to obtain anisotropic compactifications with effectively 2 micron-sized extra dimensions and TeV scale strings. Following the philosophy of \cite{Cicoli:2008va}, the inflationary potential of Fibre Inflation is generated by string loop corrections which are naturally smaller than $\alpha'^3$ effects due to the extended no-scale structure \cite{Cicoli:2007xp}. This is a cancellation of the leading order loop contribution to the scalar potential which is due to supersymmetry and has two important implications for the naturalness of the inflationary model: ($i$) being a leading order flat direction, the inflaton is naturally lighter than the Hubble scale during inflation, ($ii$) potentially dangerous higher dimensional operators do not cause an $\eta$-problem since the inflaton enjoys an approximate non-compact shift symmetry \cite{Burgess:2014tja, Burgess:2016owb}. In fact the no-scale feature of type IIB models ensures that at tree-level the potential is invariant under rescaling symmetries which correspond to non-compact shift symmetries for the K\"ahler moduli. This symmetry is approximate since it gets broken by string loop effects. Hence inflaton-dependent higher dimensional operators get generated but they are suppressed by the symmetry breaking parameter which turns out to be small since string loops are both $g_s$ and $\vo$-suppressed with respect to the tree-level contribution. Despite these very promising features, the potential of Fibre Inflation is not under full control since string loop effects in $K$ can be explicitly computed only for simple toroidal cases \cite{Berg:2005ja}. However in order to stabilise the remaining flat direction and develop the inflationary potential, one needs to know just the K\"ahler moduli dependence of these corrections which is the simplest to estimate, together with the dependence on the dilaton $S$ (since $\langle {\rm Re} (S)\rangle = g_s^{-1}$), in contrast to the dependence on the complex structure moduli $U$ which is already rather complicated even in the toroidal case. In fact, the $S$ and $U$-moduli are fixed at semi-classical level by background fluxes \cite{Giddings:2001yu}, and so, at this level of approximation, can be considered just as flux-dependent constants. The K\"ahler moduli dependence of string 1-loop corrections to $K$ for an arbitrary CY manifold can be estimated by both generalising the toroidal result \cite{Berg:2007wt} and matching to the low-energy Coleman-Weinberg potential \cite{Cicoli:2007xp}. 
Moreover, due to the extended no-scale cancellation, 1-loop corrections are smaller than expected and turn out to be effectively of the same order as 2-loop effects. Even if perturbation theory is still under control since the expansion parameter is small (for $g_s\ll 1$ and $\vo \gg 1$), a full treatment of the inflationary potential should include also 2-loop effects. Even if these have been estimated to have the same inflaton dependence as the first non-vanishing 1-loop effect \cite{Cicoli:2008gp}, and so should not modify the final results of Fibre Inflation, it is important to look for additional perturbative effects that could stabilise these flat directions. These additional terms can arise from higher derivative $\alpha'^3$ corrections to the K\"ahler potential \cite{Ciupke:2015msa}. These new $F^4$ terms depend on all $h^{1,1}$ K\"ahler moduli and can be shown to lift all of them except for the overall volume mode for an arbitrary CY manifold. In \cite{Ciupke:2015msa} a full minimum has been achieved for positive CY Euler number ($h^{1,2}< h^{1,1}$) by fixing the volume via balancing $F^2$ against $F^4$ $\alpha'^3$ effects. However since the minimum is obtained by comparing two different orders in the superspace derivative expansion, the effective field theory does not seem to be fully under control, resulting in a gravitino mass of order the Kaluza-Klein (KK) scale \cite{Cicoli:2013swa}.\footnote{Given that the higher derivative expansion at $\mc{O}(\alpha'^3)$ involves terms just up to $F^8$, one might still hope to keep it under control.} In this paper we show how to overcome at the same time both this control issue and the difficulty of showing explicitly that any flat direction in LVS models can be lifted by string loops for a generic CY manifold with at least one blow-up mode. This can be done by including $F^4$ terms in the LVS scenario where the overall volume mode is stabilised at order $F^2$ by balancing $\alpha'^3$ against non-perturbative effects. All the $N_{\rm flat}= N_{\rm large}-1$ remaining flat directions can then be lifted at subleading $F^4$ order by the inclusion of the higher derivative $\alpha'^3$ effects computed in \cite{Ciupke:2015msa}. This demonstrates that the class of phenomenologically viable LVS models extends well beyond the original framework in which there was only one large K\"ahler modulus. Given that string loop corrections to the scalar potential scale as $V_{g_s}\sim g_s W_0^2 \vo^{-10/3}$ whereas higher derivative $\alpha'^3$ effects behave as $V_{F^4}\sim g_s^{1/2} W_0^4 \vo^{-11/3}$, string loops can be safely neglected only for relatively small values of the internal volume: $\vo \ll W_0^6 g_s^{-3/2}$. As a reference example, natural values $g_s\simeq 0.1$ and $W_0\simeq 20$ would give $\vo\ll 10^9$. The minimum is again AdS but it can be uplifted to dS by using various mechanisms already proposed in the literature.\footnote{An important feature of LVS models is that the negative vacuum energy is parametrically below the gravitino mass, and so the final phenomenology is not affected at leading order by the specific uplift mechanism.} Some of the most popular ways to achieve dS vacua involve anti-branes \cite{Kachru:2003aw, deAlwis:2016cty}, T-branes \cite{Cicoli:2015ylx} (or hidden matter F-terms \cite{CYembedding}) and non-perturbative effects at singularities \cite{Cicoli:2012fh}. Besides moduli stabilisation, a very interesting application of $F^4$ terms is inflation. Ref. 
\cite{Broy:2015zba} followed the same philosophy of the Fibre Inflation model developed in \cite{Cicoli:2008gp} and focused on simple K3 or $T^4$-fibred CY manifolds with $h^{1,1}=3$, $N_{\rm small}=1$, $N_{\rm large}=2$ and so $N_{\rm flat}=1$ flat direction which can play the r\^ole of the inflaton. After showing that higher derivative terms alone cannot yield a potential which is flat enough to drive inflation, \cite{Broy:2015zba} combined $F^4$ terms with string loop effects due to the exchange of KK closed strings between stacks of non-intersecting branes and neglected $g_s$ corrections coming from the exchange of winding modes between intersecting stacks of branes. This can be justified if the underlying brane set-up does not involve intersecting branes.\footnote{Or more precisely if each intersection locus does not admit non-contractible 1-cycles \cite{Berg:2007wt}.} However, the leading order KK 1-loop correction to the scalar potential vanishes due to extended no-scale \cite{Cicoli:2007xp}, and so the first non-zero contribution scales effectively as 2-loop KK effects whose form is poorly understood. The final prediction for the cosmological observables reproduces the result $r= 2 (f/M_p)^2 (n_s-1)^2$ of generalised Fibre Inflation models with a potential of the form $V=V_0-V_1\,e^{-\phi/f}$ \citep{Burgess:2016owb}. The effective decay constant $f$ can be either equal to the one of the original Fibre Inflation model, $f=f_{\scriptscriptstyle \rm FI}=\sqrt{3} M_p$, or smaller, $f=f_{\scriptscriptstyle \rm FI}/2$, depending on whether the plateau region of the potential is generated by $F^4$ or KK loops. Hence the tensor-to-scalar ratio turns out to be $r\lesssim 0.006$. On the other hand ref. \cite{Cicoli:2015wja} considered the single modulus case and combined $F^2$ and $F^4$ $\alpha'^3$ contributions with $g_s$ effects and different uplifting terms to have enough freedom to develop a potential for the volume mode which features, together with a dS minimum, also an inflection point supporting inflation. In this way the volume mode can evolve from the end of inflation to its present value, allowing for larger values of the gravitino mass during inflation. In this paper, we consider a different cosmological application of $F^4$ terms which is under better control and leads to a larger prediction for tensor modes. We focus again on LVS models where the CY manifold has a simple fibred structure with $N_{\rm flat}=1$ flat direction which is lifted by the inclusion of both $F^4$ terms and winding loop corrections. KK loop effects can be absent by construction if, for example, the compactification does not include any O3-planes or D3-branes and all O7-planes and D7-branes intersect or are on top of each other \cite{Berg:2007wt}. If instead KK $g_s$ effects get generated, being effectively 2-loop contributions, they can be neglected with respect to 1-loop winding corrections due to the additional suppression factor $g_s^2\ll 1$ \cite{Burgess:2016owb}. In this way, we do not have to worry about poorly understood 2-loop KK corrections. The resulting inflationary model features a plateau followed by a steepening region similar to Fibre Inflation-like models \cite{Cicoli:2008gp, Broy:2015zba}. However the inflationary dynamics is qualitatively different since the plateau is longer and the steepening behaviour at large inflaton values is milder. 
Given that horizon exit cannot take place in the plateau region for natural values of the underlying parameters, the general relation $r= 2 (f/M_p)^2 (n_s-1)^2$ is generically not satisfied in this class of models. Since horizon exit is close to the steepening region, the final prediction for the tensor-to-scalar ratio is larger, $r\simeq 0.01$, and could be tested by forthcoming cosmological observations \cite{rForecasts}. Notice that such a large value of $r$ can never be obtained in Fibre Inflation models since, even if the microscopic parameters are tuned to have horizon exit close to the region where the potential starts to rise, the spectral index would become too blue. This is avoided in our model since the steepening is milder. Given that our model is qualitatively different from Fibre Inflation, we name it `$\alpha'$-Inflation' to distinguish it from Fibre Inflation and to stress that $\alpha'$ effects play a crucial r\^ole to develop the inflationary potential (both $F^4$ $\mc{O}(\alpha'^3)$ and $F^2$ $\mc{O}(\alpha'^4 g_s^2)$ effects). Notice however that the flatness of the inflationary potential is again protected by the same approximate non-compact shift symmetry as in Fibre Inflation-like models. This paper is organised as follows. In Sec. \ref{Section1} we show how any remaining flat direction in the LVS scenario can be lifted by $F^4$ terms for an arbitrary CY manifold with at least one blow-up mode. In Sec. \ref{Sec2} we then focus on the particular LVS case with just one remaining flat direction and present a viable inflationary model that leads to observable tensors of order $r\simeq 0.01$ by taking into account $F^4$ corrections as well as 1-loop winding string corrections. After presenting our conclusions in Sec. \ref{Concl}, in App. \ref{AppB} we show that the form of the $F^4$ terms derived in \cite{Ciupke:2015msa} for a constant superpotential applies also in LVS models up to volume-suppressed corrections coming from non-perturbative effects in $W$ that mildly break the underlying no-scale structure.
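As a point of reference for the discussion above, the sketch below evaluates the standard slow-roll predictions for a pure plateau potential $V=V_0-V_1 e^{-\phi/f}$ (in units $M_p=1$) and compares $r$ with $2(f/M_p)^2(n_s-1)^2$. The parameter values are illustrative only, and the positive-exponential steepening terms that distinguish $\alpha'$ Inflation from Fibre Inflation are deliberately omitted, so this is a check of the generalised Fibre Inflation relation rather than of the full model.
\begin{verbatim}
import numpy as np

# Slow-roll check for the plateau potential V = V0 - V1*exp(-phi/f)
# (in units with M_p = 1). Parameter values are illustrative only and the
# positive-exponential steepening terms of the full model are omitted.
V0, V1, f = 1.0, 0.3, np.sqrt(3.0)

def observables(phi):
    V   = V0 - V1 * np.exp(-phi / f)
    dV  = (V1 / f) * np.exp(-phi / f)
    d2V = -(V1 / f**2) * np.exp(-phi / f)
    eps = 0.5 * (dV / V) ** 2          # slow-roll epsilon
    eta = d2V / V                      # slow-roll eta
    ns  = 1.0 + 2.0 * eta - 6.0 * eps  # spectral index
    r   = 16.0 * eps                   # tensor-to-scalar ratio
    return ns, r

for phi in [5.0, 6.0, 7.0]:
    ns, r = observables(phi)
    print(f"phi={phi:3.1f}  n_s={ns:6.4f}  r={r:8.2e}  "
          f"2 f^2 (n_s-1)^2={2.0*f**2*(ns-1.0)**2:8.2e}")
\end{verbatim}
Deep in the plateau the last two columns agree to within the small $\epsilon$ correction to $n_s$, illustrating why the relation only fails once the steepening region becomes relevant at horizon exit.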
\label{Concl} In this paper we have shown explicitly the existence of generalised LVS vacua for arbitrary CY manifolds with at least one local blow-up mode. The new key-ingredient is the inclusion of higher dimensional $\alpha'^3$ corrections recently computed in \cite{Ciupke:2015msa}. At leading order in an expansion in inverse powers of the internal volume $\vo$, $\alpha'^3$ $F^2$ effects compete with non-perturbative corrections to $W$ to fix $\vo$ and all $N_{\rm small}$ small blow-up modes. Each of the $N_{\rm flat} = h^{1,1} - N_{\rm small}-1$ remaining flat directions can be shown to be lifted in general by $\alpha'^3$ $F^4$ terms if these have the correct sign. When the superspace derivative expansion is under control, i.e. when $F^4$ effects are subdominant with respect to $F^2$ terms, which is equivalent to requiring $m_{3/2}\ll M_\KK$ \cite{Cicoli:2013swa}, these flat directions are lifted at subleading order, and so the corresponding moduli are naturally lighter than all the other modes. This crucial feature makes all of them promising inflaton candidates. Hence in the second part of this paper we focused on cosmological applications for cases where $N_{\rm flat}=1$. We developed a new inflationary model, which we named `$\alpha'$ Inflation', where $\mc{O}(\alpha'^3)$ $F^4$ terms compete with $\mc{O}(\alpha'^4 g_s^2)$ $F^2$ loop corrections coming from the exchange of winding modes between intersecting stacks of branes. The inflationary potential is characterised by an exponentially flat plateau followed by a steepening region. The flatness of the inflationary potential is protected against dangerous higher dimensional operators by a non-compact shift symmetry inherited from the no-scale structure which is broken only beyond tree-level \cite{Burgess:2014tja,Burgess:2016owb}. For natural values of the underlying parameters horizon exit for $N_e\sim 50 - 60$ occurs in a region where the steepening effect is not negligible, and so the model predicts a large tensor-to-scalar ratio of order $r\simeq 0.01$ together with a spectral index $n_s\simeq 0.97$ in agreement with present data. Future cosmological observations will soon test the predictions of our model \cite{rForecasts}. The inflationary scale is of order $M_{\rm inf}\simeq 1.04\cdot 10^{16}$ GeV while the KK scale is slightly higher, $M_\KK\simeq 4.35\cdot 10^{16}$ GeV, showing that the effective field theory is marginally under control since $\left(M_{\rm inf}/M_\KK\right)^4\simeq 0.003$ and the ratio between the mass of the volume mode and the Hubble scale is $(H/m_\vo)^2\simeq 0.025$. This shows also that $r\simeq 0.01$ is probably the largest possible prediction for $r$ which is compatible with a trustworthy effective field theory. Due to the high scale of the moduli potential, the gravitino mass also turns out to be very high, $m_{3/2}\sim 10^{15}$ GeV, leading to soft terms much higher than the TeV scale. Sequestering supersymmetry breaking from the visible sector \cite{Aparicio:2014wxa} might help to suppress the soft terms from the gravitino mass but these would still be very far from low energy. Comparing our model with Fibre Inflation \cite{Cicoli:2008gp}, the potential of $\alpha'$ Inflation has a very similar shape but with a milder rising behaviour at large field values. Hence horizon exit can take place close to the steepening region, enhancing the tensor-to-scalar ratio without obtaining a spectral index which is too blue. 
An interesting future line of work involves the construction of global models of $\alpha'$ Inflation in concrete CY manifolds with an explicit brane set-up and choice of fluxes. Moreover, it would be interesting to investigate how reheating takes place after the end of inflation, along the lines of \cite{Cicoli:2015bpq, Reheat}.
16
7
1607.01395
1607
1607.03673_arXiv.txt
We report a new 1-pc (30$\arcsec$) resolution CS($J=2-1$) line map of the central 30 pc of the Galactic Center (GC), made with the Nobeyama 45m telescope. We revisit our previous study of the extraplanar feature called the polar arc (PA), a molecular cloud located above SgrA* with a velocity gradient perpendicular to the Galactic plane. We find that the PA can be traced back to the Galactic disk. This provides clues about the launching point of the PA, roughly $6\times10^{6}$ years ago. The dynamical time scale of the PA suggests that it might be related to the Galactic Center Lobe (GCL) on parsec scales. Our results suggest that in the central 30 pc of the GC, the feedback from past explosions could alter the orbital path of the molecular gas down to the central tenth of a parsec. In follow-up work with our new CS($J=2-1$) map, we also find that near the systemic velocity, the molecular gas shows an extraplanar hourglass-shaped feature (HG-feature) with a size of $\sim$13 pc. The latitude-velocity diagrams show that the eastern edge of the HG-feature is associated with an expanding bubble B1, located $\sim$7 pc away from SgrA*. The dynamical time scale of this bubble is $\sim3\times10^{5}$ years. The bubble is interacting with the 50 km s$^{-1}$ cloud, and part of the molecular gas from the 50 km s$^{-1}$ cloud was swept away by the bubble to $b=-0.2\degr$. The western edge of the HG-feature appears to be molecular gas entrained from the 20 km s$^{-1}$ cloud towards the north of the Galactic disk. Our results suggest a fossil explosion in the central 30 pc of the GC a few $10^{5}$ years ago.
Our Galactic Center (GC) is the nearest nucleus of a galaxy ($d$=8.5 kpc) \citep{reid93,ghez08,reid14}. It is the best target to study detailed structures and dynamics in a circumnuclear environment at sub-pc scale, which cannot easily be done in external galaxies with ground-based telescopes. However, observations of the GC suffer from our edge-on vantage point. Optical and near-infrared (IR) emission suffers from large amounts of extinction. The V-band extinction ($A_{\rm v}$) varies from 20 to 50 mag with a median value of 31.1 mag \citep{scoville03}. Dust is transparent at radio wavelengths. However, at millimeter wavelengths, lines of the more abundant molecules (e.g., CO $J=1-0$) become optically thick and suffer from foreground absorption or self-absorption in the direction of the GC \citep{guesten87,wright01,chris05}. High-excitation molecular lines, or high-density tracers, are less affected by the foreground/ambient cold gas \citep{maria,tsuboi99,jackson93,mcgary01,mcgary02,herrnstein02,herrnstein05}. Fruitful studies have already been carried out with low-excitation molecular lines at low angular resolution \citep[e.g.][]{sco72,burton83,bally87,bally88,burton92,tsuboi99}. Our focus is on the complex activities and physical conditions in the nuclear region probed by the dense molecular gas. We conducted wide-field single-dish observations of the central 30 pc of the GC with multiple transitions of the CS molecule \citep[][hereafter paper I]{hsieh15}. In paper I, we reported a new feature called the connecting ridge (CR), which was detected in the CS($J=4-3$) line. The CR has a velocity gradient perpendicular to the disk rotation. It is physically associated with the extraplanar polar arc (PA) \citep{bally88,henshaw16}. The PA extends from north of the SgrA* region at a 40$\degr$ angle and shows a large velocity gradient from $(l,b,V_{\rm sys}) = (0\degr,0\fdg05,70~\rm km~s^{-1}$) to $(0\fdg2,0\fdg25,140~\rm km~s^{-1})$. Below $V_{\rm sys}$ of 70 km s$^{-1}$, the PA lies close to the Galactic plane and becomes confused with the molecular clouds in the SgrA* region. In paper I, we found that the kinematic and spatial structures connect the Galactic disk, the CR, and the PA. These results suggest that the molecular gas might be lifted out of the Galactic plane. We thus proposed the idea of a molecular outflow in the central 30 pc of the GC and suggested that the PA is pushed away, possibly by the energy of 8-80 supernova explosions. The importance of Galactic outflows is that they may be the primary mechanism to recycle metals and deposit them into the intergalactic medium \citep{veilleux05}. In starburst galaxies, stellar winds from massive stars as well as supernova explosions generate a huge amount of energy and high pressure, creating high-velocity galactic winds that interact with and sweep up the ambient gas. The GC is the nearest galactic nucleus, and therefore the best target to resolve the structure, kinematics, and physical conditions of nuclear outflows. The presence of atomic and molecular gas allows us to measure the outflow of neutral material, its impact, and the transfer of energy and momentum to the surrounding interstellar medium (ISM). In this paper, we continue to investigate the possible outflow nature of the molecular gas in the GC with our new CS($J=2-1$) data.
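For orientation, a crude dynamical-time estimate of the form $t_{\rm dyn}\approx d/v$ gives the order of magnitude of the time scales discussed in this work; the sketch below uses a size of 7 pc and an expansion velocity of 20 km s$^{-1}$ purely as illustrative placeholder values, not as values adopted by the paper.

# Crude dynamical-time sketch t_dyn ~ d / v (order of magnitude only).
# The size and velocity below are illustrative placeholders, not values
# adopted by the paper.
PC_IN_KM = 3.086e13  # 1 parsec in km
YR_IN_S = 3.156e7    # 1 year in seconds

d_pc = 7.0    # assumed size scale in pc
v_kms = 20.0  # assumed expansion velocity in km/s

t_dyn_yr = d_pc * PC_IN_KM / v_kms / YR_IN_S
print(f"t_dyn ~ {t_dyn_yr:.1e} yr")  # ~ 3e5 yr, same order as quoted for bubble B1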
From our 1-pc resolution CS($J=2-1$) line maps, we find that the molecular gas in the central 30 pc of the Galactic Center has a morphology that reflects recent explosions $(3-6)\times10^{5}$ years ago. As a follow-up to paper I, we revisit the idea that the PA is a molecular outflow launched from the Galactic disk. The linearly increasing velocity of the PA suggests that it is moving out of the Galactic disk. Near the systemic velocity, the molecular gas shows an HG-feature, which might be a mixture of multiple explosive events. The north-western edge of the HG-feature might trace gas entrained from the 20 km s$^{-1}$ cloud from $b=-0.07\degr$ to $b=0.06\degr$. The southern part of the eastern edge of the HG-feature suggests an expanding bubble (B1). The low-level emission of B1 south of the Galactic plane is shocked. These individual features might be complexities on top of a general outflowing shell.
16
7
1607.03673